WorldWideScience

Sample records for modeling technique sadmt

  1. A comparative analysis of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    Many business process modelling techniques are in use today. This article reports research into the differences between them: for each technique, the definition and structure are explained. The paper presents a comparative analysis of several popular business process modelling techniques, using a framework based on two criteria: the notation and how the technique works when implemented at Somerleyton Animal Park. The treatment of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and that can serve as a basis for evaluating further modelling techniques.

  2. New techniques for subdivision modelling

    OpenAIRE

    BEETS, Koen

    2006-01-01

    In this dissertation, several tools and techniques for modelling with subdivision surfaces are presented. Building on the large body of theoretical knowledge about subdivision surfaces, we present techniques that facilitate practical 3D modelling and make subdivision surfaces even more useful. Subdivision surfaces reclaimed attention several years ago after their application in full-featured 3D animation movies, such as Toy Story. Since then, and due to their attractive properties, an ever i...

  3. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension of work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied: a coastal release (SF6) and an inland release (Freon), which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the computing resources required for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where the spatial and temporal differences due to interior valley heating led to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location and time, or when important local data are assimilated into the simulation. It enhances SRNL’s capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as its ability to attract new customers within the intelligence community.
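
    The EnKF named above can be illustrated with a minimal stochastic analysis step. The sketch below is generic NumPy, not SRNL's implementation; the state size, ensemble size and observation operator are invented for the example.

      import numpy as np

      rng = np.random.default_rng(0)

      def enkf_analysis(X, y, H, R):
          """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
          H: (n_obs, n_state) observation operator; R: obs-error covariance."""
          n_ens = X.shape[1]
          A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
          P = A @ A.T / (n_ens - 1)                      # sample forecast covariance
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
          # perturb observations so the analysis ensemble keeps the right spread
          Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
          return X + K @ (Y - H @ X)

      X = rng.normal(size=(3, 20))             # toy 3-variable state, 20 members
      H = np.array([[1.0, 0.0, 0.0]])          # observe the first variable only
      Xa = enkf_analysis(X, np.array([0.5]), H, np.array([[0.1]]))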

  4. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.
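
    For readers unfamiliar with the kind of model analyzed in the paper, a generic ratio-dependent predator-prey system can be integrated numerically in a few lines. The Arditi-Ginzburg-type equations and all parameter values below are a common textbook form, not the paper's exact model.

      from scipy.integrate import solve_ivp

      r, K, a, h, e, d = 1.0, 10.0, 1.0, 0.5, 0.6, 0.3   # illustrative values

      def rhs(t, z):
          N, P = z                                 # prey and predator densities
          g = a * N / (P + a * h * N)              # ratio-dependent response
          return [r * N * (1 - N / K) - g * P, e * g * P - d * P]

      sol = solve_ivp(rhs, (0, 200), [5.0, 2.0])
      print(sol.y[:, -1])     # long-run state hints at the attractor's nature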

  5. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    Numerous modeling techniques have been applied to the analysis of the semantics of programming languages. By providing a brief survey of these techniques, together with an analysis of their applicability to semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than exhaustive in the coverage of semantic models. A bibliography is included for the reader interested in pursuing this area of research in more detail.

  6. Virtual 3D City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is basically a computerized or digital model of a city containing graphic representations of buildings and other objects in 2.5D or 3D. Three main Geomatics approaches are generally used to generate virtual 3D city models: the first uses conventional techniques such as vector map data, DEMs, and aerial images; the second is based on high-resolution satellite images with laser scanning; and the third uses terrestrial images through close-range photogrammetry with DSM and texture mapping. This paper starts with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic and manual methods), and another based on data-input techniques (photogrammetry and laser techniques). The paper then gives the conclusions of this study, together with a short justification and analysis and the present trend in 3D city modeling. It provides an overview of the techniques for generating virtual 3D city models using Geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3D city model. Every technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3

  7. Verification of Orthogrid Finite Element Modeling Techniques

    Science.gov (United States)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically those with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process while still adequately capturing the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam model, a shell model, and a mixed beam-and-shell element model. Results show that the shell element model performs best, but that the simpler beam and mixed beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  8. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
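
    The surrogate-data idea can be mocked up end to end: let a toy "simulation program" generate bills from known true inputs, run a calibration routine against them, and score the three figures of merit. Everything below (the linear degree-day model, parameter names, grid-search calibration) is a hypothetical stand-in for an actual building energy modeling tool.

      import numpy as np

      hdd = np.array([800.0, 700, 500, 300, 100, 50])   # monthly degree-days

      def energy_model(ua, infil):                      # stand-in simulation
          return (ua + 0.4 * infil) * hdd

      true_ua, true_infil = 0.9, 0.5
      bills = energy_model(true_ua, true_infil)         # surrogate utility data

      grid = np.linspace(0.1, 2.0, 96)                  # calibration under test
      _, ua_c, infil_c = min((np.sum((energy_model(u, i) - bills) ** 2), u, i)
                             for u in grid for i in grid)

      # figure of merit 1: error in predicted savings from a 20%-better-UA retrofit
      true_sav = bills - energy_model(0.8 * true_ua, true_infil)
      pred_sav = energy_model(ua_c, infil_c) - energy_model(0.8 * ua_c, infil_c)
      print("savings error:", abs(pred_sav - true_sav).sum())
      # figure of merit 2: closure on the true input parameter values
      print("parameter error:", abs(ua_c - true_ua), abs(infil_c - true_infil))
      # figure of merit 3: goodness of fit to the surrogate bills
      print("fit:", np.sum((energy_model(ua_c, infil_c) - bills) ** 2))

    With this deliberately underdetermined toy model, several parameter pairs fit the bills perfectly while missing the true inputs, which is exactly the failure mode the surrogate test is designed to expose.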

  9. A pilot modeling technique for handling-qualities research

    Science.gov (United States)

    Hess, R. A.

    1980-01-01

    A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.

  10. Model Checking Markov Chains: Techniques and Tools

    NARCIS (Netherlands)

    Zapreev, I.S.

    2008-01-01

    This dissertation deals with four important aspects of model checking Markov chains: the development of efficient model-checking tools, the improvement of model-checking algorithms, the efficiency of the state-space reduction techniques, and the development of simulation-based model-checking

  11. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  12. Probabilistic evaluation of process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2016-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  13. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures … In the group model (with t_u and t_q denoting update and query time), we get a bound of t_u · t_q = Ω(lg^(d-1) n); for ball range searching, we get a lower bound of t_u · t_q = Ω(n^(1-1/d)). The highest previous lower bound proved in the group model does not exceed Ω((lg n / lg lg n)^2) on the maximum of t_u and t_q. Finally, we present a new technique for proving lower bounds …

  14. The impact of applying product-modelling techniques in configurator projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Kristjansdottir, Katrin; Shafiee, Sara

    2018-01-01

    This paper aims to increase understanding of the impact of using product-modelling techniques to structure and formalise knowledge in configurator projects. Companies that provide customised products increasingly apply configurators in support of sales and design activities, reaping benefits that include shorter lead times, improved quality of specifications and products, and lower overall product costs. The design and implementation of configurators are a challenging task that calls for scientifically based modelling techniques to support the formal representation of configurator knowledge. Even … the phenomenon model and information model are considered visually, (2) non-UML-based modelling techniques, in which only the phenomenon model is considered, and (3) non-formal modelling techniques. This study analyses the impact on companies of increased availability of product knowledge and improved control …

  15. Modeling techniques for quantum cascade lasers

    Energy Technology Data Exchange (ETDEWEB)

    Jirauschek, Christian [Institute for Nanoelectronics, Technische Universität München, D-80333 Munich (Germany); Kubis, Tillmann [Network for Computational Nanotechnology, Purdue University, 207 S Martin Jischke Drive, West Lafayette, Indiana 47907 (United States)

    2014-03-15

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.
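
    As a small illustration of the finite difference method mentioned for the quantized states, the sketch below diagonalizes the 1D effective-mass Hamiltonian for a single GaAs-like quantum well. A constant effective mass and a hard-coded square well are simplifying assumptions; real QCL work solves the coupled Schrödinger-Poisson system over the full heterostructure described in the review.

      import numpy as np

      hbar, q = 1.054571817e-34, 1.602176634e-19
      m = 0.067 * 9.1093837015e-31                 # GaAs-like effective mass
      L, n = 60e-9, 600
      x = np.linspace(0.0, L, n)
      dx = x[1] - x[0]
      V = np.where(np.abs(x - L / 2) < 5e-9, 0.0, 0.3 * q)  # 10 nm well, 0.3 eV barriers

      t = hbar**2 / (2 * m * dx**2)                # three-point stencil for -d2/dx2
      H = (np.diag(2 * t + V) + np.diag(np.full(n - 1, -t), 1)
           + np.diag(np.full(n - 1, -t), -1))
      E, psi = np.linalg.eigh(H)                   # quantized states of the well
      print("lowest eigenenergies (eV):", E[:3] / q)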

  16. Modeling techniques for quantum cascade lasers

    Science.gov (United States)

    Jirauschek, Christian; Kubis, Tillmann

    2014-03-01

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.

  17. Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems

    Science.gov (United States)

    Yang, Le; Wang, Shuo; Feng, Jianghua

    2017-11-01

    Electromagnetic interference (EMI) causes electromechanical damage to motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike fundamental-frequency components in motor drive systems, high-frequency EMI noise, coupled with the parasitic parameters of the system, is difficult to analyze and reduce. In this article, EMI modeling techniques for different functional units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques used to eliminate bearing currents are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.

  18. EXCHANGE-RATES FORECASTING: EXPONENTIAL SMOOTHING TECHNIQUES AND ARIMA MODELS

    Directory of Open Access Journals (Sweden)

    Dezsi Eva

    2011-07-01

    Exchange rate forecasting is, and has been, a challenging task in finance. Statistical and econometric models are widely used in the analysis and forecasting of foreign exchange rates. This paper investigates the behavior of daily exchange rates of the Romanian Leu against the Euro, United States Dollar, British Pound, Japanese Yen, Chinese Renminbi and the Russian Ruble. Smoothing techniques are generated and compared with each other. These models include the Simple Exponential Smoothing technique, the Double Exponential Smoothing technique, the Simple Holt-Winters and the Additive Holt-Winters techniques, as well as the Autoregressive Integrated Moving Average (ARIMA) model.
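
    The simplest of the compared techniques fits in a few lines; the recursion below is standard simple exponential smoothing, and the quotes are invented placeholders rather than the paper's data.

      def ses(series, alpha):
          """One-step-ahead smoothing: s_t = alpha*y_t + (1 - alpha)*s_{t-1}."""
          s = [series[0]]
          for y in series[1:]:
              s.append(alpha * y + (1 - alpha) * s[-1])
          return s

      eur_ron = [4.12, 4.15, 4.11, 4.18, 4.20, 4.17]   # hypothetical daily quotes
      print(ses(eur_ron, alpha=0.3))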

  19. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate those uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification

  20. Structural Modeling Using "Scanning and Mapping" Technique

    Science.gov (United States)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

    Supported by NASA Glenn Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle used to identify damage is to utilize the changes in the vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained from an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module - Structural Modeling. Three computer software packages have been selected and will be integrated for this purpose: PhotoModeler-Pro, AutoCAD-R14, and MSC/NASTRAN. AutoCAD is the most popular PC-CAD system currently available in the market. For our purpose, it serves as an interface to generate structural models of any particular engine parts or assembly, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires a further improvement in structural modeling techniques. We are working on a relatively new technique called "scanning and mapping". The basic idea is to produce a full and accurate 3D structural model by tracing over multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances or axes. Photographs can be taken with any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototypes of any complex structural components will be produced by PhotoModeler first, based on existing similar

  1. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines, and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  2. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    This paper presents a new mathematical model for selecting an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change of equipment, and that the approach can easily be applied to both manufacturing and service industries.
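
    The selection idea behind such a model (binary technique choices, additive productivity gains, a resource constraint) can be shown on a toy instance. A real fifty-four-technique instance would go to a MIP solver, but a four-technique example with invented numbers can simply be enumerated.

      from itertools import combinations

      gain = {"5S": 4.0, "TPM": 6.5, "kaizen": 3.2, "SMED": 5.1}  # invented gains
      cost = {"5S": 2.0, "TPM": 5.0, "kaizen": 1.5, "SMED": 3.0}  # invented costs
      budget = 7.0

      best = max((sum(gain[t] for t in s), s)
                 for k in range(len(gain) + 1)
                 for s in combinations(gain, k)
                 if sum(cost[t] for t in s) <= budget)
      print(best)   # -> (12.3, ('5S', 'kaizen', 'SMED')) for these numbers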

  3. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  4. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  5. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique used to update numerical models of structures in civil, mechanical, automotive, marine, aerospace and other engineering fields. The basic concept behind this technique is updating the numerical model so that it closely matches experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process, a response parameter of the structure has to be chosen, which helps to correlate the numerical model with the experimental results. The variables for the updating can be material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close match can be achieved between the experimental and numerical models.
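
    A bare-bones version of the firefly update used for this kind of model updating is sketched below on a one-variable problem: tuning a cantilever's EI to match a "measured" tip deflection. The closed-form beam model and all constants are illustrative, not the paper's MATLAB models.

      import numpy as np

      rng = np.random.default_rng(1)
      tip = lambda EI: 1e3 * 2.0**3 / (3 * EI)     # cantilever tip deflection PL^3/(3EI)
      measured = tip(2.1e6)                        # pretend experimental result
      obj = lambda EI: abs(tip(EI) - measured)     # updating objective

      beta0, gamma, alpha, scale = 1.0, 0.1, 0.05, 1e6
      x = rng.uniform(1e6, 3e6, 15)                # firefly positions (EI guesses)
      for _ in range(100):
          for i in range(len(x)):
              for j in range(len(x)):
                  if obj(x[j]) < obj(x[i]):        # move i toward the brighter j
                      beta = beta0 * np.exp(-gamma * ((x[i] - x[j]) / scale) ** 2)
                      x[i] += beta * (x[j] - x[i]) + alpha * scale * (rng.random() - 0.5)
      print("updated EI:", x[min(range(len(x)), key=lambda k: obj(x[k]))])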

  6. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory, the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods, focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...

  7. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examination of the adequacy of regression models. Conventional residual analysis based on plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
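
    The flavor of the approach can be reproduced with a deliberately misspecified straight-line fit: cumulate the residuals over the covariate and compare the observed process with simulated zero-mean Gaussian realizations. This is a simplified sketch of the general idea, not the authors' exact limiting process.

      import numpy as np

      rng = np.random.default_rng(2)
      x = np.sort(rng.uniform(0, 1, 200))
      y = 1.0 + 2.0 * x**2 + rng.normal(0, 0.2, 200)   # truth is quadratic

      X = np.column_stack([np.ones_like(x), x])        # fit a straight line anyway
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      resid = y - X @ beta

      W = np.cumsum(resid) / np.sqrt(len(x))           # cumsum process over x
      refs = [np.cumsum(rng.normal(0, resid.std(), len(x))) / np.sqrt(len(x))
              for _ in range(1000)]                    # reference realizations
      p = np.mean([np.abs(r).max() >= np.abs(W).max() for r in refs])
      print("approximate p-value:", p)                 # small => misspecified form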

  8. A Strategy Modelling Technique for Financial Services

    OpenAIRE

    Heinrich, Bernd; Winter, Robert

    2004-01-01

    Strategy planning processes often suffer from a lack of conceptual models that can be used to represent business strategies in a structured and standardized form. If natural language is replaced by an at least semi-formal model, the completeness, consistency, and clarity of strategy descriptions can be drastically improved. A strategy modelling technique is proposed that is based on an analysis of modelling requirements, a discussion of related work and a critical analysis of generic approach...

  9. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    Campbell and Shiller (1987) proposed a graphical technique for the present value model, which consists of plotting estimates of the spread and theoretical spread as calculated from the cointegrated vector autoregressive model without imposing the restrictions implied by the present value model. … In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread …

  10. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.

  11. A probabilistic evaluation procedure for process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2018-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  12. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques such as additive manufacturing has focused attention on the considerable variability of material response due to defects and other microstructural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response together with recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.

  13. Application of nonlinear reduction techniques in chemical process modeling: a review

    International Nuclear Information System (INIS)

    Muhaimin, Z; Aziz, N.; Abd Shukor, S.R.

    2006-01-01

    Model reduction techniques have been used widely in engineering fields such as electrical, mechanical and chemical engineering. The basic idea of a reduction technique is to replace the original system by an approximating system with a much smaller state-space dimension. A reduced-order model is more useful to the process industries for control purposes. This paper provides a review of the application of nonlinear reduction techniques in chemical processes. The advantages and disadvantages of each technique reviewed are also highlighted.

  14. Summary on several key techniques in 3D geological modeling.

    Science.gov (United States)

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
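
    Of the three techniques summarized, spatial interpolation is the easiest to show compactly; inverse-distance weighting is one common choice for turning scattered borehole picks into a geological interface. The points and depths below are invented.

      import numpy as np

      pts = np.array([[0.0, 0], [10, 0], [0, 10], [10, 10], [5, 4]])  # boreholes (x, y)
      z = np.array([12.0, 14.5, 11.0, 13.2, 12.8])                    # horizon depths

      def idw(q, power=2.0):
          d = np.linalg.norm(pts - q, axis=1)
          if d.min() < 1e-12:                 # query coincides with a data point
              return float(z[d.argmin()])
          w = d**-power
          return float(w @ z / w.sum())

      print(idw(np.array([5.0, 5.0])))        # interpolated depth at (5, 5)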

  15. Rabbit tissue model (RTM) harvesting technique.

    Science.gov (United States)

    Medina, Marelyn

    2002-01-01

    A method for creating a tissue model using a female rabbit for laparoscopic simulation exercises is described. The specimen is called a Rabbit Tissue Model (RTM). Dissection techniques are described for transforming the rabbit carcass into a small, compact unit that can be used for multiple training sessions. Preservation is accomplished by using saline and refrigeration. Only the animal trunk is used, with the rest of the animal carcass being discarded. Practice exercises are provided for using the preserved organs. Basic surgical skills, such as dissection, suturing, and knot tying, can be practiced on this model. In addition, the RTM can be used with any pelvic trainer that permits placement of larger practice specimens within its confines.

  16. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize an RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.
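
    The key property of such an approach, working directly on existing code runs, can be illustrated with rank correlations between sampled inputs and outputs of a stand-in function. SPARC itself is replaced here by a toy response, and the input distributions are invented; this is a generic sensitivity sketch, not the paper's method.

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(3)
      a = rng.lognormal(0.0, 0.5, 500)              # sampled uncertain inputs
      b = rng.uniform(0.5, 2.0, 500)
      out = a**2 / b + rng.normal(0, 0.05, 500)     # stand-in for code output

      for name, v in (("a", a), ("b", b)):
          rho, _ = spearmanr(v, out)                # sensitivity ranking measure
          print(name, "rank correlation:", round(float(rho), 2))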

  17. Constructing a canine carotid artery stenosis model by endovascular technique

    International Nuclear Information System (INIS)

    Cheng Guangsen; Liu Yizhi

    2005-01-01

    Objective: To establish a carotid artery stenosis model by an endovascular technique suitable for neuro-interventional therapy. Methods: Twelve dogs were anesthetized, and the tunica media and intima of unilateral segments of the carotid arteries were damaged by a home-made corneous guiding wire. Twenty-four carotid artery stenosis models were thus created. DSA examination was performed at postprocedural weeks 2, 4, 8 and 10 to assess the changes in the stenotic carotid arteries. Results: Twenty-four carotid artery stenosis models were successfully created in twelve dogs. Conclusions: Canine carotid artery stenosis models can be created with the endovascular method, with variations in pathologic character and hemodynamic changes similar to those in humans. They are useful for further research on new techniques and new materials for interventional treatment. (authors)

  18. [Intestinal lengthening techniques: an experimental model in dogs].

    Science.gov (United States)

    Garibay González, Francisco; Díaz Martínez, Daniel Alberto; Valencia Flores, Alejandro; González Hernández, Miguel Angel

    2005-01-01

    To compare two intestinal lengthening procedures in an experimental dog model. Intestinal lengthening is one of the methods of gastrointestinal reconstruction used for the treatment of short bowel syndrome. The modification of Bianchi's technique is an alternative: the modified technique decreases the number of anastomoses to a single one, thus reducing the risk of leaks and strictures. To our knowledge there is no clinical or experimental report that studies both techniques, so we carried out the present study. Twelve creole dogs were operated on with the Bianchi technique for intestinal lengthening (group A) and another 12 creole dogs of the same breed and weight were operated on with the modified technique (group B). The two groups were compared with respect to operating time, technical difficulties, cost, intestinal lengthening and anastomosis diameter. There was no statistical difference in anastomosis diameter (A = 9.0 mm vs. B = 8.5 mm, p = 0.3846). Operating time (142 min vs. 63 min), cost and technical difficulty were lower in group B (p …). Anastomoses (of group B) and intestinal segments had good blood supply and were patent along their full length. The Bianchi technique and the modified technique offer two good, reliable alternatives for the treatment of short bowel syndrome. The modified technique improved operating time, cost and technical difficulty.

  19. Using Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Other techniques that take into account the nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, so improvements are needed in describing the cognitive skills measured by the items.

  20. Machine Learning Techniques for Modelling Short Term Land-Use Change

    Directory of Open Access Journals (Sweden)

    Mileva Samardžić-Petrović

    2017-11-01

    The representation of land use change (LUC) is often achieved by using data-driven methods that include machine learning (ML) techniques. The main objectives of this research study are to implement three ML techniques, Decision Trees (DT), Neural Networks (NN), and Support Vector Machines (SVM), for LUC modeling, to compare these three techniques, and to find an appropriate data representation. The ML techniques are applied to a case study of LUC in three municipalities of the City of Belgrade, the Republic of Serbia, using historical geospatial data sets and considering nine land use classes. The ML models were built and assessed using two different time intervals. The information gain ranking technique and the recursive attribute elimination procedure were implemented to find the most informative attributes related to LUC in the study area. The results indicate that all three ML techniques can be used effectively for short-term forecasting of LUC, but the SVM achieved the highest agreement for the predicted changes.
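
    A skeleton of the three-way comparison in scikit-learn is shown below; the random matrices stand in for the informative geospatial attributes and the nine observed land-use classes, which are not reproduced here.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.neural_network import MLPClassifier
      from sklearn.svm import SVC

      rng = np.random.default_rng(4)
      X = rng.normal(size=(1000, 6))               # placeholder attributes
      y = rng.integers(0, 9, 1000)                 # nine land-use classes
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

      for clf in (DecisionTreeClassifier(), MLPClassifier(max_iter=500), SVC()):
          print(type(clf).__name__, clf.fit(Xtr, ytr).score(Xte, yte))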

  1. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  2. Constructing a canine intracranial aneurysm model by endovascular technique

    International Nuclear Information System (INIS)

    Liang Xiaodong; Liu Yizhi; Ni Caifang; Ding Yi

    2004-01-01

    Objective: To construct canine bifurcation aneurysm models suitable for evaluating endovascular devices for interventional therapy, using an endovascular technique. Methods: The right common carotid artery of six dogs was expanded with a pliable balloon by means of an endovascular technique, and embolization with a detachable balloon was then performed at its origination. DSA examinations were performed 1, 2 and 3 days after the procedure. Results: Six aneurysm models were successfully created in the six dogs, with the mean width and height of the aneurysms decreasing over 3 days. Conclusions: On DSA images this canine aneurysm model reproduces the size and shape of human cerebral bifurcation saccular aneurysms, and is suitable for exploring endovascular devices for aneurysmal therapy. The procedure is quick, reliable and reproducible. (authors)

  3. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. It begins with an introduction to circuit analysis techniques, laws, and frequency- and time-domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  4. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary
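
    Of the two sampling strategies described, Latin Hypercube Sampling is easy to sketch in plain NumPy: one stratum per sample in every parameter dimension, with independent random pairing across dimensions. The dimension count and ranges below are arbitrary, not GRAPE's actual parameter spaces.

      import numpy as np

      def lhs(n, bounds, rng):
          """n samples over bounds = [(low, high), ...]; one point per stratum."""
          d = len(bounds)
          strata = np.tile(np.arange(n), (d, 1))
          u = (rng.permuted(strata, axis=1).T + rng.random((n, d))) / n
          lo, hi = np.array(bounds).T
          return lo + u * (hi - lo)

      print(lhs(8, [(0.0, 1.0), (10.0, 20.0)], np.random.default_rng(5)))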

  5. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to identify their exact causes and effects. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some widely used models utilizing hydrological and river modeling parameters, and the estimation of those parameters in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models, and possible improvements over these models through 3D modeling, are also discussed. It is found that the HEC-RAS and FLO-2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO-2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found, which can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed, considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the understanding of the causes and effects of flooding.

  6. Plant status monitor: Modelling techniques and inherent benefits

    International Nuclear Information System (INIS)

    Breeding, R.J.; Lainoff, S.M.; Rees, D.C.; Prather, W.A.; Fickiessen, K.O.E.

    1987-01-01

    The Plant Status Monitor (PSM) is designed to provide plant personnel with information on the operational status of the plant and compliance with the plant technical specifications. The PSM software evaluates system models using a 'distributed processing' technique in which detailed models of individual systems are processed, rather than a single plant-level model being evaluated. In addition, the development of the system models for PSM provides inherent benefits to the plant by forcing detailed reviews of the technical specifications, system design and operating procedures, and plant documentation. (orig.)

  7. Modelling of ground penetrating radar data in stratified media using the reflectivity technique

    International Nuclear Information System (INIS)

    Sena, Armando R; Sen, Mrinal K; Stoffa, Paul L

    2008-01-01

    Horizontally layered media are often encountered in shallow exploration geophysics. Ground penetrating radar (GPR) data in these environments can be modelled by techniques that are more efficient than finite difference (FD) or finite element (FE) schemes because the lateral homogeneity of the media allows us to reduce the dependence on the horizontal spatial variables through Fourier transforms on these coordinates. We adapt and implement the invariant embedding or reflectivity technique used to model elastic waves in layered media to model GPR data. The results obtained with the reflectivity and FDTD modelling techniques are in excellent agreement and the effects of the air–soil interface on the radiation pattern are correctly taken into account by the reflectivity technique. Comparison with real wide-angle GPR data shows that the reflectivity technique can satisfactorily reproduce the real GPR data. These results and the computationally efficient characteristics of the reflectivity technique (compared to FD or FE) demonstrate its usefulness in interpretation and possible model-based inversion schemes of GPR data in stratified media
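
    The 1D core of the reflectivity idea, a recursive reflection coefficient for a stack of homogeneous layers at normal incidence, fits in a short function. Lossless non-magnetic layers and the permittivities below are illustrative simplifications of the GPR setting; the paper's implementation handles full radiation patterns and the air-soil interface.

      import numpy as np

      def reflectivity(freq, eps_r, thick):
          """eps_r: relative permittivities top->bottom (last is a half-space);
          thick: thicknesses of the intermediate layers."""
          c = 299792458.0
          n = np.sqrt(np.asarray(eps_r, dtype=complex))
          r = (n[-2] - n[-1]) / (n[-2] + n[-1])            # deepest interface
          for i in range(len(eps_r) - 3, -1, -1):
              rij = (n[i] - n[i + 1]) / (n[i] + n[i + 1])
              ph = np.exp(2j * 2 * np.pi * freq * n[i + 1] * thick[i] / c)
              r = (rij + r * ph) / (1 + rij * r * ph)      # Airy-type recursion
          return r

      # air over 0.3 m of soil (eps_r = 6) over bedrock (eps_r = 9) at 100 MHz
      print(abs(reflectivity(100e6, [1.0, 6.0, 9.0], [0.3])))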

  8. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  9. Analysis of Multipath Mitigation Techniques with Land Mobile Satellite Channel Model

    Directory of Open Access Journals (Sweden)

    M. Z. H. Bhuiyan J. Zhang

    2012-12-01

    Full Text Available Multipath is undesirable for Global Navigation Satellite System (GNSS) receivers, since the reception of multipath can create a significant distortion to the shape of the correlation function, leading to an error in the receivers’ position estimate. Many multipath mitigation techniques exist in the literature to deal with the multipath propagation problem in the context of GNSS. The multipath studies in the literature are often based on optimistic assumptions, for example, assuming a static two-path channel or a fading channel with a Rayleigh or a Nakagami distribution. But, in reality, there are a lot of channel modeling issues, for example, satellite-to-user geometry, variable number of paths, variable path delays and gains, Non-Line-Of-Sight (NLOS) path conditions, receiver movements, etc., that are kept out of consideration when analyzing the performance of these techniques. Therefore, it is of utmost importance to analyze the performance of different multipath mitigation techniques in realistic measurement-based channel models, for example, the Land Mobile Satellite (LMS) channel model.

  10. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  11. On a Numerical and Graphical Technique for Evaluating some Models Involving Rational Expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  12. On a numerical and graphical technique for evaluating some models involving rational expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  13. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions...... into the operation of the grid-connected power converters. This paper describes a quasi passive method for estimating the line impedance of the distribution electricity network. The method uses the model based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...
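
    As an illustrative sketch of the model-based identification idea (assuming a simple series R-L line model; this is not the paper's exact quasi-passive algorithm), the resistive and inductive parts can be recovered by least squares from sampled voltage and current:

    import numpy as np

    def estimate_rl(v, i, dt):
        """Fit v = R*i + L*di/dt to sampled waveforms by least squares."""
        di_dt = np.gradient(i, dt)              # numerical derivative of current
        A = np.column_stack([i, di_dt])         # regressor matrix [i, di/dt]
        (R, L), *_ = np.linalg.lstsq(A, v, rcond=None)
        return R, L

    # Synthetic check: R = 0.5 ohm, L = 2 mH, decaying current step
    t = np.linspace(0, 0.1, 2000)
    dt = t[1] - t[0]
    i = 10 * (1 - np.exp(-t / 0.01))
    v = 0.5 * i + 2e-3 * np.gradient(i, dt)
    print(estimate_rl(v, i, dt))                # ~ (0.5, 0.002)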

  14. Model technique for aerodynamic study of boiler furnace

    Energy Technology Data Exchange (ETDEWEB)

    1966-02-01

    The help of the Division was recently sought to improve the heat transfer and reduce the exit gas temperature in a pulverized-fuel-fired boiler at an Australian power station. One approach adopted was to construct from Perspex a 1:20 scale cold-air model of the boiler furnace and to use a flow-visualization technique to study the aerodynamic patterns established when air was introduced through the p.f. burners of the model. The work established good correlations between the behaviour of the model and of the boiler furnace.

  15. A New ABCD Technique to Analyze Business Models & Concepts

    OpenAIRE

    Aithal P. S.; Shailasri V. T.; Suresh Kumar P. M.

    2015-01-01

    Various techniques are used to analyze individual characteristics or organizational effectiveness like SWOT analysis, SWOC analysis, PEST analysis etc. These techniques provide an easy and systematic way of identifying various issues affecting a system and provides an opportunity for further development. Whereas these provide a broad-based assessment of individual institutions and systems, it suffers limitations while applying to business context. The success of any business model depends on ...

  16. A fermionic molecular dynamics technique to model nuclear matter

    International Nuclear Information System (INIS)

    Vantournhout, K.; Jachowicz, N.; Ryckebusch, J.

    2009-01-01

    Full text: At sub-nuclear densities of about 10¹⁴ g/cm³, nuclear matter arranges itself in a variety of complex shapes. This can be the case in the crust of neutron stars and in core-collapse supernovae. These slab-like and rod-like structures, designated as nuclear pasta, have been modelled with classical molecular dynamics techniques. We present a technique, based on fermionic molecular dynamics, to model nuclear matter at sub-nuclear densities in a semi-classical framework. The dynamical evolution of an antisymmetric ground state is described making the assumption of periodic boundary conditions. Adding the concepts of antisymmetry, spin and probability distributions to classical molecular dynamics brings the dynamical description of nuclear matter to a quantum mechanical level. Applications of this model vary from the investigation of macroscopic observables and the equation of state to the study of fundamental interactions on the microscopic structure of the matter. (author)

  17. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale one (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but run much faster (microseconds instead of hours/days).
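
    A minimal sketch of the surrogate-model idea (a generic scikit-learn Gaussian-process emulator; the RISMC toolchain itself is not shown): train on a handful of expensive simulation runs, then evaluate the cheap emulator in bulk.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_simulation(x):                      # stand-in for an hours-long code
        return np.sin(3 * x) + 0.5 * x

    X_train = np.linspace(0, 2, 12).reshape(-1, 1)    # a few costly runs
    y_train = expensive_simulation(X_train).ravel()

    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
    surrogate.fit(X_train, y_train)

    X_query = np.linspace(0, 2, 10000).reshape(-1, 1)   # cheap bulk evaluation
    y_pred, y_std = surrogate.predict(X_query, return_std=True)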

  18. HIGHLY-ACCURATE MODEL ORDER REDUCTION TECHNIQUE ON A DISCRETE DOMAIN

    Directory of Open Access Journals (Sweden)

    L. D. Ribeiro

    2015-09-01

    Full Text Available In this work, we present a highly-accurate technique of model order reduction applied to staged processes. The proposed method reduces the dimension of the original system based on null values of moment-weighted sums of heat and mass balance residuals on real stages. To compute these sums of weighted residuals, a discrete form of Gauss-Lobatto quadrature was developed, allowing a high degree of accuracy in these calculations. The locations where the residuals are cancelled vary with time and operating conditions, characterizing a desirable adaptive nature of this technique. Balances related to upstream and downstream devices (such as the condenser, reboiler, and feed tray of a distillation column) are considered as boundary conditions of the corresponding difference-differential equation system. The chosen number of moments is the dimension of the reduced model; it is much lower than the dimension of the complete model and does not depend on the size of the original model. Scaling of the discrete independent variable related to the stages was crucial for the computational implementation of the proposed method, avoiding the accumulation of round-off errors present even in low-degree polynomial approximations in the original discrete variable. Dynamic simulations of distillation columns were carried out to check the performance of the proposed model order reduction technique. The obtained results show the superiority of the proposed procedure in comparison with the orthogonal collocation method.

  19. Techniques for discrimination-free predictive models (Chapter 12)

    NARCIS (Netherlands)

    Kamiran, F.; Calders, T.G.K.; Pechenizkiy, M.; Custers, B.H.M.; Calders, T.G.K.; Schermer, B.W.; Zarsky, T.Z.

    2013-01-01

    In this chapter, we give an overview of the techniques developed ourselves for constructing discrimination-free classifiers. In discrimination-free classification the goal is to learn a predictive model that classifies future data objects as accurately as possible, yet the predicted labels should be

  20. Air quality modelling using chemometric techniques | Azid | Journal ...

    African Journals Online (AJOL)

    This study shows that chemometric techniques and modelling are an excellent tool for API assessment, air pollution source identification and apportionment, and can support the design of an API monitoring network for effective air pollution resources management. Keywords: air pollutant index; chemometric; ANN; ...

  1. Modelling Technique for Demonstrating Gravity Collapse Structures in Jointed Rock.

    Science.gov (United States)

    Stimpson, B.

    1979-01-01

    Described is a base-friction modeling technique for studying the development of collapse structures in jointed rocks. A moving belt beneath weak material is designed to simulate gravity. A description is given of the model frame construction. (Author/SA)

  2. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  3. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    International Nuclear Information System (INIS)

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. (paper)
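
    A minimal sketch of the separable (variable projection) idea on a generic two-exponential model (not the paper's PET compartment equations): for any trial nonlinear decay rates, the linear amplitudes are solved exactly by least squares, so the outer nonlinear search runs in half the dimensions.

    import numpy as np
    from scipy.optimize import minimize

    def residual_norm(b, t, y):
        basis = np.exp(-np.outer(t, b))                # columns exp(-b_k * t)
        a, *_ = np.linalg.lstsq(basis, y, rcond=None)  # linear amplitudes, exact
        return np.linalg.norm(basis @ a - y)

    t = np.linspace(0, 5, 200)
    y = 2.0 * np.exp(-0.7 * t) + 0.5 * np.exp(-3.0 * t)

    res = minimize(residual_norm, x0=[0.5, 2.0], args=(t, y), method="Nelder-Mead")
    print(res.x)                                       # ~ [0.7, 3.0]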

  4. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.

  5. Validation of transport models using additive flux minimization technique

    International Nuclear Information System (INIS)

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-01-01

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile

  6. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.

  7. Level-set techniques for facies identification in reservoir modeling

    Science.gov (United States)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
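
    A toy illustration of the underlying level-set update (far simpler than the paper's PDE-constrained scheme; the velocity field below is an arbitrary placeholder, not a computed shape derivative): the facies is the region where phi < 0 and its boundary moves with speed V.

    import numpy as np

    n, dt = 128, 0.1
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    phi = np.sqrt(x**2 + y**2) - 0.5     # initial facies: disc of radius 0.5
    V = 0.2 * np.ones_like(phi)          # placeholder velocity field

    for _ in range(20):
        gy, gx = np.gradient(phi)        # gradients along rows and columns
        grad_mag = np.sqrt(gx**2 + gy**2)
        phi = phi - dt * V * grad_mag    # explicit step of phi_t + V|grad phi| = 0

    facies = phi < 0                     # updated estimate of the facies region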

  8. Level-set techniques for facies identification in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2011-01-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies

  9. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    Science.gov (United States)

    Amicarelli, A.; Gariazzo, C.; Finardi, S.; Pelliccioni, A.; Silibello, C.

    2008-05-01

    Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) model solutions. They have become common for meteorological model applications in recent years, especially to enhance weather forecasts and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by the RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by the FARM model, several numerical experiments were conducted over the urban area of Rome, Italy, during a summer episode.
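
    For illustration, a minimal sketch of the Newtonian relaxation ("nudging") idea behind such techniques (toy scalar dynamics, not the RAMS implementation): the model tendency is augmented by a term pulling the state toward the observation.

    def model_tendency(x):                 # toy dynamics standing in for the model
        return -0.1 * x

    x, x_obs, G, dt = 5.0, 2.0, 0.5, 0.1   # state, observation, nudging strength, step
    for _ in range(100):
        x += dt * (model_tendency(x) + G * (x_obs - x))   # nudged Euler step
    print(x)                               # relaxed toward the observed value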

  10. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    Energy Technology Data Exchange (ETDEWEB)

    Amicarelli, A; Pelliccioni, A [ISPESL - Dipartimento Insediamenti Produttivi e Interazione con l' Ambiente, Via Fontana Candida, 1 00040 Monteporzio Catone (RM) Italy (Italy); Finardi, S; Silibello, C [ARIANET, via Gilino 9, 20128 Milano (Italy); Gariazzo, C

    2008-05-01

    Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) model solutions. They have become common for meteorological model applications in recent years, especially to enhance weather forecasts and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by the RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by the FARM model, several numerical experiments were conducted over the urban area of Rome, Italy, during a summer episode.

  11. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    International Nuclear Information System (INIS)

    Amicarelli, A; Pelliccioni, A; Finardi, S; Silibello, C; Gariazzo, C

    2008-01-01

    Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) model solutions. They have become common for meteorological model applications in recent years, especially to enhance weather forecasts and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by the RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by the FARM model, several numerical experiments were conducted over the urban area of Rome, Italy, during a summer episode.

  12. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    Science.gov (United States)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data or a combination of them. Among the techniques to compute a precise geoid model, Remove-Compute-Restore (RCR) has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a package called GRAVTool, based on MATLAB software, to compute local geoid models by the RCR technique, and its application in a study area. The studied area comprises the Federal District of Brazil, ~6000 km² of wavy relief with heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example on the studied area show the local geoid model computed by the GRAVTool package, using 1377 terrestrial gravity data points, SRTM data with 3 arc-second resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ±0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ±0.073 m) of 21 randomly spaced points where the geoid was computed by the geometric levelling technique supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ±0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).
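
    A schematic of the RCR combination (hypothetical helper names; the actual GRAVTool package is MATLAB-based and also handles gridding, integration and datum adjustment):

    def rcr_geoid(dg_obs, dg_ggm, dg_rtm, N_ggm, N_rtm, stokes_integration):
        # 1) REMOVE: take long (GGM) and short (terrain) wavelengths out
        dg_res = dg_obs - dg_ggm - dg_rtm
        # 2) COMPUTE: residual geoid from residual anomalies (e.g. Stokes/FFT)
        N_res = stokes_integration(dg_res)
        # 3) RESTORE: add the removed contributions back
        return N_ggm + N_res + N_rtm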

  13. A hybrid SEA/modal technique for modeling structural-acoustic interior noise in rotorcraft.

    Science.gov (United States)

    Jayachandran, V; Bonilha, M W

    2003-03-01

    This paper describes a hybrid technique that combines Statistical Energy Analysis (SEA) predictions for structural vibration with acoustic modal summation techniques to predict interior noise levels in rotorcraft. The method was applied for predicting the sound field inside a mock-up of the interior panel system of the Sikorsky S-92 helicopter. The vibration amplitudes of the frame and panel systems were predicted using a detailed SEA model and these were used as inputs to the model of the interior acoustic space. The spatial distribution of the vibration field on individual panels, and their coupling to the acoustic space were modeled using stochastic techniques. Leakage and nonresonant transmission components were accounted for using space-averaged values obtained from a SEA model of the complete structural-acoustic system. Since the cabin geometry was quite simple, the modeling of the interior acoustic space was performed using a standard modal summation technique. Sound pressure levels predicted by this approach at specific microphone locations were compared with measured data. Agreement within 3 dB in one-third octave bands above 40 Hz was observed. A large discrepancy in the one-third octave band in which the first acoustic mode is resonant (31.5 Hz) was observed. Reasons for such a discrepancy are discussed in the paper. The developed technique provides a method for modeling helicopter cabin interior noise in the frequency mid-range where neither FEA nor SEA is individually effective or accurate.

  14. A neuro-fuzzy computing technique for modeling hydrological time series

    Science.gov (United States)

    Nayak, P. C.; Sudheer, K. P.; Rangan, D. M.; Ramasastri, K. S.

    2004-05-01

    Intelligent computing tools such as artificial neural network (ANN) and fuzzy logic approaches are proven to be efficient when applied individually to a variety of problems. Recently there has been a growing interest in combining both these approaches, and as a result, neuro-fuzzy computing techniques have evolved. This approach has been tested and evaluated in the field of signal processing and related areas, but researchers have only begun evaluating the potential of this neuro-fuzzy hybrid approach in hydrologic modeling studies. This paper presents the application of an adaptive neuro fuzzy inference system (ANFIS) to hydrologic time series modeling, and is illustrated by an application to model the river flow of Baitarani River in Orissa state, India. An introduction to the ANFIS modeling approach is also presented. The advantage of the method is that it does not require the model structure to be known a priori, in contrast to most of the time series modeling techniques. The results showed that the ANFIS forecasted flow series preserves the statistical properties of the original flow series. The model showed good performance in terms of various statistical indices. The results are highly promising, and a comparative analysis suggests that the proposed modeling approach outperforms ANNs and other traditional time series models in terms of computational speed, forecast errors, efficiency, peak flow estimation etc. It was observed that the ANFIS model preserves the potential of the ANN approach fully, and eases the model building process.

  15. Towards a Business Process Modeling Technique for Agile Development of Case Management Systems

    Directory of Open Access Journals (Sweden)

    Ilia Bider

    2017-12-01

    Full Text Available A modern organization needs to adapt its behavior to changes in the business environment by changing its Business Processes (BP) and the corresponding Business Process Support (BPS) systems. One way of achieving such adaptability is via separation of the system code from the process description/model by applying the concept of executable process models. Furthermore, to ease the introduction of changes, such a process model should separate different perspectives, for example, the control-flow, human resources, and data perspectives, from each other. In addition, for developing a completely new process, it should be possible to start with a reduced process model to get a BPS system running quickly, and then continue to develop it in an agile manner. This article consists of two parts: the first sets requirements on modeling techniques that could be used in tools that support agile development of BPs and BPS systems. The second part suggests a business process modeling technique that allows modeling to start with the data/information perspective, which is appropriate for processes supported by Case or Adaptive Case Management (CM/ACM) systems. In a model produced by this technique, called a data-centric business process model, a process instance/case is defined as a sequence of states in a specially designed instance database, while the process model is defined as a set of rules that set restrictions on allowed states and transitions between them. The article details the background of the project of developing the data-centric process modeling technique, presents an outline of the structure of the model, and gives formal definitions for a substantial part of the model.

  16. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J. (comps.)

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.

  17. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    International Nuclear Information System (INIS)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J.

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques

  18. Examining Interior Grid Nudging Techniques Using Two-Way Nesting in the WRF Model for Regional Climate Modeling

    Science.gov (United States)

    This study evaluates interior nudging techniques using the Weather Research and Forecasting (WRF) model for regional climate modeling over the conterminous United States (CONUS) using a two-way nested configuration. NCEP–Department of Energy Atmospheric Model Intercomparison Pro...

  19. Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support

    International Nuclear Information System (INIS)

    Schubiger-Banz, S; Arisona, S M; Zhong, C

    2014-01-01

    This paper presents a workflow to increase the level of detail of reality-based 3D urban models. It combines the established workflows from photogrammetry and procedural modeling in order to exploit distinct advantages of both approaches. The combination has advantages over purely automatic acquisition in terms of visual quality, accuracy and model semantics. Compared to manual modeling, procedural techniques can be much more time effective while maintaining the qualitative properties of the modeled environment. In addition, our method includes processes for procedurally adding additional features such as road and rail networks. The resulting models meet the increasing needs in urban environments for planning, inventory, and analysis

  20. Dynamic model reduction: An overview of available techniques with application to power systems

    Directory of Open Access Journals (Sweden)

    Đukić Savo D.

    2012-01-01

    Full Text Available This paper summarises the model reduction techniques used for the reduction of large-scale linear and nonlinear dynamic models described by the differential and algebraic equations commonly used in control theory. The groups of methods discussed for reduction of linear dynamic models are based on singular perturbation analysis, modal analysis, singular value decomposition, moment matching, and combinations of singular value decomposition and moment matching. Among the nonlinear dynamic model reduction methods, proper orthogonal decomposition, the trajectory piecewise linear method, balancing-based methods, reduction by optimising system matrices, and projection from a linearised model are described. Part of the paper is devoted to the techniques commonly used for the reduction (equivalencing) of large-scale power systems, which are based on coherency, synchrony, singular perturbation analysis, modal analysis and identification. Two of the described techniques (the most interesting ones) are applied to the reduction of the commonly used New England 10-generator, 39-bus test power system.
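
    As a concrete instance of the SVD-based family surveyed above, a minimal balanced-truncation sketch for a stable linear system x' = Ax + Bu, y = Cx (square-root method; assumes both Gramians are positive definite):

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

    def balanced_truncation(A, B, C, r):
        Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
        Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
        Lc = cholesky(Wc, lower=True)
        Lo = cholesky(Wo, lower=True)
        U, s, Vt = svd(Lo.T @ Lc)                      # Hankel singular values s
        T = Lc @ Vt.T[:, :r] / np.sqrt(s[:r])          # balancing projection
        Ti = (U[:, :r] / np.sqrt(s[:r])).T @ Lo.T      # its left inverse
        return Ti @ A @ T, Ti @ B, C @ T               # reduced (Ar, Br, Cr)

    A = np.array([[-1.0, 0.5], [0.0, -2.0]])           # small stable test system
    B = np.array([[1.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    Ar, Br, Cr = balanced_truncation(A, B, C, r=1)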

  1. Sabots, Obturator and Gas-In-Launch Tube Techniques for Heat Flux Models in Ballistic Ranges

    Science.gov (United States)

    Bogdanoff, David W.; Wilder, Michael C.

    2013-01-01

    For thermal protection system (heat shield) design for space vehicle entry into earth and other planetary atmospheres, it is essential to know the augmentation of the heat flux due to vehicle surface roughness. At the NASA Ames Hypervelocity Free Flight Aerodynamic Facility (HFFAF) ballistic range, a campaign of heat flux studies on rough models, using infrared camera techniques, has been initiated. Several phenomena can interfere with obtaining good heat flux data when using this measuring technique. These include leakage of the hot drive gas in the gun barrel through joints in the sabot (model carrier) to create spurious thermal imprints on the model forebody, deposition of sabot material on the model forebody, thereby changing the thermal properties of the model surface and unknown in-barrel heating of the model. This report presents developments in launch techniques to greatly reduce or eliminate these problems. The techniques include the use of obturator cups behind the launch package, enclosed versus open front sabot designs and the use of hydrogen gas in the launch tube. Attention also had to be paid to the problem of the obturator drafting behind the model and impacting the model. Of the techniques presented, the obturator cups and hydrogen in the launch tube were successful when properly implemented

  2. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    Full Text Available A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of and interaction between normal and sterile E.saccharina moths in a temporally variable, but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified and according to which E.saccharina infestation levels and the associated sugarcane damage may be measured. Although many models have been formulated in the past describing the sterile insect technique, few of these models describe the technique for Lepidopteran species with more than one life stage and where F1-sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E.saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
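
    A single-stage toy version of such dynamics (all parameters invented for illustration; the paper's model tracks several Lepidopteran life stages and partial F1-sterility): wild matings are diluted by released sterile insects, and a sufficiently large release drives the population toward extinction.

    lam, K = 5.0, 1000.0        # per-generation growth rate, carrying capacity
    R = 3000.0                  # sterile insects released each generation
    N = 500.0                   # wild (fertile) population
    for gen in range(15):
        wild_mating = N / (N + R)                  # fraction of fertile matings
        N = lam * N * wild_mating * (1.0 - N / K)  # diluted logistic growth
    print(round(N, 3))          # collapses toward zero for large enough R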

  3. The application of neural networks with artificial intelligence technique in the modeling of industrial processes

    International Nuclear Information System (INIS)

    Saini, K. K.; Saini, Sanju

    2008-01-01

    Neural networks are a relatively new artificial intelligence technique that emulates the behavior of biological neural systems in digital software or hardware. These networks can automatically 'learn' complex relationships among data. This feature makes the technique very useful in modeling processes for which mathematical modeling is difficult or impossible. The work described here outlines some examples of the application of neural networks with artificial intelligence techniques in the modeling of industrial processes.

  4. Use of hydrological modelling and isotope techniques in Guvenc basin

    International Nuclear Information System (INIS)

    Altinbilek, D.

    1991-07-01

    The study covers the work performed under Project No. 335-RC-TUR-5145, entitled ''Use of Hydrologic Modelling and Isotope Techniques in Guvenc Basin'', and is an initial part of a program for estimating runoff from Central Anatolian watersheds. The study presented herein consists mainly of three parts: 1) the acquisition of a library of rainfall excess, direct runoff and isotope data for the Guvenc basin; 2) the modification of the SCS model to be applied first to the Guvenc basin and then to other basins of Central Anatolia for predicting surface runoff from gauged and ungauged watersheds; and 3) the use of environmental isotope techniques in order to define the basin components of the streamflow of the Guvenc basin. 31 refs, figs and tabs
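
    For reference, a minimal sketch of the standard SCS curve-number relation on which the modified model builds (metric form; CN is the curve number):

    def scs_runoff(P_mm, CN):
        S = 25400.0 / CN - 254.0       # potential maximum retention (mm)
        Ia = 0.2 * S                   # initial abstraction
        if P_mm <= Ia:
            return 0.0
        return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

    print(scs_runoff(50.0, 75))        # ~9.3 mm of direct runoff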

  5. Modeling of an Aged Porous Silicon Humidity Sensor Using ANN Technique

    Directory of Open Access Journals (Sweden)

    Tarikul ISLAM

    2006-10-01

    Full Text Available A porous silicon (PS) sensor based on the capacitive technique, used for measuring relative humidity, has the advantages of low cost, ease of fabrication with controlled structure, and CMOS compatibility. But the response of the sensor is a nonlinear function of humidity and suffers from errors due to aging and stability. One adaptive linear (ADALINE) ANN model has been developed to model the behavior of the sensor with a view to estimating these errors and compensating for them. The response of the sensor is represented by a third-order polynomial basis function whose coefficients are determined by the ANN technique. The drift in sensor output due to aging of the PS layer is also modeled by adapting the weights of the polynomial function. ANN-based modeling is found to be more suitable than conventional physical modeling of the PS humidity sensor in a changing environment and with drift due to aging. It helps online estimation of nonlinearity as well as monitoring of faults of the PS humidity sensor using the coefficients of the model.
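
    A minimal sketch of the ADALINE idea (synthetic readings and invented coefficients, not the paper's sensor data): the cubic polynomial basis is fitted online with a normalized Widrow-Hoff (LMS) update, so slow drift in the response can be tracked as new samples arrive.

    import numpy as np

    rh = np.random.uniform(10, 90, 2000)              # relative humidity samples (%)
    u = rh / 100.0                                    # scaled input for conditioning
    cap = 1.0 + 0.8*u + 0.5*u**2 + 0.3*u**3           # assumed cubic sensor response
    cap += np.random.normal(0, 0.005, rh.size)        # measurement noise

    w = np.zeros(4)                                   # polynomial coefficients
    for x, y in zip(u, cap):
        phi = np.array([1.0, x, x**2, x**3])          # cubic polynomial basis
        w += 0.5 * (y - w @ phi) * phi / (phi @ phi)  # normalized LMS update
    print(w)                                          # approaches the generating coefficients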

  6. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling

    DEFF Research Database (Denmark)

    Dotto, C. B.; Mannina, G.; Kleidorfer, M.

    2012-01-01

    -UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multiobjective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty...... techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside...... the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters, and model prediction intervals. For the ill-posed water quality model, the differences between the results were much wider; and the paper

  7. NMR and modelling techniques in structural and conformation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, R J [Liverpool Univ. (United Kingdom)

    1994-12-31

    The use of Lanthanide Induced Shifts (L.I.S.) and modelling techniques in conformational analysis is presented. The use of Co(III) porphyrins as shift reagents is discussed, with examples of their use in the conformational analysis of some heterocyclic amines. (author) 13 refs., 9 figs.

  8. A review of cutting mechanics and modeling techniques for biological materials.

    Science.gov (United States)

    Takabi, Behrouz; Tai, Bruce L

    2017-07-01

    This paper presents a comprehensive survey on the modeling of tissue cutting, including both soft tissue and bone cutting processes. In order to achieve higher accuracy in tissue cutting, a critical process in surgical operations, the meticulous modeling of such processes is important, in particular for surgical tool development and analysis. This review paper is focused on the mechanical concepts and modeling techniques utilized to simulate tissue cutting, such as cutting forces and chip morphology. These models are presented in two major categories, namely soft tissue cutting and bone cutting. Fracture toughness is commonly used to describe tissue cutting, while the Johnson-Cook material model is often adopted for bone cutting in conjunction with finite element analysis (FEA). In each section, the most recent mathematical and computational models are summarized. The differences and similarities among these models, challenges, novel techniques, and recommendations for future work are discussed along with each section. This review aims to provide a broad and in-depth view of the methods suitable for tissue and bone cutting simulations.
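
    For reference, the Johnson-Cook flow-stress model mentioned above, in its standard form (the constants below are placeholders, not bone parameters):

    import numpy as np

    def johnson_cook(eps, eps_rate, T, A=50e6, B=100e6, n=0.2, C=0.02, m=1.0,
                     eps_rate0=1.0, T_room=293.0, T_melt=1500.0):
        strain_term = A + B * eps**n                        # strain hardening
        rate_term = 1.0 + C * np.log(eps_rate / eps_rate0)  # strain-rate sensitivity
        T_star = (T - T_room) / (T_melt - T_room)           # homologous temperature
        return strain_term * rate_term * (1.0 - T_star**m)  # flow stress (Pa)

    print(johnson_cook(0.1, 10.0, 350.0))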

  9. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal

    2016-01-01

    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  10. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models; Reduction de modeles thermiques de batiments: amelioration des techniques par modelisation des sollicitations meteorologiques

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings for the evaluation of energy exploitation costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. This thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model reduction techniques tested (Marshall's truncation, the Michailesco aggregation method and Moore's truncation) with their algorithms and their encoding in the MATRED software, (4) the application of model reduction methods to the GENEC and ETNA cells and to a medium-size dual-zone building, (5) the modeling of meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower-inertia building. These new methods are compared to classical methods. (J.S.) 69 refs.

  11. Transfer of physics detector models into CAD systems using modern techniques

    International Nuclear Information System (INIS)

    Dach, M.; Vuoskoski, J.

    1996-01-01

    Designing high energy physics detectors for future experiments requires sophisticated computer aided design and simulation tools. In order to satisfy the future demands in this domain, modern techniques, methods, and standards have to be applied. We present an interface application, designed and implemented using object-oriented techniques, for the widely used GEANT physics simulation package. It converts GEANT detector models into the future industrial standard, STEP. (orig.)

  12. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    Full Text Available One of the most interesting aspects of modelling and simulation study is describing real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model, in order to simulate the real phenomena, is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas; for instance, geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI’s ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. But a key challenge of ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after a review of the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  13. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation study is describing real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model, in order to simulate the real phenomena, is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas; for instance, geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. But a key challenge of ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after a review of the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  14. Towards Model Validation and Verification with SAT Techniques

    OpenAIRE

    Gogolla, Martin

    2010-01-01

    After sketching how system development and the UML (Unified Modeling Language) and the OCL (Object Constraint Language) are related, validation and verification with the tool USE (UML-based Specification Environment) is demonstrated. As a more efficient alternative for verification tasks, two approaches using SAT-based techniques are put forward: First, a direct encoding of UML and OCL with Boolean variables and propositional formulas, and second, an encoding employing an...

  15. Fuzzy modeling and control of rotary inverted pendulum system using LQR technique

    International Nuclear Information System (INIS)

    Fairus, M A; Mohamed, Z; Ahmad, M N

    2013-01-01

    The rotary inverted pendulum (RIP) system is a nonlinear, non-minimum-phase, unstable and underactuated system. Controlling such a system is challenging and is considered a benchmark problem in control theory. Prior to designing a controller, equations that represent the behaviour of the RIP system must be developed as accurately as possible without overly increasing the complexity of the equations. Through the Takagi-Sugeno (T-S) fuzzy modeling technique, the nonlinear system model is transformed into several local linear time-invariant models which are then blended together to reproduce, or approximate, the nonlinear system model within a local region. A parallel distributed compensation (PDC) based fuzzy controller using the linear quadratic regulator (LQR) technique is designed to control the RIP system. The results show that the designed controller is able to balance the RIP system.
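
    The LQR step at the heart of such a PDC design can be sketched numerically. The state-space matrices below are hypothetical placeholders for one linearized local model, not the paper's actual RIP parameters; a minimal sketch under those assumptions:

      # LQR gain for one local linear model (illustrative A, B, Q, R values).
      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0,  0.0, 0.0],
                    [0.0, 0.0, -2.3, 0.0],
                    [0.0, 0.0,  0.0, 1.0],
                    [0.0, 0.0, 28.0, 0.0]])   # linearized dynamics (hypothetical)
      B = np.array([[0.0], [1.4], [0.0], [-2.1]])
      Q = np.diag([10.0, 1.0, 100.0, 1.0])    # state weighting
      R = np.array([[1.0]])                   # input weighting

      P = solve_continuous_are(A, B, Q, R)    # solve the algebraic Riccati equation
      K = np.linalg.solve(R, B.T @ P)         # optimal state feedback u = -K x
      print("LQR gain K =", np.round(K, 3))

    In a full PDC controller, one such gain would be computed per local T-S model and blended with the same fuzzy membership functions.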

  16. Modelling, analysis and validation of microwave techniques for the characterisation of metallic nanoparticles

    Science.gov (United States)

    Sulaimalebbe, Aslam

    In the last decade, the study of nanoparticle (NP) systems has become a large and interesting research area due to their novel properties and functionalities, which differ from those of the bulk materials, and also their potential applications in different fields. It is vital to understand the behaviour and properties of nano-materials when aiming to implement nanotechnology, control their behaviour and design new material systems with superior performance. Physical characterisation of NPs falls into two main categories, property and structure analysis, where the properties of the NPs cannot be studied without knowledge of their size and structure. The direct measurement of the electrical properties of metal NPs presents a key challenge and necessitates the use of innovative experimental techniques. There have been numerous reports of two/four-point resistance measurements of NP films and also of the electrical conductivity of NP films using the interdigitated microarray (IDA) electrode. However, using microwave techniques such as the open-ended coaxial probe (OCP) and the microwave dielectric resonator (DR) for the electrical characterisation of metallic NPs is much more accurate and effective compared to other traditional techniques. This is because they are inexpensive, convenient, non-destructive, contactless, hazardless (i.e. at low power) and require no special sample preparation. This research is the first attempt to determine the microwave properties of Pt and Au NP films, which are appealing materials for nano-scale electronics, using the aforementioned microwave techniques. The ease of synthesis, the relatively low cost, the unique catalytic activities and the control over size and shape were the main considerations in choosing Pt and Au NPs for the present study. The initial phase of this research was to implement and validate the aperture admittance model for the OCP measurement through experiments and 3D full-wave simulation using the commercially available Ansoft

  17. Integrated approach to model decomposed flow hydrograph using artificial neural network and conceptual techniques

    Science.gov (United States)

    Jain, Ashu; Srinivasulu, Sanaga

    2006-02-01

    This paper presents the findings of a study aimed at decomposing a flow hydrograph into different segments based on physical concepts in a catchment, and modelling the different segments using different techniques, viz. conceptual techniques and artificial neural networks (ANNs). An integrated modelling framework is proposed, capable of modelling infiltration, base flow, evapotranspiration, soil moisture accounting, and certain segments of the decomposed flow hydrograph using conceptual techniques, and the complex, non-linear, and dynamic rainfall-runoff process using the ANN technique. Specifically, five different multi-layer perceptron (MLP) and two self-organizing map (SOM) models have been developed. The rainfall and streamflow data derived from the Kentucky River catchment were employed to test the proposed methodology and develop all the models. The performance of all the models was evaluated using seven different standard statistical measures. The results obtained in this study indicate that (a) the rainfall-runoff relationship in a large catchment consists of at least three or four different mappings corresponding to different dynamics of the underlying physical processes, (b) an integrated approach that models the different segments of the decomposed flow hydrograph using different techniques is better than a single ANN in modelling the complex, dynamic, non-linear, and fragmented rainfall-runoff process, (c) a simple model based on the concept of flow recession is better than an ANN for modelling the falling limb of a flow hydrograph, and (d) decomposing a flow hydrograph into different segments corresponding to different dynamics based on physical concepts is better than the soft decomposition employed using SOM.
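
    The recession concept behind finding (c) amounts to a linear-reservoir decay of discharge after the peak. A minimal sketch, with a hypothetical peak discharge Q0 and recession constant k (not the Kentucky River calibration):

      # Falling-limb model: Q(t) = Q0 * exp(-t / k) for a linear reservoir.
      import math

      Q0, k = 120.0, 4.5                 # m^3/s and days (illustrative values)
      for t in range(0, 11):             # days after the hydrograph peak
          Q = Q0 * math.exp(-t / k)
          print(f"day {t:2d}: Q = {Q:7.2f} m^3/s")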

  18. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

    Full Text Available Soft computing techniques such as the Artificial Neural Network (ANN) have improved predictive capability and have found application in geotechnical engineering. The aim of this research is to utilize soft computing techniques and Multiple Regression Models (MLR) for forecasting the California Bearing Ratio (CBR) of soil from its index properties. The CBR of soil can be predicted from various soil characterization parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in the Basrah districts. Data gained from the experimental results were used in the regression models and in the soft computing technique using an artificial neural network. The liquid limit, plasticity index, modified compaction test and CBR test were determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the developed models were examined in terms of regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was noticed that all the proposed ANN models perform better than the MLR model. In particular, the ANN model with all input parameters reveals better outcomes than the other ANN models.
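
    The MLR-versus-ANN comparison can be sketched in a few lines. The data below are synthetic stand-ins for the 86 laboratory samples, with invented index properties and coefficients; only the workflow (fit both model types, compare R2 and MSE on held-out data) follows the paper:

      # MLR vs ANN comparison on synthetic CBR data (illustrative only).
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.metrics import r2_score, mean_squared_error
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      # columns: liquid limit, plasticity index, max dry density (assumed ranges)
      X = rng.uniform([20.0, 5.0, 1.5], [60.0, 30.0, 2.2], size=(86, 3))
      y = 5.0 * X[:, 2] - 0.1 * X[:, 1] + rng.normal(0, 0.8, 86)  # synthetic CBR

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      for name, model in [("MLR", LinearRegression()),
                          ("ANN", MLPRegressor(hidden_layer_sizes=(8,),
                                               max_iter=5000, random_state=0))]:
          model.fit(X_tr, y_tr)
          pred = model.predict(X_te)
          print(name, "R2 =", round(r2_score(y_te, pred), 3),
                "MSE =", round(mean_squared_error(y_te, pred), 3))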

  19. [Preparation of simulate craniocerebral models via three dimensional printing technique].

    Science.gov (United States)

    Lan, Q; Chen, A L; Zhang, T; Zhu, Q; Xu, T

    2016-08-09

    The three dimensional (3D) printing technique was used to prepare simulated craniocerebral models, which were applied to preoperative planning and surgical simulation. The image data were collected from a PACS system. Image data of skull bone, brain tissue and tumors, cerebral arteries and aneurysms, and functional regions and related neural tracts of the brain were extracted from thin-slice scans (slice thickness 0.5 mm) of computed tomography (CT), magnetic resonance imaging (MRI, slice thickness 1 mm), computed tomography angiography (CTA), and functional magnetic resonance imaging (fMRI) data, respectively. MIMICS software was applied to reconstruct colored virtual models by identifying and differentiating tissues according to their gray scales. The colored virtual models were then submitted to a 3D printer, which produced life-sized craniocerebral models for surgical planning and surgical simulation. 3D-printed craniocerebral models allowed neurosurgeons to perform complex procedures in specific clinical cases through detailed surgical planning. They offered great convenience for evaluating the size of the spatial fissure of the sellar region before surgery, which helped to optimize surgical approach planning. These 3D models also provided detailed information about the location of aneurysms and their parent arteries, which helped surgeons to choose appropriate aneurysm clips, as well as to perform surgical simulation. The models further gave clear indications of the depth and extent of tumors and their relationship to eloquent cortical areas and adjacent neural tracts, helping to avoid surgical damage to important neural structures. As a novel and promising technique, the application of 3D-printed craniocerebral models can improve surgical planning by converting virtual visualization into real life-sized models. It also contributes to functional anatomy study.

  20. Skin fluorescence model based on the Monte Carlo technique

    Science.gov (United States)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2003-10-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores following the packing of collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while the fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate the skin fluorescence spectra.
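
    A heavily simplified sketch of the Monte Carlo idea: photons random-walk through a 1 mm slab, depositing excitation where they are absorbed. The optical coefficients, slab thickness and isotropic scattering rule are all illustrative assumptions, far simpler than the authors' collagen-following skin model:

      # Minimal photon random walk in a 1 mm slab (illustrative parameters).
      import numpy as np

      rng = np.random.default_rng(1)
      mu_a, mu_s = 0.3, 10.0           # absorption / scattering, 1/mm (assumed)
      mu_t = mu_a + mu_s
      bins = np.zeros(50)              # excitation deposited per 0.02 mm depth bin

      for _ in range(20000):
          z, uz, w = 0.0, 1.0, 1.0     # depth, direction cosine, photon weight
          while w > 1e-3 and 0.0 <= z < 1.0:
              z += uz * rng.exponential(1.0 / mu_t)    # path to next interaction
              if not (0.0 <= z < 1.0):
                  break                                # photon left the slab
              bins[int(z / 0.02)] += w * mu_a / mu_t   # deposit absorbed fraction
              w *= mu_s / mu_t                         # surviving scattered weight
              uz = rng.uniform(-1.0, 1.0)              # crude isotropic rescattering
      print("strongest excitation at depth (mm):", 0.02 * bins.argmax())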

  1. An experimental technique for the modelling of air flow movements in nuclear plant

    International Nuclear Information System (INIS)

    Ainsworth, R.W.; Hallas, N.J.

    1986-01-01

    This paper describes an experimental technique developed at Harwell to model ventilation flows in plant at 1/5th scale. The technique achieves dynamic similarity not only for forced convection imposed by the plant ventilation system, but also for the interaction between natural convection (from heated objects) and forced convection. The use of a scale model to study the flow of fluids is a well-established technique, relying upon various criteria, expressed in terms of dimensionless numbers, to achieve dynamic similarity. For forced convective flows, simulation of the Reynolds number is sufficient, but to model natural convection and its interaction with forced convection, the Rayleigh, Grashof and Prandtl numbers must be simulated at the same time. This paper describes such a technique, used in experiments on a hypothetical glove box cell to study the interaction between forced and natural convection. The model contained features typically present in a cell, such as a man, a motor, stairs, a glove box, etc. The aim of the experiment was to study the overall flow patterns, especially around the model man 'working' at the glove box. The cell ventilation was theoretically designed to produce a downward flow over the face of the man working at the glove box. However, the results showed that the flow velocities produced an upward flow over the face of the man. The work has indicated the viability of modelling simultaneously the forced and natural convection processes in a cell. It has also demonstrated that simplistic assumptions cannot be made about ventilation flow patterns. (author)
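
    The similarity argument can be made concrete by evaluating the dimensionless groups for the full-scale cell and a naively scaled 1/5 model. The velocities, lengths, temperature differences and air properties below are illustrative, not the Harwell rig's values; the point is that Reynolds and Grashof numbers cannot both be matched by simple geometric scaling:

      # Dimensionless groups for dynamic similarity (illustrative values, air ~20 C).
      g, beta, nu, alpha = 9.81, 3.4e-3, 1.5e-5, 2.1e-5   # SI units

      def groups(U, L, dT):
          Re = U * L / nu                       # forced convection
          Pr = nu / alpha                       # fluid property ratio
          Gr = g * beta * dT * L**3 / nu**2     # natural convection
          return Re, Pr, Gr, Gr * Pr            # last entry is Rayleigh

      for label, (U, L, dT) in {"full scale": (0.5, 5.0, 15.0),
                                "1/5 scale":  (0.5, 1.0, 15.0)}.items():
          Re, Pr, Gr, Ra = groups(U, L, dT)
          print(f"{label}: Re={Re:.3g} Pr={Pr:.2f} Gr={Gr:.3g} Ra={Ra:.3g}")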

  2. Experience with the Large Eddy Simulation (LES) Technique for the Modelling of Premixed and Non-premixed Combustion

    OpenAIRE

    Malalasekera, W; Ibrahim, SS; Masri, AR; Gubba, SR; Sadasivuni, SK

    2013-01-01

    Compared to RANS-based combustion modelling, the Large Eddy Simulation (LES) technique has recently emerged as a more accurate and very adaptable technique in terms of handling complex turbulent interactions in combustion modelling problems. In this paper, the application of the LES-based combustion modelling technique and the validation of the models in non-premixed and premixed situations are considered. Two well-defined experimental configurations where high-quality data are available for validation is...

  3. Using the Continuum of Design Modelling Techniques to Aid the Development of CAD Modeling Skills in First Year Industrial Design Students

    Science.gov (United States)

    Storer, I. J.; Campbell, R. I.

    2012-01-01

    Industrial Designers need to understand and command a number of modelling techniques to communicate their ideas to themselves and others. Verbal explanations, sketches, engineering drawings, computer aided design (CAD) models and physical prototypes are the most commonly used communication techniques. Within design, unlike some disciplines,…

  4. Identification techniques for phenomenological models of hysteresis based on the conjugate gradient method

    International Nuclear Information System (INIS)

    Andrei, Petru; Oniciuc, Liviu; Stancu, Alexandru; Stoleriu, Laurentiu

    2007-01-01

    An identification technique for the parameters of phenomenological models of hysteresis is presented. The basic idea of our technique is to set up a system of equations for the parameters of the model as a function of known quantities on the major or minor hysteresis loops (e.g. coercive force, susceptibilities at various points, remanence), or other magnetization curves. This system of equations can be either over- or underspecified and is solved by using the conjugate gradient method. Numerical results related to the identification of parameters in the Energetic, Jiles-Atherton, and Preisach models are presented.
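
    A minimal sketch of the identification idea, with a simple tanh-shaped loop branch standing in for a real hysteresis model (the shape, the parameters Ms, a, Hc and the "measured" points are all hypothetical); the mismatch between computed and known loop quantities is minimized with the conjugate gradient method:

      # Conjugate-gradient fit of model parameters to known loop points.
      import numpy as np
      from scipy.optimize import minimize

      H = np.linspace(-5, 5, 21)                          # applied field samples
      Ms0, a0, Hc0 = 1.2, 1.5, 0.8                        # "true" values (synthetic)
      M_meas = Ms0 * np.tanh((H - Hc0) / a0)              # synthetic measurements

      def mismatch(p):
          Ms, a, Hc = p
          return np.sum((Ms * np.tanh((H - Hc) / a) - M_meas) ** 2)

      res = minimize(mismatch, x0=[1.0, 1.0, 0.0], method="CG")
      print("identified (Ms, a, Hc):", np.round(res.x, 3))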

  5. Application of the weighted total field-scattering field technique to 3D-PSTD light scattering model

    Science.gov (United States)

    Hu, Shuai; Gao, Taichang; Liu, Lei; Li, Hao; Chen, Ming; Yang, Bo

    2018-04-01

    PSTD (Pseudo-Spectral Time Domain) is an excellent model for the light-scattering simulation of nonspherical aerosol particles. However, due to the particular discretization form of Maxwell's equations in PSTD, the traditional Total Field/Scattering Field (TF/SF) technique used in FDTD (Finite-Difference Time-Domain) is not applicable to PSTD, and the time-consuming pure scattering field technique is mainly applied to introduce the incident wave. To this end, the weighted TF/SF technique proposed by X. Gao is generalized and applied to the 3D-PSTD scattering model. Using this technique, the incident light can be effectively introduced by modifying the electromagnetic components in an inserted connecting region between the total-field and scattering-field regions with incident terms, where the incident terms are obtained by weighting the incident field with a window function. To optimally determine the thickness of the connecting region and the window-function type for PSTD calculations, their influence on the modeling accuracy is first analyzed. To further verify the effectiveness and advantages of the weighted TF/SF technique, the improved PSTD model is validated against the PSTD model equipped with the pure scattering field technique in both calculation accuracy and efficiency. The results show that the performance of PSTD is not sensitive to the choice of window function. The number of connection layers required decreases with increasing spatial resolution; for a spatial resolution of 24 grid points per wavelength, a 6-layer region is thick enough. The scattering phase matrices and integral scattering parameters obtained by the improved PSTD show excellent consistency with those of well-tested models for spherical and nonspherical particles, illustrating that the weighted TF/SF technique can introduce the incident wave precisely. The weighted TF/SF technique shows higher computational efficiency than the pure scattering field technique.
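
    Only the weighting step is sketched below: window-function weights are generated across an assumed 6-layer connecting region and applied to incident-field samples there. This is not a full PSTD update, and the Hanning window and layer count are illustrative choices:

      # Window weights for a connecting region (illustrative, not a PSTD solver).
      import numpy as np

      n_layers = 6                              # connecting-region thickness (assumed)
      w = np.hanning(2 * n_layers)[:n_layers]   # rising half of a Hanning window
      E_inc = np.ones(n_layers)                 # incident-field samples in the region
      weighted_terms = w * E_inc                # terms added to the field components
      print("layer weights:", np.round(w, 3))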

  6. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed

  7. Chronology of DIC technique based on the fundamental mathematical modeling and dehydration impact.

    Science.gov (United States)

    Alias, Norma; Saipol, Hafizah Farhah Saipan; Ghani, Asnida Che Abd

    2014-12-01

    A chronology of mathematical models for the heat and mass transfer equations is proposed for the prediction of moisture and temperature behavior during drying using the DIC (Détente Instantanée Contrôlée, or instant controlled pressure drop) technique. The DIC technique has potential as a widely applicable dehydration method for high-value foods, maintaining nutrition and the best possible quality for food storage. The model is governed by a regression model, followed by 2D Fick's and Fourier's parabolic equations and a 2D elliptic-parabolic equation in a rectangular slice. The models neglect shrinkage and radiation effects. Simulations of the heat and mass transfer equations of parabolic and elliptic-parabolic type, using numerical methods based on the finite difference method (FDM), are illustrated. Intel® Core™2 Duo processors with the Linux operating system and the C programming language were used as the computational platform for the simulation. Qualitative and quantitative differences between the DIC technique and conventional drying methods are shown by way of comparison.
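
    The parabolic (Fourier) building block can be illustrated with an explicit finite-difference update on a rectangular slice; the diffusivity, grid spacing and boundary temperatures are illustrative, not the paper's DIC parameters:

      # Explicit FDM for the 2D heat equation on a rectangular slice (illustrative).
      import numpy as np

      nx, ny, alpha, dx, dt = 40, 20, 1e-7, 1e-3, 1.0   # grid, m^2/s, m, s
      assert alpha * dt / dx**2 <= 0.25                  # 2D explicit stability limit
      T = np.full((ny, nx), 20.0)                        # initial temperature, C
      T[:, 0] = T[:, -1] = T[0, :] = T[-1, :] = 90.0     # heated boundary (drying air)

      for _ in range(500):
          lap = (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:]
                 - 4.0 * T[1:-1, 1:-1]) / dx**2          # 5-point Laplacian
          T[1:-1, 1:-1] += alpha * dt * lap
      print("centre temperature after 500 steps:", round(T[ny // 2, nx // 2], 2), "C")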

  8. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection to the fail-safe generation process. • With integrated fault coverage, the unavailability of repairable components of a DPS can be estimated. • The newly developed reliability model reveals the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes in unavailability according to the variation of diverse factors. - Abstract: With the improvement of digital technologies, the digital protection system (DPS) incorporates multiple sophisticated fault-tolerant techniques (FTTs) in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of an FTT in reliability. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to the fail-safe generation process. A model has been developed to estimate the unavailability of repairable components of a DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability.

  9. A Review of Domain Modelling and Domain Imaging Techniques in Ferroelectric Crystals

    Directory of Open Access Journals (Sweden)

    John E. Huber

    2011-02-01

    Full Text Available The present paper reviews models of domain structure in ferroelectric crystals, thin films and bulk materials. Common crystal structures in ferroelectric materials are described and the theory of compatible domain patterns is introduced. Applications to multi-rank laminates are presented. Alternative models employing phase-field and related techniques are reviewed. The paper then presents methods of observing ferroelectric domain structure, including optical and polarized-light microscopy, scanning electron microscopy, X-ray and neutron diffraction, atomic force microscopy and piezo-force microscopy. The use of more than one technique for unambiguous identification of the domain structure is also described.

  10. The Integrated Use of Enterprise and System Dynamics Modelling Techniques in Support of Business Decisions

    Directory of Open Access Journals (Sweden)

    K. Agyapong-Kodua

    2012-01-01

    Full Text Available Enterprise modelling techniques support business process (re)engineering by capturing existing processes and, based on perceived outputs, support the design of future process models capable of meeting enterprise requirements. System dynamics modelling tools, on the other hand, are used extensively for policy analysis and for modelling aspects of dynamics which impact on businesses. In this paper, the use of enterprise and system dynamics modelling techniques has been integrated to facilitate qualitative and quantitative reasoning about the structures and behaviours of processes and resource systems used by a manufacturing enterprise during the production of composite bearings. The case study testing reported has led to the specification of a new modelling methodology for analysing and managing dynamics and complexities in production systems. This methodology is based on a systematic transformation process, which synergises the use of a selection of public domain enterprise modelling, causal loop and continuous simulation modelling techniques. The success of the modelling process relies on the creation of useful CIMOSA process models which are then converted to causal loops. The causal loop models are then structured and translated to equivalent dynamic simulation models using the proprietary continuous simulation modelling tool iThink.

  11. Improved ceramic slip casting technique. [application to aircraft model fabrication

    Science.gov (United States)

    Buck, Gregory M. (Inventor); Vasquez, Peter (Inventor)

    1993-01-01

    A primary concern in modern fluid dynamics research is the experimental verification of computational aerothermodynamic codes. This research requires high precision and detail in the test model employed. Ceramic materials are used for these models because of their low heat conductivity and their survivability at high temperatures. To fabricate such models, slip casting techniques were developed to provide net-form, precision casting capability for high-purity ceramic materials in aqueous solutions. In previous slip casting techniques, block, or flask, molds made of plaster-of-paris were used to draw liquid from the slip material. Upon setting, parts were removed from the flask mold and cured in a kiln at high temperatures. Casting detail was usually limited with this technique -- detailed parts were frequently damaged upon separation from the flask mold, as the molded parts are extremely delicate in the uncured state and the flask mold is inflexible. Ceramic surfaces were also marred by 'parting lines' caused by mold separation. This adversely affected the aerodynamic surface quality of the model as well. (Parting lines are invariably necessary on or near the leading edges of wings, nosetips, and fins for mold separation. These areas are also critical for flow boundary layer control.) Parting agents used in the casting process also affected surface quality. These agents eventually soaked into the mold or the model, or flaked off when releasing the cast model. Different materials were tried, such as oils, paraffin, and even algae. The algae released best, but some of it remained on the model and imparted an uneven texture and discoloration on the model surface when cured. According to the present invention, a wax pattern for a shell mold is provided, and an aqueous mixture of a calcium sulfate-bonded investment material is applied as a coating to the wax pattern. The coated wax pattern is then dried, followed by curing to vaporize the wax pattern and leave a shell

  12. Trimming a hazard logic tree with a new model-order-reduction technique

    Science.gov (United States)

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
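
    The tornado-diagram half of the approach reduces to a one-at-a-time sweep: perturb each branch choice about a baseline, record the swing in the output, and fix the branches whose swing is negligible. The three-parameter loss function below is a toy stand-in for the UCERF3-TD portfolio loss:

      # One-at-a-time (tornado) sensitivity sweep over branch choices (toy model).
      baseline = {"p1": 1.0, "p2": 0.5, "p3": 2.0}
      alts = {"p1": (0.8, 1.2), "p2": (0.1, 0.9), "p3": (1.9, 2.1)}

      def loss(p):                                   # stand-in portfolio loss metric
          return p["p1"] ** 2 + 10.0 * p["p2"] + 0.1 * p["p3"]

      swings = {}
      for name, (lo, hi) in alts.items():
          vals = [loss(dict(baseline, **{name: v})) for v in (lo, hi)]
          swings[name] = max(vals) - min(vals)       # output swing for this branch

      for name in sorted(swings, key=swings.get, reverse=True):
          print(f"{name}: swing = {swings[name]:.3f}")  # fix the small-swing branches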

  13. Techniques to extract physical modes in model-independent analysis of rings

    International Nuclear Information System (INIS)

    Wang, C.-X.

    2004-01-01

    A basic goal of Model-Independent Analysis is to extract the physical modes underlying the beam histories collected at a large number of beam position monitors so that beam dynamics and machine properties can be deduced independent of specific machine models. Here we discuss techniques to achieve this goal, especially the Principal Component Analysis and the Independent Component Analysis.
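
    A minimal PCA sketch on synthetic turn-by-turn data: the singular value decomposition of the mean-subtracted (turns x BPMs) matrix yields the dominant modes, with a betatron-like oscillation showing up as a pair of comparable singular values. The tune, phase advance and noise level are invented for illustration:

      # PCA of BPM histories via SVD (synthetic betatron-like data).
      import numpy as np

      turns, n_bpm = 1000, 40
      t = np.arange(turns)[:, None]
      phase = np.linspace(0, 2 * np.pi, n_bpm)[None, :]
      X = np.sin(2 * np.pi * 0.28 * t + phase)        # oscillating mode (tune 0.28)
      X += 0.05 * np.random.default_rng(2).normal(size=(turns, n_bpm))  # noise

      X -= X.mean(axis=0)                              # centre each BPM history
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      print("first 4 singular values:", np.round(s[:4], 1))
      # A betatron mode appears as a pair of comparable singular values
      # (sine/cosine components); ICA can further separate mixed modes.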

  14. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lubna Moin

    2009-04-01

    Full Text Available This paper explores and compares different modeling and analysis techniques and then examines the model order reduction approach and its significance. Traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. They are also used for analyzing linear and non-linear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a unique technique of generating the genetic design from the tree-structured transfer function obtained from a Bond Graph. This research work combines bond graphs for model representation with genetic programming for exploring different ideas in the design space; tree-structured transfer functions result from replacing typical bond graph elements with their impedance equivalents, specifying impedance laws for Bond Graph multiports. The tree-structured form thus obtained from the Bond Graph is applied to generating the genetic tree. Application studies will identify key issues and the importance of advancing this approach towards becoming an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function with the conventional and Bond Graph methods are analyzed, and then an approach towards model order reduction is observed. The suggested algorithm and other known modern model order reduction techniques are applied to an 11th-order high-pass filter [1], with different approaches. The model order reduction technique developed in this paper has the least reduction errors and, secondly, the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and Bond Graph methods are compared and

  15. Numerical and modeling techniques used in the EPIC code

    International Nuclear Information System (INIS)

    Pizzica, P.A.; Abramson, P.B.

    1977-01-01

    EPIC models the fuel and coolant motion which results from internal fuel pin pressure (from fission gas or fuel vapor) and/or from the generation of sodium vapor pressure in the coolant channel subsequent to pin failure in an LMFBR. The modeling includes the ejection of molten fuel from the pin into a coolant channel with any amount of voiding, through a clad rip which may be of any length or which may expand with time. One-dimensional Eulerian hydrodynamics is used to model both the motion of fuel and fission gas inside a molten fuel cavity and the mixture of two-phase sodium and fission gas in the channel. The motion of molten fuel particles in the coolant channel is tracked with a particle-in-cell technique.

  16. Probabilistic method/techniques of evaluation/modeling that permits to optimize/reduce the necessary resources

    International Nuclear Information System (INIS)

    Florescu, G.; Apostol, M.; Farcasiu, M.; Luminita Bedreaga, M.; Nitoi, M.; Turcu, I.

    2004-01-01

    The fault tree/event tree modeling approach is widely used in the modeling and behavior simulation of nuclear structures, systems and components (NSSCs) during different conditions of operation. Evaluation of NSSC reliability, availability, risk or safety during operation by using probabilistic techniques is also widely practised. The development of computer capabilities has offered new possibilities for designing, processing and using large NSSC models. There are situations where large, complex and correct NSSC models need to be associated with rapid results/solutions/decisions or with multiple processing in order to obtain specific results. Large fault/event trees are hard to develop, review and process. During operation of NSSCs, time especially is an important factor in decision making. The paper presents a probabilistic method that permits evaluation/modeling of NSSCs and intends to solve the above problems by adopting appropriate techniques. The method is stated for special applications and is based on specific PSA analysis steps, information, algorithms, criteria and relations, in correspondence with fault tree/event tree modeling and similar techniques, in order to obtain appropriate results for NSSC model analysis. A special classification of NSSCs is stated in order to reflect aspects of the use of the method. Common reliability databases are also part of the information necessary to complete the analysis process. Special data and information bases contribute to stating the proposed method/techniques. The paper also presents the specific steps of the method, its applicability, its main advantages and the problems to be studied further. The method permits optimization/reduction of the resources used to perform PSA activities. (author)

  17. Parameter estimation techniques and uncertainty in ground water flow model predictions

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Davis, P.A.

    1990-01-01

    Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs

  18. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 1: Concepts and methodology

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    Full Text Available A comprehensive data driven modeling experiment is presented in a two-part paper. In this first part, an extensive data-driven modeling experiment is proposed. The most important concerns regarding the way data driven modeling (DDM) techniques and data were handled, compared, and evaluated, and the basis on which findings and conclusions were drawn, are discussed. A concise review of key articles that presented comparisons among various DDM techniques is presented. Six DDM techniques, namely, neural networks, genetic programming, evolutionary polynomial regression, support vector machines, M5 model trees, and K-nearest neighbors are proposed and explained. Multiple linear regression and naïve models are also suggested as a baseline for comparison with the various techniques. Five datasets from Canada and Europe representing evapotranspiration, upper and lower layer soil moisture content, and the rainfall-runoff process are described and proposed, in the second paper, for the modeling experiment. Twelve different realizations (groups) from each dataset are created by a procedure involving random sampling. Each group contains three subsets: training, cross-validation, and testing. Each modeling technique is proposed to be applied to each of the 12 groups of each dataset. This way, both prediction accuracy and uncertainty of the modeling techniques can be evaluated. The description of the datasets, the implementation of the modeling techniques, results and analysis, and the findings of the modeling experiment are deferred to the second part of this paper.
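
    The resampling protocol can be sketched directly: twelve random realizations per dataset, each split into training, cross-validation and testing subsets (the 60/20/20 proportions below are an assumption, not stated in the abstract):

      # Twelve random realizations, each split into train / cross-validation / test.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 1000                                   # records in one dataset (illustrative)
      groups = []
      for g in range(12):                        # twelve realizations per dataset
          idx = rng.permutation(n)               # random sampling without replacement
          train, cv, test = np.split(idx, [int(0.6 * n), int(0.8 * n)])
          groups.append((train, cv, test))
      print("group 0 subset sizes:", [len(s) for s in groups[0]])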

  19. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    Science.gov (United States)

    Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin

    2018-06-01

    Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance, and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time-consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built in an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time-consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
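
    The projection mechanics behind pMOR can be sketched with a toy parametric system: snapshots at a few parameter samples span the reduced basis V, each new parameter value then requires only a small solve, and the residual of the lifted solution serves as a cheap error indicator (an adaptive method enriches V when this indicator grows). The matrices here are stand-ins, not a vibro-acoustic FE model:

      # Galerkin-projection sketch for a toy parametric system A(p) u = f.
      import numpy as np

      n = 200
      A0 = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
            + np.diag(-np.ones(n - 1), -1))            # stiffness-like matrix
      A1 = np.diag(np.linspace(0.0, 1.0, n))           # parameter-dependent part
      f = np.ones(n)
      A = lambda p: A0 + p * A1

      snaps = np.column_stack([np.linalg.solve(A(p), f) for p in (0.1, 1.0, 5.0)])
      V, _ = np.linalg.qr(snaps)                       # orthonormal reduced basis

      p_new = 2.5
      q = np.linalg.solve(V.T @ A(p_new) @ V, V.T @ f)  # small reduced solve
      u_r = V @ q                                       # lifted approximation
      res = np.linalg.norm(A(p_new) @ u_r - f) / np.linalg.norm(f)
      print(f"relative residual (error indicator): {res:.2e}")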

  20. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and ''smart'' wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow

  1. Statistical and Machine-Learning Data Mining Techniques for Better Predictive Modeling and Analysis of Big Data

    CERN Document Server

    Ratner, Bruce

    2011-01-01

    The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has

  2. Viable Techniques, Leontief’s Closed Model, and Sraffa’s Subsistence Economies

    Directory of Open Access Journals (Sweden)

    Alberto Benítez

    2014-11-01

    Full Text Available This paper studies the production techniques employed in economies that reproduce themselves. Special attention is paid to the distinction usually made between those that do not produce a surplus and those that do, which are referred to as first- and second-class economies, respectively. Based on this, we present a new definition of viable economies and show that every viable economy of the second class can be represented as a viable economy of the first class under two different forms, Leontief's closed model and Sraffa's subsistence economies. This allows us to present some remarks concerning the economic interpretation of the two models. On the one hand, we argue that the participation of each good in the production of every good can be considered a normal characteristic of the first model and, on the other hand, we provide a justification for the same condition to be considered a characteristic of the second model. Furthermore, we discuss three definitions of viable techniques advanced by other authors and show that they differ from ours because they admit economies that do not reproduce themselves completely.
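
    One common formalization of viability (a sketch only; the paper's own definition differs from others it reviews) tests the Perron-Frobenius root of the input-coefficient matrix A: a root below one leaves a surplus (second class), a root of exactly one allows reproduction without surplus (first class). The 2x2 matrix is illustrative:

      # Viability test via the dominant eigenvalue of the input matrix (toy example).
      import numpy as np

      A = np.array([[0.2, 0.3],
                    [0.4, 0.1]])             # input of good i per unit of good j
      lam = max(abs(np.linalg.eigvals(A)))   # Perron-Frobenius (dominant) root
      print("Perron root:", round(lam, 3))
      print("viable with surplus" if lam < 1 else
            "reproduces without surplus" if np.isclose(lam, 1.0) else "not viable")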

  3. Effects of pump recycling technique on stimulated Brillouin scattering threshold: a theoretical model.

    Science.gov (United States)

    Al-Asadi, H A; Al-Mansoori, M H; Ajiya, M; Hitam, S; Saripan, M I; Mahdi, M A

    2010-10-11

    We develop a theoretical model that can be used to predict the stimulated Brillouin scattering (SBS) threshold in optical fibers as affected by the Brillouin pump recycling technique. Simulation results obtained from our model are in close agreement with our experimental results. The developed model utilizes single-mode optical fiber of different lengths as the Brillouin gain medium. For a 5-km-long single-mode fiber, the calculated threshold power for SBS is about 16 mW with the conventional technique. This value is reduced to about 8 mW when the residual Brillouin pump is recycled at the end of the fiber. The reduction of the SBS threshold is due to the longer interaction length between the Brillouin pump and the Stokes wave.
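
    For orientation, the conventional (non-recycled) threshold can be estimated with the classical criterion P_th ≈ 21 K A_eff / (g_B L_eff). This is not the authors' model, but with typical single-mode fibre values and a polarization factor K ≈ 2 it lands near their ~16 mW figure for 5 km; all numbers below are assumed, typical values:

      # Back-of-envelope SBS threshold from the standard criterion (illustrative).
      import math

      g_B, A_eff, K = 5e-11, 80e-12, 2.0   # Brillouin gain (m/W), area (m^2), pol. factor
      alpha, L = 4.6e-5, 5000.0            # attenuation (1/m, ~0.2 dB/km), length (m)
      L_eff = (1 - math.exp(-alpha * L)) / alpha   # effective interaction length
      P_th = 21 * K * A_eff / (g_B * L_eff)
      print(f"L_eff = {L_eff:.0f} m, P_th ~ {1e3 * P_th:.1f} mW")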

  4. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV dark energy instrument based on the Baryon Acoustic Oscillations (BAO) and Red Shift Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at 0.5 […] Volume phase holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are more capable of accurately predicting measured performance. Finally, we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.

  5. A Three-Component Model for Magnetization Transfer. Solution by Projection-Operator Technique, and Application to Cartilage

    Science.gov (United States)

    Adler, Ronald S.; Swanson, Scott D.; Yeung, Hong N.

    1996-01-01

    A projection-operator technique is applied to a general three-component model for magnetization transfer, extending our previous two-component model [R. S. Adler and H. N. Yeung,J. Magn. Reson. A104,321 (1993), and H. N. Yeung, R. S. Adler, and S. D. Swanson,J. Magn. Reson. A106,37 (1994)]. The PO technique provides an elegant means of deriving a simple, effective rate equation in which there is natural separation of relaxation and source terms and allows incorporation of Redfield-Provotorov theory without any additional assumptions or restrictive conditions. The PO technique is extended to incorporate more general, multicomponent models. The three-component model is used to fit experimental data from samples of human hyaline cartilage and fibrocartilage. The fits of the three-component model are compared to the fits of the two-component model.

  6. The Effect of Learning Based on Technology Model and Assessment Technique toward Thermodynamic Learning Achievement

    Science.gov (United States)

    Makahinda, T.

    2018-02-01

    The purpose of this research is to determine the effect of a technology-based learning model and assessment technique on thermodynamics achievement, controlling for students' intelligence. This research is experimental. The sample was taken through cluster random sampling, with a total of 80 student respondents. The results show that the thermodynamics achievement of students taught with the environment-utilization learning model is higher than that of students taught with simulation animation, after controlling for student intelligence. There is an interaction effect between the technology-based learning model and the assessment technique on students' thermodynamics achievement, after controlling for student intelligence. Based on these findings, lectures should use the environment-based thermodynamics learning model together with the project assessment technique.

  7. MODELLING THE DELAMINATION FAILURE ALONG THE CFRP-CFST BEAM INTERACTION SURFACE USING DIFFERENT FINITE ELEMENT TECHNIQUES

    Directory of Open Access Journals (Sweden)

    AHMED W. AL-ZAND

    2017-01-01

    Full Text Available Nonlinear finite element (FE) models are prepared to investigate the behaviour of concrete-filled steel tube (CFST) beams strengthened by carbon fibre reinforced polymer (CFRP) sheets. The beams are strengthened from the bottom side only, by varied sheet lengths (full and partial beam lengths), and then subjected to ultimate flexural loads. Three surface interaction techniques are used to implement the bonding behaviour between the steel tube and the CFRP sheet, namely the full tie interaction (TI), cohesive element (CE) and cohesive behaviour (CB) techniques, using ABAQUS software. Results of the comparison between the FE analysis and an existing experimental study confirm that FE models with the TI technique are applicable for beams strengthened by CFRP sheets with a full wrapping length; this technique could not accurately implement the CFRP delamination failure, which occurred for beams with a partial wrapping length. Meanwhile, the FE models with the CE and CB techniques are applicable for implementing both CFRP failure modes (rupture and delamination) for full and partial wrapping lengths, respectively. The ultimate load ratios achieved by the FE models using the TI, CE and CB techniques are about 1.122, 1.047 and 1.045, respectively, compared to the results of existing experimental tests.

  8. A fully blanketed early B star LTE model atmosphere using an opacity sampling technique

    International Nuclear Information System (INIS)

    Phillips, A.P.; Wright, S.L.

    1980-01-01

    A fully blanketed LTE model of a stellar atmosphere with T_e = 21914 K (θ_e = 0.23), log g = 4 is presented. The model includes an explicit representation of the opacity due to the strongest lines, and uses a statistical opacity sampling technique to represent the weaker line opacity. The sampling technique is subjected to several tests and the model is compared with an atmosphere calculated using the line-distribution function method. The limitations of the distribution function method and the particular opacity sampling method used here are discussed in the light of the results obtained. (author)

  9. A characteristic study of CCF modeling techniques and optimization of CCF defense strategies

    International Nuclear Information System (INIS)

    Kim, Min Chull

    2000-02-01

    Common cause failures (CCFs) are among the major contributors to risk and core damage frequency (CDF) in operating nuclear power plants (NPPs). Our study on CCF focused on the following aspects: 1) a characteristic study of CCF modeling techniques and 2) development of an optimal CCF defense strategy. Firstly, the characteristics of CCF modeling techniques were studied through a sensitivity study of CCF occurrence probability with respect to system redundancy. The modeling techniques considered in this study include those most widely used worldwide, i.e., the beta factor, MGL, alpha factor, and binomial failure rate models. We found that the MGL and alpha factor models are essentially identical in terms of CCF probability. Secondly, in the study of CCF defense, the various methods identified in previous studies for defending against CCF were classified into five categories. Based on these categories, we developed a generic method by which the optimal CCF defense strategy can be selected. The method is not only qualitative but also quantitative in nature: the selection of the optimal strategy among candidates is based on the analytic hierarchy process (AHP). We applied this method to two motor-driven valves for containment sump isolation in the Ulchin 3 and 4 nuclear power plants. The result indicates that the method for developing an optimal CCF defense strategy is effective.
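
    The simplest of the compared models, the beta factor, splits a component's failure probability into an independent part and a common cause part. A minimal sketch with illustrative numbers (not the Ulchin 3/4 data):

      # Beta-factor CCF sketch for a 2-train redundant system (illustrative values).
      lam = 1e-5           # total failure rate per hour (assumed)
      beta = 0.05          # common cause fraction (assumed)
      t = 24.0             # mission time, hours

      q_total = lam * t                  # rare-event approximation per component
      q_ccf = beta * q_total             # both trains fail together
      q_indep = (1 - beta) * q_total     # independent part, per train
      q_sys = q_indep ** 2 + q_ccf       # independent pair + common cause term
      print(f"2-train unavailability ~ {q_sys:.3e}")   # CCF term dominates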

  10. Monte Carlo technique for very large ising models

    Science.gov (United States)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model with size 600*600*600. We give the central part of our computer program (for a CDC Cyber 76), which will be helpful also in a simulation of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 T_c is found to decay asymptotically as exp(-t/2.90) if t is measured in Monte Carlo steps per spin, and M(t = 0) = 1 initially.
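
    For scale, a plain single-spin Metropolis sketch of the same dynamics on a small lattice is shown below. It is not multispin coding (which packs many spins into one machine word and updates them with bitwise logic); the 16^3 lattice and sweep count are chosen only so the sketch runs quickly:

      # Single-spin Metropolis dynamics for the 3D Ising model (small-lattice sketch).
      import numpy as np

      rng = np.random.default_rng(3)
      L, sweeps = 16, 50
      T = 1.4 * 4.511                     # 4.511 ~ simple-cubic T_c in units of J/k_B
      s = np.ones((L, L, L), dtype=int)   # M(t = 0) = 1, as in the paper

      for _ in range(sweeps):             # one Monte Carlo step per spin per sweep
          for _ in range(L**3):
              i, j, k = rng.integers(0, L, 3)
              nb = (s[(i+1)%L, j, k] + s[(i-1)%L, j, k] + s[i, (j+1)%L, k]
                    + s[i, (j-1)%L, k] + s[i, j, (k+1)%L] + s[i, j, (k-1)%L])
              dE = 2.0 * s[i, j, k] * nb                 # flip energy change (J = 1)
              if dE <= 0 or rng.random() < np.exp(-dE / T):
                  s[i, j, k] *= -1
      print("M after", sweeps, "sweeps:", s.mean())      # decays toward 0 above T_c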

  11. Development Model of Basic Technique Skills Training Shot-Put Obrien Style Based Biomechanics Review

    Directory of Open Access Journals (Sweden)

    Danang Rohmat Hidayanto

    2018-03-01

    Full Text Available The background of this research is the unavailability of a learning model for the basic techniques of the O'Brien-style shot put, integrated into a skills-training programme based on biomechanical study, that can be used as a reference for building students' basic technique in the O'Brien style. The purpose of this study is to develop a model of basic-technique training for the O'Brien-style shot put based on biomechanical studies for beginner levels, covering the basic preparation, glide, final stage, put and follow-through, as well as overall putting performance in the O'Brien style, all arranged in a medium that is easily accessible at any time, by anyone and anywhere, especially in SMK Negeri 1 Kalijambe Sragen. The research method used is the "Research and Development" approach. Preliminary studies show that 43.0% of respondents considered the O'Brien style very important to develop through a skills-training model based on biomechanics, and 40.0% of respondents stated that it is important to develop biomechanics-based learning media. Therefore, it was deemed necessary to develop learning media for O'Brien-style training skills based on biomechanical studies. Development of the media started from the design of the storyboard and script used for the media. The design of this model is called the draft model. The draft model was reviewed by a multimedia expert and an O'Brien-style expert to establish the product's validity. A total of 78.24% of expert ratings declared the product feasible, with some input. In the small-group test with n = 6, a value of 72.2% was obtained, i.e. valid enough to be tested in a large group. In the large-group test with n = 12, a value of 70.83% was obtained, i.e. feasible enough to be tested in the field. In the field test, an experimental group was prepared with treatment according to the media, and a control group with free treatment. From the result of the significance test it can be

  12. Finite-element-model updating using computational intelligence techniques applications to structural dynamics

    CERN Document Server

    Marwala, Tshilidzi

    2010-01-01

    Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction, a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies on a sound statistical basis; and • response surface methods and expectation m...

  13. Model-driven engineering of information systems principles, techniques, and practice

    CERN Document Server

    Cretu, Liviu Gabriel

    2015-01-01

    Model-driven engineering (MDE) is the automatic production of software from simplified models of structure and functionality. It mainly involves the automation of routine and technologically complex programming tasks, thus allowing developers to focus on the true value-adding functionality that the system needs to deliver. This book serves as an overview of some of the core topics in MDE. The volume is broken into two sections offering a selection of papers that helps the reader not only understand the MDE principles and techniques, but also learn from practical examples. Also covered are the

  14. A note on the multi model super ensemble technique for reducing forecast errors

    International Nuclear Information System (INIS)

    Kantha, L.; Carniel, S.; Sclavo, M.

    2008-01-01

    The multi model super ensemble (SE) technique has been used with considerable success to improve meteorological forecasts and is now being applied to ocean models. Although the technique has been shown to produce deterministic forecasts that can be superior to the individual models in the ensemble or a simple multi model ensemble forecast, there is a clear need to understand its strengths and limitations. This paper is an attempt to do so in simple, easily understood contexts. The results demonstrate that the SE forecast is almost always better than the simple ensemble forecast, the degree of improvement depending on the properties of the models in the ensemble. However, the skill of the SE forecast with respect to the true forecast depends on a number of factors, principal among which is the skill of the models in the ensemble. As can be expected, if the ensemble consists of models with poor skill, the SE forecast will also be poor, although better than the ensemble forecast. On the other hand, the inclusion of even a single skillful model in the ensemble increases the forecast skill significantly.
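
    The core of the SE technique is a set of regression weights fitted to each member's anomalies over a training window and then applied in forecast mode; a minimal sketch with a synthetic "truth" and three biased, noisy members (all values invented):

      # Superensemble sketch: train regression weights, compare against ensemble mean.
      import numpy as np

      rng = np.random.default_rng(4)
      t = np.arange(200)
      truth = np.sin(0.1 * t)
      members = np.column_stack([truth + b + rng.normal(0, s, t.size)
                                 for b, s in [(0.4, 0.2), (-0.3, 0.3), (0.1, 0.5)]])

      train = slice(0, 150)                              # training window
      anoms = members - members[train].mean(axis=0)      # member anomalies
      w, *_ = np.linalg.lstsq(anoms[train],
                              truth[train] - truth[train].mean(), rcond=None)
      se = truth[train].mean() + anoms @ w               # superensemble forecast
      ens = members.mean(axis=1)                         # simple ensemble mean

      test = slice(150, 200)
      rmse = lambda x: np.sqrt(np.mean((x[test] - truth[test]) ** 2))
      print("ensemble-mean RMSE:", round(rmse(ens), 3), "SE RMSE:", round(rmse(se), 3))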

  15. Personal recommender systems for learners in lifelong learning: requirements, techniques and model

    NARCIS (Netherlands)

    Drachsler, Hendrik; Hummel, Hans; Koper, Rob

    2007-01-01

    Drachsler, H., Hummel, H. G. K., & Koper, R. (2008). Personal recommender systems for learners in lifelong learning: requirements, techniques and model. International Journal of Learning Technology, 3(4), 404-423.

  16. Modeling of high-pressure generation using the laser colliding foil technique

    Energy Technology Data Exchange (ETDEWEB)

    Fabbro, R.; Faral, B.; Virmont, J.; Cottet, F.; Romain, J.P.

    1989-03-01

    An analytical model describing the collision of two foils is presented and applied to the collision of laser-accelerated foils. Numerical simulations have been made to verify this model and to compare its results in the case of laser-accelerated foils. Scaling laws relating the different parameters (shock pressure, laser intensity, target material, etc.) have been established. The application of this technique to high-pressure equation of state experiments is then discussed.

  17. Modeling of high-pressure generation using the laser colliding foil technique

    International Nuclear Information System (INIS)

    Fabbro, R.; Faral, B.; Virmont, J.; Cottet, F.; Romain, J.P.

    1989-01-01

    An analytical model describing the collision of two foils is presented and applied to the collision of laser-accelerated foils. Numerical simulations have been made to verify this model and to compare its results in the case of laser-accelerated foils. Scaling laws relating the different parameters (shock pressure, laser intensity, target material, etc.) have been established. The application of this technique to high-pressure equation of state experiments is then discussed

  18. Development of Reservoir Characterization Techniques and Production Models for Exploiting Naturally Fractured Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Michael L.; Brown, Raymon L.; Civan, Faruk; Hughes, Richard G.

    2001-08-15

    Research continues on characterizing and modeling the behavior of naturally fractured reservoir systems. Work has progressed on developing techniques for estimating fracture properties from seismic and well log data, developing naturally fractured wellbore models, and developing a model to characterize the transfer of fluid from the matrix to the fracture system for use in the naturally fractured reservoir simulator.

  19. Modelling techniques for predicting the long term consequences of radiation on natural aquatic populations

    International Nuclear Information System (INIS)

    Wallis, I.G.

    1978-01-01

    The purpose of this working paper is to describe modelling techniques for predicting the long term consequences of radiation on natural aquatic populations. Ideally, it would be possible to use aquatic population models: (1) to predict changes in the health and well-being of all aquatic populations as a result of changing the composition, amount and location of radionuclide discharges; (2) to compare the effects of steady, fluctuating and accidental releases of radionuclides; and (3) to evaluate the combined impact of the discharge of radionuclides and other wastes, and natural environmental stresses, on aquatic populations. At the outset it should be stated that there is no existing model which can achieve this ideal performance. However, modelling skills and techniques are available to develop useful aquatic population models. This paper discusses the considerations involved in developing these models and briefly describes the various types of population models which have been developed to date

  20. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    Full Text Available In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), Support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi linear data, which cover wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it

  1. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Science.gov (United States)

    Elshorbagy, A.; Corzo, G.; Srinivasulu, S.; Solomatine, D. P.

    2010-10-01

    In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), Support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi linear data, which cover wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it should
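
    The experimental design — multiple randomized realizations per dataset, with each technique scored on every realization — can be sketched compactly. The toy below substitutes a synthetic nonlinear dataset and only two of the seven techniques (MLR and K-nn, via scikit-learn); it mirrors the protocol, not the paper's data or results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
# Synthetic nonlinear "rainfall-runoff" style dataset (hypothetical stand-in)
X = rng.uniform(0, 1, (1000, 3))
y = np.tanh(3 * X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(1000)

models = {"MLR": LinearRegression(), "K-nn": KNeighborsRegressor(n_neighbors=5)}
scores = {name: [] for name in models}

# Twelve realizations: random train/test splits, sampling without replacement
for _ in range(12):
    idx = rng.permutation(len(y))
    tr, te = idx[:700], idx[700:]
    for name, m in models.items():
        m.fit(X[tr], y[tr])
        scores[name].append(mean_squared_error(y[te], m.predict(X[te])) ** 0.5)

for name, s in scores.items():
    print(f"{name}: mean RMSE {np.mean(s):.3f} +/- {np.std(s):.3f} over 12 realizations")
```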

  2. Determining Plutonium Mass in Spent Fuel with Nondestructive Assay Techniques -- Preliminary Modeling Results Emphasizing Integration among Techniques

    International Nuclear Information System (INIS)

    Tobin, S.J.; Fensin, M.L.; Ludewigt, B.A.; Menlove, H.O.; Quiter, B.J.; Sandoval, N.P.; Swinhoe, M.T.; Thompson, S.J.

    2009-01-01

    There are a variety of motivations for quantifying Pu in spent (used) fuel assemblies by means of nondestructive assay (NDA), including the following: strengthening the capability of the International Atomic Energy Agency to safeguard nuclear facilities, quantifying shipper/receiver differences, determining the input accountability value at reprocessing facilities, and providing quantitative input to burnup credit determination for repositories. For the purpose of determining the Pu mass in spent fuel assemblies, twelve NDA techniques were identified that provide information about the composition of an assembly. A key point motivating the present research path is the realization that none of these techniques, in isolation, is capable of both (1) quantifying the elemental Pu mass of an assembly and (2) detecting the diversion of a significant number of pins. As such, the focus of this work is determining how to best integrate 2 or 3 techniques into a system that can quantify elemental Pu and to assess how well this system can detect material diversion. Furthermore, it is important economically to down-select among the various techniques before advancing to the experimental phase. In order to achieve this dual goal of integration and down-selection, a Monte Carlo library of PWR assemblies was created and is described in another paper at Global 2009 (Fensin et al.). The research presented here emphasizes integration among techniques. An overview of a five year research plan starting in 2009 is given. Preliminary modeling results for the Monte Carlo assembly library are presented for 3 NDA techniques: Delayed Neutrons, Differential Die-Away, and Nuclear Resonance Fluorescence. As part of the focus on integration, the concept of 'Pu isotopic correlation' is discussed, along with the role of cooling time determination.

  3. Electricity market price spike analysis by a hybrid data model and feature selection technique

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    In a competitive electricity market, energy price forecasting is an important activity for both suppliers and consumers. For this reason, many techniques have been proposed to predict electricity market prices in recent years. However, the electricity price is a complex volatile signal exhibiting many spikes. Most electricity price forecast techniques focus on normal price prediction, while price spike forecasting is a different and more complex prediction process. Price spike forecasting has two main aspects: prediction of price spike occurrence and of its value. In this paper, a novel technique for price spike occurrence prediction is presented, composed of a new hybrid data model, a novel feature selection technique and an efficient forecast engine. The hybrid data model includes both wavelet and time domain variables as well as calendar indicators, comprising a large candidate input set. The set is refined by the proposed feature selection technique, which evaluates both the relevancy and the redundancy of the candidate inputs. The forecast engine is a probabilistic neural network, which is fed by the candidate inputs selected by the feature selection technique and predicts price spike occurrence. The efficiency of the whole proposed method for price spike occurrence forecasting is evaluated by means of real data from the Queensland and PJM electricity markets. (author)
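
    The relevancy/redundancy idea behind the feature selection step can be illustrated with a greedy, mutual-information-based (mRMR-style) filter. The sketch below uses scikit-learn estimators on synthetic data with a hypothetical spike label; the paper's actual criterion and forecast engine are not reproduced here.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

rng = np.random.default_rng(3)
# Hypothetical candidate inputs: two informative, one redundant copy, three noise
n = 500
x0 = rng.standard_normal(n)
x1 = rng.standard_normal(n)
X = np.column_stack([x0, x1, x0 + 0.05 * rng.standard_normal(n),
                     rng.standard_normal((n, 3))])
y = ((x0 + 0.8 * x1) > 0.8).astype(int)   # "price spike occurrence" label

relevance = mutual_info_classif(X, y, random_state=0)

# Greedy selection: maximize relevancy minus mean redundancy with chosen inputs
selected, remaining = [], list(range(X.shape[1]))
for _ in range(3):
    best, best_score = None, -np.inf
    for j in remaining:
        red = np.mean([mutual_info_regression(X[:, [j]], X[:, s])[0]
                       for s in selected]) if selected else 0.0
        score = relevance[j] - red
        if score > best_score:
            best, best_score = j, score
    selected.append(best)
    remaining.remove(best)
print("selected candidate inputs:", selected)
```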

  4. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    ""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode

  5. Using ecosystem modelling techniques in exposure assessments of radionuclides - an overview

    International Nuclear Information System (INIS)

    Kumblad, L.

    2005-01-01

    The risk to humans from potential releases from nuclear facilities is evaluated in safety assessments. Essential components of these assessments are exposure models, which estimate the transport of radionuclides in the environment, their uptake in biota, and their transfer to humans. Recently, there has been growing concern for the radiological protection of the whole environment, not only humans, and a first attempt has been to employ model approaches based on stylized environments and transfer functions to biota based exclusively on bioconcentration factors (BCF). These are generally of a non-mechanistic nature and involve no knowledge of the actual processes involved, which is a severe limitation when assessing real ecosystems. In this paper, the possibility of using an ecological modelling approach as a complement or an alternative to the use of BCF-based models is discussed. The paper gives an overview of ecological and ecosystem modelling and examples of studies where ecosystem models have been used in association with ecological risk assessment studies for pollutants other than radionuclides. It also discusses the potential to use this technique in exposure assessments of radionuclides, with a few examples from the safety assessment work performed by the Swedish Nuclear Fuel and Waste Management Company (SKB). Finally, there is a comparison of the characteristics of ecosystem models and traditional exposure models used to estimate the radionuclide exposure of biota. The evaluation of ecosystem models already applied in safety assessments has shown that the ecosystem approach can be used to assess the exposure of biota, and that it can handle many of the identified modelling problems related to BCF models. The findings in this paper suggest that both national and international assessment frameworks for protection of the environment from ionising radiation would benefit from striving to adopt methodologies based on ecologically sound principles and

  6. Optimization models and techniques for implementation and pricing of electricity markets

    International Nuclear Information System (INIS)

    Madrigal Martinez, M.

    2001-01-01

    The operation and planning of vertically integrated electric power systems can be optimized using models that simulate solutions to problems. As the electric power industry is going through a period of restructuring, there is a need for new optimization tools. This thesis describes the importance of optimization tools and presents techniques for implementing them. It also presents methods for pricing primary electricity markets. Three modeling groups are studied. The first considers a simplified continuous and discrete model for power pool auctions. The second considers the unit commitment problem, and the third makes use of a new type of linear network-constrained clearing system model for daily markets for power and spinning reserve. The newly proposed model considers bids for supply and demand and bilateral contracts. It is a direct current model for the transmission network

  7. Analytical Modelling of the Effects of Different Gas Turbine Cooling Techniques on Engine Performance

    Science.gov (United States)

    Uysal, Selcuk Can

    In this research, MATLAB Simulink™ was used to develop a cooled engine model for industrial gas turbines and aero-engines. The model consists of an uncooled on-design, mean-line turbomachinery design and a cooled off-design analysis in order to evaluate the engine performance parameters by using operating conditions, polytropic efficiencies, material information and cooling system details. The cooling analysis algorithm involves a 2nd law analysis to calculate losses from the cooling technique applied. The model is used in a sensitivity analysis that evaluates the impacts of variations in metal Biot number, thermal barrier coating Biot number, film cooling effectiveness, internal cooling effectiveness and maximum allowable blade temperature on the main engine performance parameters of aero and industrial gas turbine engines. The model is subsequently used to analyze the relative performance impact of employing Anti-Vortex Film Cooling holes (AVH) by means of data obtained for these holes by Detached Eddy Simulation-CFD techniques that are valid for engine-like turbulence intensity conditions. Cooled blade configurations with AVH and other different external cooling techniques were used in a performance comparison study. (Abstract shortened by ProQuest.)

  8. Exact and Direct Modeling Technique for Rotor-Bearing Systems with Arbitrary Selected Degrees-of-Freedom

    Directory of Open Access Journals (Sweden)

    Shilin Chen

    1994-01-01

    Full Text Available An exact and direct modeling technique is proposed for modeling of rotor-bearing systems with arbitrary selected degrees-of-freedom. This technique is based on the combination of the transfer and dynamic stiffness matrices. The technique differs from the usual combination methods in that the global dynamic stiffness matrix for the system or the subsystem is obtained directly by rearranging the corresponding global transfer matrix. Therefore, the dimension of the global dynamic stiffness matrix is independent of the number of the elements or the substructures. In order to show the simplicity and efficiency of the method, two numerical examples are given.
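
    The core rearrangement — obtaining a dynamic stiffness relation directly from a transfer matrix — can be shown for a single 2x2 element. The sketch below assumes the transfer-matrix state ordering [displacement, force] and uses a massless spring as the test element; sign conventions vary across the literature, so this is illustrative only, not the paper's general multi-degree-of-freedom formulation.

```python
import numpy as np

def transfer_to_dynamic_stiffness(T):
    """Rearrange a 2x2 transfer matrix [x2, f2]^T = T [x1, f1]^T
    into a dynamic stiffness form [f1, f2]^T = D [x1, x2]^T."""
    t11, t12 = T[0]
    t21, t22 = T[1]
    return np.array([[-t11 / t12, 1.0 / t12],
                     [t21 - t22 * t11 / t12, t22 / t12]])

# Transfer matrix of a massless spring of stiffness k: x2 = x1 + f1/k, f2 = f1
k = 2.0
T = np.array([[1.0, 1.0 / k], [0.0, 1.0]])
D = transfer_to_dynamic_stiffness(T)

# Verify: pick end displacements, recover forces, and check the transfer relation
x1, x2 = 0.3, 1.1
f1, f2 = D @ np.array([x1, x2])
assert np.allclose(T @ np.array([x1, f1]), [x2, f2])
print("dynamic stiffness matrix:\n", D)
```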

  9. The Effect of Group Investigation Learning Model with Brainstorming Technique on Students' Learning Outcomes

    Directory of Open Access Journals (Sweden)

    Astiti, Kade Ayu

    2018-01-01

    Full Text Available This study aims to determine the effect of the group investigation (GI) learning model with the brainstorming technique on students' physics learning outcomes (PLO) compared to the jigsaw learning model with the brainstorming technique. The learning outcomes in this research are the results of learning in the cognitive domain. The method used in this research is an experiment with a Randomised Posttest-Only Control Group Design. The population in this research is all students of class XI IPA of SMA Negeri 9 Kupang in the 2015/2016 academic year. The selected sample comprises 40 students of class XI IPA 1 as the experimental class and 38 students of class XI IPA 2 as the control class, chosen using a simple random sampling technique. The instrument used is a 13-item essay test. The first hypothesis was tested by using a two-tailed t-test. From this, it is found that H0 is rejected, which means there are differences in students' physics learning outcomes. The second hypothesis was tested using a one-tailed t-test. It is found that H0 is rejected, which means the students' PLO in the experimental class was higher than in the control class. Based on the results of this study, the researchers recommend the use of the GI learning model with the brainstorming technique to improve PLO, especially in the cognitive domain.
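
    The two hypothesis tests described can be reproduced with standard tools. The sketch below uses hypothetical score distributions with the stated sample sizes (40 and 38) and SciPy's independent two-sample t-test; the one-tailed p-value is derived from the two-tailed one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical physics learning-outcome scores (0-100) for the two classes
experiment = rng.normal(78, 8, 40)   # GI model with brainstorming, n = 40
control = rng.normal(72, 8, 38)      # jigsaw model with brainstorming, n = 38

# Two-tailed test: are the group means different?
t, p_two = stats.ttest_ind(experiment, control)
print(f"two-tailed: t = {t:.2f}, p = {p_two:.4f}")

# One-tailed test: is the experimental mean higher?
p_one = p_two / 2 if t > 0 else 1 - p_two / 2
print(f"one-tailed (experiment > control): p = {p_one:.4f}")
```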

  10. Application of data assimilation technique for flow field simulation for Kaiga site using TAPM model

    International Nuclear Information System (INIS)

    Shrivastava, R.; Oza, R.B.; Puranik, V.D.; Hegde, M.N.; Kushwaha, H.S.

    2008-01-01

    Data assimilation techniques are becoming popular for obtaining realistic flow field simulations for a site under consideration. The present paper describes a data assimilation technique for flow field simulation for the Kaiga site using The Air Pollution Model (TAPM) developed by CSIRO, Australia. In this, the TAPM model was run for the Kaiga site for a period of one month (Nov. 2004) using the analysed meteorological data supplied with the model for the Central Asian (CAS) region, and the model solutions were nudged with the observed wind speed and wind direction data available for the site. The model was run with 4 nested grids, with grid spacings of 30 km, 10 km, 3 km and 1 km respectively. The model results generated with and without nudging are statistically compared with the observations. (author)
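
    Nudging a model solution toward observations is typically done by adding a Newtonian relaxation term to the model tendency. The toy below shows the idea for a single wind value with an assumed relaxation time scale; it does not reproduce TAPM's internal implementation.

```python
import numpy as np

# Toy single-grid-point "wind" state, relaxed toward an observation
dt, tau = 60.0, 3600.0            # time step and nudging relaxation time scale (s)
steps = 240
model_wind = 5.0                  # initial model wind speed (m/s)
obs_wind = 8.0                    # observed wind speed at the site (m/s)

for _ in range(steps):
    tendency = 0.0                # model physics tendency (omitted in this toy)
    # Newtonian relaxation: add a term proportional to (observation - model)
    model_wind += dt * (tendency + (obs_wind - model_wind) / tau)

print(f"wind after {steps * dt / 3600:.0f} h of nudging: {model_wind:.2f} m/s")
```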

  11. Characterising and modelling regolith stratigraphy using multiple geophysical techniques

    Science.gov (United States)

    Thomas, M.; Cremasco, D.; Fotheringham, T.; Hatch, M. A.; Triantifillis, J.; Wilford, J.

    2013-12-01

    Regolith is the weathered, typically mineral-rich layer from fresh bedrock to the land surface. It encompasses soil (A, E and B horizons) that has undergone pedogenesis. Below is the weathered C horizon that retains at least some of the original rocky fabric and structure. At the base of this is the lower regolith boundary of continuous hard bedrock (the R horizon). Regolith may be absent, e.g. at rocky outcrops, or may be many tens of metres deep. Comparatively little is known about regolith, and critical questions remain regarding its composition and characteristics - especially at depth, where the challenge of collecting reliable data increases. In Australia research is underway to characterise and map regolith using consistent methods at scales ranging from local (e.g. hillslope) to continental scales. These efforts are driven by many research needs, including Critical Zone modelling and simulation. Pilot research in South Australia using digitally-based environmental correlation techniques modelled the depth to bedrock to 9 m for an upland area of 128 000 ha. One finding was the inability to reliably model local scale depth variations over horizontal distances of 2 - 3 m and vertical distances of 1 - 2 m. The need to better characterise variations in regolith to strengthen models at these fine scales was discussed. Addressing this need, we describe high intensity, ground-based multi-sensor geophysical profiling of three hillslope transects in different regolith-landscape settings to characterise fine resolution (i.e. a number of frequencies; multiple frequency, multiple coil electromagnetic induction; and high resolution resistivity. These were accompanied by georeferenced, closely spaced deep cores to 9 m - or to core refusal. The intact cores were sub-sampled at standard depths and analysed for regolith properties to compile core datasets consisting of: water content; texture; electrical conductivity; and weathered state. After preprocessing (filtering, geo

  12. A New Profile Learning Model for Recommendation System based on Machine Learning Technique

    Directory of Open Access Journals (Sweden)

    Shereen H. Ali

    2016-03-01

    Full Text Available Recommender systems (RSs) have been used to successfully address the information overload problem by providing personalized and targeted recommendations to end users. RSs are software tools and techniques providing suggestions for items likely to be of use to a user; hence, they typically apply techniques and methodologies from data mining. The main contribution of this paper is to introduce a new user profile learning model to improve the recommendation accuracy of vertical recommendation systems. The proposed profile learning model employs the vertical classifier that has been used in the multi-classification module of the Intelligent Adaptive Vertical Recommendation (IAVR) system to discover the user’s area of interest, and then builds the user’s profile accordingly. Experimental results have proven the effectiveness of the proposed profile learning model, which will accordingly improve the recommendation accuracy.

  13. Model assessment using a multi-metric ranking technique

    Science.gov (United States)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

    Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and identifying adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's Tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as root mean square difference and rank correlation, were also explored, but removed when the information was discovered to be generally duplicative of other metrics. While equal weights are applied, the weights could be altered depending on preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts instead of distance, along-track, and cross-track errors is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context, and will be briefly reported.
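
    The weighted tally itself is straightforward to implement: rank each model per metric, then consolidate the ranks with (here equal, but adjustable) weights. The sketch below uses hypothetical scores for three models on four of the listed metrics; it illustrates the consolidation step, not the authors' full metric suite.

```python
import numpy as np

# Hypothetical scores for three models on four metrics (rows: models)
metrics = ["abs_error", "bias", "pearson_r", "reliability_index"]
scores = np.array([[1.2, 0.30, 0.85, 0.90],
                   [1.0, 0.45, 0.88, 0.85],
                   [1.4, 0.10, 0.80, 0.95]])
# For the first two metrics lower is better; for the last two higher is better
lower_is_better = np.array([True, True, False, False])
weights = np.ones(len(metrics))          # equal weights; alterable per preference

# Rank models on each metric (rank 1 = best), then consolidate by weighted tally
ranks = np.empty_like(scores)
for j in range(scores.shape[1]):
    order = scores[:, j].argsort() if lower_is_better[j] else (-scores[:, j]).argsort()
    ranks[order, j] = np.arange(1, scores.shape[0] + 1)

tally = ranks @ weights                   # lower tally = better overall
for i in np.argsort(tally):
    print(f"model {i}: weighted rank tally = {tally[i]:.1f}")
```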

  14. Stakeholder approach, Stakeholders mental model: A visualization test with cognitive mapping technique

    Directory of Open Access Journals (Sweden)

    Garoui Nassreddine

    2012-04-01

    Full Text Available The idea of this paper is to determine the mental models of actors in the firm with respect to the stakeholder approach to corporate governance. Cognitive maps are used to visualize these mental models and to show the actors' ways of thinking about and conceptualizing the stakeholder approach. The paper takes a corporate governance perspective and discusses the stakeholder model using a cognitive mapping technique.

  15. Nonlinear modelling of polymer electrolyte membrane fuel cell stack using nonlinear cancellation technique

    Energy Technology Data Exchange (ETDEWEB)

    Barus, R. P. P., E-mail: rismawan.ppb@gmail.com [Engineering Physics, Faculty of Industrial Technology, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung and Centre for Material and Technical Product, Jalan Sangkuriang No. 14 Bandung (Indonesia); Tjokronegoro, H. A.; Leksono, E. [Engineering Physics, Faculty of Industrial Technology, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung (Indonesia); Ismunandar [Chemistry Study, Faculty of Mathematics and Science, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung (Indonesia)

    2014-09-25

    Fuel cells are promising new energy conversion devices that are friendly to the environment. A set of control systems is required in order to operate a fuel cell based power plant system optimally. For the purpose of control system design, an accurate fuel cell stack model describing the dynamics of the real system is needed. Currently, linear models are widely used for fuel cell stack control purposes, but they are limited to a narrow operation range. Nonlinear models, on the other hand, lead to nonlinear control implementations that are more complex and computationally demanding. In this research, the nonlinear cancellation technique is used to transform a nonlinear model into a linear form while maintaining the nonlinear characteristics. The transformation is done by replacing the input of the original model with a certain virtual input that has a nonlinear relationship with the original input. The equality of the two models is then tested by running a series of simulations. Input variations of H2, O2 and H2O as well as the disturbance input I (current load) are studied by simulation. The comparison error between the proposed model and the original nonlinear model is less than 1%. Thus we can conclude that the nonlinear cancellation technique can be used to represent a fuel cell nonlinear model in a simple linear form while maintaining the nonlinear characteristics and therefore retaining the wide operation range.

  16. Nonlinear modelling of polymer electrolyte membrane fuel cell stack using nonlinear cancellation technique

    International Nuclear Information System (INIS)

    Barus, R. P. P.; Tjokronegoro, H. A.; Leksono, E.; Ismunandar

    2014-01-01

    Fuel cells are promising new energy conversion devices that are friendly to the environment. A set of control systems is required in order to operate a fuel cell based power plant system optimally. For the purpose of control system design, an accurate fuel cell stack model describing the dynamics of the real system is needed. Currently, linear models are widely used for fuel cell stack control purposes, but they are limited to a narrow operation range. Nonlinear models, on the other hand, lead to nonlinear control implementations that are more complex and computationally demanding. In this research, the nonlinear cancellation technique is used to transform a nonlinear model into a linear form while maintaining the nonlinear characteristics. The transformation is done by replacing the input of the original model with a certain virtual input that has a nonlinear relationship with the original input. The equality of the two models is then tested by running a series of simulations. Input variations of H2, O2 and H2O as well as the disturbance input I (current load) are studied by simulation. The comparison error between the proposed model and the original nonlinear model is less than 1%. Thus we can conclude that the nonlinear cancellation technique can be used to represent a fuel cell nonlinear model in a simple linear form while maintaining the nonlinear characteristics and therefore retaining the wide operation range.
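
    The nonlinear cancellation idea — replacing the physical input with a virtual input that absorbs the input nonlinearity — can be demonstrated on a toy single-state system rather than a fuel cell stack. In the sketch below the assumed nonlinearity is f(u) = u^3; driving the linear model with v and the nonlinear model with u = f^{-1}(v) yields identical trajectories up to floating-point error.

```python
import numpy as np

# Original nonlinear model (illustrative):  dx/dt = a*x + f(u),  f(u) = u**3
a = -0.5
f = lambda u: u ** 3
f_inv = lambda v: np.sign(v) * np.abs(v) ** (1.0 / 3.0)

dt, steps = 0.01, 2000
x_nl, x_lin, err = 0.0, 0.0, 0.0
for k in range(steps):
    v = np.sin(0.01 * k)                 # virtual input of the linearized model
    u = f_inv(v)                         # physical input realizing v = f(u)
    x_nl += dt * (a * x_nl + f(u))       # original nonlinear model driven by u
    x_lin += dt * (a * x_lin + v)        # equivalent linear model driven by v
    err = max(err, abs(x_nl - x_lin))

print(f"max deviation between the two models: {err:.2e}")
```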

  17. A Dynamic Operation Permission Technique Based on an MFM Model and Numerical Simulation

    International Nuclear Information System (INIS)

    Akio, Gofuku; Masahiro, Yonemura

    2011-01-01

    It is important to support operator activities in an abnormal plant situation where many counter actions must be taken in a relatively short time. The authors proposed a technique called dynamic operation permission to decrease human errors, without suppressing the creative ideas of operators coping with an abnormal plant situation, by checking whether each counter action is consistent with the emergency operation procedure. If the counter action is inconsistent, the dynamic operation permission system warns the operators. It also explains how and why the counter action is inconsistent and what influence it will have on future plant behavior, using a qualitative influence inference technique based on a model in MFM (Multilevel Flow Modeling). However, the previous dynamic operation permission is not able to explain quantitative effects on future plant behavior. Moreover, many possible influence paths are derived, because qualitative reasoning does not give a unique solution when positive and negative influences propagate to the same node. This study extends dynamic operation permission by combining qualitative reasoning with a numerical simulation technique. The qualitative reasoning based on an MFM model of the plant derives all possible influence propagation paths. A numerical simulation then predicts the future plant behavior in the case of taking a counter action. Influence propagations that do not coincide with the simulation results are excluded from the possible influence paths. The extended technique is implemented in a dynamic operation permission system for an oil refinery plant. An MFM model and a static numerical simulator have been developed. The results of dynamic operation permission for some abnormal plant situations show the improvement in the accuracy of dynamic operation permission and in the quality of the explanation of the effects of the counter action taken

  18. A new cerebral vasospasm model established with endovascular puncture technique

    International Nuclear Information System (INIS)

    Tu Jianfei; Liu Yizhi; Ji Jiansong; Zhao Zhongwei

    2011-01-01

    Objective: To investigate the method of establishing cerebral vasospasm (CVS) models in rabbits by using the endovascular puncture technique. Methods: The endovascular puncture procedure was performed in 78 New Zealand white rabbits to produce subarachnoid hemorrhage (SAH). The surviving rabbits were randomly divided into seven groups (3 h, 12 h, 1 d, 2 d, 3 d, 7 d and 14 d), with five rabbits in each group for both the study group (SAH group) and the control group. Cerebral CT scanning was carried out in all rabbits both before and after the operation. The inner diameter and the wall thickness of both the posterior communicating artery (PcoA) and the basilar artery (BA) were determined after the animals were sacrificed, and the results were analyzed. Results: Of the 78 experimental rabbits, the CVS model was successfully established in 45, including 35 in the SAH group and 10 in the control group. The technical success rate was 57.7%. Twelve hours after the procedure, the inner diameters of the PcoA and BA in the SAH group were decreased by 45.6% and 52.3%, respectively, compared with those in the control group. The vascular narrowing showed biphasic changes: the inner diameter markedly decreased again on the 7th day, when the decrease reached its peak of 31.2% and 48.6%, respectively. Conclusion: The endovascular puncture technique is an effective method to establish CVS models in rabbits. The death rate of experimental animals can be decreased if new interventional material is used and the manipulation is carefully performed. (authors)

  19. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbo-machinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.

  20. Application of object modeling technique to medical image retrieval system

    International Nuclear Information System (INIS)

    Teshima, Fumiaki; Abe, Takeshi

    1993-01-01

    This report describes the results of discussions on the object-oriented analysis methodology, which is one of the object-oriented paradigms. In particular, we considered application of the object modeling technique (OMT) to the analysis of a medical image retrieval system. The object-oriented methodology places emphasis on the construction of an abstract model from real-world entities. The effectiveness of and future improvements to OMT are discussed from the standpoint of the system's expandability. These discussions have elucidated that the methodology is sufficiently well-organized and practical to be applied to commercial products, provided that it is applied to the appropriate problem domain. (author)

  1. A Search Technique for Weak and Long-Duration Gamma-Ray Bursts from Background Model Residuals

    Science.gov (United States)

    Skelton, R. T.; Mahoney, W. A.

    1993-01-01

    We report on a planned search technique for Gamma-Ray Bursts too weak to trigger the on-board threshold. The technique is to search residuals from a physically based background model used for analysis of point sources by the Earth occultation method.
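
    Such a residual search amounts to sliding windows of several trial durations over the background-subtracted count rate and flagging significant excesses. The sketch below injects a weak, long burst into synthetic Gaussian residuals with an assumed known noise level; the bin size, trial durations and noise figures are hypothetical, not taken from the instrument described.

```python
import numpy as np

rng = np.random.default_rng(8)
# Synthetic residual count-rate series after background-model subtraction
t = np.arange(5000)                       # time bins (hypothetical 1 s bins)
sigma = 10.0                              # assumed known residual noise (counts/bin)
resid = rng.normal(0, sigma, t.size)
resid[2000:2300] += 6.0                   # weak, long-duration burst injected

# Slide windows of varying duration and report the most significant excess
for width in (50, 100, 300):
    kernel = np.ones(width)
    summed = np.convolve(resid, kernel, mode="valid")
    z = summed / (sigma * np.sqrt(width))  # z-score of each window's excess
    k = z.argmax()
    print(f"window {width:4d} s: peak z = {z[k]:.1f} at t = {k} s")
```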

  2. Synchrotron-Based Microspectroscopic Analysis of Molecular and Biopolymer Structures Using Multivariate Techniques and Advanced Multi-Components Modeling

    International Nuclear Information System (INIS)

    Yu, P.

    2008-01-01

    More recently, the advanced synchrotron radiation-based bioanalytical technique (SRFTIRM) has been applied as a novel non-invasive analysis tool to study molecular, functional group and biopolymer chemistry, nutrient make-up and structural conformation in biomaterials. This novel synchrotron technique, taking advantage of bright synchrotron light (which is a million times brighter than sunlight), is capable of exploring biomaterials at the molecular and cellular levels. However, with the synchrotron SRFTIRM technique, a large number of molecular spectral data are usually collected. The objective of this article was to illustrate how to use two multivariate statistical techniques: (1) agglomerative hierarchical cluster analysis (AHCA) and (2) principal component analysis (PCA), and two advanced multi-component modeling methods: (1) Gaussian and (2) Lorentzian multi-component peak modeling, for molecular spectrum analysis of bio-tissues. The studies indicated that the two multivariate analyses (AHCA, PCA) are able to create molecular spectral correlations by including not just one intensity or frequency point of a molecular spectrum, but by utilizing the entire spectral information. Gaussian and Lorentzian modeling techniques are able to quantify the spectral component peaks of molecular structure, functional groups and biopolymers. By applying these multivariate techniques together with Gaussian and Lorentzian modeling, inherent molecular structures, functional group and biopolymer conformation between and among biological samples can be quantified, discriminated and classified with great efficiency.
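
    Multi-component Gaussian peak modeling of a spectrum reduces to nonlinear least squares on a sum of Gaussians. The sketch below fits three components to a synthetic overlapped band using SciPy; the band positions and shapes are hypothetical, not drawn from the article's tissue spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *p):
    # Sum of Gaussian components; p = (amplitude, center, width) per component
    y = np.zeros_like(x)
    for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-((x - c) ** 2) / (2 * w ** 2))
    return y

# Synthetic overlapped band (hypothetical wavenumbers and component shapes)
x = np.linspace(1580, 1720, 400)
rng = np.random.default_rng(5)
y = gaussians(x, 1.0, 1625, 10, 0.7, 1655, 12, 0.4, 1690, 8)
y += 0.02 * rng.standard_normal(x.size)

# Fit three components starting from rough initial guesses
p0 = [0.8, 1620, 8, 0.8, 1650, 8, 0.3, 1695, 8]
popt, _ = curve_fit(gaussians, x, y, p0=p0)
for i in range(3):
    a, c, w = popt[3 * i: 3 * i + 3]
    print(f"component {i + 1}: center {c:.1f} cm^-1, height {a:.2f}, sigma {w:.1f}")
```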

  3. Identifying and quantifying energy savings on fired plant using low cost modelling techniques

    International Nuclear Information System (INIS)

    Tucker, Robert; Ward, John

    2012-01-01

    Research highlights: → Furnace models based on the zone method for radiation calculation are described. → Validated steady-state and transient models have been developed. → We show how these simple models can identify the best options for saving energy. → High emissivity coatings are predicted to give performance enhancement on a fired heater. → Optimal heat recovery strategies on a steel reheating furnace are predicted. -- Abstract: Combustion in fired heaters, boilers and furnaces often accounts for the major energy consumption of industrial processes. Small improvements in efficiency can result in large reductions in energy consumption, CO2 emissions, and operating costs. This paper will describe some useful low cost modelling techniques based on the zone method to help identify energy saving opportunities on high temperature fuel-fired process plant. The zone method has, for many decades, been successfully applied to small batch furnaces through to large steel-reheating furnaces, glass tanks, boilers and fired heaters on petrochemical plant. Zone models can simulate both steady-state furnace operation and the more complex transient operation typical of a production environment. These models can be used to predict thermal efficiency and performance, and more importantly, to assist in identifying and predicting energy saving opportunities from such measures as: ·Improving air/fuel ratio and temperature controls. ·Improved insulation. ·Use of oxygen or oxygen enrichment. ·Air preheating via flue gas heat recovery. ·Modification to furnace geometry and hearth loading. There is also increasing interest in the application of refractory coatings for increasing surface radiation in fired plant. All of the techniques can yield savings ranging from a few percent upwards and can deliver rapid financial payback, but their evaluation often requires robust and reliable models in order to increase confidence in making financial investment decisions. This paper gives

  4. Generalized image contrast enhancement technique based on Heinemann contrast discrimination model

    Science.gov (United States)

    Liu, Hong; Nodine, Calvin F.

    1994-03-01

    This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. This is a modified algorithm which presents an improvement over the previous study by Mokrane in its mathematically proven existence of a unique solution and in its easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of gray scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization. In fact, the histogram equalization technique is a special case of the proposed mapping.

  5. Robotic and endoscopic transoral thyroidectomy: feasibility and description of the technique in the cadaveric model.

    Science.gov (United States)

    Kahramangil, Bora; Mohsin, Khuzema; Alzahrani, Hassan; Bu Ali, Daniah; Tausif, Syed; Kang, Sang-Wook; Kandil, Emad; Berber, Eren

    2017-12-01

    Numerous new approaches have been described over the years to improve the cosmetic outcomes of thyroid surgery. Transoral approach is a new technique that aims to achieve superior cosmetic outcomes by concealing the incision in the oral cavity. Transoral thyroidectomy through vestibular approach was performed in two institutions on cadaveric models. Procedure was performed endoscopically in one institution, while the robotic technique was utilized at the other. Transoral thyroidectomy was successfully performed at both institutions with robotic and endoscopic techniques. All vital structures were identified and preserved. Transoral thyroidectomy has been performed in animal and cadaveric models, as well as in some clinical studies. Our initial experience indicates the feasibility of this approach. More clinical studies are required to elucidate its full utility.

  6. New horizontal global solar radiation estimation models for Turkey based on robust coplot supported genetic programming technique

    International Nuclear Information System (INIS)

    Demirhan, Haydar; Kayhan Atilgan, Yasemin

    2015-01-01

    Highlights: • Precise horizontal global solar radiation estimation models are proposed for Turkey. • Genetic programming technique is used to construct the models. • Robust coplot analysis is applied to reduce the impact of outlier observations. • Better estimation and prediction properties are observed for the models. - Abstract: Renewable energy sources have been attracting more and more attention of researchers due to the diminishing and harmful nature of fossil energy sources. Because of the importance of solar energy as a renewable energy source, an accurate determination of significant covariates and their relationships with the amount of global solar radiation reaching the Earth is a critical research problem. There are numerous meteorological and terrestrial covariates that can be used in the analysis of horizontal global solar radiation. Some of these covariates are highly correlated with each other. It is possible to find a large variety of linear or non-linear models to explain the amount of horizontal global solar radiation. However, models that explain the amount of global solar radiation with the smallest set of covariates should be obtained. In this study, use of the robust coplot technique to reduce the number of covariates before going forward with advanced modelling techniques is considered. After reducing the dimensionality of model space, yearly and monthly mean daily horizontal global solar radiation estimation models for Turkey are built by using the genetic programming technique. It is observed that application of robust coplot analysis is helpful for building precise models that explain the amount of global solar radiation with the minimum number of covariates without suffering from outlier observations and the multicollinearity problem. Consequently, over a dataset of Turkey, precise yearly and monthly mean daily global solar radiation estimation models are introduced using the model spaces obtained by robust coplot technique and

  7. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    Science.gov (United States)

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961

  8. Data-driven remaining useful life prognosis techniques stochastic models, methods and applications

    CERN Document Server

    Si, Xiao-Sheng; Hu, Chang-Hua

    2017-01-01

    This book introduces data-driven remaining useful life prognosis techniques, and shows how to utilize the condition monitoring data to predict the remaining useful life of stochastic degrading systems and to schedule maintenance and logistics plans. It is also the first book that describes the basic data-driven remaining useful life prognosis theory systematically and in detail. The emphasis of the book is on the stochastic models, methods and applications employed in remaining useful life prognosis. It includes a wealth of degradation monitoring experiment data, practical prognosis methods for remaining useful life in various cases, and a series of applications incorporated into prognostic information in decision-making, such as maintenance-related decisions and ordering spare parts. It also highlights the latest advances in data-driven remaining useful life prognosis techniques, especially in the contexts of adaptive prognosis for linear stochastic degrading systems, nonlinear degradation modeling based pro...

  9. Uranium exploration techniques

    International Nuclear Information System (INIS)

    Nichols, C.E.

    1984-01-01

    The subject is discussed under the headings: introduction (genetic description of some uranium deposits; typical concentrations of uranium in the natural environment); sedimentary host rocks (sandstones; tabular deposits; roll-front deposits; black shales); metamorphic host rocks (exploration techniques); geologic techniques (alteration features in sandstones; favourable features in metamorphic rocks); geophysical techniques (radiometric surveys; surface vehicle methods; airborne methods; input surveys); geochemical techniques (hydrogeochemistry; petrogeochemistry; stream sediment geochemistry; pedogeochemistry; emanometry; biogeochemistry); geochemical model for roll-front deposits; geologic model for vein-like deposits. (U.K.)

  10. On the Reliability of Nonlinear Modeling using Enhanced Genetic Programming Techniques

    Science.gov (United States)

    Winkler, S. M.; Affenzeller, M.; Wagner, S.

    The use of genetic programming (GP) in nonlinear system identification enables the automated search for mathematical models that are evolved by an evolutionary process using the principles of selection, crossover and mutation. Due to the stochastic element that is intrinsic to any evolutionary process, GP cannot guarantee the generation of similar or even equal models in each GP process execution; still, if there is a physical model underlying the data being analyzed, then GP is expected to find these structures and produce somewhat similar results. In this paper we define a function for measuring the syntactic similarity of mathematical models represented as structure trees; using this similarity function we compare the results produced by GP techniques for a data set representing measurement data of a BMW Diesel engine.
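
    A similarity function over structure trees can be defined recursively: score the root symbols, then average the similarities of paired children. The sketch below is one such definition in the spirit of the paper, not the authors' exact function; models are represented as nested tuples.

```python
# Minimal sketch of a syntactic similarity measure for expression trees.
def similarity(t1, t2):
    """Trees are tuples: (symbol, child, child, ...); leaves are plain symbols."""
    s1, c1 = (t1[0], t1[1:]) if isinstance(t1, tuple) else (t1, ())
    s2, c2 = (t2[0], t2[1:]) if isinstance(t2, tuple) else (t2, ())
    node = 1.0 if s1 == s2 else 0.0
    if not c1 and not c2:
        return node
    # Compare children pairwise; unmatched children contribute zero
    child = sum(similarity(a, b) for a, b in zip(c1, c2)) / max(len(c1), len(c2))
    return 0.5 * node + 0.5 * child

# Two evolved models: x*(y+1) versus x*(y+2) share most of their structure
m1 = ("*", "x", ("+", "y", "1"))
m2 = ("*", "x", ("+", "y", "2"))
print(f"structural similarity: {similarity(m1, m2):.3f}")
```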

  11. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies sell the right product to the right customer, at the right time, and for the right price. The challenge for any company is therefore to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a proper dynamical system analogy based on active suspension, and a stability analysis is provided via the Lyapunov direct method.
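
    An MPC-based DM loop solves a finite-horizon pricing problem at each period and applies only the first move. The sketch below assumes a hypothetical linear demand curve d(p) = d0 - b*p and a fixed capacity, with SciPy handling the horizon optimization; it illustrates the receding-horizon mechanism, not the paper's suspension-analogy model or its stability analysis.

```python
import numpy as np
from scipy.optimize import minimize

# Toy demand model (hypothetical): demand per period d(p) = d0 - b*p
d0, b = 100.0, 8.0
capacity, periods, horizon = 600.0, 10, 5

def neg_revenue(prices, stock, steps):
    # Negative revenue over the horizon, plus a penalty for unsold capacity
    rev = 0.0
    for p in prices[:steps]:
        d = min(max(d0 - b * p, 0.0), stock)  # demand capped by remaining stock
        rev += p * d
        stock -= d
    return -rev + 10.0 * stock

stock = capacity
for t in range(periods):
    steps = min(horizon, periods - t)
    res = minimize(neg_revenue, x0=np.full(horizon, 6.0),
                   args=(stock, steps), bounds=[(0.0, 12.5)] * horizon)
    price = res.x[0]                      # apply only the first move (receding horizon)
    sold = min(max(d0 - b * price, 0.0), stock)
    stock -= sold
    print(f"period {t}: price {price:.2f}, sold {sold:.1f}, remaining {stock:.1f}")
```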

  12. VLF surface-impedance modelling techniques for coal exploration

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, G.; Thiel, D.; O' Keefe, S. [Central Queensland University, Rockhampton, Qld. (Australia). Faculty of Engineering and Physical Systems

    2000-10-01

    New and efficient computational techniques are required for geophysical investigations of coal. This will allow automated inverse analysis procedures to be used for interpretation of field data. In this paper, a number of methods of modelling electromagnetic surface impedance measurements are reviewed, particularly as applied to typical coal seam geology found in the Bowen Basin. At present, the Impedance method and the finite-difference time-domain (FDTD) method appear to offer viable solutions although both have problems. The Impedance method is currently slightly inaccurate, and the FDTD method has large computational demands. In this paper both methods are described and results are presented for a number of geological targets. 17 refs., 14 figs.

  13. Prediction of Monthly Summer Monsoon Rainfall Using Global Climate Models Through Artificial Neural Network Technique

    Science.gov (United States)

    Nair, Archana; Singh, Gurjeet; Mohanty, U. C.

    2018-01-01

    The monthly prediction of summer monsoon rainfall is very challenging because of its complex and chaotic nature. In this study, a non-linear technique known as the Artificial Neural Network (ANN) has been employed on the outputs of Global Climate Models (GCMs) to address the vagaries inherent in monthly rainfall prediction. The GCMs considered in the study are from the International Research Institute (IRI) (2-tier CCM3v6) and the National Centers for Environmental Prediction (Coupled-CFSv2). The ANN technique is applied to different ensemble members of the individual GCMs to obtain monthly scale predictions over India as a whole and over its spatial grid points. In the present study, a double cross-validation and simple randomization technique was used to avoid over-fitting during the training process of the ANN model. The performance of the ANN-predicted rainfall from GCMs is judged by analysing the absolute error, box plots, percentiles and the difference in linear error in probability space. Results suggest that there is significant improvement in the prediction skill of these GCMs after applying the ANN technique. The performance analysis reveals that the ANN model is able to capture the year-to-year variations in monsoon months with fairly good accuracy, in extreme years as well. The ANN model is also able to simulate the correct signs of rainfall anomalies over different spatial points of the Indian domain.

  14. New model reduction technique for a class of parabolic partial differential equations

    NARCIS (Netherlands)

    Vajta, Miklos

    1991-01-01

    A model reduction (or lumping) technique for a class of parabolic-type partial differential equations is given, and its application is discussed. The frequency response of the temperature distribution in any multilayer solid is developed and given by a matrix expression. The distributed transfer

  15. Evaluation of inverse modeling techniques for pinpointing water leakages at building constructions

    NARCIS (Netherlands)

    Schijndel, van A.W.M.

    2015-01-01

    The location and nature of the moisture leakages are sometimes difficult to detect. Moreover, the relation between observed inside surface moisture patterns and where the moisture enters the construction is often not clear. The objective of this paper is to investigate inverse modeling techniques as

  16. Control System Design for Cylindrical Tank Process Using Neural Model Predictive Control Technique

    Directory of Open Access Journals (Sweden)

    M. Sridevi

    2010-10-01

    Full Text Available The chemical manufacturing and process industry requires innovative technologies for process identification. This paper deals with model identification and control of a cylindrical tank process. Model identification of the process was done using the ARMAX technique. A neural model predictive controller was designed for the identified model. The performance of the controllers was evaluated using MATLAB software. The performance of the NMPC controller was compared with a Smith predictor controller and an IMC controller based on rise time, settling time, overshoot and ISE, and it was found that the NMPC controller is better suited for this process.
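
    ARMAX-family identification reduces, in its ARX special case, to linear least squares on lagged inputs and outputs. The sketch below identifies a hypothetical second-order plant from simulated data; the moving-average noise term of full ARMAX (and the NMPC design itself) is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
# Simulate a "true" plant: y[k] = 1.5 y[k-1] - 0.7 y[k-2] + 0.5 u[k-1] + e[k]
N = 500
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1] \
           + 0.05 * rng.standard_normal()

# Build the regressor matrix from lagged outputs/inputs; solve by least squares
Phi = np.column_stack([y[1:N - 1], y[0:N - 2], u[1:N - 1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:N], rcond=None)
print("estimated [a1, a2, b1]:", np.round(theta, 3))  # expect ~[1.5, -0.7, 0.5]
```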

  17. The Role of Flow Diagnostic Techniques in Fan and Open Rotor Noise Modeling

    Science.gov (United States)

    Envia, Edmane

    2016-01-01

    A principal source of turbomachinery noise is the interaction of the rotating and stationary blade rows with the perturbations in the airstream through the engine. As such, a lot of research has been devoted to the study of the turbomachinery noise generation mechanisms. This is particularly true of fans and open rotors, both of which are major contributors to the overall noise output of modern aircraft engines. Much of the research in fan and open rotor noise has been focused on developing theoretical models for predicting their noise characteristics. These models, which run the gamut from semi-empirical to fully computational ones, are, in one form or another, informed by the description of the unsteady flowfield in which the propulsors (i.e., the fans and open rotors) operate. Not surprisingly, the fidelity of the theoretical models depends, to a large extent, on capturing the nuances of the unsteady flowfield that have a direct role in the noise generation process. As such, flow diagnostic techniques have proven to be indispensable in identifying the shortcomings of theoretical models and in helping to improve them. This presentation will provide a few examples of the role of flow diagnostic techniques in assessing the fidelity and robustness of fan and open rotor noise prediction models.

  18. Constructing an Urban Population Model for Medical Insurance Scheme Using Microsimulation Techniques

    Directory of Open Access Journals (Sweden)

    Linping Xiong

    2012-01-01

    Full Text Available China launched a pilot project of medical insurance reform in 79 cities in 2007 to cover urban non-working residents. In this paper, an urban population model was created for China's medical insurance scheme using microsimulation techniques. The model gives policy makers a clear picture of the population distributions of the different groups of people who are potential entrants to the medical insurance scheme. Income trends of individuals and families were also obtained. These factors are essential for the challenging policy decisions involved in balancing the long-term financial sustainability of the medical insurance scheme.

  19. Predicting bottlenose dolphin distribution along Liguria coast (northwestern Mediterranean Sea) through different modeling techniques and indirect predictors.

    Science.gov (United States)

    Marini, C; Fossa, F; Paoli, C; Bellingeri, M; Gnone, G; Vassallo, P

    2015-03-01

    Habitat modeling is an important tool to investigate the quality of the habitat for a species within a certain area, to predict species distribution and to understand the ecological processes behind it. Many species have been investigated by means of habitat modeling techniques, mainly to inform effective management and protection policies, and cetaceans play an important role in this context. The bottlenose dolphin (Tursiops truncatus) has been investigated with habitat modeling techniques since 1997. The objectives of this work were to predict the distribution of the bottlenose dolphin in a coastal area through the use of static morphological features and to compare the prediction performances of three different modeling techniques: Generalized Linear Model (GLM), Generalized Additive Model (GAM) and Random Forest (RF). Four static variables were tested: depth, bottom slope, distance from the 100 m bathymetric contour and distance from coast. RF proved to be both the most accurate and the most precise modeling technique, with very high distribution probabilities predicted in presence cells (90.4% mean predicted probability) and with 66.7% of presence cells given a predicted probability between 90% and 100%. The bottlenose dolphin distribution obtained with RF allowed the identification of specific areas with particularly high presence probability along the coastal zone; the recognition of these core areas may be the starting point for developing effective management practices to improve T. truncatus protection. Copyright © 2014 Elsevier Ltd. All rights reserved.
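
    A minimal sketch of the RF variant using the four static predictors named above; the data, the toy presence rule and the forest settings are invented, not the Ligurian survey data.

```python
# Random Forest presence model on four static predictors; synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([
    rng.uniform(0, 500, n),      # depth (m)
    rng.uniform(0, 30, n),       # bottom slope (deg)
    rng.uniform(0, 20, n),       # distance to 100 m contour (km)
    rng.uniform(0, 15, n),       # distance to coast (km)
])
presence = (X[:, 0] < 150).astype(int)         # toy rule: shallow-water presence

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, presence)
print(rf.predict_proba(X[:5])[:, 1])           # per-cell presence probability
```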

  20. Advanced particle-in-cell simulation techniques for modeling the Lockheed Martin Compact Fusion Reactor

    Science.gov (United States)

    Welch, Dale; Font, Gabriel; Mitchell, Robert; Rose, David

    2017-10-01

    We report on particle-in-cell developments in the study of the Compact Fusion Reactor. Millisecond, two- and three-dimensional simulations (cubic-meter volume) of confinement and neutral-beam heating of the magnetic confinement device require accurate representation of the complex orbits, near-perfect energy conservation, and significant computational power. In order to determine the initial plasma fill and neutral-beam heating, these simulations include ionization, elastic and charge-exchange hydrogen reactions. To this end, we are pursuing fast electromagnetic kinetic modeling algorithms, including two implicit techniques and a hybrid quasi-neutral algorithm with kinetic ions. The kinetic modeling includes use of the Poisson-corrected direct implicit, magnetic implicit, as well as second-order cloud-in-cell techniques. The hybrid algorithm, which ignores electron inertial effects, is two orders of magnitude faster than the kinetic approach but not as accurate with respect to confinement. The advantages and disadvantages of these techniques will be presented. Funded by Lockheed Martin.

  1. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.; Hoteit, Ibrahim; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A.; Schumacher, M.; Pattiaratchi, C.

    2017-01-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques.

  2. A local isotropic/global orthotropic finite element technique for modeling the crush of wood in impact limiters

    International Nuclear Information System (INIS)

    Attaway, S.W.; Yoshimura, H.R.

    1989-01-01

    Wood is often used as the energy-absorbing material in impact limiters because it begins to crush at low strains, maintains a near-constant crush stress up to nearly 60% volume reduction, and then locks up. Hill (Hill and Joseph, 1974) has performed tests showing that wood is an excellent energy absorber. However, wood's orthotropic behavior at large crush strains is difficult to model. In the past, analysts have used isotropic foam-like material models for wood. A new finite element technique is presented in this paper that gives a better model of wood crush than the model currently in use. The orthotropic technique is based on locally isotropic, globally orthotropic (LIGO) assumptions (Attaway, 1988), in which alternating layers of hard and soft crushable material are used. Each layer is isotropic; however, by alternating hard and soft thin layers, the resulting global behavior is orthotropic. In the remainder of this paper, the new technique for modeling orthotropic wood crush is presented. The model is used to predict the crush behavior for different grain orientations of balsa wood. As an example problem, an impact limiter containing balsa wood as the crushable material is analyzed using both an isotropic model and the LIGO model.
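
    A back-of-the-envelope illustration of why alternating isotropic layers behave orthotropically as a stack: the Voigt (equal-strain) and Reuss (equal-stress) averages below give very different effective stiffnesses along and across the layers. The moduli and volume fraction are invented numbers, not properties from the paper.

```python
# Effective stiffness of alternating hard/soft isotropic layers (LIGO idea):
# Voigt bound along the layers, Reuss bound across them. Hypothetical values.
E_hard, E_soft = 4000.0, 40.0      # MPa, layer moduli (made up)
f_hard = 0.5                       # volume fraction of hard layers

E_along = f_hard * E_hard + (1 - f_hard) * E_soft            # Voigt average
E_across = 1.0 / (f_hard / E_hard + (1 - f_hard) / E_soft)   # Reuss average
print(f"along layers: {E_along:.0f} MPa, across layers: {E_across:.0f} MPa")
```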

  3. Forecasting performance of three automated modelling techniques during the economic crisis 2007-2009

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this work we consider forecasting macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feedforward autoregressive neural network models. What makes these models interesting in the present context is that they form a class of universal approximators and may be expected to work well during exceptional periods such as the economic crisis 2007–2009. Forecast accuracy is measured by the root mean square forecast error. Hypothesis testing is also used to compare the performance of the different techniques with each other.

  4. Mechanical Properties of Nanostructured Materials Determined Through Molecular Modeling Techniques

    Science.gov (United States)

    Clancy, Thomas C.; Gates, Thomas S.

    2005-01-01

    The potential for gains in material properties over conventional materials has motivated an effort to develop novel nanostructured materials for aerospace applications. These novel materials typically consist of a polymer matrix reinforced with particles on the nanometer length scale. In this study, molecular modeling is used to construct fully atomistic models of a carbon nanotube embedded in an epoxy polymer matrix. Functionalization of the nanotube, which consists of the introduction of direct chemical bonding between the polymer matrix and the nanotube and hence provides a load transfer mechanism, is systematically varied. The relative effectiveness of functionalization in a nanostructured material may depend on a variety of factors related to the details of the chemical bonding and the polymer structure at the nanotube-polymer interface. The objective of this modeling is to determine what influence the details of functionalization of the carbon nanotube with the polymer matrix have on the resulting mechanical properties. By considering a range of degrees of functionalization, the structure-property relationships of these materials are examined and the mechanical properties of these models are calculated using standard techniques.

  5. A new technique for measuring aerosols with moonlight observations and a sky background model

    Science.gov (United States)

    Jones, Amy; Noll, Stefan; Kausch, Wolfgang; Kimeswenger, Stefan; Szyszka, Ceszary; Unterguggenberger, Stefanie

    2014-05-01

    There have been ample studies of aerosols in urban, daylight conditions, but few of remote, nocturnal aerosols. We have developed a new technique for investigating such aerosols using our sky background model and astronomical observations. With a dedicated observing proposal we have successfully tested this technique for nocturnal, remote aerosol studies. The technique relies on three requirements: (a) a sky background model, (b) observations taken with scattered moonlight, and (c) spectrophotometric standard star observations for flux calibration. The sky background model was developed for the European Southern Observatory and is optimized for the Very Large Telescope at Cerro Paranal in the Atacama desert in Chile. This is a remote location with almost no urban aerosols, well suited for studying remote background aerosols that are normally difficult to detect. Our sky background model has an uncertainty of around 20 percent, and the scattered moonlight portion is even more accurate. The last two requirements are astronomical observations with moonlight and of standard stars at different airmasses, all during the same night. We had a dedicated observing proposal at Cerro Paranal with the instrument X-Shooter to use as a case study for this method. X-Shooter is a medium-resolution echelle spectrograph which covers wavelengths from 0.3 to 2.5 micrometers. We observed plain sky at six different angular distances (7, 13, 20, 45, 90, and 110 degrees) from the Moon for three different Moon phases (between full and half). Also, direct observations of spectrophotometric standard stars were taken at two different airmasses each night to measure the extinction curve via the Langley method. This is an ideal data set for testing this technique. The underlying assumption is that all components, other than the atmospheric conditions (specifically aerosols and airglow), can be calculated with the model for the given observing parameters. The scattered
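
    The Langley method mentioned above recovers the extinction coefficient by fitting log(flux) of a standard star against airmass; the intercept gives the extra-atmospheric flux. A minimal sketch with synthetic numbers (k_true and f0 are invented):

```python
# Langley extrapolation sketch: slope of log(flux) vs airmass = -extinction.
import numpy as np

airmass = np.array([1.0, 1.2, 1.5, 2.0, 2.5])
k_true, f0 = 0.12, 1000.0                       # invented extinction and flux
flux = f0 * np.exp(-k_true * airmass) \
       * (1 + 0.01 * np.random.default_rng(3).normal(size=5))  # 1% noise

slope, intercept = np.polyfit(airmass, np.log(flux), 1)
print("extinction coefficient:", -slope)        # ~0.12
print("extra-atmospheric flux:", np.exp(intercept))
```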

  6. Monte Carlo Techniques for the Comprehensive Modeling of Isotopic Inventories in Future Nuclear Systems and Fuel Cycles. Final Report

    International Nuclear Information System (INIS)

    Paul P.H. Wilson

    2005-01-01

    The development of Monte Carlo techniques for isotopic inventory analysis has been explored in order to facilitate the modeling of systems with flowing streams of material through varying neutron irradiation environments. This represents a novel application of Monte Carlo methods to a field that has traditionally relied on deterministic solutions to systems of first-order differential equations. The Monte Carlo techniques were based largely on the known modeling techniques of Monte Carlo radiation transport, but with important differences, particularly in the area of variance reduction and efficiency measurement. The software that was developed to implement and test these methods now provides a basis for validating approximate modeling techniques that are available to deterministic methodologies. The Monte Carlo methods have been shown to be effective in reproducing the solutions of simple problems that are possible using both stochastic and deterministic methods. The Monte Carlo methods are also effective for tracking flows of materials through complex systems including the ability to model removal of individual elements or isotopes in the system. Computational performance is best for flows that have characteristic times that are large fractions of the system lifetime. As the characteristic times become short, leading to thousands or millions of passes through the system, the computational performance drops significantly. Further research is underway to determine modeling techniques to improve performance within this range of problems. This report describes the technical development of Monte Carlo techniques for isotopic inventory analysis. The primary motivation for this solution methodology is the ability to model systems of flowing material being exposed to varying and stochastically varying radiation environments. The methodology was developed in three stages: analog methods which model each atom with true reaction probabilities (Section 2), non-analog methods
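
    A toy rendition of the analog stage described above, assuming a single effective transmutation rate: each atom's reaction time is sampled from an exponential distribution and compared against the irradiation interval, and the result is checked against the analytic (Bateman) solution. The rate and interval are invented.

```python
# Analog Monte Carlo sketch for a one-step chain A -> B with effective
# rate lam (decay + capture); each atom is followed individually.
import numpy as np

rng = np.random.default_rng(4)
n_atoms, t_end, lam = 100_000, 5.0, 0.3         # atoms, years, 1/years
t_react = rng.exponential(1.0 / lam, n_atoms)   # sampled reaction times
n_B = np.count_nonzero(t_react < t_end)         # atoms transmuted to B

analytic = 1.0 - np.exp(-lam * t_end)           # Bateman solution
print("MC fraction:", n_B / n_atoms, " analytic:", analytic)
```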

  7. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

    Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: new material on fault isolation and identification, and fault detection in feedback control loops; extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...

  8. Soft computing techniques toward modeling the water supplies of Cyprus.

    Science.gov (United States)

    Iliadis, L; Maris, F; Tachos, S

    2011-10-01

    This research effort aims at the application of soft computing techniques to water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-Regression Support Vector Machines (ε-RSVM) and fuzzy weighted ε-RSVM models were developed that accept five input parameters. At the same time, reliable artificial neural networks were developed to perform the same job. The 5-fold cross-validation approach was employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, the fuzzy weighted Support Vector Regression (SVR) combined with the fuzzy partition has been employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers. Copyright © 2011 Elsevier Ltd. All rights reserved.
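
    A minimal sketch of the ε-SVR plus 5-fold cross-validation setup on synthetic stand-ins for the five input parameters; the kernel and hyperparameters are arbitrary choices, not the paper's.

```python
# epsilon-SVR with 5-fold cross-validation on five synthetic inputs.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 5))                   # five input parameters
y = X @ np.array([0.5, -0.2, 0.1, 0.3, 0.0]) + 0.1 * rng.normal(size=120)

model = SVR(kernel="rbf", epsilon=0.05, C=10.0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("5-fold R^2:", scores.round(2))
```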

  9. Teaching scientific concepts through simple models and social communication techniques

    International Nuclear Information System (INIS)

    Tilakaratne, K.

    2011-01-01

    For science education, it is important to demonstrate to students the relevance of scientific concepts in everyday life experiences. Although there are methods available for achieving this goal, it is more effective if cultural flavor is also added to the teaching techniques, so that the teacher and students can easily relate the subject matter to their surroundings. Furthermore, this bridges the gap between science and day-to-day experience in an effective manner. It can also help students to use science as a tool to solve problems they face, and consequently they come to feel that science is a part of their lives. This paper describes how simple models and cultural communication techniques can be used effectively to demonstrate important scientific concepts to students at the secondary and higher secondary levels, drawing on two consecutive activities carried out at the Institute of Fundamental Studies (IFS), Sri Lanka. (author)

  10. [Hierarchy structuring for mammography technique by interpretive structural modeling method].

    Science.gov (United States)

    Kudo, Nozomi; Kurowarabi, Kunio; Terashita, Takayoshi; Nishimoto, Naoki; Ogasawara, Katsuhiko

    2009-10-20

    Participation in screening mammography is currently encouraged in Japan because of the increase in breast cancer morbidity. However, the pain and discomfort of mammography are recognized as a significant deterrent for women considering this examination. Thus quick procedures, sufficient experience, and advanced skills are required of radiologic technologists. The aim of this study was to make the key points of the imaging technique explicit and to help clarify the complicated procedure. We interviewed 3 technologists who were highly skilled in mammography, and 14 factors were retrieved using brainstorming and the KJ method. We then applied Interpretive Structural Modeling (ISM) to the factors and developed a hierarchical concept structure. The result was a six-layer hierarchy whose top node was explanation of the entire mammography procedure. Male technologists were identified as a negative factor. Factors concerned with explanation sat at the upper nodes. Attention was given to X-ray techniques and related considerations. The findings will help beginners improve their skills.
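
    The ISM step itself is mechanical once the influence matrix is elicited: form the Boolean transitive closure (the reachability matrix), then repeatedly extract the factors whose reachability set is contained in their antecedent set. A small sketch with a toy 4-factor adjacency matrix, not the study's 14 factors:

```python
# ISM core: reachability matrix via Warshall closure, then level partition.
import numpy as np

A = np.array([[1, 1, 0, 0],          # factor i directly influences factor j
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=bool)

R = A.copy()
for k in range(len(R)):              # Boolean transitive closure
    R |= np.outer(R[:, k], R[k, :])

remaining, levels = set(range(len(R))), []
while remaining:                     # peel off one hierarchy level at a time
    level = {i for i in remaining
             if {j for j in remaining if R[i, j]}
             <= {j for j in remaining if R[j, i]}}
    levels.append(sorted(level))
    remaining -= level
print("top-to-bottom levels:", levels)
```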

  11. Continuous Measurements and Quantitative Constraints: Challenge Problems for Discrete Modeling Techniques

    Science.gov (United States)

    Goodrich, Charles H.; Kurien, James; Clancy, Daniel (Technical Monitor)

    2001-01-01

    We present some diagnosis and control problems that are difficult to solve with discrete or purely qualitative techniques. We analyze the nature of the problems, classify them and explain why they are frequently encountered in systems with closed loop control. This paper illustrates the problem with several examples drawn from industrial and aerospace applications and presents detailed information on one important application: In-Situ Resource Utilization (ISRU) on Mars. The model for an ISRU plant is analyzed showing where qualitative techniques are inadequate to identify certain failure modes and to maintain control of the system in degraded environments. We show why the solution to the problem will result in significantly more robust and reliable control systems. Finally, we illustrate requirements for a solution to the problem by means of examples.

  12. Image-Based Modeling Techniques for Architectural Heritage 3d Digitalization: Limits and Potentialities

    Science.gov (United States)

    Santagati, C.; Inzerillo, L.; Di Paola, F.

    2013-07-01

    3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS), the different techniques of image matching, feature extraction and mesh optimization form an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on their computer, whereas desktop systems demand long processing times and heavyweight workflows. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare 3D models produced by Autodesk 123D Catch with 3D models produced by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.

  13. Improving default risk prediction using Bayesian model uncertainty techniques.

    Science.gov (United States)

    Kazemi, Reza; Mosleh, Ali

    2012-11-01

    Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential of exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessment of probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis. © 2012 Society for Risk Analysis.
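
    A heavily simplified sketch of the combination idea (a Bayesian-model-averaging flavor, not the authors' full framework): each rating model's default-probability estimate is weighted by the likelihood its past predictions assigned to observed outcomes. All numbers are illustrative.

```python
# Combine agency PD estimates with weights from historical predictive
# likelihood (uniform prior over models). Numbers are invented.
import numpy as np

p_est = np.array([0.02, 0.05, 0.01])           # each agency's PD estimate
hist_loglik = np.array([-40.0, -35.0, -50.0])  # past log-likelihood per model

w = np.exp(hist_loglik - hist_loglik.max())    # posterior model weights
w /= w.sum()
print("combined default probability:", float(w @ p_est))
```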

  14. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo (PMCMC) techniques for the analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall, PMCMC provides a very compelling, computationally fast and efficient framework for estimation. These advantages are used, for instance, to estimate stochastic volatility models with leverage effects or with Student-t distributed errors. We also model changing time series characteristics of the US inflation rate by considering a heteroskedastic ARFIMA model where...
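
    A minimal PMCMC sketch under strong simplifications: a bootstrap particle filter supplies the likelihood of a toy local-level model inside a random-walk Metropolis-Hastings step over one parameter. The stochastic-volatility and ARFIMA applications in the paper are far richer; everything below is illustrative.

```python
# Particle marginal Metropolis-Hastings for the state-noise scale of a
# local-level model (random walk observed in Gaussian noise).
import numpy as np

rng = np.random.default_rng(6)
T, sig_x, sig_y = 100, 0.3, 0.5
x = np.cumsum(sig_x * rng.normal(size=T))          # latent random walk
y = x + sig_y * rng.normal(size=T)                 # observations

def pf_loglik(log_sx, n_part=300):
    """Bootstrap particle filter estimate of the log-likelihood."""
    sx, parts, ll = np.exp(log_sx), np.zeros(n_part), 0.0
    for t in range(T):
        parts = parts + sx * rng.normal(size=n_part)        # propagate
        w = np.exp(-0.5 * ((y[t] - parts) / sig_y) ** 2) + 1e-300
        ll += np.log(w.mean() / (sig_y * np.sqrt(2 * np.pi)))
        parts = parts[rng.choice(n_part, n_part, p=w / w.sum())]  # resample
    return ll

theta = np.log(0.5)                                # initial log(sigma_x)
ll = pf_loglik(theta)
chain = []
for _ in range(500):                               # random-walk MH, flat prior
    prop = theta + 0.2 * rng.normal()
    ll_prop = pf_loglik(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    chain.append(np.exp(theta))
print("posterior mean of sigma_x:", np.mean(chain[100:]))  # near 0.3
```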

  15. Generalized image contrast enhancement technique based on the Heinemann contrast discrimination model

    Science.gov (United States)

    Liu, Hong; Nodine, Calvin F.

    1996-07-01

    This paper presents a generalized image contrast enhancement technique which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray-scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. Traditional histogram equalization can be reinterpreted as an image enhancement technique based on knowledge of human contrast perception; in fact, it is a special case of the proposed algorithm.
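
    As a pointer to the special case noted at the end, here is a minimal histogram-equalization mapping built from the image's cumulative distribution; it is the classical algorithm, not the paper's generalized Heinemann-based method.

```python
# Classical histogram equalization: remap gray levels through the CDF.
import numpy as np

def equalize(img):                       # img: 2-D uint8 array
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    return (255 * cdf[img]).astype(np.uint8)            # remap each pixel

img = np.random.default_rng(7).integers(80, 150, (64, 64), dtype=np.uint8)
out = equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())  # spread over 0..255
```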

  16. Laparoscopic anterior resection: new anastomosis technique in a pig model.

    Science.gov (United States)

    Bedirli, Abdulkadir; Yucel, Deniz; Ekim, Burcu

    2014-01-01

    Bowel anastomosis after anterior resection is one of the most difficult tasks to perform during laparoscopic colorectal surgery. This study aims to evaluate a new feasible and safe intracorporeal anastomosis technique after laparoscopic left-sided colon or rectum resection in a pig model. The technique was evaluated in 5 pigs. The OrVil device (Covidien, Mansfield, Massachusetts) was inserted into the anus and advanced proximally to the rectum. A 0.5-cm incision was made in the sigmoid colon, and the 2 sutures attached to its delivery tube were cut. After the delivery tube was evacuated through the anus, the tip of the anvil was removed through the perforation. The sigmoid colon was transected just distal to the perforation with an endoscopic linear stapler. The rectosigmoid segment to be resected was removed through the anus with a grasper, and distal transection was performed. A 25-mm circular stapler was inserted and combined with the anvil, and end-to-side intracorporeal anastomosis was then performed. We performed the technique in 5 pigs. Anastomosis required an average of 12 minutes. We observed that the proximal and distal donuts were completely removed in all pigs. No anastomotic air leakage was observed in any of the animals. This study shows the efficacy and safety of intracorporeal anastomosis with the OrVil device after laparoscopic anterior resection.

  17. High frequency magnetic field technique: mathematical modelling and development of a full scale water fraction meter

    Energy Technology Data Exchange (ETDEWEB)

    Cimpan, Emil

    2004-09-15

    This work is concerned with the development of a new on-line measuring technique for determining the water concentration in a two-component oil/water or three-component (i.e. multiphase) oil/water/gas flow. The technique is based on non-intrusive coil detectors, and experiments were performed both statically (medium at rest) and dynamically (medium flowing through a flow rig). The various coil detectors were constructed with either one or two coils, and specially designed electronics were used. The medium was composed of air, machine oil, and water of different conductivities, i.e. seawater and salt water with various conductivities (salt concentrations) such as 1 S/m, 4.9 S/m and 9.3 S/m. The experimental measurements made with the different mixtures were further used to model mathematically the physical principle underlying the technique. The technique is based on measuring the coil impedance and the signal frequency at the self-resonance frequency of the coil to determine the water concentration in the mix. Using numerous coils, it was found experimentally that, in general, both the coil impedance and the self-resonance frequency of the coil decreased as the medium conductivity increased. Both the impedance and the self-resonance frequency of the coil depend on the medium loss due to eddy currents induced within the conductive media in the mixture, i.e. the water. In order to detect relatively low values of the medium loss, the self-resonance frequency of the coil, and hence of the magnetic field penetrating the media, should be relatively high (within the MHz range and higher). The technique is therefore referred to throughout this work as the high frequency magnetic field technique (HFMFT). To use the HFMFT in practice, it was necessary to give the technique an analytical frame. This was done by working out a mathematical model that relates the impedance and the self-resonance frequency of the coil to the
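
    A hedged sketch of the measurement principle (not the thesis's model): treating the coil as a lossy parallel resonant circuit shows how growing eddy-current loss, represented by a larger series resistance R, depresses the impedance peak and slightly lowers the self-resonance frequency. Component values are hypothetical.

```python
# Lossy parallel resonant circuit: inductor (L, series loss R) shunted by
# stray capacitance C. Scan |Z| vs frequency for increasing loss.
import numpy as np

L, C = 100e-6, 20e-12                      # coil inductance, stray capacitance
f = np.linspace(1e6, 6e6, 200_000)
w = 2 * np.pi * f

for R in (10.0, 300.0, 1000.0):            # rising medium (eddy-current) loss
    Z = 1.0 / (1.0 / (R + 1j * w * L) + 1j * w * C)   # parallel combination
    k = np.argmax(np.abs(Z))
    print(f"R={R:6.0f} ohm  f_res={f[k]/1e6:.3f} MHz  "
          f"|Z|max={abs(Z[k])/1e3:.0f} kohm")
```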

  18. Next generation initiation techniques

    Science.gov (United States)

    Warner, Tom; Derber, John; Zupanski, Milija; Cohn, Steve; Verlinde, Hans

    1993-01-01

    Four-dimensional data assimilation strategies can generally be classified as either current or next generation, depending upon whether they are used operationally or not. Current-generation data-assimilation techniques are those that are presently used routinely in operational-forecasting or research applications. They can be classified into the following categories: intermittent assimilation, Newtonian relaxation, and physical initialization. It should be noted that these techniques are the subject of continued research, and their improvement will parallel the development of next generation techniques described by the other speakers. Next generation assimilation techniques are those that are under development but are not yet used operationally. Most of these procedures are derived from control theory or variational methods and primarily represent continuous assimilation approaches, in which the data and model dynamics are 'fitted' to each other in an optimal way. Another 'next generation' category is the initialization of convective-scale models. Intermittent assimilation systems use an objective analysis to combine all observations within a time window that is centered on the analysis time. Continuous first-generation assimilation systems are usually based on the Newtonian-relaxation or 'nudging' techniques. Physical initialization procedures generally involve the use of standard or nonstandard data to force some physical process in the model during an assimilation period. Under the topic of next-generation assimilation techniques, variational approaches are currently being actively developed. Variational approaches seek to minimize a cost or penalty function which measures a model's fit to observations, background fields and other imposed constraints. Alternatively, the Kalman filter technique, which is also under investigation as a data assimilation procedure for numerical weather prediction, can yield acceptable initial conditions for mesoscale models. The
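
    As a concrete complement to the nudging description above, the sketch below adds a Newtonian-relaxation term G*(obs - x) to a toy model tendency, pulling a poorly initialized state toward observations during the assimilation window. The model, the coefficient G and the "observations" are all invented.

```python
# Newtonian relaxation ("nudging") on a toy scalar model.
import numpy as np

dt, G = 0.01, 2.0
t = np.arange(0, 10, dt)
obs = np.sin(t)                              # "observations" on the model grid
x = 2.0                                      # poor initial state
traj = []
for k in range(len(t)):
    dxdt = -0.1 * x + G * (obs[k] - x)       # model tendency + nudging term
    x += dt * dxdt
    traj.append(x)
print("final misfit:", abs(traj[-1] - obs[-1]))
```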

  19. Feasibility of the use of optimisation techniques to calibrate the models used in a post-closure radiological assessment

    International Nuclear Information System (INIS)

    Laundy, R.S.

    1991-01-01

    This report addresses the feasibility of the use of optimisation techniques to calibrate the models developed for the impact assessment of a radioactive waste repository. The maximum likelihood method for improving parameter estimates is considered in detail, and non-linear optimisation techniques for finding solutions are reviewed. Applications are described for the calibration of groundwater flow, radionuclide transport and biosphere models. (author)
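
    A minimal sketch of the calibration idea on a toy stand-in for the transport models: with Gaussian measurement errors, maximum likelihood reduces to non-linear least squares over the model parameters. The decay model, data and starting guess are invented.

```python
# Maximum-likelihood calibration by non-linear least squares (Gaussian
# errors): fit a two-parameter decay model to synthetic observations.
import numpy as np
from scipy.optimize import least_squares

t_obs = np.array([1.0, 2.0, 4.0, 8.0])
c_obs = np.array([0.78, 0.62, 0.37, 0.14])          # measured concentrations

def residuals(theta):                                # theta = [c0, k]
    c0, k = theta
    return c0 * np.exp(-k * t_obs) - c_obs

fit = least_squares(residuals, x0=[1.0, 0.1])        # non-linear optimisation
print("calibrated c0, k:", fit.x)                    # near 1.0, 0.25
```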

  20. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    Full Text Available This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration, utilizing Markov Model and Monte Carlo (MC) Simulation techniques. In this article, an effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, both with and without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of the results is later carried out with the help of MC Simulation. In addition, MC Simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
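
    A reduced sketch of the two tools on binary (up/down) units rather than the paper's three-state units: the steady-state series availability follows from the Markov balance equations, and a Monte Carlo simulation of competing exponential failure/repair clocks reproduces it. Rates are hypothetical.

```python
# Two independent units in series: Markov steady state vs Monte Carlo.
import numpy as np

lam = np.array([0.01, 0.02])          # failure rates (1/h)
mu = np.array([0.10, 0.25])           # repair rates (1/h)

A_markov = np.prod(mu / (lam + mu))   # per-unit availability mu/(lam+mu)

rng = np.random.default_rng(8)
t_end, up_time, n_runs = 10_000.0, 0.0, 200
for _ in range(n_runs):
    t, state = 0.0, np.ones(2, dtype=bool)          # both units up
    while t < t_end:
        rates = np.where(state, lam, mu)            # active clock per unit
        dt = rng.exponential(1.0 / rates.sum())     # time to next event
        if state.all():
            up_time += min(dt, t_end - t)           # system up only if both up
        i = rng.choice(2, p=rates / rates.sum())    # which unit switches
        state[i] = ~state[i]
        t += dt
print("Markov:", A_markov, " Monte Carlo:", up_time / (n_runs * t_end))
```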

  1. 3-D thermo-mechanical laboratory modeling of plate-tectonics: modeling scheme, technique and first experiments

    Directory of Open Access Journals (Sweden)

    D. Boutelier

    2011-05-01

    Full Text Available We present an experimental apparatus for 3-D thermo-mechanical analogue modeling of plate tectonic processes such as oceanic and continental subduction, arc-continent or continental collision. The model lithosphere, made of temperature-sensitive elasto-plastic analogue materials with strain softening, is subjected to a constant temperature gradient causing a strength reduction with depth in each layer. The surface temperature is imposed using infrared emitters, which makes it possible to maintain an unobstructed view of the model surface and to use a high-resolution optical strain monitoring technique (Particle Imaging Velocimetry). Subduction experiments illustrate how the stress conditions on the interplate zone can be estimated using a force sensor attached to the back of the upper plate and adjusted via the density and strength of the subducting lithosphere or the lubrication of the plate boundary. The first experimental results reveal the potential of the experimental set-up to investigate the three-dimensional solid-mechanics interactions of lithospheric plates in multiple natural situations.

  2. Forecasting performances of three automated modelling techniques during the economic crisis 2007-2009

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    2014-01-01

    In this work we consider the forecasting of macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feed-forward autoregressive neural network models. What makes these models interesting in the present context is the fact that they form a class of universal approximators and may be expected to work well during exceptional periods such as major economic crises. Neural network models are often difficult to estimate, and we follow the idea of White (2006) of transforming the specification and nonlinear estimation problem... The performances of these three model selectors are compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series from the G7 countries and the four...

  3. Evaluation of mesh morphing and mapping techniques in patient specific modeling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2013-01-01

    Robust generation of pelvic finite element models is necessary to understand the variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis and their strain distributions evaluated. Morphing and mapping techniques were effectively applied to generate good quality geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.
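
    A minimal sketch of the projection step that drives such morphing, assuming plain nearest-vertex matching; the paper additionally constrains each shift along the node normal, which is omitted here, and the point sets below are synthetic.

```python
# Shift each source-surface node to the nearest target-surface vertex.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(9)
source = rng.normal(size=(1000, 3))                  # source mesh surface nodes
target = 1.1 * rng.normal(size=(2000, 3))            # target surface vertices

tree = cKDTree(target)
dist, idx = tree.query(source)                       # nearest target vertex
morphed = target[idx]                                # shifted source nodes
print("mean shift distance:", dist.mean())
```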

  4. Evaluation of mesh morphing and mapping techniques in patient specific modelling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2012-08-01

    Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.

  5. Integration of computational modeling and experimental techniques to design fuel surrogates

    DEFF Research Database (Denmark)

    Choudhury, H.A.; Intikhab, S.; Kalakul, Sawitree

    2017-01-01

    performance. A simplified alternative is to develop surrogate fuels that have fewer compounds and emulate certain important desired physical properties of the target fuels. Six gasoline blends were formulated through a computer-aided, model-based technique, "Mixed Integer Non-Linear Programming" (MINLP)... Virtual Process-Product Design Laboratory (VPPD-Lab) are applied onto the defined compositions of the surrogate gasoline. The aim is primarily to verify the defined composition of the gasoline by means of VPPD-Lab. ρ, η and RVP are calculated with more accuracy, and constraints such as the distillation curve... and flash point on the blend design are also considered. A post-design, experiment-based verification step is proposed to further improve and fine-tune the "best" selected gasoline blends following the computational work. Here, advanced experimental techniques are used to measure the RVP, ρ, η, RON...

  6. Investigating the probability of detection of typical cavity shapes through modelling and comparison of geophysical techniques

    Science.gov (United States)

    James, P.

    2011-12-01

    With a growing need for housing in the U.K., the government has proposed increased development of brownfield sites. However, old mine workings and natural cavities represent a potential hazard before, during and after construction on such sites, and add further complication to subsurface parameters. Cavities are hence a limitation to certain redevelopment, and their detection is an ever more important consideration. The current standard technique for cavity detection is a borehole grid, which is intrusive, non-continuous, slow and expensive. A new, robust investigation standard for the detection of cavities is sought, and geophysical techniques offer an attractive alternative. Geophysical techniques have previously been utilised successfully in the detection of cavities in various geologies, but they still have an uncertain reputation in the engineering industry. Engineers are unsure of the techniques and are more inclined to rely on well-known methods than to utilise new technologies. Bad experiences with geophysics are commonly due to the indiscriminate choice of particular techniques. It is imperative that a geophysical survey is designed with the specific site and target in mind at all times, with the ability and judgement to rule out some, or all, techniques. To this author's knowledge no comparative software exists to aid technique choice. Also, previous modelling software limits the shapes of bodies, and hence typical cavity shapes are not represented. Here, we introduce 3D modelling software (Matlab) which computes and compares the response to various cavity targets from a range of techniques (gravity, gravity gradient, magnetic, magnetic gradient and GPR). Typical near-surface cavity shapes are modelled, including shafts, bellpits, various lining and capping materials, and migrating voids. The probability of cavity detection is assessed in typical subsurface and noise conditions across a range of survey parameters. Techniques can be compared and the limits of detection distance
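
    One of the simplest forward models such comparative software evaluates is sketched below: the vertical gravity anomaly over a spherical air-filled cavity along a surface profile. The geometry and density contrast are made-up values, not from the study.

```python
# Gravity anomaly of a buried spherical cavity along a surface profile:
# g_z = G * m * z / (x^2 + z^2)^(3/2), with m the anomalous mass.
import numpy as np

G = 6.674e-11                           # gravitational constant (SI)
r, z, drho = 1.5, 4.0, -1800.0          # radius (m), depth (m), contrast (kg/m^3)
x = np.linspace(-20, 20, 401)           # profile coordinate (m)

m = (4.0 / 3.0) * np.pi * r**3 * drho   # anomalous (deficit) mass
gz = G * m * z / (x**2 + z**2) ** 1.5   # vertical attraction (m/s^2)
print("peak anomaly: %.2f microGal" % (gz.min() * 1e8))  # 1 uGal = 1e-8 m/s^2
```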

  7. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.

    2017-07-06

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation, and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of the water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which reduce the model's groundwater estimation errors by 34% and 31%, respectively, compared with a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
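
    For reference, here is a minimal stochastic (perturbed-observation) EnKF analysis step, the baseline that the deterministic variants avoid by updating without observation perturbations; the dimensions and values are toy placeholders, not W3RA/GRACE sizes.

```python
# Stochastic EnKF analysis step with ensemble-estimated covariances.
import numpy as np

rng = np.random.default_rng(10)
n_ens, n_state = 30, 5
ens = rng.normal(size=(n_state, n_ens))           # forecast ensemble
H = np.zeros((1, n_state)); H[0, 0] = 1.0         # observe first state (TWS-like)
y, r = np.array([1.2]), 0.1**2                    # observation and its variance

X = ens - ens.mean(axis=1, keepdims=True)         # ensemble anomalies
P = X @ X.T / (n_ens - 1)                         # ensemble covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r)      # Kalman gain
y_pert = y[:, None] + np.sqrt(r) * rng.normal(size=(1, n_ens))  # perturbed obs
ens_a = ens + K @ (y_pert - H @ ens)              # analysis ensemble
print("prior mean:", ens.mean(1)[0], " posterior mean:", ens_a.mean(1)[0])
```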

  8. Evaluation of 3-dimensional superimposition techniques on various skeletal structures of the head using surface models.

    Science.gov (United States)

    Gkantidis, Nikolaos; Schauseil, Michael; Pazera, Pawel; Zorkun, Berna; Katsaros, Christos; Ludwig, Björn

    2015-01-01

    To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data transformed to triangulated surface data. Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses. There was no difference among operators or between time points in the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate. Although the precision and reproducibility of all techniques were high, the detected structural changes differed significantly between different techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.

  9. BIOMEHANICAL MODEL OF THE GOLF SWING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Milan Čoh

    2011-08-01

    Full Text Available Golf is an extremely complex game which depends on a number of interconnected factors. One of the most important elements is undoubtedly the golf swing technique. High performance in the golf swing technique is generated by: the level of motor abilities, a high degree of movement control, the level of movement structure stabilisation, morphological characteristics, inter- and intra-muscular coordination, motivation, and concentration. The golf swing technique was investigated using the biomechanical analysis method. Kinematic parameters were registered using two synchronised high-speed cameras at a frequency of 2,000 Hz. The sample of subjects consisted of three professional golf players. The study results showed a relatively high variability of the swing technique. The maximum velocity of the ball after a wood swing ranged from 227 to 233 km/h. The velocity of the ball after an iron swing was lower by 10 km/h on average. The elevation angle of the ball ranged from 11.7 to 15.3 degrees. In the final phase of the golf swing, i.e. the downswing, the trunk rotators play the key role.

  10. IMAGE-BASED MODELING TECHNIQUES FOR ARCHITECTURAL HERITAGE 3D DIGITALIZATION: LIMITS AND POTENTIALITIES

    Directory of Open Access Journals (Sweden)

    C. Santagati

    2013-07-01

    Full Text Available 3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS), the different techniques of image matching, feature extraction and mesh optimization form an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on their computer, whereas desktop systems demand long processing times and heavyweight workflows. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare 3D models produced by Autodesk 123D Catch with 3D models produced by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.

  11. Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    Science.gov (United States)

    Parke, F. I.

    1981-01-01

    Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of flow parameters (pressure, temperature and velocity vector) at many points in the fluid. Visualization of the spatial variation in the values of these parameters is important for comprehending and checking the data generated, for identifying the regions of interest in the flow, and for effectively communicating information about the flow to others. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied and the results are presented.

  12. Increasing the Intelligence of Virtual Sales Assistants through Knowledge Modeling Techniques

    OpenAIRE

    Molina, Martin

    2001-01-01

    Shopping agents are web-based applications that help consumers to find appropriate products in the context of e-commerce. In this paper we argue for the utility of advanced model-based techniques, recently proposed in the fields of Artificial Intelligence and Knowledge Engineering, to increase the level of support provided by this type of application. We illustrate this approach with a virtual sales assistant that dynamically configures a product according to the needs...

  13. In silico modeling techniques for predicting the tertiary structure of human H4 receptor.

    Science.gov (United States)

    Zaid, Hilal; Raiyn, Jamal; Osman, Midhat; Falah, Mizied; Srouji, Samer; Rayan, Anwar

    2016-01-01

    First cloned in 2000, the human Histamine H4 Receptor (hH4R) is the last member of the histamine receptor family discovered so far. It belongs to the GPCR super-family and is involved in a wide variety of immunological and inflammatory responses. Potential hH4R antagonists are proposed to have therapeutic potential for the treatment of allergies, inflammation, asthma and colitis. So far, no hH4R ligands have been successfully introduced to the pharmaceutical market, which creates a strong demand for new selective ligands to be developed. In silico techniques and structure-based modeling are likely to facilitate the achievement of this goal. In this review paper we attempt to cover the fundamental concepts of hH4R structure modeling and its implementations in drug discovery and development, especially those that have been experimentally tested, and to highlight some ideas currently being discussed on the dynamic nature of hH4R and GPCRs, with regard to computerized techniques for 3-D structure modeling.

  14. Data-driven techniques to estimate parameters in a rate-dependent ferromagnetic hysteresis model

    International Nuclear Information System (INIS)

    Hu Zhengzheng; Smith, Ralph C.; Ernstberger, Jon M.

    2012-01-01

    The quantification of rate-dependent ferromagnetic hysteresis is important in a range of applications, including high-speed milling using Terfenol-D actuators. There exists a variety of frameworks for characterizing rate-dependent hysteresis, including the magnetic model in Ref. , the homogenized energy framework, Preisach formulations that accommodate after-effects, and Prandtl-Ishlinskii models. A critical issue when using any of these models to characterize physical devices concerns the efficient estimation of model parameters through least-squares data fits. A crux of this issue is the determination of initial parameter estimates based on easily measured attributes of the data. In this paper, we present data-driven techniques to efficiently and robustly estimate parameters in the homogenized energy model. This framework was chosen due to its physical basis and its applicability to ferroelectric, ferromagnetic and ferroelastic materials.
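
    A small sketch of the data-driven flavor of the approach: initial guesses for the saturation magnetization and coercive field are read directly off a (synthetic) measured branch before any least-squares refinement. The tanh branch below is a stand-in for real loop data, not the homogenized energy model itself.

```python
# Data-driven initial parameter estimates from a measured hysteresis branch.
import numpy as np

H = np.linspace(-1.0, 1.0, 201)                     # applied field (branch)
M = np.tanh(4.0 * (H + 0.15))                       # synthetic branch data

M_sat = np.max(np.abs(M))                           # saturation estimate
i = np.argmin(np.abs(M))                            # nearest zero crossing
H_c = abs(H[i])                                     # coercive-field estimate
print(f"initial guesses: M_sat={M_sat:.2f}, H_c={H_c:.2f}")
```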

  15. Measurement and modeling of out-of-field doses from various advanced post-mastectomy radiotherapy techniques

    Science.gov (United States)

    Yoon, Jihyung; Heins, David; Zhao, Xiaodong; Sanders, Mary; Zhang, Rui

    2017-12-01

    More and more advanced radiotherapy techniques have been adopted for post-mastectomy radiotherapy (PMRT). Patient dose reconstruction is challenging for these advanced techniques because they enlarge the low-dose out-of-field area, while the accuracy of out-of-field dose calculations by current commercial treatment planning systems (TPSs) is poor. We aim to measure and model the out-of-field radiation doses from various advanced PMRT techniques. PMRT treatment plans for an anthropomorphic phantom were generated, including volumetric modulated arc therapy with standard and flattening-filter-free photon beams, mixed beam therapy, 4-field intensity modulated radiation therapy (IMRT), and tomotherapy. We measured doses in the phantom where the TPS-calculated doses were lower than 5% of the prescription dose using thermoluminescent dosimeters (TLD). The TLD measurements were corrected by two additional energy correction factors, namely the out-of-beam out-of-field (OBOF) correction factor K_OBOF and the in-beam out-of-field (IBOF) correction factor K_IBOF, which were determined by separate measurements using an ion chamber and TLD. A simple analytical model was developed to predict out-of-field dose as a function of distance from the field edge for each PMRT technique. The root mean square discrepancies between measured and calculated out-of-field doses were within 0.66 cGy Gy^-1 for all techniques. The IBOF doses were highly scattered and should be evaluated case by case. One can easily combine the measured out-of-field dose here with the in-field dose calculated by the local TPS to reconstruct organ doses for a specific PMRT patient if the same treatment apparatus and technique were used.
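
    The "simple analytical model" is not specified in the excerpt; the sketch below assumes one plausible form, a single exponential decay of dose with distance from the field edge, fitted to synthetic TLD-style points.

```python
# Fit an assumed exponential out-of-field dose model to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

d = np.array([5.0, 10.0, 15.0, 20.0, 30.0])      # distance from field edge (cm)
dose = np.array([2.1, 1.1, 0.62, 0.35, 0.12])    # dose (cGy per Gy prescribed)

model = lambda d, a, b: a * np.exp(-b * d)       # assumed functional form
(a, b), _ = curve_fit(model, d, dose, p0=[4.0, 0.1])
print(f"dose(d) = {a:.2f} * exp(-{b:.3f} * d) cGy/Gy")
```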

  16. Evaluation of 3-dimensional superimposition techniques on various skeletal structures of the head using surface models.

    Directory of Open Access Journals (Sweden)

    Nikolaos Gkantidis

    Full Text Available To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data transformed to triangulated surface data. Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses. There was no difference among operators or between time points in the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate. Although the precision and reproducibility of all techniques were high, the detected structural changes differed significantly between different techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.

  17. Using Game Theory Techniques and Concepts to Develop Proprietary Models for Use in Intelligent Games

    Science.gov (United States)

    Christopher, Timothy Van

    2011-01-01

    This work is about analyzing games as models of systems. The goal is to understand the techniques that have been used by game designers in the past, and to compare them to the study of mathematical game theory. Through the study of a system or concept a model often emerges that can effectively educate students about making intelligent decisions…

  18. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models.

    Science.gov (United States)

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

    Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Multi-factor models and signal processing techniques application to quantitative finance

    CERN Document Server

    Darolles, Serges; Jay, Emmanuelle

    2013-01-01

    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices. This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere

  20. Extending the reach of strong-coupling: an iterative technique for Hamiltonian lattice models

    International Nuclear Information System (INIS)

    Alberty, J.; Greensite, J.; Patkos, A.

    1983-12-01

    The authors propose an iterative method for doing lattice strong-coupling-like calculations in a range of medium to weak couplings. The method is a modified Lanczos scheme, with greatly improved convergence properties. The technique is tested on the Mathieu equation and on a Hamiltonian finite-chain XY model, with excellent results. (Auth.)

  1. Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Migration

    Science.gov (United States)

    DeCarvalho, Nelson V.; Chen, B. Y.; Pinho, Silvestre T.; Baiz, P. M.; Ratcliffe, James G.; Tay, T. E.

    2013-01-01

    A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.

  2. Image acquisition and planimetry systems to develop wounding techniques in 3D wound model

    Directory of Open Access Journals (Sweden)

    Kiefer Ann-Kathrin

    2017-09-01

    Wound healing represents a complex biological repair process. Established 2D monolayers and wounding techniques investigate cell migration, but do not represent coordinated multi-cellular systems. We aim to use wound surface area measurements obtained from image acquisition and planimetry systems to establish our wounding technique and in vitro organotypic tissue. These systems will be used in our future wound healing treatment studies to assess the rate of wound closure in response to wound healing treatment with light therapy (photobiomodulation). The image acquisition and planimetry systems were developed, calibrated, and verified to measure wound surface area in vitro. The system consists of a recording system (Sony DSC HX60, 20.4 MPixel, 1/2.3″ CMOS sensor) and is calibrated with 1 mm scale paper. Macro photography with an optical zoom magnification of 2:1 achieves sufficient resolution to evaluate the 3 mm wound size and healing growth. The camera system was leveled with an aluminum construction to ensure constant distance and orientation of the images. The JPG-format images were processed with a planimetry system in MATLAB. Edge detection enables definition of the wounded area, and the wound area can be calculated with surface integrals. To separate the wounded area from the background, the image was filtered in several steps. Agar models, injured by several test persons with different levels of experience, were used as pilot data to test the planimetry software. These image acquisition and planimetry systems support the development of our wound healing research. The reproducibility of our wounding technique can be assessed by the variability in initial wound surface area, and wound healing treatment effects can be assessed by the change in the rate of wound closure. These techniques represent the foundations of our wound model, wounding technique, and analysis systems in our ongoing studies of wound healing and therapy.
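
    The record describes the MATLAB pipeline only in outline; a minimal Python sketch of the same planimetry idea (threshold, clean the mask, count wound pixels, convert to mm² via a scale calibration) using scikit-image — the filename and the mm-per-pixel factor are assumptions:

    ```python
    from skimage import io, color, filters, morphology

    MM_PER_PIXEL = 0.02  # assumed calibration from the 1 mm scale paper

    image = io.imread("wound.jpg")  # hypothetical JPG from the camera rig
    gray = color.rgb2gray(image)

    # Otsu thresholding separates the (darker) wounded area from the background.
    wound_mask = gray < filters.threshold_otsu(gray)

    # Remove speckle and fill small holes before measuring.
    wound_mask = morphology.remove_small_objects(wound_mask, min_size=64)
    wound_mask = morphology.remove_small_holes(wound_mask, area_threshold=64)

    area_mm2 = wound_mask.sum() * MM_PER_PIXEL ** 2
    print(f"wound surface area: {area_mm2:.2f} mm^2")
    ```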

  3. Proposal of a congestion control technique in LAN networks using an econometric model ARIMA

    Directory of Open Access Journals (Sweden)

    Joaquín F Sánchez

    2017-01-01

    Hasty software development can produce immediate implementations with unnecessarily complex and hardly readable source code. These small kinds of software decay generate a technical debt that can grow large enough to seriously affect future maintenance activities. This work presents an analysis technique for identifying architectural technical debt related to non-uniformity of naming patterns; the technique is based on term frequency over package hierarchies. The proposal has been evaluated on projects of two popular organizations, Apache and Eclipse. The results have shown that most of the projects have frequent occurrences of the proposed naming patterns, and that using a graph model and aggregated data could enable the elaboration of simple queries for debt identification. The technique has features that favor its applicability to emergent architectures and agile software development.

  4. Using Unified Modelling Language (UML) as a process-modelling technique for clinical-research process improvement.

    Science.gov (United States)

    Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B

    2007-03-01

    The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding, perform systematic appraisal, and identify areas of improvement of a business process. Unified modelling language (UML) is a general-purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier to efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage (especially from the pilot phase), parallel processing of data, and correctly positioned process controls should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.

  5. Comparison of lung tumor motion measured using a model-based 4DCT technique and a commercial protocol.

    Science.gov (United States)

    O'Connell, Dylan; Shaverdian, Narek; Kishan, Amar U; Thomas, David H; Dou, Tai H; Lewis, John H; Lamb, James M; Cao, Minsong; Tenn, Stephen; Percy, Lee P; Low, Daniel A

    2017-11-11

    To compare lung tumor motion measured with a model-based technique to commercial 4-dimensional computed tomography (4DCT) scans and describe a workflow for using model-based 4DCT as a clinical simulation protocol. Twenty patients were imaged using a model-based technique and commercial 4DCT. Tumor motion was measured on each commercial 4DCT dataset and was calculated on model-based datasets for 3 breathing amplitude percentile intervals: 5th to 85th, 5th to 95th, and 0th to 100th. Internal target volumes (ITVs) were defined on the 4DCT and 5th to 85th interval datasets and compared using Dice similarity. Images were evaluated for noise and rated by 2 radiation oncologists for artifacts. Mean differences in tumor motion magnitude between commercial and model-based images were 0.47 ± 3.0, 1.63 ± 3.17, and 5.16 ± 4.90 mm for the 5th to 85th, 5th to 95th, and 0th to 100th amplitude intervals, respectively. Dice coefficients between ITVs defined on commercial and 5th to 85th model-based images had a mean value of 0.77 ± 0.09. Single standard deviation image noise was 11.6 ± 9.6 HU in the liver and 6.8 ± 4.7 HU in the aorta for the model-based images compared with 57.7 ± 30 and 33.7 ± 15.4 for commercial 4DCT. Mean model error within the ITV regions was 1.71 ± 0.81 mm. Model-based images exhibited reduced presence of artifacts at the tumor compared with commercial images. Tumor motion measured with the model-based technique using the 5th to 85th percentile breathing amplitude interval corresponded more closely to commercial 4DCT than the 5th to 95th or 0th to 100th intervals, which showed greater motion on average. The model-based technique tended to display increased tumor motion when breathing amplitude intervals wider than 5th to 85th were used because of the influence of unusually deep inhalations. These results suggest that care must be taken in selecting the appropriate interval during image generation when using model-based 4DCT methods. Copyright © 2017
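
    The Dice similarity used to compare ITVs is simple to compute from binary masks on a common grid; a minimal sketch (the mask shapes and contents below are placeholders):

    ```python
    import numpy as np

    def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Dice similarity coefficient between two boolean volumes."""
        intersection = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

    # Hypothetical ITV masks on a shared 3D voxel grid.
    itv_commercial = np.zeros((64, 64, 32), dtype=bool)
    itv_model_based = np.zeros((64, 64, 32), dtype=bool)
    itv_commercial[20:40, 20:40, 10:20] = True
    itv_model_based[22:42, 21:41, 10:20] = True

    print(f"Dice = {dice(itv_commercial, itv_model_based):.3f}")
    ```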

  6. An experimental evaluation of the generalizing capabilities of process discovery techniques and black-box sequence models

    NARCIS (Netherlands)

    Tax, N.; van Zelst, S.J.; Teinemaa, I.; Gulden, Jens; Reinhartz-Berger, Iris; Schmidt, Rainer; Guerreiro, Sérgio; Guédria, Wided; Bera, Palash

    2018-01-01

    A plethora of automated process discovery techniques have been developed which aim to discover a process model based on event data originating from the execution of business processes. The aim of the discovered process models is to describe the control-flow of the underlying business process. At the

  7. A Continuous Dynamic Traffic Assignment Model From Plate Scanning Technique

    Energy Technology Data Exchange (ETDEWEB)

    Rivas, A.; Gallego, I.; Sanchez-Cambronero, S.; Ruiz-Ripoll, L.; Barba, R.M.

    2016-07-01

    This paper presents a methodology for the dynamic estimation of traffic flows on all links of a network from observable field data, assuming the first-in-first-out (FIFO) hypothesis. The traffic flow intensities recorded at the exit of the scanned links are propagated to obtain the flow waves on unscanned links. For that, the model calculates the flow-cost functions from information registered with the plate scanning technique. The model also addresses the concern about whether the parameters of the flow-cost functions are of sufficient quality to replicate real traffic flow behaviour: it includes a new algorithm that adjusts the parameter values to the link characteristics when their quality is questionable. This requires an a priori study of the location of the scanning devices so that all path flows can be identified and travel times measured on all links. A synthetic network is used to illustrate the proposed method and to prove its usefulness and feasibility. (Author)
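
    Under the FIFO hypothesis, a flow wave recorded at the exit of a scanned link can be propagated to a downstream unscanned link by shifting it by the link travel time. A toy sketch of this propagation step (the uniform sampling interval and the constant travel time are simplifying assumptions, not the paper's full algorithm):

    ```python
    import numpy as np

    DT_S = 30.0            # sampling interval of the scanned counts (s), assumed
    TRAVEL_TIME_S = 150.0  # constant travel time of the unscanned link (s), assumed

    # Hypothetical flow intensities (vehicles per interval) at a scanned exit.
    q_scanned = np.array([4, 6, 9, 12, 10, 7, 5, 3, 2, 2], dtype=float)

    shift = int(round(TRAVEL_TIME_S / DT_S))
    q_downstream = np.zeros_like(q_scanned)
    # FIFO: the wave arrives later, but its order is preserved.
    q_downstream[shift:] = q_scanned[: q_scanned.size - shift]

    print(q_downstream)
    ```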

  8. Combined rock-physical modelling and seismic inversion techniques for characterisation of stacked sandstone reservoir

    NARCIS (Netherlands)

    Justiniano, A.; Jaya, Y.; Diephuis, G.; Veenhof, R.; Pringle, T.

    2015-01-01

    The objective of the study is to characterise the Triassic massive stacked sandstone deposits of the Main Buntsandstein Subgroup at Block Q16 located in the West Netherlands Basin. The characterisation was carried out through combining rock-physics modelling and seismic inversion techniques. The

  9. Development of groundwater flow modeling techniques for the low-level radwaste disposal (III)

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Dae-Seok; Kim, Chun-Soo; Kim, Kyung-Soo; Park, Byung-Yoon; Koh, Yong-Kweon; Park, Hyun-Soo [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-12-01

    The project aims to establish the methodology of hydrogeologic assessment through the field application of the evaluation techniques gained and accumulated from previous hydrogeological research works in Korea. The results of the project and their possible areas for application are (1) acquisition of detailed hydrogeologic information by using a borehole televiewer and a multipacker system, (2) establishment of an integrated hydrogeological assessment method for fractured rocks, (3) acquisition of the fracture parameters for fracture modeling, (4) an inversion analysis of hydraulic parameters from fracture network modeling, (5) geostatistical methods for the spatial assignment of hydraulic parameters for fractured rocks, and (6) establishment of the groundwater flow modeling procedure for a repository. 75 refs., 72 figs., 34 tabs. (Author)

  10. Models of signal validation using artificial intelligence techniques applied to a nuclear reactor

    International Nuclear Information System (INIS)

    Oliveira, Mauro V.; Schirru, Roberto

    2000-01-01

    This work presents two models of signal validation in which analytical redundancy for the monitored signals of a nuclear plant is provided by neural networks. In one model the analytical redundancy is provided by a single neural network, while in the other it is provided by several neural networks, each one working in a specific part of the entire operation region of the plant. Four clustering techniques were tested to separate the entire operation region into several specific regions. Additional information on system reliability is supplied by a fuzzy inference system. The models were implemented in C language and tested with signals acquired from the Angra I nuclear power plant, from start-up to 100% of power. (author)
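
    A minimal sketch of the multi-network variant: cluster the operation region first, then train one regressor per cluster to reconstruct a monitored signal from the others. The synthetic data, scikit-learn, and the MLP standing in for the paper's C implementation are all assumptions:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(2000, 4))               # four plant signals (synthetic)
    y = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.2 * X[:, 2]  # signal to validate

    # Split the operation region into clusters; one network per cluster.
    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
    nets = {}
    for k in range(4):
        idx = kmeans.labels_ == k
        nets[k] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                               random_state=0).fit(X[idx], y[idx])

    # Analytical redundancy: compare a measured value with its reconstruction.
    x_new = X[:1]
    k = kmeans.predict(x_new)[0]
    print(f"cluster {k}, residual {y[0] - nets[k].predict(x_new)[0]:+.4f}")
    ```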

  11. Application of neural network technique to determine a corrector surface for global geopotential model using GPS/levelling measurements in Egypt

    Science.gov (United States)

    Elshambaky, Hossam Talaat

    2018-01-01

    Owing to the appearance of many global geopotential models, it is necessary to determine the most appropriate model for use in Egyptian territory. In this study, we investigate three global models, namely EGM2008, EIGEN-6c4, and GECO. We use five mathematical transformation techniques, i.e., polynomial expression, exponential regression, least-squares collocation, multilayer feed-forward neural network, and radial basis neural networks, to make the conversion from the regional geometrical geoid to the global geoid models and vice versa. From a statistical comparison based on quality indexes between the transformation techniques, we confirm that the multilayer feed-forward neural network with two neurons is the most accurate of the examined transformation techniques, and that, based on the mean tide condition, EGM2008 represents the most suitable global geopotential model for use in Egyptian territory to date. The final product of this study is the corrector surface used to facilitate the transformation between the regional geometrical geoid model and the global geoid model.
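
    A minimal sketch of fitting such a corrector surface with a small feed-forward network: inputs are horizontal coordinates, and the target is the difference between the geometric (GPS/levelling) undulation and the global-model undulation. All data below are synthetic placeholders; the two-neuron hidden layer echoes the configuration reported as most accurate:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    lon_lat = rng.uniform([25.0, 22.0], [35.0, 32.0], size=(300, 2))  # synthetic points
    n_gps = 15.0 + 0.10 * lon_lat[:, 0] - 0.05 * lon_lat[:, 1]        # geometric undulation (made up)
    n_ggm = n_gps - 0.4 * np.sin(lon_lat[:, 0] / 2.0)                 # global-model undulation (made up)

    corrector = n_gps - n_ggm  # the quantity the corrector surface must reproduce

    net = MLPRegressor(hidden_layer_sizes=(2,), max_iter=5000, random_state=0)
    net.fit(lon_lat, corrector)

    # Predicted correction to the global model at a new point.
    p = np.array([[30.0, 27.0]])
    print(f"correction at {p[0]}: {net.predict(p)[0]:+.3f} m")
    ```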

  12. A time-dependent event tree technique for modelling recovery operations

    International Nuclear Information System (INIS)

    Kohut, P.; Fitzpatrick, R.

    1991-01-01

    The development of a simplified time-dependent event tree methodology is presented. The technique is especially suited to describing recovery operations in nuclear reactor accident scenarios initiated by support system failures. The event tree logic is constructed using time-dependent top events combined with a damage function that contains information about the final-state time behavior of the reactor core. Both the failure and the success states may be utilized in the analysis. The method is illustrated by modeling the loss of the service water function, with special emphasis on the RCP [reactor coolant pump] seal LOCA [loss of coolant accident] scenario. 5 refs., 2 figs., 2 tabs

  13. Construction of an experimental simplified model for determining of flow parameters in chemical reactors, using nuclear techniques

    International Nuclear Information System (INIS)

    Araujo Paiva, J.A. de.

    1981-03-01

    The development of a simplified experimental model for the investigation of nuclear techniques to determine solid-phase parameters in gas-solid flows is presented. A method for the measurement of the solid-phase residence time inside a chemical reactor of the type utilised in fluid catalytic cracking is described. An appropriate radioactive labelling technique for the solid phase and the construction of an electronic timing circuit were the principal stages in the definition of the measurement technique. (Author)

  14. Nudging technique for scale bridging in air quality/climate atmospheric composition modelling

    Directory of Open Access Journals (Sweden)

    A. Maurizi

    2012-04-01

    The interaction between air quality and climate involves dynamical scales that cover a very wide range. Bridging these scales in numerical simulations is fundamental in studies devoted to megacity/hot-spot impacts on larger scales. A technique based on nudging is proposed as a bridging method that can couple different models at different scales.

    Here, nudging is used to force low-resolution chemical composition models with a run of a high-resolution model over a critical area. A one-year numerical experiment focused on the Po Valley hot spot is performed using the BOLCHEM model to assess the method.

    The results show that the model response is stable to the perturbation induced by the nudging and that, taking the high-resolution run as a reference, the performance of the nudged run improves with respect to the non-forced run. The effect outside the forcing area depends on transport and is significant in a relevant number of events, although it becomes weak on a seasonal or yearly basis.
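
    The core of any nudging scheme is a relaxation term added to the model tendency, dX/dt = F(X) + G·(X_ref − X), where G sets the relaxation strength. A toy sketch of a low-resolution variable nudged toward a high-resolution reference (the coefficient, time step, and stand-in dynamics are all assumptions):

    ```python
    import numpy as np

    G = 1.0 / 3600.0  # nudging coefficient (1/s); an assumed 1 h relaxation time
    DT = 60.0         # time step (s)
    STEPS = 600

    x = 0.0                                         # low-resolution model state
    x_ref = np.sin(np.arange(STEPS) * DT / 7200.0)  # high-resolution reference run

    for n in range(STEPS):
        tendency = -1.0e-4 * x                      # stand-in for the model dynamics F(x)
        x += DT * (tendency + G * (x_ref[n] - x))

    print(f"final state {x:.4f}, reference {x_ref[-1]:.4f}")
    ```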

  15. Full Core modeling techniques for research reactors with irregular geometries using Serpent and PARCS applied to the CROCUS reactor

    International Nuclear Information System (INIS)

    Siefman, Daniel J.; Girardin, Gaëtan; Rais, Adolfo; Pautz, Andreas; Hursin, Mathieu

    2015-01-01

    Highlights: • Modeling of research reactors. • Serpent and PARCS coupling. • Lattice physics codes modeling techniques. - Abstract: This paper summarizes the results of modeling methodologies developed for the zero-power (100 W) teaching and research reactor CROCUS located in the Laboratory for Reactor Physics and Systems Behavior (LRS) at the Swiss Federal Institute of Technology in Lausanne (EPFL). The study gives evidence that the Monte Carlo code Serpent can be used effectively as a lattice physics tool for small reactors. CROCUS' core has an irregular geometry with two fuel zones of different lattice pitches. This and the reactor's small size necessitate the use of nonstandard cross-section homogenization techniques when modeling the full core with a 3D nodal diffusion code (e.g. PARCS). The primary goal of this work is the development of these techniques for steady-state neutronics and future transient neutronics analyses of not only CROCUS, but research reactors in general. In addition, the modeling methods can provide useful insight for analyzing small modular reactor concepts based on light water technology. Static computational models of CROCUS with the codes Serpent and MCNP5 are presented, and methodologies are analyzed for using Serpent and SerpentXS to prepare macroscopic homogenized group cross-sections for a pin-by-pin model of CROCUS with PARCS. The most accurate homogenization scheme led to a difference in terms of k_eff of 385 pcm between the Serpent and PARCS models, while the MCNP5 and Serpent models differed in terms of k_eff by 13 pcm (within the statistical error of each simulation). Comparisons of the axial power profiles between the Serpent model as a reference and a set of PARCS models using different homogenization techniques showed a consistent root-mean-square deviation of ∼8%, indicating that the differences are not due to the homogenization technique but rather arise from the definition of the diffusion coefficients.

  16. Application of Tissue Culture and Transformation Techniques in Model Species Brachypodium distachyon.

    Science.gov (United States)

    Sogutmaz Ozdemir, Bahar; Budak, Hikmet

    2018-01-01

    Brachypodium distachyon has recently emerged as a model plant species for the grass family (Poaceae), which includes major cereal crops and forage grasses. One of the important traits of a model species is its capacity to be transformed and its ease of growth both in tissue culture and in greenhouse conditions. Hence, plant transformation technology is crucial for improvements in agricultural studies, both for the study of new genes and for the production of new transgenic plant species. In this chapter, we review an efficient tissue culture protocol and two different transformation systems for Brachypodium using the most commonly preferred gene transfer techniques in plant species, the microprojectile bombardment method (biolistics) and Agrobacterium-mediated transformation. In plant transformation studies, the most frequently used explant materials are immature embryos, owing to their higher transformation efficiencies and regeneration capacity. However, mature embryos are available throughout the year, in contrast to immature embryos. We explain a tissue culture protocol for Brachypodium using mature embryos with selected inbred lines from our collection. Embryogenic calluses obtained from mature embryos are used to transform Brachypodium with both plant transformation techniques, which are revised from previously studied protocols applied in the grasses, for example by applying vacuum infiltration, different wounding effects, modifications to the inoculation and cocultivation steps, or optimization of bombardment parameters.

  17. Establishment of atherosclerotic model and USPIO enhanced MRI techniques study in rabbits

    International Nuclear Information System (INIS)

    Li Yonggang; Zhu Mo; Dai Yinyu; Chen Jianhua; Guo Liang; Ni Jiankun

    2010-01-01

    Objective: To explore methods for the establishment of an atherosclerotic model and USPIO-enhanced MRI techniques in rabbits. Methods: Thirty New Zealand male rabbits were divided randomly into two groups: 20 animals in the experiment group and 10 animals in the control group. The animal model of atherosclerosis was induced by aortic balloon endothelial injury and high-fat diet feeding. There was no intervention in the control group. MRI examination included a plain scan, USPIO-enhanced black-blood sequences, and a white-blood sequence. The features of the plaques were analyzed in the experimental group, and the effects of different coils, sequences, and parameters on image quality were also analyzed statistically. Results: The animal model of atherosclerosis was successfully established in 12 rabbits, and most plaques were located in the abdominal aorta. There were 86 plaques within the scanning scope, among which 67 plaques were positive on Prussian blue staining. The image quality of the knee joint coil was better than that of the other coils. Although there was no difference in the number of AS plaques detected between the USPIO-enhanced black-blood sequences and the white-blood sequence (P > 0.05), the black-blood sequences were superior to the white-blood sequence in demonstrating the components of the plaques. Conclusion: The method of aortic balloon endothelial injury and high-fat diet feeding can easily establish the AS model in rabbits within a shorter period, and it may be used for controlling the location of the plaques. USPIO-enhanced MRI sequences have high sensitivity in the detection of AS plaques and can reveal the components of AS plaques. The optimization of MRI techniques is very important for the improvement of image quality and the detection of plaques. (authors)

  18. Analyzing the field of bioinformatics with the multi-faceted topic modeling technique.

    Science.gov (United States)

    Heo, Go Eun; Kang, Keun Young; Song, Min; Lee, Jeong-Hoon

    2017-05-31

    Bioinformatics is an interdisciplinary field at the intersection of molecular biology and computing technology. To characterize the field as a convergent domain, researchers have used bibliometrics augmented with text-mining techniques for content analysis. In previous studies, Latent Dirichlet Allocation (LDA) was the most representative topic modeling technique for identifying the topic structure of subject areas. However, as opposed to revealing the topic structure in relation to metadata such as authors, publication date, and journals, LDA only displays the simple topic structure. In this paper, we adopt Tang et al.'s Author-Conference-Topic (ACT) model to study the field of bioinformatics from the perspective of keyphrases, authors, and journals. The ACT model is capable of incorporating the paper, author, and conference into the topic distribution simultaneously. To obtain more meaningful results, we use journals and keyphrases instead of conferences and bag-of-words. For analysis, we used PubMed to collect forty-six bioinformatics journals from the MEDLINE database. We conducted a time series topic analysis over four periods from 1996 to 2015 to further examine the interdisciplinary nature of bioinformatics. We analyze the ACT model results in each period. Additionally, for further integrated analysis, we conduct a time series analysis among the top-ranked keyphrases, journals, and authors according to their frequency. We also examine the patterns in the top journals by simultaneously identifying the topical probability in each period, as well as the top authors and keyphrases. The results indicate that in recent years diversified topics have become more prevalent and convergent topics have become more clearly represented. The results of our analysis imply that over time the field of bioinformatics becomes more interdisciplinary, with a steady increase in peripheral fields such as conceptual, mathematical, and systems biology. These results are
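
    The ACT model is a research extension and is not part of standard toolkits; as a simplified stand-in, plain LDA over keyphrases can be sketched with gensim (the tiny corpus below is a placeholder for the MEDLINE records):

    ```python
    from gensim import corpora, models

    # Toy "documents" of keyphrases standing in for journal articles.
    docs = [
        ["gene", "expression", "microarray", "clustering"],
        ["protein", "structure", "prediction", "alignment"],
        ["gene", "network", "pathway", "expression"],
        ["sequence", "alignment", "protein", "motif"],
    ]

    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]

    lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                          random_state=0, passes=20)
    for topic_id, words in lda.print_topics():
        print(topic_id, words)
    ```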

  19. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can…
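
    The external bias description augments a deterministic runoff simulation with a stochastic bias term; a toy sketch using a first-order autoregressive (AR(1)) process for the systematic error, with all parameter values assumed:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    T = 200
    sim = 5.0 + 2.0 * np.exp(-np.arange(T) / 40.0)  # deterministic model output (synthetic)

    # AR(1) bias B_t = phi * B_{t-1} + eta_t, plus white observation noise.
    phi, sigma_eta, sigma_obs = 0.95, 0.1, 0.2
    bias = np.zeros(T)
    for t in range(1, T):
        bias[t] = phi * bias[t - 1] + rng.normal(0.0, sigma_eta)

    observed = sim + bias + rng.normal(0.0, sigma_obs, size=T)
    print(f"bias std {bias.std():.3f}, total residual std {(observed - sim).std():.3f}")
    ```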

  20. New Diagnostic, Launch and Model Control Techniques in the NASA Ames HFFAF Ballistic Range

    Science.gov (United States)

    Bogdanoff, David W.

    2012-01-01

    This report presents new diagnostic, launch and model control techniques used in the NASA Ames HFFAF ballistic range. High speed movies were used to view the sabot separation process and the passage of the model through the model splap paper. Cavities in the rear of the sabot, to catch the muzzle blast of the gun, were used to control sabot finger separation angles and distances. Inserts were installed in the powder chamber to greatly reduce the ullage volume (empty space) in the chamber. This resulted in much more complete and repeatable combustion of the powder and hence, in much more repeatable muzzle velocities. Sheets of paper or cardstock, impacting one half of the model, were used to control the amplitudes of the model pitch oscillations.

  1. Numerical modelling of radon-222 entry into houses: An outline of techniques and results

    DEFF Research Database (Denmark)

    Andersen, C.E.

    2001-01-01

    Numerical modelling is a powerful tool for studies of soil gas and radon-222 entry into houses. It is the purpose of this paper to review some main techniques and results. In the past, modelling has focused on Darcy flow of soil gas (driven by indoor–outdoor pressure differences) and combined diffusive and advective transport of radon. Models of different complexity have been used. The simpler ones are finite-difference models with one or two spatial dimensions. The more complex models allow for full three-dimensionality and time dependency. Advanced features include soil heterogeneity, anisotropy, fractures, moisture, non-uniform soil temperature, non-Darcy flow of gas, and flow caused by changes in the atmospheric pressure. Numerical models can be used to estimate the importance of specific factors for radon entry. Models are also helpful when results obtained in special laboratory or test structure
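
    For reference, the governing equation behind such models — combined diffusive and advective (Darcy-driven) transport of radon in soil gas, with generation and decay — can be written, with notation assumed here, as

    ```latex
    \frac{\partial c}{\partial t}
      = \nabla \cdot \left( D \, \nabla c \right)
      - \frac{1}{\varepsilon} \, \nabla \cdot \left( \vec{q} \, c \right)
      + G - \lambda c
    ```

    where c is the radon activity concentration in soil gas, D the effective diffusivity, q⃗ the Darcy flux of soil gas, ε the air-filled porosity, G the radon generation rate, and λ the radon-222 decay constant.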

  2. A stochastic delay model for pricing debt and equity: Numerical techniques and applications

    Science.gov (United States)

    Tambue, Antoine; Kemajou Brown, Elisabeth; Mohammed, Salah

    2015-01-01

    Delayed nonlinear models for pricing corporate liabilities and European options were recently developed. Using a self-financing strategy and duplication, we were able to derive a Random Partial Differential Equation (RPDE) whose solutions describe the evolution of debt and equity values of a corporate in the last delay period interval in the companion paper (Kemajou et al., 2012) [14]. In this paper, we provide robust numerical techniques to solve the delayed nonlinear model for the corporate value, along with the corresponding RPDEs modeling the debt and equity values of the corporate. Using financial data from some firms, we forecast and compare numerical solutions from both the nonlinear delayed model and the classical Merton model with the real corporate data. This comparison suggests that in corporate finance the past dependence of the firm value process may be an important feature and therefore should not be ignored.

  3. Towards representing human behavior and decision making in Earth system models - an overview of techniques and approaches

    Science.gov (United States)

    Müller-Hansen, Finn; Schlüter, Maja; Mäs, Michael; Donges, Jonathan F.; Kolb, Jakob J.; Thonicke, Kirsten; Heitzig, Jobst

    2017-11-01

    Today, humans have a critical impact on the Earth system and vice versa, which can generate complex feedback processes between social and ecological dynamics. Integrating human behavior into formal Earth system models (ESMs), however, requires crucial modeling assumptions about actors and their goals, behavioral options, and decision rules, as well as modeling decisions regarding human social interactions and the aggregation of individuals' behavior. Here, we review existing modeling approaches and techniques from various disciplines and schools of thought dealing with human behavior at different levels of decision making. We demonstrate modelers' often vast degrees of freedom but also seek to make modelers aware of the often crucial consequences of seemingly innocent modeling assumptions. After discussing which socioeconomic units are potentially important for ESMs, we compare models of individual decision making that correspond to alternative behavioral theories and that make diverse modeling assumptions about individuals' preferences, beliefs, decision rules, and foresight. We review approaches to model social interaction, covering game theoretic frameworks, models of social influence, and network models. Finally, we discuss approaches to studying how the behavior of individuals, groups, and organizations can aggregate to complex collective phenomena, discussing agent-based, statistical, and representative-agent modeling and economic macro-dynamics. We illustrate the main ingredients of modeling techniques with examples from land-use dynamics as one of the main drivers of environmental change bridging local to global scales.

  4. Development of Ultrasonic Modeling Techniques for the Study of Crustal Inhomogeneities.

    Science.gov (United States)

    1983-08-01

    layer material consisted of carnauba wax and silica powder. A 2% (by weight) amount of beeswax was added to the middle layer material to reduce the … hard carnauba wax dominates the Rayleigh velocity to a great extent; the Rayleigh … and tested to evaluate our seismic ultrasonic modeling technique. A 2.3 mm thick layer composed of the carnauba wax mixture was deposited on a

  5. Continuous Modeling Technique of Fiber Pullout from a Cement Matrix with Different Interface Mechanical Properties Using Finite Element Program

    Directory of Open Access Journals (Sweden)

    Leandro Ferreira Friedrich

    Fiber-matrix interface performance has a great influence on the mechanical properties of fiber-reinforced composites. This influence is mainly manifested during fiber pullout from the matrix. As the fiber pullout process consists of a fiber debonding stage and a pullout stage, which involve a complex contact problem, numerical modeling is the best way to investigate the interface influence. Although many numerical research works have been conducted, practical and effective techniques suitable for continuous modeling of the fiber pullout process are still scarce. The reason is that numerical divergence frequently happens, interrupting the modeling. By interfacing the popular finite element program ANSYS with MATLAB, we propose a continuous modeling technique and realize the modeling of fiber pullout from a cement matrix with the desired interface mechanical performance. For the debonding process, we use interface elements with cohesive surface traction and exponential failure behavior. For the pullout process, we switch the interface elements to spring elements with variable stiffness, which is related to the interface shear stress as a function of the interface slip displacement. For both processes, the results obtained are very good in comparison with other numerical or analytical models and experimental tests. We suggest using the present technique to model the toughening achieved by randomly distributed fibers.
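
    A minimal sketch of the two-stage interface law described above — an exponential cohesive traction during debonding, switched to a slip-dependent frictional stress during pullout. All parameter values and the post-debonding decay form are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    TAU_MAX = 4.0       # peak interface shear stress (MPa), assumed
    S0 = 0.01           # slip at peak traction (mm), assumed
    S_DEBOND = 0.05     # slip at which debonding completes (mm), assumed
    TAU_FRICTION = 0.4  # residual frictional stress (MPa), assumed near the
                        # cohesive stress at S_DEBOND for a smooth handoff

    def interface_stress(slip: float) -> float:
        """Exponential cohesive law up to debonding, then decaying friction."""
        if slip <= S_DEBOND:
            return TAU_MAX * (slip / S0) * np.exp(1.0 - slip / S0)
        return TAU_FRICTION * np.exp(-(slip - S_DEBOND) / 0.5)

    for s in (0.005, 0.01, 0.03, 0.05, 0.2):
        print(f"slip {s:.3f} mm -> tau {interface_stress(s):.3f} MPa")
    ```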

  6. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  7. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    International Nuclear Information System (INIS)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-01-01

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  8. The phase field technique for modeling multiphase materials

    Science.gov (United States)

    Singer-Loginova, I.; Singer, H. M.

    2008-10-01

    This paper reviews methods and applications of the phase field technique, one of the fastest growing areas in computational materials science. The phase field method is used as a theory and computational tool for predictions of the evolution of arbitrarily shaped morphologies and complex microstructures in materials. In this method, the interface between two phases (e.g. solid and liquid) is treated as a region of finite width having a gradual variation of different physical quantities, i.e. it is a diffuse interface model. An auxiliary variable, the phase field or order parameter φ(x), is introduced, which distinguishes one phase from the other. Interfaces are identified by the variation of the phase field. We begin with presenting the physical background of the phase field method and give a detailed thermodynamical derivation of the phase field equations. We demonstrate how equilibrium and non-equilibrium physical phenomena at the phase interface are incorporated into the phase field methods. Then we address in detail dendritic and directional solidification of pure and multicomponent alloys, effects of natural convection and forced flow, grain growth, nucleation, solid-solid phase transformation and highlight other applications of the phase field methods. In particular, we review the novel phase field crystal model, which combines atomistic length scales with diffusive time scales. We also discuss aspects of quantitative phase field modeling such as thin interface asymptotic analysis and coupling to thermodynamic databases. The phase field methods result in a set of partial differential equations, whose solutions require time-consuming large-scale computations and often limit the applicability of the method. Subsequently, we review numerical approaches to solve the phase field equations and present a finite difference discretization of the anisotropic Laplacian operator.
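
    A minimal illustration of the diffuse-interface idea: one explicit finite-difference run of the (non-conserved) Allen–Cahn equation ∂φ/∂t = M(ε²∇²φ + φ − φ³) in one dimension, with all parameters assumed for the sketch:

    ```python
    import numpy as np

    N, DX, DT = 200, 0.1, 1.0e-3
    M, EPS2 = 1.0, 0.01  # mobility and gradient-energy coefficient (assumed)

    x = np.arange(N) * DX
    phi = np.tanh((x - x.mean()) / 0.5)  # initial diffuse interface between phases -1 and +1

    for _ in range(5000):
        padded = np.pad(phi, 1, mode="edge")
        lap = (padded[:-2] - 2.0 * phi + padded[2:]) / DX ** 2  # zero-flux boundaries
        phi += DT * M * (EPS2 * lap + phi - phi ** 3)

    # Interface width: the region where phi lies between -0.9 and +0.9.
    inside = np.flatnonzero(np.abs(phi) < 0.9)
    print(f"diffuse interface spans ~{(inside.max() - inside.min()) * DX:.2f} length units")
    ```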

  9. Network modeling and analysis technique for the evaluation of nuclear safeguards systems effectiveness

    International Nuclear Information System (INIS)

    Grant, F.H. III; Miner, R.J.; Engi, D.

    1978-01-01

    Nuclear safeguards systems are concerned with the physical protection and control of nuclear materials. The Safeguards Network Analysis Procedure (SNAP) provides a convenient and standard analysis methodology for the evaluation of safeguards system effectiveness. This is achieved through a standard set of symbols which characterize the various elements of safeguards systems and an analysis program to execute simulation models built using the SNAP symbology. The reports provided by the SNAP simulation program enable analysts to evaluate existing sites as well as alternative design possibilities. This paper describes the SNAP modeling technique and provides an example illustrating its use

  10. Modeling and numerical techniques for high-speed digital simulation of nuclear power plants

    International Nuclear Information System (INIS)

    Wulff, W.; Cheng, H.S.; Mallen, A.N.

    1987-01-01

    Conventional computing methods are contrasted with newly developed high-speed and low-cost computing techniques for simulating normal and accidental transients in nuclear power plants. Six principles are formulated for cost-effective high-fidelity simulation with emphasis on modeling of transient two-phase flow coolant dynamics in nuclear reactors. Available computing architectures are characterized. It is shown that the combination of the newly developed modeling and computing principles with the use of existing special-purpose peripheral processors is capable of achieving low-cost and high-speed simulation with high-fidelity and outstanding user convenience, suitable for detailed reactor plant response analyses

  11. Network modeling and analysis technique for the evaluation of nuclear safeguards systems effectiveness

    International Nuclear Information System (INIS)

    Grant, F.H. III; Miner, R.J.; Engi, D.

    1979-02-01

    Nuclear safeguards systems are concerned with the physical protection and control of nuclear materials. The Safeguards Network Analysis Procedure (SNAP) provides a convenient and standard analysis methodology for the evaluation of safeguards system effectiveness. This is achieved through a standard set of symbols which characterize the various elements of safeguards systems and an analysis program to execute simulation models built using the SNAP symbology. The reports provided by the SNAP simulation program enable analysts to evaluate existing sites as well as alternative design possibilities. This paper describes the SNAP modeling technique and provides an example illustrating its use

  12. Comparison of Grid Nudging and Spectral Nudging Techniques for Dynamical Climate Downscaling within the WRF Model

    Science.gov (United States)

    Fan, X.; Chen, L.; Ma, Z.

    2010-12-01

    Climate downscaling has been an active research and application area in the past several decades, focusing on regional climate studies. Dynamical downscaling, in addition to statistical methods, has been widely used as advanced numerical weather and regional climate models have emerged. The use of numerical models ensures that a full set of climate variables is generated in the process of downscaling, and that these variables are dynamically consistent because they are constrained by physical laws. While generating high-resolution regional climate, the large-scale climate patterns should be retained. To serve this purpose, nudging techniques, including grid analysis nudging and spectral nudging, have been used in different models. There are studies demonstrating the benefits and advantages of each nudging technique; however, the results are sensitive to many factors, such as the nudging coefficients and the amount of information nudged to, and thus the conclusions are controversial. While a companion work develops approaches for the quantitative assessment of the downscaled climate, in this study the two nudging techniques are subjected to extensive experiments in the Weather Research and Forecasting (WRF) model. Using the same model provides fair comparability, and applying the quantitative assessments provides objectiveness of comparison. Three types of downscaling experiments were performed for a selected month. The first type serves as a baseline, in which large-scale information is communicated through the lateral boundary conditions only; the second uses grid analysis nudging; and the third uses spectral nudging. Emphasis is given to experiments with different nudging coefficients and with nudging applied to different variables in grid analysis nudging, while in spectral nudging we focus on testing the nudging coefficients and the choice of wave numbers and model levels to nudge.

  13. Sterile insect technique: A model for dose optimisation for improved sterile insect quality

    International Nuclear Information System (INIS)

    Parker, A.; Mehta, K.

    2007-01-01

    The sterile insect technique (SIT) is an environment-friendly pest control technique with application in the area-wide integrated control of key pests, including the suppression or elimination of introduced populations and the exclusion of new introductions. Reproductive sterility is normally induced by ionizing radiation, a convenient and consistent method that maintains a reasonable degree of competitiveness in the released insects. The cost and effectiveness of a control program integrating the SIT depend on the balance between sterility and competitiveness, but it appears that current operational programs with an SIT component are not achieving an appropriate balance. In this paper we discuss optimization of the sterilization process and present a simple model and procedure for determining the optimum dose. (author)
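
    A toy version of such a dose-optimization model: induced sterility rises with dose while competitiveness falls, so the effectiveness of released males, E(D) = C(D)·S(D), has an interior maximum. Both functional forms and all constants below are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    doses = np.linspace(0.0, 200.0, 2001)      # dose (Gy)
    sterility = 1.0 - np.exp(-0.05 * doses)    # assumed induced-sterility curve S(D)
    competitiveness = np.exp(-0.006 * doses)   # assumed competitiveness decay C(D)

    effectiveness = competitiveness * sterility
    d_opt = doses[np.argmax(effectiveness)]
    print(f"optimum dose ~{d_opt:.1f} Gy, peak effectiveness {effectiveness.max():.3f}")
    ```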

  14. Mathematical Foundation Based Inter-Connectivity modelling of Thermal Image processing technique for Fire Protection

    Directory of Open Access Journals (Sweden)

    Sayantan Nath

    2015-09-01

    In this paper, integration between multiple functions of image processing and their statistical parameters for an intelligent alarming-series-based fire detection system is presented. The proper inter-connectivity mapping between processing elements of imagery, based on a classification factor for temperature monitoring and a multilevel intelligent alarm sequence, is introduced by an abstractive canonical approach. The flow of image processing components between the core implementation of an intelligent alarming system with temperature-wise area segmentation as well as boundary detection is not yet fully explored in the present era of thermal imaging. In the light of an analytical perspective on convolutive functionalism in thermal imaging, the abstract-algebra-based inter-mapping model between event-calculus-supported DAGSVM classification for step-by-step generation of alarm series with a gradual monitoring technique and segmentation of regions with affected boundaries in a thermographic image of coal with respect to temperature distinctions is discussed. The connectedness of the multifunctional image processing operations of a compatible fire protection system with a proper monitoring sequence is investigated here. The mathematical models representing the relation between the temperature-affected areas and their boundaries in the obtained thermal image, defined in partial-derivative fashion, are the core contribution of this study. The thermal image of a coal sample was obtained in a real-life scenario with a self-assembled thermographic camera. The amalgamation of area segmentation, boundary detection, and alarm series is described in abstract algebra. The principal objective of this paper is to understand the dependency pattern and working principles of the image processing components and to structure an inter-connected modelling technique for those components with the help of a mathematical foundation.

  15. Tsunami Hazard Preventing Based Land Use Planning Model Using GIS Techniques in Muang Krabi, Thailand

    Directory of Open Access Journals (Sweden)

    Abdul Salam Soomro

    2012-10-01

    The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the ecotourist and very fascinating provinces of southern Thailand, as well as other regions such as Phangnga and Phuket, devastating human lives, coastal communities, and economic activities. This research study aims to generate a tsunami-hazard-prevention-based land use planning model using GIS (Geographical Information Systems), based on the hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, stream, and road, have been used based on the land use zoning criteria. Those criteria have been weighted using the Saaty scale of importance, one of the mathematical techniques. The model has been classified according to the land suitability classification. The various techniques of GIS, namely subsetting, spatial analysis, map difference, and data conversion, have been used. The model has been generated with five categories, namely high, moderate, low, very low, and not suitable regions, each illustrated with an appropriate definition for decision makers to redevelop the region.

  16. Tsunami hazard preventing based land use planning model using GIS technique in Muang Krabi, Thailand

    International Nuclear Information System (INIS)

    Soormo, A.S.

    2012-01-01

    The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the ecotourist and very fascinating provinces of southern Thailand, as well as other regions such as Phangnga and Phuket, devastating human lives, coastal communities, and economic activities. This research study aims to generate a tsunami-hazard-prevention-based land use planning model using GIS (Geographical Information Systems), based on the hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, stream, and road, have been used based on the land use zoning criteria. Those criteria have been weighted using the Saaty scale of importance, one of the mathematical techniques. The model has been classified according to the land suitability classification. The various techniques of GIS, namely subsetting, spatial analysis, map difference, and data conversion, have been used. The model has been generated with five categories, namely high, moderate, low, very low, and not suitable regions, each illustrated with an appropriate definition for decision makers to redevelop the region. (author)

  17. DEVELOPMENT OF RESERVOIR CHARACTERIZATION TECHNIQUES AND PRODUCTION MODELS FOR EXPLOITING NATURALLY FRACTURED RESERVOIRS

    Energy Technology Data Exchange (ETDEWEB)

    Michael L. Wiggins; Raymon L. Brown; Faruk Civan; Richard G. Hughes

    2002-12-31

    For many years, geoscientists and engineers have undertaken research to characterize naturally fractured reservoirs. Geoscientists have focused on understanding the process of fracturing and the subsequent measurement and description of fracture characteristics. Engineers have concentrated on the fluid flow behavior in the fracture-porous media system and the development of models to predict the hydrocarbon production from these complex systems. This research attempts to integrate these two complementary views to develop a quantitative reservoir characterization methodology and flow performance model for naturally fractured reservoirs. The research has focused on estimating naturally fractured reservoir properties from seismic data, predicting fracture characteristics from well logs, and developing a naturally fractured reservoir simulator. It is important to develop techniques that can be applied to estimate the important parameters in predicting the performance of naturally fractured reservoirs. This project proposes a method to relate seismic properties to the elastic compliance and permeability of the reservoir based upon a sugar cube model. In addition, methods are presented to use conventional well logs to estimate localized fracture information for reservoir characterization purposes. The ability to estimate fracture information from conventional well logs is very important in older wells where data are often limited. Finally, a desktop naturally fractured reservoir simulator has been developed for the purpose of predicting the performance of these complex reservoirs. The simulator incorporates vertical and horizontal wellbore models, methods to handle matrix to fracture fluid transfer, and fracture permeability tensors. This research project has developed methods to characterize and study the performance of naturally fractured reservoirs that integrate geoscience and engineering data. This is an important step in developing exploitation strategies for

  18. The need for novel model order reduction techniques in the electronics industry (Chapter 1)

    NARCIS (Netherlands)

    Schilders, W.H.A.; Benner, P.; Hinze, M.; Maten, ter E.J.W.

    2011-01-01

    In this paper, we discuss the present and future needs of the electronics industry with regard to model order reduction. The industry has always been one of the main motivating fields for the development of MOR techniques, and continues to play this role. We discuss the search for provably passive model order reduction techniques.

  19. Minor isotope safeguards techniques (MIST): Analysis and visualization of gas centrifuge enrichment plant process data using the MSTAR model

    Science.gov (United States)

    Shephard, Adam M.; Thomas, Benjamin R.; Coble, Jamie B.; Wood, Houston G.

    2018-05-01

    This paper presents a development related to the use of minor isotope safeguards techniques (MIST) and the MSTAR cascade model as it relates to the application of international nuclear safeguards at gas centrifuge enrichment plants (GCEPs). The product of this paper is a derivation of the universal and dimensionless MSTAR cascade model. The new model can be used to calculate the minor uranium isotope concentrations in GCEP product and tails streams or to analyze, visualize, and interpret GCEP process data as part of MIST. Applications of the new model include the detection of undeclared feed and withdrawal streams at GCEPs when used in conjunction with UF6 sampling and/or other isotopic measurement techniques.

  20. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    Science.gov (United States)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    One of the topics of greatest interest to investors is stock price changes. Investors with long-term goals are sensitive to stock prices and their changes and react to them. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices. The MARS model is an adaptive nonparametric regression method well suited to high-dimensional problems with several variables. The semi-parametric technique used here is smoothing splines, a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) to predict stock prices with both approaches. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock price with the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.
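
    The smoothing-spline half of the approach can be sketched with SciPy's UnivariateSpline; the data below are a synthetic stand-in for one accounting variable against price, and the smoothing level is a simple heuristic:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)

# Synthetic stand-in for one accounting variable (e.g. book value
# per share) against stock price; the relationship is invented.
x = np.sort(rng.uniform(0.0, 10.0, 80))
price = 2.0 * np.sqrt(x) + rng.normal(0.0, 0.3, x.size)

# Smoothing spline: s trades fidelity against smoothness
# (s = 0 interpolates; a common heuristic is m * noise variance).
spline = UnivariateSpline(x, price, s=x.size * 0.3**2)

grid = np.linspace(0.0, 10.0, 5)
print(np.round(spline(grid), 2))
```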

  1. Fault Tree Analysis with Temporal Gates and Model Checking Technique for Qualitative System Safety Analysis

    International Nuclear Information System (INIS)

    Koh, Kwang Yong; Seong, Poong Hyun

    2010-01-01

    Fault tree analysis (FTA) has been one of the most widely used safety analysis techniques in the nuclear industry, but it suffers from several drawbacks: it uses only static gates and hence cannot precisely capture the dynamic behaviors of complex systems; it lacks rigorous semantics; and the reasoning process of checking whether basic events really cause top events is done manually, which is very labor-intensive and time-consuming for complex systems. Although several attempts have been made to overcome these problems, they still cannot do absolute (actual) time modeling because they adopt a relative time concept and can capture only sequential behaviors of the system. In this work, to resolve these problems, FTA and model checking are integrated to provide formal, automated and qualitative assistance to informal and/or quantitative safety analysis. Our approach proposes to build a formal model of the system together with fault trees. We introduce several temporal gates based on timed computation tree logic (TCTL) to capture absolute-time behaviors of the system and to give concrete semantics to fault tree gates, reducing errors during the analysis, and use the model checking technique to automate the reasoning process of FTA
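
    For orientation, here is only the classical static-gate part of FTA: top-event probability evaluation for independent basic events. The temporal TCTL gates of the paper are beyond a small sketch, and the tree and probabilities below are hypothetical:

```python
from functools import reduce

# Minimal static fault tree: a node is either a basic-event
# probability (float) or a gate ("AND" | "OR", [children]).
# Assumes statistically independent basic events.
def top_event_probability(node):
    if isinstance(node, float):
        return node
    kind, children = node
    ps = [top_event_probability(child) for child in children]
    if kind == "AND":
        return reduce(lambda a, b: a * b, ps, 1.0)
    if kind == "OR":   # 1 - product of complements
        return 1.0 - reduce(lambda a, b: a * (1.0 - b), ps, 1.0)
    raise ValueError(f"unknown gate: {kind}")

# Hypothetical tree: TOP = OR(AND(e1, e2), e3).
tree = ("OR", [("AND", [0.01, 0.02]), 0.001])
print(top_event_probability(tree))   # 0.0011998
```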

  2. Review of the phenomenon of fluidization and its numerical modelling techniques

    Directory of Open Access Journals (Sweden)

    H Khawaja

    2016-10-01

    The paper introduces the phenomenon of fluidization as a process. Fluidization occurs when a fluid (liquid or gas) is pushed upwards through a bed of granular material. This may make the granular material behave like a liquid and, for example, keep a level meniscus in a tilted container, or make a lighter object float on top and a heavier object sink to the bottom. The behavior of the granular material, when fluidized, depends on the superficial gas velocity, particle size, particle density, and fluid properties, resulting in various regimes of fluidization. These regimes are discussed in detail in the paper. The paper also discusses the applications of fluidized beds, from their early use in the Winkler coal gasifier to more recent applications in the manufacturing of carbon nanotubes. In addition, the Geldart grouping based on the range of particle sizes is discussed. The minimum fluidization condition is defined, and it is demonstrated that it may register slightly differently when particles are being fluidized or de-fluidized. The paper presents a discussion of three numerical modelling techniques: the two-fluid model, the unresolved fluid-particle model and the resolved fluid-particle model. The two-fluid model is often referred to as the Eulerian-Eulerian method of solution and treats the particles as well as the fluid as a continuum. The unresolved and resolved fluid-particle models are based on the Eulerian-Lagrangian method of solution. The key difference between them is whether to use a drag correlation or to resolve the boundary layer around the particles. The paper ends with a discussion of the applicability of these models.
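
    The minimum fluidization condition mentioned above is commonly estimated from the Archimedes number with the Wen and Yu correlation; a small sketch follows, with illustrative particle and gas properties that are not taken from the paper:

```python
import numpy as np

def u_mf_wen_yu(d_p, rho_p, rho_g, mu_g, g=9.81):
    """Minimum fluidization velocity from the Wen & Yu correlation:
    Re_mf = sqrt(33.7**2 + 0.0408 * Ar) - 33.7."""
    ar = rho_g * (rho_p - rho_g) * g * d_p**3 / mu_g**2   # Archimedes number
    re_mf = np.sqrt(33.7**2 + 0.0408 * ar) - 33.7
    return re_mf * mu_g / (rho_g * d_p)

# Illustrative case: 500-micron sand (Geldart group B) in ambient air.
print(f"{u_mf_wen_yu(d_p=500e-6, rho_p=2600.0, rho_g=1.2, mu_g=1.8e-5):.3f} m/s")
```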

  3. Comparison of Analysis and Spectral Nudging Techniques for Dynamical Downscaling with the WRF Model over China

    OpenAIRE

    Ma, Yuanyuan; Yang, Yi; Mai, Xiaoping; Qiu, Chongjian; Long, Xiao; Wang, Chenghai

    2016-01-01

    To overcome the problem that the horizontal resolution of global climate models may be too low to resolve features which are important at the regional or local scales, dynamical downscaling has been extensively used. However, dynamical downscaling results generally drift away from the large-scale driving fields. The nudging technique can be used to balance the performance of dynamical downscaling at large and small scales, but the performances of the two nudging techniques (analysis nudging and spectral nudging) ...
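
    The core of analysis nudging is a Newtonian relaxation term added to the model tendency, pulling the downscaled state back toward the driving fields. The toy scalar sketch below illustrates the idea only and is not WRF's implementation; the drift and relaxation time are invented:

```python
# Newtonian relaxation (nudging): dx/dt = f(x) - (x - x_ref) / tau.
# Toy scalar "model" whose tendency drifts away from a large-scale
# reference state x_ref; nudging keeps the drift bounded.
def step(x, x_ref, dt, tau, drift=5e-5):
    tendency = drift - (x - x_ref) / tau   # model tendency + nudging term
    return x + dt * tendency

x, x_ref = 0.0, 0.0
dt, tau = 60.0, 3600.0        # 60 s time step, 1 h relaxation time
for _ in range(240):          # integrate 4 h
    x = step(x, x_ref, dt, tau)

# Without nudging x would reach drift * t = 0.72 after 4 h; with
# nudging it saturates near drift * tau = 0.18.
print(round(x, 3))
```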

  4. Analysis and optimization of a proton exchange membrane fuel cell using modeling techniques

    International Nuclear Information System (INIS)

    Torre Valdés, Ing. Raciel de la; García Parra, MSc. Lázaro Roger; González Rodríguez, MSc. Daniel

    2015-01-01

    This paper proposes a three-dimensional, non-isothermal, steady-state model of a Proton Exchange Membrane Fuel Cell using Computational Fluid Dynamics techniques, specifically ANSYS FLUENT 14.5. Multicomponent diffusion and two-phase flow are considered. The model was compared with published experimental data and with another model. The following operating parameters are analyzed: reactant pressure and temperature, gas flow direction, gas diffusion layer and catalyst layer porosity, reactant humidification and oxygen concentration. The model allows optimization of the fuel cell design, taking into consideration the channel dimensions, the channel length and the membrane thickness. Furthermore, fuel cell performance is analyzed with a SPEEK membrane, an alternative electrolyte to Nafion. In order to carry out the membrane material study, it is necessary to modify the expression that describes the ionic conductivity of the electrolyte. The device performance is found to be highly sensitive to variations in pressure, temperature, reactant humidification and oxygen concentration. (author)

  5. Dynamical symmetry breaking in the Jackiw-Johnson model and the gauge technique

    International Nuclear Information System (INIS)

    Singh, J.P.

    1984-01-01

    The Jackiw-Johnson model of dynamical gauge symmetry breaking has been re-examined in the light of the gauge technique. In the limit where the ratio of the axial to vector coupling constants becomes small, or, consistently, in the limit where the ratio of the axial-vector-boson mass to the fermion mass becomes small, an approximate solution for the fermion spectral function has been derived. This gives an extremely small ratio of the axial-vector-boson mass to the fermion mass. (author)

  6. The Development and Application of Reactive Transport Modeling Techniques to Study Radionuclide Migration at Yucca Mountain, NV

    International Nuclear Information System (INIS)

    Hari Selvi Viswanathan

    1999-01-01

    Yucca Mountain, Nevada has been chosen as a possible site for the first high level radioactive waste repository in the United States. As part of the site investigation studies, we need to make scientifically rigorous estimations of radionuclide migration in the event of a repository breach. Performance assessment models used to make these estimations are computationally intensive. We have developed two reactive transport modeling techniques to simulate radionuclide transport at Yucca Mountain: (1) the selective coupling approach applied to the convection-dispersion-reaction (CDR) model and (2) a reactive stream tube approach (RST). These models were designed to capture the important processes that influence radionuclide migration while being computationally efficient. The conventional method of modeling reactive transport is to solve a coupled set of multi-dimensional partial differential equations for the relevant chemical components in the system. We have developed an iterative solution technique, denoted the selective coupling method, that represents a versatile alternative to traditional uncoupled iterative techniques and the fully coupled global implicit method. We show that selective coupling results in computational and memory savings relative to these approaches. We develop RST as an alternative to the CDR method for solving large two- or three-dimensional reactive transport simulations for cases in which one is interested in predicting the flux across a specific control plane. In the RST method, the multidimensional problem is reduced to a series of one-dimensional transport simulations along streamlines. The key assumption with RST is that mixing at the control plane approximates the transverse dispersion between streamlines. We compare the CDR and RST approaches for several scenarios that are relevant to the Yucca Mountain Project. For example, we apply the CDR and RST approaches to model an ongoing field experiment called the Unsaturated Zone

  7. Managerial Techniques in Educational Administration.

    Science.gov (United States)

    Lane, John J.

    1983-01-01

    Management techniques developed during the past 20 years assume the rational bureaucratic model. School administration requires contingent techniques. Quality Circles, Theory Z, and the McKinsey 7-S Framework are discussed as techniques to increase school productivity. (MD)

  8. Application of modelling techniques in the food industry: determination of shelf-life for chilled foods

    NARCIS (Netherlands)

    Membré, J.M.; Johnston, M.D.; Bassett, J.; Naaktgeboren, G.; Blackburn, W.; Gorris, L.G.M.

    2005-01-01

    Microbiological modelling techniques (predictive microbiology, the Bayesian Markov Chain Monte Carlo method and a probability risk assessment approach) were combined to assess the shelf-life of an in-pack heat-treated, low-acid sauce intended to be marketed under chilled conditions. From a safety

  9. The commercial use of segmentation and predictive modeling techniques for database marketing in the Netherlands

    NARCIS (Netherlands)

    Verhoef, PC; Spring, PN; Hoekstra, JC; Leeflang, PSH

    Although the application of segmentation and predictive modeling is an important topic in the database marketing (DBM) literature, no study has yet investigated the extent of adoption of these techniques. We present the results of a Dutch survey involving 228 database marketing companies. We find

  10. Quantification of intervertebral displacement with a novel MRI-based modeling technique: Assessing measurement bias and reliability with a porcine spine model.

    Science.gov (United States)

    Mahato, Niladri K; Montuelle, Stephane; Goubeaux, Craig; Cotton, John; Williams, Susan; Thomas, James; Clark, Brian C

    2017-05-01

    The purpose of this study was to develop a novel magnetic resonance imaging (MRI)-based modeling technique for measuring intervertebral displacements. Here, we present the measurement bias and reliability of the developmental work using a porcine spine model. Porcine lumbar vertebral segments were fitted in a custom-built apparatus placed within an externally calibrated imaging volume of an open-MRI scanner. The apparatus allowed movement of the vertebrae through pre-assigned magnitudes of sagittal and coronal translation and rotation. The induced displacements were imaged with static (T1) and fast dynamic (2D HYCE S) pulse sequences. These images were imported into animation software, in which they formed a background 'scene'. Three-dimensional models of vertebrae were created using static axial scans from the specimen and then transferred into the animation environment. In the animation environment, the user manually moved the models (rotoscoping) to perform model-to-'scene' matching to fit the models to their image silhouettes and assigned anatomical joint axes to the motion-segments. The animation protocol quantified the experimental translation and rotation displacements between the vertebral models. Accuracy of the technique was calculated as 'bias' using a linear mixed effects model, average percentage error and root mean square errors. Between-session reliability was examined by computing intra-class correlation coefficients (ICC) and coefficients of variation (CV). For translation trials, a constant bias (β0) of 0.35 (±0.11) mm was detected for the 2D HYCE S sequence (p=0.01). The model did not demonstrate significant additional bias with each mm increase in experimental translation (β1=0.01 mm; p=0.69). Using the T1 sequence for the same assessments did not significantly change the bias (p>0.05). ICC values for the T1 and 2D HYCE S pulse sequences were 0.98 and 0.97, respectively. For rotation trials, a constant bias (

  11. A Temporal Millimeter Wave Propagation Model for Tunnels Using Ray Frustum Techniques and FFT

    Directory of Open Access Journals (Sweden)

    Choonghyen Kwon

    2014-01-01

    A temporal millimeter wave propagation model for tunnels is presented using ray frustum techniques and the fast Fourier transform (FFT). To directly estimate or simulate the effects of millimeter wave channel properties on the performance of communication services, time domain impulse responses of demodulated signals should be obtained, which requires rather long computation times. To mitigate the computational burden, ray frustum techniques are used to obtain the frequency domain transfer function of the millimeter wave propagation environment, and FFTs of equivalent low pass signals are used to retrieve demodulated waveforms. This approach is numerically efficient and helps to directly estimate the impact of tunnel structures and surface roughness on the performance of millimeter wave communication services.
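
    The final retrieval step can be sketched in NumPy: assemble a frequency-domain transfer function from a few ray contributions and inverse-FFT it to the time-domain impulse response. The gains and delays below are hypothetical, and the delays are placed exactly on the FFT time grid to avoid leakage in this illustration:

```python
import numpy as np

# Equivalent low-pass frequency grid: n bins of spacing df give a
# delay resolution of 1/(n*df) ~ 0.98 ns and a 1/df = 1 us window.
n, df = 1024, 1e6
f = np.arange(n) * df

# Hypothetical ray (frustum) contributions: complex gain + delay.
gains = np.array([1.0, 0.4, 0.2])
delays = np.array([0.0, 62.5e-9, 156.25e-9])   # seconds, on the FFT grid

# Frequency-domain transfer function H(f) = sum_k g_k exp(-j 2 pi f tau_k).
H = (gains * np.exp(-2j * np.pi * f[:, None] * delays)).sum(axis=1)

# Inverse FFT retrieves the band-limited channel impulse response.
h = np.fft.ifft(H)
t_res = 1.0 / (n * df)
peaks = np.sort(np.argsort(np.abs(h))[-3:]) * t_res
print(peaks)   # ~ [0.0, 6.25e-08, 1.5625e-07] s
```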

  12. Statistical techniques for modeling extreme price dynamics in the energy market

    International Nuclear Information System (INIS)

    Mbugua, L N; Mwita, P N

    2013-01-01

    Extreme events have a large impact throughout the span of engineering, science and economics, because they often lead to failure and losses, and because of the unobservable nature of extraordinary occurrences. In this context, this paper focuses on appropriate statistical methods combining a quantile regression approach with extreme value theory to model the excesses. This plays a vital role in risk management. Locally, nonparametric quantile regression is used, a method that is flexible and best suited when one knows little about the functional form of the object being estimated. Conditions are derived in order to estimate the extreme value distribution function. The threshold model of extreme values is used to circumvent the problem of inadequate observations at the tail of the distribution function. The application of a selection of these techniques is demonstrated on the volatile fuel market. The results indicate that the method used can extract the maximum possible reliable information from the data. The key attraction of this method is that it offers a set of ready-made approaches to the most difficult problem of risk modeling.
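
    A minimal peaks-over-threshold sketch with SciPy follows; the synthetic heavy-tailed series stands in for the fuel-price data, and the threshold choice is illustrative:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)

# Synthetic heavy-tailed series standing in for fuel-price returns.
returns = rng.standard_t(df=4, size=5000)

# Peaks-over-threshold: excesses above a high empirical quantile.
u = np.quantile(returns, 0.95)
excesses = returns[returns > u] - u

# Fit the generalized Pareto distribution to the excesses
# (location fixed at 0, as is standard for POT).
xi, _, beta = genpareto.fit(excesses, floc=0)

# Extreme quantile of the original series, e.g. the 99.9% level:
# P(X > x) = p_u * (1 - F_GPD(x - u)).
p_u = (returns > u).mean()
q_999 = u + genpareto.ppf(1.0 - 0.001 / p_u, xi, loc=0, scale=beta)
print(f"xi = {xi:.2f}, beta = {beta:.2f}, 99.9% quantile = {q_999:.2f}")
```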

  13. Learning-based computing techniques in geoid modeling for precise height transformation

    Science.gov (United States)

    Erol, B.; Erol, S.

    2013-03-01

    Precise determination of the local geoid is of particular importance for establishing height control in geodetic GNSS applications, since the classical leveling technique is too laborious. A geoid model can be accurately obtained from properly distributed benchmarks having both GNSS and leveling observations, using an appropriate computing algorithm. Besides the classical multivariable polynomial regression equations (MPRE), this study evaluates learning-based computing algorithms in geoid surface approximation: artificial neural networks (ANNs), the adaptive network-based fuzzy inference system (ANFIS) and especially the wavelet neural networks (WNNs) approach. These algorithms were developed in parallel with advances in computer technologies and have recently been used for solving complex nonlinear problems in many applications. However, they are rather new in dealing with the precise modeling problem of the Earth's gravity field. In the scope of the study, these methods were applied to Istanbul GPS Triangulation Network data. The performances of the methods were assessed considering the validation results of the geoid models at the observation points. In conclusion, ANFIS and WNN revealed higher prediction accuracies compared to the ANN and MPRE methods. Besides their prediction capabilities, these methods were also compared and discussed from a practical point of view in the conclusions.
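
    A small sketch of the ANN variant with scikit-learn follows; the benchmarks are synthetic points on an invented geoid surface, and ANFIS or wavelet networks would need third-party packages:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Synthetic GNSS/leveling benchmarks: (latitude, longitude) -> geoid
# undulation N [m]; the surface and ranges are illustrative only.
lat = rng.uniform(40.8, 41.3, 300)
lon = rng.uniform(28.5, 29.5, 300)
N = (36.0 + 0.8 * np.sin(3.0 * lat) + 0.5 * np.cos(4.0 * lon)
     + rng.normal(0.0, 0.02, 300))

X = np.column_stack([lat, lon])
X_tr, X_te, y_tr, y_te = train_test_split(X, N, test_size=0.25,
                                          random_state=0)

# A small ANN as the geoid-surface approximator.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 20),
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
rmse = float(np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2)))
print(f"validation RMSE: {rmse:.3f} m")
```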

  14. Using an inverse modelling approach to evaluate the water retention in a simple water harvesting technique

    Directory of Open Access Journals (Sweden)

    K. Verbist

    2009-10-01

    In arid and semi-arid zones, runoff harvesting techniques are often applied to increase water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Nevertheless, few efforts have been made to quantify the water harvesting processes of these techniques and to evaluate their efficiency. In this study, a combination of detailed field measurements and modelling with the HYDRUS-2D software package was used to visualize the effect of an infiltration trench on the soil water content of a bare slope in northern Chile. Rainfall simulations were combined with high spatial and temporal resolution water content monitoring in order to construct a useful dataset for inverse modelling purposes. Initial estimates of model parameters were provided by detailed infiltration and soil water retention measurements. Four different measurement techniques were used to determine the saturated hydraulic conductivity (Ksat) independently. The tension infiltrometer measurements proved a good estimator of the Ksat value and a proxy for those measured under simulated rainfall, whereas the pressure and constant head well infiltrometer measurements showed larger variability. Six different parameter optimization functions were tested as combinations of soil water content, water retention and cumulative infiltration data. Infiltration data alone proved insufficient to obtain high model accuracy, due to large scatter in the data set, and water content data were needed to obtain optimized effective parameter sets with small confidence intervals. Correlation between the observed soil water content and the simulated values was as high as R2=0.93 for the ten selected observation points used in the model calibration phase, with overall correlation for the 22 observation points equal to 0.85. The model results indicate that the infiltration trench has a

  15. Lattice Boltzmann flow simulations with applications of reduced order modeling techniques

    KAUST Repository

    Brown, Donald

    2014-01-01

    With the recent interest in shale gas, an understanding of the flow mechanisms at the pore scale and beyond is necessary, which has attracted a lot of interest from both industry and academia. One of the suggested algorithms to help understand flow in such reservoirs is the Lattice Boltzmann Method (LBM). The primary advantage of LBM is its ability to approximate complicated geometries with simple algorithmic modifications. In this work, we use LBM to simulate the flow in a porous medium. More specifically, we use LBM to simulate a Brinkman type flow. The Brinkman law allows us to integrate fast free-flow and slow-flow porous regions. However, due to the many scales involved and the complex heterogeneities of the rock microstructure, the simulation times can be long, even with the speed advantage of using an explicit time stepping method. The problem is two-fold: the computational grid must be able to resolve all scales, and the calculation requires a steady state solution, implying a large number of timesteps. To help reduce the computational complexity and total simulation times, we use model reduction techniques to reduce the dimension of the system. In this approach, we are able to describe the dynamics of the flow by using a lower dimensional subspace. In this work, we utilize the Proper Orthogonal Decomposition (POD) technique to compute the dominant modes of the flow and project the solution onto them (a lower dimensional subspace) to arrive at an approximation of the full system at a lower computational cost. We present a few proof-of-concept examples of the flow field and the corresponding reduced model flow field.
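
    The POD step itself reduces to a thin SVD of a snapshot matrix; here is a self-contained sketch with synthetic snapshots rather than LBM output:

```python
import numpy as np

rng = np.random.default_rng(3)

# Snapshot matrix S: each column is a flow state (e.g. an LBM velocity
# field flattened to a vector) at one time step. Synthetic data here:
# three coherent structures plus small noise.
n_dof, n_snap = 2000, 100
t = np.linspace(0.0, 1.0, n_snap)
structures = rng.normal(size=(n_dof, 3))
amplitudes = np.vstack([np.sin(2 * np.pi * t), np.cos(4 * np.pi * t), t])
S = structures @ amplitudes + 1e-3 * rng.normal(size=(n_dof, n_snap))

# POD via thin SVD: left singular vectors are the dominant modes.
U, s, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes for 99.9% energy
Phi = U[:, :r]                                # reduced basis

# Project onto the reduced basis: r x n_snap instead of n_dof x n_snap.
S_red = Phi.T @ S
rel_err = np.linalg.norm(S - Phi @ S_red) / np.linalg.norm(S)
print(r, f"{rel_err:.2e}")
```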

  16. Mathematical Model and Artificial Intelligent Techniques Applied to a Milk Industry through DSM

    Science.gov (United States)

    Babu, P. Ravi; Divya, V. P. Sree

    2011-08-01

    The resources for electrical energy are depleting, and hence the gap between supply and demand is continuously increasing. Under such circumstances, the option left is optimal utilization of available energy resources. The main objective of this chapter is to discuss peak load management and how to overcome the problems associated with it in processing industries, such as the milk industry, with the help of DSM techniques. The chapter presents a generalized mathematical model for minimizing the total operating cost of the industry subject to constraints. The work presented in this chapter also reports the results of applying Neural Network, Fuzzy Logic and Demand Side Management (DSM) techniques to a medium-scale milk industrial consumer in India to achieve an improvement in load factor, a reduction in Maximum Demand (MD) and savings in the consumer's energy bill.

  17. Characterization and modelling techniques for gas metal arc welding of DP 600 sheet steels

    Energy Technology Data Exchange (ETDEWEB)

    Mukherjee, K.; Prahl, U.; Bleck, W. [RWTH Aachen University, Department of Ferrous Metallurgy (IEHK) (Germany); Reisgen, U.; Schleser, M.; Abdurakhmanov, A. [RWTH Aachen University, Welding and Joining Institute (ISF) (Germany)

    2010-11-15

    The objectives of the present work are to characterize the Gas Metal Arc Welding process for DP 600 sheet steel and to summarize the modelling techniques. The time-temperature evolution during the welding cycle was measured experimentally and modelled with the software tool SimWeld. To model the phase transformations during the welding cycle, dilatometer tests were done to quantify the parameters for phase field modelling with MICRESS®. The important input parameters are interface mobility, nucleation density, etc. A contribution was made to include the austenite-to-bainite transformation in MICRESS®. This is useful for predicting the microstructure in the fast cooling segments. The phase transformation model is capable of predicting the microstructure along the heating and cooling cycles of welding. Tensile tests have shown evidence of failure at the heat affected zone, which has a ferrite-tempered martensite microstructure. (orig.)

  18. Modeling, Control and Analyze of Multi-Machine Drive Systems using Bond Graph Technique

    Directory of Open Access Journals (Sweden)

    J. Belhadj

    2006-03-01

    In this paper, a system-viewpoint method is investigated to study and analyze complex systems using the Bond Graph technique. These systems are multi-machine, multi-inverter systems based on the Induction Machine (IM), widely used in industries like rolling mills, textiles, and railway traction. These systems are multi-domain and multi-time-scale and present very strong internal and external couplings, with nonlinearity, characterized by a high model order. Classical study with analytic models is difficult and limited in the performance aspects it can capture. In this study, a 'systemic approach' is presented to design these kinds of systems, using an energetic representation based on the Bond Graph formalism. Three types of multi-machine systems are studied with their control strategies. The modeling is carried out with Bond Graphs, and results are discussed to show the performance of this methodology

  19. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses

    NARCIS (Netherlands)

    Kuiper, Rebecca M.; Nederhoff, Tim; Klugkist, Irene

    2015-01-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is

  20. Mixed Models and Reduction Techniques for Large-Rotation, Nonlinear Analysis of Shells of Revolution with Application to Tires

    Science.gov (United States)

    Noor, A. K.; Andersen, C. M.; Tanner, J. A.

    1984-01-01

    An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.

  1. Subsurface stormflow modeling with sensitivity analysis using a Latin-hypercube sampling technique

    International Nuclear Information System (INIS)

    Gwo, J.P.; Toran, L.E.; Morris, M.D.; Wilson, G.V.

    1994-09-01

    Subsurface stormflow, because of its dynamic and nonlinear features, has been a very challenging process in both field experiments and modeling studies. The disposal of wastes in subsurface stormflow and vadose zones at Oak Ridge National Laboratory, however, demands more effort to characterize these flow zones and to study their dynamic flow processes. Field data and modeling studies for these flow zones are relatively scarce, and the effect of engineering designs on the flow processes is poorly understood. On the basis of a risk assessment framework and a conceptual model for the Oak Ridge Reservation area, numerical models of a proposed waste disposal site were built, and a Latin-hypercube simulation technique was used to study the uncertainty of model parameters. Four scenarios, with three engineering designs, were simulated, and the effectiveness of the engineering designs was evaluated. Sensitivity analysis of model parameters suggested that hydraulic conductivity was the most influential parameter. However, local heterogeneities may alter flow patterns and result in complex recharge and discharge patterns. Hydraulic conductivity, therefore, may not be used as the only reference for subsurface flow monitoring and engineering operations. Neither of the two engineering designs, capping and French drains, was found to be effective in hydrologically isolating downslope waste trenches. However, pressure head contours indicated that combinations of both designs may prove more effective than either one alone
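
    A minimal sketch of Latin-hypercube sensitivity screening with SciPy's qmc module follows; the parameter ranges and the scalar response function are invented stand-ins for the site model:

```python
import numpy as np
from scipy.stats import qmc, spearmanr

# Latin-hypercube sample of three uncertain soil parameters; the
# ranges are invented, not the values used for the Oak Ridge site.
sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=100)
lo = [1e-6, 0.30, 0.05]   # K_sat [m/s], porosity, vG alpha [1/cm]
hi = [1e-4, 0.50, 0.50]
params = qmc.scale(unit, lo, hi)

# Hypothetical scalar response (e.g. peak discharge to a trench);
# a real study would run the subsurface flow model here instead.
def response(p):
    ksat, porosity, alpha = p
    return np.log10(ksat) + 0.5 * porosity - 0.2 * alpha

y = np.apply_along_axis(response, 1, params)

# Rank-based sensitivity: Spearman correlation of inputs with output.
for name, column in zip(["K_sat", "porosity", "alpha"], params.T):
    rho, _ = spearmanr(column, y)
    print(f"{name:9s} rho = {rho:+.2f}")
```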

  2. A predictive model for dysphagia following IMRT for head and neck cancer: Introduction of the EMLasso technique

    International Nuclear Information System (INIS)

    Kim, De Ruyck; Duprez, Fréderic; Werbrouck, Joke; Sabbe, Nick; Sofie, De Langhe; Boterberg, Tom; Madani, Indira; Thas, Olivier; Wilfried, De Neve; Thierens, Hubert

    2013-01-01

    Background and purpose: To design a model for prediction of acute dysphagia following intensity-modulated radiotherapy (IMRT) for head and neck cancer, and to illustrate the use of the EMLasso technique for model selection. Material and methods: Radiation-induced dysphagia was scored using CTCAE v.3.0 in 189 head and neck cancer patients. Clinical data (gender, age, nicotine and alcohol use, diabetes, tumor location), treatment parameters (chemotherapy, surgery involving the primary tumor, lymph node dissection, overall treatment time), dosimetric parameters (doses delivered to the pharyngeal constrictor (PC) muscles and esophagus) and 19 genetic polymorphisms were used in model building. The predictive model was obtained by EMLasso, i.e. an EM algorithm to account for missing values, applied to penalized logistic regression, which allows for variable selection by tuning the penalization parameter through cross-validation on AUC, thus avoiding overfitting. Results: Fifty-three patients (28%) developed acute ≥ grade 3 dysphagia. The final model has an AUC of 0.71 and contains concurrent chemotherapy, D2 to the superior PC and the rs3213245 (XRCC1) polymorphism. The model's false negative rate and false positive rate at the optimal operating point on the ROC curve are 21% and 49%, respectively. Conclusions: This study demonstrated the utility of the EMLasso technique for model selection in predictive radiogenetics
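
    The penalized-regression core of EMLasso can be approximated with scikit-learn's cross-validated L1 logistic regression scored on AUC; note this analogue omits EMLasso's EM handling of missing values, and the data below are synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)

# Synthetic stand-ins: 189 "patients" with 10 candidate predictors
# (clinical, dosimetric, genetic); only two truly drive the outcome.
n, p = 189, 10
X = rng.normal(size=(n, p))
logit = 1.2 * X[:, 0] + 0.8 * X[:, 1] - 1.0
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# L1-penalized logistic regression with the penalty tuned by
# cross-validated AUC, so uninformative predictors are zeroed out.
Xs = StandardScaler().fit_transform(X)
clf = LogisticRegressionCV(Cs=20, penalty="l1", solver="liblinear",
                           scoring="roc_auc", cv=5).fit(Xs, y)
print("selected coefficients:", np.round(clf.coef_.ravel(), 2))
```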

  3. A review on reflective remote sensing and data assimilation techniques for enhanced agroecosystem modeling

    Science.gov (United States)

    Dorigo, W. A.; Zurita-Milla, R.; de Wit, A. J. W.; Brazile, J.; Singh, R.; Schaepman, M. E.

    2007-05-01

    During the last 50 years, the management of agroecosystems has been undergoing major changes to meet the growing demand for food, timber, fibre and fuel. As a result of this intensified use, the ecological status of many agroecosystems has severely deteriorated. Modeling the behavior of agroecosystems is, therefore, of great help since it allows the definition of management strategies that maximize (crop) production while minimizing the environmental impacts. Remote sensing can support such modeling by offering information on the spatial and temporal variation of important canopy state variables which would be very difficult to obtain otherwise. In this paper, we present an overview of different methods that can be used to derive biophysical and biochemical canopy state variables from optical remote sensing data in the VNIR-SWIR regions. The overview is based on an extensive literature review where both statistical-empirical and physically based methods are discussed. Subsequently, the prevailing techniques of assimilating remote sensing data into agroecosystem models are outlined. The increasing complexity of data assimilation methods and of models describing agroecosystem functioning has significantly increased computational demands. For this reason, we include a short section on the potential of parallel processing to deal with the complex and computationally intensive algorithms described in the preceding sections. The studied literature reveals that many valuable techniques have been developed, both for the retrieval of canopy state variables from reflective remote sensing data and for assimilating the retrieved variables into agroecosystem models. However, for agroecosystem modeling and remote sensing data assimilation to be commonly employed on a global operational basis, emphasis will have to be put on bridging the mismatch between data availability and accuracy on one hand, and model and user requirements on the other. This could be achieved by

  4. Analytical system availability techniques

    NARCIS (Netherlands)

    Brouwers, J.J.H.; Verbeek, P.H.J.; Thomson, W.R.

    1987-01-01

    Analytical techniques are presented to assess the probability distributions and related statistical parameters of loss of production from equipment networks subject to random failures and repairs. The techniques are based on a theoretical model for system availability, which was further developed

  5. Effect of Load Model Using Ranking Identification Technique for Multi Type DG Incorporating Embedded Meta EP-Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Abdul Rahim Siti Rafidah

    2018-01-01

    This paper presents the effect of the load model on distributed generation (DG) planning in a distribution system. To achieve optimal allocation and placement of DG, a ranking identification technique is proposed for studying DG planning with the pre-developed Embedded Meta Evolutionary Programming-Firefly Algorithm. The aim of this study is to analyze the effect of different types of DG in order to reduce total losses considering the load factor. To demonstrate the effectiveness of the proposed technique, the IEEE 33-bus test system was utilized as the test specimen. In this study, the proposed techniques were used to determine the DG sizing and the suitable location for DG planning. The results produced support the optimization process of DG for the benefit of power system operators and planners in the utility. The power system planner can choose the suitable size and location from the results obtained in this study within the appropriate company budget. The modeling of voltage-dependent loads is presented, and the results show that voltage-dependent load models have a significant effect on the total losses of a distribution system for different DG types.

  6. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    Science.gov (United States)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

    In this paper, we propose a hybrid model which is a combination of a multiple linear regression model and the fuzzy c-means method. This research involved the relationship between 20 topsoil variables, analyzed prior to planting, and paddy yields at standard fertilizer rates. The data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analyses of normality and multicollinearity indicate that the data are normally scattered without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.
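
    A compact sketch of the hybrid idea: a minimal fuzzy c-means implementation followed by one linear regression per cluster. The data are synthetic stand-ins for the soil variables and yields:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means; returns the n x c membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))     # random memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        inv = (d + 1e-12) ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U

rng = np.random.default_rng(5)

# Synthetic stand-ins: 3 topsoil variables, two soil regimes with
# different yield responses (values are illustrative only).
X = np.vstack([rng.normal(0.0, 1.0, (60, 3)),
               rng.normal(3.0, 1.0, (60, 3))])
y = np.concatenate([
    X[:60] @ np.array([1.0, 0.5, -0.2]) + 2.0,
    X[60:] @ np.array([0.2, 1.5, 0.4]) - 1.0,
]) + rng.normal(0.0, 0.1, 120)

# Hybrid: cluster first, then fit one linear regression per cluster.
U = fuzzy_cmeans(X, c=2)
labels = U.argmax(axis=1)
for k in range(2):
    mask = labels == k
    reg = LinearRegression().fit(X[mask], y[mask])
    print(k, np.round(reg.coef_, 2), round(float(reg.intercept_), 2))
```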

  7. Structural and geochemical techniques for the hydrogeological characterisation and stochastic modelling of fractured media

    International Nuclear Information System (INIS)

    Vela, A.; Elorza, F.J.; Florez, F.; Paredes, C.; Mazadiego, L.; Llamas, J.F.; Perez, E.; Vives, L.; Carrera, J.; Munoz, A.; De Vicente, G.; Casquet, C.

    1999-01-01

    Safety analysis of radioactive waste storage systems requires fractured rock studies. Performance assessment studies of this type of problem include the development of radionuclide flow and transport models to predict the evolution of possible contaminants released from the repository to the biosphere. The methodology developed in the HIDROBAP project and some results obtained from its application to the El Berrocal granite batholith are presented. It integrates modern tools belonging to different disciplines. A Discrete Fracture Network (DFN) model was selected to simulate the fractured medium, and a 3D finite element flow and transport model that includes inverse problem techniques has been coupled to the DFN model to simulate water movement through the fracture network system. Preliminary results show that this integrated methodology can be very useful for the hydrogeological characterisation of fractured rock media. (author)

  8. Multivariate mixed linear model analysis of longitudinal data: an information-rich statistical technique for analyzing disease resistance data

    Science.gov (United States)

    The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...

  9. Accuracy Enhanced Stability and Structure Preserving Model Reduction Technique for Dynamical Systems with Second Order Structure

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    A method for model reduction of dynamical systems with second order structure is proposed in this paper. The proposed technique preserves the second order structure of the system, and also preserves the stability of the original systems. The method uses the controllability and observability gramians within the time interval to build the appropriate Petrov-Galerkin projection for dynamical systems within the time interval of interest. The bound on the approximation error is also derived. The numerical results are compared with the counterparts from other techniques. The results confirm

  10. Efficiency assessment of runoff harvesting techniques using a 3D coupled surface-subsurface hydrological model

    International Nuclear Information System (INIS)

    Verbist, K.; Cornelis, W. M.; McLaren, R.; Gabriels, D.; Soto, G.

    2009-01-01

    In arid and semi-arid zones, runoff harvesting techniques are often applied to increase the water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Both in the literature and in the field, a large variety of runoff collecting systems is found, as well as large variations in design and dimensions. Therefore, detailed measurements were performed on a semi-arid slope in central Chile to allow identification of the effect of a simple water harvesting technique on soil water availability. For this purpose, twenty-two TDR probes were installed and monitored continuously during and after a simulated rainfall event. These data were used to calibrate the 3D distributed flow model HydroGeoSphere, to assess the runoff components and soil water retention as influenced by the water harvesting technique, both under simulated and natural rainfall conditions. (Author) 6 refs.

  11. The utilization of cranial models created using rapid prototyping techniques in the development of models for navigation training.

    Science.gov (United States)

    Waran, V; Pancharatnam, Devaraj; Thambinayagam, Hari Chandran; Raman, Rajagopal; Rathinam, Alwin Kumar; Balakrishnan, Yuwaraj Kumar; Tung, Tan Su; Rahman, Z A

    2014-01-01

    Navigation in neurosurgery has expanded rapidly; however, suitable models to train end users in the myriad software and hardware that come with these systems are lacking. Utilizing three-dimensional (3D) industrial rapid prototyping processes, we have been able to create models using actual computed tomography (CT) data from patients with pathology and use these models to simulate a variety of commonly performed neurosurgical procedures with navigation systems. The aim was to assess the possibility of utilizing models created from CT scan datasets obtained from patients with cranial pathology to simulate common neurosurgical procedures using navigation systems. Three patients with pathology were selected (hydrocephalus, right frontal cortical lesion, and midline clival meningioma). CT scan data in DICOM format, acquired following an image-guidance surgery protocol, and a rapid prototyping machine were used to create the necessary printed models with the corresponding pathology embedded. The registration, planning, and navigation capabilities of two navigation systems, using a variety of software and hardware provided by these platforms, were assessed. We were able to register all models accurately using both navigation systems and perform the necessary simulations as planned. Models with pathology created using 3D rapid prototyping techniques accurately reflect data from actual patients and can be used in the simulation of neurosurgical operations using navigation systems.

  12. Impact of advanced microstructural characterization techniques on modeling and analysis of radiation damage

    International Nuclear Information System (INIS)

    Garner, F.A.; Odette, G.R.

    1980-01-01

    The evolution of radiation-induced alterations of dimensional and mechanical properties has been shown to be a direct and often predictable consequence of radiation-induced microstructural changes. Recent advances in understanding of the nature and role of each microstructural component in determining the property of interest has led to a reappraisal of the type and priority of data needed for further model development. This paper presents an overview of the types of modeling and analysis activities in progress, the insights that prompted these activities, and specific examples of successful and ongoing efforts. A review is presented of some problem areas that in the authors' opinion are not yet receiving sufficient attention and which may benefit from the application of advanced techniques of microstructural characterization. Guidelines based on experience gained in previous studies are also provided for acquisition of data in a form most applicable to modeling needs

  13. Earthquake induced rock shear through a deposition hole. Modelling of three model tests scaled 1:10. Verification of the bentonite material model and the calculation technique

    Energy Technology Data Exchange (ETDEWEB)

    Boergesson, Lennart (Clay Technology AB, Lund (Sweden)); Hernelind, Jan (5T Engineering AB, Vaesteraas (Sweden))

    2010-11-15

    Three model shear tests of very high quality, simulating a horizontal rock shear through a deposition hole in the centre of a canister, were performed in 1986. The tests and the results are described by /Boergesson 1986/. The tests simulated a deposition hole at the scale 1:10 with the reference density of the buffer, a very stiff confinement simulating the rock, and a solid bar of copper simulating the canister. The three tests were almost identical with the exception of the rate of shear, which was varied between 0.031 and 160 mm/s, i.e. by a factor of more than 5,000, and the density of the bentonite, which differed slightly. The tests were very well documented. Shear force, shear rate, total stress in the bentonite, strain in the copper and the movement of the top of the simulated canister were measured continuously during the shear. After the shear was finished, the equipment was dismantled, and careful sampling of the bentonite, with measurement of water ratio and density, was made. The deformed copper 'canister' was also carefully measured after the test. The tests have been modelled with the finite element code Abaqus with the same models and techniques that were used for the full scale scenarios in SR-Site. The results have been compared with the measured results, which has yielded very valuable information about the relevance of the material models and the modelling technique. An elastic-plastic material model was used for the bentonite, where the stress-strain relations were derived from laboratory tests. The material model is made a function of both the density and the strain rate at shear. Since the shear is fast and takes place under undrained conditions, the density does not change during the tests. However, the strain rate varies greatly with both the location of the elements and time. This can be taken into account in Abaqus by making the material model a function of the strain rate for each element. A similar model, based on tensile tests on the copper used in

  14. A model for teaching and learning spinal thrust manipulation and its effect on participant confidence in technique performance.

    Science.gov (United States)

    Wise, Christopher H; Schenk, Ronald J; Lattanzi, Jill Black

    2016-07-01

    Despite emerging evidence to support the use of high velocity thrust manipulation in the management of lumbar spinal conditions, utilization of thrust manipulation among clinicians remains relatively low. One reason for the underutilization of these procedures may be the disparity in training in the performance of these techniques at the professional and post-professional levels. The aim was to assess the effect of using a new model of active learning on participant confidence in the performance of spinal thrust manipulation and the implications for its use in the professional and post-professional training of physical therapists. A cohort of 15 DPT students in their final semester of entry-level professional training participated in an active training session emphasizing a sequential partial task practice (SPTP) strategy, in which participants engaged in partial task practice over several repetitions with different partners. Participants' level of confidence in the performance of these techniques was determined through comparison of pre- and post-training session surveys and a post-session open-ended interview. The increase in scores across all items of the individual pre- and post-session surveys suggests that this model was effective in changing overall participant perception regarding the effectiveness and safety of these techniques and in increasing student confidence in their performance. Interviews revealed that participants greatly preferred the SPTP strategy, which enhanced their confidence in technique performance. Results indicate that this new model of psychomotor training may be effective at improving confidence in the performance of spinal thrust manipulation and, subsequently, may be useful for encouraging the future use of these techniques in the care of individuals with impairments of the spine. As such, this method of instruction may be useful for training physical therapists at both the professional and post-professional levels.

  15. A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets

    Science.gov (United States)

    Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.

    2009-12-01

    The effort to constrain regional scale carbon budgets benefits from assimilating as many high quality data sources as possible in order to reduce uncertainties. Two of the most common approaches used in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement in high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model which is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes, and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than is the case for the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data. Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluating the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone

  16. Pressure Measurement Techniques for Abdominal Hypertension: Conclusions from an Experimental Model.

    Science.gov (United States)

    Chopra, Sascha Santosh; Wolf, Stefan; Rohde, Veit; Freimann, Florian Baptist

    2015-01-01

    Introduction. Intra-abdominal pressure (IAP) measurement is an indispensable tool for the diagnosis of abdominal hypertension. Different techniques have been described in the literature and applied in the clinical setting. Methods. A porcine model was created to simulate an abdominal compartment syndrome ranging from baseline IAP to 30 mmHg. Three different measurement techniques were applied, comprising telemetric piezoresistive probes at two different sites (epigastric and pelvic) for direct pressure measurement and intragastric and intravesical probes for indirect measurement. Results. The mean difference between the invasive IAP measurements using telemetric pressure probes and the IVP measurements was -0.58 mmHg. The bias between the invasive IAP measurements and the IGP measurements was 3.8 mmHg. Compared to the realistic results of the intraperitoneal and intravesical measurements, the intragastric data showed a strong tendency towards decreased values. The hydrostatic character of the IAP was eliminated at high-pressure levels. Conclusion. We conclude that intragastric pressure measurement is potentially hazardous and might lead to inaccurately low intra-abdominal pressure values. This may result in missed diagnosis of elevated abdominal pressure or even ACS. The intravesical measurements showed the most accurate values during baseline pressure and both high-pressure plateaus.

  17. Identifying and prioritizing the tools/techniques of knowledge management based on the Asian Productivity Organization Model (APO) to use in hospitals.

    Science.gov (United States)

    Khajouei, Hamid; Khajouei, Reza

    2017-12-01

    Appropriate knowledge, correct information, and relevant data are vital in medical diagnosis and treatment systems. Knowledge Management (KM) through its tools/techniques provides a pertinent framework for decision-making in healthcare systems. The objective of this study was to identify and prioritize the KM tools/techniques that apply to the hospital setting. This is a descriptive-survey study. Data were collected using a researcher-made questionnaire that was developed based on experts' opinions to select the appropriate tools/techniques from the 26 tools/techniques of the Asian Productivity Organization (APO) model. Questions were categorized into five steps of KM (identifying, creating, storing, sharing, and applying the knowledge) according to this model. The study population consisted of middle and senior managers of hospitals and managing directors of the Vice-Chancellor for Curative Affairs in Kerman University of Medical Sciences in Kerman, Iran. The data were analyzed in SPSS v.19 using the one-sample t-test. Twelve out of 26 tools/techniques of the APO model were identified as the tools applicable in hospitals. "Knowledge café" and "APO knowledge management assessment tool" with respective means of 4.23 and 3.7 were the most and the least applicable tools in the knowledge identification step. "Mentor-mentee scheme", as well as "voice and Voice over Internet Protocol (VOIP)" with respective means of 4.20 and 3.52 were the most and the least applicable tools/techniques in the knowledge creation step. "Knowledge café" and "voice and VOIP" with respective means of 3.85 and 3.42 were the most and the least applicable tools/techniques in the knowledge storage step. "Peer assist" and "voice and VOIP" with respective means of 4.14 and 3.38 were the most and the least applicable tools/techniques in the knowledge sharing step. Finally, "knowledge worker competency plan" and "knowledge portal" with respective means of 4.38 and 3.85 were the most and the least applicable tools/techniques

  18. Technique Selectively Represses Immune System

    Science.gov (United States)

    Research Matters, December 3, 2012. Myelin (green) encases and protects nerve fibers (brown). A new technique prevents the immune system from attacking myelin in a mouse model of ...

  19. Solution Procedure for Transport Modeling in Effluent Recharge Based on Operator-Splitting Techniques

    Directory of Open Access Journals (Sweden)

    Shutang Zhu

    2008-01-01

    The coupling of groundwater movement and reactive transport during groundwater recharge with wastewater leads to a complicated mathematical model, involving terms to describe convection-dispersion, adsorption/desorption and/or biodegradation, and so forth. It has been found very difficult to solve such a coupled model either analytically or numerically. The present study adopts operator-splitting techniques to decompose the coupled model into two submodels with different intrinsic characteristics. By applying an upwind finite difference scheme to the finite volume integral of the convection flux term, an implicit solution procedure is derived to solve the convection-dominant equation. The dispersion term is discretized in a standard central-difference scheme, while the dispersion-dominant equation is solved using either the preconditioned Jacobi conjugate gradient (PJCG) method or the Thomas method based on a local one-dimensional scheme. The solution method proposed in this study is successfully applied to the demonstration project of groundwater recharge with secondary effluent at the Gaobeidian sewage treatment plant (STP).
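
    The splitting idea can be sketched on a 1D convection-dispersion problem: an implicit upwind convection step followed by an implicit dispersion step solved with a banded (Thomas-type) solver. The grid, velocity and dispersion values below are illustrative, not from the Gaobeidian project:

```python
import numpy as np
from scipy.linalg import solve_banded

# Operator-splitting sketch for 1D convection-dispersion,
#   c_t + v c_x = D c_xx,
# split into an implicit upwind convection step and an implicit
# dispersion step solved with a tridiagonal banded solver.
nx = 200
dx = 1.0 / (nx - 1)
v, D, dt, nsteps = 1.0, 1e-3, 0.002, 200
x = np.linspace(0.0, 1.0, nx)
c = np.exp(-((x - 0.2) / 0.05) ** 2)      # initial concentration pulse

# Convection (v > 0): (1 + r) c_i* - r c_{i-1}* = c_i^n, r = v dt / dx;
# the lower-bidiagonal system is solved by forward substitution.
r = v * dt / dx
def convect(c):
    out = np.empty_like(c)
    out[0] = c[0] / (1.0 + r)
    for i in range(1, len(c)):
        out[i] = (c[i] + r * out[i - 1]) / (1.0 + r)
    return out

# Dispersion: (I - s L) c^{n+1} = c*, s = D dt / dx^2, L = tridiag(1,-2,1),
# with Dirichlet rows at both ends; stored in solve_banded's (1,1) format.
s = D * dt / dx**2
ab = np.zeros((3, nx))
ab[0, 1:] = -s
ab[1, :] = 1.0 + 2.0 * s
ab[2, :-1] = -s
ab[1, 0] = ab[1, -1] = 1.0
ab[0, 1] = ab[2, -2] = 0.0

for _ in range(nsteps):
    c = solve_banded((1, 1), ab, convect(c))

print(round(float(c.max()), 3), round(float(x[np.argmax(c)]), 3))  # pulse near x = 0.6
```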

  20. Comparison between two meshless methods based on collocation technique for the numerical solution of four-species tumor growth model

    Science.gov (United States)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-03-01

    As noted in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations with surface effects through diffuse-interface models [27]. Numerical simulation provides a practical way to evaluate this model. The present paper investigates the solution of the tumor growth model with meshless techniques. Meshless methods based on the collocation technique are applied, employing multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantages of these choices stem from the natural behavior of meshless approaches. Moreover, a meshless method can easily be applied to solve partial differential equations in high dimensions using arbitrary distributions of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes a four-species tumor growth model. To handle the time variable, two procedures are used. One of them is a semi-implicit finite difference method based on the Crank-Nicolson scheme, and the other is based on explicit Runge-Kutta time integration. The first case gives a linear system of algebraic equations to be solved at each time step. The second case is efficient but conditionally stable. The obtained numerical results are reported to confirm the ability of these techniques to solve the two- and three-dimensional tumor-growth equations.
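
    As a minimal illustration of MQ-RBF collocation, here is a Kansa-type sketch on a 1D Poisson model problem rather than the tumor-growth system, with a manufactured solution to check the error; the shape parameter is a simple heuristic:

```python
import numpy as np

# Kansa-type MQ-RBF collocation on a 1D model problem,
#   u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# with the manufactured solution u = sin(pi x), f = -pi^2 sin(pi x).
# MQ basis: phi(x; xj) = sqrt((x - xj)^2 + c^2), phi'' = c^2 / phi^3.
n = 25
xj = np.linspace(0.0, 1.0, n)    # centers double as collocation points
c = 2.0 / n                      # shape parameter (simple heuristic)

S = xj[:, None] - xj[None, :]
Phi = np.sqrt(S**2 + c**2)
Phi_xx = c**2 / Phi**3

f = -np.pi**2 * np.sin(np.pi * xj)

# Collocation system: interior rows enforce the PDE,
# first and last rows enforce the Dirichlet boundary conditions.
A = Phi_xx.copy()
A[0, :], A[-1, :] = Phi[0, :], Phi[-1, :]
b = f.copy()
b[0] = b[-1] = 0.0

coef = np.linalg.solve(A, b)
u = Phi @ coef
print(f"max nodal error: {np.max(np.abs(u - np.sin(np.pi * xj))):.2e}")
```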

  1. Cleanup techniques for Finnish urban environments and external doses from 137Cs - modelling and calculations

    International Nuclear Information System (INIS)

    Moring, M.; Markkula, M.L.

    1997-03-01

    The external doses under various radioactive deposition conditions are assessed, and the efficiencies of some simple decontamination techniques (grass cutting, vacuum sweeping, hosing of paved surfaces and roofs, and felling trees) are compared in the study. The present model has been constructed for Finnish conditions and housing areas, using 137Cs transfer data from Nordic and Central European studies and models. The compartment model concerns the behaviour and decontamination of 137Cs in the urban environment under summer conditions. Doses to man have been calculated for wet (light rain) and dry deposition in four typical Finnish building areas: single-family wooden houses, brick terraced houses, blocks of flats and urban office buildings. (26 refs.)

  2. Establishment of SHG-44 human glioma model in brain of wistar rat with stereotactic technique

    International Nuclear Information System (INIS)

    Hong Xinyu; Luo Yi'nan; Fu Shuanglin; Wang Zhanfeng; Bie Li; Cui Jiale

    2004-01-01

    Objective: To establish a solid intracerebral human glioma model in Wistar rats using xenograft methods. Methods: SHG-44 cells were injected into the right caudate nucleus of previously immuno-inhibited Wistar rats with a stereotactic technique. MRI scans were performed 1 and 2 weeks after implantation. After 2 weeks, the rats were killed, and pathological examination and immunohistological staining for human GFAP were performed. Results: The MRI scan 1 week after implantation showed that the glioma was growing; pathological examination demonstrated that the tumor was a glioma, and human GFAP staining was positive. The take rate of the glioma model was about 60%. Conclusion: A solid intracerebral human glioma model in previously immuno-inhibited Wistar rats was successfully established

  3. VLBI FOR GRAVITY PROBE B. IV. A NEW ASTROMETRIC ANALYSIS TECHNIQUE AND A COMPARISON WITH RESULTS FROM OTHER TECHNIQUES

    International Nuclear Information System (INIS)

    Lebach, D. E.; Ratner, M. I.; Shapiro, I. I.; Bartel, N.; Bietenholz, M. F.; Lederman, J. I.; Ransom, R. R.; Campbell, R. M.; Gordon, D.; Lestrade, J.-F.

    2012-01-01

    When very long baseline interferometry (VLBI) observations are used to determine the position or motion of a radio source relative to reference sources nearby on the sky, the astrometric information is usually obtained via (1) phase-referenced maps or (2) parametric model fits to measured fringe phases or multiband delays. In this paper, we describe a 'merged' analysis technique which combines some of the most important advantages of these other two approaches. In particular, our merged technique combines the superior model-correction capabilities of parametric model fits with the ability of phase-referenced maps to yield astrometric measurements of sources that are too weak to be used in parametric model fits. We compare the results from this merged technique with the results from phase-referenced maps and from parametric model fits in the analysis of astrometric VLBI observations of the radio-bright star IM Pegasi (HR 8703) and the radio source B2252+172 nearby on the sky. In these studies we use central-core components of radio sources 3C 454.3 and B2250+194 as our positional references. We obtain astrometric results for IM Peg with our merged technique even when the source is too weak to be used in parametric model fits, and we find that our merged technique yields astrometric results superior to the phase-referenced mapping technique. We used our merged technique to estimate the proper motion and other astrometric parameters of IM Peg in support of the NASA/Stanford Gravity Probe B mission.

  4. Current control design for three-phase grid-connected inverters using a pole placement technique based on numerical models

    OpenAIRE

    Citro, Costantino; Gavriluta, Catalin; Nizak Md, H. K.; Beltran, H.

    2012-01-01

    This paper presents a design procedure for linear current controllers of three-phase grid-connected inverters. The proposed method consists of deriving a numerical model of the converter from software simulations and applying the pole placement technique to design a controller with the desired performance. A clear example of how to apply the technique is provided. The effectiveness of the proposed design procedure has been verified through the experimental results obtained with ...
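    As a hedged illustration of the general workflow (the paper's actual converter model and specifications are not reproduced here), the sketch below places the closed-loop poles of a hypothetical second-order current-loop model using SciPy's place_poles; the A and B matrices stand in for a numerically identified model:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical second-order continuous-time model standing in for a numerically
# identified inverter current loop (states: current and its derivative)
A = np.array([[0.0, 1.0],
              [-2.0e4, -150.0]])
B = np.array([[0.0],
              [1.0e4]])

# Closed-loop poles chosen for the desired bandwidth and damping
fsf = place_poles(A, B, np.array([-800.0 + 600.0j, -800.0 - 600.0j]))
K = fsf.gain_matrix                      # state feedback u = -K @ x

# Verify that the eigenvalues of (A - B K) match the requested poles
print(np.linalg.eigvals(A - B @ K))
```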

  5. Comparison of Parameter Identification Techniques

    Directory of Open Access Journals (Sweden)

    Eder Rafael

    2016-01-01

    Full Text Available Model-based control of mechatronic systems requires excellent knowledge about the physical behaviour of each component. For several types of system components, e.g. mechanical or electrical ones, the dynamic behaviour can be described by means of a mathematical model consisting of a set of differential equations, difference equations and/or algebraic constraint equations. Knowledge of a realistic mathematical model and its parameter values is essential to represent the behaviour of a mechatronic system. Frequently it is hard or impossible to obtain all required values of the model parameters from the producer, so an appropriate parameter estimation technique is required to compute missing parameters. Many parameter identification techniques can be found in the literature, but their suitability depends on the mathematical model. Previous work dealt with the automatic assembly of mathematical models of serial and parallel robots with drives and controllers within the dynamic multibody simulation code HOTINT as a fully-fledged mechatronic simulation. Several parameters of such robot models were identified successfully by our embedded algorithm. The present work proposes an improved version of the identification algorithm with higher performance. The quality of the identified parameter values and the computational effort are compared with another standard technique.
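    A common way to estimate missing parameters from measured responses is nonlinear least squares. The sketch below is illustrative and is not the HOTINT-embedded algorithm: it identifies the mass, damping and stiffness of a one-degree-of-freedom oscillator by minimizing the response deviation between simulation and (synthetic) measurement:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def simulate(params, t):
    """Free response of m*x'' + c*x' + k*x = 0 from x(0)=1, x'(0)=0."""
    m, c, k = params
    sol = solve_ivp(lambda _, y: [y[1], -(c * y[1] + k * y[0]) / m],
                    (t[0], t[-1]), [1.0, 0.0], t_eval=t)
    return sol.y[0]

t = np.linspace(0.0, 5.0, 200)
measured = simulate((2.0, 0.8, 30.0), t)                  # stand-in for test-rig data
measured = measured + 0.01 * np.random.default_rng(0).normal(size=t.size)

# Identify (m, c, k) by minimising the response deviation
res = least_squares(lambda p: simulate(p, t) - measured,
                    x0=[1.0, 1.0, 10.0], bounds=(1e-3, 100.0))
print(res.x)   # approaches (2.0, 0.8, 30.0)
```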

  6. Boundary representation modelling techniques

    CERN Document Server

    2006-01-01

    Provides the most complete presentation of boundary representation solid modelling yet published. Offers basic reference information for software developers, application developers and users. Includes a historical perspective as well as giving a background for modern research.

  7. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

    Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced-order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady-state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state-space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.

  8. PID feedback controller used as a tactical asset allocation technique: The G.A.M. model

    Science.gov (United States)

    Gandolfi, G.; Sabatini, A.; Rossolini, M.

    2007-09-01

    The objective of this paper is to illustrate a tactical asset allocation technique utilizing the PID controller. The proportional-integral-derivative (PID) controller is widely applied in most industrial processes; it has been used successfully for over 50 years and is employed in more than 95% of plant control loops. It is a robust and easily understood algorithm that can provide excellent control performance despite the diverse dynamic characteristics of the process plant. In finance, the process plant controlled by the PID controller can be represented by financial market assets forming a portfolio. More specifically, in the present work the plant is represented by a risk-adjusted return variable. Money and portfolio managers' main target is to achieve a strong risk-adjusted return in their management activities. In the literature and in the financial industry, numerous return/risk ratios are commonly studied and used. The aim of this work is to develop a tactical asset allocation technique consisting of the optimization of risk-adjusted return by means of asset allocation methodologies based on the PID model-free feedback control procedure. The process plant does not need to be mathematically modeled: the PID control action lies in altering the portfolio asset weights, according to the PID algorithm and its Ziegler-Nichols-tuned parameters, in order to approach the desired portfolio risk-adjusted return efficiently.
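    A minimal sketch of the idea follows: a discrete PID loop treats the realised risk-adjusted return (here an annualised Sharpe-like ratio on synthetic returns) as the process variable and nudges the risky-asset weight toward the set-point. The gains, the two-asset portfolio and the return statistics are all illustrative assumptions, not the G.A.M. model's calibration:

```python
import numpy as np

class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error, dt=1.0):
        self.integral += error * dt
        derivative = (error - self.prev_err) / dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

rng = np.random.default_rng(1)
target = 0.8                  # desired annualised risk-adjusted return (set-point)
w = 0.5                       # weight of the risky asset; the rest sits in cash
pid = PID(kp=0.3, ki=0.05, kd=0.1)   # illustrative gains; Ziegler-Nichols in the paper

for month in range(60):
    daily = w * rng.normal(0.0005, 0.01, 21) + (1 - w) * 0.0001
    realised = daily.mean() / daily.std() * np.sqrt(252)   # Sharpe-like ratio
    # control action: adjust the asset weight toward the target ratio
    w = float(np.clip(w + pid.step(target - realised), 0.0, 1.0))
```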

  9. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Naets, Frank

    2018-01-01

    Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system...... by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we...

  10. Comparison of a new expert elicitation model with the Classical Model, equal weights and single experts, using a cross-validation technique

    Energy Technology Data Exchange (ETDEWEB)

    Flandoli, F. [Dip.to di Matematica Applicata, Universita di Pisa, Pisa (Italy); Giorgi, E. [Dip.to di Matematica Applicata, Universita di Pisa, Pisa (Italy); Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Pisa, via della Faggiola 32, 56126 Pisa (Italy); Aspinall, W.P. [Dept. of Earth Sciences, University of Bristol, and Aspinall and Associates, Tisbury (United Kingdom); Neri, A., E-mail: neri@pi.ingv.it [Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Pisa, via della Faggiola 32, 56126 Pisa (Italy)

    2011-10-15

    The problem of ranking and weighting experts' performances when quantitative judgments are being elicited for decision support is considered. A new scoring model, the Expected Relative Frequency model, is presented, based on the closeness between central values provided by the expert and known values used for calibration. Using responses from experts in five different elicitation datasets, a cross-validation technique is used to compare this new approach with the Cooke Classical Model, the Equal Weights model, and individual experts. The analysis is performed using alternative reward schemes designed to capture proficiency either in quantifying uncertainty, or in estimating true central values. Results show that although there is only a limited probability that one approach is consistently better than another, the Cooke Classical Model is generally the most suitable for assessing uncertainties, whereas the new ERF model should be preferred if the goal is central value estimation accuracy. - Highlights: ► A new expert elicitation model, named Expected Relative Frequency (ERF), is presented. ► A cross-validation approach to evaluate the performance of different elicitation models is applied. ► The new ERF model shows the best performance with respect to the point-wise estimates.

  11. An analysis of supersonic flows with low-Reynolds number compressible two-equation turbulence models using LU finite volume implicit numerical techniques

    Science.gov (United States)

    Lee, J.

    1994-01-01

    A generalized flow solver using an implicit lower-upper (LU) diagonal decomposition based numerical technique has been coupled with three low-Reynolds-number kappa-epsilon models for the analysis of problems with engineering applications. The feasibility of using the LU technique to obtain efficient solutions to supersonic problems using the kappa-epsilon model has been demonstrated. The flow solver is then used to explore the limitations and convergence characteristics of several popular two-equation turbulence models. Several changes to the LU solver have been made to improve the efficiency of turbulent flow predictions. In general, the low-Reynolds-number kappa-epsilon models are easier to implement than the models with wall functions, but require a much finer near-wall grid to accurately resolve the physics. The three kappa-epsilon models use different approaches to characterize the near-wall regions of the flow; therefore, the limitations imposed by the near-wall characteristics have been carefully examined. The convergence characteristics of a particular model using a given numerical technique are also an important, but most often overlooked, aspect of turbulence model predictions. It is found that some convergence characteristics could be sacrificed for more accurate near-wall prediction. However, even this gain in accuracy is not sufficient to model the effects of an external pressure gradient imposed by a shock-wave/boundary-layer interaction. Additional work on turbulence models, especially for compressibility, is required, since the solutions obtained with the baseline turbulence models are in only reasonable agreement with the experimental data for the viscous interaction problems.

  12. Comparison of Analysis and Spectral Nudging Techniques for Dynamical Downscaling with the WRF Model over China

    Directory of Open Access Journals (Sweden)

    Yuanyuan Ma

    2016-01-01

    Full Text Available To overcome the problem that the horizontal resolution of global climate models may be too low to resolve features which are important at the regional or local scales, dynamical downscaling has been extensively used. However, dynamical downscaling results generally drift away from large-scale driving fields. The nudging technique can be used to balance the performance of dynamical downscaling at large and small scales, but the performances of the two nudging techniques (analysis nudging and spectral nudging) are debated. Moreover, dynamical downscaling is now performed at the convection-permitting scale to reduce the parameterization uncertainty and obtain the finer resolution. To compare the performances of the two nudging techniques in this study, three sensitivity experiments (with no nudging, analysis nudging, and spectral nudging) covering a period of two months with a grid spacing of 6 km over continental China are conducted to downscale the 1-degree National Centers for Environmental Prediction (NCEP) dataset with the Weather Research and Forecasting (WRF) model. Compared with observations, the results show that both of the nudging experiments decrease the bias of conventional meteorological elements near the surface and at different heights during the process of dynamical downscaling. However, spectral nudging outperforms analysis nudging for predicting precipitation, and analysis nudging outperforms spectral nudging for the simulation of air humidity and wind speed.
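    The relaxation term that both nudging variants add to the model tendencies has the generic form du/dt = F(u) + G(u_driving - u). A toy single-variable illustration is sketched below; the coefficients are typical magnitudes only, not WRF's implementation:

```python
import numpy as np

G, dt = 3.0e-4, 60.0                     # relaxation coefficient [1/s], time step [s]
t = np.arange(0.0, 86400.0, dt)
analysis = 10.0 + 2.0 * np.sin(2 * np.pi * t / 86400.0)   # driving large-scale field

u = 4.0                                  # downscaled model value, initially drifted
for k, tk in enumerate(t):
    tendency = 1e-4 * np.sin(2 * np.pi * tk / 3600.0)      # stand-in model physics
    u += dt * (tendency + G * (analysis[k] - u))           # Newtonian relaxation term
print(f"final model value {u:.2f} vs analysis {analysis[-1]:.2f}")
```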

  13. The technique for 3D printing patient-specific models for auricular reconstruction.

    Science.gov (United States)

    Flores, Roberto L; Liss, Hannah; Raffaelli, Samuel; Humayun, Aiza; Khouri, Kimberly S; Coelho, Paulo G; Witek, Lukasz

    2017-06-01

    Currently, surgeons approach autogenous microtia repair by creating a two-dimensional (2D) tracing of the unaffected ear to approximate a three-dimensional (3D) construct, a difficult process. To address these shortcomings, this study introduces the fabrication of a patient-specific, sterilizable, 3D printed auricular model for autogenous auricular reconstruction. A high-resolution 3D digital photograph was captured of the patient's unaffected ear and surrounding anatomic structures. The photographs were exported and uploaded into Amira for transformation into a digital (.stl) model, which was imported into Blender, an open-source software platform for digital modification of data. The unaffected auricle was digitally isolated and inverted to render a model for the contralateral side. The depths of the scapha, triangular fossa, and cymba were deepened to accentuate their contours. Extra relief was added to the helical root to further distinguish this structure. The ear was then digitally deconstructed and separated into its individual auricular components for reconstruction. The completed ear and its individual components were 3D printed using polylactic acid (PLA) filament and sterilized following manufacturer specifications. The sterilized models were brought to the operating room to be utilized by the surgeon. The models allowed for more accurate anatomic measurements compared to 2D tracings, which reduced the degree of estimation required by surgeons. Approximately 20 g of PLA filament were utilized for the construction of these models, yielding a total material cost of approximately $1. Using the methodology detailed in this report, as well as departmentally available resources (3D digital photography and 3D printing), a sterilizable, patient-specific, and inexpensive 3D auricular model was fabricated to be used intraoperatively. This technique of printing customized-to-patient models for surgeons to use as 'guides' shows great promise.

  14. Comparison of QuadrapolarTM radiofrequency lesions produced by standard versus modified technique: an experimental model

    Directory of Open Access Journals (Sweden)

    Safakish R

    2017-06-01

    Full Text Available Ramin Safakish Allevio Pain Management Clinic, Toronto, ON, Canada Abstract: Lower back pain (LBP) is a global public health issue and is associated with substantial financial costs and loss of quality of life. Over the years, the literature has provided varying statistics regarding the causes of back pain; the following estimate is the closest match to our patient population. Sacroiliac (SI) joint pain is responsible for LBP in 18%–30% of individuals with LBP. Quadrapolar™ radiofrequency ablation, which involves ablation of the nerves of the SI joint using heat, is a commonly used treatment for SI joint pain. However, the standard Quadrapolar radiofrequency procedure is not always effective at ablating all the sensory nerves that cause the pain in the SI joint. One of the major limitations of the standard Quadrapolar radiofrequency procedure is that it produces small lesions of ~4 mm in diameter. Smaller lesions increase the likelihood of failure to ablate all nociceptive input. In this study, we compare the standard Quadrapolar radiofrequency ablation technique to a modified Quadrapolar ablation technique that has produced improved patient outcomes in our clinic. The methodologies of the two techniques are compared. In addition, we compare results from an experimental model comparing the lesion sizes produced by the two techniques. Taken together, the findings from this study suggest that the modified Quadrapolar technique provides longer-lasting relief for back pain caused by SI joint dysfunction. A randomized controlled clinical trial is the next step required to quantify the difference in symptom relief and quality of life produced by the two techniques. Keywords: lower back pain, radiofrequency ablation, sacroiliac joint, Quadrapolar radiofrequency ablation

  15. Point-source inversion techniques

    Science.gov (United States)

    Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.

    1982-11-01

    A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.

  16. Comparing photo modeling methodologies and techniques: the instance of the Great Temple of Abu Simbel

    Directory of Open Access Journals (Sweden)

    Sergio Di Tondo

    2013-10-01

    Full Text Available Fifty years after the salvage of the Abu Simbel temples, it has been possible to experiment with contemporary photo-modeling tools, beginning from the original data of the photogrammetric survey carried out in the 1950s. This prompted a reflection on "Image Based" methods and modeling techniques, comparing rigorous 3D digital photogrammetry with the latest Structure From Motion (SFM) systems. The topographic survey data, the original photogrammetric stereo pairs, the point coordinates and their representation as contour lines made it possible to obtain a model of the monument in its configuration before the relocation of the temples. The impossibility of carrying out a direct survey led to the use of tourist photographs to create SFM models for geometric comparisons.

  17. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao

    2013-01-01

    Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► The seasonal item in the data series is verified by Kendall τ correlation testing. ► Different regression models are applied to forecasting the trend item. ► We examine the superiority of the combined models by comparing quartile values. ► Paired-sample T tests are utilized to confirm the superiority of the combined models. - Abstract: For an energy-limited economic system, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which the load demand series is predicted by employing information from preceding days that are similar to the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified using the Kendall τ correlation testing method. Then, in the belief that forecasting the seasonal item and the trend item separately would improve the forecasting accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed to forecast the seasonal and trend items, respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique can significantly improve the accuracy, even though eleven different models are applied to forecasting the trend item. The superior performance of this separate forecasting technique is further confirmed by paired-sample T tests.

  18. Land Cover Mapping Analysis and Urban Growth Modelling Using Remote Sensing Techniques in Greater Cairo Region—Egypt

    Directory of Open Access Journals (Sweden)

    Yasmine Megahed

    2015-09-01

    Full Text Available This study modeled the urban growth in the Greater Cairo Region (GCR), one of the fastest growing megacities in the world, using remote sensing data and ancillary data. Three land use land cover (LULC) maps (1984, 2003 and 2014) were produced from satellite images using Support Vector Machines (SVM). Then, land cover changes were detected by applying a high-level mapping technique that combines binary maps (change/no-change) and a post-classification comparison technique. The spatial and temporal urban growth patterns were analyzed using selected statistical metrics developed in the FRAGSTATS software. Major transitions to urban were modeled to predict future scenarios for the year 2025 using the Land Change Modeler (LCM) embedded in the IDRISI software. The model results, after validation, indicated that 14% of the vegetation and 4% of the desert in 2014 will be urbanized by 2025. The urban areas within a 5-km buffer around the Great Pyramids, Islamic Cairo and Al-Baron Palace were calculated, highlighting intense urbanization especially around the Pyramids: 28% in 2014, up to 40% in 2025. Knowing the current and estimated urbanization situation in GCR will help decision makers to adjust and develop new plans to achieve sustainable development of urban areas and to protect the historical locations.

  19. Techniques for sensitivity analysis of SYVAC results

    International Nuclear Information System (INIS)

    Prust, J.O.

    1985-05-01

    Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, to the subjective probability distributions assigned to the input parameters, and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their application to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends the immediate development of a method for evaluating the derivative of dose with respect to parameter values, and the extension of the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values should be examined. (author)

  20. Application of radiosurgical techniques to produce a primate model of brain lesions

    Directory of Open Access Journals (Sweden)

    Jun Kunimatsu

    2015-04-01

    Full Text Available Behavioral analysis of subjects with discrete brain lesions provides important information about the mechanisms of various brain functions. However, it is generally difficult to experimentally produce discrete lesions in deep brain structures. Here we show that a radiosurgical technique, which is used as an alternative treatment for brain tumors and vascular malformations, is applicable for creating non-invasive lesions in experimental animals for research in systems neuroscience. We delivered highly focused radiation (130–150 Gy at the isocenter) to the frontal eye field of macaque monkeys using a clinical linear accelerator (LINAC). The effects of irradiation were assessed by analyzing oculomotor performance along with magnetic resonance (MR) images before and up to 8 months following irradiation. In parallel with tissue edema indicated by MR images, deficits in saccadic and smooth pursuit eye movements were observed during several days following irradiation. Although initial signs of oculomotor deficits disappeared within a month, damage to the tissue and impaired eye movements gradually developed during the course of the subsequent 6 months. Postmortem histological examinations showed necrosis and hemorrhages within a large area of the white matter and, to a lesser extent, in the adjacent gray matter, centered at the irradiated target. These results indicate that the LINAC system is useful for making brain lesions in experimental animals, while suitable radiation parameters to generate more focused lesions need to be further explored. We propose the use of a radiosurgical technique for establishing animal models of brain lesions, and discuss the possible uses of this technique for functional neurosurgical treatments in humans.

  1. Soil temperature modeling at different depths using neuro-fuzzy, neural network, and genetic programming techniques

    Science.gov (United States)

    Kisi, Ozgur; Sanikhani, Hadi; Cobaner, Murat

    2017-08-01

    The applicability of artificial neural networks (ANN), adaptive neuro-fuzzy inference system (ANFIS), and genetic programming (GP) techniques for estimating soil temperatures (ST) at different depths is investigated in this study. Weather data from two stations, Mersin and Adana, Turkey, were used as inputs to the applied models in order to model monthly STs. The first part of the study focused on comparing the ANN, ANFIS, and GP models in modeling the ST of the two stations at depths of 10, 50, and 100 cm. GP was found to perform better than the ANN and ANFIS-SC in estimating monthly ST. The effect of periodicity (month of the year) on the models' accuracy was also investigated; including a periodicity component in the models' inputs considerably increased their accuracy. The root mean square error (RMSE) of the ANN models decreased by 34 and 27% for the depths of 10 and 100 cm, respectively, when the periodicity input was added. In the second part of the study, the accuracies of the ANN, ANFIS, and GP models were compared in estimating the ST of Mersin Station using the climatic data of Adana Station. The ANN models generally performed better than the ANFIS-SC and GP in modeling the ST of Mersin Station without local climatic inputs.
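    A hedged sketch of the ANN variant with a periodicity input is given below, using scikit-learn's MLPRegressor on synthetic monthly data (the study's actual station data and network configurations are not reproduced); the month is encoded as sine/cosine terms:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
months = np.tile(np.arange(1, 13), 10)                    # 10 years of monthly records
air_t = 18 + 10 * np.sin(2 * np.pi * (months - 4) / 12) + rng.normal(0, 1.5, months.size)
soil_t = 17 + 8 * np.sin(2 * np.pi * (months - 5) / 12) + rng.normal(0, 1.0, months.size)

# Periodicity input encoded as sine/cosine of the month number
X = np.column_stack([air_t,
                     np.sin(2 * np.pi * months / 12),
                     np.cos(2 * np.pi * months / 12)])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X[:96], soil_t[:96])                            # train on the first 8 years
rmse = np.sqrt(np.mean((model.predict(X[96:]) - soil_t[96:]) ** 2))
print(f"test RMSE: {rmse:.2f} degC")
```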

  2. S/O modeling technique for optimal containment of light hydrocarbons in contaminated unconfined aquifers

    International Nuclear Information System (INIS)

    Cooper, G.S. Jr.; Kaluarachchi, J.J.; Peralta, R.C.

    1993-01-01

    An innovative approach is presented to minimize pumping for immobilizing a floating plume of a light non-aqueous phase liquid (LNAPL). The best pumping strategy is determined to contain the free oil product and provide for gradient control of the water table. This approach combined detailed simulation, statistical analysis, and optimization. This modeling technique uses regression equations that describe system response to variable pumping stimuli. The regression equations were developed from analysis of systematically performed simulations of multiphase flow in an areal region of an unconfined aquifer. Simulations were performed using ARMOS, a finite element model. ARMOS can be used to simulate a spill, leakage from subsurface storage facilities and recovery of hydrocarbons from trenches or pumping wells to design remediation schemes
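    The sketch below illustrates the response-surface idea under stated assumptions: synthetic "simulation" outputs stand in for ARMOS runs, a regression equation is fitted to them, and total pumping is minimized subject to a containment constraint on the predicted plume response. All coefficients, variable names and bounds are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
# Stand-ins for systematically designed multiphase-flow simulations: two well
# pumping rates (m3/day) and the resulting plume migration distance (m)
Q = rng.uniform(0.0, 50.0, (30, 2))
migration = (12.0 - 0.20 * Q[:, 0] - 0.15 * Q[:, 1]
             + 0.0005 * Q[:, 0] * Q[:, 1] + rng.normal(0.0, 0.2, 30))

# Regression equation describing system response to the pumping stimuli
X = np.column_stack([np.ones(30), Q, Q[:, 0] * Q[:, 1]])
beta, *_ = np.linalg.lstsq(X, migration, rcond=None)
predict = lambda q: beta @ np.array([1.0, q[0], q[1], q[0] * q[1]])

# Minimise total pumping subject to containment (predicted migration <= 0)
res = minimize(lambda q: q[0] + q[1], x0=[25.0, 25.0], bounds=[(0.0, 50.0)] * 2,
               constraints={"type": "ineq", "fun": lambda q: -predict(q)})
print(res.x, res.fun)
```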

  3. Computerized modeling techniques predict the 3D structure of H₄R: facts and fiction.

    Science.gov (United States)

    Zaid, Hilal; Ismael-Shanak, Siba; Michaeli, Amit; Rayan, Anwar

    2012-01-01

    The functional characterization of proteins presents a daily challenge for the biochemical, medical and computational sciences, especially when structures have not been determined empirically, as in the case of the histamine H4 receptor (H₄R). H₄R is a member of the GPCR superfamily that plays a vital role in immune and inflammatory responses. To date, the concept of GPCR modeling is highlighted in textbooks and pharmaceutical pamphlets, and this group of proteins has been the subject of almost 3500 publications in the scientific literature. The dynamic nature of determining GPCR structures was elucidated through elegant and creative modeling methodologies, implemented by many groups around the world. H₄R, which belongs to the GPCR family, was cloned in 2000; understandably, its biological activity has been reported only 65 times in PubMed. Here we attempt to cover the fundamental concepts of H₄R structure modeling and its implementation in drug discovery, especially those that have been experimentally tested, and to highlight some ideas currently being discussed on the dynamic nature of H₄R and GPCR computerized techniques for 3D structure modeling.

  4. Multidisciplinary Optimization of Tilt Rotor Blades Using Comprehensive Composite Modeling Technique

    Science.gov (United States)

    Chattopadhyay, Aditi; McCarthy, Thomas R.; Rajadas, John N.

    1997-01-01

    An optimization procedure is developed for addressing the design of composite tilt rotor blades. A comprehensive technique, based on a higher-order laminate theory, is developed for the analysis of the thick composite load-carrying sections, modeled as box beams, in the blade. The theory, which is based on a refined displacement field, is a three-dimensional model which approximates the elasticity solution so that the beam cross-sectional properties are not reduced to one-dimensional beam parameters. Both inplane and out-of-plane warping are included automatically in the formulation. The model can accurately capture the transverse shear stresses through the thickness of each wall while satisfying stress free boundary conditions on the inner and outer surfaces of the beam. The aerodynamic loads on the blade are calculated using the classical blade element momentum theory. Analytical expressions for the lift and drag are obtained based on the blade planform with corrections for the high lift capability of rotor blades. The aerodynamic analysis is coupled with the structural model to formulate the complete coupled equations of motion for aeroelastic analyses. Finally, a multidisciplinary optimization procedure is developed to improve the aerodynamic, structural and aeroelastic performance of the tilt rotor aircraft. The objective functions include the figure of merit in hover and the high speed cruise propulsive efficiency. Structural, aerodynamic and aeroelastic stability criteria are imposed as constraints on the problem. The Kreisselmeier-Steinhauser function is used to formulate the multiobjective function problem. The search direction is determined by the Broyden-Fletcher-Goldfarb-Shanno algorithm. The optimum results are compared with the baseline values and show significant improvements in the overall performance of the tilt rotor blade.
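    The Kreisselmeier-Steinhauser aggregation used to combine the multiple objectives and constraints into one smooth function can be sketched in a few lines; the implementation below is a generic textbook form, not the paper's exact formulation:

```python
import numpy as np

def kreisselmeier_steinhauser(g, rho=50.0):
    """Smooth conservative envelope of the component functions g_i.

    KS(g) = g_max + ln(sum_i exp(rho * (g_i - g_max))) / rho
    The shift by g_max avoids overflow; KS -> max(g) as rho -> infinity.
    """
    g = np.asarray(g, dtype=float)
    g_max = g.max()
    return g_max + np.log(np.exp(rho * (g - g_max)).sum()) / rho

# Three constraint residuals; KS slightly overestimates their maximum
print(kreisselmeier_steinhauser([-0.2, 0.05, -0.4]))
```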

  5. A novel technique of serial biopsy in mouse brain tumour models.

    Directory of Open Access Journals (Sweden)

    Sasha Rogers

    Full Text Available Biopsy is often used to investigate brain tumour-specific abnormalities so that treatments can be appropriately tailored. Dacomitinib (PF-00299804) is a tyrosine kinase inhibitor (TKI), which is predicted to only be effective in cancers where the targets of this drug (EGFR, ERBB2, ERBB4) are abnormally active. Here we describe a method by which serial biopsy can be used to validate response to dacomitinib treatment in vivo using a mouse glioblastoma model. In order to determine the feasibility of conducting serial brain biopsies in mouse models with minimal morbidity and, if successful, investigate whether this can facilitate evaluation of chemotherapeutic response, an orthotopic model of glioblastoma was used. Immunodeficient mice received cortical implants of the human glioblastoma cell line, U87MG, modified to express the constitutively-active EGFR mutant EGFRvIII, GFP and luciferase. Tumour growth was monitored using bioluminescence imaging. Upon attainment of a moderate tumour size, free-hand biopsy was performed on a subgroup of animals. Animal monitoring using a neurological severity score (NSS) showed that all mice survived the procedure with minimal perioperative morbidity and recovered to similar levels as controls over a period of five days. The technique was used to evaluate dacomitinib-mediated inhibition of EGFRvIII two hours after drug administration. We show that serial tissue samples can be obtained, that the samples retain histological features of the tumour, and are of sufficient quality to determine response to treatment. This approach represents a significant advance in murine brain surgery that may be applicable to other brain tumour models. Importantly, the methodology has the potential to accelerate the preclinical in vivo drug screening process.

  6. A technique for the geometric modeling of underground surfaces: Nevada Nuclear Waste Storage Investigations Project

    International Nuclear Information System (INIS)

    Williams, R.L.

    1988-03-01

    There is a need within the Nevada Nuclear Waste Storage Investigation (NNWSI) project to develop three-dimensional surface definitions for the subterranean stratigraphies at Yucca Mountain, Nevada. The nature of the data samples available to the project requires an interpolation technique that can perform well with sparse and irregularly spaced data. Following an evaluation of the relevant existing methods, a new technique, Multi-Kernel Modulation (MKM), is presented. MKM interpolates sparse and irregularly spaced data by modulating a polynomial trend surface with a linear summation of regular surfaces (kernels). A comparative discussion of MKM, Kriging, and Multiquadric Analysis reveals that MKM has the advantage of simplicity and efficiency when used with sparse samples. An example of the use of MKM to model a complex topography is presented. 24 refs., 6 figs., 2 tabs

  7. Experimental models of brain ischemia: a review of techniques, magnetic resonance imaging and investigational cell-based therapies

    Directory of Open Access Journals (Sweden)

    Alessandra Canazza

    2014-02-01

    Full Text Available Stroke continues to be a significant cause of death and disability worldwide. Although major advances have been made in the past decades in prevention, treatment and rehabilitation, enormous challenges remain in translating new therapeutic approaches from bench to bedside. Thrombolysis, while routinely used for ischemic stroke, is only a viable option within a narrow time window. Recently, progress in stem cell biology has opened up avenues to therapeutic strategies aimed at supporting and replacing neural cells in infarcted areas. Realistic experimental animal models are crucial to understand the mechanisms of neuronal survival following ischemic brain injury and to develop therapeutic interventions. Current studies on experimental stroke therapies evaluate the efficiency of neuroprotective agents and cell-based approaches using primarily rodent models of permanent or transient focal cerebral ischemia. In parallel, advancements in imaging techniques permit better mapping of the spatiotemporal evolution of the lesioned cortex and its functional responses. This article provides a condensed conceptual review of the state of the art of this field, from models and magnetic resonance imaging techniques through to stem cell therapies.

  8. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

    Science.gov (United States)

    Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene

    2015-05-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.

  9. Base Oils Biodegradability Prediction with Data Mining Techniques

    Directory of Open Access Journals (Sweden)

    Malika Trabelsi

    2010-02-01

    Full Text Available In this paper, we apply various data mining techniques, including continuous numeric and discrete classification prediction models of base oil biodegradability, with emphasis on improving prediction accuracy. The results show that highly biodegradable oils can be better predicted through numeric models. In contrast, classification models did not uncover a similar dichotomy. With the exception of Memory Based Reasoning and Decision Trees, the tested classification techniques achieved high classification accuracy. However, the Decision Trees technique helped uncover the most significant predictors. A simple classification rule derived from this predictor resulted in good classification accuracy. The application of this rule enables efficient classification of base oils into either low or high biodegradability classes with high accuracy. For the latter, a higher-precision biodegradability prediction can be obtained using continuous modeling techniques.
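    The kind of shallow, human-readable rule the Decision Trees technique yields can be illustrated with scikit-learn; the descriptors and labels below are synthetic stand-ins for the base-oil dataset, which is not reproduced here:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
# Hypothetical molecular descriptors for 200 base oils; the study's real
# predictors and labels are not reproduced here
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0.2).astype(int)          # 1 = highly biodegradable, 0 = low

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# A shallow tree reads off as a simple, human-interpretable classification rule
print(export_text(tree, feature_names=["descr_a", "descr_b", "descr_c"]))
```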

  10. Quality Attribute Techniques Framework

    Science.gov (United States)

    Chiam, Yin Kia; Zhu, Liming; Staples, Mark

    The quality of software is achieved during its development. Development teams use various techniques to investigate, evaluate and control potential quality problems in their systems. These “Quality Attribute Techniques” target specific product qualities such as safety or security. This paper proposes a framework to capture important characteristics of these techniques. The framework is intended to support process tailoring, by facilitating the selection of techniques for inclusion into process models that target specific product qualities. We use risk management as a theory to accommodate techniques for many product qualities and lifecycle phases. Safety techniques have motivated the framework, and safety and performance techniques have been used to evaluate the framework. The evaluation demonstrates the ability of quality risk management to cover the development lifecycle and to accommodate two different product qualities. We identify advantages and limitations of the framework, and discuss future research on the framework.

  11. Conventional and Piecewise Growth Modeling Techniques: Applications and Implications for Investigating Head Start Children's Early Literacy Learning

    Science.gov (United States)

    Hindman, Annemarie H.; Cromley, Jennifer G.; Skibbe, Lori E.; Miller, Alison L.

    2011-01-01

    This article reviews the mechanics of conventional and piecewise growth models to demonstrate the unique affordances of each technique for examining the nature and predictors of children's early literacy learning during the transition from preschool through first grade. Using the nationally representative Family and Child Experiences Survey…

  12. Efficient cycle jumping techniques for the modelling of materials and structures under cyclic mechanical and thermal loading

    International Nuclear Information System (INIS)

    Dunne, F.P.E.; Hayhurst, D.R.

    1994-01-01

    Highly efficient cycle jumping algorithms have been developed for the calculation of stress and damage histories under both cyclic mechanical and cyclic thermal loading. The techniques have been shown to be suitable for cyclic plasticity; creep-cyclic plasticity interaction; and creep-dominated material behaviour. The cycle jumping algorithms have been validated by comparing predictions made using the cycle jumping technique against the full calculation involving the integration of the equations around all cycles. Excellent agreement has been achieved, and significant reductions in computer processing time of up to 90% have been obtained by using the cycle jumping technique. A further cycle jumping technique has been developed for full component analysis, using a viscoplastic damage finite element solver, which enables stress redistribution to be modelled. The behaviour and lifetime of a slag tap component subjected to cyclic thermal loading have been predicted. Cyclic plasticity damage and micro-crack initiation is predicted to occur at the water cooling duct after 2,974 cycles, with damage and micro-crack evolution arresting after 60,000 cycles. (author). 18 refs., 13 figs., 4 photos
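    A cycle jumping scheme in its simplest form integrates one cycle fully, then extrapolates the damage state across a block of skipped cycles whose length is limited by an accuracy criterion. The sketch below is a generic illustration with a made-up damage evolution law and tolerance, not the authors' algorithm:

```python
import numpy as np

def cycle_jump(damage, d_per_cycle, max_rel_change=0.02):
    """Skip a block of cycles by linear extrapolation of the damage state.

    The jump length is limited so the extrapolated damage increment stays
    within max_rel_change of the current value, preserving accuracy.
    """
    n_jump = max(1, int(max_rel_change * damage / max(d_per_cycle, 1e-16)))
    return damage + n_jump * d_per_cycle, n_jump

damage, cycle = 1e-3, 0
while damage < 1.0 and cycle < 10**7:
    # stand-in for a full constitutive integration over one loading cycle
    d_per_cycle = 5e-6 * (1.0 + 4.0 * damage)
    damage, n = cycle_jump(damage, d_per_cycle)
    cycle += n
print(f"failure (damage = 1) predicted near cycle {cycle}")
```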

  13. Near-real-time regional troposphere models for the GNSS precise point positioning technique

    International Nuclear Information System (INIS)

    Hadas, T; Kaplon, J; Bosy, J; Sierny, J; Wilgan, K

    2013-01-01

    The GNSS precise point positioning (PPP) technique requires the application of high-quality products (orbits and clocks), since their errors directly affect the quality of positioning. For real-time purposes it is possible to utilize ultra-rapid precise orbits and clocks, which are disseminated through the Internet. In order to eliminate as many unknown parameters as possible, one may introduce external information on zenith troposphere delay (ZTD). It is desirable that the a priori model be accurate and reliable, especially for real-time applications. One of the open problems in GNSS positioning is troposphere delay modelling on the basis of ground meteorological observations. The Institute of Geodesy and Geoinformatics of Wroclaw University of Environmental and Life Sciences (IGG WUELS) has developed two independent regional troposphere models for the territory of Poland. The first is estimated in a near-real-time regime using GNSS data from a Polish ground-based augmentation system named ASG-EUPOS, established by the Polish Head Office of Geodesy and Cartography (GUGiK) in 2008. The second is based on meteorological parameters (temperature, pressure and humidity) gathered from various meteorological networks operating over the area of Poland and surrounding countries. This paper describes the methodology of both model calculation and verification. It also presents results of applying various ZTD models to kinematic PPP in post-processing mode using the Bernese GPS Software. Positioning results were used to assess the quality of the developed models during changing weather conditions. Finally, the impact of model application on the precision, accuracy and convergence time of simulated real-time PPP is discussed. (paper)
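    One standard way to derive a priori delay information from surface meteorological parameters is the Saastamoinen model for the hydrostatic component; the sketch below implements that widely published formula (the paper's regional models are more elaborate), with illustrative station values:

```python
import numpy as np

def zenith_hydrostatic_delay(p_hpa, lat_rad, h_m):
    """Saastamoinen zenith hydrostatic delay in metres from surface pressure."""
    return 0.0022768 * p_hpa / (1.0 - 0.00266 * np.cos(2.0 * lat_rad) - 0.28e-6 * h_m)

# Illustrative values for a station near Wroclaw (~51.1 N, 120 m, 1005 hPa)
zhd = zenith_hydrostatic_delay(1005.0, np.radians(51.1), 120.0)
print(f"ZHD = {zhd:.3f} m")   # about 2.29 m; the wet component adds roughly 0.05-0.4 m
```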

  14. Respirometry techniques and activated sludge models

    NARCIS (Netherlands)

    Benes, O.; Spanjers, H.; Holba, M.

    2002-01-01

    This paper aims to explain results of respirometry experiments using Activated Sludge Model No. 1. In cases of insufficient fit of ASM No. 1, further modifications to the model were carried out and the so-called "Enzymatic model" was developed. The best-fit method was used to determine the effect of

  15. Wind Turbine Tower Vibration Modeling and Monitoring by the Nonlinear State Estimation Technique (NSET)

    Directory of Open Access Journals (Sweden)

    Peng Guo

    2012-12-01

    Full Text Available With appropriate vibration modeling and analysis the incipient failure of key components such as the tower, drive train and rotor of a large wind turbine can be detected. In this paper, the Nonlinear State Estimation Technique (NSET) has been applied to model turbine tower vibration to good effect, providing an understanding of the tower vibration dynamic characteristics and the main factors influencing these. The developed tower vibration model comprises two different parts: a sub-model used for below rated wind speed; and another for above rated wind speed. Supervisory control and data acquisition system (SCADA) data from a single wind turbine collected from March to April 2006 is used in the modeling. Model validation has been subsequently undertaken and is presented. This research has demonstrated the effectiveness of the NSET approach to tower vibration; in particular its conceptual simplicity, clear physical interpretation and high accuracy. The developed and validated tower vibration model was then used to successfully detect blade angle asymmetry that is a common fault that should be remedied promptly to improve turbine performance and limit fatigue damage. The work also shows that condition monitoring is improved significantly if the information from the vibration signals is complemented by analysis of other relevant SCADA data such as power performance, wind speed, and rotor loads.
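    A generic NSET estimator can be sketched as follows: a memory matrix D of normal-operation states is queried with a similarity operator, and the weighted combination of memorised states gives the expected "healthy" value, whose residual against the observation flags anomalies. The similarity kernel and the synthetic SCADA-like data below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def similarity(x, y):
    """Nonlinear operator: similarity between two state vectors (a common NSET choice)."""
    return 1.0 / (1.0 + np.linalg.norm(x - y))

def nset_estimate(D, x_obs):
    """Estimate the expected normal state for x_obs from memory matrix D (states as columns)."""
    m = D.shape[1]
    G = np.array([[similarity(D[:, i], D[:, j]) for j in range(m)] for i in range(m)])
    a = np.array([similarity(D[:, i], x_obs) for i in range(m)])
    w, *_ = np.linalg.lstsq(G, a, rcond=None)   # weights of the memorised states
    return D @ w

rng = np.random.default_rng(2)
D = rng.normal(size=(3, 50))        # memory matrix: e.g. wind speed, power, tower vibration
x_obs = D[:, 0] + 0.05 * rng.normal(size=3)
residual = x_obs - nset_estimate(D, x_obs)   # persistent large residuals flag anomalies
print(residual)
```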

  16. Modeling techniques for predicting long-term consequences of the effects of radiation on natural aquatic populations and ecosystems

    International Nuclear Information System (INIS)

    Van Winkle, W.

    1977-01-01

    Appropriate modeling techniques already exist for investigating some long-term consequences of the effects of radiation on natural aquatic populations and ecosystems, even if to date these techniques have not been used for this purpose. At the low levels of irradiation estimated to occur in natural aquatic systems, effects are difficult to detect at even the individual level much less the population or ecosystem level where the subtle effects of radiation are likely to be completely overshadowed by the effects of other environmental factors and stresses and the natural variability of the system. The claim that population and ecosystem models can be accurate and reliable predictive tools in assessing any stress has been oversold. Nonetheless, the use of these tools can be useful for learning more about the effects of radioactive releases on aquatic populations and ecosystems

  17. GRAVTool, Advances on the Package to Compute Geoid Models by the Remove-Compute-Restore Technique, Following Helmert's Condensation Method

    Science.gov (United States)

    Marotta, G. S.

    2017-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astrogeodetic data or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and Global Geopotential Models (GGM), respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and adjust these models to a local vertical datum. This research presents advances in the package called GRAVTool for computing geoid models by the RCR technique, following Helmert's condensation method, and its application to a study area. The study area comprises the Federal District of Brazil, covering 6000 km² of undulating relief with heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example show that the geoid model computed by the GRAVTool package, after analysis of the density, DTM and GGM values, is well matched to the reference values used in the study area. The accuracy of the computed model (σ = ±0.058 m, RMS = 0.067 m, maximum = 0.124 m and minimum = -0.155 m), using a density value of 2.702 g/cm³ ±0.024 g/cm³, the DTM SRTM Void Filled 3 arc-second and the GGM EIGEN-6C4 up to degree and order 250, matches the uncertainty (σ = ±0.073 m) of 26 randomly spaced points where the geoid was determined by geometric levelling supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ±0.076 m, RMS = 0.098 m, maximum = 0.320 m and minimum = -0.061 m).
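    In compact form, the RCR technique can be summarised by the following standard equations (generic notation, not necessarily GRAVTool's internal symbols): the long-wavelength (GGM) and short-wavelength (terrain) contributions are removed from the observed gravity anomalies, the residual geoid is computed by Stokes integration, and the removed parts plus the indirect effect are restored:

```latex
% Remove-Compute-Restore in compact, generic notation
\begin{align}
  \Delta g_{\mathrm{res}} &= \Delta g_{\mathrm{obs}}
      - \Delta g_{\mathrm{GGM}} - \Delta g_{\mathrm{RTM}}
      && \text{(remove long and short wavelengths)}\\
  N_{\mathrm{res}} &= \frac{R}{4\pi\gamma}
      \iint_{\sigma} \Delta g_{\mathrm{res}}\, S(\psi)\, d\sigma
      && \text{(compute residual geoid by Stokes integration)}\\
  N &= N_{\mathrm{GGM}} + N_{\mathrm{res}} + N_{\mathrm{ind}}
      && \text{(restore, with indirect effect } N_{\mathrm{ind}}\text{)}
\end{align}
```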

  18. QUEST: A model to quantify uncertain emergency search techniques, theory and application

    International Nuclear Information System (INIS)

    Johnson, M.M.; Goldsby, M.E.; Plantenga, T.D.; Wilcox, W.B.; Hensley, W.K.

    1996-01-01

    As recent world events show, criminal and terrorist access to nuclear materials is a growing national concern. The national laboratories are taking the lead in developing technologies to counter these potential threats to our national security. Sandia National Laboratories, with support from Pacific Northwest Laboratory and the Remote Sensing Laboratory, has developed QUEST (a model to Quantify Uncertain Emergency Search Techniques) to enhance the performance of organizations in the search for lost or stolen nuclear material. In addition, QUEST supports a wide range of other applications, such as environmental monitoring, nuclear facilities inspections, and searcher training. QUEST simulates the search for nuclear materials and calculates detector response for various source types and locations. The probability of detecting a radioactive source during a search is a function of many different variables. Through calculation of dynamic detector response, QUEST makes possible quantitative comparisons of various sensor technologies and search patterns. The QUEST model can be used to examine the impact of new detector technologies, explore alternative search concepts, and provide interactive search/inspector training

  19. Modeling the tagged-neutron UXO identification technique using the Geant4 toolkit

    International Nuclear Information System (INIS)

    Zhou, Y.; Zhu, X.; Wang, Y.; Mitra, S.

    2012-01-01

    It is proposed to use 14 MeV neutrons tagged by the associated particle neutron time-of-flight technique (APnTOF) to identify the fillers of unexploded ordnance (UXO) by characterizing their carbon, nitrogen and oxygen contents. To facilitate the design and construction of a prototype system, a preliminary simulation model was developed using the Geant4 toolkit. This work established the toolkit environment for (a) generating tagged neutrons, (b) their transport and interactions within a sample to induce emission and detection of characteristic gamma-rays, and (c) 2D and 3D image reconstruction of the interrogated object using the neutron and gamma-ray time-of-flight information. Using the modeling, this article demonstrates the novelty of the tagged-neutron approach for extracting useful signals with high signal-to-background discrimination of an object-of-interest from its environment. Simulations indicated that a UXO filled with the RDX explosive hexogen (C3H6O6N6) can be identified to a depth of 20 cm when buried in soil. (author)

  20. Hydrological time series modeling: A comparison between adaptive neuro-fuzzy, neural network and autoregressive techniques

    Science.gov (United States)

    Lohani, A. K.; Kumar, Rakesh; Singh, R. D.

    2012-06-01

    Time series modeling is necessary for the planning and management of reservoirs. More recently, soft computing techniques have been used in hydrological modeling and forecasting. In this study, the potential of artificial neural networks and neuro-fuzzy systems in monthly reservoir inflow forecasting is examined by developing and comparing monthly reservoir inflow prediction models based on autoregressive (AR), artificial neural network (ANN) and adaptive neural-based fuzzy inference system (ANFIS) approaches. To account for the effect of monthly periodicity in the flow data, cyclic terms are also included in the ANN and ANFIS models. Working with time series flow data of the Sutlej River at Bhakra Dam, India, several ANN and adaptive neuro-fuzzy models are trained with different input vectors. To evaluate the performance of the selected ANN and adaptive neural fuzzy inference system (ANFIS) models, a comparison is made with the autoregressive (AR) models. The ANFIS model trained with the input data vector including previous inflows and cyclic terms of monthly periodicity has shown a significant improvement in forecast accuracy in comparison with the ANFIS models trained with input vectors considering only previous inflows. In all cases ANFIS gives more accurate forecasts than the AR and ANN models. The proposed ANFIS model coupled with the cyclic terms is shown to provide a better representation of monthly inflows for the planning and operation of reservoirs.
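
    The AR baseline with cyclic terms reduces to an ordinary least-squares regression of each monthly inflow on its previous values plus sine/cosine terms of the month. The sketch below is our illustration; the lag order and the fitting route are assumptions, and the paper's ANN/ANFIS models are not reproduced here.

    ```python
    # AR(p) inflow model augmented with cyclic terms for monthly periodicity.
    import numpy as np

    def fit_ar_cyclic(q, p=2):
        """q: monthly inflow series (1-D array); returns regression coefficients."""
        rows, targets = [], []
        for t in range(p, len(q)):
            phase = 2.0 * np.pi * (t % 12) / 12.0
            lags = q[t - p:t][::-1]                    # q[t-1], ..., q[t-p]
            rows.append(np.r_[1.0, lags, np.sin(phase), np.cos(phase)])
            targets.append(q[t])
        coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
        return coef                                    # [bias, AR terms, cyclic terms]
    ```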

  1. Modeling and Simulation of Voids in Composite Tape Winding Process Based on Domain Superposition Technique

    Science.gov (United States)

    Deng, Bo; Shi, Yaoyao

    2017-11-01

    The tape winding technology is an effective way to fabricate rotationally symmetric composite parts. Nevertheless, some inevitable defects seriously influence the performance of winding products. One of the crucial ways to assess the quality of fiber-reinforced composite products is to examine their void content, and significant improvement in the products' mechanical properties can be achieved by minimizing void defects. Two methods were applied in this study, finite element analysis and experimental testing, to investigate the mechanism of void formation during composite tape winding. Based on the theories of interlayer intimate contact and the Domain Superposition Technique (DST), a three-dimensional model of prepreg tape voids was built in SolidWorks. Thereafter, the ABAQUS simulation software was used to simulate the change of void content with pressure and temperature. Finally, a series of experiments was performed to determine the accuracy of the model-based predictions. The results showed that the model is effective for predicting the void content in the composite tape winding process.

  2. Computed tomography landmark-based semi-automated mesh morphing and mapping techniques: generation of patient specific models of the human pelvis without segmentation.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa

    2015-04-13

    Current methods for the development of pelvic finite element (FE) models generally are based upon specimen specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R² = 0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
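
    The landmark step can be illustrated by estimating an affine map between corresponding source and target landmarks and applying it to all source-mesh nodes. This is only a sketch of the idea with hypothetical arrays; the study's full method additionally maps nodes onto the bone boundary and smooths the mesh.

    ```python
    # Landmark-based affine morphing of a source mesh toward a target anatomy.
    import numpy as np

    def affine_from_landmarks(src_pts, dst_pts):
        """src_pts, dst_pts: (n, 3) corresponding landmark coordinates."""
        A = np.hstack([src_pts, np.ones((len(src_pts), 1))])  # homogeneous coords
        T, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)       # (4, 3) affine map
        return T

    def morph_nodes(nodes, T):
        """Apply the fitted map to every node of the source FE mesh."""
        return np.hstack([nodes, np.ones((len(nodes), 1))]) @ T
    ```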

  3. Empirical techniques in finance

    CERN Document Server

    Bhar, Ramaprasad

    2005-01-01

    This book offers the opportunity to study and experience advanced empirical techniques in finance and in general financial economics. It is not only suitable for students with an interest in the field, it is also highly recommended for academic researchers as well as researchers in the industry. The book focuses on the contemporary empirical techniques used in the analysis of financial markets and how these are implemented using actual market data. With an emphasis on implementation, this book helps focus on strategies for rigorously combining finance theory and modeling technology to extend extant considerations in the literature. The main aim of this book is to equip the readers with an array of tools and techniques that will allow them to explore financial market problems with a fresh perspective. In this sense it is not another volume in econometrics. Of course, the traditional econometric methods are still valid and important; the contents of this book will bring in other related modeling topics tha...

  4. Experimental techniques and theoretical models for the study of integral 14 MeV neutron cross sections

    International Nuclear Information System (INIS)

    Csikai, J.

    1981-01-01

    Owing to technical reasons, most of the data for fast neutron-induced reactions were measured at 14 MeV and the free parameters in nuclear reaction models have been determined at this energy. The discrepancies between experiment and theory are often due to the unmeasured or unreliable experimental data; therefore, it is important to survey the present techniques used for the measurement of total, elastic, nonelastic and partial nonelastic [(n,xn); (n,x charged); (n,f); (n,γ)] cross sections for 14 MeV neutrons. Systematics in the data as well as theoretical and semi-empirical models are also outlined. (author)

  5. Coronary artery plaques: Cardiac CT with model-based and adaptive-statistical iterative reconstruction technique

    International Nuclear Information System (INIS)

    Scheffel, Hans; Stolzmann, Paul; Schlett, Christopher L.; Engel, Leif-Christopher; Major, Gyöngi Petra; Károlyi, Mihály; Do, Synho; Maurovich-Horvat, Pál; Hoffmann, Udo

    2012-01-01

    Objectives: To compare image quality of coronary artery plaque visualization at CT angiography with images reconstructed with filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model based iterative reconstruction (MBIR) techniques. Methods: The coronary arteries of three ex vivo human hearts were imaged by CT and reconstructed with FBP, ASIR and MBIR. Coronary cross-sectional images were co-registered between the different reconstruction techniques and assessed for qualitative and quantitative image quality parameters. Readers were blinded to the reconstruction algorithm. Results: A total of 375 triplets of coronary cross-sectional images were co-registered. Using MBIR, 26% of the images were rated as having excellent overall image quality, which was significantly better as compared to ASIR and FBP (4% and 13%, respectively, all p < 0.001). Qualitative assessment of image noise demonstrated a noise reduction by using ASIR as compared to FBP (p < 0.01) and further noise reduction by using MBIR (p < 0.001). The contrast-to-noise-ratio (CNR) using MBIR was better as compared to ASIR and FBP (44 ± 19, 29 ± 15, 26 ± 9, respectively; all p < 0.001). Conclusions: Using MBIR improved image quality, reduced image noise and increased CNR as compared to the other available reconstruction techniques. This may further improve the visualization of coronary artery plaque and allow radiation reduction.

  6. Optimization of DNA Sensor Model Based Nanostructured Graphene Using Particle Swarm Optimization Technique

    Directory of Open Access Journals (Sweden)

    Hediyeh Karimi

    2013-01-01

    It has been predicted that graphene nanomaterials will be among the candidate materials for post-silicon electronics due to their astonishing properties such as high carrier mobility, thermal conductivity, and biocompatibility. Graphene is a zero-gap semimetal nanomaterial with a demonstrated ability to serve as an excellent candidate for DNA sensing. Graphene-based DNA sensors have been used to detect DNA adsorption and to examine the DNA concentration in an analyte solution. In particular, there is an essential need for developing cost-effective DNA sensors, given their suitability for the diagnosis of genetic or pathogenic diseases. In this paper, the particle swarm optimization technique is employed to optimize the analytical model of a graphene-based DNA sensor which is used for electrical detection of DNA molecules. The results are reported for 5 different concentrations, covering a range from 0.01 nM to 500 nM. The comparison of the optimized model with the experimental data shows an accuracy of more than 95%, which verifies that the optimized model is reliable for use in any application of the graphene-based DNA sensor.
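
    The optimization step above is standard particle swarm optimization. A generic, self-contained PSO of the kind used to fit an analytical sensor model to measurements might look as follows; the inertia and acceleration constants are common textbook defaults, and the example objective named in the comment is hypothetical.

    ```python
    # Generic particle swarm optimization (PSO) sketch.
    # Usage idea (hypothetical): pso(lambda p: model_error(p, measured_iv), [(0, 1)] * 4)
    import numpy as np

    def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        lo, hi = np.array(bounds, dtype=float).T
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, (n_particles, len(lo)))   # particle positions
        v = np.zeros_like(x)                              # particle velocities
        pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, *x.shape))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([objective(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()
    ```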

  7. Characterization technique for inhomogeneous 4H-SiC Schottky contacts: A practical model for high temperature behavior

    Science.gov (United States)

    Brezeanu, G.; Pristavu, G.; Draghici, F.; Badila, M.; Pascu, R.

    2017-08-01

    In this paper, a characterization technique for 4H-SiC Schottky diodes with varying levels of metal-semiconductor contact inhomogeneity is proposed. A macro-model suitable for high-temperature evaluation of SiC Schottky contacts with discrete barrier height non-uniformity is introduced, in order to determine the temperature interval and bias domain where the electrical behavior of the devices can be described by the thermionic emission theory (i.e. where they show quasi-ideal performance). A minimal set of parameters is associated: the effective barrier height and the non-uniformity factor peff. Model-extracted parameters are discussed in comparison with literature-reported results based on existing inhomogeneity approaches, in terms of complexity and physical relevance. Special consideration was given to models based on a Gaussian distribution of barrier heights on the contact surface. The proposed methodology is validated by electrical characterization of nickel silicide Schottky contacts on silicon carbide (4H-SiC), where a discrete barrier distribution can be considered. The same method is applied to inhomogeneous Pt/4H-SiC contacts. The forward characteristics measured at different temperatures are accurately reproduced using this inhomogeneous barrier model. A quasi-ideal behavior is identified over intervals spanning 200 °C for all measured Schottky samples, with Ni and Pt contact metals. A predictable exponential current-voltage variation over at least 2 orders of magnitude is also proven, with a stable barrier height and effective area for temperatures up to 400 °C. This application-oriented characterization technique is confirmed by using model parameters to fit a SiC Schottky high-temperature sensor's response.
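
    The quasi-ideal regime referred to above is the one in which the standard thermionic-emission I-V law holds. The sketch below evaluates that law; treating the non-uniformity factor peff as a scaling of the effective emitting area is our reading for illustration only, and the Richardson constant and ideality factor are typical literature values rather than the paper's extracted parameters.

    ```python
    # Thermionic-emission I-V of a (possibly inhomogeneous) Schottky contact.
    import numpy as np

    K_B = 8.617e-5      # Boltzmann constant (eV/K)
    A_STAR = 146.0      # Richardson constant for 4H-SiC (A cm^-2 K^-2), literature value

    def schottky_iv(V, T, phi_eff, p_eff, area_cm2, n=1.05):
        """Forward current (A) at bias V (volts) and temperature T (K)."""
        # p_eff scales the emitting area (illustrative reading of the macro-model)
        I_s = p_eff * area_cm2 * A_STAR * T**2 * np.exp(-phi_eff / (K_B * T))
        return I_s * (np.exp(V / (n * K_B * T)) - 1.0)
    ```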

  8. Dante's Comedy: precursors of psychoanalytic technique and psyche.

    Science.gov (United States)

    Szajnberg, Nathan Moses

    2010-02-01

    This paper uses a literary approach to explore what common ground exists in both psychoanalytic technique and views of the psyche, of 'person'. While Western literature has developed various views of psyche and person over centuries, there have been crystallizing, seminal portraits, for instance Shakespeare's perspective on what is human, some of which have endured to the present. By using Dante's Commedia, particularly the Inferno, a 14th century poem that both integrates and revises previous models of psyche and personhood, we can examine what features of psyche, and 'techniques' in soul-healing psychoanalysts have inherited culturally. Discovering basic features of technique and model of psyche we share as psychoanalysts permits us to explore why we have differences in variations on technique and models of inner life.

  9. Automatic video segmentation employing object/camera modeling techniques

    NARCIS (Netherlands)

    Farin, D.S.

    2005-01-01

    Practically established video compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not

  10. Method for automatically evaluating a transition from a batch manufacturing technique to a lean manufacturing technique

    Science.gov (United States)

    Ivezic, Nenad; Potok, Thomas E.

    2003-09-30

    A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
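
    As an illustration of the kind of computation such an evaluation engine performs, the sketch below derives a few common lean metrics (lead time, value-added ratio, and work-in-process via Little's law) from per-step parameters. The step fields and the chosen metrics are our assumptions, not the patented model.

    ```python
    # Toy lean-manufacturing metrics computed from process step parameters.
    from dataclasses import dataclass

    @dataclass
    class ProcessStep:
        cycle_time_min: float   # value-added processing time per unit
        queue_time_min: float   # waiting/batch delay before the step

    def process_metrics(steps, throughput_per_min):
        lead_time = sum(s.cycle_time_min + s.queue_time_min for s in steps)
        value_added = sum(s.cycle_time_min for s in steps)
        return {
            "lead_time_min": lead_time,
            "value_added_ratio": value_added / lead_time,
            # Little's law: work-in-process = throughput x lead time
            "wip_units": throughput_per_min * lead_time,
        }
    ```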

  11. Biodegradable Magnesium Stent Treatment of Saccular Aneurysms in a Rat Model - Introduction of the Surgical Technique.

    Science.gov (United States)

    Nevzati, Edin; Rey, Jeannine; Coluccia, Daniel; D'Alonzo, Donato; Grüter, Basil; Remonda, Luca; Fandino, Javier; Marbacher, Serge

    2017-10-01

    The steady progress in the armamentarium of techniques available for endovascular treatment of intracranial aneurysms requires affordable and reproducible experimental animal models to test novel embolization materials such as stents and flow diverters. The aim of the present project was to design a safe, fast, and standardized surgical technique for stent-assisted embolization of saccular aneurysms in a rat model. Saccular aneurysms were created from an arterial graft from the descending aorta. The aneurysms were microsurgically transplanted through end-to-side anastomosis to the infrarenal abdominal aorta of a syngeneic male Wistar rat weighing >500 g. Following aneurysm anastomosis, aneurysm embolization was performed using balloon-expandable magnesium stents (2.5 mm x 6 mm). The stent system was introduced retrogradely from the lower abdominal aorta using a modified Seldinger technique. Following a pilot series of 6 animals, a total of 67 rats were operated on according to established standard operating procedures. Mean surgery time, mean anastomosis time, and mean suturing time of the artery puncture site were 167 ± 22 min, 26 ± 6 min and 11 ± 5 min, respectively. The mortality rate was 6% (n=4). The morbidity rate was 7.5% (n=5), and in-stent thrombosis was found in 4 cases (n=2 early, n=2 late in-stent thrombosis). The results demonstrate the feasibility of standardized stent occlusion of saccular sidewall aneurysms in rats, with low rates of morbidity and mortality. This stent embolization procedure combines the opportunity to study novel concepts of stent- or flow-diverter-based devices as well as the molecular aspects of healing.

  12. Numerical models: Detailing and simulation techniques aimed at comparison with experimental data, support to test result interpretation

    International Nuclear Information System (INIS)

    Lin Chiwen

    2001-01-01

    This part of the presentation discusses the modelling details required and the simulation techniques available for analyses, facilitating the comparison with the experimental data and providing support for interpretation of the test results. It is organised to cover the following topics: analysis inputs; basic modelling requirements for the reactor coolant system; methods applicable for the reactor cooling system; consideration of damping values and integration time steps; typical analytic models used for analysis of the reactor pressure vessel and internals; hydrodynamic mass and fluid damping for the internals analysis; impact elements for fuel analysis; and the PEI theorem and its applications. The intention of these topics is to identify the key parameters associated with the models of analysis and analytical methods. This should provide a proper basis for useful comparison with the test results

  13. Comparative Accuracy of Facial Models Fabricated Using Traditional and 3D Imaging Techniques.

    Science.gov (United States)

    Lincoln, Ketu P; Sun, Albert Y T; Prihoda, Thomas J; Sutton, Alan J

    2016-04-01

    The purpose of this investigation was to compare the accuracy of facial models fabricated using facial moulage impression methods to the three-dimensional printed (3DP) fabrication methods using soft tissue images obtained from cone beam computed tomography (CBCT) and 3D stereophotogrammetry (3D-SPG) scans. A reference phantom model was fabricated using a 3D-SPG image of a human control form with ten fiducial markers placed on common anthropometric landmarks. This image was converted into the investigation control phantom model (CPM) using 3DP methods. The CPM was attached to a camera tripod for ease of image capture. Three CBCT and three 3D-SPG images of the CPM were captured. The DICOM and STL files from the three 3dMD and three CBCT were imported to the 3DP, and six testing models were made. Reversible hydrocolloid and dental stone were used to make three facial moulages of the CPM, and the impressions/casts were poured in type IV gypsum dental stone. A coordinate measuring machine (CMM) was used to measure the distances between each of the ten fiducial markers. Each measurement was made using one point as a static reference to the other nine points. The same measuring procedures were accomplished on all specimens. All measurements were compared between specimens and the control. The data were analyzed using ANOVA and Tukey pairwise comparison of the raters, methods, and fiducial markers. The ANOVA multiple comparisons showed a significant difference among the three methods. Models fabricated using 3D-SPG showed a statistical difference in comparison to the models fabricated using the traditional method of facial moulage and the 3DP models fabricated from CBCT imaging. 3DP models fabricated using 3D-SPG were less accurate than the CPM and models fabricated using facial moulage and CBCT imaging techniques. © 2015 by the American College of Prosthodontists.

  14. Use of Video Modeling to Teach Weight Lifting Techniques to Adults with Down Syndrome: A Pilot Study

    Science.gov (United States)

    Carter, Kathleen; Pennington, Robert; Ledford, Elizabeth

    2017-01-01

    As adults with Down syndrome (DS) age, their strength decreases resulting in difficulty performing activities of daily living. In the current study, we investigated the use of video modeling for teaching three adults with DS to perform weight lifting techniques. A multiple probe design across behaviors (i.e., lifts) was used to evaluate…

  15. Remote sensing and spatial statistical techniques for modelling Ommatissus lybicus (Hemiptera: Tropiduchidae) habitat and population densities.

    Science.gov (United States)

    Al-Kindi, Khalifa M; Kwan, Paul; R Andrew, Nigel; Welch, Mitchell

    2017-01-01

    In order to understand the distribution and prevalence of Ommatissus lybicus (Hemiptera: Tropiduchidae), as well as to analyse their current biogeographical patterns and predict their future spread, comprehensive and detailed information on environmental, climatic, and agricultural practices is essential. Spatial analytical techniques, such as remote sensing and spatial statistics tools, can help detect and model spatial links and correlations between the presence, absence and density of O. lybicus in response to climatic, environmental, and human factors. The main objective of this paper is to review remote sensing and relevant analytical techniques that can be applied in mapping and modelling the habitat and population density of O. lybicus. An exhaustive search of related literature revealed that there are very limited studies linking location-based infestation levels of pests like O. lybicus with climatic, environmental, and human-practice-related variables. This review also highlights the accumulated knowledge and addresses the gaps in this area of research. Furthermore, it makes recommendations for future studies, and gives suggestions on monitoring and surveillance methods in designing both local and regional level integrated pest management strategies for palm trees and other affected cultivated crops.

  16. Remote sensing and spatial statistical techniques for modelling Ommatissus lybicus (Hemiptera: Tropiduchidae) habitat and population densities

    Directory of Open Access Journals (Sweden)

    Khalifa M. Al-Kindi

    2017-08-01

    In order to understand the distribution and prevalence of Ommatissus lybicus (Hemiptera: Tropiduchidae), as well as to analyse their current biogeographical patterns and predict their future spread, comprehensive and detailed information on environmental, climatic, and agricultural practices is essential. Spatial analytical techniques, such as remote sensing and spatial statistics tools, can help detect and model spatial links and correlations between the presence, absence and density of O. lybicus in response to climatic, environmental, and human factors. The main objective of this paper is to review remote sensing and relevant analytical techniques that can be applied in mapping and modelling the habitat and population density of O. lybicus. An exhaustive search of related literature revealed that there are very limited studies linking location-based infestation levels of pests like O. lybicus with climatic, environmental, and human-practice-related variables. This review also highlights the accumulated knowledge and addresses the gaps in this area of research. Furthermore, it makes recommendations for future studies, and gives suggestions on monitoring and surveillance methods in designing both local and regional level integrated pest management strategies for palm trees and other affected cultivated crops.

  17. COST Action TU1208 - Working Group 3 - Electromagnetic modelling, inversion, imaging and data-processing techniques for Ground Penetrating Radar

    Science.gov (United States)

    Pajewski, Lara; Giannopoulos, Antonios; Sesnic, Silvestar; Randazzo, Andrea; Lambot, Sébastien; Benedetto, Francesco; Economou, Nikos

    2017-04-01

    This work aims at presenting the main results achieved by Working Group (WG) 3 "Electromagnetic methods for near-field scattering problems by buried structures; data processing techniques" of the COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (www.GPRadar.eu, www.cost.eu). The main objective of the Action, started in April 2013 and ending in October 2017, is to exchange and increase scientific-technical knowledge and experience of Ground Penetrating Radar (GPR) techniques in civil engineering, whilst promoting in Europe the effective use of this safe non-destructive technique. The Action involves more than 150 Institutions from 28 COST Countries, a Cooperating State, 6 Near Neighbour Countries and 6 International Partner Countries. Among the most interesting achievements of WG3, we wish to mention the following ones: (i) A new open-source version of the finite-difference time-domain simulator gprMax was developed and released. The new gprMax is written in Python and includes many advanced features such as anisotropic and dispersive-material modelling, building of realistic heterogeneous objects with rough surfaces, built-in libraries of antenna models, optimisation of parameters based on Taguchi's method, and more. (ii) A new freeware CAD was developed and released for the construction of two-dimensional gprMax models. This tool also includes scripts easing the execution of gprMax on multi-core machines or networks of computers, and scripts for basic plotting of gprMax results. (iii) A series of interesting freeware codes were developed and will be released by the end of the Action, implementing differential and integral forward-scattering methods for the solution of simple electromagnetic scattering problems involving buried objects. (iv) An open database of synthetic and experimental GPR radargrams was created, in cooperation with WG2. The idea behind this initiative is to give researchers the

  18. Big data - modelling of midges in Europe using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Cuellar, Ana Carolina; Kjær, Lene Jung; Skovgaard, Henrik

    2017-01-01

    coordinates of each trap, start and end dates of trapping. We used 120 environmental predictor variables together with Random Forest machine learning algorithms to predict the overall species distribution (probability of occurrence) and monthly abundance in Europe. We generated maps for every month... and the Obsoletus group, although abundance was generally higher for a longer period of time for C. imicola than for the Obsoletus group. Using machine learning techniques, we were able to model the spatial distribution in Europe for C. imicola and the Obsoletus group in terms of abundance and suitability...
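
    In outline, the modelling step is a random forest fitted on per-trap environmental predictors, whose predicted probability of occurrence is then mapped over a grid. The sketch below substitutes synthetic placeholder data for the study's 120 predictors and trap records.

    ```python
    # Species distribution sketch: random forest on environmental predictors.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 120))       # 120 environmental predictors per site
    y = rng.integers(0, 2, size=1000)      # presence/absence at the trap (placeholder)

    model = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    model.fit(X, y)
    prob_occurrence = model.predict_proba(X)[:, 1]   # mapped per grid cell
    print(f"out-of-bag accuracy: {model.oob_score_:.2f}")
    ```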

  19. Modelling skin penetration using the Laplace transform technique.

    Science.gov (United States)

    Anissimov, Y G; Watkinson, A

    2013-01-01

    The Laplace transform is a convenient mathematical tool for solving ordinary and partial differential equations. The application of this technique to problems arising in drug penetration through the skin is reviewed in this paper. © 2013 S. Karger AG, Basel.
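
    As a textbook illustration of the approach (not an equation from the review itself), the one-dimensional diffusion equation for a membrane of thickness h becomes an ordinary differential equation in the Laplace domain; with a constant donor concentration C_0 on one face and sink conditions on the other, the transformed flux at the inner boundary has a closed form whose s -> 0 limit recovers the steady-state Fick flux:

    ```latex
    % 1D membrane diffusion in the Laplace domain (standard result)
    \[
      \frac{\partial C}{\partial t} = D\,\frac{\partial^2 C}{\partial x^2}
      \quad\xrightarrow{\;\mathcal{L}\;}\quad
      s\,\hat{C}(x,s) = D\,\frac{d^{2}\hat{C}}{dx^{2}},
    \]
    \[
      \hat{J}(h,s) = \frac{C_0}{s}\,
        \frac{\sqrt{Ds}}{\sinh\!\bigl(h\sqrt{s/D}\bigr)}
      \;\xrightarrow{\;s\to 0\;}\;
      \frac{1}{s}\cdot\frac{D\,C_0}{h}.
    \]
    ```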

  20. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. The rat plays a very important role in the bone field, especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for the evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45 ± 8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were established by injecting tumor cells directly into the SD rat femur with an inoculation needle. In the first step of the experiment, 2×10⁵ to 1×10⁶ UMR106 cells in 50 µl were injected intraosseously into the median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsy and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counterstained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8×10⁵ tumor cells or more, which was much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsy findings matched closely (r=0.942; p<0.01), which demonstrated that Doppler ultrasonography is a convenient and reliable technique for measuring the cancer at any stage. The tumor growth curve showed that orthotopically implanted tumors expanded vigorously with time, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and high lung metastasis frequency. The present rat

  1. Development of acoustic model-based iterative reconstruction technique for thick-concrete imaging

    Science.gov (United States)

    Almansouri, Hani; Clayton, Dwight; Kisner, Roger; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2016-02-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.
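
    In outline, MBIR reconstructs the image as the minimizer of a data-mismatch term plus a regularizing prior. The sketch below uses a generic linear forward model and a quadratic finite-difference prior solved by plain gradient descent; the paper's acoustic forward model and actual optimizer are not reproduced.

    ```python
    # Conceptual MBIR: minimize ||y - A x||^2 + beta * ||D x||^2 by gradient descent.
    import numpy as np

    def mbir(y, A, beta=0.1, step=1e-3, iters=500):
        """y: measured data (m,); A: (m, n) forward model; returns image x (n,)."""
        m, n = A.shape
        x = np.zeros(n)
        D = np.eye(n) - np.eye(n, k=1)       # finite-difference smoothness prior
        for _ in range(iters):
            grad = A.T @ (A @ x - y) + beta * (D.T @ (D @ x))
            x -= step * grad
        return x
    ```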

  2. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2016-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.

  3. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2015-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.

  4. Prostate Cancer Probability Prediction By Machine Learning Technique.

    Science.gov (United States)

    Jović, Srđan; Miljković, Milica; Ivanović, Miljan; Šaranović, Milena; Arsić, Milena

    2017-11-26

    The main goal of the study was to explore the possibility of prostate cancer prediction by machine learning techniques. In order to improve the survival probability of prostate cancer patients, it is essential to build suitable prediction models of prostate cancer. A relevant prediction of prostate cancer makes it easier to devise suitable treatment based on the prediction results. Machine learning techniques are the most common techniques for the creation of predictive models; therefore, in this study several machine learning techniques were applied and compared. The obtained results were analyzed and discussed. It was concluded that machine learning techniques could be used for the relevant prediction of prostate cancer.
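
    The described workflow, training several classifiers on clinical features and comparing them, can be sketched with scikit-learn as follows. The synthetic features, labels, and the three chosen model families are placeholders, not the study's data or model set.

    ```python
    # Compare several classifiers for a binary clinical prediction task.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 8))        # e.g. PSA level, age, other clinical features
    y = rng.integers(0, 2, size=400)     # biopsy outcome (placeholder labels)

    for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                      ("random forest", RandomForestClassifier(n_estimators=200)),
                      ("support vector machine", SVC())]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
    ```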

  5. Modular techniques for dynamic fault-tree analysis

    Science.gov (United States)

    Patterson-Hine, F. A.; Dugan, Joanne B.

    1992-01-01

    It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.

  6. Comparison of groundwater residence time using isotope techniques and numerical groundwater flow model in Gneissic Terrain, Korea

    International Nuclear Information System (INIS)

    Bae, D.S.; Kim, C.S.; Koh, Y.K.; Kim, K.S.; Song, M.Y.

    1997-01-01

    The prediction of groundwater flow affecting the migration of radionuclides is an important component of the performance assessment of radioactive waste disposal. Groundwater flow in fractured rock mass is controlled by fracture networks, transmissivity and hydraulic gradient. Furthermore, the scale-dependent and anisotropic properties of hydraulic parameters result mainly from the irregular patterns of the fracture system, which are very complex to evaluate properly with the currently available techniques. For the purpose of characterizing groundwater flow in fractured rock mass, the discrete fracture network (DFN) concept is available, based on the assumption that groundwater flows only along fractures and that flowpaths in the rock mass are formed by interconnected fractures. To increase the reliability of the assessment of groundwater flow phenomena, a numerical groundwater flow model and isotopic techniques were applied. Fracture mapping and borehole acoustic scanning were performed to identify conductive fractures in gneissic terrane. Tracer techniques, using deuterium, oxygen-18 and tritium, were applied to evaluate the recharge area and groundwater residence time

  7. Structural modeling techniques by finite element method

    International Nuclear Information System (INIS)

    Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong

    1991-01-01

    This book covers the following topics. Chapter 1, finite element idealization: introduction; summary of the finite element method; equilibrium and compatibility in the finite element solution; degrees of freedom; symmetry and anti-symmetry; modeling guidelines; local analysis; example; references. Chapter 2, static analysis: structural geometry; finite element models; analysis procedure; modeling guidelines; references. Chapter 3, dynamic analysis: models for dynamic analysis; dynamic analysis procedures; modeling guidelines.

  8. A COGNITIVE APPROACH TO CORPORATE GOVERNANCE: A VISUALIZATION TEST OF MENTAL MODELS WITH THE COGNITIVE MAPPING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Garoui NASSREDDINE

    2012-01-01

    The idea of this paper is to determine the mental models of actors in the firm with respect to the cognitive approach of corporate governance. The paper takes a corporate governance perspective, discusses mental models and uses cognitive maps to view the diagrams showing the ways of thinking and the conceptualization of the cognitive approach. In addition, it employs a cognitive mapping technique. Returning to the systematic exploration of the grids for each actor, it concludes that there is a balance of concepts expressing their cognitive orientation.

  9. Reactor vital equipment determination techniques

    International Nuclear Information System (INIS)

    Bott, T.F.; Thomas, W.S.

    1983-01-01

    The Reactor Vital Equipment Determination Techniques program at the Los Alamos National Laboratory is discussed. The purpose of the program is to provide the Nuclear Regulatory Commission (NRC) with technical support in identifying vital areas at nuclear power plants using a fault-tree technique. A reexamination of some system modeling assumptions is being performed for the Vital Area Analysis Program. A short description of the vital area analysis and supporting research on modeling assumptions is presented. Perceptions of program modifications based on the research are outlined, and the status of high-priority research topics is discussed

  10. Comparison of several measure-correlate-predict models using support vector regression techniques to estimate wind power densities. A case study

    International Nuclear Information System (INIS)

    Díaz, Santiago; Carta, José A.; Matías, José M.

    2017-01-01

    Highlights:
    • Eight measure-correlate-predict (MCP) models used to estimate the wind power densities (WPDs) at a target site are compared.
    • Support vector regressions are used as the main prediction techniques in the proposed MCPs.
    • The most precise MCP uses two sub-models which predict wind speed and air density in an unlinked manner.
    • The most precise model allows the construction of a bivariable (wind speed and air density) WPD probability density function.
    • MCP models trained to minimise wind speed prediction error do not minimise WPD prediction error.
    Abstract: The long-term annual mean wind power density (WPD) is an important indicator of wind as a power source which is usually included in regional wind resource maps as useful prior information to identify potentially attractive sites for the installation of wind projects. In this paper, a comparison is made of eight proposed Measure-Correlate-Predict (MCP) models to estimate the WPDs at a target site. Seven of these models use the Support Vector Regression (SVR) and the eighth the Multiple Linear Regression (MLR) technique, which serves as a basis to compare the performance of the other models. In addition, a wrapper technique with 10-fold cross-validation has been used to select the optimal set of input features for the SVR and MLR models. Some of the eight models were trained to directly estimate the mean hourly WPDs at a target site. Others, however, were firstly trained to estimate the parameters on which the WPD depends (i.e. wind speed and air density) and then, using these parameters, the target site mean hourly WPDs. The explanatory features considered are different combinations of the mean hourly wind speeds, wind directions and air densities recorded in 2014 at ten weather stations in the Canary Archipelago (Spain). The conclusions that can be drawn from the study undertaken include the argument that the most accurate method for the long-term estimation of WPDs requires the execution of a
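
    One of the simpler MCP variants above can be sketched directly: regress concurrent target-site wind speeds on reference-station speeds with SVR, apply the model to the long-term reference record, and convert the predicted speeds to WPD as 0.5 * rho * v^3. The synthetic station data, kernel settings and fixed air density are assumptions; the paper's most precise variant instead predicts wind speed and air density with separate sub-models.

    ```python
    # MCP sketch: SVR from reference-station speeds to target-site wind power density.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X_ref = rng.weibull(2.0, size=(2000, 10)) * 8.0            # speeds at 10 reference stations
    v_target = X_ref.mean(axis=1) + rng.normal(0, 0.5, 2000)   # concurrent target speeds

    mcp = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
    mcp.fit(X_ref[:1500], v_target[:1500])      # train on the concurrent period

    v_pred = mcp.predict(X_ref[1500:])          # apply to the long-term reference record
    rho = 1.18                                  # fixed air density (kg/m^3), assumed
    wpd = 0.5 * rho * v_pred**3                 # wind power density (W/m^2)
    print(f"estimated mean WPD: {wpd.mean():.1f} W/m^2")
    ```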

  11. Materials and techniques for model construction

    Science.gov (United States)

    Wigley, D. A.

    1985-01-01

    The problems confronting the designer of cryogenic wind tunnel models are discussed with particular reference to the difficulties in obtaining appropriate data on the mechanical and physical properties of candidate materials and their fabrication technologies. The relationship between strength and toughness of alloys is discussed in the context of maximizing both and avoiding the problem of dimensional and microstructural instability. All major classes of materials used in model construction are considered in some detail, and in the Appendix selected numerical data are given for the most relevant materials. The stepped-specimen program to investigate stress-induced dimensional changes in alloys is discussed in detail together with interpretation of the initial results. The methods used to bond model components are considered with particular reference to the selection of filler alloys and temperature cycles to avoid microstructural degradation and loss of mechanical properties.

  12. Near infrared spectrometric technique for testing fruit quality: optimisation of regression models using genetic algorithms

    Science.gov (United States)

    Isingizwe Nturambirwe, J. Frédéric; Perold, Willem J.; Opara, Umezuruike L.

    2016-02-01

    Near infrared (NIR) spectroscopy has gained extensive use in quality evaluation. It is arguably one of the most advanced spectroscopic tools in non-destructive quality testing of foodstuff, from measurement to data analysis and interpretation. NIR spectral data are interpreted through means often involving multivariate statistical analysis, sometimes associated with optimisation techniques for model improvement. The objective of this research was to explore the extent to which genetic algorithms (GA) can be used to enhance model development for predicting fruit quality. Apple fruits were used, and NIR spectra in the range from 12,000 to 4000 cm⁻¹ were acquired on both bruised and healthy tissues, with different degrees of mechanical damage. GAs were used in combination with partial least squares regression methods to develop bruise severity prediction models, and compared to PLS models developed using the full NIR spectrum. A classification model was developed, which clearly separated bruised from unbruised apple tissue. GAs helped improve prediction models by over 10%, in comparison with full spectrum-based models, as evaluated in terms of error of prediction (Root Mean Square Error of Cross-validation). PLS models to predict internal quality, such as sugar content and acidity, were developed and compared to the versions optimized by genetic algorithm. Overall, the results highlighted the potential of the GA method to improve the speed and accuracy of fruit quality prediction.

  13. Combining variational and model-based techniques to register PET and MR images in hand osteoarthritis

    International Nuclear Information System (INIS)

    Magee, Derek; Tanner, Steven F; Jeavons, Alan P; Waller, Michael; Tan, Ai Lyn; McGonagle, Dennis

    2010-01-01

    Co-registration of clinical images acquired using different imaging modalities and equipment is finding increasing use in patient studies. Here we present a method for registering high-resolution positron emission tomography (PET) data of the hand acquired using high-density avalanche chambers with magnetic resonance (MR) images of the finger obtained using a 'microscopy coil'. This allows the identification of the anatomical location of the PET radiotracer and thereby locates areas of active bone metabolism/'turnover'. Image fusion involving data acquired from the hand is demanding because rigid-body transformations cannot be employed to accurately register the images. The non-rigid registration technique that has been implemented in this study uses a variational approach to maximize the mutual information between images acquired using these different imaging modalities. A piecewise model of the fingers is employed to ensure that the methodology is robust and that it generates an accurate registration. Evaluation of the accuracy of the technique is tested using both synthetic data and PET and MR images acquired from patients with osteoarthritis. The method outperforms some established non-rigid registration techniques and results in a mean registration error that is less than approximately 1.5 mm in the vicinity of the finger joints.

  14. Novel conformal technique to reduce staircasing artifacts at material boundaries for FDTD modeling of the bioheat equation

    Energy Technology Data Exchange (ETDEWEB)

    Neufeld, E [Foundation for Research on Information Technologies in Society (IT'IS), ETH Zurich, 8092 Zurich (Switzerland); Chavannes, N [Foundation for Research on Information Technologies in Society (IT'IS), ETH Zurich, 8092 Zurich (Switzerland); Samaras, T [Radiocommunications Laboratory, Aristotle University of Thessaloniki, GR-54124 Thessaloniki (Greece); Kuster, N [Foundation for Research on Information Technologies in Society (IT'IS), ETH Zurich, 8092 Zurich (Switzerland)

    2007-08-07

    The modeling of thermal effects, often based on the Pennes Bioheat Equation, is becoming increasingly popular. The FDTD technique commonly used in this context suffers considerably from staircasing errors at boundaries. A new conformal technique is proposed that can easily be integrated into existing implementations without requiring a special update scheme. It scales fluxes at interfaces with factors derived from the local surface normal. The new scheme is validated using an analytical solution, and an error analysis is performed to understand its behavior. The new scheme behaves considerably better than the standard scheme. Furthermore, in contrast to the standard scheme, it is possible to obtain with it more accurate solutions by increasing the grid resolution.
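
    For orientation, the scheme being corrected is the explicit FDTD update of the Pennes bioheat equation. The one-dimensional sketch below uses uniform, illustrative tissue parameters and applies no conformal boundary scaling, so at material boundaries it would exhibit exactly the staircasing limitation the paper addresses.

    ```python
    # Explicit 1D FDTD update of the Pennes bioheat equation (no conformal correction).
    import numpy as np

    nx, dx, dt = 100, 1e-3, 0.05            # cells, spacing (m), time step (s)
    k, rho, c = 0.5, 1050.0, 3600.0         # tissue conductivity, density, heat capacity
    w_b, rho_b, c_b = 8e-4, 1060.0, 3770.0  # blood perfusion rate and blood properties
    T_a, Q = 37.0, 5e4                      # arterial temperature (C), source (W/m^3)

    T = np.full(nx, 37.0)
    for _ in range(2000):
        lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2
        T += dt * (k * lap + rho_b * c_b * w_b * (T_a - T) + Q) / (rho * c)
        T[0] = T[-1] = 37.0                 # fixed-temperature boundaries
    print(f"peak tissue temperature: {T.max():.2f} C")
    ```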

  15. A study on quantification of unavailability of DPPS with fault tolerant techniques considering fault tolerant techniques' characteristics

    International Nuclear Information System (INIS)

    Kim, B. G.; Kang, H. G.; Kim, H. E.; Seung, P. H.; Kang, H. G.; Lee, S. J.

    2012-01-01

    With the improvement of digital technologies, digital I and C systems include a wider variety of fault tolerant techniques than conventional analog I and C systems, in order to increase fault detection and to help the system safely perform the required functions in spite of the presence of faults. So, in the reliability evaluation of digital systems, the fault tolerant techniques (FTTs) and their fault coverage must be considered. To consider the effects of FTTs in a digital system, several studies on digital system reliability models have been carried out. Therefore, this research, based on a literature survey, attempts to develop a model to evaluate the plant reliability of the digital plant protection system (DPPS) with fault tolerant techniques, considering detection and process characteristics and human errors. Sensitivity analysis is performed to ascertain the important variables of the fault management coverage and unavailability based on the proposed model

  16. Modeling and simulation of PEM fuel cell's flow channels using CFD techniques

    International Nuclear Information System (INIS)

    Cunha, Edgar F.; Andrade, Alexandre B.; Robalinho, Eric; Bejarano, Martha L.M.; Linardi, Marcelo; Cekinski, Efraim

    2007-01-01

    Fuel cells are one of the most important devices to obtain electrical energy from hydrogen. The Proton Exchange Membrane Fuel Cell (PEMFC) consists of two important parts: the Membrane Electrode Assembly (MEA), where the reactions occur, and the flow field plates. The plates have many functions in a fuel cell: distributing reactant gases (hydrogen and air or oxygen), conducting electrical current, removing heat and water from the electrodes and making the cell robust. The cost of the bipolar plates corresponds to up to 45% of the total stack cost. Computational Fluid Dynamics (CFD) is a very useful tool to simulate hydrogen and oxygen gas flow channels, to reduce the cost of bipolar plate production and to optimize mass transport. Two types of flow channels were studied. The first type was a commercial plate by ELECTROCELL and the other was entirely designed at the Programa de Celula a Combustivel (IPEN/CNEN-SP), and the experimental data were compared with the modelling results. Optimum values for each set of variables were obtained and model verification was carried out in order to show the feasibility of this technique to improve fuel cell efficiency. (author)

  17. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    Autonomous unmanned aerial vehicle (UAV) systems depend on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and allows the flight mission to be achieved safely. One of the sensor configurations used for UAV state estimation is the Attitude Heading and Reference System (AHRS) with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
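
    A minimal version of the EKF path is sketched below: gyro rates drive the prediction and accelerometer-derived roll/pitch form the measurement update. With this small-angle linear model the filter reduces to a standard Kalman filter; the noise covariances and the direct measurement model are made-up values for illustration.

    ```python
    # Toy attitude filter: gyro-driven prediction, accelerometer update.
    import numpy as np

    Q = np.eye(2) * 1e-4     # process noise (gyro drift), assumed
    R = np.eye(2) * 1e-2     # measurement noise (accelerometer), assumed
    H = np.eye(2)            # roll/pitch measured directly

    def ekf_step(x, P, gyro_rates, accel_angles, dt):
        """x: [roll, pitch] (rad); P: covariance; returns updated (x, P)."""
        # Predict: integrate gyro rates (state transition is identity here)
        x_pred = x + gyro_rates * dt
        P_pred = P + Q
        # Update: fuse accelerometer-derived roll/pitch
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (accel_angles - H @ x_pred)
        P_new = (np.eye(2) - K @ H) @ P_pred
        return x_new, P_new
    ```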

  18. Application of integrated modeling technique for data services ...

    African Journals Online (AJOL)

    This paper, therefore, describes the application of the integrated simulation technique for deriving the optimum resources required for data services in an asynchronous transfer mode (ATM) based private wide area network (WAN) to guarantee specific QoS requirements. The simulation tool drastically cuts the simulation ...

  19. The role of model-based methods in the development of single scan techniques

    International Nuclear Information System (INIS)

    Laruelle, Marc

    2000-01-01

    Single scan techniques are highly desirable for clinical trials involving radiotracers because they increase logistical feasibility, improve patient compliance, and decrease the cost associated with the study. However, the information derived from single scans usually are biased by factors unrelated to the process of interest. Therefore, identification of these factors and evaluation of their impact on the proposed outcome measure is important. In this paper, the impact of confounding factors on single scan measurements is illustrated by discussing the effect of between-subject or between-condition differences in radiotracer plasma clearance on normalized activity ratios (specific to nonspecific ratios) in the tissue of interest. Computer simulation based on kinetic analyses are presented to demonstrate this effect. It is proposed that the presence of this and other confounding factors should not necessarily preclude clinical trials based on single scan techniques. First, knowledge of the distribution of plasma clearance values in a sample of the investigated population allows researchers to assign limits to this potential bias. This information can be integrated in the power analysis. Second, the impact of this problem will vary according to the characteristic of the radiotracer, and this information can be used in the development and selection of the radiotracer. Third, simple modification of the experimental design (such as administration of the radiotracer as a bolus, followed by constant infusion, rather than as a single bolus) might remove this potential confounding factor and allow appropriate quantification within the limits of a single scanning session. In conclusion, model-based kinetic characterization of radiotracer distribution and uptake is critical to the design and interpretation of clinical trials based on single scan techniques

  20. Repositioning the knee joint in human body FE models using a graphics-based technique.

    Science.gov (United States)

    Jani, Dhaval; Chawla, Anoop; Mukherjee, Sudipto; Goyal, Rahul; Vusirikala, Nataraju; Jayaraman, Suresh

    2012-01-01

    Human body finite element models (FE-HBMs) are available in standard occupant or pedestrian postures. There is a need to have FE-HBMs in the same posture as a crash victim or to configure them in varying postures. Developing FE models for all possible positions is not practically viable. The current work aims at obtaining a posture-specific human lower extremity model by reconfiguring an existing one. A graphics-based technique was developed to reposition the lower extremity of an FE-HBM by specifying the flexion-extension angle. Elements of the model were segregated into rigid (bones) and deformable components (soft tissues). The bones were rotated about the flexion-extension axis followed by rotation about the longitudinal axis to capture the twisting of the tibia. The desired knee joint movement was thus achieved. Geometric heuristics were then used to reposition the skin. A mapping defined over the space between the bones and the skin was used to regenerate the soft tissues. Mesh smoothing was then done to augment mesh quality. The developed method permits control over the kinematics of the joint and maintains the initial mesh quality of the model. For some critical areas (in the joint vicinity) where element distortion is large, mesh smoothing is done to improve mesh quality. A method to reposition the knee joint of a human body FE model was developed. Repositioning of a model from 9 degrees of flexion to 90 degrees of flexion in just a few seconds, without subjective intervention, was demonstrated. Because the mesh quality of the repositioned model was maintained at a predefined level (typically the level of a well-made model in the initial configuration), the model was suitable for subsequent simulations.
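
    The rigid-bone step lends itself to a short sketch: rotate the tibia nodes about the flexion-extension axis and then about the longitudinal axis for tibial twist, here via Rodrigues' rotation formula. The axis directions, joint center and angles are placeholders; the paper's skin repositioning and soft-tissue regeneration steps are not shown.

    ```python
    # Rigid rotation of bone nodes for knee-joint repositioning.
    import numpy as np

    def axis_angle_matrix(axis, angle_rad):
        """Rotation matrix about a (unit) axis, by Rodrigues' formula."""
        a = np.asarray(axis, dtype=float)
        a /= np.linalg.norm(a)
        K = np.array([[0.0, -a[2], a[1]],
                      [a[2], 0.0, -a[0]],
                      [-a[1], a[0], 0.0]])
        return np.eye(3) + np.sin(angle_rad) * K + (1.0 - np.cos(angle_rad)) * (K @ K)

    def reposition_tibia(nodes, joint_center, flex_axis, long_axis, flex_deg, twist_deg):
        """Flexion about flex_axis, then twist about the bone's long axis."""
        R = (axis_angle_matrix(long_axis, np.radians(twist_deg))
             @ axis_angle_matrix(flex_axis, np.radians(flex_deg)))
        return (nodes - joint_center) @ R.T + joint_center
    ```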

  1. Real-time kinetic modeling of YSZ thin film roughness deposited by e-beam evaporation technique

    International Nuclear Information System (INIS)

    Galdikas, A.; Cerapaite-Trusinskiene, R.; Laukaitis, G.; Dudonis, J.

    2008-01-01

    In the present study, the deposition of yttria-stabilized zirconia (YSZ) thin films on optical quartz (SiO2) substrates by the e-beam deposition technique, with the electron gun power as the controlled parameter, is analyzed. It was found that the electron gun power influences the non-monotonic kinetics of the YSZ film surface roughness. The evolution of the YSZ thin film surface roughness was analyzed by a kinetic model. The model is based on rate equations and includes surface diffusion of adatoms and clusters, nucleation, and growth and coalescence of islands for thin film growth in the Volmer-Weber mode. Model-based analysis of the experimental results explains the non-monotonic kinetics and the dependence of the surface roughness on the electron gun power. Good quantitative agreement with the experimental results is obtained when the initial roughness of the substrate surface and the amount of clusters in the flux of evaporated material are taken into account.
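
    A toy version of such a rate-equation model, assuming just two coupled densities (adatoms and islands) with invented rate constants, reproduces the generic nucleation-then-coalescence behaviour the abstract describes; it is a sketch, not the authors' model:

        import numpy as np
        from scipy.integrate import solve_ivp

        F, D, k_c = 1.0, 50.0, 0.05          # deposition flux, adatom mobility, coalescence rate

        def rates(t, y):
            n1, N = y                        # adatom density, island density
            dn1 = F - 2*D*n1**2 - D*n1*N     # deposition minus nucleation and capture
            dN = D*n1**2 - k_c*N**2          # nucleation minus coalescence
            return [dn1, dN]

        sol = solve_ivp(rates, (0.0, 20.0), [0.0, 0.0])
        print(f"final island density: {sol.y[1, -1]:.3f} (arb. units)")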

  2. Modelling of Evaporator in Waste Heat Recovery System using Finite Volume Method and Fuzzy Technique

    Directory of Open Access Journals (Sweden)

    Jahedul Islam Chowdhury

    2015-12-01

    The evaporator is an important component in the Organic Rankine Cycle (ORC)-based Waste Heat Recovery (WHR) system, since the effectiveness of heat transfer in this device is reflected in the efficiency of the system. When the WHR system operates under supercritical conditions, the heat transfer mechanism in the evaporator is difficult to predict because the thermo-physical properties of the fluid change with temperature. Although a conventional finite volume model can successfully capture those changes in the evaporator of the WHR process, its computation time is high. To reduce the computation time, this paper develops a new fuzzy-based evaporator model and compares its performance with the finite volume method. The results show that the fuzzy technique can be applied to predict the output of the supercritical evaporator in the waste heat recovery system and can significantly reduce the required computation time. The proposed model therefore has the potential to be used in real-time control applications.
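
    A minimal fuzzy-inference sketch in the spirit of such a model, reduced to one input and two rules with invented membership functions and consequents (the paper's actual rule base is not given in the abstract):

        def tri(x, a, b, c):
            # Triangular membership function peaking at b.
            return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        T_in = 190.0                          # hypothetical exhaust-gas temperature (deg C)
        mu_low  = tri(T_in, 100, 150, 200)    # degree of "low temperature"
        mu_high = tri(T_in, 150, 250, 350)    # degree of "high temperature"
        q_low, q_high = 20.0, 60.0            # rule consequents: evaporator duty (kW)

        # weighted-average (Sugeno-style) defuzzification
        q = (mu_low * q_low + mu_high * q_high) / (mu_low + mu_high)
        print(f"predicted evaporator duty: {q:.1f} kW")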

  3. Developing material for promoting problem-solving ability through bar modeling technique

    Science.gov (United States)

    Widyasari, N.; Rosiyanti, H.

    2018-01-01

    This study aimed at developing material for enhancing problem-solving ability through the bar modeling technique with thematic learning. Polya's steps of problem solving were chosen as the basis of the study. The method of the study was research and development. The subjects of this study were fifteen fifth-grade students of the Lab-school FIP UMJ elementary school. Expert review and analysis of student responses were used to collect the data. Furthermore, the data were analyzed using qualitative descriptive and quantitative methods. The findings showed that the material in the theme "Selalu Berhemat Energi" was categorized as valid and practical. The validity was measured using aspects of language, content, and graphics. Based on the expert comments, the materials were easy to implement in the teaching-learning process. In addition, the students' responses showed that the material was both interesting and easy to understand. Thus, students gained more understanding in learning problem solving.

  4. Meta-heuristic ant colony optimization technique to forecast the amount of summer monsoon rainfall: skill comparison with Markov chain model

    Science.gov (United States)

    Chaudhuri, Sutapa; Goswami, Sayantika; Das, Debanjana; Middey, Anirban

    2014-05-01

    Forecasting summer monsoon rainfall with precision is crucial for farmers to plan harvesting in a country like India, where the national economy is largely based on regional agriculture. The forecast of monsoon rainfall based on artificial neural networks is a well-researched problem. In the present study, the meta-heuristic ant colony optimization (ACO) technique is implemented to forecast the amount of summer monsoon rainfall for the next day over Kolkata (22.6°N, 88.4°E), India. The ACO technique belongs to swarm intelligence and, like other adaptive learning techniques, simulates the decision-making processes of an ant colony. ACO takes inspiration from the foraging behaviour of some ant species: the ants deposit pheromone on the ground to mark a favourable path that should be followed by other members of the colony. A range of rainfall amounts, replicating the pheromone concentration, is evaluated during the summer monsoon season. The maximum rainfall during the summer monsoon season (June-September) is observed to be within the range of 7.5-35 mm during the period from 1998 to 2007, which falls in the range-4 category set by the India Meteorological Department (IMD). The results reveal that the accuracy in forecasting the amount of rainfall for the next day during the summer monsoon season using the ACO technique is 95 %, whereas the forecast accuracy is 83 % with a Markov chain model (MCM). The forecasts from ACO and MCM are compared with other existing models and validated against IMD observations from 2008 to 2012.
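
    The pheromone bookkeeping at the core of ACO can be sketched as follows, with discrete rainfall ranges playing the role of paths and ants reinforcing ranges that matched past observations; the bin edges, parameters and data below are invented, not the paper's:

        import numpy as np

        rng = np.random.default_rng(0)
        bins = [(0.0, 7.5), (7.5, 35.0), (35.0, 65.0), (65.0, 125.0)]  # mm, illustrative
        tau = np.ones(len(bins))                 # pheromone per rainfall range
        rho, Q = 0.1, 1.0                        # evaporation rate, deposit constant

        history = rng.uniform(5, 40, size=200)   # synthetic past daily rainfall (mm)
        for obs in history:
            choice = rng.choice(len(bins), p=tau / tau.sum())  # an ant picks a range
            tau *= (1 - rho)                     # pheromone evaporation
            lo, hi = bins[choice]
            if lo <= obs < hi:
                tau[choice] += Q                 # reinforce a correct pick

        print("most reinforced range (mm):", bins[int(np.argmax(tau))])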

  5. A simplified indirect bonding technique

    Directory of Open Access Journals (Sweden)

    Radha Katiyar

    2014-01-01

    With the advent of lingual orthodontics, the indirect bonding technique has become an integral part of practice. It involves placing the brackets initially on models and then transferring them to the teeth with the help of transfer trays. Problems encountered with current indirect bonding techniques are (1) the possibility of adhesive flash remaining around the base of the brackets, which requires removal, and (2) the longer time required for the adhesive to gain enough bond strength for secure tray removal. The new simplified indirect bonding technique presented here overcomes both these problems.

  6. Modeling ionospheric foF2 response during geomagnetic storms using neural network and linear regression techniques

    Science.gov (United States)

    Tshisaphungo, Mpho; Habarulema, John Bosco; McKinnell, Lee-Anne

    2018-06-01

    In this paper, the modeling of ionospheric foF2 changes during geomagnetic storms by means of neural network (NN) and linear regression (LR) techniques is presented. The results will lead to a valuable tool for modeling the complex ionospheric changes during disturbed days in an operational space weather monitoring and forecasting environment. Storm-time foF2 data during 1996-2014 from the Grahamstown (33.3°S, 26.5°E) ionosonde station, South Africa, were used in the modeling. Six storms were reserved to validate the models and hence not used in the modeling process. We found that the performance of the NN and LR models is comparable during selected storms which fell within the data period (1996-2014) used in modeling. However, when validated on storm periods beyond 1996-2014, the NN model gives better performance (R = 0.62) than the LR model (R = 0.56) for a storm that reached a minimum Dst index of -155 nT during 19-23 December 2015. We also found that both NN and LR models are capable of capturing the ionospheric foF2 responses during two great geomagnetic storms (28 October-1 November 2003 and 6-12 November 2004), which have been demonstrated to be difficult storms to model in previous studies.
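
    A compact sketch of this kind of NN-versus-LR comparison, using synthetic stand-ins for the storm-time drivers and foF2 (the real feature set, network architecture, and ionosonde data are the paper's and are not reproduced here):

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.uniform(-1, 1, size=(2000, 3))   # stand-ins for e.g. Dst, local time, season
        y = 5 + 2*X[:, 0] + np.sin(3*X[:, 1]) + 0.1*rng.standard_normal(2000)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        models = {"LR": LinearRegression(),
                  "NN": MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)}
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            R = np.corrcoef(model.predict(X_te), y_te)[0, 1]
            print(f"{name}: R = {R:.2f}")        # cf. the paper's correlation comparison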

  7. A forward model and conjugate gradient inversion technique for low-frequency ultrasonic imaging.

    Science.gov (United States)

    van Dongen, Koen W A; Wright, William M D

    2006-10-01

    Emerging methods of hyperthermia cancer treatment require noninvasive temperature monitoring, and ultrasonic techniques show promise in this regard. Various tomographic algorithms are available that reconstruct sound speed or contrast profiles, which can be related to the temperature distribution. The requirement of a high enough frequency for adequate spatial resolution and a low enough frequency for adequate tissue penetration is a difficult compromise. In this study, the feasibility of using low-frequency ultrasound for imaging and temperature monitoring was investigated. The transient probing wave field had a bandwidth spanning the frequency range 2.5-320.5 kHz. The results from a forward model which computed the propagation and scattering of low-frequency acoustic pressure and velocity wave fields were used to compare three imaging methods formulated within the Born approximation, representing two main types of reconstruction. The first uses Fourier techniques to reconstruct sound-speed profiles from projection or Radon data based on optical ray theory, taken as an asymptotic limit for comparison. The second uses backpropagation and conjugate gradient inversion methods based on acoustical wave theory. The results show that the accuracy in localization was 2.5 mm or better when using low frequencies and the conjugate gradient inversion scheme, which could be used for temperature monitoring.
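
    The conjugate gradient machinery behind such a Born-linearized inversion reduces to solving the normal equations A^T A x = A^T b for a contrast vector x; below is a generic sketch with a random matrix standing in for the discretized scattering operator (the paper's actual operator and data are not reproduced):

        import numpy as np

        def cgnr(A, b, n_iter=60):
            # Conjugate gradients on the normal equations A^T A x = A^T b.
            x = np.zeros(A.shape[1])
            r = A.T @ b                       # normal-equation residual at x = 0
            d = r.copy()
            rs = r @ r
            for _ in range(n_iter):
                Ad = A @ d
                alpha = rs / (Ad @ Ad)
                x += alpha * d
                r -= alpha * (A.T @ Ad)
                rs_new = r @ r
                d = r + (rs_new / rs) * d
                rs = rs_new
            return x

        rng = np.random.default_rng(2)
        A = rng.standard_normal((120, 80))    # stand-in for the scattering operator
        x_true = rng.standard_normal(80)      # "contrast" profile
        x_est = cgnr(A, A @ x_true)
        print("relative error:", np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))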

  8. Data Analysis Techniques for Physical Scientists

    Science.gov (United States)

    Pruneau, Claude A.

    2017-10-01

    Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.

  9. A Model for the Acceptance of Cloud Computing Technology Using DEMATEL Technique and System Dynamics Approach

    Directory of Open Access Journals (Sweden)

    seyyed mohammad zargar

    2018-03-01

    Cloud computing is a new method to provide computing resources and increase computing power in organizations. Despite its many benefits, this method has not been universally adopted because of obstacles, including security issues, that concern IT managers in organizations. In this paper, a general definition of cloud computing is presented. In addition, having reviewed previous studies, the researchers identified the variables affecting technology acceptance and, especially, acceptance of cloud computing technology. Then, using the DEMATEL technique, the influence exerted and received by each variable was determined. The researchers also designed a model to show the dynamics present in cloud computing technology adoption using the system dynamics approach. The validity of the model was confirmed through standard system dynamics evaluation methods using the VENSIM software. Finally, based on different conditions of the proposed model, a variety of scenarios were designed, and their implementation was simulated within the proposed model. The results showed that any increase in data security, government support and user training can lead to an increase in the adoption and use of cloud computing technology.
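
    The core DEMATEL computation is compact enough to sketch: a direct-influence matrix D is normalized and the total-relation matrix T = N(I - N)^(-1) is formed, whose row and column sums give each factor's prominence and its net cause/effect role. The 4x4 influence scores below are invented placeholders, not the paper's survey data.

        import numpy as np

        D = np.array([[0, 3, 2, 1],     # e.g. data security -> other factors
                      [2, 0, 3, 1],     # government support
                      [1, 2, 0, 3],     # user training
                      [1, 1, 2, 0]])    # perceived usefulness

        N = D / D.sum(axis=1).max()                 # common normalization
        T = N @ np.linalg.inv(np.eye(4) - N)        # total-relation matrix
        r, c = T.sum(axis=1), T.sum(axis=0)
        print("prominence (r + c):", (r + c).round(2))
        print("net cause  (r - c):", (r - c).round(2))   # positive = cause group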

  10. A biomechanical modeling-guided simultaneous motion estimation and image reconstruction technique (SMEIR-Bio) for 4D-CBCT reconstruction

    Science.gov (United States)

    Huang, Xiaokun; Zhang, You; Wang, Jing

    2018-02-01

    Reconstructing four-dimensional cone-beam computed tomography (4D-CBCT) images directly from respiratory phase-sorted traditional 3D-CBCT projections can capture the target motion trajectory, reduce motion artifacts, and reduce imaging dose and time. However, the limited number of projections in each phase after phase-sorting degrades CBCT image quality under traditional reconstruction techniques. To address this problem, we developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm, an iterative method that can reconstruct higher quality 4D-CBCT images from limited projections using an inter-phase intensity-driven motion model. However, the accuracy of the intensity-driven motion model is limited in regions with fine details whose quality is degraded by the insufficient number of projections, which consequently degrades the reconstructed image quality in the corresponding regions. In this study, we developed a new 4D-CBCT reconstruction algorithm by introducing biomechanical modeling into SMEIR (SMEIR-Bio) to boost the accuracy of the motion model in regions with small fine structures. The biomechanical modeling uses tetrahedral meshes to model organs of interest and solves internal organ motion using tissue elasticity parameters and mesh boundary conditions. This physics-driven approach enhances the accuracy of the solved motion in the organ's fine-structure regions. This study used 11 lung patient cases to evaluate the performance of SMEIR-Bio, making both qualitative and quantitative comparisons between SMEIR-Bio, SMEIR, and the algebraic reconstruction technique with total variation regularization (ART-TV). The reconstruction results suggest that SMEIR-Bio improves the motion model's accuracy in regions containing small fine details, which consequently enhances the accuracy and quality of the reconstructed 4D-CBCT images.

  11. Personnel Audit Using a Forensic Mining Technique

    OpenAIRE

    Adesesan B. Adeyemo; Oluwafemi Oriola

    2010-01-01

    This paper applies forensic data mining to determine the true status of employees and thereafter provide useful evidence for the proper administration of administrative rules in a typical Nigerian Teaching Service. The conventional technique of personnel audit was studied and a new technique for personnel audit was modeled using Artificial Neural Networks and Decision Tree algorithms. A two-layer classifier architecture was modeled. The outcome of the experiment proved that Radial Basis Function ...

  12. Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model

    Directory of Open Access Journals (Sweden)

    Rory A. Roberts

    2014-01-01

    A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high pressure compressor, combustor, high pressure turbine, low pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a turbofan two-stream engine. Special attention has been paid to developing transient capabilities throughout the model, increasing the fidelity of the physics modeling, eliminating algebraic constraints, and reducing simulation time through the use of advanced numerical solvers. Reducing computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.

  13. A new noninvasive controlled intra-articular ankle distraction technique on a cadaver model.

    Science.gov (United States)

    Aydin, Ahmet T; Ozcanli, Haluk; Soyuncu, Yetkin; Dabak, Tayyar K

    2006-08-01

    Effective joint distraction is crucial in arthroscopic ankle surgery. We describe an effective and controlled intra-articular ankle distraction technique that we have studied in a fresh-frozen cadaver model. Using a kyphoplasty balloon, which is currently used in spine surgery, we achieved a controlled distraction. After fixation of the cadaver model, standard anteromedial and anterolateral portals were used for ankle arthroscopy. Through the same portals, the kyphoplasty balloon was inserted and placed in an appropriate intra-articular position. The necessary amount of distraction was achieved by inflating the kyphoplasty balloon with a pressure regulation pump. All anatomic sites of the ankle joint were easily visualized with the arthroscope during surgery by changing the pressure and the intra-articular position of the kyphoplasty balloon. Ankle distraction was clearly seen on the arthroscopic and image intensifier views. The kyphoplasty balloon is simple to place through the standard portals, with the advantage of allowing easy manipulation of the arthroscopic instruments from the same portal.

  14. Numerical modelling techniques of soft soil improvement via stone columns: A brief review

    Science.gov (United States)

    Zukri, Azhani; Nazir, Ramli

    2018-04-01

    There are a number of numerical studies of stone column systems in the literature. Most of these studies involved two-dimensional analysis of stone column behaviour, while only a few used three-dimensional analysis. The software most often utilised in those studies was Plaxis 2D and 3D. Other software used for numerical analysis includes DIANA, EXAMINE, ZSoil, ABAQUS, ANSYS, NISA, GEOSTUDIO, CRISP, TOCHNOG, CESAR, GEOFEM (2D and 3D), and FLAC (2D and 3D). This paper reviews the methodological approaches to modelling stone columns numerically, in both two-dimensional and three-dimensional analyses. The numerical techniques and suitable constitutive models used in the studies are also discussed, and the validation methods employed to verify the numerical analyses are presented. This review also serves as a guide for junior engineers through the applicable procedures and considerations when constructing and running a two- or three-dimensional numerical analysis, while citing numerous relevant references.

  15. Fiscal 1997 report of the verification research on geothermal prospecting technology. Theme 5-2. Development of a reservoir change prospecting method (reservoir change prediction technique (modeling support technique)); 1997 nendo chinetsu tansa gijutsu nado kensho chosa. 5-2. Choryuso hendo tansaho kaihatsu (choryuso hendo yosoku gijutsu (modeling shien gijutsu)) hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    To evaluate geothermal reservoirs in the initial stage of development, to maintain stable output in service operation, and to develop a technology effective for extraction from peripheral reservoirs, a reservoir variation prediction technique, in particular a modeling support technique, was studied. This paper describes the results for fiscal 1997. An underground temperature estimation technique using homogenization temperatures of fluid inclusions, one of the core fault system measurement methods, was applied to the Wasabizawa field. The effect of stretching is important when estimating reservoir temperatures, and use of the minimum homogenization temperature of fluid inclusions in quartz was found suitable. Even where no quartz was present in the hydrothermal veins, measured data from quartz (secondary fluid inclusions) in parent rocks adjacent to the veins agreed well with measured temperature data. The development potential of a new modeling support technique was confirmed through the collection of documents and information. Based on these results, measurement equipment suitable for R and D was selected, and a measurement system was established through preliminary experiments. 39 refs., 35 figs., 6 tabs.

  16. The preparation of aneurysm model in rabbits by vessel ligation and elastase-induced technique

    International Nuclear Information System (INIS)

    Lu Chuan; Xie Qianyu; Liu Linxiang

    2010-01-01

    Objective: To establish an aneurysm model in rabbits, quite similar to the human intracranial aneurysm in morphology, by means of vessel ligation together with an elastase-induced technique. Methods: Sixteen New Zealand white rabbits were used in this study. Distal carotid ligation and intraluminal elastase incubation were employed in ten rabbits (study group) to create an aneurysm on the right common carotid artery, and surgical suture of a segment of the left common carotid artery was carried out in six rabbits (control group) to establish the aneurysm model. DSA examination of the created aneurysms, using catheterization via the femoral artery, was performed at one week and at one month after surgery. The patency, morphology and pathology of the aneurysms were observed, and the results were statistically analyzed. Results: The aneurysms in both groups remained patent after they were created. Angiography one week after surgery showed that all the aneurysms in the study group were patent, while in the control group only two aneurysms showed opacification with contrast medium and the remaining four were occluded. DSA at one month after the procedure demonstrated that all the aneurysms in the study group remained patent, while the two previously patent aneurysms in the control group had become occluded. The mean width and length of the aneurysmal cavity in the study group immediately after the procedure were (3.70 ± 0.16) mm and (6.53 ± 0.65) mm respectively, which enlarged to (5.06 ± 0.31) mm and (9.0 ± 0.52) mm respectively one month after surgery. The difference in size was statistically significant (P < 0.05). Pathologically, almost complete absence of the internal elastic lamina and medial wall elastin of the aneurysms was observed. Conclusion: The aneurysm model prepared with vessel ligation together with the elastase-induced technique carries a high patency rate and grows spontaneously; moreover, its morphology is quite similar to that of the human intracranial aneurysm.

  17. Effect of the Impeller Design on Degasification Kinetics Using the Impeller Injector Technique Assisted by Mathematical Modeling

    Directory of Open Access Journals (Sweden)

    Diego Abreu-López

    2017-04-01

    A mathematical model was developed to describe the hydrodynamics of a batch reactor for aluminum degassing utilizing the rotor-injector technique. The mathematical model uses a Eulerian algorithm to represent the two-phase system, including the simulation of vortex formation at the free surface, and uses the RNG k-ε model to account for turbulence in the system. The model was employed to test the performance of three different impeller designs, two of which are available commercially, while the third is a new design proposed in previous work. The model simulates the hydrodynamics of a water physical model and consequently helps to explain and connect the performance, in terms of degassing kinetics and gas consumption, found in the physical modeling previously reported. The model reveals that the new impeller design distributes the bubbles more uniformly throughout the ladle and exhibits a better-agitated bath, since momentum transfer to the fluids is more effective. Gas is evenly distributed with this design because both phases, gas and liquid, are dragged to the bottom of the ladle as a result of the higher pumping effect in comparison with the commercial designs.

  18. Study of 3D bathymetry modelling using LAPAN Surveillance Unmanned Aerial Vehicle 02 (LSU-02) photo data with stereo photogrammetry technique, Wawaran Beach, Pacitan, East Java, Indonesia

    Science.gov (United States)

    Sari, N. M.; Nugroho, J. T.; Chulafak, G. A.; Kushardono, D.

    2018-05-01

    The coast is an ecosystem with unique objects and phenomena, and aerial photo data with very high spatial resolution covering coastal areas have extensive potential. One such dataset is the LAPAN Surveillance UAV 02 (LSU-02) photo data acquired in 2016 with a spatial resolution reaching 10 cm. This research aims to create an initial bathymetry model with the stereo photogrammetry technique using LSU-02 data. In this research the bathymetry model was made by constructing a 3D model with the stereo photogrammetry technique, which utilizes the dense point cloud created from the overlap of those photos. The results show that a 3D bathymetry model can be built with the stereo photogrammetry technique, as seen from the surface and the bathymetry transect profile.

  19. Mastering data warehouse design relational and dimensional techniques

    CERN Document Server

    Imhoff, Claudia; Geiger, Jonathan G

    2003-01-01

    A cutting-edge response to Ralph Kimball's challenge to the data warehouse community that answers some tough questions about the effectiveness of the relational approach to data warehousing. Written by one of the best-known exponents of the Bill Inmon approach to data warehousing, it addresses head-on the tough issues raised by Kimball and explains how to choose the best modeling technique for solving common data warehouse design problems; weighs the pros and cons of relational vs. dimensional modeling techniques; and focuses on tough modeling problems, including creating and maintaining keys and modeling c

  20. Determination of rock depth using artificial intelligence techniques

    Institute of Scientific and Technical Information of China (English)

    R. Viswanathan; Pijush Samui

    2016-01-01

    This article adopts three artificial intelligence techniques, Gaussian Process Regression (GPR), Least Square Support Vector Machine (LSSVM) and Extreme Learning Machine (ELM), for the prediction of rock depth (d) at any point in Chennai. GPR, ELM and LSSVM are used as regression techniques, with latitude and longitude adopted as the inputs of the models. The performance of the ELM, GPR and LSSVM models has been compared. The developed models capture the spatial variability of rock depth and offer robust predictions of rock depth.
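
    A sketch of the GPR branch of such a model, mapping (latitude, longitude) to rock depth; the coordinates, depths and kernel below are synthetic stand-ins, not the article's data:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(3)
        X = rng.uniform([12.9, 80.1], [13.2, 80.3], size=(150, 2))   # lat, lon near Chennai
        d = 20 + 10*np.sin(10*X[:, 0])*np.cos(8*X[:, 1]) + rng.normal(0, 0.5, 150)

        gpr = GaussianProcessRegressor(kernel=RBF(0.05) + WhiteKernel(0.25),
                                       normalize_y=True).fit(X, d)
        depth, sigma = gpr.predict([[13.05, 80.2]], return_std=True)
        print(f"predicted rock depth: {depth[0]:.1f} m +/- {sigma[0]:.1f}")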

  1. Artificial Intelligence techniques for mission planning for mobile robots

    International Nuclear Information System (INIS)

    Martinez, J.M.; Nomine, J.P.

    1990-01-01

    This work focuses on spatial modeling techniques and on control software architectures, in order to deal efficiently with the navigation and perception problems encountered in mobile autonomous robotics. After a brief survey of current approaches to these techniques, we present ongoing simulation work for a specific mission in robotics. The studies in progress for spatial reasoning are based on new approaches combining artificial intelligence and geometrical techniques. These methods address the problem of environment modeling using three types of models: geometrical, topological and semantic models at different levels. The decision-making processes of control are presented as the result of cooperation within a group of decentralized agents that communicate by sending messages. (author)

  2. Uncertainty estimates of a GRACE inversion modelling technique over Greenland using a simulation

    Science.gov (United States)

    Bonin, Jennifer; Chambers, Don

    2013-07-01

    The low spatial resolution of GRACE causes leakage, where signals in one location spread out into nearby regions. Because of this leakage, using simple techniques such as basin averages may result in an incorrect estimate of the true mass change in a region. A fairly simple least squares inversion technique can be used to localize mass changes more specifically into a pre-determined set of basins of uniform internal mass distribution. However, the accuracy of these higher resolution basin mass amplitudes has not been determined, nor is it known how the distribution of the chosen basins affects the results. We use a simple 'truth' model over Greenland as an example case, to estimate the uncertainties of this inversion method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We determine that an appropriate level of smoothing (300-400 km) and process noise (0.30 cm² of water) gives the best results. The trends of the Greenland internal basins and Iceland can be reasonably estimated with this method, with average systematic errors of 3.5 cm yr⁻¹ per basin. The largest mass losses found from GRACE RL04 occur in the coastal northwest (-19.9 and -33.0 cm yr⁻¹) and southeast (-24.2 and -27.9 cm yr⁻¹), with small mass gains (+1.4 to +7.7 cm yr⁻¹) found across the northern interior. Acceleration of mass change is measurable at the 95 per cent confidence level in four northwestern basins, but not elsewhere in Greenland. Due to an insufficiently detailed distribution of basins across internal Canada, the trend estimates of Baffin and Ellesmere Islands are expected to be incorrect due to systematic errors caused by the inversion technique.
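
    A toy version of the basin-inversion idea: the observed (smoothed, leaky) field is modeled as G m, where column j of G holds the smoothed footprint of basin j, and the basin amplitudes m are recovered by least squares. The grid, basin geometry and smoothing below are invented stand-ins for the GRACE processing chain.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(4)
        n, n_basins = 64, 4
        masks = np.zeros((n_basins, n, n))
        for j in range(n_basins):                        # four quadrant "basins"
            row, col = divmod(j, 2)
            masks[j, row*n//2:(row+1)*n//2, col*n//2:(col+1)*n//2] = 1.0

        m_true = np.array([-20.0, 5.0, -30.0, 2.0])      # cm/yr per basin
        smooth = lambda f: gaussian_filter(f, sigma=6)   # stand-in for GRACE smoothing
        obs = smooth(np.tensordot(m_true, masks, axes=1)).ravel()
        obs += rng.normal(0, 0.5, obs.size)              # noise

        G = np.stack([smooth(mask).ravel() for mask in masks], axis=1)
        m_est, *_ = np.linalg.lstsq(G, obs, rcond=None)
        print("true:", m_true, " estimated:", m_est.round(1))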

  3. Mechanical Elongation of the Small Intestine: Evaluation of Techniques for Optimal Screw Placement in a Rodent Model

    Directory of Open Access Journals (Sweden)

    P. A. Hausbrandt

    2013-01-01

    Introduction. The aim of this study was to evaluate techniques and establish an optimal method for mechanical elongation of the small intestine (MESI) using screws in a rodent model, in order to develop a potential therapy for short bowel syndrome (SBS). Material and Methods. Adult female Sprague Dawley rats (n=24) with body weights from 250 to 300 g (mean 283 g) were evaluated in 5 different groups; the common denominator of the technique was the fixation of a blind loop of the intestine on the abdominal wall with the placement of a screw in the lumen secured to the abdominal wall. Results. In all groups with accessible screws, the rodents removed the implants despite the use of washers or suits to prevent removal. Subcutaneous placement of the screw combined with antibiotic treatment and dietary modifications was finally successful. In two animals autologous transplantation of the lengthened intestinal segment was successful. Discussion. While the rodent model may provide useful basic information on mechanical intestinal lengthening, further investigations should be performed in larger animals to make use of the translational nature of MESI in human SBS treatment.

  4. MO-AB-BRA-09: Development and Evaluation of a Biomechanical Modeling-Assisted CBCT Reconstruction Technique (Bio-Recon)

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Y; Nasehi Tehrani, J; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: To develop a Bio-recon technique by incorporating the biomechanical properties of anatomical structures into the deformation-based CBCT reconstruction process. Methods: Bio-recon reconstructs the CBCT by deforming a prior high-quality CT/CBCT using a deformation-vector-field (DVF). The DVF is solved through two alternating steps: 2D–3D deformation and finite-element-analysis based biomechanical modeling. 2D–3D deformation optimizes the DVF through an ‘intensity-driven’ approach, which updates the DVF to minimize intensity mismatches between the acquired projections and the simulated projections from the deformed CBCT. In contrast, biomechanical modeling optimizes the DVF through a ‘biomechanical-feature-driven’ approach, which updates the DVF based on the biophysical properties of anatomical structures. In general, Bio-recon extracts the 2D–3D deformation-optimized DVF at high-contrast structure boundaries, and uses it as the boundary condition to drive biomechanical modeling to optimize the overall DVF, especially at low-contrast regions. The optimized DVF is fed back into the 2D–3D deformation for further optimization, which forms an iterative loop. The efficacy of Bio-recon was evaluated on 11 lung patient cases, each with a prior CT and a new CT. Cone-beam projections were generated from the new CTs to reconstruct CBCTs, which were compared with the original new CTs for evaluation. 872 anatomical landmarks were also manually identified by a clinician on both the prior and new CTs to track the lung motion, which was used to evaluate the DVF accuracy. Results: Using 10 projections for reconstruction, the average (± s.d.) relative errors of reconstructed CBCTs by the clinical FDK technique, the 2D–3D deformation-only technique and Bio-recon were 46.5±5.9%, 12.0±2.3% and 10.4±1.3%, respectively. The average residual errors of DVF-tracked landmark motion by the 2D–3D deformation-only technique and Bio-recon were 5.6±4.3 mm and 3.1±2

  6. A Titration Technique for Demonstrating a Magma Replenishment Model.

    Science.gov (United States)

    Hodder, A. P. W.

    1983-01-01

    Conductometric titrations can be used to simulate subduction-setting volcanism. Suggestions are made as to the use of this technique in teaching volcanic mechanisms and geochemical indications of tectonic settings. (JN)

  7. Modelling desertification risk in the north-west of Jordan using geospatial and remote sensing techniques

    Directory of Open Access Journals (Sweden)

    Jawad T. Al-Bakri

    2016-03-01

    Remote sensing, climate, and ground data were used within a geographic information system (GIS) to map desertification risk in the north-west of Jordan. The approach was based on modelling wind and water erosion and incorporating the results with a map representing the severity of drought. Water erosion was modelled by the universal soil loss equation, while wind erosion was modelled by a dust emission model. The extent of drought was mapped using the evapotranspiration water stress index (EWSI), which incorporates actual and potential evapotranspiration. Output maps were assessed within GIS in terms of spatial patterns and the degree of correlation with soil surficial properties. Results showed that topography and soil together explained 75% of the variation in water erosion, while soil explained 25% of the variation in wind erosion, which was mainly controlled by the natural factors of topography and wind. Analysis of the EWSI map showed that drought risk dominated most of the rainfed areas. The combined effects of soil erosion and drought were reflected in the desertification risk map. The adoption of these geospatial and remote sensing techniques is therefore recommended for mapping desertification risk in Jordan and in similar arid environments.
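
    The water-erosion layer of such a workflow is a pixel-wise application of the universal soil loss equation, A = R * K * LS * C * P; a minimal raster sketch with invented factor values (the paper's calibrated rasters are not reproduced):

        import numpy as np

        shape = (100, 100)
        rng = np.random.default_rng(5)
        R  = np.full(shape, 120.0)            # rainfall erosivity (MJ mm / ha h yr)
        K  = rng.uniform(0.2, 0.5, shape)     # soil erodibility
        LS = rng.uniform(0.5, 4.0, shape)     # slope length/steepness factor
        C  = rng.uniform(0.05, 0.6, shape)    # cover management factor
        P  = np.ones(shape)                   # support practice factor (none)

        A = R * K * LS * C * P                # soil loss per pixel (t / ha yr)
        print(f"mean annual soil loss: {A.mean():.1f} t/ha/yr")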

  8. Spray deposition using impulse atomization technique

    International Nuclear Information System (INIS)

    Ellendt, N.; Schmidt, R.; Knabe, J.; Henein, H.; Uhlenwinkel, V.

    2004-01-01

    A novel technique, impulse atomization, has been used for spray deposition. This single-fluid atomization technique leads to different spray characteristics and droplet impact conditions compared to gas atomization, the technique commonly used for spray deposition. Deposition experiments with a Cu-6Sn alloy were conducted to evaluate the suitability of impulse atomization for producing dense material. Based on these experiments, a model has been developed to simulate the thermal history and the local solidification rates of the deposited material. A numerical study shows how different cooling conditions affect the solidification rate of the material.

  9. Constraint-based Student Modelling in Probability Story Problems with Scaffolding Techniques

    Directory of Open Access Journals (Sweden)

    Nabila Khodeir

    2018-01-01

    Constraint-based student modelling (CBM) is an important technique employed in intelligent tutoring systems to model student knowledge and provide relevant assistance. This paper introduces the Math Story Problem Tutor (MAST), a Web-based intelligent tutoring system for probability story problems, which is able to generate problems of different contexts, types and difficulty levels for self-paced learning. Constraints in MAST are specified at a low level of granularity to allow fine-grained diagnosis of student errors. Furthermore, MAST extends CBM to address errors due to misunderstanding of the narrative story. It can locate and highlight keywords that may have been overlooked or misunderstood, leading to an error. This is achieved by utilizing the role of sentences and keywords that are defined through the Natural Language Generation (NLG) methods deployed in the story problem generation. MAST also integrates CBM with scaffolding questions and feedback to provide various forms of help and guidance to the student. This allows the student to discover and correct any errors in his/her solution. MAST has been preliminarily evaluated empirically, and the results show its potential effectiveness in tutoring students, with a decrease in the percentage of violated constraints along the learning curve. Additionally, there is a significant improvement in the results of the post-test exam relative to the pre-test exam for students using MAST in comparison with those relying on the textbook.

  10. Object oriented programming techniques applied to device access and control

    International Nuclear Information System (INIS)

    Goetz, A.; Klotz, W.D.; Meyer, J.

    1992-01-01

    In this paper a model, called the device server model, has been presented for solving the problem of device access and control faced by all control systems. Object Oriented Programming techniques were used to achieve a powerful yet flexible solution. The model provides a solution to the problem which hides device dependencies. It defines a software framework which has to be respected by implementors of device classes - this is very useful for developing groupware. The decision to implement remote access in the root class means that device servers can be easily integrated in a distributed control system. Many of the advantages and features of the device server model are due to the adoption of OOP techniques. The main conclusions that can be drawn from this paper are that (1) the device access and control problem is well suited to being solved with OOP techniques, and (2) OOP techniques offer a distinct advantage over traditional programming techniques for solving the device access problem. (J.P.N.)
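
    A minimal sketch of the pattern described above, with device dependencies hidden behind a common root class that also owns the generic command dispatch (standing in for the remote-access layer); all class and method names are illustrative, not the paper's actual framework:

        from abc import ABC, abstractmethod

        class Device(ABC):
            # Root class: common command interface and generic dispatch.
            def execute(self, command, *args):
                handler = getattr(self, f"cmd_{command}", None)
                if handler is None:
                    raise ValueError(f"unknown command {command!r}")
                return handler(*args)

            @abstractmethod
            def cmd_status(self): ...

        class PowerSupply(Device):
            def __init__(self):
                self._volts = 0.0
            def cmd_status(self):
                return {"volts": self._volts}
            def cmd_set_voltage(self, volts):
                self._volts = volts          # device-specific detail stays hidden

        ps = PowerSupply()
        ps.execute("set_voltage", 5.0)
        print(ps.execute("status"))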

  11. Physical simulations using centrifuge techniques

    International Nuclear Information System (INIS)

    Sutherland, H.J.

    1981-01-01

    Centrifuge techniques offer a means of performing physical simulations of the long-term mechanical response of deep ocean sediment to the emplacement of waste canisters and to the temperature gradients generated by them. Preliminary investigations of the scaling laws for the pertinent phenomena indicate that the time scaling will be consistent among them and equal to the square of the geometric scaling factor. This result implies that the technique will permit accelerated life testing of proposed configurations; i.e., long-term studies may be done in relatively short times. Presently, existing centrifuges are being modified to permit scale model testing. This testing will start next year.
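
    The quadratic time scaling makes the acceleration concrete; a one-line calculation under the stated t_model = t_prototype / N² law (the scale factor and prototype duration below are illustrative):

        # Diffusion-type processes in an N-times-smaller model spun at N g
        # evolve N**2 times faster: t_model = t_prototype / N**2.
        N = 100                                   # geometric scale factor
        t_prototype_years = 30.0                  # long-term window of interest
        t_model_hours = t_prototype_years * 365.25 * 24 / N**2
        print(f"{t_prototype_years} prototype years ~ {t_model_hours:.0f} h in flight")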

  12. Performance comparison of land change modeling techniques for land use projection of arid watersheds

    Directory of Open Access Journals (Sweden)

    S.M. Tajbakhsh

    2018-07-01

    Land use/land cover change is known to be an important force in environmental alteration, especially in arid and semi-arid areas. This research was mainly aimed at assessing the validity of two major types of land change modeling techniques via a three-dimensional approach in the Birjand urban watershed, located in an arid climatic region of Iran. Thus, a Markovian approach based on two suitability and transition potential mappers, i.e. the fuzzy analytic hierarchy process (fuzzy-AHP) and the artificial neural network-multilayer perceptron (ANN-MLP), was used to simulate the land use map. The validation metrics quantity disagreement, allocation disagreement and figure of merit, in a three-dimensional space, were used to perform model validation. For the fuzzy-AHP simulation of the total landscape at the target point 2015, the quantity error, figure of merit and allocation error were 2%, 18.5% and 8%, respectively. The ANN-MLP simulation, however, led to a marginal improvement in the figure of merit, i.e. 3.25%.

  13. Validation of a computer modelled forensic facial reconstruction technique using CT data from live subjects: a pilot study.

    Science.gov (United States)

    Short, Laura J; Khambay, Balvinder; Ayoub, Ashraf; Erolin, Caroline; Rynn, Chris; Wilkinson, Caroline

    2014-04-01

    Human forensic facial soft tissue reconstructions are used when post-mortem deterioration makes identification difficult by the usual means. The aim is to trigger recognition of the in vivo countenance of the individual by a friend or family member. A further use is in the field of archaeology. There are a number of different methods that can be applied to complete a facial reconstruction, ranging from two-dimensional drawings to three-dimensional clay models and now, with the advances of three-dimensional technology, three-dimensional computerised modelling. Studies carried out to assess the accuracy of facial reconstructions have produced variable results over the years. Advances in three-dimensional imaging techniques in the field of oral and maxillofacial surgery, particularly cone beam computed tomography (CBCT), now provide an opportunity to utilise data from live subjects and assess the accuracy of the three-dimensional computerised facial reconstruction technique. The aim of this study was to assess the accuracy of a computer modelled facial reconstruction technique using CBCT data from live subjects. This retrospective pilot study was carried out at the Glasgow Dental Hospital Orthodontic Department and the Centre of Anatomy and Human Identification, Dundee University School of Life Sciences. Ten patients (5 male and 5 female; mean age 23 years) with mild skeletal discrepancies and pre-surgical cone beam CT (CBCT) data were included in this study. The actual and reconstructed soft tissues were analysed using 3D software to examine differences in landmarks, linear and angular measurements, and surface meshes. There were no statistically significant differences for 18 of the 23 linear and 7 of the 8 angular measurements between the reconstruction and the target (at the 0.05 level). The use of Procrustes superimposition highlighted potential problems with soft tissue depth and the position of anatomical landmarks. Surface mesh analysis showed that this virtual

  14. Composite use of numerical groundwater flow modeling and geoinformatics techniques for monitoring Indus Basin aquifer, Pakistan.

    Science.gov (United States)

    Ahmad, Zulfiqar; Ashraf, Arshad; Fryar, Alan; Akhter, Gulraiz

    2011-02-01

    The integration of Geographic Information Systems (GIS) with groundwater modeling and satellite remote sensing capabilities has provided an efficient way of analyzing and monitoring groundwater behavior and its associated land conditions. A 3-dimensional finite element model (Feflow) has been used for regional groundwater flow modeling of the Upper Chaj Doab in the Indus Basin, Pakistan. The approach of using GIS techniques to partially fulfill the data requirements and define the parameters of existing hydrologic models was adopted. The numerical groundwater flow model was developed to configure the groundwater equipotential surface and hydraulic head gradient, and to estimate the groundwater budget of the aquifer. GIS is used for spatial database development, integration with remote sensing, and numerical groundwater flow modeling capabilities. Thematic layers of soils, land use, hydrology, infrastructure, and climate were developed using GIS. The Arcview GIS software is used as an additional tool to develop supporting data for the numerical groundwater flow modeling and for the integration and presentation of image processing and modeling results. The groundwater flow model was calibrated and used to simulate future changes in piezometric heads for the period 2006 to 2020. Different scenarios were developed to study the impact of extreme climatic conditions (drought/flood) and variable groundwater abstraction on the regional groundwater system. The model results indicated a significant response of the water table to external influential factors. The developed model provides an effective tool for evaluating better management options for monitoring future groundwater development in the study area.

  15. An Introduction to the Space Mapping Technique

    DEFF Research Database (Denmark)

    Bakr, Mohamed H.; Bandler, John W.; Madsen, Kaj

    2001-01-01

    The space mapping technique is intended for optimization of engineering models which involve very expensive function evaluations. It is assumed that two different models of the same physical system are available: Besides the expensive model of primary interest (denoted the fine model), access to ...

  16. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Kim, Ho Sung

    2010-01-01

    The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. A leave-k-out cross-validation technique not only involves a considerably high computational cost but also cannot measure the fidelity of metamodels. Recently, the mean 0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, use of the mean 0 validation criterion may lead to premature termination of the sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error measure of the proposed validation technique resembles a root mean squared error, so it can be used to determine a stopping criterion for sequential sampling of metamodels.

  17. A comparison of linear and nonlinear statistical techniques in performance attribution.

    Science.gov (United States)

    Chan, N H; Genovese, C R

    2001-01-01

    Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to the performance attribution of a portfolio constructed from a fixed universe of stocks, using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.

  18. Signal Morphing techniques and possible application to Higgs properties measurements

    CERN Document Server

    Ecker, Katharina Maria; The ATLAS collaboration

    2016-01-01

    One way of describing deviations from the Standard Model is via Effective Field Theories or pseudo-observables, where higher order operators modify the couplings and the kinematics of the interactions of the Standard Model particles. Generating Monte Carlo events for every testable set of parameters of such a theory would require computing resources beyond those currently available in ATLAS. Up to now, Matrix-Element based reweighting techniques have often been used to model Beyond Standard Model processes starting from Standard Model simulated events. In this talk, we review the advantages and the limitations of morphing techniques used to construct a continuous probability model for signal parameters, interpolating between a finite number of distributions obtained from the simulation chain. The technique is exemplified by searching for deviations from the Standard Model predictions in Higgs property measurements.

  19. Towards Understanding Soil Forming in Santa Clotilde Critical Zone Observatory: Modelling Soil Mixing Processes in a Hillslope using Luminescence Techniques

    Science.gov (United States)

    Sanchez, A. R.; Laguna, A.; Reimann, T.; Giráldez, J. V.; Peña, A.; Wallinga, J.; Vanwalleghem, T.

    2017-12-01

    Different geomorphological processes, such as bioturbation and erosion-deposition, intervene in soil formation and landscape evolution. The latter processes produce the alteration and degradation of the materials that compose the rocks. The degree to which the bedrock is weathered is estimated through the fraction of the bedrock that is mixed into the soil, either vertically or laterally. This study presents an analytical solution of the diffusion-advection equation to quantify bioturbation and erosion-deposition rates in profiles along a catena. The model is calibrated with age-depth data obtained from profiles using luminescence dating based on single-grain Infrared Stimulated Luminescence (IRSL). Luminescence techniques provide a direct measurement of the bioturbation and erosion-deposition processes. The single-grain IRSL technique was applied to feldspar minerals in fifteen samples collected from four soil profiles at different depths along a catena in the Santa Clotilde Critical Zone Observatory, Cordoba province, SE Spain. A sensitivity analysis was performed to assess the importance of the parameters in the analytical model, and an uncertainty analysis was carried out to establish the best fit of the parameters to the measured age-depth data. The results indicate a diffusion constant at 20 cm depth of 47 mm²/year in the hill-base profile and 4.8 mm²/year in the hilltop profile. The model has high uncertainty in the estimation of erosion and deposition rates. This study reveals the potential of single-grain luminescence techniques to quantify pedoturbation processes.
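
    The governing equation behind such an age-depth model is the one-dimensional diffusion-advection equation; a hedged statement in LaTeX (the abstract does not give the authors' exact boundary conditions):

        % C(z,t): concentration of dated grains at depth z and time t
        % D: bioturbation (diffusion) coefficient; v: net erosion-deposition velocity
        \[
          \frac{\partial C}{\partial t}
            = D \,\frac{\partial^{2} C}{\partial z^{2}}
            - v \,\frac{\partial C}{\partial z}
        \]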

  20. Development and validation of predictive simulation model of multi-layer repair welding process by temper bead technique

    International Nuclear Information System (INIS)

    Okano, Shigetaka; Miyasaka, Fumikazu; Mochizuki, Masahito; Tanaka, Manabu

    2015-01-01

    Stress corrosion cracking (SCC) has recently been observed in the nickel-base alloy weld metal of dissimilar pipe joints used in pressurized water reactors (PWR). The temper bead technique has been developed as a repair procedure for SCC that is applicable when post weld heat treatment (PWHT) is difficult to carry out. It is essential, however, to pass the property and performance qualification tests confirming the effect of tempering on the mechanical properties of repair welds before the temper bead technique is actually used in practice. The appropriate welding procedure conditions for the temper bead technique are thus determined on the basis of property and performance qualification testing. This is necessary for certifying the structural soundness and reliability of repair welds, but at present it takes a great deal of work and time. It is therefore desirable to establish reasonable alternatives for qualifying the property and performance of repair welds. In this study, mathematical modeling and numerical simulation procedures were developed for predicting the weld bead configuration and temperature distribution during a multi-layer repair welding process using the temper bead technique. In the developed simulation technique, the characteristics of the heat source in temper bead welding are calculated from the weld heat input conditions through an arc plasma simulation, and the weld bead configuration and temperature distribution during temper bead welding are then calculated from these heat source characteristics through a coupled analysis of bead surface shape and thermal conduction. The simulation results were compared with experimental results under the same welding heat input conditions. The bead surface shape and temperature distribution, such as the Ac1 lines, were in good agreement between simulation and experiment. It was concluded that the developed simulation technique has the potential to become useful for

  1. Video: two novel endoscopic esophageal lengthening and reconstruction techniques.

    Science.gov (United States)

    Perretta, Silvana; Wall, James K; Dallemagne, Bernard; Harrison, Michael; Becmeur, François; Marescaux, Jacques

    2011-10-01

    Esophageal reconstruction presents a significant clinical challenge in patients ranging from neonates with long-gap esophageal atresia to adults after esophageal resection. Both gastric and colonic replacement conduits carry significant morbidity. As emerging organ-sparing techniques become established for early stage esophageal tumors, less morbid reconstruction techniques are warranted. We present two novel endoscopic approaches for esophageal lengthening and reconstruction in a porcine model. Two models of esophageal defects were created in pigs (30-35 kg) under general anesthesia and subsequently reconstructed with the novel techniques. The first model was a segmental defect of the esophagus created by thoracoscopically transecting the esophagus above the gastroesophageal (GE) junction. The first reconstruction technique involved bilateral submucosal endoscopic lengthening myotomies (BSELM) with a magnetic compression anastomosis (MAGNAMOSIS™). The second model was a wedge defect in the anterior esophagus created above the GE junction through a laparotomy. The second reconstruction technique involved an inverted mucosal-submucosal sleeve transposition graft (IMSTG) that crossed the esophageal gap and was secured in place with a self-expanding covered esophageal stent. Both techniques were feasible in the pig model. The BSELM approach lengthened the esophagus 1 cm for every 2 cm length of myotomy. The myotomy targeted only the inner circular fibers of the esophagus, with preservation of the longitudinal layer to protect against long-term dilation and pouching. The IMSTG approach generated a vascularized mucosal graft almost as long as the esophagus itself. Emerging endoscopic capabilities are enabling complex endoluminal esophageal procedures. BSELM and IMSTG are two novel and technically feasible approaches to esophageal lengthening and reconstruction. Further survival studies are needed to establish the safety and efficacy of these techniques.

  2. Electromagnetic modelling, inversion and data-processing techniques for GPR: ongoing activities in Working Group 3 of COST Action TU1208

    Science.gov (United States)

    Pajewski, Lara; Giannopoulos, Antonis; van der Kruk, Jan

    2015-04-01

    This work presents the ongoing research activities carried out in Working Group 3 (WG3) 'EM methods for near-field scattering problems by buried structures; data processing techniques' of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (www.GPRadar.eu). The principal goal of COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, while promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. WG3 is structured in four Projects. Project 3.1 deals with 'Electromagnetic modelling for GPR applications.' Project 3.2 is concerned with 'Inversion and imaging techniques for GPR applications.' The topic of Project 3.3 is the 'Development of intrinsic models for describing near-field antenna effects, including antenna-medium coupling, for improved radar data processing using full-wave inversion.' Project 3.4 focuses on 'Advanced GPR data-processing algorithms.' Electromagnetic modelling tools that are being developed and improved include the Finite-Difference Time-Domain (FDTD) technique and the spectral-domain Cylindrical-Wave Approach (CWA). One well-known, versatile freeware FDTD simulator is gprMax, which enables a realistic representation of the soil/material hosting the sought structures and of the GPR antennas; here, input/output tools are being developed to ease the definition of scenarios and the visualisation of numerical results. The CWA expresses the field scattered by subsurface two-dimensional targets with arbitrary cross-section as a sum of cylindrical waves, thereby taking into account the interaction of multiple scattered fields within the medium hosting the sought targets. Recently, the method has been extended to deal with through-the-wall scenarios. One of the
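
    To give a feel for the FDTD machinery behind simulators such as gprMax (which is three-dimensional and far more capable), the following minimal one-dimensional Yee update propagates a Gaussian pulse on a normalized grid; the grid size, source position and pulse parameters are arbitrary illustrative choices.

        import numpy as np

        # Minimal 1D FDTD (Yee) update loop on a normalized grid (Courant number 1,
        # reflecting boundaries). A toy illustration of the scheme, not gprMax.
        nz, nt = 400, 800
        ez = np.zeros(nz)        # electric field at integer nodes
        hy = np.zeros(nz - 1)    # magnetic field at half-integer nodes
        imp0 = 377.0             # free-space impedance

        for t in range(nt):
            hy += (ez[1:] - ez[:-1]) / imp0            # H-field update
            ez[1:-1] += imp0 * (hy[1:] - hy[:-1])      # E-field update (interior)
            ez[50] += np.exp(-((t - 80) / 20.0) ** 2)  # soft Gaussian source

        print(f"field energy proxy after {nt} steps: {np.sum(ez**2):.3f}")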

  3. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique.

    Science.gov (United States)

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young

    2014-03-01

    This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of the PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology were acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.
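
    The agreement statistics used above are easy to reproduce; the sketch below computes the mean difference (accuracy) and the Bland-Altman 95% limits of agreement for two made-up sets of linear measurements standing in for the plaster and PUT models.

        import numpy as np

        # Toy agreement analysis; the measurement values are invented, not the
        # study's data.
        plaster = np.array([35.1, 28.4, 41.2, 33.0, 29.8, 38.5])   # mm
        put = np.array([35.3, 28.6, 41.0, 33.3, 30.1, 38.2])       # mm

        diff = put - plaster
        bias = diff.mean()                                  # mean difference
        loa = bias + 1.96 * diff.std(ddof=1) * np.array([-1.0, 1.0])
        print(f"bias = {bias:.3f} mm, 95% LoA = [{loa[0]:.3f}, {loa[1]:.3f}] mm")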

  4. Improved Statistical Fault Detection Technique and Application to Biological Phenomena Modeled by S-Systems.

    Science.gov (United States)

    Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N

    2017-09-01

    In our previous work, we demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) with different window sizes. However, most real systems are nonlinear, and the linear PCA method cannot adequately handle this nonlinearity. Thus, in this paper, we first apply a nonlinear PCA to obtain an accurate principal component of a set of data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model, which is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weighting), since this can further improve fault detection performance by reducing the FAR through an exponentially weighted moving average (EWMA). The developed detection method, called EWMA-GLRT, provides improved properties, such as smaller missed detection rates, lower FARs, and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data information in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimation of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and applied in practice to improve fault detection in biological phenomena modeled by S-systems and to enhance monitoring of the process mean. The idea behind a KPCA-based EWMA-GLRT fault detection algorithm is to
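
    The core of the EWMA-GLRT idea, weighting recent residuals most heavily and alarming on the smoothed statistic, can be sketched in a few lines. The residual stream, smoothing constant and threshold below are invented for illustration; the paper's statistic is derived formally from the GLRT.

        import numpy as np

        # Toy EWMA-based detection statistic on (K)PCA-style residuals with an
        # injected mean-shift fault. lam and threshold are assumed tuning values.
        rng = np.random.default_rng(0)
        residuals = rng.normal(0.0, 1.0, 500)
        residuals[300:] += 1.5                       # simulated fault at sample 300

        lam, z = 0.2, 0.0
        stat = np.empty_like(residuals)
        for i, r in enumerate(residuals):
            z = lam * r + (1.0 - lam) * z            # exponential weighting of residuals
            stat[i] = z ** 2                         # GLRT-like mean-shift statistic

        threshold = 0.5
        alarms = np.flatnonzero(stat > threshold)
        print(f"first alarm at sample {alarms[0]}" if alarms.size else "no alarm")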

  5. New analytical technique for carbon dioxide absorption solvents

    Energy Technology Data Exchange (ETDEWEB)

    Pouryousefi, F.; Idem, R.O. [University of Regina, Regina, SK (Canada). Faculty of Engineering

    2008-02-15

    The densities and refractive indices of two binary systems (water + MEA and water + MDEA) and three ternary systems (water + MEA + CO{sub 2}, water + MDEA + CO{sub 2}, and water + MEA + MDEA) used for carbon dioxide (CO{sub 2}) capture were measured over the range of compositions of the aqueous alkanolamine(s) used for CO{sub 2} absorption at temperatures from 295 to 338 K. Experimental densities were modeled empirically, while the experimental refractive indices were modeled using well-established models from the known values of their pure-component densities and refractive indices. The density and Gladstone-Dale refractive index models were then used to obtain the compositions of unknown samples of the binary and ternary systems by simultaneous solution of the density and refractive index equations. The results from this technique were compared with HPLC (high-performance liquid chromatography) results, while a third independent technique (acid-base titration) was used to verify the results. The results show that the system compositions obtained from the simple and easy-to-use refractive index/density technique were very comparable to those from the expensive and laborious HPLC/titration techniques, suggesting that the refractive index/density technique can replace existing methods for the analysis of fresh or nondegraded, CO{sub 2}-loaded, single and mixed alkanolamine solutions.
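
    The simultaneous-solution idea can be sketched for the ternary water + MEA + MDEA case: two measurements (density and refractive index) fix the two unknown amine mass fractions. The linear density model, the Gladstone-Dale mixing rule and all pure-component constants below are illustrative assumptions, not the paper's fitted models.

        import numpy as np
        from scipy.optimize import fsolve

        RHO = {"water": 0.997, "mea": 1.012, "mdea": 1.038}   # g/cm3 (assumed)
        N   = {"water": 1.333, "mea": 1.454, "mdea": 1.469}   # refractive index (assumed)

        def mix(w_mea, w_mdea):
            """Density and refractive index of the mixture from mass fractions."""
            w_h2o = 1.0 - w_mea - w_mdea
            rho = w_h2o * RHO["water"] + w_mea * RHO["mea"] + w_mdea * RHO["mdea"]
            # Gladstone-Dale rule: (n - 1)/rho is additive on a mass basis
            gd = sum(w * (N[c] - 1.0) / RHO[c]
                     for w, c in [(w_h2o, "water"), (w_mea, "mea"), (w_mdea, "mdea")])
            return rho, gd * rho + 1.0

        def residual(x, rho_meas, n_meas):
            rho, n = mix(*x)
            return [rho - rho_meas, n - n_meas]

        rho_obs, n_obs = mix(0.20, 0.10)            # synthetic "measured" sample
        w = fsolve(residual, x0=[0.1, 0.1], args=(rho_obs, n_obs))
        print(f"recovered mass fractions: MEA={w[0]:.3f}, MDEA={w[1]:.3f}")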

  6. Optimization-based human motion prediction using an inverse-inverse dynamics technique implemented in the AnyBody Modeling System

    DEFF Research Database (Denmark)

    Farahani, Saeed Davoudabadi; Andersen, Michael Skipper; de Zee, Mark

    2012-01-01

    Posture and motion prediction is based on a physics model including dynamic effects and a high level of anatomical realism. First, a musculoskeletal model comprising several hundred muscles is built in AMS. The movement is then parameterized by means of time functions controlling selected degrees of freedom of the model. Subsequently, the parameters of these functions are optimized to produce an optimum posture or movement according to a user-defined cost function and constraints. The cost function and the constraints typically express performance, comfort, injury risk, fatigue, muscle load, joint forces and other physiological properties derived from the detailed musculoskeletal analysis. The technique is demonstrated on a human model pedaling a bicycle. We use a physiology-based cost function expressing the mean square of all muscle activities over the cycle to predict a realistic motion pattern.

  7. [Impressions techniques--Part 2].

    Science.gov (United States)

    Levartovsky, S; Masri, M; Alter, E; Pilo, R

    2012-10-01

    A dental impression is a positive replica of the teeth, the surrounding gingiva and the border between them, the purpose of which is to create an accurate master model. Two major impression techniques exist today: conventional and digital. The current article describes both. For conventional impressions, it is important to choose a proper tray, stock or custom, and to mix the material properly. The commonly used techniques for making a conventional impression are described, with a review of the effect of each technique on accuracy. The effect of the wash bulk on the accuracy of the stone dies and/or the restoration is also discussed. Digital impressions, with their advantages and disadvantages, are described in comparison to conventional impressions. Although digital impressions eliminate some of the negative characteristics of conventional impressions, proper soft-tissue management and isolation of tooth preparation margins are still mandatory.

  8. Stochastic Feedforward Control Technique

    Science.gov (United States)

    Halyo, Nesim

    1990-01-01

    Class of commanded trajectories modeled as stochastic process. Advanced Transport Operating Systems (ATOPS) research and development program conducted by NASA Langley Research Center aimed at developing capabilities for increases in capacities of airports, safe and accurate flight in adverse weather conditions including wind shear, avoidance of wake vortexes, and reduced consumption of fuel. Advances in techniques for design of modern controls and increased capabilities of digital flight computers coupled with accurate guidance information from the Microwave Landing System (MLS). Stochastic feedforward control technique developed within context of ATOPS program.

  9. Telescopes and Techniques

    CERN Document Server

    Kitchin, C R

    2013-01-01

    Telescopes and Techniques has proved itself in its first two editions, having become probably one of the most widely used astronomy texts, both for amateur astronomers and astronomy and astrophysics undergraduates. Both earlier editions of the book were widely used for introductory practical astronomy courses in many universities. In this Third Edition the author guides the reader through the mathematics, physics and practical techniques needed to use today's telescopes (from the smaller models to the larger instruments installed in many colleges) and how to find objects in the sky. Most of the physics and engineering involved is described fully and requires little prior knowledge or experience. Both visual and electronic imaging techniques are covered, together with an introduction to how data (measurements) should be processed and analyzed. A simple introduction to radio telescopes is also included. Brief coverage of the more advanced topics of photometry and spectroscopy are included, but mainly to enable ...

  10. Development of self-learning Monte Carlo technique for more efficient modeling of nuclear logging measurements

    International Nuclear Information System (INIS)

    Zazula, J.M.

    1988-01-01

    The self-learning Monte Carlo technique has been implemented in the commonly used general purpose neutron transport code MORSE, in order to enhance sampling of the particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. splitting, Russian roulette, source and collision outgoing energy importance sampling, path length transformation and additional biasing of the source angular distribution, are optimized. The learning process is performed iteratively after each batch of particles, by retrieving the data concerning the subset of histories that passed the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance and the convergence of our algorithm is restricted by the statistics of successful histories from previous random walks. Further applications to the modeling of nuclear logging measurements seem promising. 11 refs., 2 figs., 3 tabs. (author)

  11. Effects of Different Missing Data Imputation Techniques on the Performance of Undiagnosed Diabetes Risk Prediction Models in a Mixed-Ancestry Population of South Africa.

    Directory of Open Access Journals (Sweden)

    Katya L Masconi

    Full Text Available Imputation techniques used to handle missing data are based on the principle of replacement. It is widely advocated that multiple imputation is superior to other imputation methods; however, studies have suggested that simple methods for filling missing data can be just as accurate as complex methods. The objective of this study was to implement a number of simple and more complex imputation methods, and to assess the effect of these techniques on the performance of undiagnosed diabetes risk prediction models during external validation. Data from the Cape Town Bellville-South cohort served as the basis for this study. Imputation methods and models were identified via recent systematic reviews. Model discrimination was assessed and compared using the C-statistic and non-parametric methods, before and after recalibration through simple intercept adjustment. The study sample consisted of 1256 individuals, of whom 173 were excluded due to previously diagnosed diabetes. Of the final 1083 individuals, 329 (30.4%) had missing data. Family history had the highest proportion of missing data (25%). Imputation of the outcome, undiagnosed diabetes, was highest in stochastic regression imputation (163 individuals). Overall, deletion resulted in the lowest model performances, while simple imputation yielded the highest C-statistic for the Cambridge Diabetes Risk model, Kuwaiti Risk model, Omani Diabetes Risk model and Rotterdam Predictive model. Multiple imputation only yielded the highest C-statistic for the Rotterdam Predictive model, and this was matched by simpler imputation methods. Deletion was confirmed as a poor technique for handling missing data. However, despite the emphasized disadvantages of simpler imputation methods, this study showed that implementing these methods results in similar predictive utility for undiagnosed diabetes when compared to multiple imputation.
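
    The comparison reported above can be mimicked on synthetic data: impute with a simple mean strategy and with a chained-equations imputer (a single-imputation relative of multiple imputation), then compare the C-statistic (ROC AUC) of a risk model. All data below are simulated, and scikit-learn's IterativeImputer stands in for the more complex methods.

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import SimpleImputer, IterativeImputer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Synthetic risk-factor data with missing values scattered across rows.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(1000, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)
        mask = rng.random(X.shape) < 0.08
        X_miss = X.copy()
        X_miss[mask] = np.nan

        Xtr, Xte, ytr, yte = train_test_split(X_miss, y, random_state=0)
        for name, imp in [("mean (simple)", SimpleImputer(strategy="mean")),
                          ("chained-equations", IterativeImputer(random_state=0))]:
            model = LogisticRegression().fit(imp.fit_transform(Xtr), ytr)
            auc = roc_auc_score(yte, model.predict_proba(imp.transform(Xte))[:, 1])
            print(f"{name:>18}: C-statistic = {auc:.3f}")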

  12. Simulation of the fissureless technique for thoracoscopic segmentectomy using rapid prototyping.

    Science.gov (United States)

    Akiba, Tadashi; Nakada, Takeo; Inagaki, Takuya

    2015-01-01

    The fissureless lobectomy or anterior fissureless technique is a novel surgical technique, which avoids dissection of the lung parenchyma over the pulmonary artery during lobectomy by open thoracotomy approach or direct vision thoracoscopic surgery. This technique is indicated for fused lobes. We present two cases where thoracoscopic pulmonary segmentectomy was performed using the fissureless technique simulated by three-dimensional (3D) pulmonary models. The 3D model and rapid prototyping provided an accurate anatomical understanding of the operative field in both cases. We believe that the construction of these models is useful for thoracoscopic and other complicated surgeries of the chest.

  13. Simulation of the Fissureless Technique for Thoracoscopic Segmentectomy Using Rapid Prototyping

    Science.gov (United States)

    Nakada, Takeo; Inagaki, Takuya

    2014-01-01

    The fissureless lobectomy or anterior fissureless technique is a novel surgical technique, which avoids dissection of the lung parenchyma over the pulmonary artery during lobectomy by open thoracotomy approach or direct vision thoracoscopic surgery. This technique is indicated for fused lobes. We present two cases where thoracoscopic pulmonary segmentectomy was performed using the fissureless technique simulated by three-dimensional (3D) pulmonary models. The 3D model and rapid prototyping provided an accurate anatomical understanding of the operative field in both cases. We believe that the construction of these models is useful for thoracoscopic and other complicated surgeries of the chest. PMID:24633132

  14. A simple highly accurate field-line mapping technique for three-dimensional Monte Carlo modeling of plasma edge transport

    International Nuclear Information System (INIS)

    Feng, Y.; Sardei, F.; Kisslinger, J.

    2005-01-01

    The paper presents a new simple and accurate numerical field-line mapping technique providing a high-quality representation of field lines as required by a Monte Carlo modeling of plasma edge transport in the complex magnetic boundaries of three-dimensional (3D) toroidal fusion devices. Using a toroidal sequence of precomputed 3D finite flux-tube meshes, the method advances field lines through a simple bilinear, forward/backward symmetric interpolation at the interfaces between two adjacent flux tubes. It is a reversible field-line mapping (RFLM) algorithm ensuring a continuous and unique reconstruction of field lines at any point of the 3D boundary. The reversibility property has a strong impact on the efficiency of modeling the highly anisotropic plasma edge transport in general closed or open configurations of arbitrary ergodicity as it avoids artificial cross-field diffusion of the fast parallel transport. For stellarator-symmetric magnetic configurations, which are the standard case for stellarators, the reversibility additionally provides an average cancellation of the radial interpolation errors of field lines circulating around closed magnetic flux surfaces. The RFLM technique has been implemented in the 3D edge transport code EMC3-EIRENE and is used routinely for plasma transport modeling in the boundaries of several low-shear and high-shear stellarators as well as in the boundary of a tokamak with 3D magnetic edge perturbations

  15. Analytical model and error analysis of arbitrary phasing technique for bunch length measurement

    Science.gov (United States)

    Chen, Qushan; Qin, Bin; Chen, Wei; Fan, Kuanjun; Pei, Yuanji

    2018-05-01

    An analytical model of an RF phasing method using arbitrary phase scanning for bunch length measurement is reported. We set up a statistical model instead of a linear chirp approximation to analyze the energy modulation process. It is found that, assuming a short bunch (σ_φ/2π → 0) and a small relative energy spread (σ_γ/γ_r → 0), the squared energy spread (Y = σ_γ²) at the exit of the traveling wave linac has a parabolic relationship with the cosine of the injection phase (X = cos φ_r at z = 0), i.e., Y = AX² + BX + C. Analogous to quadrupole strength scanning for emittance measurement, this phase scanning method can be used to obtain the bunch length by measuring the energy spread at different injection phases. The injection phases can be chosen at random, which differs significantly from the commonly used zero-phasing method. Further, the systematic error of the reported method, such as the influence of the space charge effect, is analyzed. This technique will be especially useful at low energies, where the beam quality is dramatically degraded and is hard to measure using the zero-phasing method.
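
    The measurement reduces to fitting a parabola, exactly as in a quadrupole scan. The sketch below generates synthetic energy-spread readings at randomly chosen injection phases and recovers A, B and C by least squares; extracting σ_φ from the fitted coefficients then follows the paper's analytical relations, which are not reproduced here. All numbers are invented.

        import numpy as np

        # Synthetic arbitrary-phasing scan: Y = A X^2 + B X + C with X = cos(phi).
        rng = np.random.default_rng(2)
        phi = rng.uniform(-0.6, 0.6, 20)            # arbitrary injection phases (rad)
        X = np.cos(phi)

        A_true, B_true, C_true = 4.0e-6, -7.5e-6, 3.6e-6
        Y = A_true * X**2 + B_true * X + C_true + rng.normal(0, 5e-8, X.size)

        A, B, C = np.polyfit(X, Y, 2)               # parabolic fit, as in a quad scan
        print(f"fitted A={A:.3e}, B={B:.3e}, C={C:.3e}")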

  16. Nonlinear dynamic macromodeling techniques for audio systems

    Science.gov (United States)

    Ogrodzki, Jan; Bieńkowski, Piotr

    2015-09-01

    This paper develops a modelling method and a model identification technique for nonlinear dynamic audio systems. Identification is performed by means of a behavioral approach based on a polynomial approximation, making use of the Discrete Fourier Transform and the Harmonic Balance Method. A model of an audio system is first created and identified and then simulated in real time using an algorithm of low computational complexity. The algorithm consists of real-time emulation of the system response rather than simulation of the system itself. The proposed software is written in the Python language using object-oriented programming techniques. The code is optimized for a multithreaded environment.
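
    A minimal version of the behavioral approach, identifying a memoryless polynomial from input/output data and then emulating the response cheaply, looks as follows; the tanh "device" and the polynomial order are arbitrary stand-ins for a real audio stage, not the paper's model.

        import numpy as np

        # Identify a static polynomial behavioral model by least squares, then
        # emulate the device response with it. The "device" is a synthetic
        # soft-clipping stage, not a real system.
        rng = np.random.default_rng(3)
        x = rng.uniform(-1, 1, 4096)                              # excitation
        y = np.tanh(1.5 * x) + 0.01 * rng.normal(size=x.size)     # measured response

        order = 5
        basis = np.vander(x, order + 1, increasing=True)          # 1, x, ..., x^5
        coeffs, *_ = np.linalg.lstsq(basis, y, rcond=None)        # identification

        def emulate(signal):
            """Low-cost real-time emulation: evaluate the fitted polynomial."""
            return np.polyval(coeffs[::-1], signal)

        test = 0.8 * np.sin(2 * np.pi * np.arange(256) / 64.0)
        err = np.abs(emulate(test) - np.tanh(1.5 * test)).max()
        print(f"max emulation error: {err:.4f}")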

  17. Spatial epidemiological techniques in cholera mapping and analysis towards a local scale predictive modelling

    Science.gov (United States)

    Rasam, A. R. A.; Ghazali, R.; Noor, A. M. M.; Mohd, W. M. N. W.; Hamid, J. R. A.; Bazlan, M. J.; Ahmad, N.

    2014-02-01

    Cholera spatial epidemiology is the study of the spatial pattern, spread and control of the disease and its epidemics. Previous studies have shown that multi-factorial causation such as human behaviour, ecology and other infectious risk factors influences disease outbreaks. Understanding the spatial pattern of the outbreaks and the possible interrelationships among these factors is therefore crucial and merits in-depth study. This study focuses on the integration of geographical information system (GIS) and epidemiological techniques in exploratory analysis of the cholera spatial pattern and distribution in a selected district of Sabah. Spatial Statistics and Pattern tools in ArcGIS and Microsoft Excel were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study, an epidemiological technique, was applied to investigate multiple outcomes of disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spreads easily from an infected place or person to others, especially within 1500 meters of the infected person or location. Although the cholera outbreaks in the districts are not critical, the disease could become endemic in crowded areas, unhygienic environments, and places close to contaminated water. It was also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton blooms, since these areas recorded higher numbers of cases. GIS thus proves to be a vital spatial epidemiological tool for determining the distribution pattern of the disease and supporting hypothesis generation. Future research will apply more advanced geo-analysis methods and additional disease risk factors to produce a local-scale predictive risk model of the disease in Malaysia.

  18. Spatial epidemiological techniques in cholera mapping and analysis towards a local scale predictive modelling

    International Nuclear Information System (INIS)

    Rasam, A R A; Ghazali, R; Noor, A M M; Mohd, W M N W; Hamid, J R A; Bazlan, M J; Ahmad, N

    2014-01-01

    Cholera spatial epidemiology is the study of the spatial pattern, spread and control of the disease and its epidemics. Previous studies have shown that multi-factorial causation such as human behaviour, ecology and other infectious risk factors influences disease outbreaks. Understanding the spatial pattern of the outbreaks and the possible interrelationships among these factors is therefore crucial and merits in-depth study. This study focuses on the integration of geographical information system (GIS) and epidemiological techniques in exploratory analysis of the cholera spatial pattern and distribution in a selected district of Sabah. Spatial Statistics and Pattern tools in ArcGIS and Microsoft Excel were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study, an epidemiological technique, was applied to investigate multiple outcomes of disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spreads easily from an infected place or person to others, especially within 1500 meters of the infected person or location. Although the cholera outbreaks in the districts are not critical, the disease could become endemic in crowded areas, unhygienic environments, and places close to contaminated water. It was also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton blooms, since these areas recorded higher numbers of cases. GIS thus proves to be a vital spatial epidemiological tool for determining the distribution pattern of the disease and supporting hypothesis generation. Future research will apply more advanced geo-analysis methods and additional disease risk factors to produce a local-scale predictive risk model of the disease in Malaysia.

  19. An overview of uncertainty quantification techniques with application to oceanic and oil-spill simulations

    KAUST Repository

    Iskandarani, Mohamed

    2016-04-22

    We give an overview of four different ensemble-based techniques for uncertainty quantification and illustrate their application in the context of oil plume simulations. These techniques share the common paradigm of constructing a model proxy that efficiently captures the functional dependence of the model output on uncertain model inputs. This proxy is then used to explore the space of uncertain inputs using a large number of samples, so that reliable estimates of the model's output statistics can be calculated. Three of these techniques use polynomial chaos (PC) expansions to construct the model proxy, but they differ in their approach to determining the expansions' coefficients; the fourth technique uses Gaussian Process Regression (GPR). An integral plume model for simulating the Deepwater Horizon oil-gas blowout provides examples for illustrating the different techniques. A Monte Carlo ensemble of 50,000 model simulations is used for gauging the performance of the different proxies. The examples illustrate how regression-based techniques can outperform projection-based techniques when the model output is noisy. They also demonstrate that robust uncertainty analysis can be performed at a fraction of the cost of the Monte Carlo calculation.
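
    The shared proxy paradigm is simple to demonstrate with the GPR variant: fit a surrogate to a small number of "expensive" runs, then push a large Monte Carlo ensemble through the cheap surrogate. The toy function f below stands in for the integral plume model; all sizes are illustrative.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def f(x):                                     # toy "expensive" model
            return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1]**2

        rng = np.random.default_rng(4)
        X_train = rng.uniform(-1, 1, (30, 2))         # 30 expensive model runs
        gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
        gpr.fit(X_train, f(X_train))

        X_mc = rng.uniform(-1, 1, (50_000, 2))        # cheap exploration of inputs
        y_mc = gpr.predict(X_mc)
        print(f"surrogate mean={y_mc.mean():.3f}, std={y_mc.std():.3f}")
        print(f"true      mean={f(X_mc).mean():.3f}, std={f(X_mc).std():.3f}")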

  20. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection.

    Science.gov (United States)

    Delaney, Declan T; O'Hare, Gregory M P

    2016-12-01

    No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks.

  1. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection

    Directory of Open Access Journals (Sweden)

    Declan T. Delaney

    2016-12-01

    Full Text Available No single network solution for Internet of Things (IoT networks can provide the required level of Quality of Service (QoS for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks.

  2. Innovative application of virtual display technique in virtual museum

    Science.gov (United States)

    Zhang, Jiankang

    2017-09-01

    Virtual museum refers to displaying and simulating the functions of a real museum on the Internet in the form of three-dimensional (3D) virtual reality through interactive programs. Based on the Virtual Reality Modeling Language, building a virtual museum and ensuring its effective interaction with the offline museum depend on making full use of 3D panorama, virtual reality and augmented reality techniques, and on innovatively exploiting dynamic environment modeling, real-time 3D graphics generation, system integration and other key virtual reality techniques in the overall design of the virtual museum. 3D panorama, also known as panoramic photography or virtual reality, is a technique based on static images of reality. Virtual reality is a computer simulation system that can create, and let users experience, an interactive 3D dynamic visual world. Augmented reality, also known as mixed reality, is a technique which simulates and mixes information (visual, sound, taste, touch, etc.) that is difficult for humans to experience in reality. These technologies make the virtual museum possible. It will not only bring better experiences and convenience to the public, but will also help improve the influence and cultural functions of the real museum.

  3. Assessment of surface solar irradiance derived from real-time modelling techniques and verification with ground-based measurements

    Science.gov (United States)

    Kosmopoulos, Panagiotis G.; Kazadzis, Stelios; Taylor, Michael; Raptis, Panagiotis I.; Keramitsoglou, Iphigenia; Kiranoudis, Chris; Bais, Alkiviadis F.

    2018-02-01

    This study focuses on the assessment of surface solar radiation (SSR) based on operational neural network (NN) and multi-regression function (MRF) modelling techniques that produce instantaneous (in less than 1 min) outputs. Using real-time cloud and aerosol optical properties inputs from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite and the Copernicus Atmosphere Monitoring Service (CAMS), respectively, these models are capable of calculating SSR in high resolution (1 nm, 0.05°, 15 min) that can be used for spectrally integrated irradiance maps, databases and various applications related to energy exploitation. The real-time models are validated against ground-based measurements of the Baseline Surface Radiation Network (BSRN) in a temporal range varying from 15 min to monthly means, while a sensitivity analysis of the cloud and aerosol effects on SSR is performed to ensure reliability under different sky and climatological conditions. The simulated outputs, compared to their common training dataset created by the radiative transfer model (RTM) libRadtran, showed median error values in the range -15 to 15 % for the NN that produces spectral irradiances (NNS), 5-6 % underestimation for the integrated NN and close to zero errors for the MRF technique. The verification against BSRN revealed that the real-time calculation uncertainty ranges from -100 to 40 and -20 to 20 W m-2, for the 15 min and monthly mean global horizontal irradiance (GHI) averages, respectively, while the accuracy of the input parameters, in terms of aerosol and cloud optical thickness (AOD and COT), and their impact on GHI, was of the order of 10 % as compared to the ground-based measurements. The proposed system aims to be utilized through studies and real-time applications which are related to solar energy production planning and use.

  4. Three-dimensional accuracy of different impression techniques for dental implants

    Directory of Open Access Journals (Sweden)

    Mohammadreza Nakhaei

    2015-01-01

    Full Text Available Background: Accurate impression making is an essential prerequisite for achieving a passive fit between the implant and the superstructure. The aim of this in vitro study was to compare the three-dimensional accuracy of open-tray and three closed-tray impression techniques. Materials and Methods: Three acrylic resin mandibular master models with four parallel implants were used: Biohorizons (BIO), Straumann tissue-level (STL), and Straumann bone-level (SBL). Forty-two putty/wash polyvinyl siloxane impressions of the models were made using open-tray and closed-tray techniques. Closed-tray impressions were made using snap-on (STL model), transfer coping (TC) (BIO model) and TC plus plastic cap (TC-Cap) (SBL model). The impressions were poured with type IV stone, and the positional accuracy of the implant analog heads in each dimension (x, y and z axes) and the linear displacement (ΔR) were evaluated using a coordinate measuring machine. Data were analyzed using ANOVA and post-hoc Tukey tests (α = 0.05). Results: The ΔR values of the snap-on technique were significantly lower than those of the TC and TC-Cap techniques (P < 0.001). No significant differences were found between closed and open impression techniques for STL in Δx, Δy, Δz and ΔR values (P = 0.444, P = 0.181, P = 0.835 and P = 0.911, respectively). Conclusion: Considering the limitations of this study, the snap-on implant-level impression technique resulted in greater three-dimensional accuracy than TC and TC-Cap, and was similar to the open-tray technique.

  5. Development of evaluation method for software safety analysis techniques

    International Nuclear Information System (INIS)

    Huang, H.; Tu, W.; Shih, C.; Chen, C.; Yang, W.; Yih, S.; Kuo, C.; Chen, M.

    2006-01-01

    Full text: Following the massive adoption of digital Instrumentation and Control (I and C) systems for nuclear power plants (NPP), various Software Safety Analysis (SSA) techniques are used to evaluate NPP safety when adopting an appropriate digital I and C system, and thereby to reduce risk to an acceptable level. However, each technique has its specific advantages and disadvantages. If two or more techniques can be incorporated in a complementary way, the SSA combination becomes more acceptable. Consequently, if proper evaluation criteria are available, the analyst can choose an appropriate technique combination for the analysis on the basis of available resources. This research evaluated currently applicable software safety analysis techniques, such as Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Markov chain modeling, Dynamic Flowgraph Methodology (DFM), and simulation-based model analysis, and then determined evaluation indexes reflecting their characteristics, which include dynamic capability, completeness, achievability, detail, signal/noise ratio, complexity, and implementation cost. These indexes may help decision makers and software safety analysts to choose the best SSA combination and arrange their own software safety plan. With this proposed method, analysts can evaluate various SSA combinations for a specific purpose. According to the case study results, the traditional PHA + FMEA + FTA (with failure rate) + Markov chain modeling (without transfer rate) combination is not competitive, due to the dilemma of obtaining acceptable software failure rates. However, the systematic architecture of FTA and Markov chain modeling is still valuable for understanding the software fault structure. The system-centric techniques, such as DFM and simulation-based model analysis, show advantages in dynamic capability, achievability, detail, and signal/noise ratio. However, their disadvantages are completeness and complexity

  6. Model-Based Data Integration and Process Standardization Techniques for Fault Management: A Feasibility Study

    Science.gov (United States)

    Haste, Deepak; Ghoshal, Sudipto; Johnson, Stephen B.; Moore, Craig

    2018-01-01

    This paper describes the theory and considerations in the application of model-based techniques to assimilate information from disjoint knowledge sources for performing NASA's Fault Management (FM)-related activities using the TEAMS® toolset. FM consists of the operational mitigation of existing and impending spacecraft failures. NASA's FM directives have both design-phase and operational-phase goals. This paper highlights recent studies by QSI and DST of the capabilities required in the TEAMS® toolset for conducting FM activities with the aim of reducing operating costs, increasing autonomy, and conforming to time schedules. These studies use and extend the analytic capabilities of QSI's TEAMS® toolset to conduct a range of FM activities within a centralized platform.

  7. Artificial intelligence techniques in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Laughton, M.A.

    1997-12-31

    Since the early to mid 1980s much of the effort in power systems analysis has turned away from the methodology of formal mathematical modelling which came from the fields of operations research, control theory and numerical analysis to the less rigorous techniques of artificial intelligence (AI). Today the main AI techniques found in power systems applications are those utilising the logic and knowledge representations of expert systems, fuzzy systems, artificial neural networks (ANN) and, more recently, evolutionary computing. These techniques will be outlined in this chapter and the power system applications indicated. (Author)

  8. The Potential for Zinc Stable Isotope Techniques and Modelling to Determine Optimal Zinc Supplementation

    Directory of Open Access Journals (Sweden)

    Cuong D. Tran

    2015-05-01

    Full Text Available It is well recognised that zinc deficiency is a major global public health issue, particularly in young children in low-income countries with diarrhoea and environmental enteropathy. Zinc supplementation is regarded as a powerful tool to correct zinc deficiency as well as to treat a variety of physiologic and pathologic conditions. However, the dose and frequency of its use as well as the choice of zinc salt are not clearly defined regardless of whether it is used to treat a disease or correct a nutritional deficiency. We discuss the application of zinc stable isotope tracer techniques to assess zinc physiology, metabolism and homeostasis and how these can address knowledge gaps in zinc supplementation pharmacokinetics. This may help to resolve optimal dose, frequency, length of administration, timing of delivery to food intake and choice of zinc compound. It appears that long-term preventive supplementation can be administered much less frequently than daily but more research needs to be undertaken to better understand how best to intervene with zinc in children at risk of zinc deficiency. Stable isotope techniques, linked with saturation response and compartmental modelling, also have the potential to assist in the continued search for simple markers of zinc status in health, malnutrition and disease.

  9. The Potential for Zinc Stable Isotope Techniques and Modelling to Determine Optimal Zinc Supplementation

    Science.gov (United States)

    Tran, Cuong D.; Gopalsamy, Geetha L.; Mortimer, Elissa K.; Young, Graeme P.

    2015-01-01

    It is well recognised that zinc deficiency is a major global public health issue, particularly in young children in low-income countries with diarrhoea and environmental enteropathy. Zinc supplementation is regarded as a powerful tool to correct zinc deficiency as well as to treat a variety of physiologic and pathologic conditions. However, the dose and frequency of its use as well as the choice of zinc salt are not clearly defined regardless of whether it is used to treat a disease or correct a nutritional deficiency. We discuss the application of zinc stable isotope tracer techniques to assess zinc physiology, metabolism and homeostasis and how these can address knowledge gaps in zinc supplementation pharmacokinetics. This may help to resolve optimal dose, frequency, length of administration, timing of delivery to food intake and choice of zinc compound. It appears that long-term preventive supplementation can be administered much less frequently than daily but more research needs to be undertaken to better understand how best to intervene with zinc in children at risk of zinc deficiency. Stable isotope techniques, linked with saturation response and compartmental modelling, also have the potential to assist in the continued search for simple markers of zinc status in health, malnutrition and disease. PMID:26035248

  10. Mathematical Modeling and Algebraic Technique for Resolving a Single-Producer Multi-Retailer Integrated Inventory System with Scrap

    OpenAIRE

    Yuan-Shyi Peter Chiu; Chien-Hua Lee; Nong Pan; Singa Wang Chiu

    2013-01-01

    This study uses mathematical modeling along with an algebraic technique to resolve the production-distribution policy for a single-producer multi-retailer integrated inventory system with scrap in production. We assume that a product is manufactured through an imperfect production process where all nonconforming items will be picked up and scrapped in each production cycle. After the entire lot is quality assured, multiple shipments will be delivered synchronously to m different retailers in ...

  11. Imaging techniques for visualizing and phenotyping congenital heart defects in murine models.

    Science.gov (United States)

    Liu, Xiaoqin; Tobita, Kimimasa; Francis, Richard J B; Lo, Cecilia W

    2013-06-01

    The mouse model is ideal for investigating the genetic and developmental etiology of congenital heart disease. However, cardiovascular phenotyping for the precise diagnosis of structural heart defects in mice remains challenging. With rapid advances in imaging techniques, there are now high-throughput phenotyping tools available for the diagnosis of structural heart defects. In this review, we discuss the efficacy of four different imaging modalities for congenital heart disease diagnosis in fetal/neonatal mice, including noninvasive fetal echocardiography, micro-computed tomography (micro-CT), micro-magnetic resonance imaging (micro-MRI), and episcopic fluorescence image capture (EFIC) histopathology. The experience we have gained in the use of these imaging modalities in a large-scale mouse mutagenesis screen has validated their efficacy for congenital heart defect diagnosis in the tiny hearts of fetal and newborn mice. These cutting-edge phenotyping tools will be invaluable for furthering our understanding of the developmental etiology of congenital heart disease. Copyright © 2013 Wiley Periodicals, Inc.

  12. Prescribed wind shear modelling with the actuator line technique

    DEFF Research Database (Denmark)

    Mikkelsen, Robert Flemming; Sørensen, Jens Nørkær; Troldborg, Niels

    2007-01-01

    A method for prescribing arbitrary steady atmospheric wind shear profiles combined with CFD is presented. The method is furthermore combined with the actuator line technique governing the aerodynamic loads on a wind turbine. Computation are carried out on a wind turbine exposed to a representative...
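
    A representative prescribed-shear inflow is the power-law profile sketched below; the reference height, reference speed and shear exponent are generic assumptions, not the paper's values, and the paper's method admits arbitrary profiles.

        import numpy as np

        # Power-law atmospheric shear profile, one common choice for a steady
        # prescribed inflow in actuator-line simulations.
        def shear_profile(z, u_ref=8.0, z_ref=90.0, alpha=0.14):
            """Streamwise velocity u(z) in m/s at heights z (m)."""
            return u_ref * (np.asarray(z) / z_ref) ** alpha

        z = np.linspace(10.0, 160.0, 7)               # heights across a rotor (m)
        print(np.round(shear_profile(z), 2))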

  13. Spectral Estimation by the Random Dec Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Jensen, Jacob L.; Krenk, Steen

    1990-01-01

    This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...
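
    A bare-bones Random Decrement estimator is shown below: segments following each level upcrossing of a simulated lightly damped response are averaged, using only additions. The AR(2) coefficients, trigger level and segment length are arbitrary illustrative choices, not the paper's ARMA setup.

        import numpy as np

        # Random Decrement signature of a lightly damped AR(2) process; for a
        # zero-mean Gaussian response the signature is proportional to the
        # autocorrelation function.
        rng = np.random.default_rng(5)
        n = 100_000
        x = np.zeros(n)
        for i in range(2, n):                         # toy single-DOF response
            x[i] = 1.95 * x[i - 1] - 0.96 * x[i - 2] + rng.normal()

        level, seg = x.std(), 200
        triggers = np.flatnonzero((x[:-1] < level) & (x[1:] >= level))
        triggers = triggers[triggers < n - seg]
        signature = np.mean([x[t:t + seg] for t in triggers], axis=0)  # additions only
        print(f"{triggers.size} triggers, signature peak {signature.max():.3f}")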

  14. Spectral Estimation by the Random DEC Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Jensen, J. Laigaard; Krenk, S.

    This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...

  15. Geochronological study of the Guanabara Bay (Rio de Janeiro State, Brazil) using the 210Pb dating technique and the constant rate of supply model

    International Nuclear Information System (INIS)

    Silva Braganca, Maura Julia Camara da; Oliveira Godoy, Jose Marcos de

    1995-01-01

    A geochronological study of Guanabara Bay (RJ, Brazil) based on the 210Pb dating technique using the Constant Rate of Supply (CRS) model is presented. Low-energy gamma spectrometry was used to determine 210Pb in samples collected from the Estrela and Sao Joao de Meriti rivers, while a radiochemical method was applied to determine the amount of 210Pb in samples from the Guapimirim, Guaxindiba and Imbuacu rivers. Atomic absorption spectrometry with the air-acetylene flame technique was used to determine the amount of copper in all the samples. The CRS model proved adequate for this estuarine system. (author). 19 refs., 5 figs., 6 tabs
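
    The CRS age equation itself is compact: t(x) = (1/λ) ln(A0/A(x)), with A(x) the cumulative unsupported 210Pb inventory below depth x and λ the 210Pb decay constant. The sketch below applies it to a synthetic core profile; the concentrations and slice masses are invented, not Guanabara Bay data.

        import numpy as np

        # Constant Rate of Supply (CRS) ages for a synthetic sediment core.
        LAMBDA = np.log(2.0) / 22.26                  # 210Pb decay constant (1/yr)

        depth = np.arange(1, 11)                      # slice bottoms (cm), toy core
        conc = 80.0 * np.exp(-0.25 * depth)           # unsupported 210Pb (Bq/kg), toy
        mass = np.full(depth.size, 1.2)               # dry mass per slice (g/cm2), toy

        inventory = conc * mass                       # activity per slice
        A_below = inventory[::-1].cumsum()[::-1]      # cumulative inventory below slice
        A0 = inventory.sum()
        age = np.log(A0 / A_below) / LAMBDA
        for d, t in zip(depth, age):
            print(f"depth {d:2d} cm: age {t:6.1f} yr")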

  16. Application of predictive modelling techniques in industry: from food design up to risk assessment.

    Science.gov (United States)

    Membré, Jeanne-Marie; Lambert, Ronald J W

    2008-11-30

    In this communication, examples of applications of predictive microbiology in industrial contexts (i.e. Nestlé and Unilever) are presented which cover a range of applications in food safety from formulation and process design to consumer safety risk assessment. A tailor-made, private expert system, developed to support safe product/process design assessment is introduced as an example of how predictive models can be deployed for use by non-experts. Its use in conjunction with other tools and software available in the public domain is discussed. Specific applications of predictive microbiology techniques are presented relating to investigations of either growth or limits to growth with respect to product formulation or process conditions. An example of a probabilistic exposure assessment model for chilled food application is provided and its potential added value as a food safety management tool in an industrial context is weighed against its disadvantages. The role of predictive microbiology in the suite of tools available to food industry and some of its advantages and constraints are discussed.

  17. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images were used to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance...... While the predicted distribution of larvae was mostly even throughout Denmark, in Norway and Sweden it was primarily around the coastlines. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger...... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region....
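
    Boosted Regression Trees are gradient-boosted decision trees; a toy version of the workflow is sketched below with made-up satellite covariates (NDVI, temperature, forest cover) standing in for the study's remote-sensing inputs and synthetic counts standing in for field-collected tick abundance.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        # Fit a Boosted Regression Tree to synthetic covariate/abundance data and
        # inspect which covariates the model relies on.
        rng = np.random.default_rng(6)
        X = rng.uniform(0, 1, (500, 3))                       # [ndvi, temp, forest]
        abundance = 40 * X[:, 0] * X[:, 2] + 5 * X[:, 1] + rng.normal(0, 2, 500)

        brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                        max_depth=3).fit(X, abundance)
        print(dict(zip(["ndvi", "temp", "forest"],
                       np.round(brt.feature_importances_, 2))))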

  18. Thermal measurements and inverse techniques

    CERN Document Server

    Orlande, Helcio RB; Maillet, Denis; Cotta, Renato M

    2011-01-01

    With its uncommon presentation of instructional material regarding mathematical modeling, measurements, and solution of inverse problems, Thermal Measurements and Inverse Techniques is a one-stop reference for those dealing with various aspects of heat transfer. Progress in mathematical modeling of complex industrial and environmental systems has enabled numerical simulations of most physical phenomena. In addition, recent advances in thermal instrumentation and heat transfer modeling have improved experimental procedures and indirect measurements for heat transfer research of both natural phe

  19. A novel hybrid model for air quality index forecasting based on two-phase decomposition technique and modified extreme learning machine.

    Science.gov (United States)

    Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier

    2017-02-15

    The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high-frequency IMFs, which would otherwise increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose them into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high-frequency IMF is obtained by adding up the forecast results of all corresponding VMs, and the forecast series of AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016, collected from Beijing and Shanghai, China, are taken as test cases for the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models in forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
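
    The decompose-forecast-aggregate pipeline can be mocked up compactly. In the sketch below a moving-average trend/residual split stands in for CEEMD/VMD, and scikit-learn's MLPRegressor stands in for the DE-optimized ELM; the AQI series is synthetic, so only the pipeline shape matches the paper.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(7)
        t = np.arange(730)
        aqi = 100 + 30 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 8, t.size)

        kernel = np.ones(15) / 15
        trend = np.convolve(aqi, kernel, mode="same")        # low-frequency component
        resid = aqi - trend                                  # high-frequency component

        def forecast(series, lags=7):
            """Train an autoregressive net on lagged windows, step past the end."""
            X = np.column_stack([series[i:i - lags] for i in range(lags)])
            y = series[lags:]
            model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                 random_state=0).fit(X, y)
            return model.predict(series[np.newaxis, -lags:])[0]

        prediction = forecast(trend) + forecast(resid)       # aggregate components
        print(f"next-day AQI forecast: {prediction:.1f}")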

  20. An overview of uncertainty quantification techniques with application to oceanic and oil-spill simulations

    KAUST Repository

    Iskandarani, Mohamed; Wang, Shitao; Srinivasan, Ashwanth; Carlisle Thacker, W.; Winokur, Justin; Knio, Omar

    2016-01-01

    We give an overview of four different ensemble-based techniques for uncertainty quantification and illustrate their application in the context of oil plume simulations. These techniques share the common paradigm of constructing a model proxy that efficiently captures the functional dependence of the model output on uncertain model inputs. This proxy is then used to explore the space of uncertain inputs using a large number of samples, so that reliable estimates of the model's output statistics can be calculated. Three of these techniques use polynomial chaos (PC) expansions to construct the model proxy, but they differ in their approach to determining the expansions' coefficients; the fourth technique uses Gaussian Process Regression (GPR). An integral plume model for simulating the Deepwater Horizon oil-gas blowout provides examples for illustrating the different techniques. A Monte Carlo ensemble of 50,000 model simulations is used for gauging the performance of the different proxies. The examples illustrate how regression-based techniques can outperform projection-based techniques when the model output is noisy. They also demonstrate that robust uncertainty analysis can be performed at a fraction of the cost of the Monte Carlo calculation.

  1. Fusion of neural computing and PLS techniques for load estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lu, M.; Xue, H.; Cheng, X. [Northwestern Polytechnical Univ., Xi' an (China); Zhang, W. [Xi' an Inst. of Post and Telecommunication, Xi' an (China)

    2007-07-01

    A method to predict the electric load of a power system in real time is presented. The method is based on neurocomputing and partial least squares (PLS). Short-term load forecasts for power systems are generally determined by conventional statistical methods and Computational Intelligence (CI) techniques such as neural computing. However, statistical modeling methods often require questionable distributional assumptions as input, and neural computing is weak, particularly in determining topology. In order to overcome the problems associated with conventional techniques, the authors developed a CI hybrid model based on neural computation and PLS techniques. The theoretical foundation for the designed CI hybrid model is presented along with its application to a power system. The hybrid model is suitable for nonlinear modeling and latent-structure extraction, and it can automatically determine the optimal topology to maximize generalization. The CI hybrid model provides faster convergence and better prediction results than the abductive networks model because it incorporates a load conversion technique as well as new transfer functions. To demonstrate the effectiveness of the hybrid model, load forecasting was performed on a data set obtained from the Puget Sound Power and Light Company. Compared with the abductive networks model, the CI hybrid model reduced the forecast error by 32.37 per cent on workdays, and by an average of 27.18 per cent on weekends. It was concluded that the CI hybrid model has a more powerful predictive ability. 7 refs., 1 tab., 3 figs.
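
    The PLS half of such a hybrid is the latent-structure extraction. The sketch below fits a two-component PLS regression to collinear, made-up load drivers; its latent scores are the kind of compact inputs a neural stage could consume. Nothing here reproduces the paper's actual hybrid.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # PLS on deliberately collinear synthetic features (two temperature
        # readings plus a day-type indicator) against a synthetic load.
        rng = np.random.default_rng(8)
        temp = rng.uniform(-5, 35, 300)
        features = np.column_stack([temp,
                                    temp + rng.normal(0, 1, 300),   # collinear copy
                                    rng.uniform(0, 1, 300)])        # day type
        load = (500 + 8 * np.abs(temp - 18) + 60 * features[:, 2]
                + rng.normal(0, 10, 300))

        pls = PLSRegression(n_components=2).fit(features, load)
        scores = pls.transform(features)       # latent structure for a neural stage
        print(f"R^2 of 2-component PLS: {pls.score(features, load):.3f}")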

  2. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same bandlimited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast.

  3. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jacob Laigaard

    1991-01-01

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same band-limited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast.

  4. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard

    1992-01-01

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same bandlimited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast.

  5. A comparison of force sensing techniques for planetary manipulation

    Science.gov (United States)

    Helmick, Daniel; Okon, Avi; DiCicco, Matt

    2006-01-01

    Five techniques for sensing forces with a manipulator are compared analytically and experimentally. The techniques compared are: a six-axis wrist force/torque sensor, joint torque sensors, link strain gauges, motor current sensors, and flexibility modeling. The accuracy and repeatability of each technique are quantified and compared.

  6. Application of computer-aided three-dimensional skull model with rapid prototyping technique in repair of zygomatico-orbito-maxillary complex fracture.

    Science.gov (United States)

    Li, Wei Zhong; Zhang, Mei Chao; Li, Shao Ping; Zhang, Lei Tao; Huang, Yu

    2009-06-01

    With the advent of CAD/CAM and rapid prototyping (RP), a technical revolution in oral and maxillofacial trauma was promoted to benefit the treatment and repair of maxillofacial fractures and the reconstruction of maxillofacial defects. For a patient with zygomatico-facial collapse deformity resulting from a zygomatico-orbito-maxillary complex (ZOMC) fracture, CT scan data were processed using Mimics 10.0 for three-dimensional (3D) reconstruction. The reduction design was aided by 3D virtual imaging, and the 3D skull model was reproduced using the RP technique. In line with the design produced in Mimics, presurgery was performed on the 3D skull model, and a semi-coronal incision was used for reduction of the ZOMC fracture, based on the outcome of the presurgery. Postoperative CT images revealed significant correction of the zygomatic collapse, restoration of the zygomatic arch, and well-restored facial symmetry. The CAD/CAM and RP techniques are relatively useful tools that can assist surgeons with reconstruction of the maxillofacial skeleton, especially in repairs of ZOMC fractures.

  7. An eigenexpansion technique for modelling plasma start-up

    International Nuclear Information System (INIS)

    Pillsbury, R.D.

    1989-01-01

    An algorithm has been developed and implemented in a computer program that allows the estimation of PF coil voltages required to start-up an axisymmetric plasma in a tokamak in the presence of eddy currents in toroidally continuous conducting structures. The algorithm makes use of an eigen-expansion technique to solve the lumped parameter circuit loop voltage equations associated with the PF coils and passive (conducting) structures. An example of start-up for CIT (Compact Ignition Tokamak) is included
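    A hedged sketch of the eigen-expansion idea for lumped-parameter circuit equations of the form L di/dt + R i = v (the matrices below are illustrative, not CIT data): diagonalizing A = -L⁻¹R once gives the response of every coil/structure loop to a constant voltage in closed form per mode.

        import numpy as np

        Lm = np.diag([1.0, 2.0, 1.5])            # inductance matrix, H (illustrative)
        Rm = np.array([[2.0, -0.5,  0.0],
                       [-0.5, 3.0, -1.0],
                       [ 0.0, -1.0, 2.5]])       # resistance matrix, ohm
        v = np.array([1.0, 0.0, 0.0])            # constant applied voltages, V

        A = -np.linalg.solve(Lm, Rm)
        lam, V = np.linalg.eig(A)                # modal decay rates and mode shapes
        i_ss = np.linalg.solve(Rm, v)            # steady-state loop currents

        def currents(t, i0=np.zeros(3)):
            c = np.linalg.solve(V, i0 - i_ss)    # modal amplitudes of the transient
            return i_ss + (V * np.exp(lam * t)) @ c

        print(currents(0.5))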

  8. Development and validation of brain and spinal cord vector and cell-delivery techniques in pre-clinical minipig models of neurodegenerative disorders

    Czech Academy of Sciences Publication Activity Database

    Juhás, Štefan; Juhásová, Jana; Klíma, Jiří; Maršala, M.; Maršala, S.; Atsushi, Y.; Johe, K.; Motlík, Jan

    2015-01-01

    Vol. 78, Suppl 2 (2015), p. 9-10 ISSN 1210-7859. [Conference on Animal Models for Neurodegenerative Diseases /3./. 08.11.2015-10.11.2015, Liblice] R&D Projects: GA MŠk ED2.1.00/03.0124; GA MŠk(CZ) 7F14308 Institutional support: RVO:67985904 Keywords: minipig models of neurodegenerative disorders * brain and spinal cord cell delivery techniques Subject RIV: EB - Genetics; Molecular Biology

  9. On the use of nudging techniques for regional climate modeling: application for tropical convection

    Science.gov (United States)

    Pohl, Benjamin; Crétat, Julien

    2014-09-01

    Using a large set of WRF ensemble simulations at 70-km horizontal resolution over a domain encompassing the Warm Pool region and its surroundings [45°N-45°S, 10°E-240°E], this study aims to quantify how nudging techniques modify the simulation of deep atmospheric convection. Seasonal mean climate, transient variability at intraseasonal timescales, and the respective weights of internal (stochastic) and forced (reproducible) variability are all considered. Sensitivity to a large variety of nudging settings (nudged variables, nudged layers, and nudging strength) and to the model physics (using 3 convective parameterizations) is addressed. Integrations are carried out during a 7-month season characterized by neutral background conditions and strong intraseasonal variability. Results show that (1) the model responds differently to the nudging from one parameterization to another: biases are decreased by ~50 % for Betts-Miller-Janjic convection, against only 17 % for Grell-Dévényi, the scheme that nonetheless produces the largest biases; (2) relaxing air temperature is the most efficient way to reduce biases, while nudging the wind most increases co-variability with daily observations; (3) the model's internal variability is drastically reduced and mostly depends on the nudging strength and the nudged variables; (4) interrupting the relaxation before the end of the simulations leads to an abrupt convergence towards the model's natural solution, with no clear effect on the simulated climate after a few days. The usefulness and limitations of the approach are finally discussed through the example of the Madden-Julian Oscillation, which the model fails to simulate and which can be artificially, though still imperfectly, reproduced in relaxation experiments.
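    The relaxation itself is simple; a minimal sketch of Newtonian nudging (the scalar dynamics, reference value, and time scale tau are illustrative assumptions): a term -(x - x_ref)/tau is added to the model tendency, so the nudging strength is set by tau.

        def nudged_step(x, x_ref, dt=0.1, tau=1.0):
            tendency = -0.5 * x              # stand-in for the model's own dynamics
            relaxation = -(x - x_ref) / tau  # Newtonian nudging toward the reference
            return x + dt * (tendency + relaxation)

        x, x_ref = 5.0, 1.0
        for _ in range(200):
            x = nudged_step(x, x_ref)
        print(round(x, 3))   # settles between the free solution (0) and the reference (1)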

  10. Modelling and analysis of ozone concentration by artificial intelligent techniques for estimating air quality

    Science.gov (United States)

    Taylan, Osman

    2017-02-01

    High ozone concentration is an important cause of air pollution, mainly because of ozone's role as a greenhouse gas. Ozone is produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower atmosphere. Monitoring and controlling the quality of air in the urban environment is therefore very important for public health. However, air quality prediction is a highly complex and non-linear process, and several attributes usually have to be considered. Artificial intelligence (AI) techniques can be employed to monitor and evaluate the ozone concentration level. The aim of this study is to develop an Adaptive Neuro-Fuzzy Inference System (ANFIS) approach to determine the influence of peripheral factors on air quality and pollution, a growing problem in Jeddah city owing to ozone levels. The ozone concentration level was considered as a factor to predict the Air Quality (AQ) under the prevailing atmospheric conditions. Using the Air Quality Standards of Saudi Arabia, the ozone concentration level was modelled from factors such as nitrogen oxides (NOx), atmospheric pressure, temperature, and relative humidity. An ANFIS model was thus developed to estimate the ozone concentration level, and the model performance was assessed with testing data obtained from the monitoring stations established by the General Authority of Meteorology and Environment Protection of the Kingdom of Saudi Arabia. The outcomes of the ANFIS model were re-assessed by fuzzy quality charts using quality specification and control limits based on US-EPA air quality standards. The results of the present study show that the ANFIS model is a comprehensive approach for the estimation and assessment of ozone levels and is a reliable approach for producing genuine outcomes.

  11. Development of evaluation method for software hazard identification techniques

    International Nuclear Information System (INIS)

    Huang, H. W.; Chen, M. H.; Shih, C.; Yih, S.; Kuo, C. T.; Wang, L. H.; Yu, Y. C.; Chen, C. W.

    2006-01-01

    This research evaluated currently applicable software hazard identification techniques, such as Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Markov chain modeling, Dynamic Flowgraph Methodology (DFM), and simulation-based model analysis, and then determined evaluation indexes based on their characteristics, which include dynamic capability, completeness, achievability, detail, signal/noise ratio, complexity, and implementation cost. With this proposed method, analysts can evaluate various combinations of software hazard identification techniques for a specific purpose. According to the case study results, the traditional PHA + FMEA + FTA (with failure rate) + Markov chain modeling (with transfer rate) combination is not competitive, owing to the difficulty of obtaining acceptable software failure rates. However, the systematic architecture of FTA and Markov chain modeling is still valuable for understanding the software fault structure. The system-centric techniques, such as DFM and simulation-based model analysis, show advantages in dynamic capability, achievability, detail, and signal/noise ratio; their disadvantages are completeness, complexity, and implementation cost. This evaluation method can serve as a platform for reaching common consensus among stakeholders. As software hazard identification techniques evolve, the evaluation results could change; however, the insight into the techniques is much more important than the numbers obtained by the evaluation. (authors)
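    A minimal sketch of how such an evaluation platform could combine the proposed indexes (all scores and weights below are invented for illustration, not the paper's numbers): each technique combination is ranked by a stakeholder-weighted sum.

        indexes = ["dynamic capability", "completeness", "achievability", "detail",
                   "signal/noise ratio", "complexity", "implementation cost"]
        weights = [0.20, 0.20, 0.15, 0.15, 0.10, 0.10, 0.10]
        scores = {   # 1-5, higher is better; complexity and cost are pre-inverted
            "PHA+FMEA+FTA+Markov": [2, 4, 2, 3, 3, 3, 3],
            "DFM":                 [5, 3, 4, 4, 4, 2, 2],
            "simulation-based":    [5, 3, 4, 4, 4, 2, 2],
        }
        for combo, s in scores.items():
            total = sum(w * v for w, v in zip(weights, s))
            print(f"{combo}: {total:.2f}")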

  12. Advanced Techniques of Stress Analysis

    Directory of Open Access Journals (Sweden)

    Simion TATARU

    2013-12-01

    Full Text Available This article aims to check a stress analysis technique based on 3D models, making a comparison with the traditional technique which utilizes a model built directly in the stress analysis program. This comparison of the two methods is made with reference to the rear fuselage of the IAR-99 aircraft, a structure with a high degree of complexity which allows a meaningful evaluation of both approaches. Three updated databases are envisaged: the database with the idealized model obtained using ANSYS, working directly on documentation without automatic generation of nodes and elements (with few exceptions); the rear fuselage database (produced at this stage) obtained with Pro/ENGINEER; and the one obtained by using ANSYS with the second database. Then, each of the three databases will be used according to arising necessities. The main objective is to develop a parameterized model of the rear fuselage using the computer-aided design software Pro/ENGINEER. A review of research regarding the use of virtual reality with interactive analysis performed by the finite element method is made to show the state-of-the-art achieved in this field.

  13. Replacement Value - Representation of Fair Value in Accounting. Techniques and Modeling Suitable for the Income Based Approach

    OpenAIRE

    Manea Marinela – Daniela

    2011-01-01

    The term fair value is used throughout the international standards without detailed guidance on how to apply it. However, for specialized tangible assets, which are rarely sold, the standard IAS 16 "Property, Plant and Equipment" makes it possible to estimate fair value using an income approach or a depreciated replacement cost approach. The following material is intended to identify potential modeling of fair value as an income-based approach, appealing to techniques used by professional evalu...
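    In its simplest form, the income-based approach reduces to discounting an expected income stream to present value; a minimal sketch (the cash flows and discount rate are illustrative assumptions):

        def fair_value_income(cash_flows, rate):
            """Present value of expected future cash flows at the given discount rate."""
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

        print(round(fair_value_income([120.0, 120.0, 130.0, 140.0], rate=0.08), 2))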

  14. Higher Order Numerical Methods and Use of Estimation Techniques to Improve Modeling of Two-Phase Flow in Pipelines and Wells

    Energy Technology Data Exchange (ETDEWEB)

    Lorentzen, Rolf Johan

    2002-04-01

    The main objective of this thesis is to develop methods which can be used to improve predictions of two-phase flow (liquid and gas) in pipelines and wells. More reliable predictions are accomplished by improvements of numerical methods, and by using measured data to tune the mathematical model which describes the two-phase flow. We present a way to extend simple numerical methods to second order spatial accuracy. These methods are implemented, tested and compared with a second order Godunov-type scheme. In addition, a new (and faster) version of the Godunov-type scheme utilizing primitive (observable) variables is presented. We introduce a least squares method which is used to tune parameters embedded in the two-phase flow model. This method is tested using synthetic generated measurements. We also present an ensemble Kalman filter which is used to tune physical state variables and model parameters. This technique is tested on synthetic generated measurements, but also on several sets of full-scale experimental measurements. The thesis is divided into an introductory part, and a part consisting of four papers. The introduction serves both as a summary of the material treated in the papers, and as supplementary background material. It contains five sections, where the first gives an overview of the main topics which are addressed in the thesis. Section 2 contains a description and discussion of mathematical models for two-phase flow in pipelines. Section 3 deals with the numerical methods which are used to solve the equations arising from the two-phase flow model. The numerical scheme described in Section 3.5 is not included in the papers. This section includes results in addition to an outline of the numerical approach. Section 4 gives an introduction to estimation theory, and leads towards application of the two-phase flow model. The material in Sections 4.6 and 4.7 is not discussed in the papers, but is included in the thesis as it gives an important validation
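    A hedged sketch of the stochastic ensemble Kalman filter update used for tuning state variables and parameters (the dimensions, observation operator, and all numbers are illustrative assumptions): the Kalman gain is built from the ensemble covariance, and each member is updated with a perturbed observation.

        import numpy as np

        rng = np.random.default_rng(3)
        n_state, n_ens = 4, 50
        X = rng.normal(size=(n_state, n_ens))       # forecast ensemble
        H = np.zeros((1, n_state)); H[0, 0] = 1.0   # observe the first state only
        R = np.array([[0.1]])                       # observation error covariance
        y = np.array([0.8])                         # measurement

        A = X - X.mean(axis=1, keepdims=True)
        P = A @ A.T / (n_ens - 1)                   # ensemble covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

        Y = y[:, None] + rng.normal(scale=np.sqrt(R[0, 0]), size=(1, n_ens))
        X_a = X + K @ (Y - H @ X)                   # analysis ensemble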

  15. A new estimation technique of sovereign default risk

    Directory of Open Access Journals (Sweden)

    Mehmet Ali Soytaş

    2016-12-01

    Full Text Available Using the fixed-point theorem, sovereign default models are solved by numerical value function iteration and calibration methods, which, due to their computational constraints, greatly limit the models' quantitative performance and forgo their country-specific quantitative projection ability. By applying the Hotz-Miller estimation technique (Hotz and Miller, 1993), often used in the applied microeconometrics literature, to dynamic general equilibrium models of sovereign default, one can estimate the ex-ante default probability of economies, given structural parameter values obtained from country-specific business-cycle statistics and the relevant literature. With this technique we thus offer an alternative solution method for dynamic general equilibrium models of sovereign default that improves their quantitative inference ability.

  16. Dynamic p-technique for modeling patterns of data: applications to pediatric psychology research.

    Science.gov (United States)

    Nelson, Timothy D; Aylward, Brandon S; Rausch, Joseph R

    2011-10-01

    Dynamic p-technique (DPT) is a potentially useful statistical method for examining relationships among dynamic constructs in a single individual or small group of individuals over time. The purpose of this article is to offer a nontechnical introduction to DPT. An overview of DPT analysis, with an emphasis on potential applications to pediatric psychology research, is provided. To illustrate how DPT might be applied, an example using simulated data is presented for daily pain and negative mood ratings. The simulated example demonstrates the application of DPT to a relevant pediatric psychology research area. In addition, the potential application of DPT to the longitudinal study of adherence is presented. Although it has not been utilized frequently within pediatric psychology, DPT could be particularly well-suited for research in this field because of its ability to powerfully model repeated observations from very small samples.
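    A minimal sketch of the core data step in dynamic p-technique (the simulated series and lag-1 structure are illustrative assumptions): a single subject's daily ratings are stacked with their lagged copies, and current mood is modeled from current pain and the previous day's mood.

        import numpy as np

        rng = np.random.default_rng(4)
        n_days = 60
        pain = rng.normal(size=n_days)
        mood = 0.6 * pain + rng.normal(scale=0.5, size=n_days)

        # Lagged design matrix: [pain_t, mood_{t-1}, intercept]
        X = np.column_stack([pain[1:], mood[:-1], np.ones(n_days - 1)])
        beta, *_ = np.linalg.lstsq(X, mood[1:], rcond=None)
        print("pain_t, mood_{t-1}, intercept:", np.round(beta, 2))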

  17. Flow prediction models using macroclimatic variables and multivariate statistical techniques in the Cauca River Valley

    International Nuclear Information System (INIS)

    Carvajal Escobar Yesid; Munoz, Flor Matilde

    2007-01-01

    This project centres on a review of the state of the art of the ocean-atmospheric phenomena that affect Colombian hydrology, especially the ENSO phenomenon, which causes a first-order socioeconomic impact in our country and has not been sufficiently studied; it is therefore important to address this topic, including the macroclimatic variables associated with ENSO in water-planning analyses. The analyses include a review of statistical techniques for checking the consistency of hydrological data, with the objective of building a reliable and homogeneous database of monthly flows of the Cauca River. Multivariate statistical methods, specifically principal component analysis, are used in the development of models for predicting monthly mean flows in the Cauca River, involving linear approaches such as the autoregressive models AR, ARX and ARMAX, as well as the non-linear artificial neural network approach.

  18. Experimental methods and modeling techniques for description of cell population heterogeneity

    DEFF Research Database (Denmark)

    Lencastre Fernandes, Rita; Nierychlo, M.; Lundin, L.

    2011-01-01

    With the continuous development, in the last decades, of analytical techniques providing complex information at single cell level, the study of cell heterogeneity has been the focus of several research projects within analytical biotechnology. Nonetheless, the complex interplay between environmental...

  19. Review of air quality modeling techniques. Volume 8

    International Nuclear Information System (INIS)

    Rosen, L.C.

    1977-01-01

    Air transport and diffusion models applicable to the assessment of the environmental effects of nuclear, geothermal, and fossil-fuel electric generation are reviewed. The general classification of models and model inputs is discussed. A detailed examination of the statistical, Gaussian plume, Gaussian puff, one-box, and species-conservation-of-mass models is given. Representative models are discussed, with attention given to the assumptions, input data requirements, advantages, disadvantages, and applicability of each.
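    Of the model classes reviewed, the Gaussian plume model is compact enough to sketch directly (the emission rate, wind speed, stack height, and dispersion parameters below are illustrative): ground-level concentration downwind of a continuous elevated release, with a reflection term at the ground.

        import numpy as np

        def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
            """Concentration for emission rate Q (g/s), wind speed u (m/s),
            crosswind offset y, receptor height z, effective stack height H,
            and dispersion parameters sigma_y, sigma_z (all lengths in m)."""
            lateral = np.exp(-y**2 / (2 * sigma_y**2))
            vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                        + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
            return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        print(gaussian_plume(Q=100.0, u=5.0, y=0.0, z=0.0, H=50.0,
                             sigma_y=60.0, sigma_z=30.0))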

  20. Geostatistical and adjoint sensitivity techniques applied to a conceptual model of ground-water flow in the Paradox Basin, Utah

    International Nuclear Information System (INIS)

    Metcalfe, D.E.; Campbell, J.E.; RamaRao, B.S.; Harper, W.V.; Battelle Project Management Div., Columbus, OH)

    1985-01-01

    Sensitivity and uncertainty analysis are important components of performance assessment activities for potential high-level radioactive waste repositories. The application of geostatistical and adjoint sensitivity techniques to aid in the calibration of an existing conceptual model of ground-water flow is demonstrated for the Leadville Limestone in Paradox Basin, Utah. The geostatistical method called kriging is used to statistically analyze the measured potentiometric data for the Leadville. This analysis consists of identifying anomalous data and data trends and characterizing the correlation structure between data points. Adjoint sensitivity analysis is then performed to aid in the calibration of a conceptual model of ground-water flow to the Leadville measured potentiometric data. Sensitivity derivatives of the fit between the modeled Leadville potentiometric surface and the measured potentiometric data to model parameters and boundary conditions are calculated by the adjoint method. These sensitivity derivatives are used to determine which model parameter and boundary condition values should be modified to most efficiently improve the fit of modeled to measured potentiometric conditions

  1. Wave propagation in fluids models and numerical techniques

    CERN Document Server

    Guinot, Vincent

    2012-01-01

    This second edition with four additional chapters presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. The application domains vary including contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite

  2. Optimization of the design of thick, segmented scintillators for megavoltage cone-beam CT using a novel, hybrid modeling technique

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Langechuan; Antonuk, Larry E., E-mail: antonuk@umich.edu; El-Mohri, Youcef; Zhao, Qihua; Jiang, Hao [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109 (United States)

    2014-06-15

    Purpose: Active matrix flat-panel imagers (AMFPIs) incorporating thick, segmented scintillators have demonstrated order-of-magnitude improvements in detective quantum efficiency (DQE) at radiotherapy energies compared to systems based on conventional phosphor screens. Such improved DQE values facilitate megavoltage cone-beam CT (MV CBCT) imaging at clinically practical doses. However, the MV CBCT performance of such AMFPIs is highly dependent on the design parameters of the scintillators. In this paper, optimization of the design of segmented scintillators was explored using a hybrid modeling technique which encompasses both radiation and optical effects. Methods: Imaging performance in terms of the contrast-to-noise ratio (CNR) and spatial resolution of various hypothetical scintillator designs was examined through a hybrid technique involving Monte Carlo simulation of radiation transport in combination with simulation of optical gain distributions and optical point spread functions. The optical simulations employed optical parameters extracted from a best fit to measurement results reported in a previous investigation of a 1.13 cm thick, 1016μm pitch prototype BGO segmented scintillator. All hypothetical designs employed BGO material with a thickness and element-to-element pitch ranging from 0.5 to 6 cm and from 0.508 to 1.524 mm, respectively. In the CNR study, for each design, full tomographic scans of a contrast phantom incorporating various soft-tissue inserts were simulated at a total dose of 4 cGy. Results: Theoretical values for contrast, noise, and CNR were found to be in close agreement with empirical results from the BGO prototype, strongly supporting the validity of the modeling technique. CNR and spatial resolution for the various scintillator designs demonstrate complex behavior as scintillator thickness and element pitch are varied—with a clear trade-off between these two imaging metrics up to a thickness of ∼3 cm. Based on these results, an

  3. Improving the performance of streamflow forecasting model using data-preprocessing technique in Dungun River Basin

    Science.gov (United States)

    Khai Tiu, Ervin Shan; Huang, Yuk Feng; Ling, Lloyd

    2018-03-01

    An accurate streamflow forecasting model is important for the development of a flood mitigation plan to ensure sustainable development of a river basin. This study adopted the Variational Mode Decomposition (VMD) data-preprocessing technique to process and denoise the rainfall data before feeding it into the Support Vector Machine (SVM) streamflow forecasting model, in order to improve the performance of the selected model. Rainfall data and river water level data for the period 1996-2016 were used for this purpose. Homogeneity tests (the Standard Normal Homogeneity Test, the Buishand Range Test, the Pettitt Test and the Von Neumann Ratio Test) and normality tests (the Shapiro-Wilk Test, Anderson-Darling Test, Lilliefors Test and Jarque-Bera Test) were carried out on the rainfall series. The data were found to be homogeneous and non-normally distributed at all stations, respectively. From the recorded rainfall data, it was observed that the Dungun River Basin receives higher monthly rainfall from November to February, during the Northeast Monsoon. Thus, the monthly and seasonal rainfall series of this monsoon were the main focus of this research, as floods usually happen during the Northeast Monsoon period. The water levels predicted by the SVM model were assessed against the observed water levels using non-parametric statistical tests (the Biased Method, Kendall's Tau B Test and Spearman's Rho Test).

  4. Numerical and physical testing of upscaling techniques for constitutive properties

    International Nuclear Information System (INIS)

    McKenna, S.A.; Tidwell, V.C.

    1995-01-01

    This paper evaluates upscaling techniques for hydraulic conductivity measurements based on their accuracy and practicality for implementation in evaluating the performance of the potential repository at Yucca Mountain. Analytical and numerical techniques are compared to one another, to the results of physical upscaling experiments, and to the results obtained on the original domain. The results from different scaling techniques are then compared to the case where unscaled point-scale statistics are used to generate realizations directly at the flow-model grid-block scale. Initial results indicate that analytical techniques provide a practical means of upscaling constitutive properties from the point measurement scale to the flow-model grid-block scale. However, no single analytical technique proves to be adequate for all situations. Numerical techniques are also accurate, but they are time intensive and their accuracy is dependent on knowledge of the local flow regime at every grid block.
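    Classical analytical results give quick bounds of the kind such comparisons rely on; a minimal sketch (the lognormal point values are illustrative): the harmonic mean (flow across layers) and the arithmetic mean (flow along layers) bound the effective grid-block conductivity, with the geometric mean a common estimate for 2-D isotropic media.

        import numpy as np

        k = np.random.default_rng(5).lognormal(mean=-2.0, sigma=1.0, size=1000)

        harmonic = len(k) / np.sum(1.0 / k)      # lower bound
        geometric = np.exp(np.mean(np.log(k)))   # common 2-D estimate
        arithmetic = np.mean(k)                  # upper bound
        print(f"{harmonic:.4f} <= {geometric:.4f} <= {arithmetic:.4f}")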

  5. Data mining techniques for thermophysical properties of refrigerants

    International Nuclear Information System (INIS)

    Kuecueksille, Ecir Ugur; Selbas, Resat; Sencan, Arzu

    2009-01-01

    This study presents ten modeling techniques within the data mining process for the prediction of thermophysical properties of refrigerants (R134a, R404a, R407c and R410a). These are the linear regression (LR), multilayer perceptron (MLP), pace regression (PR), simple linear regression (SLR), sequential minimal optimization (SMO), KStar, additive regression (AR), M5 model tree, decision table (DT), and M5'Rules models. Relations depending on temperature and pressure were derived for the determination of thermophysical properties such as the specific heat capacity, viscosity, heat conduction coefficient, and density of the refrigerants. The model results obtained for each refrigerant were compared and the best model was identified. Results indicate that use of the formulations derived from these techniques will facilitate the design and optimization of heat exchangers, which are components of vapor compression refrigeration systems in particular.

  6. Rapid analysis of adulterations in Chinese lotus root powder (LRP) by near-infrared (NIR) spectroscopy coupled with chemometric class modeling techniques.

    Science.gov (United States)

    Xu, Lu; Shi, Peng-Tao; Ye, Zi-Hong; Yan, Si-Min; Yu, Xiao-Ping

    2013-12-01

    This paper develops a rapid analysis method for adulteration identification of a popular traditional Chinese food, lotus root powder (LRP), by near-infrared spectroscopy and chemometrics. 85 pure LRP samples were collected from 7 main lotus producing areas of China to include most if not all of the significant variations likely to be encountered in unknown authentic materials. To evaluate the model specificity, 80 adulterated LRP samples prepared by blending pure LRP with different levels of four cheaper and commonly used starches were measured and predicted. For multivariate quality models, two class modeling methods, the traditional soft independent modeling of class analogy (SIMCA) and a recently proposed partial least squares class model (PLSCM) were used. Different data preprocessing techniques, including smoothing, taking derivative and standard normal variate (SNV) transformation were used to improve the classification performance. The results indicate that smoothing, taking second-order derivatives and SNV can improve the class models by enhancing signal-to-noise ratio, reducing baseline and background shifts. The most accurate and stable models were obtained with SNV spectra for both SIMCA (sensitivity 0.909 and specificity 0.938) and PLSCM (sensitivity 0.909 and specificity 0.925). Moreover, both SIMCA and PLSCM could detect LRP samples mixed with 5% (w/w) or more other cheaper starches, including cassava, sweet potato, potato and maize starches. Although it is difficult to perform an exhaustive collection of all pure LRP samples and possible adulterations, NIR spectrometry combined with class modeling techniques provides a reliable and effective method to detect most of the current LRP adulterations in Chinese market. Copyright © 2013 Elsevier Ltd. All rights reserved.
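    The standard normal variate (SNV) transform that gave the best class models is one line of numpy; a minimal sketch (the random matrix simply stands in for NIR spectra): each spectrum is centred and scaled by its own mean and standard deviation, suppressing baseline offsets and multiplicative scatter.

        import numpy as np

        def snv(spectra):
            """spectra: (n_samples, n_wavelengths) array of absorbances."""
            mean = spectra.mean(axis=1, keepdims=True)
            std = spectra.std(axis=1, keepdims=True)
            return (spectra - mean) / std

        X = np.random.default_rng(6).normal(loc=1.0, scale=0.2, size=(85, 700))
        X_snv = snv(X)   # every row now has zero mean and unit standard deviation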

  7. Modeling FWM and impairments aware amplifiers placement technique for an optical MAN/WAN: Inline amplifiers case

    Science.gov (United States)

    Singh, Gurpreet; Singh, Maninder Lal

    2015-08-01

    A new four-wave mixing (FWM) model for an optical network with amplifiers, and a comparative analysis of three proposed amplifier placement techniques, are presented in this paper. The FWM model is validated against experimentally measured data. The novelty of this model is that, on direct substitution of network parameters such as length, it works even for unequal inter-amplifier separations. The novelty of the analysis of the three schemes is that it offers a fair choice of amplifier placement methods for varied total system lengths. The appropriateness of these three schemes has been analyzed on the basis of critical system length, critical number of amplifiers, and critical bit error rate (10-9) in the presence of four-wave mixing (FWM) and amplified spontaneous emission (ASE) noise. The analysis is illustrated with the example of a regenerative metropolitan area network (MAN). The results suggest that the decreasing fiber section scheme should be avoided for amplifier placement, while the IUFS and EFS schemes prove advantageous interchangeably for different sets of parameters.

  8. Outcomes of laryngohyoid suspension techniques in an ovine model of profound oropharyngeal dysphagia.

    Science.gov (United States)

    Johnson, Christopher M; Venkatesan, Naren N; Siddiqui, M Tausif; Cates, Daniel J; Kuhn, Maggie A; Postma, Gregory M; Belafsky, Peter C

    2017-12-01

    To evaluate the efficacy of various techniques of laryngohyoid suspension in the elimination of aspiration utilizing a cadaveric ovine model of profound oropharyngeal dysphagia. Animal study. The head and neck of a Dorper cross ewe was placed in the lateral fluoroscopic view. Five conditions were tested: baseline, thyroid cartilage to hyoid approximation (THA), thyroid cartilage to hyoid to mandible (laryngohyoid) suspension (LHS), LHS with cricopharyngeus muscle myotomy (LHS-CPM), and cricopharyngeus muscle myotomy (CPM) alone. Five 20-mL trials of barium sulfate were delivered into the oropharynx under fluoroscopy for each condition. Outcome measures included the penetration aspiration scale (PAS) and the National Institutes of Health (NIH) Swallow Safety Scale (NIH-SSS). Median baseline PAS and NIH-SSS scores were 8 and 6, respectively, indicating severe impairment. THA scores were not improved from baseline. LHS alone reduced the PAS to 1 (P = .025) and NIH-SSS to 2 (P = .025) from baseline. LHS-CPM reduced the PAS to 1 (P = .025) and NIH-SSS to 0 (P = .025) from baseline. CPM alone did not improve scores. LHS-CPM displayed improved NIH-SSS over LHS alone (P = .003). This cadaveric model represents end-stage profound oropharyngeal dysphagia such as what could result from severe neurological insult. CPM alone failed to improve fluoroscopic outcomes in this model. Thyrohyoid approximation also failed to improve outcomes. LHS significantly improved both PAS and NIH-SSS. The addition of CPM to LHS resulted in improvement over suspension alone. NA. Laryngoscope, 127:E422-E427, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  9. A two-system, single-analysis, fluid-structure interaction technique for modelling abdominal aortic aneurysms.

    Science.gov (United States)

    Kelly, S C; O'Rourke, M J

    2010-01-01

    This work reports on the implementation and validation of a two-system, single-analysis, fluid-structure interaction (FSI) technique that uses the finite volume (FV) method for performing simulations on abdominal aortic aneurysm (AAA) geometries. This FSI technique, which was implemented in OpenFOAM, included fluid and solid mesh motion and incorporated a non-linear material model to represent AAA tissue. Fully implicit coupling was implemented, ensuring that both the fluid and solid domains reached convergence within each time step. The fluid and solid parts of the FSI code were validated independently through comparison with experimental data, before performing a complete FSI simulation on an idealized AAA geometry. Results from the FSI simulation showed that a vortex formed at the proximal end of the aneurysm during systolic acceleration, and moved towards the distal end of the aneurysm during diastole. Wall shear stress (WSS) values were found to peak at both the proximal and distal ends of the aneurysm and remain low along the centre of the aneurysm. The maximum von Mises stress in the aneurysm wall was found to be 408kPa, and this occurred at the proximal end of the aneurysm, while the maximum displacement of 2.31 mm occurred in the centre of the aneurysm. These results were found to be consistent with results from other FSI studies in the literature.

  10. Simulation tools for industrial applications of phased array inspection techniques

    International Nuclear Information System (INIS)

    Mahaut, St.; Roy, O.; Chatillon, S.; Calmon, P.

    2001-01-01

    Ultrasonic phased array techniques have been developed at the French Atomic Energy Commission in order to improve defect characterization and adaptability to various inspection configurations (complex-geometry specimens). Such transducers allow 'standard' techniques - adjustable beam-steering and focusing - or more 'advanced' techniques - self-focusing on defects, for instance. To estimate the performances of these techniques, models have been developed which allow computation of the ultrasonic field radiated by an arbitrary phased array transducer through any complex specimen, and prediction of the ultrasonic response of various defects inspected with a known beam. Both modeling applications are gathered in the CIVA software, dedicated to NDT expertise. The use of these complementary models allows evaluation of the ability of a phased array to steer and focus the ultrasonic beam, and therefore of its relevancy for detecting and characterizing defects. These models are specifically developed to give accurate solutions for realistic inspection applications. This paper briefly describes the CIVA models and presents some applications dedicated to the inspection of complex specimens containing various defects with a phased array used to steer and focus the beam. Defect detection and characterization performances are discussed for the various configurations. Some experimental validations of both models are also presented. (authors)
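    A hedged sketch of a basic focal law behind such beam-steering and focusing (the array geometry, sound speed, and focal point are illustrative assumptions, not the CIVA computation): each element is delayed so that all wavefronts arrive at the focal point simultaneously.

        import numpy as np

        c = 5900.0                                  # longitudinal wave speed in steel, m/s
        pitch = 0.6e-3                              # element pitch, m
        elements = (np.arange(16) - 7.5) * pitch    # element x-positions, m
        focus_x, focus_z = 5e-3, 30e-3              # focal point, m

        dist = np.hypot(elements - focus_x, focus_z)   # element-to-focus distances
        delays = (dist.max() - dist) / c               # fire the farthest elements first
        print(np.round(delays * 1e9))                  # firing delays in nanoseconds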

  11. Quantitative assessments of distributed systems methodologies and techniques

    CERN Document Server

    Bruneo, Dario

    2015-01-01

    Distributed systems employed in critical infrastructures must fulfill dependability, timeliness, and performance specifications. Since these systems most often operate in an unpredictable environment, their design and maintenance require quantitative evaluation of deterministic and probabilistic timed models. This need gave birth to an abundant literature devoted to formal modeling languages combined with analytical and simulative solution techniques. The aim of the book is to provide an overview of techniques and methodologies dealing with such specific issues in the context of distributed

  12. Modeling rainfall-runoff process using soft computing techniques

    Science.gov (United States)

    Kisi, Ozgur; Shiri, Jalal; Tombul, Mustafa

    2013-02-01

    The rainfall-runoff process was modeled for a small catchment in Turkey, using 4 years (1987-1991) of measurements of the independent variables of rainfall and runoff values. The models used in the study were Artificial Neural Networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Gene Expression Programming (GEP), which are Artificial Intelligence (AI) approaches. The applied models were trained and tested using various combinations of the independent variables. The goodness of fit for the models was evaluated in terms of the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), coefficient of efficiency (CE) and scatter index (SI). A comparison was also made between these models and the traditional Multiple Linear Regression (MLR) model. The study provides evidence that GEP (with RMSE=17.82 l/s, MAE=6.61 l/s, CE=0.72 and R2=0.978) is capable of modeling the rainfall-runoff process and is a viable alternative to the other applied artificial intelligence and MLR time-series methods.
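    The goodness-of-fit measures used in the study are easy to state exactly; a minimal sketch (obs and sim are illustrative series, not the catchment data):

        import numpy as np

        def metrics(obs, sim):
            err = sim - obs
            rmse = np.sqrt(np.mean(err**2))
            mae = np.mean(np.abs(err))
            ce = 1 - np.sum(err**2) / np.sum((obs - obs.mean())**2)  # Nash-Sutcliffe CE
            r2 = np.corrcoef(obs, sim)[0, 1] ** 2
            si = rmse / obs.mean()                                   # scatter index
            return rmse, mae, ce, r2, si

        obs = np.array([10.0, 14.0, 9.0, 20.0, 16.0])
        sim = np.array([11.0, 13.5, 10.0, 18.0, 17.0])
        print([f"{m:.3f}" for m in metrics(obs, sim)])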

  13. Reliability analysis techniques in power plant design

    International Nuclear Information System (INIS)

    Chang, N.E.

    1981-01-01

    An overview of reliability analysis techniques is presented as applied to power plant design. The key terms, power plant performance, reliability, availability and maintainability are defined. Reliability modeling, methods of analysis and component reliability data are briefly reviewed. Application of reliability analysis techniques from a design engineering approach to improving power plant productivity is discussed. (author)
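    The key terms relate through simple identities; a minimal sketch (the MTBF, MTTR, and component reliabilities are illustrative numbers): steady-state availability from mean time between failures and mean time to repair, and series-system reliability as a product of component reliabilities.

        from math import prod

        def availability(mtbf_h, mttr_h):
            return mtbf_h / (mtbf_h + mttr_h)

        def series_reliability(component_reliabilities):
            return prod(component_reliabilities)   # a series system needs every component

        print(round(availability(mtbf_h=2000.0, mttr_h=24.0), 4))   # 0.9881
        print(round(series_reliability([0.99, 0.97, 0.995]), 4))    # 0.9555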

  14. NEW TECHNIQUES APPLIED IN ECONOMICS. ARTIFICIAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Constantin Ilie

    2009-05-01

    Full Text Available The present paper aims to inform the public about the use of new techniques for the modeling, simulation and forecasting of systems from different fields of activity. One of these techniques is the Artificial Neural Network, one of the artificial in

  15. Experimental Investigation of Aeroelastic Deformation of Slender Wings at Supersonic Speeds Using a Video Model Deformation Measurement Technique

    Science.gov (United States)

    Erickson, Gary E.

    2013-01-01

    A video-based photogrammetric model deformation system was established as a dedicated optical measurement technique at supersonic speeds in the NASA Langley Research Center Unitary Plan Wind Tunnel. This system was used to measure the wing twist due to aerodynamic loads of two supersonic commercial transport airplane models with identical outer mold lines but different aeroelastic properties. One model featured wings with deflectable leading- and trailing-edge flaps and internal channels to accommodate static pressure tube instrumentation. The wings of the second model were of single-piece construction without flaps or internal channels. The testing was performed at Mach numbers from 1.6 to 2.7, unit Reynolds numbers of 1.0 million to 5.0 million, and angles of attack from -4 degrees to +10 degrees. The video model deformation system quantified the wing aeroelastic response to changes in the Mach number, Reynolds number concurrent with dynamic pressure, and angle of attack and effectively captured the differences in the wing twist characteristics between the two test articles.

  16. Development of a morphology-based modeling technique for tracking solid-body displacements: examining the reliability of a potential MRI-only approach for joint kinematics assessment

    International Nuclear Information System (INIS)

    Mahato, Niladri K.; Montuelle, Stephane; Cotton, John; Williams, Susan; Thomas, James; Clark, Brian

    2016-01-01

    Single or biplanar video radiography and Roentgen stereophotogrammetry (RSA) techniques used for the assessment of in-vivo joint kinematics involve the application of ionizing radiation, which is a limitation for clinical research involving human subjects. To overcome this limitation, our long-term goal is to develop a magnetic resonance imaging (MRI)-only, three-dimensional (3-D) modeling technique that permits dynamic imaging of joint motion in humans. Here, we present our initial findings, as well as reliability data, for an MRI-only protocol and modeling technique. We developed a morphology-based motion-analysis technique that uses MRI of custom-built solid-body objects to animate and quantify experimental displacements between them. The technique involved four major steps. First, the imaging volume was calibrated using a custom-built grid. Second, 3-D models were segmented from axial scans of two custom-built solid-body cubes. Third, these cubes were positioned at pre-determined relative displacements (translation and rotation) in the magnetic resonance coil and scanned with T1 and fast contrast-enhanced pulse sequences. The digital imaging and communications in medicine (DICOM) images were then processed for animation. The fourth step involved importing these processed images into animation software, where they were displayed as background scenes. In the same step, 3-D models of the cubes were imported into the animation software, where the user manipulated the models to match their outlines in the scene (rotoscoping) and registered the models into an anatomical joint system. Measurements of displacements obtained from two different rotoscoping sessions were tested for reliability using coefficients of variation (CV), intraclass correlation coefficients (ICC), Bland-Altman plots, and Limits of Agreement analyses. Between-session reliability was high for both the T1 and the contrast-enhanced sequences. Specifically, the average CVs for translation were 4

  17. Development of a morphology-based modeling technique for tracking solid-body displacements: examining the reliability of a potential MRI-only approach for joint kinematics assessment.

    Science.gov (United States)

    Mahato, Niladri K; Montuelle, Stephane; Cotton, John; Williams, Susan; Thomas, James; Clark, Brian

    2016-05-18

    Single or biplanar video radiography and Roentgen stereophotogrammetry (RSA) techniques used for the assessment of in-vivo joint kinematics involve the application of ionizing radiation, which is a limitation for clinical research involving human subjects. To overcome this limitation, our long-term goal is to develop a magnetic resonance imaging (MRI)-only, three-dimensional (3-D) modeling technique that permits dynamic imaging of joint motion in humans. Here, we present our initial findings, as well as reliability data, for an MRI-only protocol and modeling technique. We developed a morphology-based motion-analysis technique that uses MRI of custom-built solid-body objects to animate and quantify experimental displacements between them. The technique involved four major steps. First, the imaging volume was calibrated using a custom-built grid. Second, 3-D models were segmented from axial scans of two custom-built solid-body cubes. Third, these cubes were positioned at pre-determined relative displacements (translation and rotation) in the magnetic resonance coil and scanned with T1 and fast contrast-enhanced pulse sequences. The digital imaging and communications in medicine (DICOM) images were then processed for animation. The fourth step involved importing these processed images into animation software, where they were displayed as background scenes. In the same step, 3-D models of the cubes were imported into the animation software, where the user manipulated the models to match their outlines in the scene (rotoscoping) and registered the models into an anatomical joint system. Measurements of displacements obtained from two different rotoscoping sessions were tested for reliability using coefficients of variation (CV), intraclass correlation coefficients (ICC), Bland-Altman plots, and Limits of Agreement analyses. Between-session reliability was high for both the T1 and the contrast-enhanced sequences. Specifically, the average CVs for translation were 4

  18. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented

  19. Techniques for asynchronous and periodically-synchronous coupling of atmosphere and ocean models. Pt. 1. General strategy and application to the cyclo-stationary case

    Energy Technology Data Exchange (ETDEWEB)

    Sausen, R [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Wessling (Germany). Inst. fuer Physik der Atmosphaere; Voss, R [Deutsches Klimarechenzentrum (DKRZ), Hamburg (Germany)

    1995-07-01

    Asynchronous and periodically-synchronous schemes for coupling atmosphere and ocean models are presented. The performance of the schemes is tested by simulating the climatic response to a step-function forcing and to a gradually increasing forcing with a simple zero-dimensional non-linear energy balance model. Both the initial transient response and the asymptotic approach to the equilibrium state are studied. If no annual cycle is allowed, the asynchronous coupling technique proves to be a suitable tool. However, if the annual cycle is retained, the periodically-synchronous coupling technique reproduces the results of the synchronously coupled runs with smaller bias. In this case it is important that the total length of one synchronous period plus one ocean-only period is not a multiple of 6 months. (orig.)
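    A hedged sketch of the kind of zero-dimensional energy balance model used as the test bed (a linear variant with illustrative parameter values, for brevity; the paper's model is non-linear), integrated synchronously under a step forcing:

        C = 4.0e8      # effective heat capacity, J m^-2 K^-1
        lam = 1.3      # climate feedback parameter, W m^-2 K^-1
        F = 3.7        # step forcing, W m^-2
        dt = 86400.0   # one-day time step, s

        T = 0.0
        for _ in range(100 * 365):                 # integrate 100 years
            T += dt * (F - lam * T) / C            # C dT/dt = F - lam * T
        print(round(T, 2), "K; equilibrium =", round(F / lam, 2), "K")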

  20. Dynamic P-Technique for Modeling Patterns of Data: Applications to Pediatric Psychology Research

    Science.gov (United States)

    Aylward, Brandon S.; Rausch, Joseph R.

    2011-01-01

    Objective Dynamic p-technique (DPT) is a potentially useful statistical method for examining relationships among dynamic constructs in a single individual or small group of individuals over time. The purpose of this article is to offer a nontechnical introduction to DPT. Method An overview of DPT analysis, with an emphasis on potential applications to pediatric psychology research, is provided. To illustrate how DPT might be applied, an example using simulated data is presented for daily pain and negative mood ratings. Results The simulated example demonstrates the application of DPT to a relevant pediatric psychology research area. In addition, the potential application of DPT to the longitudinal study of adherence is presented. Conclusion Although it has not been utilized frequently within pediatric psychology, DPT could be particularly well-suited for research in this field because of its ability to powerfully model repeated observations from very small samples. PMID:21486938