WorldWideScience

Sample records for modeling technique sadmt

  1. SAGEN (SADMT (Strategic Defense Initiative Architecture Dataflow Modeling Technique) Generator) User’s Guide, Version 1.5.

    Science.gov (United States)

    1988-04-01

  2. A Simple Example of an SADMT (SDI (Strategic Defense Initiative) Architecture Dataflow Modeling Technique) Architecture Specification, Version 1.5.

    Science.gov (United States)

    1988-04-21

  3. Communication Analysis modelling techniques

    CERN Document Server

    España, Sergio; Pastor, Óscar; Ruiz, Marcela

    2012-01-01

    This report describes and illustrates several modelling techniques proposed by Communication Analysis; namely Communicative Event Diagram, Message Structures and Event Specification Templates. The Communicative Event Diagram is a business process modelling technique that adopts a communicational perspective by focusing on communicative interactions when describing the organizational work practice, instead of focusing on physical activities; at this abstraction level, we refer to business activities as communicative events. Message Structures is a technique based on structured text that allows specifying the messages associated with communicative events. Event Specification Templates are a means to organise the requirements concerning a communicative event. This report can be useful to analysts and business process modellers in general, since, according to our industrial experience, it is possible to apply many Communication Analysis concepts, guidelines and criteria to other business process modelling notation...

  4. Data flow modeling techniques

    Science.gov (United States)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
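
    A minimal sketch of the firing rule such data flow models rest on: a node fires when every input arc holds a token, consuming one token per input and producing a result token on each output arc. The node set, arc names and operations below are invented for illustration, not taken from the paper.

```python
class Node:
    def __init__(self, name, inputs, outputs, op):
        self.name, self.inputs, self.outputs, self.op = name, inputs, outputs, op

def simulate(nodes, tokens):
    """Fire nodes until quiescence; tokens maps arc name -> queued values."""
    fired = True
    while fired:
        fired = False
        for node in nodes:
            if all(tokens.get(arc) for arc in node.inputs):   # firing rule
                args = [tokens[arc].pop(0) for arc in node.inputs]
                result = node.op(*args)
                for arc in node.outputs:
                    tokens.setdefault(arc, []).append(result)
                print(f"fired {node.name} -> {result}")
                fired = True

# Two adders feeding a multiplier: (1 + 2) * (3 + 4)
nodes = [
    Node("add1", ["a", "b"], ["s1"], lambda x, y: x + y),
    Node("add2", ["c", "d"], ["s2"], lambda x, y: x + y),
    Node("mul", ["s1", "s2"], ["out"], lambda x, y: x * y),
]
simulate(nodes, {"a": [1], "b": [2], "c": [3], "d": [4]})
```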

  5. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    ""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode

  6. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programing languages was attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state-of-the-art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  8. Model building techniques for analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Walther, Howard P.; McDaniel, Karen Lynn; Keener, Donald; Cordova, Theresa Elena; Henry, Ronald C.; Brooks, Sean; Martin, Wilbur D.

    2009-09-01

    The practice of mechanical engineering for product development has evolved into a complex activity that requires a team of specialists for success. Sandia National Laboratories (SNL) has product engineers, mechanical designers, design engineers, manufacturing engineers, mechanical analysts and experimentalists, qualification engineers, and others that contribute through product realization teams to develop new mechanical hardware. The goal of SNL's Design Group is to change product development by enabling design teams to collaborate within a virtual model-based environment whereby analysis is used to guide design decisions. Computer-aided design (CAD) models using PTC's Pro/ENGINEER software tools are heavily relied upon in the product definition stage of parts and assemblies at SNL. The three-dimensional CAD solid model acts as the design solid model that is filled with all of the detailed design definition needed to manufacture the parts. Analysis is an important part of the product development process. The CAD design solid model (DSM) is the foundation for the creation of the analysis solid model (ASM). Creating an ASM from the DSM currently is a time-consuming effort; the turnaround time for results of a design needs to be decreased to have an impact on the overall product development. This effort can be decreased immensely through simple Pro/ENGINEER modeling techniques that come down to how features are created in a part model. This document contains recommended modeling techniques that increase the efficiency of the creation of the ASM from the DSM.

  9. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    There are many business process modeling techniques in use today. This article reports research on the differences among business process modeling techniques. For each technique, the definition and the structure are explained. This paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how each technique works when implemented for Somerleyton Animal Park. Each technique is summarized with its advantages and disadvantages. The final conclusion recommends the business process modeling techniques that are easiest to use and serves as the basis for evaluating further modelling techniques.

  10. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new...

  11. Selected Logistics Models and Techniques.

    Science.gov (United States)

    1984-09-01

    ACCESS PROCEDURE: On-Line System (OLS), UNINET. RCA maintains proprietary control of this model, and the model is available only through a lease arrangement. SPONSOR: ASD/ACCC

  12. Modeling Techniques: Theory and Practice

    OpenAIRE

    Odd A. Asbjørnsen

    1985-01-01

    A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify the homogeneous reactor modeling to a large extent by an orthogonal decomposition of the pro...

  13. Modeling Techniques: Theory and Practice

    Directory of Open Access Journals (Sweden)

    Odd A. Asbjørnsen

    1985-07-01

    Full Text Available: A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify homogeneous reactor modeling to a large extent by an orthogonal decomposition of the process variables. This allows residence time distribution function parameters to be estimated with the reaction in situ, but without any correlation between the estimated residence time distribution parameters and the estimated reaction kinetic parameters. A general word of warning is given against choosing the wrong mathematical structure for a model.
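
    The orthogonal decomposition referred to above can be made concrete with a little linear algebra: for balance equations dc/dt = N^T r(c), any vector w in the null space of the stoichiometric matrix N yields a reaction-invariant combination w.c that is conserved whatever the kinetics. A small numpy sketch, with an A -> B -> C network of my own choosing rather than one from the paper:

```python
import numpy as np
from scipy.linalg import null_space

# Reactions A -> B and B -> C; rows = reactions, columns = species (A, B, C).
N = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])

W = null_space(N)                      # basis of reaction-invariant directions
print("invariant direction(s):\n", W)  # proportional to (1, 1, 1): total mass

r = np.array([0.3, 0.7])               # arbitrary reaction rates r(c)
dcdt = N.T @ r                         # species balances dc/dt = N^T r
print("d(w.c)/dt =", W.T @ dcdt)       # ~0 regardless of the kinetics
```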

  14. Model checking timed automata : techniques and applications

    NARCIS (Netherlands)

    Hendriks, Martijn.

    2006-01-01

    Model checking is a technique to automatically analyse systems that have been modeled in a formal language. The timed automaton framework is such a formal language. It is suitable to model many realistic problems in which time plays a central role. Examples are distributed algorithms, protocols, embedded systems...

  15. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines, and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  16. Using Visualization Techniques in Multilayer Traffic Modeling

    Science.gov (United States)

    Bragg, Arnold

    We describe visualization techniques for multilayer traffic modeling - i.e., traffic models that span several protocol layers, and traffic models of protocols that cross layers. Multilayer traffic modeling is challenging, as one must deal with disparate traffic sources; control loops; the effects of network elements such as IP routers; cross-layer protocols; asymmetries in bandwidth, session lengths, and application behaviors; and an enormous number of complex interactions among the various factors. We illustrate by using visualization techniques to identify relationships, transformations, and scaling; to smooth simulation and measurement data; to examine boundary cases, subtle effects and interactions, and outliers; to fit models; and to compare models with others that have fewer parameters. Our experience suggests that visualization techniques can provide practitioners with extraordinary insight about complex multilayer traffic effects and interactions that are common in emerging next-generation networks.
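
    As a concrete instance of the smoothing step mentioned above, the sketch below runs a rolling-median filter over a synthetic bursty traffic trace before plotting; the diurnal baseline, heavy-tailed burst model and 31-minute window are my assumptions, not the paper's data.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
t = np.arange(1440)                                  # one day, 1-min bins
rate = 100 + 60 * np.sin(2 * np.pi * t / 1440)       # diurnal baseline
trace = rng.poisson(rate) + rng.pareto(2.5, t.size) * 20  # heavy-tailed bursts

def rolling_median(x, w):
    """Median filter with edge padding; robust to bursty outliers."""
    pad = np.pad(x, w // 2, mode="edge")
    return np.array([np.median(pad[i:i + w]) for i in range(x.size)])

plt.plot(t, trace, alpha=0.3, label="measured")
plt.plot(t, rolling_median(trace, 31), label="rolling median (31 min)")
plt.xlabel("minute of day")
plt.ylabel("packets/min")
plt.legend()
plt.show()
```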

  17. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
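
    A toy version of the proposed test loop, assuming a deliberately simple monthly energy model (base load plus heating slope times degree-days) and least-squares tuning as the calibration technique under test; none of this is the authors' actual tooling. The three printed quantities mirror the paper's three figures of merit.

```python
import numpy as np
from scipy.optimize import least_squares

def energy_model(params, weather):
    """Toy monthly model: base load + heating slope * degree-days."""
    base, slope = params
    return base + slope * weather

rng = np.random.default_rng(0)
weather = rng.uniform(100, 800, size=12)        # monthly degree-days
true_params = np.array([500.0, 2.0])            # the known "truth"
bills = energy_model(true_params, weather)      # surrogate utility data

# Calibration technique under test: least-squares tuning of the inputs.
fit = least_squares(lambda p: energy_model(p, weather) - bills, x0=[300.0, 1.0])

def savings(params):
    """Hypothetical retrofit measure: insulation halves the heating slope."""
    retro = np.array(params, dtype=float)
    retro[1] *= 0.5
    return np.sum(energy_model(params, weather) - energy_model(retro, weather))

print("1) savings error:", savings(fit.x) - savings(true_params))
print("2) input error  :", fit.x - true_params)
print("3) bill RMSE    :", np.sqrt(np.mean(fit.fun ** 2)))
```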

  18. Research Techniques Made Simple: Skin Carcinogenesis Models: Xenotransplantation Techniques.

    Science.gov (United States)

    Mollo, Maria Rosaria; Antonini, Dario; Cirillo, Luisa; Missero, Caterina

    2016-02-01

    Xenotransplantation is a widely used technique to test the tumorigenic potential of human cells in vivo using immunodeficient mice. Here we describe basic technologies and recent advances in xenotransplantation applied to study squamous cell carcinomas (SCCs) of the skin. SCC cells isolated from tumors can either be cultured to generate a cell line or injected directly into mice. Several immunodeficient mouse models are available for selection based on the experimental design and the type of tumorigenicity assay. Subcutaneous injection is the most widely used technique for xenotransplantation because it involves a simple procedure allowing the use of a large number of cells, although it may not mimic the original tumor environment. SCC cell injections at the epidermal-to-dermal junction or grafting of organotypic cultures containing human stroma have also been used to more closely resemble the tumor environment. Mixing of SCC cells with cancer-associated fibroblasts can allow the study of their interaction and reciprocal influence, which can be followed in real time by intradermal ear injection using conventional fluorescent microscopy. In this article, we will review recent advances in xenotransplantation technologies applied to study behavior of SCC cells and their interaction with the tumor environment in vivo.

  19. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to find out the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and the estimation of those parameters in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO-2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO-2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found; these can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed by considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the findings on the causes and effects of flooding.

  20. A Biomechanical Modeling Guided CBCT Estimation Technique.

    Science.gov (United States)

    Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing

    2017-02-01

    Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks.

  1. Modeling Techniques for IN/Internet Interworking

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper focuses on the authors' contributions to ITU-T to develop the network modeling for the support of IN/Internet interworking. Following an introduction to benchmark interworking services, the paper describes the consensus enhanced DFP architecture, which was reached based on the IETF reference model and the authors' proposal. Then the proposed information flows for benchmark services are presented, with new or updated flows identified. Finally, a brief description is given of implementation techniques.

  2. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available: We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension, Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval for the GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used, and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
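
    The following sketch is not the AARJ/Reversible Jump sampler itself, only a much simpler illustration of its end products: posterior model probabilities and a model-averaged retrieval, here obtained from a BIC approximation over four invented polynomial models of cross-section versus wavelength.

```python
import numpy as np

rng = np.random.default_rng(1)
wl = np.linspace(350, 700, 40)                 # wavelength grid [nm]
y = 1e-3 * (wl / 500.0) ** -1.3 + rng.normal(0, 4e-5, wl.size)

def fit(degree):
    """Fit a polynomial in log-wavelength; return prediction and BIC."""
    X = np.vander(np.log(wl), degree + 1)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = np.sum((y - X @ beta) ** 2)
    bic = wl.size * np.log(rss / wl.size) + (degree + 1) * np.log(wl.size)
    return X @ beta, bic

preds, bics = zip(*(fit(d) for d in (1, 2, 3, 4)))
bics = np.array(bics)
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                   # approximate posterior probs
print("model probabilities:", np.round(w, 3))
averaged = sum(wi * p for wi, p in zip(w, preds))  # model-averaged retrieval
```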

  3. Field Assessment Techniques for Bank Erosion Modeling

    Science.gov (United States)

    1990-11-22

    Field Assessment Techniques for Bank Erosion Modeling. First Interim Report, prepared for the US Army European Research Office, Edison House. Includes sedimentation analysis sheets, and guidelines for the use of sedimentation analysis sheets in the field, prepared for the US Army Engineer Waterways Experiment Station.

  4. Advanced interaction techniques for medical models

    OpenAIRE

    Monclús, Eva

    2014-01-01

    Advances in Medical Visualization allow the analysis of anatomical structures with the use of 3D models reconstructed from a stack of intensity-based images acquired through different techniques, the Computerized Tomography (CT) modality being one of the most common. A general medical volume graphics application usually includes an exploration task, which is sometimes preceded by an analysis process where the anatomical structures of interest are first identified. ...

  5. Level of detail technique for plant models

    Institute of Scientific and Technical Information of China (English)

    Xiaopeng ZHANG; Qingqiong DENG; Marc JAEGER

    2006-01-01

    Realistic modelling and interactive rendering of forestry and landscape are a challenge in computer graphics and virtual reality. Recent developments in plant growth modelling and simulation lead to plant models faithful to botanical structure and development, representing not only the complex architecture of a real plant but also its functioning in interaction with its environment. The complex geometry and material of a large group of plants are a big burden even for high-performance computers, and they often overwhelm the numerical calculation power and graphic rendering power. Thus, in order to accelerate the rendering speed of a group of plants, software techniques are often developed. In this paper, we focus on plant organs, i.e. leaves, flowers, fruits and internodes. Our approach is a simplification process of all sparse organs at the same time, i.e. Level of Detail (LOD) and multi-resolution models for plants. We explain here the principle and construction of plant simplification. They are used to construct LOD and multi-resolution models of sparse organs and branches of big trees. These approaches benefit from basic knowledge of plant architecture, clustering tree organs according to biological structures. We illustrate the potential of our approach on several big virtual plants for geometrical compression or LOD model definition. Finally, we prove the efficiency of the proposed LOD models for realistic rendering with a virtual scene composed of 184 mature trees.

  6. A general technique to train language models on language models

    NARCIS (Netherlands)

    Nederhof, MJ

    2005-01-01

    We show that under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between grammar and trained automaton...

  7. Incorporation of RAM techniques into simulation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.C. Jr.; Haire, M.J.; Schryver, J.C.

    1995-07-01

    This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model that represents the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve the operational performance of the vehicles and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment and includes failure management subnetworks. RAM information and other performance measures are collected which have impact on design requirements. Design changes are evaluated through "what if" questions, sensitivity studies, and battle scenario changes.

  8. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B.; Fagan, J.R. Jr.

    1995-12-31

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. This will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.

  9. Geometrical geodesy techniques in Goddard earth models

    Science.gov (United States)

    Lerch, F. J.

    1974-01-01

    The method for combining geometrical data with satellite dynamical and gravimetry data for the solution of geopotential and station location parameters is discussed. Geometrical tracking data (simultaneous events) from the global network of BC-4 stations are currently being processed in a solution that will greatly enhance the geodetic world system of stations. Previously the stations in Goddard earth models had been derived only from dynamical tracking data. A linear regression model is formulated for combining the data, based upon the statistical technique of weighted least squares. Reduced normal equations, independent of satellite and instrumental parameters, are derived for the solution of the geodetic parameters. Exterior standards for the evaluation of the solution and for the scale of the earth's figure are discussed.

  10. Model assisted qualification of NDE techniques

    Science.gov (United States)

    Ballisat, Alexander; Wilcox, Paul; Smith, Robert; Hallam, David

    2017-02-01

    The costly and time consuming nature of empirical trials typically performed for NDE technique qualification is a major barrier to the introduction of NDE techniques into service. The use of computational models has been proposed as a method by which the process of qualification can be accelerated. However, given the number of possible parameters present in an inspection, the number of combinations of parameter values scales as a power law, and running simulations at all of these points rapidly becomes infeasible. Given that many NDE inspections result in a single-valued scalar quantity, such as a phase or amplitude, using suitable sampling and interpolation methods significantly reduces the number of simulations that have to be performed. This paper presents initial results of applying Latin Hypercube Designs and Multivariate Adaptive Regression Splines to the inspection of a fastener hole using an oblique ultrasonic shear wave inspection. It is demonstrated that an accurate mapping of the response of the inspection for the variations considered can be achieved by sampling only a small percentage of the parameter space of variations, and that the required percentage decreases as the number of parameters and the number of possible sample points increases. It is then shown how the outcome of this process can be used to assess the reliability of the inspection through commonly used metrics such as probability of detection, thereby providing an alternative methodology to the current practice of performing empirical probability of detection trials.
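
    A hedged sketch of that sampling-plus-interpolation workflow: Latin hypercube samples of a two-parameter inspection, a mocked-up scalar response standing in for a real ultrasonic model, and SciPy's RBF interpolator substituted for the paper's Multivariate Adaptive Regression Splines (MARS is not available in SciPy). Parameter names and ranges are invented.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def response(x):
    """Mock scalar inspection amplitude vs. (crack angle, probe offset)."""
    angle, offset = x[..., 0], x[..., 1]
    return np.cos(np.radians(angle)) * np.exp(-(offset - 0.5) ** 2 / 0.1)

lo, hi = np.array([0.0, 0.0]), np.array([45.0, 1.0])
X = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(60), lo, hi)
surrogate = RBFInterpolator(X, response(X), smoothing=1e-6)

# The surrogate now covers the space at a fraction of a full grid's cost.
test = qmc.scale(qmc.LatinHypercube(d=2, seed=1).random(200), lo, hi)
err = surrogate(test) - response(test)
print("surrogate RMS error:", np.sqrt(np.mean(err ** 2)))
```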

  11. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States)]; Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)]

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbo-machinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.

  12. Compact Models and Measurement Techniques for High-Speed Interconnects

    CERN Document Server

    Sharma, Rohit

    2012-01-01

    Compact Models and Measurement Techniques for High-Speed Interconnects provides detailed analysis of issues related to high-speed interconnects from the perspective of modeling approaches and measurement techniques. Particular focus is laid on the unified approach (variational method combined with the transverse transmission line technique) to develop efficient compact models for planar interconnects. This book will give a qualitative summary of the various reported modeling techniques and approaches and will help researchers and graduate students with deeper insights into interconnect models in particular and interconnect in general. Time domain and frequency domain measurement techniques and simulation methodology are also explained in this book.

  13. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    Haan, de G.; Veer, van der G.C.; Vliet, van J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in human-computer interaction...

  14. Quantitative model validation techniques: new insights

    CERN Document Server

    Ling, You

    2012-01-01

    This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing...

  15. Techniques and Simulation Models in Risk Management

    OpenAIRE

    Mirela GHEORGHE

    2012-01-01

    In the present paper, the scientific approach of the research starts from the theoretical framework of the simulation concept and then continues in the setting of practical reality, thus providing simulation models for a broad range of inherent risks specific to any organization and simulation of those models, using the informatics instrument @Risk (Palisade). The reason behind this research lies in the need for simulation models that will allow the person in charge of decision taking i...

  16. A TECHNIQUE OF DIGITAL SURFACE MODEL GENERATION

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    It is usually a time-consuming process to set up, in real time, a 3D digital surface model (DSM) of an object with a complex surface. On the basis of the architectural survey project of "Chilin Nunnery Reconstruction", this paper investigates an easy and feasible way, that is, on the project site, applying digital close-range photogrammetry and CAD techniques to establish the DSM for simulating ancient architectures with complex surfaces. The method has been proved very effective in practice.

  17. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  18. Microwave Diffraction Techniques from Macroscopic Crystal Models

    Science.gov (United States)

    Murray, William Henry

    1974-01-01

    Discusses the construction of a diffractometer table and four microwave models which are built of styrofoam balls with implanted metallic reflecting spheres and designed to simulate the structures of carbon (graphite structure), sodium chloride, tin oxide, and palladium oxide. Included are samples of Bragg patterns and computer-analysis results.…

  19. Validation technique using mean and variance of kriging model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)]

    2007-07-01

    Rigorously validating the accuracy of a metamodel is an important research area in metamodel techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot measure the fidelity of a metamodel quantitatively. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even if the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and the variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique becomes more efficient and accurate than the cross-validation technique because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a trend similar to the root mean squared error, such that it can be used as a stop criterion for sequential sampling.
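
    A numpy-only sketch of the ingredient the technique relies on: a kriging (Gaussian process) model whose predictive mean and variance are available in closed form, so a validation number can be built from both moments without cross-validation. The kernel, its length scale and the test function below are my assumptions, not the paper's.

```python
import numpy as np

def kernel(a, b, length=0.3):
    """Squared-exponential covariance between 1-D site sets a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

f = lambda x: np.sin(6 * x) + 0.5 * x          # response being modeled
X = np.array([0.0, 0.2, 0.45, 0.7, 1.0])       # sequential sample sites
Kinv = np.linalg.inv(kernel(X, X) + 1e-10 * np.eye(X.size))

xs = np.linspace(0, 1, 201)
ks = kernel(xs, X)
mean = ks @ Kinv @ f(X)                             # kriging predictive mean
var = 1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks)  # predictive variance

# A validation quantity built from both moments: the average predictive
# spread tracks the remaining uncertainty, similar in trend to the RMSE.
print("mean predictive std:", np.sqrt(np.maximum(var, 0)).mean())
print("true RMSE          :", np.sqrt(np.mean((mean - f(xs)) ** 2)))
```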

  20. Comparative Analysis of Vehicle Make and Model Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Faiza Ayub Syed

    2014-03-01

    Full Text Available: Vehicle Make and Model Recognition (VMMR) has emerged as a significant element of vision-based systems because of its application in access control systems, traffic control and monitoring systems, security systems and surveillance systems, etc. So far a number of techniques have been developed for vehicle recognition. Each technique follows a different methodology and classification approach. The evaluation results highlight the recognition technique with the highest accuracy level. In this paper we have pointed out the working of various vehicle make and model recognition techniques and compared these techniques on the basis of methodology, principles, classification approach, classifier and level of recognition. After comparing these factors we concluded that Locally Normalized Harris Corner Strengths (LNHS) performs best as compared to other techniques. LNHS uses Bayes and K-NN classification approaches for vehicle classification. It extracts information from the frontal view of vehicles for vehicle make and model recognition.

  1. Metamaterials modelling, fabrication and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refractive index are also parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, various approaches for determining the value of the refractive index...

  2. Metamaterials modelling, fabrication, and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    2012-01-01

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refractive index are also parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, we will present our approach for determining the field enhancement in slits...

  3. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory, the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods, focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order models...

  4. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures... We also present a new technique for range reporting problems in the pointer machine and the I/O-model; with this technique, we tighten the gap between the known upper bound and lower bound for the most fundamental range reporting problem, orthogonal range reporting.

  5. Symmetry and partial order reduction techniques in model checking Rebeca

    NARCIS (Netherlands)

    Jaghouri, M.M.; Sirjani, M.; Mousavi, M.R.; Movaghar, A.

    2007-01-01

    Rebeca is an actor-based language with formal semantics that can be used in modeling concurrent and distributed software and protocols. In this paper, we study the application of partial order and symmetry reduction techniques to model checking dynamic Rebeca models. Finding symmetry-based equivalence...

  6. Prediction of survival with alternative modeling techniques using pseudo values

    NARCIS (Netherlands)

    T. van der Ploeg (Tjeerd); F.R. Datema (Frank); R.J. Baatenburg de Jong (Robert Jan); E.W. Steyerberg (Ewout)

    2014-01-01

    Background: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo values enable statistically appropriate analyses of survival outcomes...

  7. Use of surgical techniques in the rat pancreas transplantation model

    National Research Council Canada - National Science Library

    Ma, Yi; Guo, Zhi-Yong

    2008-01-01

    ... (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years...

  8. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and some man-made features belonging to urban areas. There are various terms used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically a computerized or digital model of a city containing the graphic representation of buildings and other objects in 2.5 or 3D. Generally, three main Geomatics approaches are used for virtual 3D city model generation: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, many researchers use terrestrial images, applying close-range photogrammetry with DSM and texture mapping. We start this paper with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on automation (automatic, semi-automatic and manual methods), and another based on data input techniques (photogrammetry and laser techniques). After a detailed study of these, we give the conclusions of this research, together with a short justification and analysis and the present trend in 3D city modeling. This paper gives an overview of the techniques related to the generation of virtual 3D city models using Geomatics techniques, and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. The point cloud model is a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3...

  9. Prediction of survival with alternative modeling techniques using pseudo values.

    Directory of Open Access Journals (Sweden)

    Tjeerd van der Ploeg

    Full Text Available: BACKGROUND: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo values enable statistically appropriate analyses of survival outcomes when used in seven alternative modeling techniques. METHODS: In this case study, we analyzed survival of 1282 Dutch patients with newly diagnosed Head and Neck Squamous Cell Carcinoma (HNSCC) with conventional Kaplan-Meier and Cox regression analysis. We subsequently calculated pseudo values to reflect the individual survival patterns. We used these pseudo values to compare recursive partitioning (RPART), neural nets (NNET), logistic regression (LR), general linear models (GLM) and three variants of support vector machines (SVM) with respect to dichotomous 60-month survival, and continuous pseudo values at 60 months or estimated survival time. We used the area under the ROC curve (AUC) and the root of the mean squared error (RMSE) to compare the performance of these models using bootstrap validation. RESULTS: Of a total of 1282 patients, 986 patients died during a median follow-up of 66 months (60-month survival: 52% [95% CI: 50%-55%]). The LR model had the highest optimism-corrected AUC (0.791) to predict 60-month survival, followed by the SVM model with a linear kernel (AUC 0.787). The GLM model had the smallest optimism-corrected RMSE when continuous pseudo values were considered for 60-month survival or the estimated survival time, followed by SVM models with a linear kernel. The estimated importance of predictors varied substantially by the specific aspect of survival studied and modeling technique used. CONCLUSIONS: The use of pseudo values makes it readily possible to apply alternative modeling techniques to survival problems, to compare their performance and to search further for promising...
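
    The pseudo-value construction at the heart of the study is easy to state: with n subjects and Kaplan-Meier estimate S(t), subject i's pseudo value at t is n*S(t) - (n-1)*S_{-i}(t), where S_{-i} leaves subject i out. A small sketch on invented censored data:

```python
import numpy as np

def km_survival(time, event, t):
    """Kaplan-Meier estimate of S(t); event 1 = death, 0 = censored."""
    s = 1.0
    for u in np.unique(time[event == 1]):
        if u > t:
            break
        deaths = np.sum((time == u) & (event == 1))
        at_risk = np.sum(time >= u)
        s *= 1.0 - deaths / at_risk
    return s

rng = np.random.default_rng(2)
time = rng.exponential(60, 30).round(1)            # follow-up times [months]
event = (rng.random(30) < 0.7).astype(int)         # ~30% censoring

t0, n = 60.0, time.size
s_full = km_survival(time, event, t0)
pseudo = np.array([n * s_full
                   - (n - 1) * km_survival(np.delete(time, i),
                                           np.delete(event, i), t0)
                   for i in range(n)])
print("S(60):", round(s_full, 3), "pseudo-value range:",
      pseudo.min().round(2), "to", pseudo.max().round(2))
```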

  10. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. The book begins with an introduction to circuit analysis techniques, laws, and frequency and time domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  11. Using data mining techniques for building fusion models

    Science.gov (United States)

    Zhang, Zhongfei; Salerno, John J.; Regan, Maureen A.; Cutler, Debra A.

    2003-03-01

    Over the past decade many techniques have been developed which attempt to predict possible events through the use of given models or patterns of activity. These techniques work quite well in the case where one has a model or a valid representation of activity. However, in reality, for the majority of the time this is not the case. Models that do exist were in many cases hand-crafted, required many man-hours to develop, and are very brittle in the dynamic world in which we live. Data mining techniques have shown some promise in providing a set of solutions. In this paper we provide the details of our motivation, theory and techniques which we have developed, as well as the results of a set of experiments.

  12. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...

  13. Matrix eigenvalue model: Feynman graph technique for all genera

    Energy Technology Data Exchange (ETDEWEB)

    Chekhov, Leonid [Steklov Mathematical Institute, ITEP and Laboratoire Poncelet, Moscow (Russian Federation)]; Eynard, Bertrand [SPhT, CEA, Saclay (France)]

    2006-12-15

    We present the diagrammatic technique for calculating the free energy of the matrix eigenvalue model (the model with an arbitrary power β of the Vandermonde determinant) to all orders of the 1/N expansion in the case where the limiting eigenvalue distribution spans an arbitrary (but fixed) number of disjoint intervals (curves).

  14. Manifold learning techniques and model reduction applied to dissipative PDEs

    CERN Document Server

    Sonday, Benjamin E; Gear, C William; Kevrekidis, Ioannis G

    2010-01-01

    We link nonlinear manifold learning techniques for data analysis/compression with model reduction techniques for evolution equations with time scale separation. In particular, we demonstrate a "nonlinear extension" of the POD-Galerkin approach to obtaining reduced dynamic models of dissipative evolution equations. The approach is illustrated through a reaction-diffusion PDE, and the performance of different simulators on the full and the reduced models is compared. We also discuss the relation of this nonlinear extension with the so-called "nonlinear Galerkin" methods developed in the context of Approximate Inertial Manifolds.

  15. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    Science.gov (United States)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
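
    A one-dimensional miniature of the paper's comparison, assuming nothing beyond numpy: a least-squares quadratic versus a Gaussian-kernel interpolant (a kriging-style predictor with a fixed kernel) on a response with several local extrema of my own choosing.

```python
import numpy as np

f = lambda x: np.sin(3 * x) + 0.3 * x ** 2     # several local extrema
X = np.linspace(-2, 2, 9)                      # sample sites
y = f(X)

# Quadratic polynomial by least squares: cheap and smooth, may underfit.
coeff = np.polyfit(X, y, 2)

# Gaussian-kernel interpolant: flexible, reproduces the samples exactly.
def phi(a, b, length=0.5):
    return np.exp(-((a[:, None] - b[None, :]) / length) ** 2)

w = np.linalg.solve(phi(X, X) + 1e-12 * np.eye(X.size), y)

xs = np.linspace(-2, 2, 400)
for name, pred in [("quadratic    ", np.polyval(coeff, xs)),
                   ("kriging-style", phi(xs, X) @ w)]:
    print(name, "RMSE:", np.sqrt(np.mean((pred - f(xs)) ** 2)).round(4))
```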

  16. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.

  17. A finite element parametric modeling technique of aircraft wing structures

    Institute of Scientific and Technical Information of China (English)

    Tang Jiapeng; Xi Ping; Zhang Baoyuan; Hu Bifu

    2013-01-01

    A finite element parametric modeling method for aircraft wing structures is proposed in this paper to address the time-consuming nature of finite element analysis pre-processing. The main research is positioned in the preliminary design phase of aircraft structures. A knowledge-driven system for fast finite element modeling is built. Based on this method, employing a template parametric technique, knowledge including design methods, rules, and expert experience in the process of modeling is encapsulated, and a finite element model is established automatically, which greatly improves the speed, accuracy, and degree of standardization of modeling. The skeleton model, geometric mesh model, and finite element model, including finite element mesh and property data, are established through parametric description and automatic update. The outcomes of the research show that the method settles a series of problems of parameter association and model update in the process of finite element modeling, which establishes a key technical basis for finite element parametric analysis and optimization design.

  18. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  19. An Empirical Study of Smoothing Techniques for Language Modeling

    CERN Document Server

    Chen, S F; Chen, Stanley F.; Goodman, Joshua T.

    1996-01-01

    We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.
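
    As a concrete illustration of the simplest of these ideas, the sketch below implements Jelinek-Mercer style linear interpolation of a maximum-likelihood bigram model with a unigram model on a toy corpus; the corpus and the weight lam are invented, and in practice the weight would be tuned on held-out data.

    ```python
    # Jelinek-Mercer (linear interpolation) smoothing for a bigram model.
    from collections import Counter

    corpus = "the cat sat on the mat the cat ran".split()
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    N = len(corpus)
    lam = 0.7  # assumed interpolation weight; tune on held-out data

    def p_unigram(w):
        return unigrams[w] / N

    def p_bigram_ml(w_prev, w):
        # maximum-likelihood conditional estimate, zero for unseen history
        return bigrams[(w_prev, w)] / unigrams[w_prev] if unigrams[w_prev] else 0.0

    def p_interpolated(w_prev, w):
        # P(w | w_prev) = lam * P_ML(w | w_prev) + (1 - lam) * P(w)
        return lam * p_bigram_ml(w_prev, w) + (1.0 - lam) * p_unigram(w)

    print(p_interpolated("the", "cat"))  # smoothed conditional probability
    ```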

  20. Optimization using surrogate models - by the space mapping technique

    DEFF Research Database (Denmark)

    Søndergaard, Jacob

    2003-01-01

    Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three ... conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted.

  2. Team mental models: techniques, methods, and analytic approaches.

    Science.gov (United States)

    Langan-Fox, J; Code, S; Langfield-Smith, K

    2000-01-01

    Effective team functioning requires the existence of a shared or team mental model among members of a team. However, the best method for measuring team mental models is unclear. Methods reported vary in terms of how mental model content is elicited and analyzed or represented. We review the strengths and weaknesses of various methods that have been used to elicit, represent, and analyze individual and team mental models and provide recommendations for method selection and development. We describe the nature of mental models and review techniques that have been used to elicit and represent them. We focus on a case study on selecting a method to examine team mental models in industry. The processes involved in the selection and development of an appropriate method for eliciting, representing, and analyzing team mental models are described. The criteria for method selection were (a) applicability to the problem under investigation; (b) practical considerations - suitability for collecting data from the targeted research sample; and (c) theoretical rationale - the assumption that associative networks in memory are a basis for the development of mental models. We provide an evaluation of the method matched to the research problem and make recommendations for future research. The practical applications of this research include the provision of a technique for analyzing team mental models in organizations, the development of methods and processes for eliciting a mental model from research participants in their normal work environment, and a survey of available methodologies for mental model research.

  3. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers four-stage cycle productivity, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program which can be solved to optimality using traditional methods. Preliminary results indicate that productivity can be improved through a change in equipment, and the approach can easily be applied in both manufacturing and service industries.
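
    The abstract does not give the model's coefficients, so the sketch below only illustrates the general shape of such a formulation: a small binary knapsack-style mixed integer program in PuLP that picks techniques to maximize a linear productivity gain under a budget. All gains, costs, and the budget are invented.

    ```python
    # Hypothetical technique-selection MIP: maximize gain under a budget cap.
    import pulp

    gains = [4.0, 2.5, 6.0, 1.5]   # invented productivity contributions
    costs = [3.0, 1.0, 5.0, 1.0]   # invented implementation costs
    budget = 6.0

    prob = pulp.LpProblem("technique_selection", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(gains))]

    prob += pulp.lpSum(g * xi for g, xi in zip(gains, x))            # objective
    prob += pulp.lpSum(c * xi for c, xi in zip(costs, x)) <= budget  # budget

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([i for i, xi in enumerate(x) if xi.value() == 1])  # chosen techniques
    ```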

  4. Concerning the Feasibility of Example-driven Modelling Techniques

    OpenAIRE

    Thorne, Simon; Ball, David; Lawson, Zoe Frances

    2008-01-01

    We report on a series of experiments concerning the feasibility of example-driven modelling. The main aim was to establish experimentally, within an academic environment, the relationship between error and task complexity using (a) traditional spreadsheet modelling and (b) example-driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several...

  5. Advanced Phase noise modeling techniques of nonlinear microwave devices

    OpenAIRE

    Prigent, M.; J. C. Nallatamby; R. Quere

    2004-01-01

    In this paper we present a coherent set of tools allowing an accurate and predictive design of low phase noise oscillators. Advanced phase noise modelling techniques in nonlinear microwave devices must be supported by a proven combination of the following: electrical modeling of the low-frequency noise of semiconductor devices, oriented to circuit CAD, in which the local noise sources will be either cyclostationary or quasistationary noise sources; and theoretic...

  6. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  7. Validation of Models : Statistical Techniques and Data Availability

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1999-01-01

    This paper shows which statistical techniques can be used to validate simulation models, depending on which real-life data are available. Concerning this availability three situations are distinguished (i) no data, (ii) only output data, and (iii) both input and output data. In case (i) - no real

  8. Techniques and tools for efficiently modeling multiprocessor systems

    Science.gov (United States)

    Carpenter, T.; Yalamanchili, S.

    1990-01-01

    System-level tools and methodologies associated with an integrated approach to the development of multiprocessor systems are examined. Tools for capturing initial program structure, automated program partitioning, automated resource allocation, and high-level modeling of the combined application and resource are discussed. The primary language focus of the current implementation is Ada, although the techniques should be appropriate for other programming paradigms.

  9. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Techniques that take into account nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for validating cognitive levels. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, so improvements are needed in describing the cognitive skills measured by items.

  10. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactors' primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much faster time (microseconds instead of hours or days).

  11. Comparing modelling techniques for analysing urban pluvial flooding.

    Science.gov (United States)

    van Dijk, E; van der Meulen, J; Kluck, J; Straatman, J H M

    2014-01-01

    Short peak rainfall intensities cause sewer systems to overflow leading to flooding of streets and houses. Due to climate change and densification of urban areas, this is expected to occur more often in the future. Hence, next to their minor (i.e. sewer) system, municipalities have to analyse their major (i.e. surface) system in order to anticipate urban flooding during extreme rainfall. Urban flood modelling techniques are powerful tools in both public and internal communications and transparently support design processes. To provide more insight into the (im)possibilities of different urban flood modelling techniques, simulation results have been compared for an extreme rainfall event. The results show that, although modelling software is tending to evolve towards coupled one-dimensional (1D)-two-dimensional (2D) simulation models, surface flow models, using an accurate digital elevation model, prove to be an easy and fast alternative to identify vulnerable locations in hilly and flat areas. In areas at the transition between hilly and flat, however, coupled 1D-2D simulation models give better results since catchments of major and minor systems can differ strongly in these areas. During the decision making process, surface flow models can provide a first insight that can be complemented with complex simulation models for critical locations.

  12. Separable Watermarking Technique Using the Biological Color Model

    Directory of Open Access Journals (Sweden)

    David Nino

    2009-01-01

    Problem statement: The issue of having robust and fragile watermarking is still a main focus for various researchers worldwide. The performance of a watermarking technique depends on its complexity as well as its feasibility of implementation. These issues are tested using various kinds of attacks, including geometry and transformation. Watermarking techniques for color images are more challenging than those for gray images in terms of complexity and information handling. In this study, we focused on implementing a watermarking technique in color images using the biological model. Approach: We proposed a novel method for watermarking using the spatial and the Discrete Cosine Transform (DCT) domains. The proposed method dealt with colored images in the biological color model, the Hue, Saturation and Intensity (HSI) model. The technique was implemented and used against various colored images, including standard ones such as the pepper image. The experiments were done using various attacks such as cropping, transformation and geometry. Results: The method showed high accuracy in data retrieval, and the technique is fragile against geometric attacks. Conclusion: Watermark security was increased by using the Hadamard transform matrix. The watermarks used were meaningful and of varying sizes and details.
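
    As a rough illustration of DCT-domain embedding of the kind described (not the authors' exact scheme), the sketch below hides one bit in a mid-frequency DCT coefficient of an 8x8 intensity block, with intensity crudely approximated as the RGB mean in place of a full HSI conversion; the coefficient position and quantization step delta are arbitrary choices.

    ```python
    # Toy DCT-domain watermark: one bit per 8x8 block via coefficient parity.
    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64, 3)).astype(float)  # toy RGB image
    intensity = img.mean(axis=2)        # crude stand-in for the HSI I channel

    def embed_bit(block, bit, delta=16.0):
        """Quantize one mid-frequency DCT coefficient to encode a bit."""
        c = dctn(block, norm="ortho")
        q = np.round(c[3, 4] / delta)
        if int(q) % 2 != bit:           # force coefficient parity to match bit
            q += 1
        c[3, 4] = q * delta
        return idctn(c, norm="ortho")

    def extract_bit(block, delta=16.0):
        c = dctn(block, norm="ortho")
        return int(np.round(c[3, 4] / delta)) % 2

    blk = intensity[:8, :8]
    marked = embed_bit(blk, bit=1)
    print(extract_bit(marked))  # -> 1
    ```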

  13. Impact of Domain Modeling Techniques on the Quality of Domain Model: An Experiment

    Directory of Open Access Journals (Sweden)

    Hiqmat Nisa

    2016-10-01

    The unified modeling language (UML) is widely used to analyze and design different software development artifacts in object oriented development. The domain model is a significant artifact that models the problem domain and visually represents real world objects and relationships among them. It facilitates the comprehension process by identifying the vocabulary and key concepts of the business world. The category list technique identifies concepts and associations with the help of predefined categories, which are important to business information systems, whereas the noun phrasing technique performs grammatical analysis of use case descriptions to recognize concepts and associations. Both of these techniques are used for the construction of domain models; however, no empirical evidence exists that evaluates the quality of the resultant domain model constructed via these two basic techniques. A controlled experiment was performed to investigate the impact of the category list and noun phrasing techniques on the quality of the domain model. The constructed domain model is evaluated for completeness, correctness and the effort required for its design. The obtained results show that the category list technique is better than the noun phrasing technique for the identification of concepts, as it avoids generating unnecessary elements, i.e. extra concepts, associations and attributes in the domain model. The noun phrasing technique produces a comprehensive domain model and requires less effort as compared to the category list. There is no statistically significant difference between the two techniques in the case of correctness.

  15. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    Science.gov (United States)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).

  16. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.

  17. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
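
    A rough numerical sketch of the idea follows (not the paper's exact construction): residuals from a fitted linear model are cumulatively summed in the order of a covariate, and null realizations are generated by perturbing the residuals with standard normal multipliers, giving a crude supremum-type p-value.

    ```python
    # Cumulative-residual check of a working linear model against a covariate.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    x = rng.uniform(0, 1, n)
    y = 2.0 * x + rng.normal(0, 0.3, n)          # data actually linear in x

    # Fit the working model y = a + b*x and compute residuals
    X = np.vstack([np.ones(n), x]).T
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta

    order = np.argsort(x)
    obs_process = np.cumsum(resid[order]) / np.sqrt(n)   # observed process

    # Null realizations: perturb residuals with N(0,1) multipliers
    sims = np.array([np.cumsum(resid[order] * rng.normal(size=n)) / np.sqrt(n)
                     for _ in range(500)])

    # Crude supremum-type p-value: how often null excursions exceed observed
    p_value = np.mean(np.abs(sims).max(axis=1) >= np.abs(obs_process).max())
    print(round(p_value, 3))   # large p suggests no detected misspecification
    ```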

  18. Videogrammetric Model Deformation Measurement Technique for Wind Tunnel Applications

    Science.gov (United States)

    Barrows, Danny A.

    2006-01-01

    Videogrammetric measurement technique developments at NASA Langley were driven largely by the need to quantify model deformation at the National Transonic Facility (NTF). This paper summarizes recent wind tunnel applications and issues at the NTF and other NASA Langley facilities including the Transonic Dynamics Tunnel, 31-Inch Mach 10 Tunnel, 8-Ft High Temperature Tunnel, and the 20-Ft Vertical Spin Tunnel. In addition, several adaptations of wind tunnel techniques to non-wind tunnel applications are summarized. These applications include wing deformation measurements on vehicles in flight, determining aerodynamic loads based on optical elastic deformation measurements, measurements on ultra-lightweight and inflatable space structures, and the use of an object-to-image plane scaling technique to support NASA's Space Exploration program.

  19. An observational model for biomechanical assessment of sprint kayaking technique.

    Science.gov (United States)

    McDonnell, Lisa K; Hume, Patria A; Nolte, Volker

    2012-11-01

    Sprint kayaking stroke phase descriptions for biomechanical analysis of technique vary among kayaking literature, with inconsistencies not conducive for the advancement of biomechanics applied service or research. We aimed to provide a consistent basis for the categorisation and analysis of sprint kayak technique by proposing a clear observational model. Electronic databases were searched using key words kayak, sprint, technique, and biomechanics, with 20 sources reviewed. Nine phase-defining positions were identified within the kayak literature and were divided into three distinct types based on how positions were defined: water-contact-defined positions, paddle-shaft-defined positions, and body-defined positions. Videos of elite paddlers from multiple camera views were reviewed to determine the visibility of positions used to define phases. The water-contact-defined positions of catch, immersion, extraction, and release were visible from multiple camera views, therefore were suitable for practical use by coaches and researchers. Using these positions, phases and sub-phases were created for a new observational model. We recommend that kayaking data should be reported using single strokes and described using two phases: water and aerial. For more detailed analysis without disrupting the basic two-phase model, a four-sub-phase model consisting of entry, pull, exit, and aerial sub-phases should be used.

  20. One technique for refining the global Earth gravity models

    Science.gov (United States)

    Koneshov, V. N.; Nepoklonov, V. B.; Polovnev, O. V.

    2017-01-01

    The results of theoretical and experimental research on a technique for refining global Earth geopotential models such as EGM2008 in continental regions are presented. The technique is based on high-resolution satellite data for the Earth's surface topography, which enables allowance for the fine structure of the Earth's gravitational field without additional gravimetry data. The experimental studies are conducted using the example of the new GGMplus global gravity model of the Earth, with a resolution of about 0.5 km, which is obtained by expanding the EGM2008 model to degree 2190 with corrections for the topography calculated from the SRTM data. The GGMplus and EGM2008 models are compared with regional geoid models in 21 regions of North America, Australia, Africa, and Europe. The obtained estimates largely support the possibility of refining global geopotential models such as EGM2008 by the procedure implemented in GGMplus, particularly in regions with relatively high elevation differences.

  1. Interpolation techniques in robust constrained model predictive control

    Science.gov (United States)

    Kheawhom, Soorathep; Bumroongsri, Pornchai

    2017-05-01

    This work investigates interpolation techniques that can be employed in off-line robust constrained model predictive control for a discrete time-varying system. A sequence of feedback gains is determined by solving off-line a series of optimal control optimization problems. A corresponding sequence of nested robustly positive invariant sets, each either an ellipsoidal or a polyhedral set, is then constructed. At each sampling time, the smallest invariant set containing the current state is determined. If the current invariant set is the innermost set, the pre-computed gain associated with the innermost set is applied. Otherwise, the feedback gain is variable and determined by linear interpolation of the pre-computed gains. The proposed algorithms are illustrated with case studies of a two-tank system. The simulation results showed that the proposed interpolation techniques significantly improve the control performance of off-line robust model predictive control without sacrificing much on-line computational performance.

  2. Mathematical analysis techniques for modeling the space network activities

    Science.gov (United States)

    Foster, Lisa M.

    1992-01-01

    The objective of the present work was to explore and identify mathematical analysis techniques, and in particular, the use of linear programming. This topic was then applied to the Tracking and Data Relay Satellite System (TDRSS) in order to understand the space network better. Finally, a small scale version of the system was modeled, variables were identified, data was gathered, and comparisons were made between actual and theoretical data.
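
    As a toy illustration of the linear programming flavor of such an analysis (the actual TDRSS model is not described in enough detail to reproduce), the sketch below allocates a fixed budget of relay contact hours among three users to maximize weighted data return; all numbers are invented.

    ```python
    # Toy LP: allocate relay contact time under a total visibility budget.
    import numpy as np
    from scipy.optimize import linprog

    weights = np.array([3.0, 1.0, 2.0])   # value per hour of contact, per user
    demand_cap = [8.0, 5.0, 6.0]          # max useful hours per user
    total_hours = 12.0

    # linprog minimizes, so negate the weights to maximize total value
    res = linprog(c=-weights,
                  A_ub=np.ones((1, 3)), b_ub=[total_hours],
                  bounds=[(0, d) for d in demand_cap],
                  method="highs")
    print(res.x, -res.fun)  # optimal hours per user and the total value
    ```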

  3. EXPERIENCE WITH SYNCHRONOUS GENERATOR MODEL USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    OpenAIRE

    N.RATHIKA; Dr.A.Senthil kumar; A.ANUSUYA

    2014-01-01

    This paper addresses the modeling of a polyphase synchronous generator and the minimization of power losses using the Particle Swarm Optimization (PSO) technique with a constriction factor. Use of a polyphase synchronous generator allows the total power circulating in the system to be distributed across all phases. Another advantage of a polyphase system is that a fault in one winding does not lead to system shutdown. Process optimization is the discipline of adjusting a process so as...

  4. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    Science.gov (United States)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…

  5. Equivalence and differences between structural equation modeling and state-space modeling techniques

    NARCIS (Netherlands)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, E.L.; Dolan, C.V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and

  7. A Comparison of Evolutionary Computation Techniques for IIR Model Identification

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2014-01-01

    System identification is a complex optimization problem which has recently attracted attention in the fields of science and engineering. In particular, the use of infinite impulse response (IIR) models for identification is preferred over their equivalent finite impulse response (FIR) models, since the former yield more accurate models of physical plants for real world applications. However, IIR structures tend to produce multimodal error surfaces whose cost functions are significantly difficult to minimize. Evolutionary computation techniques (ECT) are used to estimate the solution to complex optimization problems. They are often designed to meet the requirements of particular problems because no single optimization algorithm can solve all problems competitively. Therefore, when new algorithms are proposed, their relative efficacies must be appropriately evaluated. Several comparisons among ECT have been reported in the literature. Nevertheless, they suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. This study presents a comparison of various evolutionary computation optimization techniques applied to IIR model identification. Results over several models are presented and statistically validated.
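
    To show the general setup, the sketch below identifies a first-order IIR plant from input-output data by minimizing the mean squared output error with differential evolution, one representative evolutionary technique (the paper compares several); the plant coefficients and search bounds are invented.

    ```python
    # IIR model identification posed as black-box output-error minimization.
    import numpy as np
    from scipy.signal import lfilter
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(2)
    u = rng.normal(size=500)                      # excitation input
    b_true, a_true = [0.5, -0.2], [1.0, -0.6]     # "unknown" first-order plant
    y = lfilter(b_true, a_true, u)

    def output_error(params):
        b0, b1, a1 = params
        y_hat = lfilter([b0, b1], [1.0, a1], u)
        return np.mean((y - y_hat) ** 2)          # mean squared output error

    result = differential_evolution(output_error,
                                    bounds=[(-1, 1), (-1, 1), (-0.99, 0.99)],
                                    seed=0)
    print(result.x)   # should approach [0.5, -0.2, -0.6]
    ```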

  8. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
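
    As one concrete global method of the kind compared (not necessarily one of the report's), the sketch below estimates first-order Sobol sensitivity indices with a Saltelli-style pick-freeze Monte Carlo estimator on a toy nonlinear model; the model and sample size are illustrative.

    ```python
    # First-order Sobol indices via pick-freeze Monte Carlo on a toy model.
    import numpy as np

    rng = np.random.default_rng(3)
    n, k = 100_000, 3

    def model(X):
        # toy nonlinear model with an interaction between inputs 0 and 2
        return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + X[:, 0] * X[:, 2]

    A = rng.uniform(-np.pi, np.pi, (n, k))
    B = rng.uniform(-np.pi, np.pi, (n, k))
    fA, fB = model(A), model(B)
    var_total = np.var(np.concatenate([fA, fB]))

    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # "pick-freeze" column swap
        # Saltelli-style first-order estimator
        S_i = np.mean(fB * (model(ABi) - fA)) / var_total
        print(f"S_{i} ~ {S_i:.3f}")
    ```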

  9. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N K; Duan, Q; Gao, X; Sorooshian, S

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
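
    Two of the named schemes are simple enough to sketch directly: the Simple Multi-model Average and a weighted average whose weights are fit by least squares on a calibration period, shown below on synthetic data standing in for the DMIP model outputs.

    ```python
    # SMA vs. a least-squares weighted average of imperfect model predictions.
    import numpy as np

    rng = np.random.default_rng(4)
    obs = rng.gamma(2.0, 50.0, size=365)           # synthetic "observed" flow
    # Three imperfect members: biased/noisy versions of the truth
    preds = np.vstack([obs * 1.2 + rng.normal(0, 20, 365),
                       obs * 0.8 + rng.normal(0, 30, 365),
                       obs + rng.normal(0, 40, 365)])

    sma = preds.mean(axis=0)                       # equal-weight average

    # WAM: least-squares weights on the first half, evaluated on the second
    train = slice(0, 182)
    w, *_ = np.linalg.lstsq(preds[:, train].T, obs[train], rcond=None)
    wam = w @ preds

    def rmse(p):
        test = slice(182, 365)
        return np.sqrt(np.mean((p[test] - obs[test]) ** 2))

    print(rmse(sma), rmse(wam), [rmse(p) for p in preds])
    ```

    On data like these, the fitted weights typically beat the equal-weight average because they absorb each member's bias, the same effect the study attributes to its bias-corrected combination schemes.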

  10. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.

  11. A New Mathematical Modeling Technique for Pull Production Control Systems

    Directory of Open Access Journals (Sweden)

    O. Srikanth

    2013-12-01

    The Kanban Control System is widely used to control the release of parts in multistage manufacturing systems operating under a pull production control system. Most work on the Kanban Control System deals with multi-product manufacturing systems. In this paper, we propose a regression modeling technique for a multistage manufacturing system that coordinates the release of parts into each stage of the system with the arrival of customer demands for final products. We also compare two variants of the Kanban Control System model, combining mathematical and Simulink models for the production coordination of parts in assembly manufacturing systems. In both variants, the production of a new subassembly is authorized only when an assembly kanban is available. Assembly kanbans become available when finished product is consumed. A simulation environment for the product line system is generated with the proposed model, and the mathematical model is implemented against the simulation model in MATLAB. Both the simulation and model outputs provide an in-depth analysis of each resulting control system for the modeled product line system.

  12. Evolution of Modelling Techniques for Service Oriented Architecture

    Directory of Open Access Journals (Sweden)

    Mikit Kanakia

    2014-07-01

    Service-oriented architecture (SOA) is a software design and architecture pattern based on independent pieces of software providing functionality as services to other applications. The benefit of SOA in IT infrastructure is to allow parallel use and data exchange between programs which are services to the enterprise. The Unified Modelling Language (UML) is a standardized general-purpose modelling language in the field of software engineering. UML includes a set of graphic notation techniques to create visual models of object-oriented software systems. We want to make UML available for SOA as well. SoaML (Service oriented architecture Modelling Language) is an open source specification project from the Object Management Group (OMG), describing a UML profile and meta-model for the modelling and design of services within a service-oriented architecture. BPMN was also extended for SOA, but with a few pitfalls. There is a need for a modelling framework dedicated to SOA. Michael Bell authored such a framework, called the Service Oriented Modelling Framework (SOMF), which is dedicated to SOA.

  14. System identification and model reduction using modulating function techniques

    Science.gov (United States)

    Shen, Yan

    1993-01-01

    Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are initiated for continuous-time system identification using Fourier type modulating function techniques. Two stochastic signal models are examined using the mean square properties of the stochastic calculus: an equation error signal model with white noise residuals, and a more realistic white measurement noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed which links the real and imaginary parts of the modulated quantities. The superior performance of the above algorithms is demonstrated by comparing them with the LS/MFT and the popular prediction error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits its unique flexibility and versatility. Armed with this model reduction, the AWLS/MFT algorithm is extended to MIMO transfer function system identification problems. The impact of the discrepancy in bandwidths and gains among subsystems is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its strong noise-rejecting properties. Utilizing the flight data, comparisons among different MFT algorithms are tabulated, and the AWLS is found to be strongly favored in almost all facets.

  15. Use of surgical techniques in the rat pancreas transplantation model

    Institute of Scientific and Technical Information of China (English)

    Yi Ma; Zhi-Yong Guo

    2008-01-01

    BACKGROUND: Pancreas transplantation is currently considered to be the most reliable and effective treatment for insulin-dependent diabetes mellitus (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years. We investigated the surgical techniques of pancreas transplantation in rats by analysing the difference between cervical segmental pancreas transplantation and abdominal pancreaticoduodenal transplantation. METHODS: Two hundred and forty male adult Wistar rats weighing 200-300 g were used, 120 as donors and 120 as recipients. Sixty cervical segmental pancreas transplants and 60 abdominal pancreaticoduodenal transplants were carried out, and vessel anastomoses were made with microsurgical techniques. RESULTS: The time of donor pancreas harvesting in the cervical and abdominal groups was 31±6 and 37.6±3.8 min, respectively, and the lengths of the recipient operations were 49.2±5.6 and 60.6±7.8 min. The time for the donor operation was not significantly different (P>0.05), but the recipient operation time in the abdominal group was longer than that in the cervical group (P<0.05). CONCLUSIONS: Both pancreas transplantation methods are stable models for immunological and physiological studies in pancreas transplantation. Since each has its own advantages and disadvantages, the designer can choose the appropriate method according to the requirements of the study.

  16. Crop Yield Forecasted Model Based on Time Series Techniques

    Institute of Scientific and Technical Information of China (English)

    Li Hong-ying; Hou Yan-lin; Zhou Yong-juan; Zhao Hui-ming

    2012-01-01

    Traditional studies on potential yield mainly referred to attainable yield: the maximum yield which could be reached by a crop in a given environment. A new concept of crop yield under average climate conditions, which is driven by the advancement of science and technology, was defined in this paper. Based on this concept, time series techniques relying on past yield data were employed to set up a forecasting model. The model was tested using average grain yields of Liaoning Province in China from 1949 to 2005. The testing combined dynamic n-choosing and micro tendency rectification, and the average forecasting error was 1.24%. Where a turning point occurred in the trend line of yield change, an inflexion model was used to handle the yield turning point.
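
    The paper's exact procedure is not given, so the sketch below shows only the general flavor: a linear trend fitted over a sliding window of past years is extrapolated one year ahead, with the window length chosen to minimize recent one-step errors as a crude stand-in for the paper's dynamic n-choosing; the synthetic yield series is invented.

    ```python
    # Sliding-window linear-trend yield forecast with data-driven window size.
    import numpy as np

    years = np.arange(1949, 2006)
    yields = (1.5 + 0.06 * (years - 1949)
              + np.random.default_rng(5).normal(0, 0.2, len(years)))

    def forecast_next(series, n):
        t = np.arange(n)
        slope, intercept = np.polyfit(t, series[-n:], 1)
        return intercept + slope * n      # extrapolate one step ahead

    # Pick the window length that would have minimized recent one-step errors
    candidates = range(5, 21)
    errs = {n: np.mean([abs(forecast_next(yields[:i], n) - yields[i])
                        for i in range(len(yields) - 5, len(yields))])
            for n in candidates}
    best_n = min(errs, key=errs.get)
    print(best_n, forecast_next(yields, best_n))
    ```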

  17. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study on active cooperation between primary users and secondary users, i.e., (CCRN), followed by the discussions on research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, they model the CCRN based on orthogonal modulation and orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.

  18. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.

  19. Theoretical modeling techniques and their impact on tumor immunology.

    Science.gov (United States)

    Woelke, Anna Lena; Murgueitio, Manuela S; Preissner, Robert

    2010-01-01

    Currently, cancer is one of the leading causes of death in industrial nations. While conventional cancer treatment usually results in the patient suffering from severe side effects, immunotherapy is a promising alternative. Nevertheless, some questions remain unanswered with regard to using immunotherapy to treat cancer hindering it from being widely established. To help rectify this deficit in knowledge, experimental data, accumulated from a huge number of different studies, can be integrated into theoretical models of the tumor-immune system interaction. Many complex mechanisms in immunology and oncology cannot be measured in experiments, but can be analyzed by mathematical simulations. Using theoretical modeling techniques, general principles of tumor-immune system interactions can be explored and clinical treatment schedules optimized to lower both tumor burden and side effects. In this paper, we aim to explain the main mathematical and computational modeling techniques used in tumor immunology to experimental researchers and clinicians. In addition, we review relevant published work and provide an overview of its impact to the field.
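
    A minimal example of the kind of model meant here is a two-equation effector-cell/tumor-cell ODE system in the style of Kuznetsov's classic model, sketched below with invented parameter values; it is illustrative of the modeling approach, not a model from the review.

    ```python
    # Toy tumor-immune interaction ODEs integrated with SciPy.
    from scipy.integrate import solve_ivp

    def tumor_immune(t, y, s=0.1, p=1.0, g=10.0, m=0.02,
                     d=0.1, r=0.5, K=100.0, k=1.0):
        E, T = y   # effector (immune) cells and tumor cells
        dE = s + p * E * T / (g + T) - m * E * T - d * E  # recruitment - losses
        dT = r * T * (1 - T / K) - k * E * T              # logistic growth - kill
        return [dE, dT]

    sol = solve_ivp(tumor_immune, (0.0, 200.0), y0=[1.0, 1.0])
    print(sol.y[:, -1])   # final effector and tumor populations
    ```

    Varying a parameter such as the kill rate k in a loop is the simulation analogue of exploring treatment schedules, the use case the abstract describes.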

  20. A formal model for integrity protection based on DTE technique

    Institute of Scientific and Technical Information of China (English)

    JI Qingguang; QING Sihan; HE Yeping

    2006-01-01

    In order to provide integrity protection for a secure operating system satisfying the structured protection class requirements, a DTE-technique-based integrity protection formal model is proposed after the implications and structures of the integrity policy have been analyzed in detail. This model consists of basic rules for configuring DTE and a state transition model, which respectively instruct how domains and types are set and how security invariants obtained from the initial configuration are maintained during system transitions. In this model, ten invariants are introduced; in particular, some new invariants dealing with information flow are proposed, and their relations with corresponding invariants described in the literature are also discussed. Thirteen transition rules with well-formed atomicity are presented in a well-operational manner. The basic security theorems corresponding to these invariants and transition rules are proved. The rationale for proposing the invariants is further annotated by analyzing the differences between this model and those described in the literature. Finally, future work is outlined; in particular, it is pointed out that it is possible to use this model to analyze SE-Linux security.

  1. Spoken Document Retrieval Leveraging Unsupervised and Supervised Topic Modeling Techniques

    Science.gov (United States)

    Chen, Kuan-Yu; Wang, Hsin-Min; Chen, Berlin

    This paper describes the application of two attractive categories of topic modeling techniques to the problem of spoken document retrieval (SDR), viz. document topic model (DTM) and word topic model (WTM). Apart from using the conventional unsupervised training strategy, we explore a supervised training strategy for estimating these topic models, imagining a scenario that user query logs along with click-through information of relevant documents can be utilized to build an SDR system. This attempt has the potential to associate relevant documents with queries even if they do not share any of the query words, thereby improving on retrieval quality over the baseline system. Likewise, we also study a novel use of pseudo-supervised training to associate relevant documents with queries through a pseudo-feedback procedure. Moreover, in order to lessen SDR performance degradation caused by imperfect speech recognition, we investigate leveraging different levels of index features for topic modeling, including words, syllable-level units, and their combination. We provide a series of experiments conducted on the TDT (TDT-2 and TDT-3) Chinese SDR collections. The empirical results show that the methods deduced from our proposed modeling framework are very effective when compared with a few existing retrieval approaches.
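
    As a simple stand-in for the retrieval pipeline (not the paper's DTM/WTM models), the sketch below ranks a toy document collection against a query by cosine similarity in a latent semantic analysis topic space; the documents, query, and number of topics are invented.

    ```python
    # LSA-based retrieval: rank documents by similarity in topic space.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    docs = ["stock market falls on rate fears",
            "team wins championship final match",
            "central bank raises interest rate",
            "player injured before final game"]
    query = ["interest rates and markets"]

    vec = TfidfVectorizer()
    X = vec.fit_transform(docs)
    svd = TruncatedSVD(n_components=2, random_state=0)
    topic_docs = svd.fit_transform(X)                   # documents in topic space
    topic_query = svd.transform(vec.transform(query))   # query in the same space

    scores = cosine_similarity(topic_query, topic_docs)[0]
    print(sorted(zip(scores, docs), reverse=True)[0])   # best-matching document
    ```

    A topic-space match of this kind can retrieve a relevant document even when it shares no literal words with the query, which is the property the paper exploits.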

  2. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed
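
    For contrast with the coupled model discussed above, a conduction-only calculation is easy to sketch: the code below advances an explicit 1-D transient conduction model through a 2-inch PUR wall with assumed material properties and fixed boundary temperatures, precisely the class of model the study found inadequate for convective shippers.

    ```python
    # Explicit (FTCS) 1-D transient conduction through a PUR wall.
    import numpy as np

    k, rho, cp = 0.025, 35.0, 1500.0   # assumed PUR properties (W/mK, kg/m3, J/kgK)
    alpha = k / (rho * cp)              # thermal diffusivity
    L, nx = 0.0508, 51                  # 2-inch wall, grid points
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / alpha            # satisfies the FTCS stability limit (0.5)

    T = np.full(nx, 5.0)                # initial wall temperature, degC
    T_out, T_in = 35.0, 5.0             # hot ambient side, cold product side

    t, t_end = 0.0, 8 * 3600.0          # simulate 8 hours
    while t < t_end:
        T[0], T[-1] = T_out, T_in       # fixed-temperature boundaries
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        t += dt

    print(T[::10])  # temperature profile through the wall thickness
    ```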

  3. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and "smart" wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow
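    The standard starting point for the well-index calculation mentioned above is Peaceman's formula, which the project's methods extend; a minimal sketch (grid dimensions, permeabilities and wellbore radius are illustrative, in consistent SI units):

```python
import math

def peaceman_well_index(kx, ky, dx, dy, dz, rw, skin=0.0):
    """Peaceman well index for a vertical well in a Cartesian gridblock.
    WI relates well flow rate to (block pressure - bottomhole pressure)."""
    # Equivalent radius r0 (Peaceman's anisotropic form)
    r0 = 0.28 * math.sqrt(math.sqrt(ky / kx) * dx**2 + math.sqrt(kx / ky) * dy**2) \
         / ((ky / kx) ** 0.25 + (kx / ky) ** 0.25)
    kh = math.sqrt(kx * ky)                      # effective horizontal permeability
    return 2.0 * math.pi * kh * dz / (math.log(r0 / rw) + skin)

# Example: ~100 mD isotropic block, 50 m x 50 m x 10 m, 0.1 m wellbore radius
wi = peaceman_well_index(kx=1e-13, ky=1e-13, dx=50.0, dy=50.0, dz=10.0, rw=0.1)
print(f"well index: {wi:.3e} m^3")   # rate = WI / mu * (p_block - p_wf)
```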

  4. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that operate at large scales and exhibit dynamic, complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Miniaturizing world phenomena within the framework of a model in order to simulate them is therefore a reasonable, scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling indicates the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, such models can be built easily and applied to a wider range of applications than traditional simulations. A key challenge for ABMS, however, is the difficulty of validation and verification: because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Finding appropriate validation techniques for ABM is therefore necessary. In this paper, after reviewing the principles, concepts and applications of ABM, the validation techniques and challenges of ABM validation are discussed.
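    One simple validation tactic for an ABM, among those the paper surveys, is checking an emergent aggregate statistic against a known analytic result; a minimal sketch with invented random-walker agents (the model, step rule and tolerance are all assumptions for illustration):

```python
import random, statistics

def run_abm(n_agents=500, steps=100, seed=1):
    """Minimal agent-based model: independent random walkers on a 2-D lattice."""
    rng = random.Random(seed)
    agents = [(0, 0)] * n_agents
    for _ in range(steps):
        agents = [(x + rng.choice((-1, 0, 1)), y + rng.choice((-1, 0, 1)))
                  for x, y in agents]
    return agents

# Validate an emergent aggregate statistic against its analytic expectation:
# mean squared displacement of this unbiased walk is E[x^2 + y^2] = (4/3) * steps.
steps = 100
agents = run_abm(steps=steps)
msd = statistics.fmean(x * x + y * y for x, y in agents)
expected = (4.0 / 3.0) * steps
print(f"simulated MSD = {msd:.1f}, analytic = {expected:.1f}")
assert abs(msd - expected) / expected < 0.2, "model fails aggregate validation"
```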

  5. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    Full Text Available One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that operate at large scales and exhibit dynamic, complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Miniaturizing world phenomena within the framework of a model in order to simulate them is therefore a reasonable, scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling indicates the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, such models can be built easily and applied to a wider range of applications than traditional simulations. A key challenge for ABMS, however, is the difficulty of validation and verification: because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Finding appropriate validation techniques for ABM is therefore necessary. In this paper, after reviewing the principles, concepts and applications of ABM, the validation techniques and challenges of ABM validation are discussed.

  6. Concerning the Feasibility of Example-driven Modelling Techniques

    CERN Document Server

    Thorne, Simon R; Lawson, Z

    2008-01-01

    We report on a series of experiments concerning the feasibility of example-driven modelling. The main aim was to establish experimentally, within an academic environment, the relationship between error and task complexity using a) traditional spreadsheet modelling and b) example-driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several different variables. The experimental results compare the performance indicators for the treatment and control groups in terms of accuracy, experience, training, confidence measures, perceived difficulty and perceived completeness. The various results are tested for statistical significance using the chi-squared test, Fisher's exact test, Cochran's Q test and McNemar's test.
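    The significance tests named above are available off the shelf; a minimal sketch using SciPy (the `binomtest` helper requires SciPy >= 1.7, and all counts below are invented for illustration):

```python
from scipy.stats import chi2_contingency, fisher_exact, binomtest

# Accuracy counts for treatment (example-driven) vs. control (traditional) groups
table = [[18, 7],   # treatment: correct, incorrect  (illustrative counts)
         [11, 14]]  # control:   correct, incorrect

chi2, p_chi2, dof, _ = chi2_contingency(table)
odds, p_fisher = fisher_exact(table)          # preferred for small cell counts
print(f"chi-squared p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")

# McNemar's test for paired yes/no outcomes (same subject on two conditions):
# with b and c the discordant-pair counts, the exact test is a binomial test.
b, c = 9, 3
p_mcnemar = binomtest(b, b + c, 0.5).pvalue
print(f"McNemar exact p = {p_mcnemar:.3f}")
```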

  7. Advanced computer modeling techniques expand belt conveyor technology

    Energy Technology Data Exchange (ETDEWEB)

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  8. EXPERIENCE WITH SYNCHRONOUS GENERATOR MODEL USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    N.RATHIKA

    2014-07-01

    Full Text Available This paper addresses the modeling of a polyphase synchronous generator and the minimization of power losses using the particle swarm optimization (PSO) technique with a constriction factor. A polyphase synchronous generator distributes the total power circulating in the system across all phases. Another advantage of a polyphase system is that a fault in one winding does not force a system shutdown. Process optimization is the discipline of adjusting a process so as to optimize a stipulated set of parameters without violating constraints. Accurate parameter values can be extracted using PSO and the problem can be reformulated accordingly. Modeling and simulation of the machine are carried out. MATLAB/Simulink has been used to implement and validate the results.
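    A minimal sketch of PSO with the Clerc-Kennedy constriction factor follows; the swarm sizes and the quadratic stand-in objective are assumptions, not the paper's machine-loss model:

```python
import numpy as np

def pso(cost, dim, n=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Particle swarm optimization with Clerc-Kennedy constriction factor."""
    c1 = c2 = 2.05
    phi = c1 + c2                                             # phi > 4 required
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))  # ~0.7298
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pcost = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[np.argmin(pcost)]
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
        x = x + v
        c = np.apply_along_axis(cost, 1, x)
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[np.argmin(pcost)]
    return g, pcost.min()

# Toy stand-in for the paper's loss-minimization objective
best, val = pso(lambda p: float(np.sum(p**2)), dim=4)
print("best parameters:", best, "cost:", val)
```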

  9. Updates on measurements and modeling techniques for expendable countermeasures

    Science.gov (United States)

    Gignilliat, Robert; Tepfer, Kathleen; Wilson, Rebekah F.; Taczak, Thomas M.

    2016-10-01

    The potential threat of recently-advertised anti-ship missiles has instigated research at the United States (US) Naval Research Laboratory (NRL) into the improvement of measurement techniques for visual band countermeasures. The goal of measurements is the collection of radiometric imagery for use in the building and validation of digital models of expendable countermeasures. This paper will present an overview of measurement requirements unique to the visual band and differences between visual band and infrared (IR) band measurements. A review of the metrics used to characterize signatures in the visible band will be presented and contrasted to those commonly used in IR band measurements. For example, the visual band measurements require higher fidelity characterization of the background, including improved high-transmittance measurements and better characterization of solar conditions to correlate results more closely with changes in the environment. The range of relevant engagement angles has also been expanded to include higher altitude measurements of targets and countermeasures. In addition to the discussion of measurement techniques, a top-level qualitative summary of modeling approaches will be presented. No quantitative results or data will be presented.

  10. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  11. NVC Based Model for Selecting Effective Requirement Elicitation Technique

    Directory of Open Access Journals (Sweden)

    Md. Rizwan Beg

    2012-10-01

    Full Text Available The requirements engineering process starts with the gathering of requirements, i.e., requirements elicitation. Requirements elicitation (RE) is the base building block for a software project and has a very high impact on the subsequent design and build phases as well. Failure to accurately capture system requirements is a major factor in the failure of most software projects. Due to the criticality and impact of this phase, it is very important to perform requirements elicitation in no less than a perfect manner. One of the most difficult jobs for the elicitor is to select an appropriate technique for eliciting the requirements. Interviewing and interacting with stakeholders during elicitation is a communication-intensive activity involving verbal and non-verbal communication (NVC). The elicitor should give emphasis to non-verbal communication along with verbal communication so that requirements are recorded more efficiently and effectively. In this paper we propose a model in which stakeholders are classified by observing non-verbal communication, and this classification is used as a basis for elicitation technique selection. We also propose an efficient plan for requirements elicitation which intends to overcome the constraints faced by the elicitor.

  12. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    Directory of Open Access Journals (Sweden)

    Frederico R. Romero

    2007-02-01

    Full Text Available OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbable running sutures. The complete procedure and the stages of gastric dissection, gastric closure, and gastrovesical anastomosis were separately timed for each laparoscopic gastrocystoplasty. The end result of the gastric suturing and the bladder augmentation were evaluated by fluoroscopy or endoscopy. RESULTS: Mean total operative time was 5.2 (range 3.5-8) hours: 84.5 (range 62-110) minutes for the gastric dissection, 56 (range 28-80) minutes for the gastric suturing, and 170.6 (range 70-200) minutes for the gastrovesical anastomosis. A cystogram showed a small leakage from the vesical anastomosis in the first two cases. No extravasation from the gastric closure was observed in the postoperative gastrogram. CONCLUSIONS: Total laparoscopic gastrocystoplasty is a feasible but complex procedure that currently has limited clinical application. With the increasing use of laparoscopy in reconstructive surgery of the lower urinary tract, gastrocystoplasty may become an attractive option because of its potential advantages over techniques using small and large bowel segments.

  13. CIVA workstation for NDE: mixing of NDE techniques and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Benoist, P.; Besnard, R. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Procedes et Systemes Avances; Bayon, G. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Reacteurs Experimentaux; Boutaine, J.L. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Applications et de la Metrologie des Rayonnements Ionisants

    1994-12-31

    In order to compare the capabilities of different NDE techniques, or to use complementary inspection methods, the same components are examined with different procedures. It is then very useful to have a single evaluation tool allowing direct comparison of the methods: CIVA is an open system for processing NDE data; it is adapted to a standard work station (UNIX, C, MOTIF) and can read different supports on which the digitized data are stored. It includes a large library of signal and image processing methods accessible and adapted to NDE data (filtering, deconvolution, 2D and 3D spatial correlations...). Different CIVA application examples are described: brazing inspection (neutronography, ultrasonic), tube inspection (eddy current, ultrasonic), aluminium welds examination (UT and radiography). Modelling and experimental results are compared. 16 fig., 7 ref.

  14. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies sell the right product to the right customer, at the right time, and for the right price. The challenge for any company is therefore to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control-system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a suitable dynamical-system analogy based on an active suspension, and a stability analysis is provided via the Lyapunov direct method.
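    A minimal receding-horizon sketch of the MPC idea follows; the inventory dynamics, the linear price-demand law and all constants are invented for illustration and are not the paper's model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy demand-management MPC: inventory x tracks a reference while price u
# modulates demand d(u) = d0 - k*u. All names and dynamics are illustrative.
d0, k, x_ref, horizon = 10.0, 2.0, 50.0, 8

def simulate(x0, prices):
    x, xs = x0, []
    for u in prices:
        x = x + 5.0 - (d0 - k * u)    # replenishment minus price-dependent demand
        xs.append(x)
    return np.array(xs)

def mpc_step(x0):
    """Solve the finite-horizon problem, apply only the first move (receding horizon)."""
    def cost(prices):
        xs = simulate(x0, prices)
        return np.sum((xs - x_ref) ** 2) + 0.1 * np.sum(prices ** 2)
    res = minimize(cost, x0=np.ones(horizon), bounds=[(0.0, 5.0)] * horizon)
    return res.x[0]

x = 40.0
for t in range(5):
    u = mpc_step(x)
    x = x + 5.0 - (d0 - k * u)
    print(f"t={t}: price={u:.2f}, inventory={x:.1f}")
```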

  15. Study of Semi-Span Model Testing Techniques

    Science.gov (United States)

    Gatlin, Gregory M.; McGhee, Robert J.

    1996-01-01

    An investigation has been conducted in the NASA Langley 14- by 22-Foot Subsonic Tunnel in order to further the development of semi-span testing capabilities. A twin engine, energy efficient transport (EET) model with a four-element wing in a takeoff configuration was used for this investigation. Initially a full span configuration was tested and force and moment data, wing and fuselage surface pressure data, and fuselage boundary layer measurements were obtained as a baseline data set. The semi-span configurations were then mounted on the wind tunnel floor, and the effects of fuselage standoff height and shape as well as the effects of the tunnel floor boundary layer height were investigated. The effectiveness of tangential blowing at the standoff/floor juncture as an active boundary-layer control technique was also studied. Results indicate that the semi-span configuration was more sensitive to variations in standoff height than to variations in floor boundary layer height. A standoff height equivalent to 30 percent of the fuselage radius resulted in better correlation with full span data than no standoff or the larger standoff configurations investigated. Undercut standoff leading edges or the use of tangential blowing in the standoff/ floor juncture improved correlation of semi-span data with full span data in the region of maximum lift coefficient.

  16. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification
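    To fix ideas, the sketch below runs a random-walk Metropolis sampler to calibrate a two-parameter linear surrogate from noisy synthetic data; it is a generic Bayesian-calibration illustration, not the dissertation's HIV or heat model, and all data and tuning constants are assumptions:

```python
import numpy as np

# Toy Bayesian calibration of a linear response y = a*x + b via random-walk
# Metropolis; the "true" parameters are recovered from noisy measurements.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y_obs = 2.0 * x + 0.5 + rng.normal(0.0, 0.1, x.size)   # synthetic measurements
sigma = 0.1

def log_post(theta):
    a, b = theta
    resid = y_obs - (a * x + b)
    return -0.5 * np.sum((resid / sigma) ** 2)          # flat prior assumed

theta = np.array([0.0, 0.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05, 2)             # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:             # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[5000:])                          # discard burn-in
print("posterior means:", chain.mean(axis=0))           # ~ [2.0, 0.5]
```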

  17. Semantic techniques for enabling knowledge reuse in conceptual modelling

    NARCIS (Netherlands)

    Gracia, J.; Liem, J.; Lozano, E.; Corcho, O.; Trna, M.; Gómez-Pérez, A.; Bredeweg, B.

    2010-01-01

    Conceptual modelling tools allow users to construct formal representations of their conceptualisations. These models are typically developed in isolation, unrelated to other user models, thus losing the opportunity of incorporating knowledge from other existing models or ontologies that might enrich

  18. Autonomous selection of PDE inpainting techniques vs. exemplar inpainting techniques for void fill of high resolution digital surface models

    Science.gov (United States)

    Rahmes, Mark; Yates, J. Harlan; Allen, Josef DeVaughn; Kelley, Patrick

    2007-04-01

    High resolution Digital Surface Models (DSMs) may contain voids (missing data) due to the data collection process used to obtain the DSM, inclement weather conditions, low returns, system errors/malfunctions for various collection platforms, and other factors. DSM voids are also created during bare earth processing where culture and vegetation features have been extracted. The Harris LiteSite™ Toolkit handles these void regions in DSMs via two novel techniques. We use both partial differential equation (PDE) and exemplar-based inpainting techniques to accurately fill voids. The PDE technique has its origin in fluid dynamics and heat equations (a particular subset of partial differential equations). The exemplar technique has its origin in texture analysis and image processing. Each technique is optimally suited for different input conditions. The PDE technique works better where the area to be void filled does not have disproportionately high frequency data in the neighborhood of the boundary of the void. Conversely, the exemplar-based technique is better suited for high frequency areas. Both are autonomous with respect to detecting and repairing void regions. We describe a cohesive autonomous solution that dynamically selects the best technique as each void is being repaired.
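    A minimal sketch of the PDE flavor of void fill, solving Laplace's equation (harmonic inpainting) by Jacobi iteration on a synthetic terrain; this is a generic illustration, not the Harris implementation, and the grid and void geometry are invented:

```python
import numpy as np

def pde_void_fill(dsm, void, iters=2000):
    """Fill DSM voids by solving Laplace's equation (harmonic inpainting)
    with Jacobi iterations; boundary values come from the valid pixels."""
    filled = dsm.copy()
    filled[void] = np.nanmean(dsm[~void])       # crude initial guess
    for _ in range(iters):
        avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                      np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled[void] = avg[void]                # update only the void pixels
    return filled

# Smooth synthetic terrain with a rectangular interior void
y, x = np.mgrid[0:64, 0:64]
dsm = 100.0 + 0.2 * x + 0.1 * y
void = np.zeros_like(dsm, dtype=bool)
void[20:30, 25:40] = True
restored = pde_void_fill(np.where(void, np.nan, dsm), void)
print("max fill error:", np.abs(restored[void] - dsm[void]).max())
```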

  19. DEVELOPMENT OF A MODEL FOR INTERNALIZING CHARACTER VALUES IN HISTORY LEARNING THROUGH THE VALUE CLARIFICATION TECHNIQUE MODEL

    Directory of Open Access Journals (Sweden)

    Nunuk Suryani

    2013-07-01

    Full Text Available This research produces a model for internalizing character values in history learning through the Value Clarification Technique (VCT), as a revitalization of the role of social studies in the formation of national character. In general, the research consists of three stages: (1) a pre-survey identifying the current conditions of character-value learning in history instruction; (2) development of a model based on the pre-survey findings, using the Dick and Carey model; and (3) validation of the model. Model development was carried out through limited and extensive trials. The findings lead to the conclusion that the VCT model is effective for internalizing character values in history learning and for increasing the role of history learning in the formation of student character. It can be concluded that VCT models are effective for improving the quality of the processes and products of learning character values in junior high school social studies, especially in Surakarta. Keywords: internalization, character values, VCT model, history learning, social studies learning

  20. Establishment of C6 brain glioma models through stereotactic technique for laser interstitial thermotherapy research

    Directory of Open Access Journals (Sweden)

    Jian Shi

    2015-01-01

    Conclusion: The rat C6 brain glioma model established in this study is well suited to the study of LITT for glioma. The infrared thermography technique measured temperature conveniently and effectively; it is noninvasive, and the data obtained can be further processed using the software employed in LITT research. For deep-tissue temperature measurement, combining thermocouples with infrared thermography would give better results.

  1. Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    Model order reduction techniques generally approximate the original system well over the whole frequency range. However, certain applications (like controller reduction) require frequency-weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency-weighted model reduction techniques include the lack of stability of reduced-order models (for the two-sided weighting case) and of frequency response error bounds. A new frequency-weighted technique for balanced model reduction of discrete-time systems is proposed. The proposed technique guarantees stable reduced-order models even when two-sided weightings are present. An efficient technique for computing frequency-weighted Gramians is also proposed. Results are compared with other existing frequency-weighted model reduction techniques for discrete-time systems. Moreover, the proposed technique yields frequency response error bounds.
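    For orientation, the sketch below implements plain (unweighted) balanced truncation for a discrete-time system, the baseline that frequency-weighted methods such as this one extend; the random test system is invented, and SciPy's `solve_discrete_lyapunov` is assumed available:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Unweighted balanced truncation of a stable discrete-time system
    x+ = Ax + Bu, y = Cx, keeping r states (square-root algorithm)."""
    P = solve_discrete_lyapunov(A, B @ B.T)       # controllability Gramian
    Q = solve_discrete_lyapunov(A.T, C.T @ C)     # observability Gramian
    Lp, Lq = cholesky(P, lower=True), cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                     # Hankel singular values s
    S = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ S                         # balancing projections
    L = S @ U[:, :r].T @ Lq.T
    return L @ A @ T, L @ B, C @ T, s

# Random stable 6th-order system reduced to 2 states
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)); A *= 0.9 / max(abs(np.linalg.eigvals(A)))
B, C = rng.standard_normal((6, 1)), rng.standard_normal((1, 6))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print("Hankel singular values:", np.round(hsv, 4))
```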

  2. A Titration Technique for Demonstrating a Magma Replenishment Model.

    Science.gov (United States)

    Hodder, A. P. W.

    1983-01-01

    Conductometric titrations can be used to simulate subduction-setting volcanism. Suggestions are made as to the use of this technique in teaching volcanic mechanisms and geochemical indications of tectonic settings. (JN)

  3. New Developments and Techniques in Structural Equation Modeling

    CERN Document Server

    Marcoulides, George A

    2001-01-01

    Featuring contributions from some of the leading researchers in the field of SEM, most chapters are written by the author(s) who originally proposed the technique and/or contributed substantially to its development. Content highlights include latent varia

  4. Molecular dynamics techniques for modeling G protein-coupled receptors.

    Science.gov (United States)

    McRobb, Fiona M; Negri, Ana; Beuming, Thijs; Sherman, Woody

    2016-10-01

    G protein-coupled receptors (GPCRs) constitute a major class of drug targets and modulating their signaling can produce a wide range of pharmacological outcomes. With the growing number of high-resolution GPCR crystal structures, we have the unprecedented opportunity to leverage structure-based drug design techniques. Here, we discuss a number of advanced molecular dynamics (MD) techniques that have been applied to GPCRs, including long time scale simulations, enhanced sampling techniques, water network analyses, and free energy approaches to determine relative binding free energies. On the basis of the many success stories, including those highlighted here, we expect that MD techniques will be increasingly applied to aid in structure-based drug design and lead optimization for GPCRs.

  5. Examining Interior Grid Nudging Techniques Using Two-Way Nesting in the WRF Model for Regional Climate Modeling

    Science.gov (United States)

    This study evaluates interior nudging techniques using the Weather Research and Forecasting (WRF) model for regional climate modeling over the conterminous United States (CONUS) using a two-way nested configuration. NCEP–Department of Energy Atmospheric Model Intercomparison Pro...

  6. Simple parameter estimation for complex models — Testing evolutionary techniques on 3-dimensional biogeochemical ocean models

    Science.gov (United States)

    Mattern, Jann Paul; Edwards, Christopher A.

    2017-01-01

    Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
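    The sketch below is a tiny evolutionary search kept within a budget of roughly 100 cost evaluations, echoing the setting of expensive 3-D simulations; the stand-in quadratic cost and all population settings are assumptions, not the paper's biogeochemical cost function:

```python
import numpy as np

def evolve(cost, lo, hi, pop=20, gens=5, keep=5, seed=0):
    """Tiny (mu, lambda)-style evolutionary search with a shrinking
    mutation width; each cost call stands in for one model simulation."""
    rng = np.random.default_rng(seed)
    dim = lo.size
    x = rng.uniform(lo, hi, (pop, dim))
    for g in range(gens):
        f = np.array([cost(p) for p in x])            # one "simulation" per member
        elite = x[np.argsort(f)[:keep]]               # survivors
        sigma = 0.5 * (hi - lo) * (1.0 - g / gens)    # shrinking mutation width
        children = elite[rng.integers(0, keep, pop)] \
                   + rng.normal(0.0, 1.0, (pop, dim)) * sigma
        x = np.clip(children, lo, hi)
    f = np.array([cost(p) for p in x])
    return x[np.argmin(f)], f.min()

# Stand-in cost: misfit of 3 "biogeochemical parameters" to their optimum
target = np.array([0.3, 1.2, 0.7])
cost = lambda p: float(np.sum((p - target) ** 2))
best, fbest = evolve(cost, lo=np.zeros(3), hi=np.full(3, 2.0))
print("estimated parameters:", np.round(best, 3), "cost:", fbest)
```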

  7. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance...... the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger...... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region....

  8. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  9. OFF-LINE HANDWRITING RECOGNITION USING VARIOUS HYBRID MODELING TECHNIQUES AND CHARACTER N-GRAMS

    NARCIS (Netherlands)

    Brakensiek, A.; Rottland, J.; Kosmala, A.; Rigoll, G.

    2004-01-01

    In this paper a system for on-line cursive handwriting recognition is described. The system is based on Hidden Markov Models (HMMs) using discrete and hybrid modeling techniques. Here, we focus on two aspects of the recognition system. First, we present different hybrid modeling techniques, whereas

  10. Prescribed wind shear modelling with the actuator line technique

    DEFF Research Database (Denmark)

    Mikkelsen, Robert Flemming; Sørensen, Jens Nørkær; Troldborg, Niels

    2007-01-01

    A method for prescribing arbitrary steady atmospheric wind shear profiles combined with CFD is presented. The method is furthermore combined with the actuator line technique governing the aerodynamic loads on a wind turbine. Computation are carried out on a wind turbine exposed to a representative...

  11. Application of experimental design techniques to structural simulation meta-model building using neural network

    Institute of Scientific and Technical Information of China (English)

    费庆国; 张令弥

    2004-01-01

    Neural networks are being used to construct meta-models in the numerical simulation of structures. In addition to network structure and training algorithm, the training samples also greatly affect the accuracy of neural network models. In this paper, the main existing sampling techniques are evaluated, including techniques based on experimental design theory, random selection, and rotating sampling. First, the advantages and disadvantages of each technique are reviewed. Then, seven techniques are used to generate samples for training radial neural network models for two benchmarks: an antenna model and an aircraft model. Results show that uniform design, which accounts for both the number of samples and the mean square error of the network models, is the best sampling technique for neural network based meta-model building.
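    To illustrate why experimental-design sampling can beat random selection, the sketch below compares a random plan with a space-filling Latin hypercube design (a relative of the uniform design favored above) using the discrepancy measure in `scipy.stats.qmc` (SciPy >= 1.7 assumed; parameter ranges are invented):

```python
import numpy as np
from scipy.stats import qmc

# Compare two plans for generating neural-network training samples:
# plain random selection vs. a space-filling Latin hypercube design.
dim, n = 4, 64
rng = np.random.default_rng(0)

random_plan = rng.random((n, dim))                       # random selection
lhs_plan = qmc.LatinHypercube(d=dim, seed=0).random(n)   # experimental design

# Discrepancy: lower means the plan covers the input space more uniformly,
# which typically improves meta-model accuracy for a fixed sample budget.
print("random discrepancy:", qmc.discrepancy(random_plan))
print("LHS discrepancy:   ", qmc.discrepancy(lhs_plan))

# Scale the design to physical parameter ranges (illustrative bounds)
lower, upper = [0.1, 1.0, 50.0, 0.0], [0.5, 5.0, 200.0, 1.0]
samples = qmc.scale(lhs_plan, lower, upper)
```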

  12. Wave propagation in fluids models and numerical techniques

    CERN Document Server

    Guinot, Vincent

    2012-01-01

    This second edition with four additional chapters presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. The application domains vary including contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite

  13. A vortex model for Darrieus turbine using finite element techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ponta, Fernando L. [Universidad de Buenos Aires, Dept. de Electrotecnia, Grupo ISEP, Buenos Aires (Argentina); Jacovkis, Pablo M. [Universidad de Buenos Aires, Dept. de Computacion and Inst. de Calculo, Buenos Aires (Argentina)

    2001-09-01

    Since 1970 several aerodynamic prediction models have been formulated for the Darrieus turbine. We can identify two families of models: stream-tube and vortex. The former needs much less computation time but the latter is more accurate. The purpose of this paper is to show a new option for modelling the aerodynamic behaviour of Darrieus turbines. The idea is to combine a classic free vortex model with a finite element analysis of the flow in the surroundings of the blades. This avoids some of the remaining deficiencies in classic vortex models. The agreement between analysis and experiment when predicting instantaneous blade forces and near wake flow behind the rotor is better than the one obtained in previous models. (Author)

  14. TESTING DIFFERENT SURVEY TECHNIQUES TO MODEL ARCHITECTONIC NARROW SPACES

    Directory of Open Access Journals (Sweden)

    A. Mandelli

    2017-08-01

    Full Text Available In the architectural survey field, a vast number of automated techniques have spread. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and achievable level of automation in real-case scenarios, especially in the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how the techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectural restitution scale, of common and less common survey techniques applied to the complex scenario of dark, intricate and narrow spaces, such as service areas, corridors and stairs, inside Milan's cathedral. Tests have shown that the quality of the results is strongly affected by side issues, such as the impossibility of following the theoretically ideal methodology when surveying such spaces. The tested instruments are the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups: a full-frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  15. Testing Different Survey Techniques to Model Architectonic Narrow Spaces

    Science.gov (United States)

    Mandelli, A.; Fassi, F.; Perfetti, L.; Polari, C.

    2017-08-01

    In the architectural survey field, a vast number of automated techniques have spread. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and achievable level of automation in real-case scenarios, especially in the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how the techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectural restitution scale, of common and less common survey techniques applied to the complex scenario of dark, intricate and narrow spaces, such as service areas, corridors and stairs, inside Milan's cathedral. Tests have shown that the quality of the results is strongly affected by side issues, such as the impossibility of following the theoretically ideal methodology when surveying such spaces. The tested instruments are the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups: a full-frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  16. Experimental technique of calibration of symmetrical air pollution models

    Indian Academy of Sciences (India)

    P Kumar

    2005-10-01

    Based on the inherent symmetry property of air pollution models, a Symmetrical Air Pollution Model Index (SAPMI) has been developed to calibrate the accuracy of predictions made by such models where the initial quantity of release at the source is not known. For exact predictions the value of SAPMI should be equal to 1. If the predicted values are overestimates then SAPMI > 1, and if they are underestimates then SAPMI < 1. A specific design for the layout of receptors has been suggested as a requirement for the calibration experiments. SAPMI is applicable to all variations of symmetrical air pollution dispersion models.

  17. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  18. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models; Reduction de modeles thermiques de batiments: amelioration des techniques par modelisation des sollicitations meteorologiques

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings for the evaluation of the energy operating costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. The thesis comprises 7 chapters dealing with: (1) thermal phenomena inside buildings and the CLIM2000 calculation code; (2) the ETNA and GENEC experimental cells and their modeling; (3) the model reduction techniques tested (Marshall's truncation, Michailesco's aggregation method and Moore's truncation) with their algorithms and their encoding in the MATRED software; (4) the application of the model reduction methods to the GENEC and ETNA cells and to a medium-size dual-zone building; (5) the modeling of the meteorological influences classically applied to buildings (external temperature and solar flux); (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower-inertia building, and compares the new methods to the classical ones. (J.S.) 69 refs.

  19. Study on modeling of vehicle dynamic stability and control technique

    Institute of Scientific and Technical Information of China (English)

    GAO Yun-ting; LI Pan-feng

    2012-01-01

    In order to enhance vehicle driving stability and safety, which has been a key research question in vehicle engineering, a new control method was investigated. After analyzing tire motion characteristics and vehicle stresses, a tire model based on the extended Pacejka magic formula, combining longitudinal and lateral motion, was developed, along with a nonlinear vehicle dynamic stability model with seven degrees of freedom. A new model reference adaptive control scheme was designed that uses the body slip angle and yaw rate as the output and feedback variables for adjusting the body torque to control vehicle stability. A simulation model was also built in Matlab/Simulink to evaluate this control scheme. It is made up of many mathematical subsystem models, mainly including the tire model module, the yaw moment calculation module, the center-of-mass parameter calculation module, the tire parameter calculation module, and so forth. The severe lane change simulation results show that this vehicle model and the model reference adaptive control method perform excellently.
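    For reference, the Pacejka magic formula at the heart of the tire model has the closed form F = D sin(C arctan(Bs - E(Bs - arctan(Bs)))); a minimal sketch with illustrative (not the paper's) coefficients:

```python
import numpy as np

def pacejka(slip, B=10.0, C=1.9, D=1.0, E=0.97):
    """Pacejka 'magic formula' for normalized tire force vs. slip.
    F = D*sin(C*arctan(B*s - E*(B*s - arctan(B*s)))). Coefficients illustrative."""
    Bs = B * slip
    return D * np.sin(C * np.arctan(Bs - E * (Bs - np.arctan(Bs))))

# Normalized lateral force over a sweep of slip angles (radians)
slip = np.linspace(-0.3, 0.3, 7)
for s, f in zip(slip, pacejka(slip)):
    print(f"slip {s:+.2f} -> normalized force {f:+.3f}")
```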

  20. Variational Data Assimilation Technique in Mathematical Modeling of Ocean Dynamics

    Science.gov (United States)

    Agoshkov, V. I.; Zalesny, V. B.

    2012-03-01

    Problems of the variational data assimilation for the primitive equation ocean model constructed at the Institute of Numerical Mathematics, Russian Academy of Sciences are considered. The model has a flexible computational structure and consists of two parts: a forward prognostic model, and its adjoint analog. The numerical algorithm for the forward and adjoint models is constructed based on the method of multicomponent splitting. The method includes splitting with respect to physical processes and space coordinates. Numerical experiments are performed with the use of the Indian Ocean and the World Ocean as examples. These numerical examples support the theoretical conclusions and demonstrate the rationality of the approach using an ocean dynamics model with an observed data assimilation procedure.

  1. Wave Propagation in Fluids Models and Numerical Techniques

    CERN Document Server

    Guinot, Vincent

    2007-01-01

    This book presents the physical principles of wave propagation in fluid mechanics and hydraulics. The mathematical techniques that allow the behavior of the waves to be analyzed are presented, along with existing numerical methods for the simulation of wave propagation. Particular attention is paid to discontinuous flows, such as steep fronts and shock waves, and their mathematical treatment. A number of practical examples are taken from various areas fluid mechanics and hydraulics, such as contaminant transport, the motion of immiscible hydrocarbons in aquifers, river flow, pipe transients an

  2. Simulation technique for hard-disk models in two dimensions

    DEFF Research Database (Denmark)

    Fraser, Diane P.; Zuckermann, Martin J.; Mouritsen, Ole G.

    1990-01-01

    A method is presented for studying hard-disk systems by Monte Carlo computer-simulation techniques within the NpT ensemble. The method is based on the Voronoi tesselation, which is dynamically maintained during the simulation. By an analysis of the Voronoi statistics, a quantity is identified that is extremely sensitive to structural changes in the system. This quantity, which is derived from the edge-length distribution function of the Voronoi polygons, displays a dramatic change at the solid-liquid transition. This is found to be more useful for locating the transition than either the defect density
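    A minimal sketch of the underlying measurement, the edge-length distribution of a Voronoi tessellation, using `scipy.spatial.Voronoi` on random points as a stand-in for disk centers (the paper maintains the tessellation dynamically; this static version is only an illustration):

```python
import numpy as np
from scipy.spatial import Voronoi

# Edge-length statistics of the Voronoi tessellation of a point configuration;
# the transition-sensitive quantity is derived from this distribution.
rng = np.random.default_rng(0)
points = rng.random((400, 2))            # stand-in for hard-disk centers
vor = Voronoi(points)

lengths = []
for v1, v2 in vor.ridge_vertices:
    if v1 >= 0 and v2 >= 0:              # skip ridges extending to infinity
        lengths.append(np.linalg.norm(vor.vertices[v1] - vor.vertices[v2]))
lengths = np.array(lengths)
print(f"{lengths.size} finite edges, mean length {lengths.mean():.4f}, "
      f"std {lengths.std():.4f}")
```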

  3. Household water use and conservation models using Monte Carlo techniques

    Science.gov (United States)

    Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.

    2013-10-01

    The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006-2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate cost-effectiveness of water conservation programs.
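    The sketch below illustrates the Monte Carlo end-use idea: sample per-household end-use parameters from distributions and aggregate to daily demand. All distributions and values are invented placeholders, not the study's calibrated parameters:

```python
import numpy as np

# Monte Carlo simulation of daily indoor water use for a neighborhood,
# sampling end-use parameters from assumed probability distributions.
rng = np.random.default_rng(42)
n_households = 1000

persons = rng.poisson(2.7, n_households).clip(min=1)            # household size
shower_lpcd = rng.normal(45.0, 12.0, n_households).clip(min=0)  # L/person/day
toilet_flushes = rng.poisson(5, n_households) * persons
toilet_l_per_flush = rng.choice([6.0, 9.0, 13.0],               # efficient vs. old
                                n_households, p=[0.4, 0.35, 0.25])
washer_loads = rng.binomial(2, 0.2, n_households)               # loads per day
washer_l_per_load = rng.normal(90.0, 20.0, n_households).clip(min=0)

daily_use = (persons * shower_lpcd
             + toilet_flushes * toilet_l_per_flush
             + washer_loads * washer_l_per_load)
print(f"mean {daily_use.mean():.0f} L/day, "
      f"90th percentile {np.percentile(daily_use, 90):.0f} L/day")
```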

  4. Multiple Fan-Beam Optical Tomography: Modelling Techniques

    Directory of Open Access Journals (Sweden)

    Pang Jon Fea

    2009-10-01

    Full Text Available This paper explains in detail the solution to the forward and inverse problem faced in this research. In the forward problem section, the projection geometry and the sensor modelling are discussed. The dimensions, distributions and arrangements of the optical fibre sensors are determined based on the real hardware constructed and these are explained in the projection geometry section. The general idea in sensor modelling is to simulate an artificial environment, but with similar system properties, to predict the actual sensor values for various flow models in the hardware system. The sensitivity maps produced from the solution of the forward problems are important in reconstructing the tomographic image.

  5. Size reduction techniques for VITAL-compliant VHDL simulation models

    Science.gov (United States)

    Rich, Marvin J.; Misra, Ashutosh

    2006-08-01

    A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. Then the system collects all the delay values of the selected instance and builds super generics for the rise-time and the fall-time of the selected instance. Then, the system repeats this process for every delay value in the standard delay file (310) that correspond to every instance of every logic gate in the logic model. The system then outputs a reduced size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.

  6. Liquid propellant analogy technique in dynamic modeling of launch vehicle

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The coupling effects among the lateral, longitudinal and torsional modes of a launch vehicle cannot be taken into account in traditional dynamic analysis using a lateral beam model and a longitudinal spring-mass model individually. To deal with the problem, propellant analogy methods based on a beam model are proposed, and the coupled mass matrix of the liquid propellant is constructed through additional mass in the present study. Then an integrated model of the launch vehicle for free vibration analysis is established, by which research on the interactions between the longitudinal and lateral modes, and between the longitudinal and torsional modes, of the launch vehicle can be carried out. Numerical examples for tandem tanks validate the present method and demonstrate its necessity.

  7. Evaluation of dynamical models: dissipative synchronization and other techniques.

    Science.gov (United States)

    Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A B

    2006-12-01

    Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: namely, Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model, rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams--which in turn is much greater than, say, that of correlation dimension--but at a much lower computational cost.
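    To make the synchronization idea concrete, the sketch below couples candidate models dissipatively to "measured" data and scores them by residual synchronization error; it uses the Hénon map (one of the paper's benchmarks) but a generic nudging coupling of my own construction, not the authors' exact procedure, with all gains and lengths assumed:

```python
import numpy as np

def henon_series(n, a=1.4, b=0.3):
    """Generate 'measured' data from the true system (Henon map)."""
    x, y = 0.1, 0.0
    out = np.empty(n)
    for k in range(n):
        out[k] = x
        x, y = 1.0 - a * x * x + y, b * x
    return out

data = henon_series(2000)

def sync_error(a_model, b_model=0.3, gain=0.8):
    """Couple a candidate model dissipatively to the data (nudging) and
    measure the residual one-step error; a structurally correct model
    synchronizes closely to the data, a poor one does not."""
    xm, ym, sq = 0.1, 0.0, 0.0
    for k in range(1, len(data)):
        xm_free = 1.0 - a_model * xm * xm + ym            # free-running prediction
        xm_new = (1.0 - gain) * xm_free + gain * data[k]  # dissipative coupling
        ym = b_model * xm
        sq += (data[k] - xm_free) ** 2                    # error before correction
        xm = xm_new
    return float(np.sqrt(sq / (len(data) - 1)))

for a in (1.40, 1.35, 1.20):
    print(f"candidate a={a:.2f}: sync error = {sync_error(a):.4f}")
```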

  8. Evaluation of dynamical models: Dissipative synchronization and other techniques

    Science.gov (United States)

    Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A. B.

    2006-12-01

    Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: namely, Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model, rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams—which in turn is much greater than, say, that of correlation dimension—but at a much lower computational cost.

  9. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Full Text Available Snow exerts a significant regulating effect on the land hydrological cycle, since it controls the intensity of heat and water exchange between the soil-vegetation cover and the atmosphere. Estimating spring flood runoff or rain floods on mountainous rivers requires understanding of the snow cover dynamics on a watershed. In our work, the problem of snow depth modeling is addressed using both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of the results and further development of this theme by the scientific community. In this research we used daily observational data on snow cover and surface meteorological parameters obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankylä (Finland), and Snoqualmie Pass (USA). Statistical modeling of snow depth is based on a set of freely distributed modern machine learning models: decision trees, adaptive boosting, and gradient boosting. It is demonstrated that combining modern machine learning methods with available meteorological data provides good accuracy in snow cover modeling. The best results of snow depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow accumulation and melting. The directed character of the learning process for gradient-boosting models, their ensemble character, and the use of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
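    A minimal sketch of the gradient-boosting-over-decision-trees approach using scikit-learn on synthetic station-like data; the predictors, the snow-depth relation and all hyperparameters are assumptions standing in for the paper's observational data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for station data: daily meteorological predictors
rng = np.random.default_rng(0)
n = 2000
temp = rng.normal(-2.0, 8.0, n)            # air temperature, degC
precip = rng.gamma(1.5, 2.0, n)            # daily precipitation, mm
doy = rng.integers(1, 366, n)              # day of year
season = np.cos(2 * np.pi * doy / 365.0)
snow_depth = np.maximum(0.0, 30.0 * season - 2.0 * temp
                        + 1.5 * precip * (temp < 0) + rng.normal(0, 5, n))

X = np.column_stack([temp, precip, doy])
X_tr, X_te, y_tr, y_te = train_test_split(X, snow_depth, random_state=0)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=3, random_state=0)
model.fit(X_tr, y_tr)
print(f"R^2 on held-out days: {model.score(X_te, y_te):.3f}")
```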

  10. Advanced modeling techniques in application to plasma pulse treatment

    Science.gov (United States)

    Pashchenko, A. F.; Pashchenko, F. F.

    2016-06-01

    Different approaches are considered for the simulation of the plasma pulse treatment process. The assumption of significant nonlinearity in the processes involved in the treatment of oil wells has been confirmed. The method of functional transformations and fuzzy logic methods are suggested for the construction of a mathematical model. It is shown that models based on fuzzy logic are able to provide satisfactory accuracy in the simulation and prediction of the nonlinear processes observed.

  11. Analysis of computational modeling techniques for complete rotorcraft configurations

    Science.gov (United States)

    O'Brien, David M., Jr.

    Computational fluid dynamics (CFD) provides the helicopter designer with a powerful tool for identifying problematic aerodynamics. Through the use of CFD, design concepts can be analyzed in a virtual wind tunnel long before a physical model is ever created. Traditional CFD analysis tends to be a time-consuming process, where much of the effort is spent generating a high quality computational grid. Recent increases in computing power and memory have created renewed interest in alternative grid schemes such as unstructured grids, which facilitate rapid grid generation by relaxing restrictions on grid structure. Three rotor models have been incorporated into a popular fixed-wing unstructured CFD solver to increase its capability and facilitate availability to the rotorcraft community. The benefit of unstructured grid methods is demonstrated through rapid generation of high fidelity configuration models. The simplest rotor model is the steady state actuator disk approximation. By transforming the unsteady rotor problem into a steady state one, the actuator disk can provide rapid predictions of performance parameters such as lift and drag. The actuator blade and overset blade models provide a depiction of the unsteady rotor wake, but incur a larger computational cost than the actuator disk. The actuator blade model is convenient when the unsteady aerodynamic behavior needs to be investigated, but the computational cost of the overset approach is too large. The overset or chimera method allows the blade loads to be computed from first principles and therefore provides the most accurate prediction of the rotor wake for the models investigated. The physics of the flow fields generated by these models for rotor/fuselage interactions are explored, along with efficiencies and limitations of each method.
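
    For intuition about the steady-state actuator disk approximation mentioned above, the following sketch evaluates the classical momentum-theory relations behind it; the thrust, radius, and density figures are illustrative assumptions, not values from the study.

        # Classical momentum-theory relations behind a steady actuator disk in hover:
        # the rotor is replaced by a pressure jump dp = T / A, and the induced
        # velocity at the disk follows from momentum conservation. Values illustrative.
        import math

        T = 53000.0      # rotor thrust, N (illustrative medium helicopter)
        R = 7.3          # rotor radius, m
        rho = 1.225      # air density, kg/m^3

        A = math.pi * R**2                  # disk area
        dp = T / A                          # pressure jump across the disk
        v_i = math.sqrt(T / (2 * rho * A))  # induced velocity at the disk (hover)
        P_ideal = T * v_i                   # ideal induced power

        print(f"disk loading  {dp:8.1f} Pa")
        print(f"induced vel.  {v_i:8.2f} m/s")
        print(f"ideal power   {P_ideal/1e3:8.1f} kW")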

  12. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  13. Fusing Observations and Model Results for Creation of Enhanced Ozone Spatial Fields: Comparison of Three Techniques

    Science.gov (United States)

    This paper presents three simple techniques for fusing observations and numerical model predictions. The techniques rely on model/observation bias being considered either as error free, or containing some uncertainty, the latter mitigated with a Kalman filter approach or a spati...
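
    As a hedged illustration of the Kalman-filter flavor of bias treatment described above, the scalar sketch below fuses one model prediction with one observation; the variances and values are assumed for the example, and the actual techniques operate on spatial fields.

        # Scalar sketch of the "bias with uncertainty" idea: at a grid point, fuse a
        # model ozone prediction with an observation via a Kalman-style update.
        # All values and variances below are assumed for illustration.
        def kalman_fuse(model_value, obs_value, var_model, var_obs):
            """Return the fused estimate and its variance."""
            gain = var_model / (var_model + var_obs)   # weight given to the observation
            fused = model_value + gain * (obs_value - model_value)
            var_fused = (1.0 - gain) * var_model
            return fused, var_fused

        fused, var = kalman_fuse(model_value=62.0, obs_value=55.0, var_model=25.0, var_obs=9.0)
        print(f"fused ozone estimate: {fused:.1f} ppb (variance {var:.1f})")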

  14. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  15. Generalization Technique for 2D+SCALE DHE Data Model

    Science.gov (United States)

    Karim, Hairi; Rahman, Alias Abdul; Boguslawski, Pawel

    2016-10-01

    Different users or applications need models at different scales, especially in computer applications such as game visualization and GIS modelling. Issues have been raised about fulfilling the GIS requirement of retaining detail while minimizing the redundancy of the scale datasets. Previous researchers suggested and attempted to add another dimension, such as scale and/or time, to a 3D model, but the implementation of a scale dimension faces problems due to the limitations and availability of data structures and data models. Various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting a scale dimension. Generally, the Dual Half Edge (DHE) data structure was designed to work with any perfect 3D spatial object such as buildings. In this paper, we attempt to expand the capability of the DHE data structure toward integration with a scale dimension. The description of the concept and the implementation of generating 3D-scale (2D spatial + scale dimension) models with the DHE data structure form the major discussion of this paper. We believe advantages such as local modification and topological elements (navigation, query and semantic information) in the scale dimension could serve future 3D-scale applications.

  16. A modeling technique for STOVL ejector and volume dynamics

    Science.gov (United States)

    Drummond, C. K.; Barankiewicz, W. S.

    1990-01-01

    New models for thrust augmenting ejector performance prediction and feeder duct dynamic analysis are presented and applied to a proposed Short Take Off and Vertical Landing (STOVL) aircraft configuration. Central to the analysis is the nontraditional treatment of the time-dependent volume integrals in the otherwise conventional control-volume approach. In the case of the thrust augmenting ejector, the analysis required a new relationship for transfer of kinetic energy from the primary flow to the secondary flow. Extraction of the required empirical corrections from current steady-state experimental data is discussed; a possible approach for modeling insight through Computational Fluid Dynamics (CFD) is presented.

  17. Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power Electronics,"…

  19. Application of Yamaguchi's technique for the Rescorla-Wagner model.

    Science.gov (United States)

    Yamaguchi, Makoto

    2007-12-01

    Yamaguchi in 2006 solved for the first time a problem concerning a 1972 mathematical model of classical conditioning by Rescorla and Wagner. That derivation is not an isolated contribution. Here it is shown that the same line of derivation can be successfully applied to another experimental situation involving more stimuli.
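
    For readers unfamiliar with the Rescorla-Wagner (1972) model referred to above, the sketch below implements its standard update rule for compound stimuli; the saliences and trial history are illustrative assumptions, not Yamaguchi's derivation.

        # The Rescorla-Wagner (1972) update: each trial changes the associative
        # strength of every present stimulus by dV_i = alpha_i * beta * (lambda - V_total).
        # Parameter values below are illustrative.
        def rescorla_wagner(trials, alphas, beta=0.2, lam=1.0):
            V = {s: 0.0 for s in alphas}
            for present in trials:                    # each trial: set of present stimuli
                v_total = sum(V[s] for s in present)  # compound prediction
                for s in present:
                    V[s] += alphas[s] * beta * (lam - v_total)
            return V

        # Compound conditioning with two stimuli A and B of different salience:
        # the more salient A overshadows B.
        history = [{"A", "B"}] * 50
        print(rescorla_wagner(history, alphas={"A": 0.5, "B": 0.1}))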

  20. Suitability of sheet bending modelling techniques in CAPP applications

    NARCIS (Netherlands)

    Streppel, A.H.; de Vin, L.J.; de Vin, L.J.; Brinkman, J.; Brinkman, J.; Kals, H.J.J.

    1993-01-01

    The use of CNC machine tools, together with decreasing lot sizes and stricter tolerance prescriptions, has led to changes in sheet-metal part manufacturing. In this paper, problems introduced by the difference between the actual material behaviour and the results obtained from analytical models and

  2. A review of propeller modelling techniques based on Euler methods

    NARCIS (Netherlands)

    Zondervan, G.J.D.

    1998-01-01

    Future generation civil aircraft will be powered by new, highly efficient propeller propulsion systems. New, advanced design tools like Euler methods will be needed in the design process of these aircraft. This report describes the application of Euler methods to the modelling of flowfields generated by propellers.

  3. Ionospheric scintillation forecasting model based on NN-PSO technique

    Science.gov (United States)

    Sridhar, M.; Venkata Ratnam, D.; Padma Raju, K.; Sai Praharsha, D.; Saathvika, K.

    2017-09-01

    The forecasting and modeling of ionospheric scintillation effects are crucial for precise satellite positioning and navigation applications. In this paper, a Neural Network model, trained using the Particle Swarm Optimization (PSO) algorithm, has been implemented for the prediction of amplitude scintillation index (S4) observations. The Global Positioning System (GPS) and Ionosonde data available at Darwin, Australia (12.4634° S, 130.8456° E) during 2013 have been considered. Correlation analysis between GPS S4 and Ionosonde drift velocity (hmF2 and foF2) data has been conducted for forecasting the S4 values. The results indicate that forecasted S4 values closely follow the measured S4 values for both quiet and disturbed conditions. The outcome of this work will be useful for understanding ionospheric scintillation phenomena over low latitude regions.
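
    A minimal sketch of the NN-PSO idea follows: a plain global-best particle swarm searches the weights of a small feedforward network. The synthetic inputs stand in for the foF2/hmF2 drivers, and the network size, swarm constants, and data are all assumptions for illustration.

        # Sketch: PSO training a tiny 2-H-1 network mapping ionospheric drivers
        # (stand-ins for foF2 and hmF2) to a toy S4 index. Everything synthetic.
        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 1, (200, 2))                     # normalized driver proxies
        y = 0.3 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.02 * rng.normal(size=200)  # toy S4

        H = 6                                               # hidden units
        n_w = 2 * H + H + H + 1                             # weights + biases

        def mse(w):
            W1 = w[:2 * H].reshape(2, H); b1 = w[2 * H:3 * H]
            W2 = w[3 * H:3 * H + H];      b2 = w[-1]
            pred = np.tanh(X @ W1 + b1) @ W2 + b2
            return np.mean((pred - y) ** 2)

        # Plain global-best PSO over the weight space.
        n_particles, iters = 30, 300
        pos = rng.normal(0, 1, (n_particles, n_w))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_f = np.array([mse(p) for p in pos])
        gbest = pbest[pbest_f.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.uniform(size=(2, n_particles, 1))
            vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
            pos = pos + vel
            f = np.array([mse(p) for p in pos])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()

        print("training MSE of PSO-trained network:", mse(gbest))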

  4. Detecting feature interactions in Web services with model checking techniques

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    As a platform-independent software system, a Web service is designed to offer interoperability among diverse and heterogeneous applications. With the introduction of service composition in Web service creation, various message interactions among the atomic services result in a problem resembling the feature interaction problem in the telecommunication area. This article defines the problem as feature interaction in Web services and proposes a model checking-based detection method. In the method, the Web service description is translated to the Promela language, the input language of the model checker Simple Promela Interpreter (SPIN), and the specific properties, expressed as linear temporal logic (LTL) formulas, are formulated according to our classification of feature interaction. Then, SPIN is used to check these properties to detect feature interaction in Web services.

  5. A Memory Insensitive Technique for Large Model Simplification

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P; Silva, C

    2001-08-07

    In this paper we propose three simple but significant improvements to the OoCS (Out-of-Core Simplification) algorithm of Lindstrom [20], which increase the quality of approximations and extend the applicability of the algorithm to an even larger class of computer systems. The original OoCS algorithm has memory complexity that depends on the size of the output mesh, but no dependency on the size of the input mesh. That is, it can be used to simplify meshes of arbitrarily large size, but the complexity of the output mesh is limited by the amount of memory available. Our first contribution is a version of OoCS that removes the dependency of having enough memory to hold (even) the simplified mesh. With our new algorithm, the whole process is made essentially independent of the available memory on the host computer. Our new technique uses disk instead of main memory, but it is carefully designed to avoid costly random accesses. Our two other contributions improve the quality of the approximations generated by OoCS. We propose a scheme for preserving surface boundaries which does not use connectivity information, and a scheme for constraining the position of the "representative vertex" of a grid cell to an optimal position inside the cell.
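
    The in-memory sketch below shows the uniform-grid vertex clustering at the heart of OoCS-style simplification; the disk-based out-of-core passes that are the paper's actual concern are omitted, and the centroid representative is a simplification of the optimal in-cell position discussed above.

        # In-memory sketch of grid-based vertex clustering: vertices are quantized
        # to uniform cells, each cell keeps one representative vertex (here the
        # centroid), and triangles collapsing into fewer than three cells are dropped.
        import numpy as np

        def simplify(vertices, triangles, cell_size):
            # Quantize each vertex to its uniform-grid cell.
            cell_of = {}
            inverse = np.empty(len(vertices), dtype=np.int64)
            keys = map(tuple, np.floor(vertices / cell_size).astype(np.int64))
            for i, key in enumerate(keys):
                inverse[i] = cell_of.setdefault(key, len(cell_of))
            # Representative vertex per cell: the centroid.
            n_cells = len(cell_of)
            counts = np.bincount(inverse, minlength=n_cells).astype(float)
            reps = np.column_stack([
                np.bincount(inverse, weights=vertices[:, d], minlength=n_cells) / counts
                for d in range(3)
            ])
            # Remap triangles; keep only those spanning three distinct cells.
            tri = inverse[triangles]
            keep = (tri[:, 0] != tri[:, 1]) & (tri[:, 1] != tri[:, 2]) & (tri[:, 0] != tri[:, 2])
            return reps, tri[keep]

        rng = np.random.default_rng(2)
        verts = rng.uniform(0, 1, (10000, 3))
        tris = rng.integers(0, 10000, (20000, 3))
        new_v, new_t = simplify(verts, tris, cell_size=0.1)
        print(len(verts), "->", len(new_v), "vertices;", len(tris), "->", len(new_t), "triangles")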

  6. A Model-Following Technique for Insensitive Aircraft Control Systems.

    Science.gov (United States)

    1981-01-01

    Harvey and Pope [13] and Vinkler [30] compared several different methods in their works, while Shenkar [26] and Ashkenazi [2] extended the most promising... Following for Insensitive Control works, let us consider the simple, first-order system used by Shenkar [26]. The plant is described by ẋ = -(1 + Δr)x + u ... representative of the methods of Vinkler, Ashkenazi, and Shenkar), and Model Following for Insensitive Control (MFIC). For the LQR design, we assume that our...

  7. IMPROVED SOFTWARE QUALITY ASSURANCE TECHNIQUES USING SAFE GROWTH MODEL

    Directory of Open Access Journals (Sweden)

    M.Sangeetha

    2010-09-01

    Our lives are governed by large, complex systems with increasingly complex software, and the safety, security, and reliability of these systems has become a major concern. As the software in today's systems grows larger, it has more defects, and these defects adversely affect the safety, security, and reliability of the systems. Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software. Software quality divides into two pieces: internal and external quality characteristics. External quality characteristics are those parts of a product that face its users, while internal quality characteristics are those that do not. Quality is conformance to product requirements and should be free. This research concerns the role of software quality. Software reliability is an important facet of software quality. It is the probability of failure-free operation of a computer program in a specified environment for a specified time. In software reliability modeling, the parameters of the model are typically estimated from the test data of the corresponding component. However, the widely used point estimators are subject to random variations in the data, resulting in uncertainties in these estimated parameters. This research describes a new approach to the problem of software testing. The approach is based on Bayesian graphical models and presents formal mechanisms for the logical structuring of the software testing problem, the probabilistic and statistical treatment of the uncertainties to be addressed, the test design and analysis process, and the incorporation and implication of test results. Once constructed, the models produced are dynamic representations of the software testing problem. It explains why the common test-and-fix software quality strategy is no longer adequate, and characterizes the properties of the quality strategy.

  8. Modeling and Analyzing Terrain Data Acquired by Modern Mapping Techniques

    Science.gov (United States)

    2009-09-22

    ...enhanced by new terrain mapping technologies such as laser altimetry (LIDAR), ground-based laser scanning and Real Time Kinematic GPS (RTK-GPS)... developed and implemented an approach that has the following features: it is modular, so that a user can use different models for each of the modules... support some way of connecting separate modules together to form pipelines; however, this requires manual intervention. While a typical GIS can manage...

  9. Groundwater Resources Assessment For Joypurhat District Using Mathematical Modelling Technique

    Directory of Open Access Journals (Sweden)

    Md. Iquebal Hossain

    2015-06-01

    In this study, potential recharge as well as groundwater availability for 5 upazilas (Akkelpur, Kalai, Joypurhat Sadar, Khetlal and Panchbibi) of Joypurhat district has been estimated using MIKE SHE modelling tools. The main aquifers of the study area are dominated by medium sands, and medium and coarse sands with little gravel. The top of the aquifers ranges from 15 m to 24 m, and the screenable thickness of the aquifers ranges from 33 m to 46 m within the depth range from 57 m to 87 m. Heavy abstraction of groundwater for agricultural, industrial and domestic uses results in excessive lowering of the water table, making shallow and hand tubewells inoperable in the dry season. The upazila-wise potential recharge for the study area was estimated through a mathematical model using MIKE SHE modelling tools in an integrated approach. The required data were collected from the relevant organisations. The potential recharge in the present study varies from 452 mm to 793 mm. The maximum depth to the groundwater table in most places occurs at the end of April. At this time, the groundwater table in most parts of Kalai, Khetlal, Akkelpur and Panchbibi goes below the suction limit, leaving hand tubewells (HTWs) and shallow tubewells (STWs) partially or fully inoperable.

  10. An improved calibration technique for wind tunnel model attitude sensors

    Science.gov (United States)

    Tripp, John S.; Wong, Douglas T.; Finley, Tom D.; Tcheng, Ping

    1993-01-01

    Aerodynamic wind tunnel tests at NASA Langley Research Center (LaRC) require accurate measurement of model attitude. Inertial accelerometer packages have been the primary sensor used to measure model attitude to an accuracy of +/- 0.01 deg as required for aerodynamic research. The calibration parameters of the accelerometer package are currently obtained from a seven-point tumble test using a simplified empirical approximation. The inaccuracy due to the approximation exceeds the accuracy requirement as the misalignment angle between the package axis and the model body axis increases beyond 1.4 deg. This paper presents the exact solution derived from the coordinate transformation to eliminate inaccuracy caused by the approximation. In addition, a new calibration procedure is developed in which the data taken from the seven-point tumble test is fit to the exact solution by means of a least-squares estimation procedure. Validation tests indicate that the new calibration procedure provides +/- 0.005-deg accuracy over large package misalignments, which is not possible with the current procedure.

  11. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. The rat plays a very important role in the bone field, especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45+/-8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were created by injecting tumor cells directly into the SD rat femur with an inoculation needle. In the first step of the experiment, 2x10^5 to 1x10^6 UMR106 cells in 50 microliters were injected intraosseously into the median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsy, and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counterstained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8x10^5 tumor cells or more, which was much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsy findings matched closely (r=0.942), making ultrasound a feasible technique for measuring cancer at any stage. The tumor growth curve showed that orthotopically implanted tumors expanded vigorously over time, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and high lung metastasis frequency. The present rat osteosarcoma model was shown to be feasible: the take rate was high and surgical mortality was 0%.

  12. A titration model for evaluating calcium hydroxide removal techniques

    Directory of Open Access Journals (Sweden)

    Mark PHILLIPS

    2015-02-01

    Objective: Calcium hydroxide (Ca(OH)2) has been used in endodontics as an intracanal medicament due to its antimicrobial effects and its ability to inactivate bacterial endotoxin. The inability to totally remove this intracanal medicament from the root canal system, however, may interfere with the setting of eugenol-based sealers or inhibit bonding of resin to dentin, thus presenting clinical challenges with endodontic treatment. This study used a chemical titration method to measure residual Ca(OH)2 left after different endodontic irrigation methods. Material and Methods: Eighty-six human canine roots were prepared for obturation. Thirty teeth were filled with known but different amounts of Ca(OH)2 for 7 days, which were dissolved out and titrated to quantitate the residual Ca(OH)2 recovered from each root to produce a standard curve. Forty-eight of the remaining teeth were filled with equal amounts of Ca(OH)2, followed by gross Ca(OH)2 removal using hand files and randomized treatment of either: (1) syringe irrigation; (2) syringe irrigation with use of an apical file; (3) syringe irrigation with added 30 s of passive ultrasonic irrigation (PUI); or (4) syringe irrigation with apical file and PUI (n=12/group). Residual Ca(OH)2 was dissolved with glycerin and titrated to measure the residual Ca(OH)2 left in the root. Results: No method completely removed all residual Ca(OH)2. The addition of 30 s PUI, with or without apical file use, removed Ca(OH)2 significantly better than irrigation alone. Conclusions: This technique allowed quantification of residual Ca(OH)2. The use of PUI (with or without apical file) resulted in significantly lower Ca(OH)2 residue compared to irrigation alone.

  13. Computable General Equilibrium Techniques for Carbon Tax Modeling

    Directory of Open Access Journals (Sweden)

    Al-Amin

    2009-01-01

    Problem statement: Lacking proper environmental models, environmental pollution is now a serious problem in many developing countries, particularly Malaysia. Empirical studies worldwide reveal that imposition of a carbon tax significantly decreases carbon emissions and does not dramatically reduce economic growth. To our knowledge, no research has been done to simulate the economic impact of emission control policies in Malaysia. Approach: Therefore, this study developed an environmental computable general equilibrium model for Malaysia and investigated carbon tax policy responses in the economy, applying exogenously different degrees of carbon tax in the model. Three simulations were carried out using a Malaysian social accounting matrix. Results: The carbon tax policy illustrated that a 1.21% reduction of carbon emission reduced nominal GDP by 0.82% and exports by 2.08%; a 2.34% reduction of carbon emission reduced nominal GDP by 1.90% and exports by 3.97%; and a 3.40% reduction of carbon emission reduced nominal GDP by 3.17% and exports by 5.71%. Conclusion/Recommendations: Imposition of successively higher carbon taxes increased government revenue from the baseline by 26.67, 53.07 and 79.28%, respectively. However, fixed capital investment increased in scenario 1a by 0.43% and decreased in scenarios 1b and 1c by 0.26 and 1.79%, respectively, from the baseline. According to our findings, policy makers should consider the first (scenario 1a) carbon tax policy. This policy achieves reasonably good environmental impacts without losing investment, fixed capital investment, investment share of nominal GDP, or government revenue.

  14. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro

  15. Ecological Footprint Model Using the Support Vector Machine Technique

    Science.gov (United States)

    Ma, Haibo; Chang, Wenjuan; Cui, Guangbai

    2012-01-01

    The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of services to total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was constructed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error; they were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance. PMID:22291949
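
    A minimal sketch of this setup, assuming randomly generated stand-ins for the five indicators rather than the 123-nation dataset, might look as follows with scikit-learn's SVR:

        # Sketch: support vector regression from five national indicators to per
        # capita EF. The data are synthetic stand-ins, not the study's dataset.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(4)
        # Columns: GDP per capita, urbanization, Gini, export share, service share.
        X = rng.uniform(0, 1, (123, 5))
        y = 0.8 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] + 0.1 * rng.normal(size=123)

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
        model.fit(X[:99], y[:99])                 # train on 99 nations
        pred = model.predict(X[99:])              # predict the remaining 24
        print("mean absolute error:", np.mean(np.abs(pred - y[99:])))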

  16. Comparative Studies of Clustering Techniques for Real-Time Dynamic Model Reduction

    CERN Document Server

    Hogan, Emilie; Halappanavar, Mahantesh; Huang, Zhenyu; Lin, Guang; Lu, Shuai; Wang, Shaobu

    2015-01-01

    Dynamic model reduction in power systems is necessary for improving computational efficiency. Traditional model reduction using linearized models or offline analysis would not be adequate to capture power system dynamic behaviors, especially the new mix of intermittent generation and intelligent consumption makes the power system more dynamic and non-linear. Real-time dynamic model reduction emerges as an important need. This paper explores the use of clustering techniques to analyze real-time phasor measurements to determine generator groups and representative generators for dynamic model reduction. Two clustering techniques -- graph clustering and evolutionary clustering -- are studied in this paper. Various implementations of these techniques are compared and also compared with a previously developed Singular Value Decomposition (SVD)-based dynamic model reduction approach. Various methods exhibit different levels of accuracy when comparing the reduced model simulation against the original model. But some ...

  17. Modelling and Design of a Microstrip Band-Pass Filter Using Space Mapping Techniques

    CERN Document Server

    Tavakoli, Saeed; Mohanna, Shahram

    2010-01-01

    Determination of design parameters based on electromagnetic simulations of microwave circuits is an iterative and often time-consuming procedure. Space mapping is a powerful technique to optimize such complex models by efficiently substituting accurate but expensive electromagnetic models, fine models, with fast and approximate models, coarse models. In this paper, we apply two space mapping techniques, explicit space mapping as well as implicit and response residual space mapping, to a case study application, a microstrip band-pass filter. First, we model the case study application and optimize its design parameters using the explicit space mapping modelling approach. Then, we use the implicit and response residual space mapping approach to optimize the filter's design parameters. Finally, the performance of each design method is evaluated. It is shown that the use of the above-mentioned techniques leads to satisfactory design solutions with a minimum number of computationally expensive fine model evaluations.

  18. Application of nonlinear forecasting techniques for meteorological modeling

    Directory of Open Access Journals (Sweden)

    V. Pérez-Muñuzuri

    A nonlinear forecasting method was used to predict the behavior of a cloud coverage time series several hours in advance. The method is based on the reconstruction of a chaotic strange attractor using four years of cloud absorption data obtained from half-hourly Meteosat infrared images of Northwestern Spain. An exhaustive nonlinear analysis of the time series was carried out to reconstruct the phase space of the underlying chaotic attractor. The forecast values are used by the non-hydrostatic meteorological model ARPS for daily weather prediction, and the results are compared with surface temperature measurements from a meteorological station and a vertical sounding. The effect of noise in the time series is analyzed in terms of the prediction results.

    Key words: Meteorology and atmospheric dynamics (mesoscale meteorology; general) – General (new fields)
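
    A hedged sketch of the forecasting core, phase-space reconstruction by delay embedding followed by nearest-analogue prediction, is given below; the embedding dimension, delay, neighbor count, and the logistic-map stand-in for cloud absorption data are all assumptions.

        # Sketch of nonlinear forecasting: delay-embed the scalar series, find
        # analogue states among past embeddings, and predict with the average of
        # their successors. Embedding parameters are illustrative; the paper
        # selects them from a full nonlinear analysis.
        import numpy as np

        def embed(x, dim, tau):
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

        def forecast(x, steps, dim=4, tau=2, k=5):
            x = list(x)
            for _ in range(steps):
                E = embed(np.asarray(x), dim, tau)
                current, history = E[-1], E[:-1]
                nxt = np.asarray(x)[(dim - 1) * tau + 1 :]  # successor of each state
                idx = np.argsort(np.linalg.norm(history - current, axis=1))[:k]
                x.append(float(nxt[idx].mean()))            # k-nearest-analogue step
            return x[-steps:]

        # Toy chaotic series from the logistic map in place of cloud data.
        series = [0.4]
        for _ in range(499):
            series.append(3.9 * series[-1] * (1 - series[-1]))
        print("next 3 predicted values:", forecast(series, steps=3))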

  19. Application of Krylov Reduction Technique for a Machine Tool Multibody Modelling

    Directory of Open Access Journals (Sweden)

    M. Sulitka

    2014-02-01

    Quick calculation of machine tool dynamic response represents one of the major requirements for machine tool virtual modelling and virtual machining, aiming at simulating the machining process performance, quality, and precision of a workpiece. Enhanced time effectiveness in machine tool dynamic simulations may be achieved by employing model order reduction (MOR) techniques on the full finite element (FE) models. The paper provides a case study comparing the Krylov subspace and mode truncation techniques. The application of both reduction techniques for creating a machine tool multibody model is evaluated. The Krylov subspace reduction technique shows high quality in terms of the dynamic properties of the reduced multibody model, with very low time demands at the same time.
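
    For concreteness, the sketch below performs one common form of Krylov-subspace reduction (Arnoldi moment matching at s = 0) on a random state-space model; the system, sizes, and expansion point are illustrative assumptions, not the machine tool FE model.

        # Sketch of Krylov-subspace model order reduction for x' = A x + b u:
        # an Arnoldi basis V of K_m(A^-1, A^-1 b) matches low-frequency moments,
        # and the reduced matrices are Galerkin projections. Sizes illustrative.
        import numpy as np

        def arnoldi(M, v, m):
            n = len(v)
            V = np.zeros((n, m))
            V[:, 0] = v / np.linalg.norm(v)
            for j in range(m - 1):
                w = M @ V[:, j]
                for i in range(j + 1):              # modified Gram-Schmidt
                    w -= (V[:, i] @ w) * V[:, i]
                V[:, j + 1] = w / np.linalg.norm(w)
            return V

        rng = np.random.default_rng(5)
        n, m = 200, 10
        A = -2 * np.eye(n) + rng.normal(0, 0.1, (n, n))   # stable-ish full model
        b = rng.normal(size=n)
        Ainv = np.linalg.inv(A)
        V = arnoldi(Ainv, Ainv @ b, m)                    # moment matching at s = 0
        A_r, b_r = V.T @ A @ V, V.T @ b                   # reduced-order matrices
        print("full order:", n, "-> reduced order:", A_r.shape[0])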

  20. Multivariate moment closure techniques for stochastic kinetic models

    Science.gov (United States)

    Lakatos, Eszter; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H.

    2015-09-01

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, it comes to an interplay between the nonlinearities and the stochastic dynamics, which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to the previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinases signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.

  2. A MODEL FOR OVERLAPPING TRIGRAM TECHNIQUE FOR TELUGU SCRIPT

    Directory of Open Access Journals (Sweden)

    B.Vishnu Vardhan

    2007-09-01

    N-grams are consecutive overlapping N-character sequences formed from an input stream. N-grams are used as alternatives to word-based retrieval in a number of systems. In this paper we propose a model applicable to the categorization of Telugu documents. Telugu, derived from the ancient Brahmi script, is the official language of the state of Andhra Pradesh. Brahmi-based scripts are noted for complex conjunct formations. The canonical structure is described as ((C)C)CV. The structure derives any character from a set of basic syllables known as vowels and consonants, where the consonant-vowel (CV) core is the basic unit, optionally preceded by one or two consonants. A huge set of characters that capture the phonetic nature of the language with an equivalent character shape are derived from the canonical structure. Words formed from this set evolved into a large corpus. Stringent grammar rules in word formation are part of this corpus. Cases where certain word combinations result in the formation of a single word, with the last character of the first word and the first character of the successive word combined, must also be addressed. Keeping in view these complexities, we propose a trigram-based system that provides a reasonable alternative to a word-based system in achieving document categorization for the Telugu language.
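
    The trigram mechanics themselves are compact; the sketch below extracts overlapping character trigram profiles from whitespace-separated words. The padding convention and sample text are assumptions for illustration, and the same code applies to Unicode Telugu strings.

        # Overlapping character trigrams as an alternative to word-based indexing.
        from collections import Counter

        def trigrams(word, pad="_"):
            word = pad + word + pad                 # mark word boundaries
            return [word[i : i + 3] for i in range(len(word) - 2)]

        def profile(document):
            """Frequency profile of overlapping trigrams, for categorization."""
            counts = Counter()
            for word in document.split():
                counts.update(trigrams(word))
            return counts

        print(profile("overlapping trigrams for telugu text").most_common(4))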

  4. Energy iteration model research of DCM Buck converter with multilevel pulse train technique

    Science.gov (United States)

    Qin, Ming; Li, Xiang

    2017-08-01

    Since the essence of a switching converter is the transfer of energy, the energy iteration model of the Multilevel Pulse Train (MPT) technique is studied in this paper. The energy iteration model of the DCM Buck converter with the MPT technique can reflect the control law and excellent transient performance of the MPT technique. The iteration relation of energy transfer in a switching converter is discussed. The structure and operation principle of the DCM Buck converter with the MPT technique are introduced, and the energy iteration model of this converter is set up. The energy trajectories of the MPT-controlled Buck converter and the PT-controlled converter are studied and compared to show that the ratio of steady-state control pulses satisfies the expectation for the MPT technique and that the MPT-controlled switching converter has much lower output voltage ripple than the PT converter.
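
    A toy energy-iteration loop in the spirit of pulse train control is sketched below: each cycle an energy packet is selected by comparing the output voltage with its reference, and the capacitor energy is iterated. The two-level pulse set, component values, and single-cycle bookkeeping are illustrative simplifications, not the authors' MPT model.

        # Toy energy-iteration view of pulse train control of a DCM buck converter:
        # each switching cycle the controller picks one of several preset pulses
        # (two here, as in plain PT; MPT uses more levels), the pulse injects a
        # fixed packet of inductor energy, and the load drains the output capacitor.
        C, R, T = 100e-6, 10.0, 20e-6        # output capacitor, load, switching period
        W_high, W_low = 80e-6, 20e-6         # energy packets of the two pulse types, J
        v_ref, v = 5.0, 4.8                  # reference and initial output voltage

        pulses = []
        for _ in range(2000):
            W = 0.5 * C * v * v                       # capacitor energy at cycle start
            pulse = W_high if v < v_ref else W_low    # control law: compare with reference
            W = W + pulse - (v * v / R) * T           # energy iteration over one cycle
            v = (2 * W / C) ** 0.5
            pulses.append(pulse)

        ratio = pulses.count(W_high) / len(pulses)
        print(f"steady-state high-pulse ratio: {ratio:.2f}; output settles near {v:.2f} V")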

  5. Developing a Teaching Model Using an Online Collaboration Approach for a Digital Technique Practical Work

    Science.gov (United States)

    Muchlas

    2015-01-01

    This research aims to produce a teaching model and its supporting instruments using a collaboration approach for a digital technique practical work attended by higher education students. The model is found to be flexible and relatively low cost. Through this research, the feasibility and learning impact of the model will be determined. The model…

  6. Development of Reservoir Characterization Techniques and Production Models for Exploiting Naturally Fractured Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Michael L.; Brown, Raymon L.; Civan, Frauk; Hughes, Richard G.

    2001-08-15

    Research continues on characterizing and modeling the behavior of naturally fractured reservoir systems. Work has progressed on developing techniques for estimating fracture properties from seismic and well log data, developing naturally fractured wellbore models, and developing a model to characterize the transfer of fluid from the matrix to the fracture system for use in the naturally fractured reservoir simulator.

  7. A novel method to estimate model uncertainty using machine learning techniques

    NARCIS (Netherlands)

    Solomatine, D.P.; Lal Shrestha, D.

    2009-01-01

    A novel method is presented for model uncertainty estimation using machine learning techniques and its application in rainfall runoff modeling. In this method, first, the probability distribution of the model error is estimated separately for different hydrological situations and second, the

  8. INTELLIGENT CAR STYLING TECHNIQUE AND SYSTEM BASED ON A NEW AERODYNAMIC-THEORETICAL MODEL

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Car styling technique based on a new theoretical model of automotive aerodynamics is introduced, which is proved to be feasible and effective by wind tunnel tests. Development of a multi-module software system from this technique, including modules of knowledge processing, referential styling and ANN aesthetic evaluation etc, capable of assisting car styling works in an intelligent way, is also presented and discussed.

  9. Optimization of corrosion control for lead in drinking water using computational modeling techniques

    Science.gov (United States)

    Computational modeling techniques have been used to very good effect in the UK in the optimization of corrosion control for lead in drinking water. A “proof-of-concept” project with three US/CA case studies sought to demonstrate that such techniques could work equally well in the...

  10. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lubna Moin

    2009-04-01

    This research paper explores and compares different modeling and analysis techniques, and then examines the model order reduction approach and its significance. The traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. Bond graphs are also used for analyzing linear and non-linear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a technique for generating a genetic design from the tree-structured transfer function obtained from a Bond Graph. The research combines bond graphs for model representation with genetic programming for exploring different ideas in the design space; the tree-structured transfer function results from replacing typical bond graph elements with their impedance equivalents, specifying impedance laws for the Bond Graph multiport. The tree-structured form thus obtained from the Bond Graph is applied for generating the genetic tree. Application studies identify key issues important for advancing this approach towards becoming an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function with the conventional and Bond Graph methods are analyzed, and then an approach towards model order reduction is pursued. The suggested algorithm and other known modern model order reduction techniques are applied to an 11th order high pass filter [1], with different approaches. The model order reduction technique developed in this paper has the least reduction errors, and the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and Bond Graph methods are compared.

  12. Geomatic techniques for the generation of building information models towards their introduction in Integrated Management Systems

    OpenAIRE

    Diaz Vilariño, Lucia

    2015-01-01

    This research project proposes the use of geomatic techniques to reconstruct, in a highly automated way, semantic building models that can be subjected to energy analysis. Other non-destructive techniques, such as infrared thermography, are explored to obtain descriptive attributes for enriching the models. The building stock is considered an important contributor to global energy consumption, and building energy efficiency has become a priority strategy in European energy policy. Bu...

  13. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of, and interaction between, normal and sterile E. saccharina moths in a temporally variable but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified, and according to which E. saccharina infestation levels and the associated sugarcane damage may be measured. Although many models describing the sterile insect technique have been formulated in the past, few of them describe the technique for Lepidopteran species with more than one life stage and where F1 sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E. saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
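
    A deliberately simplified difference-equation sketch of sterile-release dynamics follows; the dilution-of-matings form, the logistic density dependence, and every parameter value are assumptions far coarser than the staged, partially sterile model described above.

        # Toy difference-equation sketch of sterile insect releases: wild mating
        # success is diluted by released sterile males, so effective reproduction
        # scales by N / (N + s * S). Growth rate, carrying capacity, release size,
        # and the partial-sterility factor s are illustrative.
        def sit(generations, N0=1000.0, R=5.0, K=20000.0, release=6000.0, s=0.8):
            N, history = N0, []
            for _ in range(generations):
                wild_fraction = N / (N + s * release)        # chance of a fertile mating
                N = R * N * wild_fraction * max(0.0, 1.0 - N / K)
                history.append(N)
            return history

        # With a large enough release the population collapses generation by generation.
        for t, n in enumerate(sit(12)):
            print(f"generation {t + 1}: {n:8.1f} moths")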

  14. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    Science.gov (United States)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
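
    The sketch below illustrates nothing more than the output-error idea on a toy first-order top-oil model: the model is simulated from its own past outputs and the simulated trace is fit to the measured one, rather than regressing on noisy measurements. The model form, noise level, and true parameters are synthetic assumptions.

        # Output-error estimation on a first-order top-oil model
        #   T[k+1] = a*T[k] + b*u[k]   (u: load-dependent input, T: oil temperature)
        # Data and true parameters below are synthetic.
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(6)
        a_true, b_true = 0.95, 0.6
        u = rng.uniform(0.5, 1.5, 500)                   # load input
        T = np.zeros(501)
        for k in range(500):
            T[k + 1] = a_true * T[k] + b_true * u[k]
        T_meas = T + rng.normal(0, 0.5, 501)             # noisy measurements

        def residuals(p):
            a, b = p
            sim = np.zeros(501)
            sim[0] = T_meas[0]
            for k in range(500):
                sim[k + 1] = a * sim[k] + b * u[k]       # simulate from model output
            return sim - T_meas

        fit = least_squares(residuals, x0=[0.5, 0.1])
        print("output-error estimates (a, b):", fit.x, " true:", (a_true, b_true))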

  15. THE IMPROVEMENT OF THE COMPUTATIONAL PERFORMANCE OF THE ZONAL MODEL POMA USING PARALLEL TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Yao Yu

    2014-01-01

    The zonal modeling approach is a simplified computational method used to predict temperature distribution, energy use in multi-zone buildings, and indoor airflow thermal behavior. Although this approach is known to use fewer computer resources than CFD models, the computational time is still an issue, especially when buildings are characterized by complicated geometry and indoor layout of furnishings. Therefore, applying a new computing technique to current zonal models in order to reduce the computational time is a promising way to further improve model performance and promote the wide application of zonal models. Parallel computing techniques provide a way to accomplish these purposes. Unlike the serial computations commonly used in current zonal models, these parallel techniques decompose the serial program into several discrete instructions which can be executed simultaneously on different processors/threads. As a result, the computational time of the parallelized program can be significantly reduced compared to that of the traditional serial program. In this article, a parallel computing technique, Open Multi-Processing (OpenMP), is applied to the zonal model Pressurized zOnal Model with Air diffuser (POMA) in order to improve the model's computational performance, including the reduction of computational time and the investigation of the model's scalability.

  16. The Integrated Use of Enterprise and System Dynamics Modelling Techniques in Support of Business Decisions

    Directory of Open Access Journals (Sweden)

    K. Agyapong-Kodua

    2012-01-01

    Enterprise modelling techniques support business process (re)engineering by capturing existing processes and, based on perceived outputs, supporting the design of future process models capable of meeting enterprise requirements. System dynamics modelling tools, on the other hand, are used extensively for policy analysis and for modelling aspects of dynamics which impact businesses. In this paper, the use of enterprise and system dynamics modelling techniques has been integrated to facilitate qualitative and quantitative reasoning about the structures and behaviours of processes and resource systems used by a manufacturing enterprise during the production of composite bearings. The case study testing reported has led to the specification of a new modelling methodology for analysing and managing dynamics and complexities in production systems. This methodology is based on a systematic transformation process which synergises the use of a selection of public domain enterprise modelling, causal loop and continuous simulation modelling techniques. The success of the modelling process defined relies on the creation of useful CIMOSA process models which are then converted to causal loops. The causal loop models are then structured and translated to equivalent dynamic simulation models using the proprietary continuous simulation modelling tool iThink.

  17. Research on the Propagation Models and Defense Techniques of Internet Worms

    Institute of Scientific and Technical Information of China (English)

    Tian-Yun Huang

    2008-01-01

    Internet worms are harmful to network security, and they have become a research hotspot in recent years. A thorough survey of the propagation models and defense techniques for Internet worms is made in this paper. We first give a strict definition and discuss the working mechanism. We then analyze and compare some representative worm propagation models proposed in recent years, such as the K-M model, the two-factor model, the worm-anti-worm model (WAW), the firewall-based model, the quarantine-based model and the hybrid benign worm-based model. Some typical defense techniques, such as virtual honeypots, active worm prevention and agent-oriented worm defense, are also discussed. The future direction of worm defense systems is pointed out.
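
    As an illustration of the simplest of the propagation models surveyed (the K-M, i.e. Kermack-McKendrick-style random-scanning model), the sketch below integrates the logistic infection equation; the host count and scan rate echo published Code Red studies but are assumptions here.

        # Random-scan worm spread: with N vulnerable hosts and scan rate eta over
        # an address space of size Omega, the infected count a(t) follows
        #   da/dt = (eta * N / Omega) * a * (1 - a / N).
        N = 360_000          # vulnerable hosts (illustrative)
        Omega = 2 ** 32      # IPv4 address space
        eta = 358.0          # scans per second per infected host (illustrative)
        a, dt = 10.0, 1.0    # initial infections, time step (s)

        t = 0.0
        while a < 0.99 * N:
            a += dt * (eta * N / Omega) * a * (1 - a / N)   # forward-Euler step
            t += dt

        print(f"99% of vulnerable hosts infected after about {t/60:.0f} minutes")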

  18. Mixture experiment techniques for reducing the number of components applied for modeling waste glass sodium release

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.; Redgate, T. [Pacific Northwest National Lab., Richland, WA (United States). Statistics Group

    1997-12-01

    Statistical mixture experiment techniques were applied to a waste glass data set to investigate the effects of the glass components on Product Consistency Test (PCT) sodium release (NR) and to develop a model for PCT NR as a function of the component proportions. The mixture experiment techniques indicate that the waste glass system can be reduced from nine to four components for purposes of modeling PCT NR. Empirical mixture models containing four first-order terms and one or two second-order terms fit the data quite well, and can be used to predict the NR of any glass composition in the model domain. The mixture experiment techniques produce a better model in less time than required by another approach.

  19. Design of Current Controller for Two Quadrant DC Motor Drive by Using Model Order Reduction Technique

    CERN Document Server

    Ramesh, K; Nirmalkumar, A; Gurusamy, G

    2010-01-01

    In this paper, the design of a current controller for a two-quadrant DC motor drive is proposed with the help of a model order reduction technique. The calculation of the current controller gain, which involves some approximations in the conventional design process, is replaced by the proposed model order reduction method. The model order reduction technique proposed in this paper gives a better controller gain value for the DC motor drive. The proposed method is a mixed method, where the numerator polynomial of the reduced order model is obtained using the stability equation method and the denominator polynomial is obtained using an approximation technique presented in this paper. The designed controller's responses were simulated with the help of MATLAB to show the validity of the proposed method.

  20. Internet enabled modelling of extended manufacturing enterprises using the process based techniques

    OpenAIRE

    Cheng, K; Popov, Y

    2004-01-01

    The paper presents the preliminary results of an ongoing research project on Internet enabled process-based modelling of extended manufacturing enterprises. It is proposed to apply the Open System Architecture for CIM (CIMOSA) modelling framework alongside object-oriented Petri Net models of enterprise processes and object-oriented techniques for extended enterprise modelling. The main features of the proposed approach are described and some components discussed. Elementary examples of ...

  1. Maternal, Infant Characteristics, Breastfeeding Techniques, and Initiation: Structural Equation Modeling Approaches

    OpenAIRE

    2015-01-01

    Objectives: The aim of this study was to examine the relationships among maternal and infant characteristics, breastfeeding techniques, and exclusive breastfeeding initiation in different modes of birth using structural equation modeling approaches. Methods: We examined a hypothetical model based on integrating concepts of a breastfeeding decision-making model, a breastfeeding initiation model, and a social cognitive theory among 952 mother-infant dyads. The LATCH breastfeeding assessment tool ...

  2. An EMG-assisted model calibration technique that does not require MVCs.

    Science.gov (United States)

    Dufour, Jonathan S; Marras, William S; Knapik, Gregory G

    2013-06-01

    As personalized biologically-assisted models of the spine have evolved, the normalization of raw electromyographic (EMG) signals has become increasingly important. The traditional method of normalizing myoelectric signals, relative to measured maximum voluntary contractions (MVCs), is susceptible to error and is problematic for evaluating symptomatic low back pain (LBP) patients. Additionally, efforts to circumvent MVCs have not been validated during complex free-dynamic exertions. Therefore, the objective of this study was to develop an MVC-independent biologically-assisted model calibration technique that overcomes the limitations of previous normalization efforts, and to validate this technique over a variety of complex free-dynamic conditions including symmetrical and asymmetrical lifting. The newly developed technique (non-MVC) eliminates the need to collect MVCs by combining gain (maximum strength per unit area) and MVC into a single muscle property (gain ratio) that can be determined during model calibration. Ten subjects (five male, five female) were evaluated to compare gain ratio prediction variability, spinal load predictions, and model fidelity between the new non-MVC and established MVC-based model calibration techniques. The new non-MVC model calibration technique demonstrated at least as low gain ratio prediction variability, similar spinal loads, and similar model fidelity when compared to the MVC-based technique, indicating that it is a valid alternative to traditional MVC-based EMG normalization. Spinal loading for individuals who are unwilling or unable to produce reliable MVCs can now be evaluated. In particular, this technique will be valuable for evaluating symptomatic LBP patients, which may provide significant insight into the underlying nature of the LBP disorder.

  3. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    Science.gov (United States)

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
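    The following sketch illustrates the general idea of separable (variable projection) least squares on a toy two-exponential model: for each candidate pair of nonlinear rate parameters, the linear amplitudes are solved in closed form, so the exhaustive search only sweeps the reduced nonlinear space. The model, data and grid are invented for illustration and are far simpler than a real multi-tracer compartment model.

```python
import numpy as np
from itertools import product

# Toy model: y(t) = a1*exp(-k1*t) + a2*exp(-k2*t).
# The amplitudes (a1, a2) enter linearly, the rates (k1, k2) nonlinearly.
t = np.linspace(0.1, 60.0, 120)
rng = np.random.default_rng(0)
y = 3.0 * np.exp(-0.05 * t) + 1.5 * np.exp(-0.4 * t) \
    + 0.02 * rng.standard_normal(t.size)

def projected_sse(k1, k2):
    """For fixed nonlinear rates, solve the linear amplitudes in closed form
    and return the residual sum of squares (the 'projected' objective)."""
    B = np.column_stack([np.exp(-k1 * t), np.exp(-k2 * t)])
    a, *_ = np.linalg.lstsq(B, y, rcond=None)
    r = y - B @ a
    return r @ r, a

# Exhaustive search over the (reduced) nonlinear space only.
grid = np.linspace(0.01, 1.0, 100)
best = min((projected_sse(k1, k2) + ((k1, k2),)
            for k1, k2 in product(grid, grid) if k1 < k2),
           key=lambda rec: rec[0])
sse, amplitudes, rates = best
print("rates:", rates, "amplitudes:", amplitudes, "SSE:", sse)
```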

  4. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    Science.gov (United States)

    Sidick, Erkin

    2012-01-01

    This software allows one to down-sample a measured surface height map for model validation, not only without introducing any re-sampling errors, but also eliminating existing measurement noise and measurement errors. The two new techniques implemented in this software tool can be used in all optical model validation processes involving large space optical surfaces.

  5. Models and techniques for evaluating the effectiveness of aircraft computing systems

    Science.gov (United States)

    Meyer, J. F.

    1978-01-01

    Progress in the development of system models and techniques for the formulation and evaluation of aircraft computer system effectiveness is reported. Topics covered include: analysis of functional dependence; a prototype software package, METAPHOR, developed to aid the evaluation of performability; and a comprehensive performability modeling and evaluation exercise involving the SIFT computer.

  6. Applying Modern Techniques and Carrying Out English Extracurricular Activities: On the Model United Nations Activity

    Institute of Scientific and Technical Information of China (English)

    Xu Xiaoyu; Wang Jian

    2004-01-01

    This paper is an introduction to the extracurricular activity of the Model United Nations at Northwestern Polytechnical University (NPU), focusing on the application of modern techniques in the activity and the pedagogical theories applied in it. An interview and questionnaire study reveal the influence of the Model United Nations.

  7. Using Game Theory Techniques and Concepts to Develop Proprietary Models for Use in Intelligent Games

    Science.gov (United States)

    Christopher, Timothy Van

    2011-01-01

    This work is about analyzing games as models of systems. The goal is to understand the techniques that have been used by game designers in the past, and to compare them to the study of mathematical game theory. Through the study of a system or concept a model often emerges that can effectively educate students about making intelligent decisions…

  8. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.; Aziz, Khalid

    2001-08-23

    Research results for the second year of this project on the development of improved modeling techniques for non-conventional (e.g., horizontal, deviated or multilateral) wells were presented. The overall program entails the development of enhanced well modeling and general simulation capabilities. A general formulation for black-oil and compositional reservoir simulation was presented.

  9. Accuracy and reproducibility of dental replica models reconstructed by different rapid prototyping techniques

    NARCIS (Netherlands)

    Hazeveld, Aletta; Huddleston Slater, James J. R.; Ren, Yijin

    INTRODUCTION: Rapid prototyping is a fast-developing technique that might play a significant role in the eventual replacement of plaster dental models. The aim of this study was to investigate the accuracy and reproducibility of physical dental models reconstructed from digital data by several rapid prototyping techniques.

  10. Identification techniques for phenomenological models of hysteresis based on the conjugate gradient method

    Energy Technology Data Exchange (ETDEWEB)

    Andrei, Petru [Electrical and Computer Engineering Department, Florida State University, Tallahassee, FL 32310 (United States) and Electrical and Computer Engineering Department, Florida A and M University, Tallahassee, FL 32310 (United States)]. E-mail: pandrei@eng.fsu.edu; Oniciuc, Liviu [Electrical and Computer Engineering Department, Florida State University, Tallahassee, FL 32310 (United States); Stancu, Alexandru [Faculty of Physics, 'Al. I. Cuza' University, Iasi 700506 (Romania); Stoleriu, Laurentiu [Faculty of Physics, 'Al. I. Cuza' University, Iasi 700506 (Romania)

    2007-09-15

    An identification technique for the parameters of phenomenological models of hysteresis is presented. The basic idea of our technique is to set up a system of equations for the parameters of the model as a function of known quantities on the major or minor hysteresis loops (e.g. coercive force, susceptibilities at various points, remanence), or other magnetization curves. This system of equations can be either over or underspecified and is solved by using the conjugate gradient method. Numerical results related to the identification of parameters in the Energetic, Jiles-Atherton, and Preisach models are presented.
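    A minimal sketch of the identification idea, using scipy's conjugate gradient minimizer to fit parameters to known points on a magnetization curve. The tanh-branch model below is a stand-in chosen for brevity; it is not the Energetic, Jiles-Atherton or Preisach model treated in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy phenomenological model of an ascending hysteresis branch:
# M(H) = Ms * tanh((H - Hc) / a). A stand-in with hysteresis-like
# parameters (saturation Ms, coercive field Hc, shape factor a), not one
# of the models named in the paper.
H = np.linspace(-5.0, 5.0, 50)
M_meas = 1.2 * np.tanh((H - 0.8) / 1.5)  # synthetic "measured" branch

def misfit(p):
    Ms, Hc, a = p
    return np.sum((Ms * np.tanh((H - Hc) / a) - M_meas) ** 2)

# Conjugate gradient solution of the (here over-specified) system of
# equations relating parameters to known points on the loop.
res = minimize(misfit, x0=[1.0, 0.0, 1.0], method="CG")
print("identified (Ms, Hc, a):", res.x)
```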

  11. Application of a systematic finite-element model modification technique to dynamic analysis of structures

    Science.gov (United States)

    Robinson, J. C.

    1982-01-01

    A systematic finite-element model modification technique has been applied to two small problems and a model of the main wing box of a research drone aircraft. The procedure determines the sensitivity of the eigenvalues and eigenvector components to specific structural changes, calculates the required changes and modifies the finite-element model. Good results were obtained where large stiffness modifications were required to satisfy large eigenvalue changes. Sensitivity matrix conditioning problems required the development of techniques to insure existence of a solution and accelerate its convergence. A method is proposed to assist the analyst in selecting stiffness parameters for modification.

  12. Non-linear control logics for vibrations suppression: a comparison between model-based and non-model-based techniques

    Science.gov (United States)

    Ripamonti, Francesco; Orsini, Lorenzo; Resta, Ferruccio

    2015-04-01

    Non-linear behavior is present in many mechanical systems' operating conditions. In these cases, a common engineering practice is to linearize the equation of motion around a particular operating point and to design a linear controller. The main disadvantage is that the stability properties and the validity of the controller are only local. In order to improve controller performance, non-linear control techniques represent a very attractive solution for many smart structures. The aim of this paper is to compare non-linear model-based and non-model-based control techniques. In particular, the model-based sliding-mode control (SMC) technique is considered because of its easy implementation and the strong robustness of the controller even under heavy model uncertainties. Among the non-model-based control techniques, fuzzy control (FC), which allows the controller to be designed according to if-then rules, has been considered. It defines the controller without a reference model of the system, offering advantages such as intrinsic robustness. These techniques have been tested on a nonlinear pendulum system.

  13. Improving predictive mapping of deep-water habitats: Considering multiple model outputs and ensemble techniques

    Science.gov (United States)

    Robert, Katleen; Jones, Daniel O. B.; Roberts, J. Murray; Huvenne, Veerle A. I.

    2016-07-01

    In the deep sea, biological data are often sparse; hence models capturing relationships between observed fauna and environmental variables (acquired via acoustic mapping techniques) are often used to produce full-coverage species assemblage maps. Many statistical modelling techniques are being developed, but there remains a need to determine the most appropriate mapping techniques. Predictive habitat modelling approaches (redundancy analysis, maximum entropy and random forest) were applied to a heterogeneous section of seabed on Rockall Bank, NE Atlantic, for which landscape indices describing the spatial arrangement of habitat patches were calculated. The predictive maps were based on remotely operated vehicle (ROV) imagery transects and high-resolution autonomous underwater vehicle (AUV) sidescan backscatter maps. Area under the curve (AUC) and accuracy indicated similar performances for the three models tested, but performance varied by species assemblage, with the transitional species assemblage showing the weakest predictive performances. Spatial predictions of habitat suitability differed between statistical approaches, but niche similarity metrics showed redundancy analysis and random forest predictions to be most similar. As one statistical technique could not be found to outperform the others when all assemblages were considered, ensemble mapping techniques, where the outputs of many models are combined, were applied. They showed higher accuracy than any single model. Different statistical approaches for predictive habitat modelling possess varied strengths and weaknesses, and by examining the outputs of a range of modelling techniques and their differences, more robust predictions, with better described variation and areas of uncertainties, can be achieved. As improvements to prediction outputs can be achieved without additional costly data collection, ensemble mapping approaches have clear value for spatial management.
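    A minimal sketch of the ensemble step: given suitability maps from several models and a per-model skill score, a weighted mean gives the ensemble prediction and the between-model spread gives a simple uncertainty layer. The maps and AUC values are randomly generated placeholders.

```python
import numpy as np

# Hypothetical habitat-suitability rasters (values in [0, 1]) predicted by
# three models for the same grid, plus their evaluation scores (e.g. AUC).
rda_map, maxent_map, rf_map = (np.random.rand(200, 300) for _ in range(3))
auc = np.array([0.78, 0.81, 0.84])  # made-up AUCs for RDA, MaxEnt, RF

stack = np.stack([rda_map, maxent_map, rf_map])

# AUC-weighted ensemble mean, plus the between-model spread as a simple
# per-cell uncertainty layer.
weights = auc / auc.sum()
ensemble = np.tensordot(weights, stack, axes=1)
uncertainty = stack.std(axis=0)
print(ensemble.shape, uncertainty.mean())
```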

  14. An experimental comparison of modelling techniques for speaker recognition under limited data condition

    Indian Academy of Sciences (India)

    H S Jayanna; S R Mahadeva Prasanna

    2009-10-01

    Most of the existing modelling techniques for the speaker recognition task make an implicit assumption of sufficient data for speaker modelling and hence may lead to poor modelling under limited data condition. The present work gives an experimental evaluation of the modelling techniques like Crisp Vector Quantization (CVQ), Fuzzy Vector Quantization (FVQ), Self-Organizing Map (SOM), Learning Vector Quantization (LVQ), and Gaussian Mixture Model (GMM) classifiers. An experimental evaluation of the most widely used Gaussian Mixture Model–Universal Background Model (GMM–UBM) is also made. The experimental knowledge is then used to select a subset of classifiers for obtaining the combined classifiers. It is proposed that the combined LVQ and GMM–UBM classifier provides relatively better performance compared to all the individual as well as combined classifiers.

  15. A quantitative comparison of the TERA modeling and DFT magnetic resonance image reconstruction techniques.

    Science.gov (United States)

    Smith, M R; Nichols, S T; Constable, R T; Henkelman, R M

    1991-05-01

    The resolution of magnetic resonance images reconstructed using the discrete Fourier transform (DFT) algorithm is limited by the effective window generated by the finite data length. The transient error reconstruction approach (TERA) is an alternative reconstruction method based on autoregressive moving average (ARMA) modeling techniques. Quantitative measurements comparing the truncation artifacts present during DFT and TERA image reconstruction show that the modeling method substantially reduces these artifacts on "full" (256 X 256), "truncated" (256 X 192), and "severely truncated" (256 X 128) data sets without introducing the global amplitude distortion found in other modeling techniques. Two global measures for determining the success of modeling are suggested. Problem areas for one-dimensional modeling are examined and reasons for considering two-dimensional modeling discussed. Analyses of both medical and phantom data reconstructions are presented.

  16. Automatic parameter extraction techniques in IC-CAP for a compact double gate MOSFET model

    Science.gov (United States)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-05-01

    In this paper, automatic parameter extraction techniques of Agilent's IC-CAP modeling package are presented to extract our explicit compact model parameters. This model is developed based on a surface potential model and coded in Verilog-A. The model has been adapted to Trigate MOSFETs, includes short channel effects (SCEs) and allows accurate simulations of the device characteristics. The parameter extraction routines provide an effective way to extract the model parameters. The techniques minimize the discrepancy and error between the simulation results and the available experimental data for more accurate parameter values and reliable circuit simulation. Behavior of the second derivative of the drain current is also verified and proves to be accurate and continuous through the different operating regimes. The results show good agreement with measured transistor characteristics under different conditions and through all operating regimes.

  17. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling

    DEFF Research Database (Denmark)

    Dotto, C. B.; Mannina, G.; Kleidorfer, M.

    2012-01-01

    The aim of this paper is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multiobjective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside ...

  18. Q-DPM: An Efficient Model-Free Dynamic Power Management Technique

    CERN Document Server

    Li, Min; Yao, Richard; Yan, Xiaolang

    2011-01-01

    When applying the Dynamic Power Management (DPM) technique to pervasively deployed embedded systems, the technique needs to be very efficient so that it is feasible to implement it on low-end processors with a tight memory budget. Furthermore, it should have the capability to track time-varying behavior rapidly, because such variation is an inherent characteristic of real-world systems. Existing methods, which are usually model-based, may not satisfy the aforementioned requirements. In this paper, we propose a model-free DPM technique based on Q-learning. Q-DPM is much more efficient because it removes the overhead of the parameter estimator and the mode-switch controller. Furthermore, its policy optimization is performed via consecutive online trials, which also leads to very rapid response to time-varying behavior.
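    The sketch below shows the kind of tabular Q-learning update such a model-free power manager could use, with idle-time bins as states and sleep/stay as actions; the cost numbers and the random idle-time transitions are invented for illustration and are not the paper's formulation.

```python
import random

# Minimal tabular Q-learning for a two-mode power manager: states are
# discretised idle-time bins, actions are "stay awake" or "sleep". Costs
# (energy plus wake-up penalty) are illustrative numbers only.
N_BINS, ACTIONS = 8, ("stay", "sleep")
Q = {(s, a): 0.0 for s in range(N_BINS) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def cost(state, action):
    # stay: pay idle power proportional to idle time; sleep: fixed wake-up cost
    return 0.5 * state if action == "stay" else 1.5

def choose(state):
    if random.random() < eps:                         # explore
        return random.choice(ACTIONS)
    return min(ACTIONS, key=lambda a: Q[(state, a)])  # exploit (minimise cost)

state = 0
for _ in range(10000):
    action = choose(state)
    c = cost(state, action)
    next_state = random.randrange(N_BINS)   # stand-in for the observed idle bin
    # Q-learning update (costs, so we take the min over next actions)
    best_next = min(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (c + gamma * best_next - Q[(state, action)])
    state = next_state

print({s: min(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_BINS)})
```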

  19. Message Structures: a modelling technique for information systems analysis and design

    CERN Document Server

    España, Sergio; Pastor, Óscar; Ruiz, Marcela

    2011-01-01

    Despite the increasing maturity of model-driven software development (MDD), some research challenges remain open in the field of information systems (IS). For instance, there is a need to improve modelling techniques so that they cover several development stages in an integrated way, and so that they facilitate the transition from analysis to design. This paper presents Message Structures, a technique for the specification of communicative interactions between the IS and organisational actors. This technique can be used both in the analysis stage and in the design stage. During analysis, it allows abstracting from the technology that will support the IS, and complementing business process diagramming techniques with the specification of the communicational needs of the organisation. During design, Message Structures serves two purposes: (i) it allows a specification of the IS memory (e.g. a UML class diagram) to be derived systematically, and (ii) it allows the user interface design to be reasoned about using abstract patterns. Thi...

  1. Fluid-Structure Interaction in Abdominal Aortic Aneurysm: Effect of Modeling Techniques.

    Science.gov (United States)

    Lin, Shengmao; Han, Xinwei; Bi, Yonghua; Ju, Siyeong; Gu, Linxia

    2017-01-01

    In this work, the impact of modeling techniques on predicting the mechanical behaviors of abdominal aortic aneurysm (AAA) is systematically investigated. The fluid-structure interaction (FSI) model for simultaneously capturing the transient interaction between blood flow dynamics and wall mechanics was compared with its simplified techniques, that is, computational fluid dynamics (CFD) or computational solid stress (CSS) model. Results demonstrated that CFD exhibited relatively smaller vortexes and tends to overestimate the fluid wall shear stress, compared to FSI. On the contrary, the minimal differences in wall stresses and deformation were observed between FSI and CSS models. Furthermore, it was found that the accuracy of CSS prediction depends on the applied pressure profile for the aneurysm sac. A large pressure drop across AAA usually led to the underestimation of wall stresses and thus the AAA rupture. Moreover, the assumed isotropic AAA wall properties, compared to the anisotropic one, will aggravate the difference between the simplified models with the FSI approach. The present work demonstrated the importance of modeling techniques on predicting the blood flow dynamics and wall mechanics of the AAA, which could guide the selection of appropriate modeling technique for significant clinical implications.

  3. An effectiveness-NTU technique for characterising a finned tubes PCM system using a CFD model

    OpenAIRE

    Tay, N. H. Steven; Belusko, M.; Castell, Albert; Cabeza, Luisa F.; Bruno, F.

    2014-01-01

    Numerical modelling is commonly used to design, analyse and optimise tube-in-tank phase change thermal energy storage systems with fins. A new simplified two-dimensional mathematical model, based on the effectiveness-number of transfer units (NTU) technique, has been developed to characterise tube-in-tank phase change material systems with radial round fins. The model applies an empirically derived P factor which defines the proportion of the heat flow which is parallel and isothermal ...
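    For orientation, the sketch below applies the classic effectiveness-NTU relation for the limiting case of an isothermal (phase-changing) PCM side, eps = 1 - exp(-NTU); all numerical values, and the omission of the paper's empirical P factor, are simplifying assumptions.

```python
import math

# Effectiveness-NTU estimate for a single tube in a PCM tank. While the PCM
# is changing phase it is roughly isothermal, which corresponds to the
# zero-capacity-ratio case: eps = 1 - exp(-NTU). All values are illustrative.
m_dot, cp = 0.05, 4180.0   # HTF mass flow [kg/s], specific heat [J/(kg K)]
UA = 150.0                 # overall conductance of tube plus fins [W/K]
T_in, T_pcm = 70.0, 58.0   # HTF inlet and PCM melting temperature [C]

NTU = UA / (m_dot * cp)
eps = 1.0 - math.exp(-NTU)             # effectiveness for an isothermal PCM
q = eps * m_dot * cp * (T_in - T_pcm)  # heat delivered to the PCM [W]
T_out = T_in - eps * (T_in - T_pcm)
print(f"NTU={NTU:.2f}  eps={eps:.2f}  q={q:.0f} W  T_out={T_out:.1f} C")
```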

  4. Some meta-modeling and optimization techniques for helicopter pre-sizing.

    OpenAIRE

    Tremolet, A.; Basset, P.M.

    2012-01-01

    Optimization and meta-models are key elements of modern engineering techniques. The Multidisciplinary Design Optimization (MDO) allows solving strongly coupled physical problems aiming at the global system optimization. For these multidisciplinary optimizations, meta-models can be required as surrogates for complex and high computational cost codes. Meta-modeling is also used for catching general trends and underlying relationships between parameters within a database. The application of thes...

  5. Study on ABCD Analysis Technique for Business Models, business strategies, Operating Concepts & Business Systems

    OpenAIRE

    Aithal, Sreeramana

    2016-01-01

    When studying the implications of a business model, choosing success strategies, developing viable operational concepts or evolving a functional system, it is important to analyse it in all dimensions. For this purpose, various analysing techniques/frameworks are used. This paper discusses how to use an innovative analysing framework called the ABCD model on a given business model, business strategy, operational concept/idea or business system. Based on four constructs Advantages,...

  6. An Improved Technique Based on Firefly Algorithm to Estimate the Parameters of the Photovoltaic Model

    Directory of Open Access Journals (Sweden)

    Issa Ahmed Abed

    2016-12-01

    This paper presents a method to enhance the firefly algorithm by coupling it with a local search. The constructed technique is applied to identify the parameters of the photovoltaic model, where the method proved its ability to obtain the model parameters. The standard firefly algorithm (FA), the electromagnetism-like (EM) algorithm, and the electromagnetism-like algorithm without local search (EMW) are all compared with the suggested method to test its capability to solve this model.
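    A minimal sketch of the standard firefly move that such a hybrid builds on: each firefly drifts toward brighter ones with attractiveness beta0*exp(-gamma*r^2) plus a random perturbation. The sphere objective stands in for the photovoltaic parameter-identification problem, and the local-search coupling is only indicated by a comment.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):          # stand-in objective; the paper fits a PV model instead
    return np.sum(x ** 2)

# Core firefly move: each firefly is pulled toward every brighter one with
# attractiveness beta0*exp(-gamma*r^2), plus a small random walk.
n, dim, beta0, gamma, alpha = 20, 3, 1.0, 1.0, 0.05
X = rng.uniform(-5, 5, (n, dim))
for _ in range(200):
    f = np.array([sphere(x) for x in X])  # brightness refreshed once per sweep
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:               # firefly j is brighter (lower cost)
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
    # a local search step around the current best could be inserted here,
    # which is the kind of hybridisation the paper proposes

best = X[np.argmin([sphere(x) for x in X])]
print("best solution:", best)
```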

  7. Numerical Time-Domain Modeling of Lamb Wave Propagation Using Elastodynamic Finite Integration Technique

    OpenAIRE

    Hussein Rappel; Aghil Yousefi-Koma; Jalil Jamali; Ako Bahari

    2014-01-01

    This paper presents a numerical model of Lamb wave propagation in a homogeneous steel plate using the elastodynamic finite integration technique (EFIT), as well as its validation with analytical results. The Lamb wave method is a long-range inspection technique which is considered to have a promising future in the field of structural health monitoring. One of the main problems facing the Lamb wave method is how to choose the most appropriate frequency to generate the waves for adequate transmission capab...

  8. AN ACCURACY ASSESSMENT OF AUTOMATED PHOTOGRAMMETRIC TECHNIQUES FOR 3D MODELING OF COMPLEX INTERIORS

    OpenAIRE

    Georgantas, A.; Brédif, M.; Pierrot-Desseilligny, M.

    2012-01-01

    This paper presents a comparison of automatic photogrammetric techniques to terrestrial laser scanning for 3D modelling of complex interior spaces. We try to evaluate the automated photogrammetric techniques not only in terms of their geometric quality compared to laser scanning, but also in terms of monetary cost and acquisition and computational time. To this purpose we chose a modern building's stairway as test site. APERO/MICMAC (©IGN), which is an open-source photogrammetric softwar...

  10. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.

    Science.gov (United States)

    Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model.

  11. Antenna pointing system for satellite tracking based on Kalman filtering and model predictive control techniques

    Science.gov (United States)

    Souza, André L. G.; Ishihara, João Y.; Ferreira, Henrique C.; Borges, Renato A.; Borges, Geovany A.

    2016-12-01

    The present work proposes a new approach for an antenna pointing system for satellite tracking. Such a system uses the received signal to estimate the beam pointing deviation and then adjusts the antenna pointing. The present work has two contributions. First, the estimation is performed by a Kalman filter based conical scan technique. This technique uses the Kalman filter, avoiding the batch estimator, and applies a mathematical manipulation avoiding the linearization approximations. Second, a control technique based on model predictive control, together with an explicit state feedback solution, is obtained in order to reduce the computational burden. Numerical examples illustrate the results.
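    A one-dimensional Kalman filter sketch of the estimation step: a random-walk model for the pointing error, updated from noisy (already linearised) conical-scan readings. The noise variances and the measurement model are assumed for illustration; the paper's filter avoids linearisation approximations and is more elaborate.

```python
import numpy as np

# Minimal 1-D Kalman filter tracking a slowly drifting pointing error from
# noisy conical-scan amplitude readings. Model and noise levels are assumed.
F, H = 1.0, 1.0      # random-walk state, direct (linearised) measurement
Q, R = 1e-5, 4e-3    # process and measurement noise variances (assumed)
x_hat, P = 0.0, 1.0  # initial estimate and covariance

rng = np.random.default_rng(2)
true_err = 0.02
for k in range(200):
    z = true_err + rng.normal(0.0, np.sqrt(R))  # synthetic measurement
    # predict
    x_hat, P = F * x_hat, F * P * F + Q
    # update
    K = P * H / (H * P * H + R)
    x_hat += K * (z - H * x_hat)
    P *= (1.0 - K * H)

print(f"estimated pointing error: {x_hat:.4f} (true {true_err})")
```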

  12. A comparative assessment of efficient uncertainty analysis techniques for environmental fate and transport models: application to the FACT model

    Science.gov (United States)

    Balakrishnan, Suhrid; Roy, Amit; Ierapetritou, Marianthi G.; Flach, Gregory P.; Georgopoulos, Panos G.

    2005-06-01

    This work presents a comparative assessment of efficient uncertainty modeling techniques, including Stochastic Response Surface Method (SRSM) and High Dimensional Model Representation (HDMR). This assessment considers improvement achieved with respect to conventional techniques of modeling uncertainty (Monte Carlo). Given that traditional methods for characterizing uncertainty are very computationally demanding, when they are applied in conjunction with complex environmental fate and transport models, this study aims to assess how accurately these efficient (and hence viable) techniques for uncertainty propagation can capture complex model output uncertainty. As a part of this effort, the efficacy of HDMR, which has primarily been used in the past as a model reduction tool, is also demonstrated for uncertainty analysis. The application chosen to highlight the accuracy of these new techniques is the steady state analysis of the groundwater flow in the Savannah River Site General Separations Area (GSA) using the subsurface Flow And Contaminant Transport (FACT) code. Uncertain inputs included three-dimensional hydraulic conductivity fields, and a two-dimensional recharge rate field. The output variables under consideration were the simulated stream baseflows and hydraulic head values. Results show that the uncertainty analysis outcomes obtained using SRSM and HDMR are practically indistinguishable from those obtained using the conventional Monte Carlo method, while requiring orders of magnitude fewer model simulations.

  13. Double-wire sternal closure technique in bovine animal models for total artificial heart implant.

    Science.gov (United States)

    Karimov, Jamshid H; Sunagawa, Gengo; Golding, Leonard A R; Moazami, Nader; Fukamachi, Kiyotaka

    2015-08-01

    In vivo preclinical testing of mechanical circulatory devices requires large animal models that provide reliable physiological and hemodynamic conditions by which to test the device and investigate design and development strategies. Large bovine species are commonly used for mechanical circulatory support device research. The animals used for chronic in vivo support require high-quality care and excellent surgical techniques, as well as advanced methods of postoperative care. These techniques are constantly being updated and new methods are emerging. We report results of our double steel-wire closure technique in large bovine models used for Cleveland Clinic's continuous-flow total artificial heart development program. This is the first report of double-wire sternal fixation used in large bovine models.

  14. Determination of Complex-Valued Parametric Model Coefficients Using Artificial Neural Network Technique

    Directory of Open Access Journals (Sweden)

    A. M. Aibinu

    2010-01-01

    A new approach for determining the coefficients of a complex-valued autoregressive (CAR) model and a complex-valued autoregressive moving average (CARMA) model using the complex-valued neural network (CVNN) technique is discussed in this paper. The CAR and complex-valued moving average (CMA) coefficients which constitute a CARMA model are computed simultaneously from the adaptive weights and coefficients of the linear activation functions in a two-layered CVNN. The performance of the proposed technique has been evaluated using simulated complex-valued data (CVD) with three different types of activation functions. The results show that the proposed method can accurately determine the model coefficients provided that the network is properly trained. Furthermore, application of the developed CVNN-based technique to MRI k-space reconstruction results in images with improved resolution.
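    For orientation on the underlying model (not the CVNN estimator of the paper), the sketch below generates a complex-valued AR(2) series and recovers its coefficients by ordinary least squares, which numpy handles directly for complex data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic CAR(2) process with complex coefficients and complex noise.
a_true = np.array([0.5 + 0.3j, -0.2 + 0.1j])
N = 500
x = np.zeros(N, dtype=complex)
for n in range(2, N):
    noise = (rng.standard_normal() + 1j * rng.standard_normal()) * 0.05
    x[n] = a_true[0] * x[n - 1] + a_true[1] * x[n - 2] + noise

# Least-squares estimate of the CAR coefficients:
# x[n] ~ a1*x[n-1] + a2*x[n-2]; lstsq supports complex arrays directly.
A = np.column_stack([x[1:-1], x[:-2]])
b = x[2:]
a_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("true:", a_true)
print("estimated:", a_hat)
```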

  15. Forecasting performances of three automated modelling techniques during the economic crisis 2007-2009

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    2014-01-01

    In this work we consider the forecasting of macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feed-forward autoregressive neural network models. What makes these models interesting in the present context is the fact that their specification can be turned into a linear model selection and estimation problem. To this end, we employ three automatic modelling devices. One of them is White's QuickNet, but we also consider Autometrics, which is well known to time series econometricians, and the Marginal Bridge Estimator, which is better known to statisticians. The performances of these three model selectors are compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series from the G7 countries and the four Scandinavian ones, and focus on forecasting ...

  16. Modelling of pulverized coal boilers: review and validation of on-line simulation techniques

    Energy Technology Data Exchange (ETDEWEB)

    Diez, L.I.; Cortes, C.; Campo, A. [University of Zaragoza, Zaragoza (Spain). Centro de Investigacion de Recursos y Consumos Energeticos (CIRCE)]

    2005-07-01

    Thermal modelling of large pulverized fuel utility boilers has reached a remarkable level of development, through the application of CFD techniques and other advanced mathematical methods. However, due to the computational requirements, on-line monitoring and simulation tools still rely on lumped models and semi-empirical approaches, which are often strongly simplified and not well connected with a sound theoretical basis. This paper reviews on-line modelling techniques, aiming at the improvement of their capabilities by means of the revision and modification of conventional lumped models and the integration of off-line CFD predictions. The paper illustrates the coherence of the monitoring calculations as well as the validation of the on-line thermal simulator, starting from real operation data from a case-study unit. The outcome is that it is possible to significantly improve the accuracy of on-line calculations provided by conventional models, taking into account the singularities of large combustion systems and coupling off-line CFD predictions for selected scenarios.

  17. Kerf modelling in abrasive waterjet milling using evolutionary computation and ANOVA techniques

    Science.gov (United States)

    Alberdi, A.; Rivero, A.; Carrascal, A.; Lamikiz, A.

    2012-04-01

    Many researchers have demonstrated the capability of Abrasive Waterjet (AWJ) technology for precision milling operations. However, the concurrence of several input parameters, along with the stochastic nature of this technology, leads to a complex process control, which requires work focused on process modelling. This research work introduces a model to predict the kerf shape in AWJ slot milling of Aluminium 7075-T651 in terms of four important process parameters: the pressure, the abrasive flow rate, the stand-off distance and the traverse feed rate. A hybrid evolutionary approach was employed for kerf shape modelling. This technique allowed characterizing the profile through two parameters: the maximum cutting depth and the full width at half maximum. On the other hand, based on ANOVA and regression techniques, these two parameters were also modelled as a function of the process parameters. Combination of both models resulted in an adequate strategy to predict the kerf shape for different machining conditions.
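    As a concrete reading of the two profile parameters, the sketch below assumes a Gaussian kerf cross-section, which is fully determined by the maximum cutting depth and the full width at half maximum; the Gaussian shape and the numbers are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def kerf_profile(x_mm, depth_max_mm, fwhm_mm):
    """Gaussian-shaped kerf cross-section parameterised by the two
    quantities used in the paper: maximum cutting depth and full width at
    half maximum. The Gaussian shape itself is an assumption."""
    sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return depth_max_mm * np.exp(-0.5 * (x_mm / sigma) ** 2)

# Depth across the slot for made-up values of the two profile parameters.
x = np.linspace(-1.0, 1.0, 11)
print(kerf_profile(x, depth_max_mm=0.35, fwhm_mm=0.6).round(3))
```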

  18. MODELING AND COMPENSATION TECHNIQUE FOR THE GEOMETRIC ERRORS OF FIVE-AXIS CNC MACHINE TOOLS

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    One of the important trends in precision machining is the development of real-time error compensation techniques. Error compensation for multi-axis CNC machine tools is very difficult and attractive. A model for the geometric errors of five-axis CNC machine tools based on multi-body systems is proposed, and the key technique of the compensation, identifying the geometric error parameters, is developed. The simulation of cutting a workpiece to verify the model based on multi-body systems is also considered.

  19. A Review of Domain Modelling and Domain Imaging Techniques in Ferroelectric Crystals

    Directory of Open Access Journals (Sweden)

    John E. Huber

    2011-02-01

    The present paper reviews models of domain structure in ferroelectric crystals, thin films and bulk materials. Common crystal structures in ferroelectric materials are described and the theory of compatible domain patterns is introduced. Applications to multi-rank laminates are presented. Alternative models employing phase-field and related techniques are reviewed. The paper then presents methods of observing ferroelectric domain structure, including optical and polarized light microscopy, scanning electron microscopy, X-ray and neutron diffraction, atomic force microscopy and piezo-force microscopy. The use of more than one technique for unambiguous identification of the domain structure is also described.

  20. Modelling the potential spatial distribution of mosquito species using three different techniques.

    Science.gov (United States)

    Cianci, Daniela; Hartemink, Nienke; Ibáñez-Justicia, Adolfo

    2015-02-27

    Models for the spatial distribution of vector species are important tools in the assessment of the risk of establishment and subsequent spread of vector-borne diseases. The aims of this study are to define the environmental conditions suitable for several mosquito species through species distribution modelling techniques, and to compare the results produced with the different techniques. Three different modelling techniques, i.e., non-linear discriminant analysis, random forest and generalised linear model, were used to investigate the environmental suitability in the Netherlands for three indigenous mosquito species (Culiseta annulata, Anopheles claviger and Ochlerotatus punctor). Results obtained with the three statistical models were compared with regard to: (i) environmental suitability maps, (ii) environmental variables associated with occurrence, (iii) model evaluation. The models indicated that precipitation, temperature and population density were associated with the occurrence of Cs. annulata and An. claviger, whereas land surface temperature and vegetation indices were associated with the presence of Oc. punctor. The maps produced with the three different modelling techniques showed consistent spatial patterns for each species, but differences in the ranges of the predictions. Non-linear discriminant analysis had lower predictions than other methods. The model with the best classification skills for all the species was the random forest model, with specificity values ranging from 0.89 to 0.91, and sensitivity values ranging from 0.64 to 0.95. We mapped the environmental suitability for three mosquito species with three different modelling techniques. For each species, the maps showed consistent spatial patterns, but the level of predicted environmental suitability differed; NLDA gave lower predicted probabilities of presence than the other two methods. The variables selected as important in the models were in agreement with the existing knowledge about
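    A minimal sketch of one of the three approaches, a random forest suitability model: fit presence/absence against environmental covariates, then read environmental suitability off the predicted presence probability. The data, covariates and settings are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

# Hypothetical presence/absence records with environmental covariates
# (e.g. precipitation, temperature, population density), mimicking the
# kind of setup used for Cs. annulata in the paper.
X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)
print("OOB accuracy:", rf.oob_score_)

# Environmental suitability for new grid cells = predicted probability
# of presence.
grid = rng.normal(size=(5, 3))
print("suitability:", rf.predict_proba(grid)[:, 1])
```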

  1. NEW TECHNIQUE FOR OBESITY SURGERY: INTERNAL GASTRIC PLICATION TECHNIQUE USING INTRAGASTRIC SINGLE-PORT (IGS-IGP) IN EXPERIMENTAL MODEL.

    Science.gov (United States)

    Müller, Verena; Fikatas, Panagiotis; Gül, Safak; Noesser, Maximilian; Fuehrer, Kirsten; Sauer, Igor; Pratschke, Johann; Zorron, Ricardo

    2017-01-01

    Bariatric surgery is currently the most effective method to ameliorate comorbidities in morbidly obese patients with BMI over 35 kg/m2. Endoscopic techniques have been developed to treat patients with mild obesity and ameliorate comorbidities, but endoscopic skills are needed, besides the costs of the devices. To report a new technique for internal gastric plication using an intragastric single-port device in an experimental swine model. Twenty experiments using fresh pig cadaver stomachs in a laparoscopic trainer were performed. The procedure was performed as follows in ten pigs: 1) volume measurement; 2) insufflation of the stomach with CO2; 3) extroversion of the stomach through the simulator and installation of the single-port device (Gelpoint Applied Mini) through a gastrotomy close to the pylorus; 4) performance of four intragastric handsewn 4-point sutures with Prolene 2-0, from the gastric fundus to the antrum; 5) measurement of the residual volume. Sleeve gastrectomy was also performed in a further ten pigs, and pre- and post-procedure gastric volumes were measured. The internal gastric plication technique was performed successfully in the ten swine experiments. The mean procedure time was 27±4 min. The plication produced a mean gastric volume reduction of 51%, versus a mean of 90% for sleeve gastrectomy in this swine model. The internal gastric plication technique using an intragastric single-port device required few skills to perform, had low operative time and achieved a good reduction (51%) of gastric volume in an in vitro experimental model.

  3. A Hybrid Model for the Mid-Long Term Runoff Forecasting by Evolutionary Computation Techniques

    Institute of Scientific and Technical Information of China (English)

    Zou Xiu-fen; Kang Li-shan; Cao Hong-qing; Wu Zhi-jian

    2003-01-01

    Mid-long term hydrology forecasting is one of the most challenging problems in hydrological studies. This paper proposes an efficient dynamical system prediction model using evolutionary computation techniques. The new model overcomes some disadvantages of conventional hydrology forecasting models. The observed data are divided into two parts: the slow "smooth and steady" data, and the fast "coarse and fluctuation" data. Under the divide-and-conquer strategy, the behavior of the smooth data is modeled by ordinary differential equations based on evolutionary modeling, and that of the coarse data is modeled using the gray correlative forecasting method. Our model is verified on the test data of the mid-long term hydrology forecast in the northeast region of China. The experimental results show that the model is superior to the gray system prediction model (GSPM).

  4. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J. (comps.)

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.

  5. Hybrid models for hydrological forecasting: integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following topics.

  9. Machine learning techniques for astrophysical modelling and photometric redshift estimation of quasars in optical sky surveys

    CERN Document Server

    Kumar, N Daniel

    2008-01-01

    Machine learning techniques are utilised in several areas of astrophysical research today. This dissertation addresses the application of ML techniques to two classes of problems in astrophysics, namely, the analysis of individual astronomical phenomena over time and the automated, simultaneous analysis of thousands of objects in large optical sky surveys. Specifically investigated are (1) techniques to approximate the precise orbits of the satellites of Jupiter and Saturn given Earth-based observations as well as (2) techniques to quickly estimate the distances of quasars observed in the Sloan Digital Sky Survey. Learning methods considered include genetic algorithms, particle swarm optimisation, artificial neural networks, and radial basis function networks. The first part of this dissertation demonstrates that GAs and PSOs can both be efficiently used to model functions that are highly non-linear in several dimensions. It is subsequently demonstrated in the second part that ANNs and RBFNs can be used as ef...

  10. Evaluation of mesh morphing and mapping techniques in patient specific modelling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2012-08-01

    Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model.

  11. Comparative analysis of system identification techniques for nonlinear modeling of the neuron-microelectrode junction.

    Science.gov (United States)

    Khan, Saad Ahmad; Thakore, Vaibhav; Behal, Aman; Bölöni, Ladislau; Hickman, James J

    2013-03-01

    Applications of non-invasive neuroelectronic interfacing in the fields of whole-cell biosensing, biological computation and neural prosthetic devices depend critically on an efficient decoding and processing of information retrieved from a neuron-electrode junction. This necessitates development of mathematical models of the neuron-electrode interface that realistically represent the extracellular signals recorded at the neuroelectronic junction without being computationally expensive. Extracellular signals recorded using planar microelectrode or field effect transistor arrays have, until now, primarily been represented using linear equivalent circuit models that fail to reproduce the correct amplitude and shape of the signals recorded at the neuron-microelectrode interface. In this paper, to explore viable alternatives for a computationally inexpensive and efficient modeling of the neuron-electrode junction, input-output data from the neuron-electrode junction is modeled using a parametric Wiener model and a Nonlinear Auto-Regressive network with eXogenous input trained using a dynamic Neural Network model (NARX-NN model). Results corresponding to a validation dataset from these models are then employed to compare and contrast the computational complexity and efficiency of the aforementioned modeling techniques with the Lee-Schetzen technique of cross-correlation for estimating a nonlinear dynamic model of the neuroelectronic junction.

  12. Modeling of PV Systems Based on Inflection Points Technique Considering Reverse Mode

    Directory of Open Access Journals (Sweden)

    Bonie J. Restrepo-Cuestas

    2013-11-01

    This paper proposes a methodology for photovoltaic (PV) systems modeling that considers their behavior in both direct and reverse operating modes and under mismatching conditions. The proposed methodology is based on the inflection points technique, with a linear approximation to model the bypass diode and a simplified PV model. The proposed mathematical model makes it possible to evaluate the energetic performance of a PV system, exhibiting short simulation times even for large PV systems. In addition, the methodology can estimate the condition of modules affected by partial shading, since the power dissipated due to operation in the second quadrant can be determined.

  13. STATISTICAL INFERENCES FOR VARYING-COEFFICIENT MODELS BASED ON LOCALLY WEIGHTED REGRESSION TECHNIQUE

    Institute of Scientific and Technical Information of China (English)

    梅长林; 张文修; 梁怡

    2001-01-01

    Some fundamental issues of statistical inference relating to varying-coefficient regression models are addressed and studied. An exact testing procedure is proposed for checking the goodness of fit of a varying-coefficient model fitted by the locally weighted regression technique versus an ordinary linear regression model. Also, an appropriate statistic for testing variation of model parameters over the locations where the observations are collected is constructed, and a formal testing approach, which is essential to exploring spatial non-stationarity in geographic science, is suggested.

  14. Application of Discrete Fracture Modeling and Upscaling Techniques to Complex Fractured Reservoirs

    Science.gov (United States)

    Karimi-Fard, M.; Lapene, A.; Pauget, L.

    2012-12-01

    During the last decade, an important effort has been made to improve data acquisition (seismic and borehole imaging) and workflows for reservoir characterization, which has greatly benefited the description of fractured reservoirs. However, the geological models resulting from the interpretations need to be validated or calibrated against dynamic data. Flow modeling in fractured reservoirs remains a challenge due to the difficulty of representing mass transfers at different heterogeneity scales. The majority of the existing approaches are based on dual continuum representation, where the fracture network and the matrix are represented separately and their interactions are modeled using transfer functions. These models are usually based on idealized representations of the fracture distribution, which makes the integration of real data difficult. In recent years, due to increases in computer power, discrete fracture modeling (DFM) techniques are becoming popular. In these techniques the fractures are represented explicitly, allowing the direct use of data. In this work we consider the DFM technique developed by Karimi-Fard et al. [1], which is based on an unstructured finite-volume discretization. The mass flux between two adjacent control-volumes is evaluated using an optimized two-point flux approximation. The result of the discretization is a list of control-volumes with the associated pore-volumes and positions, and a list of connections with the associated transmissibilities. Fracture intersections are simplified using a connectivity transformation, which contributes considerably to the efficiency of the methodology. In addition, the method is designed for general-purpose simulators, and any connectivity-based simulator can be used for flow simulations. The DFM technique is either used standalone or as part of an upscaling technique. Upscaling techniques are required for large reservoirs where the explicit representation of all fractures and faults is not possible.
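
    A sketch of the connection-list construction mentioned above; the harmonic two-point flux combination is standard, but the permeabilities, interface area and distances below are illustrative assumptions:

      def half_transmissibility(perm, area, dist):
          # One-sided k*A/d term for a control-volume next to a connection.
          return perm * area / dist

      def connection_transmissibility(k_i, k_j, area, d_i, d_j):
          """Harmonic combination of the two half-transmissibilities, in the
          spirit of a two-point flux approximation between control-volumes."""
          a_i = half_transmissibility(k_i, area, d_i)
          a_j = half_transmissibility(k_j, area, d_j)
          return a_i * a_j / (a_i + a_j)

      # A low-permeability matrix cell against a high-permeability fracture cell:
      T = connection_transmissibility(k_i=1e-15, k_j=1e-12, area=2.0, d_i=0.5, d_j=1e-3)
      print(T)  # one entry of the connection/transmissibility list fed to a simulator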

  15. Animal models in bariatric surgery--a review of the surgical techniques and postsurgical physiology.

    Science.gov (United States)

    Rao, Raghavendra S; Rao, Venkatesh; Kini, Subhash

    2010-09-01

    Bariatric surgery is considered the most effective current treatment for morbid obesity. Since the first publication of an article by Kremen, Linner, and Nelson, many experiments have been performed using animal models. The initial experiments used only malabsorptive procedures like intestinal bypass, which have largely been abandoned now. These experimental models have been used to assess feasibility and safety as well as to refine techniques particular to each procedure. We will discuss the surgical techniques and the postsurgical physiology of the four major current bariatric procedures (namely, Roux-en-Y gastric bypass, gastric banding, sleeve gastrectomy, and biliopancreatic diversion). We have also reviewed the anatomy and physiology of animal models. We have reviewed the literature and presented it so that it can serve as a reference for investigators interested in animal experiments in bariatric surgery. Experimental animal models are further divided into two categories: large mammals, including dogs, cats, rabbits, and pigs, and small mammals, including rats and mice.

  16. A technique using a nonlinear helicopter model for determining trims and derivatives

    Science.gov (United States)

    Ostroff, A. J.; Downing, D. R.; Rood, W. J.

    1976-01-01

    A technique is described for determining the trims and quasi-static derivatives of a flight vehicle for use in a linear perturbation model; both the coupled and uncoupled forms of the linear perturbation model are included. Since this technique requires a nonlinear vehicle model, detailed equations with constants and nonlinear functions for the CH-47B tandem rotor helicopter are presented. Tables of trims and derivatives are included for airspeeds between -40 and 160 knots and rates of descent between ±1.016 m/sec (±200 ft/min). As a verification, the calculated and referenced values of comparable trims, derivatives, and linear model poles are shown to have acceptable agreement.

  17. Modeling and comparative study of various detection techniques for FMCW LIDAR using optisystem

    Science.gov (United States)

    Elghandour, Ahmed H.; Ren, Chen D.

    2013-09-01

    In this paper we investigate different detection techniques, namely direct detection, coherent heterodyne detection and coherent homodyne detection, for an FMCW LIDAR system using the Optisystem package. A model of the target, the propagation channel and the various detection techniques was developed in Optisystem, and a comparative study among the detection techniques was then carried out analytically and by simulation using the developed model. The performance of direct detection, heterodyne detection and homodyne detection for the FMCW LIDAR system was calculated and simulated in Optisystem, and the simulated output was checked against results from a MATLAB simulator. The results show that direct detection responds to the intensity of the received electromagnetic signal and has the advantage of low system complexity over the other detection architectures, at the expense of thermal noise being the dominant noise source and relatively poor sensitivity. In addition, much higher detection sensitivity can be achieved using coherent optical mixing, which is performed by heterodyne and homodyne detection.

  18. Investigation of the Stability of POD-Galerkin Techniques for Reduced Order Model Development

    Science.gov (United States)

    2016-01-09

    This report by Huang, C. examines techniques to mitigate the stability issues encountered in developing a reduced order model (ROM) for combustion response to specified excitations using the Euler equations. [Figure residue: CFD solution comparisons of Case A at x/L = 0.5 for the cases in Table 4, including the three cases with multiple frequencies in the forcing function.]

  19. Stakeholder approach, Stakeholders mental model: A visualization test with cognitive mapping technique

    Directory of Open Access Journals (Sweden)

    Garoui Nassreddine

    2012-04-01

    The idea of this paper is to determine the mental models of actors in the firm with respect to the stakeholder approach of corporate governance. Cognitive maps are used to visualize these schemes and to show the ways of thinking and conceptualization of the stakeholder approach. The paper takes a corporate governance perspective and discusses the stakeholder model using a cognitive mapping technique.

  20. Spatio–temporal rain attenuation model for application to fade mitigation techniques

    OpenAIRE

    2004-01-01

    We present a new stochastic-dynamic model useful for the planning and design of gigahertz satellite communications using fade mitigation techniques. It is a generalization of the Maseng–Bakken model and targets dual-site dual-frequency rain-attenuated satellite links. The outcome is a consistent and comprehensive model capable of yielding theoretical descriptions of: 1) long-term power spectral density of rain attenuation; 2) rain fade slope; 3) rain frequency scaling factor; 4) site diversity; a...

  1. Financial-Economic Time Series Modeling and Prediction Techniques – Review

    OpenAIRE

    2014-01-01

    Financial-economic time series are distinguished from other time series because they contain a portion of uncertainty. Because of this, statistical theory and methods play an important role in their analysis. Moreover, the external influence of various parameters on the values in the time series makes them non-linear, which in turn suggests the employment of more complex techniques for their modeling. To cope with this challenging problem many researchers and scientists have developed various models a...

  2. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling.

    Science.gov (United States)

    Dotto, Cintia B S; Mannina, Giorgio; Kleidorfer, Manfred; Vezzaro, Luca; Henrichs, Malte; McCarthy, David T; Freni, Gabriele; Rauch, Wolfgang; Deletic, Ana

    2012-05-15

    Urban drainage models are important tools used by both practitioners and scientists in the field of stormwater management. These models are often conceptual and usually require calibration using local datasets. The quantification of the uncertainty associated with the models is a must, although it is rarely practiced. The International Working Group on Data and Models, which works under the IWA/IAHR Joint Committee on Urban Drainage, has been working on the development of a framework for defining and assessing uncertainties in the field of urban drainage modelling. A part of that work is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multi-objective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, the number of simulations, and the measure of the uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters and model prediction intervals. For the ill-posed water quality model the differences between the results were much wider, and the paper provides the specific advantages and disadvantages of each method. In relation to computational efficiency (i.e. the number of iterations required to generate the probability
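
    As a hedged illustration of one of the compared techniques, a bare-bones GLUE loop looks roughly as follows; the Nash-Sutcliffe likelihood, the 0.7 behavioural threshold and the toy linear-reservoir simulator are assumptions for the sketch, not the common criteria fixed in the paper:

      import numpy as np

      rng = np.random.default_rng(0)

      def glue(simulator, observed, prior_sampler, n_samples=5000, threshold=0.7):
          """Keep 'behavioural' parameter sets whose Nash-Sutcliffe efficiency
          exceeds a threshold; their spread gives the prediction bounds."""
          kept, likes, preds = [], [], []
          var_obs = np.var(observed)
          for _ in range(n_samples):
              theta = prior_sampler()
              sim = simulator(theta)
              nse = 1.0 - np.mean((sim - observed) ** 2) / var_obs
              if nse > threshold:
                  kept.append(theta); likes.append(nse); preds.append(sim)
          preds = np.array(preds)        # assumes at least one behavioural set
          lo, hi = np.percentile(preds, [5, 95], axis=0)   # 90% prediction bounds
          return np.array(kept), np.array(likes), (lo, hi)

      # Toy linear-reservoir recession 'model' with one decay parameter:
      t = np.arange(20.0)
      obs = np.exp(-0.3 * t) + 0.02 * rng.standard_normal(20)
      pars, likes, (lo, hi) = glue(lambda th: np.exp(-th[0] * t), obs,
                                   lambda: rng.uniform(0.05, 1.0, 1))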

  3. Technique for Early Reliability Prediction of Software Components Using Behaviour Models

    Science.gov (United States)

    Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    2016-01-01

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748

  5. Combined rock-physical modelling and seismic inversion techniques for characterisation of stacked sandstone reservoir

    NARCIS (Netherlands)

    Justiniano, A.; Jaya, Y.; Diephuis, G.; Veenhof, R.; Pringle, T.

    2015-01-01

    The objective of the study is to characterise the Triassic massive stacked sandstone deposits of the Main Buntsandstein Subgroup at Block Q16 located in the West Netherlands Basin. The characterisation was carried out through combining rock-physics modelling and seismic inversion techniques. The app

  6. Modeling and teaching techniques for conceptual and logical relational database design.

    Science.gov (United States)

    Thompson, Cheryl Bagley; Sward, Katherine

    2005-10-01

    This paper proposes a series of techniques to be used in teaching database design. Common ERD notations are discussed. The authors developed an ERD notation, adapted from the Unified Modeling Language, which facilitates student learning of the database design process. The paper presents a specific step-by-step process for representing the ERD components as tables and for normalizing the resulting set of tables.

  7. The commercial use of segmentation and predictive modeling techniques for database marketing in the Netherlands

    NARCIS (Netherlands)

    Verhoef, PC; Spring, PN; Hoekstra, JC; Leeflang, PSH

    2003-01-01

    Although the application of segmentation and predictive modeling is an important topic in the database marketing (DBM) literature, no study has yet investigated the extent of adoption of these techniques. We present the results of a Dutch survey involving 228 database marketing companies. We find th

  9. Combined Rock-physical Modelling and Seismic Inversion Techniques for Characterisation of the Posidonia Shale Formation

    NARCIS (Netherlands)

    Justiniano, A.; Jaya, M.; Diephuis, G.

    2015-01-01

    The objective of this study is to characterise the Jurassic Posidonia Shale Formation at Block Q16 located in the West Netherlands Basin. The characterisation was carried out through combining rock-physics modelling and seismic inversion techniques. The results show that the Posidonia Shale Formatio

  10. Prediction of Monthly Summer Monsoon Rainfall Using Global Climate Models Through Artificial Neural Network Technique

    Science.gov (United States)

    Nair, Archana; Singh, Gurjeet; Mohanty, U. C.

    2017-08-01

    The monthly prediction of summer monsoon rainfall is very challenging because of its complex and chaotic nature. In this study, a non-linear technique known as Artificial Neural Network (ANN) has been employed on the outputs of Global Climate Models (GCMs) to bring out the vagaries inherent in monthly rainfall prediction. The GCMs that are considered in the study are from the International Research Institute (IRI) (2-tier CCM3v6) and the National Centre for Environmental Prediction (Coupled-CFSv2). The ANN technique is applied on different ensemble members of the individual GCMs to obtain monthly scale prediction over India as a whole and over its spatial grid points. In the present study, a double-cross-validation and simple randomization technique was used to avoid the over-fitting during training process of the ANN model. The performance of the ANN-predicted rainfall from GCMs is judged by analysing the absolute error, box plots, percentile and difference in linear error in probability space. Results suggest that there is significant improvement in prediction skill of these GCMs after applying the ANN technique. The performance analysis reveals that the ANN model is able to capture the year to year variations in monsoon months with fairly good accuracy in extreme years as well. ANN model is also able to simulate the correct signs of rainfall anomalies over different spatial points of the Indian domain.

  11. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall PMCMC provides a very compelling, computationally fast...
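
    A minimal sketch of the PMCMC idea for a local-level unobserved component model (pure NumPy rather than Ox; the particle count, starting values and step size are illustrative assumptions):

      import numpy as np

      rng = np.random.default_rng(0)

      def pf_loglik(y, sigma_state, sigma_obs, n_part=500):
          """Bootstrap particle filter log-likelihood of a local-level model."""
          particles = rng.standard_normal(n_part)
          loglik = 0.0
          for obs in y:
              particles = particles + sigma_state * rng.standard_normal(n_part)
              w = np.exp(-0.5 * ((obs - particles) / sigma_obs) ** 2) + 1e-300
              loglik += np.log(w.mean()) - np.log(np.sqrt(2 * np.pi) * sigma_obs)
              particles = particles[rng.choice(n_part, n_part, p=w / w.sum())]
          return loglik

      def pmcmc(y, n_iter=500, step=0.1):
          theta = np.array([0.5, 0.5])            # (sigma_state, sigma_obs)
          ll, chain = pf_loglik(y, *theta), []
          for _ in range(n_iter):
              prop = theta + step * rng.standard_normal(2)
              if np.all(prop > 0):
                  ll_prop = pf_loglik(y, *prop)   # noisy but unbiased likelihood
                  if np.log(rng.random()) < ll_prop - ll:   # MH accept/reject
                      theta, ll = prop, ll_prop
              chain.append(theta.copy())
          return np.array(chain)

      y = np.cumsum(0.3 * rng.standard_normal(100)) + 0.5 * rng.standard_normal(100)
      chain = pmcmc(y)   # posterior draws for the two standard deviations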

  13. Surgical technique: establishing a pre-clinical large animal model to test aortic valve leaflet substitute

    Science.gov (United States)

    Knirsch, Walter; Cesarovic, Niko; Krüger, Bernard; Schmiady, Martin; Frauenfelder, Thomas; Frese, Laura; Dave, Hitendu; Hoerstrup, Simon Philipp; Hübler, Michael

    2016-01-01

    To overcome current limitations of valve and tissue substitutes, the technology of tissue engineering (TE) continues to offer new perspectives in congenital cardiac surgery. We report our experiences and results implanting a decellularized TE patch in nine sheep in orthotopic position as an aortic valve leaflet substitute. Establishing the animal model, feasibility, cardiopulmonary bypass issues and operative technique are highlighted. PMID:28149571

  14. Numerical modelling of radon-222 entry into houses: An outline of techniques and results

    DEFF Research Database (Denmark)

    Andersen, C.E.

    2001-01-01

    Numerical modelling is a powerful tool for studies of soil gas and radon-222 entry into houses. It is the purpose of this paper to review some main techniques and results. In the past, modelling has focused on Darcy flow of soil gas (driven by indoor–outdoor pressure differences) and combined diffusive and advective transport of radon. Models of different complexity have been used. The simpler ones are finite-difference models with one or two spatial dimensions. The more complex models allow for full three-dimensionality and time dependency. Advanced features include: soil heterogeneity, anisotropy, fractures, moisture, non-uniform soil temperature, non-Darcy flow of gas, and flow caused by changes in the atmospheric pressure. Numerical models can be used to estimate the importance of specific factors for radon entry. Models are also helpful when results obtained in special laboratory or test structure
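
    A hedged illustration of the simpler end of this model hierarchy: a steady one-dimensional finite-difference model of combined diffusive and advective radon transport in a soil column (all material constants below are order-of-magnitude assumptions, not values from the paper):

      import numpy as np

      # Steady 1D diffusion-advection of radon in soil:
      #   D c'' - v c' + G - lam c = 0, discretised by central differences.
      n, depth = 101, 2.0                 # grid points, soil depth [m]
      dx = depth / (n - 1)
      D, v = 2e-6, 1e-6                   # diffusivity [m2/s], Darcy velocity [m/s]
      lam, G = 2.1e-6, 1e-2               # Rn-222 decay [1/s], generation [Bq/m3/s]

      A = np.zeros((n, n)); b = np.full(n, -G)
      for i in range(1, n - 1):
          A[i, i - 1] = D / dx**2 + v / (2 * dx)
          A[i, i]     = -2 * D / dx**2 - lam
          A[i, i + 1] = D / dx**2 - v / (2 * dx)
      A[0, 0] = 1.0; b[0] = 0.0           # open surface: c = 0
      A[-1, -1] = 1.0; b[-1] = G / lam    # deep soil: secular equilibrium
      c = np.linalg.solve(A, b)
      print(c.max())                      # peak soil-gas radon concentration [Bq/m3]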

  15. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    Science.gov (United States)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2016-06-01

    Additive Manufacturing (AM) is a rapid manufacturing process in which input data can be provided from various sources, such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and 3D scanner data. From CT/MRI data, Biomedical Additive Manufacturing (Bio-AM) models can be manufactured. The Bio-AM model gives a better lead in the preplanning of oral and maxillofacial surgery. However, manufacturing an accurate Bio-AM model is one of the unsolved problems. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible and determines a correction factor for Bio-AM models produced with the Fused Deposition Modelling (FDM) technique. In the present work, dry mandible CT images are acquired by a CT scanner and converted into a 3D CAD model in the form of an STL model. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is considered the dimensional error, and the ratio of STL to Bio-AM model dimensions is considered the correction factor. This correction factor helps to fabricate AM models with accurate dimensions of the patient anatomy. Such true-dimensional Bio-AM models increase the safety and accuracy of preplanning in oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM AM machine is 1.003 and the dimensional error is limited to 0.3 %.
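
    A worked example of the two quantities defined above (the 100 mm feature size is an assumed illustration, not a measurement from the study):

      # Assumed example: a 100.00 mm STL feature prints at 99.70 mm on the FDM machine.
      stl_dim, bioam_dim = 100.00, 99.70
      correction = stl_dim / bioam_dim                        # ~1.003
      error_pct = 100.0 * abs(stl_dim - bioam_dim) / stl_dim  # ~0.3 %
      print(correction, error_pct)
      # Scaling the CAD geometry by `correction` before printing compensates the error.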

  16. Top-level modeling of an als system utilizing object-oriented techniques

    Science.gov (United States)

    Rodriguez, L. F.; Kang, S.; Ting, K. C.

    The possible configuration of an Advanced Life Support (ALS) System capable of supporting human life for long-term space missions continues to evolve as researchers investigate potential technologies and configurations. To facilitate the decision process, the development of acceptable, flexible, and dynamic mathematical computer modeling tools capable of system-level analysis is desirable. Object-oriented techniques have been adopted to develop a dynamic top-level model of an ALS system. This approach has several advantages; among these, object-oriented abstractions of systems are inherently modular in architecture. Thus, models can initially be somewhat simplistic, while allowing for adjustments and improvements. In addition, by coding the model in Java, the model can be implemented via the World Wide Web, greatly encouraging the utilization of the model. Systems analysis is further enabled with the utilization of a readily available backend database containing information supporting the model. The subsystem models of the ALS system model include Crew, Biomass Production, Waste Processing and Resource Recovery, Food Processing and Nutrition, and the Interconnecting Space. Each subsystem model and an overall model have been developed. Presented here is the procedure utilized to develop the modeling tool, the vision of the modeling tool, and the current focus for each of the subsystem models.
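
    A hedged sketch of the object-oriented decomposition described above, written in Python rather than Java for brevity; the subsystem names follow the paper, but the interfaces and rate constants are illustrative assumptions:

      class Subsystem:
          def step(self, dt):
              raise NotImplementedError

      class Crew(Subsystem):
          def __init__(self, size):
              self.size = size
          def step(self, dt):
              self.o2_demand = 0.84 * self.size * dt   # assumed kg O2 per crew-day

      class BiomassProduction(Subsystem):
          def __init__(self, area):
              self.area = area
          def step(self, dt):
              self.o2_output = 0.02 * self.area * dt   # assumed production rate

      class ALSSystem:
          """Top-level model: subsystems can start simplistic and be refined
          independently, which is the modularity argued for above."""
          def __init__(self, subsystems):
              self.subsystems = subsystems
          def step(self, dt):
              for s in self.subsystems:
                  s.step(dt)

      als = ALSSystem([Crew(size=4), BiomassProduction(area=40.0)])
      als.step(dt=1.0)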

  17. Nonlinear modelling of polymer electrolyte membrane fuel cell stack using nonlinear cancellation technique

    Energy Technology Data Exchange (ETDEWEB)

    Barus, R. P. P., E-mail: rismawan.ppb@gmail.com [Engineering Physics, Faculty of Industrial Technology, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung and Centre for Material and Technical Product, Jalan Sangkuriang No. 14 Bandung (Indonesia); Tjokronegoro, H. A.; Leksono, E. [Engineering Physics, Faculty of Industrial Technology, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung (Indonesia); Ismunandar [Chemistry Study, Faculty of Mathematics and Science, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung (Indonesia)

    2014-09-25

    Fuel cells are promising new energy conversion devices that are friendly to the environment. A set of control systems is required in order to operate a fuel cell based power plant system optimally. For the purpose of control system design, an accurate fuel cell stack model describing the dynamics of the real system is needed. Currently, linear models are widely used for fuel cell stack control purposes, but they are limited to a narrow operation range, while nonlinear models lead to nonlinear control implementations, which are more complex and computationally demanding. In this research, a nonlinear cancellation technique is used to transform a nonlinear model into a linear form while maintaining the nonlinear characteristics. The transformation is done by replacing the input of the original model with a certain virtual input that has a nonlinear relationship with the original input. The equality of the two models is then tested by running a series of simulations. Variations of the H2, O2 and H2O inputs as well as the disturbance input I (current load) are studied by simulation. The error between the proposed model and the original nonlinear model is less than 1%. Thus we can conclude that the nonlinear cancellation technique can be used to represent a fuel cell nonlinear model in a simple linear form while maintaining the nonlinear characteristics and therefore retaining the wide operation range.
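
    A toy illustration of the nonlinear cancellation idea on a model that is nonlinear in its input (the square-root plant below is an assumed stand-in, not the fuel cell stack model of the paper):

      import numpy as np

      # Toy plant, nonlinear in the physical input u: dT/dt = -k*T + a*sqrt(u)
      k, a, dt = 0.5, 2.0, 0.01

      def plant_step(T, u):
          return T + dt * (-k * T + a * np.sqrt(u))

      # Virtual input v = sqrt(u): the model dT/dt = -k*T + a*v is linear in v.
      def linear_model_step(T, v):
          return T + dt * (-k * T + a * v)

      def virtual_to_physical(v):
          return v ** 2     # invert the input transformation

      rng = np.random.default_rng(0)
      T1 = T2 = 300.0
      for _ in range(1000):   # equality test over a random input sequence
          v = rng.uniform(0.0, 4.0)
          T1 = plant_step(T1, virtual_to_physical(v))
          T2 = linear_model_step(T2, v)
      print(abs(T1 - T2))     # ~0: the linear-in-v model matches the plant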

  18. Meso-damage modelling of polymer based particulate composites using finite element technique

    Science.gov (United States)

    Tsui, Chi Pong

    Developing a new particulate polymer composite (PPC) with desired mechanical properties is usually accomplished by an experimental trial-and-error approach. A new technique, which predicts the damage mechanism and its effects on the mechanical properties of PPC, has been proposed. This meso-mechanical modelling technique, which offers a means to bridge the micro-damage mechanism and the macro-structural behaviour, has been implemented in a finite element code. A three-dimensional finite element meso-cell model has been designed and constructed to simulate the damage mechanism of PPC. The meso-cell model consists of a micro-particle, an interface, and a matrix. The initiation of the particle/polymer matrix debonding process has been predicted on the basis of a tensile criterion. By considering the meso-cell model as a representative volume element (RVE), the effects of damage on the macro-structural constitutive behaviour of PPC have been determined. An experimental investigation has been made on glass bead (GB) reinforced polyphenylene oxide (PPO) for verification of the meso-cell model and the meso-mechanical finite element technique. The predicted constitutive relation has been found to be in good agreement with the experimental results. The results of the in-situ microscopic test also verify the correctness of the meso-cell model. The application of the meso-mechanical finite element modelling technique has been extended to a macro-structural analysis to simulate the response of an engineering structure made of PPC under a static load. In the simulation, a damage variable has been defined in terms of the computational results of the cell model at the meso-scale. Hence, the damage-coupled constitutive relation of the GB/PPO composite could be derived. A user-defined subroutine VUMAT in FORTRAN language describing the damage-coupled constitutive behaviour has then been incorporated into the ABAQUS finite element code. On a macro-scale, the ABAQUS finite element code

  19. Modeling and Control of a Photovoltaic Energy System Using the State-Space Averaging Technique

    Directory of Open Access Journals (Sweden)

    Mohd S. Jamri

    2010-01-01

    Problem statement: This study presents the modeling and control of a stand-alone Photovoltaic (PV) system using the state-space averaging technique. Approach: The PV module was modeled based on the parameters obtained from a commercial PV data sheet, while the state-space method was used to model the power converter. A DC-DC boost converter was chosen to step up the input DC voltage of the PV module, while a DC-AC single-phase full-bridge square-wave inverter was chosen to convert the DC output of the boost converter into AC. The integrated state-space model was simulated under constant and variable solar irradiance and temperature. In addition, a maximum power point tracking method was included in the model to ensure that optimum use of the PV module is made. A circuitry simulation was performed under similar test conditions in order to validate the state-space model. Results: The results showed that the state-space averaging model yields the same performance as the circuitry simulation in terms of the voltage, current and power generated. Conclusion/Recommendations: The state-space averaging technique is simple to implement in the modeling and control of either simple or complex systems, and yields performance similar to the results of the circuitry method.
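
    A minimal sketch of the state-space averaging step for the boost converter stage (ideal components; the circuit values and the 17 V PV input are illustrative assumptions, not the study's parameters):

      import numpy as np

      # Ideal boost converter, state x = [iL, vC], input vin (PV module voltage).
      L, C, R = 1e-3, 470e-6, 10.0
      A_on  = np.array([[0.0, 0.0],
                        [0.0, -1.0 / (R * C)]])      # switch closed
      A_off = np.array([[0.0, -1.0 / L],
                        [1.0 / C, -1.0 / (R * C)]])  # switch open
      B = np.array([[1.0 / L], [0.0]])

      def averaged(d):
          # State-space averaging: weight the two topologies by the duty cycle d.
          return d * A_on + (1.0 - d) * A_off

      A_avg = averaged(d=0.6)
      x_dc = -np.linalg.solve(A_avg, B * 17.0)  # DC operating point: 0 = A x + B vin
      print(x_dc)  # inductor current and boosted capacitor voltage (~42.5 V)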

  20. Studies on Pumice Lightweight Aggregate Concrete with Quarry Dust Using Mathematical Modeling Aid of ACO Techniques

    Directory of Open Access Journals (Sweden)

    J. Rex

    2016-01-01

    Lightweight aggregate is aggregate that weighs less than the usual rock aggregate, and quarry dust is a rock particle used in the concrete for the experimentation. The main intention of the proposed technique is to frame a mathematical model with the aid of optimization techniques. The mathematical modeling is done to minimize the cost and time that would be consumed in extending the real-time experiments. The proposed mathematical model is utilized to predict four output parameters: compressive strength (MPa), split tensile strength (MPa), flexural strength (MPa), and deflection (mm). Here, the modeling is carried out with three different optimization techniques, namely genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO), with 80% of the experimental data utilized for training and the remaining 20% for validation. Finally, while testing, the error value is minimized, and the performance obtained with ACO for the parameters compressive strength, split tensile strength, flexural strength, and deflection is 91%, 98%, 87%, and 94% of predicted values, respectively, in the mathematical modeling.

  1. The feasibility of computational modelling technique to detect the bladder cancer.

    Science.gov (United States)

    Keshtkar, Ahmad; Mesbahi, Asghar; Rasta, S H; Keshtkar, Asghar

    2010-01-01

    A numerical technique, finite element analysis (FEA), was used to model the electrical properties (the bioimpedance) of bladder tissue in order to predict bladder cancer. The model results showed that normal bladder tissue has significantly higher impedance than malignant tissue, which is in contrast to the impedance measurements, i.e., the experimental results. This difference can be explained by the effects of inflammation and oedema on the urothelium and by the property of the bladder as a distensible organ. Furthermore, different current distributions inside the bladder tissue (in the histological layers) in normal and malignant cases, and finally different applied pressures over the bladder tissue, can cause different impedances for the bladder tissue. It is believed that further studies have to be carried out to characterise human bladder tissue using electrical impedance measurement and modelling techniques.

  2. Finite-element-model updating using computational intelligence techniques applications to structural dynamics

    CERN Document Server

    Marwala, Tshilidzi

    2010-01-01

    Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction, a number of computational intelligence techniques to facilitate the updating process are proposed; they include:
    • multi-layer perceptron neural networks for real-time FEM updating;
    • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models;
    • simulated annealing to put the methodologies into a sound statistical basis; and
    • response surface methods and expectation m...

  3. A New Profile Learning Model for Recommendation System based on Machine Learning Technique

    Directory of Open Access Journals (Sweden)

    Shereen H. Ali

    2016-03-01

    Recommender systems (RSs) have been used to successfully address the information overload problem by providing personalized and targeted recommendations to end users. RSs are software tools and techniques that provide suggestions for items likely to be of use to a user; hence, they typically apply techniques and methodologies from data mining. The main contribution of this paper is to introduce a new user profile learning model to improve the recommendation accuracy of vertical recommendation systems. The proposed profile learning model employs the vertical classifier that has been used in the multi-classification module of the Intelligent Adaptive Vertical Recommendation (IAVR) system to discover the user’s area of interest, and then builds the user’s profile accordingly. Experimental results have proven the effectiveness of the proposed profile learning model, which will accordingly improve the recommendation accuracy.

  4. A Multi-Model Reduction Technique for Optimization of Coupled Structural-Acoustic Problems

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Brunskog, Jonas;

    2016-01-01

    Finite Element models of structural-acoustic coupled systems can become very large for complex structures with multiple connected parts. Optimization of the performance of the structure based on harmonic analysis of the system requires solving the coupled problem iteratively and for several frequencies, which can become highly time consuming. Several modal-based model reduction techniques for structure-acoustic interaction problems have been developed in the literature. The unsymmetric nature of the pressure-displacement formulation of the problem poses the question of how the reduction modal base should be formed, given that the modal vectors are not orthogonal due to the asymmetry of the system matrices. In this paper, a multi-model reduction (MMR) technique for structure-acoustic interaction problems is developed. In MMR, the reduction base is formed with the modal vectors of a family

  5. Ex Vivo Aneurysm models mimicking real cases for the preoperative training of the clipping technique

    Directory of Open Access Journals (Sweden)

    Martin D.

    2016-06-01

    Training in a specialty like cerebrovascular neurosurgery becomes more and more difficult as access to training is limited by the increasing number of neurosurgical departments and the lack of expert centers for specific pathologies. This is why an increased investment in experimental training is encountered in many centers worldwide. The best models for training the clipping technique are ex vivo aneurysm models on cadaveric heads, animal models or augmented reality models. We present a few ex vivo models of aneurysms mimicking ACoA, MCA bifurcation and basilar tip aneurysms using a pulsed continuous perfusion system. Clipping training on aneurysm models is an invaluable tool both for residents and for specialists with a special interest in cerebrovascular surgery.

  6. Velocity Modeling and Inversion Techniques for Locating Microseismic Events in Unconventional Reservoirs

    Institute of Scientific and Technical Information of China (English)

    Jianzhong Zhang; Han Liu; Zhihui Zou; Zhonglai Huang

    2015-01-01

    A velocity model is an important factor influencing microseismic event locations. We review the velocity modeling and inversion techniques for locating microseismic events in exploration for unconventional oil and gas reservoirs. We first describe the geological and geophysical characteristics of reservoir formations related to hydraulic fracturing in terms of heterogeneity, anisotropy, and variability; then discuss the influences of velocity estimation, the anisotropy model, and their time-lapse changes on the accuracy of microseismic event locations; and then survey some typical methods for building velocity models used in locating events. We conclude that the three tangled physical attributes of reservoirs make microseismic monitoring very challenging. Uncertainties in the velocity model, and ignoring its anisotropies and its variations during hydraulic fracturing, can cause systematic mislocations of microseismic events which are unacceptable in microseismic monitoring. We therefore propose some potential ways of building accurate velocity models.

  7. Constructing an Urban Population Model for Medical Insurance Scheme Using Microsimulation Techniques

    Directory of Open Access Journals (Sweden)

    Linping Xiong

    2012-01-01

    China launched a pilot project of medical insurance reform in 79 cities in 2007 to cover urban nonworking residents. In this paper, an urban population model was created for China’s medical insurance scheme using microsimulation techniques. The model makes clear to policy makers the population distributions of the different groups of people who are potential urban entrants to the medical insurance scheme. The income trends of individuals and families were also obtained. These factors are essential for making the challenging policy decisions involved in balancing the long-term financial sustainability of the medical insurance scheme.

  9. Evaluating machine learning and statistical prediction techniques for landslide susceptibility modeling

    Science.gov (United States)

    Goetz, J. N.; Brenning, A.; Petschko, H.; Leopold, P.

    2015-08-01

    Statistical and, more recently, machine learning prediction methods have been gaining popularity in the field of landslide susceptibility modeling. In particular, these data-driven approaches show promise when tackling the challenge of mapping landslide-prone areas for large regions, which may not have sufficient geotechnical data for physically-based methods. Currently, there is no best method for empirical susceptibility modeling. Therefore, this study presents a comparison of traditional statistical and novel machine learning models applied to regional-scale landslide susceptibility modeling. These methods were evaluated by spatial k-fold cross-validation estimation of the predictive performance, by assessment of variable importance for gaining insights into model behavior, and by the appearance of the prediction (i.e. susceptibility) map. The modeling techniques applied were logistic regression (GLM), generalized additive models (GAM), weights of evidence (WOE), the support vector machine (SVM), random forest classification (RF), and bootstrap aggregated classification trees (bundling) with penalized discriminant analysis (BPLDA). These modeling methods were tested for three areas in the province of Lower Austria, Austria, which are characterized by different geological and morphological settings. Random forest and bundling classification techniques had the overall best predictive performances. However, the performances of all modeling techniques were for the most part not significantly different from each other; depending on the area of interest, the overall median estimated area under the receiver operating characteristic curve (AUROC) differences ranged from 2.9 to 8.9 percentage points, and the overall median estimated true positive rate (TPR) measured at a 10% false positive rate (FPR) differed by 11 to 15 percentage points. The relative importance of each predictor was generally different between the modeling methods. However, slope angle, surface roughness and plan
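
    As a hedged sketch of the evaluation protocol (not the study's data or exact setup), a spatially cross-validated comparison of two of the classifiers can be arranged as follows, with synthetic predictors and crude spatial blocks standing in for the Lower Austria data:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import GroupKFold

      # Synthetic stand-ins: X = terrain predictors, y = landslide presence,
      # groups = coarse spatial blocks so that folds are spatially disjoint.
      rng = np.random.default_rng(0)
      X = rng.standard_normal((1000, 5))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(1000) > 0).astype(int)
      groups = 2 * (X[:, 2] > 0) + (X[:, 3] > 0)   # four crude spatial blocks

      models = {"GLM": LogisticRegression(max_iter=1000),
                "RF": RandomForestClassifier(n_estimators=200, random_state=0)}
      for name, model in models.items():
          aucs = []
          for tr, te in GroupKFold(n_splits=4).split(X, y, groups):
              model.fit(X[tr], y[tr])
              aucs.append(roc_auc_score(y[te], model.predict_proba(X[te])[:, 1]))
          print(name, "median AUROC:", np.median(aucs))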

  10. Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Interaction

    Science.gov (United States)

    DeCarvalho, N. V.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Ratcliffe, J. G.; Tay, T. E.

    2013-01-01

    A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.

  11. Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation

    Science.gov (United States)

    Lee, George

    1992-01-01

    A survey of systems capable of model deformation measurements was conducted. The survey included stereo-cameras, scanners, and digitizers. Moire, holographic, and heterodyne interferometry techniques were also examined. Stereo-cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanners and digitizers can meet the model deformation requirements. Commercial stereo-cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.

  12. Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Migration

    Science.gov (United States)

    DeCarvalho, Nelson V.; Chen, B. Y.; Pinho, Silvestre T.; Baiz, P. M.; Ratcliffe, James G.; Tay, T. E.

    2013-01-01

    A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.

  13. Updating prediction models by dynamical relaxation - An examination of the technique. [for numerical weather forecasting

    Science.gov (United States)

    Davies, H. C.; Turner, R. E.

    1977-01-01

    A dynamical relaxation technique for updating prediction models is analyzed with the help of the linear and nonlinear barotropic primitive equations. It is assumed that a complete four-dimensional time history of some prescribed subset of the meteorological variables is known. The rate of adaptation of the flow variables toward the true state is determined for a linearized f-model, and for mid-latitude and equatorial beta-plane models. The results of the analysis are corroborated by numerical experiments with the nonlinear shallow-water equations.
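
    A minimal sketch of the dynamical relaxation (nudging) idea on a toy oscillator: the model is integrated with an extra term that continuously pulls the state towards the prescribed subset of variables (the gain lam, the observation operator H and the toy dynamics are illustrative assumptions, not the paper's primitive-equation models):

      import numpy as np

      def nudged_integration(f, x0, t, obs, H, lam):
          """Integrate dx/dt = f(x) + lam * H^T (obs - H x): the relaxation term
          pulls the model towards the known subset of variables."""
          dt = t[1] - t[0]
          x = np.empty((len(t), len(x0)))
          x[0] = x0
          for k in range(len(t) - 1):
              relax = lam * H.T @ (obs[k] - H @ x[k])
              x[k + 1] = x[k] + dt * (f(x[k]) + relax)
          return x

      # Toy linear oscillator; only the first component is "observed".
      f = lambda x: np.array([x[1], -x[0]])
      t = np.linspace(0.0, 20.0, 2001)
      truth = np.c_[np.sin(t), np.cos(t)]
      H = np.array([[1.0, 0.0]])
      x = nudged_integration(f, np.array([2.0, -1.0]), t, truth @ H.T, H, lam=1.0)
      print(np.abs(x[-1] - truth[-1]))  # the state has adapted towards the truth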

  14. Modeling and simulation of atmosphere interference signal based on FTIR spectroscopy technique

    Science.gov (United States)

    Zhang, Yugui; Li, Qiang; Yu, Zhengyang; Liu, Zhengmin

    2016-09-01

    The Fourier Transform Infrared (FTIR) spectroscopy technique, featuring a large frequency range and high spectral resolution, is becoming a research focus in the spectrum analysis area and is spreading to atmospheric detection applications in the aerospace field. In this paper, based on the FTIR spectroscopy technique, the principle of atmosphere interference signal generation is deduced in theory, and its mathematical modeling and simulation are carried out. Finally, the intrinsic characteristics of the interference signal in the time domain and frequency domain, which give a theoretical foundation to the performance parameter design of the electrical signal processing, are analyzed.

  15. Accuracy Enhanced Stability and Structure Preserving Model Reduction Technique for Dynamical Systems with Second Order Structure

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    A method for model reduction of dynamical systems with the second order structure is proposed in this paper. The proposed technique preserves the second order structure of the system, and also preserves the stability of the original systems. The method uses the controllability and observability gramians within the time interval to build the appropriate Petrov-Galerkin projection for dynamical systems within the time interval of interest. The bound on approximation error is also derived. The numerical results are compared with the counterparts from other techniques. The results confirm

  16. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Red Shift Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at redshifts of 0.5 and above. Volume Phase Holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are more capable of accurately predicting measured performance. Finally, we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.

  17. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal

    2016-01-01

    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  18. Ensembles of signal transduction models using Pareto Optimal Ensemble Techniques (POETs).

    Science.gov (United States)

    Song, Sang Ok; Chakrabarti, Anirikh; Varner, Jeffrey D

    2010-07-01

    Mathematical modeling of complex gene expression programs is an emerging tool for understanding disease mechanisms. However, identification of large models sometimes requires training using qualitative, conflicting or even contradictory data sets. One strategy to address this challenge is to estimate experimentally constrained model ensembles using multiobjective optimization. In this study, we used Pareto Optimal Ensemble Techniques (POETs) to identify a family of proof-of-concept signal transduction models. POETs integrate Simulated Annealing (SA) with Pareto optimality to identify models near the optimal tradeoff surface between competing training objectives. We modeled a prototypical signaling network using mass-action kinetics within an ordinary differential equation (ODE) framework (64 ODEs in total). The true model was used to generate synthetic immunoblots from which the POET algorithm identified the 117 unknown model parameters. POET generated an ensemble of signaling models, which collectively exhibited population-like behavior. For example, scaled gene expression levels were approximately normally distributed over the ensemble following the addition of extracellular ligand. Also, the ensemble recovered robust and fragile features of the true model, despite significant parameter uncertainty. Taken together, these results suggest that experimentally constrained model ensembles could capture qualitatively important network features without exact parameter information.
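
    A bare-bones sketch of the POETs-style acceptance logic, combining simulated annealing with Pareto rank against an archive; the two quadratic objectives stand in for the competing training objectives, and the move size and annealing schedule are assumptions, not the paper's settings:

      import numpy as np

      rng = np.random.default_rng(0)

      def dominates(a, b):
          # Pareto dominance for minimisation over multiple training objectives.
          return np.all(a <= b) and np.any(a < b)

      def poets(objectives, theta0, n_iter=5000, step=0.05, T0=1.0):
          """SA moves accepted by Pareto rank against an archive, so the search
          collects an ensemble near the tradeoff surface, not a single best fit."""
          theta = theta0.copy()
          archive = [(theta.copy(), objectives(theta))]
          for i in range(n_iter):
              T = T0 * (1.0 - i / n_iter) + 1e-3
              cand = theta + step * rng.standard_normal(len(theta))
              err = objectives(cand)
              rank = sum(dominates(e, err) for _, e in archive)
              if rank == 0 or rng.random() < np.exp(-rank / T):
                  theta = cand
              if rank == 0:   # archive keeps only non-dominated members
                  archive = [(p, e) for p, e in archive if not dominates(err, e)]
                  archive.append((cand.copy(), err))
          return archive

      # Two conflicting objectives pulling a two-parameter model apart:
      obj = lambda th: np.array([np.sum((th - 1.0) ** 2), np.sum((th + 1.0) ** 2)])
      ensemble = poets(obj, np.zeros(2))
      print(len(ensemble), "ensemble members")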

  19. Hydrological time series modeling: A comparison between adaptive neuro-fuzzy, neural network and autoregressive techniques

    Science.gov (United States)

    Lohani, A. K.; Kumar, Rakesh; Singh, R. D.

    2012-06-01

    Time series modeling is necessary for the planning and management of reservoirs. More recently, soft computing techniques have been used in hydrological modeling and forecasting. In this study, the potential of artificial neural networks and neuro-fuzzy systems in monthly reservoir inflow forecasting is examined by developing and comparing monthly reservoir inflow prediction models based on autoregressive (AR), artificial neural network (ANN) and adaptive neural-based fuzzy inference system (ANFIS) approaches. To account for the effect of monthly periodicity in the flow data, cyclic terms are also included in the ANN and ANFIS models. Working with time series flow data of the Sutlej River at Bhakra Dam, India, several ANN and adaptive neuro-fuzzy models are trained with different input vectors. To evaluate the performance of the selected ANN and ANFIS models, a comparison is made with the autoregressive (AR) models. The ANFIS model trained with an input data vector including previous inflows and cyclic terms of monthly periodicity shows a significant improvement in forecast accuracy in comparison with the ANFIS models trained with input vectors considering only previous inflows. In all cases ANFIS gives more accurate forecasts than the AR and ANN models. The proposed ANFIS model coupled with the cyclic terms is shown to provide a better representation of monthly inflow forecasting for the planning and operation of the reservoir.

  20. Numerical Time-Domain Modeling of Lamb Wave Propagation Using Elastodynamic Finite Integration Technique

    Directory of Open Access Journals (Sweden)

    Hussein Rappel

    2014-01-01

    This paper presents a numerical time-domain model of Lamb wave propagation based on the elastodynamic finite integration technique (EFIT), as well as its validation with analytical results. The Lamb wave method is a long range inspection technique which is considered to have a unique future in the field of structural health monitoring. One of the main problems facing the Lamb wave method is how to choose the most appropriate frequency to generate waves for adequate transmission, capable of properly propagating in the material, interacting with defects/damage, and being received in good condition. Modern simulation tools based on numerical methods such as the finite integration technique (FIT), the finite element method (FEM), and the boundary element method (BEM) may be used for modeling. In this paper, two sets of simulations are performed. In the first set, group velocities of Lamb waves in a steel plate are obtained numerically; the results are then compared with analytical results to validate the simulation. In the second set, EFIT is employed to study fundamental symmetric mode interaction with a surface-breaking defect.

  1. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    Science.gov (United States)

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
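
    A hedged sketch of the FIML principle itself (pure NumPy, not MSEM software): each case contributes the Gaussian log-likelihood of only its observed variables, so incomplete cases are neither dropped nor imputed:

      import numpy as np

      def fiml_loglik(data, mu, sigma):
          """Each case contributes the normal log-density of its observed
          variables only (missing entries are NaN): no deletion, no imputation."""
          ll = 0.0
          for row in data:
              obs = ~np.isnan(row)
              if not obs.any():
                  continue
              d = row[obs] - mu[obs]
              s = sigma[np.ix_(obs, obs)]
              ll += -0.5 * (obs.sum() * np.log(2 * np.pi)
                            + np.linalg.slogdet(s)[1]
                            + d @ np.linalg.solve(s, d))
          return ll

      # Maximising fiml_loglik over (mu, sigma) with a generic optimiser yields
      # FIML estimates; SEM software does the same with a structured sigma(theta).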

  2. Forecasting Macroeconomic Variables using Neural Network Models and Three Automated Model Selection Techniques

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this paper we consider the forecasting performance of a well-defined class of flexible models, the so-called single hidden-layer feedforward neural network models. A major aim of our study is to find out whether they, due to their flexibility, are as useful tools in economic forecasting as some...... previous studies have indicated. When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. In fact, their parameters are not even globally...... on the linearisation idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting should be carried out recursively or directly. Comparisons of these two methods exist for linear models and here these comparisons are extended to neural networks. Finally, a nonlinear model...

  3. Limitations in paleomagnetic data and modelling techniques and their impact on Holocene geomagnetic field models

    DEFF Research Database (Denmark)

    Panovska, S.; Korte, M.; Finlay, Chris;

    2015-01-01

    Characterization of geomagnetic field behaviour on timescales of centuries to millennia is necessary to understand the mechanisms that sustain the geodynamo and drive its evolution. As Holocene paleomagnetic and archeomagnetic data have become more abundant, strategies for regularized inversion...... of modern field data have been adapted to produce numerous time-varying global field models. We evaluate the effectiveness of several approaches to inversion and data handling, by assessing both global and regional properties of the resulting models. Global Holocene field models cannot resolve Southern...... hemisphere regional field variations without the use of sediments. A standard data set is used to construct multiple models using two different strategies for relative paleointensity calibration and declination orientation and a selection of starting models in the inversion procedure. When data uncertainties......

  4. Chronology of DIC technique based on the fundamental mathematical modeling and dehydration impact.

    Science.gov (United States)

    Alias, Norma; Saipol, Hafizah Farhah Saipan; Ghani, Asnida Che Abd

    2014-12-01

    A chronology of mathematical models for the heat and mass transfer equations is proposed for the prediction of moisture and temperature behavior during drying using the DIC (Détente Instantanée Contrôlée), or instant controlled pressure drop, technique. The DIC technique has potential as a widely used dehydration method for high-value foods, maintaining nutrition and the best possible quality for food storage. The model is governed by a regression model, followed by 2D Fick's and Fourier's parabolic equations and a 2D elliptic-parabolic equation in a rectangular slice. The models neglect shrinkage and radiation effects. Simulations of the heat and mass transfer equations of parabolic and elliptic-parabolic type using numerical methods based on the finite difference method (FDM) are illustrated. Intel® Core™ 2 Duo processors with the Linux operating system and the C programming language were used as the computational platform for the simulation. Qualitative and quantitative differences between the DIC technique and conventional drying methods are shown as a comparison.
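    For the Fourier (heat) part of such a model, an explicit finite-difference update on a rectangular slice takes the form below; the Fick (moisture) equation is handled identically with moisture diffusivity in place of thermal diffusivity. This is a generic sketch with assumed coefficients, not the authors' code:

```python
# Explicit FDM step for the 2-D heat equation dT/dt = alpha * Laplacian(T).
import numpy as np

alpha = 1.4e-7                 # thermal diffusivity [m^2/s] (assumed)
nx, ny, dx = 50, 50, 1e-3
dt = 0.2 * dx**2 / alpha       # satisfies the 2-D stability limit
                               # dt <= dx^2 / (4 * alpha)

T = np.full((nx, ny), 20.0)    # initial temperature field [C]
T[0, :] = 100.0                # heated boundary (Dirichlet)

for _ in range(500):
    lap = (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
           - 4.0 * T[1:-1, 1:-1]) / dx**2
    T[1:-1, 1:-1] += alpha * dt * lap   # forward-Euler time step
    T[0, :] = 100.0                     # re-impose the boundary condition

print("centre temperature after 500 steps:", T[nx // 2, ny // 2])
```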

  5. Modelling, analysis and validation of microwave techniques for the characterisation of metallic nanoparticles

    Science.gov (United States)

    Sulaimalebbe, Aslam

    In the last decade, the study of nanoparticle (NP) systems has become a large and interesting research area due to their novel properties and functionalities, which differ from those of the bulk materials, and also their potential applications in different fields. It is vital to understand the behaviour and properties of nano-materials when aiming to implement nanotechnology, control their behaviour and design new material systems with superior performance. Physical characterisation of NPs falls into two main categories, property and structure analysis, where the properties of the NPs cannot be studied without knowledge of their size and structure. The direct measurement of the electrical properties of metal NPs presents a key challenge and necessitates the use of innovative experimental techniques. There have been numerous reports of two/four-point resistance measurements of NP films and also of the electrical conductivity of NP films using the interdigitated microarray (IDA) electrode. However, microwave techniques such as the open-ended coaxial probe (OCP) and the microwave dielectric resonator (DR) are much more accurate and effective for the electrical characterisation of metallic NPs than traditional techniques. This is because they are inexpensive, convenient, non-destructive, contactless, hazardless (i.e. at low power) and require no special sample preparation. This research is the first attempt to determine the microwave properties of Pt and Au NP films, which are appealing materials for nano-scale electronics, using the aforementioned microwave techniques. The ease of synthesis, relatively low cost, unique catalytic activities and control over size and shape were the main considerations in choosing Pt and Au NPs for the present study. The initial phase of this research was to implement and validate the aperture admittance model for the OCP measurement through experiments and 3D full-wave simulation using the commercially available Ansoft

  6. Using Data Mining Techniques to Build a Classification Model for Predicting Employees Performance

    Directory of Open Access Journals (Sweden)

    Qasem A. Al-Radaideh

    2012-02-01

    Full Text Available Human capital is of high concern for company management, whose main interest is in hiring highly qualified personnel who are expected to perform well. Recently, there has been growing interest in the data mining area, where the objective is the discovery of knowledge that is correct and of high benefit for users. In this paper, data mining techniques were utilized to build a classification model to predict the performance of employees. To build the classification model the CRISP-DM data mining methodology was adopted. Decision trees were the main data mining tool used to build the classification model, from which several classification rules were generated. To validate the generated model, several experiments were conducted using real data collected from several companies. The model is intended to be used for predicting new applicants' performance.
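    A minimal sketch of the modeling step, with hypothetical applicant attributes standing in for the paper's data (which is not public here), could look as follows; the rule listing at the end mirrors the kind of classification rules the authors report:

```python
# Illustrative decision-tree classifier on synthetic applicant features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# columns: years_experience, degree_level (0-2), interview_score (0-10)
X = np.column_stack([rng.integers(0, 15, 300),
                     rng.integers(0, 3, 300),
                     rng.integers(0, 11, 300)])
y = (0.3 * X[:, 0] + X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(0, 1, 300) > 6).astype(int)   # synthetic "performs well" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", tree.score(X_te, y_te))
# The learned rules can be read off directly, similar to the paper's rule set:
print(export_text(tree, feature_names=["experience", "degree", "interview"]))
```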

  7. New Diagnostic, Launch and Model Control Techniques in the NASA Ames HFFAF Ballistic Range

    Science.gov (United States)

    Bogdanoff, David W.

    2012-01-01

    This report presents new diagnostic, launch and model control techniques used in the NASA Ames HFFAF ballistic range. High speed movies were used to view the sabot separation process and the passage of the model through the model splap paper. Cavities in the rear of the sabot, to catch the muzzle blast of the gun, were used to control sabot finger separation angles and distances. Inserts were installed in the powder chamber to greatly reduce the ullage volume (empty space) in the chamber. This resulted in much more complete and repeatable combustion of the powder and hence, in much more repeatable muzzle velocities. Sheets of paper or cardstock, impacting one half of the model, were used to control the amplitudes of the model pitch oscillations.

  8. Automatic parameter extraction technique for gate leakage current modeling in double gate MOSFET

    Science.gov (United States)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-11-01

    Direct Tunneling (DT) and Trap Assisted Tunneling (TAT) gate leakage current parameters have been extracted and verified using an automatic parameter extraction approach. The industry-standard package IC-CAP is used to extract our leakage current model parameters. The model is coded in Verilog-A, and the comparison between the model and measured data makes it possible to obtain the model parameter values and the correlations/relations between parameters. The model and parameter extraction techniques have been used to study the impact of parameters on the gate leakage current based on the extracted parameter values. It is shown that the gate leakage current depends more strongly on the interfacial barrier height than on the barrier height of the dielectric layer. The same holds for the carrier effective masses in the interfacial layer and the dielectric layer. The comparison between the simulated results and available measured gate leakage current characteristics of Trigate MOSFETs shows good agreement.
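    As a hedged illustration of the extraction step (IC-CAP and the Verilog-A model are not reproduced here), a simplified Fowler-Nordheim-type direct-tunneling expression can be fitted to leakage data by log-linearizing it; A and B below are generic stand-ins for the barrier- and effective-mass-dependent parameters discussed above:

```python
# Toy parameter extraction for J = A * E^2 * exp(-B / E) on synthetic data.
import numpy as np

def j_dt(E, A, B):
    """Simplified direct-tunneling / Fowler-Nordheim-type current density."""
    return A * E**2 * np.exp(-B / E)

E = np.linspace(5e8, 1.5e9, 40)              # oxide field [V/m] (assumed range)
A_true, B_true = 1e-6, 2.5e9                 # hypothetical "true" parameters
rng = np.random.default_rng(3)
J = j_dt(E, A_true, B_true) * (1 + 0.02 * rng.normal(size=E.size))

# Log-linearize: ln(J / E^2) = ln(A) - B * (1/E), then fit a straight line.
slope, intercept = np.polyfit(1.0 / E, np.log(J / E**2), 1)
print("extracted B:", -slope, " extracted A:", np.exp(intercept))
```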

  9. A stochastic delay model for pricing debt and equity: Numerical techniques and applications

    Science.gov (United States)

    Tambue, Antoine; Kemajou Brown, Elisabeth; Mohammed, Salah

    2015-01-01

    Delayed nonlinear models for pricing corporate liabilities and European options were recently developed. Using a self-financing strategy and duplication, we were able to derive a Random Partial Differential Equation (RPDE) whose solutions describe the evolution of debt and equity values of a corporate in the last delay period interval in the companion paper (Kemajou et al., 2012) [14]. In this paper, we provide robust numerical techniques to solve the delayed nonlinear model for the corporate value, along with the corresponding RPDEs modeling the debt and equity values of the corporate. Using financial data from some firms, we forecast and compare numerical solutions from both the nonlinear delayed model and the classical Merton model with the real corporate data. From this comparison, it emerges that in corporate finance the past dependence of the firm value process may be an important feature and therefore should not be ignored.

  10. High frequency magnetic field technique: mathematical modelling and development of a full scale water fraction meter

    Energy Technology Data Exchange (ETDEWEB)

    Cimpan, Emil

    2004-09-15

    This work is concerned with the development of a new on-line measuring technique to be used in measurements of the water concentration in a two-component oil/water or three-component (i.e. multiphase) oil/water/gas flow. The technique is based on using non-intrusive coil detectors, and experiments were performed both statically (medium at rest) and dynamically (medium flowing through a flow rig). The various coil detectors were constructed with either one or two coils, and specially designed electronics were used. The medium was composed of air, machine oil, and water having different conductivity values, i.e. seawater and salt water with various conductivities (salt concentrations) such as 1 S/m, 4.9 S/m and 9.3 S/m. The experimental measurements done with the different mixtures were further used to mathematically model the physical principle used in the technique. This new technique is based on measuring the coil impedance and signal frequency at the self-resonance frequency of the coil to determine the water concentration in the mix. By using numerous coils it was found, experimentally, that generally both the coil impedance and the self-resonance frequency of the coil decreased as the medium conductivity increased. Both the impedance and the self-resonance frequency of the coil depended on the medium loss due to the induced eddy currents within the conductive media in the mixture, i.e. water. In order to detect relatively low values of the medium loss, the self-resonance frequency of the coil and also of the magnetic field penetrating the media should be relatively high (within the MHz range and higher). Therefore, the technique was called and referred to throughout the entire work as the high frequency magnetic field technique (HFMFT). To practically use the HFMFT, it was necessary to circumscribe an analytical frame to this technique. This was done by working out a mathematical model that relates the impedance and the self-resonance frequency of the coil to the

  11. Hybrid Model Testing Technique for Deep-Sea Platforms Based on Equivalent Water Depth Truncation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, an inner turret moored FPSO, which works in water of 320 m depth, is selected to study the so-called "passively-truncated + numerical-simulation" type of hybrid model testing technique, with a truncated water depth of 160 m and a model scale of λ=80. During the investigation, the optimization design of the equivalent-depth truncated system is performed by using the similarity of the static characteristics between the truncated system and the full-depth one as the objective function. From the truncated system, the corresponding physical test model is built. By adopting the coupled time-domain simulation method, the truncated-system model test is numerically reconstructed to carefully verify the computer simulation software and to adjust the corresponding hydrodynamic parameters. Based on the above work, the numerical extrapolation to the full-depth system is performed by using the verified computer software and the adjusted hydrodynamic parameters. The full-depth system model test is then performed in the basin and the results are compared with those from the numerical extrapolation. Finally, the implementation procedure and the key techniques of hybrid model testing of deep-sea platforms are summarized and presented. Through the above investigations, some beneficial conclusions are drawn.

  12. Using Interior Point Method Optimization Techniques to Improve 2- and 3-Dimensional Models of Earth Structures

    Science.gov (United States)

    Zamora, A.; Gutierrez, A. E.; Velasco, A. A.

    2014-12-01

    2- and 3-Dimensional models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g. core samples, seismic surveys, etc.). However, inverse modeling of gravity data presents a very unstable and ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can strongly affect the resulting model. Through the implementation of an interior-point constrained optimization technique, we improve 2-D and 3-D models of Earth structures representing known density contrasts, mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and to gravity data from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. Specifically, we improve the 2- and 3-D Earth models by eliminating unacceptable solutions (those that do not satisfy the required constraints or are geologically unfeasible) through the reduction of the solution space.
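    A toy sketch of the constrained-inversion idea follows: recover the depth and density contrast of a buried horizontal cylinder from noisy synthetic gravity data with SciPy's interior-point-style trust-constr optimizer, using bounds to exclude geologically unfeasible solutions. All values are hypothetical:

```python
# Bounded gravity inversion for a buried horizontal cylinder (synthetic).
import numpy as np
from scipy.optimize import minimize

G, R = 6.674e-11, 50.0                      # gravitational constant, radius [m]
x = np.linspace(-500, 500, 51)              # profile coordinates [m]

def gz(params):
    drho, z = params                        # density contrast [kg/m^3], depth [m]
    return 2 * np.pi * G * drho * R**2 * z / (x**2 + z**2)

rng = np.random.default_rng(4)
d_obs = gz([400.0, 120.0]) + rng.normal(0, 1e-8, x.size)   # noisy synthetic data

# Chi-square misfit, weighted by the assumed noise level.
misfit = lambda p: np.sum(((gz(p) - d_obs) / 1e-8) ** 2)
res = minimize(misfit, x0=[200.0, 80.0], method="trust-constr",
               bounds=[(0, 1000), (10, 500)])   # feasibility constraints
print("recovered contrast and depth:", res.x)
```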

  13. Trimming a hazard logic tree with a new model-order-reduction technique

    Science.gov (United States)

    Porter, Keith; Field, Ned; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
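    The tornado-diagram step can be sketched generically: vary one logic-tree parameter at a time between its extremes while holding the rest at baseline, then rank parameters by the swing they induce in the output. The loss function and branch values below are hypothetical stand-ins, not UCERF3-TD branches:

```python
# Generic tornado-diagram sensitivity ranking.
def tornado(model, baseline, extremes):
    """Return parameters sorted by |output swing| when varied alone."""
    swings = {}
    for name, (lo_val, hi_val) in extremes.items():
        outputs = [model(dict(baseline, **{name: v})) for v in (lo_val, hi_val)]
        swings[name] = abs(outputs[1] - outputs[0])
    return sorted(swings.items(), key=lambda kv: -kv[1])

# Toy stand-in for a portfolio-loss model over three branch parameters.
loss = lambda p: p["slip_rate"] * 2.0 + p["mag_cap"] * 0.3 + p["gmpe"] * 0.05
baseline = {"slip_rate": 1.0, "mag_cap": 8.0, "gmpe": 1.0}
extremes = {"slip_rate": (0.5, 1.5), "mag_cap": (7.5, 8.5), "gmpe": (0.5, 2.0)}

for name, swing in tornado(loss, baseline, extremes):
    print(f"{name:10s} swing = {swing:.3f}")   # small-swing branches can be fixed
```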

  14. Transgastric endoscopic gastrojejunostomy using holing followed by interrupted suture technique in a porcine model

    Institute of Scientific and Technical Information of China (English)

    Su-Yu Chen; Hong Shi; Sheng-Jun Jiang; Yong-Guang Wang; Kai Lin; Zhao-Fei Xie; Xiao-Jing Liu

    2015-01-01

    AIM: To demonstrate the feasibility and reproducibility of a pure natural orifice transluminal endoscopic surgery(NOTES) gastrojejunostomy using holing followed by interrupted suture technique using a single endoloop matched with a pair of clips in a non-survival porcine model.METHODS: NOTES gastrojejunostomy was performed on three female domestic pigs as follows: Gastrostomy, selection and retrieval of a free-floating loop of the small bowel into the stomach pouch, hold and exposure of the loop in the gastric cavity using a submucosal inflation technique, execution of a gastro-jejunal mucosal-seromuscular layer approximation using holing followed by interrupted suture technique with endoloop/clips, and full-thickness incision of the loop with a Dual knife.RESULTS: Pure NOTES side-to-side gastrojejunostomy was successfully performed in all three animals. No leakage was identified via methylene blue evaluation following surgery.CONCLUSION: This novel technique for preforming a gastrointestinal anastomosis exclusively by NOTES is technically feasible and reproducible in an animal model but warrants further improvement.

  15. Soil temperature modeling at different depths using neuro-fuzzy, neural network, and genetic programming techniques

    Science.gov (United States)

    Kisi, Ozgur; Sanikhani, Hadi; Cobaner, Murat

    2016-05-01

    The applicability of artificial neural networks (ANN), adaptive neuro-fuzzy inference system (ANFIS), and genetic programming (GP) techniques in estimating soil temperatures (ST) at different depths is investigated in this study. Weather data from two stations, Mersin and Adana, Turkey, were used as inputs to the applied models in order to model monthly STs. The first part of the study focused on comparison of ANN, ANFIS, and GP models in modeling ST of the two stations at depths of 10, 50, and 100 cm. GP was found to perform better than the ANN and ANFIS-SC in estimating monthly ST. The effect of periodicity (month of the year) on the models' accuracy was also investigated. Including a periodicity component in the models' inputs considerably increased their accuracy. The root mean square error (RMSE) of the ANN models decreased by 34% and 27% for the depths of 10 and 100 cm, respectively, when the periodicity input was added. In the second part of the study, the accuracies of the ANN, ANFIS, and GP models were compared in estimating ST of Mersin Station using the climatic data of Adana Station. The ANN models generally performed better than the ANFIS-SC and GP in modeling ST of Mersin Station without local climatic inputs.

  17. Policy Capturing with Local Models: The Application of the AID technique in Modeling Judgment

    Science.gov (United States)

    1972-12-01

    or coding phases have upon the derived policy model. Particularly important aspects of these subtasks include: 1) Initial identification and coding of... Applying AID4UT/AIDTRE in Policy Capturing: The experience gained thus far in applying AID4UT/AIDTRE to Policy Capturing is extensive in the sense... that numerous models have been attempted and produced, but limited in the sense that these models were all for a particular decision process, except for

  18. Local tetrahedron modeling of microelectronics using the finite-volume hybrid-grid technique

    Energy Technology Data Exchange (ETDEWEB)

    Riley, D.J.; Turner, C.D.

    1995-12-01

    The finite-volume hybrid-grid (FVHG) technique uses both structured and unstructured grid regions in obtaining a solution to the time-domain Maxwell's equations. The method is based on explicit time differencing and utilizes rectilinear finite-difference time-domain (FDTD) and nonorthogonal finite-volume time-domain (FVTD). The technique directly couples structured FDTD grids with unstructured FVTD grids without the need for spatial interpolation across grid interfaces. In this paper, the FVHG method is applied to simple planar microelectronic devices. Local tetrahedron grids are used to model portions of the device under study, with the remainder of the problem space being modeled with cubical hexahedral cells. The accuracy of propagating microstrip-guided waves from a low-density hexahedron region through a high-density tetrahedron grid is investigated.

  19. Data-driven remaining useful life prognosis techniques stochastic models, methods and applications

    CERN Document Server

    Si, Xiao-Sheng; Hu, Chang-Hua

    2017-01-01

    This book introduces data-driven remaining useful life prognosis techniques, and shows how to utilize the condition monitoring data to predict the remaining useful life of stochastic degrading systems and to schedule maintenance and logistics plans. It is also the first book that describes the basic data-driven remaining useful life prognosis theory systematically and in detail. The emphasis of the book is on the stochastic models, methods and applications employed in remaining useful life prognosis. It includes a wealth of degradation monitoring experiment data, practical prognosis methods for remaining useful life in various cases, and a series of applications incorporated into prognostic information in decision-making, such as maintenance-related decisions and ordering spare parts. It also highlights the latest advances in data-driven remaining useful life prognosis techniques, especially in the contexts of adaptive prognosis for linear stochastic degrading systems, nonlinear degradation modeling based pro...

  20. Test of the notch technique for determining the radial sensitivity of the optical model potential

    CERN Document Server

    Yang, Lei; Jia, Hui-ming; Xu, Xin-Xing; Ma, Nan-Ru; Sun, Li-Jie; Yang, Feng; Zhang, Huan-Qiao; Li, Zu-Hua; Wang, Dong-Xi

    2015-01-01

    Detailed investigations of the notch technique are performed on ideal data generated from the optical model potential parameters extracted for the 16O+208Pb system at a laboratory energy of 129.5 MeV, to study the sensitivity of this technique to the model parameters as well as to the experimental data. It is found that, for the perturbation parameters, a sufficiently large reduced fraction and an appropriately small perturbation width are necessary to determine the radial sensitivity accurately, while for the potential parameters almost no dependence is observed. For the experimental measurements, the number of data points has little influence for the heavy target system, and information on the inner region of the nuclear potential can be derived when the measurement is extended to lower cross sections.

  1. An Accuracy Assessment of Automated Photogrammetric Techniques for 3d Modeling of Complex Interiors

    Science.gov (United States)

    Georgantas, A.; Brédif, M.; Pierrot-Desseilligny, M.

    2012-07-01

    This paper presents a comparison of automatic photogrammetric techniques to terrestrial laser scanning for 3D modelling of complex interior spaces. We evaluate the automated photogrammetric techniques not only in terms of their geometric quality compared to laser scanning but also in terms of monetary cost, acquisition time and computational time. For this purpose we chose a modern building's stairway as the test site. APERO/MICMAC (©IGN), an open-source photogrammetric software suite, was used for the production of the 3D photogrammetric point cloud, which was compared to the one acquired by a Leica ScanStation 2 laser scanner. After performing various qualitative and quantitative controls, we present the advantages and disadvantages of each 3D modelling method applied in a complex interior of a modern building.

  2. 21st Century hydrological modeling for optimizing ancient water harvesting techniques

    OpenAIRE

    Cornelis, Wim; Verbist, Koen; McLaren, Robert G; Soto, Guido; Gabriels, Donald

    2012-01-01

    In order to increase dryland productivity, water harvesting techniques (WHT) have received renewed attention, leading to their massive implementation in marginal drylands. However, versatile tools to evaluate their efficiency under a wide range of conditions are often lacking. For two case studies in the arid and semi-arid central-northern zone of Chile, a fully coupled 3D surface-subsurface hydrological model based on the Richards’ and the Saint Venant equations was used to evaluate and impr...

  3. Comparison of different modelling techniques for longitudinally invariant integrated optical waveguides

    Science.gov (United States)

    de Zutter, D.; Lagasse, P.; Buus, J.; Young, T. P.; Dillon, B. M.

    1989-10-01

    In order to compare various modeling techniques for the eigenmode analysis of integrated optical waveguides, twelve different methods are applied to the analysis of two typical III-V rib waveguides. Both a single and a coupled waveguide case are considered. Results focus on the effective refractive index value for the lowest order TE-mode in the case of the single waveguide, and on the coupling length between the lowest order symmetric and antisymmetric TE-modes of the coupled waveguides.

  4. Application of power addition as modelling technique for flow processes: Two case studies

    CSIR Research Space (South Africa)

    de Wet, Pierre; du Plessis, J. Prieur; Woudberg, Sonia

    2010-05-01

    Full Text Available The value of research on precise, credible experimental practices is undeniable. The empirical equations derived from these investigations impart understanding of the underlying physics, are crucial for the development of computational routines and form an integral...

  5. Transversality of mathematical modelling techniques in Pharmacy by means of a spreadsheet

    OpenAIRE

    Ocaña Lara, Francisco A.; Valderrama, M.J.; Aguilera, A.M.; Matilla-Hernández, A.; Talavera Rodríguez, Eva María

    2010-01-01

    The implementation of the EHEA in Pharmacy studies will foster student self-learning. In this way, students' ability to apply knowledge learned in some subjects when seeking knowledge in any other will be tested. Among such applicable knowledge, we can consider the mathematical modelling techniques included in the field of Mathematics. This article draws attention to the usefulness of popular spreadsheets for Pharmacy students interested in the application of mathematical...

  6. Configuring simulation models using CAD techniques : a new approach to warehouse design

    OpenAIRE

    Brito, António Ernesto da Silva Carvalho

    2013-01-01

    The research reported in this thesis is related to the development and use of software tools for supporting warehouse design and management. Computer Aided Design and Simulation techniques are used to develop a software system that forms the basis of a Decision Support System for warehouse design. The current position of simulation software is reviewed. It is investigated how appropriate current simulation software is for warehouse modelling. Special attention is given to Vi...

  7. Characterization and modeling of electrochemical energy conversion systems by impedance techniques

    Energy Technology Data Exchange (ETDEWEB)

    Klotz, Dino

    2012-07-01

    This work introduces (i) amendments to basic electrochemical measurement techniques in the time and frequency domain suitable for electrochemical energy conversion systems like fuel cells and batteries, which enable shorter measurement times and improved precision in both measurement and parameter identification, and (ii) a modeling approach that is able to simulate a technically relevant system using only information gained through static and impedance measurements of laboratory-size cells.
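    As a small illustration of the modeling side (not the thesis code), the impedance of a series resistance plus a parallel RC arm, the simplest equivalent-circuit element one might fit to measured spectra of such cells, can be evaluated over frequency as follows; all parameter values are assumptions:

```python
# Impedance of a series R plus parallel RC arm over a frequency sweep.
import numpy as np

R0, R1, C1 = 0.05, 0.4, 1e-2            # ohmic and polarization parameters (assumed)
f = np.logspace(-1, 4, 200)             # frequency sweep [Hz]
w = 2 * np.pi * f

Z = R0 + R1 / (1 + 1j * w * R1 * C1)    # complex impedance Z(w)
print("low-frequency |Z|:", abs(Z[0]))   # tends to R0 + R1
print("high-frequency |Z|:", abs(Z[-1])) # tends to R0
```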

  8. Using an inverse modelling approach to evaluate the water retention in a simple water harvesting technique

    Directory of Open Access Journals (Sweden)

    K. Verbist

    2009-06-01

    Full Text Available In arid and semi-arid zones runoff harvesting techniques are often applied to increase the water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Nevertheless, few efforts have been made to quantify the water harvesting processes of these techniques and to evaluate their efficiency. In this study a combination of detailed field measurements and modelling with the HYDRUS-2D software package was used to visualize the effect of an infiltration trench on the soil water content of a bare slope in Northern Chile. Rainfall simulations were combined with high spatial and temporal resolution water content monitoring in order to construct a useful dataset for inverse modelling purposes. Initial estimates of model parameters were provided by detailed infiltration and soil water retention measurements. Four different measurement techniques were used to determine the saturated hydraulic conductivity (Ksat) independently. Tension infiltrometer measurements proved a good estimator of the Ksat value and a proxy for those measured under simulated rainfall, whereas the pressure and constant head well infiltrometer measurements showed larger variability. Six different parameter optimization functions were tested as a combination of soil-water content, water retention and cumulative infiltration data. Infiltration data alone proved insufficient to obtain high model accuracy, due to large scatter on the data set, and water content data were needed to obtain optimized effective parameter sets with small confidence intervals. Correlation between observed soil water content and simulated values was as high as R2=0.93 for ten selected observation points used in the model calibration phase, with overall correlation for the 22 observation points equal to 0.85. Model results indicate that the infiltration trench has a significant effect on

  9. Using an inverse modelling approach to evaluate the water retention in a simple water harvesting technique

    Directory of Open Access Journals (Sweden)

    K. Verbist

    2009-10-01

    Full Text Available In arid and semi-arid zones, runoff harvesting techniques are often applied to increase the water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Nevertheless, few efforts have been made to quantify the water harvesting processes of these techniques and to evaluate their efficiency. In this study, a combination of detailed field measurements and modelling with the HYDRUS-2D software package was used to visualize the effect of an infiltration trench on the soil water content of a bare slope in northern Chile. Rainfall simulations were combined with high spatial and temporal resolution water content monitoring in order to construct a useful dataset for inverse modelling purposes. Initial estimates of model parameters were provided by detailed infiltration and soil water retention measurements. Four different measurement techniques were used to determine the saturated hydraulic conductivity (Ksat) independently. The tension infiltrometer measurements proved a good estimator of the Ksat value and a proxy for those measured under simulated rainfall, whereas the pressure and constant head well infiltrometer measurements showed larger variability. Six different parameter optimization functions were tested as a combination of soil-water content, water retention and cumulative infiltration data. Infiltration data alone proved insufficient to obtain high model accuracy, due to large scatter on the data set, and water content data were needed to obtain optimized effective parameter sets with small confidence intervals. Correlation between the observed soil water content and the simulated values was as high as R2=0.93 for ten selected observation points used in the model calibration phase, with overall correlation for the 22 observation points equal to 0.85. The model results indicate that the infiltration trench has a

  10. Using an inverse modelling approach to evaluate the water retention in a simple water harvesting technique

    Science.gov (United States)

    Verbist, K.; Cornelis, W. M.; Gabriels, D.; Alaerts, K.; Soto, G.

    2009-10-01

    In arid and semi-arid zones, runoff harvesting techniques are often applied to increase the water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Nevertheless, few efforts were observed to quantify the water harvesting processes of these techniques and to evaluate their efficiency. In this study, a combination of detailed field measurements and modelling with the HYDRUS-2D software package was used to visualize the effect of an infiltration trench on the soil water content of a bare slope in northern Chile. Rainfall simulations were combined with high spatial and temporal resolution water content monitoring in order to construct a useful dataset for inverse modelling purposes. Initial estimates of model parameters were provided by detailed infiltration and soil water retention measurements. Four different measurement techniques were used to determine the saturated hydraulic conductivity (Ksat) independently. The tension infiltrometer measurements proved a good estimator of the Ksat value and a proxy for those measured under simulated rainfall, whereas the pressure and constant head well infiltrometer measurements showed larger variability. Six different parameter optimization functions were tested as a combination of soil-water content, water retention and cumulative infiltration data. Infiltration data alone proved insufficient to obtain high model accuracy, due to large scatter on the data set, and water content data were needed to obtain optimized effective parameter sets with small confidence intervals. Correlation between the observed soil water content and the simulated values was as high as R2=0.93 for ten selected observation points used in the model calibration phase, with overall correlation for the 22 observation points equal to 0.85. The model results indicate that the infiltration trench has a significant effect on soil-water storage, especially at the base of the
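    A hedged sketch of the inverse-estimation idea follows: fit van Genuchten retention parameters to (synthetic) retention observations by least squares. HYDRUS-2D's inverse mode optimizes such parameters jointly against flow and water-content data, which is beyond this illustration; all values here are hypothetical:

```python
# Least-squares fit of van Genuchten retention parameters (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water content as a function of suction head h [cm]."""
    return theta_r + (theta_s - theta_r) / (1 + (alpha * h) ** n) ** (1 - 1 / n)

h = np.array([1, 3, 10, 30, 100, 300, 1000, 3000], dtype=float)
rng = np.random.default_rng(5)
theta_obs = van_genuchten(h, 0.05, 0.41, 0.03, 1.6) + rng.normal(0, 0.005, h.size)

popt, _ = curve_fit(van_genuchten, h, theta_obs,
                    p0=[0.05, 0.40, 0.05, 1.5],
                    bounds=([0, 0.2, 1e-4, 1.05], [0.2, 0.6, 1.0, 4.0]))
print("theta_r, theta_s, alpha, n =", popt)
```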

  11. Optimization of liquid overlay technique to formulate heterogenic 3D co-cultures models.

    Science.gov (United States)

    Costa, Elisabete C; Gaspar, Vítor M; Coutinho, Paula; Correia, Ilídio J

    2014-08-01

    Three-dimensional (3D) cell culture models of solid tumors are currently having a tremendous impact on the in vitro screening of candidate anti-tumoral therapies. These 3D models provide more reliable results than standard 2D in vitro cell cultures. However, 3D manufacturing techniques need to be further optimized in order to increase the robustness of these models and provide data that can be properly correlated with the in vivo situation. Therefore, in the present study the parameters used for producing multicellular tumor spheroids (MCTS) by the liquid overlay technique (LOT) were optimized in order to produce heterogeneous cellular agglomerates comprised of cancer cells and stromal cells over long periods. Spheroids were produced under highly controlled conditions, namely: (i) agarose coatings; (ii) horizontal stirring; and (iii) a known initial cell number. The simultaneous optimization of these parameters promoted the assembly of a 3D cellular organization characteristic of that found in in vivo solid tumors. Such improvements in the LOT technique promoted the assembly of highly reproducible, individual 3D spheroids, with a low cost of production, that can be used for future in vitro drug screening assays.

  12. Use of System Identification Techniques to Explore the Hydrological Cycle Response to Perturbations in Climate Models

    Science.gov (United States)

    Kravitz, B.; MacMartin, D. G.; Rasch, P. J.; Wang, H.

    2015-12-01

    Identifying the influence of radiative forcing on hydrological cycle changes in climate models can be challenging due to low signal-to-noise ratios, particularly for regional changes. One method of improving the signal-to-noise ratio, even for short simulations, is to use techniques from engineering, broadly known as system identification. Through this method, forcing (or any other chosen field) in multiple regions in a climate model is perturbed simultaneously by using mutually uncorrelated signals with a chosen frequency content, depending upon the climate behavior one wishes to reveal. The result is the sensitivity of a particular climate field (e.g., temperature, precipitation, or cloud cover) to changes in any perturbed region. We demonstrate this technique in the Community Earth System Model (CESM). We perturbed surface air temperatures in 22 regions by up to 1°C. The amount of temperature perturbation was changed every day corresponding to a predetermined sequence of random numbers between -1 and 1, filtered to contain particular frequency content. The matrix of sequences was then orthogonalized such that all individual sequences were mutually uncorrelated. We performed CESM simulations with both fixed sea surface temperatures and a fully coupled ocean. We discuss the various patterns of climate response in several fields relevant to the hydrological cycle, including precipitation and surface latent heat fluxes. We also discuss the potential limits of this technique in terms of the spatial and temporal scales over which it would be appropriate to use.
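    The signal-construction step can be sketched as follows: draw a random sequence per region, band-limit it to the frequency content of interest, and orthogonalize the columns so the sequences are mutually uncorrelated. The sizes and pass band below are assumptions, not the study's actual choices:

```python
# Mutually uncorrelated, band-limited perturbation sequences for 22 regions.
import numpy as np

rng = np.random.default_rng(6)
n_days, n_regions = 3650, 22
seq = rng.uniform(-1, 1, (n_days, n_regions))

# Keep only low-frequency content (periods longer than ~30 days, assumed).
F = np.fft.rfft(seq, axis=0)
freqs = np.fft.rfftfreq(n_days, d=1.0)        # cycles per day
F[freqs > 1 / 30.0, :] = 0.0
seq = np.fft.irfft(F, n=n_days, axis=0)

# Orthogonalize columns so the regional signals are mutually uncorrelated.
seq -= seq.mean(axis=0)                       # zero-mean so orthogonal => uncorrelated
Q, _ = np.linalg.qr(seq)
seq = Q / np.abs(Q).max(axis=0)               # rescale each column to [-1, 1]

print("max inter-region correlation:",
      np.abs(np.corrcoef(seq.T) - np.eye(n_regions)).max())
```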

  13. Downscaling Statistical Model Techniques for Climate Change Analysis Applied to the Amazon Region

    Directory of Open Access Journals (Sweden)

    David Mendes

    2014-01-01

    Full Text Available The Amazon is an area covered predominantly by dense tropical rainforest with relatively small inclusions of several other types of vegetation. In recent decades, scientific research has suggested a strong link between the health of the Amazon and the integrity of the global climate: tropical forests and woodlands (e.g., savannas) exchange vast amounts of water and energy with the atmosphere and are thought to be important in controlling local and regional climates. Considering the importance of the Amazon biome for global climate change impacts, the role of protected areas in the conservation of biodiversity, and the state of the art of downscaling techniques based on ANNs, we calibrate and run a downscaling model technique based on the Artificial Neural Network (ANN) applied to the Amazon region in order to obtain regional and local predicted climate data (e.g., precipitation). The results show that the ANN output has good similarity with the observations in the cities of Belém and Manaus, with correlations of approximately 88.9% and 91.3%, respectively, and a good fit in the spatial distribution, especially in the correction process.

  14. Optimization models and techniques for implementation and pricing of electricity markets

    Science.gov (United States)

    Madrigal Martinez, Marcelino

    Vertically integrated electric power systems extensively use optimization models and solution techniques to guide their optimal operation and planning. The advent of electric power system restructuring has created the need for new optimization tools and for the revision of those inherited from the vertical integration era for use in the market environment. This thesis presents further developments in the use of optimization models and techniques for the implementation and pricing of primary electricity markets. New models, solution approaches, and price setting alternatives are proposed. Three different modeling groups are studied. The first modeling group considers simplified continuous and discrete models for power pool auctions driven by central-cost minimization. The direct solution of the dual problems, and the use of a Branch-and-Bound algorithm to solve the primal, allow us to identify the effects of disequilibrium and of different price setting alternatives on the existence of multiple solutions. It is shown that particular pricing rules worsen the conflict of interest that arises when multiple solutions exist under disequilibrium. A price-setting alternative based on dual variables is shown to diminish such conflict. The second modeling group considers the unit commitment problem. An interior-point/cutting-plane method is proposed for the solution of the dual problem. The new method has better convergence characteristics and does not suffer from the parameter tuning drawback of previous methods. The robustness characteristics of the interior-point/cutting-plane method, combined with a non-uniform price setting alternative, show that the conflict of interest is diminished when multiple near-optimal solutions exist. The non-uniform price setting alternative is compared to a classic average pricing rule. The last modeling group concerns a new type of linear network-constrained clearing system model for daily markets for power and spinning reserve. A new model and
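    As a toy illustration of a central-cost-minimizing pool auction with a uniform price rule (a deliberately simplified stand-in for the thesis's models, ignoring network constraints and unit commitment; all offers are hypothetical), consider a merit-order dispatch:

```python
# Merit-order dispatch with a uniform clearing price.
offers = [("gen_A", 50.0, 12.0),   # (name, capacity MW, offer $/MWh)
          ("gen_B", 80.0, 20.0),
          ("gen_C", 60.0, 35.0)]
demand = 100.0

dispatch, remaining, price = {}, demand, 0.0
for name, cap, cost in sorted(offers, key=lambda o: o[2]):  # cheapest first
    q = min(cap, remaining)
    if q > 0:
        dispatch[name] = q
        price = cost            # uniform price = offer of the marginal unit
        remaining -= q

print("dispatch:", dispatch)    # {'gen_A': 50.0, 'gen_B': 50.0}
print("clearing price:", price) # 20.0 $/MWh
```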

  15. A review of discrete modeling techniques for fracturing processes in discontinuous rock masses

    Institute of Scientific and Technical Information of China (English)

    A.Lisjak; G.Grasselli

    2014-01-01

    The goal of this review paper is to provide a summary of selected discrete element and hybrid finite-discrete element modeling techniques that have emerged in the field of rock mechanics as simulation tools for fracturing processes in rocks and rock masses. The fundamental principles of each computer code are illustrated with particular emphasis on the approach specifically adopted to simulate fracture nucleation and propagation and to account for the presence of rock mass discontinuities. This description is accompanied by a brief review of application studies focusing on laboratory-scale models of rock failure processes and on the simulation of damage development around underground excavations.

  16. Improvements of Surgical Technique in Establishment of Rat Orthotopic Pulmonary Transplantation Model Using Cuffs

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In order to establish a simpler and more effective rat orthotopic lung transplantation model, 20 rats were divided into donor and recipient groups. Rat lung transplantation models were established by using an improved cuff technique. All 10 operations were accomplished successfully. The mean operative time for recipients was 45±4 min. The survival time was over 30 days after lung transplantation. X-ray examinations were almost normal. There was no significant difference in the blood gas analysis before and after clipping the right hilum (P>0.05). This method is simpler, more applicable and requires less time.

  17. A Novel Algorithmic Cost Estimation Model Based on Soft Computing Technique

    Directory of Open Access Journals (Sweden)

    Iman Attarzadeh

    2010-01-01

    Full Text Available Problem statement: Software development effort estimation is the process of predicting the most realistic effort required to develop software, based on a number of parameters. It has been one of the biggest challenges in Computer Science for decades, because time and cost estimates at the early stages of software development are the most difficult to obtain and are often the least accurate. Traditional algorithmic techniques such as regression models, Software Life Cycle Management (SLIM), the COCOMO II model and function points require a long-term estimation process. Nowadays, that is not acceptable for software developers and companies. Newer soft computing approaches to effort estimation based on non-algorithmic techniques such as Fuzzy Logic (FL) may offer an alternative for solving the problem. This work aims to propose a new realistic fuzzy logic model to achieve more accuracy in software effort estimation. The main objective of this research was to investigate the role of the fuzzy logic technique in improving effort estimation accuracy by characterizing input parameters using two-sided Gaussian functions, which give a superior transition from one interval to another. Approach: The methodology adopted in this study was the use of a fuzzy logic approach rather than the classical intervals in COCOMO II. Using the advantages of fuzzy logic, such as fuzzy sets, input parameters can be specified by a distribution of their possible values, and these fuzzy sets are represented by membership functions. In this study, to get a smoother transition in the membership function for the input parameters, the associated linguistic values were represented by two-sided Gaussian Membership Functions (2-D GMF) and rules. Results: After analyzing the results obtained by applying COCOMO II and the proposed model based on fuzzy logic to the NASA dataset and an artificially created dataset, it was found that the proposed model was performing
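    A minimal sketch of the two-sided Gaussian membership function described above, with illustrative parameters only: separate left and right widths give a smooth, possibly asymmetric transition between linguistic values:

```python
# Two-sided Gaussian membership function (illustrative parameters).
import numpy as np

def gauss2mf(x, c1, s1, c2, s2):
    """Left Gaussian (c1, s1) below c1, flat top to c2, right Gaussian above."""
    y = np.ones_like(x, dtype=float)
    left, right = x < c1, x > c2
    y[left] = np.exp(-0.5 * ((x[left] - c1) / s1) ** 2)
    y[right] = np.exp(-0.5 * ((x[right] - c2) / s2) ** 2)
    return y

x = np.linspace(0, 10, 101)
mu = gauss2mf(x, c1=3.0, s1=1.0, c2=5.0, s2=2.0)   # e.g. a "nominal" cost driver
print("membership on the plateau (x = 4):", mu[40])  # -> 1.0
```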

  18. Designing a mathematical model of management techniques (TQM, BPR) in the Zahedan weave fishing net industries

    Directory of Open Access Journals (Sweden)

    Dr. Baqer Kord, Dr. Habibollah Salarzehi, Hamed Aramesh, Somayeh Mousavi

    2010-10-01

    Full Text Available Currently, reengineering and Total Quality Management (TQM) are well-known improvement techniques in organizations. In this research, a mathematical model was first designed to identify the main factors related to reengineering and TQM. Based on the factors found in the model, a 40-item questionnaire was formed and distributed in random order among the staff of the fishing net factory. The data were analyzed using SPSS, and the concluding remarks show that the installation and acceptance of TQM by means of reengineering is feasible for the factory.

  19. Model-driven engineering of information systems principles, techniques, and practice

    CERN Document Server

    Cretu, Liviu Gabriel

    2015-01-01

    Model-driven engineering (MDE) is the automatic production of software from simplified models of structure and functionality. It mainly involves the automation of the routine and technologically complex programming tasks, thus allowing developers to focus on the true value-adding functionality that the system needs to deliver. This book serves as an overview of some of the core topics in MDE. The volume is broken into two sections offering a selection of papers that help the reader not only understand the MDE principles and techniques, but also learn from practical examples. Also covered are the

  20. A review of discrete modeling techniques for fracturing processes in discontinuous rock masses

    Directory of Open Access Journals (Sweden)

    A. Lisjak

    2014-08-01

    Full Text Available The goal of this review paper is to provide a summary of selected discrete element and hybrid finite–discrete element modeling techniques that have emerged in the field of rock mechanics as simulation tools for fracturing processes in rocks and rock masses. The fundamental principles of each computer code are illustrated with particular emphasis on the approach specifically adopted to simulate fracture nucleation and propagation and to account for the presence of rock mass discontinuities. This description is accompanied by a brief review of application studies focusing on laboratory-scale models of rock failure processes and on the simulation of damage development around underground excavations.

  1. Scale-model charge-transfer technique for measuring enhancement factors

    Science.gov (United States)

    Kositsky, J.; Nanevicz, J. E.

    1991-01-01

    Determination of aircraft electric field enhancement factors is crucial when using airborne field mill (ABFM) systems to accurately measure electric fields aloft. SRI used the scale-model charge-transfer technique to determine enhancement factors of several canonical shapes and a scale-model Learjet 36A. The measured values for the canonical shapes agreed with known analytic solutions to within about 6 percent. The laboratory-determined enhancement factors for the aircraft were compared with those derived from in-flight data gathered by a Learjet 36A outfitted with eight field mills. The values agreed to within experimental error (approx. 15 percent).

  2. Advanced Modeling Techniques to Study Anthropogenic Influences on Atmospheric Chemical Budgets

    Science.gov (United States)

    Mathur, Rohit

    1997-01-01

    This research work is a collaborative effort between research groups at MCNC and the University of North Carolina at Chapel Hill. The overall objective of this research is to improve the level of understanding of the processes that determine the budgets of chemically and radiatively active compounds in the atmosphere through development and application of advanced methods for calculating the chemical change in atmospheric models. The research performed during the second year of this project focused on four major aspects: (1) The continued development and refinement of multiscale modeling techniques to address the issue of the disparate scales of the physico-chemical processes that govern the fate of atmospheric pollutants; (2) Development and application of analysis methods utilizing process and mass balance techniques to increase the interpretive powers of atmospheric models and to aid in complementary analysis of model predictions and observations; (3) Development of meteorological and emission inputs for initial application of the chemistry/transport model over the north Atlantic region; and, (4) The continued development and implementation of a totally new adaptive chemistry representation that changes the details of what is represented as the underlying conditions change.

  3. Forecasting macroeconomic variables using neural network models and three automated model selection techniques

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    2016-01-01

    When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. To alleviate the problem, White (2006) presented a solution (QuickNet) that converts the specification and nonlinear estimation problem into a linear model selection and estimation problem. We shall compare its performance to that of two other procedures building on the linearisation idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting......
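    The recursive-versus-direct distinction can be illustrated with a simple linear autoregression standing in for the neural network models (a hedged toy, not the paper's setup): the recursive strategy iterates a one-step model h times, while the direct strategy fits a separate model for the h-step horizon:

```python
# Recursive vs. direct h-step forecasting on synthetic AR(1) data.
import numpy as np

rng = np.random.default_rng(7)
y = np.zeros(500)
for t in range(1, 500):                  # AR(1) data with phi = 0.8
    y[t] = 0.8 * y[t - 1] + rng.normal()

def fit_slope(x, shift):
    """OLS (no intercept) of x[t + shift] on x[t]; returns the slope."""
    a, b = x[:-shift], x[shift:]
    return (a * b).sum() / (a * a).sum()

h = 3
phi1 = fit_slope(y, 1)          # one-step model
phih = fit_slope(y, h)          # direct h-step model

recursive = phi1 ** h * y[-1]   # iterate the one-step model h times
direct = phih * y[-1]           # one shot from the h-step model
print(f"recursive: {recursive:.3f}  direct: {direct:.3f}  "
      f"(true phi^3 = {0.8 ** 3:.3f})")
```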

  4. A technique for generating consistent ice sheet initial conditions for coupled ice-sheet/climate models

    Directory of Open Access Journals (Sweden)

    J. G. Fyke

    2013-04-01

    Full Text Available A new technique for generating ice sheet preindustrial 1850 initial conditions for coupled ice-sheet/climate models is developed and demonstrated over the Greenland Ice Sheet using the Community Earth System Model (CESM. Paleoclimate end-member simulations and ice core data are used to derive continuous surface mass balance fields which are used to force a long transient ice sheet model simulation. The procedure accounts for the evolution of climate through the last glacial period and converges to a simulated preindustrial 1850 ice sheet that is geometrically and thermodynamically consistent with the 1850 preindustrial simulated CESM state, yet contains a transient memory of past climate that compares well to observations and independent model studies. This allows future coupled ice-sheet/climate projections of climate change that include ice sheets to integrate the effect of past climate conditions on the state of the Greenland Ice Sheet, while maintaining system-wide continuity between past and future climate simulations.

  5. Comparing photo modeling methodologies and techniques: the instance of the Great Temple of Abu Simbel

    Directory of Open Access Journals (Sweden)

    Sergio Di Tondo

    2013-10-01

    Full Text Available Fifty years after the salvage of the Abu Simbel temples, it has been possible to experiment with contemporary photo-modeling tools, beginning from the original data of the photogrammetric survey carried out in the 1950s. This prompted a reflection on "image-based" methods and modeling techniques, comparing strict 3D digital photogrammetry with the latest Structure From Motion (SFM) systems. The topographic survey data, the original photogrammetric stereo pairs, the point coordinates and their representation as contour lines allowed a model of the monument to be obtained in its configuration before the moving of the temples. The impossibility of carrying out a direct survey led to the use of tourist shots to create SFM models for geometric comparisons.

  6. The strut-and-tie models in reinforced concrete structures analysed by a numerical technique

    Directory of Open Access Journals (Sweden)

    V. S. Almeida

    Full Text Available Strut-and-tie models are appropriate for the design and detailing of certain types of structural elements in reinforced concrete and of regions of stress concentration, called "D" regions. They give a good representation of the structural behavior and mechanism. The numerical techniques presented herein are used to identify stress regions which represent the strut-and-tie elements and to quantify their respective internal forces. Elastic linear plane problems are analyzed using strut-and-tie models by coupling the classical evolutionary structural optimization (ESO) and a new variant called SESO (Smoothing ESO) within a finite element formulation. The SESO method is based on the procedure of gradually reducing the stiffness contribution of inefficient elements at lower stress until they no longer have any influence. Optimal topologies of strut-and-tie models are presented for several instances, in good agreement with other pioneering works, allowing the design of reinforcement for structural elements.

  7. Hybrid OPC modeling with SEM contour technique for 10nm node process

    Science.gov (United States)

    Hitomi, Keiichiro; Halle, Scott; Miller, Marshal; Graur, Ioana; Saulnier, Nicole; Dunn, Derren; Okai, Nobuhiro; Hotta, Shoji; Yamaguchi, Atsuko; Komuro, Hitoshi; Ishimoto, Toru; Koshihara, Shunsuke; Hojo, Yutaka

    2014-03-01

    Hybrid OPC modeling is investigated using both CDs from 1D and simple 2D structures and contours extracted from complex 2D structures, both obtained with a Critical Dimension Scanning Electron Microscope (CD-SEM). Recent studies have addressed some of the key issues needed for the implementation of contour extraction, including an edge detection algorithm consistent with conventional CD measurements, contour averaging, and contour alignment. First, pattern contours obtained from CD-SEM images were used to complement traditional site-driven CD metrology for the calibration of OPC models for both the metal and contact layers of a 10 nm-node logic device developed at Albany Nano-Tech. The accuracy of the hybrid OPC model was compared with that of a conventional OPC model created with CD data only. Model accuracy, defined as total error root-mean-square (RMS), improved by 23% for the contact layer and by 18% for the metal layer with hybrid OPC modeling. The pattern-specific benefit of hybrid modeling was also examined. Resist shrink correction was applied to contours extracted from CD-SEM images to improve their accuracy, and the shrink-corrected contours were used for OPC modeling. The accuracy of the OPC model with shrink correction was compared to that without, and total error RMS decreased by 0.2 nm (12%) with the shrink correction technique. Variation in model accuracy among eight modeling runs with different calibration patterns was also reduced by applying shrink correction. Shrink correction of contours can thus improve the accuracy and stability of OPC models.

  8. Epineurial sheath tube (EST) technique: an experimental peripheral nerve repair model.

    Science.gov (United States)

    Bozkurt, Ahmet; Dunda, Sebastian E; Mon O'Dey, Dan; Brook, Gary A; Suschek, Christoph V; Pallua, Norbert

    2011-12-01

    Here we present the epineurial sheath tube (EST) technique as a modified microsurgical rat sciatic nerve model. The EST technique provides a cavity or pouch consisting of an outer epineurial sleeve that has been freed from nerve fascicles. This cavity may be appropriate for testing the effectiveness and biocompatibility of implanted growth factors, cell suspensions (embedded in solutions or gels), or bioartificial nerve guide constructs. A total of 10 rats underwent the surgical procedure for the EST technique. Cylinders made of fibrin gel served as implants and place-holders. Three animals were euthanized directly after the operation, while the others survived for 6 weeks. After immersion fixation (3.9% glutaraldehyde), both conventional histology [semi-thin sections (1 μm), toluidine blue] and scanning electron microscopy were performed. Conventional histology and scanning electron microscopy of samples fixed directly after the surgical procedure displayed the integrity of the closed epineurial tube with the fibrin cylinder in its center. Even after 6 weeks, the outer epineurium was not lacerated, the stitches had not loosened, and the lumen had not collapsed but remained open. The practicability of the EST technique was verified with regard to feasibility, reproducibility, mechanical stability, and patency of the lumen. The EST technique can be adapted to other nerve models (e.g. median or facial nerve). It provides a cavity or pouch that can be used for different neuroscientific approaches, including concepts to improve the therapeutic benefit of autologous nerve grafting or therapies intended as an alternative to autologous nerve grafting.

  9. Comparison of Quadrapolar™ radiofrequency lesions produced by standard versus modified technique: an experimental model

    Directory of Open Access Journals (Sweden)

    Safakish R

    2017-06-01

    Full Text Available Ramin Safakish Allevio Pain Management Clinic, Toronto, ON, Canada Abstract: Lower back pain (LBP) is a global public health issue associated with substantial financial costs and loss of quality of life. Estimates of the causes of back pain vary across the literature; the following statistic is the closest estimate for our patient population: sacroiliac (SI) joint pain is responsible for LBP in 18%-30% of individuals with LBP. Quadrapolar™ radiofrequency ablation, which ablates the nerves of the SI joint using heat, is a commonly used treatment for SI joint pain. However, the standard Quadrapolar radiofrequency procedure is not always effective at ablating all the sensory nerves that cause the pain in the SI joint. One of its major limitations is that it produces small lesions, roughly 4 mm in diameter; smaller lesions increase the likelihood of failing to ablate all nociceptive input. In this study, we compare the standard Quadrapolar radiofrequency ablation technique to a modified Quadrapolar ablation technique that has produced improved patient outcomes in our clinic. The methodologies of the two techniques are compared, along with results from an experimental model comparing the lesion sizes produced by the two techniques. Taken together, the findings from this study suggest that the modified Quadrapolar technique provides longer-lasting relief of the back pain caused by SI joint dysfunction. A randomized controlled clinical trial is the next step required to quantify the difference in symptom relief and quality of life produced by the two techniques. Keywords: lower back pain, radiofrequency ablation, sacroiliac joint, Quadrapolar radiofrequency ablation

  10. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.

    2017-07-06

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations; therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performances and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which improve the model groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
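    To make the EnKF update concrete, here is a minimal NumPy sketch of the stochastic analysis step described above. It is not the authors' W3RA code: the three-component state, the TWS-style observation operator H, the ensemble size and all error levels are illustrative assumptions.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, H, rng):
    """One stochastic EnKF analysis step.

    ensemble    : (n_state, n_ens) forecast ensemble
    obs         : (n_obs,) observation vector (e.g. GRACE TWS anomalies)
    obs_err_std : observation error standard deviation
    H           : (n_obs, n_state) linear observation operator
    """
    n_obs, n_ens = len(obs), ensemble.shape[1]
    # Perturb the observations -- the step a deterministic EnKF avoids.
    obs_pert = obs[:, None] + obs_err_std * rng.normal(size=(n_obs, n_ens))

    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)              # obs-space anomalies

    # Kalman gain built from sample covariances.
    P_hh = HA @ HA.T / (n_ens - 1) + obs_err_std**2 * np.eye(n_obs)
    P_xh = X @ HA.T / (n_ens - 1)
    K = P_xh @ np.linalg.inv(P_hh)

    # Pull every member toward its perturbed observation.
    return ensemble + K @ (obs_pert - HX)

rng = np.random.default_rng(0)
ens = rng.normal(100.0, 5.0, size=(3, 20))      # 3 storage states, 20 members
H = np.array([[1.0, 1.0, 1.0]])                 # TWS-like column-sum observation
updated = enkf_update(ens, np.array([310.0]), 2.0, H, rng)
print(updated.mean(axis=1))
```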

  11. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    Science.gov (United States)

    Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.

    2017-09-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations; therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performances and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which improve the model groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.

  12. Injection-moulded models of major and minor arteries: the variability of model wall thickness owing to casting technique.

    Science.gov (United States)

    O'Brien, T; Morris, L; O'Donnell, M; Walsh, M; McGloughlin, T

    2005-09-01

    Cardiovascular disease of major and minor arteries is a common cause of death in Western society. The wall mechanics and haemodynamics within the arteries are considered to be important factors in the disease formation process. This paper is concerned with the development of an efficient computer-integrated technique to manufacture idealized and realistic models of diseased major and minor arteries from radiological images, and with the issue of model wall thickness variability. Variations in wall thickness from the original computer models to the final castings are quantified using a CCD camera. The results showed that wall thickness variations from the idealized major and minor artery models to the design specification were insignificant, up to a maximum of 16 per cent. In the realistic models, however, differences were up to 23 per cent in the major arterial models and 58 per cent in the minor arterial models, but the wall thickness variability remained within the limits of previously reported wall thickness results. It is concluded that the described injection moulding procedure yields idealized and realistic castings suitable for use in experimental investigations, with idealized models giving better agreement with the design. Wall thickness is variable and should be assessed after the models are manufactured.

  13. Assessment of the reliability of reproducing two-dimensional resistivity models using an image processing technique.

    Science.gov (United States)

    Ishola, Kehinde S; Nawawi, Mohd Nm; Abdullah, Khiruddin; Sabri, Ali Idriss Aboubakar; Adiat, Kola Abdulnafiu

    2014-01-01

    This study attempts to combine the results of geophysical images obtained from three commonly used electrode configurations using an image processing technique, in order to assess their capability to reproduce two-dimensional (2-D) resistivity models. All the inverse resistivity models were processed using the PCI Geomatica software package, commonly used for remote sensing data sets. Preprocessing of the 2-D inverse models was carried out to facilitate further processing and statistical analyses. Four raster layers were created: three were used for the input images and the fourth for the output of the combined images. The data sets were merged using a basic statistical approach. Interpreted results show that all images resolved and reconstructed the essential features of the models. An assessment of the accuracy of the images for the four geologic models was performed using four criteria: the mean absolute error, the mean percentage absolute error, the resistivity values of the reconstructed blocks, and their displacements from the true models. Generally, the blocks in the images produced by the maximum approach give the smallest estimated errors; their displacement from the true blocks is also the smallest, and their reconstructed resistivities are closer to the true blocks than with any other combination used. Thus, it is corroborated that combining inverse resistivity models yields more reliable and detailed information about the geologic models than using individual data sets.
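    As a toy illustration of the merging step (not the PCI Geomatica workflow used in the study), the sketch below combines three synthetic inverse-model images by maximum and by mean and reports the error criteria named above; the block geometry and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(10)

true_model = np.full((40, 60), 100.0)            # 100 ohm-m background
true_model[15:25, 20:30] = 500.0                 # buried resistive block

# Hypothetical inverse models from three electrode arrays (noisy reconstructions).
arrays = [true_model + rng.normal(0.0, s, true_model.shape) for s in (40, 60, 80)]

combined_max = np.maximum.reduce(arrays)         # "maximum" combination
combined_mean = np.mean(arrays, axis=0)          # plain averaging

for name, img in (("max", combined_max), ("mean", combined_mean)):
    mae = np.mean(np.abs(img - true_model))
    mpae = 100.0 * np.mean(np.abs(img - true_model) / true_model)
    print(f"{name:>4s}: MAE = {mae:6.1f} ohm-m, MPAE = {mpae:5.1f} %")
```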

  14. Near infrared spectrometric technique for testing fruit quality: optimisation of regression models using genetic algorithms

    Science.gov (United States)

    Isingizwe Nturambirwe, J. Frédéric; Perold, Willem J.; Opara, Umezuruike L.

    2016-02-01

    Near infrared (NIR) spectroscopy has gained extensive use in quality evaluation. It is arguably one of the most advanced spectroscopic tools for non-destructive quality testing of foodstuffs, from measurement to data analysis and interpretation. NIR spectral data are interpreted by means often involving multivariate statistical analysis, sometimes associated with optimisation techniques for model improvement. The objective of this research was to explore the extent to which genetic algorithms (GA) can be used to enhance model development for predicting fruit quality. Apple fruits were used, and NIR spectra in the range from 12,000 to 4000 cm-1 were acquired on both bruised and healthy tissues with different degrees of mechanical damage. GAs were used in combination with partial least squares (PLS) regression methods to develop bruise severity prediction models, which were compared to PLS models developed using the full NIR spectrum. A classification model was developed that clearly separated bruised from unbruised apple tissue. GAs helped improve prediction models by over 10% in comparison with full-spectrum models, as evaluated in terms of prediction error (root mean square error of cross-validation). PLS models to predict internal quality attributes, such as sugar content and acidity, were developed and compared to versions optimized by the genetic algorithm. Overall, the results highlight the potential of the GA method to improve the speed and accuracy of fruit quality prediction.
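    A hedged sketch of the GA-based wavelength selection idea follows. It uses a plain least-squares fit as a stand-in for the PLS regression of the study, and the synthetic "spectra", population size and GA settings are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectra": 60 samples x 100 wavelengths; quality depends on a few bands.
X = rng.normal(size=(60, 100))
y = X[:, 10] - 0.5 * X[:, 40] + 0.1 * rng.normal(size=60)

def fitness(mask):
    """Hold-out RMSE of a least-squares model on the selected bands
    (a simple stand-in for the PLS fitness used in the study)."""
    if mask.sum() == 0:
        return np.inf
    Xs = X[:, mask.astype(bool)]
    tr, te = slice(0, 40), slice(40, None)
    coef, *_ = np.linalg.lstsq(Xs[tr], y[tr], rcond=None)
    resid = y[te] - Xs[te] @ coef
    return np.sqrt(np.mean(resid**2))

pop = (rng.random((30, 100)) < 0.1).astype(int)    # 30 sparse binary chromosomes
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    elite = pop[np.argsort(scores)[:10]]           # keep the 10 best
    children = []
    for _ in range(20):                            # crossover + mutation
        a, b = elite[rng.integers(10)], elite[rng.integers(10)]
        cut = rng.integers(1, 99)
        child = np.concatenate([a[:cut], b[cut:]])
        child[rng.random(100) < 0.01] ^= 1         # 1% bit-flip mutation
        children.append(child)
    pop = np.vstack([elite, children])

best = pop[np.argmin([fitness(m) for m in pop])]
print("selected wavelength indices:", np.flatnonzero(best))
```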

  15. Advances in Intelligent Modelling and Simulation Artificial Intelligence-Based Models and Techniques in Scalable Computing

    CERN Document Server

    Khan, Samee; Burczyński, Tadeusz

    2012-01-01

    One of the most challenging issues in today's large-scale computational modeling and design is to effectively manage the complex distributed environments, such as computational clouds, grids, ad hoc and P2P networks, operating under various types of users with evolving relationships fraught with uncertainties. In this context, the IT resources and services usually belong to different owners (institutions, enterprises, or individuals) and are managed by different administrators. Moreover, uncertainties are presented to the system at hand in various forms of information that is incomplete, imprecise, fragmentary, or overloading, which hinders the full and precise resolution of the evaluation criteria, sequencing and selection, and the assignment of scores. Intelligent scalable systems enable flexible routing and charging, advanced user interactions, and the aggregation and sharing of geographically distributed resources in modern large-scale systems. This book presents new ideas, theories, models...

  16. Multiple Model Adaptive Estimation Techniques for Adaptive Model-Based Robot Control

    Science.gov (United States)

    1989-12-01

    Proportional Derivative (PD) or Proportional Integral Derivative (PID) feedback controller [6]. The PD or PID controllers feed back the measured... Unfortunately, as the speed of the trajectory increases or the configuration of the robot changes, the PD or PID controllers cannot maintain tracking along the desired trajectory. The main reason for poor tracking is that the PD and PID controllers were developed based on a simplified linear dynamics model

  17. Efficient sampling techniques for uncertainty quantification in history matching using nonlinear error models and ensemble level upscaling techniques

    KAUST Repository

    Efendiev, Y.

    2009-11-01

    The Markov chain Monte Carlo (MCMC) is a rigorous sampling method to quantify uncertainty in subsurface characterization. However, the MCMC usually requires many flow and transport simulations in evaluating the posterior distribution and can be computationally expensive for fine-scale geological models. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods. The proposed method employs off-line computations for modeling the relation between coarse- and fine-scale error responses. This relation is modeled using nonlinear functions with prescribed error precisions which are used in efficient sampling within the MCMC framework. We propose a two-stage MCMC where inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale (resolved) simulations. The latter is determined on the basis of a statistical model developed off line. The proposed method is an extension of the approaches considered earlier where linear relations are used for modeling the response between coarse-scale and fine-scale models. The approach considered here does not rely on the proximity of approximate and resolved models and can employ much coarser and more inexpensive models to guide the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in the history matching. Copyright 2009 by the American Geophysical Union.
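    The two-stage idea can be illustrated with a delayed-acceptance Metropolis sampler on a scalar parameter. The two log-posteriors below are cheap analytic stand-ins for the coarse- and fine-scale flow simulations, chosen only to show the screening logic.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_post_fine(theta):
    """Stand-in for the expensive fine-scale (resolved) posterior."""
    return -0.5 * (theta - 1.0) ** 2 / 0.3**2

def log_post_coarse(theta):
    """Stand-in for the cheap, slightly biased coarse-scale surrogate."""
    return -0.5 * (theta - 1.1) ** 2 / 0.35**2

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + 0.3 * rng.normal()              # symmetric random walk
    # Stage 1: screen the proposal with the coarse model only.
    if np.log(rng.random()) < log_post_coarse(prop) - log_post_coarse(theta):
        # Stage 2: correct with the fine model (delayed acceptance),
        # so the chain still targets the fine-scale posterior exactly.
        a2 = (log_post_fine(prop) - log_post_fine(theta)
              + log_post_coarse(theta) - log_post_coarse(prop))
        if np.log(rng.random()) < a2:
            theta = prop
    chain.append(theta)

print("posterior mean ~", np.mean(chain[1000:]))
```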

  18. A Multiple Criteria Decision Modelling approach to selection of estimation techniques for fitting extreme floods

    Science.gov (United States)

    Duckstein, L.; Bobée, B.; Ashkar, F.

    1991-09-01

    The problem of fitting a probability distribution, here the log-Pearson Type III distribution, to extreme floods is considered from the point of view of two numerical and three non-numerical criteria. The six fitting techniques considered include classical techniques (maximum likelihood, moments of logarithms of flows) and new methods such as mixed moments and the generalized method of moments developed by two of the co-authors. The latter method consists of fitting the distribution using moments of different orders; in particular, the SAM method (Sundry Averages Method) uses the moments of order 0 (geometric mean), 1 (arithmetic mean), and -1 (harmonic mean), and leads to a smaller variance of the parameters. The criteria used to select the method of parameter estimation are: the two statistical criteria of mean square error and bias; the two computational criteria of program availability and ease of use; and the user-related criterion of acceptability. These criteria are transformed into value functions or fuzzy set membership functions, and then three Multiple Criteria Decision Modelling (MCDM) techniques, namely composite programming, ELECTRE, and MCQA, are applied to rank the estimation techniques.
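    Of the three MCDM techniques named, composite programming is the easiest to sketch: criteria are mapped to [0, 1] value functions and each technique is ranked by its weighted L_p distance from the ideal point. All scores and weights below are hypothetical, not those elicited in the paper.

```python
import numpy as np

# Hypothetical value-function scores of six fitting techniques on five criteria
# (columns: MSE, bias, availability, ease of use, acceptability; 1 = best).
techniques = ["ML", "log-moments", "mixed moments", "GMM", "SAM", "other"]
V = np.array([
    [0.9, 0.8, 0.9, 0.6, 0.8],
    [0.7, 0.7, 0.9, 0.9, 0.7],
    [0.8, 0.6, 0.5, 0.6, 0.6],
    [0.8, 0.7, 0.4, 0.5, 0.6],
    [0.9, 0.8, 0.4, 0.5, 0.7],
    [0.5, 0.5, 0.8, 0.9, 0.5],
])
w = np.array([0.3, 0.2, 0.15, 0.15, 0.2])   # criterion weights, sum to 1
p = 2.0                                     # balancing exponent of the L_p metric

# Composite distance from the ideal point (all criteria = 1); smaller is better.
distance = (w * (1.0 - V) ** p).sum(axis=1) ** (1.0 / p)
for name, d in sorted(zip(techniques, distance), key=lambda t: t[1]):
    print(f"{name:>14s}  d = {d:.3f}")
```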

  19. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 1: Concepts and methodology

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    Full Text Available A comprehensive data driven modeling experiment is presented in a two-part paper. In this first part, an extensive data-driven modeling experiment is proposed. The most important concerns regarding the way data driven modeling (DDM) techniques and data were handled, compared, and evaluated, and the basis on which findings and conclusions were drawn, are discussed. A concise review of key articles that presented comparisons among various DDM techniques is given. Six DDM techniques, namely neural networks, genetic programming, evolutionary polynomial regression, support vector machines, M5 model trees, and K-nearest neighbors, are proposed and explained. Multiple linear regression and naïve models are also suggested as baselines for comparison with the various techniques. Five datasets from Canada and Europe representing evapotranspiration, upper and lower layer soil moisture content, and the rainfall-runoff process are described and proposed, in the second paper, for the modeling experiment. Twelve different realizations (groups) from each dataset are created by a procedure involving random sampling. Each group contains three subsets: training, cross-validation, and testing. Each modeling technique is to be applied to each of the 12 groups of each dataset. In this way, both the prediction accuracy and the uncertainty of the modeling techniques can be evaluated. The description of the datasets, the implementation of the modeling techniques, results and analysis, and the findings of the modeling experiment are deferred to the second part of this paper.
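    The 12-realization protocol is straightforward to reproduce; a minimal sketch with assumed 50/25/25 splits follows (the fractions are illustrative, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(3)

def make_realizations(n_samples, n_groups=12, fractions=(0.5, 0.25, 0.25)):
    """Create n_groups random train/cross-validation/test index splits,
    mirroring the 12 realizations per dataset described above."""
    n_tr = int(fractions[0] * n_samples)
    n_cv = int(fractions[1] * n_samples)
    groups = []
    for _ in range(n_groups):
        idx = rng.permutation(n_samples)           # fresh random sampling
        groups.append({"train": idx[:n_tr],
                       "cv": idx[n_tr:n_tr + n_cv],
                       "test": idx[n_tr + n_cv:]})
    return groups

splits = make_realizations(1000)
print(len(splits), [len(splits[0][k]) for k in ("train", "cv", "test")])
```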

  20. A simple technique for combining simplified models and its application to direct stop production

    CERN Document Server

    Barnard, James; French, Sky; White, Martin

    2014-01-01

    The results of many LHC searches for supersymmetric particles are interpreted using simplified models, in which one fixes the masses and couplings of most sparticles then scans over a few remaining masses of interest. We present a new technique for combining multiple simplified models (that requires no additional simulation) thereby demonstrating the utility and limitations of simplified models in general, and suggesting a simple way of improving LHC search strategies. The technique is used to derive limits on the stop mass that are model independent, modulo some reasonably generic assumptions which are quantified precisely. We find that current ATLAS and CMS results exclude stop masses up to 340 GeV for neutralino masses up to 120 GeV, provided that the total branching ratio into channels other than top-neutralino and bottom-chargino is small, and that there is no mass difference smaller than 10 GeV in the mass spectrum. In deriving these limits we place upper bounds on the branching ratios for complete stop...

  1. A modeling technique for active control design studies with application to spacecraft microvibrations.

    Science.gov (United States)

    Aglietti, G S; Gabriel, S B; Langley, R S; Rogers, E

    1997-10-01

    Microvibrations, at frequencies between 1 and 1000 Hz, generated by on board equipment, can propagate throughout a spacecraft structure and affect the performance of sensitive payloads. To investigate strategies to reduce these dynamic disturbances by means of active control systems, realistic yet simple structural models are necessary to represent the dynamics of the electromechanical system. In this paper a modeling technique which meets this requirement is presented, and the resulting mathematical model is used to develop some initial results on active control strategies. Attention is focused on a mass loaded panel subjected to point excitation sources, the objective being to minimize the displacement at an arbitrary output location. Piezoelectric patches acting as sensors and actuators are employed. The equations of motion are derived by using Lagrange's equation with vibration mode shapes as the Ritz functions. The number of sensors/actuators and their location is variable. The set of equations obtained is then transformed into state variables and some initial controller design studies are undertaken. These are based on standard linear systems optimal control theory where the resulting controller is implemented by a state observer. It is demonstrated that the proposed modeling technique is a feasible realistic basis for in-depth controller design/evaluation studies.

  2. Remotely sensed data assimilation technique to develop machine learning models for use in water management

    Science.gov (United States)

    Zaman, Bushra

    Increasing population and water conflicts are making water management one of the most important issues of the present world. It has become absolutely necessary to find ways to manage water more efficiently. Technological advancement has introduced various techniques for data acquisition and analysis, and these tools can be used to address some of the critical issues that challenge water resource management. This research used machine learning techniques and information acquired through remote sensing to solve problems related to soil moisture estimation and crop identification on large spatial scales. In this dissertation, solutions were proposed in three problem areas that can be important in the decision making process related to water management in irrigated systems. A data assimilation technique was used to build a learning machine model that generated soil moisture estimates commensurate with the scale of the data. The research was taken further by developing a multivariate machine learning algorithm to predict root zone soil moisture both in space and time. Further, a model was developed for supervised classification of multi-spectral reflectance data using a multi-class machine learning algorithm. The procedure was designed for classifying crops, but the model is data dependent and can be used with other datasets, and hence can be applied to other landcover classification problems. The dissertation compared the performance of relevance vector machines and support vector machines in estimating soil moisture. A multivariate relevance vector machine algorithm was tested in the spatio-temporal prediction of soil moisture, and the multi-class relevance vector machine model was used for classifying different crop types. It was concluded that the classification scheme may uncover important data patterns contributing greatly to knowledge bases, and to scientific and medical research. The results for the soil moisture models would give a rough idea to farmers

  3. Spindle speed variation technique in turning operations: Modeling and real implementation

    Science.gov (United States)

    Urbikain, G.; Olvera, D.; de Lacalle, L. N. López; Elías-Zúñiga, A.

    2016-11-01

    Chatter is still one of the most challenging problems in machining vibrations. Researchers have focused their efforts on preventing, avoiding or reducing chatter vibrations by introducing more accurate predictive physical methods. Among them, techniques based on varying the rotational speed of the spindle (Spindle Speed Variation, or SSV) have gained great relevance. However, several problems need to be addressed for technical and practical reasons. On one hand, these techniques can generate harmful overheating of the spindle, especially at high speeds. On the other hand, the machine may be unable to perform the interpolation properly. Moreover, it is not trivial to select the most appropriate tuning parameters. This paper conducts a study of the real implementation of the SSV technique in turning systems. First, a stability model based on perturbation theory was developed for simulation purposes. Second, the procedure to implement the technique realistically in a conventional turning center was developed and tested. The balance between improved stability margins and acceptable spindle behavior is ensured by energy consumption measurements. The mathematical model shows good agreement with experimental cutting tests.
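    For reference, sinusoidal SSV is commonly parameterized by an amplitude ratio (RVA) and a frequency ratio (RVF) around the nominal speed; a small sketch of such a speed profile follows. The parameter values are illustrative, not the tuned values of the paper.

```python
import numpy as np

def ssv_speed(t, n0_rpm, rva=0.2, rvf=0.5):
    """Sinusoidal spindle speed variation:
    n(t) = n0 * (1 + RVA * sin(2*pi*f_v*t)), with the variation frequency
    f_v tied to the nominal rotation frequency through the ratio RVF."""
    f_v = rvf * n0_rpm / 60.0                  # variation frequency in Hz
    return n0_rpm * (1.0 + rva * np.sin(2.0 * np.pi * f_v * t))

t = np.linspace(0.0, 2.0, 1000)
n = ssv_speed(t, n0_rpm=1200)
print(n.min(), n.max())                        # swings between 960 and 1440 rpm
```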

  4. Wind Turbine Tower Vibration Modeling and Monitoring by the Nonlinear State Estimation Technique (NSET)

    Directory of Open Access Journals (Sweden)

    Peng Guo

    2012-12-01

    Full Text Available With appropriate vibration modeling and analysis, the incipient failure of key components such as the tower, drive train and rotor of a large wind turbine can be detected. In this paper, the Nonlinear State Estimation Technique (NSET) has been applied to model turbine tower vibration to good effect, providing an understanding of the dynamic characteristics of tower vibration and the main factors influencing these. The developed tower vibration model comprises two parts: a sub-model used below rated wind speed and another for above rated wind speed. Supervisory Control and Data Acquisition (SCADA) system data from a single wind turbine, collected from March to April 2006, are used in the modeling. Model validation has subsequently been undertaken and is presented. This research demonstrates the effectiveness of the NSET approach to tower vibration, in particular its conceptual simplicity, clear physical interpretation and high accuracy. The developed and validated tower vibration model was then used to successfully detect blade angle asymmetry, a common fault that should be remedied promptly to improve turbine performance and limit fatigue damage. The work also shows that condition monitoring is improved significantly if the information from the vibration signals is complemented by analysis of other relevant SCADA data such as power performance, wind speed, and rotor loads.
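    A compact sketch of the NSET estimate itself may help: a memory matrix of healthy states is queried with a new observation, and the residual between the observation and its NSET reconstruction is what flags anomalies. The Euclidean-distance nonlinear operator and the toy SCADA dimensions below are common choices in the NSET literature, not necessarily those of this paper.

```python
import numpy as np

def nset_estimate(D, x):
    """NSET reconstruction of one observation from a memory matrix.

    D : (n_vars, m) memory matrix of healthy historical states
    x : (n_vars,)   new observation vector
    """
    def otimes(A, B):
        # Nonlinear operator: Euclidean distance between columns of A and B.
        return np.sqrt(((A[:, :, None] - B[:, None, :]) ** 2).sum(axis=0))

    w = np.linalg.pinv(otimes(D, D)) @ otimes(D, x[:, None])   # weight vector
    return (D @ w).ravel()

rng = np.random.default_rng(4)
D = rng.normal(size=(3, 50))            # 3 SCADA variables, 50 memorised states
x = D[:, 0] + 0.05 * rng.normal(size=3)
residual = x - nset_estimate(D, x)      # large residuals flag abnormal behavior
print(residual)
```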

  5. Modeling of 3D In-Building Propagation by Ray Tracing Technique

    Institute of Scientific and Technical Information of China (English)

    Gong Ke; Xu Rui

    1995-01-01

    The modeling of in-building propagation is of great importance for the planning of indoor wireless networks. To model the transmission system comprising a transmitter, a receiver and different kinds of obstacles, a ray tracing technique is used: the transmitter is taken as a source launching radio rays in different directions, some of which reach the receiver through different paths with different path losses and delays; adding them together gives the field strength at the receiving point. Based on this model, computer simulation is carried out to predict the propagation loss and delay spread, and it is shown that the simulation agrees well with experiments.
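    Once the tracer has produced a set of ray paths, the field prediction reduces to a coherent sum; the fragment below shows that final step for three hypothetical rays. The path lengths, reflection gains and the 2.4 GHz carrier are invented, and the full tracer (wall intersections, transmission losses) is omitted.

```python
import numpy as np

c = 3e8
f = 2.4e9                          # illustrative carrier frequency
k0 = 2 * np.pi * f / c             # free-space wavenumber

# Hypothetical rays returned by the tracer: (path length in m, net reflection gain).
paths = np.array([(10.0, 1.00),    # line-of-sight ray
                  (14.2, 0.60),    # one wall reflection
                  (18.7, 0.35)])   # two reflections
d, g = paths[:, 0], paths[:, 1]

amp = g / d                                  # 1/d free-space spreading per ray
field = np.sum(amp * np.exp(-1j * k0 * d))   # coherent sum at the receiver
loss_db = -20 * np.log10(np.abs(field))      # propagation loss (relative)

tau = d / c                                  # per-ray delays
p = amp**2 / np.sum(amp**2)                  # normalised power delay profile
rms_delay = np.sqrt(np.sum(p * tau**2) - np.sum(p * tau)**2)
print(f"loss = {loss_db:.1f} dB, rms delay spread = {rms_delay*1e9:.1f} ns")
```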

  6. Formal verification technique for grid service chain model and its application

    Institute of Scientific and Technical Information of China (English)

    XU Ke; WANG YueXuan; WU Cheng

    2007-01-01

    Ensuring the correctness and reliability of large-scale resource sharing and complex job processing is an important task for grid applications. From a formal method perspective, a grid service chain model based on state Pi calculus is proposed in this work as the theoretical foundation for the service composition and collaboration in grid. Following the idea of the Web Service Resource Framework (WSRF), state Pi calculus enables the life-cycle management of system states by associating the actions in the original Pi calculus with system states. Moreover, model checking technique is exploited for the design-time and run-time logical verification of grid service chain models. A grid application scenario of the dynamic analysis of material deformation structure is also provided to show the effectiveness of the proposed work.

  7. Development of groundwater flow modeling techniques for the low-level radwaste disposal (III)

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Dae-Seok; Kim, Chun-Soo; Kim, Kyung-Soo; Park, Byung-Yoon; Koh, Yong-Kweon; Park, Hyun-Soo [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-12-01

    The project aims to establish a methodology for hydrogeologic assessment through field application of the evaluation techniques gained and accumulated from previous hydrogeological research works in Korea. The results of the project and their possible areas of application are (1) acquisition of detailed hydrogeologic information using a borehole televiewer and a multipacker system, (2) establishment of an integrated hydrogeological assessment method for fractured rocks, (3) acquisition of fracture parameters for fracture modeling, (4) inversion analysis of hydraulic parameters from fracture network modeling, (5) geostatistical methods for the spatial assignment of hydraulic parameters in fractured rocks, and (6) establishment of a groundwater flow modeling procedure for a repository. 75 refs., 72 figs., 34 tabs. (Author)

  8. Active control and parameter updating techniques for nonlinear thermal network models

    Science.gov (United States)

    Papalexandris, M. V.; Milman, M. H.

    The present article reports on active control and parameter updating techniques for thermal models based on the network approach. Emphasis is placed on applications where radiation plays a dominant role; examples of such applications are the thermal design and modeling of spacecraft and space-based science instruments. Active thermal control of a system aims to approximate a desired temperature distribution or to minimize a suitably defined temperature-dependent functional. Similarly, parameter updating aims to update the values of certain parameters of the thermal model so that the output approximates a distribution obtained through direct measurements. Both problems are formulated as nonlinear least-squares optimization problems. The proposed strategies for their solution are explained in detail and their efficiency is demonstrated through numerical tests. Finally, certain theoretical results pertaining to the characterization of solutions of the problems of interest are also presented.

  9. Coarse-grained modeling of polystyrene at different concentrations using the Iterative Boltzmann Inversion technique

    Science.gov (United States)

    Bayramoglu, Beste; Faller, Roland

    2011-03-01

    We present systematic coarse-graining of several polystyrene models and test their performance under confinement and eventually in brush systems. The structural properties of a dilute polystyrene solution, a polystyrene melt and a confined concentrated polystyrene solution at 450 K and 1 bar were investigated in detail by atomistic molecular dynamics simulations of these systems. Coarse-graining of the models was performed by the Iterative Boltzmann Inversion (IBI) technique, in which the interaction potentials are optimized against the structure of the corresponding atomistically simulated systems. Radial distribution functions and bond, angle and dihedral angle probability distributions were calculated and compared to characterize the structure of the systems. Good agreement between the simulation results of the coarse-grained and atomistic models was observed.

  10. Modeling, Control and Analysis of Multi-Machine Drive Systems using Bond Graph Technique

    Directory of Open Access Journals (Sweden)

    J. Belhadj

    2006-03-01

    Full Text Available In this paper, a system-viewpoint method is investigated to study and analyze complex systems using the Bond Graph technique. These systems are multimachine, multi-inverter systems based on the Induction Machine (IM), widely used in industries such as rolling mills, textiles, and railway traction. They are multi-domain, multi-timescale systems and present very strong internal and external couplings, with nonlinearity characterized by a high model order. A classical study with an analytic model is difficult to manipulate and is limited to certain performance aspects. In this study, a "systemic approach" is presented to design these kinds of systems, using an energetic representation based on the Bond Graph formalism. Three types of multimachine systems are studied with their control strategies. The modeling is carried out with Bond Graphs, and results are discussed to show the performance of this methodology.

  11. Modelling and Simulation of Digital Compensation Technique for dc-dc Converter by Pole Placement

    Science.gov (United States)

    Shenbagalakshmi, R.; Sree Renga Raja, T.

    2015-09-01

    A thorough and effective analysis of dc-dc converters is carried out in order to achieve system stability and improve dynamic performance. Small-signal modelling based on the state space averaging technique is carried out for the dc-dc converters. A digital state feedback gain matrix is derived by the pole placement technique in order to achieve stability of a completely controllable system. A prediction observer for the dc-dc converters is designed, and dynamic compensation (observer plus control law) is provided using the separation principle. The output is much improved, with zero output voltage ripple, zero peak overshoot, a much shorter settling time (on the order of milliseconds) and higher overall efficiency (>90 %).
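    As a sketch of the pole-placement step, the snippet below computes a digital state-feedback gain with Ackermann's formula for a toy second-order discrete-time model. The plant matrices and pole locations are illustrative numbers, not a real converter design.

```python
import numpy as np

def ackermann(A, B, poles):
    """Digital state-feedback gain K (u = -K x) by Ackermann's formula
    for a single-input discrete-time system."""
    n = A.shape[0]
    # Controllability matrix [B, AB, ..., A^(n-1) B].
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    # Desired characteristic polynomial phi(z), evaluated at the matrix A.
    coeffs = np.poly(poles)                  # z^n + a1 z^(n-1) + ... + an
    phiA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    e_last = np.zeros((1, n)); e_last[0, -1] = 1.0
    return e_last @ np.linalg.inv(C) @ phiA

# Toy averaged converter model (illustrative numbers only).
A = np.array([[1.0, 0.01], [-0.5, 0.9]])
B = np.array([[0.0], [0.1]])
K = ackermann(A, B, poles=[0.6, 0.65])       # poles placed inside the unit circle
print("K =", K, "closed-loop poles:", np.linalg.eigvals(A - B @ K))
```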

  12. Modelling laser speckle photographs of decayed teeth by applying a digital image information technique

    Science.gov (United States)

    Ansari, M. Z.; da Silva, L. C.; da Silva, J. V. P.; Deana, A. M.

    2016-09-01

    We report on the application of a digital image model to assess early carious lesions on teeth. Lesions in the early stages of decay were illuminated with a laser and laser speckle images were obtained. Due to the differences in optical properties between healthy and carious tissue, the two regions produce different scatter patterns. The digital image information technique allowed us to produce colour-coded 3D surface plots of the intensity information in the speckle images, where the height (on the z-axis) and the colour in the rendering correlate with the intensity of a pixel in the image. The quantitative changes in colour component density enhance the contrast between decayed and sound tissue, making the carious lesions significantly more evident. The proposed technique may therefore be adopted in the early diagnosis of carious lesions.

  13. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    Science.gov (United States)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazard early warning systems, and questions of global warming and climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of model bias and the reduction of error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work lies in the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
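    One common Kalman-filter post-processing scheme treats the systematic forecast bias as a random walk; a scalar sketch follows. The noise variances q and r are assumptions to be tuned per site, and the synthetic "raw NWP output" is invented for demonstration.

```python
import numpy as np

def kf_bias_correction(forecasts, observations, q=0.01, r=1.0):
    """Scalar Kalman filter tracking the systematic forecast bias as a
    random walk -- a minimal sketch of the model-output-statistics step.
    q, r : assumed process / measurement noise variances."""
    b, p = 0.0, 1.0                  # bias estimate and its variance
    corrected = []
    for f, y in zip(forecasts, observations):
        p += q                       # predict: bias follows a random walk
        corrected.append(f - b)      # correct using bias learned from the past
        k = p / (p + r)              # Kalman gain
        b += k * ((f - y) - b)       # update with the latest forecast error
        p *= (1 - k)
    return np.array(corrected)

rng = np.random.default_rng(5)
truth = 10 + np.sin(np.linspace(0, 6, 200))
fcst = truth + 1.5 + 0.3 * rng.normal(size=200)    # biased raw NWP output
print("raw MAE:", np.mean(np.abs(fcst - truth)))
print("KF  MAE:", np.mean(np.abs(kf_bias_correction(fcst, truth) - truth)))
```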

  14. A Comparative Modelling Study of PWM Control Techniques for Multilevel Cascaded Inverter

    Directory of Open Access Journals (Sweden)

    A. TAHRI

    2005-01-01

    Full Text Available The emergence of multilevel converters has been on the rise over the last decade. These new types of converters are suitable for high voltage and high power applications due to their ability to synthesize waveforms with a better harmonic spectrum. Numerous topologies have been introduced and widely studied for utility and drive applications. Amongst these topologies, the multilevel cascaded inverter was introduced in static VAR compensation and drive systems. This paper investigates several control techniques applied to the multilevel cascaded inverter in order to ensure efficient voltage utilization and a better harmonic spectrum. A modelling and control strategy for a single phase multilevel cascaded inverter is also investigated. Computer simulation results using the Matlab program are reported and discussed, together with a comparative study of the different control techniques for the multilevel cascaded inverter. Moreover, experimental results are carried out on a scaled-down prototype to prove the effectiveness of the proposed analysis.

  15. A conforming to interface structured adaptive mesh refinement technique for modeling fracture problems

    Science.gov (United States)

    Soghrati, Soheil; Xiao, Fei; Nagarajan, Anand

    2016-12-01

    A Conforming to Interface Structured Adaptive Mesh Refinement (CISAMR) technique is introduced for the automated transformation of a structured grid into a conforming mesh with appropriate element aspect ratios. The CISAMR algorithm is composed of three main phases: (i) Structured Adaptive Mesh Refinement (SAMR) of the background grid; (ii) r-adaptivity of the nodes of elements cut by the crack; (iii) sub-triangulation of the elements deformed during the r-adaptivity process and those with hanging nodes generated during the SAMR process. The required considerations for the treatment of crack tips and branching cracks are also discussed in this manuscript. Regardless of the complexity of the problem geometry and without using iterative smoothing or optimization techniques, CISAMR ensures that aspect ratios of conforming elements are lower than three. Multiple numerical examples are presented to demonstrate the application of CISAMR for modeling linear elastic fracture problems with intricate morphologies.

  16. An algorithmic technique for a class of queueing models with packet switching applications

    Science.gov (United States)

    Morris, R. J. T.

    The need to analyze the performance of a class of hardware packet switches leads to consideration of a queueing model consisting of a single server queue with input given by a function of a Markov chain. An algorithmic technique is developed to obtain the joint stationary distribution of the bivariate Markov chain which describes the system. Both the infinite and finite capacity cases are considered. The technique is used to study several design issues which arise when a packet switch is subjected to independent streams of bursty input traffic. Guidelines are given which aid in estimating the queueing space required to keep traffic losses acceptably small. It is seen that the commonly employed heuristic of dimensioning queueing space using the tail of the infinite capacity distribution can lead to considerable error when compared with exact results.
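    The flavor of the model can be captured by simulating a finite-capacity queue fed by a two-state Markov-modulated (bursty ON/OFF) source, as below. The paper's algorithmic technique computes the joint stationary distribution exactly instead of simulating; all parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

P = np.array([[0.95, 0.05],        # OFF -> OFF / OFF -> ON
              [0.10, 0.90]])       # ON  -> OFF / ON  -> ON
arrivals_per_slot = [0, 3]         # packets offered in the OFF / ON state
capacity, served_per_slot = 20, 1  # finite queueing space, one packet per slot

state, queue, lost, total = 0, 0, 0, 0
for _ in range(200_000):
    state = rng.choice(2, p=P[state])          # modulating Markov chain
    a = arrivals_per_slot[state]
    total += a
    lost += max(0, a - (capacity - queue))     # overflow is dropped
    queue = min(capacity, queue + a)
    queue = max(0, queue - served_per_slot)

print("estimated loss probability:", lost / total)
```

    Sweeping `capacity` in such a simulation reproduces the kind of dimensioning guideline discussed above, and shows how bursty input inflates the buffer needed for a given loss target.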

  17. A Temporal Millimeter Wave Propagation Model for Tunnels Using Ray Frustum Techniques and FFT

    Directory of Open Access Journals (Sweden)

    Choonghyen Kwon

    2014-01-01

    Full Text Available A temporal millimeter wave propagation model for tunnels is presented using ray frustum techniques and the fast Fourier transform (FFT). To directly estimate or simulate the effects of millimeter wave channel properties on the performance of communication services, time domain impulse responses of demodulated signals should be obtained, which requires rather long computation times. To mitigate the computational burden, ray frustum techniques are used to obtain the frequency domain transfer function of the millimeter wave propagation environment, and the FFT of equivalent low-pass signals is used to retrieve demodulated waveforms. This approach is numerically efficient and helps to directly estimate the impact of tunnel structures and surface roughness on the performance of millimeter wave communication services.
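    The transfer-function-plus-FFT step can be sketched directly: build H(f) as a sum of delayed rays over the band of interest, then inverse-FFT to get the equivalent low-pass impulse response. The paths, gains and band below are invented, not frustum-traced.

```python
import numpy as np

# Hypothetical traced paths in a tunnel: (delay in s, complex gain).
paths = [(50e-9, 1.0),
         (63e-9, 0.4 * np.exp(1j * 0.8)),
         (80e-9, 0.15)]

n, df = 1024, 1e6                        # 1024 frequency bins, 1 MHz spacing
f = 60e9 + np.arange(n) * df             # band around a 60 GHz carrier (illustrative)

# Frequency-domain transfer function as a sum of delayed rays.
H = np.zeros(n, dtype=complex)
for tau, g in paths:
    H += g * np.exp(-2j * np.pi * f * tau)

h = np.fft.ifft(H)                       # equivalent low-pass impulse response
t = np.arange(n) / (n * df)              # time axis, resolution ~ 1 ns
print(f"strongest tap near {t[np.argmax(np.abs(h))]*1e9:.1f} ns")
```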

  18. Mathematical Model and Artificial Intelligent Techniques Applied to a Milk Industry through DSM

    Science.gov (United States)

    Babu, P. Ravi; Divya, V. P. Sree

    2011-08-01

    The resources for electrical energy are depleting, and hence the gap between supply and demand is continuously increasing. Under such circumstances, the only option left is optimal utilization of the available energy resources. The main objective of this chapter is to discuss peak load management and how to overcome the problems associated with it in processing industries, such as the milk industry, with the help of DSM techniques. The chapter presents a generalized mathematical model for minimizing the total operating cost of the industry subject to the constraints. The work presented in this chapter also covers the results of applying Neural Network, Fuzzy Logic and Demand Side Management (DSM) techniques to a medium-scale milk industrial consumer in India to achieve an improvement in load factor, a reduction in Maximum Demand (MD), and savings in the consumer's energy bill.

  19. High Dimensional ODEs Coupled with Mixed-Effects Modeling Techniques for Dynamic Gene Regulatory Network Identification.

    Science.gov (United States)

    Lu, Tao; Liang, Hua; Li, Hongzhe; Wu, Hulin

    2011-01-01

    Gene regulation is a complicated process. The interaction of many genes and their products forms an intricate biological network. Identification of this dynamic network will help us understand the biological process in a systematic way. However, the construction of such a dynamic network is very challenging for a high-dimensional system. In this article we propose to use a set of ordinary differential equations (ODEs), coupled with dimensional reduction by clustering and mixed-effects modeling techniques, to model the dynamic gene regulatory network (GRN). The ODE models allow us to quantify both positive and negative gene regulations as well as feedback effects of one set of genes in a functional module on the dynamic expression changes of the genes in another functional module, which results in a directed graph network. A five-step procedure, Clustering, Smoothing, regulation Identification, parameter Estimates refining and Function enrichment analysis (CSIEF), is developed to identify the ODE-based dynamic GRN. In the proposed CSIEF procedure, a series of cutting-edge statistical methods and techniques are employed, including non-parametric mixed-effects models with a mixture distribution for clustering, nonparametric mixed-effects smoothing-based methods for ODE models, smoothly clipped absolute deviation (SCAD)-based variable selection, and the stochastic approximation EM (SAEM) approach for mixed-effects ODE model parameter estimation. The key step, SCAD-based variable selection, is justified by investigating its asymptotic properties and validated by Monte Carlo simulations. We apply the proposed method to identify the dynamic GRN for yeast cell cycle progression data, and we are able to annotate the identified modules through function enrichment analyses. Some interesting biological findings are discussed. The proposed procedure is a promising tool for constructing a general dynamic GRN and more complicated dynamic networks.
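    A stripped-down version of the ODE-network idea, with least-squares regression on finite-difference derivatives standing in for the paper's smoothing-based estimation and a hard threshold standing in for SCAD, looks like this (the three-module system and all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Ground-truth regulatory matrix for 3 gene modules (signs = up/down regulation).
A_true = np.array([[-0.5, 0.8, 0.0],
                   [0.0, -0.4, 0.6],
                   [-0.3, 0.0, -0.2]])

# Simulate module expression x' = A x with forward Euler and light noise.
dt, T = 0.05, 200
X = np.zeros((T, 3)); X[0] = [1.0, 0.5, -0.2]
for t in range(T - 1):
    X[t + 1] = X[t] + dt * (A_true @ X[t]) + 0.002 * rng.normal(size=3)

# Regress finite-difference derivatives on the states: dX/dt ~ X A^T.
dXdt = (X[1:] - X[:-1]) / dt
W, *_ = np.linalg.lstsq(X[:-1], dXdt, rcond=None)
A_hat = W.T
A_hat[np.abs(A_hat) < 0.1] = 0.0         # crude sparsity step in place of SCAD
print(np.round(A_hat, 2))                # recovers the directed-graph structure
```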

  20. A comparison of modelling techniques for computing wall stress in abdominal aortic aneurysms

    Directory of Open Access Journals (Sweden)

    McGloughlin Timothy M

    2007-10-01

    Full Text Available Abstract. Background: Aneurysms, in particular abdominal aortic aneurysms (AAA), account for a significant portion of cardiovascular related deaths. There is much debate as to the most suitable tool for rupture prediction and interventional surgery of AAAs, and currently maximum diameter is used clinically as the determining factor for surgical intervention. Stress analysis techniques, such as finite element analysis (FEA) to compute the wall stress in patient-specific AAAs, have been regarded by some authors to be more clinically important than the use of a "one-size-fits-all" maximum diameter criterion, since some small AAAs have been shown to have higher wall stress than larger AAAs and have been known to rupture. Methods: A patient-specific AAA was selected from our AAA database and 3D reconstruction was performed. The AAA was then modelled using three different approaches, namely AAA(SIMP), AAA(MOD) and AAA(COMP), with each model examined using linear and non-linear material properties. All models were analysed using the finite element method for wall stress distributions. Results: Wall stress results show marked differences in peak wall stress between the three methods. Peak wall stress was shown to reduce when more realistic parameters were utilised. It was also noted that wall stress reduced by 59% when modelled using the most accurate non-linear complex approach, compared to the same model without intraluminal thrombus. Conclusion: The results here show that using more realistic parameters affects the resulting wall stress. The use of simplified computational modelling methods can lead to inaccurate stress distributions. Care should be taken when examining stress results found using simplified techniques, in particular if the wall stress results are to have clinical importance.

  1. A multiscale modeling technique for bridging molecular dynamics with finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yongchang, E-mail: yl83@buffalo.edu; Basaran, Cemal

    2013-11-15

    In computational mechanics, molecular dynamics (MD) and finite element (FE) analysis are well developed and most popular for nanoscale and macroscale analysis, respectively. MD can simulate atomistic behavior very well, but cannot reach macroscale lengths and times due to computational limits. FE can simulate continuum mechanics (CM) problems very well, but lacks atomistic-level degrees of freedom. Multiscale modeling is an expedient methodology with the potential to connect different levels of modeling such as quantum mechanics, molecular dynamics, and continuum mechanics. This study proposes a new multiscale modeling technique to couple MD with FE. The proposed method relies on a weighted average momentum principle. A wave propagation example is used to illustrate the challenges in coupling MD with FE and to verify the proposed technique. Furthermore, a 2-dimensional problem is used to demonstrate how this method would translate into real-world applications.
    Highlights:
    • A weighted averaging momentum method is introduced for bridging molecular dynamics (MD) with the finite element (FE) method.
    • The proposed method shows excellent coupling results in 1-D and 2-D examples.
    • The proposed method successfully reduces spurious wave reflection at the border of the MD and FE regions.
    • Big advantages of the proposed method are its simplicity and the inexpensive computational cost of multiscale analysis.

  2. Tsunami Hazard Preventing Based Land Use Planning Model Using GIS Techniques in Muang Krabi, Thailand

    Directory of Open Access Journals (Sweden)

    Abdul Salam Soomro

    2012-10-01

    Full Text Available The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the most fascinating ecotourism provinces of southern Thailand, along with other regions such as Phangnga and Phuket, devastating human lives, coastal communities and economic activities. This research study aims to generate a tsunami-hazard-prevention-based land use planning model using GIS (Geographical Information Systems), based on a hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, streams and roads, were used according to the land use zoning criteria. The criteria were weighted using the Saaty scale of importance, one of the mathematical techniques. The model was classified according to land suitability classification. Various GIS techniques, namely subsetting, spatial analysis, map difference and data conversion, were used. The model was generated with five categories, namely high, moderate, low, very low and not suitable regions, each illustrated with an appropriate definition for decision makers to redevelop the region.
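    The suitability-overlay core of such a model is a weighted sum of normalised raster layers followed by classification into the five zones; a sketch with invented layers and AHP-style weights follows (real weights would come from Saaty pairwise comparisons of the triggering factors).

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical normalised raster layers (0 = worst, 1 = best) on a 100x100 grid.
layers = {
    "elevation":      rng.random((100, 100)),
    "shore_distance": rng.random((100, 100)),
    "pop_density":    rng.random((100, 100)),
    "road_proximity": rng.random((100, 100)),
}
# Illustrative AHP-style weights (assumed, not from the study).
weights = {"elevation": 0.40, "shore_distance": 0.30,
           "pop_density": 0.20, "road_proximity": 0.10}

suitability = sum(weights[k] * layers[k] for k in layers)   # weighted overlay

# Classify into the five zones used in the study.
classes = ["not suitable", "very low", "low", "moderate", "high"]
zone = np.digitize(suitability, bins=[0.2, 0.4, 0.6, 0.8])
for i, name in enumerate(classes):
    print(f"{name:>13s}: {np.mean(zone == i):.1%} of cells")
```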

  3. Model Calibration of a Groundwater Flow Analysis for an Underground Structure Using Data Assimilation Technique

    Science.gov (United States)

    Yamamoto, S.; Honda, M.; Sakurai, H.

    2015-12-01

    Model calibration of a groundwater flow analysis is a difficult task, especially under complicated hydrogeological conditions, because available information about hydrogeological properties is very limited. This often causes non-negligible differences between predicted results and real observations. We applied the Ensemble Kalman Filter (EnKF), a type of data assimilation technique, to groundwater flow simulation in order to obtain a valid model that accurately reproduces the observations. Unlike conventional manual calibration, this scheme not only makes the calibration work efficient but also provides an objective approach that does not depend on the skills of engineers. In this study, we focused on estimating the hydraulic conductivities of bedrock and fracture zones around an underground fuel storage facility. Two different kinds of groundwater monitoring data were sequentially assimilated into the unsteady groundwater flow model via the EnKF. Synthetic test results showed that the estimated hydraulic conductivities matched their true values and that our method works well in groundwater flow analysis. Further, the influence of each observation on the state updating process was quantified through sensitivity analysis. To assess feasibility under practical conditions, assimilation experiments using real field measurements were performed. The results showed that the identified model was able to approximately simulate the behavior of the groundwater flow. On the other hand, it was difficult to reproduce the observation data correctly in a specific local area. This suggests that an inaccurate area is included in the assumed hydrogeological conceptual model of this site, which could be useful information for model validation.

  4. Subsurface stormflow modeling with sensitivity analysis using a Latin-hypercube sampling technique

    Energy Technology Data Exchange (ETDEWEB)

    Gwo, J.P.; Toran, L.E.; Morris, M.D. [Oak Ridge National Lab., TN (United States); Wilson, G.V. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Plant and Soil Science

    1994-09-01

    Subsurface stormflow, because of its dynamic and nonlinear features, has been a very challenging process in both field experiments and modeling studies. The disposal of wastes in subsurface stormflow and vadose zones at Oak Ridge National Laboratory, however, demands more effort to characterize these flow zones and to study their dynamic flow processes. Field data and modeling studies for these flow zones are relatively scarce, and the effect of engineering designs on the flow processes is poorly understood. On the basis of a risk assessment framework and a conceptual model for the Oak Ridge Reservation area, numerical models of a proposed waste disposal site were built, and a Latin-hypercube simulation technique was used to study the uncertainty of model parameters. Four scenarios, with three engineering designs, were simulated, and the effectiveness of the engineering designs was evaluated. Sensitivity analysis of model parameters suggested that hydraulic conductivity was the most influential parameter. However, local heterogeneities may alter flow patterns and result in complex recharge and discharge patterns. Hydraulic conductivity, therefore, may not be used as the only reference for subsurface flow monitoring and engineering operations. Neither of the two engineering designs, capping and French drains, was found to be effective in hydrologically isolating downslope waste trenches. However, pressure head contours indicated that combinations of both designs may prove more effective than either one alone.
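    Latin-hypercube sampling itself is compact enough to sketch: stratify each parameter's range, sample once per stratum, and shuffle the strata independently across parameters. The parameter ranges below are illustrative, not those of the Oak Ridge study.

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    """Latin-hypercube sample on [0, 1]^n_params: each parameter's range is
    split into n_samples strata and every stratum is sampled exactly once."""
    if rng is None:
        rng = np.random.default_rng()
    u = np.arange(n_samples)[:, None] + rng.random((n_samples, n_params))
    u /= n_samples                       # one point per stratum, per parameter
    for j in range(n_params):            # decouple the parameters
        u[:, j] = u[rng.permutation(n_samples), j]
    return u

# e.g. 50 model runs over log10(K) in [-7, -3] and porosity in [0.05, 0.3]
s = latin_hypercube(50, 2, np.random.default_rng(9))
log10_K = -7.0 + 4.0 * s[:, 0]
porosity = 0.05 + 0.25 * s[:, 1]
print(log10_K[:3], porosity[:3])
```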

  5. ADBT Frame Work as a Testing Technique: An Improvement in Comparison with Traditional Model Based Testing

    Directory of Open Access Journals (Sweden)

    Mohammed Akour

    2016-05-01

    Software testing is an embedded activity in all software development life cycle phases. Due to the difficulties and high costs of software testing, many testing techniques have been developed with the common goal of testing software in the most optimal and cost-effective manner. Model-based testing (MBT) is used to direct testing activities such as test verification and selection. MBT is employed to encapsulate and understand the behavior of the system under test, which helps software engineers validate the system against various likely actions. The widespread usage of models has influenced the usage of MBT in the testing process, especially with UML. In this research, we propose an improved model-based testing strategy that uses four different diagrams in the testing process. This paper also discusses and explains the activities in the proposed model with the finite state model (FSM). Comparisons with traditional model-based testing are made in terms of test-case generation and results.

  6. A model identification technique to characterize the low frequency behaviour of surrogate explosive materials

    Science.gov (United States)

    Paripovic, Jelena; Davies, Patricia

    2016-09-01

    The mechanical response of energetic materials, especially those used in improvised explosive devices, is of great interest for understanding how mechanical excitations may lead to improved detection or detonation. The materials are composed of crystals embedded in a binder. Microstructural modelling can give insight into the interactions between the binder and the crystals, and thus the mechanisms that may lead to material heating, but these models need to be validated and also require estimates of the constituent material properties. Addressing these issues, nonlinear viscoelastic models of the low frequency behavior of a surrogate material-mass system undergoing base excitation have been constructed, and experimental data have been collected and used to estimate the order of components in the system model and the parameters in the model. The estimation technique is described and examples of its application to both simulated and experimental data are given. From the estimated system model the material properties are extracted. Material properties are estimated for a variety of materials, and the effect of aging on the estimated material properties is shown.

  7. Dynamic drought risk assessment using crop model and remote sensing techniques

    Science.gov (United States)

    Sun, H.; Su, Z.; Lv, J.; Li, L.; Wang, Y.

    2017-02-01

    Drought risk assessment is of great significance for reducing agricultural drought losses and ensuring food security. The conventional drought risk assessment method evaluates a specific region's exposure to the hazard and its vulnerability to extended periods of water shortage, which is a static evaluation. Dynamic Drought Risk Assessment (DDRA) instead estimates drought risk from crop growth and water stress conditions in real time. In this study, a DDRA method using a crop model and remote sensing techniques is proposed. The crop model employed is the DeNitrification and DeComposition (DNDC) model. Drought risk was quantified by the yield losses predicted by the crop model in a scenario-based method. The crop model was re-calibrated with the Leaf Area Index (LAI) retrieved from MODerate Resolution Imaging Spectroradiometer (MODIS) data to improve its performance, and the in-situ, station-based crop model was extended to assess regional drought risk by integrating crop planting maps. The crop planted area was extracted from MODIS data with the extended CPPI method. The study was implemented and validated on maize in Liaoning Province, China.

  8. A Lanczos model-order reduction technique to efficiently simulate electromagnetic wave propagation in dispersive media

    Science.gov (United States)

    Zimmerling, Jörn; Wei, Lei; Urbach, Paul; Remis, Rob

    2016-06-01

    In this paper we present a Krylov subspace model-order reduction technique for time- and frequency-domain electromagnetic wave fields in linear dispersive media. The starting point is a self-consistent first-order form of Maxwell's equations and the constitutive relation. This form is discretized on a standard staggered Yee grid, while the extension to infinity is modeled via a recently developed global complex scaling method. By applying this scaling method, the time- or frequency-domain electromagnetic wave field can be computed via a so-called stability-corrected wave function. Since this function cannot be computed directly due to the large order of the discretized Maxwell system matrix, Krylov subspace reduced-order models are constructed that approximate this wave function. We show that the system matrix exhibits a particular physics-based symmetry relation that allows us to efficiently construct the time- and frequency-domain reduced-order models via a Lanczos-type reduction algorithm. The frequency-domain models allow for frequency sweeps, meaning that a single model provides field approximations for all frequencies of interest, and dominant field modes can easily be determined as well. Numerical experiments for two- and three-dimensional configurations illustrate the performance of the proposed reduction method.
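
    The paper's stability-corrected wave functions and complex-scaled Maxwell operator are beyond a short excerpt, but the underlying Lanczos reduction is easy to illustrate. The sketch below is illustrative only: a generic symmetric matrix stands in for the discretized Maxwell system, no reorthogonalization is performed, and the reduced model is used to approximate a time-domain response f(A)b with f = exp(-tA).

        import numpy as np
        from scipy.linalg import expm

        def lanczos(A, b, m):
            """m-step Lanczos reduction of a symmetric matrix A.

            Returns V (n x m orthonormal Krylov basis) and the tridiagonal
            T (m x m) with V.T @ A @ V = T.  No reorthogonalization, so for
            large m the basis may slowly lose orthogonality.
            """
            n = len(b)
            V = np.zeros((n, m))
            alpha = np.zeros(m)
            beta = np.zeros(m - 1)
            V[:, 0] = b / np.linalg.norm(b)
            for j in range(m):
                w = A @ V[:, j]
                alpha[j] = V[:, j] @ w
                w -= alpha[j] * V[:, j]
                if j > 0:
                    w -= beta[j - 1] * V[:, j - 1]
                if j < m - 1:
                    beta[j] = np.linalg.norm(w)   # breakdown if this hits zero
                    V[:, j + 1] = w / beta[j]
            T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
            return V, T

        # Generic symmetric stand-in for the (much larger) discretized system
        rng = np.random.default_rng(0)
        M = rng.normal(size=(200, 200))
        A = M @ M.T / 200
        b = rng.normal(size=200)                  # source vector

        # Reduced-order approximation: f(A) b ~= ||b|| * V @ expm(-t T) @ e1
        V, T = lanczos(A, b, m=30)
        t = 0.5
        rom = np.linalg.norm(b) * (V @ expm(-t * T)[:, 0])
        exact = expm(-t * A) @ b
        print("relative error:", np.linalg.norm(rom - exact) / np.linalg.norm(exact))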

  9. Mathematical Foundation Based Inter-Connectivity modelling of Thermal Image processing technique for Fire Protection

    Directory of Open Access Journals (Sweden)

    Sayantan Nath

    2015-09-01

    In this paper, the integration between multiple image processing functions and their statistical parameters for an intelligent, alarm-series-based fire detection system is presented. The inter-connectivity mapping between imagery processing elements, based on a classification factor for temperature monitoring and a multilevel intelligent alarm sequence, is introduced through an abstract canonical approach. The flow of image processing components at the core of an intelligent alarming system, with temperature-wise area segmentation and boundary detection, has not yet been fully explored in thermal imaging. From the analytical perspective of convolution-based processing in thermal imaging, an abstract-algebra-based inter-mapping model is discussed that links event-calculus-supported DAGSVM classification, for step-by-step generation of an alarm series with a gradual monitoring technique, to the segmentation of regions and their affected boundaries in thermographic images of coal with respect to temperature distinctions. The connectedness of these multifunctional image processing operations within a compatible fire protection system with a proper monitoring sequence is investigated here. The mathematical models, formulated with partial derivatives, that relate the temperature-affected areas to their boundaries in the thermal image are the core contribution of this study. The thermal images of coal samples were obtained in a real-life scenario with a self-assembled thermographic camera. The amalgamation of area segmentation, boundary detection and the alarm series is described in abstract algebra. The principal objective of this paper is to understand the dependency pattern and working principles of the image processing components and to structure an inter-connected modelling technique for those components on a mathematical foundation.

  10. Nano-scale CMOS analog circuits models and CAD techniques for high-level design

    CERN Document Server

    Pandit, Soumya; Patra, Amit

    2014-01-01

    Reliability concerns and the limitations of process technology can sometimes restrict the innovation process involved in designing nano-scale analog circuits. The success of nano-scale analog circuit design requires repeated experimentation, correct analysis of the device physics, process technology, and adequate use of the knowledge database. Starting with the basics, Nano-Scale CMOS Analog Circuits: Models and CAD Techniques for High-Level Design introduces the essential fundamental concepts for designing analog circuits with optimal performance. This book explains the links between the physic

  11. Linear Sigma Model at Finite Temperature and Baryonic Chemical Potential Using the N-Midpoint Technique

    Directory of Open Access Journals (Sweden)

    M. Abu-Shady

    2014-01-01

    A baryonic chemical potential (μb) is included in the linear sigma model at finite temperature. The effective mesonic potential is numerically calculated using the N-midpoint rule. The meson masses are investigated as functions of the temperature (T) at a fixed value of the baryonic chemical potential. The pressure and energy density are investigated as functions of temperature at a fixed value of μb. The obtained results are in good agreement with those of other techniques. We conclude that the calculated effective potential successfully predicts the meson properties and thermodynamic properties at finite baryonic chemical potential.
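
    The N-midpoint rule referred to here is the standard composite midpoint quadrature. As a hedged illustration, the integrand below is a generic thermal-field-theory-style bosonic term, not the paper's actual effective potential, and the scales are invented for the sketch.

        import numpy as np

        def midpoint_rule(f, a, b, n):
            """Composite N-midpoint approximation of the integral of f over [a, b]."""
            h = (b - a) / n
            midpoints = a + h * (np.arange(n) + 0.5)
            return h * np.sum(f(midpoints))

        # Illustrative integrand of the generic form k^2 * ln(1 - exp(-E/T))
        # with E = sqrt(k^2 + m^2), in MeV units
        T_mev, m_mev = 150.0, 500.0
        integrand = lambda k: k**2 * np.log1p(-np.exp(-np.sqrt(k**2 + m_mev**2) / T_mev))

        print(midpoint_rule(integrand, 0.0, 3000.0, n=1000))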

  12. Feature Specific Criminal Mapping using Data Mining Techniques and Generalized Gaussian Mixture Model

    OpenAIRE

    Uttam Mande; Y. Srinivas; Murthy, J. V. R.

    2012-01-01

    A great deal of research has been directed at mapping criminals to crimes, yet crime rates continue to rise, a consequence of the gap between the optimal usage of technologies and investigation. This has given scope for the development of new methodologies in the area of crime investigation using techniques based on data mining, image processing, forensics, and social mining. This paper presents a model using a new methodology for mapping the criminal with the crime...

  13. Multi-factor models and signal processing techniques application to quantitative finance

    CERN Document Server

    Darolles, Serges; Jay, Emmanuelle

    2013-01-01

    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices.This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere

  14. Shape modeling technique KOALA validated by ESA Rosetta at (21) Lutetia

    Science.gov (United States)

    Carry, B.; Kaasalainen, M.; Merline, W. J.; Müller, T. G.; Jorda, L.; Drummond, J. D.; Berthier, J.; O'Rourke, L.; Ďurech, J.; Küppers, M.; Conrad, A.; Tamblyn, P.; Dumas, C.; Sierks, H.; Osiris Team

    2012-06-01

    We present here a comparison of our results from ground-based observations of asteroid (21) Lutetia with imaging data acquired during the flyby of the asteroid by the ESA Rosetta mission. This flyby provided a unique opportunity to evaluate and calibrate our method of determination of size, 3-D shape, and spin of an asteroid from ground-based observations. Knowledge of certain observable physical properties of small bodies (e.g., size, spin, 3-D shape, and density) has far-reaching implications in furthering our understanding of these objects, such as composition, internal structure, and the effects of non-gravitational forces. We review the different observing techniques used to determine the above physical properties of asteroids and present our 3-D shape-modeling technique KOALA - Knitted Occultation, Adaptive-optics, and Lightcurve Analysis - which is based on multi-dataset inversion. We compare the results we obtained with KOALA, prior to the flyby, on asteroid (21) Lutetia with the high-spatial resolution images of the asteroid taken with the OSIRIS camera on-board the ESA Rosetta spacecraft, during its encounter with Lutetia on 2010 July 10. The spin axis determined with KOALA was found to be accurate to within 2°, while the KOALA diameter determinations were within 2% of the Rosetta-derived values. The 3-D shape of the KOALA model is also confirmed by the spectacular visual agreement between both 3-D shape models (KOALA pre- and OSIRIS post-flyby). We found a typical deviation of only 2 km at local scales between the profiles from KOALA predictions and OSIRIS images, resulting in a volume uncertainty provided by KOALA better than 10%. Radiometric techniques for the interpretation of thermal infrared data also benefit greatly from the KOALA shape model: the absolute size and geometric albedo can be derived with high accuracy, and thermal properties, for example the thermal inertia, can be determined unambiguously. The corresponding Lutetia analysis leads

  15. Applying Intelligent Computing Techniques to Modeling Biological Networks from Expression Data

    Institute of Scientific and Technical Information of China (English)

    Wei-Po Lee; Kung-Cheng Yang

    2008-01-01

    Constructing biological networks is one of the most important issues in systems biology. However, constructing a network from data manually takes a considerably large amount of time; therefore, an automated procedure is advocated. To automate the procedure of network construction, in this work we use two intelligent computing techniques, genetic programming and neural computation, to infer two kinds of network models that use continuous variables. To verify the presented approaches, experiments have been conducted, and the preliminary results show that both approaches can be used to infer networks successfully.

  16. Regressions by leaps and bounds and biased estimation techniques in yield modeling

    Science.gov (United States)

    Marquina, N. E. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. It was observed that OLS was not adequate as an estimation procedure when the independent or regressor variables were involved in multicollinearities. This was shown to manifest as small eigenvalues of the extended correlation matrix A'A. It was demonstrated that biased estimation techniques and all-possible-subset regression could help in finding a suitable model for predicting yield. Latent root regression was an excellent tool for determining how many predictive and nonpredictive multicollinearities were present.
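
    The diagnosis and remedy described here, small eigenvalues of the correlation matrix signalling multicollinearity, and biased (ridge-type) estimation stabilizing the fit, can be sketched as follows. The data are synthetic stand-ins, not the report's yield data.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100
        # Two nearly collinear regressors (stand-ins for correlated covariates)
        x1 = rng.normal(size=n)
        x2 = x1 + rng.normal(scale=0.01, size=n)
        X = np.column_stack([x1, x2])
        y = 2.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)

        # A near-zero eigenvalue of the correlation matrix flags multicollinearity
        print("eigenvalues:", np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))

        # OLS coefficients become unstable under multicollinearity ...
        beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

        # ... whereas the biased ridge estimator (X'X + kI)^-1 X'y stabilizes them
        k = 1.0
        beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)
        print("OLS:  ", beta_ols)
        print("ridge:", beta_ridge)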

  17. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik;

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inference...

  18. A controlled field pilot for testing near surface CO2 detection techniques and transport models

    Science.gov (United States)

    Spangler, L.H.; Dobeck, L.M.; Repasky, K.; Nehrir, A.; Humphries, S.; Keith, C.; Shaw, J.; Rouse, J.; Cunningham, A.; Benson, S.; Oldenburg, C.M.; Lewicki, J.L.; Wells, A.; Diehl, R.; Strazisar, B.; Fessenden, J.; Rahn, Thomas; Amonette, J.; Barr, J.; Pickles, W.; Jacobson, J.; Silver, E.; Male, E.; Rauch, H.; Gullickson, K.; Trautz, R.; Kharaka, Y.; Birkholzer, J.; Wielopolski, L.

    2009-01-01

    A field facility has been developed to allow controlled studies of near surface CO2 transport and detection technologies. The key component of the facility is a shallow, slotted horizontal well divided into six zones. The scale and fluxes were designed to address large scale CO2 storage projects and desired retention rates for those projects. A wide variety of detection techniques were deployed by collaborators from 6 national labs, 2 universities, EPRI, and the USGS. Additionally, modeling of CO2 transport and concentrations in the saturated soil and in the vadose zone was conducted. An overview of these results will be presented.

  19. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    Directory of Open Access Journals (Sweden)

    Yang-Cheng Lin

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers’ perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers’ perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.

  20. Is the linear modeling technique good enough for optimal form design? A comparison of quantitative analysis models.

    Science.gov (United States)

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.

  1. Statistical and Machine-Learning Data Mining Techniques for Better Predictive Modeling and Analysis of Big Data

    CERN Document Server

    Ratner, Bruce

    2011-01-01

    The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has

  2. Combining variational and model-based techniques to register PET and MR images in hand osteoarthritis

    Energy Technology Data Exchange (ETDEWEB)

    Magee, Derek [School of Computing, University of Leeds, Leeds (United Kingdom); Tanner, Steven F; Jeavons, Alan P [Division of Medical Physics, University of Leeds, Leeds (United Kingdom); Waller, Michael; Tan, Ai Lyn; McGonagle, Dennis [Leeds Teaching Hospitals NHS Trust, Leeds (United Kingdom)

    2010-08-21

    Co-registration of clinical images acquired using different imaging modalities and equipment is finding increasing use in patient studies. Here we present a method for registering high-resolution positron emission tomography (PET) data of the hand acquired using high-density avalanche chambers with magnetic resonance (MR) images of the finger obtained using a 'microscopy coil'. This allows the identification of the anatomical location of the PET radiotracer and thereby locates areas of active bone metabolism/'turnover'. Image fusion involving data acquired from the hand is demanding because rigid-body transformations cannot be employed to accurately register the images. The non-rigid registration technique that has been implemented in this study uses a variational approach to maximize the mutual information between images acquired using these different imaging modalities. A piecewise model of the fingers is employed to ensure that the methodology is robust and that it generates an accurate registration. Evaluation of the accuracy of the technique is tested using both synthetic data and PET and MR images acquired from patients with osteoarthritis. The method outperforms some established non-rigid registration techniques and results in a mean registration error that is less than approximately 1.5 mm in the vicinity of the finger joints.

  3. Combining variational and model-based techniques to register PET and MR images in hand osteoarthritis

    Science.gov (United States)

    Magee, Derek; Tanner, Steven F.; Waller, Michael; Tan, Ai Lyn; McGonagle, Dennis; Jeavons, Alan P.

    2010-08-01

    Co-registration of clinical images acquired using different imaging modalities and equipment is finding increasing use in patient studies. Here we present a method for registering high-resolution positron emission tomography (PET) data of the hand acquired using high-density avalanche chambers with magnetic resonance (MR) images of the finger obtained using a 'microscopy coil'. This allows the identification of the anatomical location of the PET radiotracer and thereby locates areas of active bone metabolism/'turnover'. Image fusion involving data acquired from the hand is demanding because rigid-body transformations cannot be employed to accurately register the images. The non-rigid registration technique that has been implemented in this study uses a variational approach to maximize the mutual information between images acquired using these different imaging modalities. A piecewise model of the fingers is employed to ensure that the methodology is robust and that it generates an accurate registration. Evaluation of the accuracy of the technique is tested using both synthetic data and PET and MR images acquired from patients with osteoarthritis. The method outperforms some established non-rigid registration techniques and results in a mean registration error that is less than approximately 1.5 mm in the vicinity of the finger joints.

  4. Interactions of a non-fluorescent fluoroquinolone with biological membrane models: A multi-technique approach.

    Science.gov (United States)

    Sousa, Carla F; Ferreira, Mariana; Abreu, Bárbara; Medforth, Craig J; Gameiro, Paula

    2015-11-30

    Fluoroquinolones are antibiotics which act by penetrating into bacterial cells and inhibiting enzymes related to DNA replication, and metal complexes of these drugs have recently been investigated as one approach to counteracting bacterial resistance. In this work, we apply a multi-technique approach to studying the partition coefficient (Kp) for the non-fluorescent third-generation fluoroquinolone sparfloxacin or its copper-complex with lipid membrane models of Gram-negative bacteria. The techniques investigated are UV-vis absorption and (19)F NMR spectroscopies together with quenching of a fluorescent probe present in the lipids (using steady-state and time-resolved methods). (19)F NMR spectroscopy has previously been used to determine the Kp values of fluorinated drugs but in the case of sparfloxacin did not yield useful data. However, similar Kp values for sparfloxacin or its copper-complex were obtained for the absorption and fluorescence quenching methods confirming the usefulness of a multi-technique approach. The Kp values measured for sparfloxacin were significantly higher than those found for other fluoroquinolones. In addition, similar Kp values were found for sparfloxacin and copper-complex suggesting that in contrast to other fluoroquinolones hydrophobic diffusion occurs readily for both of these molecules.

  5. Cellular Imaging Using Equivalent Cross-Relaxation Rate Technique in Rabbit VX-2 Tumor Model.

    Science.gov (United States)

    Nishiofuku, Hideyuki; Matsushima, Shigeru; Taguchi, Osamu; Inaba, Yoshitaka; Yamaura, Hidekazu; Sato, Yozo; Tanaka, Toshihiro; Kichikawa, Kimihiko

    2011-01-01

    Equivalent cross-relaxation rate (ECR) imaging (ECRI) is a measurement technique that can be used to quantitatively evaluate changes in structural organization and cellular density by MRI. The aim of this study was to evaluate the correlation between the ECR value and cellular density in the rabbit VX2 tumor model. Five rabbits implanted with 10 VX2 tumors in the femur muscles were included in this study. We adopted the off-resonance technique with a single saturation transfer pulse frequency of 7 ppm downfield from the water resonance. The ECR value was defined as the percentage of signal loss between the unsaturated and saturated images. ECR images were constructed based on the percentage ECR value. Pathological specimens were divided into 34 areas and classified into two groups: the viable group and the necrotic group. ECR values were measured and compared between groups. The correlation between the ECR value and cellular density was then determined. The mean ECR value was significantly higher in the viable group than in the necrotic group (61.2% vs. 35.8%). The area under the receiver operating characteristic curve was 0.991 at 7 ppm. The regression graph showed a linear relationship between the ECR value and cellular density; the correlation coefficient (r) was 0.858. There is a strong association between the ECR value and cellular density in VX2 tumors, and so ECRI could be a potentially useful technique for accurately depicting viable and necrotic areas.

  6. Surrogate-based modeling and dimension reduction techniques for multi-scale mechanics problems

    Institute of Scientific and Technical Information of China (English)

    Wei Shyy; Young-Chang Cho; Wenbo Du; Amit Gupta; Chien-Chou Tseng; Ann Marie Sastry

    2011-01-01

    Successful modeling and/or design of engineering systems often requires one to address the impact of multiple “design variables” on the prescribed outcome. There are often multiple, competing objectives based on which we assess the outcome of optimization. Since accurate, high fidelity models are typically time consuming and computationally expensive, comprehensive evaluations can be conducted only if an efficient framework is available. Furthermore, informed decisions of the model/hardware's overall performance rely on an adequate understanding of the global, not local, sensitivity of the individual design variables on the objectives. The surrogate-based approach, which involves approximating the objectives as continuous functions of design variables from limited data, offers a rational framework to reduce the number of important input variables, i.e., the dimension of a design or modeling space. In this paper, we review the fundamental issues that arise in surrogate-based analysis and optimization, highlighting concepts, methods, techniques, as well as modeling implications for mechanics problems. To aid the discussions of the issues involved, we summarize recent efforts in investigating cryogenic cavitating flows, active flow control based on dielectric barrier discharge concepts, and lithium (Li)-ion batteries. It is also stressed that many multi-scale mechanics problems can naturally benefit from the surrogate approach for “scale bridging.”
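
    As a concrete, minimal illustration of the surrogate-based approach: the "expensive model" below is a toy analytic function standing in for a costly CFD or battery simulation, and the radial-basis-function surrogate is one of several possible choices.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Stand-in for an expensive simulation: objective vs. two design variables
        def expensive_model(x):
            return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

        rng = np.random.default_rng(0)
        X_train = rng.uniform(-1.0, 1.0, size=(40, 2))   # 40 "simulation runs"
        y_train = expensive_model(X_train)

        # Radial-basis-function surrogate fitted to the sampled runs
        surrogate = RBFInterpolator(X_train, y_train, smoothing=1e-8)

        # The surrogate is cheap to query densely, e.g. for global sensitivity
        # screening or optimization over the design space
        X_test = rng.uniform(-1.0, 1.0, size=(5, 2))
        print("surrogate:", np.round(surrogate(X_test), 3))
        print("true:     ", np.round(expensive_model(X_test), 3))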

  7. A technique for estimating maximum harvesting effort in a stochastic fishery model

    Indian Academy of Sciences (India)

    Ram Rup Sarkar; J Chattopadhayay

    2003-06-01

    Exploitation of biological resources and the harvest of population species are commonly practiced in fisheries, forestry and wildlife management. Estimation of the maximum harvesting effort has a great impact on the economics of fisheries and other bio-resources. The present paper deals with the problem of a bioeconomic fishery model under environmental variability. A technique for finding the maximum harvesting effort in a fluctuating environment has been developed in a two-species competitive system, which shows that under realistic environmental variability the maximum harvesting effort is less than what is estimated in the deterministic model. This method also enables us to find the safe regions in the parametric space for which the chance of extinction of the species is minimized. A real-life fishery problem has been considered to obtain the inaccessible parameters of the system in a systematic way. Such studies may help resource managers to get an idea for controlling the system.

  8. Electroless-plating technique for fabricating thin-wall convective heat-transfer models

    Science.gov (United States)

    Avery, D. E.; Ballard, G. K.; Wilson, M. L.

    1984-01-01

    A technique is described for fabricating uniform thin-wall metallic heat-transfer models that simulate a Shuttle thermal protection system tile. Two 6- by 6- by 2.5-in. tiles were fabricated to obtain local heat-transfer rates. The fabrication process is not limited to any particular geometry and results in a seamless thin-wall heat-transfer model which uses a one-wire thermocouple to obtain local cold-wall heat-transfer rates. The tile is relatively fragile because of the brittle nature of the material and the structural weakness of the flat-sided configuration; however, a method was developed and used for repairing a cracked tile.

  9. Modeling gravitational instabilities in self-gravitating protoplanetary disks with adaptive mesh refinement techniques

    CERN Document Server

    Lichtenberg, Tim

    2015-01-01

    The astonishing diversity in the observed planetary population requires theoretical efforts and advances in planet formation theories. Numerical approaches provide a method to tackle the weaknesses of current planet formation models and are an important tool to close gaps in poorly constrained areas. We present a global disk setup to model the first stages of giant planet formation via gravitational instabilities (GI) in 3D with the block-structured adaptive mesh refinement (AMR) hydrodynamics code ENZO. With this setup, we explore the impact of AMR techniques on the fragmentation and clumping due to large-scale instabilities using different AMR configurations. Additionally, we seek to derive general resolution criteria for global simulations of self-gravitating disks of variable extent. We run a grid of simulations with varying AMR settings, including runs with a static grid for comparison, and study the effects of varying the disk radius. Adopting a marginally stable disk profile (Q_init=1), we validate the...

  10. Remote Sensing of Grass Response to Drought Stress Using Spectroscopic Techniques and Canopy Reflectance Model Inversion

    Directory of Open Access Journals (Sweden)

    Bagher Bayat

    2016-07-01

    The aim of this study was to follow the response to drought stress in a Poa pratensis canopy exposed to various levels of soil moisture deficit. We tracked the changes in the canopy reflectance (450–2450 nm) and retrieved vegetation properties (Leaf Area Index (LAI), leaf chlorophyll content (Cab), leaf water content (Cw), leaf dry matter content (Cdm) and senescent material (Cs)) during a drought episode. Spectroscopic techniques and radiative transfer model (RTM) inversion were employed to monitor the gradual manifestation of drought effects in a laboratory setting. Plots of 21 cm × 14.5 cm surface area with Poa pratensis plants that formed a closed canopy were divided into a well-watered control group and a group subjected to water stress for 36 days. In a regular weekly schedule, canopy reflectance and destructive measurements of LAI and Cab were taken. Spectral analysis indicated the first sign of stress after 4–5 days from the start of the experiment near the water absorption bands (at 1930 nm and 1440 nm) and in the red (at 675 nm). Spectroscopic techniques revealed plant stress up to 6 days earlier than visual inspection. Of the water stress-related vegetation indices, the responses of the Normalized Difference Water Index (NDWI_1241) and Normalized Photochemical Reflectance Index (PRI_norm) were significantly stronger in the stressed group than the control. To observe the effects of stress on grass properties during the drought episode, we used RTMo (the RTM of solar and sky radiation) model inversion by means of an iterative optimization approach. The performance of the model inversion was assessed by calculating R2 and the Normalized Root Mean Square Error (NRMSE) between retrieved and measured LAI (R2 = 0.87, NRMSE = 0.18) and Cab (R2 = 0.74, NRMSE = 0.15). All parameters retrieved by model inversion co-varied with soil moisture deficit. However, the first strong sign of water stress on the retrieved grass properties was detected as a change of Cw

  11. A Study of Recently Developed MCMC Techniques for Efficiently Characterizing the Uncertainty of Hydrologic Models

    Science.gov (United States)

    Marshall, L. A.; Smith, T. J.

    2008-12-01

    The implementation of Bayesian methods, and specifically Markov chain Monte Carlo (MCMC) methods, are becoming much more widespread due to their usefulness in uncertainty assessment of hydrologic models. These methods have the ability to explicitly account for non-stationarities in model errors (via the likelihood), complex parameter interdependence and uncertainty, and multiple sources of data for model conditioning. These properties hold particular importance for hydrologic models where we need to characterize complex model errors (including heteroscedasticity and correlation) and where a full assessment of the uncertainty associated with the modeled results is desirable. Traditional MCMC algorithms can be difficult to implement due to computational constraints for high-dimensional models with complex parameter spaces and expensive model functions. Failure to effectively explore the parameter space can lead to false convergence to a local optimum and a misunderstanding of the model's ability to characterize the system. While past studies have shown adaptive MCMC techniques to be more desirable than traditional MCMC approaches, few hydrologic studies have taken advantage of these new advances, given their varying difficulty in implementation. We investigated three recently developed MCMC algorithms, the Adaptive Metropolis (AM), the Delayed Rejection Adaptive Metropolis (DRAM) and the Differential Evolution Markov Chain (DE-MC). These algorithms are newly devised and intended to better handle issues common to hydrologic modeling including multi-modality of parameter spaces, complex parameter interactions, and the computational cost associated with potentially expensive hydrologic functions. We evaluated each algorithm through application to two case studies; (1) a synthetic Gaussian mixture with five parameters and two modes and (2) a nine-dimensional snowmelt-hydrologic modeling study applied to an experimental watershed. Each of the three algorithms was compared
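
    Of the three algorithms, the Adaptive Metropolis sampler is the simplest to sketch. The fragment below is a minimal illustration with a toy bimodal target; the proposal covariance is recomputed from the full history each step for clarity, whereas practical implementations update it recursively.

        import numpy as np

        def adaptive_metropolis(log_post, theta0, n_iter=5000, adapt_start=500):
            """Minimal Adaptive Metropolis sketch (after Haario et al. 2001)."""
            d = len(theta0)
            sd = 2.4**2 / d                      # standard AM scaling factor
            eps = 1e-8                           # covariance regularization
            chain = np.zeros((n_iter, d))
            chain[0] = theta0
            lp = log_post(theta0)
            cov = 0.1 * np.eye(d)                # initial proposal covariance
            rng = np.random.default_rng(0)
            for i in range(1, n_iter):
                if i > adapt_start:
                    # Adapt the proposal covariance from the chain history
                    cov = sd * (np.atleast_2d(np.cov(chain[:i].T)) + eps * np.eye(d))
                prop = rng.multivariate_normal(chain[i - 1], cov)
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
                    chain[i], lp = prop, lp_prop
                else:
                    chain[i] = chain[i - 1]
            return chain

        # Toy bimodal target, mimicking a multi-modal parameter posterior
        log_post = lambda th: np.logaddexp(-0.5 * (th[0] + 2.0) ** 2,
                                           -0.5 * (th[0] - 2.0) ** 2)
        samples = adaptive_metropolis(log_post, np.array([0.0]))
        print("mean:", samples.mean(), "std:", samples.std())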

  12. The combination of satellite observation techniques for sequential ionosphere VTEC modeling

    Science.gov (United States)

    Erdogan, Eren; Limberger, Marco; Schmidt, Michael; Seitz, Florian; Dettmering, Denise; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Venzmer, Malte; Mrotzek, Niclas

    2016-04-01

    The project OPTIMAP is a joint initiative by the Bundeswehr GeoInformation Centre (BGIC), the German Space Situational Awareness Centre (GSSAC), the German Geodetic Research Institute of the Technical University of Munich (DGFI-TUM) and the Institute for Astrophysics at the University of Göttingen (IAG). The main goal is to develop an operational tool for ionospheric mapping and prediction (OPTIMAP). A key feature of the project is the combination of different satellite observation techniques to improve the spatio-temporal data coverage and the sensitivity for selected target parameters. In the current status, information about the vertical total electron content (VTEC) is derived from the dual frequency signal processing of four techniques: (1) Terrestrial observations of GPS and GLONASS ensure the high-resolution coverage of continental regions, (2) the satellite altimetry mission Jason-2 is taken into account to provide VTEC in nadir direction along the satellite tracks over the oceans, (3) GPS radio occultations to Formosat-3/COSMIC are exploited for the retrieval of electron density profiles that are integrated to obtain VTEC and (4) Jason-2 carrier-phase observations tracked by the on-board DORIS receiver are processed to determine the relative VTEC. All measurements are sequentially pre-processed in hourly batches serving as input data of a Kalman filter (KF) for modeling the global VTEC distribution. The KF runs in a predictor-corrector mode allowing for the sequential processing of the measurements where update steps are performed with one-minute sampling in the current configuration. The spatial VTEC distribution is represented by B-spline series expansions, i.e., the corresponding B-spline series coefficients together with additional technique-dependent unknowns such as Differential Code Biases and Intersystem Biases are estimated by the KF. As a preliminary solution, the prediction model to propagate the filter state through time is defined by a random

  13. Advancing In Situ Modeling of ICMEs: New Techniques for New Observations

    CERN Document Server

    Mulligan, Tamitha; Lynch, Benjamin J

    2012-01-01

    It is generally known that multi-spacecraft observations of interplanetary coronal mass ejections (ICMEs) more clearly reveal their three-dimensional structure than do observations made by a single spacecraft. The launch of the STEREO twin observatories in October 2006 has greatly increased the number of multipoint studies of ICMEs in the literature, but this field is still in its infancy. To date, most studies continue to rely on flux rope models based on single-track observations through a vast, multi-faceted structure, which oversimplifies the problem and often hinders interpretation of the large-scale geometry, especially for cases in which one spacecraft observes a flux rope, while another does not. In order to tackle these complex problems, new modeling techniques are required. We describe these new techniques and analyze two ICMEs observed at the twin STEREO spacecraft on 22-23 May 2007, when the spacecraft were separated by ~8 degrees. We find a combination of non-force-free flux rope multi-spacecr...

  14. Development of Modeling and Signal Processing Techniques for Nondestructive Testing of Concrete Structures

    Energy Technology Data Exchange (ETDEWEB)

    Woo, S.K.; Song, Y.C. [Korea Electric Power Research Institute, Taejeon (Korea); Rhim, H.C. [Yonsei University, Seoul (Korea)

    2001-07-01

    The radar method has the potential to be a powerful and effective tool for nondestructive testing (NDT) of concrete structures, roadways, tunnels and airport pavements. Yet, not all of the available features of the method have been fully developed. The advancement of the method can be achieved through the study of the electromagnetic properties of concrete, development of computer simulation techniques for radar measurements, application of appropriate radar hardware systems for specific problem areas, and implementation of proper imaging algorithms for the processing of radar measurement data. In this paper, the finite-difference time-domain (FD-TD) numerical modeling technique has been applied to simulate radar measurements of concrete structures for NDT. The modeling work is found to be useful in predicting radar measurement signals for thickness detection, rebar detection and the detection of delamination inside concrete. Also, an imaging scheme has been developed and proposed for the use of radar in detecting steel reinforcing bars embedded inside concrete. The scheme utilizes the measured electromagnetic properties of concrete and the impedance mismatch between concrete and the steel bar. The results have shown improved output of the radar measurement compared to commercially available processing methods. (author). 8 refs., 15 figs.
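
    A minimal 1D FD-TD sketch conveys the idea behind such radar simulations: a pulse launched in air partially reflects at an interface with a concrete-like medium. Everything below (grid sizes, the assumed relative permittivity of 6, the normalized units) is illustrative, not the paper's actual model.

        import numpy as np

        # 1D FD-TD in normalized units: a Gaussian pulse in air hits a
        # concrete-like half-space and partially reflects at the interface.
        nz, nt = 400, 800
        eps_r = np.ones(nz)
        eps_r[200:] = 6.0                 # assumed relative permittivity of concrete

        Ex = np.zeros(nz)                 # electric field
        Hy = np.zeros(nz - 1)             # magnetic field on the staggered grid
        c = 0.5                           # Courant number

        for n in range(nt):
            Hy += c * np.diff(Ex)                             # update H from curl E
            Ex[1:-1] += c / eps_r[1:-1] * np.diff(Hy)         # update E from curl H
            Ex[50] += np.exp(-0.5 * ((n - 60) / 15.0) ** 2)   # soft Gaussian source

        # The reflected pulse arriving back near the source carries the interface
        # signature that radar NDT interprets for thickness/rebar detection.
        print("peak |Ex|:", np.abs(Ex).max())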

  15. Comparative quantification of oxygen release by wetland plants: electrode technique and oxygen consumption model.

    Science.gov (United States)

    Wu, Haiming; Liu, Jufeng; Zhang, Jian; Li, Cong; Fan, Jinlin; Xu, Xiaoli

    2014-01-01

    Understanding oxygen release by plants is important to the design of constructed wetlands for wastewater treatment. Lab-scale systems planted with Phragmites australis were studied to evaluate the amount of oxygen released by plants using electrode techniques and an oxygen consumption model. The oxygen release rate (0.14 g O2/m2/day) measured using electrode techniques was much lower than that (3.94-25.20 g O2/m2/day) calculated using the oxygen consumption model. The results revealed that oxygen release by plants was significantly influenced by the oxygen demand for the degradation of pollutants, and the oxygen release rate increased with rising concentrations of degradable materials in the solution. A summary of the methods for quantifying oxygen release by wetland plants demonstrated that variations exist among different measuring methods and even within the same measuring approach. The results should be helpful for understanding the contribution of plants in constructed wetlands toward actual wastewater treatment.

  16. Compensation technique for Q-limit enforcements in a constant complex Jacobian power flow model

    Energy Technology Data Exchange (ETDEWEB)

    Raju, V.B.; Bijwe, P.R.; Nanda, J. (Dept. of Electrical Engineering, Indian Inst. of Technology, Delhi, New Delhi 110 016 (IN))

    1990-01-01

    This paper presents a simple and efficient compensation technique to deal with bus-type switchings associated with Q-limit enforcement at voltage controlled (PV) buses in a constant Jacobian power flow model. The Jacobian is expressed in complex variable form, resulting in reduced storage requirements as compared to the real form of representation of the Jacobian. The structure of the Jacobian is preserved irrespective of bus-type switchings while Q-limit enforcements are performed at the PV buses. This feature permits efficient implementation of optimal ordering of buses while factorizing the Jacobian matrix. The Jacobian is held constant throughout the load flow solution process. Incremental secondary injections (ISIs) are provided at the respective PV buses to maintain the specified voltages. The required injections are computed from the proposed compensation model. Results indicate that the proposed technique is quite efficient, as the number of iterations for the solution to converge, irrespective of bus-type switchings, remains the same as in the unadjusted solution case.

  17. Remote sensing and spatial statistical techniques for modelling Ommatissus lybicus (Hemiptera: Tropiduchidae) habitat and population densities.

    Science.gov (United States)

    Al-Kindi, Khalifa M; Kwan, Paul; R Andrew, Nigel; Welch, Mitchell

    2017-01-01

    In order to understand the distribution and prevalence of Ommatissus lybicus (Hemiptera: Tropiduchidae), as well as to analyse their current biogeographical patterns and predict their future spread, comprehensive and detailed information on environmental and climatic conditions and agricultural practices is essential. Spatial analytical techniques, such as remote sensing and spatial statistics tools, can help detect and model spatial links and correlations between the presence, absence and density of O. lybicus in response to climatic, environmental, and human factors. The main objective of this paper is to review remote sensing and relevant analytical techniques that can be applied to mapping and modelling the habitat and population density of O. lybicus. An exhaustive search of the related literature revealed that there are very limited studies linking location-based infestation levels of pests like O. lybicus with climatic, environmental, and human-practice-related variables. This review also highlights the accumulated knowledge and addresses the gaps in this area of research. Furthermore, it makes recommendations for future studies, and gives suggestions on monitoring and surveillance methods for designing both local and regional level integrated pest management strategies for palm trees and other affected cultivated crops.

  18. Dynamical Models for NGC 6503 using a Markov Chain Monte Carlo Technique

    CERN Document Server

    Puglielli, David; Courteau, Stéphane

    2010-01-01

    We use Bayesian statistics and Markov chain Monte Carlo (MCMC) techniques to construct dynamical models for the spiral galaxy NGC 6503. The constraints include surface brightness profiles which display a Freeman Type II structure; HI and ionized gas rotation curves; the stellar rotation, which is nearly coincident with the ionized gas curve; and the line of sight stellar dispersion, with a sigma-drop at the centre. The galaxy models consist of a Sersic bulge, an exponential disc with an optional inner truncation and a cosmologically motivated dark halo. The Bayesian/MCMC technique yields the joint posterior probability distribution function for the input parameters. We examine several interpretations of the data: the Type II surface brightness profile may be due to dust extinction, to an inner truncated disc or to a ring of bright stars; and we test separate fits to the gas and stellar rotation curves to determine if the gas traces the gravitational potential. We test each of these scenarios for bar stability...

  19. Classification of gamma-ray burst durations using robust model-comparison techniques

    Science.gov (United States)

    Kulkarni, Soham; Desai, Shantanu

    2017-04-01

    Gamma-Ray Bursts (GRBs) have been conventionally bifurcated into two distinct categories dubbed "short" and "long", depending on whether their durations are less than or greater than two seconds respectively. However, many authors have pointed to the existence of a third class of GRBs with mean durations intermediate between the short and long GRBs. Here, we apply multiple model comparison techniques to verify these claims. For each category, we obtain the best-fit parameters by maximizing a likelihood function based on a weighted superposition of two (or three) lognormal distributions. We then do model-comparison between each of these hypotheses by comparing the chi-square probabilities, Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC). We uniformly apply these techniques to GRBs from Swift (both observer and intrinsic frame), BATSE, BeppoSAX, and Fermi-GBM. We find that the Swift GRB distributions (in the observer frame) for the entire dataset favor three categories at about 2.4σ from difference in chi-squares, and show decisive evidence in favor of three components using both AIC and BIC. However, when the same analysis is done for the subset of Swift GRBs with measured redshifts, two components are favored with marginal significance. For all the other datasets, evidence for three components is either very marginal or disfavored.
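
    Because a lognormal mixture in duration t is a Gaussian mixture in log t, the model comparison can be sketched with off-the-shelf tools. The snippet below fits 2- and 3-component mixtures to synthetic durations (illustrative stand-ins for a GRB T90 catalog, not the paper's data) and compares their AIC/BIC.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Synthetic log10 durations standing in for a GRB T90 catalog
        rng = np.random.default_rng(0)
        log_t = np.concatenate([rng.normal(-0.5, 0.40, 300),   # "short" bursts
                                rng.normal(1.4, 0.45, 700)])   # "long" bursts
        X = log_t.reshape(-1, 1)

        # A lognormal mixture in t is a Gaussian mixture in log t, so the
        # 2- vs 3-component comparison reduces to standard mixture fits
        for k in (2, 3):
            gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
            print(f"k={k}: AIC={gm.aic(X):.1f}  BIC={gm.bic(X):.1f}")
        # Lower AIC/BIC is preferred; BIC penalizes the extra component more.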

  20. Analytical model for Transient Current Technique (TCT) signal prediction and analysis for thin interface characterization

    Science.gov (United States)

    Bronuzzi, J.; Mapelli, A.; Sallese, J. M.

    2016-12-01

    A silicon wafer bonding technique has been recently proposed for the fabrication of monolithic silicon radiation detectors. This new process would enable direct bonding of a read-out electronics chip wafer onto a highly resistive silicon substrate wafer. Monolithic silicon detectors could therefore be fabricated in a way that allows a free choice of electronic chips and high-resistivity silicon bulk, even from different providers; moreover, a monolithic detector with a high-resistivity bulk would become available. The electrical properties of the bonded interface are critical for this application: mobile charges generated by radiation inside the bonded bulk are expected to transit through the interface to be collected by the read-out electronics. In order to characterize this interface, the concept of the Transient Current Technique (TCT) has been explored by means of numerical simulations combined with a physics-based analytical model. In this work, the analytical model, which gives insight into the physics behind the dependence of the TCT signal on interface traps, is validated using both TCAD simulations and experimental measurements.

  1. Using the Continuum of Design Modelling Techniques to Aid the Development of CAD Modeling Skills in First Year Industrial Design Students

    Science.gov (United States)

    Storer, I. J.; Campbell, R. I.

    2012-01-01

    Industrial Designers need to understand and command a number of modelling techniques to communicate their ideas to themselves and others. Verbal explanations, sketches, engineering drawings, computer aided design (CAD) models and physical prototypes are the most commonly used communication techniques. Within design, unlike some disciplines,…

  3. Using Unified Modelling Language (UML) as a process-modelling technique for clinical-research process improvement.

    Science.gov (United States)

    Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B

    2007-03-01

    The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding, systematically appraise, and identify areas of improvement of a business process. Unified Modelling Language (UML) is a general-purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier to efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage - especially from the pilot phase - parallel processing of data, and correctly positioned process controls should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.

  4. DEVELOPMENT OF RESERVOIR CHARACTERIZATION TECHNIQUES AND PRODUCTION MODELS FOR EXPLOITING NATURALLY FRACTURED RESERVOIRS

    Energy Technology Data Exchange (ETDEWEB)

    Michael L. Wiggins; Raymon L. Brown; Faruk Civan; Richard G. Hughes

    2002-12-31

    For many years, geoscientists and engineers have undertaken research to characterize naturally fractured reservoirs. Geoscientists have focused on understanding the process of fracturing and the subsequent measurement and description of fracture characteristics. Engineers have concentrated on the fluid flow behavior in the fracture-porous media system and the development of models to predict the hydrocarbon production from these complex systems. This research attempts to integrate these two complementary views to develop a quantitative reservoir characterization methodology and flow performance model for naturally fractured reservoirs. The research has focused on estimating naturally fractured reservoir properties from seismic data, predicting fracture characteristics from well logs, and developing a naturally fractured reservoir simulator. It is important to develop techniques that can be applied to estimate the important parameters in predicting the performance of naturally fractured reservoirs. This project proposes a method to relate seismic properties to the elastic compliance and permeability of the reservoir based upon a sugar cube model. In addition, methods are presented to use conventional well logs to estimate localized fracture information for reservoir characterization purposes. The ability to estimate fracture information from conventional well logs is very important in older wells where data are often limited. Finally, a desktop naturally fractured reservoir simulator has been developed for the purpose of predicting the performance of these complex reservoirs. The simulator incorporates vertical and horizontal wellbore models, methods to handle matrix to fracture fluid transfer, and fracture permeability tensors. This research project has developed methods to characterize and study the performance of naturally fractured reservoirs that integrate geoscience and engineering data. This is an important step in developing exploitation strategies for

  5. Very high resolution surface mass balance over Greenland modeled by the regional climate model MAR with a downscaling technique

    Science.gov (United States)

    Kittel, Christoph; Lang, Charlotte; Agosta, Cécile; Prignon, Maxime; Fettweis, Xavier; Erpicum, Michel

    2016-04-01

    This study presents surface mass balance (SMB) results at 5 km resolution from the regional climate model MAR over the Greenland ice sheet. Here, we use the latest MAR version (v3.6), in which the land-ice module (SISVAT), using a high-resolution grid (5 km) for surface variables, is fully coupled to the MAR atmospheric module running at a lower resolution of 10 km. This online downscaling technique makes it possible to correct the near-surface temperature and humidity from MAR by an elevation-based gradient before forcing SISVAT. The 10 km precipitation is not corrected. Corrections are stronger over the ablation zone, where the topography varies more. The model has been forced by ERA-Interim between 1979 and 2014. We will show the advantages of using an online SMB downscaling technique with respect to an offline downscaling extrapolation based on local SMB vertical gradients. Results at 5 km show better agreement with the PROMICE surface mass balance database than the extrapolated 10 km MAR SMB results.
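
    The elevation-based correction underlying such online downscaling can be illustrated in a few lines. The fixed lapse rate below is an assumption made for the sketch; MAR itself diagnoses the near-surface gradients from the simulated atmosphere rather than using a constant value.

        import numpy as np

        def downscale_t2m(t2m_coarse, z_coarse, z_fine, lapse_rate=-0.0065):
            """Correct coarse-grid near-surface temperature [K] for the elevation
            difference between the coarse cell and the finer topography [m]."""
            return t2m_coarse + lapse_rate * (z_fine - z_coarse)

        # One 10 km cell at 1200 m feeding four 5 km cells of varying elevation
        z_fine = np.array([950.0, 1100.0, 1300.0, 1450.0])
        print(downscale_t2m(268.0, 1200.0, z_fine))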

  6. Fuzzy Time Series Forecasting Model Based on Automatic Clustering Techniques and Generalized Fuzzy Logical Relationship

    Directory of Open Access Journals (Sweden)

    Wangren Qiu

    2015-01-01

Techniques for constructing high-order fuzzy time series models fall into three types: those based on advanced algorithms, on computational methods, or on grouping the fuzzy logical relationships. The last type of model is easy to understand for a decision maker who knows nothing about fuzzy set theory or advanced algorithms. To deal with forecasting problems, this paper presents novel high-order fuzzy time series models, denoted GTS(M, N), based on generalized fuzzy logical relationships and automatic clustering. The paper introduces the concept of a generalized fuzzy logical relationship and an operation for combining the generalized relationships. The procedure of the proposed model was then applied to forecasting enrollment data at the University of Alabama. To demonstrate its forecasting performance, the proposed approach was also applied to forecasting the Shanghai Stock Exchange Composite Index. Finally, the effects on the forecasting results of the parameters M and N, the model order, and the principal fuzzy logical relationships considered are also discussed.
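
    As a concrete, minimal illustration of the grouping idea, the sketch below implements the classic first-order scheme (Chen's method) on enrollment figures of the kind used in this literature; it is a baseline only, not the paper's GTS(M, N) model with automatic clustering, and the interval partition is an arbitrary assumption.

```python
import numpy as np

# First-order fuzzy time series baseline: fuzzify observations into
# intervals, group the fuzzy logical relationships, forecast from the
# midpoints of the successor intervals.
data = [13055, 13563, 13867, 14696, 15460, 15311, 15603, 15861, 16807]

lo, hi, k = 13000, 17000, 8                  # universe split into k intervals
edges = np.linspace(lo, hi, k + 1)

def fuzzify(x):
    return int(np.clip(np.searchsorted(edges, x) - 1, 0, k - 1))

states = [fuzzify(x) for x in data]
# Fuzzy logical relationship groups: A_i -> set of A_j observed to follow it.
flrg = {}
for a, b in zip(states, states[1:]):
    flrg.setdefault(a, set()).add(b)

mid = (edges[:-1] + edges[1:]) / 2           # interval midpoints
current = states[-1]
forecast = np.mean([mid[j] for j in sorted(flrg.get(current, {current}))])
print(round(forecast))
```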

  7. Tide-surge adjoint modeling: A new technique to understand forecast uncertainty

    Science.gov (United States)

    Wilson, Chris; Horsburgh, Kevin J.; Williams, Jane; Flowerdew, Jonathan; Zanna, Laure

    2013-10-01

    For a simple dynamical system, such as a pendulum, it is easy to deduce where and when applied forcing might produce a particular response. However, for a complex nonlinear dynamical system such as the ocean or atmosphere, this is not as obvious. Knowing when or where the system is most sensitive, to observational uncertainty or otherwise, is key to understanding the physical processes, improving and providing reliable forecasts. We describe the application of adjoint modeling to determine the sensitivity of sea level at a UK coastal location, Sheerness, to perturbations in wind stress preceding an extreme North Sea storm surge event on 9 November 2007. Sea level at Sheerness is one of the most important factors used to decide whether to close the Thames Flood Barrier, which protects London. Adjoint modeling has been used by meteorologists since the 1990s, but is a relatively new technique for ocean modeling. It may be used to determine system sensitivity beyond the scope of ensemble modeling and in a computationally efficient way. Using estimates of wind stress error from Met Office forecasts, we find that for this event total sea level at Sheerness is most sensitive in the 3 h preceding the time of its unperturbed maximum level and over a radius of approximately 300 km. We also find that the pattern of sensitivity follows a simple sequence when considered in the reverse-time direction.
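
    The following sketch shows the adjoint idea on a linear toy system (not the tide-surge model): one reverse-time sweep of the adjoint state yields the sensitivity of a scalar output to the forcing at every earlier step, where an ensemble would need one forward run per perturbation. The matrices are random stand-ins.

```python
import numpy as np

# Adjoint sensitivity for a toy linear model x_{k+1} = A x_k + B f_k with
# scalar output J = c^T x_N (think: sea level at one location at the final
# time). dJ/df_k = B^T (A^T)^{N-1-k} c, computed by a single backward sweep.
rng = np.random.default_rng(9)
n, N = 4, 10
A = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))   # wind-stress-like forcing enters through B
c = np.zeros(n); c[0] = 1.0       # output picks one state component

lam = c.copy()                    # adjoint state at the final time
sens = np.zeros(N)
for k in range(N - 1, -1, -1):    # reverse-time sweep
    sens[k] = float(B.T @ lam)    # dJ/df_k
    lam = A.T @ lam
print(np.round(sens, 3))          # sensitivity largest close to the final time
```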

  8. Panel Stiffener Debonding Analysis using a Shell/3D Modeling Technique

    Science.gov (United States)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2008-01-01

A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.
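
    For readers unfamiliar with the virtual crack closure technique, the sketch below evaluates single-node mode I/II strain energy release rates and a B-K style mixed-mode failure index; all nodal values and toughness constants are hypothetical, not the paper's.

```python
# Single-node virtual crack closure sketch: strain energy release rates
# from crack-tip nodal forces and the relative displacements of the node
# pair behind the front. da = element length at the front, b = nodal width.
def vcct_serr(f_normal, f_shear, dv, du, da, b):
    g_i = f_normal * dv / (2.0 * da * b)    # mode I (opening)
    g_ii = f_shear * du / (2.0 * da * b)    # mode II (sliding shear)
    return g_i, g_ii

# Hypothetical nodal values (N, mm) and assumed B-K toughness parameters.
g_i, g_ii = vcct_serr(12.0, 8.0, 0.004, 0.003, 0.5, 1.0)
g_t = g_i + g_ii

g_ic, g_iic, eta = 0.2, 0.8, 1.8            # assumed toughnesses (N/mm)
g_c = g_ic + (g_iic - g_ic) * (g_ii / g_t) ** eta
print(f"failure index = {g_t / g_c:.2f}")   # >= 1 predicts delamination growth
```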

  9. Model-based sub-Nyquist sampling and reconstruction technique for ultra-wideband (UWB) radar

    Science.gov (United States)

    Nguyen, Lam; Tran, Trac D.

    2010-04-01

The Army Research Lab has recently developed an ultra-wideband (UWB) synthetic aperture radar (SAR). The radar has been employed to support proof-of-concept demonstrations for several concealed target detection programs. The radar transmits and receives short impulses to achieve a wide bandwidth from 300 MHz to 3000 MHz. Since the radar directly digitizes the wide-bandwidth receive signals, the challenge is how to employ relatively slow and inexpensive analog-to-digital (A/D) converters to sample the signals at a rate that meets the minimum Nyquist rate. ARL has developed a sampling technique that allows us to employ inexpensive A/D converters (ADC) to digitize the wide-bandwidth signals. However, this technique still has a major drawback: the longer time required to complete a data acquisition cycle, which in turn translates to lower average power and lower effective pulse repetition frequency (PRF). Compressed Sensing (CS) theory offers a new approach to data acquisition. In the CS framework, we can reconstruct certain signals or images from far fewer samples than traditional sampling methods require, provided that the signals are sparse in certain domains. However, while the CS framework offers the data compression feature, it still does not address the above-mentioned drawback, namely that data acquisition must be operated in equivalent time, since many global measurements (obtained from global random projections) are required, as depicted by the sensing matrix Φ in the CS framework. In this paper, we propose a new technique that allows the sub-Nyquist sampling and reconstruction of wide-bandwidth data. In this technique, each wide-bandwidth radar data record is modeled as a superposition of many backscatter signals from reflective point targets. The technique is based on direct sparse recovery using a special dictionary containing many time-delayed versions of the transmitted probing signal. We demonstrate via simulated as well as
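
    The following sketch illustrates the flavour of this reconstruction under stated assumptions: a toy record built from a dictionary of time-delayed Gaussian pulses is sub-sampled and recovered with generic orthogonal matching pursuit. The pulse shape, sizes, and solver are illustrative stand-ins, not ARL's actual implementation.

```python
import numpy as np

# Toy sparse recovery with a dictionary of time-delayed pulses.
rng = np.random.default_rng(0)
n, m, k = 256, 96, 3                      # record length, samples kept, sparsity

t = np.arange(-8, 9)
pulse = np.exp(-0.5 * (t / 2.0) ** 2)     # stand-in transmitted pulse

# Dictionary: one column per candidate delay of the probing pulse.
D = np.zeros((n, n))
for d in range(n):
    idx = d + t
    ok = (idx >= 0) & (idx < n)
    D[idx[ok], d] = pulse[ok]
D /= np.linalg.norm(D, axis=0)

x_true = np.zeros(n)
x_true[[40, 84, 140]] = [1.0, -0.6, 0.8]  # three point targets
rows = rng.choice(n, size=m, replace=False)
y = D[rows] @ x_true + 0.01 * rng.standard_normal(m)   # sub-sampled record

# Orthogonal matching pursuit on the sub-sampled dictionary.
A, residual, support = D[rows], y.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
# Ideally [40, 84, 140], or immediate neighbours given the coherent dictionary.
print(sorted(support))
```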

  10. Mapping Tamarix: New techniques for field measurements, spatial modeling and remote sensing

    Science.gov (United States)

    Evangelista, Paul H.

Native riparian ecosystems throughout the southwestern United States are being altered by the rapid invasion of Tamarix species, commonly known as tamarisk. The effects that tamarisk has on ecosystem processes have been poorly quantified, largely due to inadequate survey methods. I tested new approaches for field measurements, spatial models and remote sensing to improve our ability to measure and map tamarisk occurrence, and to provide new methods that will assist in management and control efforts. Examining allometric relationships between basal cover and height measurements collected in the field, I was able to produce several models to accurately estimate aboveground biomass. The two best models each explained 97% of the variance (R² = 0.97). Next, I tested five commonly used predictive spatial models to identify which methods performed best for tamarisk using different types of data collected in the field. Most spatial models performed well for tamarisk, with logistic regression performing best, with an Area Under the receiver-operating characteristic Curve (AUC) of 0.89 and an overall accuracy of 85%. The results of this study also suggested that models may not perform equally with different invasive species, and that results may be influenced by species traits and their interaction with environmental factors. Lastly, I tested several approaches to improve the ability to remotely sense tamarisk occurrence. Using Landsat7 ETM+ satellite scenes and derived vegetation indices for six different months of the growing season, I examined their ability to detect tamarisk individually (single-scene analyses) and collectively (time-series). My results showed that time-series analyses were best suited to distinguish tamarisk from other vegetation and landscape features (AUC = 0.96, overall accuracy = 90%). June, August and September were the best months to detect unique phenological attributes that are likely related to the species' extended growing season and green-up during
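
    As a hedged sketch of the best-performing spatial model named above, the code below fits a logistic regression to synthetic presence/absence data and scores it by AUC; the predictors and data are invented, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic presence/absence model: logistic regression on invented
# environmental predictors, scored with the AUC metric reported above.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))            # e.g. elevation, distance to water, ...
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.3 * rng.normal(size=500)
y = (logit > 0).astype(int)              # synthetic tamarisk presence/absence

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"AUC = {auc:.2f}")
```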

  11. Automated 3D modelling of buildings from aerial and space imagery using image understanding techniques

    Science.gov (United States)

    Kim, Taejung

    The development of a fully automated mapping system is one of the fundamental goals in photogrammetry and remote sensing. As an approach towards this goal, this thesis describes the work carried out in the automated 3D modelling of buildings in urban scenes. The whole work is divided into three parts: the development of an automated height extraction system, the development of an automated building detection system, and the combination of these two systems. After an analysis of the key problems of urban-area imagery for stereo matching, buildings were found to create isolated regions and blunders. From these findings, an automated building height extraction system was developed. This stereoscopic system is based on a pyramidal (area-based) matching algorithm with automatic seed points and a tile-based control strategy. To remove possible blunders and extract buildings from other background objects, a series of "smart" operations using linear elements from buildings were also applied. A new monoscopic building detection system was developed based on a graph constructed from extracted lines and their relations. After extracting lines from a single image using low-level image processing techniques, line relations are searched for and a graph constructed. By finding closed loops in the graph, building hypotheses are generated. These are then merged and verified using shadow analysis and perspective geometry. After verification, each building hypothesis indicates either a building or a part of a building. By combining results from these two systems, 3D building roofs can be modelled automatically. The modelling is performed using height information obtained from the height extraction system and interpolation boundaries obtained from the building detection system. Other fusion techniques and the potential improvements due to these are also discussed. Quantitative analysis was performed for each algorithm presented in this thesis and the results support the newly

  12. High-resolution global tomography: A full-wave technique for forward and inverse modeling

    Science.gov (United States)

    Nissen-Meyer, Tarje; Sigloch, Karin; Fournier, Alexandre

    2010-05-01

    In recent years, seismology has greatly benefitted from significant progress in digital data collection and processing, accurate numerical methods for wave propagation, and high-performance computing to explore crucial scales of interest in both data and model spaces. We will present a full-wave technique to address the seismic forward and inverse problem at the global scale, with a specific focus on diffracted waves in the lowermost mantle: Our 2D spectral-element method tackles 3D wave propagation through spherically symmetric background models down to seismic frequencies of 1 Hz and delivers the wavefields necessary to construct sensitivity kernels. This specific approach distinguishes itself from the adjoint method in that it requires no knowledge about data structure or observables at the time of forward modeling by means of storing entire reference space-time wavefields. To obtain a direct view of the interconnection between surface displacements and earth structure, we examine the time-dependent sensitivity of the seismic signal to 3D model perturbations. Being highly sensitive to such parameters as epicentral distance, earthquake radiation pattern, depth, frequency, receiver components and time windows, this effort suggests criteria for data selection to optimally illuminate a specific region within the earth. As shown with core-diffracted P-waves, we measure and model our observables (e.g. traveltimes, amplitudes) in multiple-frequency passbands, thereby increasing robustness of the inverse problem and path coverage. This allows us to selectively draw only upon frequency bands with high signal-to-noise ratio. We discuss the selection and usability of data for such a Pdiff tomographic setting, coverage maps and target regions. We also touch upon the validity of a 1D reference model and quantify the applicability range of the first-order Born approximation.

  13. Numerical model calibration with the use of an observed sediment mobility mapping technique.

    Science.gov (United States)

    Javernick, Luke; Redolfi, Marco; Bertoldi, Walter

    2017-04-01

2 mm) and ii) a novel time-lapse imagery technique used to identify areas of incipient motion. Using the numerical model Delft3D Flow, the experiments were simulated, and observed incipient motion and modeled shear stress were compared to evaluate the model's ability to accurately predict sediment transport. Comparing observed and modeled results identified a motion threshold and allowed the model's performance to be evaluated. To quantify model performance, the ratios of correctly predicted area to total area were calculated, yielding 75% inundation accuracy and 71% incipient motion accuracy. The inundation accuracy is comparable to reported field studies of braided rivers with highly accurate topographic acquisition. Nevertheless, 75% inundation accuracy is less than ideal, and likely suffers from the complicated topography, the shallow water depth (average 1 cm), and the corresponding model inaccuracies that could derive from even subtle 2 mm elevation errors. As shear stress calculations depend on inundation and depth, the sediment transport accuracies likely suffer from the same issues. Regardless, the sediment transport accuracies are very comparable to the inundation accuracies, which is an encouraging result. Marie Sklodowska-Curie Individual Fellowship: River-HMV, 656917
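
    The performance ratio described above is straightforward to compute on boolean rasters; the sketch below does so with synthetic stand-ins for the observed and modeled maps.

```python
import numpy as np

# Accuracy metric: correctly predicted area divided by total area,
# computed on boolean rasters. The arrays are synthetic stand-ins.
rng = np.random.default_rng(2)
observed = rng.random((100, 100)) > 0.5   # e.g. observed incipient motion
modeled = observed.copy()
flip = rng.random((100, 100)) < 0.29      # corrupt ~29% of cells
modeled[flip] = ~modeled[flip]

accuracy = np.mean(observed == modeled)   # correctly predicted / total
print(f"{accuracy:.0%}")                  # ~71%, as reported for motion
```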

  14. Spatial Modeling Techniques for Characterizing Geomaterials: Deterministic vs. Stochastic Modeling for Single-Variable and Multivariate Analyses

    Institute of Scientific and Technical Information of China (English)

    Katsuaki Koike

    2011-01-01

Sample data in the Earth and environmental sciences are limited in quantity and sampling location, and therefore sophisticated spatial modeling techniques are indispensable for accurate imaging of the complicated structures and properties of geomaterials. This paper presents several effective methods that are grouped into two categories depending on the nature of the regionalized data used. Type I data originate from plural populations, and type II data satisfy the prerequisite of stationarity and have distinct spatial correlations. For type I data, three methods are shown to be effective and demonstrated to produce plausible results: (1) a spline-based method, (2) a combination of a spline-based method with stochastic simulation, and (3) a neural network method. Geostatistics proves to be a powerful tool for type II data. Three new geostatistical approaches are presented with case studies: an application to directional data such as fractures, multi-scale modeling that incorporates a scaling law, and space-time joint analysis for multivariate data. Methods for improving the contribution of such spatial modeling to the Earth and environmental sciences are also discussed, and important future problems to be solved are summarized.

  15. Modeling seismic wave propagation across the European plate: structural models and numerical techniques, state-of-the-art and prospects

    Science.gov (United States)

    Morelli, Andrea; Danecek, Peter; Molinari, Irene; Postpischl, Luca; Schivardi, Renata; Serretti, Paola; Tondi, Maria Rosaria

    2010-05-01

beneath the Alpine mobile belt, and fast lithospheric signatures under the two main Mediterranean subduction systems (Aegean and Tyrrhenian). We validate this new model through comparison of recorded seismograms with simulations based on numerical codes (SPECFEM3D). To ease and increase model usage, we also propose the adoption of a common exchange format for tomographic earth models based on JSON, a lightweight data-interchange format supported by most high-level programming languages, and provide tools for manipulating and visualising models, described in this standard format, in Google Earth and GEON IDV. In the next decade seismologists will be able to reap new possibilities offered by exciting progress in general computing power and algorithmic development in computational seismology. Structural models, still based on classical approaches and modeling just a few parameters in each seismogram, will benefit from emerging techniques - such as full waveform fitting and fully nonlinear inversion - that are now just showing their potential. This will require extensive availability of supercomputing resources to earth scientists in Europe, as a tool to match the planned new massive data flow. We need to make sure that the whole apparatus, needed to fully exploit new data, will be widely accessible. To maximize this development, for instance to enable prompt modeling of ground shaking after a major earthquake, we will also need a better coordination framework that will enable us to share and amalgamate the abundant local information on earth structure - most often available but difficult to retrieve, merge and use. Comprehensive knowledge of earth structure and of best practices to model wave propagation can by all means be considered an enabling technology for further geophysical progress.

  16. Application of radiosurgical techniques to produce a primate model of brain lesions

    Directory of Open Access Journals (Sweden)

Jun Kunimatsu

    2015-04-01

Behavioral analysis of subjects with discrete brain lesions provides important information about the mechanisms of various brain functions. However, it is generally difficult to experimentally produce discrete lesions in deep brain structures. Here we show that a radiosurgical technique, which is used as an alternative treatment for brain tumors and vascular malformations, is applicable to creating non-invasive lesions in experimental animals for research in systems neuroscience. We delivered highly focused radiation (130–150 Gy at the isocenter) to the frontal eye field of macaque monkeys using a clinical linear accelerator (LINAC). The effects of irradiation were assessed by analyzing oculomotor performance along with magnetic resonance (MR) images before and up to 8 months following irradiation. In parallel with tissue edema indicated by MR images, deficits in saccadic and smooth pursuit eye movements were observed during the several days following irradiation. Although the initial signs of oculomotor deficits disappeared within a month, damage to the tissue and impaired eye movements gradually developed during the course of the subsequent 6 months. Postmortem histological examinations showed necrosis and hemorrhages within a large area of the white matter and, to a lesser extent, in the adjacent gray matter, centered on the irradiated target. These results indicate that the LINAC system is useful for making brain lesions in experimental animals, while suitable radiation parameters to generate more focused lesions need to be explored further. We propose the use of a radiosurgical technique for establishing animal models of brain lesions, and discuss the possible uses of this technique for functional neurosurgical treatments in humans.

  17. Spatial epidemiological techniques in cholera mapping and analysis towards a local scale predictive modelling

    Science.gov (United States)

    Rasam, A. R. A.; Ghazali, R.; Noor, A. M. M.; Mohd, W. M. N. W.; Hamid, J. R. A.; Bazlan, M. J.; Ahmad, N.

    2014-02-01

Cholera spatial epidemiology is the study of the spread and control of the disease's spatial pattern and epidemics. Previous studies have shown that multi-factorial causation such as human behaviour, ecology and other infectious risk factors influences disease outbreaks. Thus, the spatial pattern and possible interrelated factors of the outbreaks are crucial to explore in an in-depth study. This study focuses on the integration of geographical information system (GIS) and epidemiological techniques in exploratory analysis of the cholera spatial pattern and distribution in a selected district of Sabah. The Spatial Statistics and Pattern tools in ArcGIS and Microsoft Excel were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study, an epidemiological technique, was applied to investigate multiple outcomes of disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spreads easily from a place or person to others, especially within 1500 meters of an infected person or location. Although the cholera outbreaks in the districts are not critical, the disease could become endemic in crowded areas, unhygienic environments, and places close to contaminated water. It is also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton blooms, since these areas recorded higher case counts. GIS demonstrates a vital spatial epidemiological technique for determining the distribution pattern and generating hypotheses about the disease. Future research will apply more advanced geo-analysis methods and other disease risk factors to produce a significant local-scale predictive risk model of the disease in Malaysia.
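
    As one concrete example of the kind of point-pattern statistic used in such analyses, the sketch below computes the Clark-Evans nearest-neighbour ratio (values well below 1 indicate clustering) on synthetic case coordinates; the abstract does not specify which ArcGIS statistic was used, so this is an illustrative stand-in.

```python
import numpy as np
from scipy.spatial import cKDTree

# Clark-Evans nearest-neighbour ratio on synthetic "case" coordinates:
# observed mean nearest-neighbour distance over the expectation for a
# random pattern of the same density.
rng = np.random.default_rng(3)
centers = rng.uniform(0, 10_000, size=(5, 2))            # outbreak foci (m)
cases = np.vstack([c + rng.normal(scale=400, size=(30, 2)) for c in centers])

tree = cKDTree(cases)
d, _ = tree.query(cases, k=2)            # k=2: nearest neighbour besides self
observed = d[:, 1].mean()
area = np.ptp(cases[:, 0]) * np.ptp(cases[:, 1])
expected = 0.5 / np.sqrt(len(cases) / area)
print(observed / expected)               # << 1 for clustered cases
```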

  18. Alternative decision modelling techniques for the evaluation of health care technologies: Markov processes versus discrete event simulation.

    Science.gov (United States)

    Karnon, Jonathan

    2003-10-01

    Markov models have traditionally been used to evaluate the cost-effectiveness of competing health care technologies that require the description of patient pathways over extended time horizons. Discrete event simulation (DES) is a more flexible, but more complicated decision modelling technique, that can also be used to model extended time horizons. Through the application of a Markov process and a DES model to an economic evaluation comparing alternative adjuvant therapies for early breast cancer, this paper compares the respective processes and outputs of these alternative modelling techniques. DES displays increased flexibility in two broad areas, though the outputs from the two modelling techniques were similar. These results indicate that the use of DES may be beneficial only when the available data demonstrates particular characteristics.
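
    As a hedged illustration of the Markov side of this comparison, the sketch below runs a three-state cohort model over yearly cycles; the states, transition probabilities, costs and utilities are invented for illustration and are not taken from the breast cancer evaluation.

```python
import numpy as np

# Three-state Markov cohort model (well -> recurrence -> dead) with
# invented annual transition probabilities, per-cycle costs and utilities.
P = np.array([[0.90, 0.07, 0.03],
              [0.00, 0.75, 0.25],
              [0.00, 0.00, 1.00]])
cost = np.array([500.0, 8000.0, 0.0])      # cost per state per cycle
utility = np.array([0.95, 0.60, 0.0])      # QALY weight per state

state = np.array([1.0, 0.0, 0.0])          # cohort starts in the well state
total_cost = total_qaly = 0.0
for year in range(30):                     # 30 one-year cycles
    total_cost += state @ cost
    total_qaly += state @ utility
    state = state @ P
print(total_cost, total_qaly)
```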

  19. A Three-Component Model for Magnetization Transfer. Solution by Projection-Operator Technique, and Application to Cartilage

    Science.gov (United States)

    Adler, Ronald S.; Swanson, Scott D.; Yeung, Hong N.

    1996-01-01

A projection-operator technique is applied to a general three-component model for magnetization transfer, extending our previous two-component model [R. S. Adler and H. N. Yeung, J. Magn. Reson. A 104, 321 (1993), and H. N. Yeung, R. S. Adler, and S. D. Swanson, J. Magn. Reson. A 106, 37 (1994)]. The PO technique provides an elegant means of deriving a simple, effective rate equation in which there is natural separation of relaxation and source terms and allows incorporation of Redfield-Provotorov theory without any additional assumptions or restrictive conditions. The PO technique is extended to incorporate more general, multicomponent models. The three-component model is used to fit experimental data from samples of human hyaline cartilage and fibrocartilage. The fits of the three-component model are compared to the fits of the two-component model.

  20. An Analysis Technique/Automated Tool for Comparing and Tracking Analysis Modes of Different Finite Element Models

    Science.gov (United States)

    Towner, Robert L.; Band, Jonathan L.

    2012-01-01

    An analysis technique was developed to compare and track mode shapes for different Finite Element Models. The technique may be applied to a variety of structural dynamics analyses, including model reduction validation (comparing unreduced and reduced models), mode tracking for various parametric analyses (e.g., launch vehicle model dispersion analysis to identify sensitivities to modal gain for Guidance, Navigation, and Control), comparing models of different mesh fidelity (e.g., a coarse model for a preliminary analysis compared to a higher-fidelity model for a detailed analysis) and mode tracking for a structure with properties that change over time (e.g., a launch vehicle from liftoff through end-of-burn, with propellant being expended during the flight). Mode shapes for different models are compared and tracked using several numerical indicators, including traditional Cross-Orthogonality and Modal Assurance Criteria approaches, as well as numerical indicators obtained by comparing modal strain energy and kinetic energy distributions. This analysis technique has been used to reliably identify correlated mode shapes for complex Finite Element Models that would otherwise be difficult to compare using traditional techniques. This improved approach also utilizes an adaptive mode tracking algorithm that allows for automated tracking when working with complex models and/or comparing a large group of models.
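
    To make one of the indicators concrete, the sketch below computes a Modal Assurance Criterion matrix between two mode-shape sets and applies a naive best-match tracking rule; the mode shapes are random stand-ins, and a production comparison would also use mass-weighted cross-orthogonality as noted above.

```python
import numpy as np

# Modal Assurance Criterion between two mode-shape sets:
# MAC(i, j) = |phi_a_i . phi_b_j|^2 / (|phi_a_i|^2 |phi_b_j|^2).
def mac_matrix(phi_a, phi_b):
    num = np.abs(phi_a.T @ phi_b) ** 2
    den = np.outer(np.sum(phi_a**2, axis=0), np.sum(phi_b**2, axis=0))
    return num / den

rng = np.random.default_rng(4)
phi_a = rng.normal(size=(200, 5))                   # 5 modes of model A (DOF x mode)
phi_b = phi_a + 0.05 * rng.normal(size=(200, 5))    # slightly perturbed model B

mac = mac_matrix(phi_a, phi_b)
print(np.round(mac, 2))       # near-1 diagonal => correlated mode pairs
print(mac.argmax(axis=1))     # naive tracking: best match for each mode of A
```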

  1. Modelling and analysis of ozone concentration by artificial intelligent techniques for estimating air quality

    Science.gov (United States)

    Taylan, Osman

    2017-02-01

High ozone concentration is an important cause of air pollution, mainly due to its role in greenhouse gas emissions. Ozone is produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower atmosphere. Therefore, monitoring and controlling the quality of air in the urban environment is very important for public health. However, air quality prediction is a highly complex and non-linear process, and usually several attributes have to be considered. Artificial intelligence (AI) techniques can be employed to monitor and evaluate the ozone concentration level. The aim of this study is to develop an Adaptive Neuro-Fuzzy Inference System (ANFIS) approach to determine the influence of peripheral factors on air quality and pollution, a growing problem in Jeddah city due to ozone levels. The ozone concentration level was considered as a factor to predict Air Quality (AQ) under the prevailing atmospheric conditions. Using the Air Quality Standards of Saudi Arabia, the ozone concentration level was modelled from factors such as nitrogen oxides (NOx), atmospheric pressure, temperature, and relative humidity. Hence, an ANFIS model was developed to observe the ozone concentration level, and the model performance was assessed with testing data obtained from the monitoring stations established by the General Authority of Meteorology and Environment Protection of the Kingdom of Saudi Arabia. The outcomes of the ANFIS model were re-assessed by fuzzy quality charts using quality specifications and control limits based on US-EPA air quality standards. The results of the present study show that the ANFIS model is a comprehensive approach for the estimation and assessment of ozone levels and is a reliable approach for producing more genuine outcomes.

  2. The orthotopic left lung transplantation in rats: a valuable experimental model without using cuff technique.

    Science.gov (United States)

    Zhang, Qing-chun; Wang, Dian-jun; Yin, Ni; Yin, Bang-liang; Fang, Rui-xin; Xiao, Xue-jun; Wu, Yue-Heng

    2008-11-01

Advances in the field of clinical lung transplantation must rely on observations made in animal models. In this study, we introduce a new procedure in the rat, orthotopic left lung transplantation without the cuff technique, in which the donor pulmonary artery, pulmonary vein, and membranous parts of the bronchus were anastomosed continuously in the lumen with a mattress suture under a surgical microscope; meanwhile, a second, low-pressure perfusion through the pulmonary artery and turnover of the vascular stump were performed, which also made the vessel anastomosis easier. Transplantations were completed in 68 rats (89.5%); the mean time needed to suture the left lung hilar structures was 23.5 +/- 4.6 min. All lung grafts had good life-sustaining function because no cuff-induced granulation tissue formed at the bronchial anastomotic stoma, and three out of 12 allografts were observed to have active bronchiolitis obliterans lesions at 8 weeks after transplantation. This is a simple, valuable experimental model for studying lung transplantation and new therapies for preventing acute or chronic rejection.

  3. Modelling and validation of particle size distributions of supported nanoparticles using the pair distribution function technique

    Energy Technology Data Exchange (ETDEWEB)

    Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.; Martinez-Inesta, Maria

    2017-04-13

The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. This work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.
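
    The validation step can be mimicked in a few lines: fit a lognormal distribution to measured diameters and report its centre and spread. The sketch below uses synthetic stand-ins for the STEM measurements.

```python
import numpy as np
from scipy import stats

# Fit a lognormal size distribution to particle diameters (synthetic
# stand-ins for STEM data) and report its median, spread, and mean.
rng = np.random.default_rng(5)
diam_nm = rng.lognormal(mean=np.log(3.0), sigma=0.25, size=300)

shape, loc, scale = stats.lognorm.fit(diam_nm, floc=0)
print(f"median = {scale:.2f} nm, sigma = {shape:.2f}")
print(f"mean   = {stats.lognorm.mean(shape, loc, scale):.2f} nm")
```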

  4. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection

    Directory of Open Access Journals (Sweden)

    Declan T. Delaney

    2016-12-01

No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks.

  5. Solution Procedure for Transport Modeling in Effluent Recharge Based on Operator-Splitting Techniques

    Directory of Open Access Journals (Sweden)

    Shutang Zhu

    2008-01-01

The coupling of groundwater movement and reactive transport during groundwater recharge with wastewater leads to a complicated mathematical model, involving terms that describe convection-dispersion, adsorption/desorption and/or biodegradation, and so forth. Such a coupled model has proven very difficult to solve either analytically or numerically. The present study adopts operator-splitting techniques to decompose the coupled model into two submodels with different intrinsic characteristics. By applying an upwind finite difference scheme to the finite volume integral of the convection flux term, an implicit solution procedure is derived to solve the convection-dominant equation. The dispersion term is discretized in a standard central-difference scheme, while the dispersion-dominant equation is solved using either the preconditioned Jacobi conjugate gradient (PJCG) method or the Thomas method based on a local one-dimensional scheme. The solution method proposed in this study was applied successfully to the demonstration project of groundwater recharge with secondary effluent at the Gaobeidian sewage treatment plant (STP).
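
    A minimal 1-D version of the splitting described above, assuming illustrative parameters rather than those of the Gaobeidian study: each step advances convection with an implicit upwind solve, then dispersion with an explicit central difference (a dense direct solve stands in for the PJCG/Thomas solvers).

```python
import numpy as np

# Operator-splitting for 1-D convection-dispersion: implicit upwind
# convection sub-step followed by an explicit central-difference
# dispersion sub-step. All parameters are illustrative.
nx, dx, dt = 200, 1.0, 0.5
u, D = 0.8, 0.5                      # velocity, dispersion coefficient
c = np.zeros(nx); c[:10] = 1.0       # initial slug of solute

# Implicit upwind convection: (1 + r) c_new[i] - r c_new[i-1] = c_old[i]
r = u * dt / dx
A = np.eye(nx) * (1 + r) - np.eye(nx, k=-1) * r

for _ in range(100):
    c = np.linalg.solve(A, c)                         # convection sub-step
    lap = np.zeros(nx)
    lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]          # central difference
    c = c + D * dt / dx**2 * lap                      # dispersion sub-step

print(int(np.argmax(c)))   # front has advected roughly u*t/dx ~ 40 cells
```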

  6. Model-independent limits and constraints on extended theories of gravity from cosmic reconstruction techniques

    CERN Document Server

    de la Cruz-Dombriz, Álvaro; Luongo, Orlando; Reverberi, Lorenzo

    2016-01-01

The onset of dark energy domination depends on the particular gravitational theory driving the cosmic evolution. Model independent techniques are crucial to test both the present $\Lambda$CDM cosmological paradigm and alternative theories, making the least possible number of assumptions about the Universe. In this paper we investigate whether cosmography is able to distinguish between different gravitational theories, by determining bounds on model parameters for three different extensions of General Relativity, i.e. $k$-essence, $F(T)$ and $f(R)$ theories. We expand each class of theories in powers of redshift $z$ around the present time, making no additional assumptions. This procedure is an extension of previous work and can be seen as the most general approach for testing extended theories of gravity with cosmography. In the case of $F(T)$ and $f(R)$ theories, we show that some assumptions on model parameters often made in previous works are superfluous or unjustified. We use data from the Union2.1 SN cat...

  7. A new model to predict weak-lensing peak counts III. Filtering technique comparisons

    CERN Document Server

    Lin, Chieh-An; Pires, Sandrine

    2016-01-01

This is the third in a series of papers that develop a new and flexible model to predict weak-lensing (WL) peak counts, which have been shown to be a very valuable non-Gaussian probe of cosmology. In this paper, we compare the cosmological information extracted from WL peak counts using different filtering techniques of the galaxy shear data, including linear filtering with a Gaussian and two compensated filters (the starlet wavelet and the aperture mass), and the nonlinear filtering method MRLens. We present improvements to our model that account for realistic survey conditions, which are masks, shear-to-convergence transformations, and non-constant noise. We create simulated peak counts from our stochastic model, from which we obtain constraints on the matter density $\Omega_\mathrm{m}$, the power spectrum normalization $\sigma_8$, and the dark-energy parameter $w_0^\mathrm{de}$. We use two methods for parameter inference, a copula likelihood, and approximate Bayesian computation (ABC). We measure the conto...
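
    As a toy version of the linear-filtering branch of this comparison, the sketch below smooths a noise map with a Gaussian kernel and counts local maxima above a signal-to-noise threshold; a pure-noise map is used, so the counts probe only the noise peak distribution.

```python
import numpy as np
from scipy import ndimage

# Gaussian filtering + peak counting on a mock noisy convergence-like map.
rng = np.random.default_rng(6)
kappa = rng.normal(size=(512, 512))                  # pure-noise mock map
smooth = ndimage.gaussian_filter(kappa, sigma=4.0)   # linear Gaussian filter

nu = smooth / smooth.std()                           # signal-to-noise map
local_max = ndimage.maximum_filter(nu, size=3) == nu
peaks = local_max & (nu > 3.0)                       # 3-sigma peaks
print(int(peaks.sum()))
```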

  8. Modelling of Evaporator in Waste Heat Recovery System using Finite Volume Method and Fuzzy Technique

    Directory of Open Access Journals (Sweden)

    Jahedul Islam Chowdhury

    2015-12-01

The evaporator is an important component in the Organic Rankine Cycle (ORC)-based Waste Heat Recovery (WHR) system, since the effectiveness of heat transfer in this device is reflected in the efficiency of the system. When the WHR system operates under supercritical conditions, the heat transfer mechanism in the evaporator is hard to predict because the thermo-physical properties of the fluid change with temperature. Although a conventional finite volume model can successfully capture those changes in the evaporator of the WHR process, the computation time for this method is high. To reduce the computation time, this paper develops a new fuzzy-based evaporator model and compares its performance with the finite volume method. The results show that the fuzzy technique can be applied to predict the output of the supercritical evaporator in the waste heat recovery system and can significantly reduce the required computation time. The proposed model therefore has the potential to be used in real-time control applications.

  9. Understanding Methane Emission from Natural Gas Activities Using Inverse Modeling Techniques

    Science.gov (United States)

    Abdioskouei, M.; Carmichael, G. R.

    2015-12-01

Natural gas (NG) has been promoted as a bridge fuel that can smooth the transition from fossil fuels to zero-carbon energy sources, since it has lower carbon dioxide emissions and lower global warming impacts than other fossil fuels. However, the uncertainty in estimates of methane emissions from NG systems can lead to underestimation of the climate and environmental impacts of using NG as a replacement for coal. Accurate estimates of methane emissions from NG operations are crucial for evaluating the environmental impacts of NG extraction and, at a larger scale, the adoption of NG as a transitional fuel. However, there is great inconsistency among current estimates. Forward simulation of methane from oil and gas operation sites in the US is carried out based on NEI-2011 using the WRF-Chem model. Simulated values are compared against observations from different platforms, such as airborne measurements (FRAPPÉ field campaign) and ground-based measurements (NOAA Earth System Research Laboratory). A novel inverse modeling technique is used in this work to improve the model fit to the observed values and to constrain methane emissions from oil and gas extraction sites.
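
    The abstract does not spell out the inverse technique, so the sketch below shows a generic regularized scaling-factor inversion of the sort commonly used in emission studies: observations are modeled as footprints times regional emission scale factors, solved in ridge-regularized least squares on toy data.

```python
import numpy as np

# Generic scaling-factor inversion: y = H x + noise, where H holds
# source-region footprints and x scales prior emissions. All data are toy.
rng = np.random.default_rng(7)
H = rng.uniform(0, 1, size=(60, 4))        # footprints: 60 obs, 4 regions
x_true = np.array([1.4, 0.7, 1.0, 2.1])    # true scalings of prior emissions
y = H @ x_true + 0.05 * rng.standard_normal(60)

lam = 0.1                                  # ridge term pulls toward prior (= 1)
x_prior = np.ones(4)
x_hat = np.linalg.solve(H.T @ H + lam * np.eye(4),
                        H.T @ y + lam * x_prior)
print(np.round(x_hat, 2))                  # ~ [1.4, 0.7, 1.0, 2.1]
```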

  10. A Study on Project Planning Using the Deterministic and Probabilistic Models by Network Scheduling Techniques

    Directory of Open Access Journals (Sweden)

    Rama.S

    2017-03-01

Project planning is an important task in many areas, such as construction and resource allocation. A sequence of activities has to be performed to complete one task. Each activity has its own processing time, and together the activities determine the critical activities which affect the completion of the project. In this paper, probabilistic and deterministic models to determine the project completion time and the critical activities are considered. A case study on a building construction project has been performed to demonstrate the application of these models. The two project scheduling techniques, namely PERT and CPM, are used to determine numerically the different types of float times of each activity, and hence the critical path, which plays an important role in the project completion time. Also, a linear programming model has been developed to reduce the project completion time, which optimizes resource allocation. To apply these techniques numerically, primary data from a housing project company in a metropolitan city have been taken, the network diagram of the activities involved in the building construction project has been drawn, and the results are tabulated.
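
    As a minimal illustration of the CPM calculations described above, the sketch below runs the forward and backward passes over an invented five-activity network and flags zero-float activities as critical.

```python
# CPM forward/backward pass over a toy activity network (durations and
# precedence invented for illustration). Total float = 0 marks the
# critical path.
durations = {"A": 3, "B": 5, "C": 2, "D": 4, "E": 6}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}
order = ["A", "B", "C", "D", "E"]          # topological order

es, ef = {}, {}
for a in order:                            # forward pass: earliest times
    es[a] = max((ef[p] for p in preds[a]), default=0)
    ef[a] = es[a] + durations[a]

project = max(ef.values())
ls, lf = {}, {}
for a in reversed(order):                  # backward pass: latest times
    succs = [s for s in order if a in preds[s]]
    lf[a] = min((ls[s] for s in succs), default=project)
    ls[a] = lf[a] - durations[a]

critical = [a for a in order if ls[a] - es[a] == 0]
print(project, critical)                   # 18, ['A', 'B', 'D', 'E']
```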

  11. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: new material on fault isolation and identification, and fault detection in feedback control loops; extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...

  12. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection.

    Science.gov (United States)

    Delaney, Declan T; O'Hare, Gregory M P

    2016-12-01

    No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks.

  13. Feature Specific Criminal Mapping using Data Mining Techniques and Generalized Gaussian Mixture Model

    Directory of Open Access Journals (Sweden)

    Uttam Mande

    2012-06-01

A great deal of research has aimed at mapping criminals to crimes, yet the crime rate continues to rise sharply due to the gap between the optimal use of technologies and investigation. This has created scope for the development of new methodologies in the area of crime investigation using techniques based on data mining, image processing, forensics, and social mining. This paper presents a model using a new methodology for mapping a criminal to a crime. The model clusters the criminal data based on the type of crime. When a crime occurs, the criminal is mapped based on the features specified by an eyewitness. We propose a novel methodology that uses a Generalized Gaussian Mixture Model to map the features specified by the eyewitness to the features of criminals who have committed the same type of crime; if no criminal is mapped, the suspect table is checked and reports are generated.
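
    As an illustrative stand-in for the mapping step (an ordinary Gaussian mixture substitutes for the paper's generalized variant, and the feature set is invented), the sketch below fits a mixture to offender feature vectors for one crime type and scores an eyewitness description against the components.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a Gaussian mixture to numeric feature vectors of known offenders for
# one crime type, then score an eyewitness feature vector against it.
rng = np.random.default_rng(8)
# Invented feature columns: height (cm), age, build index.
offenders = np.vstack([rng.normal([170, 30, 5], [5, 4, 1], size=(40, 3)),
                       rng.normal([185, 45, 8], [5, 4, 1], size=(40, 3))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(offenders)
witness = np.array([[184, 44, 7.5]])       # eyewitness-specified features
print(gmm.predict(witness), gmm.predict_proba(witness).round(2))
```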

  14. Satellite Remote Sensing with Artificial Neural Network Modeling Techniques for Water Quality Monitoring

    Science.gov (United States)

    Kuo, Y. C.; Chen, C. F.

    2016-12-01

The analyzed parameters of the water quality samples in Lake Nicaragua and Lake Managua include basic physical and chemical water quality parameters, nutrients, bacteria and zooplankton indices, and heavy metals and organic compounds in the sediments. Five parameters are tested to assess lake eutrophication. To associate the samples with satellite data, the analysis aims to establish a set of mathematical transformations that convert the spectral responses of satellite imagery into water quality parameters and thereby calculate the concentrations of those parameters in both lakes. The sampling period took place during the rainy season, and the heavily cloud-covered satellite imagery did not provide complete data for the analysis. Therefore, we used mathematical techniques to recompose an image covering the complete lake areas. Building the water quality models with linear equations, the results suggested that chlorophyll was modelled most accurately, followed by suspended solids, total phosphorus and total nitrogen. Fecal coliform bacteria, of all the parameters, performed worst in testing accuracy.

  15. Mouse models and techniques for the isolation of the diabetic endothelium.

    Science.gov (United States)

    Darrow, April L; Maresh, J Gregory; Shohet, Ralph V

    2013-01-01

    Understanding the molecular mechanisms underlying diabetic endothelial dysfunction is necessary in order to improve the cardiovascular health of diabetic patients. Previously, we described an in vivo, murine model of insulin resistance induced by feeding a high-fat diet (HFD) whereby the endothelium may be isolated by fluorescence-activated cell sorting (FACS) based on Tie2-GFP expression and cell-surface staining. Here, we apply this model to two new strains of mice, ScN/Tie2-GFP and ApoE(-/-)/Tie2-GFP, and describe their metabolic responses and endothelial isolation. ScN/Tie2-GFP mice, which lack a functional toll-like receptor 4 (TLR4), display lower fasting glucose and insulin levels and improved glucose tolerance compared to Tie2-GFP mice, suggesting that TLR4 deficiency decreases susceptibility to the development of insulin resistance. ApoE(-/-)/Tie2-GFP mice display elevated glucose and cholesterol levels versus Tie2-GFP mice. Endothelial isolation by FACS achieves a pure population of endothelial cells that retain GFP fluorescence and endothelial functions. Transcriptional analysis of the aortic and muscle endothelium isolated from ApoE(-/-)/Tie2-GFP mice reveals a reduced endothelial response to HFD compared to Tie2-GFP mice, perhaps resulting from preexisting endothelial dysfunction in the hypercholesterolemic state. These mouse models and endothelial isolation techniques are valuable for assessing diabetic endothelial dysfunction and vascular responses in vivo.

  16. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection †

    Science.gov (United States)

    Delaney, Declan T.; O’Hare, Gregory M. P.

    2016-01-01

    No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks. PMID:27916929

  17. Rotator cuff repair: a review of surgical techniques, animal models, and new technologies under development.

    Science.gov (United States)

    Deprés-Tremblay, Gabrielle; Chevrier, Anik; Snow, Martyn; Hurtig, Mark B; Rodeo, Scott; Buschmann, Michael D

    2016-12-01

    Rotator cuff tears are the most common musculoskeletal injury occurring in the shoulder. Current surgical repair fails to heal in 20% to 95% of patients, depending on age, size of the tear, smoking, time of repair, tendon quality, muscle quality, healing response, and surgical treatments. These problems are worsened by the limited healing potential of injured tendons attributed to the presence of degenerative changes and relatively poor vascularity of the cuff tendons. Development of new techniques to treat rotator cuff tears requires testing in animal models to assess safety and efficacy before clinical testing. Hence, it is important to evaluate appropriate animal models for rotator cuff research with degeneration of tendons, muscular atrophy, and fatty infiltration similar to humans. This report reviews current clinical treatments and preclinical approaches for rotator cuff tear repair. The review will focus on current clinical surgical treatments, new repair strategies under clinical and preclinical development, and will also describe different animal models available for rotator cuff research. These findings and future directions for rotator cuff tear repair will be discussed. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  18. Optimized design model for sediment-trapped dam by GIS techniques

    Institute of Scientific and Technical Information of China (English)

    LI Fabin; TU Jianjun; HU Kaiheng

    2003-01-01

The sediment-trapped dam (STD) is one of the major engineering measures for preventing debris flow, while the design of such an STD has proven to be a most complicated piece of systems engineering. Supported by geographical information system (GIS) methods and techniques, this paper presents an optimized STD design model built by calculating STD height and reservoir capacity, designing the cross section, and analyzing load capacity and structural stability. Guided by the principle of taking dam stability as the constraint condition and formulating the optimized design as a nonlinear programming problem, this model makes possible a precise calculation of the STD volume and its reservoir capacity, aiming at the minimum ratio between project investment and reservoir capacity. The model preliminarily achieves the optimized design of the important STD parameters, involving STD volume (project investment), dam stability and reservoir capacity (sediment trapped by the STD), and also provides a new solution approach for the optimized design of debris flow prevention projects.
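
    The optimization stated above can be prototyped directly; in the sketch below, invented placeholder cost and capacity functions (not the paper's) are used to minimize the investment/capacity ratio over dam height, with simple bounds standing in for the stability constraint.

```python
from scipy.optimize import minimize

# Toy nonlinear program: choose dam height h to minimize the
# investment/capacity ratio. Both functions are invented placeholders.
def cost(h):       # project investment: fixed works plus height-driven term
    return 5000.0 + 200.0 * h**2

def capacity(h):   # trapped-sediment capacity of the reservoir
    return 1500.0 * h**1.5

res = minimize(lambda v: cost(v[0]) / capacity(v[0]),
               x0=[5.0], bounds=[(2.0, 15.0)])   # bounds ~ stability limits
print(res.x, res.fun)   # interior optimum near h ~ 8.7 for these placeholders
```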

  19. Modeling techniques related to taxation policies when dealing with negative externalities

    Directory of Open Access Journals (Sweden)

    Dumitru MARIN

    2006-01-01

Recently, in February 2006, the Romanian government approved the introduction of a tax on vices, levied on products that heavily affect health when consumed (cigarettes and alcohol). The tax is set at 10 euro per 1,000 cigarettes, meaning 20 eurocents per cigarette pack, and 200 euro/hl of alcohol. It is estimated that the measure will lead to significant price increases and also to increased budgetary revenue. The price of a cigarette package is estimated to increase on average by 37-38%, according to a study conducted by Phillip Morris International. Following this decision, the purpose of this paper is to analyze the impact of the introduction of the tax on vices on social welfare through a generalized mathematical model, suitable for economies where negative externalities are present. In particular, the model is suitable for analyzing the consequences of tobacco consumption, as this is one of the possible sources of negative externalities in an economy. Before presenting the modeling techniques, the paper contains an introductory part with data from the Romanian tobacco market and arguments related to taxes on tobacco consumption.

  20. Aortic valve repair via neo-chordae technique: mechanistic insight through numerical modelling.

    Science.gov (United States)

    Votta, Emiliano; Paroni, Luca; Conti, Carlo A; Pelosi, Alessandra; Mangini, Andrea; D'Alesio, Paolo; Vismara, Riccardo; Antona, Carlo; Redaelli, Alberto

    2012-05-01

    Recently, the neo-chordae technique (NCT) was proposed to stabilize the surgical correction of isolated aortic valve (AV) prolapse. Neo-chordae are inserted into the corrected leaflet to drive its closure by minimal tensions and prevent relapses. In a previous in vitro study we analysed the NCT effects on healthy aortic roots (ARs). Here we extend that analysis via finite element models (FEMs). After successfully replicating the experimental conditions for validation purposes, we modified our AR FEM, obtaining a continent AV with minor isolated prolapse, thus representing a realistic clinical scenario. We then simulated the NCT, and systematically assessed the acute effects of changing neo-chordae length, opening angle, asymmetry and insertion on the aorta. In the baseline configuration the NCT restored physiological AV dynamics and coaptation, without inducing abnormal leaflet stresses. This outcome was notably sensitive only to neo-chordae length, suggesting that the NCT is a potentially easy-to-standardize technique. However, this parameter is crucial: major shortenings (6 mm) prevent coaptation and increase leaflet stresses by 359 kPa, beyond the yield limit. Minor shortenings (2-4 mm) only induce a negligible stress increase and mild leaflet tethering, which however may hamper the long-term surgical outcome.

  1. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    Science.gov (United States)

    Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas

    2003-01-01

Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality of service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well suited to the QoS demands of next-generation applications.

  2. Modeling and visualization techniques for virtual stenting of aneurysms and stenoses.

    Science.gov (United States)

    Egger, Jan; Grosskopf, Stefan; Nimsky, Christopher; Kapur, Tina; Freisleben, Bernd

    2012-04-01

    In this work, we present modeling and visualization techniques for virtual stenting of aneurysms and stenoses. In particular, contributions to support the computer-aided treatment of artery diseases - artery enlargement (aneurysm) and artery contraction (stenosis) - are made. If an intervention takes place, there are two different treatment alternatives for this kind of artery diseases: open surgery and minimally invasive (endovascular) treatment. Computer-assisted optimization of endovascular treatments is the main focus of our work. In addition to stent simulation techniques, we also present a computer-aided simulation of endoluminal catheters to support the therapy-planning phase. The stent simulation is based on a three-dimensional Active Contour Method and is applicable to both non-bifurcated (I-stents) and bifurcated stents (Y-stents). All methods are introduced in detail and are evaluated with phantom datasets as well as with real patient data from the clinical routine. Additionally, the clinical prototype that is based upon these methods is described.

  3. Techniques and Technology to Revise Content Delivery and Model Critical Thinking in the Neuroscience Classroom.

    Science.gov (United States)

    Illig, Kurt R

    2015-01-01

    Undergraduate neuroscience courses typically involve highly interdisciplinary material, and it is often necessary to use class time to review how principles of chemistry, math and biology apply to neuroscience. Lecturing and Socratic discussion can work well to deliver information to students, but these techniques can lead students to feel more like spectators than participants in a class, and do not actively engage students in the critical analysis and application of experimental evidence. If one goal of undergraduate neuroscience education is to foster critical thinking skills, then the classroom should be a place where students and instructors can work together to develop them. Students learn how to think critically by directly engaging with course material, and by discussing evidence with their peers, but taking classroom time for these activities requires that an instructor find a way to provide course materials outside of class. Using technology as an on-demand provider of course materials can give instructors the freedom to restructure classroom time, allowing students to work together in small groups and to have discussions that foster critical thinking, and allowing the instructor to model these skills. In this paper, I provide a rationale for reducing the use of traditional lectures in favor of more student-centered activities, I present several methods that can be used to deliver course materials outside of class and discuss their use, and I provide a few examples of how these techniques and technologies can help improve learning outcomes.

  4. Pressure Measurement Techniques for Abdominal Hypertension: Conclusions from an Experimental Model

    Directory of Open Access Journals (Sweden)

    Sascha Santosh Chopra

    2015-01-01

    Full Text Available Introduction. Intra-abdominal pressure (IAP) measurement is an indispensable tool for the diagnosis of abdominal hypertension. Different techniques have been described in the literature and applied in the clinical setting. Methods. A porcine model was created to simulate an abdominal compartment syndrome ranging from baseline IAP to 30 mmHg. Three different measurement techniques were applied: telemetric piezoresistive probes at two different sites (epigastric and pelvic) for direct pressure measurement, and intragastric and intravesical probes for indirect measurement. Results. The mean difference between the invasive IAP measurements using telemetric pressure probes and the IVP measurements was −0.58 mmHg. The bias between the invasive IAP measurements and the IGP measurements was 3.8 mmHg. Compared to the realistic results of the intraperitoneal and intravesical measurements, the intragastric data showed a strong tendency towards decreased values. The hydrostatic character of the IAP was eliminated at high-pressure levels. Conclusion. We conclude that intragastric pressure measurement is potentially hazardous and might lead to inaccurately low intra-abdominal pressure values. This may result in a missed diagnosis of elevated abdominal pressure or even ACS. The intravesical measurements showed the most accurate values during baseline pressure and both high-pressure plateaus.
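
    The reported mean differences are Bland-Altman-style agreement statistics. A minimal sketch of how such a bias and its 95% limits of agreement are computed from paired readings follows; the numbers are invented, not the study's data.

      import numpy as np

      # Hypothetical paired readings (mmHg): direct telemetric IAP vs intravesical.
      iap = np.array([5.1, 10.2, 15.3, 20.1, 25.4, 30.0])
      ivp = np.array([5.9, 10.7, 15.8, 20.8, 26.1, 30.5])

      diff = iap - ivp
      bias = diff.mean()                    # mean difference (the reported bias)
      half = 1.96 * diff.std(ddof=1)        # 95% limits of agreement half-width
      print(f"bias = {bias:.2f} mmHg, limits of agreement = "
            f"[{bias - half:.2f}, {bias + half:.2f}] mmHg")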

  5. The Potential for Zinc Stable Isotope Techniques and Modelling to Determine Optimal Zinc Supplementation

    Directory of Open Access Journals (Sweden)

    Cuong D. Tran

    2015-05-01

    Full Text Available It is well recognised that zinc deficiency is a major global public health issue, particularly in young children in low-income countries with diarrhoea and environmental enteropathy. Zinc supplementation is regarded as a powerful tool to correct zinc deficiency as well as to treat a variety of physiologic and pathologic conditions. However, the dose and frequency of its use, as well as the choice of zinc salt, are not clearly defined, regardless of whether it is used to treat a disease or correct a nutritional deficiency. We discuss the application of zinc stable isotope tracer techniques to assess zinc physiology, metabolism and homeostasis, and how these can address knowledge gaps in zinc supplementation pharmacokinetics. This may help to resolve the optimal dose, frequency, length of administration, timing of delivery relative to food intake, and choice of zinc compound. It appears that long-term preventive supplementation can be administered much less frequently than daily, but more research needs to be undertaken to better understand how best to intervene with zinc in children at risk of zinc deficiency. Stable isotope techniques, linked with saturation response and compartmental modelling, also have the potential to assist in the continued search for simple markers of zinc status in health, malnutrition and disease.
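
    Compartmental modelling of tracer data of the kind mentioned here is typically posed as a small system of linear ODEs. The sketch below integrates a generic two-compartment model (plasma and tissue, with elimination); the rate constants are invented for illustration and are not measured zinc kinetics.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Generic two-compartment tracer model: plasma <-> tissue, with urinary loss.
      # Rate constants (1/day) are invented, not measured zinc kinetics.
      k12, k21, k_el = 0.8, 0.3, 0.1

      def rhs(t, y):
          plasma, tissue = y
          return [-(k12 + k_el) * plasma + k21 * tissue,
                  k12 * plasma - k21 * tissue]

      sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 0.0], t_eval=np.linspace(0, 30, 7))
      for t, p in zip(sol.t, sol.y[0]):
          print(f"day {t:4.1f}: plasma tracer fraction = {p:.3f}")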

  6. Utilizing Statistical Semantic Similarity Techniques for Ontology Mapping, with Applications to AEC Standard Models

    Institute of Scientific and Technical Information of China (English)

    Pan Jiayi; Chin-Pang Jack Cheng; Gloria T. Lau; Kincho H. Law

    2008-01-01

    The objective of this paper is to introduce three semi-automated approaches for ontology mapping using relatedness analysis techniques. In the architecture, engineering, and construction (AEC) industry, there exist a number of ontological standards to describe the semantics of building models. Although the standards share similar scopes of interest, the task of comparing and mapping concepts among standards is challenging due to their differences in terminologies and perspectives. Ontology mapping is therefore necessary to achieve information interoperability, which allows two or more information sources to exchange data and to re-use the data for further purposes. The attribute-based approach, corpus-based approach, and name-based approach presented in this paper adopt statistical relatedness analysis techniques to discover related concepts from heterogeneous ontologies. A pilot study is conducted on the IFC and CIS/2 ontologies to evaluate the approaches. Preliminary results show that the attribute-based approach outperforms the other two approaches in terms of precision and F-measure.
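
    As a flavor of the name-based relatedness idea and the reported metrics, the sketch below pairs concepts from two toy vocabularies by token-level cosine similarity and scores the mapping with precision and F-measure. The concept names and the gold standard are invented; the paper's actual corpora and weighting schemes are not reproduced.

      import math
      from collections import Counter

      def cosine(a, b):
          """Token-level cosine similarity between two concept names."""
          va, vb = Counter(a.lower().split()), Counter(b.lower().split())
          dot = sum(va[t] * vb[t] for t in va)
          return dot / (math.sqrt(sum(v * v for v in va.values())) *
                        math.sqrt(sum(v * v for v in vb.values())))

      ifc = ["structural beam member", "column member", "steel plate"]
      cis2 = ["beam element", "column element", "plate part"]

      # Greedy name-based mapping: pair each concept with its best match.
      mapping = {c: max(cis2, key=lambda d: cosine(c, d)) for c in ifc}
      print(mapping)

      # Evaluation against a made-up gold standard.
      gold = {"structural beam member": "beam element",
              "column member": "column element",
              "steel plate": "plate part"}
      tp = sum(mapping[c] == gold[c] for c in gold)
      precision, recall = tp / len(mapping), tp / len(gold)
      f1 = 2 * precision * recall / (precision + recall)
      print(f"precision={precision:.2f} recall={recall:.2f} F={f1:.2f}")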

  7. Development of acoustic model-based iterative reconstruction technique for thick-concrete imaging

    Science.gov (United States)

    Almansouri, Hani; Clayton, Dwight; Kisner, Roger; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2016-02-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.
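
    At its core, MBIR computes a regularized inverse of a physics-based forward model, minimizing ||y − Ax||² plus a prior term. The sketch below solves a toy linear version with a quadratic prior by gradient descent; the paper's acoustic forward model is replaced here by a random linear operator and the prior by a simple quadratic one, so every quantity below is an illustrative assumption.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy linear forward model y = A x + noise, standing in for the acoustic model.
      n_meas, n_pix = 80, 40
      A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)
      x_true = np.zeros(n_pix); x_true[10:15] = 1.0            # a small "reflector"
      y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

      # MAP estimate: minimize ||y - Ax||^2 + lam*||x||^2 by gradient descent.
      lam, step = 0.05, 0.5
      x = np.zeros(n_pix)
      for _ in range(500):
          x -= step * (A.T @ (A @ x - y) + lam * x)

      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))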

  8. Computational modelling of the mechanics of trabecular bone and marrow using fluid structure interaction techniques.

    Science.gov (United States)

    Birmingham, E; Grogan, J A; Niebur, G L; McNamara, L M; McHugh, P E

    2013-04-01

    Bone marrow found within the porous structure of trabecular bone provides a specialized environment for numerous cell types, including mesenchymal stem cells (MSCs). Studies have sought to characterize the mechanical environment imposed on MSCs; however, a particular challenge is that marrow displays the characteristics of a fluid while surrounded by bone that is subject to deformation, and previous experimental and computational studies have been unable to fully capture the resulting complex mechanical environment. The objective of this study was to develop a fluid structure interaction (FSI) model of trabecular bone and marrow to predict the mechanical environment of MSCs in vivo and to examine how this environment changes during osteoporosis. An idealized repeating unit was used to compare FSI techniques to a computational fluid dynamics only approach. These techniques were used to determine the effect of lower bone mass and different marrow viscosities, representative of osteoporosis, on the shear stress generated within bone marrow. The results show that shear stresses generated within bone marrow under physiological loading conditions are within the range known to stimulate a mechanobiological response in MSCs in vitro. Additionally, lower bone mass leads to an increase in the shear stress generated within the marrow, while a decrease in bone marrow viscosity reduces this generated shear stress.

  9. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2015-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.

  10. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2016-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.

  11. Analysis of arbitrary defects in photonic crystals by use of the source-model technique.

    Science.gov (United States)

    Ludwig, Alon; Leviatan, Yehuda

    2004-07-01

    A novel method derived from the source-model technique is presented to solve the problem of scattering of an electromagnetic plane wave by a two-dimensional photonic crystal slab that contains an arbitrary defect (perturbation). In this method, the electromagnetic fields in the perturbed problem are expressed in terms of the field due to the periodic currents obtained from a solution of the corresponding unperturbed problem plus the field due to yet-to-be-determined correction current sources placed in the vicinity of the perturbation. Appropriate error measures are suggested, and a few representative structures are presented and analyzed to demonstrate the versatility of the proposed method and to provide physical insight into waveguiding and defect coupling mechanisms typical of finite-thickness photonic crystal slabs.
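
    The source-model technique represents fields as superpositions of fictitious current sources whose amplitudes are fitted to the boundary conditions. As a minimal illustration of that mechanic (not of the paper's perturbation scheme for photonic-crystal defects), the sketch below solves plane-wave scattering by a perfectly conducting cylinder with line sources placed inside it; the geometry and discretization parameters are assumptions.

      import numpy as np
      from scipy.special import hankel2

      # Fictitious-source sketch: TM plane wave scattered by a PEC cylinder.
      k, a = 2 * np.pi, 1.0                      # wavenumber, cylinder radius
      n_src, n_test = 40, 120

      src = 0.7 * a * np.exp(1j * np.linspace(0, 2*np.pi, n_src, endpoint=False))
      bnd = a * np.exp(1j * np.linspace(0, 2*np.pi, n_test, endpoint=False))

      # Column j: field of unit line source j evaluated at the boundary points.
      Z = hankel2(0, k * np.abs(bnd[:, None] - src[None, :]))
      e_inc = np.exp(-1j * k * bnd.real)          # plane wave travelling along +x

      # Fit source amplitudes so that the total tangential E field vanishes.
      amps, *_ = np.linalg.lstsq(Z, -e_inc, rcond=None)
      res = np.linalg.norm(Z @ amps + e_inc) / np.linalg.norm(e_inc)
      print(f"relative boundary-condition residual: {res:.2e}")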

  12. Source-model technique analysis of electromagnetic scattering by surface grooves and slits.

    Science.gov (United States)

    Trotskovsky, Konstantin; Leviatan, Yehuda

    2011-04-01

    A computational tool, based on the source-model technique (SMT), for analysis of electromagnetic wave scattering by surface grooves and slits is presented. The idea is to use a superposition of the solution of the unperturbed problem and local corrections in the groove/slit region (the grooves and slits are treated as perturbations). In this manner, the solution is obtained in a much faster way than solving the original problem. The proposed solution is applied to problems of grooves and slits in otherwise planar or periodic surfaces. Grooves and slits of various shapes, both smooth ones as well as ones with edges, empty or filled with dielectric material, are considered. The obtained results are verified against previously published data.

  13. Image reconstruction algorithms for electrical capacitance tomography based on ROF model using new numerical techniques

    Science.gov (United States)

    Chen, Jiaoxuan; Zhang, Maomao; Liu, Yinyan; Chen, Jiaoliao; Li, Yi

    2017-03-01

    Electrical capacitance tomography (ECT) is a promising technique applied in many fields. However, the solutions for ECT are not unique and are highly sensitive to measurement noise. To preserve the shape of the reconstructed object while tolerating noisy data, a Rudin–Osher–Fatemi (ROF) model with total variation regularization is applied to image reconstruction in ECT. Two numerical methods, simplified augmented Lagrangian (SAL) and the accelerated alternating direction method of multipliers (AADMM), are introduced to address these problems in ECT. The effect of the parameters and the number of iterations for the different algorithms, and of the noise level in the capacitance data, is discussed. Both simulation and experimental tests were carried out to validate the feasibility of the proposed algorithms, compared to the Landweber iteration (LI) algorithm. The results show that the SAL and AADMM algorithms can handle a high level of noise and that the AADMM algorithm outperforms the other algorithms in identifying the object from its background.
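
    The ROF objective being solved is min_u 0.5·||u − f||² + λ·TV(u). The sketch below minimizes a smoothed version of it by plain gradient descent on a toy image; it stands in for, and is much cruder than, the SAL/AADMM solvers studied in the paper, and all parameter values are assumptions.

      import numpy as np

      def tv_denoise(f, lam=0.15, eps=0.1, step=0.05, iters=300):
          """Gradient descent on a smoothed ROF objective,
          0.5*||u - f||^2 + lam * sum(sqrt(|grad u|^2 + eps^2))."""
          u = f.copy()
          for _ in range(iters):
              ux = np.roll(u, -1, axis=1) - u          # forward differences
              uy = np.roll(u, -1, axis=0) - u
              mag = np.sqrt(ux**2 + uy**2 + eps**2)
              px, py = ux / mag, uy / mag
              div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
              u -= step * ((u - f) - lam * div)        # descend the objective
          return u

      rng = np.random.default_rng(1)
      truth = np.zeros((64, 64)); truth[20:44, 20:44] = 1.0   # square "object"
      noisy = truth + 0.3 * rng.standard_normal(truth.shape)
      print("noisy RMSE   :", np.sqrt(np.mean((noisy - truth) ** 2)))
      print("denoised RMSE:", np.sqrt(np.mean((tv_denoise(noisy) - truth) ** 2)))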

  14. FORMULATING AN OPTIMAL DRAINAGE MODEL FOR THE CALABAR AREA USING CARTOGRAPHIC TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Innocent A. Ugbong

    2016-01-01

    Full Text Available In order to formulate an optimal drainage model for the Calabar area, the Calabar drainage system was studied using cartographic techniques to analyze its surface run-off and channel characteristics and determine how floods are generated. A morphological analysis was carried out using detailed contour maps prepared for the study area. The “blue line” and “contour crenulations” methods were used to recreate the expected run-off channels or drainage networks under natural, non-urbanized conditions. A drainage structure with 6 major basins and 73 sub-basins was discovered. Existing storm drains were constructed without regard to this natural structure, and so floods were generated.

  15. In vivo simulated in vitro model of Jasminum sambac (Linn.) using mammalian liver slice technique

    Institute of Scientific and Technical Information of China (English)

    Kalaiselvi M; Narmadha R; Ragavendran P; Arul Raj; Sophia D; Ravi Kumar G; Gomathi D; Uma C; Kalaivani K

    2011-01-01

    Objective: To evaluate the antioxidant status of Jasminum sambac (J. sambac) using the mammalian liver slice technique in an in vivo simulated in vitro model. Methods: The antioxidant activity of J. sambac was studied against H2O2-induced free radicals in goat liver. Results: Administration of H2O2 caused a significant decline in the levels of antioxidant enzymes in the liver homogenate. Pretreatment with J. sambac significantly protected those levels, keeping them within the normal range. The plant also normalized lipid peroxidation, which showed that the methanolic extract of J. sambac has a potent antilipid peroxidative effect. Conclusions: The present study suggests that J. sambac has a potent antioxidant effect and can be used to treat various diseases caused by free radicals.

  16. The Run up Tsunami Modeling in Bengkulu using the Spatial Interpolation of Kriging Technique

    Directory of Open Access Journals (Sweden)

    Yulian Fauzi

    2014-12-01

    Full Text Available This research aims to design a tsunami hazard zone under scenarios of varying tsunami run-up height, based on land use, slope and distance from the shoreline. The method used in this research is spatial modelling with GIS via ordinary kriging interpolation. The best kriging interpolation method in this study was the circular kriging method, which produced a good semivariogram and smaller RMSE values than the other kriging methods. The results show that the area affected by tsunami inundation depends on run-up height, slope and land use. For a run-up of 30 meters, the flooded area is about 3,148.99 hectares, or 20.7% of the total area of the city of Bengkulu.
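
    Ordinary kriging itself amounts to solving a small linear system built from a covariance (or semivariogram) model. The sketch below uses a Gaussian covariance as a stand-in for the paper's circular model; the coordinates, depths, sill and range are invented.

      import numpy as np

      def ordinary_kriging(xy, z, query, sill=1.0, rng_param=3000.0):
          """Ordinary kriging with a Gaussian covariance model (a generic
          stand-in for the circular semivariogram used in the study)."""
          def cov(d):
              return sill * np.exp(-(d / rng_param) ** 2)
          n = len(z)
          d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
          K = np.ones((n + 1, n + 1)); K[:n, :n] = cov(d); K[n, n] = 0.0
          d0 = np.linalg.norm(xy - query, axis=-1)
          rhs = np.append(cov(d0), 1.0)       # last entry: unbiasedness constraint
          w = np.linalg.solve(K, rhs)
          return w[:n] @ z                    # kriged estimate at the query point

      # Hypothetical observations (x, y in metres; z = inundation depth in m).
      pts = np.array([[0., 0.], [1000., 200.], [2500., 800.],
                      [400., 1500.], [3000., 2500.]])
      depth = np.array([8.0, 6.5, 3.2, 5.1, 1.0])
      print("estimated depth:", ordinary_kriging(pts, depth, np.array([1200., 900.])))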

  17. A review of modeling techniques for advanced effects in shape memory alloy behavior

    Science.gov (United States)

    Cisse, Cheikh; Zaki, Wael; Ben Zineb, Tarak

    2016-10-01

    The paper reviews and discusses various techniques used in the literature, at the micro, micro-macro and macro scales, for modeling complex behaviors observed in shape memory alloys (SMAs) that go beyond the core pseudoelastic and shape memory effects. These behaviors, which will be collectively referred to herein as ‘secondary effects’, include mismatch between austenite and martensite moduli, martensite reorientation under nonproportional multiaxial loading, slip and transformation-induced plasticity and their influence on martensite transformation, strong thermomechanical coupling and the influence of loading rate, tensile-compressive asymmetry, and the formation of internal loops due to incomplete phase transformation. In addition, because of their importance for practical design considerations, the paper discusses functional and structural fatigue, and fracture mechanics of SMAs.

  18. Suppression of Spiral Waves by Voltage Clamp Techniques in a Conductance-Based Cardiac Tissue Model

    Institute of Scientific and Technical Information of China (English)

    YU Lian-Chun; MA Jun; ZHANG Guo-Yong; CHEN Yong

    2008-01-01

    A new control method is proposed to control the spatio-temporal dynamics in excitable media described by the Morris-Lecar cell model. It is confirmed that successful suppression of spiral waves can be obtained by spatially clamping the membrane voltage of the excitable cells. Low-voltage clamping induces breakup of spiral waves, and the fragments are soon absorbed by the low-voltage obstacles, whereas high-voltage clamping generates travelling waves that annihilate spiral waves through collision with them. However, each method has its shortcomings. A two-step method that combines both the low and high voltage clamp techniques is therefore presented as a possible way out of this predicament.
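
    The underlying cell dynamics are the two-variable Morris-Lecar equations. The sketch below integrates a single cell with a commonly used parameter set (the Rinzel-Ermentrout Hopf regime, an assumption with respect to this paper) and mimics the spatial low-voltage clamp by pinning the membrane voltage of this one cell, which silences its oscillation.

      import numpy as np

      # Morris-Lecar membrane model (Rinzel-Ermentrout Hopf parameter set).
      C, gCa, gK, gL = 20.0, 4.4, 8.0, 2.0          # uF/cm^2, mS/cm^2
      VCa, VK, VL = 120.0, -84.0, -60.0             # mV
      V1, V2, V3, V4, phi, I = -1.2, 18.0, 2.0, 30.0, 0.04, 100.0

      def step(V, w, dt, clamp=None):
          if clamp is not None:
              V = clamp                              # membrane voltage is pinned
          m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
          w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
          tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
          dV = 0.0 if clamp is not None else (
              I - gCa * m_inf * (V - VCa) - gK * w * (V - VK) - gL * (V - VL)) / C
          dw = phi * (w_inf - w) / tau_w
          return V + dt * dV, w + dt * dw

      V, w, dt, thr = -60.0, 0.015, 0.05, -10.0
      free_spikes = clamped_spikes = 0
      for _ in range(int(500 / dt)):                 # 500 ms free-running
          Vn, w = step(V, w, dt)
          free_spikes += (V < thr <= Vn)             # upward threshold crossing
          V = Vn
      for _ in range(int(500 / dt)):                 # 500 ms low-voltage clamp
          Vn, w = step(V, w, dt, clamp=-60.0)
          clamped_spikes += (V < thr <= Vn)
          V = Vn
      print("spikes free:", free_spikes, "| spikes clamped:", clamped_spikes)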

  19. Imaging techniques for visualizing and phenotyping congenital heart defects in murine models.

    Science.gov (United States)

    Liu, Xiaoqin; Tobita, Kimimasa; Francis, Richard J B; Lo, Cecilia W

    2013-06-01

    The mouse model is ideal for investigating the genetic and developmental etiology of congenital heart disease. However, cardiovascular phenotyping for the precise diagnosis of structural heart defects in mice remains challenging. With rapid advances in imaging techniques, there are now high-throughput phenotyping tools available for the diagnosis of structural heart defects. In this review, we discuss the efficacy of four different imaging modalities for congenital heart disease diagnosis in fetal/neonatal mice: noninvasive fetal echocardiography, micro-computed tomography (micro-CT), micro-magnetic resonance imaging (micro-MRI), and episcopic fluorescence image capture (EFIC) histopathology. The experience we have gained in using these imaging modalities in a large-scale mouse mutagenesis screen has validated their efficacy for congenital heart defect diagnosis in the tiny hearts of fetal and newborn mice. These cutting-edge phenotyping tools will be invaluable for furthering our understanding of the developmental etiology of congenital heart disease.

  20. Dynamic P-Technique for Modeling Patterns of Data: Applications to Pediatric Psychology Research

    Science.gov (United States)

    Aylward, Brandon S.; Rausch, Joseph R.

    2011-01-01

    Objective. Dynamic p-technique (DPT) is a potentially useful statistical method for examining relationships among dynamic constructs in a single individual or small group of individuals over time. The purpose of this article is to offer a nontechnical introduction to DPT. Method. An overview of DPT analysis, with an emphasis on potential applications to pediatric psychology research, is provided. To illustrate how DPT might be applied, an example using simulated data is presented for daily pain and negative mood ratings. Results. The simulated example demonstrates the application of DPT to a relevant pediatric psychology research area. In addition, the potential application of DPT to the longitudinal study of adherence is presented. Conclusion. Although it has not been utilized frequently within pediatric psychology, DPT could be particularly well-suited for research in this field because of its ability to powerfully model repeated observations from very small samples. PMID:21486938
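
    DPT factor-analyzes the lagged data matrix of a single subject's multivariate time series. The toy sketch below simulates the kind of daily pain and negative-mood diary described above and inspects the lag-1 block of the correlation matrix, which is the raw material a DPT factor model would be fitted to; the simulation coefficients are invented.

      import numpy as np

      rng = np.random.default_rng(9)

      # Simulated single-subject diary: today's pain drives tomorrow's mood.
      n_days = 100
      pain = np.zeros(n_days); mood = np.zeros(n_days)
      for t in range(1, n_days):
          pain[t] = 0.5 * pain[t-1] + rng.normal(0, 1)
          mood[t] = 0.4 * mood[t-1] + 0.6 * pain[t-1] + rng.normal(0, 1)

      # Lagged data matrix of one individual: correlate today's variables
      # with yesterday's (the lag-1 block used by dynamic p-technique).
      today = np.column_stack([pain[1:], mood[1:]])
      yesterday = np.column_stack([pain[:-1], mood[:-1]])
      lag1_corr = np.corrcoef(yesterday.T, today.T)[:2, 2:]
      print("lag-1 correlations (rows: yesterday pain/mood; cols: today):")
      print(np.round(lag1_corr, 2))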

  1. Comparison of statistical clustering techniques for the classification of modelled atmospheric trajectories

    Science.gov (United States)

    Kassomenos, P.; Vardoulakis, S.; Borge, R.; Lumbreras, J.; Papaloukas, C.; Karakitsios, S.

    2010-10-01

    In this study, we used and compared three different statistical clustering methods: a hierarchical technique, a non-hierarchical technique (K-means) and an artificial neural network technique (self-organizing maps, SOM). These classification methods were applied to a 4-year dataset of 5-day kinematic back trajectories of air masses arriving in Athens, Greece at 12.00 UTC at three different heights above the ground. The atmospheric back trajectories were simulated with the HYSPLIT Version 4.7 model of the National Oceanic and Atmospheric Administration (NOAA). The meteorological data used for the computation of trajectories were obtained from the NOAA reanalysis database. A comparison of the three statistical clustering methods through statistical indices was attempted. It was found that all three statistical methods depend on the arrival height of the trajectories, but the degree of dependence differs substantially. Hierarchical clustering showed the highest level of dependence on the arrival height for fast-moving trajectories, followed by SOM; K-means was found to be the least dependent on the arrival height. The air quality management applications of these results in relation to PM10 concentrations recorded in Athens, Greece, were also discussed. Differences in PM10 concentrations during certain clusters were found to be statistically significant (at the 95% confidence level), indicating that these clusters appear to be associated with long-range transport of particulates. This study can improve the interpretation of modelled atmospheric trajectories, leading to a more reliable analysis of synoptic weather circulation patterns and their impacts on urban air quality.
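
    A minimal sketch of such a comparison on synthetic "trajectory" feature vectors, using scikit-learn's K-means and agglomerative (hierarchical) clustering; the SOM stage is omitted because scikit-learn ships no SOM, and the data are invented stand-ins for HYSPLIT endpoint coordinates.

      import numpy as np
      from sklearn.cluster import AgglomerativeClustering, KMeans
      from sklearn.metrics import silhouette_score

      rng = np.random.default_rng(2)

      # Each row concatenates positions along one back trajectory (stand-in data).
      n_per, n_feat = 75, 40
      centers = rng.uniform(-10, 10, size=(4, n_feat))       # four flow regimes
      X = np.vstack([c + rng.normal(0, 1.5, size=(n_per, n_feat)) for c in centers])

      models = {"k-means": KMeans(n_clusters=4, n_init=10, random_state=0),
                "hierarchical": AgglomerativeClustering(n_clusters=4)}
      for name, model in models.items():
          labels = model.fit_predict(X)
          print(f"{name:12s} silhouette = {silhouette_score(X, labels):.3f}")
      # A SOM (e.g. a 2x2 map) would be scored the same way; it is omitted
      # because scikit-learn has no built-in SOM implementation.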

  2. Lattice Boltzmann flow simulations with applications of reduced order modeling techniques

    KAUST Repository

    Brown, Donald

    2014-01-01

    With the recent interest in shale gas, an understanding of the flow mechanisms at the pore scale and beyond is necessary, which has attracted a lot of interest from both industry and academia. One of the suggested algorithms to help understand flow in such reservoirs is the Lattice Boltzmann Method (LBM). The primary advantage of LBM is its ability to approximate complicated geometries with simple algorithmic modifications. In this work, we use LBM to simulate flow in a porous medium. More specifically, we use LBM to simulate a Brinkman type flow. The Brinkman law allows us to integrate fast free-flow and slow-flow porous regions. However, due to the many scales involved and the complex heterogeneities of the rock microstructure, the simulation times can be long, even with the speed advantage of using an explicit time stepping method. The problem is two-fold: the computational grid must be able to resolve all scales, and the calculation requires a steady state solution, implying a large number of timesteps. To help reduce the computational complexity and total simulation times, we use model reduction techniques to reduce the dimension of the system. In this approach, we are able to describe the dynamics of the flow by using a lower dimensional subspace. In this work, we utilize the Proper Orthogonal Decomposition (POD) technique to compute the dominant modes of the flow and project the solution onto them (a lower dimensional subspace), arriving at an approximation of the full system at a lowered computational cost. We present a few proof-of-concept examples of the flow field and the corresponding reduced model flow field.
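
    The POD step itself reduces to a singular value decomposition of a snapshot matrix. The sketch below builds synthetic snapshots, extracts the dominant modes, and measures how well a full-order state is represented in the reduced subspace; the sizes and the synthetic dynamics are assumptions, not LBM output.

      import numpy as np

      rng = np.random.default_rng(3)

      # Snapshot matrix: columns are synthetic "flow fields" at successive times.
      n_dof, n_snap = 2000, 60
      t = np.linspace(0, 1, n_snap)
      modes_true = rng.standard_normal((n_dof, 3))
      S = (modes_true[:, :1] * np.sin(2 * np.pi * t)
           + modes_true[:, 1:2] * t
           + modes_true[:, 2:3] * np.cos(4 * np.pi * t)) \
          + 0.01 * rng.standard_normal((n_dof, n_snap))

      # POD: dominant left singular vectors of the snapshot matrix.
      U, s, _ = np.linalg.svd(S, full_matrices=False)
      r = 3
      V = U[:, :r]                                   # reduced basis (n_dof x r)

      # Project a full-order state onto the subspace and reconstruct it.
      x = S[:, 17]
      x_hat = V @ (V.T @ x)
      print("energy in first 3 modes:", (s[:r]**2).sum() / (s**2).sum())
      print("relative projection error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))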

  3. Comparison of Artificial Neural Network And M5 Model Tree Technique In Water Level Forecasting of Solo River

    Science.gov (United States)

    Lasminto, Umboro; Hery Mularta, Listya

    2010-05-01

    Flood events along the Solo River at the end of December 2007 caused loss of property and lives. Flooding occurred in the cities of Ngawi, Madiun, Bojonegoro and Babat and the surrounding areas. To reduce future losses, it is important to obtain information about the magnitude and timing of a flood before it occurs, so that people can act to reduce its impact. A flood forecasting model can provide information on the water level in the river some time before the event. This paper compares flood forecasting models for Bojonegoro built using the Artificial Neural Network (ANN) technique and the M5 Model Tree (M5MT) technique. The models forecast the water level 1, 3 and 6 hours ahead at the water-level recording station in Bojonegoro, using as input the water levels at upstream recording stations such as Karangnongko, Sekayu, Jurug and Wonogiri. The same dataset of hourly water level records is used to build the ANN and M5MT models. Parameter selection and setup for the ANN and M5MT techniques were carried out to obtain the best results. The models are evaluated by calculating the Root Mean Square Error (RMSE) between predictions and observations. The RMSE values produced by the 1, 3 and 6 hour ahead forecasting models are 0.2723, 0.6279 and 0.7176 meters with the M5MT technique, and 0.1829, 0.3192 and 0.517 meters with the ANN technique. The ANN technique is better at predicting low flow, whereas the M5 Model Tree technique is better at predicting high flow. Keywords: water level forecasting, Solo River, M5 Model Tree, Artificial Neural Network
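
    A sketch of the model comparison on synthetic hourly levels, with lagged values as inputs: scikit-learn has no M5 model tree, so a plain regression tree stands in for M5MT, and a small MLP stands in for the ANN. The station data, lags and horizon below are invented.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.tree import DecisionTreeRegressor

      rng = np.random.default_rng(4)

      # Synthetic hourly water levels: random walk plus a daily cycle.
      h = np.cumsum(rng.normal(0, 0.05, 2000)) + np.sin(np.arange(2000) * 2*np.pi/24)

      lead, lags = 3, 6                              # forecast 3 hours ahead
      X = np.column_stack([h[i:len(h) - lead - lags + i + 1] for i in range(lags)])
      y = h[lags + lead - 1:]
      split = int(0.8 * len(y))

      models = {"ANN": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0),
                "tree (M5 stand-in)": DecisionTreeRegressor(max_depth=6,
                                                            random_state=0)}
      for name, m in models.items():
          m.fit(X[:split], y[:split])
          rmse = np.sqrt(np.mean((m.predict(X[split:]) - y[split:]) ** 2))
          print(f"{name:18s} RMSE = {rmse:.4f} m")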

  4. Exact and Direct Modeling Technique for Rotor-Bearing Systems with Arbitrary Selected Degrees-of-Freedom

    Directory of Open Access Journals (Sweden)

    Shilin Chen

    1994-01-01

    Full Text Available An exact and direct modeling technique is proposed for modeling of rotor-bearing systems with arbitrary selected degrees-of-freedom. This technique is based on the combination of the transfer and dynamic stiffness matrices. The technique differs from the usual combination methods in that the global dynamic stiffness matrix for the system or the subsystem is obtained directly by rearranging the corresponding global transfer matrix. Therefore, the dimension of the global dynamic stiffness matrix is independent of the number of the elements or the substructures. In order to show the simplicity and efficiency of the method, two numerical examples are given.
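
    The core rearrangement, turning a transfer matrix into a dynamic stiffness matrix so that the global matrix dimension stays independent of the number of elements, can be written down in a few lines. The sketch below shows one common sign/ordering convention for that rearrangement and checks it on a single spring element; the paper's exact conventions may differ, so treat the block layout as an assumption.

      import numpy as np

      def transfer_to_dynamic_stiffness(T, n):
          """Rearrange a 2n x 2n transfer matrix into a dynamic stiffness matrix.

          Assumed convention (illustrative): state vector z = [x; f], right end
          satisfies z_R = T z_L, and K maps end displacements to end forces,
          [-f_L; f_R] = K [x_L; x_R].
          """
          T11, T12 = T[:n, :n], T[:n, n:]
          T21, T22 = T[n:, :n], T[n:, n:]
          T12i = np.linalg.inv(T12)
          return np.block([[T12i @ T11,              -T12i],
                           [T21 - T22 @ T12i @ T11,  T22 @ T12i]])

      # Check on a single spring (stiffness k), where both forms are known:
      k, n = 5.0, 1
      T = np.array([[1.0, 1.0 / k],   # x_R = x_L + f/k, f_R = f_L
                    [0.0, 1.0]])
      print(transfer_to_dynamic_stiffness(T, n))   # expect [[ 5, -5], [-5, 5]]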

  5. 3-D thermo-mechanical laboratory modeling of plate-tectonics: modeling scheme, technique and first experiments

    Directory of Open Access Journals (Sweden)

    D. Boutelier

    2011-05-01

    Full Text Available We present an experimental apparatus for 3-D thermo-mechanical analogue modeling of plate tectonic processes such as oceanic and continental subduction, arc-continent collision, or continental collision. The model lithosphere, made of temperature-sensitive elasto-plastic analogue materials with strain softening, is submitted to a constant temperature gradient causing a strength reduction with depth in each layer. The surface temperature is imposed using infrared emitters, which makes it possible to maintain an unobstructed view of the model surface and to use a high-resolution optical strain monitoring technique (Particle Imaging Velocimetry). Subduction experiments illustrate how the stress conditions on the interplate zone can be estimated using a force sensor attached to the back of the upper plate and adjusted via the density and strength of the subducting lithosphere or the lubrication of the plate boundary. The first experimental results reveal the potential of the experimental set-up to investigate the three-dimensional solid-mechanics interactions of lithospheric plates in multiple natural situations.

  6. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION

    Science.gov (United States)

    HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG

    2011-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme

  7. Reducing the impact of a desalination plant using stochastic modeling and optimization techniques

    Science.gov (United States)

    Alcolea, Andres; Renard, Philippe; Mariethoz, Gregoire; Bertone, François

    2009-02-01

    Water is critical for economic growth in coastal areas. In this context, desalination has become an increasingly important technology over the last five decades. It often has environmental side effects, especially when the input water is pumped directly from the sea via intake pipelines. However, it is generally more efficient and cheaper to desalt brackish groundwater from beach wells rather than desalting seawater. Natural attenuation is also gained, and hazards due to anthropogenic pollution of seawater are reduced. In order to minimize allocation and operational costs and impacts on groundwater resources, an optimum pumping network is required. Optimization techniques are often applied to this end. Because of aquifer heterogeneity, designing the optimum pumping network demands reliable characterizations of aquifer parameters. An optimum pumping network was designed for a coastal aquifer in Oman, where a desalination plant currently pumps brackish groundwater at a rate of 1200 m³/h for a freshwater production of 504 m³/h (insufficient to satisfy the growing demand in the area), using stochastic inverse modeling together with optimization techniques. A Monte Carlo analysis of 200 simulations of transmissivity and storage coefficient fields, conditioned to the response to the stresses of tidal fluctuation and three long-term pumping tests, was performed. These simulations are physically plausible and fit the available data well. The simulated transmissivity fields are used to design the optimum pumping configuration required to increase the current pumping rate to 9000 m³/h, for a freshwater production of 3346 m³/h (more than six times larger than the existing one). For this task, new pumping wells need to be sited and their pumping rates defined. These unknowns are determined by a genetic algorithm that minimizes a function accounting for: (1) drilling, operational and maintenance costs, (2) target discharge and minimum drawdown (i.e., minimum aquifer
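
    A genetic algorithm of the kind described evolves candidate pumping configurations against a cost function. The sketch below evolves well pumping rates toward a target total discharge at minimum cost; the cost terms, penalty weights and bounds are invented stand-ins, and the real objective couples rates and drawdowns through the groundwater model.

      import numpy as np

      rng = np.random.default_rng(8)
      n_wells, target_rate = 5, 9000.0             # m3/h

      def cost(rates):
          drilling = 100.0 * np.count_nonzero(rates > 1.0)       # fixed cost/well
          operation = 0.02 * np.sum(rates ** 1.2)                # pumping cost
          shortfall = 5.0 * max(0.0, target_rate - rates.sum())  # missed target
          return drilling + operation + shortfall

      pop = rng.uniform(0, 3000, size=(60, n_wells))             # initial designs
      for _ in range(200):
          fitness = np.array([cost(ind) for ind in pop])
          parents = pop[np.argsort(fitness)[:20]]                # truncation select
          children = []
          while len(children) < len(pop) - len(parents):
              a, b = parents[rng.integers(20, size=2)]
              child = np.where(rng.random(n_wells) < 0.5, a, b)  # uniform crossover
              child += rng.normal(0, 50, n_wells) * (rng.random(n_wells) < 0.2)
              children.append(np.clip(child, 0, 3000))           # mutation + bounds
          pop = np.vstack([parents, children])

      best = pop[np.argmin([cost(ind) for ind in pop])]
      print("best rates (m3/h):", np.round(best), "| total:", round(best.sum()))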

  8. Multidisciplinary Optimization of Tilt Rotor Blades Using Comprehensive Composite Modeling Technique

    Science.gov (United States)

    Chattopadhyay, Aditi; McCarthy, Thomas R.; Rajadas, John N.

    1997-01-01

    An optimization procedure is developed for addressing the design of composite tilt rotor blades. A comprehensive technique, based on a higher-order laminate theory, is developed for the analysis of the thick composite load-carrying sections, modeled as box beams, in the blade. The theory, which is based on a refined displacement field, is a three-dimensional model which approximates the elasticity solution so that the beam cross-sectional properties are not reduced to one-dimensional beam parameters. Both inplane and out-of-plane warping are included automatically in the formulation. The model can accurately capture the transverse shear stresses through the thickness of each wall while satisfying stress free boundary conditions on the inner and outer surfaces of the beam. The aerodynamic loads on the blade are calculated using the classical blade element momentum theory. Analytical expressions for the lift and drag are obtained based on the blade planform with corrections for the high lift capability of rotor blades. The aerodynamic analysis is coupled with the structural model to formulate the complete coupled equations of motion for aeroelastic analyses. Finally, a multidisciplinary optimization procedure is developed to improve the aerodynamic, structural and aeroelastic performance of the tilt rotor aircraft. The objective functions include the figure of merit in hover and the high speed cruise propulsive efficiency. Structural, aerodynamic and aeroelastic stability criteria are imposed as constraints on the problem. The Kreisselmeier-Steinhauser function is used to formulate the multiobjective function problem. The search direction is determined by the Broyden-Fletcher-Goldfarb-Shanno algorithm. The optimum results are compared with the baseline values and show significant improvements in the overall performance of the tilt rotor blade.
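
    The Kreisselmeier-Steinhauser function mentioned here folds several objectives or constraints into one smooth envelope, KS(g; rho) = (1/rho) ln sum_i exp(rho g_i), which approaches max(g_i) from above as rho grows. A minimal, numerically stabilized sketch with made-up constraint values:

      import numpy as np

      def ks_aggregate(g, rho=50.0):
          """Kreisselmeier-Steinhauser envelope of constraint values g_i <= 0:
          a smooth, conservative upper bound on max(g), stabilized by
          subtracting the maximum before exponentiating."""
          g = np.asarray(g, dtype=float)
          m = g.max()
          return m + np.log(np.exp(rho * (g - m)).sum()) / rho

      g = [-0.30, -0.05, -0.22]          # three made-up normalized constraints
      for rho in (10, 50, 200):
          print(f"rho={rho:4d}: KS = {ks_aggregate(g, rho):+.4f}"
                f"  (max g = {max(g):+.2f})")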

  9. Application of magnetomechanical hysteresis modeling to magnetic techniques for monitoring neutron embrittlement and biaxial stress

    Energy Technology Data Exchange (ETDEWEB)

    Sablik, M.J.; Kwun, H.; Rollwitz, W.L.; Cadena, D.

    1992-01-01

    The objective is to investigate experimentally and theoretically the effects of neutron embrittlement and biaxial stress on magnetic properties in steels, using various magnetic measurement techniques. The interaction between experiment and modeling should suggest efficient magnetic measurement procedures for determining neutron embrittlement and biaxial stress. This should ultimately assist in safety monitoring of nuclear power plants and of gas and oil pipelines. In the first six months of this first-year study, magnetic measurements were made on steel surveillance specimens from the Indian Point 2 and D.C. Cook 2 reactors. The specimens had previously been characterized by Charpy tests after specified neutron fluences. Measurements included: (1) hysteresis loop measurement of coercive force, permeability and remanence; (2) Barkhausen noise amplitude; and (3) higher-order nonlinear harmonic analysis of a 1 Hz magnetic excitation. Very good correlation of magnetic parameters with fluence and embrittlement was found for specimens from the Indian Point 2 reactor. The D.C. Cook 2 specimens, however, showed poor correlation. Possible contributing factors are: (1) metallurgical differences between the D.C. Cook 2 and Indian Point 2 specimens; (2) statistical variations in embrittlement parameters for individual samples away from the stated mean values; and (3) conversion of the D.C. Cook 2 reactor to a low leakage core configuration in the middle of the surveillance period. Modeling using a magnetomechanical hysteresis model has begun. The modeling will first focus on why Barkhausen noise and nonlinear harmonic amplitudes appear to be better indicators of embrittlement than the hysteresis loop parameters.

  10. Comparison of impression techniques for a two-implant 15-degree divergent model.

    Science.gov (United States)

    Carr, A B

    1992-01-01

    To consistently provide passively fitting implant superstructures, an understanding of the accuracy and precision of all phases of fabrication and connection is required. The initial phase of fabrication, i.e., impression making and cast forming, was investigated in an earlier report for a mandibular five-implant model. The current study evaluates the accuracy of working casts produced from impressions using two different transfer copings in a 15-degree divergent two-implant posterior mandibular model. While the indirect method is less cumbersome to use, it was found to be less accurate in the prior study. The purpose of this study was to see whether the direct method is more precise for this clinical situation. A transfer was deemed effective in producing experimental casts if the distances between specified points on the cast agreed with the corresponding distances on the master cast. The absolute value of the difference in distances between experimental and master casts was compared for the two techniques (two-sample t tests). No significant differences were noted (P > .05), and the power of the tests ranged from 0.70 to 0.96 against the one-sided hypothesis that the direct method had a smaller mean absolute difference in distance than the indirect method. This suggests no clear advantage to using the direct method in similar clinical situations. A comparison of these findings to other impression accuracy studies is made.

  11. Synthetic aperture radar imaging based on attributed scatter model using sparse recovery techniques

    Institute of Scientific and Technical Information of China (English)

    苏伍各; 王宏强; 阳召成

    2014-01-01

    Sparse recovery algorithms formulate the synthetic aperture radar (SAR) imaging problem in terms of a sparse representation (SR) of a small number of strong scatterers' positions among a much larger number of potential scatterer positions, and provide an effective approach to improving SAR image resolution. Based on the attributed scattering center model, several experiments were performed under different practical considerations to evaluate the performance of five representative SR techniques, namely sparse Bayesian learning (SBL), fast Bayesian matching pursuit (FBMP), the smoothed l0 norm method (SL0), sparse reconstruction by separable approximation (SpaRSA), and the fast iterative shrinkage-thresholding algorithm (FISTA); the parameter settings of the five SR algorithms and their performance in different situations were also discussed. Through comparison of the MSE and failure rate in each algorithm's simulations, FBMP and SpaRSA were found suitable for SAR imaging problems based on the attributed scattering center model. Although SBL is time-consuming, it consistently achieves better performance in terms of failure rate at high SNR.
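
    Of the five techniques compared, FISTA is the most compact to write down: it solves min_x 0.5*||y - Ax||^2 + lam*||x||_1 with a soft-thresholded gradient step plus momentum. The sketch below recovers a few "strong scatterers" on a dense grid from compressed measurements; the sensing matrix, sparsity level and lam are invented for illustration.

      import numpy as np

      def fista(A, y, lam, iters=200):
          """FISTA for min_x 0.5*||y - Ax||^2 + lam*||x||_1."""
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of gradient
          x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
          for _ in range(iters):
              g = z - A.T @ (A @ z - y) / L      # gradient step
              x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
              t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
              z = x_new + ((t - 1) / t_new) * (x_new - x)                # momentum
              x, t = x_new, t_new
          return x

      rng = np.random.default_rng(5)
      n_meas, n_grid, k_sparse = 60, 200, 5      # few scatterers on a dense grid
      A = rng.standard_normal((n_meas, n_grid)) / np.sqrt(n_meas)
      x_true = np.zeros(n_grid)
      x_true[rng.choice(n_grid, k_sparse, replace=False)] = rng.uniform(1, 2, k_sparse)
      y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

      x_hat = fista(A, y, lam=0.02)
      print("recovered support:", np.nonzero(np.abs(x_hat) > 0.5)[0],
            "| true:", np.sort(np.nonzero(x_true)[0]))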

  12. Modeling and simulation of PEM fuel cell's flow channels using CFD techniques

    Energy Technology Data Exchange (ETDEWEB)

    Cunha, Edgar F.; Andrade, Alexandre B.; Robalinho, Eric; Bejarano, Martha L.M.; Linardi, Marcelo [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)]. E-mails: efcunha@ipen.br; abodart@ipen.br; eric@ipen.br; mmora@ipen.br; mlinardi@ipen.br; Cekinski, Efraim [Instituto de Pesquisas Tecnologicas (IPT-SP), Sao Paulo, SP (Brazil)]. E-mail: cekinski@ipt.br

    2007-07-01

    Fuel cells are among the most important devices for obtaining electrical energy from hydrogen. The Proton Exchange Membrane Fuel Cell (PEMFC) consists of two important parts: the Membrane Electrode Assembly (MEA), where the reactions occur, and the flow field plates. The plates have many functions in a fuel cell: distributing the reactant gases (hydrogen and air or oxygen), conducting electrical current, removing heat and water from the electrodes, and making the cell robust. The cost of the bipolar plates corresponds to up to 45% of the total stack cost. Computational Fluid Dynamics (CFD) is a very useful tool for simulating hydrogen and oxygen gas flow channels, reducing the cost of bipolar plate production and optimizing mass transport. Two types of flow channels were studied: the first was a commercial plate by ELECTROCELL, and the other was entirely designed at the Programa de Celula a Combustivel (IPEN/CNEN-SP); the experimental data were compared with the modelling results. Optimum values for each set of variables were obtained, and model verification was carried out in order to show the feasibility of this technique for improving fuel cell efficiency. (author)

  13. Hydrolysis of model cellulose films by cellulosomes: Extension of quartz crystal microbalance technique to multienzymatic complexes.

    Science.gov (United States)

    Zhou, Shanshan; Li, Hsin-Fen; Garlapalli, Ravinder; Nokes, Sue E; Flythe, Michael; Rankin, Stephen E; Knutson, Barbara L

    2017-01-10

    Bacterial cellulosomes contain highly efficient complexed cellulases and have been studied extensively for the production of lignocellulosic biofuels and bioproducts. A surface measurement technique, quartz crystal microbalance with dissipation (QCM-D), was extended from the investigation of real-time binding and hydrolysis of model cellulose surfaces by free fungal cellulases to the cellulosomes of Clostridium thermocellum (Ruminiclostridium thermocellum). In differentiating the activities of cell-free and cell-bound cellulosomes, greater than 68% of the cellulosomes in the crude cell broth were found to exist unattached to the cell across multiple growth stages. The initial hydrolysis rate of crude cell broth measured by QCM was greater than that of cell-free cellulosomes, but the corresponding frequency drop (a direct measure of the mass of enzyme adsorbed to the film) of crude cell broth was less than that of the cell-free cellulosomes, consistent with an underestimation of the adsorbed cell mass by QCM. Inhibition of hydrolysis by cellobiose (0-10 g/L), which is similar for crude cell broth and cell-free cellulosomes, demonstrates the sensitivity of QCM to environmental perturbations of multienzymatic complexes. QCM measurements using multienzymatic complexes may be used to screen and optimize hydrolysis conditions and to develop mechanistic, surface-based models of enzymatic cellulose deconstruction.

  14. Application of predictive modelling techniques in industry: from food design up to risk assessment.

    Science.gov (United States)

    Membré, Jeanne-Marie; Lambert, Ronald J W

    2008-11-30

    In this communication, examples of applications of predictive microbiology in industrial contexts (i.e. Nestlé and Unilever) are presented which cover a range of applications in food safety from formulation and process design to consumer safety risk assessment. A tailor-made, private expert system, developed to support safe product/process design assessment is introduced as an example of how predictive models can be deployed for use by non-experts. Its use in conjunction with other tools and software available in the public domain is discussed. Specific applications of predictive microbiology techniques are presented relating to investigations of either growth or limits to growth with respect to product formulation or process conditions. An example of a probabilistic exposure assessment model for chilled food application is provided and its potential added value as a food safety management tool in an industrial context is weighed against its disadvantages. The role of predictive microbiology in the suite of tools available to food industry and some of its advantages and constraints are discussed.

  15. Application of the Electronic Nose Technique to Differentiation between Model Mixtures with COPD Markers

    Directory of Open Access Journals (Sweden)

    Jacek Namieśnik

    2013-04-01

    Full Text Available The paper presents the potential of an electronic nose technique in the field of fast diagnostics of patients suspected of having Chronic Obstructive Pulmonary Disease (COPD). The investigations were performed using a simple electronic nose prototype equipped with a set of six semiconductor sensors manufactured by FIGARO Co. They were aimed at verifying the possibility of differentiating between model reference mixtures containing potential COPD markers (N,N-dimethylformamide and N,N-dimethylacetamide). These mixtures contained volatile organic compounds (VOCs) such as acetone, isoprene, carbon disulphide, propan-2-ol, formamide, benzene, toluene, acetonitrile, acetic acid, dimethyl ether, dimethyl sulphide, acrolein, furan, propanol and pyridine, recognized as components of exhaled air. The model reference mixtures were prepared at three concentration levels—10 ppb, 25 ppb, 50 ppb v/v—of each component, except for the COPD markers, whose concentration ranged from 0 ppb to 100 ppb v/v. Interpretation of the obtained data employed principal component analysis (PCA). The investigations revealed the usefulness of the electronic device only when the concentration of the COPD markers was twice as high as the concentration of the remaining components of the mixture, and only for a limited number of basic mixture components.
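
    The PCA step projects the six-sensor response vectors onto the directions of largest variance, where marker-spiked and background mixtures can separate. A minimal sketch on synthetic sensor data (the response pattern and noise levels are invented):

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(6)

      # Synthetic responses of six sensors to two mixture classes: background
      # VOCs only, and the same mixture spiked with a "COPD marker".
      pattern = np.array([0.4, 0.1, 0.3, 0.0, 0.2, 0.5])   # invented marker pattern
      background = rng.normal(1.0, 0.05, size=(20, 6))
      spiked = rng.normal(1.0, 0.05, size=(20, 6)) + pattern
      X = np.vstack([background, spiked])

      pca = PCA(n_components=2)
      scores = pca.fit_transform(X)
      print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
      print("mean PC1, background:", scores[:20, 0].mean().round(2),
            "| spiked:", scores[20:, 0].mean().round(2))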

  16. A Modeling Technique and Representation of Failure in the Analysis of Triaxial Braided Carbon Fiber Composites

    Science.gov (United States)

    Littell, Justin D.; Binienda, Wieslaw K.; Goldberg, Robert K.; Roberts, Gary D.

    2008-01-01

    Quasi-static tests have been performed on triaxially braided carbon fiber composite materials with large unit cell sizes. The effects of different fibers and matrix materials on the failure mode were investigated. Simulations of the tests have been performed using the transient dynamic finite element code, LS-DYNA. However, the wide range of failure modes observed for the triaxial braided carbon fiber composites during tests could not be simulated using composite material models currently available within LS-DYNA. A macroscopic approach has been developed that provides better simulation of the material response in these materials. This approach uses full-field optical measurement techniques to measure local failures during quasi-static testing. Information from these experiments is then used along with the current material models available in LS-DYNA to simulate the influence of the braided architecture on the failure process. This method uses two-dimensional shell elements with integration points through the thickness of the elements to represent the different layers of braid along with a new analytical method for the import of material stiffness and failure data directly. The present method is being used to examine the effect of material properties on the failure process. The experimental approaches used to obtain the required data will be described, and preliminary results of the numerical analysis will be presented.

  17. Simulation of Moving Loads in Elastic Multibody Systems With Parametric Model Reduction Techniques

    Directory of Open Access Journals (Sweden)

    Fischer Michael

    2014-08-01

    Full Text Available In elastic multibody systems, one considers large nonlinear rigid body motion and small elastic deformations. In a rising number of applications, e.g. automotive engineering, turning and milling processes, the position of acting forces on the elastic body varies. The necessary model order reduction to enable efficient simulations requires the determination of ansatz functions, which depend on the moving force position. For a large number of possible interaction points, the size of the reduced system would increase drastically in the classical Component Mode Synthesis framework. If many nodes are potentially loaded, or the contact area is not known a-priori and only a small number of nodes is loaded simultaneously, the system is described in this contribution with the parameter-dependent force position. This enables the application of parametric model order reduction methods. Here, two techniques based on matrix interpolation are described which transform individually reduced systems and allow the interpolation of the reduced system matrices to determine reduced systems for any force position. The online-offline decomposition and description of the force distribution onto the reduced elastic body are presented in this contribution. The proposed framework enables the simulation of elastic multibody systems with moving loads efficiently because it solely depends on the size of the reduced system. Results in frequency and time domain for the simulation of a thin-walled cylinder with a moving load illustrate the applicability of the proposed method.
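
    A minimal sketch of the matrix-interpolation idea, assuming an affine parameter dependence K(p) = K0 + p*K1 (p a normalized stand-in for the force position) and a common projection basis built from snapshots at the sampled positions: reduced matrices computed at two positions are interpolated to obtain a reduced system at an unsampled position. All sizes and the parameterization are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 200

      # Parameter-dependent FE-style stiffness K(p) = K0 + p*K1 (invented).
      M0 = rng.standard_normal((n, n)); K0 = M0 @ M0.T + n * np.eye(n)
      M1 = rng.standard_normal((n, n)); K1 = 0.1 * (M1 @ M1.T)
      f = rng.standard_normal(n)

      def reduced_K(p, V):
          return V.T @ (K0 + p * K1) @ V          # Galerkin-reduced stiffness

      # Common basis from solution snapshots at the sampled parameter values.
      snaps = np.column_stack([np.linalg.solve(K0 + p * K1, f) for p in (0.0, 1.0)])
      V, _ = np.linalg.qr(snaps)

      # Interpolate the individually reduced matrices to the midpoint p = 0.5.
      Kr = 0.5 * reduced_K(0.0, V) + 0.5 * reduced_K(1.0, V)
      x_rom = V @ np.linalg.solve(Kr, V.T @ f)
      x_ref = np.linalg.solve(K0 + 0.5 * K1, f)
      print("relative ROM error at p=0.5:",
            np.linalg.norm(x_rom - x_ref) / np.linalg.norm(x_ref))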

  18. The Double Layer Methodology and the Validation of Eigenbehavior Techniques Applied to Lifestyle Modeling

    Science.gov (United States)

    Lamichhane, Bishal

    2017-01-01

    A novel methodology, the double layer methodology (DLM), for modeling an individual's lifestyle and its relationships with health indicators is presented. The DLM is applied to model behavioral routines emerging from self-reports of daily diet and activities, annotated by 21 healthy subjects over 2 weeks. Unsupervised clustering on the first layer of the DLM separated our population into two groups. Using eigendecomposition techniques on the second layer of the DLM, we could find activity and diet routines, predict behaviors in a portion of the day (with an accuracy of 88% for diet and 66% for activity), determine between-day and between-individual similarities, and detect an individual's membership in a group based on behavior (with an accuracy of up to 64%). We found that clustering based on health indicators mapped back onto activity behaviors, but not onto diet behaviors. In addition, we showed the limitations of eigendecomposition for lifestyle applications, in particular when applied to noisy and sparse behavioral data such as dietary information. Finally, we proposed the use of the DLM for supporting adaptive and personalized recommender systems for stimulating behavior change. PMID:28133607

  19. A Centroid Model for the Depth Assessment of Images using Rough Fuzzy Set Techniques

    Directory of Open Access Journals (Sweden)

    P. Swarnalatha

    2012-04-01

    Full Text Available Detection of affected areas in images is a crucial step in assessing the depth of the affected area for municipal operators. These affected areas in underground images, which are line images, are indicative of the condition of buried infrastructure such as sewers and water mains. The method identifies affected areas and extracts their properties, such as structures, from images whose contrast has been enhanced... A Centroid Model for the Depth Assessment of Images using Rough Fuzzy Set Techniques presents a simple, robust and efficient three-step method to detect affected areas in underground concrete images. The proposed methodology uses segmentation and feature extraction with structural elements. The main objective of this model is to find the dimensions of the affected areas, such as the length, width and depth, and the type of the defects/affected areas. Although the human eye is extremely effective at recognition and classification, it is not suitable for assessing defects in images that might be spread over thousands of miles of image lines, mainly for reasons of fatigue, subjectivity and cost. Our objective is to reduce the effort and labour involved in detecting affected areas in underground images. In this paper we propose to apply rough fuzzy set theory to compute the lower and upper approximations of the affected area of the image. In this connection we propose to use concepts and techniques developed by Pal and Maji.
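
    The lower and upper approximations mentioned here are the basic rough set constructs: pixels whose indiscernibility class lies entirely inside the labelled region are certainly affected, while those whose class merely overlaps it are only possibly affected. A minimal sketch on made-up pixel data (the fuzzy extension and the centroid model are not reproduced):

      from collections import defaultdict

      def rough_approximations(universe, equiv_attr, target):
          """Lower/upper approximation of `target` under the indiscernibility
          relation induced by `equiv_attr` (object -> attribute tuple)."""
          classes = defaultdict(set)
          for obj in universe:
              classes[equiv_attr(obj)].add(obj)
          lower, upper = set(), set()
          for cls in classes.values():
              if cls <= target:
                  lower |= cls              # class certainly inside the target
              if cls & target:
                  upper |= cls              # class possibly inside the target
          return lower, upper

      # Pixels of a made-up crack region, described by quantized intensity/texture.
      pixels = {f"p{i}" for i in range(8)}
      attrs = {"p0": (0, 0), "p1": (0, 0), "p2": (0, 1), "p3": (0, 1),
               "p4": (1, 0), "p5": (1, 0), "p6": (1, 1), "p7": (1, 1)}
      crack = {"p0", "p1", "p2", "p6", "p7"}    # labelled "affected" pixels

      lower, upper = rough_approximations(pixels, attrs.get, crack)
      print("lower:", sorted(lower))            # certainly affected
      print("boundary:", sorted(upper - lower)) # possibly affected (rough region)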

  20. Further development of model techniques to study the mixing in gas flame zones

    Energy Technology Data Exchange (ETDEWEB)

    Vaclavinek, J.; Zethraeus, B.

    1985-12-01

    The most widely used quantitative model techniques are tracer-gas analysis and pH-metry. The former has the disadvantage that it is difficult to visualize the flow pattern simultaneously with the measurement, and the latter has the disadvantage of long response times due to the probe. Conductivity probes are considerably faster than pH-probes. The problem when using conductimetry is to find an acid-base pair that gives a unique correlation between mixing proportions and conductivity. This paper presents an acid-base pair that meets the demands of uniqueness and high resolution over a wide range of mixing proportions. It also suggests methods to simulate oxygen enrichment and varying fuel gas composition. The paper also presents a block scheme for a flexible conductometer. The suggested layout of the conductometer and conductimetric probes gives a flexible system with calibration curves as close to straight lines as possible. Experimental results are shown to be in good agreement with data reported in the literature as well as with our hot model experiments. (orig.)