WorldWideScience

Sample records for modeling technique sadmt

  1. Classifying variability modeling techniques

    NARCIS (Netherlands)

    Sinnema, Marco; Deelstra, Sybren

    Variability modeling is important for managing variability in software product families, especially during product derivation. In the past few years, several variability modeling techniques have been developed, each using its own concepts to model the variability provided by a product family. The

  2. Semiconductor Modeling Techniques

    CERN Document Server

    Xavier, Marie

    2012-01-01

    This book describes the key theoretical techniques for semiconductor research, used to quantitatively calculate and simulate semiconductor properties. It presents particular techniques to study novel semiconductor materials, such as 2D heterostructures, quantum wires, quantum dots and nitrogen-containing III-V alloys. The book is aimed primarily at newcomers working in the field of semiconductor physics to give guidance in theory and experiment. The theoretical techniques for electronic and optoelectronic devices are explained in detail.

  3. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    ""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode

  4. Boundary representation modelling techniques

    CERN Document Server

    2006-01-01

    Provides the most complete presentation of boundary representation solid modelling yet published. Offers basic reference information for software developers, application developers and users. Includes a historical perspective as well as giving a background for modern research.

  5. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state-of-the-art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  6. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)]

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research extends work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two release cases were studied: a coastal release (SF6) and an inland release (Freon), which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce required computing resources for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where the spatial and temporal differences due to interior valley heating led to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data are assimilated into the simulation, and enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.
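
    A minimal sketch of the ensemble Kalman filter update at the core of the EnKF technique above, assuming a toy 3-variable state, a single scalar observation, and the study's 20-member ensemble size; the state, rates and data are illustrative, not SRNL values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble Kalman filter update for one scalar observation.
# x: ensemble of model states (n_members, n_state); h_index picks the observed entry.
def enkf_update(x, y_obs, obs_var, h_index=0):
    n = x.shape[0]
    hx = x[:, h_index]                        # model-predicted observations
    x_mean, hx_mean = x.mean(axis=0), hx.mean()
    # Sample covariances between state and predicted observation
    p_xh = ((x - x_mean) * (hx - hx_mean)[:, None]).sum(axis=0) / (n - 1)
    p_hh = ((hx - hx_mean) ** 2).sum() / (n - 1)
    gain = p_xh / (p_hh + obs_var)            # Kalman gain (vector)
    # Perturbed-observation update: each member assimilates a noisy copy of y
    y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), size=n)
    return x + gain[None, :] * (y_pert - hx)[:, None]

# 20-member ensemble of a 3-variable state, matching the study's ensemble size
ens = rng.normal([1.0, 0.5, -0.2], 0.3, size=(20, 3))
ens = enkf_update(ens, y_obs=1.4, obs_var=0.05)
print(ens.mean(axis=0))
```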

  7. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines, and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  8. Verification of Orthogrid Finite Element Modeling Techniques

    Science.gov (United States)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such "short cuts" is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, shell, and mixed beam and shell element model. Results show that the shell element model performs the best, but that the simpler beam and beam and shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  9. A Strategy Modelling Technique for Financial Services

    OpenAIRE

    Heinrich, Bernd; Winter, Robert

    2004-01-01

    Strategy planning processes often suffer from a lack of conceptual models that can be used to represent business strategies in a structured and standardized form. If natural language is replaced by an at least semi-formal model, the completeness, consistency, and clarity of strategy descriptions can be drastically improved. A strategy modelling technique is proposed that is based on an analysis of modelling requirements, a discussion of related work and a critical analysis of generic approach...

  10. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
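
    A sketch of the three figures of merit listed above, computed against synthetic "truth" produced by the simulation program itself. The function name and all numbers are assumed/illustrative; figure 3 uses a CV(RMSE)-style fit metric of the kind the BPI-2400/ASHRAE-14 criteria are based on:

```python
import numpy as np

# (1) savings accuracy, (2) closure on "true" parameters, (3) goodness of fit
def figures_of_merit(true_savings, pred_savings, true_params, calib_params,
                     utility_bills, model_bills):
    savings_err = abs(pred_savings - true_savings) / abs(true_savings)   # (1)
    closure = (np.linalg.norm(calib_params - true_params)
               / np.linalg.norm(true_params))                            # (2)
    resid = model_bills - utility_bills
    cv_rmse = np.sqrt(np.mean(resid ** 2)) / np.mean(utility_bills)      # (3)
    return savings_err, closure, cv_rmse

# Twelve months of surrogate utility bills and slightly biased model bills
bills = np.array([900.0, 750, 600, 500, 480, 520, 610, 640, 580, 560, 700, 850])
print(figures_of_merit(true_savings=1200.0, pred_savings=1100.0,
                       true_params=np.array([0.5, 12.0]),
                       calib_params=np.array([0.6, 11.0]),
                       utility_bills=bills, model_bills=bills * 1.03))
```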

  11. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  12. Workshop on Computational Modelling Techniques in Structural ...

    Indian Academy of Sciences (India)

    Information and Announcements. Resonance – Journal of Science Education, Volume 22, Issue 6, June 2017, p. 619.

  13. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
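
    The AARJ sampler itself is beyond a short sketch, but the model-averaging idea it serves can be illustrated with BIC-based approximate model probabilities for four candidate wavelength dependences; the models and data below are made-up stand-ins for the GOMOS aerosol retrieval, not the paper's algorithm:

```python
import numpy as np

def bic(rss, n, k):
    # Bayesian information criterion for a least-squares fit
    return n * np.log(rss / n) + k * np.log(n)

def model_average(wl, sigma, degrees):
    fits, preds = [], []
    for k in degrees:                         # polynomial degree in 1/wavelength
        coef = np.polyfit(1.0 / wl, sigma, k)
        pred = np.polyval(coef, 1.0 / wl)
        fits.append(bic(np.sum((sigma - pred) ** 2), len(wl), k + 1))
        preds.append(pred)
    w = np.exp(-0.5 * (np.array(fits) - min(fits)))
    w /= w.sum()                              # approximate model probabilities
    return w, sum(wi * p for wi, p in zip(w, preds))

wl = np.linspace(350.0, 700.0, 40)            # nm, synthetic spectrum
sigma = 1.0 / wl + 0.002 * np.random.default_rng(1).normal(size=40)
w, averaged = model_average(wl, sigma, degrees=[0, 1, 2, 3])
print("model probabilities:", np.round(w, 3))
```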

  14. Techniques to develop data for hydrogeochemical models

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, C.M.; Holcombe, L.J.; Gancarz, D.H.; Behl, A.E. (Radian Corp., Austin, TX (USA)); Erickson, J.R.; Star, I.; Waddell, R.K. (Geotrans, Inc., Boulder, CO (USA)); Fruchter, J.S. (Battelle Pacific Northwest Lab., Richland, WA (USA))

    1989-12-01

    The utility industry, through its research and development organization, the Electric Power Research Institute (EPRI), is developing the capability to evaluate potential migration of waste constituents from utility disposal sites to the environment. These investigations have developed computer programs to predict leaching, transport, attenuation, and fate of inorganic chemicals. To predict solute transport at a site, the computer programs require data concerning the physical and chemical conditions that affect solute transport at the site. This manual provides a comprehensive view of the data requirements for computer programs that predict the fate of dissolved materials in the subsurface environment and describes techniques to measure or estimate these data. In this manual, basic concepts are described first and individual properties and their associated measurement or estimation techniques are described later. The first three sections review hydrologic and geochemical concepts, discuss data requirements for geohydrochemical computer programs, and describe the types of information the programs produce. The remaining sections define and/or describe the properties of interest for geohydrochemical modeling and summarize available techniques to measure or estimate values for these properties. A glossary of terms associated with geohydrochemical modeling and an index are provided at the end of this manual. 318 refs., 9 figs., 66 tabs.

  15. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B.; Fagan, J.R. Jr.

    1995-12-31

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. This will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.

  16. Advances in transgenic animal models and techniques.

    Science.gov (United States)

    Ménoret, Séverine; Tesson, Laurent; Remy, Séverine; Usal, Claire; Ouisse, Laure-Hélène; Brusselle, Lucas; Chenouard, Vanessa; Anegon, Ignacio

    2017-10-01

    The international meeting "Advances in transgenic animal models and techniques" ( http://www.trm.univ-nantes.fr/ ) was held on May 11th and 12th, 2017, in Nantes, France. This biennial meeting is the fifth of its kind to be organized by the Transgenic Rats ImmunoPhenomic (TRIP) Nantes facility ( http://www.tgr.nantes.inserm.fr/ ). The meeting was supported by private companies (SONIDEL, Scionics computer innovation, New England Biolabs, MERCK, genOway, the journal Disease Models and Mechanisms) and by public institutions (International Society for Transgenic Technology, University of Nantes, INSERM UMR 1064, SFR François Bonamy, CNRS, Région Pays de la Loire, Biogenouest, TEFOR infrastructure, ITUN, IHU-CESTI and DHU-Oncogeffe and Labex IGO). Around 100 participants, from France as well as other European countries, Japan and the USA, attended the meeting.

  17. An experimental comparison of modelling techniques for speaker ...

    Indian Academy of Sciences (India)

    Most of the existing modelling techniques for the speaker recognition task make an implicit assumption of sufficient data for speaker modelling and hence may lead to poor modelling under limited data condition. The present work gives an experimental evaluation of the modelling techniques like Crisp Vector Quantization ...

  18. An experimental comparison of modelling techniques for speaker ...

    Indian Academy of Sciences (India)

    Most of the existing modelling techniques for the speaker recognition task make an implicit assumption of sufficient data for speaker modelling and hence may lead to poor modelling under limited data condition. The present work gives an experimental evaluation of the modelling techniques like Crisp.

  19. Respirometry techniques and activated sludge models

    NARCIS (Netherlands)

    Benes, O.; Spanjers, H.; Holba, M.

    2002-01-01

    This paper aims to explain results of respirometry experiments using Activated Sludge Model No. 1. In cases of insufficient fit of ASM No. 1, further modifications to the model were carried out and the so-called "Enzymatic model" was developed. The best-fit method was used to determine the effect of

  20. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    de Haan, G.; de Haan, G.; van der Veer, Gerrit C.; van Vliet, J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in

  1. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    Autonomous unmanned aerial vehicle (UAV) systems depend on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and helps achieve the flight mission safely. One of the sensor configurations used in UAV state estimation is the Attitude Heading and Reference System (AHRS) with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques in estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.

  2. Ambient temperature modelling with soft computing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni [Energy, New Technology and Environment Agency (ENEA), Via Anguillarese 301, 00123 Rome (Italy); De Felice, Matteo [Energy, New Technology and Environment Agency (ENEA), Via Anguillarese 301, 00123 Rome (Italy); University of Rome ''Roma 3'', Dipartimento di Informatica e Automazione (DIA), Via della Vasca Navale 79, 00146 Rome (Italy)]

    2010-07-15

    This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)
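
    A hedged sketch of the hybrid idea above: seed a few individuals of a GA population with gradient-trained weights for a tiny network. The network shape, synthetic data, and the finite-difference gradient (a crude stand-in for true back-propagation) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 3))                    # assumed inputs (e.g. month, lat, lon)
y = np.sin(X @ np.array([1.0, 2.0, -1.0]))          # assumed temperature-like target

def mlp(w, X):                                      # tiny fixed-shape 3-4-1 network
    W1, b1, W2 = w[:12].reshape(3, 4), w[12:16], w[16:20]
    return np.tanh(X @ W1 + b1) @ W2

def mse(w):
    return np.mean((mlp(w, X) - y) ** 2)

def gradient_seed(steps=200, lr=0.05):              # finite-difference "BP" stand-in
    w = rng.normal(0, 0.5, 20)
    for _ in range(steps):
        g = np.array([(mse(w + 1e-5 * e) - mse(w)) / 1e-5 for e in np.eye(20)])
        w -= lr * g
    return w

# GA population: a few gradient-initialised individuals, the rest random (the hybrid idea)
pop = [gradient_seed() for _ in range(3)] + [rng.normal(0, 0.5, 20) for _ in range(17)]
for gen in range(50):
    pop.sort(key=mse)
    pop = pop[:10] + [p + rng.normal(0, 0.1, 20) for p in pop[:10]]  # elitism + mutation
print("best MSE:", round(min(mse(p) for p in pop), 4))
```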

  3. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  4. Materials and techniques for model construction

    Science.gov (United States)

    Wigley, D. A.

    1985-01-01

    The problems confronting the designer of models for cryogenic wind tunnels are discussed with particular reference to the difficulties in obtaining appropriate data on the mechanical and physical properties of candidate materials and their fabrication technologies. The relationship between strength and toughness of alloys is discussed in the context of maximizing both and avoiding the problem of dimensional and microstructural instability. All major classes of materials used in model construction are considered in some detail, and in the Appendix selected numerical data are given for the most relevant materials. The stepped-specimen program to investigate stress-induced dimensional changes in alloys is discussed in detail together with interpretation of the initial results. The methods used to bond model components are considered with particular reference to the selection of filler alloys and temperature cycles to avoid microstructural degradation and loss of mechanical properties.

  5. Advanced techniques for modeling avian nest survival

    Science.gov (United States)

    Dinsmore, S.J.; White, Gary C.; Knopf, F.L.

    2002-01-01

    Estimation of avian nest survival has traditionally involved simple measures of apparent nest survival or Mayfield constant-nest-survival models. However, these methods do not allow researchers to build models that rigorously assess the importance of a wide range of biological factors that affect nest survival. Models that incorporate greater detail, such as temporal variation in nest survival and covariates representative of individual nests represent a substantial improvement over traditional estimation methods. In an attempt to improve nest survival estimation procedures, we introduce the nest survival model now available in the program MARK and demonstrate its use on a nesting study of Mountain Plovers (Charadrius montanus Townsend) in Montana, USA. We modeled the daily survival of Mountain Plover nests as a function of the sex of the incubating adult, nest age, year, linear and quadratic time trends, and two weather covariates (maximum daily temperature and daily precipitation) during a six-year study (1995–2000). We found no evidence for yearly differences or an effect of maximum daily temperature on the daily nest survival of Mountain Plovers. Survival rates of nests tended by female and male plovers differed (female rate = 0.33; male rate = 0.49). The estimate of the additive effect for males on nest survival rate was 0.37 (95% confidence limits were 0.03, 0.71) on a logit scale. Daily survival rates of nests increased with nest age; the estimate of daily nest-age change in survival in the best model was 0.06 (95% confidence limits were 0.04, 0.09) on a logit scale. Daily precipitation decreased the probability that the nest would survive to the next day; the estimate of the additive effect of daily precipitation on the nest survival rate was −1.08 (95% confidence limits were −2.12, −0.13) on a logit scale. Our approach to modeling daily nest-survival rates allowed several biological factors of interest to be easily included in nest survival models.
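
    A small worked example of how logit-scale estimates like those quoted above translate into daily and nest-period survival rates. The female baseline intercept is a hypothetical value; only the male effect (0.37) and the nest-age slope (0.06) are taken from the abstract:

```python
import numpy as np

def inv_logit(x):
    # Maps a logit-scale linear predictor to a daily survival probability
    return 1.0 / (1.0 + np.exp(-x))

beta0 = 2.0          # hypothetical female baseline (logit scale), NOT from the paper
beta_male = 0.37     # additive male effect from the abstract (logit scale)
beta_age = 0.06      # per-day nest-age effect from the abstract (logit scale)

print(f"daily survival: female {inv_logit(beta0):.3f}, "
      f"male {inv_logit(beta0 + beta_male):.3f}")

# Nest-period survival over a 30-day exposure, with age-varying daily rates
days = np.arange(30)
period = np.prod(inv_logit(beta0 + beta_age * days))
print(f"30-day nest survival: {period:.3f}")
```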

  6. Model measurements for new accelerating techniques

    International Nuclear Information System (INIS)

    Aronson, S.; Haseroth, H.; Knott, J.; Willis, W.

    1988-06-01

    We summarize the work carried out for the past two years, concerning some different ways for achieving high-field gradients, particularly in view of future linear lepton colliders. These studies and measurements on low power models concern the switched power principle and multifrequency excitation of resonant cavities. 15 refs., 12 figs

  7. Implementation of linguistic models by holographic technique

    Science.gov (United States)

    Pavlov, Alexander V.; Shevchenko, Yanina Y.

    2004-01-01

    In this paper we consider a linguistic model as an algebraic model and restrict our consideration to the semantics only. The concept allows a "natural-like" language to be used by a human teacher to describe for the machine the way of solving a problem, based on the human's knowledge and experience. Such imprecise words as "big", "very big", "not very big", etc. can be used to represent human knowledge. Technically, the problem is to match the metric scale used by the technical device with the linguistic scale intuitively formed by the person. We develop an algebraic description of a 4-f Fourier-holography setup by using a triangular-norms-based approach. In the model we use the Fourier-duality of the t-norms and t-conorms, which is implemented by the 4-f Fourier-holography setup. We demonstrate that the setup is described adequately by De Morgan's law for involution. Fourier-duality of the t-norms and t-conorms leads to fuzzy-valued logic. We consider a General Modus Ponens rule implementation to define the semantical operators that are adequate to the setup. We consider scales formed in both +1 and -1 orders of diffraction. We use representation of linguistic labels by fuzzy numbers to form the scale and discuss the dependence of the scale grading on the holographic recording medium operator. To implement reasoning with a multi-parametric input variable we use a Lorentz function to approximate linguistic labels. We use an example of medical diagnostics for experimental illustration of reasoning on the linguistic scale.

  8. Metamaterials modelling, fabrication, and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    2012-01-01

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refraction index are also ... parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, we will present our approach for determining the field enhancement in slits...

  9. Metamaterials modelling, fabrication and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refraction index are also ... parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, various approaches for determining the value of the refractive index...

  10. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...

  11. Optimization using surrogate models - by the space mapping technique

    DEFF Research Database (Denmark)

    Søndergaard, Jacob

    2003-01-01

    Surrogate modelling and optimization techniques are intended for engineering design in the case where an expensive physical model is involved. This thesis provides a literature overview of the field of surrogate modelling and optimization. The space mapping technique is one such method for constr... conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted. Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three...
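
    A toy sketch of the surrogate idea, using an output-correction (first-order response correction) variant of the space mapping family on one-dimensional quadratic stand-ins for the expensive and cheap models; it illustrates the principle only and is not one of the thesis's algorithms:

```python
from scipy.optimize import minimize_scalar

# Toy stand-ins: a "fine" (expensive) model and a misaligned "coarse" (cheap) one
fine = lambda x: (x - 1.3) ** 2 + 0.5
coarse = lambda x: (x - 1.0) ** 2 + 0.4

def fd(f, x, h=1e-6):
    # central finite-difference derivative
    return (f(x + h) - f(x - h)) / (2 * h)

x = minimize_scalar(coarse).x                  # start at the coarse optimum
for _ in range(10):
    d = fine(x) - coarse(x)                    # zeroth-order response correction
    g = fd(fine, x) - fd(coarse, x)            # first-order response correction
    surrogate = lambda t, xk=x, d=d, g=g: coarse(t) + d + g * (t - xk)
    x = minimize_scalar(surrogate).x           # cheap optimisation of the surrogate
print("surrogate-based solution:", round(x, 3), "(fine optimum at 1.3)")
```

Each iteration spends one expensive evaluation (plus a derivative estimate) and does all the optimisation work on the corrected cheap model, which is the economy the abstract describes.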

  12. An experimental comparison of modelling techniques for speaker ...

    Indian Academy of Sciences (India)

    Feature extraction involves extracting speaker-specific features from the speech signal at reduced data rate. The extracted features are further combined using modelling techniques to generate speaker models. The speaker models are then tested using the features extracted from the test speech signal. The improvement in ...

  13. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and related objects in an urban area, such as buildings, trees, vegetation, and some man-made features. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing graphic representations of buildings and other objects in 2.5 or 3D. Generally, three main Geomatics approaches are used for virtual 3-D city model generation: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, many researchers use terrestrial images by applying close-range photogrammetry with DSM and texture mapping. We start this paper with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on automation (automatic, semi-automatic and manual methods), and another based on data input techniques (photogrammetry and laser techniques). This paper gives an overview of techniques related to the generation of virtual 3-D city models using Geomatics techniques and of the applications of virtual 3D city models, together with conclusions, a short justification and analysis, and present trends in 3D city modeling. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or combinations of these modern Geomatics techniques play a major role in creating a virtual 3-D city model. Every technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3-D city models. Photo-realistic, scalable, geo-referenced virtual 3

  14. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. It begins with an introduction to circuit analysis techniques, laws, and frequency and time domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  15. [Intestinal lengthening techniques: an experimental model in dogs].

    Science.gov (United States)

    Garibay González, Francisco; Díaz Martínez, Daniel Alberto; Valencia Flores, Alejandro; González Hernández, Miguel Angel

    2005-01-01

    To compare two intestinal lengthening procedures in an experimental dog model. Intestinal lengthening is one of the methods of gastrointestinal reconstruction used for treatment of short bowel syndrome. The modification of Bianchi's technique is an alternative. The modified technique decreases the number of anastomoses to a single one, thus reducing the risk of leaks and strictures. To our knowledge there is no clinical or experimental report that has studied both techniques, so we carried out the present study. Twelve mixed-breed dogs were operated on with the Bianchi technique for intestinal lengthening (Group A) and another 12 dogs of the same breed and weight were operated on with the modified technique (Group B). Both groups were compared with respect to operating time, technical difficulties, cost, intestinal lengthening and anastomosis diameter. There was no statistical difference in anastomosis diameter (A = 9.0 mm vs. B = 8.5 mm, p = 0.3846). Operating time (142 min vs. 63 min), cost and technical difficulties were lower in Group B. Anastomoses (of Group B) and intestinal segments had good blood supply and were patent along their full length. The Bianchi technique and the modified technique offer two good, reliable alternatives for the treatment of short bowel syndrome. The modified technique improved operating time, cost and technical issues.

  16. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    ... In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...

  17. Application of the numerical modelling techniques to the simulation ...

    African Journals Online (AJOL)

    The aquifer was modelled by the application of Finite Element Method (F.E.M), with appropriate initial and boundary conditions. The matrix solver technique adopted for the F.E.M. was that of the Conjugate Gradient Method. After the steady state calibration and transient verification, the model was used to predict the effect of ...

  18. Fuzzy Control Technique Applied to Modified Mathematical Model ...

    African Journals Online (AJOL)

    In this paper, fuzzy control technique is applied to the modified mathematical model for malaria control presented by the authors in an earlier study. Five Mamdani fuzzy controllers are constructed to control the input (some epidemiological parameters) to the malaria model simulated by 9 fully nonlinear ordinary differential ...

  19. Summary on several key techniques in 3D geological modeling.

    Science.gov (United States)

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
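
    A minimal sketch of the first two procedures named above: generate a planar grid mesh, then spatially interpolate interface depths (here with inverse distance weighting, one common interpolation choice) to obtain a discrete geological surface. All borehole data are fabricated for illustration:

```python
import numpy as np

# Inverse distance weighting: estimate depth at grid nodes from scattered observations
def idw(xy_obs, z_obs, xy_grid, power=2.0, eps=1e-12):
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
xy_obs = rng.uniform(0, 100, (25, 2))                     # borehole locations (m)
z_obs = -50 + 0.1 * xy_obs[:, 0] + rng.normal(0, 1, 25)   # interface depths (m)

# Planar mesh generation: regular grid nodes over the model area
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])

surface = idw(xy_obs, z_obs, grid).reshape(gx.shape)      # discrete geological surface
print(surface.shape, round(surface.min(), 1), round(surface.max(), 1))
```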

  20. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  1. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique used for updating existing numerical models of structures in civil, mechanical, automobile, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical models to closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as an optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model developed with the experimental results obtained. The variables for the updating can be either material or geometrical properties of the model or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame has been analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close relationship can be established between the experimental and the numerical models.
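
    A compact sketch of firefly-based model updating on a toy two-parameter problem; the "experimental" target response and the objective function are assumed stand-ins for the beam/frame responses used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrepancy between numerical-model response and "experimental" response.
# The updating variables here are two assumed stiffness scaling factors.
target = np.array([1.8, 0.7])                       # pretend experimental values
def objective(theta):                               # e.g. response residuals
    return np.sum((theta - target) ** 2)

def firefly(n=15, iters=100, beta0=1.0, gamma=1.0, alpha=0.1):
    x = rng.uniform(0, 3, (n, 2))                   # initial swarm of candidates
    for _ in range(iters):
        f = np.array([objective(xi) for xi in x])
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                     # move i toward brighter j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(0, 1, 2)
        alpha *= 0.97                               # cool the random walk
    f = np.array([objective(xi) for xi in x])
    return x[f.argmin()]

print("updated parameters:", firefly().round(3), "target:", target)
```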

  2. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.

  3. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures ... bound of tu·tq = Ω(lg^(d-1) n). For ball range searching, we get a lower bound of tu·tq = Ω(n^(1-1/d)). The highest previous lower bound proved in the group model does not exceed Ω((lg n / lg lg n)^2) on the maximum of tu and tq. Finally, we present a new technique for proving lower bounds...

  4. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  5. Plants status monitor: Modelling techniques and inherent benefits

    International Nuclear Information System (INIS)

    Breeding, R.J.; Lainoff, S.M.; Rees, D.C.; Prather, W.A.; Fickiessen, K.O.E.

    1987-01-01

    The Plant Status Monitor (PSM) is designed to provide plant personnel with information on the operational status of the plant and compliance with the plant technical specifications. The PSM software evaluates system models using a 'distributed processing' technique in which detailed models of individual systems are processed rather than by evaluating a single, plant-level model. In addition, development of the system models for PSM provides inherent benefits to the plant by forcing detailed reviews of the technical specifications, system design and operating procedures, and plant documentation. (orig.)

  6. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method
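
    One way to realize sensitivity analysis directly from the original code runs is sketched below with importance reweighting of Monte Carlo samples; this illustrates the general idea of reusing code output rather than fitting a response surface, and is not the paper's specific method. The "code" is a stand-in function:

```python
import numpy as np

rng = np.random.default_rng(0)

def code(x1, x2):
    # Stand-in for an expensive code output (e.g. an aerosol removal fraction)
    return np.sin(x1) + 0.1 * x2 ** 2

n = 5000
x1 = rng.normal(0.0, 1.0, n)                    # baseline input distributions
x2 = rng.normal(0.0, 1.0, n)
y = code(x1, x2)                                # original runs, performed once

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Shift the assumed mean of x1's distribution WITHOUT rerunning the code:
# likelihood-ratio weights turn old samples into samples from the new input
w = norm_pdf(x1, 0.5, 1.0) / norm_pdf(x1, 0.0, 1.0)
w /= w.mean()
print("baseline mean output :", round(y.mean(), 3))
print("perturbed mean output:", round(np.average(y, weights=w), 3))
```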

  7. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The proposed model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through changes in equipment, and the model can be easily applied to both manufacturing and service industries.
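
    For small instances, the selection problem can be sketched as a 0/1 optimization solved by brute force; the technique names, gains, costs, and budget below are invented for illustration (the paper itself formulates a mixed integer program over fifty-four techniques):

```python
from itertools import combinations

# Toy instance: choose a subset of improvement techniques maximising a linear
# productivity gain subject to a budget. (gain, cost) pairs are illustrative.
techniques = {"5S": (0.04, 10), "TPM": (0.07, 25), "SMED": (0.05, 15),
              "Kaizen": (0.03, 8), "Training": (0.06, 20)}
budget = 40

best_gain, best_set = 0.0, ()
for r in range(len(techniques) + 1):
    for subset in combinations(techniques, r):       # enumerate all 0/1 choices
        cost = sum(techniques[t][1] for t in subset)
        gain = sum(techniques[t][0] for t in subset)
        if cost <= budget and gain > best_gain:
            best_gain, best_set = gain, subset
print(best_set, round(best_gain, 2))
```

At the paper's scale (54 binary variables), enumeration is infeasible and a MIP solver is the appropriate tool; the brute-force loop above only makes the model's structure concrete.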

  8. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  9. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    ... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.

  10. Effects of Peer Modelling Technique in Reducing Substance Abuse ...

    African Journals Online (AJOL)

    The study investigated the effects of peer modelling techniques in reducing substance abuse among undergraduates in Nigeria. The participants were one hundred and twenty (120) undergraduate students in 100 and 400 levels respectively. There are two groups: one treatment group and one control group.

  11. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of the cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive levels validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  12. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to assist with the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
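
    A minimal sketch of the surrogate idea: fit a Gaussian process to a handful of "simulation runs" (a cheap stand-in function here) and then query the surrogate instead of the code; the response function and input dimensions are assumptions for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Stand-in for an expensive simulation code (e.g. a thermal-hydraulic response
# as a function of two uncertain inputs); in RISMC this would be a code run.
def expensive_sim(x):
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, (40, 2))              # 40 "simulation runs"
y_train = expensive_sim(X_train)

# Train the surrogate once on the expensive runs
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X_train, y_train)

# Subsequent queries cost microseconds and come with an uncertainty estimate
X_new = rng.uniform(0, 1, (5, 2))
mean, std = surrogate.predict(X_new, return_std=True)
print(np.round(mean, 3), np.round(std, 3))
```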

  13. Space geodetic techniques for global modeling of ionospheric peak parameters

    Science.gov (United States)

    Alizadeh, M. Mahdi; Schuh, Harald; Schmidt, Michael

    The rapid development of new technological systems for navigation, telecommunication, and space missions which transmit signals through the Earth's upper atmosphere - the ionosphere - makes precise, reliable and near real-time models of the ionospheric parameters ever more crucial. In the last decades space geodetic techniques have turned into a capable tool for measuring ionospheric parameters in terms of Total Electron Content (TEC) or the electron density. Among these systems, the current space geodetic techniques, such as Global Navigation Satellite Systems (GNSS), Low Earth Orbiting (LEO) satellites, satellite altimetry missions, and others have found several applications in a broad range of commercial and scientific fields. This paper aims at the development of a three-dimensional integrated model of the ionosphere, by using various space geodetic techniques and applying a combination procedure for computation of the global model of electron density. In order to model the ionosphere in 3D, electron density is represented as a function of maximum electron density (NmF2) and its corresponding height (hmF2). NmF2 and hmF2 are then modeled in longitude, latitude, and height using two sets of spherical harmonic expansions with degree and order 15. To perform the estimation, GNSS input data are simulated in such a way that the true positions of the satellites are detected and used, but the STEC values are obtained through a simulation procedure, using the IGS VTEC maps. After simulating the input data, the a priori values required for the estimation procedure are calculated using the IRI-2012 model and also by applying the ray-tracing technique. The estimated results are compared with F2-peak parameters derived from the IRI model to assess the least-squares estimation procedure and moreover, to validate the developed maps, the results are compared with the raw F2-peak parameters derived from the Formosat-3/COSMIC data.
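
    A sketch of evaluating a degree/order-15 spherical harmonic expansion of a peak parameter such as NmF2 on a global grid, using SciPy's spherical harmonics; the coefficients below are random placeholders for the least-squares estimates the paper derives:

```python
import numpy as np
from scipy.special import sph_harm

deg = 15
rng = np.random.default_rng(0)
# Placeholder coefficients, damped with degree so the field stays smooth
coef = {(n, m): rng.normal(0, 1.0 / (n + 1) ** 2)
        for n in range(deg + 1) for m in range(n + 1)}

az = np.radians(np.arange(0, 360, 5))            # longitude (azimuth, rad)
pol = np.radians(np.arange(1, 180, 5))           # colatitude (polar angle, rad)
AZ, POL = np.meshgrid(az, pol)

nmf2 = np.zeros_like(AZ)
for (n, m), c in coef.items():
    # SciPy convention: sph_harm(order m, degree n, azimuth, polar angle)
    nmf2 += c * sph_harm(m, n, AZ, POL).real

print(nmf2.shape)                                # gridded NmF2-like field
```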

  14. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
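
    A hedged sketch of the cumulative-residual idea: cumulate residuals over a covariate and compare the observed process with realizations generated under the assumed model. Here simple residual permutations stand in for the zero-mean Gaussian approximation described in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data from a quadratic truth, fitted with a (misspecified) linear working model
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = 1.0 + 2.0 * x + 0.5 * x ** 2 + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

W_obs = np.cumsum(resid) / np.sqrt(n)          # cumulative-residual process over x
sup_obs = np.abs(W_obs).max()

# Reference realizations: permuting residuals destroys any systematic trend in x
sims = [np.abs(np.cumsum(rng.permutation(resid)) / np.sqrt(n)).max()
        for _ in range(1000)]
p_value = np.mean(np.array(sims) >= sup_obs)
print(f"sup|W| = {sup_obs:.3f}, p = {p_value:.3f}")   # small p flags misspecification
```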

  15. Advanced techniques in reliability model representation and solution

    Science.gov (United States)

    Palumbo, Daniel L.; Nicol, David M.

    1992-01-01

    The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.
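
    A minimal illustration of the kind of Markov reliability model such tools generate and solve, for a hypothetical duplex unit with imperfect failure coverage; the rates are invented and the solver is a plain matrix exponential rather than SURE/ASSURE:

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = both units up, 1 = one unit failed (covered), 2 = system failure
lam, mu, c = 1e-4, 1e-2, 0.99        # failure rate, repair rate, coverage (assumed)

# Generator matrix Q: Q[i][j] is the transition rate from state i to state j
Q = np.array([
    [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],
    [mu,       -(mu + lam),   lam],
    [0.0,       0.0,          0.0],              # absorbing failure state
])

p0 = np.array([1.0, 0.0, 0.0])                   # start with both units up
for t in (10.0, 100.0, 1000.0):                  # mission times (hours)
    p = p0 @ expm(Q * t)                         # transient state probabilities
    print(f"t={t:6.0f} h  unreliability = {p[2]:.3e}")
```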

  16. Use of hydrological modelling and isotope techniques in Guvenc basin

    International Nuclear Information System (INIS)

    Altinbilek, D.

    1991-07-01

    The study covers the work performed under Project No. 335-RC-TUR-5145 entitled ''Use of Hydrologic Modelling and Isotope Techniques in Guvenc Basin'' and is an initial part of a program for estimating runoff from Central Anatolia Watersheds. The study presented herein consists of mainly three parts: 1) the acquisition of a library of rainfall excess, direct runoff and isotope data for Guvenc basin; 2) the modification of SCS model to be applied to Guvenc basin first and then to other basins of Central Anatolia for predicting the surface runoff from gaged and ungaged watersheds; and 3) the use of environmental isotope technique in order to define the basin components of streamflow of Guvenc basin. 31 refs, figs and tabs
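
    The SCS model mentioned above rests on the standard curve-number relation; a minimal metric-unit version follows. The curve number CN here is an arbitrary placeholder, whereas the project calibrates it to Guvenc basin data.

        def scs_runoff(P_mm, CN):
            """SCS curve-number direct runoff (mm) from a storm of depth P_mm."""
            S = 25400.0 / CN - 254.0   # potential maximum retention (mm)
            Ia = 0.2 * S               # initial abstraction
            if P_mm <= Ia:
                return 0.0
            return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

        print(scs_runoff(50.0, CN=75))   # runoff from a 50 mm storm, hypothetical CN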

  17. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    Science.gov (United States)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…
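
    To make the state-space side of the comparison concrete, the sketch below filters a local-level model, one of the simplest state-space specifications; the noise variances q and r are placeholders. An SEM treatment of the same dynamics would instead work with the implied covariance structure.

        import numpy as np

        def local_level_filter(y, q=0.1, r=1.0):
            """Kalman filter for: x_t = x_{t-1} + w_t (var q); y_t = x_t + v_t (var r)."""
            x, P = y[0], 1.0
            filtered = []
            for obs in y:
                P = P + q                 # time update (predict)
                K = P / (P + r)           # Kalman gain
                x = x + K * (obs - x)     # measurement update
                P = (1 - K) * P
                filtered.append(x)
            return np.array(filtered)

        y = np.cumsum(np.random.default_rng(0).normal(size=100))  # toy series
        print(local_level_filter(y)[-5:])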

  18. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    OpenAIRE

    Frederico R. Romero; Claudemir Trapp; Michael Muntener; Fabio A. Brito; Louis R. Kavoussi; Thomas W. Jarrett

    2007-01-01

    OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbable running sutures.

  19. A Bayesian Technique for Selecting a Linear Forecasting Model

    OpenAIRE

    Ramona L. Trader

    1983-01-01

    The specification of a forecasting model is considered in the context of linear multiple regression. Several potential predictor variables are available, but some of them convey little information about the dependent variable which is to be predicted. A technique for selecting the "best" set of predictors which takes into account the inherent uncertainty in prediction is detailed. In addition to current data, there is often substantial expert opinion available which is relevant to the forecas...

  20. Fuzzy techniques for subjective workload-score modeling under uncertainties.

    Science.gov (United States)

    Kumar, Mohit; Arndt, Dagmar; Kreuzfeld, Steffi; Thurow, Kerstin; Stoll, Norbert; Stoll, Regina

    2008-12-01

    This paper deals with the development of a computer model to estimate the subjective workload score of individuals by evaluating their heart-rate (HR) signals. The identification of a model to estimate the subjective workload score of individuals under different workload situations is too ambitious a task because different individuals (due to different body conditions, emotional states, age, gender, etc.) show different physiological responses (assessed by evaluating the HR signal) under different workload situations. This is equivalent to saying that the mathematical mappings between physiological parameters and the workload score are uncertain. Our approach to deal with the uncertainties in a workload-modeling problem consists of the following steps: 1) the uncertainties arising due to the individual variations in identifying a common model valid for all the individuals are filtered out using a fuzzy filter; 2) the uncertainties (provided by the fuzzy filter) are modeled stochastically using finite-mixture models, and this information regarding uncertainties is utilized for identifying the structure and initial parameters of a workload model; and 3) finally, the workload model parameters for an individual are identified in an online scenario using machine learning algorithms. The contribution of this paper is to propose, with a mathematical analysis, a fuzzy-based modeling technique that first filters out the uncertainties from the modeling problem, analyzes the uncertainties statistically using finite-mixture modeling, and, finally, utilizes the information about uncertainties for adapting the workload model to an individual's physiological conditions. The approach of this paper, demonstrated with the real-world medical data of 11 subjects, provides a fuzzy-based tool useful for modeling in the presence of uncertainties.
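
    Step 2 of the approach can be illustrated with a generic finite-mixture fit: the sketch below models the spread of individual deviations from a common workload model using a two-component Gaussian mixture. The data are synthetic placeholders, not HR-derived measurements.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # synthetic stand-in for deviations of individuals from the common model
        deviations = np.concatenate([rng.normal(-1.0, 0.5, 300),
                                     rng.normal(1.5, 0.8, 200)]).reshape(-1, 1)

        gmm = GaussianMixture(n_components=2, random_state=0).fit(deviations)
        print("component means:", gmm.means_.ravel())
        print("component weights:", gmm.weights_)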

  1. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods produce similar results and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
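
    Among the global, interaction-aware methods alluded to above, variance-based Sobol indices are a common choice. A sketch using the SALib package on a stand-in three-input function follows; the function is illustrative, not one of the Sandia behavioral models.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {"num_vars": 3,
                   "names": ["x1", "x2", "x3"],
                   "bounds": [[0.0, 1.0]] * 3}

        X = saltelli.sample(problem, 1024)                 # quasi-random input sample
        Y = np.array([x[0] + x[1] * x[2] for x in X])      # stand-in model with an interaction
        Si = sobol.analyze(problem, Y)
        print("first-order:", Si["S1"])                    # main effects
        print("total-order:", Si["ST"])                    # includes interactions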

  2. Practical Techniques for Modeling Gas Turbine Engine Performance

    Science.gov (United States)

    Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2016-01-01

    The cost and risk associated with the design and operation of gas turbine engine systems have led to an increasing dependence on mathematical models. In this paper, the fundamentals of engine simulation will be reviewed, an example performance analysis will be performed, and relationships useful for engine control system development will be highlighted. The focus will be on thermodynamic modeling utilizing techniques common in industry, such as the Brayton cycle, component performance maps, map scaling, and design point criteria generation. In general, these topics will be viewed from the standpoint of an example turbojet engine model; however, demonstrated concepts may be adapted to other gas turbine systems, such as gas generators, marine engines, or high bypass aircraft engines. The purpose of this paper is to provide an example of gas turbine model generation and system performance analysis for educational uses, such as curriculum creation or student reference.
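
    As a minimal taste of the thermodynamic relationships involved, the ideal Brayton-cycle thermal efficiency depends only on the overall pressure ratio and the ratio of specific heats. Real engine models, as the paper describes, layer component maps, scaling, and losses on top of this ideal relation.

        def brayton_efficiency(pressure_ratio, gamma=1.4):
            """Ideal Brayton-cycle thermal efficiency: 1 - PR^(-(gamma-1)/gamma)."""
            return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

        for pr in (5, 10, 20, 30):
            print(pr, round(brayton_efficiency(pr), 3))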

  3. New techniques and models for assessing ischemic heart disease risks

    Directory of Open Access Journals (Sweden)

    I.N. Yakovina

    2017-09-01

    Full Text Available The paper focuses on the tasks of creating and implementing a new technique aimed at assessing ischemic heart disease risk. The technique is based on a laboratory-diagnostic complex which includes oxidative, lipid-lipoprotein, inflammatory, and metabolic biochemical parameters; a system of logic-mathematical models used for obtaining numeric risk assessments; and a program module which allows calculation and analysis of the results. We justified our models in the course of our research, which included 172 patients suffering from ischemic heart disease (IHD) combined with coronary atherosclerosis verified by coronary arteriography, and 167 patients who didn't have ischemic heart disease. Our research program included demographic and social data, questioning on tobacco and alcohol addiction, questioning about dietary habits, chronic disease case history and medication intake, cardiologic questioning as per Rose, anthropometry, blood pressure measured three times, spirometry, and electrocardiogram recording with decoding as per the Minnesota code. We determined the biochemical parameters of each patient and framed our task of creating techniques and models for assessing ischemic heart disease risk on the basis of inflammatory, oxidative, and lipid biological markers. We created a system of logic-mathematical models which is a universal scheme for processing laboratory parameters that allows for dissimilar data specificity. The system of models is universal, but the diagnostic approach to the applied biochemical parameters is specific. The created program module (calculator) helps a physician obtain a result on the basis of laboratory research data; the result characterizes the numeric risks of coronary atherosclerosis and ischemic heart disease for a patient. It also provides a visual image of the system of parameters and their deviation from a conditional «standard - pathology» boundary. The complex is implemented into practice by the Scientific

  4. Mathematical and Numerical Techniques in Energy and Environmental Modeling

    Science.gov (United States)

    Chen, Z.; Ewing, R. E.

    Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor or pharmaceutical design to large-scale applications such as global weather models to astrophysics. In particular, simulation of environmental effects of air pollution is extensive. Here we address the need for using similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We will discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms
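
    The discretization step discussed above can be illustrated in its simplest form: an explicit finite-difference scheme for one-dimensional advection-diffusion of a contaminant, with upwinding of the advective term. Grid spacing, time step, and coefficients are arbitrary values chosen to satisfy the usual stability limits, and periodic boundaries are assumed for brevity.

        import numpy as np

        def transport_step(c, u, D, dx, dt):
            """One explicit step of dc/dt + u dc/dx = D d2c/dx2 (u > 0, periodic BCs)."""
            adv = -u * (c - np.roll(c, 1)) / dx                        # first-order upwind
            diff = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
            return c + dt * (adv + diff)

        nx, dx, dt = 200, 1.0, 0.2
        c = np.zeros(nx); c[90:110] = 1.0          # initial contaminant slug
        for _ in range(500):
            c = transport_step(c, u=1.0, D=0.5, dx=dx, dt=dt)
        print("peak concentration after transport:", c.max())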

  5. Improved ceramic slip casting technique. [application to aircraft model fabrication

    Science.gov (United States)

    Buck, Gregory M. (Inventor); Vasquez, Peter (Inventor)

    1993-01-01

    A primary concern in modern fluid dynamics research is the experimental verification of computational aerothermodynamic codes. This research requires high precision and detail in the test model employed. Ceramic materials are used for these models because of their low heat conductivity and their survivability at high temperatures. To fabricate such models, slip casting techniques were developed to provide net-form, precision casting capability for high-purity ceramic materials in aqueous solutions. In previous slip casting techniques, block, or flask molds made of plaster-of-paris were used to draw liquid from the slip material. Upon setting, parts were removed from the flask mold and cured in a kiln at high temperatures. Casting detail was usually limited with this technique -- detailed parts were frequently damaged upon separation from the flask mold, as the molded parts are extremely delicate in the uncured state, and the flask mold is inflexible. Ceramic surfaces were also marred by 'parting lines' caused by mold separation. This adversely affected the aerodynamic surface quality of the model as well. (Parting lines are invariably necessary on or near the leading edges of wings, nosetips, and fins for mold separation. These areas are also critical for flow boundary layer control.) Parting agents used in the casting process also affected surface quality. These agents eventually soaked into the mold, the model, or flaked off when releasing the cast model. Different materials were tried, such as oils, paraffin, and even an algae. The algae released best, but some of it remained on the model and imparted an uneven texture and discoloration on the model surface when cured. According to the present invention, a wax pattern for a shell mold is provided, and an aqueous mixture of a calcium sulfate-bonded investment material is applied as a coating to the wax pattern. The coated wax pattern is then dried, followed by curing to vaporize the wax pattern and leave a shell

  6. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study of active cooperation between primary users and secondary users, i.e., CCRN, followed by discussions of research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, the authors model the CCRN based on orthogonal modulation and orthogonally dual-polarized antennas (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.

  7. Numerical and modeling techniques used in the EPIC code

    International Nuclear Information System (INIS)

    Pizzica, P.A.; Abramson, P.B.

    1977-01-01

    EPIC models fuel and coolant motion which result from internal fuel pin pressure (from fission gas or fuel vapor) and/or from the generation of sodium vapor pressures in the coolant channel subsequent to pin failure in an LMFBR. The modeling includes the ejection of molten fuel from the pin into a coolant channel with any amount of voiding through a clad rip which may be of any length or which may expand with time. One-dimensional Eulerian hydrodynamics is used to model both the motion of fuel and fission gas inside a molten fuel cavity and the mixture of two-phase sodium and fission gas in the channel. Motion of molten fuel particles in the coolant channel is tracked with a particle-in-cell technique
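
    As a toy illustration of the particle-tracking half of a particle-in-cell scheme, the sketch below advects Lagrangian particles through a fixed one-dimensional grid velocity field by interpolating grid values to the particle positions. EPIC itself couples such tracking to Eulerian two-phase hydrodynamics.

        import numpy as np

        nx, dx, dt = 50, 0.1, 0.01
        x_grid = np.arange(nx) * dx
        u_grid = np.linspace(1.0, 2.0, nx)      # placeholder channel velocity field

        rng = np.random.default_rng(0)
        xp = rng.uniform(0.0, nx * dx, 100)     # particle positions

        for _ in range(200):
            up = np.interp(xp, x_grid, u_grid)  # gather grid velocity to particles
            xp = (xp + dt * up) % (nx * dx)     # push particles; periodic wrap
        print(xp[:5])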

  8. Teaching scientific concepts through simple models and social communication techniques

    International Nuclear Information System (INIS)

    Tilakaratne, K.

    2011-01-01

    For science education, it is important to demonstrate to students the relevance of scientific concepts in every-day life experiences. Although there are methods available for achieving this goal, it is more effective if cultural flavor is also added to the teaching techniques and thereby the teacher and students can easily relate the subject matter to their surroundings. Furthermore, this would bridge the gap between science and day-to-day experiences in an effective manner. It could also help students to use science as a tool to solve problems faced by them and consequently they would feel science is a part of their lives. In this paper, it has been described how simple models and cultural communication techniques can be used effectively in demonstrating important scientific concepts to the students of secondary and higher secondary levels by using two consecutive activities carried out at the Institute of Fundamental Studies (IFS), Sri Lanka. (author)

  9. An Iterative Uncertainty Assessment Technique for Environmental Modeling

    International Nuclear Information System (INIS)

    Engel, David W.; Liebetrau, Albert M.; Jarman, Kenneth D.; Ferryman, Thomas A.; Scheibe, Timothy D.; Didier, Brett T.

    2004-01-01

    The reliability of and confidence in predictions from model simulations are crucial--these predictions can significantly affect risk assessment decisions. For example, the fate of contaminants at the U.S. Department of Energy's Hanford Site has critical impacts on long-term waste management strategies. In the uncertainty estimation efforts for the Hanford Site-Wide Groundwater Modeling program, computational issues severely constrain both the number of uncertain parameters that can be considered and the degree of realism that can be included in the models. Substantial improvements in the overall efficiency of uncertainty analysis are needed to fully explore and quantify significant sources of uncertainty. We have combined state-of-the-art statistical and mathematical techniques in a unique iterative, limited sampling approach to efficiently quantify both local and global prediction uncertainties resulting from model input uncertainties. The approach is designed for application to widely diverse problems across multiple scientific domains. Results are presented for both an analytical model where the response surface is ''known'' and a simplified contaminant fate transport and groundwater flow model. The results show that our iterative method for approximating a response surface (for subsequent calculation of uncertainty estimates) of specified precision requires less computing time than traditional approaches based upon noniterative sampling methods
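
    The flavor of iterative, limited sampling can be conveyed with a generic surrogate loop: fit a response-surface approximation to a handful of runs, then add samples where the surrogate is least certain. The sketch uses a Gaussian-process surrogate and a one-dimensional stand-in for an expensive simulator; it is not the specific statistical machinery of the cited work.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def expensive_model(x):                       # stand-in for a groundwater simulator
            return np.sin(3 * x) + 0.5 * x

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 3, 5).reshape(-1, 1)       # small initial design
        y = expensive_model(X).ravel()
        grid = np.linspace(0, 3, 200).reshape(-1, 1)

        for _ in range(10):                           # iterative refinement
            gp = GaussianProcessRegressor().fit(X, y)
            mean, std = gp.predict(grid, return_std=True)
            x_new = grid[np.argmax(std)]              # sample where uncertainty is largest
            X = np.vstack([X, [x_new]])
            y = np.append(y, expensive_model(x_new))
        print("final design size:", len(X))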

  10. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed
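
    At the opposite extreme of fidelity from the coupled conduction-convection model described above, a lumped-capacitance sketch captures the basic transient behavior being simulated: a product-box temperature relaxing toward a time-varying ambient profile. The thermal resistance, capacitance, and ambient curve below are invented for illustration.

        import numpy as np

        def lumped_response(T0, R, C, t, T_amb):
            """Explicit Euler solution of dT/dt = (T_amb(t) - T) / (R * C)."""
            dt = t[1] - t[0]
            T = np.empty_like(t); T[0] = T0
            for k in range(1, len(t)):
                T[k] = T[k - 1] + dt * (T_amb[k - 1] - T[k - 1]) / (R * C)
            return T

        t = np.linspace(0.0, 96.0, 961)                     # 96-h profile (hours)
        T_amb = 25.0 + 10.0 * np.sin(2 * np.pi * t / 24.0)  # hypothetical summer ambient (deg C)
        T = lumped_response(T0=5.0, R=8.0, C=6.0, t=t, T_amb=T_amb)
        print("product temperature range:", T.min(), T.max())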

  11. Modelling galaxy formation with multi-scale techniques

    International Nuclear Information System (INIS)

    Hobbs, A.

    2011-01-01

    Full text: Galaxy formation and evolution depend on a wide variety of physical processes - star formation, gas cooling, supernovae explosions and stellar winds, etc. - that span an enormous range of physical scales. We present a novel technique for modelling such massively multiscale systems. This has two key new elements: Lagrangian re-simulation, and convergent 'sub-grid' physics. The former allows us to home in on interesting simulation regions with very high resolution. The latter allows us to increase resolution for the physics that we can resolve, without unresolved physics spoiling convergence. We illustrate the power of our new approach by showing some new results for star formation in the Milky Way. (author)

  12. Prescribed wind shear modelling with the actuator line technique

    DEFF Research Database (Denmark)

    Mikkelsen, Robert Flemming; Sørensen, Jens Nørkær; Troldborg, Niels

    2007-01-01

    A method for prescribing arbitrary steady atmospheric wind shear profiles combined with CFD is presented. The method is furthermore combined with the actuator line technique governing the aerodynamic loads on a wind turbine. Computations are carried out on a wind turbine exposed to a representative steady atmospheric wind shear profile with and without wind direction changes up through the atmospheric boundary layer. Results show that the main impact on the turbine is captured by the model. Analysis of the wake behind the wind turbine reveals the formation of a skewed wake geometry interacting...

  13. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science, and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. But a key challenge in ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system, and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, finding appropriate validation techniques for ABM is necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.
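
    One common empirical tactic for ABM validation is replicative: run the stochastic model many times and compare the distribution of an emergent output against observations. The toy model below (random-walking agents that refuse to crowd within one unit of each other) is purely illustrative; the replicate statistic at the end is the kind of quantity one would test against real data.

        import numpy as np

        def run_abm(steps=100, n_agents=50, seed=0):
            rng = np.random.default_rng(seed)
            pos = rng.uniform(0, 100, size=(n_agents, 2))
            for _ in range(steps):
                trial = np.clip(pos + rng.normal(0, 1, pos.shape), 0, 100)
                for i in range(n_agents):                      # crowding rule
                    d = np.linalg.norm(np.delete(trial, i, 0) - trial[i], axis=1)
                    if d.min() < 1.0:
                        trial[i] = pos[i]                      # reject crowded move
                pos = trial
            return np.mean(pos.std(axis=0))                    # emergent dispersion statistic

        stats = [run_abm(seed=s) for s in range(20)]           # replicate runs
        print("mean dispersion:", np.mean(stats), "+/-", np.std(stats))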

  14. Laparoscopic anterior resection: new anastomosis technique in a pig model.

    Science.gov (United States)

    Bedirli, Abdulkadir; Yucel, Deniz; Ekim, Burcu

    2014-01-01

    Bowel anastomosis after anterior resection is one of the most difficult tasks to perform during laparoscopic colorectal surgery. This study aims to evaluate a new feasible and safe intracorporeal anastomosis technique after laparoscopic left-sided colon or rectum resection in a pig model. The technique was evaluated in 5 pigs. The OrVil device (Covidien, Mansfield, Massachusetts) was inserted into the anus and advanced proximally to the rectum. A 0.5-cm incision was made in the sigmoid colon, and the 2 sutures attached to its delivery tube were cut. After the delivery tube was evacuated through the anus, the tip of the anvil was removed through the perforation. The sigmoid colon was transected just distal to the perforation with an endoscopic linear stapler. The rectosigmoid segment to be resected was removed through the anus with a grasper, and distal transection was performed. A 25-mm circular stapler was inserted and combined with the anvil, and end-to-side intracorporeal anastomosis was then performed. We performed the technique in 5 pigs. Anastomosis required an average of 12 minutes. We observed that the proximal and distal donuts were completely removed in all pigs. No anastomotic air leakage was observed in any of the animals. This study shows the efficacy and safety of intracorporeal anastomosis with the OrVil device after laparoscopic anterior resection.

  15. Advanced applications of numerical modelling techniques for clay extruder design

    Science.gov (United States)

    Kandasamy, Saravanakumar

    Ceramic materials play a vital role in our day to day life. Recent advances in research, manufacture and processing techniques and production methodologies have broadened the scope of ceramic products such as bricks, pipes and tiles, especially in the construction industry. These are mainly manufactured using an extrusion process in auger extruders. During their long history of application in the ceramic industry, most of the design developments of extruder systems have resulted from expensive laboratory-based experimental work and field-based trial and error runs. In spite of these design developments, the auger extruders continue to be energy intensive devices with high operating costs. Limited understanding of the physical processes involved in extrusion and the cost and time requirements of lab-based experiments were found to be the major obstacles in the further development of auger extruders. An attempt has been made herein to use Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) based numerical modelling techniques to reduce the costs and time associated with research into design improvement by experimental trials. These two techniques, although used widely in other engineering applications, have rarely been applied for auger extruder development. This had been due to a number of reasons including technical limitations of the CFD tools previously available. Modern CFD and FEA software packages have much enhanced capabilities and allow the modelling of the flow of complex fluids such as clay. This research work presents a methodology using a Herschel-Bulkley fluid flow based CFD model to simulate and assess the flow of the clay-water mixture through the extruder and the die of a vacuum de-airing type clay extrusion unit used in ceramic extrusion. The extruder design and the operating parameters were varied to study their influence on the power consumption and the extrusion pressure. The model results were then validated using results from
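
    The Herschel-Bulkley constitutive law underlying the CFD model is compact enough to state directly: shear stress equals a yield stress plus a power-law term in the shear rate. The parameter values in the sketch are placeholders, not measured properties of a clay-water mixture.

        import numpy as np

        def herschel_bulkley_stress(gamma_dot, tau_y=20.0, K=5.0, n=0.4):
            """tau = tau_y + K * gamma_dot**n (Pa); the material flows only once tau exceeds tau_y."""
            return tau_y + K * np.asarray(gamma_dot, dtype=float) ** n

        print(herschel_bulkley_stress([0.1, 1.0, 10.0, 100.0]))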

  16. Biological modelling of pelvic radiotherapy. Potential gains from conformal techniques

    Energy Technology Data Exchange (ETDEWEB)

    Fenwick, J.D

    1999-07-01

    Models have been developed which describe the dose and volume dependences of various long-term rectal complications of radiotherapy; assumptions underlying the models are consistent with clinical and experimental descriptions of complication pathogenesis. In particular, rectal bleeding - perhaps the most common complication of modern external beam prostate radiotherapy, and which might be viewed as its principal dose-limiting toxicity - has been modelled as a parallel-type complication. Rectal dose-surface-histograms have been calculated for 79 patients treated, in the course of the Royal Marsden trial of pelvic conformal radiotherapy, for prostate cancer using conformal or conventional techniques; rectal bleeding data are also available for these patients. The maximum-likelihood fit of the parallel bleeding model to the dose-surface-histograms and complication data shows that the complication status of the patients analysed (most of whom received reference point doses of 64 Gy) was significantly dependent on, and almost linearly proportional to, the volume of highly dosed rectal wall: a 1% decrease in the fraction of rectal wall (outlined over an 11 cm rectal length) receiving a dose of 58 Gy or more led to a reduction in the (RTOG) grade 1, 2, 3 bleeding rate of about 1.1% - 95% confidence interval [0.04%, 2.2%]. The parallel model fit to the bleeding data is only marginally biased by uncertainties in the calculated dose-surface-histograms (due to setup errors, rectal wall movement and absolute rectal surface area variability), causing the gradient of the observed volume-response curve to be slightly lower than that which would be seen in the absence of these uncertainties. An analysis of published complication data supports these single-centre findings and indicates that the reductions in highly dosed rectal wall volumes obtainable using conformal radiotherapy techniques can be exploited to allow escalation of the dose delivered to the prostate target volume, the

  17. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    International Nuclear Information System (INIS)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-01-01

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  19. Modeling and Forecasting Electricity Demand in Azerbaijan Using Cointegration Techniques

    Directory of Open Access Journals (Sweden)

    Fakhri J. Hasanov

    2016-12-01

    Full Text Available Policymakers in developing and transitional economies require sound models to: (i) understand the drivers of rapidly growing energy consumption and (ii) produce forecasts of future energy demand. This paper attempts to model electricity demand in Azerbaijan and provide future forecast scenarios - as far as we are aware, this is the first such attempt for Azerbaijan using a comprehensive modelling framework. Electricity consumption increased and decreased considerably in Azerbaijan from 1995 to 2013 (the period used for the empirical analysis): it increased on average by about 4% per annum from 1995 to 2006, decreased by about 4½% per annum from 2006 to 2010, and increased thereafter. It is therefore vital that Azerbaijani planners and policymakers understand what drives electricity demand and be able to forecast how it will grow in order to plan for future power production. However, modeling electricity demand for such a country has many challenges. Azerbaijan is rich in energy resources; consequently, GDP is heavily influenced by oil prices, and hence real non-oil GDP is employed as the activity driver in this research (unlike almost all previous aggregate energy demand studies). Moreover, electricity prices are administered rather than market driven. Therefore, different cointegration and error correction techniques are employed to estimate a number of per capita electricity demand models for Azerbaijan, which are used to produce forecast scenarios for up to 2025. The resulting estimated models (in terms of coefficients, etc.) and forecasts of electricity demand for Azerbaijan in 2025 prove to be very similar, with the Business as Usual forecast ranging from about 19½ to 21 TWh.
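
    A minimal sketch of the cointegration-and-error-correction workflow, using statsmodels on synthetic stand-in series (the study itself uses Azerbaijani per-capita electricity consumption and real non-oil GDP): estimate a vector error-correction model with one cointegrating relation, then forecast a few steps ahead.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.vector_ar.vecm import VECM

        rng = np.random.default_rng(0)
        trend = rng.normal(0.04, 0.02, 30).cumsum()          # shared stochastic trend
        df = pd.DataFrame({"log_elec": trend + rng.normal(0, 0.05, 30),
                           "log_gdp":  trend + rng.normal(0, 0.05, 30)})

        model = VECM(df, k_ar_diff=1, coint_rank=1, deterministic="ci")
        res = model.fit()
        print("long-run (cointegrating) coefficients:", res.beta.ravel())
        print("5-step forecast:", res.predict(steps=5))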

  20. Model assessment using a multi-metric ranking technique

    Science.gov (United States)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

    Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics that identify adeptness in extreme events yet maintain simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as rank correlation, were also explored but removed when the information was found to be generally duplicative of other metrics. While equal weights are applied, weights could be altered to favor preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts instead of distance, along-track, and cross-track is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context, and will be briefly reported.
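
    The weighted tally reduces to a few lines of array manipulation: rank each model under each metric, weight the ranks, and sum. The scores below are fabricated and assume every metric has been oriented so that lower is better.

        import numpy as np

        # rows = models, columns = metrics (all oriented so lower is better)
        scores = np.array([[0.80, 1.20, 0.10],
                           [0.90, 0.90, 0.05],
                           [1.10, 1.00, 0.20]])
        weights = np.ones(scores.shape[1])     # equal weights; alter to favor metrics

        ranks = scores.argsort(axis=0).argsort(axis=0) + 1   # per-metric rank (1 = best)
        tally = (ranks * weights).sum(axis=1)                # weighted consolidation
        print("models ordered best to worst:", np.argsort(tally))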

  1. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    Directory of Open Access Journals (Sweden)

    Frederico R. Romero

    2007-02-01

    Full Text Available OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbable running sutures. The complete procedure and stages of gastric dissection, gastric closure, and gastrovesical anastomosis were separately timed for each laparoscopic gastrocystoplasty. The end-result of the gastric suturing and the bladder augmentation were evaluated by fluoroscopy or endoscopy. RESULTS: Mean total operative time was 5.2 hours (range 3.5 - 8): 84.5 minutes (range 62 - 110) for the gastric dissection, 56 minutes (range 28 - 80) for the gastric suturing, and 170.6 minutes (range 70 to 200) for the gastrovesical anastomosis. A cystogram showed a small leakage from the vesical anastomosis in the first two cases. No extravasation from gastric closure was observed in the postoperative gastrogram. CONCLUSIONS: Total laparoscopic gastrocystoplasty is a feasible but complex procedure that currently has limited clinical application. With the increasing use of laparoscopy in reconstructive surgery of the lower urinary tract, gastrocystoplasty may become an attractive option because of its potential advantages over techniques using small and large bowel segments.

  2. Mapping the Complexities of Online Dialogue: An Analytical Modeling Technique

    Directory of Open Access Journals (Sweden)

    Robert Newell

    2014-03-01

    Full Text Available The e-Dialogue platform was developed in 2001 to explore the potential of using the Internet for engaging diverse groups of people and multiple perspectives in substantive dialogue on sustainability. The system is online, text-based, and serves as a transdisciplinary space for bringing together researchers, practitioners, policy-makers and community leaders. The Newell-Dale Conversation Modeling Technique (NDCMT was designed for in-depth analysis of e-Dialogue conversations and uses empirical methodology to minimize observer bias during analysis of a conversation transcript. NDCMT elucidates emergent ideas, identifies connections between ideas and themes, and provides a coherent synthesis and deeper understanding of the underlying patterns of online conversations. Continual application and improvement of NDCMT can lead to powerful methodologies for empirically analyzing digital discourse and better capture of innovations produced through such discourse. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs140221

  3. Vector machine techniques for modeling of seismic liquefaction data

    Directory of Open Access Journals (Sweden)

    Pijush Samui

    2014-06-01

    Full Text Available This article employs three soft computing techniques, Support Vector Machine (SVM), Least Square Support Vector Machine (LSSVM) and Relevance Vector Machine (RVM), for prediction of liquefaction susceptibility of soil. SVM and LSSVM are based on the structural risk minimization (SRM) principle, which seeks to minimize an upper bound of the generalization error consisting of the sum of the training error and a confidence interval. RVM is a sparse Bayesian kernel machine. SVM, LSSVM and RVM have been used as classification tools. The developed SVM, LSSVM and RVM give equations for prediction of liquefaction susceptibility of soil. A comparative study has been carried out between the developed SVM, LSSVM and RVM models. The results from this article indicate that the developed SVM gives the best performance for prediction of liquefaction susceptibility of soil.
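
    A toy version of the classification task, using scikit-learn's SVC on two invented features (a cyclic stress ratio and a normalized penetration resistance); data, features, and kernel settings are placeholders rather than those of the article.

        import numpy as np
        from sklearn.svm import SVC

        # hypothetical features: [cyclic stress ratio, normalized SPT blow count]
        X = np.array([[0.30, 8], [0.10, 25], [0.25, 10], [0.08, 30],
                      [0.35, 6], [0.12, 22], [0.28, 12], [0.09, 28]])
        y = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # 1 = liquefied, 0 = did not liquefy

        clf = SVC(kernel="rbf", C=10.0).fit(X, y)
        print(clf.predict([[0.20, 15]]))          # susceptibility of a new site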

  4. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies to sell the right product to the right customer, at the right time, and for the right price. Therefore the challenge for any company is to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control-system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a proper dynamical system analogy based on active suspension, and a stability analysis is provided via the Lyapunov direct method.
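
    A skeletal receding-horizon formulation, assuming a toy linear demand model in which demand persists, is replenished at a constant baseline rate, and is damped by price; all coefficients are invented. Only the first optimized price would be applied before re-solving at the next step.

        import numpy as np
        import cvxpy as cp

        a, b, g = 0.9, -0.5, 2.0        # demand persistence, price sensitivity, baseline inflow
        d0, d_target, horizon = 5.0, 8.0, 10

        p = cp.Variable(horizon)        # prices over the horizon (the control input)
        d = cp.Variable(horizon + 1)    # demand trajectory (the state)
        constraints = [d[0] == d0, p >= 0, p <= 10]
        for t in range(horizon):
            constraints.append(d[t + 1] == a * d[t] + b * p[t] + g)

        cost = cp.sum_squares(d[1:] - d_target) + 0.1 * cp.sum_squares(p)
        cp.Problem(cp.Minimize(cost), constraints).solve()
        print("first price to apply:", float(p.value[0]))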

  5. Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support

    International Nuclear Information System (INIS)

    Schubiger-Banz, S; Arisona, S M; Zhong, C

    2014-01-01

    This paper presents a workflow to increase the level of detail of reality-based 3D urban models. It combines the established workflows from photogrammetry and procedural modeling in order to exploit distinct advantages of both approaches. The combination has advantages over purely automatic acquisition in terms of visual quality, accuracy and model semantics. Compared to manual modeling, procedural techniques can be much more time effective while maintaining the qualitative properties of the modeled environment. In addition, our method includes processes for procedurally adding additional features such as road and rail networks. The resulting models meet the increasing needs in urban environments for planning, inventory, and analysis

  6. The phase field technique for modeling multiphase materials

    Science.gov (United States)

    Singer-Loginova, I.; Singer, H. M.

    2008-10-01

    This paper reviews methods and applications of the phase field technique, one of the fastest growing areas in computational materials science. The phase field method is used as a theory and computational tool for predictions of the evolution of arbitrarily shaped morphologies and complex microstructures in materials. In this method, the interface between two phases (e.g. solid and liquid) is treated as a region of finite width having a gradual variation of different physical quantities, i.e. it is a diffuse interface model. An auxiliary variable, the phase field or order parameter $\phi(\vec{x})$, is introduced, which distinguishes one phase from the other. Interfaces are identified by the variation of the phase field. We begin by presenting the physical background of the phase field method and give a detailed thermodynamical derivation of the phase field equations. We demonstrate how equilibrium and non-equilibrium physical phenomena at the phase interface are incorporated into the phase field methods. Then we address in detail dendritic and directional solidification of pure and multicomponent alloys, effects of natural convection and forced flow, grain growth, nucleation, solid-solid phase transformation and highlight other applications of the phase field methods. In particular, we review the novel phase field crystal model, which combines atomistic length scales with diffusive time scales. We also discuss aspects of quantitative phase field modeling such as thin interface asymptotic analysis and coupling to thermodynamic databases. The phase field methods result in a set of partial differential equations, whose solutions require time-consuming large-scale computations and often limit the applicability of the method. Subsequently, we review numerical approaches to solve the phase field equations and present a finite difference discretization of the anisotropic Laplacian operator.
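
    The simplest concrete instance of phase field dynamics is the non-conserved Allen-Cahn equation with a double-well potential; a periodic finite-difference sketch follows. The interface parameter, grid, and time step are arbitrary choices respecting the explicit stability limit, not values from any particular study.

        import numpy as np

        def allen_cahn_step(phi, eps=0.05, dt=1e-4, dx=1.0 / 128):
            """d(phi)/dt = eps^2 * laplacian(phi) - (phi^3 - phi), periodic grid."""
            lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                   np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi) / dx**2
            return phi + dt * (eps**2 * lap - (phi**3 - phi))

        rng = np.random.default_rng(0)
        phi = rng.uniform(-0.1, 0.1, (128, 128))   # noisy initial order parameter
        for _ in range(2000):
            phi = allen_cahn_step(phi)             # domains of phi = +/-1 coarsen
        print("order parameter range:", phi.min(), phi.max())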

  7. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. Rats play a very important role in bone research, especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45+/-8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were established by injecting tumor cells directly into the SD rat femur with a needle. In the first step of the experiment, 2x10^5 to 1x10^6 UMR106 cells in 50 microl were injected intraosseously into the median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsy, and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counterstained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8x10^5 tumor cells or more, which was much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsy findings matched closely (r=0.942; p<0.01), which demonstrated that Doppler ultrasonography is a convenient and reliable technique for measuring cancer at any stage. The tumor growth curve showed that orthotopically implanted tumors expanded vigorously over time, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and high lung metastasis frequency. The present rat

  8. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameter, model and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods are often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for heat conduction and diffusion process involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification

  9. A Critical Review of Model-Based Economic Studies of Depression: Modelling Techniques, Model Structure and Data Sources

    OpenAIRE

    Hossein Haji Ali Afzali; Jonathan Karnon; Jodi Gray

    2012-01-01

    Depression is the most common mental health disorder and is recognized as a chronic disease characterized by multiple acute episodes/relapses. Although modelling techniques play an increasingly important role in the economic evaluation of depression interventions, comparatively little attention has been paid to issues around modelling studies with a focus on potential biases. This, however, is important as different modelling approaches, variations in model structure and input parameters may ...

  10. Adaptive Atmospheric Modeling Key Techniques in Grid Generation, Data Structures, and Numerical Operations with Applications

    CERN Document Server

    Behrens, Jörn

    2006-01-01

    Gives an overview and guidance in the development of adaptive techniques for atmospheric modeling. This book covers paradigms of adaptive techniques, such as error estimation and adaptation criteria. Considering applications, it demonstrates several techniques for discretizing relevant conservation laws from atmospheric modeling.

  11. Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    for the whole frequency range. However, certain applications (like controller reduction) require frequency-weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency-weighted model reduction techniques include lack of stability of reduced-order models (for the two-sided weighting case) and lack of frequency response error bounds. A new frequency-weighted technique for balanced model reduction of discrete-time systems is proposed. The proposed technique guarantees stable reduced-order models even when two-sided weightings are present. An efficient technique for computing frequency-weighted Gramians is also proposed. Results are compared with other existing frequency-weighted model reduction techniques for discrete-time systems. Moreover, the proposed technique yields frequency response error bounds.
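
    For orientation, the unweighted discrete-time balanced truncation that such frequency-weighted methods extend can be sketched via Lyapunov-equation Gramians and a square-root balancing transformation; frequency weights would enter by replacing the Gramians with weighted ones. The two-state system below is a stable placeholder.

        import numpy as np
        from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

        def balanced_truncation(A, B, C, r):
            """Unweighted balanced truncation of a stable discrete-time system."""
            Wc = solve_discrete_lyapunov(A, B @ B.T)     # controllability Gramian
            Wo = solve_discrete_lyapunov(A.T, C.T @ C)   # observability Gramian
            Lc = cholesky(Wc, lower=True)
            U, s, _ = svd(Lc.T @ Wo @ Lc)                # s = squared Hankel singular values
            T = (Lc @ U) / s ** 0.25                     # balancing transformation
            Ti = np.linalg.inv(T)
            Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
            return Ab[:r, :r], Bb[:r, :], Cb[:, :r]

        A = np.array([[0.5, 0.1], [0.0, 0.8]])
        B = np.array([[1.0], [0.5]]); C = np.array([[1.0, 1.0]])
        Ar, Br, Cr = balanced_truncation(A, B, C, r=1)
        print(Ar, Br, Cr)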

  12. Constitutional Model and Rationality in Judicial Decisions from Proportionality Technique

    OpenAIRE

    Feio, Thiago Alves

    2016-01-01

    In current legal systems, the content of constitutions consists of values that serve to limit state action. The branch in charge of controlling this system is usually the judiciary. This choice leads to two major problems: the tension between democracy and constitutionalism, and the subjectivity of that control. One of the solutions to subjectivity is the weighting of principles through the proportionality technique, which aims to produce rational decisions. This technique doesn’t elimi...

  13. Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems

    Science.gov (United States)

    Yang, Le; Wang, Shuo; Feng, Jianghua

    2017-11-01

    Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike fundamental frequency components in motor drive systems, high-frequency EMI noise, coupled with the parasitic parameters of the drive system, is difficult to analyze and reduce. In this article, EMI modeling techniques for different functional units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applicable frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques used to eliminate bearing currents are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.

  14. Review of air quality modeling techniques. Volume 8

    International Nuclear Information System (INIS)

    Rosen, L.C.

    1977-01-01

    Air transport and diffusion models applicable to the assessment of the environmental effects of nuclear, geothermal, and fossil-fuel electric generation are reviewed. The general classification of models and model inputs is discussed. A detailed examination of the statistical, Gaussian plume, Gaussian puff, one-box, and species-conservation-of-mass models is given. Representative models are discussed, with attention given to the assumptions, input data requirements, advantages, disadvantages, and applicability of each.
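
    The Gaussian plume model named above has a closed-form concentration field, sketched below with a ground-reflection term. The power-law dispersion curves stand in for the stability-class parameterizations (e.g. Pasquill-Gifford) used by real implementations.

        import numpy as np

        def gaussian_plume(Q, u, x, y, z, H, a=0.08, b=0.06):
            """Steady concentration from a point source of strength Q (g/s) at height H (m),
            wind speed u (m/s), receptor at (x, y, z) m downwind; includes ground reflection."""
            sy, sz = a * x ** 0.9, b * x ** 0.85    # hypothetical dispersion curves
            return (Q / (2 * np.pi * u * sy * sz)
                    * np.exp(-y**2 / (2 * sy**2))
                    * (np.exp(-(z - H)**2 / (2 * sz**2))
                       + np.exp(-(z + H)**2 / (2 * sz**2))))

        print(gaussian_plume(Q=100.0, u=5.0, x=1000.0, y=0.0, z=0.0, H=50.0))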

  15. Modeling technique for the process of liquid film disintegration

    Science.gov (United States)

    Modorskii, V. Ya.; Sipatov, A. M.; Babushkina, A. V.; Kolodyazhny, D. Yu.; Nagorny, V. S.

    2016-10-01

    In the course of numerical experiments, a method for calculating two-phase flows was developed by solving a model problem. Two mathematical models describing two-phase flow and the breakup of a liquid jet into droplets were compared: the VoF (volume of fluid) model and the QMOM (quadrature method of moments) model, both considered for implementing the spray simulation.

  16. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance...... the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger...... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region....

  17. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  18. Application of integrated modeling technique for data services ...

    African Journals Online (AJOL)

    This paper, therefore, describes the application of the integrated simulation technique for deriving the optimum resources required for data services in an asynchronous transfer mode (ATM) based private wide area network (WAN) to guarantee specific QoS requirement. The simulation tool drastically cuts the simulation ...

  19. Simulation technique for hard-disk models in two dimensions

    DEFF Research Database (Denmark)

    Fraser, Diane P.; Zuckermann, Martin J.; Mouritsen, Ole G.

    1990-01-01

    A method is presented for studying hard-disk systems by Monte Carlo computer-simulation techniques within the NpT ensemble. The method is based on the Voronoi tessellation, which is dynamically maintained during the simulation. By an analysis of the Voronoi statistics, a quantity is identified…
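
    A minimal sketch of the core of such a simulation is shown below: Metropolis moves for hard disks in a periodic box, where a trial move is accepted whenever it creates no overlap. The NpT volume moves and the dynamically maintained Voronoi tessellation of the paper are omitted; this illustrates the hard-disk Monte Carlo step only.

```python
import numpy as np

def overlaps(pos, i, trial, L, d=1.0):
    """True if disk i placed at `trial` overlaps any other disk (box side L)."""
    delta = np.delete(pos, i, axis=0) - trial
    delta -= L * np.round(delta / L)            # minimum-image convention
    return np.any(np.sum(delta**2, axis=1) < d**2)

def mc_sweep(pos, L, step=0.1, rng=np.random.default_rng()):
    """One Metropolis sweep of single-disk displacement moves."""
    accepted = 0
    for i in range(len(pos)):
        trial = (pos[i] + rng.uniform(-step, step, 2)) % L
        if not overlaps(pos, i, trial, L):
            pos[i] = trial                      # hard disks: accept if no overlap
            accepted += 1
    return accepted / len(pos)

# Dilute start on a square lattice, then a few sweeps.
n_side, L = 8, 16.0
grid = np.arange(n_side) * (L / n_side) + 1.0
pos = np.array([(x, y) for x in grid for y in grid])
for _ in range(10):
    rate = mc_sweep(pos, L)
print("acceptance rate:", rate)
```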

  20. (NHIS) using data mining technique as a statistical model

    African Journals Online (AJOL)

    kofi.mereku

    2014-05-23

    May 23, 2014 … Scheme (NHIS) claims in the Awutu-Effutu-Senya District using data mining techniques, with a specific focus on … transform them into a format that is friendly to data mining algorithms, such as … many groups to access the data, facilitate updating the data, and improve the efficiency of checking the data for …

  1. Use of System Dynamics Techniques in the Garrison Health Modelling Tool

    Science.gov (United States)

    2010-11-01

    Joint Health Command (JHC) tasked DSTO to develop techniques for modelling Defence health service delivery both in a Garrison environment in Australia… The report describes the Garrison Health Modelling Tool, a prototype software package designed to provide decision-support to JHC health officers and managers in a garrison environment.

  2. On a Numerical and Graphical Technique for Evaluating some Models Involving Rational Expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  3. On a numerical and graphical technique for evaluating some models involving rational expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  4. OFF-LINE HANDWRITING RECOGNITION USING VARIOUS HYBRID MODELING TECHNIQUES AND CHARACTER N-GRAMS

    NARCIS (Netherlands)

    Brakensiek, A.; Rottland, J.; Kosmala, A.; Rigoll, G.

    2004-01-01

    In this paper a system for off-line cursive handwriting recognition is described. The system is based on Hidden Markov Models (HMMs) using discrete and hybrid modeling techniques. Here, we focus on two aspects of the recognition system. First, we present different hybrid modeling techniques, whereas…

  5. The application of neural networks with artificial intelligence technique in the modeling of industrial processes

    International Nuclear Information System (INIS)

    Saini, K. K.; Saini, Sanju

    2008-01-01

    Neural networks are a relatively new artificial intelligence technique that emulates the behavior of biological neural systems in digital software or hardware. These networks can automatically 'learn' complex relationships among data. This feature makes the technique very useful in modeling processes for which mathematical modeling is difficult or impossible. The work described here outlines some examples of the application of neural networks with artificial intelligence techniques in the modeling of industrial processes.
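
    As an illustration of the idea, the sketch below fits a small feed-forward network to a hypothetical nonlinear process with two inputs; it is not taken from the paper, and the 'process' is a synthetic function standing in for a plant whose mathematical model is unavailable.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical process: the output depends nonlinearly on two inputs
# (say, temperature and flow rate), plus measurement noise.
X = rng.uniform(0, 1, (1500, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1] + 0.05 * rng.standard_normal(1500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000,
                   random_state=1).fit(X_tr, y_tr)
print("R^2 on held-out data:", net.score(X_te, y_te))
```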

  6. Models, Web-Based Simulations, and Integrated Analysis Techniques for Improved Logistical Performance

    National Research Council Canada - National Science Library

    Hill, Raymond

    2001-01-01

    ... Laboratory, Logistics Research Division, Logistics Readiness Branch to propose a research agenda entitled, "Models, Web-based Simulations, and Integrated Analysis Techniques for Improved Logistical Performance...

  7. Modelling Data Mining Dynamic Code Attributes with Scheme Definition Technique

    OpenAIRE

    Sipayung, Evasaria M; Fiarni, Cut; Tanudjaja, Randy

    2014-01-01

    Data mining is a technique used in different disciplines to search for significant relationships among variables in large data sets. One of the important steps in data mining is data preparation. In this step, we need to transform complex data with more than one attribute into a representative format for the data mining algorithm. In this study, we concentrated on designing a proposed system to fetch attributes from complex data such as a product ID. Then the proposed system will determine the basic…

  8. Wave propagation in fluids models and numerical techniques

    CERN Document Server

    Guinot, Vincent

    2012-01-01

    This second edition, with four additional chapters, presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. The application domains vary, including contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite…

  9. An eigenexpansion technique for modelling plasma start-up

    International Nuclear Information System (INIS)

    Pillsbury, R.D.

    1989-01-01

    An algorithm has been developed and implemented in a computer program that allows the estimation of PF coil voltages required to start up an axisymmetric plasma in a tokamak in the presence of eddy currents in toroidally continuous conducting structures. The algorithm makes use of an eigen-expansion technique to solve the lumped parameter circuit loop voltage equations associated with the PF coils and passive (conducting) structures. An example of start-up for CIT (Compact Ignition Tokamak) is included.

  10. TESTING DIFFERENT SURVEY TECHNIQUES TO MODEL ARCHITECTONIC NARROW SPACES

    Directory of Open Access Journals (Sweden)

    A. Mandelli

    2017-08-01

    In the architectural survey field, there has been the spread of a vast number of automated techniques. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and level of automation reachable in real case scenarios, especially in the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectonic restitution scale, of common and not so common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as service areas, corridors and stairs of Milan’s cathedral indoors. Tests have shown that the quality of the results is strongly affected by side issues like the impossibility of following the theoretical ideal methodology when surveying such spaces. The tested instruments are: the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups, a full frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  11. Testing Different Survey Techniques to Model Architectonic Narrow Spaces

    Science.gov (United States)

    Mandelli, A.; Fassi, F.; Perfetti, L.; Polari, C.

    2017-08-01

    In the architectural survey field, there has been the spread of a vast number of automated techniques. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and level of automation reachable in real case scenarios, especially in the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectonic restitution scale, of common and not so common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as service areas, corridors and stairs of Milan's cathedral indoors. Tests have shown that the quality of the results is strongly affected by side issues like the impossibility of following the theoretical ideal methodology when surveying such spaces. The tested instruments are: the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups, a full frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  12. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  13. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models; Reduction de modeles thermiques de batiments: amelioration des techniques par modelisation des sollicitations meteorologiques

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings for the evaluation of the energy exploitation costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. This thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model reduction techniques tested (Marshall's truncation, the Michailesco aggregation method and Moore truncation) with their algorithms and their encoding in the MATRED software, (4) the application of the model reduction methods to the GENEC and ETNA cells and to a medium size dual-zone building, (5) the modeling of meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower inertia building. These new methods are compared to classical methods. (J.S.) 69 refs.
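
    Of the reduction methods named above, Moore's approach survives in modern form as balanced truncation. A minimal square-root implementation for a stable linear system is sketched below; the test system is random and hypothetical, not one of the building models from the thesis.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce a stable LTI system x' = Ax + Bu, y = Cx to order r
    by square-root balanced truncation (Moore-style reduction)."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    S = cholesky(Wc, lower=True)
    R = cholesky(Wo, lower=True)
    U, sig, Vt = svd(R.T @ S)                      # Hankel singular values
    T = S @ Vt.T[:, :r] / np.sqrt(sig[:r])         # balancing transformation
    Tinv = (U[:, :r] / np.sqrt(sig[:r])).T @ R.T
    return Tinv @ A @ T, Tinv @ B, C @ T, sig

# Hypothetical stable 4th-order system reduced to 2nd order.
rng = np.random.default_rng(2)
A = -2.0 * np.eye(4) + 0.3 * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 1))
C = rng.standard_normal((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print("Hankel singular values:", hsv)
```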

  14. Multiparous Ewe as a Model for Teaching Vaginal Hysterectomy Techniques.

    Science.gov (United States)

    Kerbage, Yohan; Cosson, Michel; Hubert, Thomas; Giraudet, Géraldine

    2017-12-01

    Despite being linked to improved patient outcomes and lower costs, the use of vaginal hysterectomy is on the wane. Although a combination of reasons might explain this trend, one cause is a lack of practical training. An appropriate teaching model must therefore be devised. Currently, only low-fidelity simulators exist. Ewes provide an appropriate model for pelvic anatomy and are well-suited for testing vaginal mesh properties. This article sets out a vaginal hysterectomy procedure for use as an education and training model. The multiparous ewe was the model. Surgery was performed under general anesthesia. The ewe was in a lithotomy position resembling that assumed by women on the operating table. Two vaginal hysterectomies were performed on two ewes, following every step precisely as if the model were human. Each surgical step of vaginal hysterectomy performed on the ewe was compared side by side with the corresponding step in a woman, and all surgical steps proved remarkably similar. The main limitations of this model are costs ($500/procedure), logistic problems (housing large animals), and public opposition to animal training models. The ewe appears to be an appropriate model for teaching and training in vaginal hysterectomy.

  15. Household water use and conservation models using Monte Carlo techniques

    Directory of Open Access Journals (Sweden)

    R. Cahill

    2013-10-01

    The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006–2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.
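
    A minimal sketch of the Monte Carlo end-use approach is given below. The distributions and the toilet-retrofit scenario are hypothetical placeholders, not the calibrated San Ramon values; the point is only how end-use draws aggregate into household demand and how a conservation action is evaluated against the baseline.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10_000  # simulated households

# Hypothetical end-use parameter distributions (not the study's values):
# event frequencies and volumes per event for three indoor end uses.
shower_L = rng.poisson(1.7, N) * rng.normal(60, 15, N).clip(10)   # L/day
flushes = rng.poisson(10, N)
toilet_L = flushes * rng.normal(9, 2, N).clip(3)                  # L/day
washer_L = rng.binomial(1, 0.4, N) * rng.normal(100, 20, N).clip(40)

total = shower_L + toilet_L + washer_L
print(f"mean use: {total.mean():.0f} L/household/day, "
      f"90th percentile: {np.percentile(total, 90):.0f} L")

# Simple conservation scenario: retrofit all toilets to 4.8 L per flush.
savings = toilet_L - flushes * 4.8
print(f"mean toilet-retrofit savings: {savings.mean():.0f} L/household/day")
```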

  16. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions… The …-passive behaviour of the proposed method comes from the combination of the non-intrusive behaviour of the passive methods with the better accuracy of the active methods. The simulation results reveal the good accuracy of the proposed method.

  17. Size reduction techniques for vital compliant VHDL simulation models

    Science.gov (United States)

    Rich, Marvin J.; Misra, Ashutosh

    2006-08-01

    A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. The system then collects all the delay values of the selected instance and builds super generics for the rise-time and the fall-time of the selected instance. The system repeats this process for every delay value in the standard delay file (310) that corresponds to every instance of every logic gate in the logic model. The system then outputs a reduced-size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.

  18. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Snow exerts a significant regulating effect on the land hydrological cycle, since it controls the intensity of heat and water exchange between the soil-vegetative cover and the atmosphere. Estimating spring flood runoff or rain-floods on mountainous rivers requires understanding of the snow cover dynamics on a watershed. In our work, the problem of snow cover depth modeling is solved using both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of the investigation results and further development of this theme by the scientific community. In this research we used the daily observational data on the snow cover and surface meteorological parameters, obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankylä (Finland), and Snoqualmie Pass (USA). Statistical modeling of the snow cover depth is based on a set of freely distributed, present-day machine learning models: decision trees, adaptive boosting, and gradient boosting. It is demonstrated that the combination of modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow cover depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow cover accumulation and its melting. The purposeful character of the learning process for models of the gradient boosting type, their ensemble character, and the use of the combined redundancy of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.

  19. Modeling and Control of Multivariable Process Using Intelligent Techniques

    Directory of Open Access Journals (Sweden)

    Subathra Balasubramanian

    2010-10-01

    For nonlinear dynamic systems, first-principles-based modeling and control are difficult to implement. In this study, a fuzzy controller and a recurrent fuzzy controller are developed for a MIMO process. A fuzzy logic controller is a model-free controller designed based on knowledge about the process. Two types of rule-based fuzzy models are available: the linguistic (Mamdani) model and the Takagi–Sugeno (TS) model; of these two, the TS model has attracted the most attention. The application of fuzzy controllers is limited to static processes due to their feedforward structure, but most real-time processes are dynamic and require the history of input/output data. In order to store the past values a memory unit is needed, which is introduced by the recurrent structure. The proposed recurrent fuzzy structure is used to develop a controller for a two-tank heating process. Both controllers are designed and implemented in a real-time environment and their performance is compared.
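
    A zero-order Takagi–Sugeno controller of the kind discussed reduces to a weighted average of crisp rule consequents. The sketch below is a generic single-input illustration with made-up membership functions and consequents, not the two-tank controller from the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def ts_controller(error):
    """Zero-order Takagi-Sugeno inference: weighted average of rule outputs.

    Illustrative rules: negative error -> heat less, zero error -> hold,
    positive error -> heat more.  Consequents are crisp singletons.
    """
    w = np.array([tri(error, -2, -1, 0),   # "error negative"
                  tri(error, -1,  0, 1),   # "error zero"
                  tri(error,  0,  1, 2)])  # "error positive"
    u = np.array([-1.0, 0.0, 1.0])         # rule consequents (control action)
    return np.dot(w, u) / w.sum() if w.sum() > 0 else 0.0

for e in (-1.5, -0.3, 0.0, 0.7):
    print(f"error={e:+.1f} -> control={ts_controller(e):+.2f}")
```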

  20. A study on the modeling techniques using LS-INGRID

    Energy Technology Data Exchange (ETDEWEB)

    Ku, J. H.; Park, S. W

    2001-03-01

    For the development of radioactive material transport packages, the verification of structural safety of a package against the free drop impact accident should be carried out. The use of LS-DYNA, which is specially developed code for impact analysis, is essential for impact analysis of the package. LS-INGRID is a pre-processor for LS-DYNA with considerable capability to deal with complex geometries and allows for parametric modeling. LS-INGRID is most effective in combination with LS-DYNA code. Although the usage of LS-INGRID seems very difficult relative to many commercial mesh generators, the productivity of users performing parametric modeling tasks with LS-INGRID can be much higher in some cases. Therefore, LS-INGRID has to be used with LS-DYNA. This report presents basic explanations for the structure and commands, basic modelling examples and advanced modelling of LS-INGRID to use it for the impact analysis of various packages. The new users can build the complex model easily, through a study for the basic examples presented in this report from the modelling to the loading and constraint conditions.

  1. A study on the modeling techniques using LS-INGRID

    International Nuclear Information System (INIS)

    Ku, J. H.; Park, S. W.

    2001-03-01

    For the development of radioactive material transport packages, the verification of structural safety of a package against the free drop impact accident should be carried out. The use of LS-DYNA, which is specially developed code for impact analysis, is essential for impact analysis of the package. LS-INGRID is a pre-processor for LS-DYNA with considerable capability to deal with complex geometries and allows for parametric modeling. LS-INGRID is most effective in combination with LS-DYNA code. Although the usage of LS-INGRID seems very difficult relative to many commercial mesh generators, the productivity of users performing parametric modeling tasks with LS-INGRID can be much higher in some cases. Therefore, LS-INGRID has to be used with LS-DYNA. This report presents basic explanations for the structure and commands, basic modelling examples and advanced modelling of LS-INGRID to use it for the impact analysis of various packages. The new users can build the complex model easily, through a study for the basic examples presented in this report from the modelling to the loading and constraint conditions

  2. GENERALIZATION TECHNIQUE FOR 2D+SCALE DHE DATA MODEL

    Directory of Open Access Journals (Sweden)

    H. Karim

    2016-10-01

    Different users or applications need models at different scales, especially in computer applications such as game visualization and GIS modelling. Some issues have been raised on fulfilling the GIS requirement of retaining detail while minimizing the redundancy of the scale datasets. Previous researchers suggested and attempted to add another dimension such as scale or/and time into a 3D model, but the implementation of a scale dimension faces some problems due to the limitations and availability of data structures and data models. Nowadays, various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting a scale dimension. Generally, the Dual Half Edge (DHE) data structure was designed to work with any perfect 3D spatial object such as buildings. In this paper, we attempt to expand the capability of the DHE data structure toward integration with a scale dimension. The description of the concept and implementation of generating 3D-scale (2D spatial + scale dimension) models with the DHE data structure forms the major discussion of this paper. We strongly believe that advantages such as local modification and topological elements (navigation, query and semantic information) in the scale dimension could be used for future 3D-scale applications.

  3. Fusing Observations and Model Results for Creation of Enhanced Ozone Spatial Fields: Comparison of Three Techniques

    Science.gov (United States)

    This paper presents three simple techniques for fusing observations and numerical model predictions. The techniques rely on the model/observation bias being considered either as error free, or as containing some uncertainty, the latter mitigated with a Kalman filter approach or a spati…
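
    The Kalman filter variant can be illustrated in scalar form: each fused value is a precision-weighted compromise between the model prediction and the observation. The sketch below uses hypothetical ozone values and variances; it shows the update at observation sites only, without any spatial interpolation step.

```python
import numpy as np

def fuse(model, obs, var_model, var_obs):
    """Precision-weighted (Kalman-style) fusion of a model field with
    point observations; both inputs are treated as noisy estimates."""
    gain = var_model / (var_model + var_obs)      # Kalman gain
    return model + gain * (obs - model), (1 - gain) * var_model

# Hypothetical ozone values (ppb) at stations: model prediction vs. monitor.
model_o3 = np.array([62.0, 55.0, 71.0])
obs_o3 = np.array([58.0, 57.0, 64.0])
fused, var = fuse(model_o3, obs_o3, var_model=25.0, var_obs=9.0)
print("fused field at stations:", fused.round(1), "posterior variance:", var)
```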

  4. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  5. The impact of applying product-modelling techniques in configurator projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Kristjansdottir, Katrin; Shafiee, Sara

    2018-01-01

    This paper aims to increase understanding of the impact of using product-modelling techniques to structure and formalise knowledge in configurator projects. Companies that provide customised products increasingly apply configurators in support of sales and design activities, reaping benefits… Though extant literature has shown the importance of formal modelling techniques, the impact of utilising these techniques remains relatively unknown. Therefore, this article studies three main areas: (1) the impact of using modelling techniques based on Unified Modelling Language (UML), in which… ability to reduce the number of product variants. This paper contributes to an increased understanding of what companies can gain from using more formalised modelling techniques in configurator projects, and under what circumstances they should be used.

  6. Biliary System Architecture: Experimental Models and Visualization Techniques

    Czech Academy of Sciences Publication Activity Database

    Sarnová, Lenka; Gregor, Martin

    2017-01-01

    Roč. 66, č. 3 (2017), s. 383-390 ISSN 0862-8408 R&D Projects: GA MŠk(CZ) LQ1604; GA ČR GA15-23858S Institutional support: RVO:68378050 Keywords : Biliary system * Mouse model * Cholestasis * Visualisation * Morphology Subject RIV: EB - Genetics ; Molecular Biology OBOR OECD: Cell biology Impact factor: 1.461, year: 2016

  7. Discovering Process Reference Models from Process Variants Using Clustering Techniques

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas

    2008-01-01

    In today's dynamic business world, success of an enterprise increasingly depends on its ability to react to changes in a quick and flexible way. In response to this need, process-aware information systems (PAIS) emerged, which support the modeling, orchestration and monitoring of business processes

  8. Suitability of sheet bending modelling techniques in CAPP applications

    NARCIS (Netherlands)

    Streppel, A.H.; de Vin, L.J.; de Vin, L.J.; Brinkman, J.; Brinkman, J.; Kals, H.J.J.

    1993-01-01

    The use of CNC machine tools, together with decreasing lot sizes and stricter tolerance prescriptions, has led to changes in sheet-metal part manufacturing. In this paper, problems introduced by the difference between the actual material behaviour and the results obtained from analytical models and

  9. Air quality modelling using chemometric techniques | Azid | Journal ...

    African Journals Online (AJOL)

    DA showed that all seven parameters (CO, O3, PM10, SO2, NOx, NO and NO2) gave the most significant variables after the stepwise backward mode. PCA identified the major source of air pollution as the combustion of fossil fuels in motor vehicles and industrial activities. The ANN model shows a better prediction compared to the…
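
    As a sketch of the PCA step, the code below builds a synthetic pollutant matrix in which a common combustion factor drives CO, NOx, NO and NO2 (and depresses O3), then checks that the first principal component loads on exactly those species. The data are fabricated for illustration and are unrelated to the study's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 500
# Synthetic hourly records for the seven monitored parameters; a common
# "combustion" factor drives CO, NOx, NO and NO2 together.
combustion = rng.gamma(2.0, 1.0, n)
data = np.column_stack([
    combustion + 0.3 * rng.standard_normal(n),     # CO
    -0.5 * combustion + rng.standard_normal(n),    # O3 (titrated by NO)
    rng.gamma(2.0, 1.0, n),                        # PM10
    rng.gamma(1.5, 1.0, n),                        # SO2
    combustion + 0.2 * rng.standard_normal(n),     # NOx
    combustion + 0.4 * rng.standard_normal(n),     # NO
    combustion + 0.3 * rng.standard_normal(n),     # NO2
])
pca = PCA(n_components=2).fit(StandardScaler().fit_transform(data))
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
print("PC1 loadings (CO, O3, PM10, SO2, NOx, NO, NO2):",
      pca.components_[0].round(2))
```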

  10. Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power…

  11. Ionospheric scintillation forecasting model based on NN-PSO technique

    Science.gov (United States)

    Sridhar, M.; Venkata Ratnam, D.; Padma Raju, K.; Sai Praharsha, D.; Saathvika, K.

    2017-09-01

    The forecasting and modeling of ionospheric scintillation effects are crucial for precise satellite positioning and navigation applications. In this paper, a Neural Network model, trained using the Particle Swarm Optimization (PSO) algorithm, has been implemented for the prediction of amplitude scintillation index (S4) observations. Global Positioning System (GPS) and Ionosonde data available at Darwin, Australia (12.4634° S, 130.8456° E) during 2013 have been considered. A correlation analysis between GPS S4 and Ionosonde drift velocities (hmF2 and foF2) data has been conducted for forecasting the S4 values. The results indicate that the forecasted S4 values closely follow the measured S4 values for both quiet and disturbed conditions. The outcome of this work will be useful for understanding ionospheric scintillation phenomena over low latitude regions.

  12. Techniques for studies of unbinned model independent CP violation

    Energy Technology Data Exchange (ETDEWEB)

    Bedford, Nicholas; Weisser, Constantin; Parkes, Chris; Gersabeck, Marco; Brodzicka, Jolanta; Chen, Shanzhen [University of Manchester (United Kingdom)

    2016-07-01

    Charge-Parity (CP) violation is a known part of the Standard Model and has been observed and measured in both the B and K meson systems. The observed levels, however, are insufficient to explain the observed matter-antimatter asymmetry in the Universe, and so other sources need to be found. One area of current investigation is the D meson system, where predicted levels of CP violation are much lower than in the B and K meson systems. This means that more sensitive methods are required when searching for CP violation in this system. Several unbinned model independent methods have been proposed for this purpose, all of which need to be optimised and their sensitivities compared.

  13. Suitability of sheet bending modelling techniques in CAPP applications

    OpenAIRE

    Streppel, A.H.; de Vin, L.J.; de Vin, L.J.; Brinkman, J.; Brinkman, J.; Kals, H.J.J.

    1993-01-01

    The use of CNC machine tools, together with decreasing lot sizes and stricter tolerance prescriptions, has led to changes in sheet-metal part manufacturing. In this paper, problems introduced by the difference between the actual material behaviour and the results obtained from analytical models and FEM simulations are discussed against the background of the required predictable accuracy in small-batch part manufacturing and FMS environments. The topics are limited to those relevant to bending...

  14. Solving microwave heating model using Hermite-Pade approximation technique

    International Nuclear Information System (INIS)

    Makinde, O.D.

    2005-11-01

    We employ the Hermite-Padé approximation method to explicitly construct the approximate solution of steady state reaction-diffusion equations with a source term that arises in modeling microwave heating in an infinite slab with isothermal walls. In particular, we consider the case where the source term decreases spatially and increases with temperature. The important properties of the temperature fields, including bifurcations and thermal criticality, are discussed. (author)
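
    Padé approximants, the building block of the Hermite-Padé method, are easy to construct numerically. The sketch below (using mpmath, and the exponential function rather than the paper's reaction-diffusion series) turns truncated Taylor coefficients into a [2/2] rational approximant.

```python
from mpmath import mp, taylor, pade, polyval, exp

mp.dps = 15
# Taylor coefficients of exp at 0, then the [2/2] Pade approximant.
a = taylor(exp, 0, 4)
p, q = pade(a, 2, 2)

x = 1.0
approx = polyval(p[::-1], x) / polyval(q[::-1], x)
# The [2/2] approximant gives about 2.7143 against exp(1) = 2.71828...
print(approx, exp(x))
```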

  15. PLATO: PSF modelling using a micro-scanning technique

    Directory of Open Access Journals (Sweden)

    Ouazzani R-M.

    2015-01-01

    The PLATO space mission is designed to detect telluric planets in the habitable zone of solar type stars, and simultaneously characterise the host star using ultra high precision photometry. The photometry will be performed on board using weighted masks. However, to reach the required precision, corrections will have to be performed by the ground segment and will rely on precise knowledge of the instrument PSF (Point Spread Function). We here propose to model the PSF using a micro-scanning method.

  16. Modern modelling techniques are data hungry: a simulation study for predicting dichotomous endpoints.

    Science.gov (United States)

    van der Ploeg, Tjeerd; Austin, Peter C; Steyerberg, Ewout W

    2014-12-22

    Modern modelling techniques may potentially provide more accurate predictions of binary outcomes than classical techniques. We aimed to study the predictive performance of different modelling techniques in relation to the effective sample size ("data hungriness"). We performed simulation studies based on three clinical cohorts: 1282 patients with head and neck cancer (with 46.9% 5 year survival), 1731 patients with traumatic brain injury (22.3% 6 month mortality) and 3181 patients with minor head injury (7.6% with CT scan abnormalities). We compared three relatively modern modelling techniques: support vector machines (SVM), neural nets (NN), and random forests (RF), and two classical techniques: logistic regression (LR) and classification and regression trees (CART). We created three large artificial databases with 20 fold, 10 fold and 6 fold replication of subjects, where we generated dichotomous outcomes according to different underlying models. We applied each modelling technique to increasingly larger development parts (100 repetitions). The area under the ROC curve (AUC) indicated the performance of each model in the development part and in an independent validation part. Data hungriness was defined by plateauing of the AUC and small optimism (the difference between the mean apparent AUC and the mean validated AUC). The RF, SVM and NN models showed instability and high optimism even with >200 events per variable. Modern modelling techniques such as SVM, NN and RF may need over 10 times as many events per variable as classical modelling techniques such as LR to achieve a stable AUC and a small optimism. This implies that such modern techniques should only be used in medical prediction problems if very large data sets are available.
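
    The notion of optimism used here, the apparent AUC minus the validated AUC, can be reproduced in a few lines. The sketch below compares logistic regression with a random forest on synthetic data over growing development sets; the data and settings are illustrative, not the clinical cohorts of the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=22000, n_features=10, random_state=5)
X_val, y_val = X[20000:], y[20000:]            # independent validation part

for n in (200, 1000, 5000, 20000):             # growing development parts
    for model in (LogisticRegression(max_iter=1000),
                  RandomForestClassifier(n_estimators=200, random_state=5)):
        model.fit(X[:n], y[:n])
        apparent = roc_auc_score(y[:n], model.predict_proba(X[:n])[:, 1])
        validated = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print(f"n={n:5d} {type(model).__name__:24s} "
              f"apparent={apparent:.3f} validated={validated:.3f} "
              f"optimism={apparent - validated:+.3f}")
```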

  17. A titration model for evaluating calcium hydroxide removal techniques

    Directory of Open Access Journals (Sweden)

    Mark PHILLIPS

    2015-02-01

    Objective: Calcium hydroxide (Ca(OH)2) has been used in endodontics as an intracanal medicament due to its antimicrobial effects and its ability to inactivate bacterial endotoxin. The inability to totally remove this intracanal medicament from the root canal system, however, may interfere with the setting of eugenol-based sealers or inhibit bonding of resin to dentin, thus presenting clinical challenges with endodontic treatment. This study used a chemical titration method to measure residual Ca(OH)2 left after different endodontic irrigation methods. Material and Methods: Eighty-six human canine roots were prepared for obturation. Thirty teeth were filled with known but different amounts of Ca(OH)2 for 7 days, which were dissolved out and titrated to quantitate the residual Ca(OH)2 recovered from each root to produce a standard curve. Forty-eight of the remaining teeth were filled with equal amounts of Ca(OH)2 followed by gross Ca(OH)2 removal using hand files and randomized treatment with either: (1) syringe irrigation; (2) syringe irrigation with use of an apical file; (3) syringe irrigation with added 30 s of passive ultrasonic irrigation (PUI); or (4) syringe irrigation with apical file and PUI (n=12/group). Residual Ca(OH)2 was dissolved with glycerin and titrated to measure the residual Ca(OH)2 left in the root. Results: No method completely removed all residual Ca(OH)2. The addition of 30 s PUI, with or without apical file use, removed Ca(OH)2 significantly better than irrigation alone. Conclusions: This technique allowed quantification of residual Ca(OH)2. The use of PUI (with or without an apical file) resulted in significantly lower Ca(OH)2 residue compared to irrigation alone.

  18. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro

  19. Robust image modeling technique with a bioluminescence image segmentation application

    Science.gov (United States)

    Zhong, Jianghong; Wang, Ruiping; Tian, Jie

    2009-02-01

    A robust pattern classifier algorithm for the variable symmetric plane model, where the driving noise is a mixture of a Gaussian and an outlier process, is developed, and the accuracy and high-speed performance of the pattern recognition algorithm are demonstrated. Bioluminescence tomography (BLT) has recently gained wide acceptance in the field of in vivo small animal molecular imaging, so it is very important for BLT to acquire a high-precision region of interest in a bioluminescence image (BLI) in order to reduce losses caused by inaccuracy in quantitative analysis. An algorithm in this model is developed to improve operation speed; it estimates the parameters and the original image intensity simultaneously from the noise-corrupted image produced by the BLT optical hardware system. The focus pixel value is obtained from the symmetric plane according to a realistic assumption for the noise sequence in the restored image. The size of the neighborhood is adaptive and small, and the classifier function is based on statistical features. If the qualifications for the classifier are satisfied, the focus pixel intensity is set to the largest value in the neighborhood; otherwise, it is set to zero. Finally, pseudo-color is added to the segmented bioluminescence image. The whole process has been implemented on our 2D BLT optical system platform and the model is thereby validated.

  20. Advancing botnet modeling techniques for military and security simulations

    Science.gov (United States)

    Banks, Sheila B.; Stytz, Martin R.

    2011-06-01

    Simulation environments serve many purposes, but they are only as good as their content. One of the most challenging and pressing areas that call for improved content is the simulation of bot armies (botnets) and their effects upon networks and computer systems. Botnets are a new type of malware, one that is more powerful and potentially dangerous than any other type of malware. A botnet's power derives from several capabilities, including the following: 1) the botnet's capability to be controlled and directed throughout all phases of its activity, 2) a command and control structure that grows increasingly sophisticated, and 3) the ability of a bot's software to be updated at any time by the owner of the bot (a person commonly called a bot master or bot herder). Not only is a bot army powerful and agile in its technical capabilities, it can also be extremely large: it can comprise tens of thousands, if not millions, of compromised computers, or it can be as small as a few thousand targeted systems. In all botnets, the members can surreptitiously communicate with each other and their command and control centers. In sum, these capabilities allow a bot army to execute attacks that are technically sophisticated, difficult to trace, tactically agile, massive, and coordinated. To improve our understanding of their operation and potential, we believe that it is necessary to develop computer security simulations that accurately portray bot army activities, with the goal of including bot army simulations within military simulation environments. In this paper, we investigate issues that arise when simulating bot armies and propose a combination of the biologically inspired MSEIR infection spread model coupled with the jump-diffusion infection spread model to portray botnet propagation.
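
    The MSEIR structure mentioned above is a standard compartment model; a minimal deterministic sketch is given below, with the compartments reinterpreted for hosts on a network and all rates hypothetical. The jump-diffusion component the authors propose to couple with it is not included.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mseir(t, x, delta, beta, eps, gamma):
    """MSEIR compartments: M (protected hosts), S (susceptible),
    E (compromised but dormant), I (active bots), R (remediated)."""
    M, S, E, I, R = x
    N = M + S + E + I + R
    return [-delta * M,                          # protection wears off
            delta * M - beta * S * I / N,        # hosts become susceptible
            beta * S * I / N - eps * E,          # infection, then latency
            eps * E - gamma * I,                 # bots activate, then are found
            gamma * I]                           # cleaned hosts

# Hypothetical rates for a 100k-host network with 10 initial bots.
x0 = [20_000, 79_990, 0, 10, 0]
sol = solve_ivp(mseir, (0, 120), x0, args=(0.01, 0.6, 0.3, 0.1),
                dense_output=True)
t = np.linspace(0, 120, 5)
print(np.round(sol.sol(t)[3]))   # active bot count over time
```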

  1. Precision and trueness of dental models manufactured with different 3-dimensional printing techniques.

    Science.gov (United States)

    Kim, Soo-Yeon; Shin, Yoo-Seok; Jung, Hwi-Dong; Hwang, Chung-Ju; Baik, Hyoung-Seon; Cha, Jung-Yul

    2018-01-01

    In this study, we assessed the precision and trueness of dental models printed with 3-dimensional (3D) printers via different printing techniques. Digital reference models were printed 5 times using stereolithography apparatus (SLA), digital light processing (DLP), fused filament fabrication (FFF), and the PolyJet technique. The 3D printed models were scanned and evaluated for tooth, arch, and occlusion measurements. Precision and trueness were analyzed with root mean squares (RMS) for the differences in each measurement. Differences in measurement variables among the 3D printing techniques were analyzed by 1-way analysis of variance (α = 0.05). Except for the trueness of occlusion measurements, there were significant differences in all measurements among the 4 techniques. The PolyJet and DLP techniques exhibited significantly different mean RMS values of precision from the SLA (88 ± 14 μm) and FFF (99 ± 14 μm) techniques, and the mean RMS values of trueness differed significantly among the 4 techniques: SLA (107 ± 11 μm), DLP (143 ± 8 μm), FFF (188 ± 14 μm), and PolyJet (78 ± 9 μm). The PolyJet and SLA techniques exhibited significantly different mean RMS values of trueness from the DLP (469 ± 49 μm) and FFF (409 ± 36 μm) techniques. Overall, the techniques showed significant differences in the precision of all measurements and in the trueness of tooth and arch measurements. The PolyJet and DLP techniques were more precise than the FFF and SLA techniques, with the PolyJet technique having the highest accuracy. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  2. Multivariate moment closure techniques for stochastic kinetic models

    Energy Technology Data Exchange (ETDEWEB)

    Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H., E-mail: m.stumpf@imperial.ac.uk [Department of Life Sciences, Centre for Integrative Systems Biology and Bioinformatics, Imperial College London, London SW7 2AZ (United Kingdom)

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, it comes to an interplay between the nonlinearities and the stochastic dynamics, which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to the previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinases signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.

  3. Using simulation models to evaluate ape nest survey techniques.

    Directory of Open Access Journals (Sweden)

    Ryan H Boyko

    BACKGROUND: Conservationists frequently use nest count surveys to estimate great ape population densities, yet the accuracy and precision of the resulting estimates are difficult to assess. METHODOLOGY/PRINCIPAL FINDINGS: We used mathematical simulations to model nest building behavior in an orangutan population to compare the quality of the population size estimates produced by two of the commonly used nest count methods, the 'marked recount method' and the 'matrix method.' We found that when observers missed even small proportions of nests in the first survey, the marked recount method produced large overestimates of the population size. Regardless of observer reliability, the matrix method produced substantial overestimates of the population size when surveying effort was low. With high observer reliability, both methods required surveying approximately 0.26% of the study area (0.26 km2 out of 100 km2 in this simulation) to achieve an accurate estimate of population size; at or above this sampling effort both methods produced estimates within 33% of the true population size 50% of the time. Both methods showed diminishing returns at survey efforts above 0.26% of the study area. The use of published nest decay estimates derived from other sites resulted in widely varying population size estimates that spanned nearly an entire order of magnitude. The marked recount method proved much better at detecting population declines, detecting 5% declines nearly 80% of the time even in the first year of decline. CONCLUSIONS/SIGNIFICANCE: These results highlight the fact that neither nest surveying method produces highly reliable population size estimates with any reasonable surveying effort, though either method could be used to obtain a gross population size estimate in an area. Conservation managers should determine if the quality of these estimates are worth the money and effort required to produce them, and should generally limit surveying effort to…

  4. Full Semantics Preservation in Model Transformation - A Comparison of Proof Techniques

    NARCIS (Netherlands)

    Hülsbusch, Mathias; König, Barbara; Rensink, Arend; Semenyak, Maria; Soltenborn, Christian; Wehrheim, Heike

    Model transformation is a prime technique in modern, model-driven software design. One of the most challenging issues is to show that the semantics of the models is not affected by the transformation. So far, there is hardly any research into this issue, in particular in those cases where the source

  5. Development of Reservoir Characterization Techniques and Production Models for Exploiting Naturally Fractured Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Michael L.; Brown, Raymon L.; Civan, Frauk; Hughes, Richard G.

    2001-08-15

    Research continues on characterizing and modeling the behavior of naturally fractured reservoir systems. Work has progressed on developing techniques for estimating fracture properties from seismic and well log data, developing naturally fractured wellbore models, and developing a model to characterize the transfer of fluid from the matrix to the fracture system for use in the naturally fractured reservoir simulator.

  6. Personal recommender systems for learners in lifelong learning: requirements, techniques and model

    NARCIS (Netherlands)

    Drachsler, Hendrik; Hummel, Hans; Koper, Rob

    2007-01-01

    Drachsler, H., Hummel, H. G. K., & Koper, R. (2008). Personal recommender systems for learners in lifelong learning: requirements, techniques and model. International Journal of Learning Technology, 3(4), 404-423.

  7. Prediction of intracranial findings on CT-scans by alternative modelling techniques

    NARCIS (Netherlands)

    T. van der Ploeg (Tjeerd); M. Smits (Marion); D.W.J. Dippel (Diederik); M.G.M. Hunink (Myriam); E.W. Steyerberg (Ewout)

    2011-01-01

    Background: Prediction rules for intracranial traumatic findings in patients with minor head injury are designed to reduce the use of computed tomography (CT) without missing patients at risk for complications. This study investigates whether alternative modelling techniques might…

  8. A Shell/3D Modeling Technique for the Analysis of Delaminated Composite Laminates

    Science.gov (United States)

    Krueger, Ronald; OBrien, T. Kevin

    2000-01-01

    A shell/3D modeling technique was developed for which a local solid finite element model is used only in the immediate vicinity of the delamination front. The goal was to combine the accuracy of the full three-dimensional solution with the computational efficiency of a shell finite element model. Multi-point constraints provided a kinematically compatible interface between the local 3D model and the global structural model which has been meshed with shell finite elements. Double Cantilever Beam, End Notched Flexure, and Single Leg Bending specimens were analyzed first using full 3D finite element models to obtain reference solutions. Mixed mode strain energy release rate distributions were computed using the virtual crack closure technique. The analyses were repeated using the shell/3D technique to study the feasibility for pure mode I, mode II and mixed mode I/II cases. Specimens with a unidirectional layup and with a multidirectional layup were simulated. For a local 3D model, extending to a minimum of about three specimen thicknesses on either side of the delamination front, the results were in good agreement with mixed mode strain energy release rates obtained from computations where the entire specimen had been modeled with solid elements. For large built-up composite structures the shell/3D modeling technique offers a great potential for reducing the model size, since only a relatively small section in the vicinity of the delamination front needs to be modeled with solid elements.
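
    At its core, the virtual crack closure technique computes each mode's energy release rate from the product of a crack-tip nodal force and the relative displacement of the node pair behind the tip. A minimal 2D sketch follows, with hypothetical nodal values; in real use these quantities would be read from the finite element results.

```python
def vcct_2d(Fx, Fy, du, dv, da, width=1.0):
    """Virtual crack closure technique for 4-noded 2D elements.

    Fx, Fy: sliding/opening nodal forces at the crack tip; du, dv:
    relative sliding/opening displacements of the node pair behind the
    tip; da: crack-tip element length.  Returns (G_II, G_I) per unit width.
    """
    G2 = Fx * du / (2.0 * da * width)   # mode II (sliding)
    G1 = Fy * dv / (2.0 * da * width)   # mode I (opening)
    return G2, G1

# Hypothetical nodal results from a DCB-like model (N, mm -> N/mm = kJ/m^2).
G2, G1 = vcct_2d(Fx=2.0, Fy=35.0, du=0.001, dv=0.02, da=0.5)
print(f"G_I = {G1:.3f}, G_II = {G2:.3f}, "
      f"mode mix G_II/G_T = {G2 / (G1 + G2):.3f}")
```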

  9. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lubna Moin

    2009-04-01

    This research paper explores and compares different modeling and analysis techniques, and then examines the model-order reduction approach and its significance. The traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. They are also used for analyzing linear and non-linear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a unique technique for generating a genetic design from the tree-structured transfer function obtained from a Bond Graph. This research work combines Bond Graphs for model representation with Genetic Programming for exploring different ideas in the design space. Tree-structured transfer functions result from replacing typical Bond Graph elements with their impedance equivalents, specifying impedance laws for Bond Graph multiports; the tree-structured form thus obtained from the Bond Graph is used to generate the genetic tree. Application studies will identify key issues and their importance for advancing this approach towards becoming an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique, its system response and transfer function obtained with the conventional and Bond Graph methods are analyzed, and an approach towards model-order reduction is then explored. The suggested algorithm and other known modern model-order reduction techniques are applied, with different approaches, to an 11th-order high-pass filter [1]. The model-order reduction technique developed in this paper has the least reduction errors and, moreover, the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and the Bond Graph method are compared and…

  10. Propulsion modeling techniques and applications for the NASA Dryden X-30 real-time simulator

    Science.gov (United States)

    Hicks, John W.

    1991-01-01

    An overview is given of the flight planning activities to date in the current National Aero-Space Plane (NASP) program. The government flight-envelope expansion concept and other design flight operational assessments are discussed. The NASA Dryden NASP real-time simulator configuration is examined and hypersonic flight planning simulation propulsion modeling requirements are described. The major propulsion modeling techniques developed by the Edwards flight test team are outlined, and the application value of techniques for developmental hypersonic vehicles are discussed.

  11. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of, and interaction between, normal and sterile E. saccharina moths in a temporally variable, but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified, and according to which E. saccharina infestation levels and the associated sugarcane damage may be measured. Although many models have been formulated in the past describing the sterile insect technique, few of these models describe the technique for Lepidopteran species with more than one life stage and where F1-sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E. saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
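
    A toy version of such a difference-equation model is sketched below. It keeps only the essentials: logistic growth of the wild population, releases proportional to the wild population, and a fertile-mating probability diluted by competitive sterile males. The parameter values and single-life-stage structure are illustrative simplifications, not the published model.

```python
import numpy as np

def simulate(generations, R=5.0, K=1_000_000, release_ratio=10.0, s=0.8):
    """Toy difference-equation model of suppression by sterile releases.

    The wild population N grows logistically with rate R; each generation,
    sterile males S are released at `release_ratio` times the wild
    population, and a wild female mates fertilely with probability
    N / (N + s*S), where s is the relative competitiveness of sterile males.
    """
    N = 100_000.0
    history = [N]
    for _ in range(generations):
        S = release_ratio * N               # release sized to wild population
        fertile = N / (N + s * S)           # chance of a fertile mating
        N = R * N * fertile * max(0.0, 1 - N / K)
        history.append(N)
    return np.array(history)

print(np.round(simulate(10)))   # wild population collapsing over generations
```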

  12. Validation of a musculoskeletal model of lifting and its application for biomechanical evaluation of lifting techniques.

    Science.gov (United States)

    Mirakhorlo, Mojtaba; Azghani, Mahmood Reza; Kahrizi, Sedighe

    2014-01-01

    Lifting methods, including standing stance and technique, have broad effects on spine loading and stability. Previous studies explored lifting techniques in many biomechanical terms and documented changes in the muscular and postural responses of the body as a function of technique. However, the impact of standing stance and lifting technique on the human musculoskeletal system had not been investigated concurrently. A whole body musculoskeletal model of lifting was built in order to evaluate the impact of standing stance on muscle activation patterns and spine loading during each distinctive lifting technique. The verified model was used with different stance widths during squat, stoop and semi-squat lifting to examine the effect of standing stance on each lifting technique. The model's muscle activity was validated against experimental muscle EMGs, resulting in Pearson's coefficients greater than 0.8. Results from the analytical analyses show that the effect of stance width on biomechanical parameters depends on the lifting technique, that is, on what kind of standing stance was used. Standing stance in each distinctive lifting technique exhibits positive and negative aspects, and neither one can be recommended as better in terms of biomechanical parameters.

  13. New sunshine-based models for predicting global solar radiation using PSO (particle swarm optimization) technique

    International Nuclear Information System (INIS)

    Behrang, M.A.; Assareh, E.; Noghrehabadi, A.R.; Ghanbarzadeh, A.

    2011-01-01

    PSO (particle swarm optimization) technique is applied to estimate monthly average daily GSR (global solar radiation) on a horizontal surface for different regions of Iran. To achieve this, five new models were developed and six models were chosen from the literature. First, for each city, the empirical coefficients of all models were determined separately using the PSO technique. The results indicate that the new models presented in this study perform better than the existing models in the literature for 10 of the 17 cities considered. It is also shown that the empirical coefficients found for a given latitude can be generalized to estimate solar radiation in cities at similar latitudes. Some case studies are presented to demonstrate this generalization, with the results showing good agreement with the measurements. More importantly, these case studies further validate the models developed and demonstrate their general applicability. Finally, the results obtained with the PSO technique were compared with those of SRTs (statistical regression techniques) on the Angstrom model for all 17 cities. The results showed that the empirical coefficients obtained for the Angstrom model based on PSO are more accurate than those of the SRTs for all 17 cities. -- Highlights: → The first study to apply an intelligent optimization technique to more accurately determine empirical coefficients in solar radiation models. → The new models presented in this study perform better than existing models. → The empirical coefficients found for a given latitude can be generalized to estimate solar radiation in cities at similar latitude. → A fair comparison between the performance of PSO and SRTs on GSR modeling.
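    The sketch below shows how a PSO search can determine the empirical coefficients of the classical Angstrom model, H/H0 = a + b·(S/S0); the synthetic data, swarm settings and RMSE objective are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly data: sunshine fraction S/S0 and clearness index H/H0
s = rng.uniform(0.3, 0.9, 12)
h = 0.25 + 0.50 * s + rng.normal(0, 0.02, 12)   # synthetic "measurements"

def rmse(p):
    a, b = p
    return np.sqrt(np.mean((a + b * s - h) ** 2))

# Minimal PSO: 30 particles searching for the Angstrom coefficients (a, b)
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
x = rng.uniform(0, 1, (n, 2))          # particle positions
v = np.zeros((n, 2))                   # particle velocities
pbest, pbest_f = x.copy(), np.array([rmse(p) for p in x])
gbest = pbest[pbest_f.argmin()]

for _ in range(200):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([rmse(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print("fitted Angstrom coefficients a, b =", gbest)
```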

  14. Modelling of ground penetrating radar data in stratified media using the reflectivity technique

    International Nuclear Information System (INIS)

    Sena, Armando R; Sen, Mrinal K; Stoffa, Paul L

    2008-01-01

    Horizontally layered media are often encountered in shallow exploration geophysics. Ground penetrating radar (GPR) data in these environments can be modelled by techniques that are more efficient than finite difference (FD) or finite element (FE) schemes because the lateral homogeneity of the media allows us to reduce the dependence on the horizontal spatial variables through Fourier transforms on these coordinates. We adapt and implement the invariant embedding or reflectivity technique used to model elastic waves in layered media to model GPR data. The results obtained with the reflectivity and FDTD modelling techniques are in excellent agreement and the effects of the air–soil interface on the radiation pattern are correctly taken into account by the reflectivity technique. Comparison with real wide-angle GPR data shows that the reflectivity technique can satisfactorily reproduce the real GPR data. These results and the computationally efficient characteristics of the reflectivity technique (compared to FD or FE) demonstrate its usefulness in interpretation and possible model-based inversion schemes of GPR data in stratified media.
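    The core recursion of the reflectivity (invariant embedding) idea can be sketched for scalar plane waves at normal incidence, as below; a full GPR implementation works per frequency and horizontal wavenumber, and all layer values here are hypothetical.

```python
import numpy as np

# Reflectivity-style recursion for a stack of horizontal layers at normal
# incidence (scalar plane waves, lossless non-magnetic layers assumed).
eps_r = [1.0, 4.0, 9.0, 16.0]      # relative permittivities (air, soils, ...)
d = [0.5, 0.3]                     # thicknesses of the interior layers [m]
c0 = 3e8
f = 100e6                          # 100 MHz GPR frequency
omega = 2 * np.pi * f

n = np.sqrt(np.array(eps_r))       # refractive indices
k = omega * n / c0                 # wavenumbers in each layer

# Recurse upward from the deepest interface (half-space below reflects nothing)
R = (n[-2] - n[-1]) / (n[-2] + n[-1])
for j in range(len(eps_r) - 3, -1, -1):
    r = (n[j] - n[j + 1]) / (n[j] + n[j + 1])   # interface reflection coeff.
    phase = np.exp(2j * k[j + 1] * d[j])        # two-way phase across layer j+1
    R = (r + R * phase) / (1 + r * R * phase)

print("total reflection response at the surface:", R)
```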

  15. Modelling techniques for predicting the long term consequences of radiation on natural aquatic populations

    International Nuclear Information System (INIS)

    Wallis, I.G.

    1978-01-01

    The purpose of this working paper is to describe modelling techniques for predicting the long term consequences of radiation on natural aquatic populations. Ideally, it would be possible to use aquatic population models: (1) to predict changes in the health and well-being of all aquatic populations as a result of changing the composition, amount and location of radionuclide discharges; (2) to compare the effects of steady, fluctuating and accidental releases of radionuclides; and (3) to evaluate the combined impact of the discharge of radionuclides and other wastes, and natural environmental stresses on aquatic populations. At the outset it should be stated that there is no existing model which can achieve this ideal performance. However, modelling skills and techniques are available to develop useful aquatic population models. This paper discusses the considerations involved in developing these models and briefly describes the various types of population models which have been developed to date.

  16. Large wind power plants modeling techniques for power system simulation studies

    Energy Technology Data Exchange (ETDEWEB)

    Larose, Christian; Gagnon, Richard; Turmel, Gilbert; Giroux, Pierre; Brochu, Jacques [IREQ Hydro-Quebec Research Institute, Varennes, QC (Canada); McNabb, Danielle; Lefebvre, Daniel [Hydro-Quebec TransEnergie, Montreal, QC (Canada)

    2009-07-01

    This paper presents efficient modeling techniques for the simulation of large wind power plants in the EMT domain using a parallel supercomputer. Using these techniques, large wind power plants can be simulated in detail, with each wind turbine individually represented, as well as the collector and receiving network. The simulation speed of the resulting models is fast enough to perform both EMT and transient stability studies. The techniques are applied to develop a detailed EMT model of a generic wind power plant consisting of 73 x 1.5-MW doubly-fed induction generator (DFIG) wind turbines. Validation of the modeling techniques is presented using a comparison with a Matlab/SimPowerSystems simulation. To demonstrate the simulation capabilities using these modeling techniques, simulations involving a 120-bus receiving network with two generic wind power plants (146 wind turbines) are performed. The complete system is modeled using the Hypersim simulator and Matlab/SimPowerSystems. The simulations are performed on a 32-processor supercomputer using an EMTP-like solution with a time step of 18.4 {mu}s. The simulation runs 10 times slower than real time, which is a huge gain in performance compared to traditional tools. The simulation is designed to run continuously without stopping, resulting in the capability to perform thousands of tests via automatic testing tools. (orig.)

  17. Real-time reservoir geological model updating using the hybrid EnKF and geostatistical technique

    Energy Technology Data Exchange (ETDEWEB)

    Li, H.; Chen, S.; Yang, D. [Regina Univ., SK (Canada). Petroleum Technology Research Centre

    2008-07-01

    Reservoir simulation plays an important role in modern reservoir management. Multiple geological models are needed in order to analyze the uncertainty of a given reservoir development scenario. Ideally, dynamic data should be incorporated into a reservoir geological model. This can be done by using history matching, tuning the model to match the past performance of the reservoir history. This study proposed an assisted history matching technique to accelerate and improve the matching process. The Ensemble Kalman Filter (EnKF) technique, which is an efficient assisted history matching method, was integrated with a conditional geostatistical simulation technique to dynamically update reservoir geological models. The updated models were constrained to dynamic data, such as reservoir pressure and fluid saturations, and remain geologically realistic at each time step through the EnKF technique. The new technique was successfully applied in a heterogeneous synthetic reservoir. The uncertainty of the reservoir characterization was significantly reduced. More accurate forecasts were obtained from the updated models. 3 refs., 2 figs.
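    A minimal sketch of the EnKF analysis step that conditions an ensemble of reservoir models to dynamic data is given below; the dimensions, observation operator and noise levels are hypothetical, and practical implementations add localization and the geostatistical conditioning described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ensemble of reservoir model states (e.g. log-permeability per grid cell)
n_state, n_obs, n_ens = 100, 5, 40
X = rng.normal(5.0, 1.0, (n_state, n_ens))
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), [3, 17, 42, 63, 88]] = 1.0   # observe 5 cells (illustrative)
d = rng.normal(6.0, 0.1, n_obs)                  # observed data (e.g. pressures)
R = 0.1**2 * np.eye(n_obs)                       # observation error covariance

# Ensemble covariance and Kalman gain
A = X - X.mean(axis=1, keepdims=True)
P = A @ A.T / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Update each member against perturbed observations
D = d[:, None] + rng.normal(0, 0.1, (n_obs, n_ens))
X_a = X + K @ (D - H @ X)
print("ensemble mean at observed cells:", (H @ X_a).mean(axis=1))
```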

  18. Uncertainty analysis in rainfall-runoff modelling : Application of machine learning techniques

    NARCIS (Netherlands)

    Shrestha, D.l.

    2009-01-01

    This thesis presents powerful machine learning (ML) techniques to build predictive models of uncertainty with application to hydrological models. Two different methods are developed and tested. First one focuses on parameter uncertainty analysis by emulating the results of Monte Carlo simulations of

  19. Uncertainty Analysis in Rainfall-Runoff Modelling: Application of Machine Learning Techniques

    NARCIS (Netherlands)

    Shrestha, D.L.

    2009-01-01

    This thesis presents powerful machine learning (ML) techniques to build predictive models of uncertainty with application to hydrological models. Two different methods are developed and tested. First one focuses on parameter uncertainty analysis by emulating the results of Monte Carlo simulations of

  20. Using Game Theory Techniques and Concepts to Develop Proprietary Models for Use in Intelligent Games

    Science.gov (United States)

    Christopher, Timothy Van

    2011-01-01

    This work is about analyzing games as models of systems. The goal is to understand the techniques that have been used by game designers in the past, and to compare them to the study of mathematical game theory. Through the study of a system or concept a model often emerges that can effectively educate students about making intelligent decisions…

  1. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

    Full Text Available Soft computing techniques such as Artificial Neural Networks (ANN) have improved predicting capability and have found application in geotechnical engineering. The aim of this research is to utilize soft computing techniques and Multiple Regression Models (MLR) for forecasting the California Bearing Ratio (CBR) of soil from its index properties. The CBR of a soil can be predicted from various soil-characterizing parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in Basrah districts. Data gained from the experimental results were used in the regression models and in the soft computing technique using artificial neural networks. The liquid limit, plasticity index, modified compaction test and the CBR test were determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the developed models were examined in terms of the regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was noticed that all the proposed ANN models perform better than the MLR model. In particular, the ANN model with all input parameters reveals better outcomes than the other ANN models.
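    The MLR-versus-ANN comparison can be sketched with scikit-learn as below; the synthetic data, the assumed relationship between index properties and CBR, and the network size are all illustrative, and a real study would scale the inputs and use a train/test split.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(2)

# Synthetic stand-in for the 86-sample database (hypothetical relationship):
# columns are liquid limit, plasticity index and maximum dry density.
X = np.column_stack([rng.uniform(25, 60, 86),
                     rng.uniform(5, 30, 86),
                     rng.uniform(1.6, 2.1, 86)])
cbr = 40 - 0.3 * X[:, 0] - 0.5 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 1, 86)

for name, model in [("MLR", LinearRegression()),
                    ("ANN", MLPRegressor(hidden_layer_sizes=(8,),
                                         max_iter=5000, random_state=0))]:
    pred = model.fit(X, cbr).predict(X)   # fit on all data: demo only
    print(f"{name}: R2 = {r2_score(cbr, pred):.3f}, "
          f"MSE = {mean_squared_error(cbr, pred):.3f}")
```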

  2. Application of separable parameter space techniques to multi-tracer PET compartment modeling.

    Science.gov (United States)

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-02-07

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
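    The separable (variable projection) idea can be illustrated on a toy two-exponential model that is linear in the amplitudes and nonlinear in the rate constants; the data, basis functions and optimizer below are illustrative assumptions, not the paper's multi-tracer compartment equations.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Toy time-activity curve: linear in amplitudes a, nonlinear in rates k
t = np.linspace(0.1, 60, 120)
y = 3.0 * np.exp(-0.05 * t) + 1.5 * np.exp(-0.5 * t) + rng.normal(0, 0.02, len(t))

def residual(k):
    # Basis functions for the current nonlinear parameters
    B = np.exp(-np.outer(t, k))                 # shape (len(t), 2)
    a, *_ = np.linalg.lstsq(B, y, rcond=None)   # linear subproblem, closed form
    return np.sum((B @ a - y) ** 2)

# The nonlinear search is now only 2-dimensional instead of 4-dimensional
res = minimize(residual, x0=[0.1, 1.0], method="Nelder-Mead")
print("estimated rate constants:", res.x)
```

    Reducing the search to the nonlinear parameters alone is what makes exhaustive search over the remaining dimensions feasible, which is how the paper obtains fits guaranteed to reach the global minimum to within the search precision.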

  3. Generation of 3-D finite element models of restored human teeth using micro-CT techniques.

    NARCIS (Netherlands)

    Verdonschot, N.J.J.; Fennis, W.M.M.; Kuys, R.H.; Stolk, J.; Kreulen, C.M.; Creugers, N.H.J.

    2001-01-01

    PURPOSE: This article describes the development of a three-dimensional finite element model of a premolar based on a microscale computed tomographic (CT) data-acquisition technique. The development of the model is part of a project studying the optimal design and geometry of adhesive tooth-colored

  4. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    Science.gov (United States)

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.

  5. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    International Nuclear Information System (INIS)

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. (paper)

  6. Techniques to extract physical modes in model-independent analysis of rings

    International Nuclear Information System (INIS)

    Wang, C.-X.

    2004-01-01

    A basic goal of Model-Independent Analysis is to extract the physical modes underlying the beam histories collected at a large number of beam position monitors so that beam dynamics and machine properties can be deduced independent of specific machine models. Here we discuss techniques to achieve this goal, especially the Principal Component Analysis and the Independent Component Analysis.
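    A minimal sketch of extracting a betatron mode pair from BPM histories with PCA (via the SVD) is shown below; the synthetic lattice, tune and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Rows: turns; columns: beam position monitors (synthetic betatron data)
n_turns, n_bpm = 1024, 40
phase = np.linspace(0, 2 * np.pi, n_bpm, endpoint=False)
turn = np.arange(n_turns)
tune = 0.31
# A betatron oscillation appears as a pair of (cosine, sine) spatial modes
B = (np.outer(np.cos(2 * np.pi * tune * turn), np.cos(phase))
     + np.outer(np.sin(2 * np.pi * tune * turn), np.sin(phase))
     + 0.01 * rng.normal(size=(n_turns, n_bpm)))      # BPM noise

B -= B.mean(axis=0)                 # remove the closed orbit
U, s, Vt = np.linalg.svd(B, full_matrices=False)
print("leading singular values:", s[:4])   # two dominant values = one mode pair
mode1, mode2 = Vt[0], Vt[1]                # spatial patterns of the mode pair
```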

  7. A fully blanketed early B star LTE model atmosphere using an opacity sampling technique

    International Nuclear Information System (INIS)

    Phillips, A.P.; Wright, S.L.

    1980-01-01

    A fully blanketed LTE model of a stellar atmosphere with T_e = 21914 K (θ_e = 0.23), log g = 4 is presented. The model includes an explicit representation of the opacity due to the strongest lines, and uses a statistical opacity sampling technique to represent the weaker line opacity. The sampling technique is subjected to several tests and the model is compared with an atmosphere calculated using the line-distribution function method. The limitations of the distribution function method and the particular opacity sampling method used here are discussed in the light of the results obtained. (author)
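    The opacity sampling idea, estimating a frequency-averaged line opacity from a random subset of frequencies instead of the full grid, can be sketched as below with entirely hypothetical line data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy illustration of opacity sampling (all values hypothetical)
nu = np.linspace(1.0, 2.0, 20_000)          # fine frequency grid
lines = rng.uniform(1.0, 2.0, 300)          # line-centre frequencies

def opacity(v):
    # Sum of Lorentzian-like line profiles on top of a flat continuum
    return 1.0 + np.sum(0.01 / ((v[:, None] - lines) ** 2 + 1e-6), axis=1)

full = opacity(nu).mean()                        # "exact" mean over the grid
sample = opacity(rng.choice(nu, 2_000)).mean()   # opacity-sampling estimate
print(f"full grid: {full:.3f}   sampled: {sample:.3f}")
```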

  8. Modeling techniques used in the communications link analysis and simulation system (CLASS)

    Science.gov (United States)

    Braun, W. R.; Mckenzie, T. M.

    1985-01-01

    CLASS (Communications Link Analysis and Simulation System) is a software package developed for NASA to predict the communication and tracking performance of the Tracking and Data Relay Satellite System (TDRSS) services. The modeling techniques used in CLASS are described. The components of TDRSS and the performance parameters to be computed by CLASS are too diverse to permit the use of a single technique to evaluate all performance measures. Hence, each CLASS module applies the modeling approach best suited for a particular subsystem and/or performance parameter in terms of model accuracy and computational speed.

  9. Comparison of bag-valve-mask hand-sealing techniques in a simulated model.

    Science.gov (United States)

    Otten, David; Liao, Michael M; Wolken, Robert; Douglas, Ivor S; Mishra, Ramya; Kao, Amanda; Barrett, Whitney; Drasler, Erin; Byyny, Richard L; Haukoos, Jason S

    2014-01-01

    Bag-valve-mask ventilation remains an essential component of airway management. Rescuers continue to use both traditional 1- or 2-handed mask-face sealing techniques, as well as a newer modified 2-handed technique. We compare the efficacy of the 1-handed, 2-handed, and modified 2-handed bag-valve-mask techniques. In this prospective, crossover study, health care providers performed 1-handed, 2-handed, and modified 2-handed bag-valve-mask ventilation on a standardized ventilation model. Subjects performed each technique for 5 minutes, with 3 minutes' rest between techniques. The primary outcome was expired tidal volume, defined as the percentage of the total possible expired tidal volume during a 5-minute bout. A specialized inline monitor measured expired tidal volume. We compared the 2-handed versus modified 2-handed and the 2-handed versus 1-handed techniques. We enrolled 52 subjects: 28 (54%) men, 32 (62%) with greater than or equal to 5 actual emergency bag-valve-mask situations. The median expired tidal volume percentage for the 1-handed technique was 31% (95% confidence interval [CI] 17% to 51%); for the 2-handed technique, 85% (95% CI 78% to 91%); and for the modified 2-handed technique, 85% (95% CI 82% to 90%). Both the 2-handed (median difference 47%; 95% CI 34% to 62%) and modified 2-handed techniques (median difference 56%; 95% CI 29% to 65%) resulted in significantly higher median expired tidal volume percentages compared with the 1-handed technique. The median expired tidal volume percentages between the 2-handed and modified 2-handed techniques did not significantly differ from each other (median difference 0; 95% CI -2% to 2%). In a simulated model, both 2-handed mask-face sealing techniques resulted in higher ventilatory tidal volumes than the 1-handed technique. Tidal volumes from the 2-handed and modified 2-handed techniques did not differ. Rescuers should perform bag-valve-mask ventilation with 2-handed techniques. Copyright © 2013 American College of Emergency Physicians. Published by Mosby

  10. A novel model surgery technique for LeFort III advancement.

    Science.gov (United States)

    Vachiramon, Amornpong; Yen, Stephen L-K; Lypka, Michael; Bindignavale, Vijay; Hammoudeh, Jeffrey; Reinisch, John; Urata, Mark M

    2007-09-01

    Current techniques for model surgery and occlusal splint fabrication lack the ability to mark, measure and plan the position of the orbital rim for LeFort III and Monobloc osteotomies. This report describes a model surgery technique for planning the three dimensional repositioning of the orbital rims. Dual orbital pointers were used to mark the infraorbital rim during the facebow transfer. These pointer positions were transferred onto the surgical models in order to follow splint-determined movements. Case reports are presented to illustrate how the model surgery technique was used to differentiate the repositioning of the orbital rim from the occlusal correction in single segment and combined LeFort III/LeFort I osteotomies.

  11. The Effect of Learning Based on Technology Model and Assessment Technique toward Thermodynamic Learning Achievement

    Science.gov (United States)

    Makahinda, T.

    2018-02-01

    The purpose of this research is to determine the effect of a technology-based learning model and assessment technique on thermodynamics learning achievement, controlling for students' intelligence. This research is an experimental study. The sample was taken through cluster random sampling with a total of 80 student respondents. The results show that the thermodynamics learning achievement of students taught with the environment-utilization learning model is higher than that of students taught with animated simulations, after controlling for student intelligence. There is an interaction effect between the technology-based learning model and the assessment technique on students' thermodynamics learning achievement, after controlling for student intelligence. Based on these findings, thermodynamics lectures should use the environment-utilization learning model together with project assessment techniques.

  12. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    Full Text Available In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. The Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike the two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and, if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi-linear data, which cover a wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it

  13. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Science.gov (United States)

    Elshorbagy, A.; Corzo, G.; Srinivasulu, S.; Solomatine, D. P.

    2010-10-01

    In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), Support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. The Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike the two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi-linear data, which cover a wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it should

  14. Towards a Business Process Modeling Technique for Agile Development of Case Management Systems

    Directory of Open Access Journals (Sweden)

    Ilia Bider

    2017-12-01

    Full Text Available A modern organization needs to adapt its behavior to changes in the business environment by changing its Business Processes (BP) and the corresponding Business Process Support (BPS) systems. One way of achieving such adaptability is via separation of the system code from the process description/model by applying the concept of executable process models. Furthermore, to ease the introduction of changes, such a process model should separate different perspectives, for example, the control-flow, human resources, and data perspectives, from each other. In addition, for developing a completely new process, it should be possible to start with a reduced process model to get a BPS system quickly running, and then continue to develop it in an agile manner. This article consists of two parts; the first sets requirements on modeling techniques that could be used in tools that support agile development of BPs and BPS systems. The second part suggests a business process modeling technique that allows modeling to start with the data/information perspective, which would be appropriate for processes supported by Case or Adaptive Case Management (CM/ACM) systems. In a model produced by this technique, called a data-centric business process model, a process instance/case is defined as a sequence of states in a specially designed instance database, while the process model is defined as a set of rules that set restrictions on allowed states and transitions between them. The article details the background of the project of developing the data-centric process modeling technique, presents the outline of the structure of the model, and gives formal definitions for a substantial part of the model.

  15. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection to the fail-safe generation process. • With the integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes of unavailability according to the variation of diverse factors. - Abstract: With the improvement of digital technologies, a digital protection system (DPS) has multiple sophisticated fault-tolerant techniques (FTTs), in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of an FTT in reliability. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, an integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to the fail-safe generation process. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability.
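    As a rough, hypothetical illustration of how a fault coverage parameter can enter component unavailability, the sketch below splits faults into a covered part repaired after prompt detection and an uncovered part that stays latent until a periodic test; the formula and numbers are standard PSA-style approximations, not the paper's more detailed model.

```python
# Hypothetical unavailability sketch with a fault coverage parameter c
lam = 1e-5        # failure rate [1/h]
c = 0.95          # integrated fault coverage (detection through fail-safe)
t_test = 720.0    # periodic test interval [h] for uncovered (latent) faults
mttr = 8.0        # mean time to repair a detected fault [h]

# Covered faults are detected promptly and repaired; uncovered faults remain
# latent on average for half the test interval before being revealed.
q_covered = c * lam * mttr
q_uncovered = (1 - c) * lam * t_test / 2
print("approximate unavailability:", q_covered + q_uncovered)
```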

  16. Two-dimensional gel electrophoresis image registration using block-matching techniques and deformation models.

    Science.gov (United States)

    Rodriguez, Alvaro; Fernandez-Lozano, Carlos; Dorado, Julian; Rabuñal, Juan R

    2014-06-01

    Block-matching techniques have been widely used in the task of estimating displacement in medical images, and they represent the best approach in scenes with deformable structures such as tissues, fluids, and gels. In this article, a new iterative block-matching technique, based on successive deformation, search, fitting, filtering, and interpolation stages, is proposed to measure elastic displacements in two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) images. The proposed technique uses different deformation models in the task of correlating proteins in real 2D electrophoresis gel images, obtaining an accuracy of 96.6% and improving the results obtained with other techniques. This technique represents a general solution, being easy to adapt to different 2D deformable cases and providing an experimental reference for block-matching algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
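    The basic block-matching step underlying such techniques can be sketched as an exhaustive sum-of-squared-differences (SSD) search, as below; real 2D-PAGE registration adds the deformation, fitting, filtering and interpolation stages on top of this, and the images here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two synthetic images related by a known integer shift (for the demo)
img1 = rng.normal(size=(128, 128))
dy, dx = 3, -2
img2 = np.roll(np.roll(img1, dy, axis=0), dx, axis=1)

# Exhaustive SSD search for one block over a small displacement window
y0, x0, b, w = 60, 60, 16, 6          # block corner, block size, search radius
block = img1[y0:y0 + b, x0:x0 + b]
best, best_ssd = (0, 0), np.inf
for u in range(-w, w + 1):
    for v in range(-w, w + 1):
        cand = img2[y0 + u:y0 + u + b, x0 + v:x0 + v + b]
        ssd = np.sum((block - cand) ** 2)
        if ssd < best_ssd:
            best, best_ssd = (u, v), ssd
print("estimated displacement (dy, dx):", best)   # expect (3, -2)
```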

  17. A Shell/3D Modeling Technique for the Analyses of Delaminated Composite Laminates

    Science.gov (United States)

    Krueger, Ronald; OBrien, T. Kevin

    2001-01-01

    A shell/3D modeling technique was developed for which a local three-dimensional solid finite element model is used only in the immediate vicinity of the delamination front. The goal was to combine the accuracy of the full three-dimensional solution with the computational efficiency of a plate or shell finite element model. Multi-point constraints provided a kinematically compatible interface between the local three-dimensional model and the global structural model which has been meshed with plate or shell finite elements. Double Cantilever Beam (DCB), End Notched Flexure (ENF), and Single Leg Bending (SLB) specimens were modeled using the shell/3D technique to study the feasibility for pure mode I (DCB), mode II (ENF) and mixed mode I/II (SLB) cases. Mixed mode strain energy release rate distributions were computed across the width of the specimens using the virtual crack closure technique. Specimens with a unidirectional layup and with a multidirectional layup where the delamination is located between two non-zero degree plies were simulated. For a local three-dimensional model, extending to a minimum of about three specimen thicknesses on either side of the delamination front, the results were in good agreement with mixed mode strain energy release rates obtained from computations where the entire specimen had been modeled with solid elements. For large built-up composite structures modeled with plate elements, the shell/3D modeling technique offers a great potential for reducing the model size, since only a relatively small section in the vicinity of the delamination front needs to be modeled with solid elements.

  18. Modelling techniques for underwater noise generated by tidal turbines in shallow water

    OpenAIRE

    Lloyd, Thomas P.; Turnock, Stephen R.; Humphrey, Victor F.

    2011-01-01

    The modelling of underwater noise sources and their potential impact on the marine environment is considered, focusing on tidal turbines in shallow water. The requirement for device noise prediction as part of environmental impact assessment is outlined and the limited amount of measurement data and modelling research identified. Following the identification of potential noise sources, the dominant flow-generated sources are modelled using empirical techniques. The predicted sound pressure lev...

  19. Study on ABCD Analysis Technique for Business Models, business strategies, Operating Concepts & Business Systems

    OpenAIRE

    Sreeramana Aithal

    2016-01-01

    When studying the implications of a business model, choosing success strategies, developing viable operational concepts or evolving a functional system, it is important to analyse them in all dimensions. For this purpose, various analysing techniques/frameworks are used. This paper is a discussion on how to use an innovative analysing framework called the ABCD model on a given business model, or on a business strategy or an operational concept/idea or business system. Based on four constructs Advantages...

  20. Car sharing demand estimation and urban transport demand modelling using stated preference techniques

    OpenAIRE

    Catalano, Mario; Lo Casto, Barbara; Migliore, Marco

    2008-01-01

    The research deals with the use of the stated preference technique (SP) and transport demand modelling to analyse travel mode choice behaviour for commuting urban trips in Palermo, Italy. The principal aim of the study was the calibration of a demand model to forecast the modal split of the urban transport demand, allowing for the possibility of using innovative transport systems like car sharing and car pooling. In order to estimate the demand model parameters, a specific survey was carried ...

  1. Transfer of physics detector models into CAD systems using modern techniques

    International Nuclear Information System (INIS)

    Dach, M.; Vuoskoski, J.

    1996-01-01

    Designing high energy physics detectors for future experiments requires sophisticated computer aided design and simulation tools. In order to satisfy the future demands in this domain, modern techniques, methods, and standards have to be applied. We present an interface application, designed and implemented using object-oriented techniques, for the widely used GEANT physics simulation package. It converts GEANT detector models into the future industrial standard, STEP. (orig.)

  2. Determination of Complex-Valued Parametric Model Coefficients Using Artificial Neural Network Technique

    Directory of Open Access Journals (Sweden)

    A. M. Aibinu

    2010-01-01

    Full Text Available A new approach for determining the coefficients of a complex-valued autoregressive (CAR) and complex-valued autoregressive moving average (CARMA) model using the complex-valued neural network (CVNN) technique is discussed in this paper. The CAR and complex-valued moving average (CMA) coefficients which constitute a CARMA model are computed simultaneously from the adaptive weights and coefficients of the linear activation functions in a two-layered CVNN. The performance of the proposed technique has been evaluated using simulated complex-valued data (CVD) with three different types of activation functions. The results show that the proposed method can accurately determine the model coefficients provided that the network is properly trained. Furthermore, application of the developed CVNN-based technique for MRI k-space reconstruction results in images with improved resolution.

  3. A novel CT acquisition and analysis technique for breathing motion modeling

    International Nuclear Information System (INIS)

    Low, Daniel A; White, Benjamin M; Lee, Percy P; Thomas, David H; Gaudio, Sergio; Jani, Shyam S; Wu, Xiao; Lamb, James M

    2013-01-01

    To report on a novel technique for providing artifact-free quantitative four-dimensional computed tomography (4DCT) image datasets for breathing motion modeling. Commercial clinical 4DCT methods have difficulty managing irregular breathing. The resulting images contain motion-induced artifacts that can distort structures and inaccurately characterize breathing motion. We have developed a novel scanning and analysis method for motion-correlated CT that utilizes standard repeated fast helical acquisitions, a simultaneous breathing surrogate measurement, deformable image registration, and a published breathing motion model. The motion model differs from the CT-measured motion by an average of 0.65 mm, indicating the precision of the motion model. The integral of the divergence of one of the motion model parameters is predicted to be a constant 1.11 and is found in this case to be 1.09, indicating the accuracy of the motion model. The proposed technique shows promise for providing motion-artifact free images at user-selected breathing phases, accurate Hounsfield units, and noise characteristics similar to non-4D CT techniques, at a patient dose similar to or less than current 4DCT techniques. (fast track communication)

  4. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    Full Text Available This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration utilizing the Markov Model and Monte Carlo (MC) simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of the results achieved is later carried out with the help of MC simulation. In addition, MC simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
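    A Monte Carlo sketch of the availability of two repairable units in series is given below, using exponential failure and repair times with hypothetical rates; as noted above, the same simulation structure also accommodates non-exponential distributions by swapping the sampling step.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two repairable units in series: the system is up only when both are up
lam = np.array([1e-3, 2e-3])   # failure rates [1/h]
mu = np.array([1e-1, 5e-2])    # repair rates [1/h]
horizon, n_runs = 10_000.0, 200
up_frac = []

for _ in range(n_runs):
    t, up_time = 0.0, 0.0
    state = np.ones(2, dtype=bool)             # both units initially up
    while t < horizon:
        rates = np.where(state, lam, mu)       # next-event rate per unit
        total = rates.sum()
        dt = min(rng.exponential(1 / total), horizon - t)
        if state.all():
            up_time += dt
        t += dt
        if t < horizon:
            i = rng.choice(2, p=rates / total) # which unit changes state
            state[i] = ~state[i]
    up_frac.append(up_time / horizon)

print("simulated series availability:", np.mean(up_frac))
```

    For exponential rates the result can be checked against the analytical product of unit availabilities, mu_i / (lam_i + mu_i), which is roughly 0.95 here.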

  5. Kerf modelling in abrasive waterjet milling using evolutionary computation and ANOVA techniques

    Science.gov (United States)

    Alberdi, A.; Rivero, A.; Carrascal, A.; Lamikiz, A.

    2012-04-01

    Many researchers have demonstrated the capability of Abrasive Waterjet (AWJ) technology for precision milling operations. However, the concurrence of several input parameters, along with the stochastic nature of this technology, leads to a complex process control, which requires work focused on process modelling. This research work introduces a model to predict the kerf shape in AWJ slot milling of Aluminium 7075-T651 in terms of four important process parameters: the pressure, the abrasive flow rate, the stand-off distance and the traverse feed rate. A hybrid evolutionary approach was employed for kerf shape modelling. This technique allowed characterizing the profile through two parameters: the maximum cutting depth and the full width at half maximum. On the other hand, based on ANOVA and regression techniques, these two parameters were also modelled as a function of the process parameters. The combination of both models resulted in an adequate strategy to predict the kerf shape for different machining conditions.

  6. A Review of Domain Modelling and Domain Imaging Techniques in Ferroelectric Crystals.

    Science.gov (United States)

    Potnis, Prashant R; Tsou, Nien-Ti; Huber, John E

    2011-02-16

    The present paper reviews models of domain structure in ferroelectric crystals, thin films and bulk materials. Common crystal structures in ferroelectric materials are described and the theory of compatible domain patterns is introduced. Applications to multi-rank laminates are presented. Alternative models employing phase-field and related techniques are reviewed. The paper then presents methods of observing ferroelectric domain structure, including optical, polarized light, scanning electron microscopy, X-ray and neutron diffraction, atomic force microscopy and piezo-force microscopy. Use of more than one technique for unambiguous identification of the domain structure is also described.

  7. A Review of Domain Modelling and Domain Imaging Techniques in Ferroelectric Crystals

    Directory of Open Access Journals (Sweden)

    John E. Huber

    2011-02-01

    Full Text Available The present paper reviews models of domain structure in ferroelectric crystals, thin films and bulk materials. Common crystal structures in ferroelectric materials are described and the theory of compatible domain patterns is introduced. Applications to multi-rank laminates are presented. Alternative models employing phase-field and related techniques are reviewed. The paper then presents methods of observing ferroelectric domain structure, including optical, polarized light, scanning electron microscopy, X-ray and neutron diffraction, atomic force microscopy and piezo-force microscopy. Use of more than one technique for unambiguous identification of the domain structure is also described.

  8. Application of rapid prototyping techniques for modelling of anatomical structures in medical training and education.

    Science.gov (United States)

    Torres, K; Staśkiewicz, G; Śnieżyński, M; Drop, A; Maciejewski, R

    2011-02-01

    Rapid prototyping has become an innovative method of fast and cost-effective production of three-dimensional models for manufacturing. Wide access to advanced medical imaging methods allows application of this technique for medical training purposes. This paper presents the feasibility of rapid prototyping technologies: stereolithography, selective laser sintering, fused deposition modelling, and three-dimensional printing, for medical education. Rapid prototyping techniques are a promising method for improving the anatomical education of medical students and also a valuable source of training tools for medical specialists.

  9. A Shell/3D Modeling Technique for Delaminations in Composite Laminates

    Science.gov (United States)

    Krueger, Ronald

    1999-01-01

    A shell/3D modeling technique was developed for which a local solid finite element model is used only in the immediate vicinity of the delamination front. The goal was to combine the accuracy of the full three-dimensional solution with the computational efficiency of a plate or shell finite element model. Multi-point constraints provide a kinematically compatible interface between the local 3D model and the global structural model which has been meshed with plate or shell finite elements. For simple double cantilever beam (DCB), end notched flexure (ENF), and single leg bending (SLB) specimens, mixed mode energy release rate distributions were computed across the width from nonlinear finite element analyses using the virtual crack closure technique. The analyses served to test the accuracy of the shell/3D technique for the pure mode I case (DCB), mode II case (ENF) and a mixed mode I/II case (SLB). Specimens with a unidirectional layup where the delamination is located between two 0° plies, as well as a multidirectional layup where the delamination is located between two non-zero degree plies, were simulated. For a local 3D model extending to a minimum of about three specimen thicknesses in front of and behind the delamination front, the results were in good agreement with mixed mode strain energy release rates obtained from computations where the entire specimen had been modeled with solid elements. For large built-up composite structures modeled with plate elements, the shell/3D modeling technique offers a great potential, since only a relatively small section in the vicinity of the delamination front needs to be modeled with solid elements.

  10. Integrated approach to model decomposed flow hydrograph using artificial neural network and conceptual techniques

    Science.gov (United States)

    Jain, Ashu; Srinivasulu, Sanaga

    2006-02-01

    This paper presents the findings of a study aimed at decomposing a flow hydrograph into different segments based on physical concepts in a catchment, and modelling the different segments using different techniques, viz. conceptual techniques and artificial neural networks (ANNs). An integrated modelling framework is proposed capable of modelling infiltration, base flow, evapotranspiration, soil moisture accounting, and certain segments of the decomposed flow hydrograph using conceptual techniques, and the complex, non-linear, and dynamic rainfall-runoff process using the ANN technique. Specifically, five different multi-layer perceptron (MLP) and two self-organizing map (SOM) models have been developed. The rainfall and streamflow data derived from the Kentucky River catchment were employed to test the proposed methodology and develop all the models. The performance of all the models was evaluated using seven different standard statistical measures. The results obtained in this study indicate that (a) the rainfall-runoff relationship in a large catchment consists of at least three or four different mappings corresponding to different dynamics of the underlying physical processes, (b) an integrated approach that models the different segments of the decomposed flow hydrograph using different techniques is better than a single ANN in modelling the complex, dynamic, non-linear, and fragmented rainfall-runoff process, (c) a simple model based on the concept of flow recession is better than an ANN for modelling the falling limb of a flow hydrograph, and (d) decomposing a flow hydrograph into different segments corresponding to different dynamics based on physical concepts is better than the soft decomposition obtained using SOM.
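    The conceptual component used for the falling limb can be sketched as a simple recession model, Q(t) = Q0 * k**t, fitted in log space; the flow values below are hypothetical, and the other hydrograph segments would be handled by the ANN models.

```python
import numpy as np

# Hypothetical daily flows on a recession limb
q_obs = np.array([120.0, 96.0, 78.0, 62.0, 50.5, 40.8])
t = np.arange(len(q_obs))

# Linear regression on log Q recovers log Q0 (intercept) and log k (slope)
slope, intercept = np.polyfit(t, np.log(q_obs), 1)
q0, k = np.exp(intercept), np.exp(slope)
print(f"fitted recession: Q0 = {q0:.1f}, k = {k:.3f}")
print("3-day-ahead forecast:", q0 * k ** (len(q_obs) + 2))
```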

  11. Hybrid models for hydrological forecasting: Integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following

  12. Hybrid models for hydrological forecasting : Integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following

  13. NEW TECHNIQUE FOR OBESITY SURGERY: INTERNAL GASTRIC PLICATION TECHNIQUE USING INTRAGASTRIC SINGLE-PORT (IGS-IGP) IN EXPERIMENTAL MODEL.

    Science.gov (United States)

    Müller, Verena; Fikatas, Panagiotis; Gül, Safak; Noesser, Maximilian; Fuehrer, Kirsten; Sauer, Igor; Pratschke, Johann; Zorron, Ricardo

    2017-01-01

    Bariatric surgery is currently the most effective method to ameliorate the co-morbidities of morbid obesity in patients with BMI over 35 kg/m2. Endoscopic techniques have been developed to treat patients with mild obesity and ameliorate comorbidities, but endoscopic skills are needed, besides the costs of the devices. To report a new technique for internal gastric plication using an intragastric single-port device in an experimental swine model. Twenty experiments using fresh pig cadaver stomachs in a laparoscopic trainer were performed. The procedure was performed as follows in ten pigs: 1) volume measurement; 2) insufflation of the stomach with CO2; 3) extroversion of the stomach through the simulator and installation of the single-port device (Gelpoint Applied Mini) through a gastrotomy close to the pylorus; 4) performance of four intragastric handsewn 4-point sutures with Prolene 2-0, from the gastric fundus to the antrum; 5) after the procedure, the residual volume was measured. Sleeve gastrectomy was also performed in a further ten pigs and pre- and post-procedure gastric volumes were measured. The internal gastric plication technique was performed successfully in the ten swine experiments. The mean procedure time was 27±4 min. It produced a mean gastric volume reduction of 51%, versus a mean of 90% for sleeve gastrectomy in this swine model. The internal gastric plication technique using an intragastric single-port device required few skills to perform, had low operative time and achieved a good reduction (51%) of gastric volume in an in vitro experimental model.

  14. Electricity market price spike analysis by a hybrid data model and feature selection technique

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    In a competitive electricity market, energy price forecasting is an important activity for both suppliers and consumers. For this reason, many techniques have been proposed to predict electricity market prices in recent years. However, the electricity price is a complex volatile signal with many spikes. Most electricity price forecast techniques focus on normal price prediction, while price spike forecasting is a different and more complex prediction process. Price spike forecasting has two main aspects: prediction of price spike occurrence and of price spike value. In this paper, a novel technique for price spike occurrence prediction is presented, composed of a new hybrid data model, a novel feature selection technique and an efficient forecast engine. The hybrid data model includes both wavelet and time domain variables as well as calendar indicators, comprising a large candidate input set. The set is refined by the proposed feature selection technique, evaluating both the relevancy and redundancy of the candidate inputs. The forecast engine is a probabilistic neural network, which is fed by the selected candidate inputs of the feature selection technique and predicts price spike occurrence. The efficiency of the whole proposed method for price spike occurrence forecasting is evaluated by means of real data from the Queensland and PJM electricity markets. (author)
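    A relevancy-minus-redundancy filter in the spirit described above can be sketched with scikit-learn mutual information estimates, as below; the synthetic data and the greedy mRMR-style criterion are illustrative assumptions, and the paper's exact criterion differs in detail.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

rng = np.random.default_rng(8)

# Synthetic stand-in for the candidate input set (e.g. wavelet and
# time-domain variables) and a binary spike-occurrence target
n, n_feat = 500, 10
X = rng.normal(size=(n, n_feat))
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(0, 0.5, n) > 1.5).astype(int)

# Relevancy: mutual information between each candidate input and the target
relevance = mutual_info_classif(X, y, random_state=0)
selected = [int(np.argmax(relevance))]
remaining = set(range(n_feat)) - set(selected)

# Greedy selection: maximize relevancy minus redundancy with chosen inputs
while len(selected) < 4:
    def score(j):
        redundancy = np.mean([
            mutual_info_regression(X[:, [j]], X[:, s], random_state=0)[0]
            for s in selected])
        return relevance[j] - redundancy
    best = max(remaining, key=score)
    selected.append(best)
    remaining.discard(best)

print("selected candidate inputs:", selected)
```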

  15. Machine Learning Techniques for Modelling Short Term Land-Use Change

    Directory of Open Access Journals (Sweden)

    Mileva Samardžić-Petrović

    2017-11-01

    Full Text Available The representation of land use change (LUC) is often achieved by using data-driven methods that include machine learning (ML) techniques. The main objectives of this research study are to implement three ML techniques, Decision Trees (DT), Neural Networks (NN), and Support Vector Machines (SVM), for LUC modeling, in order to compare these three ML techniques and to find the appropriate data representation. The ML techniques are applied to the case study of LUC in three municipalities of the City of Belgrade, the Republic of Serbia, using historical geospatial data sets and considering nine land use classes. The ML models were built and assessed using two different time intervals. The information gain ranking technique and the recursive attribute elimination procedure were implemented to find the most informative attributes related to LUC in the study area. The results indicate that all three ML techniques can be used effectively for short-term forecasting of LUC, but the SVM achieved the highest agreement of predicted changes.

  16. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    International Nuclear Information System (INIS)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J.

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques

  17. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J. (comps.)

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.

  18. A review of techniques for spatial modeling in geographical, conservation and landscape genetics.

    Science.gov (United States)

    Diniz-Filho, José Alexandre Felizola; Nabout, João Carlos; de Campos Telles, Mariana Pires; Soares, Thannya Nascimento; Rangel, Thiago Fernando L V B

    2009-04-01

    Most evolutionary processes occur in a spatial context, and several spatial analysis techniques have been employed in an exploratory context. However, the existence of autocorrelation can also perturb significance tests when genetic data are modeled as a function of explanatory variables using standard correlation and regression techniques. In this case, more complex models incorporating the effects of autocorrelation must be used. Here we review those models and compare their relative performances in a simple simulation, in which spatial patterns in allele frequencies were generated by a balance between random variation within populations and spatially structured gene flow. Notwithstanding the somewhat idiosyncratic behavior of the techniques evaluated, it is clear that spatial autocorrelation affects Type I errors and that standard linear regression does not provide minimum-variance estimators. Due to its flexibility, we stress that principal coordinates of neighbour matrices (PCNM) and related eigenvector mapping techniques seem to be the best approaches to spatial regression. In general, we hope that our review of the spatial regression techniques commonly used in biology and ecology may aid population geneticists in providing better explanations for population structure when dealing with more complex regression problems throughout geographic space.
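
    A minimal numerical sketch of the eigenvector-mapping idea (in the spirit of PCNM, not the authors' code) is given below: eigenvectors of a double-centred, truncated spatial weight matrix are added as regressors when modeling an allele frequency. The coordinates, truncation distance and regression coefficients are all invented for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        coords = rng.uniform(0, 100, size=(40, 2))   # population locations (invented)
        d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
        t = 20.0                                     # truncation distance (assumed)
        w = np.where((d > 0) & (d <= t), 1.0 - (d / (4 * t)) ** 2, 0.0)
        n = len(w)
        h = np.eye(n) - np.ones((n, n)) / n          # centring matrix
        vals, vecs = np.linalg.eigh(h @ w @ h)       # Moran-eigenvector decomposition
        E = vecs[:, np.argsort(vals)[::-1][:5]]      # leading spatial eigenvectors
        env = rng.normal(size=n)                     # an environmental predictor
        allele = 0.3 * env + E[:, 0] + 0.1 * rng.normal(size=n)   # toy response
        X = np.column_stack([np.ones(n), env, E])    # regression with spatial filters
        beta, *_ = np.linalg.lstsq(X, allele, rcond=None)
        print("environmental slope after spatial filtering:", round(beta[1], 3))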

  19. Evaluation of mesh morphing and mapping techniques in patient specific modeling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2013-01-01

    Robust generation of pelvic finite element models is necessary to understand the variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmark-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good-quality, geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.

  20. Evaluation of mesh morphing and mapping techniques in patient specific modelling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2012-08-01

    Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmark-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.

  1. New techniques for the analysis of manual control systems. [mathematical models of human operator behavior

    Science.gov (United States)

    Bekey, G. A.

    1971-01-01

    Studies are summarized on the application of advanced analytical and computational methods to the development of mathematical models of human controllers in multiaxis manual control systems. Specific accomplishments include the following: (1) The development of analytical and computer methods for the measurement of random parameters in linear models of human operators. (2) Discrete models of human operator behavior in a multiple display situation were developed. (3) Sensitivity techniques were developed which make possible the identification of unknown sampling intervals in linear systems. (4) The adaptive behavior of human operators following particular classes of vehicle failures was studied and a model structure proposed.

  2. Modeling of PV Systems Based on Inflection Points Technique Considering Reverse Mode

    Directory of Open Access Journals (Sweden)

    Bonie J. Restrepo-Cuestas

    2013-11-01

    This paper proposes a methodology for modeling photovoltaic (PV) systems, considering their behavior in both direct and reverse operating modes and under mismatching conditions. The proposed methodology is based on the inflection points technique, with a linear approximation to model the bypass diode and a simplified PV model. The proposed mathematical model makes it possible to evaluate the energetic performance of a PV system, exhibiting short simulation times even for large PV systems. In addition, the methodology allows the condition of modules affected by partial shading to be estimated, since it is possible to know the power dissipated due to their operation in the second quadrant.

  3. Control System Design for Cylindrical Tank Process Using Neural Model Predictive Control Technique

    Directory of Open Access Journals (Sweden)

    M. Sridevi

    2010-10-01

    Chemical manufacturing and process industries require innovative technologies for process identification. This paper deals with model identification and control of a cylindrical tank process. Model identification of the process was carried out using the ARMAX technique. A neural model predictive controller was designed for the identified model. The performance of the controllers was evaluated using MATLAB software. The performance of the NMPC controller was compared with that of a Smith predictor controller and an IMC controller based on rise time, settling time, overshoot and ISE, and the NMPC controller was found to be better suited for this process.
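
    The identification step can be illustrated with the ARX structure, a simpler relative of ARMAX that is estimable by ordinary least squares. The first-order "tank" dynamics and noise level below are invented for the sketch and are not the paper's identified model.

        import numpy as np

        rng = np.random.default_rng(3)
        N = 500
        u = rng.uniform(0, 1, N)                     # inflow (input)
        y = np.zeros(N)                              # tank level (output)
        for k in range(1, N):                        # "true" plant, invented
            y[k] = 0.9 * y[k - 1] + 0.2 * u[k - 1] + 0.01 * rng.normal()
        # regressors for the model y[k] = a*y[k-1] + b*u[k-1]
        Phi = np.column_stack([y[:-1], u[:-1]])
        theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
        print("estimated (a, b):", np.round(theta, 3))   # should be close to (0.9, 0.2)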

  4. Coupled Numerical Methods to Analyze Interacting Acoustic-Dynamic Models by Multidomain Decomposition Techniques

    Directory of Open Access Journals (Sweden)

    Delfim Soares

    2011-01-01

    This work focuses on the coupled numerical analysis of interacting acoustic and dynamic models. In this context, several numerical methods, such as the finite difference method, the finite element method, the boundary element method, and meshless methods, are considered to model each subdomain of the coupled model, and multidomain decomposition techniques are applied to deal with the coupling relations. Two basic coupling algorithms are discussed here, namely the explicit direct coupling approach and the implicit iterative coupling approach, which are formulated based on explicit/implicit time-marching techniques. Completely independent spatial and temporal discretizations among the interacting subdomains are permitted, allowing an optimal discretization for each subdomain of the model. At the end of the paper, numerical results are presented, illustrating the performance and potential of the discussed methodologies.

  5. Animal models in bariatric surgery--a review of the surgical techniques and postsurgical physiology.

    Science.gov (United States)

    Rao, Raghavendra S; Rao, Venkatesh; Kini, Subhash

    2010-09-01

    Bariatric surgery is considered the most effective current treatment for morbid obesity. Since the first publication of an article by Kremen, Linner, and Nelson, many experiments have been performed using animal models. The initial experiments used only malabsorptive procedures, such as intestinal bypass, which have now largely been abandoned. These experimental models have been used to assess feasibility and safety as well as to refine techniques particular to each procedure. We discuss the surgical techniques and the postsurgical physiology of the four major current bariatric procedures (namely, Roux-en-Y gastric bypass, gastric banding, sleeve gastrectomy, and biliopancreatic diversion). We have also reviewed the anatomy and physiology of animal models. We have reviewed the literature and presented it so that it can serve as a reference for investigators interested in animal experiments in bariatric surgery. Experimental animal models are further divided into two categories: large mammals, including dogs, cats, rabbits, and pigs, and small mammals, including rats and mice.

  6. Analysis of Multipath Mitigation Techniques with Land Mobile Satellite Channel Model

    Directory of Open Access Journals (Sweden)

    M. Z. H. Bhuiyan; J. Zhang

    2012-12-01

    Multipath is undesirable for Global Navigation Satellite System (GNSS) receivers, since the reception of multipath can create significant distortion in the shape of the correlation function, leading to an error in the receiver's position estimate. Many multipath mitigation techniques exist in the literature to deal with the multipath propagation problem in the context of GNSS. The multipath studies in the literature are often based on optimistic assumptions, for example, assuming a static two-path channel or a fading channel with a Rayleigh or a Nakagami distribution. But, in reality, there are many channel modeling issues, for example, satellite-to-user geometry, variable number of paths, variable path delays and gains, Non-Line-Of-Sight (NLOS) path conditions, receiver movements, etc., that are left out of consideration when analyzing the performance of these techniques. It is therefore of utmost importance to analyze the performance of different multipath mitigation techniques in realistic measurement-based channel models, such as the Land Mobile Satellite channel model considered in this work.

  7. Modeling and comparative study of various detection techniques for FMCW LIDAR using optisystem

    Science.gov (United States)

    Elghandour, Ahmed H.; Ren, Chen D.

    2013-09-01

    In this paper we investigate different detection techniques, in particular direct detection, coherent heterodyne detection and coherent homodyne detection, for an FMCW LIDAR system using the Optisystem package. A model of the target, the propagation channel and the various detection techniques was developed in Optisystem, and a comparative study among the detection techniques for FMCW LIDAR systems was carried out analytically and by simulation using the developed model. The performance of direct detection, heterodyne detection and homodyne detection for the FMCW LIDAR system was calculated and simulated using the Optisystem package, and the simulated performance was verified against results from a MATLAB simulation. The results show that direct detection is sensitive to the intensity of the received electromagnetic signal and has the advantage of lower system complexity compared with the other detection architectures, at the expense of thermal noise being the dominant noise source and relatively poor sensitivity. In addition, much higher detection sensitivity can be achieved using coherent optical mixing, as performed in heterodyne and homodyne detection.
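
    The core of FMCW range detection by coherent mixing can be reproduced in a few lines: mixing the received chirp with the transmitted one leaves a beat tone whose frequency is proportional to target range. The sweep parameters and target range below are invented, and complex-baseband signals are used so that only the beat term remains.

        import numpy as np

        c = 3e8                              # speed of light, m/s
        B, T = 100e6, 1e-3                   # sweep bandwidth (Hz) and duration (s), assumed
        R = 300.0                            # target range in metres, assumed
        tau = 2 * R / c                      # round-trip delay
        fs = 2e6                             # sampling rate of the mixed signal
        t = np.arange(0, T, 1 / fs)
        k = B / T                            # chirp rate
        tx = np.exp(1j * np.pi * k * t ** 2)             # complex-baseband transmit chirp
        rx = np.exp(1j * np.pi * k * (t - tau) ** 2)     # delayed echo
        beat = tx * np.conj(rx)              # coherent mixing leaves a tone at k*tau Hz
        spec = np.abs(np.fft.fft(beat))
        freqs = np.fft.fftfreq(len(beat), 1 / fs)
        f_beat = abs(freqs[np.argmax(spec)])
        print("estimated range:", round(f_beat * c * T / (2 * B), 1), "m")   # ~300 m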

  8. Assessing the validity of two indirect questioning techniques: A Stochastic Lie Detector versus the Crosswise Model.

    Science.gov (United States)

    Hoffmann, Adrian; Musch, Jochen

    2016-09-01

    Estimates of the prevalence of sensitive attributes obtained through direct questions are prone to being distorted by untruthful responding. Indirect questioning procedures such as the Randomized Response Technique (RRT) aim to control for the influence of social desirability bias. However, even on RRT surveys, some participants may disobey the instructions in an attempt to conceal their true status. In the present study, we experimentally compared the validity of two competing indirect questioning techniques that presumably offer a solution to the problem of nonadherent respondents: the Stochastic Lie Detector and the Crosswise Model. For two sensitive attributes, both techniques met the "more is better" criterion. Their application resulted in higher, and thus presumably more valid, prevalence estimates than a direct question. Only the Crosswise Model, however, adequately estimated the known prevalence of a nonsensitive control attribute.
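
    The Crosswise Model estimator itself is a one-line formula. The sketch below simulates the design with an assumed true prevalence and a nonsensitive item of known prevalence p, then recovers the estimate with its approximate standard error; all numbers are simulated, not the study's data.

        import numpy as np

        rng = np.random.default_rng(4)
        pi_true, p, n = 0.20, 0.25, 2000    # true prevalence, known nonsensitive prevalence
        sensitive = rng.random(n) < pi_true
        nonsensitive = rng.random(n) < p
        same = sensitive == nonsensitive    # the only answer respondents give
        lam = same.mean()                   # P(same) = pi*p + (1 - pi)*(1 - p)
        pi_hat = (lam + p - 1) / (2 * p - 1)
        se = np.sqrt(lam * (1 - lam) / n) / abs(2 * p - 1)
        print(f"estimated prevalence: {pi_hat:.3f} +/- {1.96 * se:.3f}")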

  9. Stakeholder approach, Stakeholders mental model: A visualization test with cognitive mapping technique

    Directory of Open Access Journals (Sweden)

    Garoui Nassreddine

    2012-04-01

    The aim of this paper is to determine the mental models of actors in the firm with respect to the stakeholder approach to corporate governance. Cognitive maps are used to visualize these mental models and to show the ways of thinking about, and conceptualizing, the stakeholder approach. The paper takes a corporate governance perspective and discusses the stakeholder model, using a cognitive mapping technique.

  10. Similitude Conditions Modeling Geosynthetic-Reinforced Piled Embankments Using FEM and FDM Techniques

    OpenAIRE

    Jennings, Keith; Naughton, Patrick J.

    2012-01-01

    The numerical modelling of geosynthetic-reinforced piled embankments using both the finite element method (FEM) and the finite difference method (FDM) is compared. Plaxis 2D (FEM) was utilized to replicate the FLAC (FDM) analysis originally presented by Han and Gabr on a unit cell axisymmetric model of a geosynthetic-reinforced piled embankment (GRPE). The FEM and FDM techniques were found to be in reasonable agreement, in both characteristic trend and absolute value. FEM consistently replicated...

  11. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    Science.gov (United States)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2016-06-01

    Additive Manufacturing (AM) is a rapid manufacturing process in which input data can be provided from various sources, such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and 3D scanner data. From CT/MRI data, Biomedical Additive Manufacturing (Bio-AM) models can be manufactured. A Bio-AM model gives a better lead on the preplanning of oral and maxillofacial surgery. However, manufacturing accurate Bio-AM models remains an unsolved problem. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible, and determines a correction factor for Bio-AM models built with the Fused Deposition Modelling (FDM) technique. In the present work, CT images of a dry mandible were acquired with a CT scanner and converted into a 3D CAD model in the form of an STL model. The data were then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is taken as the dimensional error, and the ratio of STL to Bio-AM model dimensions is taken as the correction factor. This correction factor helps to fabricate AM models with accurate dimensions of the patient anatomy. Such dimensionally true Bio-AM models increase the safety and accuracy of preplanning in oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM machine is 1.003, and the dimensional error is limited to 0.3 %.
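
    The two quantities are simple ratios, as the following worked example with made-up measurements shows (the paper's reported values, a correction factor of about 1.003 and an error near 0.3 %, correspond to differences of this size).

        stl_mm = [45.20, 30.10, 12.50]      # dimensions measured on the STL model (invented)
        print_mm = [45.06, 30.01, 12.46]    # same dimensions on the FDM print (invented)
        for s, b in zip(stl_mm, print_mm):
            error_pct = 100 * (s - b) / s   # dimensional error
            correction = s / b              # correction factor
            print(f"STL {s} mm -> print {b} mm: error {error_pct:.2f} %, "
                  f"correction factor {correction:.4f}")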

  12. Extending the reach of strong-coupling: an iterative technique for Hamiltonian lattice models

    International Nuclear Information System (INIS)

    Alberty, J.; Greensite, J.; Patkos, A.

    1983-12-01

    The authors propose an iterative method for doing lattice strong-coupling-like calculations in a range of medium to weak couplings. The method is a modified Lanczos scheme, with greatly improved convergence properties. The technique is tested on the Mathieu equation and on a Hamiltonian finite-chain XY model, with excellent results. (Auth.)
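
    For orientation, the textbook Lanczos scheme that the proposed modification builds on can be sketched as follows; the random symmetric matrix stands in for a lattice Hamiltonian, and the paper's convergence improvements are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(5)
        A = rng.normal(size=(200, 200)); A = (A + A.T) / 2   # stand-in Hamiltonian
        m = 30                                               # Krylov dimension
        v = rng.normal(size=200); v /= np.linalg.norm(v)
        V, alpha, beta = [v], [], []
        w = A @ v
        for j in range(m):
            alpha.append(v @ w)                              # diagonal entry
            w = w - alpha[-1] * v - (beta[-1] * V[-2] if beta else 0)
            beta.append(np.linalg.norm(w))
            v = w / beta[-1]                                 # next Lanczos vector
            V.append(v)
            w = A @ v
        T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
        print("Lanczos ground-state estimate:", np.linalg.eigvalsh(T)[0])
        print("exact ground state:           ", np.linalg.eigvalsh(A)[0])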

  13. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall PMCMC provides a very compelling, computationally fast...

  14. Analysis of fluidized bed granulation process using conventional and novel modeling techniques.

    Science.gov (United States)

    Petrović, Jelena; Chansanroj, Krisanin; Meier, Brigitte; Ibrić, Svetlana; Betz, Gabriele

    2011-10-09

    Various modeling techniques have been applied to analyze the fluidized-bed granulation process. The influence of various input parameters (product, inlet and outlet air temperature, consumption of liquid binder, granulation liquid-binder spray rate, spray pressure, drying time) on granulation output properties (granule flow rate, granule size determined using the light scattering method and sieve analysis, granule Hausner ratio, porosity and residual moisture) has been assessed. Both conventional and novel modeling techniques were used, such as screening tests, multiple regression analysis, self-organizing maps, artificial neural networks, decision trees and rule induction. Diverse testing of the developed models (internal and external validation) is discussed. Good correlation has been obtained between the predicted and the experimental data. It has been shown that nonlinear methods based on artificial intelligence, such as neural networks, are far better at generalization and prediction than conventional methods. The possibility of using SOMs, decision trees and the rule induction technique to monitor and optimize the fluidized-bed granulation process has also been demonstrated. The findings obtained can serve as guidance for the implementation of modeling techniques in fluidized-bed granulation process understanding and control. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Application of modelling techniques in the food industry: determination of shelf-life for chilled foods

    NARCIS (Netherlands)

    Membré, J.M.; Johnston, M.D.; Bassett, J.; Naaktgeboren, G.; Blackburn, W.; Gorris, L.G.M.

    2005-01-01

    Microbiological modelling techniques (predictive microbiology, the Bayesian Markov Chain Monte Carlo method and a probability risk assessment approach) were combined to assess the shelf-life of an in-pack heat-treated, low-acid sauce intended to be marketed under chilled conditions. From a safety

  16. Reduced order modelling techniques for mesh movement strategies as applied to fluid structure interactions

    CSIR Research Space (South Africa)

    Bogaers, Alfred EJ

    2010-01-01

    Full Text Available In this paper, we implement the method of Proper Orthogonal Decomposition (POD) to generate a reduced order model (ROM) of an optimization based mesh movement technique. In the study it is shown that POD can be used effectively to generate a ROM...

  17. New model reduction technique for a class of parabolic partial differential equations

    NARCIS (Netherlands)

    Vajta, Miklos

    1991-01-01

    A model reduction (or lumping) technique for a class of parabolic-type partial differential equations is given, and its application is discussed. The frequency response of the temperature distribution in any multilayer solid is developed and given by a matrix expression. The distributed transfer

  18. Prediction of Monthly Summer Monsoon Rainfall Using Global Climate Models Through Artificial Neural Network Technique

    Science.gov (United States)

    Nair, Archana; Singh, Gurjeet; Mohanty, U. C.

    2018-01-01

    The monthly prediction of summer monsoon rainfall is very challenging because of its complex and chaotic nature. In this study, a non-linear technique known as the Artificial Neural Network (ANN) has been employed on the outputs of Global Climate Models (GCMs) to address the vagaries inherent in monthly rainfall prediction. The GCMs considered in the study are from the International Research Institute (IRI) (2-tier CCM3v6) and the National Centre for Environmental Prediction (Coupled-CFSv2). The ANN technique is applied to different ensemble members of the individual GCMs to obtain monthly-scale predictions over India as a whole and over its spatial grid points. In the present study, a double cross-validation and simple randomization technique was used to avoid over-fitting during the training process of the ANN model. The performance of the ANN-predicted rainfall from GCMs is judged by analysing the absolute error, box plots, percentiles and the difference in linear error in probability space. Results suggest that there is significant improvement in the prediction skill of these GCMs after applying the ANN technique. The performance analysis reveals that the ANN model is able to capture the year-to-year variations in monsoon months with fairly good accuracy, in extreme years as well. The ANN model is also able to simulate the correct signs of rainfall anomalies over different spatial points of the Indian domain.
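
    Conceptually, the post-processing step can be sketched as below: ensemble-member GCM outputs act as predictors of observed monthly rainfall through a small multilayer perceptron. The data are synthetic and the study's double cross-validation is reduced to a single train/test split, so the numbers are illustrative only.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(6)
        n_years, n_members = 30, 8                   # invented sample sizes
        gcm = rng.normal(size=(n_years, n_members))  # ensemble-member forecasts
        obs = 0.6 * gcm.mean(axis=1) + 0.2 * rng.normal(size=n_years)  # toy truth
        Xtr, Xte, ytr, yte = train_test_split(gcm, obs, test_size=0.3, random_state=0)
        ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        ann.fit(Xtr, ytr)
        print("ANN MAE:", round(mean_absolute_error(yte, ann.predict(Xte)), 3),
              "| raw ensemble-mean MAE:",
              round(mean_absolute_error(yte, Xte.mean(axis=1)), 3))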

  19. Flexible multibody simulation of automotive systems with non-modal model reduction techniques

    Science.gov (United States)

    Shiiba, Taichi; Fehr, Jörg; Eberhard, Peter

    2012-12-01

    The stiffness of the body structure of an automobile has a strong relationship with its noise, vibration, and harshness (NVH) characteristics. In this paper, the effect of the stiffness of the body structure upon ride quality is discussed using flexible multibody dynamics. In flexible multibody simulation, the local elastic deformation of the vehicle has traditionally been described with modal shape functions. Recently, linear model reduction techniques from system dynamics and mathematics have come into focus as a way to find more sophisticated elastic shape functions. In this work, the NVH-relevant states of a racing kart are simulated, with the elastic shape functions calculated by modern model reduction techniques such as moment matching by projection onto Krylov subspaces, singular value decomposition-based reduction techniques, and combinations of those. The whole elastic multibody vehicle model, consisting of tyres, steering, axle, etc., is considered, and an excitation with vibration characteristics in a wide frequency range is evaluated in this paper. The accuracy and the computational performance of these modern model reduction techniques are investigated, including a comparison with the modal reduction approach.
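
    The moment-matching ingredient can be illustrated on a toy first-order system: an Arnoldi basis of the Krylov subspace span{A^-1 b, A^-2 b, ...} yields a reduced model whose transfer function matches moments of the full one at s = 0. The matrices below are random test data, not a kart model.

        import numpy as np

        rng = np.random.default_rng(7)
        n, r = 100, 8                                # full and reduced orders
        A = -2.0 * np.eye(n) + rng.normal(scale=0.1, size=(n, n))   # stable test system
        b = rng.normal(size=n); c = rng.normal(size=n)
        V = np.zeros((n, r))                         # Arnoldi basis of span{A^-1 b, ...}
        v = np.linalg.solve(A, b); V[:, 0] = v / np.linalg.norm(v)
        for j in range(1, r):
            w = np.linalg.solve(A, V[:, j - 1])
            w -= V[:, :j] @ (V[:, :j].T @ w)         # Gram-Schmidt step
            V[:, j] = w / np.linalg.norm(w)
        Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c   # projected (reduced) system
        for s in (0.1, 1.0):                         # compare transfer functions
            G = c @ np.linalg.solve(s * np.eye(n) - A, b)
            Gr = cr @ np.linalg.solve(s * np.eye(r) - Ar, br)
            print(f"s={s}: full {G:.4f}, reduced {Gr:.4f}")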

  20. The commercial use of segmentation and predictive modeling techniques for database marketing in the Netherlands

    NARCIS (Netherlands)

    Verhoef, PC; Spring, PN; Hoekstra, JC; Leeflang, PSH

    Although the application of segmentation and predictive modeling is an important topic in the database marketing (DBM) literature, no study has yet investigated the extent of adoption of these techniques. We present the results of a Dutch survey involving 228 database marketing companies. We find

  1. Dimensional Analysis: an Elegant Technique for Facilitating the Teaching of Mathematical Modelling.

    Science.gov (United States)

    Fay, Temple H.; Joubert, Stephan V.

    2002-01-01

    Dimensional analysis is promoted as a technique that fosters better understanding of the role of units and dimensions in mathematical modeling problems. Dimensional analysis is shown to lead to interesting systems of linear equations to solve, and it can point the way to more quantitative analysis. Two student problems are discussed. (Author/MM)

  2. Nonlinear modelling of polymer electrolyte membrane fuel cell stack using nonlinear cancellation technique

    International Nuclear Information System (INIS)

    Barus, R. P. P.; Tjokronegoro, H. A.; Leksono, E.; Ismunandar

    2014-01-01

    Fuel cells are promising new energy conversion devices that are friendly to the environment. A set of control systems is required in order to operate a fuel cell based power plant system optimally. For the purpose of control system design, an accurate fuel cell stack model that describes the dynamics of the real system is needed. Currently, linear models are widely used for fuel cell stack control purposes, but they have limitations over a narrow operating range, while nonlinear models lead to nonlinear control implementations that are more complex and computationally demanding. In this research, a nonlinear cancellation technique is used to transform a nonlinear model into a linear form while maintaining the nonlinear characteristics. The transformation is done by replacing the input of the original model by a certain virtual input that has a nonlinear relationship with the original input. The equality of the two models is then tested by running a series of simulations. Input variations of H2, O2 and H2O, as well as the disturbance input I (current load), are studied by simulation. The error between the proposed model and the original nonlinear model is less than 1 %. Thus we can conclude that the nonlinear cancellation technique can be used to represent a fuel cell nonlinear model in a simple linear form while maintaining the nonlinear characteristics and therefore retaining the wide operating range.
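
    The idea of the input transformation can be shown on an invented scalar model that is nonlinear in its input: with a suitable virtual input the model becomes exactly linear, and simulating both forms gives identical trajectories. This toy system is not the fuel cell stack model of the paper.

        import numpy as np

        dt, steps = 0.01, 500
        x_nl, x_lin = 0.0, 0.0
        for k in range(steps):
            v = 1.0 + 0.5 * np.sin(0.02 * k)    # virtual input of the linearised model
            u = v ** 2                          # actual input fed to the nonlinear model
            x_nl += dt * (-x_nl + np.sqrt(u))   # original model, nonlinear in u
            x_lin += dt * (-x_lin + v)          # transformed model, linear in v
        print("final difference between trajectories:", abs(x_nl - x_lin))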

  3. A characteristic study of CCF modeling techniques and optimization of CCF defense strategies

    International Nuclear Information System (INIS)

    Kim, Min Chull

    2000-02-01

    Common Cause Failures (CCFs) are among the major contributors to risk and core damage frequency (CDF) in operating nuclear power plants (NPPs). Our study on CCF focused on the following aspects: 1) a characteristic study of CCF modeling techniques and 2) development of an optimal CCF defense strategy. Firstly, the characteristics of CCF modeling techniques were studied through a sensitivity study of the CCF occurrence probability with respect to system redundancy. The modeling techniques considered in this study include those most widely used worldwide, i.e., the beta factor, MGL, alpha factor, and binomial failure rate models. We found that the MGL and alpha factor models are essentially identical in terms of the CCF probability. Secondly, in the study of CCF defense, the various methods identified in previous studies for defending against CCF were classified into five categories. Based on these categories, we developed a generic method by which the optimal CCF defense strategy can be selected. The method is not only qualitative but also quantitative in nature: the selection of the optimal strategy among candidates is based on the analytic hierarchy process (AHP). We applied this method to two motor-driven valves for containment sump isolation in the Ulchin 3 and 4 nuclear power plants. The results indicate that the method for developing an optimal CCF defense strategy is effective.
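
    For a 2-out-of-2 redundant pair, the beta-factor and alpha-factor parametrisations reduce to short formulas; the sketch below uses one common convention (NUREG/CR-5485-style alpha factors for non-staggered testing) with illustrative, non-plant-specific numbers.

        Qt = 1.0e-3     # total failure probability of one component (illustrative)
        beta = 0.05     # beta factor: fraction of failures that are common cause
        q_ccf_beta = beta * Qt

        a1, a2 = 0.95, 0.05            # alpha_k: fraction of events failing k components
        alpha_t = a1 + 2 * a2
        q_ccf_alpha = (2 * a2 / alpha_t) * Qt   # non-staggered-testing convention
        print(f"CCF probability, beta-factor model : {q_ccf_beta:.2e}")
        print(f"CCF probability, alpha-factor model: {q_ccf_alpha:.2e}")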

  4. A MECHANISTIC MODEL FOR PARTICLE DEPOSITION IN DIESEL PARTICLUATE FILTERS USING THE LATTICE-BOLTZMANN TECHNIQUE

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Mark L.; Rector, David R.; Muntean, George G.; Maupin, Gary D.

    2004-08-01

    Cordierite diesel particulate filters (DPFs) offer one of the most promising aftertreatment technologies to meet the quickly approaching EPA 2007 heavy-duty emissions regulations. A critical, yet poorly understood, component of particulate filter modeling is the representation of soot deposition. The structure and distribution of soot deposits upon and within the ceramic substrate directly influence many of the macroscopic phenomena of interest, including filtration efficiency, back pressure, and filter regeneration. Intrinsic soot cake properties such as packing density and permeability coefficients remain inadequately characterized. The work reported in this paper involves subgrid modeling techniques which may prove useful in resolving these inadequacies. The technique involves the use of a lattice Boltzmann modeling approach. This approach resolves length scales which are orders of magnitude below those typical of a standard computational fluid dynamics (CFD) representation of an aftertreatment device. Individual soot particles are introduced and tracked as they move through the flow field and are deposited on the filter substrate or on previously deposited particles. Electron micrographs of actual soot deposits were taken and compared to the model predictions. Descriptions of the modeling technique and the development of the computational domain are provided. Preliminary results are presented, along with some comparisons with experimental observations.

  5. A New Profile Learning Model for Recommendation System based on Machine Learning Technique

    Directory of Open Access Journals (Sweden)

    Shereen H. Ali

    2016-03-01

    Recommender systems (RSs) have been used to successfully address the information overload problem by providing personalized and targeted recommendations to end users. RSs are software tools and techniques that provide suggestions for items likely to be of use to a user; hence, they typically apply techniques and methodologies from data mining. The main contribution of this paper is to introduce a new user profile learning model to promote the recommendation accuracy of vertical recommendation systems. The proposed profile learning model employs the vertical classifier that has been used in the multi-classification module of the Intelligent Adaptive Vertical Recommendation (IAVR) system to discover the user's area of interest, and then builds the user's profile accordingly. Experimental results have proven the effectiveness of the proposed profile learning model, which accordingly promotes the recommendation accuracy.

  6. Reduced technique for modeling electromagnetic immunity on braid shielding cable bundles

    International Nuclear Information System (INIS)

    Xiao Pei; Du Ping-An; Nie Bao-Lin; Ren Dan

    2017-01-01

    In this paper, an efficient multi-conductor simplification technique is proposed to model the electromagnetic immunity of cable bundles within a braided shielding structure over a large frequency range. By grouping the conductors together based on knowledge of the Z-Smith chart, the required computation time is markedly reduced and the complexity of modeling the completely shielded cable bundles is significantly simplified, with good accuracy. After a brief description of the immunity problems in shielding structures, a six-phase procedure is detailed to generate the geometrical characteristics of the reduced cable bundles. Numerical simulation is carried out using the commercial software CST to validate the efficiency and advantages of the proposed approach. The research addressed in this paper can be considered a simplified modeling technique for electromagnetic immunity within a shielding structure. (paper)

  7. A Multi-Model Reduction Technique for Optimization of Coupled Structural-Acoustic Problems

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Brunskog, Jonas

    2016-01-01

    Finite Element models of structural-acoustic coupled systems can become very large for complex structures with multiple connected parts. Optimization of the performance of the structure based on harmonic analysis of the system requires solving the coupled problem iteratively and for several frequencies, which can become highly time consuming. Several modal-based model reduction techniques for structure-acoustic interaction problems have been developed in the literature. The unsymmetric nature of the pressure-displacement formulation of the problem poses the question of how the reduction modal base should be formed, given that the modal vectors are not orthogonal due to the asymmetry of the system matrices. In this paper, a multi-model reduction (MMR) technique for structure-acoustic interaction problems is developed. In MMR, the reduction base is formed with the modal vectors of a family...

  8. Finite-element-model updating using computational intelligence techniques applications to structural dynamics

    CERN Document Server

    Marwala, Tshilidzi

    2010-01-01

    Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies into a sound statistical basis; and • response surface methods and expectation m...

  9. [Eco-value level classification model of forest ecosystem based on modified projection pursuit technique].

    Science.gov (United States)

    Wu, Chengzhen; Hong, Wei; Hong, Tao

    2006-03-01

    To optimize the projection function and projection direction of the projection pursuit technique, simplify its implementation, and overcome the shortcomings of long computation times and the difficulty of optimizing the projection direction and programming it, this paper presents a modified simplex method (MSM) and, based on it, puts forward an eco-value level classification model (EVLCM) for forest ecosystems, which can integrate a multidimensional classification index into a one-dimensional projection value, with a high projection value denoting a high ecosystem services value. Examples of forest ecosystems could be reasonably classified by the new model according to their projection values, suggesting that EVLCM, driven directly by sample data of forest ecosystems, is simple, feasible, applicable, and easy to operate. The calculation time and the value of the projection function were 34% and 143%, respectively, of those obtained with the traditional projection pursuit technique. This model could be applied extensively to classify and estimate all kinds of non-linear and multidimensional data in ecology, biology, and regional sustainable development.

  10. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling

    DEFF Research Database (Denmark)

    Dotto, C. B.; Mannina, G.; Kleidorfer, M.

    2012-01-01

    it is rarely practiced. The International Working Group on Data and Models, which works under the IWA/IAHR Joint Committee on Urban Drainage, has been working on the development of a framework for defining and assessing uncertainties in the field of urban drainage modelling. A part of that work … techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside … the specific advantages and disadvantages of each method. In relation to computational efficiency (i.e. number of iterations required to generate the probability distribution of parameters), it was found that SCEM-UA and AMALGAM produce results quicker than GLUE in terms of required number of simulations...

  11. Optimization models and techniques for implementation and pricing of electricity markets

    International Nuclear Information System (INIS)

    Madrigal Martinez, M.

    2001-01-01

    The operation and planning of vertically integrated electric power systems can be optimized using models that simulate solutions to problems. As the electric power industry goes through a period of restructuring, there is a need for new optimization tools. This thesis describes the importance of optimization tools and presents techniques for implementing them. It also presents methods for pricing primary electricity markets. Three modeling groups are studied. The first considers a simplified continuous and discrete model for power pool auctions. The second considers the unit commitment problem, and the third makes use of a new type of linear network-constrained clearing system model for daily markets for power and spinning reserve. The newly proposed model considers bids for supply and demand as well as bilateral contracts, and uses a direct current model of the transmission network.
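
    A single-period clearing model of the linear network-constrained type described above can be written as a small linear program; the bids, line limit and two-bus network below are invented for illustration (scipy is assumed to be available).

        from scipy.optimize import linprog

        # variables: [g1, g2, d1] = generation at buses 1 and 2, elastic demand at bus 2
        cost = [20.0, 35.0, -50.0]               # supply bids ($/MWh); demand bid enters negatively
        A_eq, b_eq = [[1.0, 1.0, -1.0]], [0.0]   # power balance: g1 + g2 = d1
        A_ub, b_ub = [[1.0, 0.0, 0.0]], [60.0]   # bus-1 exports limited by a 60 MW line
        bounds = [(0, 100), (0, 80), (0, 120)]   # offer/bid quantity limits
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        g1, g2, d1 = res.x
        print(f"g1={g1:.0f} MW, g2={g2:.0f} MW, cleared demand={d1:.0f} MW, "
              f"surplus=${-res.fun:.0f}")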

  12. A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models

    Science.gov (United States)

    Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung

    2015-01-01

    Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, Cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
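
    For reference, a bare-bones global-best PSO is sketched below on a standard test function. The projection step that ProjPSO adds to keep mixture weights on the simplex is omitted, and the hyperparameters are conventional defaults rather than the paper's settings.

        import numpy as np

        def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, (n, dim))        # particle positions
            v = np.zeros((n, dim))                   # particle velocities
            pbest = x.copy()
            pbest_f = np.apply_along_axis(f, 1, x)
            g = pbest[np.argmin(pbest_f)]            # global best
            for _ in range(iters):
                r1, r2 = rng.random((n, dim)), rng.random((n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.apply_along_axis(f, 1, x)
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[np.argmin(pbest_f)]
            return g, pbest_f.min()

        rosen = lambda z: float(np.sum(100 * (z[1:] - z[:-1] ** 2) ** 2 + (1 - z[:-1]) ** 2))
        best_x, best_f = pso(rosen, dim=3)
        print("best point:", np.round(best_x, 3), "objective:", round(best_f, 5))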

  13. Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Migration

    Science.gov (United States)

    DeCarvalho, Nelson V.; Chen, B. Y.; Pinho, Silvestre T.; Baiz, P. M.; Ratcliffe, James G.; Tay, T. E.

    2013-01-01

    A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.

  14. Accuracy Enhanced Stability and Structure Preserving Model Reduction Technique for Dynamical Systems with Second Order Structure

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    A method for model reduction of dynamical systems with second order structure is proposed in this paper. The proposed technique preserves the second order structure of the system, and also preserves the stability of the original system. The method uses the controllability and observability gramians within the time interval to build the appropriate Petrov-Galerkin projection for dynamical systems within the time interval of interest. The bound on the approximation error is also derived. The numerical results are compared with their counterparts from other techniques. The results confirm...

  15. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Red Shift Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at redshifts of 0.5 and above. Volume Phase Holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are more capable of accurately predicting measured performance. Finally, we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.

  16. River suspended sediment modelling using the CART model: A comparative study of machine learning techniques.

    Science.gov (United States)

    Choubin, Bahram; Darabi, Hamid; Rahmati, Omid; Sajedi-Hosseini, Farzaneh; Kløve, Bjørn

    2018-02-15

    Suspended sediment load (SSL) modelling is an important issue in integrated environmental and water resources management, as sediment affects water quality and aquatic habitats. Although classification and regression tree (CART) algorithms have been applied successfully to ecological and geomorphological modelling, their applicability to SSL estimation in rivers has not yet been investigated. In this study, we evaluated use of a CART model to estimate SSL based on hydro-meteorological data. We also compared the accuracy of the CART model with that of the four most commonly used models for time series modelling of SSL, i.e. adaptive neuro-fuzzy inference system (ANFIS), multi-layer perceptron (MLP) neural network and two kernels of support vector machines (RBF-SVM and P-SVM). The models were calibrated using river discharge, stage, rainfall and monthly SSL data for the Kareh-Sang River gauging station in the Haraz watershed in northern Iran, where sediment transport is a considerable issue. In addition, different combinations of input data with various time lags were explored to estimate SSL. The best input combination was identified through trial and error, percent bias (PBIAS), Taylor diagrams and violin plots for each model. For evaluating the capability of the models, different statistics such as Nash-Sutcliffe efficiency (NSE), Kling-Gupta efficiency (KGE) and percent bias (PBIAS) were used. The results showed that the CART model performed best in predicting SSL (NSE=0.77, KGE=0.8, PBIAS<±15), followed by RBF-SVM (NSE=0.68, KGE=0.72, PBIAS<±15). Thus the CART model can be a helpful tool in basins where hydro-meteorological data are readily available. Copyright © 2017 Elsevier B.V. All rights reserved.
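
    A compact demonstration of the CART approach with an NSE skill score is given below; the synthetic discharge and rainfall series stand in for the Kareh-Sang records and are not the study's data.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(8)
        n = 400
        Q = rng.gamma(2.0, 10.0, n)                  # discharge (invented)
        rain = rng.gamma(1.5, 5.0, n)                # rainfall (invented)
        ssl = 0.05 * Q ** 1.8 + 2.0 * rain + rng.normal(0, 5, n)   # toy rating-type law
        X = np.column_stack([Q, rain])
        tr, te = slice(0, 300), slice(300, None)
        cart = DecisionTreeRegressor(max_depth=6, min_samples_leaf=5).fit(X[tr], ssl[tr])
        pred = cart.predict(X[te])
        nse = 1 - np.sum((ssl[te] - pred) ** 2) / np.sum((ssl[te] - ssl[te].mean()) ** 2)
        print("NSE on held-out data:", round(nse, 2))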

  17. Forecasting Macroeconomic Variables using Neural Network Models and Three Automated Model Selection Techniques

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this paper we consider the forecasting performance of a well-defined class of flexible models, the so-called single hidden-layer feedforward neural network models. A major aim of our study is to find out whether they, due to their flexibility, are as useful tools in economic forecasting as some previous studies have indicated. When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. In fact, their parameters are not even globally … on the linearisation idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting should be carried out recursively or directly. Comparisons of these two methods exist for linear models, and here these comparisons are extended to neural networks. Finally, a nonlinear model...

  18. A review of cutting mechanics and modeling techniques for biological materials.

    Science.gov (United States)

    Takabi, Behrouz; Tai, Bruce L

    2017-07-01

    This paper presents a comprehensive survey of the modeling of tissue cutting, including both soft tissue and bone cutting processes. In order to achieve higher accuracy in tissue cutting, a critical process in surgical operations, meticulous modeling of such processes is important, in particular for surgical tool development and analysis. This review paper focuses on the mechanical concepts and modeling techniques utilized to simulate aspects of tissue cutting such as cutting forces and chip morphology. These models are presented in two major categories, namely soft tissue cutting and bone cutting. Fracture toughness is commonly used to describe tissue cutting, while the Johnson-Cook material model is often adopted for bone cutting in conjunction with finite element analysis (FEA). In each section, the most recent mathematical and computational models are summarized. The differences and similarities among these models, challenges, novel techniques, and recommendations for future work are discussed along with each section. This review aims to provide a broad and in-depth view of the methods suitable for tissue and bone cutting simulations. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  19. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal

    2016-01-01

    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  20. Composite use of numerical groundwater flow modeling and geoinformatics techniques for monitoring Indus Basin aquifer, Pakistan.

    Science.gov (United States)

    Ahmad, Zulfiqar; Ashraf, Arshad; Fryar, Alan; Akhter, Gulraiz

    2011-02-01

    The integration of Geographic Information System (GIS) capabilities with groundwater modeling and satellite remote sensing has provided an efficient way of analyzing and monitoring groundwater behavior and its associated land conditions. A 3-dimensional finite element model (Feflow) has been used for regional groundwater flow modeling of the Upper Chaj Doab in the Indus Basin, Pakistan. The approach of using GIS techniques that partially fulfill the data requirements and define the parameters of existing hydrologic models was adopted. The numerical groundwater flow model is developed to configure the groundwater equipotential surface and hydraulic head gradient and to estimate the groundwater budget of the aquifer. GIS is used for spatial database development, integration with remote sensing, and numerical groundwater flow modeling. The thematic layers of soils, land use, hydrology, infrastructure, and climate were developed using GIS. The ArcView GIS software is used as an additional tool to develop supportive data for the numerical groundwater flow modeling and for the integration and presentation of image processing and modeling results. The groundwater flow model was calibrated to simulate future changes in piezometric heads for the period 2006 to 2020. Different scenarios were developed to study the impact of extreme climatic conditions (drought/flood) and variable groundwater abstraction on the regional groundwater system. The model results indicated a significant response of the water table to the external influencing factors. The developed model provides an effective tool for evaluating management options for monitoring future groundwater development in the study area.

  1. Numerical Time-Domain Modeling of Lamb Wave Propagation Using Elastodynamic Finite Integration Technique

    Directory of Open Access Journals (Sweden)

    Hussein Rappel

    2014-01-01

    This paper presents the numerical time-domain modeling of Lamb wave propagation using the elastodynamic finite integration technique (EFIT), as well as its validation against analytical results. The Lamb wave method is a long-range inspection technique which is considered to have a unique future in the field of structural health monitoring. One of the main problems facing the Lamb wave method is how to choose the most appropriate frequency to generate waves for adequate transmission, capable of properly propagating in the material, interacting with defects/damage, and being received in good condition. Modern simulation tools based on numerical methods such as the finite integration technique (FIT), the finite element method (FEM), and the boundary element method (BEM) may be used for modeling. In this paper, two sets of simulations are performed. In the first set, group velocities of Lamb waves in a steel plate are obtained numerically, and the results are then compared with analytical results to validate the simulation. In the second set, EFIT is employed to study the interaction of the fundamental symmetric mode with a surface-breaking defect.

  2. Efficiency assessment of runoff harvesting techniques using a 3D coupled surface-subsurface hydrological model

    International Nuclear Information System (INIS)

    Verbist, K.; Cronelis, W. M.; McLaren, R.; Gabriels, D.; Soto, G.

    2009-01-01

    In arid and semi-arid zones runoff harvesting techniques are often applied to increase the water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Both in literature and in the field, a large variety of runoff collecting systems are found, as well as large variations in design and dimensions. Therefore, detailed measurements were performed on a semi-arid slope in central Chile to allow identification of the effect of a simple water harvesting technique on soil water availability. For this purpose, twenty two TDR-probes were installed and were monitored continuously during and after a simulated rainfall event. These data were used to calibrate the 3D distributed flow model HydroGeoSphere, to assess the runoff components and soil water retention as influenced by the water harvesting technique, both under simulated and natural rainfall conditions. (Author) 6 refs.

  3. Proposal of a congestion control technique in LAN networks using an econometric model ARIMA

    Directory of Open Access Journals (Sweden)

    Joaquín F Sánchez

    2017-01-01

    Hasty software development can produce immediate implementations with source code that is unnecessarily complex and hardly readable. These small kinds of software decay generate a technical debt that could be big enough to seriously affect future maintenance activities. This work presents an analysis technique for identifying architectural technical debt related to non-uniformity of naming patterns; the technique is based on term frequency over package hierarchies. The proposal has been evaluated on projects of two popular organizations, Apache and Eclipse. The results have shown that most of the projects have frequent occurrences of the proposed naming patterns, and that using a graph model and aggregated data could enable the elaboration of simple queries for debt identification. The technique has features that favor its applicability to emergent architectures and agile software development.

  4. Analytical Modelling of the Effects of Different Gas Turbine Cooling Techniques on Engine Performance

    Science.gov (United States)

    Uysal, Selcuk Can

    In this research, MATLAB Simulink was used to develop a cooled engine model for industrial gas turbines and aero-engines. The model consists of an uncooled on-design, mean-line turbomachinery design and a cooled off-design analysis in order to evaluate the engine performance parameters using operating conditions, polytropic efficiencies, material information and cooling system details. The cooling analysis algorithm involves a second-law analysis to calculate losses from the cooling technique applied. The model is used in a sensitivity analysis that evaluates the impacts of variations in metal Biot number, thermal barrier coating Biot number, film cooling effectiveness, internal cooling effectiveness and maximum allowable blade temperature on the main engine performance parameters of aero and industrial gas turbine engines. The model is subsequently used to analyze the relative performance impact of employing Anti-Vortex Film Cooling holes (AVH) by means of data obtained for these holes via Detached Eddy Simulation CFD techniques valid for engine-like turbulence intensity conditions. Cooled blade configurations with AVH and other different external cooling techniques were used in a performance comparison study. (Abstract shortened by ProQuest.)

  5. Numerical modelling of radon-222 entry into houses: An outline of techniques and results

    DEFF Research Database (Denmark)

    Andersen, C.E.

    2001-01-01

    Numerical modelling is a powerful tool for studies of soil gas and radon-222 entry into houses. It is the purpose of this paper to review some main techniques and results. In the past, modelling has focused on Darcy flow of soil gas (driven by indoor–outdoor pressure differences) and combined......, fractures, moisture, non-uniform soil temperature, non-Darcy flow of gas, and flow caused by changes in the atmospheric pressure. Numerical models can be used to estimate the importance of specific factors for radon entry. Models are also helpful when results obtained in special laboratory or test structure...... experiments need to be extrapolated to more general situations (e.g. to real houses or even to other soil–gas pollutants). Finally, models provide a cost-effective test bench for improved designs of radon prevention systems. The paper includes a summary of transport equations and boundary conditions...
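
    A minimal illustration of the Darcy-flow driver mentioned above (not the paper's numerical model): the pressure-driven soil-gas flux follows q = (k / mu) * dp / L. The permeability, viscosity and pressure values below are typical textbook figures, not data from the paper.

```python
def darcy_soil_gas_flux(k, mu, dp, length):
    """Darcy flux q = (k / mu) * (dp / L) for pressure-driven soil-gas flow.

    k: soil gas permeability [m^2], mu: dynamic viscosity of air [Pa s],
    dp: indoor-outdoor pressure difference [Pa], length: flow path [m].
    Returns the volumetric flux [m^3 / (m^2 s)].
    """
    return (k / mu) * dp / length

# Textbook-style values: sandy soil k ~ 1e-11 m^2, air mu ~ 1.8e-5 Pa s,
# a 4 Pa depressurisation acting over a 1 m path.
print(darcy_soil_gas_flux(1e-11, 1.8e-5, 4.0, 1.0))  # ~2.2e-6 m/s
```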

  6. A stochastic delay model for pricing debt and equity: Numerical techniques and applications

    Science.gov (United States)

    Tambue, Antoine; Kemajou Brown, Elisabeth; Mohammed, Salah

    2015-01-01

    Delayed nonlinear models for pricing corporate liabilities and European options were recently developed. Using self-financed strategy and duplication we were able to derive a Random Partial Differential Equation (RPDE) whose solutions describe the evolution of debt and equity values of a corporate in the last delay period interval in the accompanied paper (Kemajou et al., 2012) [14]. In this paper, we provide robust numerical techniques to solve the delayed nonlinear model for the corporate value, along with the corresponding RPDEs modeling the debt and equity values of the corporate. Using financial data from some firms, we forecast and compare numerical solutions from both the nonlinear delayed model and classical Merton model with the real corporate data. From this comparison, it comes up that in corporate finance the past dependence of the firm value process may be an important feature and therefore should not be ignored.
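
    The classical Merton model used as the benchmark in the comparison treats equity as a European call on firm value. A minimal sketch of that benchmark (not the delayed model or its RPDE solver) follows; the numerical inputs are invented.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_equity_debt(V, D, r, sigma, T):
    """Classical Merton model: equity = European call on firm value V
    with strike D (face value of debt maturing at T); debt = V - equity."""
    d1 = (log(V / D) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    equity = V * norm_cdf(d1) - D * exp(-r * T) * norm_cdf(d2)
    return equity, V - equity

# Invented inputs: firm value 120, debt face value 100, r = 3%,
# asset volatility 25%, one-year debt maturity.
print(merton_equity_debt(120.0, 100.0, 0.03, 0.25, 1.0))
```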

  7. New Diagnostic, Launch and Model Control Techniques in the NASA Ames HFFAF Ballistic Range

    Science.gov (United States)

    Bogdanoff, David W.

    2012-01-01

    This report presents new diagnostic, launch and model control techniques used in the NASA Ames HFFAF ballistic range. High speed movies were used to view the sabot separation process and the passage of the model through the model splap paper. Cavities in the rear of the sabot, to catch the muzzle blast of the gun, were used to control sabot finger separation angles and distances. Inserts were installed in the powder chamber to greatly reduce the ullage volume (empty space) in the chamber. This resulted in much more complete and repeatable combustion of the powder and hence, in much more repeatable muzzle velocities. Sheets of paper or cardstock, impacting one half of the model, were used to control the amplitudes of the model pitch oscillations.

  8. Forecasting performance of three automated modelling techniques during the economic crisis 2007-2009

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this work we consider forecasting macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feedforward autoregressive neural network models. What makes these models interesting in the present context is that they form a class of universal approximators and may be expected to work well during exceptional periods such as major economic crises. These models are often difficult to estimate, and we follow the idea of White (2006) to transform the specification and nonlinear estimation problem into a linear model selection and estimation...... during the economic crisis 2007–2009. Forecast accuracy is measured by the root mean square forecast error. Hypothesis testing is also used to compare the performance of the different techniques with each other.
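
    The root mean square forecast error used to rank the techniques is straightforward to compute; a minimal sketch with invented series follows.

```python
import numpy as np

def rmsfe(actual, forecast):
    """Root mean square forecast error over a forecast evaluation sample."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return np.sqrt(np.mean((a - f) ** 2))

# Invented actuals and forecasts, for illustration only.
print(rmsfe([1.2, -0.4, 0.9], [1.0, -0.1, 1.1]))  # ~0.24
```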

  9. Limitations in paleomagnetic data and modelling techniques and their impact on Holocene geomagnetic field models

    DEFF Research Database (Denmark)

    Panovska, S.; Korte, M.; Finlay, Chris

    2015-01-01

    Characterization of geomagnetic field behaviour on timescales of centuries to millennia is necessary to understand the mechanisms that sustain the geodynamo and drive its evolution. As Holocene paleomagnetic and archeomagnetic data have become more abundant, strategies for regularized inversion of modern field data have been adapted to produce numerous time-varying global field models. We evaluate the effectiveness of several approaches to inversion and data handling, by assessing both global and regional properties of the resulting models. Global Holocene field models cannot resolve Southern...... in individual archeomagnetic data so that these data or models derived from them can be used for reliable initial relative paleointensity calibration and declination orientation in sediments. More work will be needed to assess whether co-estimation or an iterative approach to inversion is more efficient overall......

  10. Development of pathological anthropomorphic models using 3D modelling techniques for numerical dosimetry

    International Nuclear Information System (INIS)

    Costa, Kleber Souza Silva; Barbosa, Antonio Konrado de Santana; Vieira, Jose Wilson; Lima, Fernando Roberto de Andrade

    2011-01-01

    Computational exposure models can be used to estimate the dose absorbed by the human body in a series of situations such as X-ray exams for diagnosis, accidents and medical treatments. These models are fundamentally composed of an anthropomorphic simulator (phantom), an algorithm that simulates a radioactive source and a Monte Carlo code. The accuracy of the data obtained in the simulation is strongly connected to how adequately the simulation represents the real situation. The phantoms are one of the key factors under the researcher's control. They are generally developed in the supine position and their anatomy is standardized using data compiled by international institutions such as the ICRP or ICRU. Several pathologies modify the structure of organs and body tissues. In order to measure how significant these alterations are, an anthropomorphic model representing a mastectomized patient was developed for this study. This model was developed using the voxel phantom FASH and then coupled with the EGSnrc Monte Carlo code

  11. Evaluation of 3-dimensional superimposition techniques on various skeletal structures of the head using surface models.

    Directory of Open Access Journals (Sweden)

    Nikolaos Gkantidis

    Full Text Available To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data, transformed to triangulated surface data. Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses. There was no difference among operators or between time points on the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D … 0.05), the detected structural changes differed significantly between different techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.

  12. Evaluation of 3-dimensional superimposition techniques on various skeletal structures of the head using surface models.

    Science.gov (United States)

    Gkantidis, Nikolaos; Schauseil, Michael; Pazera, Pawel; Zorkun, Berna; Katsaros, Christos; Ludwig, Björn

    2015-01-01

    To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data, transformed to triangulated surface data. Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses. There was no difference among operators or between time points on the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D … 0.05), the detected structural changes differed significantly between different techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.
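
    The distance metric D between superimposed surface models can be illustrated with a nearest-neighbour computation between vertex sets. The sketch below is a generic stand-in (the paper's software and exact metric are not reproduced here); the point clouds are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(points_a, points_b):
    """Mean nearest-neighbour distance from vertex set A to vertex set B,
    a simple stand-in for a distance D scored on form-stable regions."""
    tree = cKDTree(points_b)
    dists, _ = tree.query(points_a)  # distance of each A vertex to nearest B vertex
    return dists.mean()

# Synthetic point clouds standing in for two superimposed surface models.
rng = np.random.default_rng(0)
a = rng.random((1000, 3))
b = a + rng.normal(scale=0.01, size=a.shape)  # slightly perturbed copy
print(mean_surface_distance(a, b))
```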

  13. High frequency magnetic field technique: mathematical modelling and development of a full scale water fraction meter

    Energy Technology Data Exchange (ETDEWEB)

    Cimpan, Emil

    2004-09-15

    This work is concerned with the development of a new on-line measuring technique to be used in measurements of the water concentration in a two-component oil/water or three-component (i.e. multiphase) oil/water/gas flow. The technique is based on using non-intrusive coil detectors, and experiments were performed both statically (medium at rest) and dynamically (medium flowing through a flow rig). The various coil detectors were constructed with either one or two coils, and specially designed electronics were used. The medium was composed of air, machine oil, and water having different conductivity values, i.e. seawater and salt water with various conductivities (salt concentrations) such as 1 S/m, 4.9 S/m and 9.3 S/m. The experimental measurements done with the different mixtures were further used to mathematically model the physical principle used in the technique. This new technique is based on measuring the coil impedance and signal frequency at the self-resonance frequency of the coil to determine the water concentration in the mix. By using numerous coils it was found, experimentally, that generally both the coil impedance and the self-resonance frequency of the coil decreased as the medium conductivity increased. Both the impedance and the self-resonance frequency of the coil depended on the medium loss due to the induced eddy currents within the conductive media in the mixture, i.e. water. In order to detect relatively low values of the medium loss, the self-resonance frequency of the coil and also of the magnetic field penetrating the media should be relatively high (within the MHz range and higher). Therefore, the technique was called and is referred to throughout the entire work as the high frequency magnetic field technique (HFMFT). To practically use the HFMFT, it was necessary to circumscribe an analytical frame for this technique. This was done by working out a mathematical model that relates the impedance and the self-resonance frequency of the coil to the
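
    The frequency dependence of the medium loss can be related to the skin depth, delta = sqrt(2 / (omega * mu * sigma)), which indicates how far a high-frequency magnetic field penetrates the conductive water. A minimal sketch follows, using the conductivities from the experiments but an assumed 10 MHz operating frequency.

```python
from math import pi, sqrt

MU_0 = 4e-7 * pi  # vacuum permeability [H/m]

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Skin depth delta = sqrt(2 / (omega * mu * sigma)) of a conductor,
    i.e. how far a high-frequency magnetic field penetrates a medium of
    conductivity sigma [S/m]; relevant to eddy-current loss in the water."""
    omega = 2.0 * pi * freq_hz
    return sqrt(2.0 / (omega * mu_r * MU_0 * sigma))

# Conductivities used in the experiments, at an assumed 10 MHz frequency.
for sigma in (1.0, 4.9, 9.3):
    print(sigma, skin_depth(10e6, sigma))  # ~0.16 m down to ~0.05 m
```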

  14. Modeling of an Aged Porous Silicon Humidity Sensor Using ANN Technique

    Directory of Open Access Journals (Sweden)

    Tarikul ISLAM

    2006-10-01

    Full Text Available Porous silicon (PS) sensors based on a capacitive technique used for measuring relative humidity have the advantages of low cost, ease of fabrication with controlled structure, and CMOS compatibility. But the response of the sensor is a nonlinear function of humidity and suffers from errors due to aging and stability. One adaptive linear (ADALINE) ANN model has been developed to model the behavior of the sensor with a view to estimating these errors and compensating for them. The response of the sensor is represented by a third-order polynomial basis function whose coefficients are determined by the ANN technique. The drift in sensor output due to aging of the PS layer is also modeled by adapting the weights of the polynomial function. ANN-based modeling is found to be more suitable than conventional physical modeling of the PS humidity sensor in a changing environment and under drift due to aging. It helps online estimation of nonlinearity as well as monitoring of faults of the PS humidity sensor using the coefficients of the model.
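
    An ADALINE is a linear neuron trained with the Widrow-Hoff (LMS) rule; applied to a third-order polynomial basis it yields the coefficient-adaptation scheme described above. The sketch below is a generic illustration with synthetic data, not the authors' calibrated sensor model.

```python
import numpy as np

def adaline_poly3(h, y, lr=0.01, epochs=2000):
    """Widrow-Hoff (LMS) estimation of third-order polynomial coefficients,
    i.e. an ADALINE with inputs [1, h, h^2, h^3].

    h: humidity samples scaled to roughly [0, 1]; y: sensor output.
    Returns w such that y ~ w0 + w1*h + w2*h^2 + w3*h^3.
    """
    X = np.column_stack([np.ones_like(h), h, h ** 2, h ** 3])
    w = np.zeros(4)
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            err = y_i - x_i @ w   # linear neuron output vs. target
            w += lr * err * x_i   # LMS weight update
    return w

# Synthetic data: a noisy cubic response.
h = np.linspace(0.1, 0.9, 50)
y = 0.2 + 1.5 * h - 0.8 * h ** 2 + 0.3 * h ** 3 + np.random.normal(0, 0.01, h.size)
print(adaline_poly3(h, y))
```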

  15. Trimming a hazard logic tree with a new model-order-reduction technique

    Science.gov (United States)

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
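
    The tornado-diagram approach ranks logic-tree branches by the swing in output when each parameter is varied alone over its range, holding the others at baseline. A minimal generic sketch follows (toy model and ranges, not UCERF3 branches).

```python
def tornado_swings(model, baseline, ranges):
    """Swing of a scalar model output as each input varies alone.

    model: callable on a dict of parameter values; baseline: central values;
    ranges: name -> (low, high). Returns swings sorted largest first, the
    ordering used to decide which branches must vary and which can be fixed.
    """
    swings = {}
    for name, (lo, hi) in ranges.items():
        out_lo = model(dict(baseline, **{name: lo}))
        out_hi = model(dict(baseline, **{name: hi}))
        swings[name] = abs(out_hi - out_lo)
    return sorted(swings.items(), key=lambda kv: -kv[1])

# Toy model and parameter ranges, for illustration only.
f = lambda p: 2.0 * p["a"] + p["b"] ** 2
print(tornado_swings(f, {"a": 1.0, "b": 1.0}, {"a": (0.5, 1.5), "b": (0.8, 1.2)}))
```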

  16. Soil temperature modeling at different depths using neuro-fuzzy, neural network, and genetic programming techniques

    Science.gov (United States)

    Kisi, Ozgur; Sanikhani, Hadi; Cobaner, Murat

    2017-08-01

    The applicability of artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS), and genetic programming (GP) techniques in estimating soil temperatures (ST) at different depths is investigated in this study. Weather data from two stations, Mersin and Adana, Turkey, were used as inputs to the applied models in order to model monthly STs. The first part of the study focused on a comparison of the ANN, ANFIS, and GP models in modeling ST of the two stations at depths of 10, 50, and 100 cm. GP was found to perform better than the ANN and ANFIS-SC in estimating monthly ST. The effect of periodicity (month of the year) on the models' accuracy was also investigated. Including the periodicity component in the models' inputs considerably increased their accuracies. The root mean square error (RMSE) of the ANN models decreased by 34% and 27% for the depths of 10 and 100 cm, respectively, when the periodicity input was added. In the second part of the study, the accuracies of the ANN, ANFIS, and GP models were compared in estimating ST of Mersin Station using the climatic data of Adana Station. The ANN models generally performed better than the ANFIS-SC and GP in modeling ST of Mersin Station without local climatic inputs.

  17. Comparing univariate techniques for tender price index forecasting: Box-Jenkins and neural network model

    Directory of Open Access Journals (Sweden)

    Olalekan Oshodi

    2017-09-01

    Full Text Available The poor performance of projects is a recurring event in the construction sector. Information gleaned from the literature shows that uncertainty in project cost is one of the significant causes of this problem. A reliable forecast of construction cost is useful in mitigating the adverse effect of its fluctuation; however, the availability of data for the development of multivariate models for construction cost forecasting remains a challenge. The study investigates the reliability of using univariate models for tender price index forecasting. Box-Jenkins and neural network are the modelling techniques applied in this study. The results show that the neural network model outperforms the Box-Jenkins model in terms of accuracy. In addition, the neural network model provides a reliable forecast of the tender price index over a period of 12 quarters ahead. The limitations of using the univariate models are elaborated. The developed neural network model can be used by stakeholders as a tool for predicting movements in the tender price index. In addition, the univariate models developed in the present study are particularly useful in countries where limited data reduces the possibility of applying multivariate models.
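
    A Box-Jenkins univariate forecast of the kind compared in the study can be sketched with the statsmodels ARIMA implementation; the series below is synthetic, and the (1, 1, 1) order is an arbitrary placeholder for an order identified through the usual ACF/PACF diagnostics.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic quarterly index standing in for a published tender price index.
rng = np.random.default_rng(0)
tpi = 100 + np.cumsum(rng.normal(0.5, 1.0, size=80))

# Box-Jenkins model; in practice the (p, d, q) order is identified from
# diagnostics and information criteria rather than fixed a priori.
fit = ARIMA(tpi, order=(1, 1, 1)).fit()
print(fit.forecast(steps=12))  # 12 quarters ahead, as in the study
```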

  18. Parameter estimation techniques and uncertainty in ground water flow model predictions

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Davis, P.A.

    1990-01-01

    Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs

  19. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Naets, Frank

    2018-01-01

    ...... performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during...... the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time-consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis......-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.

  20. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two ...... These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces best results on short forecast horizons that are typical for online applications.

  1. Modeling and Control PV-Wind Hybrid System Based On Fuzzy Logic Control Technique

    Directory of Open Access Journals (Sweden)

    Doaa M. Atia

    2012-09-01

    Full Text Available As energy demands around the world increase, the need for renewable energy sources that will not harm the environment also increases. The overall objective of renewable energy systems is to obtain electricity that is cost-competitive and even advantageous with respect to other energy sources. The optimal design of a renewable energy system can significantly improve the economical and technical performance of the power supply. This paper presents power management control using a fuzzy logic control technique. Also, a complete mathematical model and a MATLAB SIMULINK model of the electrical part of the proposed aquaculture system are implemented to track the system performance. The simulation results show the feasibility of the control technique.
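
    As a generic illustration of the fuzzy logic idea (not the paper's controller or its rule base), the sketch below evaluates two triangular membership functions and defuzzifies by weighted average; the variable, rules and numbers are all invented.

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def charge_command(soc):
    """Toy two-rule Mamdani-style controller with weighted-average
    defuzzification: IF soc is LOW THEN charge hard (1.0);
    IF soc is HIGH THEN trickle charge (0.1)."""
    low = tri(soc, -0.1, 0.0, 0.6)   # membership of "state of charge is low"
    high = tri(soc, 0.4, 1.0, 1.1)   # membership of "state of charge is high"
    num = low * 1.0 + high * 0.1
    den = low + high
    return num / den if den else 0.0

print(charge_command(0.3))  # ~1.0: fully in the "charge hard" region
```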

  2. Data-driven remaining useful life prognosis techniques stochastic models, methods and applications

    CERN Document Server

    Si, Xiao-Sheng; Hu, Chang-Hua

    2017-01-01

    This book introduces data-driven remaining useful life prognosis techniques, and shows how to utilize the condition monitoring data to predict the remaining useful life of stochastic degrading systems and to schedule maintenance and logistics plans. It is also the first book that describes the basic data-driven remaining useful life prognosis theory systematically and in detail. The emphasis of the book is on the stochastic models, methods and applications employed in remaining useful life prognosis. It includes a wealth of degradation monitoring experiment data, practical prognosis methods for remaining useful life in various cases, and a series of applications incorporated into prognostic information in decision-making, such as maintenance-related decisions and ordering spare parts. It also highlights the latest advances in data-driven remaining useful life prognosis techniques, especially in the contexts of adaptive prognosis for linear stochastic degrading systems, nonlinear degradation modeling based pro...

  3. Review on discretization techniques for complex fluid flow models: past, present and future

    Science.gov (United States)

    Ammar, A.; Chinesta, F.; Cueto, E.; Phillips, T.

    2007-04-01

    In the last decades several new and advanced numerical strategies have been proposed for solving the flow models of complex fluids. Most of them were based on the classical discretization techniques (finite elements, finite volumes, finite differences, spectral methods, meshless approaches…) applied to the macroscopic descriptions of such flows (differential and integral models), where special advances were introduced to account for the mixed character of the associated variational formulations as well as to stabilize the advection terms in the motion and constitutive equations. Recently, micro-macro approaches are more and more applied. They allow closure relations to be avoided, and the microscopic physics are better described. These models are based on kinetic theory and their main difficulty concerns the curse of dimensionality. The microstructure conformation is defined in a multidimensional space where standard discretization techniques fail. To overcome this difficulty, stochastic techniques were introduced (inspired by Monte Carlo techniques), but the control of the statistical noise and the low convergence order are some of their main drawbacks. Other new strategies have been proposed recently, such as the ones based on sparse grids and separated representations, which allow circumventing the aforementioned difficulties. However, the models are more and more focused on the microscopic scale, where they are formulated in terms of Brownian or molecular dynamics. They allow describing the molecular dynamics very precisely, but the computing time remains their main drawback. Thus, in the next years new efforts must be devoted to reducing the computing time involved in microscopic simulations and to defining bridges between the different description scales.

  4. Modelling in Pinnacle for extended source-patient distance and verification with EBT2 film technique

    International Nuclear Information System (INIS)

    Perucha Ortega, M.; Luis simon, J.; Rodriguez Alarcon, C.; Baeza Trujillo, M.; Sanchez Carmona, G.; Vicente Granado, D.; Gutierrez Ramos, S.; Herrador Cordoba, M.

    2013-01-01

    The objective of this work is to model in the Pinnacle treatment planning system the geometry used in our centre for the total body irradiation technique, which consists of irradiating the patient, whose midline is 366 cm from the source, in the lateral decubitus position, with two anteroposterior 40 x 40 cm 2 fields, with the collimator rotated 45 degrees, and with a 1 cm thick methacrylate screen interposed 29 cm in front of the midline. (Author)

  5. Configuring Simulation Models Using CAD Techniques: A New Approach to Warehouse Design

    OpenAIRE

    Brito, A. E. S. C.

    1992-01-01

    The research reported in this thesis is related to the development and use of software tools for supporting warehouse design and management. Computer Aided Design and Simulation techniques are used to develop a software system that forms the basis of a Decision Support System for warehouse design. The current position of simulation software is reviewed. It is investigated how appropriate current simulation software is for warehouse modelling. Special attention is given to Vi...

  6. Using an inverse modelling approach to evaluate the water retention in a simple water harvesting technique

    Directory of Open Access Journals (Sweden)

    K. Verbist

    2009-10-01

    Full Text Available In arid and semi-arid zones, runoff harvesting techniques are often applied to increase the water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Nevertheless, few efforts have been made to quantify the water harvesting processes of these techniques and to evaluate their efficiency. In this study, a combination of detailed field measurements and modelling with the HYDRUS-2D software package was used to visualize the effect of an infiltration trench on the soil water content of a bare slope in northern Chile. Rainfall simulations were combined with high spatial and temporal resolution water content monitoring in order to construct a useful dataset for inverse modelling purposes. Initial estimates of model parameters were provided by detailed infiltration and soil water retention measurements. Four different measurement techniques were used to determine the saturated hydraulic conductivity (Ksat) independently. The tension infiltrometer measurements proved a good estimator of the Ksat value and a proxy for those measured under simulated rainfall, whereas the pressure and constant head well infiltrometer measurements showed larger variability. Six different parameter optimization functions were tested as a combination of soil water content, water retention and cumulative infiltration data. Infiltration data alone proved insufficient to obtain high model accuracy, due to large scatter in the data set, and water content data were needed to obtain optimized effective parameter sets with small confidence intervals. Correlation between the observed soil water content and the simulated values was as high as R2=0.93 for the ten selected observation points used in the model calibration phase, with the overall correlation for the 22 observation points equal to 0.85. The model results indicate that the infiltration trench has a
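
    A much-reduced version of the inverse-estimation step can be sketched as a least-squares fit of the van Genuchten retention function to retention data. The measurements below are invented, and the study itself combined water content, retention and infiltration data in HYDRUS-2D's inverse mode rather than fitting a single curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """van Genuchten retention: water content as a function of suction h > 0."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Invented retention measurements (suction [cm], volumetric water content [-]).
h_obs = np.array([1.0, 10.0, 31.6, 100.0, 316.0, 1000.0])
theta_obs = np.array([0.42, 0.40, 0.33, 0.24, 0.17, 0.12])

popt, _ = curve_fit(van_genuchten, h_obs, theta_obs,
                    p0=(0.05, 0.43, 0.02, 1.5),
                    bounds=([0.0, 0.2, 1e-4, 1.05], [0.2, 0.6, 1.0, 4.0]))
print(dict(zip(["theta_r", "theta_s", "alpha", "n"], popt)))
```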

  7. A finite element model updating technique for adjustment of parameters near boundaries

    Science.gov (United States)

    Gwinn, Allen Fort, Jr.

    Even though there have been many advances in research related to methods of updating finite element models based on measured normal mode vibration characteristics, there is yet to be a widely accepted method that works reliably with a wide range of problems. This dissertation focuses on the specific class of problems having to do with changes in stiffness near the clamped boundary of plate structures. This class of problems is especially important as it relates to the performance of turbine engine blades, where a change in stiffness at the base of the blade can be indicative of structural damage. The method that is presented herein is a new technique for resolving the differences between the physical structure and the finite element model. It is a semi-iterative technique that incorporates a "physical expansion" of the measured eigenvectors along with appropriate scaling of these expanded eigenvectors into an iterative loop that uses Engel's model modification method to then calculate adjusted stiffness parameters for the finite element model. Three example problems are presented that use eigenvalues and mass-normalized eigenvectors that have been calculated from experimentally obtained accelerometer readings. The test articles that were used were all thin plates with one edge fully clamped. They each had a cantilevered length of 8.5 inches and a width of 4 inches. The three plates differed from one another in thickness from 0.100 inches to 0.188 inches. These dimensions were selected in order to approximate a gas turbine engine blade. The semi-iterative modification technique is shown to do an excellent job of calculating the necessary adjustments to the finite element model so that the analytically determined eigenvalues and eigenvectors for the adjusted model match the corresponding values from the experimental data with good agreement. Furthermore, the semi-iterative method is quite robust. For the examples presented here, the method consistently converged

  8. Optimization models and techniques for implementation and pricing of electricity markets

    Science.gov (United States)

    Madrigal Martinez, Marcelino

    Vertically integrated electric power systems extensively use optimization models and solution techniques to guide their optimal operation and planning. The advent of electric power system restructuring has created the need for new optimization tools and for revision of the ones inherited from the vertical integration era for the market environment. This thesis presents further developments in the use of optimization models and techniques for the implementation and pricing of primary electricity markets. New models, solution approaches, and price-setting alternatives are proposed. Three different modeling groups are studied. The first modeling group considers simplified continuous and discrete models for power pool auctions driven by central cost minimization. The direct solution of the dual problems, and the use of a Branch-and-Bound algorithm to solve the primal, allows identification of the effects of disequilibrium and of different price-setting alternatives on the existence of multiple solutions. It is shown that particular pricing rules worsen the conflict of interest that arises when multiple solutions exist under disequilibrium. A price-setting alternative based on dual variables is shown to diminish such conflict. The second modeling group considers the unit commitment problem. An interior-point/cutting-plane method is proposed for the solution of the dual problem. The new method has better convergence characteristics and does not suffer from the parameter-tuning drawback of previous methods. The robustness characteristics of the interior-point/cutting-plane method, combined with a non-uniform price-setting alternative, show that the conflict of interest is diminished when multiple near-optimal solutions exist. The non-uniform price-setting alternative is compared to a classic average pricing rule. The last modeling group concerns a new type of linear network-constrained clearing system models for daily markets for power and spinning reserve. A new model and
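
    The simplified continuous pool-auction model in the first group can be illustrated by single-period merit-order clearing, where the uniform price is set by the marginal (last accepted) offer, one of the price-setting rules discussed. The sketch below uses invented offers and demand.

```python
def clear_pool(offers, demand):
    """Single-period merit-order clearing of a cost-minimising power pool.

    offers: list of (price [$/MWh], quantity [MW]); demand: load [MW].
    Returns the dispatch per offer and the uniform marginal clearing price.
    """
    dispatch, price, remaining = [], 0.0, demand
    for p, q in sorted(offers):       # cheapest energy first
        take = min(q, remaining)
        dispatch.append((p, take))
        if take > 0:
            price = p                 # last accepted offer sets the price
        remaining -= take
    if remaining > 1e-9:
        raise ValueError("insufficient offers to meet demand")
    return dispatch, price

# Invented offers and demand.
print(clear_pool([(12.0, 50.0), (18.0, 40.0), (25.0, 60.0)], demand=75.0))
```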

  9. Synopsis of Soft Computing Techniques used in Quadrotor UAV Modelling and Control

    Directory of Open Access Journals (Sweden)

    Attila Nemes

    2015-01-01

    Full Text Available The aim of this article is to give an introduction to quadrotor systems with an overview of soft computing techniques used in quadrotor unmanned aerial vehicle (UAV) control, modelling, object following and collision avoidance. The quadrotor system basics, its structure and dynamic model definitions are recapitulated. Further on, a synopsis is given of previously proposed methods, with results evaluated and conclusions drawn by the authors of the referenced publications. The result of this article is a summary of multiple papers on fuzzy logic techniques used in position and altitude control systems for UAVs. An overview of fuzzy-system-based visual servoing for object tracking and collision avoidance is also given, together with a brief review of a study on the efficiency of quadrotor UAV control techniques. The conclusion is that though soft computing methods are widely used with good results, there is still room for much research into finding more efficient soft computing tools for simple modelling, robust dynamic control and fast collision avoidance in quadrotor UAV control.

  10. Modeling and numerical techniques for high-speed digital simulation of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Wulff, W.; Cheng, H.S.; Mallen, A.N.

    1987-01-01

    Conventional computing methods are contrasted with newly developed high-speed and low-cost computing techniques for simulating normal and accidental transients in nuclear power plants. Six principles are formulated for cost-effective high-fidelity simulation with emphasis on modeling of transient two-phase flow coolant dynamics in nuclear reactors. Available computing architectures are characterized. It is shown that the combination of the newly developed modeling and computing principles with the use of existing special-purpose peripheral processors is capable of achieving low-cost and high-speed simulation with high-fidelity and outstanding user convenience, suitable for detailed reactor plant response analyses.

  11. Model-driven engineering of information systems principles, techniques, and practice

    CERN Document Server

    Cretu, Liviu Gabriel

    2015-01-01

    Model-driven engineering (MDE) is the automatic production of software from simplified models of structure and functionality. It mainly involves the automation of routine and technologically complex programming tasks, thus allowing developers to focus on the true value-adding functionality that the system needs to deliver. This book serves as an overview of some of the core topics in MDE. The volume is broken into two sections offering a selection of papers that help the reader not only understand the MDE principles and techniques, but also learn from practical examples. Also covered are the

  12. System-Level Validation High-Level Modeling and Directed Test Generation Techniques

    CERN Document Server

    Chen, Mingsong; Koo, Heon-Mo; Mishra, Prabhat

    2013-01-01

    This book covers state-of-the art techniques for high-level modeling and validation of complex hardware/software systems, including those with multicore architectures.  Readers will learn to avoid time-consuming and error-prone validation from the comprehensive coverage of system-level validation, including high-level modeling of designs and faults, automated generation of directed tests, and efficient validation methodology using directed tests and assertions.  The methodologies described in this book will help designers to improve the quality of their validation, performing as much validation as possible in the early stages of the design, while reducing the overall validation effort and cost.

  13. Network modeling and analysis technique for the evaluation of nuclear safeguards systems effectiveness

    International Nuclear Information System (INIS)

    Grant, F.H. III; Miner, R.J.; Engi, D.

    1978-01-01

    Nuclear safeguards systems are concerned with the physical protection and control of nuclear materials. The Safeguards Network Analysis Procedure (SNAP) provides a convenient and standard analysis methodology for the evaluation of safeguards system effectiveness. This is achieved through a standard set of symbols which characterize the various elements of safeguards systems and an analysis program to execute simulation models built using the SNAP symbology. The reports provided by the SNAP simulation program enable analysts to evaluate existing sites as well as alternative design possibilities. This paper describes the SNAP modeling technique and provides an example illustrating its use

  14. Network modeling and analysis technique for the evaluation of nuclear safeguards systems effectiveness

    International Nuclear Information System (INIS)

    Grant, F.H. III; Miner, R.J.; Engi, D.

    1979-02-01

    Nuclear safeguards systems are concerned with the physical protection and control of nuclear materials. The Safeguards Network Analysis Procedure (SNAP) provides a convenient and standard analysis methodology for the evaluation of safeguards system effectiveness. This is achieved through a standard set of symbols which characterize the various elements of safeguards systems and an analysis program to execute simulation models built using the SNAP symbology. The reports provided by the SNAP simulation program enable analysts to evaluate existing sites as well as alternative design possibilities. This paper describes the SNAP modeling technique and provides an example illustrating its use

  15. Modeling and numerical techniques for high-speed digital simulation of nuclear power plants

    International Nuclear Information System (INIS)

    Conventional computing methods are contrasted with newly developed high-speed and low-cost computing techniques for simulating normal and accidental transients in nuclear power plants. Six principles are formulated for cost-effective high-fidelity simulation with emphasis on modeling of transient two-phase flow coolant dynamics in nuclear reactors. Available computing architectures are characterized. It is shown that the combination of the newly developed modeling and computing principles with the use of existing special-purpose peripheral processors is capable of achieving low-cost and high-speed simulation with high-fidelity and outstanding user convenience, suitable for detailed reactor plant response analyses

  16. The use of anatomical models for learning anesthesia techniques in oral surgery

    Directory of Open Access Journals (Sweden)

    JVS Canellas

    2013-01-01

    Full Text Available Aim: The objective of this work is to present a new collaborative method for teaching the administration of anesthetic blocks in dentistry, with three-dimensional anatomical models used to improve learning and thereby increase safety, reduce anxiety, and improve the performance of students during the administration of anesthesia in patients. Materials and Methods: Three-dimensional (3D) models of skulls were made that reproduced all innervations of the Vth cranial nerve (trigeminal nerve), as well as some blood vessels, glands, and muscles of mastication. For teaching the local anesthetic techniques we prepared pictures and videos of the administration of anesthesia in the models, which were presented to 130 students at two universities in Brazil. With the help of the models the students could follow the path of the nerves to be anesthetized and identify the anatomical points of reference for the correct positioning of the needle in the tissues. After the presentation the students answered a questionnaire aiming to assess the effect of the 3D models on learning. Results: Eighty-eight percent of students rated the material as excellent, 12% as good, 0% as regular, and 0% as bad (unnecessary material). After the presentation, 70% of the students felt confident about being able to achieve the nerve block in patients. Conclusion: When exposed to an appropriate method, students recognized the importance of knowledge of anatomy for learning local anesthetic techniques. This method improved the quality of education and increased patient safety during the first injection.

  17. A Simple Technique For Visualising Three Dimensional Models in Landscape Contexts

    Directory of Open Access Journals (Sweden)

    Stuart Jeffrey

    2001-05-01

    Full Text Available One of the objectives of the Scottish Early Medieval Sculptured Stones (SEMSS) project is to generate accurate three-dimensional models of these monuments using a variety of data capture techniques, from photogrammetry to time-of-flight laser measurement. As the landscape context of these monuments is often considered crucial to their understanding, the model's ultimate presentation to the user should include some level of contextual information. In addition, there are a number of presentation issues that must be considered, such as interactivity, the relationship of reconstructed to non-reconstructed sections, lighting, and suitability for presentation over the WWW. This article discusses the problem of presenting three-dimensional models of monumental stones in their landscape contexts. This problem is discussed in general, but special attention is paid to the difficulty of capturing landscape detail, interactivity, reconstructing landscapes and providing accurate representations of landscapes to the horizon. Comparison is made between 3D modelling packages and Internet-specific presentation formats such as VRML and QTVR. The proposed technique provides some level of interactivity as well as photorealistic landscape representation extended to the horizon, without the need for a complete DEM/DTM, thereby making file sizes manageable and capable of WWW presentation. It also allows the issues outlined to be tackled in a more efficient manner than by using either 3D modelling or QTVR on their own.

  18. Hybrid OPC modeling with SEM contour technique for 10nm node process

    Science.gov (United States)

    Hitomi, Keiichiro; Halle, Scott; Miller, Marshal; Graur, Ioana; Saulnier, Nicole; Dunn, Derren; Okai, Nobuhiro; Hotta, Shoji; Yamaguchi, Atsuko; Komuro, Hitoshi; Ishimoto, Toru; Koshihara, Shunsuke; Hojo, Yutaka

    2014-03-01

    Hybrid OPC modeling is investigated using both CDs from 1D and simple 2D structures and contours extracted from complex 2D structures, which are obtained by a Critical Dimension Scanning Electron Microscope (CD-SEM). Recent studies have addressed some of the key issues needed for the implementation of contour extraction, including an edge detection algorithm consistent with conventional CD measurements, contour averaging and contour alignment. Firstly, pattern contours obtained from CD-SEM images were used to complement traditional site-driven CD metrology for the calibration of OPC models for both metal and contact layers of a 10 nm-node logic device developed at Albany Nano-Tech. The accuracy of the hybrid OPC model was compared with that of the conventional OPC model, which was created with CD data only. Accuracy of the model, defined as total error root-mean-square (RMS), was improved by 23% with the use of hybrid OPC modeling for the contact layer and by 18% for the metal layer. The pattern-specific benefit of hybrid modeling was also examined. Resist shrink correction was applied to contours extracted from CD-SEM images in order to improve the accuracy of the contours, and shrink-corrected contours were used for OPC modeling. The accuracy of the OPC model with shrink correction was compared with that without shrink correction, and total error RMS was decreased by 0.2 nm (12%) with the shrink correction technique. Variation of model accuracy among 8 modeling runs with different model calibration patterns was reduced by applying shrink correction.

  19. Analysis and optimization of a proton exchange membrane fuel cell using modeling techniques

    International Nuclear Information System (INIS)

    Torre Valdés, Ing. Raciel de la; García Parra, MSc. Lázaro Roger; González Rodríguez, MSc. Daniel

    2015-01-01

    This paper proposes a three-dimensional, non-isothermal, steady-state model of a Proton Exchange Membrane Fuel Cell using Computational Fluid Dynamics techniques, specifically ANSYS FLUENT 14.5. Multicomponent diffusion and two-phase flow are considered. The model was compared with published experimental data and with another model. The operating parameters analyzed are reactant pressure and temperature, gas flow direction, gas diffusion layer and catalyst layer porosity, reactant humidification, and oxygen concentration. The model allows optimization of the fuel cell design, taking into consideration the channel dimensions, the channel length and the membrane thickness. Furthermore, fuel cell performance is analyzed with a SPEEK membrane, an alternative electrolyte to Nafion. In order to carry out the membrane material study, it is necessary to modify the expression that describes the electrolyte ionic conductivity. The device performance is found to be highly sensitive to variations in pressure, temperature, reactant humidification and oxygen concentration. (author)

  20. Comparing photo modeling methodologies and techniques: the instance of the Great Temple of Abu Simbel

    Directory of Open Access Journals (Sweden)

    Sergio Di Tondo

    2013-10-01

    Full Text Available Fifty years after the salvage of the Abu Simbel temples, it has been possible to experiment with contemporary photo-modeling tools, starting from the original data of the photogrammetric survey carried out in the 1950s. This prompted a reflection on "image-based" methods and modeling techniques, comparing strict 3D digital photogrammetry with the latest Structure From Motion (SFM) systems. The topographic survey data, the original photogrammetric stereo pairs, the point coordinates and their representation in contour lines made it possible to obtain a model of the monument in its configuration before the temples were moved. The impossibility of carrying out a direct survey led to the use of touristic shots to create SFM models for geometric comparisons.

  1. On the Reliability of Nonlinear Modeling using Enhanced Genetic Programming Techniques

    Science.gov (United States)

    Winkler, S. M.; Affenzeller, M.; Wagner, S.

    The use of genetic programming (GP) in nonlinear system identification enables the automated search for mathematical models that are evolved by an evolutionary process using the principles of selection, crossover and mutation. Due to the stochastic element that is intrinsic to any evolutionary process, GP cannot guarantee the generation of similar or even equal models in each GP process execution; still, if there is a physical model underlying the data being analyzed, then GP is expected to find these structures and produce broadly similar results. In this paper we define a function for measuring the syntactic similarity of mathematical models represented as structure trees; using this similarity function we compare the results produced by GP techniques for a data set representing measurement data of a BMW Diesel engine.
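
    The paper defines its own syntactic similarity function for structure trees; the generic sketch below conveys the idea (symbol match at the node plus recursive comparison of children) with an invented 50/50 weighting, not the authors' exact definition.

```python
def tree_similarity(t1, t2):
    """Syntactic similarity in [0, 1] of expression trees (symbol, children).

    Node symbols are compared directly; children are compared pairwise, and
    unmatched children score zero. The 50/50 weighting is an invented choice.
    """
    (s1, c1), (s2, c2) = t1, t2
    node = 1.0 if s1 == s2 else 0.0
    if not c1 and not c2:
        return node
    pair_scores = [tree_similarity(a, b) for a, b in zip(c1, c2)]
    child = sum(pair_scores) / max(len(c1), len(c2))
    return 0.5 * node + 0.5 * child

# x + x*y versus x + y*y: identical shape, one differing leaf.
x, y = ("x", []), ("y", [])
print(tree_similarity(("+", [x, ("*", [x, y])]),
                      ("+", [x, ("*", [y, y])])))  # 0.9375
```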

  2. Forecasting performances of three automated modelling techniques during the economic crisis 2007-2009

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    2014-01-01

    In this work we consider the forecasting of macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feed-forward autoregressive neural network models. What makes these models interesting in the present context is the fact that they form a class of universal approximators and may be expected to work well during exceptional periods such as major economic crises. Neural network models are often difficult to estimate, and we follow the idea of White (2006) of transforming the specification and nonlinear estimation problem...... Scandinavian ones, and focus on forecasting during the economic crisis 2007–2009. The forecast accuracy is measured using the root mean square forecast error. Hypothesis testing is also used to compare the performances of the different techniques.

  3. Development Model of Basic Technique Skills Training Shot-Put Obrien Style Based Biomechanics Review

    Directory of Open Access Journals (Sweden)

    danang rohmat hidayanto

    2018-03-01

    Full Text Available The background of this research is the unavailability of a learning model for the basic techniques of the O'Brien-style shot put, integrated into a skills programme based on biomechanical analysis, that could be used as a reference for building students' basic technique skills. The purpose of this study is to develop a training model for the basic techniques of the O'Brien-style shot put based on a biomechanical review for beginner levels, including the basic preparation technique, the glide, the final stage, the put, the follow-through, and overall O'Brien-style performance, all arranged in a medium that is easily accessible at any time, by anyone and anywhere, especially in SMK Negeri 1 Kalijambe Sragen. The research method used is a "Research and Development" approach. Preliminary studies show that 43.0% of respondents considered the O'Brien style very important to develop through a biomechanics-based skills training model, and 40.0% of respondents stated that it is important to develop it through biomechanics-based learning media. Therefore, it was deemed necessary to develop learning media for O'Brien-style training skills based on biomechanical analysis. Development of the media started from the design of the storyboard and the script to be used as media. The design of this model is called the draft model. The draft model was reviewed by a multimedia expert and an O'Brien-style expert to establish the product's validity. A total of 78.24% of experts declared the product viable, with some input. In the small-group test with n = 6, a value of 72.2% was obtained, or valid enough to be tested in large groups. In the large-group test with n = 12, a value of 70.83% was obtained, or feasible enough to be tested in the field. In the field test, an experimental group was prepared with treatment according to the media, and a control group with free treatment. From the result of the significance test it can be

  4. Image acquisition and planimetry systems to develop wounding techniques in 3D wound model

    Directory of Open Access Journals (Sweden)

    Kiefer Ann-Kathrin

    2017-09-01

    Full Text Available Wound healing represents a complex biological repair process. Established 2D monolayers and wounding techniques investigate cell migration, but do not represent coordinated multi-cellular systems. We aim to use wound surface area measurements obtained from image acquisition and planimetry systems to establish our wounding technique and in vitro organotypic tissue. These systems will be used in our future wound healing treatment studies to assess the rate of wound closure in response to wound healing treatment with light therapy (photobiomodulation. The image acquisition and planimetry systems were developed, calibrated, and verified to measure wound surface area in vitro. The system consists of a recording system (Sony DSC HX60, 20.4 M Pixel, 1/2.3″ CMOS sensor and calibrated with 1mm scale paper. Macro photography with an optical zoom magnification of 2:1 achieves sufficient resolution to evaluate the 3mm wound size and healing growth. The camera system was leveled with an aluminum construction to ensure constant distance and orientation of the images. The JPG-format images were processed with a planimetry system in MATLAB. Edge detection enables definition of the wounded area. Wound area can be calculated with surface integrals. To separate the wounded area from the background, the image was filtered in several steps. Agar models, injured through several test persons with different levels of experience, were used as pilot data to test the planimetry software. These image acquisition and planimetry systems support the development of our wound healing research. The reproducibility of our wounding technique can be assessed by the variability in initial wound surface area. Also, wound healing treatment effects can be assessed by the change in rate of wound closure. These techniques represent the foundations of our wound model, wounding technique, and analysis systems in our ongoing studies in wound healing and therapy.

  5. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.

    2017-07-06

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches of Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. Particularly, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performances and analyze their impact on model simulations, their estimates are validated by independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), respectively improving the model groundwater estimation errors by 34% and 31% compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
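
    A generic stochastic EnKF analysis step (perturbed observations) of the kind compared in the study can be sketched as follows; the state dimension, observation operator and covariances are invented, and W3RA itself is not involved. The deterministic variants tested in the paper (SQRA, EnSRF) differ in avoiding the observation perturbation used here.

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble: (n_state, n_ens) forecast states; y_obs: (n_obs,) observations;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs covariance.
    """
    n_ens = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)     # state anomalies
    Y = H @ X                                               # predicted-obs anomalies
    K = X @ Y.T @ np.linalg.inv(Y @ Y.T + (n_ens - 1) * R)  # Kalman gain
    y_pert = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(y_obs)), R, size=n_ens).T              # perturbed observations
    return ensemble + K @ (y_pert - H @ ensemble)

# Toy setup: 3 states, 50 members, observing the first state only.
rng = np.random.default_rng(1)
ens = rng.normal(0.0, 1.0, size=(3, 50))
H = np.array([[1.0, 0.0, 0.0]])
print(enkf_update(ens, np.array([0.5]), H, 0.1 * np.eye(1), rng).mean(axis=1))
```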

  6. Advances in Intelligent Modelling and Simulation: Artificial Intelligence-Based Models and Techniques in Scalable Computing

    CERN Document Server

    Khan, Samee; Burczyński, Tadeusz

    2012-01-01

    One of the most challenging issues in today's large-scale computational modeling and design is to effectively manage the complex distributed environments, such as computational clouds, grids, ad hoc, and P2P networks, operating under various types of users with evolving relationships fraught with uncertainties. In this context, the IT resources and services usually belong to different owners (institutions, enterprises, or individuals) and are managed by different administrators. Moreover, uncertainty enters such systems as information that is incomplete, imprecise, fragmentary, or overloading, which hinders the full and precise resolution of the evaluation criteria, their sequencing and selection, and the assignment of scores. Intelligent scalable systems enable flexible routing and charging, advanced user interactions, and the aggregation and sharing of geographically-distributed resources in modern large-scale systems.   This book presents new ideas, theories, models...

  7. Development of new techniques for assimilating satellite altimetry data into ocean models

    Science.gov (United States)

    Yu, Peng

    State-of-the-art fully three-dimensional ocean models are very computationally expensive and their adjoints are even more resource intensive. However, many features of interest are approximated by the first baroclinic mode over much of the ocean, especially in the lower and mid latitude regions. Based on this dynamical feature, a new type of data assimilation scheme to assimilate sea surface height (SSH) data, a reduced-space adjoint technique, is developed and implemented with a three-dimensional model using vertical normal mode decomposition. The technique is tested with the Navy Coastal Ocean Model (NCOM) configured to simulate the Gulf of Mexico. The assimilation procedure works by minimizing the cost function, which quantifies the misfit between the observations and their counterpart model variables. The "forward" model is integrated for the period during which the data are assimilated. Vertical normal mode decomposition retrieves the first baroclinic mode, and the data misfit between the model outputs and observations is calculated. Adjoint equations based on a one-active-layer reduced gravity model, which approximates the first baroclinic mode, are integrated backward in time to get the gradient of the cost function with respect to the control variables (velocity and SSH of the first baroclinic mode). The gradient is input to an optimization algorithm (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is used for the cases presented here) to determine the new first baroclinic mode velocity and SSH fields, which are used to update the forward model variables at the initial time. Two main issues in the area of ocean data assimilation are addressed: (1) How can information provided only at the sea surface be transferred dynamically into deep layers? (2) How can information provided only locally, in limited oceanic regions, be horizontally transferred to ocean areas far away from the data-dense regions, but dynamically connected to it? The first
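
    The variational step can be sketched compactly. Below, a toy quadratic misfit stands in for the forward/adjoint NCOM pair so that the roles of the cost function, the adjoint-supplied gradient, and the limited-memory BFGS optimizer are visible; H, obs, and the dimensions are illustrative stand-ins, not the reduced-gravity operators.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        H = rng.standard_normal((40, 10))   # stand-in "observation operator"
        x_true = rng.standard_normal(10)    # true first-mode control vector
        obs = H @ x_true                    # SSH-like observations

        def cost_and_grad(x):
            misfit = H @ x - obs
            J = 0.5 * misfit @ misfit       # cost: model-data misfit
            grad = H.T @ misfit             # what the adjoint run would supply
            return J, grad

        res = minimize(cost_and_grad, np.zeros(10), jac=True, method="L-BFGS-B")
        print(res.fun, np.allclose(res.x, x_true, atol=1e-4))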

  8. A parametric model order reduction technique for poroelastic finite element models.

    Science.gov (United States)

    Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico

    2017-10-01

    This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain of systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained by rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and projecting the frequency-dependent PEM system on a global reduced-order basis generated with the proper orthogonal decomposition rather than standard modal approaches, which has proven better suited to describing the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results from the literature; in the second, the reduced-order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.
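
    A minimal sketch of the projection step: snapshots of full-order solutions yield a proper orthogonal decomposition basis, onto which a system matrix is Galerkin-projected. The snapshot and system matrices below are random stand-ins for the Biot-Allard FE operators.

        import numpy as np

        def pod_basis(snapshots, r):
            """Columns of `snapshots` are full-order solutions at sampled
            frequencies/parameters; the leading left singular vectors give
            the global reduced-order basis."""
            U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
            return U[:, :r]

        rng = np.random.default_rng(1)
        n, r = 200, 8
        snapshots = rng.standard_normal((n, 30))  # 30 stored solutions
        Phi = pod_basis(snapshots, r)

        K = rng.standard_normal((n, n))           # illustrative full-order matrix
        K_r = Phi.T @ K @ Phi                     # Galerkin-projected (r x r) matrix
        print(K_r.shape)                          # (8, 8)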

  9. Near infrared spectrometric technique for testing fruit quality: optimisation of regression models using genetic algorithms

    Science.gov (United States)

    Isingizwe Nturambirwe, J. Frédéric; Perold, Willem J.; Opara, Umezuruike L.

    2016-02-01

    Near infrared (NIR) spectroscopy has gained extensive use in quality evaluation. It is arguably one of the most advanced spectroscopic tools for the non-destructive quality testing of foodstuffs, from measurement to data analysis and interpretation. NIR spectral data are interpreted through means often involving multivariate statistical analysis, sometimes associated with optimisation techniques for model improvement. The objective of this research was to explore the extent to which genetic algorithms (GA) can be used to enhance model development for predicting fruit quality. Apple fruits were used, and NIR spectra in the range from 12000 to 4000 cm⁻¹ were acquired on both bruised and healthy tissues with different degrees of mechanical damage. GAs were used in combination with partial least squares (PLS) regression methods to develop bruise severity prediction models, which were compared to PLS models developed using the full NIR spectrum. A classification model was developed which clearly separated bruised from unbruised apple tissue. GAs helped improve prediction models by over 10% in comparison with full-spectrum-based models, as evaluated in terms of prediction error (root mean square error of cross-validation). PLS models to predict internal quality, such as sugar content and acidity, were developed and compared to versions optimised by the genetic algorithm. Overall, the results highlighted the potential of the GA method to improve the speed and accuracy of fruit quality prediction.
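
    A minimal GA-PLS sketch of the wavelength-selection idea, on synthetic spectra: chromosomes are binary band masks and fitness is the cross-validated error of a PLS model restricted to the selected bands. Population size, rates, and the number of PLS components are illustrative choices, not the settings of the study.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.standard_normal((60, 50))             # 60 "spectra", 50 bands
        y = X[:, 5] - 2 * X[:, 17] + 0.1 * rng.standard_normal(60)

        def fitness(mask):
            if mask.sum() < 2:
                return -np.inf
            pls = PLSRegression(n_components=2)
            return cross_val_score(pls, X[:, mask.astype(bool)], y, cv=5,
                                   scoring="neg_root_mean_squared_error").mean()

        pop = rng.integers(0, 2, size=(20, X.shape[1]))   # random initial masks
        for generation in range(15):
            order = np.argsort([fitness(m) for m in pop])[::-1]
            parents = pop[order[:10]]                     # truncation selection
            cuts = rng.integers(1, X.shape[1], size=10)   # one-point crossover
            children = np.array([np.concatenate((parents[i][:c],
                                                 parents[(i + 1) % 10][c:]))
                                 for i, c in enumerate(cuts)])
            children[rng.random(children.shape) < 0.02] ^= 1  # mutation
            pop = np.vstack((parents, children))

        best = max(pop, key=fitness)
        print("selected bands:", np.flatnonzero(best))    # should include 5 and 17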

  10. Assessment of the reliability of reproducing two-dimensional resistivity models using an image processing technique.

    Science.gov (United States)

    Ishola, Kehinde S; Nawawi, Mohd Nm; Abdullah, Khiruddin; Sabri, Ali Idriss Aboubakar; Adiat, Kola Abdulnafiu

    2014-01-01

    This study attempts to combine the results of geophysical images obtained from three commonly used electrode configurations using an image processing technique in order to assess their capabilities to reproduce two-dimensional (2-D) resistivity models. All the inverse resistivity models were processed using the PCI Geomatica software package commonly used for remote sensing data sets. Preprocessing of the 2-D inverse models was carried out to facilitate further processing and statistical analyses. Four raster layers were created: three were used for the input images and the fourth for the output of the combined images. The data sets were merged using a basic statistical approach. Interpreted results show that all images resolved and reconstructed the essential features of the models. An assessment of the accuracy of the images for the four geologic models was performed using four criteria: the mean absolute error and mean percentage absolute error, the resistivity values of the reconstructed blocks, and their displacements from the true models. Generally, the blocks of the images obtained with the maximum approach give the smallest estimated errors. Also, the displacement of the reconstructed blocks from the true blocks is the smallest, and the reconstructed resistivities of the blocks are closer to those of the true blocks, than with any other combination used. Thus, it is corroborated that combining inverse resistivity models yields more reliable and detailed information about the geologic models than using individual data sets.

  11. Efficient sampling techniques for uncertainty quantification in history matching using nonlinear error models and ensemble level upscaling techniques

    KAUST Repository

    Efendiev, Y.

    2009-11-01

    The Markov chain Monte Carlo (MCMC) is a rigorous sampling method to quantify uncertainty in subsurface characterization. However, the MCMC usually requires many flow and transport simulations in evaluating the posterior distribution and can be computationally expensive for fine-scale geological models. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods. The proposed method employs off-line computations for modeling the relation between coarse- and fine-scale error responses. This relation is modeled using nonlinear functions with prescribed error precisions which are used in efficient sampling within the MCMC framework. We propose a two-stage MCMC where inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale (resolved) simulations. The latter is determined on the basis of a statistical model developed off line. The proposed method is an extension of the approaches considered earlier where linear relations are used for modeling the response between coarse-scale and fine-scale models. The approach considered here does not rely on the proximity of approximate and resolved models and can employ much coarser and more inexpensive models to guide the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in the history matching. Copyright 2009 by the American Geophysical Union.
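
    A minimal sketch of the two-stage Metropolis-Hastings idea: a cheap coarse-scale likelihood screens each proposal, and only the survivors pay for a fine-scale evaluation. Both likelihoods here are toy Gaussians; in the paper the coarse one comes from upscaled flow simulations plus the off-line error model.

        import numpy as np

        def two_stage_mcmc(x0, coarse_loglik, fine_loglik, proposal, n, rng):
            x, chain = x0, [x0]
            lc, lf = coarse_loglik(x0), fine_loglik(x0)
            for _ in range(n):
                y = proposal(x, rng)
                lc_y = coarse_loglik(y)
                # stage 1: accept/reject on the coarse model only
                if np.log(rng.uniform()) < lc_y - lc:
                    lf_y = fine_loglik(y)      # stage 2: fine (resolved) run
                    if np.log(rng.uniform()) < (lf_y - lf) - (lc_y - lc):
                        x, lc, lf = y, lc_y, lf_y
                chain.append(x)
            return np.array(chain)

        rng = np.random.default_rng(0)
        coarse = lambda x: -0.5 * (x - 1.1) ** 2   # biased, cheap surrogate
        fine = lambda x: -0.5 * (x - 1.0) ** 2     # "resolved" target
        step = lambda x, rng: x + 0.5 * rng.standard_normal()
        chain = two_stage_mcmc(0.0, coarse, fine, step, 5000, rng)
        print(chain[1000:].mean())                  # close to 1.0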

  12. Fault Tree Analysis with Temporal Gates and Model Checking Technique for Qualitative System Safety Analysis

    International Nuclear Information System (INIS)

    Koh, Kwang Yong; Seong, Poong Hyun

    2010-01-01

    Although fault tree analysis (FTA) has been one of the most widely used safety analysis techniques in the nuclear industry, it suffers from several drawbacks: it uses only static gates and hence cannot precisely capture the dynamic behaviors of complex systems; it lacks rigorous semantics; and the reasoning process that checks whether basic events really cause top events is done manually, making it labor-intensive and time-consuming for complex systems. Although several attempts have been made to overcome these problems, they still cannot model absolute (actual) time because they adopt a relative time concept and can capture only sequential behaviors of the system. In this work, to resolve these problems, FTA and model checking are integrated to provide formal, automated and qualitative assistance to informal and/or quantitative safety analysis. Our approach proposes to build a formal model of the system together with fault trees. We introduce several temporal gates based on timed computational tree logic (TCTL) to capture absolute time behaviors of the system and to give concrete semantics to fault tree gates to reduce errors during the analysis, and we use the model checking technique to automate the reasoning process of FTA.

  13. Viable Techniques, Leontief’s Closed Model, and Sraffa’s Subsistence Economies

    Directory of Open Access Journals (Sweden)

    Alberto Benítez

    2014-11-01

    Full Text Available This paper studies the production techniques employed in economies that reproduce themselves. Special attention is paid to the distinction usually made between those that do not produce a surplus and those that do, which are referred to as first and second class economies, respectively. Based on this, we present a new definition of viable economies and show that every viable economy of the second class can be represented as a viable economy of the first class under two different forms, Leontief's closed model and Sraffa's subsistence economies. This allows us to present some remarks concerning the economic interpretation of the two models. On the one hand, we argue that the participation of each good in the production of every good can be considered as a normal characteristic of the first model and, on the other hand, we provide a justification for the same condition to be considered a characteristic of the second model. Furthermore, we discuss three definitions of viable techniques advanced by other authors and show that they differ from ours because they admit economies that do not reproduce themselves completely.

  14. A Comparison of Intensive Care Unit Mortality Prediction Models through the Use of Data Mining Techniques.

    Science.gov (United States)

    Kim, Sujin; Kim, Woojae; Park, Rae Woong

    2011-12-01

    The intensive care environment generates a wealth of critical care data suited to developing a well-calibrated prediction tool. This study was done to develop an intensive care unit (ICU) mortality prediction model built on University of Kentucky Hospital (UKH) data and to assess whether various data mining techniques, such as the artificial neural network (ANN), support vector machine (SVM) and decision tree (DT), outperform the conventional logistic regression (LR) statistical model. The models were built on ICU data collected regarding 38,474 admissions to the UKH between January 1998 and September 2007. The first 24 hours of ICU admission data were used, including patient demographics, admission information, physiology data, chronic health items, and outcome information. Only 15 study variables were identified as significant for inclusion in the model development. The DT algorithm slightly outperformed the other data mining techniques (AUC, 0.892), followed by the SVM (AUC, 0.876) and the ANN (AUC, 0.874), compared to the APACHE III performance (AUC, 0.871). With fewer variables needed, the machine learning algorithms that we developed proved to be as good as the conventional APACHE III prediction.
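
    A minimal sketch of the comparison protocol (the UKH data are not public, so a synthetic 15-variable design stands in): fit the conventional logistic regression and a decision tree on the same data and compare cross-validated AUC.

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        # imbalanced binary outcome, 15 predictors, loosely echoing the study
        X, y = make_classification(n_samples=2000, n_features=15,
                                   n_informative=8, weights=[0.9],
                                   random_state=0)
        for name, model in [("LR", LogisticRegression(max_iter=1000)),
                            ("DT", DecisionTreeClassifier(max_depth=5,
                                                          random_state=0))]:
            auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
            print(f"{name}: AUC = {auc:.3f}")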

  15. Comparison of Analysis and Spectral Nudging Techniques for Dynamical Downscaling with the WRF Model over China

    Directory of Open Access Journals (Sweden)

    Yuanyuan Ma

    2016-01-01

    Full Text Available To overcome the problem that the horizontal resolution of global climate models may be too low to resolve features which are important at the regional or local scales, dynamical downscaling has been extensively used. However, dynamical downscaling results generally drift away from the large-scale driving fields. The nudging technique can be used to balance the performance of dynamical downscaling at large and small scales, but the relative performance of the two nudging techniques (analysis nudging and spectral nudging) is debated. Moreover, dynamical downscaling is now performed at the convection-permitting scale to reduce parameterization uncertainty and obtain finer resolution. To compare the performances of the two nudging techniques, three sensitivity experiments (no nudging, analysis nudging, and spectral nudging) covering a period of two months with a grid spacing of 6 km over continental China are conducted in this study to downscale the 1-degree National Centers for Environmental Prediction (NCEP) dataset with the Weather Research and Forecasting (WRF) model. Compared with observations, the results show that both nudging experiments decrease the bias of conventional meteorological elements near the surface and at different heights during the process of dynamical downscaling. However, spectral nudging outperforms analysis nudging for predicting precipitation, and analysis nudging outperforms spectral nudging for the simulation of air humidity and wind speed.

  16. Design, modeling, and fabrication techniques of bulk PZT actuators for MEMS deformable mirrors

    Science.gov (United States)

    Xu, Xiaohui; Chu, Jiaru

    2007-12-01

    The paper describes the design, modeling and fabrication techniques of bulk PZT actuators for MEMS deformable mirrors. Both an analytical model and the finite element method are employed for performance simulation and structure optimization of the bulk PZT actuator. According to the simulation results, thick PZT films with a high d31 piezoelectric coefficient are necessary for the deformable mirrors to obtain both high stiffness and large stroke at low voltage for applications in astronomical observation and retina imaging. The fabrication techniques for bulk PZT actuators for MEMS deformable mirrors are investigated, incorporating the bonding of bulk PZT ceramics to Si single crystals with epoxy resin and the thinning and patterning of bulk PZT ceramics using a wet-etching method, with a 1BHF:2HCl:4NH4Cl:4H2O solution as the etchant. Using these fabrication techniques, we have successfully demonstrated a 4×4 prototype array of 2.5 mm diameter bulk PZT actuators for MEMS deformable mirrors. The bulk PZT actuators show a stroke of 3 μm at ±25 V and a displacement hysteresis of 15%; the hysteresis was largely eliminated by using the method of staying on the same segment.

  17. The efficacy of hemostatic techniques in the sheep model of carotid artery injury.

    Science.gov (United States)

    Valentine, Rowan; Boase, Sam; Jervis-Bardy, Josh; Dones Cabral, Jay-Dee; Robinson, Simon; Wormald, Peter-John

    2011-01-01

    The most dramatic complication in endonasal surgery is inadvertent injury to the internal carotid artery (ICA) with massive bleeding. Nasal packing is the favored technique for control; however, this often causes complete carotid occlusion or carotid stenosis, contributing to patient morbidity and mortality. The aim of this study is to compare the efficacy of endoscopically applied hemostatic techniques that maintain vascular flow in an animal model of carotid artery injury. A total of 20 sheep underwent ICA dissection/isolation followed by placement of the artery within a modified "sinus model otorhino neuro trainer" (SIMONT) model. A standardized 4-mm carotid artery injury was created endoscopically. Sheep were randomized to receive 1 of 5 hemostatic techniques (Floseal, oxidized regenerated cellulose, Chitosan gel, muscle patch, or the U-Clip anastomotic device). Specific outcome measures were time to hemostasis, duration of time mean arterial pressure (MAP) was >55 mmHg, blood loss, and survival time. Muscle patch hemostasis and the U-Clip anastomotic device were significantly more effective than Floseal, oxidized regenerated cellulose, and Chitosan gel at achieving primary hemostasis rapidly, reducing total blood loss, and increasing survival time and the time MAP was >55 mmHg; sheep in these groups achieved primary hemostasis and reached the endpoint of observation while maintaining vascular patency. Floseal and oxidized regenerated cellulose failed to achieve hemostasis in any animal, with all animals exsanguinating prematurely. In the sheep model of endoscopic ICA injury, the muscle patch and U-Clip anastomotic device significantly improved survival, reduced blood loss, and achieved primary hemostasis while maintaining vascular patency. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.

  18. Wind Turbine Tower Vibration Modeling and Monitoring by the Nonlinear State Estimation Technique (NSET)

    Directory of Open Access Journals (Sweden)

    Peng Guo

    2012-12-01

    Full Text Available With appropriate vibration modeling and analysis, the incipient failure of key components such as the tower, drive train and rotor of a large wind turbine can be detected. In this paper, the Nonlinear State Estimation Technique (NSET) has been applied to model turbine tower vibration to good effect, providing an understanding of the tower vibration dynamic characteristics and the main factors influencing these. The developed tower vibration model comprises two different parts: a sub-model used below rated wind speed and another used above rated wind speed. Supervisory control and data acquisition (SCADA) system data from a single wind turbine, collected from March to April 2006, are used in the modeling. Model validation has been subsequently undertaken and is presented. This research has demonstrated the effectiveness of the NSET approach to tower vibration, in particular its conceptual simplicity, clear physical interpretation and high accuracy. The developed and validated tower vibration model was then used to successfully detect blade angle asymmetry, a common fault that should be remedied promptly to improve turbine performance and limit fatigue damage. The work also shows that condition monitoring is improved significantly if the information from the vibration signals is complemented by analysis of other relevant SCADA data such as power performance, wind speed, and rotor loads.
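
    A minimal NSET sketch: a memory matrix of healthy historical states and a nonlinear similarity operator produce a model estimate for each new observation, and the residual flags abnormal behaviour. The Euclidean-distance kernel and the tiny synthetic memory matrix are assumptions, not the paper's SCADA setup.

        import numpy as np

        def nset_estimate(D, x):
            """D: (n_vars, m) memory matrix of healthy states; x: (n_vars,)
            observation. Uses the common similarity s(a,b) = 1/(1+||a-b||);
            other kernels appear in the NSET literature."""
            def op(A, B):   # pairwise similarity between columns of A and B
                d = np.linalg.norm(A[:, :, None] - B[:, None, :], axis=0)
                return 1.0 / (1.0 + d)
            S = op(D, D) + 1e-9 * np.eye(D.shape[1])  # small ridge for safety
            w = np.linalg.solve(S, op(D, x[:, None]).ravel())
            return D @ w    # model estimate of the "normal" state

        # usage: residual = observation - estimate flags abnormal vibration
        rng = np.random.default_rng(0)
        D = rng.standard_normal((3, 40))   # wind speed, power, tower vibration
        x = D[:, 7] + 0.01 * rng.standard_normal(3)     # near a healthy state
        print(np.linalg.norm(x - nset_estimate(D, x)))  # small residual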

  19. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    Science.gov (United States)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

    In this paper, we propose a hybrid model that combines a multiple linear regression model with the fuzzy c-means method. This research involved the relationship between 20 topsoil variates, analyzed prior to planting, and paddy yields at standard fertilizer rates. Data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally distributed without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.

  20. Dynamic document object model formation technique for corporate website protection against automatic copying of information

    Directory of Open Access Journals (Sweden)

    Galushka Vasily

    2017-01-01

    Full Text Available The article describes a solution to the problem of automatic copying of information from websites on the Internet, which is carried out using parsing techniques based on regular expressions or function libraries. To protect against this type of information security threat, it is proposed to dynamically generate and periodically change the object model of the HTML document when it is generated and sent to the browser. These changes should affect the values of the identifying tag attributes and the structure of the object model tree. As attribute values, it is proposed to use character strings of limited length obtained by hashing random numbers; the structure of the object model should be changed by adding extra tags at the corresponding levels of the hierarchy of the tree representing it. The simultaneous application of these methods makes it impossible to compile an algorithm for extracting the necessary information from the overall structure of the web page.
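
    A minimal server-side sketch of the randomized-identifier idea: per response, each logical element name maps to a short hash of a random value, so scrapers cannot hard-code selectors while the server's own templates keep a consistent mapping. Names and lengths are illustrative.

        import hashlib
        import secrets

        def session_ids(names, n_bytes=6):
            """Map logical names to randomized ids, refreshed on every render."""
            salt = secrets.token_bytes(16)   # new random value per response
            return {name: "i" + hashlib.sha256(salt + name.encode())
                                       .hexdigest()[:2 * n_bytes]
                    for name in names}       # "i" prefix keeps ids valid HTML

        ids = session_ids(["price", "title", "article-body"])
        print(ids)   # e.g. {'price': 'i3fa9c1d2e07b', ...}, different each run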

  1. Development of groundwater flow modeling techniques for the low-level radwaste disposal (III)

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Dae-Seok; Kim, Chun-Soo; Kim, Kyung-Soo; Park, Byung-Yoon; Koh, Yong-Kweon; Park, Hyun-Soo [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-12-01

    The project aims to establish a methodology for hydrogeologic assessment through field application of the evaluation techniques gained and accumulated from previous hydrogeological research in Korea. The results of the project and their possible areas for application are (1) acquisition of detailed hydrogeologic information by using a borehole televiewer and a multipacker system, (2) establishment of an integrated hydrogeological assessment method for fractured rocks, (3) acquisition of the fracture parameters for fracture modeling, (4) an inversion analysis of hydraulic parameters from fracture network modeling, (5) geostatistical methods for the spatial assignment of hydraulic parameters for fractured rocks, and (6) establishment of the groundwater flow modeling procedure for a repository. 75 refs., 72 figs., 34 tabs. (Author)

  2. Models of signal validation using artificial intelligence techniques applied to a nuclear reactor

    International Nuclear Information System (INIS)

    Oliveira, Mauro V.; Schirru, Roberto

    2000-01-01

    This work presents two models of signal validation in which the analytical redundancy of the monitored signals from a nuclear plant is provided by neural networks. In one model the analytical redundancy is provided by a single neural network, while in the other it is provided by several neural networks, each one working in a specific part of the plant's entire operating region. Four clustering techniques were tested to separate the entire operating region into several specific regions. Additional information on the signals' reliability is supplied by a fuzzy inference system. The models were implemented in C and tested with signals acquired from the Angra I nuclear power plant, from startup to 100% power. (author)
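
    A minimal sketch of analytical redundancy for signal validation, with a linear least-squares map standing in for the paper's neural networks: each monitored signal is reconstructed from the others, and a large residual flags a suspect sensor. The synthetic signals are illustrative, not Angra I data.

        import numpy as np

        rng = np.random.default_rng(0)
        power = rng.uniform(0, 100, 500)   # % reactor power (synthetic)
        flow = 0.5 * power + rng.normal(0, 0.5, 500)       # flow-like signal
        temp = 20 + 0.3 * power + rng.normal(0, 0.5, 500)  # temperature-like

        # train the redundancy map: temperature from the other two signals
        A = np.column_stack([power, flow, np.ones(500)])
        coef, *_ = np.linalg.lstsq(A, temp, rcond=None)

        # validate a new sample whose temperature channel has drifted
        x_new = np.array([80.0, 40.0, 95.0])   # power, flow, temp reading
        estimate = np.array([x_new[0], x_new[1], 1.0]) @ coef
        print(f"residual = {x_new[2] - estimate:.1f} K -> sensor suspect")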

  3. Cleanup techniques for Finnish urban environments and external doses from 137Cs - modelling and calculations

    International Nuclear Information System (INIS)

    Moring, M.; Markkula, M.L.

    1997-03-01

    The external doses under various radioactive deposition conditions are assessed and the efficiencies of some simple decontamination techniques (grass cutting, vacuum sweeping, hosing of paved surfaces and roofs, and felling trees) are compared in the study. The present model has been constructed for Finnish conditions and housing areas, using 137Cs transfer data from Nordic and Central European studies and models. The compartment model concerns the behaviour and decontamination of 137Cs in the urban environment under summer conditions. Doses to man have been calculated for wet (light rain) and dry deposition in four typical Finnish building areas: single-family wooden houses, brick terraced houses, blocks of flats and urban office buildings. (26 refs.)

  4. Modeling, Control and Analysis of Multi-Machine Drive Systems using Bond Graph Technique

    Directory of Open Access Journals (Sweden)

    J. Belhadj

    2006-03-01

    Full Text Available In this paper, a system-viewpoint method is investigated to study and analyze complex systems using the Bond Graph technique. These systems are multi-machine, multi-inverter systems based on the Induction Machine (IM), widely used in industries such as rolling mills, textiles, and railway traction. They span multiple physical domains and time scales and present very strong internal and external couplings, with nonlinearities and a high model order. Classical analytic models are difficult to manipulate and capture only some aspects of performance. In this study, a "systemic approach" is presented to design these kinds of systems, using an energetic representation based on the Bond Graph formalism. Three types of multi-machine systems are studied with their control strategies. The modeling is carried out with Bond Graphs, and results are discussed to show the performance of this methodology.

  5. Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    Science.gov (United States)

    Parke, F. I.

    1981-01-01

    Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of flow parameters (pressure, temperature and velocity vector) at many points in the fluid. Visualization of the spatial variation in these parameters is important for comprehending and checking the data generated, for identifying the regions of interest in the flow, and for effectively communicating information about the flow to others. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied and the results are presented.

  6. Sterile insect technique: A model for dose optimisation for improved sterile insect quality

    International Nuclear Information System (INIS)

    Parker, A.; Mehta, K.

    2007-01-01

    The sterile insect technique (SIT) is an environment-friendly pest control technique with application in the area-wide integrated control of key pests, including the suppression or elimination of introduced populations and the exclusion of new introductions. Reproductive sterility is normally induced by ionizing radiation, a convenient and consistent method that maintains a reasonable degree of competitiveness in the released insects. The cost and effectiveness of a control program integrating the SIT depend on the balance between sterility and competitiveness, but it appears that current operational programs with an SIT component are not achieving an appropriate balance. In this paper we discuss optimization of the sterilization process and present a simple model and procedure for determining the optimum dose. (author)

  7. A Temporal Millimeter Wave Propagation Model for Tunnels Using Ray Frustum Techniques and FFT

    Directory of Open Access Journals (Sweden)

    Choonghyen Kwon

    2014-01-01

    Full Text Available A temporal millimeter wave propagation model for tunnels is presented using ray frustum techniques and the fast Fourier transform (FFT). To directly estimate or simulate the effects of millimeter wave channel properties on the performance of communication services, time-domain impulse responses of demodulated signals should be obtained, which requires rather long computation times. To mitigate the computational burden, ray frustum techniques are used to obtain the frequency-domain transfer function of the millimeter wave propagation environment, and the FFT of equivalent low-pass signals is used to retrieve demodulated waveforms. This approach is numerically efficient and helps to directly estimate the impact of tunnel structures and surface roughness on the performance of millimeter wave communication services.
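
    A minimal sketch of the FFT step: given the channel transfer function sampled on the FFT grid (here a toy two-ray stand-in for the ray-frustum result), the received equivalent low-pass waveform is recovered as ifft(fft(s) * H). Sample rate, delay, and gain are illustrative.

        import numpy as np

        n, fs = 1024, 1e9                  # samples and sample rate (assumed)
        t = np.arange(n) / fs
        s = np.exp(2j * np.pi * 5e6 * t)   # equivalent low-pass test signal

        f = np.fft.fftfreq(n, 1 / fs)
        tau, a = 8e-9, 0.4                 # delay and gain of a second "ray"
        H = 1.0 + a * np.exp(-2j * np.pi * f * tau)  # two-ray transfer function

        r = np.fft.ifft(np.fft.fft(s) * H)  # time-domain received waveform
        print(np.abs(r[:4]))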

  8. Wide-area Power System Oscillation Damping using Model Predictive Control Technique

    Science.gov (United States)

    Mohamed, Tarek Hassan; Abdel-Rahim, Abdel-Moamen Mohammed; Hassan, Ahmed Abd-Eltawwab; Hiyama, Takashi

    This paper presents a new approach to the robust tuning of power system stabilizers (PSS) and automatic voltage regulators (AVR) in multi-machine power systems. The proposed method is based on a model predictive control (MPC) technique for improving the stability of wide-area power systems with multiple generators and distribution systems, including dispersed generation. The proposed method provides better damping of power system oscillations under small and large disturbances, even with the inclusion of local PSSs. The effectiveness of the proposed approach is demonstrated on a two-area, four-machine power system. A performance comparison between the proposed controller and several other controllers is carried out, confirming the superiority of the proposed technique. It has also been observed that the proposed algorithm can be successfully applied to larger multi-area power systems and does not suffer from computational difficulties. The proposed algorithm was implemented using the MATLAB/SIMULINK software package.

  9. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    Science.gov (United States)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazard early warning systems, and questions on global warming and climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems, in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability, may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work comes from the solid mathematical background adopted, making use of information geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
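
    A minimal sketch of the bias-elimination role assigned above to Kalman filtering: a scalar filter tracks the systematic forecast bias as a random-walk state and subtracts the running estimate from each new forecast. Noise variances and the synthetic series are assumptions.

        import numpy as np

        def kalman_bias_filter(obs, fcst, q=1e-4, r=1e-2):
            """q: random-walk (process) variance; r: observation variance."""
            b, p = 0.0, 1.0                 # bias estimate and its variance
            corrected = np.empty_like(fcst)
            for i, (y, f) in enumerate(zip(obs, fcst)):
                corrected[i] = f - b        # correct with the prior estimate
                p += q                      # predict: bias drifts slowly
                k = p / (p + r)             # Kalman gain
                b += k * ((f - y) - b)      # update once the obs arrives
                p *= 1 - k
            return corrected

        rng = np.random.default_rng(0)
        truth = np.sin(np.linspace(0, 20, 300))
        fcst = truth + 0.8 + 0.1 * rng.standard_normal(300)   # biased model
        obs = truth + 0.05 * rng.standard_normal(300)
        out = kalman_bias_filter(obs, fcst)
        print(np.abs(out - truth).mean(), np.abs(fcst - truth).mean())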

  10. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao

    2013-01-01

    Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► The seasonal item in the data series is verified by Kendall τ correlation testing. ► Different regression models are applied to the trend item forecasting. ► We examine the superiority of the combined models by quartile value comparison. ► Paired-sample T tests confirm the superiority of the combined models. - Abstract: For an energy-limited economy, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which load demand series are predicted by employing information from earlier days similar to the forecast day. As in many nonlinear systems, seasonal and trend items coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified with the Kendall τ correlation testing method. Then, in the belief that forecasting the seasonal and trend items separately would improve accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed for seasonal and trend item forecasting, respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique significantly improves accuracy across all eleven models applied to the trend item forecasting. The superior performance of this separate forecasting technique is further confirmed by paired-sample T tests.
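
    A minimal sketch of the separate-item idea, with simple multiplicative seasonal indices standing in for the seasonal exponential adjustment method: the weekly cycle is removed, a regression forecasts the trend item, and the two are recombined. Period and data are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        period, n = 7, 20 * 7                 # weekly cycle, 20 weeks of data
        t = np.arange(n)
        load = (100 + 0.3 * t) * (1 + 0.2 * np.sin(2 * np.pi * t / period)) \
               + rng.standard_normal(n)

        season = np.array([load[i::period].mean() for i in range(period)])
        season /= season.mean()               # normalized seasonal indices
        trend = load / season[t % period]     # deseasonalized series

        coef = np.polyfit(t, trend, 1)        # regression on the trend item
        t_next = np.arange(n, n + 7)          # 1-week-ahead forecast horizon
        forecast = np.polyval(coef, t_next) * season[t_next % period]
        print(forecast.round(1))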

  11. A comparison of modelling techniques for computing wall stress in abdominal aortic aneurysms

    Directory of Open Access Journals (Sweden)

    McGloughlin Timothy M

    2007-10-01

    Full Text Available Abstract Background Aneurysms, in particular abdominal aortic aneurysms (AAA), form a significant portion of cardiovascular-related deaths. There is much debate as to the most suitable tool for rupture prediction and interventional surgery of AAAs, and currently maximum diameter is used clinically as the determining factor for surgical intervention. Stress analysis techniques, such as finite element analysis (FEA) to compute the wall stress in patient-specific AAAs, have been regarded by some authors to be more clinically important than the use of a "one-size-fits-all" maximum diameter criterion, since some small AAAs have been shown to have higher wall stress than larger AAAs and have been known to rupture. Methods A patient-specific AAA was selected from our AAA database and 3D reconstruction was performed. The AAA was then modelled using three different approaches, namely AAA(SIMP), AAA(MOD) and AAA(COMP), with each model examined using linear and non-linear material properties. All models were analysed using the finite element method for wall stress distributions. Results The three methods show marked differences in peak wall stress. Peak wall stress was shown to reduce when more realistic parameters were utilised. Wall stress was also shown to reduce by 59% when modelled using the most accurate non-linear complex approach, compared to the same model without intraluminal thrombus. Conclusion The results here show that the use of more realistic parameters affects the resulting wall stress. The use of simplified computational modelling methods can lead to inaccurate stress distributions. Care should be taken when examining stress results found using simplified techniques, in particular if the wall stress results are to have clinical importance.

  12. Repositioning the knee joint in human body FE models using a graphics-based technique.

    Science.gov (United States)

    Jani, Dhaval; Chawla, Anoop; Mukherjee, Sudipto; Goyal, Rahul; Vusirikala, Nataraju; Jayaraman, Suresh

    2012-01-01

    Human body finite element models (FE-HBMs) are available in standard occupant or pedestrian postures. There is a need to have FE-HBMs in the same posture as a crash victim or to configure them in varying postures. Developing FE models for all possible positions is not practically viable. The current work aims at obtaining a posture-specific human lower-extremity model by reconfiguring an existing one. A graphics-based technique was developed to reposition the lower extremity of an FE-HBM by specifying the flexion-extension angle. Elements of the model were segregated into rigid (bones) and deformable components (soft tissues). The bones were rotated about the flexion-extension axis followed by rotation about the longitudinal axis to capture the twisting of the tibia, thus achieving the desired knee joint movement. Geometric heuristics were then used to reposition the skin, and a mapping defined over the space between the bones and the skin was used to regenerate the soft tissues. Mesh smoothing was then done to augment mesh quality. The developed method permits control over the kinematics of the joint and maintains the initial mesh quality of the model; for some critical areas in the joint vicinity where element distortion is large, mesh smoothing is done to improve mesh quality. A method to reposition the knee joint of a human body FE model was thus developed. Repositioning of a model from 9 degrees of flexion to 90 degrees of flexion was demonstrated in just a few seconds without subjective intervention. Because the mesh quality of the repositioned model was maintained to a predefined level (typically the level of a well-made model in the initial configuration), the model was suitable for subsequent simulations.

  13. A discussion of calibration techniques for evaluating binary and categorical predictive models.

    Science.gov (United States)

    Fenlon, Caroline; O'Grady, Luke; Doherty, Michael L; Dunnion, John

    2018-01-01

    Modelling of binary and categorical events is a commonly used tool to simulate epidemiological processes in veterinary research. Logistic and multinomial regression, naïve Bayes, decision trees and support vector machines are popular data mining techniques used to predict the probabilities of events with two or more outcomes. Thorough evaluation of a predictive model is important to validate its ability for use in decision-support or broader simulation modelling. Measures of discrimination, such as sensitivity, specificity and receiver operating characteristics, are commonly used to evaluate how well the model can distinguish between the possible outcomes. However, these discrimination tests cannot confirm that the predicted probabilities are accurate and without bias. This paper describes a range of calibration tests, which typically measure the accuracy of predicted probabilities by comparing them to mean event occurrence rates within groups of similar test records. These include overall goodness-of-fit statistics in the form of the Hosmer-Lemeshow and Brier tests. Visual assessment of prediction accuracy is carried out using plots of calibration and deviance (the difference between the outcome and its predicted probability). The slope and intercept of the calibration plot are compared to the perfect diagonal using the unreliability test. Mean absolute calibration error provides an estimate of the level of predictive error. This paper uses sample predictions from a binary logistic regression model to illustrate the use of calibration techniques. Code is provided to perform the tests in the R statistical programming language. The benefits and disadvantages of each test are described. Discrimination tests are useful for establishing a model's diagnostic abilities, but may not suitably assess the model's usefulness for other predictive applications, such as stochastic simulation. Calibration tests may be more informative than discrimination tests for evaluating
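
    The paper provides R code; below is a rough Python equivalent of two of the calibration measures discussed, assumed rather than taken from the paper: the Brier score and a decile-grouped calibration table with mean absolute calibration error (Hosmer-Lemeshow-style grouping, without the chi-square machinery).

        import numpy as np

        def calibration_report(p, y, bins=10):
            """p: predicted probabilities; y: observed 0/1 outcomes."""
            brier = np.mean((p - y) ** 2)
            groups = np.array_split(np.argsort(p), bins)  # equal-size deciles
            table = [(p[g].mean(), y[g].mean()) for g in groups]
            mace = np.mean([abs(pred - obs) for pred, obs in table])
            return brier, mace, table

        rng = np.random.default_rng(0)
        p = rng.uniform(0, 1, 5000)
        y = (rng.uniform(0, 1, 5000) < p).astype(float)   # calibrated by design
        brier, mace, _ = calibration_report(p, y)
        print(f"Brier = {brier:.3f}, mean abs calibration error = {mace:.3f}")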

  14. A novel 3D modelling and simulation technique in thermotherapy predictive analysis on biological tissue

    Science.gov (United States)

    Fanjul-Vélez, F.; Arce-Diego, J. L.; Romanov, Oleg G.; Tolstik, Alexei L.

    2007-07-01

    Optical techniques applied to biological tissue allow the development of new tools in medical practice, either for tissue characterization or treatment. Examples of the latter are Photodynamic Therapy (PDT), Low Intensity Laser Treatment (LILT), and a promising technique called thermotherapy, which tries to control the temperature increase in a pathological tissue in order to reduce or even eliminate pathological effects. The application of thermotherapy requires a previous analysis in order to avoid collateral damage to the patient and to choose the appropriate optical source parameters. Among the different implementations of opto-thermal models, the one we use consists of a three-dimensional Beer-Lambert law for the optical part and, for the thermal part, a bio-heat equation that models heat transfer (conduction, convection, radiation, blood perfusion and vaporization), solved via a numerical spatial-temporal explicit finite difference approach. The usual drawback of the numerical method of the thermal model is that convergence constraints make the spatial and temporal steps very small, with the natural consequence of slow processing. In this work, a new algorithm implementation is used to solve the bio-heat equation, in such a way that the simulation time decreases considerably. Thermal damage based on the Arrhenius damage integral is also considered.
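
    A minimal 1-D sketch of the opto-thermal pair: a Beer-Lambert source deposits laser power in depth and an explicit finite-difference step advances a simplified bio-heat equation (conduction and perfusion only). All coefficients are assumed round numbers, not fitted tissue values.

        import numpy as np

        nz, dz, dt = 200, 5e-5, 5e-3        # grid points, m; time step, s
        k, rho, c = 0.5, 1000.0, 3600.0     # conductivity, density, heat cap.
        w_b = 0.5                            # perfusion sink, W m^-3 K^-1
        mu_a, I0 = 100.0, 5e4                # absorption 1/m, irradiance W/m^2

        z = np.arange(nz) * dz
        src = mu_a * I0 * np.exp(-mu_a * z)  # Beer-Lambert source, W/m^3
        T = np.zeros(nz)                      # temperature rise, K

        alpha = k * dt / (rho * c * dz ** 2)
        assert alpha < 0.5                    # explicit-scheme stability limit
        for _ in range(2000):                 # 10 s of irradiation
            lap = np.zeros(nz)
            lap[1:-1] = T[2:] - 2 * T[1:-1] + T[:-2]
            T += alpha * lap + dt * (src - w_b * T) / (rho * c)
            T[0] = T[-1] = 0.0                # fixed-temperature boundaries
        print(T.max())                         # peak temperature rise, K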

  15. Comparing smoothing techniques in Cox models for exposure-response relationships.

    Science.gov (United States)

    Govindarajulu, Usha S; Spiegelman, Donna; Thurston, Sally W; Ganguli, Bhaswati; Eisen, Ellen A

    2007-09-10

    To allow for non-linear exposure-response relationships, we applied flexible non-parametric smoothing techniques to models of time to lung cancer mortality in two occupational cohorts with skewed exposure distributions. We focused on three different smoothing techniques in Cox models: penalized splines, restricted cubic splines, and fractional polynomials. We compared standard software implementations of these three methods based on their visual representation and criterion for model selection. We propose a measure of the difference between a pair of curves based on the area between them, standardized by the average of the areas under the pair of curves. To capture the variation in the difference over the range of exposure, the area between curves was also calculated at percentiles of exposure and expressed as a percentage of the total difference. The dose-response curves from the three methods were similar in both studies over the denser portion of the exposure range, with the difference between curves up to the 50th percentile less than 1 per cent of the total difference. A comparison of inverse variance weighted areas applied to the data set with a more skewed exposure distribution allowed us to estimate area differences with more precision by reducing the proportion attributed to the upper 1 per cent tail region. Overall, the penalized spline and the restricted cubic spline were closer to each other than either was to the fractional polynomial. (c) 2007 John Wiley & Sons, Ltd.
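
    A minimal sketch of the proposed curve-difference measure: the area between two exposure-response curves standardized by the average of the areas under them, evaluated on a common exposure grid. The two smoother outputs are synthetic stand-ins.

        import numpy as np

        def trapezoid(y, x):   # simple trapezoidal rule, version-independent
            return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

        def standardized_area_between(x, f1, f2):
            between = trapezoid(np.abs(f1 - f2), x)
            average = 0.5 * (trapezoid(np.abs(f1), x) + trapezoid(np.abs(f2), x))
            return between / average

        x = np.linspace(0, 10, 201)             # exposure grid
        spline = np.log1p(x)                    # stand-ins for two smoothers
        fracpoly = 0.95 * np.log1p(x) + 0.02 * x
        print(standardized_area_between(x, spline, fracpoly))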

  16. Determining and ranking dimensions of knowledge management implementation using Hicks model and fuzzy TOPSIS Technique

    Directory of Open Access Journals (Sweden)

    Mona Ahani

    2013-02-01

    Full Text Available The 20th century saw the transition from an industry-based economy to a knowledge-based one. In a knowledge-based economy, knowledge plays an essential role in producing wealth compared with other tangible and physical assets. The purpose of this research is to identify and rank different aspects of knowledge management based on the Hicks model, using the fuzzy TOPSIS technique, for one of the most prestigious universities in Iran. The proposed model considers four main criteria of knowledge, namely creation, distribution, storage, and application, along with 17 sub-criteria. The Chi-square correlation test indicates a positive and significant correlation between the four mentioned criteria and knowledge management implementation. Using the fuzzy TOPSIS technique, the results also indicate that "Need for new and updated information and knowledge" was the most important sub-criterion and "Sharing or distribution of knowledge" the most important main criterion in the Hicks model.
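
    A minimal crisp TOPSIS sketch of the ranking step (the study uses fuzzy TOPSIS, in which fuzzy numbers replace the crisp scores): alternatives are vector-normalized, weighted, and ranked by closeness to the ideal solution. The score matrix and weights are illustrative.

        import numpy as np

        def topsis(decision, weights, benefit):
            """decision: alternatives x criteria; benefit marks
            larger-is-better criteria. Returns closeness to the ideal."""
            v = decision / np.linalg.norm(decision, axis=0) * weights
            ideal = np.where(benefit, v.max(0), v.min(0))
            worst = np.where(benefit, v.min(0), v.max(0))
            d_pos = np.linalg.norm(v - ideal, axis=1)
            d_neg = np.linalg.norm(v - worst, axis=1)
            return d_neg / (d_pos + d_neg)

        # usage: rank 3 sub-criteria on 2 assumed benefit criteria
        scores = np.array([[7.0, 9.0], [8.0, 6.0], [5.0, 8.0]])
        print(topsis(scores, np.array([0.6, 0.4]), np.array([True, True])))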

  17. Tsunami hazard prevention based land use planning model using GIS techniques in Muang Krabi, Thailand

    International Nuclear Information System (INIS)

    Soormo, A.S.

    2012-01-01

    The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the most fascinating ecotourism provinces of southern Thailand, along with neighboring regions such as Phangnga and Phuket, devastating human lives, coastal communities and economic activities. This research study aimed to generate a tsunami-hazard-prevention-based land use planning model using GIS (Geographical Information Systems) and a hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to shoreline, population density, mangrove, forest, stream and road, were used based on the land use zoning criteria. These criteria were weighted using the Saaty scale of importance, one of the mathematical techniques. The model was classified according to the land suitability classification. Various GIS techniques, namely subsetting, spatial analysis, map difference and data conversion, were used. The model was generated with five categories, high, moderate, low, very low and not suitable regions, each illustrated with an appropriate definition for the decision makers to redevelop the region. (author)
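
    A minimal sketch of the Saaty weighting step: a pairwise comparison matrix over three of the triggering factors is reduced to a priority vector via its principal eigenvector, which would then weight the GIS suitability layers. The comparison values are illustrative.

        import numpy as np

        pairwise = np.array([[1.0, 3.0, 5.0],   # elevation vs shore vs road
                             [1/3, 1.0, 2.0],
                             [1/5, 1/2, 1.0]])
        vals, vecs = np.linalg.eig(pairwise)
        w = np.real(vecs[:, np.argmax(np.real(vals))])
        w /= w.sum()                             # normalized factor weights
        print(w.round(3))                         # approx. [0.648 0.230 0.122]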

  18. Subsurface stormflow modeling with sensitivity analysis using a Latin-hypercube sampling technique

    International Nuclear Information System (INIS)

    Gwo, J.P.; Toran, L.E.; Morris, M.D.; Wilson, G.V.

    1994-09-01

    Subsurface stormflow, because of its dynamic and nonlinear features, has been a very challenging process in both field experiments and modeling studies. The disposal of wastes in subsurface stormflow and vadose zones at Oak Ridge National Laboratory, however, demands more effort to characterize these flow zones and to study their dynamic flow processes. Field data and modeling studies for these flow zones are relatively scarce, and the effect of engineering designs on the flow processes is poorly understood. On the basis of a risk assessment framework and a conceptual model for the Oak Ridge Reservation area, numerical models of a proposed waste disposal site were built, and a Latin-hypercube simulation technique was used to study the uncertainty of model parameters. Four scenarios, with three engineering designs, were simulated, and the effectiveness of the engineering designs was evaluated. Sensitivity analysis of model parameters suggested that hydraulic conductivity was the most influential parameter. However, local heterogeneities may alter flow patterns and result in complex recharge and discharge patterns. Hydraulic conductivity, therefore, may not be used as the only reference for subsurface flow monitoring and engineering operations. Neither of the two engineering designs, capping and French drains, was found to be effective in hydrologically isolating downslope waste trenches. However, pressure head contours indicated that combinations of both designs may prove more effective than either one alone
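
    A minimal sketch of the sampling step: a Latin-hypercube design over two flow parameters, each row defining one run of the subsurface flow model. The parameter ranges are illustrative, not site-calibrated values.

        import numpy as np
        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=2, seed=0)
        unit = sampler.random(n=20)         # 20 stratified samples in [0,1]^2
        lo = [1e-7, 0.05]                    # hydraulic conductivity (m/s),
        hi = [1e-4, 0.35]                    # porosity bounds (assumed)
        params = qmc.scale(unit, lo, hi)
        for K, phi in params[:3]:
            print(f"run with K = {K:.2e} m/s, porosity = {phi:.2f}")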

  19. Mathematical Foundation Based Inter-Connectivity modelling of Thermal Image processing technique for Fire Protection

    Directory of Open Access Journals (Sweden)

    Sayantan Nath

    2015-09-01

    Full Text Available In this paper, the integration of multiple image processing functions and their statistical parameters for an intelligent alarm-series-based fire detection system is presented. The inter-connectivity mapping between imagery processing elements, based on a classification factor for temperature monitoring and a multilevel intelligent alarm sequence, is introduced via an abstract canonical approach. The flow of image processing components between the core implementation of an intelligent alarming system, temperature-wise area segmentation and boundary detection has not yet been fully explored in thermal imaging. From an analytical perspective, an abstract-algebra-based inter-mapping model is discussed that links event-calculus-supported DAGSVM classification, for step-by-step generation of the alarm series with a gradual monitoring technique, to the segmentation of regions and their affected boundaries in thermographic images of coal with respect to temperature distinctions. The connectedness of the multifunctional image processing operations of a compatible fire protection system with a proper monitoring sequence is investigated here. The mathematical models representing the relation between the temperature-affected areas and their boundaries in the obtained thermal image, defined in partial-derivative fashion, are the core contribution of this study. The thermal images of coal samples were obtained in a real-life scenario with a self-assembled thermographic camera. The amalgamation of area segmentation, boundary detection and alarm series is described in abstract algebra. The principal objective of this paper is to understand the dependency pattern and working principles of the image processing components, and to structure an inter-connected modelling technique for those components with the help of mathematical foundations.

  20. Optic nerve sheath diameter measurement techniques: examination using a novel ex-vivo porcine model.

    Science.gov (United States)

    Nusbaum, Derek M; Antonsen, Erik; Bockhorst, Kurt H; Easley, R Blaine; Clark, Jonathan B; Brady, Kenneth M; Kibler, Kathleen K; Sutton, Jeffrey P; Kramer, Larry; Sargsyan, Ashot E

    2014-01-01

    Ultrasound (U/S) and MRI measurements of the optic nerve sheath diameter (ONSD) have been proposed as intracranial pressure measurement surrogates, but these methods have not been fully evaluated or standardized. The purpose of this study was to develop an ex-vivo model for evaluating ONSD measurement techniques by comparing U/S and MRI measurements to physical measurements. The left eye of post mortem juvenile pigs (N = 3) was excised and the subdural space of the optic nerve cannulated. Caliper measurements and U/S imaging measurements of the ONSD were acquired at baseline and following 1 cc saline infusion into the sheath. The samples were then embedded in 0.5% agarose and imaged in a 7 Tesla (7T) MRI. The ONSD was subsequently measured with digital calipers at locations and directions matching the U/S and direct measurements. Both MRI and sonographic measurements were in agreement with direct measurements. U/S data, especially axial images, exhibited a positive bias and more variance (bias: 1.318, 95% limit of agreement: 8.609) compared to MRI (bias: 0.3156, 95% limit of agreement: 2.773). In addition, U/S images were much more dependent on probe placement, distance between probe and target, and imaging plane. This model appears to be a valid test-bed for continued scrutiny of ONSD measurement techniques. In this model, 7T MRI was accurate and potentially useful for in-vivo measurements where direct measurements are not available. Current limitations with ultrasound imaging for ONSD measurement associated with image acquisition technique and equipment necessitate further standardization to improve its clinical utility.

  1. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    Science.gov (United States)

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I (QTTI), grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961

  2. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    Directory of Open Access Journals (Sweden)

    Yang-Cheng Lin

    2012-01-01

    Full Text Available How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers’ perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I (QTTI), grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers’ perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.

  3. Big data - modelling of midges in Europe using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Cuellar, Ana Carolina; Kjær, Lene Jung; Skovgaard, Henrik

    2017-01-01

    coordinates of each trap, start and end dates of trapping. We used 120 environmental predictor variables together with Random Forest machine learning algorithms to predict the overall species distribution (probability of occurrence) and monthly abundance in Europe. We generated maps for every month...... and the Obsoletus group, although abundance was generally higher for a longer period of time for C. imicola than for the Obsoletus group. Using machine learning techniques, we were able to model the spatial distribution in Europe for C. imicola and the Obsoletus group in terms of abundance and suitability...
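
    As a rough illustration of the modelling step described (not the authors' code or data), a random-forest occurrence model in scikit-learn looks like this, with synthetic stand-ins for the 120 environmental predictors:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 120))    # environmental predictors per trap
      y = rng.integers(0, 2, size=500)   # midges present (1) / absent (0)

      rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                  random_state=0).fit(X, y)
      print(rf.oob_score_)                      # out-of-bag accuracy
      p_occurrence = rf.predict_proba(X)[:, 1]  # values behind a probability map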

  4. Multi-factor models and signal processing techniques: application to quantitative finance

    CERN Document Server

    Darolles, Serges; Jay, Emmanuelle

    2013-01-01

    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices.This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere

  5. Dynamical symmetry breaking in the Jackiw-Johnson model and the gauge technique

    International Nuclear Information System (INIS)

    Singh, J.P.

    1984-01-01

    The Jackiw-Johnson model of dynamical gauge symmetry breaking has been re-examined in the light of the gauge technique. In the limit where the ratio of the axial to vector coupling constants becomes small, or, consistently, in the limit where the ratio of the axial-vector-boson mass to the fermion mass becomes small, an approximate solution for the fermion spectral function has been derived. This gives an extremely small ratio of the axial-vector-boson mass to the fermion mass. (author)

  6. Shape modeling technique KOALA validated by ESA Rosetta at (21) Lutetia

    Science.gov (United States)

    Carry, B.; Kaasalainen, M.; Merline, W. J.; Müller, T. G.; Jorda, L.; Drummond, J. D.; Berthier, J.; O'Rourke, L.; Ďurech, J.; Küppers, M.; Conrad, A.; Tamblyn, P.; Dumas, C.; Sierks, H.; Osiris Team (M. A'Hearn, F. Angrilli, C. Barbieri, A. Barucci, J.-L. Bertaux, G. Cremonese, V. Da Deppo, B. Davidsson, S. Debei, M. De Cecco, S. Fornasier, M. Fulle, O. Groussin, P. Gutiérrez, W.-H. Ip, S. Hviid, H. U. Keller, D. Koschny, J. Knollenberg, J. R. Kramm, E. Kuehrt, P. Lamy, L. M. Lara, M. Lazzarin, J. J. López-Moreno, F. Marzari, H. Michalik, G. Naletto, H. Rickman, R. Rodrigo, L. Sabau, N. Thomas, K.-P. Wenzel)

    2012-06-01

    We present here a comparison of our results from ground-based observations of asteroid (21) Lutetia with imaging data acquired during the flyby of the asteroid by the ESA Rosetta mission. This flyby provided a unique opportunity to evaluate and calibrate our method of determination of size, 3-D shape, and spin of an asteroid from ground-based observations. Knowledge of certain observable physical properties of small bodies (e.g., size, spin, 3-D shape, and density) has far-reaching implications in furthering our understanding of these objects, including their composition, internal structure, and the effects of non-gravitational forces. We review the different observing techniques used to determine the above physical properties of asteroids and present our 3-D shape-modeling technique KOALA - Knitted Occultation, Adaptive-optics, and Lightcurve Analysis - which is based on multi-dataset inversion. We compare the results we obtained with KOALA, prior to the flyby, on asteroid (21) Lutetia with the high-spatial-resolution images of the asteroid taken with the OSIRIS camera on board the ESA Rosetta spacecraft during its encounter with Lutetia on 2010 July 10. The spin axis determined with KOALA was found to be accurate to within 2°, while the KOALA diameter determinations were within 2% of the Rosetta-derived values. The 3-D shape of the KOALA model is also confirmed by the spectacular visual agreement between both 3-D shape models (KOALA pre- and OSIRIS post-flyby). We found a typical deviation of only 2 km at local scales between the profiles from KOALA predictions and OSIRIS images, resulting in a volume uncertainty provided by KOALA better than 10%. Radiometric techniques for the interpretation of thermal infrared data also benefit greatly from the KOALA shape model: the absolute size and geometric albedo can be derived with high accuracy, and thermal properties, for example the thermal inertia, can be determined unambiguously. The corresponding Lutetia analysis leads

  7. Hybrid LES-RANS technique based on a one-equation near-wall model

    Science.gov (United States)

    Breuer, M.; Jaffrézic, B.; Arora, K.

    2008-05-01

    In order to reduce the high computational effort of wall-resolved large-eddy simulations (LES), the present paper suggests a hybrid LES-RANS approach which splits up the simulation into a near-wall RANS part and an outer LES part. Generally, RANS is adequate for attached boundary layers requiring reasonable CPU-time and memory, where LES can also be applied but demands extremely large resources. Contrarily, RANS often fails in flows with massive separation or large-scale vortical structures. Here, LES is without a doubt the best choice. The basic concept of hybrid methods is to combine the advantages of both approaches yielding a prediction method, which, on the one hand, assures reliable results for complex turbulent flows, including large-scale flow phenomena and massive separation, but, on the other hand, consumes much fewer resources than LES, especially for high Reynolds number flows encountered in technical applications. In the present study, a non-zonal hybrid technique is considered (according to the signification retained by the authors concerning the terms zonal and non-zonal), which leads to an approach where the suitable simulation technique is chosen more or less automatically. For this purpose the hybrid approach proposed relies on a unique modeling concept. In the LES mode a subgrid-scale model based on a one-equation model for the subgrid-scale turbulent kinetic energy is applied, where the length scale is defined by the filter width. For the viscosity-affected near-wall RANS mode the one-equation model proposed by Rodi et al. (J Fluids Eng 115:196-205, 1993) is used, which is based on the wall-normal velocity fluctuations as the velocity scale and algebraic relations for the length scales. Although the idea of combined LES-RANS methods is not new, a variety of open questions still has to be answered. This includes, in particular, the demand for appropriate coupling techniques between LES and RANS, adaptive control mechanisms, and proper subgrid

  8. A BLENDING TECHNIQUE OF TOPOGRAPHIC AND HYDROGRAPHIC DEMs FOR RIVER ALIGNMENT MODELLING

    Directory of Open Access Journals (Sweden)

    H. Karim

    2017-10-01

    Full Text Available Current practice in combining bathymetry and topographic DEMs is based on overlaying and merging both datasets into a new DEM along the river boundary. Through a few sample datasets from recent projects, the authors realized that this method does not preserve the character of a natural river, especially the slope between riverbank and riverbed. Some arising issues are also highlighted: the validity of the topographic DEM as well as of the river boundary, the limitations of DEMs, and how the bathymetry survey was carried out in the field. To overcome these issues, a new technique called DEM blending was proposed and tested on the project datasets. It is based on a fusion of the two DEMs (with respective buffer, offset and fusion ratio) from a validated river boundary to produce the riverbank slope, and a merging of two different interpolation results to produce the best riverbed DEM. A simple riverbank ontology was prescribed to illustrate the enhancement in accuracy and visualization provided by this technique. The output from three projects/DEM results is presented as a comparison between the current practice and the proposed technique.
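
    A minimal numpy sketch of the fusion idea (weights shifting from the topographic to the hydrographic DEM across a buffer around the validated river boundary); the linear ramp below is an assumption for illustration, not the authors' exact fusion ratio:

      import numpy as np

      def blend_dems(topo, hydro, dist_to_boundary, buffer_width):
          """Blend two DEMs: weight ramps from the topographic surface
          (outside the buffer) to the hydrographic surface (in the channel)."""
          w = np.clip(1.0 - dist_to_boundary / buffer_width, 0.0, 1.0)
          return (1.0 - w) * topo + w * hydro

      topo = np.full((3, 3), 10.0)              # bank elevations (m)
      hydro = np.full((3, 3), 7.0)              # riverbed elevations (m)
      dist = np.tile([0.0, 5.0, 20.0], (3, 1))  # distance from river boundary (m)
      print(blend_dems(topo, hydro, dist, buffer_width=10.0))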

  9. Statistical and Machine-Learning Data Mining Techniques for Better Predictive Modeling and Analysis of Big Data

    CERN Document Server

    Ratner, Bruce

    2011-01-01

    The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has

  10. The technique for 3D printing patient-specific models for auricular reconstruction.

    Science.gov (United States)

    Flores, Roberto L; Liss, Hannah; Raffaelli, Samuel; Humayun, Aiza; Khouri, Kimberly S; Coelho, Paulo G; Witek, Lukasz

    2017-06-01

    Currently, surgeons approach autogenous microtia repair by creating a two-dimensional (2D) tracing of the unaffected ear to approximate a three-dimensional (3D) construct, a difficult process. To address these shortcomings, this study introduces the fabrication of a patient-specific, sterilizable, 3D-printed auricular model for autogenous auricular reconstruction. A high-resolution 3D digital photograph was captured of the patient's unaffected ear and surrounding anatomic structures. The photographs were exported and uploaded into Amira for transformation into a digital (.stl) model, which was imported into Blender, an open-source software platform for digital modification of data. The unaffected auricle was digitally isolated and inverted to render a model for the contralateral side. The depths of the scapha, triangular fossa, and cymba were deepened to accentuate their contours. Extra relief was added to the helical root to further distinguish this structure. The ear was then digitally deconstructed and separated into its individual auricular components for reconstruction. The completed ear and its individual components were 3D printed using polylactic acid (PLA) filament and sterilized following manufacturer specifications. The sterilized models were brought to the operating room to be utilized by the surgeon. The models allowed for more accurate anatomic measurements compared to 2D tracings, which reduced the degree of estimation required by surgeons. Approximately 20 g of PLA filament were utilized for the construction of these models, yielding a total material cost of approximately $1. Using the methodology detailed in this report, as well as departmentally available resources (3D digital photography and 3D printing), a sterilizable, patient-specific, and inexpensive 3D auricular model was fabricated to be used intraoperatively. This technique of printing customized-to-patient models for surgeons to use as 'guides' shows great promise.

  11. A numerical study of different projection-based model reduction techniques applied to computational homogenisation

    Science.gov (United States)

    Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia

    2017-10-01

    Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied. An important aspect thereby is the robustness of the obtained reduced model. Within this study, reduced-order modelling (ROM) for the geometrically nonlinear case using hyperelastic materials is applied for the boundary value problem on the micro-scale. This involves the Proper Orthogonal Decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Therein, three methods for hyper-reduction, differing in how the nonlinearity is approximated and in the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation or Gappy-POD based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors, GNAT) is favoured to obtain an optimal projection and a robust reduced model.
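
    For reference, the POD step reduces to a singular value decomposition of a snapshot matrix; a minimal, generic sketch (not tied to the paper's hyperelastic micro-problems):

      import numpy as np

      def pod_basis(snapshots, energy=0.9999):
          """POD basis: leading left singular vectors of the snapshot matrix.
          snapshots: (n_dofs, n_snapshots) array of micro-scale solutions."""
          U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          captured = np.cumsum(s**2) / np.sum(s**2)   # cumulative "energy"
          k = int(np.searchsorted(captured, energy)) + 1
          return U[:, :k]

      # Reduced coordinates of a new solution u: q = V.T @ u; lift back: V @ q.
      V = pod_basis(np.random.default_rng(1).normal(size=(1000, 40)))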

  12. Flow prediction models using macroclimatic variables and multivariate statistical techniques in the Cauca River Valley

    International Nuclear Information System (INIS)

    Carvajal Escobar Yesid; Munoz, Flor Matilde

    2007-01-01

    This project centred on a review of the state of the art of the ocean-atmospheric phenomena that affect Colombian hydrology, especially the ENSO phenomenon, which causes a first-order socioeconomic impact in our country and has not been sufficiently studied; it is therefore important to address this topic by including the ENSO-related macroclimatic variables in water-planning analyses. The analyses include a review of statistical techniques for checking the consistency of hydrological data, with the objective of building a reliable and homogeneous database of monthly flows of the Cauca River. Multivariate statistical methods, specifically principal component analysis, are then used in the development of models for predicting monthly mean flows of the Cauca River, involving both linear approaches (the autoregressive models AR, ARX and ARMAX) and a nonlinear approach (artificial neural networks).
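
    As a sketch of the model family described, macroclimatic predictors can be compressed with PCA and fed, together with lagged flows, into an ARX-type regression fitted by least squares (synthetic data, not the Cauca series):

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(1)
      flow = rng.gamma(2.0, 100.0, size=240)    # monthly mean flows (synthetic)
      climate = rng.normal(size=(240, 6))       # ENSO-related indices (synthetic)

      pcs = PCA(n_components=2).fit_transform(climate)  # macroclimatic components

      p = 2  # ARX order: regress flow(t) on two lags plus exogenous components
      X = np.array([np.concatenate(([flow[t - 1], flow[t - 2]], pcs[t]))
                    for t in range(p, len(flow))])
      y = flow[p:]
      A = np.column_stack([np.ones(len(X)), X])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # intercept, AR and PC terms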

  13. Data mining techniques for scientific computing: Application to asymptotic paraxial approximations to model ultrarelativistic particles

    Science.gov (United States)

    Assous, Franck; Chaskalovic, Joël

    2011-06-01

    We propose a new approach that consists in using data mining techniques for scientific computing. Indeed, data mining has proved to be efficient in other contexts which deal with huge data, as in biology, medicine, marketing, advertising and communications. Our aim, here, is to deal with the important problem of the exploitation of the results produced by any numerical method. Indeed, more and more data are created today by numerical simulations. Thus, it seems necessary to look for efficient tools to analyze them. In this work, we focus our presentation on a test case dedicated to an asymptotic paraxial approximation to model ultrarelativistic particles. Our method deals directly with the numerical results of simulations and tries to understand what each order of the asymptotic expansion brings to the simulation results over what could be obtained by other lower-order or less accurate means. This new heuristic approach offers new potential applications to treat numerical solutions to mathematical models.

  14. Experimental methods and modeling techniques for description of cell population heterogeneity

    DEFF Research Database (Denmark)

    Lencastre Fernandes, Rita; Nierychlo, M.; Lundin, L.

    2011-01-01

    With the continuous development, in the last decades, of analytical techniques providing complex information at single cell level, the study of cell heterogeneity has been the focus of several research projects within analytical biotechnology. Nonetheless, the complex interplay between...... environmental changes and cellular responses is yet not fully understood, and the integration of this new knowledge into the strategies for design, operation and control of bioprocesses is far from being an established reality. Indeed, the impact of cell heterogeneity on productivity of large scale cultivations...... methods for monitoring cell population heterogeneity as well as model frameworks suitable for describing dynamic heterogeneous cell populations. We will furthermore underline the highly important coordination between experimental and modeling efforts necessary to attain a reliable quantitative description...

  15. 3D modelling of trompe l'oeil decorated vaults using dense matching techniques

    Science.gov (United States)

    Chiabrando, F.; Lingua, A.; Noardo, F.; Spano, A.

    2014-05-01

    Dense matching techniques, implemented in many commercial and open-source software packages, are useful instruments for carrying out a rapid and detailed analysis of complex objects, including various types of details and surfaces. For this reason these tools were tested in the metric survey of a frescoed ceiling in the hall of honour of a baroque building. The surfaces are covered with trompe-l'oeil paintings which, in theory, should give a very good texture to automatic matching algorithms, but in this case problems arise when attempting to reconstruct the correct geometry: in fact, in correspondence with the main painted architectural details, the models present some irregularities, unexpectedly coherent with the painted drawing. The photogrammetric models have been compared with data deriving from a LIDAR survey of the same object to evaluate the magnitude of this error: some profiles of selected sections have been extracted, verifying the different behaviours of the software tools.

  16. Using the Continuum of Design Modelling Techniques to Aid the Development of CAD Modeling Skills in First Year Industrial Design Students

    Science.gov (United States)

    Storer, I. J.; Campbell, R. I.

    2012-01-01

    Industrial Designers need to understand and command a number of modelling techniques to communicate their ideas to themselves and others. Verbal explanations, sketches, engineering drawings, computer aided design (CAD) models and physical prototypes are the most commonly used communication techniques. Within design, unlike some disciplines,…

  17. Quantitative seafloor characterization using angular backscatter data of the multi-beam echo-sounding system - Use of models and model-free techniques

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.

    International Conference on Coastal and Ocean Technology, pp. 293-300. ... of the seafloor features, including textural parameters [1]. Presently available multi-beam echo-sounding techniques can provide bathymetric data with higher coverage, due to the use of faster, high-resolution signal processing techniques employed in the beam...

  18. Analytical model for Transient Current Technique (TCT) signal prediction and analysis for thin interface characterization

    Science.gov (United States)

    Bronuzzi, J.; Mapelli, A.; Sallese, J. M.

    2016-12-01

    A silicon wafer bonding technique has recently been proposed for the fabrication of monolithic silicon radiation detectors. This new process would enable direct bonding of a read-out electronics chip wafer onto a highly resistive silicon substrate wafer. Monolithic silicon detectors could therefore be fabricated in a way that allows the free choice of electronic chips and of the highly resistive silicon bulk, even from different providers. Moreover, a monolithic detector with a highly resistive bulk would also become available. The electrical properties of the bonded interface are then critical for this application. Indeed, mobile charges generated by radiation inside the bonded bulk are expected to transit through the interface to be collected by the read-out electronics. In order to characterize this interface, the concept of the Transient Current Technique (TCT) has been explored by means of numerical simulations combined with a physics-based analytical model. In this work, the analytical model giving insight into the physics behind the TCT dependence upon interface traps is validated using both TCAD simulations and experimental measurements.
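
    TCT signals are conventionally modeled with the Shockley-Ramo theorem, i(t) = q·v(t)·E_w, where the weighting field E_w is 1/d for a planar detector. A much-simplified sketch (uniform field, constant mobility, no interface trapping, so this is an assumption-laden toy, not the paper's analytical model):

      import numpy as np

      q, d = 1.602e-19, 300e-6   # elementary charge (C), detector thickness (m)
      mu, V = 0.145, 100.0       # electron mobility in Si (m^2/Vs), bias (V)

      v = mu * V / d             # drift velocity under a uniform field (m/s)
      t_transit = d / v
      t = np.linspace(0.0, 1.2 * t_transit, 240)
      i = np.where(t < t_transit, q * v / d, 0.0)   # Ramo: i = q * v * (1/d)
      print(f"transit {t_transit*1e9:.2f} ns, plateau {i[0]*1e12:.3f} pA")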

  19. Towards Systematic Prediction of Urban Heat Islands: Grounding Measurements, Assessing Modeling Techniques

    Directory of Open Access Journals (Sweden)

    Jackson Voelkel

    2017-06-01

    Full Text Available While there exists extensive assessment of urban heat, we observe myriad methods for describing thermal distribution, factors that mediate temperatures, and potential impacts on urban populations. In addition, the limited spatial and temporal resolution of satellite-derived heat measurements may limit the capacity of decision makers to take effective actions for reducing mortalities in vulnerable populations whose locations require highly-refined measurements. Needed are high resolution spatial and temporal information for urban heat. In this study, we ask three questions: (1) how do urban heat islands vary throughout the day? (2) what statistical methods best explain the presence of temperatures at sub-meter spatial scales? and (3) what landscape features help to explain variation in urban heat islands? Using vehicle-based temperature measurements at three periods of the day in the Pacific Northwest city of Portland, Oregon (USA), we incorporate LiDAR-derived datasets, and evaluate three statistical techniques for modeling and predicting variation in temperatures during a heat wave. Our results indicate that the random forest technique best predicts temperatures, and that the evening model best explains the variation in temperature. The results suggest that ground-based measurements provide high levels of accuracy for describing the distribution of urban heat, its temporal variation, and specific locations where targeted interventions with communities can reduce mortalities from heat events.
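
    A hedged sketch of the model-comparison step (evaluating candidate regressors for ground-based temperatures against LiDAR-derived predictors by cross-validation); the data and predictor names are synthetic stand-ins:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      X = rng.normal(size=(1000, 8))   # e.g. canopy height, building density...
      y = 30 + 2*X[:, 0] - X[:, 1]**2 + rng.normal(0, 0.5, 1000)  # evening temps

      for name, model in [("linear", LinearRegression()),
                          ("random forest",
                           RandomForestRegressor(n_estimators=200, random_state=0))]:
          # Mean cross-validated R^2; the nonlinear learner should capture the
          # squared term that the linear model misses.
          print(name, cross_val_score(model, X, y, cv=5, scoring="r2").mean())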

  20. Remote sensing and spatial statistical techniques for modelling Ommatissus lybicus (Hemiptera: Tropiduchidae) habitat and population densities

    Directory of Open Access Journals (Sweden)

    Khalifa M. Al-Kindi

    2017-08-01

    Full Text Available In order to understand the distribution and prevalence of Ommatissus lybicus (Hemiptera: Tropiduchidae), as well as to analyse their current biogeographical patterns and predict their future spread, comprehensive and detailed information on environmental, climatic, and agricultural practices is essential. Spatial analytical techniques, such as remote sensing and spatial statistics tools, can help detect and model spatial links and correlations between the presence, absence and density of O. lybicus in response to climatic, environmental, and human factors. The main objective of this paper is to review remote sensing and relevant analytical techniques that can be applied in mapping and modelling the habitat and population density of O. lybicus. An exhaustive search of related literature revealed that there are very limited studies linking location-based infestation levels of pests like O. lybicus with climatic, environmental, and human-practice-related variables. This review also highlights the accumulated knowledge and addresses the gaps in this area of research. Furthermore, it makes recommendations for future studies, and gives suggestions on monitoring and surveillance methods in designing both local and regional level integrated pest management strategies for palm trees and other affected cultivated crops.

  1. Computational Modeling and Neuroimaging Techniques for Targeting during Deep Brain Stimulation

    Science.gov (United States)

    Sweet, Jennifer A.; Pace, Jonathan; Girgis, Fady; Miller, Jonathan P.

    2016-01-01

    Accurate surgical localization of the varied targets for deep brain stimulation (DBS) is a process undergoing constant evolution, with increasingly sophisticated techniques to allow for highly precise targeting. However, despite the fastidious placement of electrodes into specific structures within the brain, there is increasing evidence to suggest that the clinical effects of DBS are likely due to the activation of widespread neuronal networks directly and indirectly influenced by the stimulation of a given target. Selective activation of these complex and inter-connected pathways may further improve the outcomes of currently treated diseases by targeting specific fiber tracts responsible for a particular symptom in a patient-specific manner. Moreover, the delivery of such focused stimulation may aid in the discovery of new targets for electrical stimulation to treat additional neurological, psychiatric, and even cognitive disorders. As such, advancements in surgical targeting, computational modeling, engineering designs, and neuroimaging techniques play a critical role in this process. This article reviews the progress of these applications, discussing the importance of target localization for DBS, and the role of computational modeling and novel neuroimaging in improving our understanding of the pathophysiology of diseases, and thus paving the way for improved selective target localization using DBS. PMID:27445709

  2. Application of Tissue Culture and Transformation Techniques in Model Species Brachypodium distachyon.

    Science.gov (United States)

    Sogutmaz Ozdemir, Bahar; Budak, Hikmet

    2018-01-01

    Brachypodium distachyon has recently emerged as a model plant species for the grass family (Poaceae), which includes major cereal crops and forage grasses. One of the important traits of a model species is its capacity to be transformed and the ease of growing it both in tissue culture and under greenhouse conditions. Hence, plant transformation technology is crucial for improvements in agricultural studies, both for the study of new genes and for the production of new transgenic plant species. In this chapter, we review an efficient tissue culture protocol and two different transformation systems for Brachypodium using the most commonly preferred gene transfer techniques in plant species: the microprojectile bombardment method (biolistics) and Agrobacterium-mediated transformation. In plant transformation studies, the frequently used explant materials are immature embryos, due to their higher transformation efficiencies and regeneration capacity. However, mature embryos are available throughout the year, in contrast to immature embryos. We explain a tissue culture protocol for Brachypodium using mature embryos with selected inbred lines from our collection. Embryogenic calluses obtained from mature embryos are used to transform Brachypodium with both transformation techniques, revised according to previously studied protocols applied in the grasses, such as applying vacuum infiltration, introducing different wounding effects, modifying the inoculation and cocultivation steps, or optimizing the bombardment parameters.

  3. Biodegradable Magnesium Stent Treatment of Saccular Aneurysms in a Rat Model - Introduction of the Surgical Technique.

    Science.gov (United States)

    Nevzati, Edin; Rey, Jeannine; Coluccia, Daniel; D'Alonzo, Donato; Grüter, Basil; Remonda, Luca; Fandino, Javier; Marbacher, Serge

    2017-10-01

    The steady progress in the armamentarium of techniques available for endovascular treatment of intracranial aneurysms requires affordable and reproducible experimental animal models to test novel embolization materials such as stents and flow diverters. The aim of the present project was to design a safe, fast, and standardized surgical technique for stent-assisted embolization of saccular aneurysms in a rat model. Saccular aneurysms were created from an arterial graft from the descending aorta. The aneurysms were microsurgically transplanted through end-to-side anastomosis to the infrarenal abdominal aorta of a syngeneic male Wistar rat weighing >500 g. Following aneurysm anastomosis, aneurysm embolization was performed using balloon-expandable magnesium stents (2.5 mm x 6 mm). The stent system was introduced retrogradely from the lower abdominal aorta using a modified Seldinger technique. Following a pilot series of 6 animals, a total of 67 rats were operated on according to established standard operating procedures. Mean surgery time, mean anastomosis time, and mean suturing time of the artery puncture site were 167 ± 22 min, 26 ± 6 min and 11 ± 5 min, respectively. The mortality rate was 6% (n=4). The morbidity rate was 7.5% (n=5), and in-stent thrombosis was found in 4 cases (n=2 early, n=2 late in-stent thrombosis). The results demonstrate the feasibility of standardized stent occlusion of saccular sidewall aneurysms in rats, with low rates of morbidity and mortality. This stent embolization procedure combines the opportunity to study novel concepts of stent- or flow-diverter-based devices as well as the molecular aspects of healing.

  4. The combination of satellite observation techniques for sequential ionosphere VTEC modeling

    Science.gov (United States)

    Erdogan, Eren; Limberger, Marco; Schmidt, Michael; Seitz, Florian; Dettmering, Denise; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Venzmer, Malte; Mrotzek, Niclas

    2016-04-01

    The project OPTIMAP is a joint initiative by the Bundeswehr GeoInformation Centre (BGIC), the German Space Situational Awareness Centre (GSSAC), the German Geodetic Research Institute of the Technical University of Munich (DGFI-TUM) and the Institute for Astrophysics at the University of Göttingen (IAG). The main goal is to develop an operational tool for ionospheric mapping and prediction (OPTIMAP). A key feature of the project is the combination of different satellite observation techniques to improve the spatio-temporal data coverage and the sensitivity for selected target parameters. In the current status, information about the vertical total electron content (VTEC) is derived from the dual frequency signal processing of four techniques: (1) Terrestrial observations of GPS and GLONASS ensure the high-resolution coverage of continental regions, (2) the satellite altimetry mission Jason-2 is taken into account to provide VTEC in nadir direction along the satellite tracks over the oceans, (3) GPS radio occultations to Formosat-3/COSMIC are exploited for the retrieval of electron density profiles that are integrated to obtain VTEC and (4) Jason-2 carrier-phase observations tracked by the on-board DORIS receiver are processed to determine the relative VTEC. All measurements are sequentially pre-processed in hourly batches serving as input data of a Kalman filter (KF) for modeling the global VTEC distribution. The KF runs in a predictor-corrector mode allowing for the sequential processing of the measurements where update steps are performed with one-minute sampling in the current configuration. The spatial VTEC distribution is represented by B-spline series expansions, i.e., the corresponding B-spline series coefficients together with additional technique-dependent unknowns such as Differential Code Biases and Intersystem Biases are estimated by the KF. As a preliminary solution, the prediction model to propagate the filter state through time is defined by a random
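
    The sequential estimation described follows the standard Kalman predictor-corrector recursion; a generic sketch (a random-walk state transition, F = I, is assumed here as a simplification, consistent with where the abstract breaks off):

      import numpy as np

      def kf_step(x, P, z, H, R, Q):
          """One predictor-corrector step with a random-walk state model."""
          P = P + Q                         # predict: uncertainty grows by Q
          S = H @ P @ H.T + R               # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
          x = x + K @ (z - H @ x)           # correct with VTEC observations z
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P

      # x: B-spline coefficients plus bias terms; rows of H hold the B-spline
      # values at the pierce points of the GNSS/altimetry/DORIS/occultation
      # measurements in the current batch (all names here are illustrative).
      x0, P0 = np.zeros(4), np.eye(4)
      H = np.array([[0.2, 0.5, 0.3, 0.0]])
      x1, P1 = kf_step(x0, P0, np.array([18.0]), H,
                       R=np.eye(1) * 4.0, Q=np.eye(4) * 0.01)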

  5. Analysis of Composite Panel-Stiffener Debonding Using a Shell/3D Modeling Technique

    Science.gov (United States)

    Krueger, Ronald; Ratcliffe, James; Minguet, Pierre J.

    2007-01-01

    Interlaminar fracture mechanics has proven useful for characterizing the onset of delaminations in composites and has been used successfully primarily to investigate onset in fracture toughness specimens and laboratory-size coupon-type specimens. Future acceptance of the methodology by industry and certification authorities, however, requires the successful demonstration of the methodology on the structural level. For this purpose, a panel was selected that is reinforced with stiffeners. Shear loading causes the panel to buckle, and the resulting out-of-plane deformations initiate skin/stiffener separation at the location of an embedded defect. A small section of the stiffener foot, web and noodle as well as the panel skin in the vicinity of the delamination front were modeled with a local 3D solid model. Across the width of the stiffener foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. Computed failure indices were compared to corresponding results where the entire web was modeled with shell elements and only a small section of the stiffener foot and panel were modeled locally with solid elements. Including the stiffener web in the local 3D solid model increased the computed failure index. Further including the noodle and transition radius in the local 3D solid model changed the local distribution across the width. The magnitude of the failure index decreased with increasing transition radius and noodle area. For the transition radii modeled, the material properties used for the noodle area had a negligible effect on the results. The results of this study are intended to be used as a guide for conducting finite element and fracture mechanics analyses of delamination and debonding in complex structures such as integrally stiffened panels.
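
    In the virtual crack closure technique, the mode-separated energy release rates follow from crack-tip nodal forces and the relative displacements of the node pair just behind the front; a per-node sketch (the values and the B-K style failure index mentioned in the comment are illustrative choices, not necessarily the paper's exact criterion):

      def vcct(Fz, Fx, dw, du, da, b):
          """G_I from the opening pair (Fz, dw), G_II from the sliding pair
          (Fx, du); da = element length at the front, b = nodal width."""
          GI = Fz * dw / (2.0 * da * b)
          GII = Fx * du / (2.0 * da * b)
          return GI, GII, GII / (GI + GII)   # mixed-mode ratio G_II / G_T

      # A failure index then compares G_T = G_I + G_II against the mixed-mode
      # toughness Gc at that ratio; an index >= 1 predicts delamination growth.
      print(vcct(Fz=12.0, Fx=5.0, dw=2.0e-5, du=1.1e-5, da=1.0e-3, b=2.0e-3))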

  6. Near-real-time regional troposphere models for the GNSS precise point positioning technique

    International Nuclear Information System (INIS)

    Hadas, T; Kaplon, J; Bosy, J; Sierny, J; Wilgan, K

    2013-01-01

    The GNSS precise point positioning (PPP) technique requires the application of high-quality products (orbits and clocks), since their errors directly affect the quality of positioning. For real-time purposes it is possible to utilize ultra-rapid precise orbits and clocks which are disseminated through the Internet. In order to eliminate as many unknown parameters as possible, one may introduce external information on the zenith troposphere delay (ZTD). It is desirable that the a priori model be accurate and reliable, especially for real-time applications. One of the open problems in GNSS positioning is troposphere delay modelling on the basis of ground meteorological observations. The Institute of Geodesy and Geoinformatics of Wroclaw University of Environmental and Life Sciences (IGG WUELS) has developed two independent regional troposphere models for the territory of Poland. The first one is estimated in near-real-time regime using GNSS data from a Polish ground-based augmentation system named ASG-EUPOS, established by the Polish Head Office of Geodesy and Cartography (GUGiK) in 2008. The second one is based on meteorological parameters (temperature, pressure and humidity) gathered from various meteorological networks operating over the area of Poland and surrounding countries. This paper describes the methodology of both model calculation and verification. It also presents the results of applying various ZTD models in kinematic PPP in the post-processing mode using the Bernese GPS Software. Positioning results were used to assess the quality of the developed models during changing weather conditions. Finally, the impact of model application on the precision, accuracy and convergence time of simulated real-time PPP is discussed. (paper)
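
    For orientation, a common way to map surface meteorology to ZTD (shown here as an illustrative choice, not necessarily the IGG WUELS formulation) is Saastamoinen's hydrostatic delay plus a simple wet term:

      import numpy as np

      def zhd(p_hpa, lat_rad, h_m):
          """Saastamoinen zenith hydrostatic delay (m) from surface pressure."""
          return 0.0022768 * p_hpa / (1 - 0.00266*np.cos(2*lat_rad) - 0.28e-6*h_m)

      def zwd(e_hpa, t_k):
          """Approximate zenith wet delay (m) from vapour pressure and temperature."""
          return 0.002277 * (1255.0 / t_k + 0.05) * e_hpa

      # Example for a mid-latitude site (values illustrative, roughly 2.4 m total):
      print(zhd(1005.0, np.radians(51.1), 150.0) + zwd(12.0, 288.15))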

  7. Review of the phenomenon of fluidization and its numerical modelling techniques

    Directory of Open Access Journals (Sweden)

    H Khawaja

    2016-10-01

    Full Text Available The paper introduces the phenomenon of fluidization as a process. Fluidization occurs when a fluid (liquid or gas) is pushed upwards through a bed of granular material. This may cause the granular material to behave like a liquid and, for example, keep a level meniscus in a tilted container, or make a lighter object float on top and a heavier object sink to the bottom. The behavior of the granular material, when fluidized, depends on the superficial gas velocity, particle size, particle density, and fluid properties, resulting in various regimes of fluidization. These regimes are discussed in detail in the paper. The paper also discusses the applications of fluidized beds, from their early usage in the Winkler coal gasifier to more recent applications in the manufacturing of carbon nanotubes. In addition, the Geldart grouping based on the range of particle sizes is discussed. The minimum fluidization condition is defined, and it is demonstrated that it may register slightly differently depending on whether particles are being fluidized or de-fluidized. The paper presents a discussion of three numerical modelling techniques: the two-fluid model, the unresolved fluid-particle model and the resolved fluid-particle model. The two-fluid model is often referred to as the Eulerian-Eulerian method of solution and treats the particles as well as the fluid as a continuum. The unresolved and resolved fluid-particle models are based on the Eulerian-Lagrangian method of solution. The key difference between them is whether to use a drag correlation or to resolve the boundary layer around the particles. The paper ends with a discussion of the applicability of these models.
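
    The minimum fluidization condition mentioned above is commonly estimated from the Archimedes number via the Wen & Yu (1966) correlation; a small sketch with illustrative properties for sand in air:

      import numpy as np

      def u_mf(d_p, rho_p, rho_f, mu, g=9.81):
          """Minimum fluidization velocity via the Wen & Yu correlation."""
          Ar = rho_f * (rho_p - rho_f) * g * d_p**3 / mu**2   # Archimedes number
          Re_mf = np.sqrt(33.7**2 + 0.0408 * Ar) - 33.7       # Reynolds at u_mf
          return Re_mf * mu / (rho_f * d_p)

      # 500-micron sand fluidized by ambient air: roughly 0.2 m/s.
      print(u_mf(d_p=500e-6, rho_p=2600.0, rho_f=1.2, mu=1.8e-5))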

  8. DEVELOPMENT OF RESERVOIR CHARACTERIZATION TECHNIQUES AND PRODUCTION MODELS FOR EXPLOITING NATURALLY FRACTURED RESERVOIRS

    Energy Technology Data Exchange (ETDEWEB)

    Michael L. Wiggins; Raymon L. Brown; Faruk Civan; Richard G. Hughes

    2002-12-31

    For many years, geoscientists and engineers have undertaken research to characterize naturally fractured reservoirs. Geoscientists have focused on understanding the process of fracturing and the subsequent measurement and description of fracture characteristics. Engineers have concentrated on the fluid flow behavior in the fracture-porous media system and the development of models to predict the hydrocarbon production from these complex systems. This research attempts to integrate these two complementary views to develop a quantitative reservoir characterization methodology and flow performance model for naturally fractured reservoirs. The research has focused on estimating naturally fractured reservoir properties from seismic data, predicting fracture characteristics from well logs, and developing a naturally fractured reservoir simulator. It is important to develop techniques that can be applied to estimate the important parameters in predicting the performance of naturally fractured reservoirs. This project proposes a method to relate seismic properties to the elastic compliance and permeability of the reservoir based upon a sugar cube model. In addition, methods are presented to use conventional well logs to estimate localized fracture information for reservoir characterization purposes. The ability to estimate fracture information from conventional well logs is very important in older wells where data are often limited. Finally, a desktop naturally fractured reservoir simulator has been developed for the purpose of predicting the performance of these complex reservoirs. The simulator incorporates vertical and horizontal wellbore models, methods to handle matrix to fracture fluid transfer, and fracture permeability tensors. This research project has developed methods to characterize and study the performance of naturally fractured reservoirs that integrate geoscience and engineering data. This is an important step in developing exploitation strategies for

  9. Panel Stiffener Debonding Analysis using a Shell/3D Modeling Technique

    Science.gov (United States)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2008-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.

  10. Panel-Stiffener Debonding and Analysis Using a Shell/3D Modeling Technique

    Science.gov (United States)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2007-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.

  11. Improving the performance of streamflow forecasting model using data-preprocessing technique in Dungun River Basin

    Science.gov (United States)

    Khai Tiu, Ervin Shan; Huang, Yuk Feng; Ling, Lloyd

    2018-03-01

    An accurate streamflow forecasting model is important for the development of flood mitigation plans that ensure sustainable development of a river basin. This study adopted the Variational Mode Decomposition (VMD) data-preprocessing technique to process and denoise the rainfall data before feeding it into the Support Vector Machine (SVM) streamflow forecasting model, in order to improve the performance of the selected model. Rainfall data and river water level data for the period 1996-2016 were used for this purpose. Homogeneity tests (the Standard Normal Homogeneity Test, the Buishand Range Test, the Pettitt Test and the Von Neumann Ratio Test) and normality tests (the Shapiro-Wilk Test, Anderson-Darling Test, Lilliefors Test and Jarque-Bera Test) were carried out on the rainfall series. All stations were found to have homogeneous but non-normally distributed data. From the recorded rainfall data, it was observed that the Dungun River Basin receives higher monthly rainfall from November to February, during the Northeast Monsoon. Thus, the monthly and seasonal rainfall series of this monsoon are the main focus of this research, as floods usually happen during the Northeast Monsoon period. The water levels predicted by the SVM model were assessed against the observed water levels using non-parametric statistical tests (the Biased Method, Kendall's Tau B Test and Spearman's Rho Test).
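
    A hedged sketch of the forecasting stage: an RBF-kernel support vector regression on lagged, denoised rainfall. The VMD decomposition itself is summarized by a synthetic placeholder series here, and the hyper-parameters are illustrative, not the study's values:

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(3)
      rain = rng.gamma(2.0, 50.0, size=252)   # stands in for VMD-denoised rainfall

      lag = 3  # predict this month's water level from three months of rainfall
      X = np.column_stack([rain[i:len(rain) - lag + i] for i in range(lag)])
      y = 2.0 + 0.004 * X.sum(axis=1) + rng.normal(0, 0.1, len(X))  # synthetic levels

      svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
      svm.fit(X[:200], y[:200])
      print(svm.score(X[200:], y[200:]))   # R^2 on the held-out tail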

  12. Modelling desertification risk in the north-west of Jordan using geospatial and remote sensing techniques

    Directory of Open Access Journals (Sweden)

    Jawad T. Al-Bakri

    2016-03-01

    Full Text Available Remote sensing, climate, and ground data were used within a geographic information system (GIS) to map desertification risk in the north-west of Jordan. The approach was based on modelling wind and water erosion and incorporating the results with a map representing the severity of drought. Water erosion was modelled by the universal soil loss equation, while wind erosion was modelled by a dust emission model. The extent of drought was mapped using the evapotranspiration water stress index (EWSI), which incorporated actual and potential evapotranspiration. Output maps were assessed within GIS in terms of spatial patterns and the degree of correlation with soil surficial properties. Results showed that topography and soil together explained 75% of the variation in water erosion, while soil explained 25% of the variation in wind erosion, which was mainly controlled by the natural factors of topography and wind. Analysis of the EWSI map showed that drought risk dominated most of the rainfed areas. The combined effects of soil erosion and drought were reflected in the desertification risk map. The adoption of these geospatial and remote sensing techniques is, therefore, recommended for mapping desertification risk in Jordan and in similar arid environments.
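
    The water-erosion component, the universal soil loss equation, is the cell-wise product A = R·K·LS·C·P; a one-cell sketch with illustrative factor values (not those of the Jordan study):

      def usle(R, K, LS, C, P):
          """Mean annual soil loss A (t/ha/yr) from the USLE factor product."""
          return R * K * LS * C * P

      A = usle(R=120.0,  # rainfall erosivity
               K=0.32,   # soil erodibility
               LS=1.8,   # slope length-steepness
               C=0.25,   # cover management
               P=1.0)    # support practices
      print(A)           # ~17.3 t/ha/yr for these inputs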

  13. Modeling and Simulation of Voids in Composite Tape Winding Process Based on Domain Superposition Technique

    Science.gov (United States)

    Deng, Bo; Shi, Yaoyao

    2017-11-01

    The tape winding technology is an effective way to fabricate rotationally symmetric composite products. Nevertheless, some inevitable defects seriously influence the performance of winding products. One of the crucial ways to assess the quality of fiber-reinforced composite products is to examine their void content. Significant improvement in a product's mechanical properties can be achieved by minimizing void defects. Two methods were applied in this study, finite element analysis and experimental testing, to investigate the mechanism of void formation in the composite tape winding process. Based on the theories of interlayer intimate contact and the Domain Superposition Technique (DST), a three-dimensional model of prepreg tape voids was built in SolidWorks. ABAQUS simulation software was then used to simulate the change of void content with pressure and temperature. Finally, a series of experiments was performed to determine the accuracy of the model-based predictions. The results showed that the model is effective for predicting the void content in the composite tape winding process.

  14. Flap surgical techniques for incisional hernia recurrences. A swine experimental model.

    Science.gov (United States)

    Popa, Florina; Ardelean, Filip; Pestean, Cosmin; Purdoiu, Robert; Rosca, Oana; Georgescu, Alexandru

    2017-01-01

    In the age of synthetic prostheses, most hernia studies include a careful examination of the various types of prosthesis, their characteristics and their repair indications. Biological prostheses are also beginning to draw attention, but in terms of recurrence, especially in poor or developing countries, the discussion is different: their high cost makes them difficult to afford. In this article we present new flap reconstruction techniques for the reconstruction of the abdominal wall versus mesh repair, applied in swine models, outline the results of each technique, and specify the indications for their use. An experimental protocol using four swine models (PIC-FII-337 hybrid breed pigs), five months old, was conducted. All animal care and operative procedures followed the protocol approved by the Ethics Committee of the University of Medicine and Pharmacy (resolution no. 281/2014 of the Department of Surgery of the University of Agricultural Sciences and Veterinary Medicine); the study was carried out between November 2015 and February 2016. The primary objective was to compare the effect of surgical strategies in the treatment of abdominal wall defects using various flaps versus mesh repair in a large-animal model. Physical examination and ultrasound imaging of the abdominal wall repair were performed at predetermined intervals during one month. The complications occurring after the abdominal wall repair were edema, collections, superficial dehiscence and recurrences. No recurrences were reported at the one-month follow-up, and all reported seromas resolved over time by natural drainage. Superficial necrosis appeared in two swine models and superficial dehiscence occurred in one model, the perforator "plus" flap. Mesh infection was detected in the "onlay" swine model. In cases of recurrence, contaminated abdominal wall defects or other contraindications to the use of prosthetic materials, biological mesh repair or flap surgery are the only

  15. Mapping Tamarix: New techniques for field measurements, spatial modeling and remote sensing

    Science.gov (United States)

    Evangelista, Paul H.

    Native riparian ecosystems throughout the southwestern United States are being altered by the rapid invasion of Tamarix species, commonly known as tamarisk. The effects that tamarisk has on ecosystem processes have been poorly quantified, largely due to inadequate survey methods. I tested new approaches for field measurements, spatial models and remote sensing to improve our ability to measure and map tamarisk occurrence, and to provide new methods that will assist in management and control efforts. Examining allometric relationships between basal cover and height measurements collected in the field, I was able to produce several models to accurately estimate aboveground biomass. The two best models explained 97% of the variance (R2 = 0.97). Next, I tested five commonly used predictive spatial models to identify which methods performed best for tamarisk using different types of data collected in the field. Most spatial models performed well for tamarisk, with logistic regression performing best with an Area Under the receiver-operating characteristic Curve (AUC) of 0.89 and an overall accuracy of 85%. The results of this study also suggested that models may not perform equally with different invasive species, and that results may be influenced by species traits and their interaction with environmental factors. Lastly, I tested several approaches to improve the ability to remotely sense tamarisk occurrence. Using Landsat7 ETM+ satellite scenes and derived vegetation indices for six different months of the growing season, I examined their ability to detect tamarisk individually (single-scene analyses) and collectively (time-series). My results showed that time-series analyses were best suited to distinguish tamarisk from other vegetation and landscape features (AUC = 0.96, overall accuracy = 90%). June, August and September were the best months to detect unique phenological attributes that are likely related to the species' extended growing season and green-up during

  16. The effects of climate downscaling technique and observational data set on modeled ecological responses.

    Science.gov (United States)

    Pourmokhtarian, Afshin; Driscoll, Charles T; Campbell, John L; Hayhoe, Katharine; Stoner, Anne M K

    2016-07-01

    carefully considering field observations used for training, as well as the downscaling method used to generate climate change projections, for smaller-scale modeling studies. Different sources of variability, including the selection of AOGCM, emissions scenario, downscaling technique, and the data used for training downscaling models, result in a wide range of projected forest ecosystem responses to future climate change. © 2016 by the Ecological Society of America.

  17. Numerical model calibration with the use of an observed sediment mobility mapping technique.

    Science.gov (United States)

    Javernick, Luke; Redolfi, Marco; Bertoldi, Walter

    2017-04-01

    2 mm) and ii) a novel time-lapse imagery technique used to identify areas of incipient motion. Using the numerical model Delft3D Flow, the experiments were simulated and observed incipient motion and modeled shear stress were compared to evaluate the model's ability to accurately predict sediment transport. Observed and model results were evaluated and compared, which identified a motion threshold and the ability to evaluate the model's performance. To quantify model performance, the ratios of correctly predicted areas divided by total area were calculated and produced a 75% inundation accuracy with a 71% incipient motion accuracy. Inundation accuracies are comparable to reported field studies of braided rivers with highly accurate topographic acquisition. Nevertheless, 75% inundation accuracy is less than ideal, and likely suffers from the complicated topography, shallow water depth (average 1 cm), and the corresponding model's inaccuracies that could derive from even subtle 2 mm elevation errors. As shear stress calculations are dependent upon inundation and depth, the sediment transport accuracies likely suffer from the same issues. Regardless, the sediment transport accuracies are very comparable to inundation accuracies, which is an encouraging result. Marie Sklodowska-Curie Individual Fellowship: River-HMV, 656917
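
    The accuracy figures quoted are ratios of correctly predicted area to total area; for binary wet/dry or mobile/immobile rasters that is simply:

      import numpy as np

      def areal_accuracy(observed, predicted):
          """Share of cells where binary observed and predicted maps agree."""
          observed = np.asarray(observed, bool)
          predicted = np.asarray(predicted, bool)
          return float(np.mean(observed == predicted))

      obs = np.array([[1, 1, 0], [0, 1, 0]])    # e.g. observed incipient motion
      pred = np.array([[1, 0, 0], [0, 1, 1]])   # modeled shear stress > threshold
      print(areal_accuracy(obs, pred))          # 4 of 6 cells agree -> 0.667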

  18. Improving building energy modelling by applying advanced 3D surveying techniques on agri-food facilities

    Directory of Open Access Journals (Sweden)

    Francesco Barreca

    2017-09-01

    advanced surveying techniques, such as a terrestrial laser scanner and an infrared camera, it is possible to create a three-dimensional parametric model, while, thanks to the heat flow meter measurement method, it is also possible to obtain a thermophysical model. This model allows assessing the energy performance of agri-food buildings in order to improve the indoor microclimate control and the conditions of food processing and conservation.

  19. Improved Statistical Fault Detection Technique and Application to Biological Phenomena Modeled by S-Systems.

    Science.gov (United States)

    Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N

    2017-09-01

    In our previous work, we demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) with different window sizes. However, most real systems are nonlinear, so the linear PCA method cannot tackle the issue of nonlinearity to a great extent. Thus, in this paper, we first apply a nonlinear PCA to obtain an accurate principal component representation of the data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model. KPCA is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weighting), as this may further improve fault detection performance by reducing the FAR through an exponentially weighted moving average (EWMA). The developed detection method, called EWMA-GLRT, provides improved properties, such as smaller missed detection rates and FARs and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data information in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimation of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and applied in practice to improve fault detection in biological phenomena modeled by S-systems and to enhance the monitoring of the process mean. The idea behind a KPCA-based EWMA-GLRT fault detection algorithm is to
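
    A simplified sketch of the monitoring idea: KPCA is fit on fault-free data, and an EWMA-smoothed deviation statistic in score space is checked against a training-derived threshold. This compresses the GLRT step into a plain EWMA chart, so it is an illustration of the exponential weighting, not the paper's exact algorithm:

      import numpy as np
      from sklearn.decomposition import KernelPCA

      rng = np.random.default_rng(4)
      train = rng.normal(size=(300, 5))                        # fault-free data
      test = np.vstack([rng.normal(size=(100, 5)),             # normal, then a
                        rng.normal(1.0, 1.0, size=(100, 5))])  # mean-shift fault

      kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1).fit(train)
      d2 = np.sum(kpca.transform(test)**2, axis=1)   # distance in score space
      center = np.sum(kpca.transform(train)**2, axis=1).mean()

      lam, ewma = 0.2, np.zeros(len(d2))             # exponential weighting of
      for t in range(1, len(d2)):                    # recent residual deviations
          ewma[t] = lam * abs(d2[t] - center) + (1 - lam) * ewma[t - 1]

      limit = ewma[:100].mean() + 3 * ewma[:100].std()
      print(np.flatnonzero(ewma > limit)[:5])        # first flagged samples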

  20. Recognition of landforms from digital elevation models and satellite imagery with expert systems, pattern recognition and image processing techniques

    OpenAIRE

    Miliaresis, George

    2014-01-01

    Recognition of landforms from digital elevation models and satellite imagery with expert systems, pattern recognition and image processing techniques. PhD Thesis (Remote Sensing & Terrain Pattern Recognition), National Technical University of Athens, Dept. of Topography (2000).

  1. Spatial epidemiological techniques in cholera mapping and analysis towards a local scale predictive modelling

    International Nuclear Information System (INIS)

    Rasam, A R A; Ghazali, R; Noor, A M M; Mohd, W M N W; Hamid, J R A; Bazlan, M J; Ahmad, N

    2014-01-01

    Cholera spatial epidemiology is the study of the spread and control of the disease's spatial pattern and epidemics. Previous studies have shown that multi-factorial causation such as human behaviour, ecology and other infectious risk factors influence disease outbreaks. Thus, the spatial pattern and possible interrelated factors of the outbreaks are crucial to explore in an in-depth study. This study focuses on the integration of geographical information system (GIS) and epidemiological techniques in the exploratory analysis of the cholera spatial pattern and distribution in a selected district of Sabah. The Spatial Statistics and Pattern tools in ArcGIS and Microsoft Excel were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study, an epidemiological technique, was applied to investigate multiple outcomes of disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spreads easily at a place or from person to person, especially within 1500 meters of an infected person or location. Although the cholera outbreaks in the districts are not critical, the disease could become endemic in crowded areas, unhygienic environments, and places close to contaminated water. It is also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton blooms, since these areas recorded higher case counts. GIS demonstrates a vital spatial epidemiological capability for determining the distribution pattern and elucidating hypotheses about the generation of the disease. Future research will apply advanced geo-analysis methods and other disease risk factors to produce a significant local-scale predictive risk model of the disease in Malaysia.

  2. Spatial epidemiological techniques in cholera mapping and analysis towards a local scale predictive modelling

    Science.gov (United States)

    Rasam, A. R. A.; Ghazali, R.; Noor, A. M. M.; Mohd, W. M. N. W.; Hamid, J. R. A.; Bazlan, M. J.; Ahmad, N.

    2014-02-01

    Cholera spatial epidemiology is the study of the spread and control of the disease's spatial pattern and epidemics. Previous studies have shown that multi-factorial causation such as human behaviour, ecology and other infectious risk factors influence disease outbreaks. Thus, the spatial pattern and possible interrelated factors of the outbreaks are crucial to explore in an in-depth study. This study focuses on the integration of geographical information system (GIS) and epidemiological techniques in the exploratory analysis of the cholera spatial pattern and distribution in a selected district of Sabah. The Spatial Statistics and Pattern tools in ArcGIS and Microsoft Excel were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study, an epidemiological technique, was applied to investigate multiple outcomes of disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spreads easily at a place or from person to person, especially within 1500 meters of an infected person or location. Although the cholera outbreaks in the districts are not critical, the disease could become endemic in crowded areas, unhygienic environments, and places close to contaminated water. It is also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton blooms, since these areas recorded higher case counts. GIS demonstrates a vital spatial epidemiological capability for determining the distribution pattern and elucidating hypotheses about the generation of the disease. Future research will apply advanced geo-analysis methods and other disease risk factors to produce a significant local-scale predictive risk model of the disease in Malaysia.
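    Neither record lists the exact spatial statistics used, but a common test for this kind of clustering is the Clark-Evans nearest-neighbour ratio; a minimal sketch follows, with all case coordinates and study-area parameters invented.

```python
import numpy as np
from scipy.spatial import cKDTree

def clark_evans(points: np.ndarray, area: float) -> float:
    """Clark-Evans ratio R = observed mean NN distance / expected under CSR.

    R < 1 indicates clustering, R ~ 1 complete spatial randomness,
    R > 1 regularity. `points` is an (n, 2) array of case locations.
    """
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)      # k=2: nearest neighbour besides self
    observed = d[:, 1].mean()
    density = len(points) / area
    expected = 0.5 / np.sqrt(density)   # mean NN distance under randomness
    return observed / expected

# Synthetic clustered cases inside a 10 km x 10 km district.
rng = np.random.default_rng(2)
centres = rng.uniform(0, 10_000, size=(5, 2))
cases = np.vstack([c + rng.normal(0, 300, size=(40, 2)) for c in centres])
print(f"Clark-Evans R = {clark_evans(cases, area=10_000.0**2):.2f} (R < 1 => clustered)")
```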

  3. Modelling and analysis of ozone concentration by artificial intelligent techniques for estimating air quality

    Science.gov (United States)

    Taylan, Osman

    2017-02-01

    High ozone concentration is an important cause of air pollution, mainly due to its role in greenhouse gas emission. Ozone is produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower atmosphere. Therefore, monitoring and controlling the quality of air in the urban environment is very important for public health. However, air quality prediction is a highly complex and non-linear process; usually several attributes have to be considered. Artificial intelligence (AI) techniques can be employed to monitor and evaluate the ozone concentration level. The aim of this study is to develop an Adaptive Neuro-Fuzzy Inference System (ANFIS) approach to determine the influence of peripheral factors on air quality and pollution, an emerging problem due to ozone levels in Jeddah city. The ozone concentration level was considered as a factor to predict the Air Quality (AQ) under the given atmospheric conditions. Using the Air Quality Standards of Saudi Arabia, the ozone concentration level was modelled from factors such as nitrogen oxides (NOx), atmospheric pressure, temperature, and relative humidity. An ANFIS model was developed to observe the ozone concentration level, and the model performance was assessed with test data obtained from the monitoring stations established by the General Authority of Meteorology and Environment Protection of the Kingdom of Saudi Arabia. The outcomes of the ANFIS model were re-assessed by fuzzy quality charts using quality specification and control limits based on US-EPA air quality standards. The results of the present study show that the ANFIS model is a comprehensive approach for the estimation and assessment of ozone levels and a reliable approach for producing genuine outcomes.
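    ANFIS internals are not given in the abstract; the sketch below only shows the core of a zero-order Sugeno fuzzy inference step of the kind ANFIS trains: Gaussian memberships, rule firing strengths, and a weighted average of consequents. All memberships, rules, and consequent values are invented.

```python
import numpy as np

def gauss(x: float, c: float, s: float) -> float:
    """Gaussian membership value of x for a fuzzy set centred at c."""
    return float(np.exp(-0.5 * ((x - c) / s) ** 2))

def sugeno_ozone(nox: float, temp: float, rh: float) -> float:
    """Zero-order Sugeno inference with two illustrative rules.

    Inputs are assumed normalized to [0, 1]; the consequents (ppb) are
    invented purely to show rule firing and defuzzification mechanics.
    """
    # Rule 1: IF NOx is high AND temperature is high THEN ozone = 90 ppb
    w1 = gauss(nox, 0.8, 0.2) * gauss(temp, 0.8, 0.2)
    # Rule 2: IF NOx is low AND humidity is high THEN ozone = 30 ppb
    w2 = gauss(nox, 0.2, 0.2) * gauss(rh, 0.8, 0.2)
    # Defuzzification: firing-strength-weighted average of consequents.
    return (w1 * 90.0 + w2 * 30.0) / (w1 + w2)

print(f"predicted ozone: {sugeno_ozone(nox=0.7, temp=0.9, rh=0.3):.1f} ppb")
```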

  4. FUNCTIONAL MODEL OF THE MATERIAL RESOURCES MANAGEMENT FOR PROJECTS OF THE CREATION OF NEW TECHNIQUES

    Directory of Open Access Journals (Sweden)

    S. Yu. Danshyna

    2016-01-01

    Full Text Available The article is devoted to the problem of material management arising in the implementation of projects for the development and creation (modernization) of new techniques. The uniqueness of such projects and their limits on cost and time do not allow the use of traditional approaches to resource management. Such projects are often implemented in developing companies, where it is not possible to abandon traditional operational methods of management. The aim of the article is to formalize the process of material management in projects and to describe its information flows, in order to integrate it into project management practices and improve the efficiency of material management. To systematize the information arising in material resources management, a set-theoretic representation of the management process is proposed. In accordance with the requirements of project management standards, the sets were described and the rules of their transformation defined. Specification of the set-theoretic representation helped to establish the scope and limits of the modelled process. Further decomposition of the process became the basis of a functional model constructed in accordance with the IDEF0 methodology. A graphical representation of the model makes it possible to visualize the process at different levels of detail. To specify the issues related to the organization and promotion of material flow, functional models of the sub-processes were developed and the identified data flows were described. To harmonize the process and project approaches, conditions for evaluating the efficiency of material management were formulated. The developed models can serve as a basis for designing company structures, for regulating project activities, and for establishing an information system for managing project resources.

  5. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection †

    Science.gov (United States)

    Delaney, Declan T.; O’Hare, Gregory M. P.

    2016-01-01

    No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions. It aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared, and it offers a framework to achieve this within IoT networks. PMID:27916929
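    The framework's models are not published in the abstract, but the selection step it describes reduces to an argmax over per-solution performance predictions. A minimal sketch; the predictor signatures, environment attributes, and solution names below are hypothetical.

```python
from typing import Callable, Dict

Environment = Dict[str, float]          # e.g. {"node_count": 40, "link_quality": 0.7}
Predictor = Callable[[Environment], float]

def select_solution(models: Dict[str, Predictor], env: Environment) -> str:
    """Return the network solution whose performance model predicts the
    best (here: highest) value of the target QoS metric for this environment."""
    return max(models, key=lambda name: models[name](env))

# Hypothetical fitted models for three candidate routing/MAC stacks.
models: Dict[str, Predictor] = {
    "solution_A": lambda e: 0.9 - 0.004 * e["node_count"],
    "solution_B": lambda e: 0.6 + 0.3 * e["link_quality"],
    "solution_C": lambda e: 0.75,
}
env: Environment = {"node_count": 40, "link_quality": 0.7}
print("deploy:", select_solution(models, env))
```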

  6. Statistical techniques for modeling extreme price dynamics in the energy market

    International Nuclear Information System (INIS)

    Mbugua, L N; Mwita, P N

    2013-01-01

    Extreme events have a large impact throughout the span of engineering, science and economics, because they often lead to failure and losses, owing to the unobservable nature of extraordinary occurrences. In this context, this paper focuses on appropriate statistical methods combining a quantile regression approach with extreme value theory to model the excesses, which plays a vital role in risk management. Locally, nonparametric quantile regression is used, a method that is flexible and best suited when one knows little about the functional form of the object being estimated. Conditions are derived in order to estimate the extreme value distribution function. The threshold model of extreme values is used to circumvent the problem of inadequate observations at the tail of the distribution function. The application of a selection of these techniques is demonstrated on the volatile fuel market. The results indicate that the method used can extract the maximum possible reliable information from the data. The key attraction of this method is that it offers a set of ready-made approaches to the most difficult problem of risk modeling.
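    A minimal peaks-over-threshold sketch of the extreme-value ingredient the abstract mentions: fit a generalized Pareto distribution to exceedances over a high threshold and read off a tail quantile. The heavy-tailed synthetic series stands in for the fuel-price data; the threshold choice is illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
returns = stats.t.rvs(df=3, size=5000, random_state=rng)  # heavy-tailed stand-in

# Peaks over threshold: model exceedances above a high quantile.
u = np.quantile(returns, 0.95)
excesses = returns[returns > u] - u

# Fit a generalized Pareto distribution to the excesses (location fixed at 0).
shape, loc, scale = stats.genpareto.fit(excesses, floc=0)

# Tail quantile estimate, e.g. the 99.9% "price shock" level:
# P(X > x) = P(X > u) * (1 - F_GPD(x - u)).
p_exceed = np.mean(returns > u)
q = 0.999
level = u + stats.genpareto.ppf(1 - (1 - q) / p_exceed, shape, loc=0, scale=scale)
print(f"threshold u = {u:.2f}, shape xi = {shape:.2f}, 99.9% level = {level:.2f}")
```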

  7. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

    Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: new material on fault isolation and identification, and fault detection in feedback control loops; extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...

  8. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection.

    Science.gov (United States)

    Delaney, Declan T; O'Hare, Gregory M P

    2016-12-01

    No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions. It aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared, and it offers a framework to achieve this within IoT networks.

  9. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection

    Directory of Open Access Journals (Sweden)

    Declan T. Delaney

    2016-12-01

    Full Text Available No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions. It aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared, and it offers a framework to achieve this within IoT networks.

  10. Global ocean modeling and rendering techniques based on ellipsoid Rectangular grid mapping

    Science.gov (United States)

    Ma, W.; Wan, G.; Wang, L.; Li, W. J.

    2013-10-01

    Summary: In recent years, with the development of virtual reality and data acquisition technologies, the demand for GIS visualization has increased. Since the oceans occupy about 70 percent of the global area and form the base environment for visualization, global ocean visualization is particularly important in some applications. This paper studies global ocean visualization and modeling techniques under the framework of the WGS84 ellipsoid and achieves a method for rapid, photorealistic global ocean rendering. The main research works are as follows: 1. In height field modeling, statistical and spectral ocean-wave laws and the Phillips wave spectrum are used to produce a single height map that accounts for the impact of the wind field on wave magnitude; 2. Using the ellipsoid rectangular grid mapping relationship, the single height map produced above is mapped repeatedly onto the ellipsoid, achieving global ocean height field modeling; 3. By converting between the screen-space coordinate system and the rectangular spatial coordinate system, view-dependent sampling points on the ellipsoid are acquired; 4. Global bathymetric data are introduced and sampled rapidly on the GPU, so that sampling points carry transparency and depth values for processing the global ocean-land boundary.
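    The height-field step resembles the widely used spectral ocean synthesis approach (a Phillips spectrum sampled in Fourier space, followed by an inverse FFT). A minimal single-tile sketch under assumed parameters (grid size, wind speed, amplitude constant); it is not the paper's implementation, and for brevity it skips the Hermitian symmetrization that would make the field exactly real.

```python
import numpy as np

N, L = 256, 1000.0          # grid points per side, tile size in metres
g, V = 9.81, 10.0           # gravity, wind speed (m/s); wind blows along +x
A = 1e-7                    # overall spectrum amplitude (tuning constant)

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers in rad/m
kx, ky = np.meshgrid(k, k)
k_mag = np.hypot(kx, ky)
k_mag[0, 0] = 1.0           # avoid division by zero at the DC term

Lw = V * V / g              # largest waves generated by this wind speed
cos_wind = kx / k_mag       # alignment of each wave vector with the wind
phillips = (A * np.exp(-1.0 / (k_mag * Lw) ** 2) / k_mag**4) * cos_wind**2
phillips[0, 0] = 0.0        # no mean offset

# Random complex amplitudes with the Phillips spectrum as variance,
# then one inverse FFT yields a spatial height map (real part kept).
rng = np.random.default_rng(4)
xi = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
h = np.fft.ifft2(xi * np.sqrt(phillips / 2.0)).real * N * N

print(f"height map: {h.shape}, rms wave height = {h.std():.2f} m")
```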

  11. Modelling of Evaporator in Waste Heat Recovery System using Finite Volume Method and Fuzzy Technique

    Directory of Open Access Journals (Sweden)

    Jahedul Islam Chowdhury

    2015-12-01

    Full Text Available The evaporator is an important component in the Organic Rankine Cycle (ORC)-based Waste Heat Recovery (WHR) system, since the effective heat transfer of this device is reflected in the efficiency of the system. When the WHR system operates under supercritical conditions, the heat transfer mechanism in the evaporator is unpredictable due to the change of the thermo-physical properties of the fluid with temperature. Although the conventional finite volume model can successfully capture those changes in the evaporator of the WHR process, the computation time for this method is high. To reduce the computation time, this paper develops a new fuzzy based evaporator model and compares its performance with the finite volume method. The results show that the fuzzy technique can be applied to predict the output of the supercritical evaporator in the waste heat recovery system and can significantly reduce the required computation time. The proposed model, therefore, has the potential to be used in real time control applications.

  12. An Analysis Technique/Automated Tool for Comparing and Tracking Analysis Modes of Different Finite Element Models

    Science.gov (United States)

    Towner, Robert L.; Band, Jonathan L.

    2012-01-01

    An analysis technique was developed to compare and track mode shapes for different Finite Element Models. The technique may be applied to a variety of structural dynamics analyses, including model reduction validation (comparing unreduced and reduced models), mode tracking for various parametric analyses (e.g., launch vehicle model dispersion analysis to identify sensitivities to modal gain for Guidance, Navigation, and Control), comparing models of different mesh fidelity (e.g., a coarse model for a preliminary analysis compared to a higher-fidelity model for a detailed analysis) and mode tracking for a structure with properties that change over time (e.g., a launch vehicle from liftoff through end-of-burn, with propellant being expended during the flight). Mode shapes for different models are compared and tracked using several numerical indicators, including traditional Cross-Orthogonality and Modal Assurance Criteria approaches, as well as numerical indicators obtained by comparing modal strain energy and kinetic energy distributions. This analysis technique has been used to reliably identify correlated mode shapes for complex Finite Element Models that would otherwise be difficult to compare using traditional techniques. This improved approach also utilizes an adaptive mode tracking algorithm that allows for automated tracking when working with complex models and/or comparing a large group of models.
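    The abstract names the Modal Assurance Criterion among its indicators; the standard MAC computation, and its use to pair modes between two models, can be sketched as below, with random mode shapes standing in for FEM output.

```python
import numpy as np

def mac(phi_a: np.ndarray, phi_b: np.ndarray) -> np.ndarray:
    """Modal Assurance Criterion matrix between two mode-shape sets.

    phi_a: (n_dof, m_a) mode shapes of model A; phi_b: (n_dof, m_b) of model B.
    MAC[i, j] = |phi_a_i . phi_b_j|^2 / (|phi_a_i|^2 |phi_b_j|^2), in [0, 1],
    with 1 indicating perfectly correlated shapes.
    """
    num = np.abs(phi_a.T @ phi_b) ** 2
    den = np.outer(np.sum(phi_a**2, axis=0), np.sum(phi_b**2, axis=0))
    return num / den

rng = np.random.default_rng(5)
phi_full = rng.normal(size=(500, 8))     # stand-in for full-model mode shapes
# Reduced model: a reordered, slightly perturbed subset of the full modes.
phi_red = phi_full[:, [1, 0, 2, 3]] + 0.05 * rng.normal(size=(500, 4))

m = mac(phi_full, phi_red)
pairs = m.argmax(axis=0)                 # best full-model match per reduced mode
print("reduced mode -> full mode:", dict(enumerate(pairs)))
```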

  13. Pressure Measurement Techniques for Abdominal Hypertension: Conclusions from an Experimental Model.

    Science.gov (United States)

    Chopra, Sascha Santosh; Wolf, Stefan; Rohde, Veit; Freimann, Florian Baptist

    2015-01-01

    Introduction. Intra-abdominal pressure (IAP) measurement is an indispensable tool for the diagnosis of abdominal hypertension. Different techniques have been described in the literature and applied in the clinical setting. Methods. A porcine model was created to simulate an abdominal compartment syndrome ranging from baseline IAP to 30 mmHg. Three different measurement techniques were applied, comprising telemetric piezoresistive probes at two different sites (epigastric and pelvic) for direct pressure measurement and intragastric and intravesical probes for indirect measurement. Results. The mean difference between the invasive IAP measurements using telemetric pressure probes and the IVP measurements was -0.58 mmHg. The bias between the invasive IAP measurements and the IGP measurements was 3.8 mmHg. Compared to the realistic results of the intraperitoneal and intravesical measurements, the intragastric data showed a strong tendency towards decreased values. The hydrostatic character of the IAP was eliminated at high-pressure levels. Conclusion. We conclude that intragastric pressure measurement is potentially hazardous and might lead to inaccurately low intra-abdominal pressure values. This may result in missed diagnosis of elevated abdominal pressure or even ACS. The intravesical measurements showed the most accurate values during baseline pressure and both high-pressure plateaus.

  14. The Potential for Zinc Stable Isotope Techniques and Modelling to Determine Optimal Zinc Supplementation

    Science.gov (United States)

    Tran, Cuong D.; Gopalsamy, Geetha L.; Mortimer, Elissa K.; Young, Graeme P.

    2015-01-01

    It is well recognised that zinc deficiency is a major global public health issue, particularly in young children in low-income countries with diarrhoea and environmental enteropathy. Zinc supplementation is regarded as a powerful tool to correct zinc deficiency as well as to treat a variety of physiologic and pathologic conditions. However, the dose and frequency of its use as well as the choice of zinc salt are not clearly defined, regardless of whether it is used to treat a disease or correct a nutritional deficiency. We discuss the application of zinc stable isotope tracer techniques to assess zinc physiology, metabolism and homeostasis and how these can address knowledge gaps in zinc supplementation pharmacokinetics. This may help to resolve the optimal dose, frequency, length of administration, timing of delivery relative to food intake and choice of zinc compound. It appears that long-term preventive supplementation can be administered much less frequently than daily, but more research needs to be undertaken to better understand how best to intervene with zinc in children at risk of zinc deficiency. Stable isotope techniques, linked with saturation response and compartmental modelling, also have the potential to assist in the continued search for simple markers of zinc status in health, malnutrition and disease. PMID:26035248

  15. Techniques and Technology to Revise Content Delivery and Model Critical Thinking in the Neuroscience Classroom.

    Science.gov (United States)

    Illig, Kurt R

    2015-01-01

    Undergraduate neuroscience courses typically involve highly interdisciplinary material, and it is often necessary to use class time to review how principles of chemistry, math and biology apply to neuroscience. Lecturing and Socratic discussion can work well to deliver information to students, but these techniques can lead students to feel more like spectators than participants in a class, and do not actively engage students in the critical analysis and application of experimental evidence. If one goal of undergraduate neuroscience education is to foster critical thinking skills, then the classroom should be a place where students and instructors can work together to develop them. Students learn how to think critically by directly engaging with course material, and by discussing evidence with their peers, but taking classroom time for these activities requires that an instructor find a way to provide course materials outside of class. Using technology as an on-demand provider of course materials can give instructors the freedom to restructure classroom time, allowing students to work together in small groups and to have discussions that foster critical thinking, and allowing the instructor to model these skills. In this paper, I provide a rationale for reducing the use of traditional lectures in favor of more student-centered activities, I present several methods that can be used to deliver course materials outside of class and discuss their use, and I provide a few examples of how these techniques and technologies can help improve learning outcomes.

  16. Pencilbeam irradiation technique for whole brain radiotherapy: technical and biological challenges in a small animal model.

    Science.gov (United States)

    Schültke, Elisabeth; Trippel, Michael; Bräuer-Krisch, Elke; Renier, Michel; Bartzsch, Stefan; Requardt, Herwig; Döbrössy, Máté D; Nikkhah, Guido

    2013-01-01

    We have conducted the first in-vivo experiments in pencilbeam irradiation, a new synchrotron radiation technique based on the principle of microbeam irradiation, a concept of spatially fractionated high-dose irradiation. In an animal model of adult C57 BL/6J mice, we determined the technical and physiological limitations of the present setup of the technique. Fifty-eight animals were distributed among eleven experimental groups, ten of which received whole brain radiotherapy with arrays of 50 µm wide beams. We tested peak doses ranging between 172 Gy and 2,298 Gy at 3 mm depth. Animals in five groups received whole brain radiotherapy with a center-to-center (ctc) distance of 200 µm and a peak-to-valley dose ratio (PVDR) of ∼ 100; in the other five groups the ctc was 400 µm (PVDR ∼ 400). Motor and memory abilities were assessed during a six-month observation period following irradiation. The lower dose limit, determined by the technical equipment, was 172 Gy. The LD50 was about 1,164 Gy for a ctc of 200 µm and higher than 2,298 Gy for a ctc of 400 µm. Age-dependent loss in motor and memory performance was seen in all groups. Better overall performance (close to that of healthy controls) was seen in the groups irradiated with a ctc of 400 µm.

  17. The Potential for Zinc Stable Isotope Techniques and Modelling to Determine Optimal Zinc Supplementation

    Directory of Open Access Journals (Sweden)

    Cuong D. Tran

    2015-05-01

    Full Text Available It is well recognised that zinc deficiency is a major global public health issue, particularly in young children in low-income countries with diarrhoea and environmental enteropathy. Zinc supplementation is regarded as a powerful tool to correct zinc deficiency as well as to treat a variety of physiologic and pathologic conditions. However, the dose and frequency of its use as well as the choice of zinc salt are not clearly defined, regardless of whether it is used to treat a disease or correct a nutritional deficiency. We discuss the application of zinc stable isotope tracer techniques to assess zinc physiology, metabolism and homeostasis and how these can address knowledge gaps in zinc supplementation pharmacokinetics. This may help to resolve the optimal dose, frequency, length of administration, timing of delivery relative to food intake and choice of zinc compound. It appears that long-term preventive supplementation can be administered much less frequently than daily, but more research needs to be undertaken to better understand how best to intervene with zinc in children at risk of zinc deficiency. Stable isotope techniques, linked with saturation response and compartmental modelling, also have the potential to assist in the continued search for simple markers of zinc status in health, malnutrition and disease.

  18. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2016-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogeneous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogeneous thick objects. An example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rock. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.

  19. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2015-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogeneous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogeneous thick objects. An example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rock. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.
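    The papers' acoustic forward model is not spelled out in these abstracts, but the generic MBIR formulation they build on (minimize a data-fidelity term plus a regularizing prior by iterative updates) can be sketched with an assumed linear forward operator and a quadratic smoothness prior:

```python
import numpy as np

def mbir(A: np.ndarray, y: np.ndarray, lam: float = 0.1,
         step: float = 0.1, iters: int = 1000) -> np.ndarray:
    """Model-based iterative reconstruction by gradient descent on
    0.5 * ||y - A x||^2 + 0.5 * lam * ||D x||^2, where D penalizes
    differences between neighbouring pixels (a smoothness prior)."""
    n = A.shape[1]
    D = np.eye(n) - np.eye(n, k=1)          # simple 1-D finite-difference prior
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam * (D.T @ D @ x)
        x -= step * grad
    return x

# Synthetic test: a known profile observed through a random "measurement" matrix.
rng = np.random.default_rng(6)
x_true = np.zeros(64)
x_true[20:40] = 1.0
A = rng.normal(size=(128, 64)) / 8.0
y = A @ x_true + 0.01 * rng.normal(size=128)

x_hat = mbir(A, y)
print(f"relative error: {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```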

  20. Development of self-learning Monte Carlo technique for more efficient modeling of nuclear logging measurements

    International Nuclear Information System (INIS)

    Zazula, J.M.

    1988-01-01

    The self-learning Monte Carlo technique has been implemented in the commonly used general-purpose neutron transport code MORSE, in order to enhance sampling of the particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. splitting, Russian roulette, source and collision outgoing-energy importance sampling, path length transformation and additional biasing of the source angular distribution, are optimized. The learning process is performed iteratively after each batch of particles, by retrieving the data concerning the subset of histories that passed the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance, and the convergence of the algorithm is restricted by the statistics of successful histories from previous random walks. Further applications to the modeling of nuclear logging measurements seem promising. 11 refs., 2 figs., 3 tabs. (author)
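    Two of the MORSE biasing games the abstract optimizes, splitting and Russian roulette, can be illustrated independently of any transport physics with a simple weight-window routine; the window bounds below are arbitrary choices, not MORSE parameters.

```python
import random

def weight_window(particles, w_low=0.25, w_high=4.0, survival_w=1.0):
    """Apply splitting and Russian roulette to (weight, state) particles.

    Heavy particles (w > w_high) are split into copies of smaller weight;
    light ones (w < w_low) survive with probability w / survival_w and are
    promoted to survival_w, keeping the estimator unbiased in expectation.
    """
    out = []
    for w, state in particles:
        if w > w_high:                       # splitting
            n = int(w / w_high) + 1
            out.extend([(w / n, state)] * n)
        elif w < w_low:                      # Russian roulette
            if random.random() < w / survival_w:
                out.append((survival_w, state))
            # else: the particle is killed
        else:
            out.append((w, state))
    return out

random.seed(7)
bank = [(8.0, "A"), (0.05, "B"), (1.0, "C")]
print(weight_window(bank))
```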

  1. Integration of computational modeling and experimental techniques to design fuel surrogates

    DEFF Research Database (Denmark)

    Choudhury, H.A.; Intikhab, S.; Kalakul, Sawitree

    2017-01-01

    Virtual Process-Product Design Laboratory (VPPD-Lab) are applied onto the defined compositions of the surrogate gasoline. The aim is to primarily verify the defined composition of gasoline by means of VPPD-Lab. ρ, η and RVP are calculated with more accuracy and constraints such as distillation curve...... and flash point on the blend design are also considered. A post-design experiment-based verification step is proposed to further improve and fine-tune the “best” selected gasoline blends following the computation work. Here, advanced experimental techniques are used to measure the RVP, ρ, η, RON...... and distillation temperatures. The experimental results are compared with the model predictions as well as the extended calculations in VPPD-Lab....

  2. 3D Modeling of a Bazaar in Ancient Harran City Using Laser Scanning Technique

    Science.gov (United States)

    Senol, H. I.; Erdogan, S.; Onal, M.; Ulukavak, M.; Memduhoglu, A.; Mutlu, S.; Ernst, F. B.; Yilmaz, M.

    2017-11-01

    Turkey is a country rich in historical monuments. The work was carried out in the district of Harran, Şanlıurfa province, an ancient city where many ruins, including the world's first university, can be found. Considering the climate and the sensitive structure of the studied region, 3D modeling is a suitable technique. By means of such work, reconstructions that can show the former state of the region become possible at a later point in time. Should the historical site be destroyed in any way, the survey will be useful as a visual and digital record. Then, when the site has to be restored, the data can be used as a basis for realistic restoration projects.

  3. Model-based orientation-independent 3-D machine vision techniques

    Science.gov (United States)

    De Figueiredo, R. J. P.; Kehtarnavaz, N.

    1988-01-01

    Orientation-independent techniques for the identification of a three-dimensional object by a machine vision system are presented in two parts. In the first part, the data consist of intensity images of polyhedral objects obtained by a single camera, while in the second part, the data consist of range images of curved objects obtained by a laser scanner. In both cases, an attributed graph representation of the object surface is used to drive the respective algorithm. In this representation, a graph node represents a surface patch and a link represents the adjacency between two patches. For polyhedral objects, the attributes assigned to nodes are moment invariants of the corresponding face. For range images, the Gaussian curvature is used as a segmentation criterion for providing symbolic shape attributes. Identification is achieved by an efficient graph-matching algorithm used to match the graph obtained from the data to a subgraph of one of the model graphs stored in the computer memory.

  4. A review of fatigue crack propagation modelling techniques using FEM and XFEM

    Science.gov (United States)

    Rege, K.; Lemu, H. G.

    2017-12-01

    Fatigue is one of the main causes of failures in mechanical and structural systems. Offshore installations, in particular, are susceptible to fatigue failure due to their exposure to the combination of wind loads, wave loads and currents. In order to assess the safety of the components of these installations, the expected lifetime of the component needs to be estimated. The fatigue life is the sum of the number of loading cycles required for a fatigue crack to initiate, and the number of cycles required for the crack to propagate before sudden fracture occurs. Since analytical determination of the fatigue crack propagation life in real geometries is rarely viable, crack propagation problems are normally solved using some computational method. In this review the use of the finite element method (FEM) and the extended finite element method (XFEM) to model fatigue crack propagation is discussed. The basic techniques are presented, together with some of the recent developments.
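    Whichever method (FEM or XFEM) supplies the stress intensity factors, the propagation life itself is typically integrated with the Paris law da/dN = C(ΔK)^m. A minimal cycle-count sketch for a centre crack under constant stress range follows; the material constants and geometry factor are merely illustrative.

```python
import math

def paris_life(a0, ac, C, m, d_sigma, Y=1.0, da=1e-5):
    """Integrate Paris' law da/dN = C * (dK)^m from crack size a0 to ac.

    dK = Y * d_sigma * sqrt(pi * a) (centre crack, geometry factor Y).
    Simple midpoint integration over crack increments of size da (metres).
    """
    cycles, a = 0.0, a0
    while a < ac:
        dK = Y * d_sigma * math.sqrt(math.pi * (a + da / 2))  # MPa*sqrt(m)
        cycles += da / (C * dK**m)
        a += da
    return cycles

# Illustrative steel-like constants (C in (m/cycle)/(MPa*sqrt(m))^m).
N = paris_life(a0=1e-3, ac=20e-3, C=1e-11, m=3.0, d_sigma=100.0)
print(f"propagation life ~ {N:,.0f} cycles")
```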

  5. Model-Based Data Integration and Process Standardization Techniques for Fault Management: A Feasibility Study

    Science.gov (United States)

    Haste, Deepak; Ghoshal, Sudipto; Johnson, Stephen B.; Moore, Craig

    2018-01-01

    This paper describes the theory and considerations in the application of model-based techniques to assimilate information from disjoint knowledge sources for performing NASA's Fault Management (FM)-related activities using the TEAMS® toolset. FM consists of the operational mitigation of existing and impending spacecraft failures. NASA's FM directives have both design-phase and operational-phase goals. This paper highlights recent studies by QSI and DST of the capabilities required in the TEAMS® toolset for conducting FM activities with the aim of reducing operating costs, increasing autonomy, and conforming to time schedules. These studies use and extend the analytic capabilities of QSI's TEAMS® toolset to conduct a range of FM activities within a centralized platform.

  6. Effective field theory with differential operator technique for dynamic phase transition in ferromagnetic Ising model

    International Nuclear Information System (INIS)

    Kinoshita, Takehiro; Fujiyama, Shinya; Idogaki, Toshihiro; Tokita, Masahiko

    2009-01-01

    The non-equilibrium phase transition in a ferromagnetic Ising model is investigated by use of a new type of effective field theory (EFT) which correctly accounts for all the single-site kinematic relations by the differential operator technique. In the presence of a time-dependent oscillating external field, with decreasing temperature the system undergoes a dynamic phase transition, characterized by the period-averaged magnetization Q, from a dynamically disordered state Q = 0 to a dynamically ordered state Q ≠ 0. The dynamic phase transition point Tc determined from the behavior of the dynamic magnetization and the Lyapunov exponent provided by EFT is improved over that of the standard mean field theory (MFT), especially for the one-dimensional lattice, where the standard MFT gives the incorrect result Tc = 0 even in the case of zero external field.

  7. Dynamic P-Technique for Modeling Patterns of Data: Applications to Pediatric Psychology Research

    Science.gov (United States)

    Aylward, Brandon S.; Rausch, Joseph R.

    2011-01-01

    Objective Dynamic p-technique (DPT) is a potentially useful statistical method for examining relationships among dynamic constructs in a single individual or small group of individuals over time. The purpose of this article is to offer a nontechnical introduction to DPT. Method An overview of DPT analysis, with an emphasis on potential applications to pediatric psychology research, is provided. To illustrate how DPT might be applied, an example using simulated data is presented for daily pain and negative mood ratings. Results The simulated example demonstrates the application of DPT to a relevant pediatric psychology research area. In addition, the potential application of DPT to the longitudinal study of adherence is presented. Conclusion Although it has not been utilized frequently within pediatric psychology, DPT could be particularly well-suited for research in this field because of its ability to powerfully model repeated observations from very small samples. PMID:21486938

  8. Analysis of arbitrary defects in photonic crystals by use of the source-model technique.

    Science.gov (United States)

    Ludwig, Alon; Leviatan, Yehuda

    2004-07-01

    A novel method derived from the source-model technique is presented to solve the problem of scattering of an electromagnetic plane wave by a two-dimensional photonic crystal slab that contains an arbitrary defect (perturbation). In this method, the electromagnetic fields in the perturbed problem are expressed in terms of the field due to the periodic currents obtained from a solution of the corresponding unperturbed problem plus the field due to yet-to-be-determined correction current sources placed in the vicinity of the perturbation. Appropriate error measures are suggested, and a few representative structures are presented and analyzed to demonstrate the versatility of the proposed method and to provide physical insight into waveguiding and defect coupling mechanisms typical of finite-thickness photonic crystal slabs.

  9. Source-model technique analysis of electromagnetic scattering by surface grooves and slits.

    Science.gov (United States)

    Trotskovsky, Konstantin; Leviatan, Yehuda

    2011-04-01

    A computational tool, based on the source-model technique (SMT), for analysis of electromagnetic wave scattering by surface grooves and slits is presented. The idea is to use a superposition of the solution of the unperturbed problem and local corrections in the groove/slit region (the grooves and slits are treated as perturbations). In this manner, the solution is obtained in a much faster way than solving the original problem. The proposed solution is applied to problems of grooves and slits in otherwise planar or periodic surfaces. Grooves and slits of various shapes, both smooth ones as well as ones with edges, empty or filled with dielectric material, are considered. The obtained results are verified against previously published data. © 2011 Optical Society of America

  10. Suppression of Spiral Waves by Voltage Clamp Techniques in a Conductance-Based Cardiac Tissue Model

    International Nuclear Information System (INIS)

    Lian-Chun, Yu; Guo-Yong, Zhang; Yong, Chen; Jun, Ma

    2008-01-01

    A new control method is proposed to control the spatio-temporal dynamics in excitable media, described by the Morris–Lecar cell model. It is confirmed that successful suppression of spiral waves can be obtained by spatially clamping the membrane voltage of the excitable cells. The low voltage clamping induces breakup of spiral waves, and the fragments are soon absorbed by low voltage obstacles, whereas the high voltage clamping generates travelling waves that annihilate spiral waves through collision with them. However, each method has its shortcomings. Furthermore, a two-step method that combines both low and high voltage clamp techniques is then presented as a possible way out of this predicament. (cross-disciplinary physics and related areas of science and technology)

  11. Developing material for promoting problem-solving ability through bar modeling technique

    Science.gov (United States)

    Widyasari, N.; Rosiyanti, H.

    2018-01-01

    This study aimed at developing material for enhancing problem-solving ability through the bar modeling technique with thematic learning. Polya’s steps of problem-solving were chosen as the basis of the study. The method of the study was research and development. The subjects of this study were fifteen students of the fifth grade of the Lab-school FIP UMJ elementary school. Expert review and analysis of students’ responses were used to collect the data. Furthermore, the data were analyzed using qualitative descriptive and quantitative methods. The findings showed that the material in the theme “Selalu Berhemat Energi” was categorized as valid and practical. Validity was measured using the aspects of language, content, and graphics. Based on the expert comments, the materials were easy to implement in the teaching-learning process. In addition, the students’ responses showed that the material was both interesting and easy to understand. Thus, students gained more understanding in learning problem-solving.

  12. Assess and Predict Automatic Generation Control Performances for Thermal Power Generation Units Based on Modeling Techniques

    Science.gov (United States)

    Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao

    2018-02-01

    Automatic generation control (AGC) is a key technology for maintaining the real-time balance between power generation and load, and for ensuring the quality of power supply. Power grids require each power generation unit to have satisfactory AGC performance, as specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of a power generation unit. However, the commonly used method to calculate these indices is based on particular data samples from AGC responses and can lead to incorrect results in practice. This paper proposes a new method to estimate the AGC performance indices via system identification techniques. In addition, a nonlinear regression model between the performance indices and the load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.
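    The second step the abstract describes, fitting a nonlinear regression between a performance index and the load command, can be sketched with an ordinary polynomial fit; the data, polynomial degree, and index definition below are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
load_cmd = rng.uniform(0.4, 1.0, 200)                 # per-unit load commands
# Synthetic performance index with a nonlinear dependence plus noise.
index = 0.9 - 0.6 * (load_cmd - 0.7) ** 2 + 0.02 * rng.normal(size=200)

coeffs = np.polyfit(load_cmd, index, deg=2)           # quadratic regression model
predict = np.poly1d(coeffs)
print(f"predicted AGC index at 85% load: {predict(0.85):.3f}")
```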

  13. Dynamic p-technique for modeling patterns of data: applications to pediatric psychology research.

    Science.gov (United States)

    Nelson, Timothy D; Aylward, Brandon S; Rausch, Joseph R

    2011-10-01

    Dynamic p-technique (DPT) is a potentially useful statistical method for examining relationships among dynamic constructs in a single individual or small group of individuals over time. The purpose of this article is to offer a nontechnical introduction to DPT. An overview of DPT analysis, with an emphasis on potential applications to pediatric psychology research, is provided. To illustrate how DPT might be applied, an example using simulated data is presented for daily pain and negative mood ratings. The simulated example demonstrates the application of DPT to a relevant pediatric psychology research area. In addition, the potential application of DPT to the longitudinal study of adherence is presented. Although it has not been utilized frequently within pediatric psychology, DPT could be particularly well-suited for research in this field because of its ability to powerfully model repeated observations from very small samples.

  14. 3-D thermo-mechanical laboratory modeling of plate-tectonics: modeling scheme, technique and first experiments

    Directory of Open Access Journals (Sweden)

    D. Boutelier

    2011-05-01

    Full Text Available We present an experimental apparatus for 3-D thermo-mechanical analogue modeling of plate tectonic processes such as oceanic and continental subductions, arc-continent or continental collisions. The model lithosphere, made of temperature-sensitive elasto-plastic analogue materials with strain softening, is submitted to a constant temperature gradient causing a strength reduction with depth in each layer. The surface temperature is imposed using infrared emitters, which allows maintaining an unobstructed view of the model surface and the use of a high resolution optical strain monitoring technique (Particle Imaging Velocimetry). Subduction experiments illustrate how the stress conditions on the interplate zone can be estimated using a force sensor attached to the back of the upper plate and adjusted via the density and strength of the subducting lithosphere or the lubrication of the plate boundary. The first experimental results reveal the potential of the experimental set-up to investigate the three-dimensional solid-mechanics interactions of lithospheric plates in multiple natural situations.

  15. Lattice Boltzmann flow simulations with applications of reduced order modeling techniques

    KAUST Repository

    Brown, Donald

    2014-01-01

    With the recent interest in shale gas, an understanding of the flow mechanisms at the pore scale and beyond is necessary, which has attracted a lot of interest from both industry and academia. One of the suggested algorithms to help understand flow in such reservoirs is the Lattice Boltzmann Method (LBM). The primary advantage of LBM is its ability to approximate complicated geometries with simple algorithmic modifications. In this work, we use LBM to simulate the flow in a porous medium. More specifically, we use LBM to simulate a Brinkman type flow. The Brinkman law allows us to integrate fast free-flow and slow-flow porous regions. However, due to the many scales involved and the complex heterogeneities of the rock microstructure, the simulation times can be long, even with the speed advantage of an explicit time-stepping method. The problem is two-fold: the computational grid must be able to resolve all scales, and the calculation requires a steady-state solution, implying a large number of timesteps. To help reduce the computational complexity and total simulation time, we use model reduction techniques to reduce the dimension of the system. In this approach, we are able to describe the dynamics of the flow by using a lower dimensional subspace. In this work, we utilize the Proper Orthogonal Decomposition (POD) technique to compute the dominant modes of the flow and project the solution onto them (a lower dimensional subspace) to arrive at an approximation of the full system at lower computational cost. We present a few proof-of-concept examples of the flow field and the corresponding reduced model flow field.
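    The reduction step described here is standard POD: collect solution snapshots, take an SVD, and project onto the leading modes. A minimal sketch with synthetic snapshots in place of LBM output:

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, energy: float = 0.99) -> np.ndarray:
    """Return the POD basis capturing `energy` of the snapshot variance.

    snapshots: (n_dof, n_snap) matrix whose columns are flow states
    saved at successive time steps.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1   # modes needed for the target energy
    return U[:, :r]

rng = np.random.default_rng(9)
x = np.linspace(0, 1, 400)
t = np.linspace(0, 1, 60)
# Synthetic "flow": two travelling structures plus small noise.
snaps = (np.sin(2 * np.pi * (x[:, None] - t))
         + 0.3 * np.cos(6 * np.pi * (x[:, None] + 0.5 * t))
         + 0.01 * rng.normal(size=(400, 60)))

Phi = pod_basis(snaps)                 # reduced basis
a = Phi.T @ snaps                      # modal coefficients (reduced state)
recon = Phi @ a                        # lift back to full space
err = np.linalg.norm(recon - snaps) / np.linalg.norm(snaps)
print(f"{Phi.shape[1]} modes, reconstruction error {err:.1%}")
```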

  16. Application of the weighted total field-scattering field technique to 3D-PSTD light scattering model

    Science.gov (United States)

    Hu, Shuai; Gao, Taichang; Liu, Lei; Li, Hao; Chen, Ming; Yang, Bo

    2018-04-01

    PSTD (Pseudo Spectral Time Domain) is an excellent model for the light scattering simulation of nonspherical aerosol particles. However, due to the particular discretization form of the Maxwell equations, the traditional Total Field/Scattering Field (TF/SF) technique for FDTD (Finite Difference Time Domain) is not applicable to PSTD, and the time-consuming pure scattering field technique is mainly applied to introduce the incident wave. To this end, the weighted TF/SF technique proposed by X. Gao is generalized and applied to the 3D-PSTD scattering model. Using this technique, the incident light can be effectively introduced by modifying the electromagnetic components in an inserted connecting region between the total field and scattering field regions with incident terms, where the incident terms are obtained by weighting the incident field with a window function. To optimally determine the thickness of the connecting region and the window function type for PSTD calculations, their influence on the modeling accuracy is first analyzed. To further verify the effectiveness and advantages of the weighted TF/SF technique, the improved PSTD model is validated against the PSTD model equipped with the pure scattering field technique in both calculation accuracy and efficiency. The results show that the performance of PSTD is not sensitive to the choice of window function. The number of connecting layers required decreases with increasing spatial resolution; for a spatial resolution of 24 grids per wavelength, a 6-layer region is thick enough. The scattering phase matrices and integral scattering parameters obtained by the improved PSTD show excellent consistency with those of well-tested models for spherical and nonspherical particles, illustrating that the weighted TF/SF technique can introduce the incident wave precisely. The weighted TF/SF technique also shows higher computational efficiency than the pure scattering field technique.

  17. The feasibility of using particle-in-cell technique for modeling liquid crystal devices

    Science.gov (United States)

    Leung, Wing Ching

    1997-12-01

    Liquid crystal materials are used in a variety of electronic devices, but the dynamical behavior of such devices is still not clearly understood. The primary reason for this is a lack of rigorous models for a theoretical treatment of the devices. For the first time we demonstrate here the feasibility of treating such liquid crystal devices using the particle-in-cell (PIC) simulation technique. In a PIC simulation model, the liquid crystal medium is represented by a certain number of particles, each particle representing a fairly large number of real molecules in the medium. The dynamical behavior of the medium is obtained by solving the equations of motion of the particles in self-consistent appropriate fields. The motions may include both translation (flow) and rotation. By neglecting the translational motion and the conductive effects of the materials in this first innovative effort, 1- and 2-dimensional PIC codes, including viscous, electric, and elastic torques affecting the rotation of molecules, are developed for modeling a parallel-plate capacitor cell filled with a nematic liquid crystal material. Using these codes we reproduce the well known phenomenon of Fredericksz transition. Our simulations yield results in excellent agreement with the available analytical results on the threshold voltage for the Fredericksz transition. In addition, we are able to reveal the dynamical behavior of liquid crystal molecules in the device as the material undergoes the Fredericksz transition in response to an applied voltage across the liquid crystal cell. By using the 2-dimensional PIC code to simulate a segmented electrode cell, like those used in liquid crystal devices for optical gratings, the simulation results show the formation and dynamics of the defect walls. The defect wall joins the regions of the material having topologically different orientations. We found that the defect wall is associated with a large gradient in the polarization vector P

  18. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION

    Science.gov (United States)

    Holst, Michael; McCammon, James Andrew; Yu, Zeyun; Zhou, Youngcheng; Zhu, Yunrong

    2011-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme

  19. Reducing the impact of a desalination plant using stochastic modeling and optimization techniques

    Science.gov (United States)

    Alcolea, Andres; Renard, Philippe; Mariethoz, Gregoire; Bertone, François

    2009-02-01

    Water is critical for economic growth in coastal areas. In this context, desalination has become an increasingly important technology over the last five decades. It often has environmental side effects, especially when the input water is pumped directly from the sea via intake pipelines. However, it is generally more efficient and cheaper to desalt brackish groundwater from beach wells than to desalt seawater. Natural attenuation is also gained and hazards due to anthropogenic pollution of seawater are reduced. In order to minimize allocation and operational costs and impacts on groundwater resources, an optimum pumping network is required. Optimization techniques are often applied to this end. Because of aquifer heterogeneity, designing the optimum pumping network demands reliable characterizations of aquifer parameters. An optimum pumping network in a coastal aquifer in Oman, where a desalination plant currently pumps brackish groundwater at a rate of 1200 m³/h for a freshwater production of 504 m³/h (insufficient to satisfy the growing demand in the area), was designed using stochastic inverse modeling together with optimization techniques. A Monte Carlo analysis of 200 simulations of transmissivity and storage coefficient fields conditioned to the response to stresses of tidal fluctuation and three long-term pumping tests was performed. These simulations are physically plausible and fit the available data well. Simulated transmissivity fields are used to design the optimum pumping configuration required to increase the current pumping rate to 9000 m³/h, for a freshwater production of 3346 m³/h (more than six times larger than the existing one). For this task, new pumping wells need to be sited and their pumping rates defined. These unknowns are determined by a genetic algorithm that minimizes a function accounting for: (1) drilling, operational and maintenance costs, (2) target discharge and minimum drawdown (i.e., minimum aquifer
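
    A minimal genetic-algorithm sketch in Python of the kind of search described above: candidate well layouts are selected, recombined and mutated to minimize a cost function. The cost used here is a toy drilling-plus-interference expression standing in for the paper's objective (which would evaluate drawdowns on the simulated transmissivity fields); all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(wells):
    """Toy objective: drilling cost plus a penalty when wells cluster.
    In the real application this would call the groundwater model."""
    d = np.linalg.norm(wells[:, None, :] - wells[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return len(wells) + 0.05 * np.sum(1.0 / d)

def evolve(pop, n_gen=100, elite=10):
    for _ in range(n_gen):
        scores = np.array([cost(ind) for ind in pop])
        parents = pop[np.argsort(scores)][:elite]          # selection
        children = []
        while len(children) < len(pop) - elite:
            a, b = parents[rng.integers(elite, size=2)]
            mask = rng.random(a.shape) < 0.5               # uniform crossover
            child = np.where(mask, a, b)
            child += rng.normal(0, 0.1, child.shape)       # mutation
            children.append(np.clip(child, 0, 10))
        pop = np.concatenate([parents, np.array(children)])
    return min(pop, key=cost)

pop = rng.uniform(0, 10, size=(50, 8, 2))  # 50 candidate layouts of 8 wells
best = evolve(pop)
print("best layout cost:", round(cost(best), 3))
```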

  20. Building predictive models for MERS-CoV infections using data mining techniques.

    Science.gov (United States)

    Al-Turaiki, Isra; Alshahrani, Mona; Almutairi, Tahani

    Recently, the outbreak of MERS-CoV infections drew worldwide attention to Saudi Arabia. The novel virus belongs to the coronavirus family, which is responsible for causing mild to moderate colds. The command and control center of the Saudi Ministry of Health issues a daily report on MERS-CoV infection cases. Infection with MERS-CoV can lead to fatal complications, yet little information is known about this novel virus. In this paper, we apply two data mining techniques in order to better understand the stability and the possibility of recovery from MERS-CoV infections. The Naive Bayes classifier and the J48 decision tree algorithm were used to build our models. The dataset used consists of 1082 records of cases reported between 2013 and 2015. In order to build our prediction models, we split the dataset into two groups. The first group combined recovery and death records. A new attribute was created to indicate the record type, such that the dataset can be used to predict recovery from MERS-CoV. The second group contained the new case records, to be used to predict the stability of the infection based on the current status attribute. The resulting recovery models indicate that healthcare workers are more likely to survive. This could be due to the vaccinations that healthcare workers are required to get on a regular basis. As for the stability models using J48, two attributes were found to be important for predicting stability: symptomatic status and age. Old patients are at high risk of developing MERS-CoV complications. Finally, the performance of all the models was evaluated using three measures: accuracy, precision, and recall. In general, the accuracy of the models is between 53.6% and 71.58%. We believe that the performance of the prediction models can be enhanced with the use of more patient data. As future work, we plan to directly contact hospitals in Riyadh in order to collect more information related to patients with MERS-CoV infections. Copyright © 2016
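
    A hedged sketch of this classification workflow in Python with scikit-learn: a Gaussian Naive Bayes classifier and a CART decision tree (used here as a stand-in for J48/C4.5, which is a Weka implementation) are trained and scored with accuracy, precision and recall. The data below are synthetic stand-ins for the 1082 case records, with made-up effects of age, symptoms and healthcare-worker status.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(1)
n = 1082  # same size as the study's dataset; the records themselves are fake
X = np.column_stack([rng.normal(50, 18, n),    # age
                     rng.integers(0, 2, n),    # symptomatic flag
                     rng.integers(0, 2, n)])   # healthcare-worker flag
logit = -0.04 * (X[:, 0] - 50) - 0.8 * X[:, 1] + 1.2 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = recovery

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision tree (J48-like)", DecisionTreeClassifier(max_depth=4))]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, y_hat):.3f} "
          f"prec={precision_score(y_te, y_hat):.3f} "
          f"rec={recall_score(y_te, y_hat):.3f}")
```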

  1. Modeling and monitoring of a high pressure polymerization process using multivariate statistical techniques

    Science.gov (United States)

    Sharmin, Rumana

    This thesis explores the use of multivariate statistical techniques in developing tools for property modeling and monitoring of a high pressure ethylene polymerization process. In the polymer industry, many researchers have shown, mainly in simulation studies, the potential of multivariate statistical methods in the identification and control of polymerization processes. However, very few, if any, of these strategies have been implemented. This work was done using data collected from a commercial high pressure LDPE/EVA reactor located at AT Plastics, Edmonton. The models and methods developed in the course of this research have been validated with real data and in most cases implemented in real time. One main objective of this PhD project was to develop and implement a data-based inferential sensor to estimate the melt flow index of LDPE and EVA resins using regularly measured process variables. A steady-state PLS method was used to develop the soft sensor model. A detailed description is given of the data preprocessing steps that should be followed in the analysis of industrial data. Models developed for two of the most frequently produced polymer grades at AT Plastics have been implemented. The models were tested on many sets of data and showed acceptable performance when applied with an online bias-updating scheme. One observation from many validation exercises was that the model prediction becomes poorer with time as operators use new process conditions in the plant to produce the same resin with the same specification. During the implementation of the soft sensors, we suggested a simple bias update scheme as a remedy to this problem. An alternative and more rigorous approach is to recursively update the model with new data, which is also more suitable for handling grade transitions. Two existing recursive PLS methods, one based on the NIPALS algorithm and the other based on the kernel algorithm, were reviewed. In addition, we proposed a novel RPLS algorithm which is based on the
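
    The core of such a soft sensor can be sketched in a few lines of Python with scikit-learn's PLS regression; the bias-update step mirrors the simple correction scheme described above. The process data are synthetic stand-ins (two latent directions driving ten correlated measurements), not the plant data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)

# Synthetic process data: 10 correlated variables, MFI as the response
n, p = 500, 10
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, p)) + 0.1 * rng.normal(size=(n, p))
mfi = latent @ np.array([1.5, -0.7]) + 0.05 * rng.normal(size=n)

pls = PLSRegression(n_components=2).fit(X[:300], mfi[:300])

# Online use: whenever a lab MFI value arrives, reset the output bias
bias = 0.0
for t in range(300, n):
    raw = pls.predict(X[t:t + 1]).item()
    y_hat = raw + bias
    if t % 25 == 0:            # periodic lab analysis available
        bias = mfi[t] - raw
    if t % 100 == 0:
        print(f"t={t}  predicted MFI={y_hat:.2f}  actual={mfi[t]:.2f}")
```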

  2. Numerical modelling of CO2 migration in saline reservoirs using geoelectric and seismic techniques - first results

    Science.gov (United States)

    Hagrey, S. A. Al; Strahser, M. H. P.; Rabbel, W.

    2009-04-01

    The research project "CO2 MoPa" (modelling and parameterisation of CO2 storage in deep saline formations for dimensions and risk analysis) was initiated in 2008 by partners from different disciplines (e.g. geology, hydrogeology, geochemistry, geophysics, geomechanics, hydraulic engineering and law). It deals with the parameterisation of virtual subsurface storage sites to characterise rock properties, with high pressure-temperature experiments to determine in situ hydro-petrophysical and mechanical parameters, and with modelling of processes related to CCS in deep saline reservoirs. One objective is the estimation of the sensitivity and the resolution of reflection seismic and geoelectrical time-lapse measurements in order to determine the underground distribution of CO2. Compared with seismic methods, electrical resistivity tomography (ERT) has lower resolution, but its permanent installation and continuous monitoring can make it an economical alternative or complement. Seismic and ERT (in boreholes) applications to quantify changes of intrinsic aquifer properties with time are justified by the velocity and resistivity decreases related to CO2 injection. Our numerical 2D/3D modelling reveals the capability of the techniques to map CO2 plumes and changes as a function of thickness, concentration, receiver/electrode configuration, aspect ratio, and modelling and inversion constraint parameters. Depending on these factors, some configurations are favoured due to their better spatial resolution and lower artefacts. Acknowledgements: This work has been carried out in the framework of the "CO2 MoPa" research project funded by the Federal German Ministry of Education and Research (BMBF) and a consortium of energy companies (E.ON Energy, EnBW AG, RWE Dea AG, Stadtwerke Kiel AG, Vattenfall Europe Technology Research GmbH and Wintershall Holding AG).

  3. A novel technique of serial biopsy in mouse brain tumour models.

    Directory of Open Access Journals (Sweden)

    Sasha Rogers

    Full Text Available Biopsy is often used to investigate brain tumour-specific abnormalities so that treatments can be appropriately tailored. Dacomitinib (PF-00299804) is a tyrosine kinase inhibitor (TKI), which is predicted to be effective only in cancers where the targets of this drug (EGFR, ERBB2, ERBB4) are abnormally active. Here we describe a method by which serial biopsy can be used to validate response to dacomitinib treatment in vivo using a mouse glioblastoma model. In order to determine the feasibility of conducting serial brain biopsies in mouse models with minimal morbidity, and, if successful, to investigate whether this can facilitate evaluation of chemotherapeutic response, an orthotopic model of glioblastoma was used. Immunodeficient mice received cortical implants of the human glioblastoma cell line U87MG, modified to express the constitutively active EGFR mutant EGFRvIII, GFP and luciferase. Tumour growth was monitored using bioluminescence imaging. Upon attainment of a moderate tumour size, free-hand biopsy was performed on a subgroup of animals. Animal monitoring using a neurological severity score (NSS) showed that all mice survived the procedure with minimal perioperative morbidity and recovered to similar levels as controls over a period of five days. The technique was used to evaluate dacomitinib-mediated inhibition of EGFRvIII two hours after drug administration. We show that serial tissue samples can be obtained, that the samples retain histological features of the tumour, and that they are of sufficient quality to determine response to treatment. This approach represents a significant advance in murine brain surgery that may be applicable to other brain tumour models. Importantly, the methodology has the potential to accelerate the preclinical in vivo drug screening process.

  4. Application of magnetomechanical hysteresis modeling to magnetic techniques for monitoring neutron embrittlement and biaxial stress

    International Nuclear Information System (INIS)

    Sablik, M.J.; Kwun, H.; Rollwitz, W.L.; Cadena, D.

    1992-01-01

    The objective is to investigate experimentally and theoretically the effects of neutron embrittlement and biaxial stress on magnetic properties in steels, using various magnetic measurement techniques. Interaction between experiment and modeling should suggest efficient magnetic measurement procedures for determining neutron embrittlement biaxial stress. This should ultimately assist in safety monitoring of nuclear power plants and of gas and oil pipelines. In the first six months of this first year study, magnetic measurements were made on steel surveillance specimens from the Indian Point 2 and D.C. Cook 2 reactors. The specimens previously had been characterized by Charpy tests after specified neutron fluences. Measurements now included: (1) hysteresis loop measurement of coercive force, permeability and remanence, (2) Barkhausen noise amplitude; and (3) higher order nonlinear harmonic analysis of a 1 Hz magnetic excitation. Very good correlation of magnetic parameters with fluence and embrittlement was found for specimens from the Indian Point 2 reactor. The D.C. Cook 2 specimens, however showed poor correlation. Possible contributing factors to this are: (1) metallurgical differences between D.C. Cook 2 and Indian Point 2 specimens; (2) statistical variations in embrittlement parameters for individual samples away from the stated men values; and (3) conversion of the D.C. Cook 2 reactor to a low leakage core configuration in the middle of the period of surveillance. Modeling using a magnetomechanical hysteresis model has begun. The modeling will first focus on why Barkhausen noise and nonlinear harmonic amplitudes appear to be better indicators of embrittlement than the hysteresis loop parameters

  5. Modeling and simulation of PEM fuel cell's flow channels using CFD techniques

    Energy Technology Data Exchange (ETDEWEB)

    Cunha, Edgar F.; Andrade, Alexandre B.; Robalinho, Eric; Bejarano, Martha L.M.; Linardi, Marcelo [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)]. E-mails: efcunha@ipen.br; abodart@ipen.br; eric@ipen.br; mmora@ipen.br; mlinardi@ipen.br; Cekinski, Efraim [Instituto de Pesquisas Tecnologicas (IPT-SP), Sao Paulo, SP (Brazil)]. E-mail: cekinski@ipt.br

    2007-07-01

    Fuel cells are one of the most important devices for obtaining electrical energy from hydrogen. The Proton Exchange Membrane Fuel Cell (PEMFC) consists of two important parts: the Membrane Electrode Assembly (MEA), where the reactions occur, and the flow field plates. The plates have many functions in a fuel cell: they distribute the reactant gases (hydrogen and air or oxygen), conduct electrical current, remove heat and water from the electrodes and make the cell robust. The cost of the bipolar plates accounts for up to 45% of the total stack cost. Computational Fluid Dynamics (CFD) is a very useful tool for simulating hydrogen and oxygen gas flow channels, reducing the cost of bipolar plate production and optimizing mass transport. Two types of flow channels were studied. The first type was a commercial plate by ELECTROCELL and the other was entirely designed at Programa de Celula a Combustivel (IPEN/CNEN-SP), and the experimental data were compared with modelling results. Optimum values for each set of variables were obtained and model verification was carried out in order to show the feasibility of this technique to improve fuel cell efficiency. (author)

  6. Constraint-based Student Modelling in Probability Story Problems with Scaffolding Techniques

    Directory of Open Access Journals (Sweden)

    Nabila Khodeir

    2018-01-01

    Full Text Available Constraint-based student modelling (CBM) is an important technique employed in intelligent tutoring systems to model student knowledge and provide relevant assistance. This paper introduces the Math Story Problem Tutor (MAST), a Web-based intelligent tutoring system for probability story problems, which is able to generate problems of different contexts, types and difficulty levels for self-paced learning. Constraints in MAST are specified at a low level of granularity to allow fine-grained diagnosis of student errors. Furthermore, MAST extends CBM to address errors due to misunderstanding of the narrative story. It can locate and highlight keywords that may have been overlooked or misunderstood, leading to an error. This is achieved by utilizing the roles of sentences and keywords that are defined through the Natural Language Generation (NLG) methods deployed in the story problem generation. MAST also integrates CBM with scaffolding questions and feedback to provide various forms of help and guidance to the student. This allows the student to discover and correct any errors in his/her solution. MAST has been preliminarily evaluated empirically, and the results show its potential effectiveness in tutoring students, with a decrease in the percentage of violated constraints along the learning curve. Additionally, there is a significant improvement in post-test exam results relative to the pre-test exam for students using MAST in comparison to those relying on the textbook.
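
    The constraint-relevance/satisfaction pattern at the heart of CBM is easy to show in miniature. The Python sketch below diagnoses a probability answer against two invented constraints and returns the scaffolding feedback for any violation; the constraints and feedback strings are illustrative, not MAST's actual constraint base.

```python
# Each constraint: when it is relevant, when it is satisfied, and the
# scaffolding feedback shown if it is relevant but violated.
constraints = [
    {   # P(A or B) for mutually exclusive events
        "relevant":  lambda s: s["type"] == "union_exclusive",
        "satisfied": lambda s: abs(s["answer"] - (s["pA"] + s["pB"])) < 1e-9,
        "feedback":  "For mutually exclusive events, add the probabilities.",
    },
    {   # any probability must lie in [0, 1]
        "relevant":  lambda s: True,
        "satisfied": lambda s: 0.0 <= s["answer"] <= 1.0,
        "feedback":  "A probability can never be below 0 or above 1.",
    },
]

def diagnose(solution):
    return [c["feedback"] for c in constraints
            if c["relevant"](solution) and not c["satisfied"](solution)]

# A student who multiplied instead of adding:
student = {"type": "union_exclusive", "pA": 0.3, "pB": 0.4, "answer": 0.12}
print(diagnose(student))
```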

  7. Simulation of Moving Loads in Elastic Multibody Systems With Parametric Model Reduction Techniques

    Directory of Open Access Journals (Sweden)

    Fischer Michael

    2014-08-01

    Full Text Available In elastic multibody systems, one considers large nonlinear rigid body motion and small elastic deformations. In a rising number of applications, e.g. automotive engineering and turning and milling processes, the position of the acting forces on the elastic body varies. The model order reduction necessary to enable efficient simulations requires the determination of ansatz functions, which depend on the moving force position. For a large number of possible interaction points, the size of the reduced system would increase drastically in the classical Component Mode Synthesis framework. If many nodes are potentially loaded, or the contact area is not known a priori and only a small number of nodes is loaded simultaneously, the system is described in this contribution with a parameter-dependent force position. This enables the application of parametric model order reduction methods. Here, two techniques based on matrix interpolation are described which transform individually reduced systems and allow the interpolation of the reduced system matrices to determine reduced systems for any force position. The online-offline decomposition and the description of the force distribution onto the reduced elastic body are presented in this contribution. The proposed framework enables the efficient simulation of elastic multibody systems with moving loads because the cost depends solely on the size of the reduced system. Results in the frequency and time domains for the simulation of a thin-walled cylinder with a moving load illustrate the applicability of the proposed method.
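
    The matrix-interpolation idea can be sketched on a toy structure: reduce a mass-spring chain with local bases built at two sample load positions, transform one reduced stiffness into the coordinates of the other, and linearly interpolate to obtain a reduced model at an unsampled position. This is a minimal static sketch, assuming a simple congruence transform between bases; the paper's elastic multibody setting is far richer.

```python
import numpy as np

n, r = 100, 6
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # chain stiffness

def basis(p):
    """Local basis at load position p: SVD of static responses to unit
    loads around p (a stand-in for proper training snapshots)."""
    cols = [np.linalg.solve(K, np.eye(n)[j])
            for j in range(max(p - 15, 0), min(p + 16, n), 5)]
    U, _, _ = np.linalg.svd(np.array(cols).T, full_matrices=False)
    return U[:, :r]

p1, p2, pq = 30, 60, 45
V1, V2 = basis(p1), basis(p2)

T = V2.T @ V1                    # congruence transform to V1 coordinates
K1 = V1.T @ K @ V1
K2 = T.T @ (V2.T @ K @ V2) @ T

w = (pq - p1) / (p2 - p1)        # linear interpolation weight
K_int = (1 - w) * K1 + w * K2    # interpolated reduced stiffness

f = np.eye(n)[pq]                # unit load at the query position
q_red = V1 @ np.linalg.solve(K_int, V1.T @ f)
q_full = np.linalg.solve(K, f)
print(f"deflection at load: reduced={q_red[pq]:.3f}, full={q_full[pq]:.3f}")
```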

  8. Application of the Electronic Nose Technique to Differentiation between Model Mixtures with COPD Markers

    Directory of Open Access Journals (Sweden)

    Jacek Namieśnik

    2013-04-01

    Full Text Available The paper presents the potential of an electronic nose technique in the field of fast diagnostics of patients suspected of Chronic Obstructive Pulmonary Disease (COPD). The investigations were performed using a simple electronic nose prototype equipped with a set of six semiconductor sensors manufactured by FIGARO Co. They were aimed at verifying the possibility of differentiating between model reference mixtures with potential COPD markers (N,N-dimethylformamide and N,N-dimethylacetamide). These mixtures contained volatile organic compounds (VOCs) such as acetone, isoprene, carbon disulphide, propan-2-ol, formamide, benzene, toluene, acetonitrile, acetic acid, dimethyl ether, dimethyl sulphide, acrolein, furan, propanol and pyridine, recognized as components of exhaled air. The model reference mixtures were prepared at three concentration levels—10 ppb, 25 ppb, 50 ppb v/v—of each component, except for the COPD markers. The concentration of the COPD markers in the mixtures ranged from 0 ppb to 100 ppb v/v. Interpretation of the obtained data employed principal component analysis (PCA). The investigations revealed the usefulness of the electronic device only in the case when the concentration of the COPD markers was twice as high as the concentration of the remaining components of the mixture, and for a limited number of basic mixture components.
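
    The PCA step is straightforward to reproduce in Python with scikit-learn; the sketch below projects synthetic six-sensor responses for three marker levels onto the first two principal components. The sensor sensitivity pattern and concentrations are invented stand-ins for the prototype's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

def sample(marker_level, n=20):
    """Fake six-sensor response: background VOCs plus a marker signature."""
    base = rng.normal(1.0, 0.05, size=(n, 6))
    pattern = np.array([0.9, 0.2, 0.5, 0.1, 0.7, 0.3])  # invented sensitivities
    return base + marker_level * pattern

X = np.vstack([sample(0.0), sample(0.5), sample(1.0)])
labels = ["0 ppb"] * 20 + ["50 ppb"] * 20 + ["100 ppb"] * 20

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for lab in ("0 ppb", "50 ppb", "100 ppb"):
    pts = scores[[i for i, l in enumerate(labels) if l == lab]]
    print(lab, "PC1 mean=%.2f  PC2 mean=%.2f" % tuple(pts.mean(axis=0)))
```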

  9. Application of predictive modelling techniques in industry: from food design up to risk assessment.

    Science.gov (United States)

    Membré, Jeanne-Marie; Lambert, Ronald J W

    2008-11-30

    In this communication, examples of applications of predictive microbiology in industrial contexts (i.e. Nestlé and Unilever) are presented which cover a range of applications in food safety from formulation and process design to consumer safety risk assessment. A tailor-made, private expert system, developed to support safe product/process design assessment is introduced as an example of how predictive models can be deployed for use by non-experts. Its use in conjunction with other tools and software available in the public domain is discussed. Specific applications of predictive microbiology techniques are presented relating to investigations of either growth or limits to growth with respect to product formulation or process conditions. An example of a probabilistic exposure assessment model for chilled food application is provided and its potential added value as a food safety management tool in an industrial context is weighed against its disadvantages. The role of predictive microbiology in the suite of tools available to food industry and some of its advantages and constraints are discussed.

  10. Prioritization of water management for sustainability using hydrologic simulation model and multicriteria decision making techniques.

    Science.gov (United States)

    Chung, Eun-Sung; Lee, Kil Seong

    2009-03-01

    The objective of this study is to develop an alternative evaluation index (AEI) in order to determine the priorities of a range of alternatives using both the hydrological simulation program in FORTRAN (HSPF) and multicriteria decision making (MCDM) techniques. In order to formulate the HSPF model, sensitivity analyses of water quantity (peak discharge and total volume) and quality (BOD peak concentrations and total loads) were conducted and a number of critical parameters were selected. To achieve a more precise simulation, the study watershed was divided into four regions for calibration and verification according to land use, location, slope, and climate data. All evaluation criteria were selected using the Driver-Pressure-State-Impact-Response (DPSIR) model, a sustainability evaluation concept. The Analytic Hierarchy Process is used to estimate the weights of the criteria, and the effects on water quantity and quality were quantified by HSPF simulation. In addition, AEIs that reflect residents' preferences for management objectives are proposed in order to induce stakeholders to participate in the decision making process.
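
    The AHP weighting step can be illustrated in a few lines of Python: derive criteria weights from the principal eigenvector of a pairwise comparison matrix, check consistency, and score alternatives by a weighted sum. The pairwise judgments and alternative scores below are hypothetical, not the study's.

```python
import numpy as np

# Hypothetical pairwise comparisons for three DPSIR-style criteria
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # AHP criteria weights

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)      # consistency index
CR = CI / 0.58                            # Saaty's random index for n = 3
print("weights:", np.round(w, 3), " consistency ratio: %.3f" % CR)

# AEI-style score: weighted sum of normalized criterion values per
# alternative (random stand-ins for the HSPF-quantified effects)
scores = np.random.default_rng(4).random((5, 3))
aei = scores @ w
print("alternative ranking (best first):", np.argsort(-aei) + 1)
```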

  11. QUEST: A model to quantify uncertain emergency search techniques, theory and application

    International Nuclear Information System (INIS)

    Johnson, M.M.; Goldsby, M.E.; Plantenga, T.D.; Wilcox, W.B.; Hensley, W.K.

    1996-01-01

    As recent world events show, criminal and terrorist access to nuclear materials is a growing national concern. The national laboratories are taking the lead in developing technologies to counter these potential threats to our national security. Sandia National Laboratories, with support from Pacific Northwest Laboratory and the Remote Sensing Laboratory, has developed QUEST (a model to Quantify Uncertain Emergency Search Techniques) to enhance the performance of organizations in the search for lost or stolen nuclear material. In addition, QUEST supports a wide range of other applications, such as environmental monitoring, nuclear facilities inspections, and searcher training. QUEST simulates the search for nuclear materials and calculates detector response for various source types and locations. The probability of detecting a radioactive source during a search is a function of many different variables. Through calculation of dynamic detector response, QUEST makes possible quantitative comparisons of various sensor technologies and search patterns. The QUEST model can be used to examine the impact of new detector technologies, explore alternative search concepts, and provide interactive search/inspector training

  12. Analysis of Composite Skin-Stiffener Debond Specimens Using a Shell/3D Modeling Technique and Submodeling

    Science.gov (United States)

    OBrien, T. Kevin (Technical Monitor); Krueger, Ronald; Minguet, Pierre J.

    2004-01-01

    The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to tension and three-point bending was studied. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front was used to model the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlation of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherents. In addition, the application of the submodeling technique for the simulation of skin/stringer debond was also studied. Global models made of shell elements and solid elements were studied. Solid elements were used for local submodels, which extended between three and six specimen thicknesses on either side of the delamination front to model the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from the simulations using the submodeling technique were not in agreement with results obtained from full solid models.

  13. Evaluation of Computational Techniques for Parameter Estimation and Uncertainty Analysis of Comprehensive Watershed Models

    Science.gov (United States)

    Yen, H.; Arabi, M.; Records, R.

    2012-12-01

    The structural complexity of comprehensive watershed models continues to increase in order to incorporate inputs at finer spatial and temporal resolutions and to simulate a larger number of hydrologic and water quality responses. Hence, computational methods for parameter estimation and uncertainty analysis of complex models have gained increasing popularity. This study aims to evaluate the performance and applicability of a range of algorithms, from computationally frugal approaches to formal implementations of Bayesian statistics using Markov Chain Monte Carlo (MCMC) techniques. The evaluation procedure hinges on the appraisal of (i) the quality of the final parameter solution in terms of the minimum value of the objective function corresponding to weighted errors; (ii) the algorithmic efficiency in reaching the final solution; (iii) the marginal posterior distributions of model parameters; (iv) the overall identifiability of the model structure; and (v) the effectiveness in drawing samples that can be classified as behavior-giving solutions. The proposed procedure recognizes an important and often neglected issue in watershed modeling: solutions with minimum objective function values may not necessarily reflect the behavior of the system. The general behavior of a system is often characterized by the analysts according to the goals of the study using various error statistics, such as percent bias or the Nash-Sutcliffe efficiency coefficient. Two case studies are carried out to examine the efficiency and effectiveness of four Bayesian approaches, including Metropolis-Hastings sampling (MHA), Gibbs sampling (GSA), uniform covering by probabilistic rejection (UCPR), and differential evolution adaptive Metropolis (DREAM); a greedy optimization algorithm dubbed dynamically dimensioned search (DDS); and shuffled complex evolution (SCE-UA), a widely implemented evolutionary heuristic optimization algorithm. The Soil and Water Assessment Tool (SWAT) is used to simulate hydrologic and
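
    For readers unfamiliar with the samplers being compared, the following Python sketch shows the random-walk Metropolis-Hastings step on a toy two-parameter model standing in for a SWAT run; the likelihood assumes Gaussian residuals and uniform priors, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

t = np.linspace(0, 1, 50)
def model(a, b):                      # toy "watershed response"
    return a * np.exp(-b * t)

obs = model(2.0, 3.0) + 0.1 * rng.normal(size=t.size)

def log_post(theta):
    a, b = theta
    if not (0 < a < 10 and 0 < b < 10):        # uniform prior bounds
        return -np.inf
    return -0.5 * np.sum((obs - model(a, b)) ** 2) / 0.1 ** 2

theta = np.array([1.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + 0.05 * rng.normal(size=2)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:    # Metropolis acceptance
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

chain = np.array(chain[5000:])                 # discard burn-in
print("posterior means:", chain.mean(axis=0))
print("posterior std devs:", chain.std(axis=0))
```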

  14. Comparison of Flood Inundation Mapping Techniques between Different Modeling Approaches and Satellite Imagery

    Science.gov (United States)

    Zhang, J.; Munasinghe, D.; Huang, Y. F.; Lin, P.; Fang, N. Z.; Cohen, S.; Tsang, Y. P.

    2016-12-01

    Flood inundation extent serves as a crucial information source for both hydrologists and decision makers. Accurate and timely inundation mapping can potentially improve flood risk management and reduce flood damage. In this study, the authors applied two modeling approaches to estimate the flood inundation area for a large flooding event that occurred in May 2016 on the Brazos River: the Height Above Nearest Drainage method combined with the National Hydrography Dataset (NHD-HAND), and the International River Interface Cooperative - Flow and Sediment Transport with Morphological Evolution of Channels (iRIC-FaSTMECH). NHD-HAND features a terrain model that simplifies the dynamic flood inundation mapping process, while iRIC-FaSTMECH is a hydrodynamic model that simulates flood extent under a quasi-steady approximation. In terms of data sources, HAND and iRIC utilized the National Water Model (NWM) output and United States Geological Survey (USGS) stream gage data, respectively. The flood inundation extents generated from these two approaches were validated against Landsat 8 satellite imagery. Four remote sensing classification techniques were used to provide alternative observations: supervised, unsupervised, normalized difference water index and delta-cue change detection of water. According to the quantitative analysis comparing simulated areas with the different remote sensing classifications, the advanced fitness index of the iRIC simulation ranges from 57.5% to 69.9% while that of HAND ranges from 49.4% to 55.5%. We found that even though HAND better captures some details of the inundation extent than iRIC, it has problems in certain areas where subcatchments are not behaving independently, especially for extreme flooding events. The iRIC model performs better in this case; however, we cannot simply conclude that iRIC is a better-suited approach than HAND, considering the uncertainties in the remote sensing observations and iRIC model parameters. Further research will include more
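
    The thresholding logic that makes HAND attractive for rapid mapping fits in a few lines: a cell is inundated when the forecast stage at the reach it drains to exceeds the cell's height above that reach. The grid, reach layout and stages below are invented; a real workflow would derive HAND from a DEM with flow directions and take stages from NWM discharge via rating curves.

```python
import numpy as np

rng = np.random.default_rng(6)

hand = rng.gamma(shape=2.0, scale=2.0, size=(100, 100))  # m above drainage
# Four quadrants, each draining to its own reach (toy catchments)
reach_id = (np.arange(100)[:, None] // 50) * 2 + (np.arange(100)[None, :] // 50)

# Hypothetical forecast stage per reach (m), e.g. from routed discharge
stage = {0: 1.5, 1: 3.0, 2: 0.5, 3: 4.5}

flooded = np.zeros_like(hand, dtype=bool)
for rid, s in stage.items():
    mask = reach_id == rid
    flooded[mask] = hand[mask] < s           # flood where stage exceeds HAND

print("inundated fraction: %.1f%%" % (100 * flooded.mean()))
```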

  15. Estimation of Actual Evapotranspiration Using an Agro-Hydrological Model and Remote Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Mostafa Yaghoobzadeh

    2017-02-01

    Full Text Available Introduction: Accurate estimation of evapotranspiration plays an important role in the quantification of the water balance at watershed, plain and regional scales. Moreover, it is important in terms of managing water resources, such as water allocation, irrigation management, and evaluating the effects of changing land use on water yields. Different methods are available for ET estimation, including Bowen ratio energy balance systems, eddy correlation systems and weighing lysimeters. Water balance techniques offer powerful alternatives for measuring ET and other surface energy fluxes. In spite of the elegance, high accuracy and theoretical attractions of these techniques for measuring ET, their practical use over large areas may be limited. They can be very expensive for practical applications at regional scales under heterogeneous terrain composed of different agro-ecosystems. Satellite measurements are an appropriate approach to overcome the aforementioned limitations. The feasibility of using remotely sensed crop parameters in combination with agro-hydrological models has been investigated in recent studies. The aim of the present study was to determine evapotranspiration by two methods, remote sensing and the soil, water, atmosphere, and plant (SWAP) model, for wheat fields located in the Neishabour plain. The output of SWAP has been validated by means of soil water content measurements. Furthermore, the actual evapotranspiration estimated by SWAP has been considered the "reference" in the comparison with the SEBAL energy balance model. Materials and Methods: The Surface Energy Balance Algorithm for Land (SEBAL) was used to estimate actual ET fluxes from MODIS satellite images. SEBAL is a one-layer energy balance model that estimates latent heat flux and other energy balance components without information on soil, crop, and management practices. The near-surface energy balance equation can be approximated as Rn = G + H + λET, where Rn is net radiation (W/m²); G
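
    The residual step of the energy balance quoted above is a one-liner once the other fluxes are known; the Python fragment below evaluates λET = Rn - G - H for one pixel and converts it to an instantaneous ET rate. The flux values are illustrative, not retrieved from MODIS.

```python
# Energy-balance residual for one pixel (illustrative fluxes, W/m2)
Rn = 520.0          # net radiation
G = 0.1 * Rn        # soil heat flux, here taken as a fraction of Rn
H = 180.0           # sensible heat flux from the aerodynamic module

lam = 2.45e6        # latent heat of vaporization (J/kg)
LE = Rn - G - H     # latent heat flux lambda*ET (W/m2)

# LE [J/s/m2] / lam [J/kg] gives kg/s/m2, i.e. mm/s of water
et_mm_per_hour = LE / lam * 3600
print("LE = %.0f W/m2 -> instantaneous ET = %.2f mm/h" % (LE, et_mm_per_hour))
```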

  16. Development of asymptotic models in ultrasonic non destructive techniques (NDT): elastic waves interaction with geometrical irregularities and head waves modeling

    International Nuclear Information System (INIS)

    Ferrand, Adrien

    2014-01-01

    The head wave is the first arrival received during a TOFD (Time Of Flight Diffraction) inspection. The TOFD technique is a classical ultrasonic NDT (Non Destructive Testing) inspection method employing two piezoelectric transducers which are symmetrically placed facing each other with a constant spacing above the inspected specimen surface. The head wave propagation along an irregular entry surface is shown by a numerical study to be not only a surface propagation phenomenon, as in the plane surface case, but also to involve a bulk propagation phenomenon caused by diffractions of the ultrasonic wave field on the surface irregularities. In order to model these phenomena, a generic ray tracing method based on the generalized Fermat's principle has been developed, which establishes the effective path of any ultrasonic propagating wave in a specimen of irregular surface, notably including the effective head wave path. The evaluation of diffraction phenomena by amplitude models using a ray approach makes it possible to provide a complete simulation (time of flight, wave front and amplitude) of the head wave for numerous kinds of surface irregularity. Theoretical and experimental validations of the developed simulation tool have been carried out and have proven successful. (author) [fr

  17. Managing Software Project Risks (Analysis Phase) with Proposed Fuzzy Regression Analysis Modelling Techniques with Fuzzy Concepts

    OpenAIRE

    Elzamly, Abdelrafe; Hussin, Burairah

    2014-01-01

    The aim of this paper is to propose new mining techniques by which we can study the impact of different risk management techniques and different software risk factors on software analysis development projects. The new mining technique uses fuzzy multiple regression analysis techniques with fuzzy concepts to manage the risks in a software project and mitigate risk through software process improvement. Top ten software risk factors in the analysis phase and thirty risk management techni...

  18. The Adsorption of Cd(II) on Manganese Oxide Investigated by Batch and Modeling Techniques

    Directory of Open Access Journals (Sweden)

    Xiaoming Huang

    2017-09-01

    Full Text Available Manganese (Mn) oxide is a ubiquitous metal oxide in sub-environments. The adsorption of Cd(II) on Mn oxide as a function of adsorption time, pH, ionic strength, temperature, and initial Cd(II) concentration was investigated by batch techniques. The adsorption kinetics showed that the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by a pseudo-second-order kinetic model with high correlation coefficients (R² > 0.999). The adsorption of Cd(II) on Mn oxide significantly decreased with increasing ionic strength at pH < 5.0, whereas Cd(II) adsorption was independent of ionic strength at pH > 6.0, which indicated that outer-sphere and inner-sphere surface complexation dominated the adsorption of Cd(II) on Mn oxide at pH < 5.0 and pH > 6.0, respectively. The maximum adsorption capacity of Mn oxide for Cd(II) calculated from the Langmuir model was 104.17 mg/g at pH 6.0 and 298 K. The thermodynamic parameters showed that the adsorption of Cd(II) on Mn oxide was an endothermic and spontaneous process. According to the results of surface complexation modeling, the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by ion exchange sites (X₂Cd) at low pH and inner-sphere surface complexation sites (SOCd⁺ and (SO)₂CdOH⁻ species) at high pH conditions. The findings presented herein play an important role in understanding the fate and transport of heavy metals at the water-mineral interface.

  19. The Adsorption of Cd(II) on Manganese Oxide Investigated by Batch and Modeling Techniques.

    Science.gov (United States)

    Huang, Xiaoming; Chen, Tianhu; Zou, Xuehua; Zhu, Mulan; Chen, Dong; Pan, Min

    2017-09-28

    Manganese (Mn) oxide is a ubiquitous metal oxide in sub-environments. The adsorption of Cd(II) on Mn oxide as a function of adsorption time, pH, ionic strength, temperature, and initial Cd(II) concentration was investigated by batch techniques. The adsorption kinetics showed that the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by a pseudo-second-order kinetic model with high correlation coefficients (R² > 0.999). The adsorption of Cd(II) on Mn oxide significantly decreased with increasing ionic strength at pH < 5.0, whereas Cd(II) adsorption was independent of ionic strength at pH > 6.0, which indicated that outer-sphere and inner-sphere surface complexation dominated the adsorption of Cd(II) on Mn oxide at pH < 5.0 and pH > 6.0, respectively. The maximum adsorption capacity of Mn oxide for Cd(II) calculated from the Langmuir model was 104.17 mg/g at pH 6.0 and 298 K. The thermodynamic parameters showed that the adsorption of Cd(II) on Mn oxide was an endothermic and spontaneous process. According to the results of surface complexation modeling, the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by ion exchange sites (X₂Cd) at low pH and inner-sphere surface complexation sites (SOCd⁺ and (SO)₂CdOH⁻ species) at high pH conditions. The findings presented herein play an important role in understanding the fate and transport of heavy metals at the water-mineral interface.
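
    The two fitted models named in these records are easy to reproduce with SciPy; the sketch below fits the pseudo-second-order kinetic law and the Langmuir isotherm to synthetic batch data generated near the paper's reported scale (qmax around 104 mg/g). The data points are simulated, not the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k):
    """Pseudo-second-order kinetics: qt = k*qe^2*t / (1 + k*qe*t)."""
    return k * qe**2 * t / (1 + k * qe * t)

def langmuir(C, qmax, KL):
    """Langmuir isotherm: qe = qmax*KL*C / (1 + KL*C)."""
    return qmax * KL * C / (1 + KL * C)

rng = np.random.default_rng(7)

t = np.linspace(1, 360, 25)                    # contact time (min)
q_t = pso(t, 95.0, 0.002) + rng.normal(0, 1, t.size)

C = np.linspace(1, 200, 15)                    # equilibrium conc. (mg/L)
q_e = langmuir(C, 104.0, 0.05) + rng.normal(0, 2, C.size)

(qe_fit, k_fit), _ = curve_fit(pso, t, q_t, p0=[80, 1e-3])
(qmax_fit, KL_fit), _ = curve_fit(langmuir, C, q_e, p0=[100, 0.01])
print("PSO fit: qe=%.1f mg/g, k=%.4f g/(mg min)" % (qe_fit, k_fit))
print("Langmuir fit: qmax=%.1f mg/g, KL=%.3f L/mg" % (qmax_fit, KL_fit))
```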

  20. Environmental modelling of Omerli catchment area in Istanbul, Turkey using remote sensing and GIS techniques.

    Science.gov (United States)

    Coskun, H Gonca; Alparslan, Erhan

    2009-06-01

    Omerli Reservoir is one of the major drinking water reservoirs of the Greater Istanbul metropolis, providing 40% of the overall water demand. Istanbul is one of the largest metropolitan areas of the world, with a population of over 10 million and a rate of population increase about twice that of Turkey as a whole. As a result of population growth and industrial development, the Omerli watershed is highly affected by wastewater discharges from residential areas and industrial plants. The main objective of this study is the temporal assessment of the land use/cover of the Omerli watershed and of the water quality changes in the reservoir. It has not been possible to adequately control urbanization and other pollution sources affecting the water quality. These detrimental effects are due to the rapidly increasing population, unplanned and illegal housing, and inappropriate industries in the protection zones of the watershed, together with insufficient infrastructure. The study focuses on the assessment of urbanization in relation to land use and water quality using Remote Sensing (RS) and Geographic Information Systems (GIS) techniques for all four protection zones of the reservoir, and a time-variant analysis model is obtained. IRS-1C LISS, IRS-1C PAN and LANDSAT-5 TM satellite data from 1997, 1998, 2000, 2001 and 2006 are analyzed and confirmed against ground-truth data. The RS data were transferred into the UTM coordinate system, and image enhancement and classification techniques were used. Raster data for the study area were converted to vector data for analysis in GIS, for the purpose of planning and decision-making on protected watersheds.

  1. Application of Penalized Regression Techniques in Modelling Insulin Sensitivity by Correlated Metabolic Parameters.

    Directory of Open Access Journals (Sweden)

    Christian S Göbl

    Full Text Available This paper aims to introduce penalized estimation techniques in clinical investigations of diabetes, as well as to assess their possible advantages and limitations. Data from a previous study were used to carry out simulations to assess (a) which procedure results in the lowest prediction error of the final model in the setting of a large number of predictor variables with high multicollinearity (of importance if insulin sensitivity is to be predicted), and (b) which procedure achieves the most accurate estimates of regression coefficients in the setting of fewer predictors with small unidirectional effects and moderate correlation between explanatory variables (of importance if the specific relation between an independent variable and insulin sensitivity is to be examined). Moreover, a special focus is on the correct direction of estimated parameter effects, a non-negligible source of error and misinterpretation of study results. The simulations were performed for varying sample sizes to evaluate the performance of LASSO and Ridge as well as different algorithms for the Elastic Net. These methods were also compared with automatic variable selection procedures (i.e., optimizing AIC or BIC). We were not able to identify one method achieving superior performance in all situations. However, the improved accuracy of estimated effects underlines the importance of using penalized regression techniques in our example (e.g., if a researcher aims to compare the relations of several correlated parameters with insulin sensitivity). However, the decision as to which procedure should be used depends on the specific context of a study (accuracy versus complexity) and moreover should involve clinical prior knowledge.
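
    A compact scikit-learn sketch of the comparison described above: LASSO, Ridge and Elastic Net fitted to correlated predictors with small same-signed effects, reporting how many predictors each method keeps and how many estimated coefficients come out with the wrong sign. The data generation is synthetic and illustrative, not the study's simulation design.

```python
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV, ElasticNetCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)

# Correlated predictors, small unidirectional (positive) true effects
n, p = 120, 15
shared = rng.normal(size=(n, 1))
X = 0.7 * shared + 0.3 * rng.normal(size=(n, p))
y = X @ np.full(p, 0.15) + rng.normal(scale=0.5, size=n)

Xs = StandardScaler().fit_transform(X)
for name, est in [("LASSO", LassoCV(cv=5)),
                  ("Ridge", RidgeCV(alphas=np.logspace(-3, 3, 25))),
                  ("Elastic Net", ElasticNetCV(cv=5, l1_ratio=0.5))]:
    est.fit(Xs, y)
    kept = np.sum(est.coef_ != 0)
    flips = np.sum(est.coef_ < 0)   # true effects are all positive
    print("%-11s kept %2d/%d predictors, %d wrong-signed" % (name, kept, p, flips))
```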

  2. The potential of cell sheet technique on the development of hepatocellular carcinoma in rat models.

    Directory of Open Access Journals (Sweden)

    Alaa T Alshareeda

    Full Text Available Hepatocellular carcinoma (HCC) is considered the third leading cause of death by cancer worldwide, with the majority of patients diagnosed in the late stages. Currently, there is no effective therapy. The selection of an animal model that mimics human cancer is essential for the identification of prognostic/predictive markers and candidate genes underlying cancer induction, and for the examination of factors that may influence the response of cancers to therapeutic agents and regimens. In this study, we developed an HCC nude rat model using the cell sheet technique and examined the effect of human stromal cells (SCs) on the development of the HCC model and on different liver parameters such as albumin and urea. The transplanted cell sheet for the HCC rat model was fabricated using thermo-responsive culture dishes. The effect of human umbilical cord mesenchymal stromal cells (UC-MSCs) and human bone marrow mesenchymal stromal cells (BM-MSCs) on the developed tumour was tested. Furthermore, tumour development and liver parameters were studied. Additionally, an angiogenesis assay was performed using Matrigel. HepG2 cells required five days to form a complete cell sheet, while HepG2 co-cultured with UC-MSCs or BM-MSCs took only three days. The tumour developed within 4 weeks after transplantation of the HCC sheet onto the liver of nude rats. Both UC-MSCs and BM-MSCs improved the secretion of liver parameters by increasing the secretion of albumin and urea. Comparatively, the UC-MSCs were more effective than BM-MSCs, but unlike BM-MSCs, UC-MSCs prevented liver tumour formation and the tube formation of HCC. Since this is a novel study to induce liver tumours in rats using a hepatocellular carcinoma sheet and stromal cells, the data obtained suggest that the cell sheet is a fast and easy technique to develop HCC models, and that UC-MSCs have therapeutic potential for liver diseases. Additionally, the data indicate that stromal cells enhanced the fabrication of HepG2

  3. Multivariate mixed linear model analysis of longitudinal data: an information-rich statistical technique for analyzing disease resistance data

    Science.gov (United States)

    The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...

  4. Diagnostic accuracy: theoretical models for preimplantation genetic testing of a single nucleus using the fluorescence in situ hybridization technique

    NARCIS (Netherlands)

    Scriven, P. N.; Bossuyt, P. M. M.

    2010-01-01

    The aim of this study was to develop and use theoretical models to investigate the accuracy of the fluorescence in situ hybridization (FISH) technique in testing a single nucleus from a preimplantation embryo without the complicating effect of mosaicism. Mathematical models were constructed for

  5. Finite element models of the human shoulder complex: a review of their clinical implications and modelling techniques.

    Science.gov (United States)

    Zheng, Manxu; Zou, Zhenmin; Bartolo, Paulo Jorge Da Silva; Peach, Chris; Ren, Lei

    2017-02-01

    The human shoulder is a complicated musculoskeletal structure and a perfect compromise between mobility and stability. The objective of this paper is to provide a thorough review of previous finite element (FE) studies in the biomechanics of the human shoulder complex. The FE studies investigating shoulder biomechanics are reviewed according to the physiological and clinical problems addressed: glenohumeral joint stability, rotator cuff tears, joint capsular and labral defects, and shoulder arthroplasty. The major findings, limitations, potential clinical applications and modelling techniques of those FE studies are critically discussed. The main challenges faced in order to accurately represent the realistic physiological functions of the shoulder mechanism in FE simulations involve (1) subject-specific representation of the anisotropic nonhomogeneous material properties of the shoulder tissues in both healthy and pathological conditions; (2) definition of boundary and loading conditions based on individualised physiological data; (3) more comprehensive modelling describing the whole shoulder complex, including appropriate three-dimensional (3D) representation of all major shoulder hard tissues and soft tissues and their delicate interactions; (4) rigorous in vivo experimental validation of FE simulation results. Fully validated shoulder FE models would greatly enhance our understanding of the aetiology of shoulder disorders, and hence facilitate the development of more efficient clinical diagnoses, non-surgical and surgical treatments, as well as shoulder orthotics and prosthetics. © 2016 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons Ltd.

  6. Evaluating the effects of climate changes on Patos Lagoon's hydrodynamics using numerical modeling techniques, Brazil

    Science.gov (United States)

    Barros, G. P.; Marques, W. C.

    2013-05-01

    coastal region. The simulations were performed over a length of one year and the results were analyzed comparatively. The results are still preliminary, but clear differences between the two simulations can be seen. Differences in current velocities, salinity and water levels along the coast and the estuary can be observed in the results. Wavelet techniques will be applied in order to study the temporal variability of the time series and to filter them, as well as to obtain linear and non-linear tendencies that will then be inserted into the numerical model to simulate future scenarios. The study of the influence of freshwater discharge on the dynamics of coastal regions over long timescales is not easily accomplished, since the necessary data series are often not available for the region. Therefore, it is necessary to develop alternative methodologies to enable long-term studies of the Patos Lagoon, using historical data and numerical modeling techniques. These methodologies will allow the impact of climate change on the region to be evaluated at a low operational cost.

  7. Modelling the Delamination Failure along the CFRP-CFST Beam Interaction Surface Using Different Finite Element Techniques

    Directory of Open Access Journals (Sweden)

    Ahmed W. Al-Zand

    2017-01-01

    Full Text Available Nonlinear finite element (FE) models are prepared to investigate the behaviour of concrete-filled steel tube (CFST) beams strengthened by carbon fibre reinforced polymer (CFRP) sheets. The beams are strengthened from the bottom side only, with varied sheet lengths (full and partial beam lengths), and then subjected to ultimate flexural loads. Three surface interaction techniques are used to implement the bonding behaviour between the steel tube and the CFRP sheet, namely, the full tie interaction (TI), cohesive element (CE) and cohesive behaviour (CB) techniques, using ABAQUS software. The results of the comparison between the FE analysis and an existing experimental study confirm that FE models with the TI technique are applicable for beams strengthened by CFRP sheets over the full wrapping length; this technique could not accurately implement the CFRP delamination failure, which occurred for beams with a partial wrapping length. Meanwhile, the FE models with the CE and CB techniques are applicable for implementing both CFRP failures (rupture and delamination) for full and partial wrapping lengths, respectively. The ultimate load ratios achieved by the FE models using the TI, CE and CB techniques are about 1.122, 1.047 and 1.045, respectively, compared with the results of the existing experimental tests.

  8. Validation of a COMSOL Multiphysics based soil model using imaging techniques

    Science.gov (United States)

    Hayes, Robert; Newill, Paul; Podd, Frank; Dorn, Oliver; York, Trevor; Grieve, Bruce

    2010-05-01

    In the face of climate change the ability to rapidly identify new plant varieties that will be tolerant to drought, and other stresses, is going to be key to breeding the food crops of tomorrow. Currently, above soil features (phenotypes) are monitored in industrial greenhouses and field trials during seed breeding programmes so as to provide an indication of which plants have the most likely preferential genetics to thrive in the future global environments. These indicators of 'plant vigour' are often based on loosely related features which may be straightforward to examine, such as an additional ear of corn on a maize plant, but which are labour intensive and often lacking in direct linkage to the required crop features. A new visualisation tool is being developed for seed breeders, providing on-line data for each individual plant in a screening programme indicating how efficiently each plant utilises the water and nutrients available in the surrounding soil. It will be used as an in-field tool for early detection of desirable genetic traits with the aim of increased efficiency in identification and delivery of tomorrow's drought tolerant food crops. Visualisation takes the form of Electrical Impedance Tomography (EIT), a non-destructive and non-intrusive imaging technique. The measurement space is typical of medical and industrial process monitoring i.e. on a small spatial scale as opposed to that of typical geophysical applications. EIT measurements are obtained for an individual plant thus allowing water and nutrient absorption levels for an individual specimen to be inferred from the resistance distribution image obtained. In addition to traditional soft-field image reconstruction techniques the inverse problem is solved using mathematical models for the mobility of water and solutes in soil. The University of Manchester/Syngenta LCT2 (Low Cost Tomography 2) instrument has been integrated into crop growth studies under highly controlled soil, nutrient and

  9. Towards representing human behavior and decision making in Earth system models - an overview of techniques and approaches

    Science.gov (United States)

    Müller-Hansen, Finn; Schlüter, Maja; Mäs, Michael; Donges, Jonathan F.; Kolb, Jakob J.; Thonicke, Kirsten; Heitzig, Jobst

    2017-11-01

    Today, humans have a critical impact on the Earth system and vice versa, which can generate complex feedback processes between social and ecological dynamics. Integrating human behavior into formal Earth system models (ESMs), however, requires crucial modeling assumptions about actors and their goals, behavioral options, and decision rules, as well as modeling decisions regarding human social interactions and the aggregation of individuals' behavior. Here, we review existing modeling approaches and techniques from various disciplines and schools of thought dealing with human behavior at different levels of decision making. We demonstrate modelers' often vast degrees of freedom but also seek to make modelers aware of the often crucial consequences of seemingly innocent modeling assumptions. After discussing which socioeconomic units are potentially important for ESMs, we compare models of individual decision making that correspond to alternative behavioral theories and that make diverse modeling assumptions about individuals' preferences, beliefs, decision rules, and foresight. We review approaches to model social interaction, covering game theoretic frameworks, models of social influence, and network models. Finally, we discuss approaches to studying how the behavior of individuals, groups, and organizations can aggregate to complex collective phenomena, discussing agent-based, statistical, and representative-agent modeling and economic macro-dynamics. We illustrate the main ingredients of modeling techniques with examples from land-use dynamics as one of the main drivers of environmental change bridging local to global scales.

  10. Predicting bottlenose dolphin distribution along Liguria coast (northwestern Mediterranean Sea) through different modeling techniques and indirect predictors.

    Science.gov (United States)

    Marini, C; Fossa, F; Paoli, C; Bellingeri, M; Gnone, G; Vassallo, P

    2015-03-01

    Habitat modeling is an important tool to investigate the quality of the habitat for a species within a certain area, to predict species distribution and to understand the ecological processes behind it. Many species have been investigated by means of habitat modeling techniques, mainly to support effective management and protection policies, and cetaceans play an important role in this context. The bottlenose dolphin (Tursiops truncatus) has been investigated with habitat modeling techniques since 1997. The objectives of this work were to predict the distribution of the bottlenose dolphin in a coastal area through the use of static morphological features and to compare the prediction performance of three different modeling techniques: Generalized Linear Model (GLM), Generalized Additive Model (GAM) and Random Forest (RF). Four static variables were tested: depth, bottom slope, distance from the 100 m bathymetric contour and distance from the coast. RF proved to be both the most accurate and the most precise modeling technique, with very high distribution probabilities predicted in presence cells (mean predicted probability of 90.4%) and with 66.7% of presence cells having a predicted probability between 90% and 100%. The bottlenose dolphin distribution obtained with RF allowed the identification of specific areas with particularly high presence probability along the coastal zone; the recognition of these core areas may be the starting point for developing effective management practices to improve T. truncatus protection. Copyright © 2014 Elsevier Ltd. All rights reserved.
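
    A minimal Random Forest habitat sketch in Python, assuming the study's four static predictors and synthetic presence/absence cells (the fake presences are concentrated in shallow water near the coast purely for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(9)

n = 2000
X = np.column_stack([rng.uniform(0, 500, n),   # depth (m)
                     rng.uniform(0, 15, n),    # bottom slope (deg)
                     rng.uniform(0, 30, n),    # dist. to 100 m contour (km)
                     rng.uniform(0, 20, n)])   # dist. to coast (km)

# Synthetic presences, more likely in shallow cells near the coast
p_true = 1 / (1 + np.exp(0.02 * (X[:, 0] - 100) + 0.2 * (X[:, 3] - 8)))
y = (rng.random(n) < p_true).astype(int)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print("out-of-bag accuracy: %.3f" % rf.oob_score_)
print("importances (depth, slope, d100, dcoast):",
      np.round(rf.feature_importances_, 3))
```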

  11. Finite element model updating using the shadow hybrid Monte Carlo technique

    Science.gov (United States)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques to deal with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers a very important MCMC approach to dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and are compared to the application of the HMC algorithm on the same structures.

  12. Tsunami Modeling and Prediction Using a Data Assimilation Technique with Kalman Filters

    Science.gov (United States)

    Barnier, G.; Dunham, E. M.

    2016-12-01

    Earthquake-induced tsunamis cause severe damage along densely populated coastlines. It is difficult to predict and anticipate tsunami waves in advance, but if the earthquake occurs far enough from the coast, there may be enough time to evacuate the zones at risk. Therefore, any real-time information on the tsunami wavefield (as it propagates towards the coast) is extremely valuable for early warning systems. After the 2011 Tohoku earthquake, a dense tsunami-monitoring network (S-net) based on cabled ocean-bottom pressure sensors was deployed along the Pacific coast in northeastern Japan. Maeda et al. (GRL, 2015) introduced a data assimilation technique to reconstruct the tsunami wavefield in real time by combining numerical solution of the shallow water wave equations with additional terms penalizing the numerical solution for not matching observations. The penalty or gain matrix is determined through optimal interpolation and is independent of time. Here we explore a related data assimilation approach using the Kalman filter method to evolve the gain matrix. While more computationally expensive, the Kalman filter approach potentially provides more accurate reconstructions. We test our method on a 1D tsunami model derived from the Kozdon and Dunham (EPSL, 2014) dynamic rupture simulations of the 2011 Tohoku earthquake. For appropriate choices of the model and data covariance matrices, the method reconstructs the tsunami wavefield prior to wave arrival at the coast. We plan to compare the Kalman filter method to the optimal interpolation method developed by Maeda et al. (GRL, 2015) and then to implement the method in 2D.
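
    A minimal sketch of the assimilation cycle follows, assuming a generic linear state-space model in place of the shallow-water operator. It highlights how the Kalman gain K is recomputed from the evolving covariance P, in contrast to the static gain of optimal interpolation; all matrices and noise levels are illustrative.

```python
# One-dimensional toy: Kalman filter assimilating a few "gauge" readings.
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 5                       # state size, number of pressure gauges
A = 0.5 * np.eye(n) + 0.5 * np.eye(n, k=1)      # toy propagation operator
H = np.zeros((m, n))
H[np.arange(m), np.arange(0, n, n // m)] = 1.0  # gauges sample the state
Q = 1e-4 * np.eye(n)               # model-error covariance
R = 1e-2 * np.eye(m)               # observation-error covariance

x, P = np.zeros(n), np.eye(n)      # initial estimate and covariance
x_true = rng.standard_normal(n)

for _ in range(100):
    x_true = A @ x_true + rng.normal(0.0, 1e-2, n)   # truth evolves
    y = H @ x_true + rng.normal(0.0, 1e-1, m)        # noisy gauge readings
    x, P = A @ x, A @ P @ A.T + Q                    # predict
    # Update: unlike the static gain of optimal interpolation,
    # K is recomputed from the evolving covariance P.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (y - H @ x)
    P = (np.eye(n) - K @ H) @ P

print("final RMS reconstruction error:", np.sqrt(np.mean((x - x_true) ** 2)))
```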

  13. Spatial Air Quality Modelling Using Chemometrics Techniques: A Case Study in Peninsular Malaysia

    International Nuclear Information System (INIS)

    Azman Azid; Hafizan Juahir; Mohammad Azizi Amran; Zarizal Suhaili; Mohamad Romizan Osman; Asyaari Muhamad; Ismail Zainal Abidin; Nur Hishaam Sulaiman; Ahmad Shakir Mohd Saudi

    2015-01-01

    This study shows the effectiveness of hierarchical agglomerative cluster analysis (HACA), discriminant analysis (DA), principal component analysis (PCA), and multiple linear regression (MLR) for the assessment of air quality data and the recognition of air pollution sources. Twelve months of data (January-December 2007) from 14 stations in Peninsular Malaysia, each with 14 parameters, were used. Three significant clusters, low pollution source (LPS), moderate pollution source (MPS), and slightly high pollution source (SHPS), were generated via HACA. Forward stepwise DA managed to discriminate eight variables, whereas backward stepwise DA managed to discriminate nine of the fourteen variables. The PCA and factor analysis (FA) results show that the main contributors to air pollution in Peninsular Malaysia are the combustion of fossil fuel in industrial activities, transportation and agricultural systems. Four MLR models show that PM10 accounts for the largest contribution to air pollution in Malaysia. From the study, it can be concluded that the application of chemometrics techniques can disclose meaningful information on the spatial variability of a large and complex air quality data set. A clearer picture of air quality and a novel design of the air quality monitoring network for better management of air pollution can be achieved via these methods. (author)
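
    A sketch of the HACA-plus-PCA part of this workflow follows, applied to a hypothetical stations-by-parameters matrix standing in for the 14 x 14 data set (DA and MLR are omitted).

```python
# HACA (Ward linkage) and PCA on synthetic air-quality data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.lognormal(mean=1.0, sigma=0.5, size=(14, 14))  # stations x parameters
Xs = StandardScaler().fit_transform(X)

# HACA: cut the dendrogram into three clusters (LPS / MPS / SHPS).
Z = linkage(Xs, method="ward")
clusters = fcluster(Z, t=3, criterion="maxclust")
print("station cluster labels:", clusters)

# PCA: the loadings hint at pollution sources (rotation omitted here).
pca = PCA(n_components=3).fit(Xs)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))
print("PC1 loadings:", np.round(pca.components_[0], 2))
```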

  14. The effects of soil-structure interaction modeling techniques on in-structure response spectra

    International Nuclear Information System (INIS)

    Johnson, J.J.; Wesley, D.A.; Almajan, I.T.

    1977-01-01

    The structure considered for this investigation consisted of the reactor containment building (RCB) and the prestressed concrete reactor vessel (PCRV) of an HTGR plant. A conventional lumped-mass dynamic model in three dimensions was used in the study. The horizontal and vertical responses, which are uncoupled due to the symmetry of the structure, were determined for horizontal and vertical excitation. Five different site conditions ranging from competent rock to a soft soil site were considered. The simplified approach to the overall plant analysis utilized stiffness-proportional composite damping with a limited amount of soil damping consistent with US NRC regulatory guidelines. Selected cases were also analyzed assuming a soil damping value approximating the theoretical value. The results from the simplified approach were compared to those determined by rigorously coupling the structure to a frequency-independent half-space representation of the soil. Finally, equivalent modal damping ratios were found by matching the frequency response at a point within the coupled soil-structure system, determined by solution of the coupled and uncoupled equations of motion. The basis for comparison of the aforementioned techniques was the response spectra at selected locations within the soil-structure system. Each of the five site conditions was analyzed and in-structure response spectra were generated. The response spectra were combined to form a design envelope which encompasses the entire range of site parameters. Both the design envelopes and the site-by-site results were compared.

  15. Coronary stent on coronary CT angiography: Assessment with model-based iterative reconstruction technique

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Chae; Kim, Yeo Koon; Chun, Eun Ju; Choi, Sang IL [Dept. of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)]

    2016-05-15

    To assess the performance of the model-based iterative reconstruction (MBIR) technique for the evaluation of coronary artery stents on coronary CT angiography (CCTA), twenty-two patients with coronary stent implantation who underwent CCTA were retrospectively enrolled for comparison of image quality between filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR) and MBIR. In each data set, image noise was measured as the standard deviation of the measured attenuation units within circular regions of interest in the ascending aorta (AA) and left main coronary artery (LM). To objectively assess the noise and blooming artifacts in the coronary stents, we additionally measured the standard deviation of the measured attenuation and the intra-luminal stent diameters of a total of 35 stents with dedicated software. Image noise measured in the AA (all p < 0.001), LM (p < 0.001, p = 0.001) and coronary stent (all p < 0.001) was significantly lower with MBIR than with FBP or ASIR. Intraluminal stent diameter was significantly larger with MBIR, as compared with ASIR or FBP (p < 0.001, p = 0.001). MBIR can reduce image noise and blooming artifact from the stent, leading to better in-stent assessment in patients with coronary artery stents.

  16. Filament Breakage Monitoring in Fused Deposition Modeling Using Acoustic Emission Technique

    Directory of Open Access Journals (Sweden)

    Zhensheng Yang

    2018-03-01

    Polymers are being used in a wide range of Additive Manufacturing (AM) applications and have been shown to have tremendous potential for producing complex, individually customized parts. In order to improve part quality, it is essential to identify and monitor the process malfunctions of polymer-based AM. The present work endeavored to develop an alternative method for filament breakage identification in the Fused Deposition Modeling (FDM) AM process. The Acoustic Emission (AE) technique was applied because of its capability of detecting bursting and weak signals, especially against complex background noise. The mechanism of filament breakage is depicted thoroughly, and the relationship between the process parameters and the critical feed rate is obtained. In addition, the framework of filament breakage detection, based on the instantaneous skewness and relative similarity of the raw AE waveform, is illustrated. Afterwards, several filament breakage tests were conducted to validate its feasibility and effectiveness. Results revealed that breakage could be successfully identified. The achievements of the present work could be further used to develop a comprehensive in situ FDM monitoring system at moderate cost.
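
    As a sketch of the skewness-based indicator, the following computes windowed skewness of a synthetic AE trace with an injected burst standing in for a breakage event. The sampling rate, window length and threshold rule are assumptions, not the paper's settings.

```python
# Windowed skewness of a synthetic AE waveform with one injected burst.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(4)
fs = 100_000                              # assumed sampling rate, Hz
signal = rng.standard_normal(fs)          # 1 s of background noise
burst = 25.0 * np.exp(-np.linspace(0, 8, 2000))
signal[60_000:62_000] += burst            # simulated breakage burst

win = 1000
windows = signal[: len(signal) // win * win].reshape(-1, win)
skewness = skew(windows, axis=1)          # instantaneous (windowed) skewness
threshold = 5 * np.std(skewness[:50])     # baseline from early windows
hits = np.where(np.abs(skewness) > threshold)[0]
print("windows flagged as possible breakage (s):", hits * win / fs)
```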

  17. Histologic Comparison of Vibrating Guidewire with Conventional Guidewire Technique in an Experimental Coronary In Vivo Model

    International Nuclear Information System (INIS)

    Katsouras, Christos S.; Michalis, Lampros K.; Malamou-Mitsi, Vassiliki D.; Niokou, Demetra; Giogiakas, Vassilios; Nikas, Dimitrios; Massouras, Gerasimos; Dallas, Pavlos; Tsetis, Dimitrios K.; Sideris, Dimitris A.; Rees, Michael R.

    2003-01-01

    Purpose: To compare the damage caused by vibrating guidewire manipulation and conventional guidewire manipulation of soft coronary wires in normal sheep coronary arteries. Methods: Using an intact sheep model, the two methods of passing a coronary guidewire down a normal coronary artery under fluoroscopic screening control were studied. The resulting arterial damage caused by the two techniques was studied histologically. The severity of damage was scored from 1 (no damage) to 4 (severe damage) and expressed as: (a) the percentage of damaged sections, (b) the mean damage score per section and (c) the percentage of sections suffering the most severe degrees of damage (scores 3 and 4). Results: One hundred and sixty-eight sections were studied. The percentage of damaged sections was lower in the vibrating guidewire group (p = 0.004). The mean damage score and the percentage of sections with a damage score of 3 or 4 were smaller in the vibrating guidewire group than in the conventional guidewire manipulation group (p = 0.001 and p = 0.009, respectively). Conclusions: Both methods of guidewire manipulation cause identifiable vascular damage. The extent and severity of damage appear greater when the guidewire is manipulated manually.

  18. Mapping Cropland in Smallholder-Dominated Savannas: Integrating Remote Sensing Techniques and Probabilistic Modeling

    Directory of Open Access Journals (Sweden)

    Sean Sweeney

    2015-11-01

    Traditional smallholder farming systems dominate the savanna range countries of sub-Saharan Africa and provide the foundation for the region's food security. Despite continued expansion of smallholder farming into the surrounding savanna landscapes, food insecurity in the region persists. Central to the monitoring of food security in these countries, and to understanding the processes behind it, are reliable, high-quality datasets of cultivated land. Remote sensing has been frequently used for this purpose, but distinguishing crops under certain stages of growth from savanna woodlands has remained a major challenge. Yet crop production in dryland ecosystems is most vulnerable to seasonal climate variability, amplifying the need for high-quality products showing the distribution and extent of cropland. The key objective of this analysis is the development of a classification protocol for African savanna landscapes, emphasizing the delineation of cropland. We integrate remote sensing techniques with probabilistic modeling into an innovative workflow and present summary results for this methodology applied to a land cover classification of Zambia's Southern Province. Five primary land cover categories are classified for the study area, producing an overall map accuracy of 88.18%. Omission error within the cropland class is 12.11% and commission error 9.76%.
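
    The reported accuracy figures can be reproduced from a confusion matrix. The sketch below uses a hypothetical matrix (rows = reference, columns = mapped class, cropland as class 0) to show how overall accuracy and the cropland omission and commission errors are computed.

```python
# Accuracy metrics from a hypothetical 5-class confusion matrix.
import numpy as np

cm = np.array([[870,  40,  30,  20,  30],   # cropland (reference)
               [ 35, 900,  25,  20,  20],
               [ 25,  20, 880,  40,  35],
               [ 15,  25,  30, 910,  20],
               [ 20,  15,  25,  20, 920]])

overall = np.trace(cm) / cm.sum()
omission = 1 - cm[0, 0] / cm[0, :].sum()     # cropland missed by the map
commission = 1 - cm[0, 0] / cm[:, 0].sum()   # non-cropland mapped as cropland
print(f"overall accuracy: {overall:.2%}")
print(f"cropland omission: {omission:.2%}, commission: {commission:.2%}")
```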

  19. CuFeO2 formation using fused deposition modeling 3D printing and sintering technique

    Science.gov (United States)

    Salea, A.; Dasaesamoh, A.; Prathumwan, R.; Kongkaew, T.; Subannajui, K.

    2017-09-01

    CuFeO2 is a metal oxide mineral material known as delafossite. It can potentially be used as a chemical catalyst and as a gas sensing material. Several methods exist to fabricate CuFeO2, such as chemical synthesis, sintering, sputtering, and chemical vapor deposition. In our work, CuFeO2 is prepared by Fused Deposition Modeling (FDM) 3D printing. A composite filament containing Cu and Fe is printed in three dimensions, and then sintered and annealed at high temperature to obtain CuFeO2. A suitable polymer blend and the maximum volume percentage of metal powder are studied; as the volume percentage of metal powder increases, the melt flow rate of the polymer blend also increases. The most suitable printing condition is reported, and the properties of CuFeO2 are characterized by scanning electron microscopy, differential scanning calorimetry and X-ray diffraction. As a new method of producing semiconductors, this technique has the potential to allow any scientist or student to design and print a catalyst or sensing material with the most conventional 3D printing machines commonly used around the world.

  20. Using a business model approach and marketing techniques for recruitment to clinical trials.

    Science.gov (United States)

    McDonald, Alison M; Treweek, Shaun; Shakur, Haleema; Free, Caroline; Knight, Rosemary; Speed, Chris; Campbell, Marion K

    2011-03-11

    Randomised controlled trials (RCTs) are generally regarded as the gold standard for evaluating health care interventions. The level of uncertainty around a trial's estimate of effect is, however, frequently linked to how successful the trial has been in recruiting and retaining participants. As recruitment is often slower or more difficult than expected, with many trials failing to reach their target sample size within the timescale and funding originally envisaged, the results are often less reliable than they could have been. The high number of trials that require an extension to the recruitment period in order to reach the required sample size potentially delays the introduction of more effective therapies into routine clinical practice. Moreover, it may result in less research being undertaken as resources are redirected to extending existing trials rather than funding additional studies. Poor recruitment to publicly-funded RCTs has been much debated but there remains remarkably little clear evidence as to why many trials fail to recruit well, which recruitment methods work, in which populations and settings and for what type of intervention. One proposed solution to improving recruitment and retention is to adopt methodology from the business world to inform and structure trial management techniques. We review what is known about interventions to improve recruitment to trials. We describe a proposed business approach to trials and discuss the implementation of using a business model, using insights gained from three case studies.

  1. Qualitative and quantitative changes in phospholipids and proteins investigated by spectroscopic techniques in animal depression model

    Science.gov (United States)

    Depciuch, J.; Sowa-Kucma, M.; Nowak, G.; Papp, M.; Gruca, P.; Misztak, P.; Parlinska-Wojtan, M.

    2017-04-01

    Depression is nowadays a civilization disease with high mortality, and one of its major causes is chronic stress. Raman, Fourier Transform Infrared (FTIR) and Ultraviolet-Visible (UV-vis) spectroscopies were used to determine the changes in the quantity and structure of phospholipids and proteins in the blood serum of rats subjected to chronic mild stress, a common animal model of depression. Moreover, the efficiency of imipramine treatment was evaluated. It was found that chronic mild stress not only damages the structure of the phospholipids and proteins, but also decreases their level in the blood serum. A 5-week imipramine treatment slightly increased the quantity of proteins, leaving the damaged phospholipids unchanged. Structural information on the phospholipids and proteins was obtained by UV-vis spectroscopy combined with the second derivative of the FTIR spectra. Indeed, the structure of proteins in the blood serum of stressed rats was normalized after imipramine therapy, while the impaired structure of the phospholipids remained unaffected. These findings strongly suggest that the depression factor, chronic mild stress, may induce permanent (irreversible) damage to the phospholipid structure, identified as shortened carbon chains. This study shows a possible new application of spectroscopic techniques in the diagnosis and therapy monitoring of depression.

  2. Monte Carlo Technique Used to Model the Degradation of Internal Spacecraft Surfaces by Atomic Oxygen

    Science.gov (United States)

    Banks, Bruce A.; Miller, Sharon K.

    2004-01-01

    Atomic oxygen is one of the predominant constituents of Earth's upper atmosphere. It is created by the photodissociation of molecular oxygen (O2) into single O atoms by ultraviolet radiation. It is chemically very reactive because a single O atom readily combines with another O atom or with other atoms or molecules that can form a stable oxide. The effects of atomic oxygen on the external surfaces of spacecraft in low Earth orbit can have dire consequences for spacecraft life, and this is a well-known and much-studied problem. Much less is known about the effects of atomic oxygen on the internal surfaces of spacecraft. This degradation can occur when openings in components of the spacecraft exterior allow the entry of atomic oxygen into regions that are not subject to direct atomic oxygen attack but rather to scattered attack. Openings can exist because of spacecraft venting, microwave cavities, and apertures for Earth viewing, Sun sensors, or star trackers. The effects of atomic oxygen erosion of polymers interior to an aperture on a spacecraft were simulated at the NASA Glenn Research Center by using Monte Carlo computational techniques. A two-dimensional model was used to provide quantitative indications of the attenuation of atomic oxygen flux as a function of the distance into a parallel-walled cavity. The model allows the atomic oxygen arrival direction, the Maxwell-Boltzmann temperature, and the ram energy to be varied, along with the interaction parameters: the degree of recombination upon impact with polymer or nonreactive surfaces, the initial reaction probability, the dependence of reaction probability upon energy and angle of attack, the degree of specularity of scattering from reactive and nonreactive surfaces, and the degree of thermal accommodation upon impact with reactive and nonreactive surfaces. Varying these allows the model to produce atomic oxygen erosion geometries that replicate actual experimental results from space. The degree of
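
    A stripped-down sketch of such a simulation is shown below: atoms enter a 2-D parallel-walled cavity, bounce diffusely (cosine law) off the walls, and react with a fixed probability per impact, yielding a flux-attenuation profile versus depth. The entry distribution and interaction parameters are simplified placeholders for the many parameters the Glenn model exposes.

```python
# 2-D Monte Carlo of atomic-oxygen attenuation in a parallel-walled cavity.
import numpy as np

rng = np.random.default_rng(5)
width, depth = 1.0, 10.0          # cavity width and depth, in aperture widths
p_react = 0.1                     # reaction probability per wall impact
n_atoms, n_bins = 50_000, 20
erosion = np.zeros(n_bins)        # reacted-atom counts vs. depth

for _ in range(n_atoms):
    x, y = rng.uniform(0, width), 0.0
    theta = rng.uniform(-np.pi / 2, np.pi / 2)    # entry direction at the mouth
    dx, dy = np.sin(theta), np.cos(theta)
    while True:
        if dx == 0.0:
            break                                 # travels straight out the bottom
        t = (width - x) / dx if dx > 0 else x / -dx   # distance to a side wall
        y_wall = y + t * dy
        if not 0.0 < y_wall < depth:
            break                                 # exits via the mouth or bottom
        x, y = (width if dx > 0 else 0.0), y_wall
        if rng.random() < p_react:                # atom reacts (erodes polymer)
            erosion[int(y / depth * n_bins)] += 1
            break
        phi = np.arcsin(rng.uniform(-1, 1))       # diffuse cosine-law re-emission
        dx = np.cos(phi) if x == 0.0 else -np.cos(phi)
        dy = np.sin(phi)

print("relative reaction flux vs. depth:")
print(np.round(erosion / erosion[0], 3))
```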

  3. Application of the Shell/3D Modeling Technique for the Analysis of Skin-Stiffener Debond Specimens

    Science.gov (United States)

    Krueger, Ronald; O'Brien, T. Kevin; Minguet, Pierre J.

    2002-01-01

    The application of a shell/3D modeling technique for the simulation of skin/stringer debonding in a specimen subjected to three-point bending is demonstrated. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front, was used to capture the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlation of the results demonstrates the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherents.

  4. Novel three-step pseudo-absence selection technique for improved species distribution modelling.

    Directory of Open Access Journals (Sweden)

    Senait D Senay

    Pseudo-absence selection for species distribution models (SDMs) is the subject of ongoing investigation. Numerous techniques continue to be developed, and reports of their effectiveness vary. Because the quality of presence and absence data is key to acceptable accuracy of correlative SDM predictions, determining an appropriate method to characterise pseudo-absences for SDMs is vital. The main methods currently used to generate pseudo-absence points are: (1) pseudo-absence locations generated randomly from background data; (2) pseudo-absence locations generated within a delimited geographical distance from recorded presence points; and (3) pseudo-absence locations selected in areas that are environmentally dissimilar from presence points. There is a need for a method that considers both geographical extent and environmental requirements to produce pseudo-absence points that are spatially and ecologically balanced. We use a novel three-step approach that satisfies both the spatial and the ecological reasons why the target species is likely to find a particular geo-location unsuitable. Step 1 comprises establishing a geographical extent around species presence points, from which pseudo-absence points are selected based on analyses of environmental variable importance at different distances. This step gives an ecologically meaningful explanation for the spatial range of the background data, as opposed to using an arbitrary radius. Step 2 determines locations that are environmentally dissimilar to the presence points within the distance specified in step one. Step 3 performs K-means clustering to reduce the number of potential pseudo-absences to the desired set by taking the centroids of clusters in the most environmentally dissimilar class identified in step 2. By considering spatial, ecological and environmental aspects, the three-step method identifies appropriate pseudo-absence points for correlative SDMs. We illustrate this method by predicting the New
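
    Step 3 is straightforward to sketch in code. Here, K-means centroids of hypothetical "most dissimilar" candidate cells (steps 1 and 2 are replaced by synthetic data) are kept as the spatially balanced pseudo-absence set.

```python
# K-means centroids of environmentally dissimilar candidates as pseudo-absences.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
# Hypothetical environmental covariates of dissimilar candidate cells.
candidates = rng.normal(loc=[3.0, -2.0], scale=1.5, size=(2000, 2))

n_pseudo_absences = 100
km = KMeans(n_clusters=n_pseudo_absences, n_init=10,
            random_state=0).fit(candidates)
pseudo_absences = km.cluster_centers_          # one point per cluster
print(pseudo_absences.shape)                   # (100, 2)
```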

  5. Evaluation of blackbody radiation emitted by arbitrarily shaped bodies using the source model technique.

    Science.gov (United States)

    Sister, Ilya; Leviatan, Yehuda; Schächter, Levi

    2017-06-12

    Planck's famous blackbody radiation law was derived under the assumption that the dimensions of the radiating body are significantly larger than the radiated wavelengths. What is unique about Planck's formula is the fact that it is independent of the exact loss mechanism and the geometry. Therefore, for a long period of time, it was regarded as a fundamental property of all materials. Deviations from its predictions were attributed to imperfections and referred to as the emissivity of the specific body, a quantity which was always assumed to be smaller than unity. Recent studies showed that the emission spectrum is affected by the geometry of the body and in fact, in a limited frequency range, the emitted spectrum may exceed Planck's prediction provided the typical size of the body is of the same order of magnitude as the emitted wavelength. For the investigation of the blackbody radiation from an arbitrarily shaped body, we developed a code which incorporates the fluctuation-dissipation theorem (FDT) and the source model technique (SMT). The former determines the correlation between the quasi-microscopic current densities in the body and the latter is used to solve the electromagnetic problem numerically. In this study we present the essence of combining the two concepts. We verify the validity of our code by comparing its results obtained for the case of a sphere against analytic results and discuss how the accuracy of the solution is assessed in the general case. Finally, we illustrate several configurations in which the emitted spectrum exceeds Planck's prediction as well as cases in which the geometrical resonances of the body are revealed.

  6. Continuous Modeling Technique of Fiber Pullout from a Cement Matrix with Different Interface Mechanical Properties Using Finite Element Program

    Directory of Open Access Journals (Sweden)

    Leandro Ferreira Friedrich

    Fiber-matrix interface performance has a great influence on the mechanical properties of fiber-reinforced composites. This influence is mainly manifested during fiber pullout from the matrix. As the fiber pullout process consists of a fiber debonding stage and a pullout stage, which involve complex contact problems, numerical modeling is the best way to investigate the interface influence. Although many numerical studies have been conducted, practical and effective techniques suitable for continuous modeling of the fiber pullout process are still scarce. The reason is that numerical divergence frequently happens, interrupting the modeling. By interfacing the popular finite element program ANSYS with MATLAB, we propose a continuous modeling technique and realize the modeling of fiber pullout from a cement matrix with the desired interface mechanical performance. For the debonding process, we used interface elements with cohesive surface traction and exponential failure behavior. For the pullout process, we switched the interface elements to spring elements with variable stiffness, which is related to the interface shear stress as a function of the interface slip displacement. For both processes, the results obtained are in very good agreement with other numerical or analytical models and with experimental tests. We suggest using the present technique to model the toughening achieved by randomly distributed fibers.

  7. Comparison of extraction techniques and modeling of accelerated solvent extraction for the authentication of natural vanilla flavors.

    Science.gov (United States)

    Cicchetti, Esmeralda; Chaintreau, Alain

    2009-06-01

    Accelerated solvent extraction (ASE) of vanilla beans has been optimized using ethanol as a solvent. A theoretical model is proposed to account for this multistep extraction. This allows the determination, for the first time, of the total amount of analytes initially present in the beans and thus the calculation of recoveries using ASE or any other extraction technique. As a result, ASE and Soxhlet extractions have been determined to be efficient methods, whereas recoveries are modest for maceration techniques and depend on the solvent used. Because industrial extracts are obtained by many different procedures, including maceration in various solvents, authenticating vanilla extracts using quantitative ratios between the amounts of vanilla flavor constituents appears to be unreliable. When authentication techniques based on isotopic ratios are used, ASE is a valid sample preparation technique because it does not induce isotopic fractionation.

  8. Monte Carlo Techniques for the Comprehensive Modeling of Isotopic Inventories in Future Nuclear Systems and Fuel Cycles. Final Report

    International Nuclear Information System (INIS)

    Paul P.H. Wilson

    2005-01-01

    The development of Monte Carlo techniques for isotopic inventory analysis has been explored in order to facilitate the modeling of systems with flowing streams of material through varying neutron irradiation environments. This represents a novel application of Monte Carlo methods to a field that has traditionally relied on deterministic solutions to systems of first-order differential equations. The Monte Carlo techniques were based largely on the known modeling techniques of Monte Carlo radiation transport, but with important differences, particularly in the area of variance reduction and efficiency measurement. The software that was developed to implement and test these methods now provides a basis for validating approximate modeling techniques that are available to deterministic methodologies. The Monte Carlo methods have been shown to be effective in reproducing the solutions of simple problems that are possible using both stochastic and deterministic methods. The Monte Carlo methods are also effective for tracking flows of materials through complex systems including the ability to model removal of individual elements or isotopes in the system. Computational performance is best for flows that have characteristic times that are large fractions of the system lifetime. As the characteristic times become short, leading to thousands or millions of passes through the system, the computational performance drops significantly. Further research is underway to determine modeling techniques to improve performance within this range of problems. This report describes the technical development of Monte Carlo techniques for isotopic inventory analysis. The primary motivation for this solution methodology is the ability to model systems of flowing material being exposed to varying and stochastically varying radiation environments. The methodology was developed in three stages: analog methods which model each atom with true reaction probabilities (Section 2), non-analog methods
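
    A sketch of the analog idea follows: each atom is tracked individually through a toy transmutation chain A -> B -> C using true per-step probabilities, and the result is checked against the Bateman solution. Rates, time step and population size are illustrative, not taken from the report.

```python
# Analog Monte Carlo inventory for a toy chain A -> B -> C (stable).
import numpy as np

rng = np.random.default_rng(7)
lam = {"A": 0.05, "B": 0.20}      # transition rates per unit time
dt, n_steps, n_atoms = 0.1, 500, 20_000

state = np.zeros(n_atoms, dtype=int)          # 0 = A, 1 = B, 2 = C
for _ in range(n_steps):
    u = rng.random(n_atoms)
    is_a = state == 0                         # freeze states before updating
    is_b = state == 1
    state[is_a & (u < lam["A"] * dt)] = 1     # A -> B
    state[is_b & (u < lam["B"] * dt)] = 2     # B -> C

counts = np.bincount(state, minlength=3) / n_atoms
t = n_steps * dt
# Analytical Bateman solution for comparison (N_A(0) = 1).
a = np.exp(-lam["A"] * t)
b = lam["A"] / (lam["B"] - lam["A"]) * (np.exp(-lam["A"] * t)
                                        - np.exp(-lam["B"] * t))
print("Monte Carlo A, B, C fractions:", np.round(counts, 3))
print("Bateman     A, B fractions:  ", round(a, 3), round(b, 3))
```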

  9. Application of geo-spatial techniques and cellular automata for modelling urban growth of a heterogeneous urban fringe

    OpenAIRE

    Mahesh Kumar Jat; Mahender Choudhary; Ankita Saxena

    2017-01-01

    Urban growth monitoring and assessment are essential for sustainable natural resource planning and optimal utilization, and for reducing the risk of problems arising from unplanned urban growth such as pollution, urban heat islands and ecological disturbances. Cellular Automata (CA) based modelling techniques have become popular in the recent past for simulating urban growth. The present study aims to evaluate the performance of the CA-based SLEUTH model in simulating the urban growth of a complex ...
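
    A minimal CA growth step in the spirit of SLEUTH's spread behavior might look like the sketch below, where a non-urban cell urbanizes with a probability that grows with its number of urban neighbors. The coefficient is a placeholder, not a calibrated SLEUTH parameter.

```python
# One cellular-automaton "spread" step on a toy urban grid.
import numpy as np

rng = np.random.default_rng(8)
grid = (rng.random((100, 100)) < 0.05).astype(int)   # initial urban seeds

def growth_step(grid, spread=0.08):
    # Count urban cells in the 8-neighborhood via shifted copies.
    neighbors = sum(np.roll(np.roll(grid, i, 0), j, 1)
                    for i in (-1, 0, 1) for j in (-1, 0, 1)
                    if (i, j) != (0, 0))
    p_urbanize = spread * neighbors          # more neighbors, higher odds
    new_urban = (grid == 0) & (rng.random(grid.shape) < p_urbanize)
    return grid | new_urban

for _ in range(10):
    grid = growth_step(grid)
print("urban fraction after 10 steps:", grid.mean())
```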

  10. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    Science.gov (United States)

    Klumpar, D. M. (Principal Investigator)

    1981-01-01

    Efforts devoted to reading the MAGSAT data tapes in preparation for further analysis are discussed. A modeling procedure developed to compute the magnetic fields at satellite orbit due to hypothesized current distributions in the ionosphere and magnetosphere is described. This technique utilizes a linear current element representation of the large-scale space-current system. Several examples of the model field perturbations computed along hypothetical satellite orbits are shown.
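
    The linear-current-element idea can be sketched with the Biot-Savart law: the field perturbation at satellite altitude from one straight current segment, evaluated by numerical integration. The geometry and current value below are hypothetical.

```python
# Biot-Savart field of a straight current segment, by numerical integration.
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability, T*m/A

def segment_field(r, a, b, current, n_sub=1000):
    """Magnetic field (T) at point r from a straight segment a -> b (m)."""
    pts = a + np.linspace(0, 1, n_sub)[:, None] * (b - a)
    dl = (b - a) / n_sub
    rel = r - pts
    dist = np.linalg.norm(rel, axis=1, keepdims=True)
    dB = MU0 * current / (4 * np.pi) * np.cross(dl, rel) / dist**3
    return dB.sum(axis=0)

# A 100 km east-west current filament of 10 kA, observed 300 km above it.
a = np.array([-50e3, 0.0, 0.0])
b = np.array([+50e3, 0.0, 0.0])
obs = np.array([0.0, 0.0, 300e3])
print("field perturbation (nT):", segment_field(obs, a, b, 1e4) * 1e9)
```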

  11. High-resolution mapping, modeling, and evolution of subsurface geomorphology using ground-penetrating radar techniques

    Digital Repository Service at National Institute of Oceanography (India)

    Loveson, V.J.; Gujar, A.R.

    ... prominent in terms of its high resolution and its nondestructive and cost-effective aspects. In this article, the advantages and limitations of GPR techniques are presented. The usefulness of GPR application in buried coastal geomorphological mapping...

  12. Modeling Academic Performance Evaluation Using Soft Computing Techniques: A Fuzzy Logic Approach

    OpenAIRE

    Ramjeet Singh Yadav; Vijendra Pratap Singh

    2011-01-01

    We have proposed a Fuzzy Expert System (FES) for student academic performance evaluation based on fuzzy logic techniques. A suitable fuzzy inference mechanism and the associated rules are discussed. The paper introduces the principles behind fuzzy logic and illustrates how these principles could be applied by educators to evaluate student academic performance. Several approaches using fuzzy logic techniques have been proposed to provide a practical method for evaluating student academic performanc...
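
    A minimal sketch of such a fuzzy evaluation follows, with shoulder-shaped membership functions, a two-rule weighted rule base and weighted-average defuzzification. The variables, ranges, rule weights and output prototypes are invented for illustration and are not the proposed FES.

```python
# Toy fuzzy evaluation of a student from exam and attendance scores.
import numpy as np

def falling(x, a, b):
    """Membership 1 below a, falling linearly to 0 at b."""
    return float(np.clip((b - x) / (b - a), 0.0, 1.0))

def rising(x, a, b):
    """Membership 0 below a, rising linearly to 1 at b."""
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def evaluate(exam, attendance):
    # Fuzzify the crisp inputs (0-100 scales) into low/high memberships.
    exam_low, exam_high = falling(exam, 40, 70), rising(exam, 40, 70)
    att_low, att_high = falling(attendance, 50, 80), rising(attendance, 50, 80)
    # Weighted rule base: min = AND, max aggregates per output set.
    poor = max(min(exam_low, att_low), 0.8 * min(exam_low, att_high))
    good = max(min(exam_high, att_high), 0.6 * min(exam_high, att_low))
    # Defuzzify as a weighted average of output prototypes (poor=30, good=90).
    return (30 * poor + 90 * good) / (poor + good + 1e-9)

print("evaluated score:", round(evaluate(exam=62, attendance=75), 1))
```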

  13. Short-Term Forecasting Models for Photovoltaic Plants: Analytical versus Soft-Computing Techniques

    OpenAIRE

    Monteiro, Claudio; Fernandez-Jimenez, L. Alfredo; Ramirez-Rosado, Ignacio J.; Muñoz-Jimenez, Andres; Lara-Santillan, Pedro M.

    2013-01-01

    We present and compare two short-term statistical forecasting models for hourly average electric power production forecasts of photovoltaic (PV) plants: the analytical PV power forecasting model (APVF) and the multilayer perceptron PV forecasting model (MPVF). Both models use forecasts from numerical weather prediction (NWP) tools at the location of the PV plant as well as the past recorded values of PV hourly electric power production. The APVF model consists of an original modeling for adj...

  14. The use of Bayesian nonlinear regression techniques for the modelling of the retention behaviour of volatile components of Artemisia species.

    Science.gov (United States)

    Jalali-Heravi, M; Mani-Varnosfaderani, A; Taherinia, D; Mahmoodi, M M

    2012-07-01

    The main aim of this work was to assess the ability of the Bayesian multivariate adaptive regression splines (BMARS) and Bayesian radial basis function (BRBF) techniques to model the gas chromatographic retention indices of volatile components of Artemisia species. A diverse set of molecular descriptors was calculated and used as the descriptor pool for modelling the retention indices. The ability of the BMARS and BRBF techniques to select the most relevant descriptors and proper basis functions for modelling was explored. The results revealed that the BRBF technique is more reproducible than BMARS for modelling the retention indices and can be used as a method for variable selection and modelling in quantitative structure-property relationship (QSPR) studies. It is also concluded that the Markov chain Monte Carlo (MCMC) search engine, implemented in the BRBF algorithm, is a suitable method for selecting the most important features from a vast number of them. The correlations between the calculated retention indices and the experimental ones for the training and prediction sets (0.935 and 0.902, respectively) reveal the predictive power of the BRBF model in estimating the retention indices of volatile components of Artemisia species.

  15. Laboratory model study of newly deposited dredger fills using improved multiple-vacuum preloading technique

    Directory of Open Access Journals (Sweden)

    Jingjin Liu

    2017-10-01

    Problems continue to be encountered with the traditional vacuum preloading method in the field during the treatment of newly deposited dredger fills. In this paper, an improved multiple-vacuum preloading method was developed to consolidate newly deposited dredger fills that are hydraulically placed in seawater for land reclamation in the Lingang Industrial Zone of Tianjin City, China. With this multiple-vacuum preloading method, the newly deposited dredger fills could be treated effectively by adopting a novel moisture separator and a rapid improvement technique without a sand cushion. A series of model tests was conducted in the laboratory to compare the results from the multiple-vacuum preloading method and the traditional one. Ten piezometers and settlement plates were installed to measure the variations in excess pore water pressure and moisture content, and vane shear strength was measured at different positions. The testing results indicate that the water discharge-time curves obtained by the traditional vacuum preloading method can be divided into three phases: rapid growth, slow growth, and steady. Reflecting the process of fluid flow concentrating along tiny ripples and building larger channels inside the soil during the whole vacuum loading process, the fluctuations of pore water pressure during each loading step are likewise divided into three phases: steady, rapid dissipation, and slow dissipation. An optimal loading pattern giving the best treatment effect was proposed for calculating the water discharge and pore water pressure of the soil using the improved multiple-vacuum preloading method. For the newly deposited dredger fills at the Lingang Industrial Zone of Tianjin City, the best loading step was 20 kPa and a loading of 40-50 kPa produced the highest drainage consolidation. The measured moisture content and vane shear strength were discussed in terms of the effect of reinforcement, both of which indicate

  16. Monitoring and Modeling the Impact of Grazers Using Visual, Remote and Traditional Field Techniques

    Science.gov (United States)

    Roadknight, C. M.; Marshall, I. W.; Rose, R. J.

    2009-04-01

    The relationship between wild and domestic animals and the landscape they graze is important to soil erosion studies, because grazers strongly influence vegetation cover (a key control on the rate of overland flow runoff), contribute directly to sediment transport via carriage, and contribute indirectly by exposing fresh soil through trampling and burrowing/excavating. Quantifying the impacts of these effects on soil erosion, and their dependence on grazing intensity, in complex semi-natural habitats has proved difficult. This is due to a lack of manpower to collect sufficient data and weak standardization of data collection between observers. The advent of cheaper and more sophisticated digital camera technology and GPS tracking devices has led to an increase in the amount of habitat monitoring information being collected. We report on the use of automated trail cameras to continuously capture images of grazer (sheep, rabbit, deer) activity in a variety of habitats at the Moor House nature reserve in northern England. As well as grazer activity, these cameras also give valuable information on key climatic soil erosion factors such as snow, rain and wind, and on plant growth, and thus allow the importance of a range of grazer activities and the grazing intensity to be estimated. GPS collars and more well-established survey methods (erosion monitoring, dung counting and vegetation surveys) are being used to generate a detailed representation of land usage and to plan camera siting. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the data processing time and increase focus on important subsets of the collected data. We also present a land usage model that estimates grazing intensity, grazer behaviours and their impact on soil coverage at sites where cameras have not been deployed, based on generalising from camera sites to other

  17. Mixed Models and Reduction Techniques for Large-Rotation, Nonlinear Analysis of Shells of Revolution with Application to Tires

    Science.gov (United States)

    Noor, A. K.; Andersen, C. M.; Tanner, J. A.

    1984-01-01

    An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.

  18. Swine (Sus scrofa) as a Model of Postinfarction Mitral Regurgitation and Techniques to Accommodate Its Effects during Surgical Repair.

    Science.gov (United States)

    Sarin, Eric L; Shi, Weiwei; Duara, Rajnish; Melone, Todd A; Kalra, Kanika; Strong, Ashley; Girish, Apoorva; McIver, Bryant V; Thourani, Vinod H; Guyton, Robert A; Padala, Muralidhar

    2016-01-01

    Mitral regurgitation (MR) is a common heart-valve lesion after myocardial infarction in humans. Because it is considered a risk factor for accelerated heart failure and death, various surgical approaches and catheter-based devices to correct it are in development. Lack of a reproducible animal model of MR after myocardial infarction and reliable techniques to perform open-heart surgery in these diseased models led to the use of healthy animals to test new devices. Thus, most devices that are deemed safe in healthy animals have shown poor results in human efficacy studies, hampering progress in this area of research. Here we report our experience with a swine model of postinfarction MR, describe techniques to induce regurgitation and perform open-heart surgery in these diseased animals, and discuss our outcomes, complications, and solutions.

  19. Spherical harmonics based intrasubject 3-D kidney modeling/registration technique applied on partial information

    Science.gov (United States)

    Dillenseger, Jean-Louis; Guillaume, Hélène; Patard, Jean-Jacques

    2006-01-01

    This paper presents a 3D shape reconstruction/intra-patient rigid registration technique used to establish preoperative planning for nephron-sparing surgery. The usual preoperative imaging system is spiral CT urography, which provides successive 3D acquisitions of complementary information on kidney anatomy. Because the kidney is difficult to demarcate from the liver or from the spleen, only limited information on its volume or surface is available. In this paper we propose a methodology allowing a global kidney spatial representation on a spherical harmonics basis. The spherical harmonics are exploited to recover the kidney 3D shape and also to perform intra-patient 3D rigid registration. An evaluation performed on synthetic data showed that this technique presented lower performance than expected for 3D shape recovery but exhibited registration results slightly more accurate than the ICP technique, with faster computation time. PMID:17073323

  20. Comparison of lung tumor motion measured using a model-based 4DCT technique and a commercial protocol.

    Science.gov (United States)

    O'Connell, Dylan; Shaverdian, Narek; Kishan, Amar U; Thomas, David H; Dou, Tai H; Lewis, John H; Lamb, James M; Cao, Minsong; Tenn, Stephen; Percy, Lee P; Low, Daniel A

    2017-11-11

    The aim was to compare lung tumor motion measured with a model-based technique to commercial 4-dimensional computed tomography (4DCT) scans and to describe a workflow for using model-based 4DCT as a clinical simulation protocol. Twenty patients were imaged using a model-based technique and commercial 4DCT. Tumor motion was measured on each commercial 4DCT dataset and was calculated on model-based datasets for 3 breathing amplitude percentile intervals: 5th to 85th, 5th to 95th, and 0th to 100th. Internal target volumes (ITVs) were defined on the 4DCT and 5th to 85th interval datasets and compared using Dice similarity. Images were evaluated for noise and rated by 2 radiation oncologists for artifacts. Mean differences in tumor motion magnitude between commercial and model-based images were 0.47 ± 3.0, 1.63 ± 3.17, and 5.16 ± 4.90 mm for the 5th to 85th, 5th to 95th, and 0th to 100th amplitude intervals, respectively. Dice coefficients between ITVs defined on commercial and 5th to 85th model-based images had a mean value of 0.77 ± 0.09. Single standard deviation image noise was 11.6 ± 9.6 HU in the liver and 6.8 ± 4.7 HU in the aorta for the model-based images compared with 57.7 ± 30 and 33.7 ± 15.4 for commercial 4DCT. Mean model error within the ITV regions was 1.71 ± 0.81 mm. Model-based images exhibited a reduced presence of artifacts at the tumor compared with commercial images. Tumor motion measured with the model-based technique using the 5th to 85th percentile breathing amplitude interval corresponded more closely to commercial 4DCT than the 5th to 95th or 0th to 100th intervals, which showed greater motion on average. The model-based technique tended to display increased tumor motion when breathing amplitude intervals wider than 5th to 85th were used because of the influence of unusually deep inhalations. These results suggest that care must be taken in selecting the appropriate interval during image generation when using model-based 4DCT methods.
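
    The Dice similarity used to compare the ITVs is simple to state in code. In the sketch below, two random 3-D masks stand in for the contoured volumes.

```python
# Dice similarity coefficient between two binary ITV masks.
import numpy as np

rng = np.random.default_rng(9)
itv_commercial = rng.random((64, 64, 32)) > 0.7
itv_model = itv_commercial.copy()
flip = rng.random(itv_model.shape) > 0.95     # perturb ~5% of voxels
itv_model ^= flip

def dice(a, b):
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

print("Dice coefficient:", round(dice(itv_commercial, itv_model), 3))
```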

  1. A comparison model between density functional and wave function theories by means of the Löwdin partitioning technique.

    Science.gov (United States)

    Caballero, Marc; Moreira, Ibério de P R; Bofill, Josep Maria

    2013-05-07

    A comparison model based on the Löwdin partitioning technique is proposed to analyze the differences in the treatment of electron correlation by the wave function and density functional models. This comparison model provides a tool for understanding the inherent structure of both theories and their discrepancies in terms of the underlying mathematical structure and the necessary conditions for variationality required of the energy functional. Some numerical results on simple molecules are also reported, revealing the known phenomenon of "overcorrelation" of density functional theory methods.

  2. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
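
    The selection idea can be sketched by fitting a single Gaussian versus a two-component Gaussian mixture (estimated by EM under the hood) and comparing AIC values. The impulsive-noise-like sample below is synthetic, and the Middleton Class A specifics are not reproduced.

```python
# AIC comparison: single Gaussian vs. two-component Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(10)
# Impulsive-noise-like sample: mostly narrow, occasionally wide.
x = np.concatenate([rng.normal(0, 1, 1800),
                    rng.normal(0, 5, 200)]).reshape(-1, 1)

for k in (1, 2):
    gm = GaussianMixture(n_components=k, random_state=0).fit(x)
    print(f"{k} component(s): AIC = {gm.aic(x):.1f}")   # lower is better
```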

  3. A Dry Membrane Protection Technique to Allow Surface Acoustic Wave Biosensor Measurements of Biological Model Membrane Approaches

    Directory of Open Access Journals (Sweden)

    Marius Enachescu

    2013-09-01

    Model membrane approaches have attracted much attention in the biomedical sciences as a way to investigate and simulate biological processes. The application of model membrane systems for biosensor measurements is partly restricted by the fact that the integrity of membranes critically depends on the maintenance of an aqueous surrounding, while various biosensors require preconditioning of dry sensors. This is, for example, true for the well-established surface acoustic wave (SAW) biosensor SAM®5 blue. Here, a simple drying procedure for sensor-supported model membranes is introduced using the protective disaccharide trehalose. Highly reproducible model membranes were prepared by the Langmuir-Blodgett technique, transferred to SAW sensors and supplemented with a trehalose solution. Membrane rehydration after dry incorporation into the SAW device becomes immediately evident through phase changes. Reconstituted model membranes maintain their full functionality, as indicated by biotin/avidin binding experiments. Atomic force microscopy confirmed the morphological invariability of dried and rehydrated membranes. Approximating more physiological recognition phenomena, the site-directed immobilization of the integrin VLA-4 into the reconstituted model membrane and subsequent VCAM-1 ligand binding with nanomolar affinity were illustrated. This simple drying procedure is a novel way to combine model membrane generation by the Langmuir-Blodgett technique with SAW biosensor measurements, which extends the applicability of SAM®5 blue in the biomedical sciences.

  4. Techniques for Modeling Human Performance in Synthetic Environments: A Supplementary Review

    National Research Council Canada - National Science Library

    Ritter, Frank E; Shadbolt, Nigel R; Elliman, David; Young, Richard M; Gobet, Fernand; Baxter, Gordon D

    2003-01-01

    Selected recent developments and promising directions for improving the quality of models of human performance in synthetic environments are summarized, beginning with the potential uses and goals for behavioral models...

  5. An approach to modeling operator's cognitive behavior using artificial intelligence techniques in emergency operating event sequences

    International Nuclear Information System (INIS)

    Cheon, Se Woo; Sur, Sang Moon; Lee, Yong Hee; Park, Young Taeck; Moon, Sang Joon

    1994-01-01

    Computer modeling of an operator's cognitive behavior is a promising approach for human factors studies and man-machine systems assessment. In this paper, the state of the art in modeling operator behavior and the current status of the development of an operator model (MINERVA-NPP) are presented. The model is constructed as a knowledge-based system within a blackboard framework and is simulated on the basis of emergency operating procedures.

  6. Advanced communication system time domain modeling techniques ASYSTD software description. Volume 2: Program support documentation

    Science.gov (United States)

    1972-01-01

    The theoretical basis for the ASYSTD program is discussed in detail. In addition, the extensive bibliography given in this document illustrates some of the extensive work accomplished in the area of time domain simulation. Additions have been made in the areas of modeling and language program enhancements, orthogonal transform modeling, error analysis, general filter models, BER measurements, etc. Several models have been developed which utilize the COMSAT-generated orthogonal transform algorithms.

  7. Aluminium speciation in natural waters: measurement using Donnan membrane technique and modeling using NICA-Donnan

    NARCIS (Netherlands)

    Weng, L.P.; Temminghoff, E.J.M.; Riemsdijk, van W.H.

    2002-01-01

    The study of Al speciation is of interest for the assessment of soil and water quality. For the measurement of "free" aluminum (Al3+), a recently developed Donnan membrane technique was tested by measuring Al3+ in aluminum-fluoride solutions and gibbsite suspensions. It shows that the Donnan

  8. State-of-the-art Tools and Techniques for Quantitative Modeling and Analysis of Embedded Systems

    DEFF Research Database (Denmark)

    Bozga, Marius; David, Alexandre; Hartmanns, Arnd

    2012-01-01

    This paper surveys well-established and recent tools and techniques developed for the design of rigorous embedded systems. We will first survey UPPAAL and MODEST, two tools capable of dealing with both timed and stochastic aspects. Then, we will overview the BIP framework for modular design...

  9. Development of sensors, probes and imaging techniques for pollutant monitoring in geo-environmental model tests

    NARCIS (Netherlands)

    Lynch, R.J.; Allersma, H.; Barker, H.; Bezuijen, A.; Bolton, M.D.; Cartwright, M.; Davies, M.C.R.; Depountis, N.; Esposito, G.; Garnier, J.; Almeida Garrett, J.L.L. de; Harris, C.; Kechavarzi, C.; Oung, O.; Silva, M.A.G. da; Santos, C.; Sentenac, P.; Soga, K.; Spiessl, S.; Taylor, R.N.; Treadaway, A.C.J.; Weststrate, F.

    2001-01-01

    In order to be able to track the movement of pollutant plumes during geotechnical centrifuge and other geo-environmental experiments, a number of techniques have been investigated: fibre-optic photometric sensors, resistivity probes, resistivity tomography, and copper ion-selective electrodes.

  10. Social Learning Network Analysis Model to Identify Learning Patterns Using Ontology Clustering Techniques and Meaningful Learning

    Science.gov (United States)

    Firdausiah Mansur, Andi Besse; Yusof, Norazah

    2013-01-01

    Clustering on social learning networks has still not been explored widely, especially when the network focuses on an e-learning system. Conventional methods are not really suitable for e-learning data. SNA requires content analysis, which involves human intervention and needs to be carried out manually. Some of the previous clustering techniques need…

  11. Modeling of radial asymmetry in lens distortion facilitated by modern optimization techniques

    CSIR Research Space (South Africa)

    De Villiers, Johan P

    2010-01-18

    …-centering. This paper shows that the characterization of lens distortion can be improved by over 79% compared to prevailing methods. This is achieved by using modern numerical optimization techniques, such as the Leapfrog algorithm, and sensitivity-normalized parameter...

  12. Observation and modeling of biological colloids with neutron scattering techniques and Monte Carlo simulations

    NARCIS (Netherlands)

    Van Heijkamp, L.F.

    2011-01-01

    In this study non-invasive neutron scattering techniques are used on soft condensed matter, probing colloidal length scales. Neutrons penetrate deeply into matter and have a different interaction with hydrogen and deuterium, allowing for tunable contrast using light and heavy water as solvents. The

  13. The Optical Fractionator Technique to Estimate Cell Numbers in a Rat Model of Electroconvulsive Therapy

    DEFF Research Database (Denmark)

    Olesen, Mikkel Vestergaard; Needham, Esther Kjær; Pakkenberg, Bente

    2017-01-01

    are too high to count manually, and stereology is now the technique of choice whenever estimates of three-dimensional quantities need to be extracted from measurements on two-dimensional sections. All stereological methods are in principle unbiased; however, they rely on proper knowledge about...

  14. Models and techniques for hotel revenue management using a rolling horizon.

    NARCIS (Netherlands)

    P. Goldman; R. Freling (Richard); K. Pak; N. Piersma (Nanda)

    2001-01-01

    This paper studies decision rules for accepting reservations for stays in a hotel, based on deterministic and stochastic mathematical programming techniques. Booking control strategies are constructed that include ideas for nesting, booking limits and bid prices. We allow for multiple

  15. Models and Techniques for Hotel Revenue Management Using a Roling Horizon

    NARCIS (Netherlands)

    P. Goldman; R. Freling (Richard); K. Pak; N. Piersma (Nanda)

    2001-01-01

    This paper studies decision rules for accepting reservations for stays in a hotel, based on deterministic and stochastic mathematical programming techniques. Booking control strategies are constructed that include ideas for nesting, booking limits and bid prices. We allow for

  16. IBM SPSS Modeler essentials: effective techniques for building powerful data mining and predictive analytics solutions

    CERN Document Server

    McCormick, Keith; Wei, Bowen

    2017-01-01

    IBM SPSS Modeler allows quick, efficient predictive analytics and insight building from your data, and is a popularly used data mining tool. This book will guide you through the data mining process, and presents relevant statistical methods which are used to build predictive models and conduct other analytic tasks using IBM SPSS Modeler. From ...

  17. IBM SPSS Modeler essentials: effective techniques for building powerful data mining and predictive analytics solutions

    CERN Document Server

    McCormick, Keith; Wei, Bowen

    2017-01-01

    IBM SPSS Modeler allows quick, efficient predictive analytics and insight building from your data, and is a popularly used data mining tool. This book will guide you through the data mining process, and presents relevant statistical methods which are used to build predictive models and conduct other analytic tasks using IBM SPSS Modeler. From ...

  18. Statistical Techniques to Explore the Quality of Constraints in Constraint-Based Modeling Environments

    Science.gov (United States)

    Gálvez, Jaime; Conejo, Ricardo; Guzmán, Eduardo

    2013-01-01

    One of the most popular student modeling approaches is Constraint-Based Modeling (CBM). It is an efficient approach that can be easily applied inside an Intelligent Tutoring System (ITS). Even with these characteristics, building new ITSs requires carefully designing the domain model to be taught because different sources of errors could affect…

  19. Comparison of groundwater residence time using isotope techniques and numerical groundwater flow model in Gneissic Terrain, Korea

    International Nuclear Information System (INIS)

    Bae, D.S.; Kim, C.S.; Koh, Y.K.; Kim, K.S.; Song, M.Y.

    1997-01-01

    The prediction of groundwater flow affecting the migration of radionuclides is an important component of the performance assessment of radioactive waste disposal. Groundwater flow in a fractured rock mass is controlled by fracture networks, transmissivity and hydraulic gradient. Furthermore, the scale-dependent and anisotropic properties of the hydraulic parameters result mainly from the irregular patterns of the fracture system, which are very complex to evaluate properly with the current techniques available. For the purpose of characterizing groundwater flow in a fractured rock mass, the discrete fracture network (DFN) concept is available, based on the assumptions that groundwater flows only along fractures and that flowpaths in the rock mass are formed by interconnected fractures. To increase the reliability of the assessment of groundwater flow phenomena, a numerical groundwater flow model and isotopic techniques were applied. Fracture mapping and borehole acoustic scanning were performed to identify conductive fractures in the gneissic terrane. Tracer techniques using deuterium, oxygen-18 and tritium were applied to evaluate the recharge area and the groundwater residence time.

  19. Model-based recognition of 3-D objects by geometric hashing technique

    International Nuclear Information System (INIS)

    Severcan, M.; Uzunalioglu, H.

    1992-09-01

    A model-based object recognition system is developed for the recognition of polyhedral objects. The system consists of feature extraction, modelling and matching stages. Linear features are used for object descriptions. Lines are obtained from edges using a rotation transform. For the modelling and recognition process, the geometric hashing method is utilized. Each object is modelled using 2-D views taken from viewpoints on the viewing sphere. A hidden-line elimination algorithm is used to find these views from the wire-frame model of the objects. The recognition experiments yielded satisfactory results. (author). 8 refs, 5 figs
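
    The sketch below shows the core of the geometric hashing idea in miniature, using 2-D point features and a similarity-invariant basis encoding; the record's system uses line features and 2-D views of polyhedra, so this is an illustrative simplification, not the authors' implementation.

      import itertools
      from collections import defaultdict

      import numpy as np

      def basis_coords(p, b0, b1):
          """Coordinates of p in the frame defined by the basis pair (b0, b1);
          invariant to translation, rotation and scale of the point set."""
          origin, axis = b0, b1 - b0
          rot = np.array([[axis[0], axis[1]], [-axis[1], axis[0]]]) / (axis @ axis)
          return tuple(np.round(rot @ (p - origin), 1))  # quantized hash key

      def build_table(model):
          """Offline stage: hash every point against every ordered basis pair."""
          table = defaultdict(list)
          for i, j in itertools.permutations(range(len(model)), 2):
              for p in model:
                  table[basis_coords(p, model[i], model[j])].append((i, j))
          return table

      def recognize(table, scene):
          """Online stage: vote for model bases that explain the scene."""
          votes = defaultdict(int)
          for i, j in itertools.permutations(range(len(scene)), 2):
              for p in scene:
                  for basis in table.get(basis_coords(p, scene[i], scene[j]), []):
                      votes[basis] += 1
          return max(votes.values()) if votes else 0

      model = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
      table = build_table(model)
      print(recognize(table, model + 2.0))  # high vote count: found under translation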

  1. Consistent and Clear Reporting of Results from Diverse Modeling Techniques: The A3 Method

    Directory of Open Access Journals (Sweden)

    Scott Fortmann-Roe

    2015-08-01

    Full Text Available The measurement and reporting of model error is of basic importance when constructing models. Here, a general method and an R package, A3, are presented to support the assessment and communication of the quality of a model fit along with metrics of variable importance. The presented method is accurate, robust, and adaptable to a wide range of predictive modeling algorithms. The method is described along with case studies and a usage guide. It is shown how the method can be used to obtain more accurate models for prediction and how this may simultaneously lead to altered inferences and conclusions about the impact of potential drivers within a system.

  2. Short-Term Forecasting Models for Photovoltaic Plants: Analytical versus Soft-Computing Techniques

    Directory of Open Access Journals (Sweden)

    Claudio Monteiro

    2013-01-01

    Full Text Available We present and compare two short-term statistical forecasting models for hourly average electric power production forecasts of photovoltaic (PV) plants: the analytical PV power forecasting model (APVF) and the multilayer perceptron PV forecasting model (MPVF). Both models use forecasts from numerical weather prediction (NWP) tools at the location of the PV plant as well as the past recorded values of PV hourly electric power production. The APVF model consists of an original model that adjusts clear-sky irradiation data by an irradiation attenuation index, combined with a PV power production attenuation index. The MPVF model consists of an artificial neural network based model (selected among a large set of ANNs optimized with genetic algorithms, GAs). The two models use forecasts from the same NWP tool as inputs. The APVF and MPVF models have been applied to a real-life case study of a grid-connected PV plant using the same data. Despite the fact that both models are quite different, they achieve very similar results, with forecast horizons covering all the daylight hours of the following day, which gives a good perspective of their applicability for PV electric production sale bids to electricity markets.
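
    To make the attenuation-index idea concrete, here is a toy sketch that scales a clear-sky production profile by a cloud-cover-based attenuation factor; the profile shape, the Kasten-Czeplak-style attenuation form and all coefficients are assumptions for illustration, not the APVF model itself.

      import numpy as np

      # Toy clear-sky-index forecast: scale a clear-sky production profile by
      # an attenuation index derived from an NWP cloud-cover forecast.
      # Profile shape and coefficients are invented.
      hours = np.arange(24)
      clear_sky_power = np.maximum(0.0, np.sin((hours - 6) / 12 * np.pi)) * 100.0  # kW

      cloud_cover = np.clip(np.random.default_rng(0).normal(0.4, 0.2, 24), 0, 1)
      attenuation = 1.0 - 0.75 * cloud_cover**3.4  # Kasten-Czeplak-style (assumed form)

      forecast = clear_sky_power * attenuation
      print(forecast.round(1))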

  3. A Comparison of Reduced Order Modeling Techniques Used in Dynamic Substructuring [PowerPoint

    Energy Technology Data Exchange (ETDEWEB)

    Roettgen, Dan [Wisc]; Seeger, Benjamin [Stuttgart]; Tai, Wei Che [Washington]; Baek, Seunghun [Michigan]; Dossogne, Tilan [Liege]; Allen, Matthew S [Wisc]; Kuether, Robert J.; Brake, Matthew Robert; Mayes, Randall L.

    2016-01-01

    Experimental dynamic substructuring is a means whereby a mathematical model for a substructure can be obtained experimentally and then coupled to a model for the rest of the assembly to predict the response. Recently, various methods have been proposed that use a transmission simulator to overcome sensitivity to measurement errors and to exercise the interface between the substructures, including the Craig-Bampton, Dual Craig-Bampton, and Craig-Mayes methods. This work compares the advantages and disadvantages of these reduced order modeling strategies for two dynamic substructuring problems. The methods are first used on an analytical beam model to validate the methodologies. Then they are used to obtain an experimental model for a structure consisting of a cylinder with several components inside, connected to the outside case by foam with uncertain properties. This represents an exceedingly difficult structure to model, and so experimental substructuring could be an attractive way to obtain a model of the system.

  4. A Comparison of Reduced Order Modeling Techniques Used in Dynamic Substructuring.

    Energy Technology Data Exchange (ETDEWEB)

    Roettgen, Dan; Seegar, Ben; Tai, Wei Che; Baek, Seunghun; Dossogne, Tilan; Allen, Matthew; Kuether, Robert J.; Brake, Matthew Robert; Mayes, Randall L.

    2015-10-01

    Experimental dynamic substructuring is a means whereby a mathematical model for a substructure can be obtained experimentally and then coupled to a model for the rest of the assembly to predict the response. Recently, various methods have been proposed that use a transmission simulator to overcome sensitivity to measurement errors and to exercise the interface between the substructures, including the Craig-Bampton, Dual Craig-Bampton, and Craig-Mayes methods. This work compares the advantages and disadvantages of these reduced order modeling strategies for two dynamic substructuring problems. The methods are first used on an analytical beam model to validate the methodologies. Then they are used to obtain an experimental model for a structure consisting of a cylinder with several components inside, connected to the outside case by foam with uncertain properties. This represents an exceedingly difficult structure to model, and so experimental substructuring could be an attractive way to obtain a model of the system.
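
    Since both records above center on the Craig-Bampton family of reductions, a minimal numerical sketch may help: it reduces a spring-mass chain by keeping static constraint modes plus a few fixed-interface modes. This is a textbook Craig-Bampton reduction, not the transmission-simulator variants compared in these records, and the chain, sizes and number of kept modes are arbitrary illustration choices.

      import numpy as np
      from scipy.linalg import eigh

      n, kept = 6, 2                       # total DOFs, kept fixed-interface modes
      K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # unit-spring stiffness
      M = np.eye(n)                        # unit masses
      b = [n - 1]                          # boundary (interface) DOF set
      i = [d for d in range(n) if d not in b]

      Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
      psi = -np.linalg.solve(Kii, Kib)     # static constraint modes
      w2, phi = eigh(Kii, M[np.ix_(i, i)]) # fixed-interface normal modes
      phi = phi[:, :kept]                  # truncate the modal basis

      T = np.zeros((n, kept + len(b)))     # CB transformation [interior; boundary]
      T[np.ix_(i, range(kept))] = phi
      T[np.ix_(i, range(kept, kept + len(b)))] = psi
      T[np.ix_(b, range(kept, kept + len(b)))] = np.eye(len(b))

      K_red, M_red = T.T @ K @ T, T.T @ M @ T
      print(K_red.shape)                   # (3, 3): 2 modal + 1 boundary DOF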

  5. A Study on the Development of Simplified Fuel Assembly SSE/LOCA Analysis Model using Optimization Technique

    International Nuclear Information System (INIS)

    Lee, Kyou Seok; Jeon, Sang Youn; Kim, Hyeong Koo

    2009-01-01

    Under the Safe Shutdown Earthquake (SSE) and Loss of Coolant Accident (LOCA) events, the fuel assembly deflection and impact force between fuel assemblies are obtained by dynamic transient analysis of the reactor core model. The impact behavior between fuel assemblies shows non-linear characteristics, because the fuel assembly shows non-linear dynamic characteristics and its geometry is complicated. Furthermore, since a reactor core consists of a large number of fuel assemblies, the dynamic behavior of the core under the postulated events is very difficult to analyze. Therefore, it is necessary to simplify the fuel assembly model for core analysis while retaining its non-linear dynamic characteristics. In this study, a simplified fuel assembly finite element model for the 17 Type RFA has been developed using an optimization technique. To obtain the simplified model, the optimization algorithm of ANSYS was used, and the model was verified by comparison with fuel assembly mechanical test results.

  6. A Study on the Development of Simplified Fuel Assembly SSE/LOCA Analysis Model using Optimization Technique

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyou Seok; Jeon, Sang Youn; Kim, Hyeong Koo [Korea Nuclear Fuel, Daejeon (Korea, Republic of)

    2009-05-15

    Under the Safe Shutdown Earthquake (SSE) and Loss of Coolant Accident (LOCA) events, the fuel assembly deflection and impact force between fuel assemblies are obtained by dynamic transient analysis of the reactor core model. The impact behavior between fuel assemblies shows non-linear characteristics, because the fuel assembly shows non-linear dynamic characteristics and its geometry is complicated. Furthermore, since a reactor core consists of a large number of fuel assemblies, the dynamic behavior of the core under the postulated events is very difficult to analyze. Therefore, it is necessary to simplify the fuel assembly model for core analysis while retaining its non-linear dynamic characteristics. In this study, a simplified fuel assembly finite element model for the 17 Type RFA has been developed using an optimization technique. To obtain the simplified model, the optimization algorithm of ANSYS was used, and the model was verified by comparison with fuel assembly mechanical test results.
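
    Both versions of this record tune a simplified model with ANSYS's optimizer against test data; as a rough stand-in using open tools, the sketch below performs an analogous model-updating step with SciPy, fitting two spring stiffnesses of a two-DOF surrogate so that its natural frequencies match assumed "measured" targets. The model form and the target frequencies are invented.

      import numpy as np
      from scipy.optimize import least_squares

      target_hz = np.array([4.0, 11.0])   # assumed measured natural frequencies

      def model_frequencies(params):
          """Natural frequencies (Hz) of a two-DOF spring-mass surrogate."""
          k1, k2 = params
          K = np.array([[k1 + k2, -k2], [-k2, k2]])
          M = np.eye(2)                    # unit masses
          w2 = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
          return np.sqrt(w2) / (2 * np.pi)

      def residual(params):
          return model_frequencies(params) - target_hz

      fit = least_squares(residual, x0=[1000.0, 1000.0], bounds=(1.0, 1e6))
      print(fit.x.round(1))                # tuned spring stiffnesses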

  7. Measurement and modeling of out-of-field doses from various advanced post-mastectomy radiotherapy techniques

    Science.gov (United States)

    Yoon, Jihyung; Heins, David; Zhao, Xiaodong; Sanders, Mary; Zhang, Rui

    2017-12-01

    More and more advanced radiotherapy techniques have been adopted for post-mastectomy radiotherapy (PMRT). Patient dose reconstruction is challenging for these advanced techniques because they enlarge the low-dose out-of-field region, while the accuracy of out-of-field dose calculations by current commercial treatment planning systems (TPSs) is poor. We aim to measure and model the out-of-field radiation doses from various advanced PMRT techniques. PMRT treatment plans for an anthropomorphic phantom were generated, including volumetric modulated arc therapy with standard and flattening-filter-free photon beams, mixed beam therapy, 4-field intensity modulated radiation therapy (IMRT), and tomotherapy. Using thermoluminescent dosimeters (TLDs), we measured doses in the phantom at locations where the TPS-calculated doses were lower than 5% of the prescription dose. The TLD measurements were corrected by two additional energy correction factors, namely the out-of-beam out-of-field (OBOF) correction factor K_OBOF and the in-beam out-of-field (IBOF) correction factor K_IBOF, which were determined by separate measurements using an ion chamber and TLDs. A simple analytical model was developed to predict out-of-field dose as a function of distance from the field edge for each PMRT technique. The root mean square discrepancies between measured and calculated out-of-field doses were within 0.66 cGy/Gy for all techniques. The IBOF doses were highly scattered and should be evaluated case by case. One can easily combine the measured out-of-field doses reported here with the in-field dose calculated by the local TPS to reconstruct organ doses for a specific PMRT patient if the same treatment apparatus and technique were used.
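
    The functional form of the paper's analytical model is not given in this record, so as an invented stand-in the sketch below fits a simple exponential falloff, dose(d) = a * exp(-b * d), to made-up TLD-style readings; only the fitting pattern, not the numbers or the model form, is meant to carry over.

      import numpy as np
      from scipy.optimize import curve_fit

      # Invented out-of-field readings versus distance from the field edge.
      distance_cm = np.array([2, 5, 10, 15, 20, 30])
      dose_cgy_per_gy = np.array([4.1, 2.2, 0.9, 0.45, 0.21, 0.06])

      def falloff(d, a, b):
          return a * np.exp(-b * d)

      (a, b), _ = curve_fit(falloff, distance_cm, dose_cgy_per_gy, p0=(5.0, 0.1))
      print(f"dose(d) ~ {a:.2f} * exp(-{b:.3f} d) cGy/Gy")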

  8. The Development and Application of Reactive Transport Modeling Techniques to Study Radionuclide Migration at Yucca Mountain, NV

    International Nuclear Information System (INIS)

    Hari Selvi Viswanathan

    1999-01-01

    Yucca Mountain, Nevada has been chosen as a possible site for the first high level radioactive waste repository in the United States. As part of the site investigation studies, we need to make scientifically rigorous estimations of radionuclide migration in the event of a repository breach. Performance assessment models used to make these estimations are computationally intensive. We have developed two reactive transport modeling techniques to simulate radionuclide transport at Yucca Mountain: (1) the selective coupling approach applied to the convection-dispersion-reaction (CDR) model and (2) a reactive stream tube approach (RST). These models were designed to capture the important processes that influence radionuclide migration while being computationally efficient. The conventional method of modeling reactive transport is to solve a coupled set of multi-dimensional partial differential equations for the relevant chemical components in the system. We have developed an iterative solution technique, denoted the selective coupling method, that represents a versatile alternative to traditional uncoupled iterative techniques and the fully coupled global implicit method. We show that selective coupling results in computational and memory savings relative to these approaches. We develop RST as an alternative to the CDR method for solving large two- or three-dimensional reactive transport simulations in cases where one is interested in predicting the flux across a specific control plane. In the RST method, the multidimensional problem is reduced to a series of one-dimensional transport simulations along streamlines. The key assumption with RST is that mixing at the control plane approximates the transverse dispersion between streamlines. We compare the CDR and RST approaches for several scenarios that are relevant to the Yucca Mountain Project. For example, we apply the CDR and RST approaches to model an ongoing field experiment called the Unsaturated Zone ...
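
    The record's selective coupling sits between fully coupled and uncoupled schemes; the sketch below shows the simplest end of that spectrum, one sequential operator-splitting step for 1-D reactive transport (advect-disperse, then react). It is a simpler relative of the methods described, with invented parameters, not the selective-coupling algorithm itself.

      import numpy as np

      nx, dx, dt = 100, 1.0, 0.5
      v, D, decay = 0.8, 0.5, 0.01        # velocity, dispersion, reaction rate
      c = np.zeros(nx)
      c[0] = 1.0                          # fixed-concentration inlet

      def transport_step(c):
          """Explicit upwind advection plus central-difference dispersion."""
          adv = -v * dt / dx * (c - np.roll(c, 1))
          disp = D * dt / dx**2 * (np.roll(c, -1) - 2 * c + np.roll(c, 1))
          out = c + adv + disp
          out[0], out[-1] = c[0], out[-2]  # crude inlet/outlet handling
          return out

      def reaction_step(c):
          return c * np.exp(-decay * dt)   # exact first-order decay update

      for _ in range(50):                  # split: transport, then reaction
          c = reaction_step(transport_step(c))
      print(c[:10].round(3))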

  9. A model for teaching and learning spinal thrust manipulation and its effect on participant confidence in technique performance.

    Science.gov (United States)

    Wise, Christopher H; Schenk, Ronald J; Lattanzi, Jill Black

    2016-07-01

    Despite emerging evidence to support the use of high-velocity thrust manipulation in the management of lumbar spinal conditions, utilization of thrust manipulation among clinicians remains relatively low. One reason for the underutilization of these procedures may be related to disparity in training in the performance of these techniques at the professional and post-professional levels. To assess the effect of using a new model of active learning on participant confidence in the performance of spinal thrust manipulation and the implications for its use in the professional and post-professional training of physical therapists, a cohort of 15 DPT students in their final semester of entry-level professional training participated in an active training session emphasizing a sequential partial task practice (SPTP) strategy, in which participants engaged in partial task practice over several repetitions with different partners. Participants' level of confidence in the performance of these techniques was determined through comparison of pre- and post-training session surveys and a post-session open-ended interview. The increase in scores across all items of the individual pre- and post-session surveys suggests that this model was effective in changing overall participant perception regarding the effectiveness and safety of these techniques and in increasing student confidence in their performance. Interviews revealed that participants greatly preferred the SPTP strategy, which enhanced their confidence in technique performance. Results indicate that this new model of psychomotor training may be effective at improving confidence in the performance of spinal thrust manipulation and, subsequently, may be useful for encouraging the future use of these techniques in the care of individuals with impairments of the spine. As such, this method of instruction may be useful for the training of physical therapists at both the professional and post-professional levels.

  10. Validation of a mathematical model for Bell 427 Helicopter using parameter estimation techniques and flight test data

    Science.gov (United States)

    Crisan, Emil Gabriel

    Certification requirements, optimization and minimum project costs, design of flight control laws and the implementation of flight simulators are among the principal applications of system identification in the aeronautical industry. This document examines the practical application of parameter estimation techniques to the problem of estimating helicopter stability and control derivatives from flight test data provided by Bell Helicopter Textron Canada. The purpose of this work is twofold: a time-domain application of the Output Error method using the Gauss-Newton algorithm and a frequency-domain identification method to obtain the aerodynamic and control derivatives of a helicopter. The adopted model for this study is a fully coupled, 6 degree of freedom (DoF) state space model. The technique used for rotorcraft identification in the time domain was the Maximum Likelihood Estimation method, embodied in a modified version of NASA's Maximum Likelihood Estimator program (MMLE3) obtained from the National Research Council (NRC). The frequency-domain system identification procedure is incorporated in a comprehensive package of user-oriented programs referred to as CIFER. The coupled, 6 DoF model does not include the high frequency main rotor modes (flapping, lead-lag, twisting), yet it is capable of modeling rotorcraft dynamics fairly accurately, as the model verification showed. The identification results demonstrate that MMLE3 is a powerful and effective tool for extracting reliable helicopter models from flight test data. The results obtained with the frequency-domain approach demonstrated that CIFER could achieve good results even on limited data.
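
    As a pocket-sized analogue of the output-error method described above, the sketch below estimates the two parameters of a scalar discrete-time system by Gauss-Newton iteration on the output error, with finite-difference sensitivities; the system, noise level and data are invented, and the real 6-DoF rotorcraft problem is far larger.

      import numpy as np

      rng = np.random.default_rng(1)
      u = rng.standard_normal(200)         # invented input sequence
      a_true, b_true = 0.9, 0.5

      def simulate(theta):
          """Simulate y for x[k+1] = a x[k] + b u[k], with output y = x."""
          a, b = theta
          x, ys = 0.0, []
          for uk in u:
              ys.append(x)
              x = a * x + b * uk
          return np.array(ys)

      y_meas = simulate((a_true, b_true)) + 0.01 * rng.standard_normal(200)

      theta = np.array([0.5, 0.1])          # initial guess
      for _ in range(10):                    # Gauss-Newton iterations
          r = y_meas - simulate(theta)       # output error
          J = np.column_stack([              # central-difference sensitivities
              (simulate(theta + e) - simulate(theta - e)) / (2 * 1e-6)
              for e in 1e-6 * np.eye(2)])
          theta = theta + np.linalg.lstsq(J, r, rcond=None)[0]
      print(theta.round(3))                  # close to (0.9, 0.5)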

  11. Characterization of climate indices in models and observations using Hurst Exponent and Rényi Entropy Techniques

    Science.gov (United States)

    Newman, D.; Bhatt, U. S.; Wackerbauer, R.; Sanchez, R.; Polyakov, I.

    2009-12-01

    Because models are intrinsically incomplete and evolving, multiple methods are needed to characterize how well models match observations and where their weaknesses lie. For the study of climate, global climate models (GCMs) are the primary tool. Therefore, in order to improve our confidence in climate modeling and our understanding of the models' weaknesses, we need to apply more and more measures of various types until one finds differences. Then we can decide if these differences have important impacts on one's results and what they mean in terms of the weaknesses and missing physics in the models. In this work, we investigate a suite of National Center for Atmospheric Research (NCAR) Community Climate System Model (CCSM3) simulations of varied complexity, from fixed sea surface temperature simulations to fully coupled T85 simulations. Climate indices (e.g. NAO), constructed from the GCM simulations and observed data, are analyzed using Hurst exponent (R/S) and Rényi entropy methods to explore long-term and short-term dynamics (i.e. the temporal evolution of the time series). These methods identify clear differences between the models and observations as well as between the models. One preliminary finding suggests that fixing midlatitude SSTs to observed values increases the differences between the model and observation dynamics at long time scales.
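
    For readers unfamiliar with the rescaled-range statistic used here, the sketch below gives a bare-bones R/S estimate of the Hurst exponent; the window sizes are arbitrary and the white-noise input (H near 0.5) is a synthetic stand-in for a climate index series.

      import numpy as np

      def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
          """Rescaled-range (R/S) estimate of the Hurst exponent."""
          rs = []
          for w in window_sizes:
              vals = []
              for start in range(0, len(x) - w + 1, w):
                  seg = x[start:start + w]
                  dev = np.cumsum(seg - seg.mean())       # cumulative deviations
                  r, s = dev.max() - dev.min(), seg.std() # range and std dev
                  if s > 0:
                      vals.append(r / s)
              rs.append(np.mean(vals))
          slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
          return slope                                    # log-log slope = H

      x = np.random.default_rng(2).standard_normal(4096)
      print(f"H ~ {hurst_rs(x):.2f}")   # about 0.5 for uncorrelated noise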

  12. Review of Modelling Techniques for In Vivo Muscle Force Estimation in the Lower Extremities during Strength Training

    Directory of Open Access Journals (Sweden)

    Florian Schellenberg

    2015-01-01

    Full Text Available Background. Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. Methods. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation. Results. Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. Conclusion. The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines.

  13. Review of Modelling Techniques for In Vivo Muscle Force Estimation in the Lower Extremities during Strength Training.

    Science.gov (United States)

    Schellenberg, Florian; Oberhofer, Katja; Taylor, William R; Lorenzetti, Silvio

    2015-01-01

    Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation. Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines.
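
    To make the quasi-static inverse-dynamics optimisation mentioned in both versions of this review concrete, here is a minimal static-optimization sketch: a known joint moment is distributed over three redundant muscles by minimising summed squared activations. The moment arms, maximal forces and moment value are invented illustration numbers.

      import numpy as np
      from scipy.optimize import minimize

      moment_arm = np.array([0.05, 0.03, 0.02])   # m, three extensors (assumed)
      f_max = np.array([3000.0, 1500.0, 1000.0])  # N, max isometric forces (assumed)
      joint_moment = 120.0                        # N*m from inverse dynamics

      def cost(f):
          """Summed squared activations, the usual static-optimization objective."""
          return np.sum((f / f_max) ** 2)

      res = minimize(
          cost,
          x0=f_max / 2,
          bounds=[(0, fm) for fm in f_max],
          constraints={"type": "eq",               # muscles must produce the moment
                       "fun": lambda f: moment_arm @ f - joint_moment},
      )
      print(res.x.round(1))   # estimated muscle forces, N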

  14. A comparison of solute-transport solution techniques and their effect on sensitivity analysis and inverse modeling results

    Science.gov (United States)

    Mehl, S.; Hill, M.C.

    2001-01-01

    Five common numerical techniques for solving the advection-dispersion equation (finite difference, predictor corrector, total variation diminishing, method of characteristics, and modified method of characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using discrete, randomly distributed, homogeneous blocks of five sand types. This experimental model provides an opportunity to compare the solution techniques: the heterogeneous hydraulic-conductivity distribution of known structure can be accurately represented by a numerical model, and detailed measurements can be compared with simulated concentrations and total flow through the tank. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation given the different methods of simulating solute transport. The breakthrough curves show that simulated peak concentrations, even at very fine grid spacings, varied between the techniques because of different amounts of numerical dispersion. Sensitivity-analysis results revealed: (1) a high correlation between hydraulic conductivity and porosity given the concentration and flow observations used, so that both could not be estimated; and (2) that the breakthrough curve data did not provide enough information to estimate individual values of dispersivity for the five sands. This study demonstrates that the choice of assigned dispersivity and the amount of numerical dispersion present in the solution technique influence estimated hydraulic conductivity values to a surprising degree.
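
    The peak-concentration differences the authors report stem largely from numerical dispersion, and a tiny demonstration makes this concrete: first-order upwind differencing smears a sharp pulse even with zero physical dispersion. Grid size and Courant number here are arbitrary choices, not values from the study.

      import numpy as np

      nx, courant, steps = 200, 0.5, 150
      c = np.zeros(nx)
      c[20:40] = 1.0                             # sharp initial pulse

      for _ in range(steps):
          c = c - courant * (c - np.roll(c, 1))  # upwind scheme, pure advection

      print(f"peak after transport: {c.max():.2f} (exact solution keeps 1.00)")
      # The equivalent numerical dispersivity of upwind differencing is roughly
      # dx * (1 - courant) / 2, which is why simulated peak concentrations
      # differ between solution techniques at a given grid spacing.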

  15. On the use of the spectroscopic techniques to model the interactions between radionuclides and solid minerals

    Energy Technology Data Exchange (ETDEWEB)

    Simoni, E. [IPN, Paris XI University, 91406 Orsay (France)]. e-mail: simoni@ipno.in2p3.fr

    2004-07-01

    In order to determine radionuclide sorption constants on natural solid minerals, both thermodynamic and structural investigations, using spectroscopic techniques, are presented. The natural clays that could be used as an engineered barrier in a nuclear waste geological repository are rather complex minerals. Therefore, in order to understand how these natural materials retain radionuclides, it is necessary first to perform these studies on simple substrates such as phosphates, oxides and silicates (as powder and as single crystals) and then extrapolate the results to the natural minerals. As examples, the main results on the sorption processes of hexavalent uranium onto zircon (ZrSiO₄) and lanthanum phosphate (LaPO₄) are presented. The corresponding sorption curves are simulated using the results obtained with the following spectroscopic techniques: laser-induced spectrofluorimetry, X-ray photoelectron spectroscopy (XPS), and X-ray absorption spectroscopy (EXAFS). Finally, the thermodynamic sorption constants are calculated. (Author)
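
    The spectroscopy-informed surface modelling in this record is more involved than any single isotherm fit, but as a minimal numerical illustration of extracting a sorption constant from a sorption curve, here is an invented two-parameter Langmuir fit; the data points and units are made up.

      import numpy as np
      from scipy.optimize import curve_fit

      c_eq = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])   # equilibrium conc. (mmol/L)
      sorbed = np.array([0.8, 2.9, 4.4, 6.0, 7.6, 8.4])  # sorbed amount (mmol/kg)

      def langmuir(c, q_max, K):
          """Langmuir isotherm: q = q_max * K * c / (1 + K * c)."""
          return q_max * K * c / (1 + K * c)

      (q_max, K), _ = curve_fit(langmuir, c_eq, sorbed, p0=(10.0, 1.0))
      print(f"q_max ~ {q_max:.1f} mmol/kg, K ~ {K:.2f} L/mmol")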

  16. Approximating Optimal Release in a Deterministic Model for the Sterile Insect Technique

    Directory of Open Access Journals (Sweden)

    Sergio Ramirez

    2016-01-01

    Full Text Available Cost/benefit analyses are essential to support management planning and decisions before launching any pest control program. In particular, applications of the sterile insect technique (SIT are often prevented by the projected economic burden associated with rearing processes. This has had a deep impact on the technique development and its use on insects with long larval periods, as often seen in beetles. Under the assumptions of long adult timespan and multiple mating, we show how to find approximate optimal sterile release policies that minimize costs. The theoretical framework proposed considers the release of insects by pulses and finds approximate optimal release sizes through stochastic searching. The scheme is then used to compare simulated release strategies obtained for different pulse schedules and release bounds, providing a platform for evaluating the convenience of increasing sterile male release intensity or extending the period of control.
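
    Echoing the stochastic-searching idea in this record, the sketch below random-searches over pulse release sizes against a deliberately crude wild/sterile population toy, scoring each schedule by an invented rearing-plus-damage cost; none of the rates, costs or dynamics come from the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      pulses, horizon = 10, 100            # release events, days

      def final_pest_level(release_sizes):
          """Crude toy model: sterile males dilute wild mating success."""
          wild, sterile = 500.0, 0.0
          schedule = dict(zip(range(0, horizon, horizon // pulses), release_sizes))
          for t in range(horizon):
              sterile = sterile * 0.9 + schedule.get(t, 0.0)   # sterile mortality
              mating_success = wild / (wild + sterile + 1e-9)  # dilution effect
              wild = wild + 0.1 * wild * mating_success - 0.08 * wild
          return wild

      best, best_cost = None, np.inf
      for _ in range(2000):                # stochastic search over release sizes
          sizes = rng.uniform(0, 2000, pulses)
          cost = sizes.sum() + 50.0 * final_pest_level(sizes)  # rearing + damage
          if cost < best_cost:
              best, best_cost = sizes, cost
      print(best.round(0), round(best_cost, 1))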

  17. Applied methods and techniques for mechatronic systems modelling, identification and control

    CERN Document Server

    Zhu, Quanmin; Cheng, Lei; Wang, Yongji; Zhao, Dongya

    2014-01-01

    Applied Methods and Techniques for Mechatronic Systems brings together relevant studies in mechatronic systems with the latest research from interdisciplinary theoretical studies, computational algorithm development and exemplary applications. Readers can easily tailor the techniques in this book to accommodate their ad hoc applications. The clear structure of each paper (background, motivation, quantitative development with equations, and case studies or illustrations with curves, tables, etc.) is also helpful. It is mainly aimed at graduate students, professors and academic researchers in related fields, but it will also be helpful to engineers and scientists from industry. Lei Liu is a lecturer at Huazhong University of Science and Technology (HUST), China; Quanmin Zhu is a professor at University of the West of England, UK; Lei Cheng is an associate professor at Wuhan University of Science and Technology, China; Yongji Wang is a professor at HUST; Dongya Zhao is an associate professor at China University o...

  18. Advanced numerical techniques for modeling tensile crack propagation in gravity dams

    OpenAIRE

    Dias, I.F.; Oliver Olivella, Xavier; Lemos, J.V.; Lloberas Valls, Oriol

    2015-01-01

    Cracks propagating deep inside gravity dams can seriously affect their structural safety. Due to the potentially catastrophic scenarios associated with the collapse of large concrete dams, it is a fundamental issue to realistically predict the eventual crack profiles and the ultimate structural resistance associated with the failure mechanisms. This work investigates tensile crack propagation in concrete gravity dams by using some recently developed numerical techniques (crack-path field and...

  19. Crude Oil Model Emulsion Characterised by means of Near Infrared Spectroscopy and Multivariate Techniques

    DEFF Research Database (Denmark)

    Kallevik, H.; Hansen, Susanne Brunsgaard; Sæther, Ø.

    2000-01-01

    Water-in-oil emulsions are investigated by means of multivariate analysis of near infrared (NIR) spectroscopic profiles in the range 1100 - 2250 nm. The oil phase is a paraffin-diluted crude oil from the Norwegian Continental Shelf. The influence of water absorption and light scattering of the water droplets is shown to be strong. Despite the strong influence of the water phase, the NIR technique is still capable of predicting the composition of the investigated oil phase.
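
    As a generic illustration of the multivariate NIR calibration approach used here, the sketch below fits a partial least squares regression from synthetic Gaussian-band "spectra" to a water fraction using scikit-learn; the spectra, band position and noise level are invented, not data from this work.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(4)
      wavelengths = np.linspace(1100, 2250, 300)
      water_frac = rng.uniform(0.05, 0.40, 40)        # 40 training "emulsions"

      def spectrum(w):
          """Synthetic spectrum: a water band near 1940 nm plus noise."""
          band = np.exp(-((wavelengths - 1940) / 60.0) ** 2)
          return w * band + 0.02 * rng.standard_normal(wavelengths.size)

      X = np.array([spectrum(w) for w in water_frac])  # many collinear channels
      pls = PLSRegression(n_components=3).fit(X, water_frac)
      print(f"R^2 on training data: {pls.score(X, water_frac):.3f}")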

  20. Modeling techniques for predicting long-term consequences of the effects of radiation on natural aquatic populations and ecosystems

    International Nuclear Information System (INIS)

    Van Winkle, W.

    1977-01-01

    Appropriate modeling techniques already exist for investigating some long-term consequences of the effects of radiation on natural aquatic populations and ecosystems, even if to date these techniques have not been used for this purpose. At the low levels of irradiation estimated to occur in natural aquatic systems, effects are difficult to detect even at the individual level, much less at the population or ecosystem level, where the subtle effects of radiation are likely to be completely overshadowed by the effects of other environmental factors and stresses and by the natural variability of the system. The claim that population and ecosystem models can be accurate and reliable predictive tools in assessing any stress has been oversold. Nonetheless, these tools can be useful for learning more about the effects of radioactive releases on aquatic populations and ecosystems.