WorldWideScience

Sample records for model methods forty

  1. Forty years of 90Sr in situ migration: importance of soil characterization in modeling transport phenomena.

    Science.gov (United States)

    Fernandez, J M; Piault, E; Macouillard, D; Juncos, C

    2006-01-01

    In 1960, experiments were carried out on the transfer of (90)Sr between soil, grapes and wine. The experiments were conducted in situ on a plot of land bounded by two control strips. The (90)Sr migration over the last 40 years was studied by performing radiological and physico-chemical characterizations of the soil on eight 70 cm deep cores. Modeling the vertical migration of (90)Sr required the definition of a triple-layer conceptual model integrating rainwater infiltration at constant flux as the only external driving factor. The importance of a detailed soil characterization for modeling is then discussed: a satisfactory simulation of the (90)Sr vertical transport was obtained, yielding a calculated migration rate of about 1.0 cm year(-1), in full agreement with the values measured in situ. The discussion addresses key parameters such as granulometry, organic matter content (used in determining the Van Genuchten parameters), Kd and the effective rainwater infiltration. Besides the experimental data, simplifying modeling assumptions, such as the water-soil redistribution calculation and discontinuities in the conceptual model, are examined.
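
    As a back-of-the-envelope companion to the abstract (a hypothetical sketch, not the study's model; every parameter value below is an invented placeholder), the snippet shows how a distribution coefficient Kd enters a retardation factor and slows the advective migration rate to the order of 1 cm per year:

```python
# Hypothetical illustration (not the study's model): linear, reversible
# sorption characterized by Kd retards advective transport. All parameter
# values below are invented placeholders.

def retardation_factor(bulk_density, kd, water_content):
    """R = 1 + rho_b * Kd / theta for linear equilibrium sorption."""
    return 1.0 + bulk_density * kd / water_content

def migration_rate(infiltration_flux, water_content, retardation):
    """Pore-water velocity divided by the retardation factor (cm/year)."""
    return infiltration_flux / water_content / retardation

rho_b = 1.4   # soil bulk density, g/cm^3 (placeholder)
theta = 0.30  # volumetric water content (placeholder)
kd = 10.0     # distribution coefficient, cm^3/g (placeholder)
q = 13.0      # effective rainwater infiltration, cm/year (placeholder)

R = retardation_factor(rho_b, kd, theta)
print(f"retardation factor R = {R:.1f}")                              # ~47.7
print(f"migration rate = {migration_rate(q, theta, R):.2f} cm/year")  # ~0.91
```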

  2. Forty years of (90)Sr in situ migration: importance of soil characterization in modeling transport phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez, J.M. [CEA-Cadarache, DTN/SMTM/LMTE, BP 1, 13108 Saint Paul Lez Durance (France)]. E-mail: jean-michel.fernandez@noumea.ird.nc; Piault, E. [CEA-Cadarache, DTN/SMTM/LMTE, BP 1, 13108 Saint Paul Lez Durance (France); Macouillard, D. [ENSIL, 16 rue d'Atlantis, Technopole BP 6804, 87068 Limoges (France); Juncos, C. [Universite de Savoie, BP 1104, 73011 Chambery (France)]

    2006-07-01

    In 1960, experiments were carried out on the transfer of (90)Sr between soil, grapes and wine. The experiments were conducted in situ on a plot of land bounded by two control strips. The (90)Sr migration over the last 40 years was studied by performing radiological and physico-chemical characterizations of the soil on eight 70 cm deep cores. Modeling the vertical migration of (90)Sr required the definition of a triple-layer conceptual model integrating rainwater infiltration at constant flux as the only external driving factor. The importance of a detailed soil characterization for modeling is then discussed: a satisfactory simulation of the (90)Sr vertical transport was obtained, yielding a calculated migration rate of about 1.0 cm year(-1), in full agreement with the values measured in situ. The discussion addresses key parameters such as granulometry, organic matter content (used in determining the Van Genuchten parameters), Kd and the effective rainwater infiltration. Besides the experimental data, simplifying modeling assumptions, such as the water-soil redistribution calculation and discontinuities in the conceptual model, are examined.

  3. Energy Return on Investment (EROI) for Forty Global Oilfields Using a Detailed Engineering-Based Model of Oil Production.

    Science.gov (United States)

    Brandt, Adam R; Sun, Yuchi; Bharadwaj, Sharad; Livingston, David; Tan, Eugene; Gordon, Deborah

    2015-01-01

    Studies of the energy return on investment (EROI) for oil production generally rely on aggregated statistics for large regions or countries. In order to better understand the drivers of the energy productivity of oil production, we use a novel approach that applies a detailed field-level engineering model of oil and gas production to estimate energy requirements of drilling, producing, processing, and transporting crude oil. We examine 40 global oilfields, utilizing detailed data for each field from hundreds of technical and scientific data sources. Resulting net energy return (NER) ratios for studied oil fields range from ≈2 to ≈100 MJ crude oil produced per MJ of total fuels consumed. External energy return (EER) ratios, which compare energy produced to energy consumed from external sources, exceed 1000:1 for fields that are largely self-sufficient. The lowest energy returns are found to come from thermally-enhanced oil recovery technologies. Results are generally insensitive to reasonable ranges of assumptions explored in sensitivity analysis. Fields with very large associated gas production are sensitive to assumptions about surface fluids processing due to the shifts in energy consumed under different gas treatment configurations. This model does not currently include energy invested in building oilfield capital equipment (e.g., drilling rigs), nor does it include other indirect energy uses such as labor or services.
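
    The two ratios are simple quotients; the following sketch, with invented field numbers rather than data from the paper, shows why a largely self-sufficient field can have an EER orders of magnitude above its NER:

```python
# Sketch of the two ratios defined in the abstract, with invented numbers
# (not data from the paper).

def net_energy_return(mj_produced, mj_total_consumed):
    """NER: MJ of crude produced per MJ of all fuels consumed."""
    return mj_produced / mj_total_consumed

def external_energy_return(mj_produced, mj_external_consumed):
    """EER: MJ produced per MJ of energy drawn from external sources."""
    return mj_produced / mj_external_consumed

produced = 1.0e6           # MJ crude produced (placeholder)
consumed_total = 2.5e4     # MJ of all fuels consumed (placeholder)
consumed_external = 8.0e2  # MJ bought from outside the field (placeholder)

# A largely self-sufficient field burns mostly self-produced gas, so its
# external consumption is tiny and the EER far exceeds the NER.
print(f"NER = {net_energy_return(produced, consumed_total):.0f}:1")          # 40:1
print(f"EER = {external_energy_return(produced, consumed_external):.0f}:1")  # 1250:1
```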

  4. Energy Return on Investment (EROI) for Forty Global Oilfields Using a Detailed Engineering-Based Model of Oil Production.

    Directory of Open Access Journals (Sweden)

    Adam R Brandt

    Full Text Available Studies of the energy return on investment (EROI) for oil production generally rely on aggregated statistics for large regions or countries. In order to better understand the drivers of the energy productivity of oil production, we use a novel approach that applies a detailed field-level engineering model of oil and gas production to estimate energy requirements of drilling, producing, processing, and transporting crude oil. We examine 40 global oilfields, utilizing detailed data for each field from hundreds of technical and scientific data sources. Resulting net energy return (NER) ratios for studied oil fields range from ≈2 to ≈100 MJ crude oil produced per MJ of total fuels consumed. External energy return (EER) ratios, which compare energy produced to energy consumed from external sources, exceed 1000:1 for fields that are largely self-sufficient. The lowest energy returns are found to come from thermally-enhanced oil recovery technologies. Results are generally insensitive to reasonable ranges of assumptions explored in sensitivity analysis. Fields with very large associated gas production are sensitive to assumptions about surface fluids processing due to the shifts in energy consumed under different gas treatment configurations. This model does not currently include energy invested in building oilfield capital equipment (e.g., drilling rigs), nor does it include other indirect energy uses such as labor or services.

  5. Forty Thousand Years of Advertisement

    Directory of Open Access Journals (Sweden)

    Konstantin Lidin

    2006-05-01

    Full Text Available The roots of advertisement are connected with reclamations, claims and arguments. No surprise that many people treat it with distrust, suspicion and irritation. Nobody loves advertisement (except its authors and those who order it), nobody watches it, everybody despises it and is annoyed by it. But newspapers, magazines, television and the city economy in general cannot do without it. One keeps on arguing whether to prohibit advertisement, to restrict its expansion, or to bring in stricter regulations on it… If something attracts attention, intrigues, promises to make dreams come true and arouses the desire to join in, it should be considered advertisement. This definition allows us to say with no doubt: yes, advertisement did exist in the most ancient, strongest cultures. Advertisement is as old as human civilization. There have always been objects to be advertised, and different methods appeared to reach those goals. Advertisement techniques and topics appear, get forgotten and appear again in other places and other times. Sometimes the author of an advertisement image has no idea about his forerunners and believes he is the discoverer. A skillful designer with a high level of professionalism deliberately uses images from past centuries; the professional is easily guided by historical prototypes. But there is another type of advertisement, whose prototypes cannot be found in museums. It commands no respect, because it is built on a scornful attitude towards the spectator. However, advertisement is mostly made by professional designers, and in this case ignorance is inadmissible. Even if we appeal many times to Irkutsk designers to raise the cultural level of their advertisements, orders will always be made by those who pay. Unless His Majesty the Ruble stands up for Culture, those appeals are of no use.

  6. Getting started with FortiGate

    CERN Document Server

    Fabbri, Rosato

    2013-01-01

    This book is a step-by-step tutorial that will teach you everything you need to know about the deployment and management of FortiGate, including high availability, complex routing, various kinds of VPN, user authentication, security rules, controls on applications, and mail and Internet access. This book is intended for network administrators, security managers, and IT pros. It is a great starting point if you have to administer or configure a FortiGate unit, especially if you have no previous experience. For people who have never managed a FortiGate unit, the book helpfully walks the reader through…

  7. Forty Defective Criticisms of Full Reserve Banking

    OpenAIRE

    Musgrave, Ralph S.

    2016-01-01

    Abstract. The basics of full reserve banking (FR) are set out below, followed by forty defective criticisms of FR. Each of those forty sections has: 1. A heading. 2. Where the heading does not adequately capture the nature of the criticism, there is a paragraph below the heading starting “I.e…”, which expands on the heading. 3. There are references to one or more economists who have put the relevant criticism. 4. The answer to each criticism which starts with a paragraph beginning with the wo...

  8. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, considering a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit...... state based on an idealized mechanical model to be adapted to the original limit state by the model correction factor. Reliable approximations are obtained by iterative use of gradient information on the original limit state function, analogously to previous response surface approaches. However, the strength...... of the model correction factor method is that, in a simpler form not using gradient information on the original limit state function, or using this information only once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...
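
    As a rough illustration of the idea (not the authors' algorithm; the capacity models, load and all numbers are invented), the sketch below scales a cheap idealized capacity model so that it matches an expensive 'black box' model at the current design point, then re-estimates the reliability and the design point from the corrected model:

```python
# Toy sketch of a model-correction-factor iteration on invented models.
import numpy as np

rng = np.random.default_rng(0)
LOAD = 4.0                                    # deterministic load (invented)

def capacity_elaborate(x):
    # Stands in for a computationally expensive "black box" capacity model.
    return 5.0 + 0.4 * x[0] + 0.3 * x[1] + 0.2 * np.sin(x[0] * x[1])

def capacity_idealized(x):
    # Cheap idealized mechanical model of the same capacity.
    return 5.0 + 0.5 * x[0] + 0.25 * x[1]

x_star = np.zeros(2)                          # initial design-point guess
pf = float("nan")
for _ in range(4):
    # Correction factor: make the idealized capacity match the elaborate
    # one at the current design point (one elaborate-model call per step).
    nu = capacity_elaborate(x_star) / capacity_idealized(x_star)
    # Failure probability of the corrected idealized limit state
    # nu * R_i(x) - LOAD <= 0, by crude Monte Carlo on standard normals.
    u = rng.standard_normal((400_000, 2))
    fails = nu * capacity_idealized(u.T) - LOAD <= 0.0
    pf = fails.mean()
    if not fails.any():
        break
    # Most central (smallest-norm) failure sample becomes the next design point.
    failures = u[fails]
    x_star = failures[np.argmin(np.linalg.norm(failures, axis=1))]

print(f"estimated failure probability ~ {pf:.1e}")
```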

  9. The first forty years, 1947-1987

    Energy Technology Data Exchange (ETDEWEB)

    Rowe, M.S. (ed.); Cohen, A.; Petersen, B.

    1987-01-01

    This report commemorates the fortieth anniversary of Brookhaven National Laboratory by presenting a historical overview of research at the facility. The chapters of the report are entitled: The First Forty Years, Brookhaven: A National Resource, Fulfilling a Mission - Brookhaven's Mighty Machines, Marketing the Milestones in Basic Research, Meeting National Needs, Making a Difference in Everyday Life, and Looking Forward.

  10. The complete mitochondrial genome of Microtus fortis calamorum (Arvicolinae, Rodentia) and its phylogenetic analysis.

    Science.gov (United States)

    Jiang, Xianhuan; Gao, Jun; Ni, Liju; Hu, Jianhua; Li, Kai; Sun, Fengping; Xie, Jianyun; Bo, Xiong; Gao, Chen; Xiao, Junhua; Zhou, Yuxun

    2012-05-01

    Microtus fortis is a rodent species unique to China and a promising experimental animal model for studying the mechanism of resistance to Schistosoma japonicum. The first complete mitochondrial genome sequence for Microtus fortis calamorum, a subspecies of M. fortis (Arvicolinae, Rodentia), was reported in this study. The mitochondrial genome sequence of M. f. calamorum (GenBank: JF261175) showed a typical vertebrate pattern, with 13 protein-coding genes, 2 ribosomal RNAs, 22 transfer RNAs and one major noncoding region (CR region). The extended termination-associated sequences (ETAS-1 and ETAS-2) and conserved sequence block 1 (CSB-1) were found in the CR region. The putative origin of replication for the light strand (O(L)) of M. f. calamorum was 35 bp long and highly conserved in the stem and adjacent sequences, although differences existed in the loop region among three species of the genus Microtus. To investigate the phylogenetic position of M. f. calamorum, phylogenetic trees (maximum likelihood and Bayesian methods) were constructed from the 12 protein-coding genes (excluding the ND6 gene) on the H strand of 16 rodent species. M. f. calamorum was placed within the genus Microtus, Arvicolinae, on the basis of its close phylogenetic relationship with Microtus kikuchii (Taiwan vole). Further phylogenetic analysis based on the cytochrome b gene placed M. f. calamorum as one of the subspecies of M. fortis, which formed a sister group of Microtus middendorfii within the genus Microtus.

  11. Synthesis of the elements in stars: forty years of progress

    Energy Technology Data Exchange (ETDEWEB)

    Wallerstein, G. [Department of Astronomy, University of Washington, Seattle, Washington 98195 (United States); Iben, I. Jr. [University of Illinois, 1002 West Green Street, Urbana, Illinois 61801 (United States); Parker, P. [Yale University, New Haven, Connecticut 06520-8124 (United States); Boesgaard, A.M. [Institute for Astronomy, 2680 Woodlawn Drive, Honolulu, Hawaii 96822 (United States); Hale, G.M. [Los Alamos National Laboratory, Los Alamos, New Mexico 87544 (United States); Champagne, A.E. [University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27594 (United States)]; [Triangle Universities Nuclear Laboratory, Duke University, Durham, North Carolina 27706 (United States); Barnes, C.A. [California Institute of Technology, Pasadena, California 91125 (United States); Kaeppeler, F. [Forschungszentrum Karlsruhe, D-76021 (Germany); Smith, V.V. [University of Texas at El Paso, El Paso, Texas 79968-0515 (United States); Hoffman, R.D. [Steward Observatory, University of Arizona, Tucson, Arizona 85721 (United States); Timmes, F.X. [University of California at Santa Cruz, California 95064 (United States); Sneden, C. [University of Texas, Austin, Texas 78712 (United States); Boyd, R.N. [Ohio State University, Columbus, Ohio 43210 (United States); Meyer, B.S. [Clemson University, Clemson, South Carolina 29630 (United States); Lambert, D.L. [University of Texas, Austin, Texas 78712 (United States)

    1997-10-01

    Forty years ago Burbidge, Burbidge, Fowler, and Hoyle combined what we would now call fragmentary evidence from nuclear physics, stellar evolution and the abundances of elements and isotopes in the solar system as well as a few stars into a synthesis of remarkable ingenuity. Their review provided a foundation for forty years of research in all of the aspects of low energy nuclear experiments and theory, stellar modeling over a wide range of mass and composition, and abundance studies of many hundreds of stars, many of which have shown distinct evidence of the processes suggested by B²FH. In this review we summarize progress in each of these fields with emphasis on the most recent developments. © 1997 The American Physical Society

  12. The First Forty Years of a Technosol

    Institute of Scientific and Technical Information of China (English)

    R. SCALENGHE; S. FERRARIS

    2009-01-01

    Soil formation is often a very slow process that requires thousands and even millions of years. Human influence, occasionally on a par with the action of climate or geological forces, can accelerate the process and can be viewed as a distinct soil-forming factor. This paper describes a soil, a Haplic Regosol, in which anthrosolization dominates the soil-forming process. Man-made soils, Technosols, were stabilized with techniques of ecological engineering (crib walls). We measured the main soil properties and focused on the movement of water (the reduction of soil weight is the key factor in stabilizing these calcschists). The newly deposited debris, sheltered by anthropic interventions, differentiated an A/C profile after four years and an O/A/AB/Bw/BC/C profile after forty years. Our results indicate that colonization by plants, and the consequent success of vegetation establishment, is influenced mainly by control of the pedogenesis factor 'topography' and by the ability of these Technosols to retain nutrients.

  13. SKIN AND HAIR CHANGES AFTER FORTY

    Directory of Open Access Journals (Sweden)

    Manisha

    2014-04-01

    Full Text Available Aging is a continuous, dynamic, and irreversible process. Because of direct exposure to ultraviolet radiation, skin is particularly prone to early aging, known as photoaging. Skin aging is particularly important because of its visibility and social impact. As women age, they notice changes to their skin and hair during the menopause. Dry, thinning, fragile, less tolerant and sagging skin are common complaints. The main reason for the change in skin is the loss of estrogen, testosterone and dehydroepiandrosterone (DHEA, etc.)1,2,3 from the age of 35 onwards up to menopause; the more we have had long-term exposure to the elements, such as sun and wind, the more this becomes evident. Estrogen is closely involved in the normal function of the skin. It directly affects the function of key cells in the skin, like the fibroblast (which produces collagen and elastin), the keratinocyte (closely involved in skin protection) and melanocytes (involved in evenness of skin color, etc.). It also helps regulate hair follicle function (hair production) as well as sebaceous gland activity (production of skin oils). After the age of forty most women enter menopause, during which estrogen levels decrease, leading to the different types of hair and skin changes described in this article.

  14. Forty years of olfactory navigation in birds.

    Science.gov (United States)

    Gagliardo, Anna

    2013-06-15

    Forty years ago, Papi and colleagues discovered that anosmic pigeons cannot find their way home when released at unfamiliar locations. They explained this phenomenon by developing the olfactory navigation hypothesis: pigeons at the home loft learn the odours carried by the winds in association with wind direction; once at the release site, they determine the direction of displacement on the basis of the odours perceived locally and orient homeward. In addition to the old classical experiments, new GPS tracking data and observations on the activation of the olfactory system in displaced pigeons have provided further evidence for the specific role of olfactory cues in pigeon navigation. Although it is not known which odours the birds might rely on for navigation, it has been shown that volatile organic compounds in the atmosphere are distributed as fairly stable gradients to allow environmental odour-based navigation. The investigation of the potential role of olfactory cues for navigation in wild birds is still at an early stage; however, the evidence collected so far suggests that olfactory navigation might be a widespread mechanism in avian species.

  15. Explorative methods in linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2004-01-01

    The author has developed the H-method of mathematical modeling that builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different...... features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.

  16. Explorative methods in linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2004-01-01

    The author has developed the H-method of mathematical modeling that builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different...... features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.
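
    The abstract does not spell out the H-method's algorithm; as a loose analogue of building a linear model by parts, with each part optimized for prediction, here is a PLS-style component-by-component sketch on invented data:

```python
# Component-wise model building in the spirit described above (a generic
# PLS-like loop, not the H-method itself). Toy data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 6))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(50)

Xr, yr = X - X.mean(0), y - y.mean()      # centred working copies
components = []
for _ in range(3):                        # build the model part by part
    w = Xr.T @ yr                         # weight vector (covariance-driven)
    w /= np.linalg.norm(w)
    t = Xr @ w                            # score: the new model "part"
    p = Xr.T @ t / (t @ t)                # X loading
    q = yr @ t / (t @ t)                  # y loading
    Xr = Xr - np.outer(t, p)              # deflate: remove the explained part
    yr = yr - q * t
    components.append((w, p, q))
    print(f"residual y-variance after this part: {yr.var():.4f}")
```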

  17. History, Archaeology and the Bible Forty Years after "Historicity"

    DEFF Research Database (Denmark)

    2016-01-01

    In History, Archaeology and the Bible Forty Years after “Historicity”, Hjelm and Thompson argue that a ‘crisis’ broke in the 1970s, when several new studies of biblical history and archaeology were published, questioning the historical-critical method of biblical scholarship. The crisis formed...... articles from some of the field’s best scholars with comprehensive discussion of historical, archaeological, anthropological, cultural and literary approaches to the Hebrew Bible and Palestine’s history. The essays question: “How does biblical history relate to the archaeological history of Israel...

  18. Channels of synthesis forty years on: integrated analysis of spatial economic systems

    Science.gov (United States)

    Hewings, Geoffrey J. D.; Nazara, Suahasil; Dridi, Chokri

    Isard's vision of integrated modeling that was laid out in the 1960s book Methods of Regional Science provided a road map for the development of more sophisticated analysis of spatial economic systems. Some forty years later, we look back at this vision and trace developments in a sample of three areas - demographic-econometric integrated modeling, spatial interaction modeling, and environmental-economic modeling. Attention will be focused on methodological advances and their motivation by new developments in theory as well as innovations in the applications of these models to address new policy challenges. Underlying the discussion will be an evaluation of the way in which spatial issues have been addressed, ranging from concerns with regionalization to issues of spillovers and spatial correlation.

  19. Method for gesture based modeling

    DEFF Research Database (Denmark)

    2006-01-01

    A computer program based method is described for creating models using gestures. On an input device, such as an electronic whiteboard, a user draws a gesture which is recognized by a computer program and interpreted relative to a predetermined meta-model. Based on the interpretation, an algorithm...... is assigned to the gesture drawn by the user. The executed algorithm may, for example, consist in creating a new model element, modifying an existing model element, or deleting an existing model element....
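
    A minimal sketch of the interpretation step described above, with invented gesture names and a toy meta-model (the actual recognizer and meta-model are not specified in the record):

```python
# Toy interpretation step: a recognized gesture is mapped, via a meta-model,
# to an algorithm that edits the model. All names here are invented.
from dataclasses import dataclass, field

@dataclass
class Model:
    elements: dict = field(default_factory=dict)

def create_element(model, name):
    # Algorithm for a "create" gesture: add a new model element.
    model.elements[name] = {"kind": "node"}

def delete_element(model, name):
    # Algorithm for a "delete" gesture: remove an existing element.
    model.elements.pop(name, None)

# The "meta-model": which algorithm each recognized gesture triggers.
GESTURE_ACTIONS = {
    "draw_box": create_element,
    "scratch_out": delete_element,
}

def interpret(model, gesture, target):
    GESTURE_ACTIONS[gesture](model, target)

m = Model()
interpret(m, "draw_box", "Customer")     # user draws a box on the whiteboard
interpret(m, "scratch_out", "Customer")  # user scratches it out again
print(m.elements)                        # -> {}
```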

  20. Methods of statistical model estimation

    CERN Document Server

    Hilbe, Joseph

    2013-01-01

    Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method...
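
    The book's worked examples are in R; as a language-neutral illustration of one of the listed algorithms, here is a generic iteratively reweighted least squares (IRLS) loop for logistic regression, a sketch rather than code from the book:

```python
# Generic IRLS for logistic regression (illustrative sketch, invented data).
import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.standard_normal((200, 2))])
true_beta = np.array([-0.5, 1.0, 2.0])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

beta = np.zeros(3)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))   # current fitted probabilities
    W = mu * (1.0 - mu)                    # IRLS weights
    z = X @ beta + (y - mu) / W            # working response
    beta_new = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        beta = beta_new
        break
    beta = beta_new
print("IRLS estimates:", np.round(beta, 2))  # close to (-0.5, 1.0, 2.0)
```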

  1. Forty years of a psychiatric day hospital

    Directory of Open Access Journals (Sweden)

    Rosário Curral

    2014-03-01

    Full Text Available INTRODUCTION: Day hospitals in psychiatry are a major alternative to inpatient care today, acting as key components of community and social psychiatry. OBJECTIVE: To study trends in the use of psychiatric day hospitals over the last decades of the 20th century and the first decade of the 21st century, focusing on patient age, sex, and diagnostic group, using data from Centro Hospitalar São João, Porto, Portugal. METHODS: Data corresponding to years 1970 to 2009 were collected from patient files. Patients were classified into seven diagnostic groups considering their primary diagnoses only. RESULTS: Mean age upon admission rose from 32.7±12.1 years in the second half of the 1970s to 43.5±12.2 years in 2005-2009 (p for trend < 0.001). Most patients were female (63.2%); however, their proportion decreased from nearly 70% in the 1970s to 60% in the first decade of the 21st century. In males, until the late 1980s, neurotic disorders (E) were the most common diagnosis, accounting for more than one third of admissions. In the subsequent years, this proportion decreased, and the number of admissions for schizophrenia (C) exceeded 50% in 2004-2009. In females, until the late 1980s, affective disorders (D) and neurotic disorders (E), similarly distributed, accounted for most admissions. From the 1990s on, the proportion of neurotic disorders (E) substantially decreased, and affective disorders (D) came to represent more than 50% of all admissions. CONCLUSIONS: Mean age upon admission rose with time, as did the percentage of female admissions, even though the latter tendency weakened in the last 10 years assessed. There was also an increase in the proportion of patients with schizophrenia.

  2. Variational methods in molecular modeling

    CERN Document Server

    2017-01-01

    This book presents tutorial overviews for many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems, ranging from inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes and microemulsions to magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. With emphasis placed on physical understanding...

  3. PROCEEDINGS OF THE FORTY-THIRD ANNUAL MEETING

    National Research Council Canada - National Science Library

    1949-01-01

    The Entomological Society of America held its forty-third annual meeting Monday through Thursday, December 13-16, 1948, in conjunction with the annual meeting of the American Association of Economic Entomologists...

  4. The biology of memory: a forty-year perspective.

    Science.gov (United States)

    Kandel, Eric R

    2009-10-14

    In the forty years since the Society for Neuroscience was founded, our understanding of the biology of memory has progressed dramatically. From a historical perspective, one can discern four distinct periods of growth in neurobiological research during that time. Here I use that chronology to chart a personalized and selective course through forty years of extraordinary advances in our understanding of the biology of memory storage.

  5. [A woman in her forties with cancer, syncope and spasms].

    Science.gov (United States)

    Warsame, Mahad Omar; Gamboa, Danil; Nielsen, Erik Waage

    2014-10-14

    A female in her forties with advanced incurable rectal cancer presented to our emergency department after loss of consciousness followed by brief myoclonic jerks in her legs. A cerebral MRI was normal. Her electrocardiogram showed a prolonged QTc interval of 596 milliseconds, and hypokalemia was present. She had no family history of congenital long QT syndrome or of cardiovascular disease. She was not on any medication, apart from having ingested 100 g caesium carbonate over the previous 11 days as an alternative cancer treatment. Caesium chloride is postulated to increase pH and thereby induce apoptosis in cancer cells. In treatment doses, caesium competes with potassium for membrane transport proteins in the cardiac cell membrane and in the reabsorption tubules of the kidneys. The result is hypokalemia and early afterdepolarisations during the cardiomyocytes' repolarisation phase, or delayed afterdepolarisations. Torsade de pointes ventricular arrhythmias, ventricular tachycardia, pump failure and death can follow. A few case reports of adverse effects from caesium ingestion have been published, as well as reports on how caesium is used in animal models to induce ventricular tachycardia, but the hazards of caesium ingestion and its long half-life are not well known in the medical care profession or among patients. As this patient's QTc interval normalised slowly, reaching 413 milliseconds 60 days after stopping caesium ingestion, we consider caesium intoxication and convulsive syncope from a self-terminating ventricular tachycardia the most probable aetiology. The main message from this case is that alternative medicine can have life-threatening side effects.
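
    For readers unfamiliar with the QTc values quoted above: QTc is the heart-rate-corrected QT interval. The report does not state which correction formula was used; Bazett's formula, the most common one, is sketched here with invented example numbers:

```python
# Bazett's heart-rate correction of the QT interval (illustrative only; the
# case report does not say which formula was applied).
def qtc_bazett(qt_ms, rr_s):
    """QTc = QT / sqrt(RR), with QT in ms and the RR interval in seconds."""
    return qt_ms / rr_s ** 0.5

# e.g. a measured QT of 540 ms at 75 beats/min (RR = 0.8 s):
print(f"QTc = {qtc_bazett(540, 0.8):.0f} ms")  # ~604 ms, dangerously prolonged
```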

  6. Research on BOM based composable modeling method

    NARCIS (Netherlands)

    Zhang, M.; He, Q.; Gong, J.

    2013-01-01

    Composable modeling methods have been a research hotspot in the area of Modeling and Simulation for a long time. In order to increase the reuse and interoperability of BOM based models, this paper puts forward a composable modeling method based on BOM and studies the basic theory of composable modeling...

  7. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  8. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  9. An Effective ARMA Modeling Method.

    Science.gov (United States)

    1981-04-01

    [Abstract garbled in the source record; only fragments are recoverable. They indicate a report on power spectral density estimation, noting that the preponderance of effort has been directed towards two special cases of the general ARMA model, and relating the frequency-domain spectral density function of a pole model to its equivalent time-domain autocorrelation sequence through a prediction relationship, equation (11) of the report.]
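
    For context on the central quantity (a generic textbook relation, not the report's own algorithm), the power spectral density of an ARMA process follows directly from its polynomial coefficients:

```python
# S(f) = sigma^2 * |B(e^{-i2*pi*f})|^2 / |A(e^{-i2*pi*f})|^2 for an ARMA
# process A(z^-1) x_t = B(z^-1) e_t. Generic illustration, invented model.
import numpy as np

def arma_psd(ar, ma, sigma2, freqs):
    """ar, ma: coefficient lists with leading 1; freqs in cycles/sample."""
    w = np.exp(-2j * np.pi * np.asarray(freqs))      # e^{-i 2 pi f}
    num = np.polyval(list(reversed(ma)), w)          # B evaluated at e^{-i2pif}
    den = np.polyval(list(reversed(ar)), w)          # A evaluated at e^{-i2pif}
    return sigma2 * np.abs(num) ** 2 / np.abs(den) ** 2

freqs = np.linspace(0.0, 0.5, 6)
# ARMA(2,1): x_t - 0.5 x_{t-1} + 0.25 x_{t-2} = e_t + 0.4 e_{t-1}
print(arma_psd([1.0, -0.5, 0.25], [1.0, 0.4], 1.0, freqs).round(3))
```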

  10. Multiscale modeling methods in biomechanics.

    Science.gov (United States)

    Bhattacharya, Pinaki; Viceconti, Marco

    2017-01-19

    More and more frequently, computational biomechanics deals with problems where the portion of physical reality to be modeled spans over such a large range of spatial and temporal dimensions, that it is impossible to represent it as a single space-time continuum. We are forced to consider multiple space-time continua, each representing the phenomenon of interest at a characteristic space-time scale. Multiscale models describe a complex process across multiple scales, and account for how quantities transform as we move from one scale to another. This review offers a set of definitions for this emerging field, and provides a brief summary of the most recent developments on multiscale modeling in biomechanics. Of all possible perspectives, we chose that of the modeling intent, which vastly affects the nature and the structure of each research activity. To this purpose we organized all papers reviewed in three categories: 'causal confirmation,' where multiscale models are used as materializations of the causation theories; 'predictive accuracy,' where multiscale modeling is aimed at improving the predictive accuracy; and 'determination of effect,' where multiscale modeling is used to model how a change at one scale manifests in an effect at another radically different space-time scale. Consistent with how the volume of computational biomechanics research is distributed across application targets, we extensively reviewed papers targeting the musculoskeletal and the cardiovascular systems, and covered only a few exemplary papers targeting other organ systems. The review shows a research subdomain still in its infancy, where causal confirmation papers remain the most common. For further resources related to this article, please visit the WIREs website.

  11. Model correction factor method for system analysis

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Johannesen, Johannes M.

    2000-01-01

    The Model Correction Factor Method is an intelligent response surface method based on simplified modeling. MCFM is aimed at reliability analysis in the case of a limit state defined by an elaborate model. Herein it is demonstrated that the method is applicable for elaborate limit state surfaces on which...... several locally most central points exist without there being a simple geometric definition of the corresponding failure modes, such as is the case for collapse mechanisms in rigid-plastic hinge models for frame structures. Taking as simplified idealized model a model of similarity with the elaborate model...... surface than existing in the idealized model....

  12. Laguerre polynomials method in the valon model

    CERN Document Server

    Boroun, G R

    2014-01-01

    We used the Laguerre polynomials method for the determination of the proton structure function in the valon model. We have examined the applicability of the valon model with respect to a very elegant method, where the structure of the proton is determined by expanding valon distributions and valon structure functions on Laguerre polynomials. We compared our results with the experimental data, the GJR parameterization and the DL model. This method gives a good description of the proton structure function in the valon model.
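
    As a generic illustration of the paper's basic tool, expanding a function on Laguerre polynomials (orthonormal on [0, ∞) with weight e^{-x}), here is a numpy sketch using an arbitrary test function rather than an actual valon distribution:

```python
# Expand an arbitrary smooth function on Laguerre polynomials and compare the
# truncated series with the target (generic sketch, not the paper's code).
import numpy as np
from numpy.polynomial.laguerre import lagval, laggauss

f = lambda x: x * np.exp(-0.5 * x)          # arbitrary test function

nodes, weights = laggauss(40)               # Gauss-Laguerre quadrature rule
ncoef = 8
coeffs = np.zeros(ncoef)
for n in range(ncoef):
    basis = np.zeros(n + 1)
    basis[n] = 1.0                          # coefficient vector selecting L_n
    # c_n = integral_0^inf f(x) L_n(x) e^{-x} dx (L_n orthonormal w.r.t. e^{-x})
    coeffs[n] = np.sum(weights * f(nodes) * lagval(nodes, basis))

x = np.array([0.5, 1.0, 2.0, 4.0])
print("truncated expansion:", lagval(x, coeffs).round(4))
print("target values:      ", f(x).round(4))
```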

  13. Construction Method of Supernetwork Evolution Model

    Institute of Scientific and Technical Information of China (English)

    LIU Qiang; FANG Jin-qing; LI Yong

    2013-01-01

    Real networks often have small-world and scale-free characteristics. Based on the BA and WS models, we proposed the following construction methods for the TLSEM (Fig. 1): all three layers BA model (TBA); all three layers SW model (TSW); first and third layers BA model with middle layer SW model (BA-SW); and first and third layers SW model with middle layer BA model (SW-BA).
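
    A rough sketch of the four layered variants using standard networkx generators for the BA and WS layers; layer sizes and parameters are arbitrary, and the inter-layer links of the supernetwork are omitted:

```python
# Build the intra-layer graphs of the four variants named above.
# Sizes, m, k and p are placeholders; inter-layer coupling is not modeled.
import networkx as nx

def layer(kind, n=100):
    if kind == "BA":
        return nx.barabasi_albert_graph(n, m=3)       # scale-free layer
    return nx.watts_strogatz_graph(n, k=6, p=0.1)     # small-world layer

VARIANTS = {
    "TBA":   ["BA", "BA", "BA"],
    "TSW":   ["SW", "SW", "SW"],
    "BA-SW": ["BA", "SW", "BA"],
    "SW-BA": ["SW", "BA", "SW"],
}

for name, kinds in VARIANTS.items():
    layers = [layer(k) for k in kinds]
    mean_deg = sum(2 * g.number_of_edges() / g.number_of_nodes()
                   for g in layers) / len(layers)
    print(f"{name}: mean intra-layer degree ~ {mean_deg:.1f}")
```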

  14. Methodically Modeling the Tor Network

    Science.gov (United States)

    2012-08-01

    [Abstract garbled in the source record; only fragments survive.] ...iPlane [7] and CAIDA [3]. Third, determining a better client model would further increase confidence in experimental results. Producing a more robust... Reference fragments: Bandwidth Speed Test, http://speedtest.net/; [3] CAIDA Data, http://www.caida.org/data; [4] DETER Testbed, http://www.isi.edu/deter; [5] Emulab.

  15. A matter of meaning: reflections on forty years of JCL.

    Science.gov (United States)

    Nelson, Katherine

    2014-07-01

    The entry into language via first words, and the acquisition of word meanings, is considered from the perspective of publications in the Journal of Child Language over the past forty years. Problems in achieving word meanings include the disparate and sparse concepts available to the child from past prelanguage experience. Variability in beginning word learning, and in its progress along a number of dimensions, suggests the problems that children may encounter, as well as the strategies and styles they adopt to make progress. Social context and adult practices are vitally involved in the success of this process. Whereas much headway has been made over the past decades, much remains to be revealed through dynamic systems theory and developmental semiotic analyses, as well as laboratory research aimed at social context conditions.

  16. Forty years of training program in the JAERI

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    This report compiles the past training programs for researchers, engineers and regulatory members at the NuTEC (Nuclear Technology and Education Center) of the Japan Atomic Energy Research Institute, together with the past basic seminars for the public and with advice and perspectives on future programs from relevant experts, in commemoration of forty years of the NuTEC. It covers the past five years of educational courses and seminars in utilization of radioisotopes and nuclear energy, provided at the Tokyo and Tokai Education Centers for domestic and international training, and covers the activity of Asia-Pacific nuclear technology transfer, including the activity of various committees and meetings. In particular, fifty-six experts and authorities have contributed to the report with advice and perspectives on the training program in the 21st century, based on their reminiscences. (author)

  17. Software Testing Method Based on Model Comparison

    Institute of Scientific and Technical Information of China (English)

    XIE Xiao-dong; LU Yan-sheng; MAO Cheng-yin

    2008-01-01

    A model comparison based software testing method (MCST) is proposed. In this method, the requirements and programs of the software under test are transformed into the same form and described by the same model description language (MDL). Then, the requirements are transformed into a specification model and the programs into an implementation model. The elements and structures of the two models are compared, and the differences between them are obtained. Based on the differences, a test suite is generated. Different MDLs can be chosen for the software under test. The usage of two classical MDLs in MCST, the equivalence classes model and the extended finite state machine (EFSM) model, is described with example applications. The results show that the test suites generated by MCST are more efficient and smaller than those of some other testing methods, such as the path-coverage testing method, the object state diagram testing method, etc.
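
    A toy sketch of the comparison step, with invented models and a plain transition table standing in for an EFSM-style MDL; each difference between the specification and implementation models seeds a test case:

```python
# Compare a specification model with an implementation model expressed in
# the same form (here: (state, event) -> next_state tables), then derive a
# test case from each difference. Models are invented for illustration.
spec_model = {("idle", "coin"): "ready",
              ("ready", "button"): "dispense",
              ("dispense", "done"): "idle"}
impl_model = {("idle", "coin"): "ready",
              ("ready", "button"): "idle",      # deviation from the spec
              ("dispense", "done"): "idle"}

def differences(spec, impl):
    keys = set(spec) | set(impl)
    return [k for k in keys if spec.get(k) != impl.get(k)]

# Each differing transition becomes a test exercising that transition.
test_suite = [{"state": s, "event": e, "expected": spec_model.get((s, e))}
              for (s, e) in differences(spec_model, impl_model)]
print(test_suite)
# -> [{'state': 'ready', 'event': 'button', 'expected': 'dispense'}]
```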

  18. Developing a TQM quality management method model

    OpenAIRE

    Zhang, Zhihai

    1997-01-01

    From an extensive review of total quality management literature, the external and internal environment affecting an organization's quality performance and the eleven primary elements of TQM are identified. Based on the primary TQM elements, a TQM quality management method model is developed. This model describes the primary quality management methods which may be used to assess an organization's present strengths and weaknesses with regard to its use of quality management methods. This model ...

  19. Economic modeling using artificial intelligence methods

    CERN Document Server

    Marwala, Tshilidzi

    2013-01-01

    This book examines the application of artificial intelligence methods to model economic data. It addresses causality and proposes new frameworks for dealing with this issue. It also applies evolutionary computing to model evolving economic environments.

  20. Variational Methods for Biomolecular Modeling

    CERN Document Server

    Wei, Guo-Wei

    2016-01-01

    Structure, function and dynamics of many biomolecular systems can be characterized by the energetic variational principle and the corresponding systems of partial differential equations (PDEs). This principle allows us to focus on the identification of essential energetic components, the optimal parametrization of energies, and the efficient computational implementation of energy variation or minimization. Given the fact that complex biomolecular systems are structurally non-uniform and their interactions occur through contact interfaces, their free energies are associated with various interfaces as well, such as solute-solvent interface, molecular binding interface, lipid domain interface, and membrane surfaces. This fact motivates the inclusion of interface geometry, particularly its curvatures, in the parametrization of free energies. Applications of such interface geometry based energetic variational principles are illustrated through three concrete topics: the multiscale modeling of biomolecular electrostatics...

  1. Mechatronic Systems Design Methods, Models, Concepts

    CERN Document Server

    Janschek, Klaus

    2012-01-01

    In this textbook, fundamental methods for model-based design of mechatronic systems are presented in a systematic, comprehensive form. The method framework presented here comprises domain-neutral methods for modeling and performance analysis: multi-domain modeling (energy/port/signal-based), simulation (ODE/DAE/hybrid systems), robust control methods, stochastic dynamic analysis, and quantitative evaluation of designs using system budgets. The model framework is composed of analytical dynamic models for important physical and technical domains of realization of mechatronic functions, such as multibody dynamics, digital information processing and electromechanical transducers. Building on the modeling concept of a technology-independent generic mechatronic transducer, concrete formulations for electrostatic, piezoelectric, electromagnetic, and electrodynamic transducers are presented. More than 50 fully worked out design examples clearly illustrate these methods and concepts and enable independent study of the…

  2. Systems Evaluation Methods, Models, and Applications

    CERN Document Server

    Liu, Siefeng; Xie, Naiming; Yuan, Chaoqing

    2011-01-01

    A book in the Systems Evaluation, Prediction, and Decision-Making Series, Systems Evaluation: Methods, Models, and Applications covers the evolutionary course of systems evaluation methods, clearly and concisely. Outlining a wide range of methods and models, it begins by examining the method of qualitative assessment. Next, it describes the process and methods for building an index system of evaluation and considers the compared evaluation and the logical framework approach, analytic hierarchy process (AHP), and the data envelopment analysis (DEA) relative efficiency evaluation method...

  3. Divorce and death: forty years of the Charleston Heart Study.

    Science.gov (United States)

    Sbarra, David A; Nietert, Paul J

    2009-01-01

    Forty years of follow-up data from the Charleston Heart Study (CHS) were used to examine the risk for early mortality associated with marital separation or divorce in a sample of more than 1,300 adults assessed on several occasions between 1960 and 2000. Participants who were separated or divorced at the start of the study evidenced significantly elevated rates of early mortality, and these results held after adjusting for baseline health status and other demographic variables. Being separated or divorced throughout the CHS follow-up window was one of the strongest predictors of early mortality. However, the excess mortality risk associated with separation or divorce was completely eliminated when participants who had ever experienced a marital separation or divorce during the study were compared with all other participants. These findings suggest that a key predictor of early death is the amount of time people live as separated or divorced. It is possible that the mortality risk conferred by marital dissolution is due to dimensions of personality that predict divorce as well as a decreased likelihood of future remarriage.

  4. [Infertility over forty: Pros and cons of IVF].

    Science.gov (United States)

    Belaisch-Allart, J; Maget, V; Mayenga, J-M; Grefenstette, I; Chouraqui, A; Belaid, Y; Kulski, O

    2015-09-01

    The population attempting pregnancy and having babies is ageing. The declining fertility potential and the late age of motherhood are significantly increasing the number of patients over forty consulting infertility specialists. Assisted reproductive technologies (ART) cannot compensate for the natural decline in fertility with age. In France, in public hospitals, ART is free of charge for women until 43 years; over 43, social insurance does not reimburse ART. Hence, 43 years is the usual limit, but between 40 and 42, is ART useful? The answer varies according to physicians, couples or society. On the medical level, the etiology of the infertility must be taken into account. If there is an explanation for the infertility (male or tubal infertility), ART is better than abstention. If the infertility is due only to age, the question remains open. In France, reimbursement by society of a technique with very low success rates is debated. However, efficacy is not absolutely compulsory in medicine; conversely, giving false hopes may be questioned too. Obtaining a reasonable consensus is rather difficult.

  5. Twitter's tweet method modelling and simulation

    Science.gov (United States)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    This paper sets out to propose the concept of Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them. It uses the design science research methodology for the proof of concept of the models and modelling processes. The following models have been developed for a Twitter marketing agent/company and tested in real circumstances and with real numbers. These models were finalized through a number of revisions and iterations of design, development, simulation, testing and evaluation. The paper also addresses the methods that best suit organized promotion through targeting on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are authenticated by the management of the company organization. It implements system dynamics concepts of Twitter marketing method modelling and produces models of various Twitter marketing situations. The Tweet method that Twitter provides can be adjusted, depending on the situation, in order to maximize the profit of the company/agent.
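
    iThink models are stock-and-flow system dynamics models; the following minimal sketch integrates an invented two-stock 'tweet method' model with Euler steps. The structure and all rates are placeholders, not those of the paper:

```python
# Minimal stock-and-flow simulation in the style of an iThink model.
# Two stocks (followers, engagement) with invented inflow/outflow rates.
followers, engagement = 1000.0, 50.0
dt, weeks = 1.0, 12
tweets_per_week = 20.0

for week in range(weeks):
    gained = 0.002 * engagement * tweets_per_week * followers ** 0.5  # inflow
    churned = 0.01 * followers                                        # outflow
    followers += dt * (gained - churned)
    # Engagement is driven by tweet volume and decays over time.
    engagement += dt * (0.5 * tweets_per_week - 0.1 * engagement)

print(f"followers after {weeks} weeks: {followers:.0f}")
```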

  6. Review of Forty Years of Technological Changes in Geomatics toward the Big Data Paradigm

    Directory of Open Access Journals (Sweden)

    Robert Jeansoulin

    2016-08-01

    Full Text Available Looking back at the last four decades, the technologies that have been developed for Earth observation and mapping can shed light on the technologies that are trending today and on their challenges. Forty years ago, the first digital pictures decided the fate of remote sensing, photogrammetric engineering, GIS, or, for short: of geomatics. This sudden wave of volumes of data triggered research in the fields that Big Data is plowing today: this paper will examine this transition. First, a rapid survey of the technology through the succession of selected terms will help identify two main periods in the last four decades. Spatial information appears in 1970 with the preparation of Landsat, and Big Data appears in 2010. The method for exploring geomatics' contribution to Big Data is to examine each of the "Vs" that are used today to characterize the latter: volume, velocity, variety, visualization, value, veracity, validity, and variability. Geomatics has been confronted with each of these facets during the period. The discussion compares the answers offered early by geomatics with the situation in Big Data today. Over a very large range of issues, from signal processing to the semantics of information, geomatics has made contributions to many data models and algorithms. Big Data now enables geographic information to be disseminated much more widely, and to benefit from new information sources, expanding through the Internet of Things towards a future Digital Earth. Some of the lessons learned during the four decades of geomatics can also be lessons for Big Data today, and for the future of geomatics.

  7. Taekwondo training improves balance in volunteers over forty.

    Directory of Open Access Journals (Sweden)

    Gaby ePons Van Dijk

    2013-03-01

    Full Text Available Balance deteriorates with age, and may eventually lead to falling accidents which threaten independent living. As taekwondo contains various highly dynamic movement patterns, taekwondo practice may sustain or improve balance. Therefore, in 24 middle-aged healthy volunteers (40-71 years) we investigated the effects of age-adapted taekwondo training of one hour a week during one year on various balance parameters: motor orientation ability (primary outcome measure), postural and static balance tests, single leg stance, one leg hop test, and a questionnaire. Motor orientation ability significantly increased in favour of the antero-posterior direction, with a difference of 0.62 degrees towards anterior compared with the pre-training measurement, when participants corrected the tilted platform rather towards the posterior direction; female gender was an independent outcome predictor. On postural balance measurements, sway path improved in all 19 participants, by a median of 9.3 mm/sec (range 0.71-45.86), and sway area in 15 participants, by 4.2 mm²/sec (range 1.22-17.39). Static balance improved by an average of 5.34 seconds for the right leg, and by almost 4 seconds for the left. Median single leg stance duration increased in 17 participants by 5 seconds (range 1-16), and in 13 participants by 8 seconds (range 1-18). The average one leg hop test distance increased (not statistically significantly) by 9.5 cm. The questionnaire reported a better 'ability to maintain balance' in sixteen participants. In conclusion, our data suggest that age-adapted taekwondo training improves various aspects of balance control in healthy people over the age of forty.

  8. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimate for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a

  9. Meshless Method for Acoustic and Elastic Modeling

    Institute of Scientific and Technical Information of China (English)

    Jia Xiaofeng; Hu Tianyue; Wang Runqiu

    2005-01-01

    The wave equation method is one of the fundamental techniques for seismic modeling and imaging. In this paper the element-free method (EFM) was used to solve acoustic and elastic equations. The key point of this method is that no elements are needed, which frees the nodes from elemental restraint. Besides, the moving-least-squares (MLS) criterion in EFM leads to high accuracy and smooth derivatives. The theory of EFM for both acoustic and elastic wave equations, as well as absorbing boundary conditions, is discussed. Furthermore, some pre-stack models were used to show the good performance of EFM in seismic modeling.
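
    The heart of the element-free method is the moving-least-squares fit; this generic one-dimensional numpy sketch (not the paper's code) shows how nodal values are combined through a weighted local polynomial fit:

```python
# Moving-least-squares (MLS) evaluation of a field from scattered nodes:
# nearby nodes are weighted and a local linear polynomial is fitted.
# Generic 1-D illustration with an invented field and support size.
import numpy as np

nodes = np.linspace(0.0, 1.0, 11)
u_nodes = np.sin(2 * np.pi * nodes)          # nodal values of some field

def mls_value(x, support=0.25):
    w = np.maximum(0.0, 1.0 - np.abs(nodes - x) / support) ** 2  # weights
    P = np.column_stack([np.ones_like(nodes), nodes])            # linear basis
    A = P.T @ (w[:, None] * P)               # moment matrix
    b = P.T @ (w * u_nodes)
    a = np.linalg.solve(A, b)                # local polynomial coefficients
    return a[0] + a[1] * x

exact = np.sin(2 * np.pi * 0.37)
print(f"MLS estimate at 0.37: {mls_value(0.37):.4f}  (exact {exact:.4f})")
```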

  10. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the...... definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as the analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions.

  11. Level Crossing Methods in Stochastic Models

    CERN Document Server

    Brill, Percy H

    2008-01-01

    Since its inception in 1974, the level crossing approach for analyzing a large class of stochastic models has become increasingly popular among researchers. This volume traces the evolution of level crossing theory for obtaining probability distributions of state variables and demonstrates solution methods in a variety of stochastic models including: queues, inventories, dams, renewal models, counter models, pharmacokinetics, and the natural sciences. Results for both steady-state and transient distributions are given, and numerous examples help the reader apply the method to solve problems…

  12. History, Archaeology and the Bible Forty Years after "Historicity". Changing Perspectives 6

    DEFF Research Database (Denmark)

    In History, Archaeology and the Bible Forty Years after “Historicity”, Hjelm and Thompson argue that a ‘crisis’ broke in the 1970s, when several new studies of biblical history and archaeology were published, questioning the historical-critical method of biblical scholarship. The crisis formed...... articles from some of the field’s best scholars with comprehensive discussion of historical, archaeological, anthropological, cultural and literary approaches to the Hebrew Bible and Palestine’s history. The essays question: “How does biblical history relate to the archaeological history of Israel...

  13. Numerical methods and modelling for engineering

    CERN Document Server

    Khoury, Richard

    2016-01-01

    This textbook provides a step-by-step approach to numerical methods in engineering modelling. The authors provide a consistent treatment of the topic, from the ground up, to reinforce for students that numerical methods are a set of mathematical modelling tools which allow engineers to represent real-world systems and compute features of these systems with a predictable error rate. Each method presented addresses a specific type of problem, namely root-finding, optimization, integral, derivative, initial value problem, or boundary value problem, and each one encompasses a set of algorithms to solve the problem given some information and to a known error bound. The authors demonstrate that after developing a proper model and understanding of the engineering situation they are working on, engineers can break down a model into a set of specific mathematical problems, and then implement the appropriate numerical methods to solve these problems. Uses a “building-block” approach, starting with simpler mathemati...
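
    For the first problem class mentioned, root-finding, here is a standard sketch in the spirit of the book's building-block approach (generic code, not taken from the book):

```python
# Newton's method for root-finding: quadratic convergence near a simple root,
# with the iteration stopped once the step falls below a tolerance.
def newton(f, dfdx, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Solve x^3 - 2 = 0, i.e. compute the cube root of 2:
root = newton(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, x0=1.0)
print(f"cube root of 2 ~ {root:.12f}")
```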

  14. Modeling complex work systems - method meets reality

    OpenAIRE

    Veer, van der, C.G.; Hoeve, Machteld; Lenting, Bert F.

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the design of complex systems, has been applied in a situation of redesign of a Dutch public administration system. The most feasible method to collect information in this case was ethnography, the resulti...

  15. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define...... a taxonomy of aspects around conservation, constraints and constitutive relations. Aspects of the ICAS-MoT toolbox are given to illustrate the functionality of a computer aided modelling tool, which incorporates an interface to MS Excel....

  16. 77 FR 54930 - Carlyle Plastics and Resins, Formerly Known as Fortis Plastics, A Subsidiary of Plastics...

    Science.gov (United States)

    2012-09-06

    ... Employment and Training Administration Carlyle Plastics and Resins, Formerly Known as Fortis Plastics, A... plastic parts. New information shows that Fortis Plastics is now called Carlyle Plastics and Resins. In... of Carlyle Plastics and Resins, formerly known as Fortis Plastics, a subsidiary of...

  17. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...

  18. Modeling complex work systems - method meets reality

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the

  20. Serempathy: A New Approach To Innovation. An Application To Forty-Six Regions Of Atlantic Arc Countries

    Directory of Open Access Journals (Sweden)

    Pablo COTO-MILLÁN

    2011-10-01

    Full Text Available This research provides a new theoretical approach to innovation called Serempathy: Serendipity (which is achieved by chance) + Empathy (putting yourself in the other). Serempathy relies on collaborative relationships between university, private companies and public administration. This theoretical approach adds chance to scientific discovery and an environment of empathy. Ideas aren't self-contained things; they're more like ecosystems and networks. The work also provides data processed in recent years (2004-2006) for forty-six Atlantic Arc regions (the forty regions of the countries United Kingdom, France, Portugal and Spain), overall and in different clusters, providing relevant empirical evidence on the relationship between Human Capital, Technological Platform, Innovation, Serempathy and Output. The econometric and statistical modeling is considered especially for the forty regions of the Atlantic Arc.

  1. A survey of real face modeling methods

    Science.gov (United States)

    Liu, Xiaoyue; Dai, Yugang; He, Xiangzhen; Wan, Fucheng

    2017-09-01

    The face model has always been a research challenge in computer graphics, as it involves the coordination of multiple organs in the face. This article explains two kinds of face modeling method, one based on data driving and one based on parameter control, analyzes their content and background, summarizes their advantages and disadvantages, and concludes that the muscle model, which is based on anatomical principles, has higher accuracy and is easier to drive.

  2. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
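
    The variance segregation the abstract describes follows the usual Bayesian model averaging decomposition (a generic identity, not the authors' notation): with posterior model probabilities p(M_k | D) and a predicted quantity \Delta,

        \mathrm{Var}(\Delta \mid D) = \sum_k p(M_k \mid D)\,\mathrm{Var}(\Delta \mid M_k, D) \;+\; \sum_k p(M_k \mid D)\,\bigl(\mathrm{E}[\Delta \mid M_k, D] - \mathrm{E}[\Delta \mid D]\bigr)^2,

    where the first sum is the within-model variance and the second the between-model variance; the HBMA tree applies this decomposition recursively, one level per uncertain model component.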

  3. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method...

  4. Methods of modelling relative growth rate

    Institute of Scientific and Technical Information of China (English)

    Arne Pommerening; Anders Muszta

    2015-01-01

    Background: Analysing and modelling plant growth is an important interdisciplinary field of plant science. The use of relative growth rates, involving the analysis of plant growth relative to plant size, has more or less independently emerged in different research groups and at different times and has provided powerful tools for assessing the growth performance and growth efficiency of plants and plant populations. In this paper, we explore how these isolated methods can be combined to form a consistent methodology for modelling relative growth rates. Methods: We review and combine existing methods of analysing and modelling relative growth rates and apply a combination of methods to Sitka spruce (Picea sitchensis (Bong.) Carr.) stem-analysis data from North Wales (UK) and British Douglas fir (Pseudotsuga menziesii (Mirb.) Franco) yield table data. Results: The results indicate that, by combining the approaches of different plant-growth analysis laboratories and using them simultaneously, we can advance and standardise the concept of relative plant growth. Particularly the growth multiplier plays an important role in modelling relative growth rates. Another useful technique has been the recent introduction of size-standardised relative growth rates. Conclusions: Modelling relative growth rates mainly serves two purposes, 1) an improved analysis of growth performance and efficiency and 2) the prediction of future or past growth rates. This makes the concept of relative growth ideally suited to growth reconstruction as required in dendrochronology, climate change and forest decline research and for interdisciplinary research projects beyond the realm of plant science.
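
    For readers unfamiliar with the quantities involved, the relative growth rate and the growth multiplier mentioned in the abstract are commonly defined as follows (standard definitions; the paper's exact notation may differ): for a size variable W,

        \mathrm{RGR} = \frac{1}{W}\frac{\mathrm{d}W}{\mathrm{d}t} = \frac{\mathrm{d}(\ln W)}{\mathrm{d}t}, \qquad \bar{R} = \frac{\ln W_2 - \ln W_1}{t_2 - t_1},

    so that over an interval of length \Delta t the size is scaled by the growth multiplier m = e^{\bar{R}\,\Delta t}, i.e. W_2 = m\,W_1.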

  5. Methods of modelling relative growth rate

    Directory of Open Access Journals (Sweden)

    Arne Pommerening

    2015-03-01

    Full Text Available Background: Analysing and modelling plant growth is an important interdisciplinary field of plant science. The use of relative growth rates, involving the analysis of plant growth relative to plant size, has more or less independently emerged in different research groups and at different times and has provided powerful tools for assessing the growth performance and growth efficiency of plants and plant populations. In this paper, we explore how these isolated methods can be combined to form a consistent methodology for modelling relative growth rates. Methods: We review and combine existing methods of analysing and modelling relative growth rates and apply a combination of methods to Sitka spruce (Picea sitchensis (Bong.) Carr.) stem-analysis data from North Wales (UK) and British Douglas fir (Pseudotsuga menziesii (Mirb.) Franco) yield table data. Results: The results indicate that, by combining the approaches of different plant-growth analysis laboratories and using them simultaneously, we can advance and standardise the concept of relative plant growth. Particularly the growth multiplier plays an important role in modelling relative growth rates. Another useful technique has been the recent introduction of size-standardised relative growth rates. Conclusions: Modelling relative growth rates mainly serves two purposes, 1) an improved analysis of growth performance and efficiency and 2) the prediction of future or past growth rates. This makes the concept of relative growth ideally suited to growth reconstruction as required in dendrochronology, climate change and forest decline research and for interdisciplinary research projects beyond the realm of plant science.

  6. Modelling asteroid brightness variations. I - Numerical methods

    Science.gov (United States)

    Karttunen, H.

    1989-01-01

    A method for generating lightcurves of asteroid models is presented. The effects of the shape of the asteroid and the scattering law of a surface element are distinctly separable, being described by chosen functions that can easily be changed. The shape is specified by means of two functions that yield the length of the radius vector and the normal vector of the surface at a given point. The general shape must be convex, but spherical concavities producing macroscopic shadowing can also be modeled.
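
    The separation of shape and scattering law that the abstract mentions can be written generically as a disk-integrated brightness (a standard formulation, assumed rather than quoted from the paper):

        L = \int_{\{\mu > 0,\; \mu_0 > 0\}} S(\mu, \mu_0)\, \mathrm{d}A, \qquad \mu = \mathbf{n}\cdot\mathbf{e}, \quad \mu_0 = \mathbf{n}\cdot\mathbf{s},

    where \mathbf{n} is the outward surface normal, \mathbf{e} and \mathbf{s} the unit directions to the observer and the Sun, and S a chosen scattering law (e.g. Lommel-Seeliger, S \propto \mu\mu_0/(\mu + \mu_0)). The shape model enters only through the domain of integration and \mathbf{n}, so shape and scattering functions can be swapped independently.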

  7. Niederhauser's model for epilepsy and wavelet methods

    CERN Document Server

    Trevino, J P; Moran, J L; Murguia, J S; Rosu, H C

    2006-01-01

    Wavelets and wavelet transforms (WT) could be a very useful tool to analyze electroencephalogram (EEG) signals. To illustrate the WT method we make use of a simple electric circuit model introduced by Niederhauser, which is used to produce EEG-like signals, particularly during an epileptic seizure. The original model is modified to resemble the 10-20 derivation of the EEG measurements. WT is used to study the main features of these signals
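
    A minimal sketch of the kind of wavelet analysis described (an illustration using PyWavelets on a synthetic signal, not the authors' code or data):

        import numpy as np
        import pywt

        fs = 256.0                                  # assumed sampling rate, Hz
        t = np.arange(0.0, 4.0, 1.0 / fs)
        # Synthetic EEG-like signal: 10 Hz background plus a transient
        # higher-frequency burst between 1.5 s and 2.5 s.
        sig = np.sin(2 * np.pi * 10 * t)
        sig += np.where((t > 1.5) & (t < 2.5), np.sin(2 * np.pi * 25 * t), 0.0)

        scales = np.arange(1, 64)
        coefs, freqs = pywt.cwt(sig, scales, 'morl', sampling_period=1.0 / fs)
        # |coefs| is a time-frequency map; the burst appears as a localized
        # band of large coefficients around 25 Hz between 1.5 s and 2.5 s.
        print(coefs.shape, freqs.min(), freqs.max())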

  8. Modeling Storm Surges Using Discontinuous Galerkin Methods

    Science.gov (United States)

    2016-06-01

    ...discontinuous Galerkin solutions of the compressible Euler equations with applications to atmospheric simulations," Journal of Computational Physics, vol. ... ...-order continuous Galerkin methods were used for the SWE on a sphere [9]. In 2002, Giraldo et al. [10] introduced an efficient DG method for the SWE ... a hard time transitioning from changing bathymetry slopes, causing distortions in the model, to include extra line segments. The discrepancies caused us to ...

  9. Seroprevalence of Hepatitis A in Hemodialysis Patients Younger Than Forty Years Who Are Kidney Transplant Candidates

    Directory of Open Access Journals (Sweden)

    Sara Abolghasemi

    2017-04-01

    Full Text Available Background: Hepatitis A is a common infection during childhood, especially in developing countries. It can cause severe complications in immunocompromised patients. Given the increasing number of kidney transplants in the country and the epidemiologic shift of HAV observed in previous studies, we evaluated the seroprevalence of hepatitis A in hemodialysis patients younger than forty years who are kidney transplant candidates, in order to inform a vaccination policy for them. Materials and Methods: In a cross-sectional study during 2014-2015, hepatitis A antibody levels were examined in hemodialysis patients younger than forty years who were kidney transplant candidates in 12 hospitals in Tehran, Iran. Their sera were tested for anti-HAV IgM and IgG by ELISA kits. Results: Hepatitis A virus antibody was positive in 66 (72.5%) of 91 patients. The prevalence of HAV antibody was 0% in patients younger than 20 years and 45% in the under-25 age group. Prevalence increased significantly with age, consistent with the epidemiological shifts shown in other studies. Conclusion: Given the availability of a vaccine, the severe complications of hepatitis A in immunocompromised individuals, and the low prevalence of positive serology in individuals under 25 years, checking antibodies in kidney transplant candidates and vaccinating seronegative persons appears logical.

  10. Correlations between cutaneous malignant melanoma and other cancers: An ecological study in forty European countries

    Directory of Open Access Journals (Sweden)

    Pablo Fernandez-Crehuet Serrano

    2016-01-01

    Full Text Available Background: The presence of noncutaneous neoplasms does not seem to increase the risk of cutaneous malignant melanoma; however, melanoma seems to be associated with the development of other hematological, brain, breast, uterine, and prostatic neoplasms. An ecological transversal study was conducted to examine the geographic association between cutaneous malignant melanoma and 24 localizations of cancer in forty European countries. Methods: Cancer incidence rates were extracted from the GLOBOCAN database of the International Agency for Research on Cancer. We analyzed the age-adjusted and gender-stratified incidence rates for different localizations of cancer in forty European countries and calculated their correlation using Pearson's correlation test. Results: In males, significant correlations were found between cutaneous malignant melanoma and testicular cancer (r = 0.83 [95% confidence interval (CI): 0.68-0.89]), myeloma (r = 0.68 [95% CI: 0.46-0.81]), prostatic carcinoma (r = 0.66 [95% CI: 0.43-0.80]), and non-Hodgkin lymphoma (NHL) (r = 0.63 [95% CI: 0.39-0.78]). In females, significant correlations were found between cutaneous malignant melanoma and breast cancer (r = 0.80 [95% CI: 0.64-0.88]), colorectal cancer (r = 0.72 [95% CI: 0.52-0.83]), and NHL (r = 0.71 [95% CI: 0.50-0.83]). Conclusions: These correlations call for new studies on the epidemiology of cancer in general and cutaneous malignant melanoma risk factors in particular.
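
    For illustration, correlations and confidence intervals of the kind reported above can be computed with Pearson's test and the Fisher z-transformation (a sketch on synthetic data, not the paper's code or values):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        melanoma = rng.gamma(5.0, 2.0, size=40)   # synthetic incidence rates
        breast = 0.8 * melanoma + rng.normal(scale=2.0, size=40)

        r, p = stats.pearsonr(melanoma, breast)
        z = np.arctanh(r)                          # Fisher z-transform
        se = 1.0 / np.sqrt(len(melanoma) - 3)
        lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
        print(f"r = {r:.2f} (95% CI: {lo:.2f}-{hi:.2f}), p = {p:.3g}")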

  11. Reflections on forty years of education in Spain, or the irresistible attraction of laws

    Directory of Open Access Journals (Sweden)

    Manuel Puelles Benítez

    2015-07-01

    Full Text Available In almost forty years of democracy, educational policy in Spain has given rise to a phenomenon that has produced effects quite the opposite from those that were sought, with an excess of educational laws resulting in remarkable and constant legislative instability. This paper analyses the underlying reasons for this phenomenon, particularly the policies of the two major national parties and the embodiment in education laws of their systemic models of education, models which clearly bear the stamp of their respective ideologies. This has inevitably led to legislative reforms when the electorate has voted for a change of government. This analysis points to the need for a new consensus on education to ensure the effective implementation of the reforms launched by these laws.

  12. Methods in Model Order Reduction (MOR) field

    Institute of Scientific and Technical Information of China (English)

    刘志超

    2014-01-01

    Nowadays, system models may be quite large, even up to tens of thousands of orders. In spite of increasing computational power, direct simulation of these large-scale systems may be impractical. Thus, to meet industry requirements, analytically tractable and computationally cheap models must be designed. This is the essential task of Model Order Reduction (MOR). This article describes the basics of MOR optimization and various ways of designing MOR, and draws conclusions about existing methods. In addition, it proposes some heuristic directions for future work.

  13. Railway Track Allocation: Models and Methods

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    2011-01-01

    Efficiently coordinating the movement of trains on a railway network is a central part of the planning process for a railway company. This paper reviews models and methods that have been proposed in the literature to assist planners in finding train routes. Since the problem of routing trains...

  14. Developing a TQM quality management method model

    NARCIS (Netherlands)

    Zhang, Zhihai

    1997-01-01

    From an extensive review of total quality management literature, the external and internal environment affecting an organization's quality performance and the eleven primary elements of TQM are identified. Based on the primary TQM elements, a TQM quality management method model is developed. This mo

  17. Approximate Methods for State-Space Models

    CERN Document Server

    Koyama, Shinsuke; Shalizi, Cosma Rohilla; Kass, Robert E; 10.1198/jasa.2009.tm08326

    2010-01-01

    State-space models provide an important body of techniques for analyzing time-series, but their use requires estimating unobserved states. The optimal estimate of the state is its conditional expectation given the observation histories, and computing this expectation is hard when there are nonlinearities. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we study a nonlinear filter for nonlinear/non-Gaussian state-space models, which uses Laplace's method, an asymptotic series expansion, to approximate the state's conditional mean and variance, together with a Gaussian conditional distribution. This Laplace-Gaussian filter (LGF) gives fast, recursive, deterministic state estimates, with an error which is set by the stochastic characteristics of the model and is, we show, stable over time. We illustrate the estimation ability of the LGF by applying it to the problem of neural decoding and compare it to sequential Monte Carlo both in simulat...
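
    The following is a rough one-dimensional sketch of the filter's core idea (my simplification to a scalar state; the paper treats the general case): approximate the filtering posterior by a Gaussian centred at the mode of the log-posterior, with variance given by the curvature there.

        import numpy as np

        def laplace_gaussian_step(dlog_lik, d2log_lik, m_pred, v_pred):
            """One scalar Laplace-style update (illustrative sketch only).

            Prior from the prediction step: x ~ N(m_pred, v_pred).
            Returns the posterior mean (mode) and variance.
            """
            x = m_pred
            for _ in range(20):                    # Newton search for the mode
                g = dlog_lik(x) - (x - m_pred) / v_pred
                h = d2log_lik(x) - 1.0 / v_pred    # negative near the mode
                step = g / h
                x -= step
                if abs(step) < 1e-10:
                    break
            return x, -1.0 / h

        # Example: Poisson spike count y with rate exp(x), prior N(0, 1),
        # loosely mimicking the neural decoding application.
        y = 3
        m, v = laplace_gaussian_step(
            dlog_lik=lambda x: y - np.exp(x),
            d2log_lik=lambda x: -np.exp(x),
            m_pred=0.0, v_pred=1.0)
        print(m, v)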

  18. Internet Resource Pricing Models, Mechanisms, and Methods

    CERN Document Server

    He, Huan; Liu, Ying

    2011-01-01

    With the fast development of video and voice network applications, CDN (Content Distribution Networks) and P2P (Peer-to-Peer) content distribution technologies have gradually matured. How to effectively use Internet resources has thus attracted more and more attention. For the study of resource pricing, a complete pricing strategy covers pricing models, mechanisms and methods. We first introduce three basic Internet resource pricing models through an Internet cost analysis. Then, with the evolution of service types, we introduce several corresponding mechanisms which can ensure pricing implementation and resource allocation. On network resource pricing methods, we discuss utility optimization in economics, and emphasize two classes of pricing methods (including system optimization and entities' strategic optimizations). Finally, we conclude the paper and forecast research directions on pricing strategies applicable to novel service situations in the near future.

  19. AN EFFECTIVE HUMAN LEG MODELING METHOD

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Digital medicine is a new concept in the medical field, and the need for digital human bodies has been increasing in recent years. This paper used Free Form Deformation (FFD) to model the motion of the human leg. It presented the motion equations of the knee joint on the basis of its anatomic structure and motion characteristics, then transmitted the deformation to the leg mesh through a simplified FFD that used only second-order B-spline basis functions. The experiments prove that this method can simulate the bending of the leg and the deformation of the muscles fairly well. Compared with the method of curved patches, this method is more convenient and effective. Furthermore, the equations can easily be applied to other joint models of the human body.
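
    Assuming "second-order B-spline basis functions" refers to the uniform quadratic B-spline (my reading of the abstract), the simplified FFD blends each mesh vertex from three control points as

        B_0(t) = \tfrac{1}{2}(1-t)^2, \qquad B_1(t) = \tfrac{1}{2}(-2t^2 + 2t + 1), \qquad B_2(t) = \tfrac{1}{2}t^2,

        X(t) = \sum_{i=0}^{2} B_i(t)\, P_i, \qquad t \in [0, 1],

    where the P_i are control points moved according to the knee-joint motion equations; since \sum_i B_i(t) = 1, the deformation is an affine-invariant blend of the control points.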

  20. Models and Methods for Free Material Optimization

    DEFF Research Database (Denmark)

    Weldeyesus, Alemseged Gebrehiwot

    FMO problem formulations with stress constraints. These problems are highly nonlinear and lead to the so-called singularity phenomenon. The method described in the thesis has successfully solved these problems. In the numerical experiments the stress constraints have been satisfied with high...... conditions for physical attainability, in the context that it has to be symmetric and positive semidefinite. FMO problems have been studied for the last two decades in many articles that led to the development of a wide range of models, methods, and theories. As the design variables in FMO are the local....... These problems are more difficult to solve and demand higher computational efforts than the standard optimization problems. The focus of today's development of solution methods for FMO problems is based on first-order methods that require a large number of iterations to obtain optimal solutions. The scope...

  1. Mathematical methods and models in composites

    CERN Document Server

    Mantic, Vladislav

    2014-01-01

    This book provides a representative selection of the most relevant, innovative, and useful mathematical methods and models applied to the analysis and characterization of composites and their behaviour on micro-, meso-, and macroscale. It establishes the fundamentals for meaningful and accurate theoretical and computer modelling of these materials in the future. Although the book is primarily concerned with fibre-reinforced composites, which have ever-increasing applications in fields such as aerospace, many of the results presented can be applied to other kinds of composites. The topics cover

  2. Forty Cases of Gastrointestinal Neurosis Treated by Acupuncture

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Objective: To compare the therapeutic effect of acupuncture for gastrointestinal neurosis with that of an oral remedy. Methods: Eighty cases were randomly divided into the following 2 groups. In the treatment group, acupuncture was given for one month at the main points of Zhongwan (CV 12), Zusanli (ST 36), Taichong (LR 3) and Shenmen (HT 7), with the auxiliary points selected according to TCM differentiation. In the control group, Domperidone was orally administered for one month. Results: The total effective rate was 92.5% in the treatment group and 75.0% in the control group, with a significant difference between the 2 groups (χ2=4.423, P<0.05). Acupuncture was superior to the oral remedy in therapeutic effect. Conclusions: Acupuncture may show better results for gastrointestinal neurosis, with fewer toxic side effects.

  3. Isolated tuberculous epididymitis: A review of forty cases

    Directory of Open Access Journals (Sweden)

    Viswaroop B

    2005-01-01

    Full Text Available Background: Tuberculous epididymitis is one of the causes of chronic epididymal lesions. It is difficult to diagnose in the absence of renal involvement. Aim: To profile isolated tuberculous epididymitis and to assess our approach to the evaluation of this group of patients. Setting and Design: Retrospective study done at Christian Medical College, Vellore, South India. Methods and Materials: Between 1992 and 2002, 156 fine needle aspiration cytology specimens and 108 epididymal biopsies were carried out in 187 men for evaluation of chronic epididymal nodules. Isolated epididymal tuberculosis was defined as "tuberculous infection affecting the epididymis without evidence of renal involvement, as documented by the absence of acid fast bacilli in the urine sample and on imaging". The age, laterality, mode of presentation and method of histological diagnosis were studied with the objective of profiling isolated tuberculous epididymitis. Results: Fifty-four of the 187 men (median age 32 years; interquartile range: 21-37 years) had tuberculous epididymitis. Fourteen were excluded from the analysis (10 had associated urinary tract tuberculosis and 4 were lost to follow-up). None of the 40 men with isolated tuberculous epididymitis had urinary symptoms. Bilateral involvement was seen in five (12.5%) cases. The salient presenting features included painful swelling (16 subjects, 40%), scrotal sinus (4, 20%) and acute epididymitis (2, 10%). A past history or concomitant presence of tuberculosis was noted in three subjects each. Anti-TB treatment resulted in a complete response in 10 and a partial response in 18. Five subjects underwent epididymectomy. Tuberculous epididymitis was found incidentally in 5 (10%) cases on high orchiectomy specimens done for suspected testicular tumour. Conclusions: Tuberculous epididymitis can be the sole presentation of genitourinary tuberculosis.

  4. Mathematical Models and Methods for Living Systems

    CERN Document Server

    Chaplain, Mark; Pugliese, Andrea

    2016-01-01

    The aim of these lecture notes is to give an introduction to several mathematical models and methods that can be used to describe the behaviour of living systems. This emerging field of application intrinsically requires the handling of phenomena occurring at different spatial scales and hence the use of multiscale methods. Modelling and simulating the mechanisms that cells use to move, self-organise and develop in tissues is not only fundamental to an understanding of embryonic development, but is also relevant in tissue engineering and in other environmental and industrial processes involving the growth and homeostasis of biological systems. Growth and organization processes are also important in many tissue degeneration and regeneration processes, such as tumour growth, tissue vascularization, heart and muscle functionality, and cardio-vascular diseases.

  5. Multiphase Transformer Modelling using Finite Element Method

    Directory of Open Access Journals (Sweden)

    Nor Azizah Mohd Yusoff

    2015-03-01

    Full Text Available The year 1970 saw the invention of the five-phase motor, a milestone in advanced electric machines. Over the years, many researchers have worked passionately towards developing multiphase drive systems. They developed a static transformation system to obtain a multiphase supply from the available three-phase supply. This idea influenced further developments in electric machines, for example as an efficient solution for bulk power transfer. This paper presents a detailed description of how to obtain a five-phase supply with fixed voltage and frequency by using the Finite-Element Method (FEM). The specifications of a real transformer were identified before being applied to the software model. The Finite-Element Method thus provides a clear understanding by visualizing the geometry modeling, connection scheme and output waveforms.

  6. Resonant Transmission Line Method for Econophysics models

    CERN Document Server

    Raptis, T E

    2016-01-01

    In a recent paper [1304.6846], Racorean introduced a formal similarity between the Black-Scholes stock pricing model and a Schrödinger equation. We use a previously introduced method of a resonant transmission line for arbitrary 2nd-order Sturm-Liouville problems to attack the same problem from a different perspective, revealing some deep structures in the naturally associated eigenvalue problem.

  7. Modeling Open Software Architectures of Robot Controllers: A Brief Survey of Modeling Methods and Developing Methods

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Openness is one of the features of modern robot controllers. Although many modeling technologies have been discussed for modeling and developing open robot controllers, the focus is always on the modeling methodologies, while the relations between modeling methods and developing methods are usually ignored. Based on the general software architecture of open robot controllers, this paper discusses modeling and developing methods, and the relationships between the typical ones are also analyzed.

  8. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element....... The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method...... further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first...

  9. Bayesian structural equation modeling method for hierarchical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Jiang Xiaomo [Department of Civil and Environmental Engineering, Vanderbilt University, Box 1831-B, Nashville, TN 37235 (United States)], E-mail: xiaomo.jiang@vanderbilt.edu; Mahadevan, Sankaran [Department of Civil and Environmental Engineering, Vanderbilt University, Box 1831-B, Nashville, TN 37235 (United States)], E-mail: sankaran.mahadevan@vanderbilt.edu

    2009-04-15

    A building block approach to model validation may proceed through various levels, such as material to component to subsystem to system, comparing model predictions with experimental observations at each level. Usually, experimental data becomes scarce as one proceeds from lower to higher levels. This paper presents a structural equation modeling approach to make use of the lower-level data for higher-level model validation under uncertainty, integrating several components: lower-level data, higher-level data, computational model, and latent variables. The method proposed in this paper uses latent variables to model two sets of relationships, namely, the computational model to system-level data, and lower-level data to system-level data. A Bayesian network with Markov chain Monte Carlo simulation is applied to represent the two relationships and to estimate the influencing factors between them. Bayesian hypothesis testing is employed to quantify the confidence in the predictive model at the system level, and the role of lower-level data in the model validation assessment at the system level. The proposed methodology is implemented for hierarchical assessment of three validation problems, using discrete observations and time-series data.

  10. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
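
    For context, the "goodness of fit" criteria such guidelines use are typically the normalized mean bias error (NMBE) and the coefficient of variation of the root mean square error (CV(RMSE)) between modeled and measured (here, surrogate) utility data. A minimal sketch, with the normalization convention (n - p, with p = 1) assumed:

        import numpy as np

        def nmbe(measured, modeled, p=1):
            """Normalized mean bias error, percent (sketch)."""
            m, s = np.asarray(measured, float), np.asarray(modeled, float)
            return 100.0 * (m - s).sum() / ((len(m) - p) * m.mean())

        def cv_rmse(measured, modeled, p=1):
            """Coefficient of variation of the RMSE, percent (sketch)."""
            m, s = np.asarray(measured, float), np.asarray(modeled, float)
            return 100.0 * np.sqrt(((m - s) ** 2).sum() / (len(m) - p)) / m.mean()

        # Example with 12 synthetic monthly utility bills (kWh).
        bills   = [900, 850, 700, 600, 500, 450, 470, 480, 520, 640, 760, 880]
        modeled = [880, 870, 690, 630, 520, 440, 460, 500, 510, 620, 790, 860]
        print(nmbe(bills, modeled), cv_rmse(bills, modeled))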

  11. Generic Sensor Modeling Using Pulse Method

    Science.gov (United States)

    Helder, Dennis L.; Choi, Taeyoung

    2005-01-01

    Recent development of high spatial resolution satellites such as IKONOS, Quickbird and Orbview enables observation of the Earth's surface with sub-meter resolution. Compared to the 30 meter resolution of Landsat 5 TM, the amount of information in the output image has dramatically increased. In this era of high spatial resolution, the estimation of the spatial quality of images is gaining attention. Historically, the Modulation Transfer Function (MTF) concept has been used to estimate an imaging system's spatial quality. Various methods, sometimes classified by target shape, were developed in laboratory environments utilizing sinusoidal inputs, periodic bar patterns and narrow slits. On-orbit sensor MTF estimation was performed on 30-meter GSD Landsat 4 Thematic Mapper (TM) data from a bridge target as a pulse input. Because of a high resolution sensor's small Ground Sampling Distance (GSD), reasonably sized man-made edge, pulse, and impulse targets can be deployed on a uniform grassy area with accurate control of ground targets using tarps and convex mirrors. All the previous work cited calculated MTF without testing the MTF estimator's performance. In a previous report, a numerical generic sensor model was developed to simulate and improve the performance of on-orbit MTF estimation techniques. Results from that sensor modeling report that have been incorporated into standard MTF estimation work include Fermi edge detection and the newly developed 4th-order modified Savitzky-Golay (MSG) interpolation technique. Noise sensitivity was studied by performing simulations on known noise sources and a sensor model. Extensive investigation was done to characterize multi-resolution ground noise. Finally, angle simulation was tested by using synthetic pulse targets with angles from 2 to 15 degrees, several brightness levels, and different noise levels from both ground targets and the imaging system. As a continuing research activity using the developed sensor

  12. Approximate Methods for State-Space Models.

    Science.gov (United States)

    Koyama, Shinsuke; Pérez-Bolde, Lucia Castellanos; Shalizi, Cosma Rohilla; Kass, Robert E

    2010-03-01

    State-space models provide an important body of techniques for analyzing time-series, but their use requires estimating unobserved states. The optimal estimate of the state is its conditional expectation given the observation histories, and computing this expectation is hard when there are nonlinearities. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we study a nonlinear filter for nonlinear/non-Gaussian state-space models, which uses Laplace's method, an asymptotic series expansion, to approximate the state's conditional mean and variance, together with a Gaussian conditional distribution. This Laplace-Gaussian filter (LGF) gives fast, recursive, deterministic state estimates, with an error which is set by the stochastic characteristics of the model and is, we show, stable over time. We illustrate the estimation ability of the LGF by applying it to the problem of neural decoding and compare it to sequential Monte Carlo both in simulations and with real data. We find that the LGF can deliver superior results in a small fraction of the computing time.

  13. Engineering design of systems models and methods

    CERN Document Server

    Buede, Dennis M

    2009-01-01

    The ideal introduction to the engineering design of systems, now in a new edition. The Engineering Design of Systems, Second Edition compiles a wealth of information from diverse sources to provide a unique, one-stop reference to current methods for systems engineering. It takes a model-based approach to key systems engineering design activities and introduces methods and models used in the real world. Features new to this edition include: * The addition of Systems Modeling Language (SysML) to several of the chapters, as well as the introduction of new terminology * Additional material on partitioning functions and components * More descriptive material on usage scenarios based on literature from use case development * Updated homework assignments * The software product CORE (from Vitech Corporation) is used to generate the traditional SE figures and the software product MagicDraw UML with SysML plugins (from No Magic, Inc.) is used for the SysML figures. This book is designed to be an introductory reference ...

  14. ACTIVE AND PARTICIPATORY METHODS IN BIOLOGY: MODELING

    Directory of Open Access Journals (Sweden)

    Brînduşa-Antonela SBÎRCEA

    2011-01-01

    Full Text Available By using active and participatory methods it is hoped that pupils will not only come to a deeper understanding of the issues involved, but also that their motivation will be heightened. Pupil involvement in their learning is essential. Moreover, by using a variety of teaching techniques, we can help students make sense of the world in different ways, increasing the likelihood that they will develop a conceptual understanding. The teacher must be a good facilitator, monitoring and supporting group dynamics. Modeling is an instructional strategy in which the teacher demonstrates a new concept or approach to learning and pupils learn by observing. In the teaching of biology the didactic materials are fundamental tools in the teaching-learning process. Reading about scientific concepts or having a teacher explain them is not enough. Research has shown that modeling can be used across disciplines and in all grade and ability level classrooms. Using this type of instruction, teachers encourage learning.

  15. Railway Track Allocation: Models and Methods

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    2011-01-01

    Efficiently coordinating the movement of trains on a railway network is a central part of the planning process for a railway company. This paper reviews models and methods that have been proposed in the literature to assist planners in finding train routes. Since the problem of routing trains...... on a railway network entails allocating the track capacity of the network (or part thereof) over time in a conflict-free manner, all studies that model railway track allocation in some capacity are considered relevant. We hence survey work on the train timetabling, train dispatching, train platforming......, and train routing problems, group them by railway network type, and discuss track allocation from a strategic, tactical, and operational level....

  16. Railway Track Allocation: Models and Methods

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    Efficiently coordinating the movement of trains on a railway network is a central part of the planning process for a railway company. This paper reviews models and methods that have been proposed in the literature to assist planners in finding train routes. Since the problem of routing trains...... on a railway network entails allocating the track capacity of the network (or part thereof) over time in a conflict-free manner, all studies that model railway track allocation in some capacity are considered relevant. We hence survey work on the train timetabling, train dispatching, train platforming......, and train routing problems, group them by railway network type, and discuss track allocation from a strategic, tactical, and operational level....

  18. A PSL Bounded Model Checking Method

    Institute of Scientific and Technical Information of China (English)

    YU Lei; ZHAO Zongtao

    2012-01-01

    SAT-based bounded model checking (BMC) was introduced as an important complementary technique to OBDD-based symbolic model checking, and is an efficient verification method for parallel and reactive systems. However, until now the class of properties verified by bounded model checking has been very limited. The temporal logic PSL is a property specification language (IEEE-1850) for describing parallel systems, and is divided into two parts, i.e. the linear-time logic FL and the branching-time logic OBE. In this paper, the specifications checked by BMC are extended to PSL and an algorithm for this is proposed. First, the bounded semantics of PSL is defined; the bounded semantics is then reduced to SAT by translating the PSL specification formula and the state transition relation of the system into the propositional formulas A and B, respectively. Finally, the satisfiability of the conjunction of the propositional formulas A and B is checked. The algorithm thus translates the existential model checking of the temporal logic PSL into the satisfiability problem of a propositional formula. An example of a queue-controlling circuit is used to illustrate in detail the execution of the algorithm.
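
    In generic SAT-based BMC (a standard illustration, simpler than the paper's PSL encoding), checking whether a safety property G p can be violated within k steps amounts to testing the satisfiability of

        \mathrm{BMC}_k = I(s_0) \;\wedge\; \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}) \;\wedge\; \bigvee_{i=0}^{k} \neg p(s_i),

    where I is the initial-state predicate and T the transition relation. The first two conjuncts play the role of the paper's formula B (the unrolled system) and the last that of A (the translated specification); any satisfying assignment is a length-k counterexample.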

  19. Finite element modeling methods for photonics

    CERN Document Server

    Rahman, B M Azizur

    2013-01-01

    The term photonics can be used loosely to refer to a vast array of components, devices, and technologies that in some way involve manipulation of light. One of the most powerful numerical approaches available to engineers developing photonic components and devices is the Finite Element Method (FEM), which can be used to model and simulate such components/devices and analyze how they will behave in response to various outside influences. This resource provides a comprehensive description of the formulation and applications of FEM in photonics applications ranging from telecommunications, astron

  20. Mechanics, Models and Methods in Civil Engineering

    CERN Document Server

    Maceri, Franco

    2012-01-01

    "Mechanics, Models and Methods in Civil Engineering" collects leading papers dealing with actual Civil Engineering problems. The approach is in line with the Italian-French school and therefore deeply couples mechanics and mathematics, creating new predictive theories, enhancing clarity of understanding, and improving effectiveness in applications. The authors of the contributions collected here belong to the Lagrange Laboratory, a European research network that has been active for many years. This book will be of major interest to readers aware of modern Civil Engineering.

  1. Statistical Models and Methods for Lifetime Data

    CERN Document Server

    Lawless, Jerald F

    2011-01-01

    Praise for the First Edition: "An indispensable addition to any serious collection on lifetime data analysis and . . . a valuable contribution to the statistical literature. Highly recommended . . ." -Choice "This is an important book, which will appeal to statisticians working on survival analysis problems." -Biometrics "A thorough, unified treatment of statistical models and methods used in the analysis of lifetime data . . . this is a highly competent and agreeable statistical textbook." -Statistics in Medicine The statistical analysis of lifetime or response time data is a key tool in engineering,

  2. Intercomparison of prediction skills of ensemble methods using monthly mean temperature simulated by CMIP5 models

    Science.gov (United States)

    Seong, Min-Gyu; Suh, Myoung-Seok; Kim, Chansoo

    2017-08-01

    This study focuses on an objective comparison of eight ensemble methods using the same data, training period, training method, and validation period. The eight ensemble methods are: BMA (Bayesian Model Averaging), HMR (Homogeneous Multiple Regression), EMOS (Ensemble Model Output Statistics), HMR+ with positive coefficients, EMOS+ with positive coefficients, PEA_ROC (Performance-based Ensemble Averaging using ROot mean square error and temporal Correlation coefficient), WEA_Tay (Weighted Ensemble Averaging based on Taylor's skill score), and MME (Multi-Model Ensemble). Forty-five years (1961-2005) of data from 14 CMIP5 models and APHRODITE (Asian Precipitation- Highly-Resolved Observational Data Integration Towards Evaluation of Water Resources) data were used to compare the performance of the eight ensemble methods. Although some models underestimated the variability of monthly mean temperature (MMT), most of the models effectively simulated the spatial distribution of MMT. Regardless of training periods and the number of ensemble members, the prediction skills of BMA and the four multiple linear regressions (MLR) were superior to the other ensemble methods (PEA_ROC, WEA_Tay, MME) in terms of deterministic prediction. In terms of probabilistic prediction, the four MLRs showed better prediction skills than BMA. However, the differences among the four MLRs and BMA were not significant. This resulted from the similarity of BMA weights and regression coefficients. Furthermore, prediction skills of the four MLRs were very similar. Overall, the four MLRs showed the best prediction skills among the eight ensemble methods. However, more comprehensive work is needed to select the best ensemble method among the numerous ensemble methods.
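
    As an illustration of the positive-coefficient regression ensembles compared above (a sketch in the spirit of HMR+/EMOS+, not the authors' code; note that the nonnegativity constraint here also applies to the intercept, a simplification), the model weights can be fitted with nonnegative least squares:

        import numpy as np
        from scipy.optimize import nnls

        # Synthetic stand-in: 14 "model" hindcasts of a monthly-mean
        # temperature anomaly series, plus noisy observations.
        rng = np.random.default_rng(0)
        truth = rng.normal(size=200)
        X = truth[:, None] + rng.normal(scale=0.8, size=(200, 14))
        y = truth + rng.normal(scale=0.3, size=200)

        A = np.column_stack([np.ones(len(y)), X])   # intercept + 14 models
        coefs, _ = nnls(A, y)                       # all coefficients >= 0
        intercept, weights = coefs[0], coefs[1:]
        forecast = intercept + X @ weights
        print(weights.round(3), np.corrcoef(forecast, y)[0, 1])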

  3. Development of modelling method selection tool for health services management: From problem structuring methods to modelling and simulation methods

    Directory of Open Access Journals (Sweden)

    Naseer Aisha

    2011-05-01

    Full Text Available Abstract Background There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. Aim The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. Methods This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). Results The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce, let alone money and time. Conclusions A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.

  4. The accuracy of a method for printing three-dimensional spinal models.

    Directory of Open Access Journals (Sweden)

    Ai-Min Wu

    Full Text Available To study the morphology of the human spine and new spinal fixation methods, scientists require cadaveric specimens, which are dependent on donation. However, in most countries, the number of people willing to donate their body is low. A 3D printed model could be an alternative for morphology research, but the accuracy of the morphology of a 3D printed model has not been determined. Forty-five computed tomography (CT) scans of cervical, thoracic and lumbar spines were obtained, and 44 parameters of the cervical spine, 120 parameters of the thoracic spine, and 50 parameters of the lumbar spine were measured. The CT scan data in DICOM format were imported into Mimics software v10.01 for 3D reconstruction, and the data were saved in .STL format and imported into Cura software. After a 3D digital model was formed, it was saved in Gcode format and exported to a 3D printer for printing. After the 3D printed models were obtained, the above-referenced parameters were measured again. Paired t-tests were used to determine significance, set at P < 0.05. Most ICC values were > 0.800; the other ICC values were between 0.600 and 0.800, and none were < 0.600. In this study, we provide a protocol for printing accurate 3D spinal models for surgeons and researchers. The resulting 3D printed model is inexpensive and easily obtained for spinal fixation research.

  5. System and method of designing models in a feedback loop

    Energy Technology Data Exchange (ETDEWEB)

    Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.

    2017-02-14

    A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.

  6. Appropriate model selection methods for nonstationary generalized extreme value models

    Science.gov (United States)

    Kim, Hanbeen; Kim, Sooyoung; Shin, Hongjoon; Heo, Jun-Haeng

    2017-04-01

    Much evidence that hydrologic data series are nonstationary in nature has been found to date. This has resulted in many studies in the area of nonstationary frequency analysis. Nonstationary probability distribution models involve parameters that vary over time. Therefore, it is not a straightforward process to apply conventional goodness-of-fit tests to the selection of an appropriate nonstationary probability distribution model. Tests that are generally recommended for such a selection include Akaike's information criterion (AIC), the corrected Akaike's information criterion (AICc), the Bayesian information criterion (BIC), and the likelihood ratio test (LRT). In this study, a Monte Carlo simulation was performed to compare the performances of these four tests with regard to nonstationary as well as stationary generalized extreme value (GEV) distributions. Proper model selection ratios and sample sizes were taken into account to evaluate the performances of all four tests. The BIC demonstrated the best performance with regard to stationary GEV models. In the case of nonstationary GEV models, the AIC proved to be better than the other three methods when relatively small sample sizes were considered. With larger sample sizes, the AIC, BIC, and LRT presented the best performances for GEV models having nonstationary location and/or scale parameters, respectively. The simulation results were then evaluated by applying all four tests to annual maximum rainfall data of selected sites, as observed by the Korea Meteorological Administration.
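
    For reference, the selection criteria compared in the study are computed from the maximized likelihood \hat{L}, the number of fitted parameters k, and the sample size n (standard definitions, not specific to this paper):

        \mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L},

    while the likelihood ratio test compares a nested pair of models through \Lambda = -2(\ln\hat{L}_0 - \ln\hat{L}_1) \sim \chi^2_{k_1 - k_0}; for example, testing a stationary GEV (k_0 = 3) against one with a linear trend in the location parameter, \mu(t) = \mu_0 + \mu_1 t (k_1 = 4), gives one degree of freedom.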

  7. Method of and apparatus for modeling interactions

    Science.gov (United States)

    Budge, Kent G.

    2004-01-13

    A method and apparatus for modeling interactions can accurately model tribological and other properties and accommodate topological disruptions. Two portions of a problem space are represented, a first with a Lagrangian mesh and a second with an ALE mesh. The ALE and Lagrangian meshes are constructed so that each node on the surface of the Lagrangian mesh is in a known correspondence with adjacent nodes in the ALE mesh. The interaction can be predicted for a time interval. Material flow within the ALE mesh can accurately model complex interactions such as bifurcation. After prediction, nodes in the ALE mesh in correspondence with nodes on the surface of the Lagrangian mesh can be mapped so that they are once again adjacent to their corresponding Lagrangian mesh nodes. The ALE mesh can then be smoothed to reduce mesh distortion that might reduce the accuracy or efficiency of subsequent prediction steps. The process, from prediction through mapping and smoothing, can be repeated until a terminal condition is reached.

  8. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, current use is limited, and answers to questions such as what methods to use, and when, remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce, let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision-making processes. In particular, it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to obtain from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on method comparison and selection.

  9. Forty-Four Pass Fibre Optic Loop for Improving the Sensitivity of Surface Plasmon Resonance Sensors

    CERN Document Server

    Su, Chin B

    2007-01-01

A forty-four pass fibre optic surface plasmon resonance sensor that enhances detection sensitivity according to the number of passes is demonstrated for the first time. The technique employs a fibre optic recirculation loop that passes the detection spot forty-four times, thus enhancing sensitivity by a factor of forty-four. Presently, the total number of passes is limited by the onset of lasing action of the recirculation loop. This technique offers a significant sensitivity improvement for various types of plasmon resonance sensors that may be used in chemical and biomolecule detection.

10. Global Forty-Years Validation of Seasonal Precipitation Forecasts: Assessing El Niño-Driven Skill

    CERN Document Server

    Manzanas, R; Cofiño, A S; Gutiérrez, J M

    2013-01-01

The skill of seasonal precipitation forecasts is assessed worldwide, grid point by grid point, for the forty-year period 1961-2000. To this aim, the ENSEMBLES multi-model hindcast is considered. Although predictability varies with region, season and lead-time, results indicate that 1) significant skill is mainly located in the tropics (20 to 40% of the total land area), 2) overall, SON (MAM) is the most (least) skillful season, and 3) predictability does not decrease noticeably from one to four months lead-time; this is so especially in northern South America and the Malay archipelago, which seem to be the most skillful regions of the world. An analysis of teleconnections revealed that most of the skillful zones exhibit significant teleconnections with El Niño. Furthermore, models are shown to reproduce teleconnection patterns similar to those observed, especially in SON, with spatial correlations of around 0.6 in the tropics. Moreover, these correlations are systematically higher for the skillful areas. ...

  11. Modeling of nanoplastic by asymptotic homogenization method

    Institute of Scientific and Technical Information of China (English)

    张为民; 何伟; 李亚; 张平; 张淳源

    2008-01-01

The so-called nanoplastic is a new simple name for the polymer/layered silicate nanocomposite, which possesses excellent properties. The asymptotic homogenization method (AHM) was applied to determine numerically the effective elastic modulus of a two-phase nanoplastic with different particle aspect ratios, different ratios of the elastic modulus of the effective particle to that of the matrix, and different volume fractions. A simple representative volume element was proposed, in which the effective particles are assumed to be uniform, well-aligned, perfectly bonded in an isotropic matrix, and arranged in a periodic structure. Several theoretical models and the experimental results were compared. The numerical results are in good agreement with the experimental results.
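
    For orientation only (this is not the AHM computation; the moduli and volume fraction below are invented), the classical Voigt and Reuss mixture bounds bracket any sensible effective-modulus estimate for a two-phase composite:

        def voigt_reuss(E_p, E_m, f):
            """Voigt (upper) and Reuss (lower) bounds on the effective
            modulus of a two-phase composite with particle fraction f."""
            E_voigt = f * E_p + (1 - f) * E_m
            E_reuss = 1.0 / (f / E_p + (1 - f) / E_m)
            return E_voigt, E_reuss

        # Illustrative values: stiff silicate particles in a polymer matrix (GPa)
        print(voigt_reuss(E_p=180.0, E_m=3.0, f=0.05))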

  12. Mathematical models and methods for planet Earth

    CERN Document Server

    Locatelli, Ugo; Ruggeri, Tommaso; Strickland, Elisabetta

    2014-01-01

In 2013 several scientific activities were devoted to mathematical research for the study of planet Earth. The current volume presents a selection of the highly topical issues presented at the workshop "Mathematical Models and Methods for Planet Earth", held in Rome, Italy, in May 2013. The fields of interest span from the impact of dangerous asteroids to the safeguard from space debris, from climatic change to the monitoring of geological events, and from the study of tumor growth to sociological problems. In all these fields mathematical studies play a relevant role as a tool for the analysis of specific topics and as an ingredient of multidisciplinary problems. To investigate these problems, many different mathematical tools are at work, to mention just a few: stochastic processes, PDEs, normal forms, and chaos theory.

  13. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  14. Reporting methods in studies developing prognostic models in cancer: a review

    Directory of Open Access Journals (Sweden)

    Waters Rachel

    2010-03-01

Background: Development of prognostic models enables identification of variables that are influential in predicting patient outcome and the use of these multiple risk factors in a systematic, reproducible way according to evidence based methods. The reliability of models depends on informed use of statistical methods, in combination with prior knowledge of disease. We reviewed published articles to assess reporting and methods used to develop new prognostic models in cancer. Methods: We developed a systematic search string and identified articles from PubMed. Forty-seven articles were included that satisfied the following inclusion criteria: published in 2005; aiming to predict patient outcome; presenting new prognostic models in cancer with outcome time to an event and including a combination of at least two separate variables; and analysing data using multivariable analysis suitable for time to event data. Results: In the 47 studies, prospective cohort or randomised controlled trial data were used for model development in only 33% (15) of studies. In 30% (14) of the studies insufficient data were available, having fewer than 10 events per variable (EPV) used in model development. EPV could not be calculated in a further 40% (19) of the studies. The coding of candidate variables was only reported in 68% (32) of the studies. Although use of continuous variables was reported in all studies, only one article reported using recommended methods of retaining all these variables as continuous without categorisation. Statistical methods for selection of variables in the multivariate modelling were often flawed. A method that is not recommended, namely, using statistical significance in univariate analysis as a pre-screening test to select variables for inclusion in the multivariate model, was applied in 48% (21) of the studies. Conclusions: We found that published prognostic models are often characterised by both use of inappropriate methods for development
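
    The events-per-variable (EPV) check used in the review is a one-line calculation; the numbers below are invented for illustration:

        n_events = 85           # e.g. observed events (deaths, relapses) in the cohort
        n_candidate_vars = 12   # candidate predictors considered for the model
        epv = n_events / n_candidate_vars
        print(f"EPV = {epv:.1f}")  # below 10, the review counts the data as insufficient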

  15. Functional methods in the generalized Dicke model

    Energy Technology Data Exchange (ETDEWEB)

    Alcalde, M. Aparicio; Lemos, A.L.L. de; Svaiter, N.F. [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil)]. E-mails: aparicio@cbpf.br; aluis@cbpf.br; nfuxsvai@cbpf.br

    2007-07-01

The Dicke model describes an ensemble of N identical two-level atoms (qubits) coupled to a single quantized mode of a bosonic field. The fermion Dicke model is obtained by replacing the atomic pseudo-spin operators with a linear combination of Fermi operators. The generalized fermion Dicke model is defined by introducing different coupling constants between the single mode of the bosonic field and the reservoir, g1 and g2 for the rotating and counter-rotating terms respectively. In the limit N -> ∞, the thermodynamics of the fermion Dicke model can be analyzed using the path integral approach with functional methods. The system exhibits a second-order phase transition from the normal to the superradiant phase at some critical temperature, with the presence of a condensate. We evaluate the critical transition temperature and present the spectrum of the collective bosonic excitations for the general case (g1 ≠ 0 and g2 ≠ 0). There is quantum critical behavior when the coupling constants g1 and g2 satisfy g1 + g2 = (ω0 ω)^(1/2), where ω0 is the frequency of the mode of the field and ω is the energy gap between energy eigenstates of the qubits. Two particular situations are analyzed. First, we present the spectrum of the collective bosonic excitations in the case g1 ≠ 0 and g2 = 0, recovering the well-known results. Second, the case g1 = 0 and g2 ≠ 0 is studied. In this last case, it is possible to have a superradiant phase when only virtual processes are introduced in the interaction Hamiltonian. Here also a quantum phase transition appears at the critical coupling g2 = (ω0 ω)^(1/2), and for larger values of the coupling the system enters this superradiant phase with a Goldstone mode. (author)

  16. Free wake models for vortex methods

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, K. [Technical Univ. Berlin, Aerospace Inst. (Germany)

    1997-08-01

The blade element method is fast and works well. For some problems (rotor shapes or flow conditions), however, vortex methods may be the better choice. Different methods for calculating a wake geometry will be presented. (au)

  17. CONTROL SYSTEM IDENTIFICATION THROUGH MODEL MODULATION METHODS

    Science.gov (United States)

    identification has been achieved by using model modulation techniques to drive dynamic models into correspondence with operating control systems. The system ... identification then proceeded from examination of the model and the adaptive loop. The model modulation techniques applied to adaptive control

  18. GREENSCOPE: A Method for Modeling Chemical Process ...

    Science.gov (United States)

Current work within the U.S. Environmental Protection Agency’s National Risk Management Research Laboratory is focused on the development of a method for modeling chemical process sustainability. The GREENSCOPE methodology, defined for the four bases of Environment, Economics, Efficiency, and Energy, can evaluate processes with over a hundred different indicators. These indicators provide a means for realizing the principles of green chemistry and green engineering in the context of sustainability. Development of the methodology has centered around three focal points. One is a taxonomy of impacts that describe the indicators and provide absolute scales for their evaluation. The setting of best and worst limits for the indicators allows the user to know the status of the process under study in relation to understood values. Thus, existing or imagined processes can be evaluated according to their relative indicator scores, and process modifications can strive towards realizable targets. A second area of focus is in advancing definitions of data needs for the many indicators of the taxonomy. Each of the indicators requires specific data for its calculation. Values needed and data sources have been identified. These needs can be mapped according to the information source (e.g., input stream, output stream, external data, etc.) for each of the bases. The user can visualize data-indicator relationships on the way to choosing selected ones for evaluation.

  19. Hepatocyte autophagy model established by physical method

    Directory of Open Access Journals (Sweden)

    ZHU Xuemin

    2016-08-01

Objective: To establish an autophagy model of the normal human liver cell line 7702 induced by hypoxia and starvation, and to lay a foundation for further studies on the influence of autophagy on liver function. Methods: The 7702 cells were selected and incubated with 95% air and 5% CO2 at a temperature of 37 °C (normal control group). A Binder three-gas incubator, with a temperature of 37 °C, a CO2 concentration of 5%, and an O2 concentration of 0.3%, was used to provide a hypoxic environment, and serum-free DMEM was used to induce starvation. These cells were divided into 6-, 12-, 18-, and 24-hour hypoxia-starvation groups. Western blot was used to measure the protein expression of Beclin 1, Atg5, and LC3 in the normal control group and the experimental groups; RT-qPCR was used to measure the mRNA expression of Beclin 1 and Atg5 in each group; and after transfection of an LC3 plasmid, immunofluorescence assay was used to observe autophagy in each group. An analysis of variance was used for comparison of continuous data between groups, with the least significant difference t-test for further comparison between any two groups; the chi-square test was used for comparison of categorical data between groups. Results: The 6-hour hypoxia-starvation group had higher protein expression of Beclin 1, Atg5, and LC3 than the normal control group and the other treated groups. Compared with all the other groups, the 6-hour hypoxia-starvation group also showed significantly increased mRNA expression of Beclin 1 and Atg5 (all P < 0.05). The hypoxia-starvation groups differed significantly from the normal control group in the number of autophagosomes, and the 6-hour hypoxia-starvation group had the highest number of autophagosomes (all P < 0.05). Conclusion: Hypoxia and starvation established by physical methods can successfully induce hepatocyte autophagy, which is most remarkable at 6 hours.

  20. Model reduction methods for vector autoregressive processes

    CERN Document Server

    Brüggemann, Ralf

    2004-01-01

1.1 Objective of the Study. Vector autoregressive (VAR) models have become one of the dominant research tools in the analysis of macroeconomic time series during the last two decades. The great success of this modeling class started with Sims' (1980) critique of the traditional simultaneous equation models (SEM). Sims criticized the use of 'too many incredible restrictions' based on 'supposed a priori knowledge' in large scale macroeconometric models which were popular at that time. Therefore, he advocated largely unrestricted reduced form multivariate time series models, unrestricted VAR models in particular. Ever since his influential paper these models have been employed extensively to characterize the underlying dynamics in systems of time series. In particular, tools to summarize the dynamic interaction between the system variables, such as impulse response analysis or forecast error variance decompositions, have been developed over the years. The econometrics of VAR models and related quantities i...

  1. A business case method for business models

    NARCIS (Netherlands)

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of the two. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business models.

  2. Computational Modelling in Cancer: Methods and Applications

    Directory of Open Access Journals (Sweden)

    Konstantina Kourou

    2015-01-01

Computational modelling of diseases is an emerging field that has proven valuable for the diagnosis, prognosis and treatment of disease. Cancer is one of the diseases where computational modelling provides enormous advancements, allowing medical professionals to perform in silico experiments and gain insights prior to any in vivo procedure. In this paper, we review the most recent computational models that have been proposed for cancer. Well-known databases used for computational modelling experiments, as well as the various markup language representations, are discussed. In addition, recent state-of-the-art research studies related to tumour growth and angiogenesis modelling are presented.

  3. Model correction factor method for system analysis

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Johannesen, Johannes M.

    2000-01-01

several locally most central points exist without there being a simple geometric definition of the corresponding failure modes, such as is the case for collapse mechanisms in rigid plastic hinge models for frame structures. Taking as simplified idealized model a model of similarity with the elaborate model but with clearly defined failure modes, the MCFM can be started from each idealized single mode limit state in turn to identify a locally most central point on the elaborate limit state surface. Typically this procedure leads to a fewer number of locally most central failure points on the elaborate limit state surface than existing in the idealized model.

4. Integrated Enterprise Modeling Method Based on Workflow Model and Multiviews

    Institute of Scientific and Technical Information of China (English)

    林慧苹; 范玉顺; 吴澄

    2001-01-01

Many enterprise modeling methods are proposed to model the business process of enterprises and to implement CIM systems. But difficulties are still encountered when these methods are applied to the CIM system design and implementation. This paper proposes a new integrated enterprise modeling methodology based on the workflow model. The system architecture and the integrated modeling environment are described with a new simulation strategy. The modeling process and the relationship between the workflow model and the views are discussed.

  5. How Qualitative Methods Can be Used to Inform Model Development.

    Science.gov (United States)

    Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna

    2017-06-01

    Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.

  6. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    Directed graphical models present data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models...

  7. Numerical methods in Markov chain modeling

    Science.gov (United States)

    Philippe, Bernard; Saad, Youcef; Stewart, William J.

    1989-01-01

Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.
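
    A minimal numerical sketch of the two formulations mentioned above (the transition matrix is an invented example, and dense numpy routines stand in for the sparse Krylov methods the paper studies): the stationary distribution as the eigenvector for the known eigenvalue 1, and as the solution of a homogeneous singular system plus a normalisation constraint.

        import numpy as np

        P = np.array([[0.9, 0.1, 0.0],
                      [0.2, 0.7, 0.1],
                      [0.1, 0.3, 0.6]])   # row-stochastic transition matrix

        # Eigenvector formulation: pi P = pi, i.e. a right eigenvector of P^T
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
        pi /= pi.sum()                     # normalise to a probability vector
        print(pi)

        # Singular-system formulation: (P^T - I) pi = 0 with sum(pi) = 1
        A = np.vstack([P.T - np.eye(3), np.ones(3)])
        b = np.r_[np.zeros(3), 1.0]
        pi_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(pi_ls)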

  8. Dynamic spatial panels : models, methods, and inferences

    NARCIS (Netherlands)

    Elhorst, J. Paul

This paper provides a survey of the existing literature on the specification and estimation of dynamic spatial panel data models, a collection of models for spatial panels extended to include one or more of the following variables and/or error terms: a dependent variable lagged in time, a dependent variable lagged in space, and/or a spatially autocorrelated error term.

  9. A method to evaluate response models

    NARCIS (Netherlands)

    Bruijnes, Merijn; Wapperom, Sjoerd; op den Akker, Hendrikus J.A.; Heylen, Dirk K.J.; Bickmore, Timothy; Marcella, Stacy; Sidner, Candace

We are working towards computational models of mind of virtual characters that act as suspects in interview (interrogation) training of police officers. We implemented a model that calculates the responses of the virtual suspect based on theory and observation. We evaluated it by means of our test.

  10. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the ...

  11. Model and method for optimizing heterogeneous systems

    Science.gov (United States)

    Antamoshkin, O. A.; Antamoshkina, O. A.; Zelenkov, P. V.; Kovalev, I. V.

    2016-11-01

A methodology is proposed for boosting distributed computing performance by reducing the number of delays. The concept of an n-dimensional requirements triangle is introduced. A dynamic mathematical model of resource use in distributed computing systems is described.

  12. Combining static and dynamic modelling methods: a comparison of four methods

    NARCIS (Netherlands)

    Wieringa, R.J.

    1995-01-01

A conceptual model of a system is an explicit description of the behaviour required of the system. Methods for conceptual modelling include entity-relationship (ER) modelling, data flow modelling, Jackson System Development (JSD) and several object-oriented analysis methods. Given the current diversity ...

  13. A Generalized Rough Set Modeling Method for Welding Process

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

Modeling is essential, significant and difficult for the quality and shaping control of the arc welding process. A generalized rough set based modeling method was put forward, and a dynamic predictive model for pulsed gas tungsten arc welding (GTAW) was obtained by this modeling method. The results show that this modeling method can acquire welding knowledge well and satisfy real-life applications. In addition, comparisons with a classic rough set model and a back-propagation neural network model, respectively, are also satisfactory.

  14. Particle Methods for Atmosphere and Ocean Modeling

    Science.gov (United States)

    2015-02-26

a Lagrangian particle method (LPM) for geophysical fluid flow simulations on a rotating sphere. The method is potentially relevant to Naval operations that rely on ... simulations are based mainly on Eulerian and semi-Lagrangian mesh-based schemes, and the numerical results often exhibit diffusive and dispersive ... errors commonly seen with mesh-based schemes, and this is demonstrated in the articles cited below. We applied LPM to solve the barotropic vorticity

  15. Tensor Models: extending the matrix models structures and methods

    CERN Document Server

    Dartois, Stephane

    2016-01-01

In this text we review a few structural properties of matrix models that should at least partly generalize to random tensor models. We review some aspects of the loop equations for matrix models and their algebraic counterpart for tensor models. Despite the generic title of this review, we, in particular, invoke the Topological Recursion. We explain its appearance in matrix models. Then we state that a family of tensor models provides a natural example which satisfies a version of the most general form of the topological recursion, named the blobbed topological recursion. We discuss the difficulties of extending the technical solutions existing for matrix models to tensor models. Some proofs are not published yet but will be given in a coming paper; the rest of the results are well known in the literature.

  16. Model-based methods for linkage analysis.

    Science.gov (United States)

    Rice, John P; Saccone, Nancy L; Corbett, Jonathan

    2008-01-01

The logarithm of the odds (LOD) score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or LOD curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum LOD score statistic shares some of the advantages of the traditional LOD score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the LOD score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
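
    As a hedged numerical sketch of the LOD statistic (textbook form for a fully informative, phase-known pedigree; not from the article), the score compares the likelihood of a recombination fraction theta against free recombination (theta = 1/2):

        import math

        def lod(theta, r, n):
            """LOD = log10 L(theta) - log10 L(1/2) for r recombinants in n meioses."""
            loglik = r * math.log10(theta) + (n - r) * math.log10(1 - theta)
            return loglik - n * math.log10(0.5)

        # Example: 2 recombinants out of 10 opportunities; LOD >= 3 is the
        # classical threshold for declaring linkage.
        best = max((lod(t / 100, 2, 10), t / 100) for t in range(1, 51))
        print(best)   # (maximum LOD, theta at the maximum)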

  17. 5 CFR 630.302 - Maximum annual leave accumulation-forty-five day limitation.

    Science.gov (United States)

    2010-01-01

Title 5, Administrative Personnel; Office of Personnel Management; Civil Service Regulations; Absence and Leave; Annual Leave; § 630.302 Maximum annual leave accumulation: forty-five day limitation.

  18. Reverberation Modelling Using a Parabolic Equation Method

    Science.gov (United States)

    2012-10-01

and possibly target echoes. The object of the present contract is a study of the use of a parabolic equation model, in particular the ... obtained by the 'PE method' were primarily compared to results obtained from a proprietary ray-based model provided by Brooke Numerical Services (BNS). Target echo estimates are also compared to the BNS ray model result. In all cases but one the reference data is plotted as a solid red line.

  19. Extrudate Expansion Modelling through Dimensional Analysis Method

    DEFF Research Database (Denmark)

A new model framework is proposed to correlate extrudate expansion and extrusion operation parameters for a food extrusion cooking process through the dimensional analysis principle, i.e. the Buckingham pi theorem. Three dimensionless groups, i.e. energy, water content and temperature, are suggested to describe the extrudate expansion. From the three dimensionless groups, an equation with three experimentally determined parameters is derived to express the extrudate expansion. The model is evaluated with whole wheat flour and aquatic feed extrusion experimental data. The average deviations ...

  20. Marginal linearization method in modeling on fuzzy control systems

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

A marginal linearization method for modeling fuzzy control systems is proposed to deal with nonlinear models with variable coefficients. The method turns a nonlinear model with variable coefficients into a linear model with variable coefficients by changing the membership functions of the fuzzy sets in the fuzzy partitions of the universes from triangle waves into rectangle waves. However, the resulting linearized models are incomplete in form because they lack some terms. To solve this problem, joint approximation using linear models is introduced. The simulation results show that marginal linearization models are of higher approximation precision than their original nonlinear models.
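
    The change of membership functions described above can be illustrated directly (a sketch with an assumed partition of the universe; not the paper's code): triangular memberships overlap, while the rectangular (crisp-interval) memberships used for marginal linearization do not.

        def triangle(x, a, b, c):
            """Triangular membership rising on [a, b] and falling on [b, c]."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def rectangle(x, lo, hi):
            """Rectangular (crisp) membership: 1 on [lo, hi), else 0."""
            return 1.0 if lo <= x < hi else 0.0

        # Assumed partition of the universe [0, 2] around the point 1.0
        print(triangle(0.75, 0.0, 1.0, 2.0), rectangle(0.75, 0.5, 1.5))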

  1. Diagnostic and prognostic models: applications and methods

    NARCIS (Netherlands)

Zuithoff, N.P.A.

    2012-01-01

    Prediction modelling, both diagnostic and prognostic, has become a major topic in clinical research and practice. Traditionally, clinicians intuitively combine and judge the documented patient information, on e.g. risk factors and test results, to implicitly assess the probability or risk of having

  2. Accurate Electromagnetic Modeling Methods for Integrated Circuits

    NARCIS (Netherlands)

    Sheng, Z.

    2010-01-01

    The present development of modern integrated circuits (IC’s) is characterized by a number of critical factors that make their design and verification considerably more difficult than before. This dissertation addresses the important questions of modeling all electromagnetic behavior of features on t

  3. A Parametric Modelling Method for Dexterous Finger Reachable Workspaces

    OpenAIRE

    2016-01-01

The well-known algorithms, such as the graphic method, analytical method or numerical method, have some defects when modelling the dexterous finger workspace, which is a significant kinematical feature of dexterous hands and valuable for grasp planning, motion control and mechanical design. A novel modelling method with convenient and parametric performances is introduced to generate the dexterous-finger reachable workspace. This method constructs the geometric topology of the dexterous-finger reachable workspace and uses a joint feature recognition algorithm to extract the kinematical parameters of the dexterous finger.

  4. A catalog of automated analysis methods for enterprise models.

    Science.gov (United States)

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

Enterprise models are created for documenting and communicating the structure and state of the Business and Information Technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, the process can be complicated, with omissions or miscalculations very likely. This situation has fostered research on automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents the work of compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.

  5. Designerly Visualisation: Conceptions, Methods, Models, Perceptions

    NARCIS (Netherlands)

    Breen, J.L.H.

    2013-01-01

    If we wish to reach a deeper, more objective understanding of the phenomena of Architectural and Environmental Design, we need to develop and apply working methods that allow us to imaginatively analyse and consequently envision the formal issues which are at (inter)play: demonstrating their working

  6. Review of Nonlinear Methods and Modelling

    CERN Document Server

    Borg, F G

    2005-01-01

    The first part of this Review describes a few of the main methods that have been employed in non-linear time series analysis with special reference to biological applications (biomechanics). The second part treats the physical basis of posturogram data (human balance) and EMG (electromyography, a measure of muscle activity).

  7. Team mental models: techniques, methods, and analytic approaches.

    Science.gov (United States)

    Langan-Fox, J; Code, S; Langfield-Smith, K

    2000-01-01

Effective team functioning requires the existence of a shared or team mental model among members of a team. However, the best method for measuring team mental models is unclear. Methods reported vary in terms of how mental model content is elicited and analyzed or represented. We review the strengths and weaknesses of various methods that have been used to elicit, represent, and analyze individual and team mental models and provide recommendations for method selection and development. We describe the nature of mental models and review techniques that have been used to elicit and represent them. We focus on a case study on selecting a method to examine team mental models in industry. The processes involved in the selection and development of an appropriate method for eliciting, representing, and analyzing team mental models are described. The criteria for method selection were (a) applicability to the problem under investigation; (b) practical considerations - suitability for collecting data from the targeted research sample; and (c) theoretical rationale - the assumption that associative networks in memory are a basis for the development of mental models. We provide an evaluation of the method matched to the research problem and make recommendations for future research. The practical applications of this research include the provision of a technique for analyzing team mental models in organizations, the development of methods and processes for eliciting a mental model from research participants in their normal work environment, and a survey of available methodologies for mental model research.

  8. Bayesian methods for model choice and propagation of model uncertainty in groundwater transport modeling

    Science.gov (United States)

    Mendes, B. S.; Draper, D.

    2008-12-01

The issue of model uncertainty and model choice is central in any groundwater modeling effort [Neuman and Wierenga, 2003]; among the several approaches to the problem we favour using Bayesian statistics because it is a method that integrates in a natural way uncertainties (arising from any source) and experimental data. In this work, we experiment with several Bayesian approaches to model choice, focusing primarily on demonstrating the usefulness of the Reversible Jump Markov Chain Monte Carlo (RJMCMC) simulation method [Green, 1995]; this is an extension of the now-common MCMC methods. Standard MCMC techniques approximate posterior distributions for quantities of interest, often by creating a random walk in parameter space; RJMCMC allows the random walk to take place between parameter spaces with different dimensionalities. This fact allows us to explore state spaces that are associated with different deterministic models for experimental data. Our work is exploratory in nature; we restrict our study to comparing two simple transport models applied to a data set gathered to estimate the breakthrough curve for a tracer compound in groundwater. One model has a mean surface based on a simple advection dispersion differential equation; the second model's mean surface is also governed by a differential equation but in two dimensions. We focus on artificial data sets (in which truth is known) to see if model identification is done correctly, but we also address the issues of over- and under-parameterization, and we compare RJMCMC's performance with other traditional methods for model selection and propagation of model uncertainty, including Bayesian model averaging, BIC and DIC. References: Neuman and Wierenga (2003). A Comprehensive Strategy of Hydrogeologic Modeling and Uncertainty Analysis for Nuclear Facilities and Sites. NUREG/CR-6805, Division of Systems Analysis and Regulatory Effectiveness, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission.
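
    RJMCMC itself jumps between parameter spaces of different dimension and is beyond a short sketch; as a much simpler point of reference for the traditional comparisons mentioned above, approximate posterior model probabilities can be formed from BIC (the synthetic data and the two candidate regression models below are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0, 10, 60)
        y = 2.0 + 0.8 * t + rng.normal(0, 1.0, t.size)   # truth: linear model

        def bic(design, y):
            # Gaussian BIC for a least-squares fit: -2 log L + k log n
            beta, *_ = np.linalg.lstsq(design, y, rcond=None)
            resid = y - design @ beta
            n, k = design.shape
            sigma2 = resid @ resid / n
            loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
            return -2 * loglik + (k + 1) * np.log(n)     # +1 for the noise variance

        b1 = bic(np.column_stack([np.ones_like(t), t]), y)        # linear
        b2 = bic(np.column_stack([np.ones_like(t), t, t**2]), y)  # quadratic
        w = np.exp(-0.5 * (np.array([b1, b2]) - min(b1, b2)))
        print(w / w.sum())   # approximate posterior model probabilities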

  9. Hazard Response Modeling Uncertainty (A Quantitative Method)

    Science.gov (United States)

    1988-10-01

... C_p is the concentration predicted by some component or model. The variance of C_o/C_p is calculated and defined as var(Model I), where Model I could be ...

  10. Data mining concepts models methods and algorithms

    CERN Document Server

    Kantardzic, Mehmed

    2011-01-01

    This book reviews state-of-the-art methodologies and techniques for analyzing enormous quantities of raw data in high-dimensional data spaces, to extract new information for decision making. The goal of this book is to provide a single introductory source, organized in a systematic way, in which we could direct the readers in analysis of large data sets, through the explanation of basic concepts, models and methodologies developed in recent decades.

  11. ICA Model Order Estimation Using Clustering Method

    Directory of Open Access Journals (Sweden)

    P. Sovka

    2007-12-01

In this paper a novel approach for independent component analysis (ICA) model order estimation of movement electroencephalogram (EEG) signals is described. The application is targeted to brain-computer interface (BCI) EEG preprocessing. Previous work has shown that it is possible to decompose EEG into movement-related and non-movement-related independent components (ICs). The selection of only movement-related ICs might lead to an increase of the BCI EEG classification score. The real number of independent sources in the brain is an important parameter of the preprocessing step. Previously, we used principal component analysis (PCA) for estimation of the number of independent sources. However, PCA estimates only the number of uncorrelated, not independent, components, ignoring the higher-order signal statistics. In this work, we use another approach: selection of highly correlated ICs from several ICA runs. The ICA model order estimation is done at significance level α = 0.05, and the model order is more or less dependent on the ICA algorithm and its parameters.
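
    A hedged sketch of the repeated-runs idea (not the authors' code; the data are a random stand-in for multichannel EEG and the 0.95 threshold is an assumption): run FastICA with different seeds and count components that reappear with high absolute correlation across runs.

        import numpy as np
        from sklearn.decomposition import FastICA

        X = np.random.default_rng(0).standard_normal((1000, 8))  # stand-in for EEG

        def components(seed, n=8):
            ica = FastICA(n_components=n, random_state=seed, max_iter=1000)
            return ica.fit_transform(X).T          # (n_components, n_samples)

        A, B = components(1), components(2)
        C = np.abs(np.corrcoef(np.vstack([A, B]))[:8, 8:])  # cross-run |corr|
        print((C.max(axis=1) > 0.95).sum(), "components reproduced across runs")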

  12. Reduced Order Modeling Methods for Turbomachinery Design

    Science.gov (United States)

    2009-03-01

be discussed in Subsection 1.2.3, which assume normally distributed response. Results also showed significant mistuning located at the frequency of ... conventionally defined with Probability Density Functions (PDF) and the output are response statistics and PDF. It is an alternative to deterministic analysis ... certainty typically refers to defining PDF for input parameters, but there are also non-probability based methods for quantification such as Bayesian [45

  13. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

... user-friendly system, which will make the model development process easier and faster and provide the way for unified and consistent model documentation. The modeller can use the template for their specific problem or to extend and/or adopt a model. This is based on the idea of model reuse, which emphasizes the use ... and processes can be faster, cheaper and very efficient. The developed modelling framework involves five main elements: 1) a modelling tool, that includes algorithms for model generation; 2) a template library, which provides building blocks for the templates (generic models previously developed); 3) computer-aided methods and tools, that include procedures to perform model translation, model analysis, model verification/validation, model solution and model documentation; 4) model transfer – export/import to/from other application for further extension and application – several types of formats, such as XML ...

  14. Diffusion in condensed matter methods, materials, models

    CERN Document Server

    Kärger, Jörg

    2005-01-01

Diffusion as the process of particle transport due to stochastic movement is a phenomenon of crucial relevance for a large variety of processes and materials. This comprehensive, handbook-style survey of diffusion in condensed matter gives detailed insight into diffusion as the process of particle transport due to stochastic movement. Leading experts in the field describe in 23 chapters the different aspects of diffusion, covering microscopic and macroscopic experimental techniques and exemplary results for various classes of solids, liquids and interfaces as well as several theoretical concepts and models. Students and scientists in physics, chemistry, materials science, and biology will benefit from this detailed compilation.

  16. Precise methods for conducted EMI modeling,analysis, and prediction

    Institute of Scientific and Technical Information of China (English)

    MA WeiMing; ZHAO ZhiHua; MENG Jin; PAN QiJun; ZHANG Lei

    2008-01-01

Focusing on the state-of-the-art conducted EMI prediction, this paper presents a noise source lumped circuit modeling and identification method, an EMI modeling method based on multiple slope approximation of switching transitions, and a double Fourier integral method modeling PWM conversion units to achieve an accurate modeling of EMI noise source. Meanwhile, a new sensitivity analysis method, a general coupling model for steel ground loops, and a partial element equivalent circuit method are proposed to identify and characterize conducted EMI coupling paths. The EMI noise and propagation modeling provide an accurate prediction of conducted EMI in the entire frequency range (0-10 MHz) with good practicability and generality. Finally a new measurement approach is presented to identify the surface current of large dimensional metal shell. The proposed analytical modeling methodology is verified by experimental results.

  17. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    Energy Technology Data Exchange (ETDEWEB)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both retrospectively and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempts at modeling conflict as a result of system-level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  18. Estimating Tree Height-Diameter Models with the Bayesian Method

    Directory of Open Access Journals (Sweden)

    Xiongqing Zhang

    2014-01-01

Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has an exclusive advantage compared with the classical method in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were used to estimate the six height-diameter models, respectively. Both the classical method and the Bayesian method showed that the Weibull model was the “best” model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison with the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors or the classical method. The estimated posterior distributions for the parameters can be set as new priors in estimating the parameters using data2.
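
    As a minimal classical-fit sketch (the data are invented, and the Weibull-type curve below is one common height-diameter form assumed for illustration, since the abstract does not give the equation), nonlinear least squares estimates the three parameters; a Bayesian fit of the same model would instead place priors on a, b, c and report posterior distributions, enabling the informative-prior comparison the paper describes.

        import numpy as np
        from scipy.optimize import curve_fit

        D = np.array([8.0, 12.0, 16.0, 20.0, 24.0, 30.0])  # diameter (cm), invented
        H = np.array([7.1, 10.2, 12.8, 14.6, 16.1, 17.9])  # height (m), invented

        def weibull_hd(D, a, b, c):
            # Weibull-type height-diameter curve; 1.3 m is breast height
            return 1.3 + a * (1.0 - np.exp(-b * D ** c))

        params, cov = curve_fit(weibull_hd, D, H, p0=[20.0, 0.05, 1.0])
        print(params)   # point estimates of (a, b, c)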

  19. Stability analysis of cosmological models through Liapunov's method

    CERN Document Server

    Charters, T C; Mimoso, J P; Charters, Tiago C.; Mimoso, Jose P.

    2001-01-01

We investigate the general asymptotic behaviour of Friedman-Robertson-Walker (FRW) models with an inflaton field, scalar-tensor FRW cosmological models and diagonal Bianchi-IX models by means of Liapunov's method. This method provides information not only about the asymptotic stability of a given equilibrium point but also about its basin of attraction. This cannot be obtained by the usual methods found in the literature, such as linear stability analysis or first-order perturbation techniques. Moreover, Liapunov's method is also applicable to non-autonomous systems. We use this advantage to investigate the mechanism of reheating for the inflaton field in FRW models.
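
    For orientation, the textbook statement of Liapunov's direct method underlying the paper (a standard result, not specific to it): for x' = f(x) with equilibrium x* = 0, one seeks a function V with

        V(0) = 0, \qquad V(x) > 0 \quad (x \neq 0), \qquad
        \dot{V}(x) = \nabla V(x) \cdot f(x) \le 0,

    which gives stability; if moreover \dot{V}(x) < 0 for x \neq 0, the equilibrium is asymptotically stable, and any bounded sublevel set {x : V(x) <= c} on which \dot{V} < 0 provides an estimate of the basin of attraction, which is the extra information mentioned above.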

  20. Laser filamentation mathematical methods and models

    CERN Document Server

    Lorin, Emmanuel; Moloney, Jerome

    2016-01-01

This book is focused on the nonlinear theoretical and mathematical problems associated with ultrafast intense laser pulse propagation in gases and in particular, in air. With the aim of understanding the physics of filamentation in gases, solids, the atmosphere, and even biological tissue, specialists in nonlinear optics and filamentation from both physics and mathematics attempt to rigorously derive and analyze relevant non-perturbative models. Modern laser technology allows the generation of ultrafast (few cycle) laser pulses, with intensities exceeding the internal electric field in atoms and molecules (E = 5x10^9 V/cm or intensity I = 3.5x10^16 W/cm^2). The interaction of such pulses with atoms and molecules leads to new, highly nonlinear nonperturbative regimes, where new physical phenomena, such as High Harmonic Generation (HHG), occur, and from which the shortest (attosecond - the natural time scale of the electron) pulses have been created. One of the major experimental discoveries in this nonlinear...

  1. Exposure-response modeling methods and practical implementation

    CERN Document Server

    Wang, Jixian

    2015-01-01

Discover the Latest Statistical Approaches for Modeling Exposure-Response Relationships. Written by an applied statistician with extensive practical experience in drug development, Exposure-Response Modeling: Methods and Practical Implementation explores a wide range of topics in exposure-response modeling, from traditional pharmacokinetic-pharmacodynamic (PKPD) modeling to other areas in drug development and beyond. It incorporates numerous examples and software programs for implementing novel methods. The book describes using measurement

  2. Research on the modeling method of soybean leafs structure simulation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

The leaf is one of the most important organs of the soybean plant, and modeling soybean leaf structure is useful for research on leaf function. The paper discusses two aspects: a method for extracting the leaf profile and a method for establishing the leaf simulation model. It puts forward a basic method for digitally processing soybean leaves, and successfully establishes a simulation model of soybean leaf structure based on an L-system. It also solves a critical problem in the process of establishing a soybean growth simulation model. The research has guiding significance for the establishment of a whole-plant soybean model.
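
    A minimal L-system sketch (illustrative only; the paper's actual production rules for soybean leaves are not given in the abstract):

        # Hypothetical branching rule; '[' and ']' push/pop the turtle state,
        # '+'/'-' turn, and 'F' draws forward when the string is rendered.
        rules = {"F": "F[+F]F[-F]F"}

        def rewrite(axiom, n):
            s = axiom
            for _ in range(n):
                s = "".join(rules.get(ch, ch) for ch in s)
            return s

        print(rewrite("F", 2))   # the string a turtle-graphics renderer would draw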

  3. New Retrieval Method Based on Relative Entropy for Language Modeling with Different Smoothing Methods

    Institute of Scientific and Technical Information of China (English)

    Huo Hua; Liu Junqiang; Feng Boqin

    2006-01-01

A language model for information retrieval is built by using a query language model to generate queries and a document language model to generate documents. The documents are ranked according to the relative entropies of the estimated document language models with respect to the estimated query language model. Two popular and relatively efficient smoothing methods, the Jelinek-Mercer method and the absolute discounting method, are used to smooth the document language model in estimation of the document language. A combined model composed of the feedback document language model and the collection language model is used to estimate the query model. A performance comparison between the new retrieval method and the existing method with feedback is made, and the retrieval performances of the proposed method with the two different smoothing techniques are evaluated on three Text Retrieval Conference (TREC) data sets. Experimental results show that the method is effective and performs better than the basic language modeling approach; moreover, the method using the Jelinek-Mercer technique performs better than that using the absolute discounting technique, and the performance is sensitive to the smoothing parameters.
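
    A hedged sketch of the core ranking computation (the standard formulation of Jelinek-Mercer smoothing and relative-entropy ranking, not the authors' implementation; the toy corpus and the lambda value are assumptions):

        import math
        from collections import Counter

        def jm_prob(word, doc_counts, coll_counts, coll_len, lam=0.5):
            # P(w|d) = (1 - lam) * ML document estimate + lam * collection estimate
            doc_len = sum(doc_counts.values())
            p_doc = doc_counts[word] / doc_len if doc_len else 0.0
            return (1 - lam) * p_doc + lam * coll_counts[word] / coll_len

        def score(query_counts, doc_counts, coll_counts, coll_len):
            # Negative KL(query || doc) up to a query-only constant, so
            # sorting by this score gives the relative-entropy ranking
            q_len = sum(query_counts.values())
            return sum((c / q_len) * math.log(jm_prob(w, doc_counts, coll_counts, coll_len))
                       for w, c in query_counts.items())

        docs = [Counter("the quick brown fox".split()),
                Counter("language models rank documents".split())]
        coll = sum(docs, Counter())
        coll_len = sum(coll.values())
        query = Counter("language models".split())
        print(sorted(range(len(docs)), key=lambda i: -score(query, docs[i], coll, coll_len)))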

  4. FORTY PLUS CLUBS AND WHITE-COLLAR MANHOOD DURING THE GREAT DEPRESSION

    Directory of Open Access Journals (Sweden)

    Gregory Wood

    2008-01-01

As scholars of gender and labor have argued, chronic unemployment during the Great Depression precipitated a “crisis” of masculinity, compelling men to turn towards new industrial unions and the New Deal as ways to affirm work, breadwinning, and patriarchy as bases for manhood. But did all men experience this crisis? During the late 1930s, white-collar men organized groups called “Forty Plus Clubs” in response to their worries about joblessness and manhood. The clubs made it possible for unemployed executives to find new jobs, while at the same time recreating the male-dominated culture of the white-collar office. For male executives, Forty Plus Clubs precluded the Depression-era crisis of manhood, challenging the idea that the absence of paid employment was synonymous with the loss of masculinity.

  5. Modeling of indoor/outdoor fungi relationships in forty-four homes

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, M.J.

    1996-12-31

    From April through October 1994, a study was conducted in the Moline, Illinois-Bettendorf, Iowa area to measure bioaerosol concentrations in 44 homes housing a total of 54 asthmatic individuals. Air was sampled 3 to 10 times at each home over a period of seven months. A total of 852 pairs of individual samples were collected indoors at up to three locations (basement, kitchen, bedroom, or living room) and outside within two meters of each house.

6. Migration and Heterotopia in the Work of Nisa Forti

    Directory of Open Access Journals (Sweden)

    Margherita Cannavacciuolo

    2014-06-01

This paper considers the novel La crisálida (1984) by the Italian writer Nisa Forti (1934-2009), who emigrated to Argentina in 1948. The study analyses how the writing presents the experience of migration through the appropriation of mechanisms related to heterotopia. In doing so, the novel advances a reformulation of the utopian and dystopian relationships between the country of origin and the destination country.

  7. "Method, system and storage medium for generating virtual brick models"

    DEFF Research Database (Denmark)

    2009-01-01

    An exemplary embodiment is a method for generating a virtual brick model. The virtual brick models are generated by users and uploaded to a centralized host system. Users can build virtual models themselves or download and edit another user's virtual brick models while retaining the identity... of the original virtual brick model. Routines are provided for storing user-created building steps and generating automated building instructions for virtual brick models, generating a bill of materials for a virtual brick model, and ordering physical bricks corresponding to a virtual brick model...

  8. A Parametric Modelling Method for Dexterous Finger Reachable Workspaces

    Directory of Open Access Journals (Sweden)

    Wenzhen Yang

    2016-03-01

    The well-known algorithms, such as the graphic method, analytical method or numerical method, have some defects when modelling the dexterous-finger workspace, which is a significant kinematical feature of dexterous hands and is valuable for grasp planning, motion control and mechanical design. A novel modelling method with convenient and parametric performance is introduced to generate the dexterous-finger reachable workspace. This method constructs the geometric topology of the dexterous-finger reachable workspace and uses a joint feature recognition algorithm to extract the kinematical parameters of the dexterous finger. Compared with graphic, analytical and numerical methods, this parametric modelling method can automatically and conveniently construct more vivid forms and contours of the dexterous-finger reachable workspace. The main contribution of this paper is that a workspace-modelling tool with high interactive efficiency is developed for designers to precisely visualize the dexterous-finger reachable workspace, which is valuable for analysing the flexibility of the dexterous finger.

  9. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high-risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be...
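
    As a concrete illustration of the classical, likelihood-based selection the abstract contrasts with the decision-theoretic approach, the sketch below scores candidate polynomial models of increasing order with AIC. The synthetic data and candidate class are assumptions; the paper's decision-theoretic method would further weight candidates by a use-specific utility function.

```python
# A minimal sketch of classical model selection over a class of candidate
# polynomial models using AIC. Data and candidate class are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0, 0.1, x.size)  # synthetic data

def aic(order):
    coeffs = np.polyfit(x, y, order)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid**2)                 # ML estimate of noise variance
    loglik = -0.5 * x.size * (np.log(2 * np.pi * sigma2) + 1)
    k = order + 2                              # polynomial coeffs + variance
    return 2 * k - 2 * loglik

candidates = range(5)
best = min(candidates, key=aic)
print({m: round(aic(m), 1) for m in candidates}, "-> selected order", best)
```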

  10. IDEF method-based simulation model design and development framework

    Directory of Open Access Journals (Sweden)

    Ki-Young Jeong

    2009-09-01

    The purpose of this study is to provide an IDEF method-based integrated framework for business process simulation models that reduces model development time by increasing communication and knowledge reusability during a simulation project. In this framework, simulation requirements are collected by a function modeling method (IDEF0) and a process modeling method (IDEF3). Based on these requirements, a common data model is constructed using the IDEF1X method. From this reusable data model, multiple simulation models are automatically generated using a database-driven simulation model development approach. The framework is claimed to help both the requirement collection and experimentation phases of a simulation project by improving system knowledge, model reusability, and maintainability through the systematic use of three descriptive IDEF methods and the features of relational database technologies. A complex semiconductor fabrication case study was used as a testbed to evaluate and illustrate the concepts and the framework. Two different simulation software products were used to develop and control the semiconductor model from the same knowledge base. The case study empirically showed that this framework could help improve simulation project processes by using IDEF-based descriptive models and relational database technology. The authors also concluded that this framework could easily be applied to other analytical model generation by separating the logic from the data.

  11. Assessment of substitution model adequacy using frequentist and Bayesian methods.

    Science.gov (United States)

    Ripplinger, Jennifer; Sullivan, Jack

    2010-12-01

    In order to have confidence in model-based phylogenetic methods, such as maximum likelihood (ML) and Bayesian analyses, one must use an appropriate model of molecular evolution identified using statistically rigorous criteria. Although model selection methods such as the likelihood ratio test and the Akaike information criterion are widely used in the phylogenetic literature, model selection methods lack the ability to reject all models if they provide an inadequate fit to the data. There are two methods, however, that assess absolute model adequacy, the frequentist Goldman-Cox (GC) test and Bayesian posterior predictive simulations (PPSs), which are commonly used in conjunction with the multinomial log likelihood test statistic. In this study, we use empirical and simulated data to evaluate the adequacy of common substitution models using both frequentist and Bayesian methods and compare the results with those obtained with model selection methods. In addition, we investigate the relationship between model adequacy and performance in ML and Bayesian analyses in terms of topology, branch lengths, and bipartition support. We show that tests of model adequacy based on the multinomial likelihood often fail to reject simple substitution models, especially when the models incorporate among-site rate variation (ASRV), and normally fail to reject less complex models than those chosen by model selection methods. In addition, we find that PPSs often fail to reject simpler models than the GC test. Use of the simplest substitution models not rejected based on fit normally results in similar but somewhat divergent estimates of tree topology and branch lengths. In addition, use of the simplest adequate substitution models can affect estimates of bipartition support, although these differences are often small, with the largest differences confined to poorly supported nodes. We also find that alternative assumptions about ASRV can affect tree topology, tree length, and bipartition support. Our...

  12. Adjoint method for hybrid guidance loop state-space models

    NARCIS (Netherlands)

    Weiss, M.; Bucco, D.

    2015-01-01

    A framework is introduced to develop the theory of the adjoint method for models including both continuous and discrete dynamics. The basis of this framework consists of the class of impulsive linear dynamic systems. It allows extension of the adjoint method to more general models that include multi...

  13. Advanced methods of solid oxide fuel cell modeling

    CERN Document Server

    Milewski, Jaroslaw; Santarelli, Massimo; Leone, Pierluigi

    2011-01-01

    Fuel cells are widely regarded as the future of the power and transportation industries. Intensive research in this area now requires new methods of fuel cell operation modeling and cell design. Typical mathematical models are based on the physical process description of fuel cells and require a detailed knowledge of the microscopic properties that govern both chemical and electrochemical reactions. "Advanced Methods of Solid Oxide Fuel Cell Modeling" proposes the alternative methodology of generalized artificial neural network (ANN) solid oxide fuel cell (SOFC) modeling...

  14. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models is available to ecologists; however, it is not always clear which method is appropriate to choose. To this end, three approaches to estimation in the theta-logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed-effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...
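
    The first of the benchmarked approaches is easy to illustrate. The following sketch (an assumption-laden toy, not the study's code) discretizes the log-abundance state of a theta-logistic model onto a grid and runs an HMM filter; all parameter values are invented for demonstration.

```python
# A minimal sketch of grid-based HMM filtering for a theta-logistic
# state-space model. Parameters and noise levels are assumed.
import numpy as np

r0, K, theta = 0.5, 100.0, 1.0        # assumed dynamics parameters
sig_proc, sig_obs = 0.1, 0.2          # assumed noise std devs

def step_mean(logn):
    """Theta-logistic growth on log-abundance."""
    return logn + r0 * (1.0 - (np.exp(logn) / K) ** theta)

# Simulate synthetic observations
rng = np.random.default_rng(1)
T, x = 50, np.log(20.0)
obs = []
for _ in range(T):
    x = step_mean(x) + rng.normal(0, sig_proc)
    obs.append(x + rng.normal(0, sig_obs))

# HMM filter on a fixed grid of log-abundance states
grid = np.linspace(np.log(5), np.log(200), 200)
gauss = lambda z, s: np.exp(-0.5 * (z / s) ** 2)   # unnormalized Gaussian
trans = gauss(grid[None, :] - step_mean(grid)[:, None], sig_proc)
trans /= trans.sum(axis=1, keepdims=True)          # row-stochastic transitions

belief = np.full(grid.size, 1.0 / grid.size)
estimates = []
for y in obs:
    belief = belief @ trans                        # predict
    belief *= gauss(y - grid, sig_obs)             # update with observation
    belief /= belief.sum()
    estimates.append(np.exp(grid) @ belief)        # posterior-mean abundance
print(estimates[-5:])
```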

  15. Extending product modeling methods for integrated product development

    DEFF Research Database (Denmark)

    Bonev, Martin; Wörösch, Michael; Hauksdóttir, Dagný

    2013-01-01

    Despite great efforts within the modeling domain, the majority of methods address the uncommon design situation of original product development. However, studies illustrate that development tasks are predominantly related to redesigning, improving, and extending already existing products... Updated design requirements then have to be made explicit and mapped against the existing product architecture. In this paper, existing methods are adapted and extended by linking updated requirements to suitable product models. By combining several established modeling techniques, such as the DSM and PVM methods, in the presented Product Requirement Development model, some of the individual drawbacks of each method could be overcome. Based on the UML standard, the model enables the representation of complex hierarchical relationships in a generic product model. At the same time it uses matrix...

  16. An efficient method for solving fractional Hodgkin–Huxley model

    Energy Technology Data Exchange (ETDEWEB)

    Nagy, A.M., E-mail: abdelhameed_nagy@yahoo.com [Department of Mathematics, Faculty of Science, Benha University, 13518 Benha (Egypt); Sweilam, N.H., E-mail: n_sweilam@yahoo.com [Department of Mathematics, Faculty of Science, Cairo University, 12613 Giza (Egypt)

    2014-06-13

    In this paper, we present an accurate numerical method for solving the fractional Hodgkin–Huxley model. A non-standard finite difference method (NSFDM) is implemented to study the dynamic behaviors of the proposed model. The Grünwald–Letnikov definition is used to approximate the fractional derivatives. Numerical results, presented graphically, reveal that the NSFDM is easy to implement, effective and convenient for solving the proposed model. - Highlights: • An accurate numerical method for solving the fractional Hodgkin–Huxley model is given. • The non-standard finite difference method (NSFDM) is implemented for the proposed model. • The NSFDM can solve differential equations involving derivatives of non-integer order. • The NSFDM is a very powerful and efficient technique for solving the proposed model.
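
    To make the Grünwald–Letnikov idea concrete, the sketch below applies it to a scalar fractional decay equation rather than the full Hodgkin–Huxley system; the order, rate, and step size are assumed values, and a comment marks where a non-standard denominator function would enter.

```python
# A minimal sketch of the Gruenwald-Letnikov (GL) approximation applied to
# D^alpha x = -lam * x (a toy stand-in for the full model). Values assumed.
import numpy as np

alpha, lam, h, T = 0.8, 1.0, 0.01, 5.0   # assumed order, rate, step, horizon
N = int(T / h)

# GL weights via the recurrence w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)
w = np.empty(N + 1)
w[0] = 1.0
for j in range(1, N + 1):
    w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)

# In a non-standard scheme, h**alpha below would be replaced by a
# denominator function phi(h) chosen to preserve qualitative dynamics.
x = np.empty(N + 1)
x[0] = 1.0
for n in range(1, N + 1):
    history = np.dot(w[1:n + 1], x[n - 1::-1])   # sum_{j>=1} w_j * x_{n-j}
    x[n] = -history / (1.0 + lam * h**alpha)     # implicit GL update
print(x[:5], x[-1])
```

    The update solves the discretized equation h**(-alpha) * sum_j w_j x_{n-j} = -lam * x_n for x_n at each step, so the full history enters through the GL weights.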

  17. Verifying calculations forty years on : an overview of classical verification techniques for FEM simulations

    CERN Document Server

    Díez, Pedro

    2016-01-01

    This work provides an overview of a posteriori error assessment techniques for finite element (FE) based numerical models. These tools aim at estimating and controlling the discretization error in scientific computational models, being the basis for the numerical verification of FE solutions. The text discusses the capabilities and limitations of classical methods to build error estimates which can be used to control the quality of numerical simulations and drive adaptive algorithms, with a focus on computational mechanics engineering applications. Fundamental principles of residual methods, smoothing (recovery) methods, and constitutive relation error (duality based) methods are addressed along the manuscript. Attention is paid to recent advances and forthcoming research challenges on related topics. The book constitutes a useful guide for students, researchers, or engineers wishing to acquire insights into state-of-the-art techniques for numerical verification.

  18. An efficient method for solving fractional Hodgkin-Huxley model

    Science.gov (United States)

    Nagy, A. M.; Sweilam, N. H.

    2014-06-01

    In this paper, we present an accurate numerical method for solving the fractional Hodgkin-Huxley model. A non-standard finite difference method (NSFDM) is implemented to study the dynamic behaviors of the proposed model. The Grünwald-Letnikov definition is used to approximate the fractional derivatives. Numerical results, presented graphically, reveal that the NSFDM is easy to implement, effective and convenient for solving the proposed model.

  19. Exact Modeling of Cardiovascular System Using Lumped Method

    CERN Document Server

    Ghasemalizadeh, Omid; Firoozabadi, Bahar; Hassani, Kamran

    2014-01-01

    Electrical analogy (the lumped method) is an easy way to model the human cardiovascular system. In this paper the lumped method is used to simulate a complete model. The paper describes a 36-vessel model and the cardiac system of the human body in enough detail to show the hydrodynamic parameters of the cardiovascular system. It also includes modeling of the pulmonary circulation, the atria, and the left and right ventricles with their equivalent circuits. Exact modeling of the right and left ventricular pressures increases the accuracy of the simulation. We show that the aortic pressure calculated from our complex circuit is close to the pressure measured with advanced medical instruments.
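
    The electrical analogy is easiest to see on a single compartment. The sketch below integrates a two-element Windkessel segment, with compliance playing the role of a capacitor and peripheral resistance that of a resistor; the paper's 36-vessel network chains many such compartments, and all parameter values here are assumed.

```python
# A minimal sketch of a two-element Windkessel compartment:
# C * dP/dt = Q(t) - P / R. Parameters are illustrative, not the paper's.
import math

R, C = 1.0, 1.5            # mmHg*s/mL and mL/mmHg (assumed values)
dt, t_end, period = 1e-3, 10.0, 0.8

def inflow(t):
    """Pulsatile cardiac inflow: systolic half-sine, zero in diastole."""
    tau = t % period
    return 300.0 * math.sin(math.pi * tau / 0.3) if tau < 0.3 else 0.0

p, t = 80.0, 0.0           # initial "aortic" pressure (mmHg)
while t < t_end:
    p += dt * (inflow(t) - p / R) / C   # forward-Euler update of the RC node
    t += dt
print(f"pressure after {t_end:.0f} s: {p:.1f} mmHg")
```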

  20. Systems and methods for modeling and analyzing networks

    Science.gov (United States)

    Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W

    2013-10-29

    The systems and methods described herein utilize a probabilistic modeling framework for reverse-engineering an ensemble of causal models from data, and then forward-simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables, such as a compound of interest, and dependent variables, such as measured DNA alterations and changes in mRNA, protein, and metabolites, through to phenotypic readouts of efficacy and toxicity.

  1. A mixed model reduction method for preserving selected physical information

    Science.gov (United States)

    Zhang, Jing; Zheng, Gangtie

    2017-03-01

    A new model reduction method in the frequency domain is presented. By combining model reduction techniques from both the time domain and the frequency domain, the dynamic model is condensed to selected physical coordinates, and the contribution of the slave degrees of freedom is taken as a modification to the model in the form of the effective modal mass of virtually constrained modes. The reduced model preserves the physical information related to the selected physical coordinates, such as physical parameters and the physical space positions of the corresponding structure components. For cases of non-classical damping, the method is extended to model reduction in the state space, while still containing only the selected physical coordinates. Numerical results are presented to validate the method and show the effectiveness of the model reduction.

  2. Composite modeling method in dynamics of planar mechanical system

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper presents a composite modeling method of the forward dynamics in general planar mechanical system. In the modeling process, the system dynamic model is generated by assembling the model units which are kinematical determinate in planar mechanisms rather than the body/joint units in multi-body system. A state space formulation is employed to model both the unit and system models. The validation and feasibility of the method are illustrated by a case study of a four-bar mechanism. The advantage of this method is that the models are easier to reuse and the system is easier to reconfigure. The formulation reveals the relationship between the topology and dynamics of the planar mechanism to some extent.

  3. Composite modeling method in dynamics of planar mechanical system

    Institute of Scientific and Technical Information of China (English)

    WANG Hao; LIN ZhongQin; LAI XinMin

    2008-01-01

    This paper presents a composite modeling method of the forward dynamics in general planar mechanical system. In the modeling process, the system dynamic model is generated by assembling the model units which are kinematical determinate in planar mechanisms rather than the body/joint units in multi-body system. A state space formulation is employed to model both the unit and system models. The validation and feasibility of the method are illustrated by a case study of a four-bar mechanism. The advantage of this method is that the models are easier to reuse and the system is easier to reconfigure. The formulation reveals the relationship between the topology and dynamics of the planar mechanism to some extent.

  4. Teaching students to apply multiple physical modeling methods

    NARCIS (Netherlands)

    Wiegers, T.; Verlinden, J.C.; Vergeest, J.S.M.

    2014-01-01

    Design students should be able to explore a variety of shapes before elaborating one particular shape. Current modelling courses don’t address this issue. We developed the course Rapid Modelling, which teaches students to explore multiple shape models in a short time, applying different methods and...

  5. Modelling of Landslides with the Material-point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  6. Modeling of Landslides with the Material Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2008-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  7. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-11-01

    Simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.

  8. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

    Implementations of the DLM are, however, not very versatile in terms of the geometries that can be modeled. The ZONA6 code offers a versatile surface-panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...

  9. Teaching students to apply multiple physical modeling methods

    NARCIS (Netherlands)

    Wiegers, T.; Verlinden, J.C.; Vergeest, J.S.M.

    2014-01-01

    Design students should be able to explore a variety of shapes before elaborating one particular shape. Current modelling courses don’t address this issue. We developed the course Rapid Modelling, which teaches students to explore multiple shape models in a short time, applying different methods and...

  10. Design methods for some dose-response models

    NARCIS (Netherlands)

    Albers, Willem/Wim; Strijbosch, Leo W.G.; Does, Ronald J.M.M.

    1990-01-01

    A recently described design method for one-parameter biomedical models such as limiting or serial dilution assays is generalized to two-parameter models for which the dose-response relationship can be expressed as a linear regression model with parameters α (intercept) and β (slope). Design formulae...

  11. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf

    2010-01-01

    Offering a unique balance between applications and calculations, this book incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The book enables readers to find the right algorithm for a desired application and illustrates complicated methods and algorithms with simple applicat...

  12. Method of moments estimation of GO-GARCH models

    NARCIS (Netherlands)

    Boswijk, H.P.; van der Weide, R.

    2009-01-01

    We propose a new estimation method for the factor loading matrix in generalized orthogonal GARCH (GO-GARCH) models. The method is based on the eigenvectors of a suitably defined sample autocorrelation matrix of squares and cross-products of the process. The method can therefore be easily applied to...

  13. On Angular Sampling Methods for 3-D Spatial Channel Models

    DEFF Research Database (Denmark)

    Fan, Wei; Jämsä, Tommi; Nielsen, Jesper Ødum

    2015-01-01

    This paper discusses generating three-dimensional (3D) spatial channel models, with emphasis on the angular sampling methods. Three angular sampling methods, i.e. modified uniform power sampling, modified uniform angular sampling, and random pairing methods, are proposed and investigated in detail...

  14. Tensor renormalization group methods for spin and gauge models

    Science.gov (United States)

    Zou, Haiyuan

    The analysis of the error of perturbative series by comparison to the exact solution is an important tool for understanding the non-perturbative physics of statistical models. For some toy models, a new method can be used to calculate higher-order weak coupling expansions, and a modified perturbation theory can be constructed. However, it is nontrivial to generalize the new method to understand the critical behavior of high-dimensional spin and gauge models. Indeed, it is a big challenge in both high energy physics and condensed matter physics to develop accurate and efficient numerical algorithms to solve these problems. In this thesis, one systematic approach, the tensor renormalization group method, is discussed. The applications of the method to several spin and gauge models on a lattice are investigated. Theoretically, the new method allows one to write an exact representation of the partition function of models with local interactions, e.g., O(N) models, Z2 gauge models and U(1) gauge models. Practically, by using controllable approximations, results in both finite volume and the thermodynamic limit can be obtained. Another advantage of the new method is that it is insensitive to sign problems for models with complex couplings and chemical potentials. Through the new approach, the Fisher zeros of the 2D O(2) model in the complex coupling plane can be calculated, and the finite size scaling of the results agrees well with the Kosterlitz-Thouless assumption. Applying the method to the O(2) model with a chemical potential, a new phase diagram of the model can be obtained. The structure of the tensor language may provide a new tool to understand phase transition properties in general.

  15. Sequence Analysis on Complete Mitochondrial Genome and Phylogeny of Microtus fortis fortis

    Institute of Scientific and Technical Information of China (English)

    高骏; 倪丽菊; 孙凤萍; 王金祥; 胡建华; 高诚; 李凯; 肖君华; 周宇荀

    2013-01-01

    Objective: To obtain the complete mitochondrial genome sequence of Microtus fortis fortis, providing molecular data for research on the genetic structure and phylogeny of Microtus fortis. Methods: Nine primer pairs were designed with reference to the mitochondrial genome of the closely related Taiwan vole (Microtus kikuchii, NC_003041.1); conventional and long-distance PCR combined with a "primer walking" sequencing strategy were then used to obtain, for the first time, the complete mitochondrial genome of M. f. fortis (GenBank: JF261174). A phylogenetic tree was constructed based on the Cyt b gene to investigate the phylogenetic position of M. f. fortis. Results: The mitochondrial genome of M. f. fortis is 16,312 bp long and includes 13 protein-coding genes, 2 ribosomal RNAs, 22 transfer RNAs and one major noncoding region (the CR region). The extended termination-associated sequences (ETAS-1 and ETAS-2), conserved sequence block 1 (CSB-1) and a poly(C)10 section were found in the CR region. The putative origin of light-strand replication (OL) of M. f. fortis shows high conservation in the stem and adjacent sequences, but differences exist in the loop region among species and subspecies. Phylogenetic analysis based on the cytochrome b gene showed the closest relationship with Microtus middendorffi within the genus Microtus. Conclusion: The mitochondrial genome sequence of M. f. fortis shows a typical vertebrate pattern. This study provides useful molecular data for further study of the phylogenetic relationships and subspecies taxonomy of M. fortis.

  16. MODERN MODELS AND METHODS OF DIAGNOSIS OF METHODOLOGY COMPETENT TEACHERS

    Directory of Open Access Journals (Sweden)

    Loyko V. I.

    2016-06-01

    The purpose of the research is the development of models and methods for diagnosing the methodical competence of a teacher. According to modern views, methodical thinking is the key competence of teachers. Modern experts consider the methodical competence of a teacher as a personal and professional quality that is a fundamentally important factor in the success of a teacher's professional activity, as well as a subsystem of professional competence. This is because, in today's world, a high level of subject knowledge and a command of the basics of teaching methods cannot fully describe the level of a teacher's professional competence. The authors characterize the functional components of the methodical competence of a teacher, its relationship with other personal-professional qualities (first of all, psychological-pedagogical, research and informational competence), and its levels of formation. In forming a model of methodical competence, the authors proceeded from the fact that a contemporary teacher faces high demands: he or she must be ready to conduct independent research, design learning technologies, and forecast the results of the training and education of students. Since the leading component of methodical competence is personal experience in methodological activities, and the requirements of methodical competence are determined by the goals and objectives of methodical activity, the formation of models of methodical competence in the present study was preceded by a refinement of existing models of the methodical activity of the teaching staff of higher education institutions and secondary vocational education institutions. The proposed model of methodical competence is the scientific basis for a system of monitoring a teacher's personal and professional development, and the evaluation criteria and levels of its diagnosis are targets for a system of...

  17. Hybrid ODE/SSA methods and the cell cycle model

    Science.gov (United States)

    Wang, S.; Chen, M.; Cao, Y.

    2017-07-01

    Stochastic effects in cellular systems have been an important topic in systems biology, and stochastic modeling and simulation methods are important tools for studying them. Given the low efficiency of stochastic simulation algorithms, the hybrid method, which combines an ordinary differential equation (ODE) system with a stochastic chemically reacting system, shows unique advantages in the modeling and simulation of biochemical systems. The efficiency of the hybrid method is usually limited by the reactions in the stochastic subsystem, which are modeled and simulated using Gillespie's framework and frequently interrupt the integration of the ODE subsystem. In this paper we develop an efficient implementation approach for the hybrid method coupled with traditional ODE solvers. We also compare the efficiency of hybrid methods with three widely used ODE solvers: RADAU5, DASSL, and DLSODAR. Numerical experiments with three biochemical models are presented, and a detailed discussion of the performance of the three ODE solvers is given.
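
    The interplay the authors describe, ODE integration repeatedly interrupted by stochastic firings, can be sketched in a few lines. The toy model, rate constants, and forward-Euler integrator below are assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch of a hybrid ODE/SSA step: the ODE subsystem is integrated
# until the integrated propensity of the stochastic subsystem reaches an
# exponentially distributed threshold, at which point one discrete reaction
# fires (Gillespie's framework). The toy model is assumed.
import math, random

random.seed(0)
x = 10.0      # ODE species: dx/dt = k_prod * gene_on - k_deg * x
gene_on = 1   # stochastic species toggled by a slow reaction
k_prod, k_deg, k_toggle = 5.0, 0.5, 0.2
dt, t, t_end = 1e-3, 0.0, 20.0

threshold = -math.log(random.random())   # next-firing threshold, Exp(1)
accum = 0.0                              # integrated propensity
while t < t_end:
    x += dt * (k_prod * gene_on - k_deg * x)   # ODE step (forward Euler)
    accum += dt * k_toggle                     # propensity of the slow toggle
    if accum >= threshold:                     # stochastic reaction fires
        gene_on = 1 - gene_on
        accum, threshold = 0.0, -math.log(random.random())
    t += dt
print(f"x(t_end) = {x:.2f}, gene_on = {gene_on}")
```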

  18. Model refinements of transformers via a subproblem finite element method

    OpenAIRE

    Dular, Patrick; Kuo-Peng, Patrick; Ferreira Da Luz, Mauricio,; Krähenbühl, Laurent

    2015-01-01

    A progressive modeling of transformers is performed via a subproblem finite element method. A complete problem is split into subproblems with different adapted overlapping meshes. Model refinements are performed from ideal to real flux tubes, 1-D to 2-D to 3-D models, linear to nonlinear materials, perfect to real materials, single-wire to volume-conductor windings, and homogenized to fine models of cores and coils, with any coupling of these changes. The proposed unif...

  19. Interval Methods for Model Qualification: Methodology and Advanced Application

    OpenAIRE

    Alexandre dit Sandretto, Julien; Trombettoni, Gilles; Daney, David

    2012-01-01

    An actual model is often too complex to use, and sometimes impossible to obtain, in the simulation or control fields. To handle a system in practice, a simplification of the real model is then necessary. This simplification goes through some hypotheses made on the system or the modeling approach. In this paper, we deal with all models that can be expressed by real-valued variables involved in analytical relations and depending on parameters. We propose a method that qualifies the simplificatio...

  20. A simple method for modeling dye-sensitized solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Son, Min-Kyu [Department of Electrical Engineering, Pusan National University, San 30, Jangjeon-Dong, Geumjeong-Gu, Busan, 609-735 (Korea, Republic of); Seo, Hyunwoong [Graduate School of Information Science and Electrical Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395 (Japan); Center of Plasma Nano-interface Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395 (Japan); Lee, Kyoung-Jun; Kim, Soo-Kyoung; Kim, Byung-Man; Park, Songyi; Prabakar, Kandasamy [Department of Electrical Engineering, Pusan National University, San 30, Jangjeon-Dong, Geumjeong-Gu, Busan, 609-735 (Korea, Republic of); Kim, Hee-Je, E-mail: heeje@pusan.ac.kr [Department of Electrical Engineering, Pusan National University, San 30, Jangjeon-Dong, Geumjeong-Gu, Busan, 609-735 (Korea, Republic of)

    2014-03-03

    Dye-sensitized solar cells (DSCs) are photoelectrochemical photovoltaics based on complicated electrochemical reactions. The modeling and simulation of DSCs are powerful tools for evaluating the performance of DSCs according to a range of factors. Many theoretical methods are used to simulate DSCs; however, these methods are quite complicated because they are based on difficult mathematical formulations. Therefore, this paper suggests a simple and accurate method for the modeling and simulation of DSCs. The suggested simulation method is based on extracting coefficients from representative cells and a simple interpolation method. This simulation method was implemented using a power electronics simulation program and the C programming language. The performance of DSCs according to the TiO2 thickness was simulated, and the simulated results were compared with the experimental data to confirm the accuracy of this simulation method. The suggested modeling strategy derived accurate current–voltage characteristics of the DSCs according to the TiO2 thickness, with good agreement between the simulation and the experimental results. - Highlights: • Simple modeling and simulation method for dye-sensitized solar cells (DSCs). • Modeling done using a power electronics simulation program and the C programming language. • The performance of DSCs according to the TiO2 thickness was simulated. • Simulated and experimental performance of DSCs were compared. • The method is suitable for accurate simulation of DSCs.
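
    The interpolation idea can be sketched directly. In the toy below, current-voltage curves measured at two reference TiO2 thicknesses are interpolated point-wise to predict the curve at an intermediate thickness; all numbers are fabricated, and the paper extracts model coefficients rather than raw curve points.

```python
# A minimal sketch of curve prediction by linear interpolation between
# representative cells. All numbers are fabricated for illustration.
import numpy as np

voltages = np.linspace(0.0, 0.7, 8)          # common voltage grid (V)
# Illustrative photocurrent densities (mA/cm^2) at two reference thicknesses
iv_by_thickness = {
    8.0:  np.array([12.0, 11.9, 11.7, 11.2, 10.0, 7.5, 3.8, 0.0]),
    16.0: np.array([14.0, 13.9, 13.6, 13.0, 11.5, 8.6, 4.2, 0.0]),
}

def iv_at(thickness):
    t0, t1 = sorted(iv_by_thickness)
    w = (thickness - t0) / (t1 - t0)         # linear weight between references
    return (1 - w) * iv_by_thickness[t0] + w * iv_by_thickness[t1]

print(np.round(iv_at(12.0), 2))              # predicted curve at 12 um
```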

  1. Numerical methods for modeling photonic-crystal VCSELs

    DEFF Research Database (Denmark)

    Dems, Maciej; Chung, Il-Sug; Nyakas, Peter

    2010-01-01

    We show a comparison of four different numerical methods for simulating photonic-crystal (PC) VCSELs. We present the theoretical basis behind each method and analyze the differences by studying a benchmark VCSEL structure, where the PC structure penetrates all VCSEL layers, the entire top-mirror DBR... to the effective index method. The simulation results elucidate the strengths and weaknesses of the analyzed methods and outline the limits of applicability of the different models.

  2. Forty years of working with corpora: from Ibsen to Twitter, and beyond

    Directory of Open Access Journals (Sweden)

    Knut Hofland

    2013-04-01

    We provide an overview of forty years of work with language corpora by the research group that started in 1972 as the Norwegian Computing Centre for the Humanities. A brief history highlights major corpora and tools that have been developed in numerous collaborations, including corpora of literature, dialect recordings, learner language, parallel texts, newspaper articles, blog posts and tweets. Current activities are also described, with a focus on corpus analysis tools, treebanks and social media analysis. Keywords: corpus building; corpus analysis tools; treebanks; social media analysis

  3. Forty years of Clar’s aromatic pi-sextet rule

    Directory of Open Access Journals (Sweden)

    Miquel Solà

    2013-10-01

    In 1972 Erich Clar formulated his aromatic pi-sextet rule, which allows qualitative discussion of the aromatic character of benzenoid species. Now, forty years later, Clar's aromatic pi-sextet rule is still a source of inspiration for many chemists. This simple rule has been validated both experimentally and theoretically. In this review, we select some particular examples to highlight the achievements of Clar's aromatic pi-sextet rule in many situations, and we discuss two recent successful cases of its application.

  4. Forty-five years of split-brain research and still going strong.

    Science.gov (United States)

    Gazzaniga, Michael S

    2005-08-01

    Forty-five years ago, Roger Sperry, Joseph Bogen and I embarked on what are now known as the modern split-brain studies. These experiments opened up new frontiers in brain research and gave rise to much of what we know about hemispheric specialization and integration. The latest developments in split-brain research build on the groundwork laid by those early studies. Split-brain methodology, on its own and in conjunction with neuroimaging, has yielded insights into the remarkable regional specificity of the corpus callosum as well as into the integrative role of the callosum in the perception of causality and in our perception of an integrated sense of self.

  5. Forty-five Cases of Apoplexy Treated by Electroacupuncture at the Points of Yin Meridians

    Institute of Scientific and Technical Information of China (English)

    李静铭

    2001-01-01

    Forty-five cases of apoplexy were treated by electroacupuncture only at points of the Yin meridians (i.e., the Hand and Foot Taiyin meridians), and another 30 cases, serving as controls, were treated only at points of the Yang meridians (i.e., the Hand and Foot Yangming meridians). The total effective rate was 91.1% in the former and 86.7% in the latter, with no statistically significant difference between the two groups, indicating that acupuncture only at points of the Yin meridians is also an effective therapy for apoplexy.

  6. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented, and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea...

  7. Quasi-Monte Carlo methods for the Heston model

    OpenAIRE

    Jan Baldeaux; Dale Roberts

    2012-01-01

    In this paper, we discuss the application of quasi-Monte Carlo methods to the Heston model. We base our algorithms on the Broadie-Kaya algorithm, an exact simulation scheme for the Heston model. As the joint transition densities are not available in closed-form, the Linear Transformation method due to Imai and Tan, a popular and widely applicable method to improve the effectiveness of quasi-Monte Carlo methods, cannot be employed in the context of path-dependent options when the underlying pr...

  8. Compositions and methods for modeling Saccharomyces cerevisiae metabolism

    DEFF Research Database (Denmark)

    2012-01-01

    The invention provides an in silico model for determining a S. cerevisiae physiological function. The model includes a data structure relating a plurality of S. cerevisiae reactants to a plurality of S. cerevisiae reactions, a constraint set for the plurality of S. cerevisiae reactions, and commands for determining a distribution of flux through the reactions that is predictive of a S. cerevisiae physiological function. A model of the invention can further include a gene database containing information characterizing the associated gene or genes. The invention further provides methods for making an in silico S. cerevisiae model and methods for determining a S. cerevisiae physiological function using a model of the invention...

  9. Finite Element Model Updating Using Response Surface Method

    CERN Document Server

    Marwala, Tshilidzi

    2007-01-01

    This paper proposes the response surface method for finite element model updating. The response surface method is implemented by approximating the finite element model surface response equation by a multi-layer perceptron. The updated parameters of the finite element model were calculated using a genetic algorithm by optimizing the surface response equation. The proposed method was compared to existing methods that use simulated annealing or a genetic algorithm together with a full finite element model for finite element model updating. The proposed method was tested on an unsymmetrical H-shaped structure. It was observed that the proposed method gave updated natural frequencies and mode shapes of the same order of accuracy as those given by simulated annealing and the genetic algorithm. Furthermore, it was observed that the response surface method achieved these results at a computational speed more than 2.5 times as fast as the genetic algorithm with a full finite element model and 24 ti...

  10. Evaluation of bias correction methods for wave modeling output

    Science.gov (United States)

    Parker, K.; Hill, D. F.

    2017-02-01

    Models that seek to predict environmental variables invariably demonstrate bias when compared to observations. Bias correction (BC) techniques are common in the climate and hydrological modeling communities, but have seen fewer applications in the field of wave modeling. In particular, there has been no investigation as to which BC methodology performs best for wave modeling. This paper introduces and compares a subset of BC methods with the goal of clarifying a "best practice" methodology for the application of BC in studies of wave-related processes. Particular attention is paid to comparing parametric vs. empirical methods as well as univariate vs. bivariate methods. The techniques are tested on global WAVEWATCH III historic and future period datasets, with comparison to buoy observations at multiple locations. Both wave height and period are considered in order to investigate BC effects on inter-variable correlation. Results show that all methods perform uniformly in terms of correcting statistical moments for individual variables, with the exception of a copula-based method underperforming for wave period. When comparing parametric and empirical methods, no difference is found. Between bivariate and univariate methods, results show that bivariate methods greatly improve inter-variable correlations. Of the bivariate methods tested, the copula-based method is found to be less effective at correcting correlation, while a "shuffling" method is unable to handle changes in correlation from historic to future periods. In summary, this study demonstrates that BC methods are effective when applied to wave model data and that it is essential to employ methods that consider dependence between variables.
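
    One of the univariate techniques compared in such studies, empirical quantile mapping, can be sketched compactly: each modeled value is replaced by the observed value at the same empirical quantile. The synthetic "model" and "buoy" series below are assumptions for illustration.

```python
# A minimal sketch of empirical (non-parametric) quantile-mapping bias
# correction for wave heights. Data are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(42)
obs = rng.gamma(2.0, 1.0, 5000)          # "buoy" wave heights
mod = rng.gamma(2.0, 1.3, 5000) + 0.3    # biased "model" wave heights

def quantile_map(values, model_ref, obs_ref, n_q=101):
    qs = np.linspace(0, 1, n_q)
    model_q = np.quantile(model_ref, qs)
    obs_q = np.quantile(obs_ref, qs)
    # Map each value through the model CDF, then invert the observed CDF
    return np.interp(values, model_q, obs_q)

corrected = quantile_map(mod, mod, obs)
print(f"mean bias before: {mod.mean() - obs.mean():+.2f},"
      f" after: {corrected.mean() - obs.mean():+.2f}")
```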

  11. Random weighting method for Cox’s proportional hazards model

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The variance of a parameter estimate in Cox's proportional hazards model is based on the asymptotic variance. When the sample size is small, the variance can be estimated by the bootstrap method. However, if the censoring rate in a survival data set is high, the bootstrap method may fail to work properly, because bootstrap samples may be even more heavily censored due to repeated sampling of the censored observations. This paper proposes a random weighting method for variance estimation and confidence interval estimation for the proportional hazards model. This method, unlike the bootstrap method, does not lead to more severe censoring than the original sample does. Its large-sample properties are studied, and consistency and asymptotic normality are proved under mild conditions. Simulation studies show that the random weighting method is not as sensitive to heavy censoring as the bootstrap method and can produce good variance estimates and confidence intervals.

  12. Random weighting method for Cox's proportional hazards model

    Institute of Scientific and Technical Information of China (English)

    CUI WenQuan; LI Kai; YANG YaNing; WU YueHua

    2008-01-01

    Variance of parameter estimate in Cox's proportional hazards model is based on asymptotic variance. When sample size is small, variance can be estimated by the bootstrap method. However, if the censoring rate in a survival data set is high, the bootstrap method may fail to work properly. This is because bootstrap samples may be even more heavily censored due to repeated sampling of the censored observations. This paper proposes a random weighting method for variance estimation and confidence interval estimation for the proportional hazards model. This method, unlike the bootstrap method, does not lead to more severe censoring than the original sample does. Its large sample properties are studied and the consistency and asymptotic normality are proved under mild conditions. Simulation studies show that the random weighting method is not as sensitive to heavy censoring as the bootstrap method is and can produce good variance estimates or confidence intervals.

  13. Fuzzy Clustering Methods and their Application to Fuzzy Modeling

    DEFF Research Database (Denmark)

    Kroszynski, Uri; Zhou, Jianjun

    1999-01-01

    Fuzzy modeling techniques based upon the analysis of measured input/output data sets result in a set of rules that allow one to predict system outputs from given inputs. Fuzzy clustering methods for system modeling and identification result in relatively small rule-bases, allowing fast, yet accurate prediction of outputs. This article presents an overview of some of the most popular clustering methods, namely Fuzzy C-Means (FCM) and its generalizations to Fuzzy C-Lines and Elliptotypes. The algorithms for computing cluster centers and principal directions from a training data set are described. A method to obtain an optimized number of clusters is outlined. Based upon the clusters' characteristics, a behavioural model is formulated in terms of a rule-base and an inference engine. The article reviews several variants of the model formulation. Some limitations of the methods are listed...
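
    The core FCM iteration is compact enough to sketch. The following toy (assumed data, fuzzifier m = 2) alternates between membership-weighted center updates and distance-based membership updates.

```python
# A minimal sketch of the Fuzzy C-Means (FCM) iteration. The toy data and
# fuzzifier m are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
c, m = 2, 2.0                                    # clusters, fuzzifier

U = rng.random((X.shape[0], c))
U /= U.sum(axis=1, keepdims=True)                # random initial memberships
for _ in range(100):
    Um = U ** m
    centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted means
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    p = 2.0 / (m - 1.0)
    # u_ik = 1 / (d_ik^p * sum_j d_ij^(-p)), the standard FCM update
    U_new = 1.0 / (d**p * (1.0 / d**p).sum(axis=1, keepdims=True))
    if np.abs(U_new - U).max() < 1e-6:
        U = U_new
        break
    U = U_new
print(np.round(centers, 2))
```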

  14. Analysis of Dynamic Modeling Method Based on Boundary Element

    Directory of Open Access Journals (Sweden)

    Xu-Sheng Gan

    2013-07-01

    The aim of this study was to develop an improved dynamic modeling method based on the Boundary Element Method (BEM). The dynamic model was composed of elements such as the beam element, plate element, joint element, lumped mass and spring element via the BEM. An improved dynamic model of a machine structure was established, based mainly on a plate-beam element system. The dynamic characteristics of the machine structure were then analyzed, and the comparison of computational and experimental results showed that the modeling method was effective. The analyses indicate that the introduced method offers a good way to analyze the dynamic characteristics of a machine structure efficiently.

  15. Cavity method in the spherical Sherrington-Kirkpatrick model

    OpenAIRE

    Panchenko, Dmitry

    2006-01-01

    We develop a cavity method in the spherical Sherrington-Kirkpatrick model at high temperature and small external field. As one application we compute the limit of the covariance matrix for fluctuations of the overlap and magnetization.

  16. Discrete Event Simulation Modeling of Radiation Medicine Delivery Methods

    Energy Technology Data Exchange (ETDEWEB)

    Paul M. Lewis; Dennis I. Serig; Rick Archer

    1998-12-31

    The primary objective of this work was to evaluate the feasibility of using discrete event simulation (DES) modeling to estimate the effects on system performance of changes in the human, hardware, and software elements of radiation medicine delivery methods.

  17. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    We extend the nonstandard finite difference method of solution to the study of pharmacokinetic–pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validates the efficiency of the nonstandard finite difference scheme as the method of choice.
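
    For a one-compartment linear PK model dC/dt = -kC, a classic non-standard scheme replaces the step size h by the denominator function phi(h) = (1 - exp(-kh))/k, which reproduces the exact decay for any step size. The sketch below (assumed parameter values, not the paper's system) contrasts it with forward Euler.

```python
# A minimal sketch of an NSFD scheme for dC/dt = -k*C after an I.V. bolus.
# Values are assumed for illustration.
import math

k, C0, h, n_steps = 0.3, 100.0, 2.0, 12       # 1/h, mg/L, hours, steps
phi = (1.0 - math.exp(-k * h)) / k            # NSFD denominator function

c_std, c_nsfd = C0, C0
for i in range(1, n_steps + 1):
    c_std += h * (-k * c_std)                 # standard forward Euler
    c_nsfd += phi * (-k * c_nsfd)             # NSFD update
    exact = C0 * math.exp(-k * h * i)
    if i % 4 == 0:
        print(f"t={h*i:4.1f}h  Euler={c_std:7.3f}  NSFD={c_nsfd:7.3f}"
              f"  exact={exact:7.3f}")
```

    Because c_nsfd is multiplied by (1 - k*phi) = exp(-k*h) at each step, the NSFD trajectory matches the exact solution even for the large step h = 2, where forward Euler visibly overshoots the decay.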

  18. SELECTION MOMENTS AND GENERALIZED METHOD OF MOMENTS FOR HETEROSKEDASTIC MODELS

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2016-06-01

    In this paper, the authors describe selection methods for moments and the application of the generalized method of moments to heteroskedastic models. The utility of GMM estimators lies in the study of financial market models. The selection criteria for moments are applied for efficient GMM estimation in univariate time series with martingale-difference errors, similar to those studied so far by Kuersteiner.

  19. Gaussian mixture models as flux prediction method for central receivers

    Science.gov (United States)

    Grobler, Annemarie; Gauché, Paul; Smit, Willie

    2016-05-01

    Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore, a novel method of flux prediction was developed by fitting Gaussian mixture models to flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat at a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
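
    Fitting such a mixture takes only a few lines with a standard library. The sketch below fits a two-component Gaussian mixture to synthetic flux samples with scikit-learn; the two-lobe profile stands in for measured or ray-traced data and is purely an assumption.

```python
# A minimal sketch of fitting a Gaussian mixture model to samples drawn from
# a heliostat flux profile, as an alternative to a single circular/elliptical
# Gaussian. The synthetic two-lobe profile is an assumption.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Synthetic flux samples: two overlapping lobes on the receiver plane (m)
samples = np.vstack([
    rng.multivariate_normal([-0.2, 0.0], [[0.04, 0.01], [0.01, 0.02]], 2000),
    rng.multivariate_normal([0.25, 0.1], [[0.03, 0.0], [0.0, 0.05]], 1500),
])

gmm = GaussianMixture(n_components=2, covariance_type="full").fit(samples)
# Flux density (up to total power) predicted at a receiver grid point
point = np.array([[0.0, 0.0]])
density = np.exp(gmm.score_samples(point))
print("weights:", np.round(gmm.weights_, 2), "density at origin:", density)
```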

  20. Automated Model Fit Method for Diesel Engine Control Development

    NARCIS (Netherlands)

    Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and a structured approach to fit the required combustion model parameters. Only a data set is required that is consider...

  1. Approximating methods for intractable probabilistic models: Applications in neuroscience

    DEFF Research Database (Denmark)

    Højen-Sørensen, Pedro

    2002-01-01

    This thesis investigates various methods for carrying out approximate inference in intractable probabilistic models. By capturing the relationships between random variables, the framework of graphical models hints at which sets of random variables pose a problem to the inferential step. The appro...

  2. Automated Model Fit Method for Diesel Engine Control Development

    NARCIS (Netherlands)

    Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and a structured approach to fit the required combustion model parameters. Only a data set is required that is...

  3. Stochastic Analysis Method of Sea Environment Simulated by Numerical Models

    Institute of Scientific and Technical Information of China (English)

    刘德辅; 焦桂英; 张明霞; 温书勤

    2003-01-01

    This paper proposes a stochastic analysis method for sea environments simulated by numerical models, covering quantities such as wave height, current field, design sea levels and longshore sediment transport. Uncertainty and sensitivity analysis of the input and output factors of the numerical models, their long-term distributions and confidence intervals are described in this paper.

  4. Vortex Tube Modeling Using the System Identification Method

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jaeyoung; Jeong, Jiwoong; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of); Im, Seokyeon [Tongmyong Univ., Busan (Korea, Republic of)

    2017-05-15

    In this study, a vortex tube system model is developed to predict the temperatures of the hot and cold sides. The vortex tube model is developed based on the system identification method; the model used in this work to design the vortex tube is of ARX type (Auto-Regressive with eXtra inputs). The derived polynomial model is validated against experimental data to verify the overall model accuracy. It is also shown that the derived model passes the stability test. It is confirmed that the derived model closely mimics the physical behavior of the vortex tube in both static and dynamic numerical experiments, obtained by changing the angle of the low-temperature-side throttle valve, clearly showing temperature separation. These results imply that system identification based modeling can be a promising approach for the prediction of complex physical systems, including the vortex tube.
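
    ARX identification reduces to linear least squares once lagged outputs and inputs are stacked into a regressor matrix. The sketch below identifies an assumed second-order toy system standing in for the vortex tube; the model orders, signals, and noise level are all illustrative.

```python
# A minimal sketch of ARX (Auto-Regressive with eXtra input) identification
# by least squares. The "true" system and signals are assumptions.
import numpy as np

rng = np.random.default_rng(5)
N = 500
u = rng.uniform(-1, 1, N)                     # valve-angle input (assumed)
y = np.zeros(N)
for t in range(2, N):                         # toy system generating the data
    y[t] = 1.2 * y[t-1] - 0.4 * y[t-2] + 0.5 * u[t-1] + rng.normal(0, 0.01)

na, nb = 2, 1                                 # ARX model orders
rows = list(range(max(na, nb) + 1, N))
# Regressor matrix: columns are y[t-1..t-na] and u[t-1..t-nb]
Phi = np.column_stack([y[[t - i for t in rows]] for i in range(1, na + 1)] +
                      [u[[t - i for t in rows]] for i in range(1, nb + 1)])
theta, *_ = np.linalg.lstsq(Phi, y[rows], rcond=None)
print("estimated [a1, a2, b1]:", np.round(theta, 3))   # ~[1.2, -0.4, 0.5]
```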

  5. A Method for Inducing Process Models from Qualitative Data.

    Science.gov (United States)

    Wildemuth, Barbara M.

    1990-01-01

    Describes a method for inducing a theoretical model of a series of events that occur in an ongoing process. A data analysis method that transforms narrative data into event histories is explained, a spreadsheet display that compares event histories is described, and future research is suggested. (17 references) (LRW)

  6. Semiparametric modeling and analysis of longitudinal method comparison data.

    Science.gov (United States)

    Rathnayake, Lasitha N; Choudhary, Pankaj K

    2017-02-19

    Studies comparing two or more methods of measuring a continuous variable are routinely conducted in biomedical disciplines with the primary goal of measuring agreement between the methods. Often, the data are collected by following a cohort of subjects over a period of time. This gives rise to longitudinal method comparison data where there is one observation trajectory for each method on every subject. It is not required that observations from all methods be available at each observation time. The multiple trajectories on the same subjects are dependent. We propose modeling the trajectories nonparametrically through penalized regression splines within the framework of mixed-effects models. The model also uses random effects of subjects and their interactions to capture dependence in observations from the same subjects. It additionally allows the within-subject errors of each method to be correlated. It is fit using the method of maximum likelihood. Agreement between the methods is evaluated by performing inference on measures of agreement, such as concordance correlation coefficient and total deviation index, which are functions of parameters of the assumed model. Simulations indicate that the proposed methodology performs reasonably well for 30 or more subjects. Its application is illustrated by analyzing a dataset of percentage body fat measurements. Copyright © 2017 John Wiley & Sons, Ltd.

  7. A least squares estimation method for the linear learning model

    NARCIS (Netherlands)

    B. Wierenga (Berend)

    1978-01-01

    The author presents a new method for estimating the parameters of the linear learning model. The procedure, essentially a least squares method, is easy to carry out and avoids certain difficulties of earlier estimation procedures. Applications to three different data sets are reported.

  8. Hierarchical modelling for the environmental sciences statistical methods and applications

    CERN Document Server

    Clark, James S

    2006-01-01

    New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.

  9. A review of propeller modelling techniques based on Euler methods

    NARCIS (Netherlands)

    Zondervan, G.J.D.

    1998-01-01

    Future generation civil aircraft will be powered by new, highly efficient propeller propulsion systems. New, advanced design tools like Euler methods will be needed in the design process of these aircraft. This report describes the application of Euler methods to the modelling of flowfields generated by propellers.

  10. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    OpenAIRE

    Frantisek Jelenciak; Michael Gerke; Ulrich Borgolte

    2015-01-01

    This article describes the projection equivalent method (PEM) as a specific and relatively simple approach for the modelling of aircraft dynamics. The PEM makes it possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. A characteristic of the PEM is that, in principle, it provides an acceptable regression model of aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a...

  11. ALTERNATING DIRECTION FINITE ELEMENT METHOD FOR SOME REACTION DIFFUSION MODELS

    Institute of Scientific and Technical Information of China (English)

    江成顺; 刘蕴贤; 沈永明

    2004-01-01

    This paper is concerned with some nonlinear reaction-diffusion models. To solve this kind of model, the modified Laplace finite element scheme and the alternating direction finite element scheme are established for the system of partial differential equations, and the finite difference method is utilized for the ordinary differential equation in the models. Moreover, by the theory and technique of a priori estimates for differential equations, the convergence analyses and the optimal L2-norm error estimates are demonstrated.

  12. ABOUT A MODELING METHOD OF AN AUGER GEAR IN SOLIDWORKS

    Directory of Open Access Journals (Sweden)

    Cătălin IANCU

    2016-12-01

    Full Text Available In this paper, a method used in SOLIDWORKS for modeling special items such as auger gears is presented, together with the steps to be taken in order to obtain a better design. The features used for modeling are presented, followed by the steps that must be taken to obtain the 3D model of a coil, the whole auger gear, and the unfolded coil for subsequent sheet-metal cutting.

  13. Method of product portfolio analysis based on optimization models

    Directory of Open Access Journals (Sweden)

    V.M. Lozyuk

    2011-12-01

    Full Text Available The research is devoted to optimization of the structure of the product portfolio of a trading company using the principles of investment modeling. We further developed models of investment portfolio optimization, applying the well-known Markowitz and Sharpe methods to determine the optimal portfolio of a trading company. Adapted to the goods market, the models in this study can be applied to the business of trading companies.
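
    A Markowitz-style selection problem of the kind this abstract adapts can be stated as a small quadratic program. The sketch below minimizes portfolio variance subject to a target margin; the product margins, covariance matrix, and target are invented for illustration.

```python
# Markowitz-style minimum-variance "product portfolio" with a target-return
# constraint (illustrative only; all numbers are hypothetical).
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])           # hypothetical product-line margins
cov = np.array([[0.010, 0.002, 0.001],
                [0.002, 0.020, 0.004],
                [0.001, 0.004, 0.015]])      # hypothetical covariance of margins
target = 0.10

res = minimize(
    lambda w: w @ cov @ w,                   # portfolio variance to minimize
    x0=np.ones(3) / 3,
    bounds=[(0, 1)] * 3,                     # no "short positions" in goods
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1},
                 {"type": "eq", "fun": lambda w: w @ mu - target}],
)
print("weights:", res.x.round(3), "variance:", round(res.fun, 5))
```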

  14. Online prediction model based on the SVD-KPCA method.

    Science.gov (United States)

    Elaissi, Ilyes; Jaffel, Ines; Taouali, Okba; Messaoud, Hassani

    2013-01-01

    This paper proposes a new method for the online identification of a nonlinear system modelled in a Reproducing Kernel Hilbert Space (RKHS). The proposed SVD-KPCA method uses the Singular Value Decomposition (SVD) technique to update the principal components. The Reduced Kernel Principal Component Analysis (RKPCA) is then used to approximate the principal components, which represent the observations selected by the KPCA method. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Comparison of Parameter Estimation Methods for Transformer Weibull Lifetime Modelling

    Institute of Scientific and Technical Information of China (English)

    ZHOU Dan; LI Chengrong; WANG Zhongdong

    2013-01-01

    The two-parameter Weibull distribution is the most widely adopted lifetime model for power transformers. An appropriate parameter estimation method is essential to guarantee the accuracy of a derived Weibull lifetime model. Six popular parameter estimation methods (the maximum likelihood estimation method, two median rank regression methods, one regressing X on Y and the other regressing Y on X, the Kaplan-Meier method, the method based on the cumulative hazard plot, and Li's method) are reviewed and compared in order to find the optimal one for transformer Weibull lifetime modelling. The comparison took several different scenarios into consideration: 10 000 sets of lifetime data, each with a sample size of 40-1000 and a censoring rate of 90%, were obtained by Monte Carlo simulation for each scenario. The scale and shape parameters of the Weibull distribution estimated by the six methods, as well as their mean values, median values and 90% confidence bands, are obtained. A cross comparison of these results reveals that, among the six methods, the maximum likelihood method is the best one, since it provides the most accurate Weibull parameters, i.e. parameters having the smallest bias in both mean and median values, as well as the shortest 90% confidence band. The maximum likelihood method is therefore recommended over the other methods for transformer Weibull lifetime modelling.
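
    The recommended maximum likelihood fit is available directly in scipy for complete samples. The sketch below fits a two-parameter Weibull model to synthetic uncensored lifetimes; handling the paper's 90% censoring would require a censored likelihood, which scipy's plain `fit` does not provide.

```python
# Maximum-likelihood fit of a two-parameter Weibull lifetime model on synthetic,
# uncensored data (the paper's 90%-censored scenarios need a censored likelihood).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_shape, true_scale = 2.5, 40.0          # hypothetical transformer lifetimes (years)
lifetimes = true_scale * rng.weibull(true_shape, size=500)

# floc=0 fixes the location parameter, leaving the usual 2-parameter model.
shape, loc, scale = stats.weibull_min.fit(lifetimes, floc=0)
print(f"MLE shape ≈ {shape:.2f}, scale ≈ {scale:.1f}")
```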

  16. A point cloud modeling method based on geometric constraints mixing the robust least squares method

    Science.gov (United States)

    Yue, JIanping; Pan, Yi; Yue, Shun; Liu, Dapeng; Liu, Bin; Huang, Nan

    2016-10-01

    The advent of 3D laser scanning technology has provided a new method for the acquisition of spatial 3D information, and it has been widely used in surveying and mapping engineering owing to its automation and high precision. Processing 3D laser scanning data mainly involves field data acquisition, in-office registration of the laser scans, and later 3D modeling and integration of the data. Much research, both domestic and foreign, has been devoted to point cloud modeling. Surface reconstruction techniques mainly include point-based shapes, triangle meshes, triangular Bezier surface models and rectangular surface models; neural networks and alpha shapes have also been used in surface reconstruction. These methods, however, often focus on fitting single surfaces, automatically or manually, block by block, which ignores the integrity of the model. This leads to serious problems in the stitched model: surfaces fitted separately often fail to satisfy well-known geometric constraints, such as parallelism, perpendicularity, a fixed angle, or a fixed distance. Special modeling theories such as dimension constraints and position constraints are still not widely used. One traditional modeling method that adds geometric constraints combines the penalty function method with the Levenberg-Marquardt algorithm (L-M algorithm), and its stability is good; in the course of this research, however, it was found to be strongly influenced by the initial value. In this paper, we propose an improved point cloud modeling method that takes geometric constraints into account. We first apply robust least squares to improve the accuracy of the initial value, then use the penalty function method to transform the constrained optimization problem into an unconstrained one, and finally solve the problem using the L-M algorithm. The experimental results
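
    The penalty-plus-Levenberg-Marquardt scheme described above can be illustrated in two dimensions: fit two lines to noisy point sets while a weighted penalty residual enforces perpendicularity, with a plain least-squares fit supplying the initial value. The data and penalty weight are synthetic; this is a sketch of the scheme, not the paper's 3D pipeline.

```python
# Penalty-function fitting with Levenberg-Marquardt: two 2-D lines are fitted
# while a weighted residual enforces perpendicularity (a1*a2 = -1).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x1 = np.linspace(0, 10, 50); y1 = 2.0 * x1 + 1.0 + 0.05 * rng.standard_normal(50)
x2 = np.linspace(0, 10, 50); y2 = -0.5 * x2 + 4.0 + 0.05 * rng.standard_normal(50)

def residuals(p, w_penalty=1e3):
    a1, b1, a2, b2 = p
    return np.concatenate([
        y1 - (a1 * x1 + b1),                  # fit residuals, line 1
        y2 - (a2 * x2 + b2),                  # fit residuals, line 2
        [np.sqrt(w_penalty) * (a1 * a2 + 1)]  # penalty: perpendicularity constraint
    ])

# An unconstrained least-squares fit of each line supplies the initial value.
p0 = list(np.polyfit(x1, y1, 1)) + list(np.polyfit(x2, y2, 1))
sol = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt
a1, b1, a2, b2 = sol.x
print(f"a1*a2 = {a1 * a2:.4f}  (target -1)")
```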

  17. Projection methods for the numerical solution of Markov chain models

    Science.gov (United States)

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span(v, Av, ..., A^(m-1)v). These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
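
    As a concrete instance of the Krylov projection idea, the sketch below runs an Arnoldi iteration on A = P^T for a random 100-state chain and takes the Ritz vector whose Ritz value is closest to 1 as the approximate stationary distribution. This is an illustration of the principle, not the paper's implementation.

```python
# Krylov (Arnoldi) approximation of the stationary distribution pi of a Markov
# chain, pi P = pi: project P.T onto a small Krylov subspace and extract the
# Ritz pair nearest the eigenvalue 1. The 100-state chain is random.
import numpy as np

def arnoldi(A, v0, m):
    n = len(v0)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(3)
P = rng.random((100, 100)); P /= P.sum(axis=1, keepdims=True)   # row-stochastic

m = 20
V, H = arnoldi(P.T, np.ones(100), m)
vals, vecs = np.linalg.eig(H[:m, :m])
k = np.argmin(abs(vals - 1.0))               # Ritz value nearest 1
pi = np.real(V[:, :m] @ vecs[:, k]); pi /= pi.sum()
print("residual ||pi P - pi|| =", np.linalg.norm(pi @ P - pi))
```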

  18. Solving Volterra's Population Model Using New Second Derivative Multistep Methods

    Directory of Open Access Journals (Sweden)

    K. Parand

    2008-01-01

    Full Text Available In this study, new second derivative multistep methods (denoted SDMM) are used to solve Volterra's model for the population growth of a species within a closed system. This model is a nonlinear integro-differential equation in which the integral term represents the effect of toxin accumulation. The model is first converted to a nonlinear ordinary differential equation, and then the new SDMM, which has good stability and accuracy properties, is applied to solve this equation. We compare this method with others and show that the new SDMM gives excellent results.
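
    The conversion step mentioned in the abstract is easy to make concrete: with y(t) = integral of u from 0 to t, Volterra's model kappa*u' = u - u^2 - u*y becomes an ordinary differential system. The sketch below solves it with scipy's generic integrator rather than the paper's SDMM; kappa and the initial population are illustrative values.

```python
# Volterra's population model kappa*u' = u - u^2 - u*y, with y(t) the running
# integral of u, converted to an ODE system and solved with a generic
# integrator (not the paper's SDMM). Parameter values are illustrative.
from scipy.integrate import solve_ivp

kappa = 0.1

def rhs(t, z):
    u, y = z
    return [(u - u * u - u * y) / kappa, u]   # u' from the model, y' = u

sol = solve_ivp(rhs, (0.0, 5.0), [0.1, 0.0], rtol=1e-8, atol=1e-10)
print(f"peak population u_max ≈ {sol.y[0].max():.4f}")
```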

  19. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  20. An improved optimal elemental method for updating finite element models

    Institute of Scientific and Technical Information of China (English)

    Duan Zhongdong(段忠东); Spencer B.F.; Yan Guirong(闫桂荣); Ou Jinping(欧进萍)

    2004-01-01

    The optimal matrix method and optimal elemental method used to update finite element models may not provide accurate results when the test modal model is incomplete, as is often the case in practice. An improved optimal elemental method is presented that defines a new objective function and, as a byproduct, circumvents the need for mass-normalized modal shapes, which are also not readily available in practice. To solve the group of nonlinear equations created by the improved optimal method, the Lagrange multiplier method and the Matlab function fmincon are employed. To deal with actual complex structures, the float-encoding genetic algorithm (FGA) is introduced to enhance the capability of the improved method. Two examples, a 7-degree-of-freedom (DOF) mass-spring system and a 53-DOF planar frame, are updated using the improved method. The example results demonstrate the advantages of the improved method over existing optimal methods, and show that the genetic algorithm is an effective way to update the models used for actual complex structures.

  1. Fast and stable numerical method for neuronal modelling

    Science.gov (United States)

    Hashemi, Soheil; Abdolali, Ali

    2016-11-01

    Excitable cell modelling is of prime interest in predicting and targeting neural activity. The two main limits in solving the related equations are the speed and stability of the numerical method. Since there is a tradeoff between accuracy and speed, most previously presented methods for solving partial differential equations (PDEs) focus on one side; more speed means more accurate simulations and therefore better device design. By considering the variables of the finite-difference equations at the proper times and calculating the unknowns in a specific sequence, a fast, stable and accurate method for solving neural partial differential equations is introduced in this paper. The propagation of the action potential in a giant axon is studied with the proposed method and with traditional methods, and the speed, consistency and stability of the methods are compared and discussed. The proposed method is as fast as forward methods, which are known as the fastest methods, and as stable as backward methods, which are stable under any circumstances. Owing to its speed and stability, complex structures can be simulated with the proposed method.
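
    The forward/backward tradeoff this abstract refers to can be seen on a toy problem. The sketch below runs explicit and implicit Euler on the 1-D diffusion equation (a stand-in for the cable-type neural PDE) with a time step that violates the explicit stability limit; it illustrates the tradeoff only and is not the authors' hybrid scheme.

```python
# Explicit (forward) vs implicit (backward) Euler for 1-D diffusion
# u_t = D u_xx: the explicit scheme diverges when r = D dt/dx^2 > 0.5,
# while the implicit scheme stays bounded for any dt.
import numpy as np

D, dx, nx, steps = 1.0, 0.1, 50, 200
dt = 0.02                                    # violates explicit limit dt <= dx^2/(2D)
r = D * dt / dx**2                           # here r = 2

u0 = np.zeros(nx); u0[nx // 2] = 1.0         # initial impulse

u = u0.copy()                                # explicit update
for _ in range(steps):
    u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
print("explicit max |u|:", abs(u).max())     # enormous: the scheme has blown up

L = (np.diag(-2 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
     + np.diag(np.ones(nx - 1), -1))         # discrete Laplacian
A = np.eye(nx) - r * L                       # implicit update: solve A u_new = u
u = u0.copy()
for _ in range(steps):
    u = np.linalg.solve(A, u)
print("implicit max |u|:", abs(u).max())     # stays bounded
```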

  2. Probability of detection models for eddy current NDE methods

    Energy Technology Data Exchange (ETDEWEB)

    Rajesh, S.N.

    1993-04-30

    The development of probability of detection (POD) models for a variety of nondestructive evaluation (NDE) methods is motivated by a desire to quantify the variability introduced during the process of testing. Sources of variability involved in eddy current methods of NDE include those caused by variations in liftoff, material properties, probe canting angle, scan format, surface roughness and measurement noise. This thesis presents a comprehensive POD model for eddy current NDE. Eddy current methods of nondestructive testing are used widely in industry to inspect a variety of nonferromagnetic and ferromagnetic materials, so the development of a comprehensive POD model is of significant importance. The model incorporates several sources of variability characterized by a multivariate Gaussian distribution and employs finite element analysis to predict the signal distribution. The method of mixtures is then used for estimating optimal threshold values. The research demonstrates the use of a finite element model within a probabilistic framework to predict the spread in the measured signal for eddy current nondestructive methods. Using the signal distributions for various flaw sizes, the POD curves for varying defect parameters have been computed. In contrast to experimental POD models, the cost of generating such curves is very low, and complex defect shapes can be handled very easily. The results are also operator independent.

  3. Effects of Sample Size, Estimation Methods, and Model Specification on Structural Equation Modeling Fit Indexes.

    Science.gov (United States)

    Fan, Xitao; Wang, Lin; Thompson, Bruce

    1999-01-01

    A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)

  4. A cumulative entropy method for distribution recognition of model error

    Science.gov (United States)

    Liang, Yingjie; Chen, Wen

    2015-02-01

    This paper develops a cumulative entropy method (CEM) to recognize the most suitable distribution for model error. In terms of the CEM, the Lévy stable distribution is employed to capture the statistical properties of model error. The strategies are tested on 250 experiments of axially loaded CFT steel stub columns in conjunction with four building codes: Japan (AIJ, 1997), China (DL/T, 1999), Eurocode 4 (EU4, 2004), and the United States (AISC, 2005). The cumulative entropy method is validated as more computationally efficient than the Shannon entropy method. Compared with the Kolmogorov-Smirnov test and the root mean square deviation, the CEM provides an alternative and powerful model selection criterion for recognizing the most suitable distribution for the model error.

  5. Modelling of packet traffic with matrix analytic methods

    DEFF Research Database (Denmark)

    Andersen, Allan T.

    1995-01-01

    scenario was modelled using Markovian models. The Ordinary Differential Equations arising from these models were solved numerically. The results obtained seemed very similar to those obtained using a different method in previous work by Akinpelu & Skoog 1985. Recent measurement studies of packet traffic...... process. A heuristic formula for the tail behaviour of a single server queue fed by a superposition of renewal processes has been evaluated. The evaluation was performed by applying Matrix Analytic methods. The heuristic formula has applications in the Call Admission Control (CAC) procedure of the future...... network services i.e. 800 and 900 calls and advanced mobile communication services. The Markovian Arrival Process (MAP) has been used as a versatile tool to model the packet arrival process. Applying the MAP facilitates the use of Matrix Analytic methods to obtain performance measures associated...

  6. Generalized framework for context-specific metabolic model extraction methods

    Directory of Open Access Journals (Sweden)

    Semidán Robaina Estévez

    2014-09-01

    Full Text Available Genome-scale metabolic models are increasingly applied to investigate the physiology not only of simple prokaryotes, but also of eukaryotes such as plants, which are characterized by compartmentalized cells of multiple types. While genome-scale models aim at including the entirety of known metabolic reactions, mounting evidence has indicated that only a subset of these reactions is active in a given context, such as a developmental stage, cell type, or environment. As a result, several methods have been proposed to reconstruct context-specific models from existing genome-scale models by integrating various types of high-throughput data. Here we present a mathematical framework that puts all existing methods under one umbrella, provides the means to better understand their functioning, highlights their similarities and differences, and helps users select the method most suitable for an application.

  7. Extraction method for parasitic capacitances and inductances of HEMT models

    Science.gov (United States)

    Zhang, HengShuang; Ma, PeiJun; Lu, Yang; Zhao, BoChao; Zheng, JiaXin; Ma, XiaoHua; Hao, Yue

    2017-03-01

    A new method to extract the parasitic capacitances and inductances of high electron-mobility transistor (HEMT) models is proposed in this paper. Compared with the conventional extraction method, the depletion layer is modeled as a physically meaningful capacitance model, and the extrinsic values obtained are much closer to the actual results. In order to simulate the high-frequency behaviour with higher precision, series parasitic inductances are introduced into the cold pinch-off model, which is used to extract the capacitances at low frequency, and the reactive elements can be determined simultaneously over the measured frequency range. The values obtained by this method can be used to establish a 16-element small-signal equivalent circuit model under different bias conditions. The results show good agreement between the simulated and measured scattering parameters up to 30 GHz.

  8. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

    Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  9. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...

  10. FINITE VOLUME METHOD OF MODELLING TRANSIENT GROUNDWATER FLOW

    Directory of Open Access Journals (Sweden)

    N. Muyinda

    2014-01-01

    Full Text Available In the field of computational fluid dynamics, the finite volume method is dominant over other numerical techniques like the finite difference and finite element methods because the underlying physical quantities are conserved at the discrete level. In the present study, the finite volume method is used to solve an isotropic transient groundwater flow model to obtain hydraulic heads and flow through an aquifer. The objective is to discuss the theory of the finite volume method and its applications in groundwater flow modelling. To achieve this, an orthogonal grid with quadrilateral control volumes has been used to simulate the model using mixed boundary conditions from Bwaise III, a Kampala suburb. Results show that flow occurs from regions of high hydraulic head to regions of low hydraulic head until a steady head value is achieved.
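
    For a uniform grid and uniform conductivity, the control-volume flux balance reduces to the familiar five-point stencil. The sketch below solves a steady, isotropic head field with fixed heads on the left and right and no-flow top and bottom boundaries; the paper's model is transient with mixed boundary conditions, and the numbers here are hypothetical.

```python
# Finite-volume flux balance for steady, isotropic groundwater flow
# div(K grad h) = 0 on a uniform orthogonal grid of quadrilateral control
# volumes. With uniform K, "sum of face fluxes = 0" per cell means each head
# equals the mean of its four neighbours. All values are hypothetical.
import numpy as np

nx, ny, K = 40, 20, 5.0                      # cells and conductivity (m/day)
h = np.zeros((ny, nx))
h[:, 0], h[:, -1] = 10.0, 2.0                # fixed heads: flow left to right

for _ in range(5000):                        # simple iterative sweeps
    h[1:-1, 1:-1] = 0.25 * (h[1:-1, 2:] + h[1:-1, :-2]
                            + h[2:, 1:-1] + h[:-2, 1:-1])
    h[0, 1:-1], h[-1, 1:-1] = h[1, 1:-1], h[-2, 1:-1]   # no-flow top/bottom

q = -K * (h[:, 1] - h[:, 0])                 # Darcy flux across the first faces
print("head range:", h.min(), "-", h.max(), "| mean face flux:", q.mean().round(3))
```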

  11. Method and apparatus for modeling, visualization and analysis of materials

    KAUST Repository

    Aboulhassan, Amal

    2016-08-25

    A method, apparatus, and computer readable medium are provided for modeling of materials and visualization of properties of the materials. An example method includes receiving data describing a set of properties of a material, and computing, by a processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling, by the processor, the material using the geometric features and the extracted particle paths. The example method further includes generating, by the processor and based on the geometric modeling of the material, one or more visualizations regarding the material, and causing display, by a user interface, of the one or more visualizations.

  12. A New Method for Grey Forecasting Model Group

    Institute of Scientific and Technical Information of China (English)

    李峰; 王仲东; 宋中民

    2002-01-01

    In order to describe the characteristics of some systems, such as economic and product forecasting processes, large amounts of discrete data may be used. Although the data are discrete, the underlying law can be found by suitable methods. For a series whose dispersion is large and whose overall tendency is ascending, a new method for a grey forecasting model group is given by grey system theory. The method first transforms the original data, chooses some clique values and divides the original data into groups according to the different clique values; it then establishes a non-equigap GM(1,1) model for each group and searches the forecasting range of the original data from the solution of the model. At the end of the paper, the reliability of the forecast values is assessed, and it is shown that the method is feasible.
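
    The core GM(1,1) construction that the grouped, non-equigap variant builds on is short enough to sketch: accumulate the series, estimate the develop coefficient and grey input by least squares, evaluate the time response function, and difference back. The demand series below is hypothetical, and the sketch is the standard equal-interval model, not the paper's non-equigap variant.

```python
# Standard equal-interval GM(1,1) grey forecasting model (illustrative series).
import numpy as np

def gm11_forecast(x0, horizon):
    x1 = np.cumsum(x0)                                 # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # mean generating sequence
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # develop coeff. a, grey input b
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response function
    return np.diff(x1_hat, prepend=0.0)                # inverse accumulation

x0 = np.array([112.0, 118.0, 127.0, 139.0, 146.0, 157.0])  # hypothetical series
print(gm11_forecast(x0, horizon=3).round(1))
```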

  13. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  14. Similar Constructive Method for Solving a nonlinearly Spherical Percolation Model

    Directory of Open Access Journals (Sweden)

    WANG Yong

    2013-01-01

    Full Text Available In view of the nonlinear spherical percolation problem of a dual-porosity reservoir, a mathematical model considering three types of outer boundary conditions (closed, constant pressure, and infinite) was established in this paper. The mathematical model was linearized by a substitution of variables and became a boundary value problem of an ordinary differential equation in Laplace space after a Laplace transformation. It was verified that such a boundary value problem with any one type of outer boundary has a solution with a similar structure, and a new method, the Similar Constructive Method, was obtained for solving such boundary value problems. By this method, solutions with a similar structure for the other two outer boundary conditions were obtained. The Similar Constructive Method raises the efficiency of solving such percolation models.

  15. Modeling of composite piezoelectric structures with the finite volume method.

    Science.gov (United States)

    Bolborici, Valentin; Dawson, Francis P; Pugh, Mary C

    2012-01-01

    Piezoelectric devices, such as piezoelectric traveling-wave rotary ultrasonic motors, have composite piezoelectric structures. A composite piezoelectric structure consists of a combination of two or more bonded materials, at least one of which is a piezoelectric transducer. Piezoelectric structures have mainly been numerically modeled using the finite element method. An alternative approach based on the finite volume method offers the following advantages: 1) the ordinary differential equations resulting from the discretization process can be interpreted directly as corresponding circuits; and 2) phenomena occurring at boundaries can be treated exactly. This paper presents a method for implementing the boundary conditions between the bonded materials in composite piezoelectric structures modeled with the finite volume method. The paper concludes with a modeling example of a unimorph structure.

  16. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  17. NEW METHOD FOR LOW ORDER SPECTRAL MODEL AND ITS APPLICATION

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In order to overcome the deficiency of the classical method for low order spectral models, a new method is advanced. By calculating the multiple correlation coefficients between combinations of different functions and the recorded data under the least squares criterion, the truncated functions which best reflect the studied physical phenomenon are objectively distilled from the data. The new method overcomes the deficiency of artificially selecting the truncated functions in the classical low order spectral model. Applying the new method to study the inter-annual variation of the summer atmospheric circulation over the Northern Hemisphere, the truncated functions were obtained from the atmospheric circulation data of June 1994 and June 1998, and the mechanisms of the two summers' atmospheric circulation variations over the Northern Hemisphere were obtained with a two-layer quasi-geostrophic baroclinic equation.

  18. Estimation of pump operational state with model-based methods

    Energy Technology Data Exchange (ETDEWEB)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina [Institute of Energy Technology, Lappeenranta University of Technology, P.O. Box 20, FI-53851 Lappeenranta (Finland); Kestilae, Juha [ABB Drives, P.O. Box 184, FI-00381 Helsinki (Finland)

    2010-06-15

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently. (author)
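
    One way to realize the adjustable-model idea described above is a power-curve method: scale a nominal shaft-power-versus-flow characteristic with the affinity laws and invert it at the operating point estimated by the frequency converter. The sketch below does exactly that; the quadratic characteristic, nominal speed, and operating point are invented for illustration and do not come from the paper.

```python
# Flow-rate estimation from converter estimates of speed and shaft power:
# scale a nominal P(Q) characteristic with the affinity laws (P ~ n^3, Q ~ n)
# and invert it. Curve and numbers are hypothetical.
import numpy as np

Q_nom = np.linspace(0.0, 50.0, 200)                 # flow at nominal speed (l/s)
P_nom = 2.0 + 0.12 * Q_nom + 0.002 * Q_nom**2       # made-up shaft power curve (kW)
n_nom = 1450.0                                      # nominal speed (rpm)

def estimate_flow(n_est, P_est):
    scale = n_est / n_nom
    P_curve = P_nom * scale**3                      # affinity-scaled power curve
    Q_curve = Q_nom * scale                         # affinity-scaled flow axis
    return np.interp(P_est, P_curve, Q_curve)       # invert P(Q) (monotonic here)

print(f"estimated flow: {estimate_flow(1200.0, 4.1):.1f} l/s")
```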

  19. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03‧N, 12°40‧E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and with the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  20. A Coupling Model of the Discontinuous Deformation Analysis Method and the Finite Element Method

    Institute of Scientific and Technical Information of China (English)

    ZHANG Ming; YANG Heqing; LI Zhongkui

    2005-01-01

    Neither the finite element method nor the discontinuous deformation analysis method can solve problems very well in rock mechanics and engineering due to their extreme complexities. A coupling method combining both of them should have wider applicability. Such a model coupling the discontinuous deformation analysis method and the finite element method is proposed in this paper. In the model, so-called line blocks are introduced to deal with the interaction via the common interfacial boundary of the discontinuous deformation analysis domain with the finite element domain. The interfacial conditions during the incremental iteration process are satisfied by means of the line blocks. The requirement of gradual small displacements in each incremental step of this coupling method is met through a displacement control procedure. The model is simple in concept and is easy in numerical implementation. A numerical example is given. The displacement obtained by the coupling method agrees well with those obtained by the finite element method, which shows the rationality of this model and the validity of the implementation scheme.

  1. Lattice Boltzmann Method Simulation of 3-D Melting Using Double MRT Model with Interfacial Tracking Method

    CERN Document Server

    Li, Zheng; Zhang, Yuwen

    2016-01-01

    Three-dimensional melting problems are investigated numerically with the Lattice Boltzmann method (LBM). For accuracy and stability of the algorithm, Multiple-Relaxation-Time (MRT) models are employed to simplify the collision term in the LBM, and the temperature and velocity fields are solved with separate distribution functions. 3-D melting problems are solved with double MRT models for the first time in this article. The key point in the numerical simulation of a melting problem is the method used to obtain the location of the melting front, and this article uses an interfacial tracking method, which combines advantages of both deforming and fixed grid approaches. The location of the melting front was obtained by calculating the energy balance at the solid-liquid interface. Various 3-D conduction-controlled melting problems are solved first to verify the numerical method. The liquid fraction tendency and temperature distribution obtained from the numerical methods agree with the analytical result...

  2. Coarse Analysis of Microscopic Models using Equation-Free Methods

    DEFF Research Database (Denmark)

    Marschler, Christian

    -dimensional models. The goal of this thesis is to investigate such high-dimensional multiscale models and extract relevant low-dimensional information from them. Recently developed mathematical tools allow to reach this goal: a combination of so-called equation-free methods with numerical bifurcation analysis....... Applications include the learning behavior in the barn owl’s auditory system, traffic jam formation in an optimal velocity model for circular car traffic and oscillating behavior of pedestrian groups in a counter-flow through a corridor with narrow door. The methods do not only quantify interesting properties...... factor for the complexity of models, e.g., in real-time applications. With the increasing amount of data generated by computer simulations a challenge is to extract valuable information from the models in order to help scientists and managers in a decision-making process. Although the dynamics...

  3. Methods improvements incorporated into the SAPHIRE ASP models

    Energy Technology Data Exchange (ETDEWEB)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D. [Idaho National Engineering Lab., Idaho Falls, ID (United States)] [and others]

    1995-04-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements.

  4. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  5. Finite difference methods for coupled flow interaction transport models

    Directory of Open Access Journals (Sweden)

    Shelly McGee

    2009-04-01

    Full Text Available Understanding chemical transport in blood flow involves coupling the chemical transport process with flow equations describing the blood and plasma in the membrane wall. In this work, we consider a coupled two-dimensional model with the transient Navier-Stokes equations to model the blood flow in the vessel and Darcy flow to model the plasma flow through the vessel wall. The advection-diffusion equation is coupled with the velocities from the flows in the vessel and wall, respectively, to model the transport of the chemical. The coupled chemical transport equations are discretized by the finite difference method, and the resulting system is solved using the additive Schwarz method. Development of the model and related analytical and numerical results are presented in this work.

  6. Evaluation of methods for modeling transcription-factor sequence specificity

    Science.gov (United States)

    Weirauch, Matthew T.; Cote, Atina; Norel, Raquel; Annala, Matti; Zhao, Yue; Riley, Todd R.; Saez-Rodriguez, Julio; Cokelaer, Thomas; Vedenko, Anastasia; Talukder, Shaheynoor; Bussemaker, Harmen J.; Morris, Quaid D.; Bulyk, Martha L.; Stolovitzky, Gustavo

    2013-01-01

    Genomic analyses often involve scanning for potential transcription-factor (TF) binding sites using models of the sequence specificity of DNA binding proteins. Many approaches have been developed to model and learn a protein’s binding specificity, but these methods have not been systematically compared. Here we applied 26 such approaches to in vitro protein binding microarray data for 66 mouse TFs belonging to various families. For 9 TFs, we also scored the resulting motif models on in vivo data, and found that the best in vitro–derived motifs performed similarly to motifs derived from in vivo data. Our results indicate that simple models based on mononucleotide position weight matrices learned by the best methods perform similarly to more complex models for most TFs examined, but fall short in specific cases (<10%). In addition, the best-performing motifs typically have relatively low information content, consistent with widespread degeneracy in eukaryotic TF sequence preferences. PMID:23354101

  7. Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models

    Science.gov (United States)

    Marquette, Michele L.; Sognier, Marguerite A.

    2013-01-01

    An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (i.e., bone and cardiac) from corresponding precursor cells.

  8. Robustness of Control System Tuned by Multiple Dominant Pole Method and Desired Model Method

    Directory of Open Access Journals (Sweden)

    Miloslav SPURNÝ

    2011-06-01

    Full Text Available In this article, two analytical analog PI controller tuning methods are briefly described and compared from the point of view of control system robustness for a first-order-plus-time-delay plant. The multiple dominant pole method and the desired model method were chosen for the comparison, and Matlab/Simulink was used to verify the robustness of the control systems.

  9. Methods and models in mathematical biology deterministic and stochastic approaches

    CERN Document Server

    Müller, Johannes

    2015-01-01

    This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and  branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.

  10. Coarse graining methods for spin net and spin foam models

    CERN Document Server

    Dittrich, Bianca; Martin-Benito, Mercedes

    2011-01-01

    We undertake first steps in making a class of discrete models of quantum gravity, spin foams, accessible to a large scale analysis by numerical and computational methods. In particular, we apply Migdal-Kadanoff and Tensor Network Renormalization schemes to spin net and spin foam models based on finite Abelian groups and introduce `cutoff models' to probe the fate of gauge symmetries under various such approximated renormalization group flows. For the Tensor Network Renormalization analysis, a new Gauss constraint preserving algorithm is introduced to improve numerical stability and aid physical interpretation. We also describe the fixed point structure and establish an equivalence of certain models.

  11. [Surgical treatment of temporal lobe epilepsy: a series of forty-three cases analysis].

    Science.gov (United States)

    Meneses, Murilo S; Rocha, Samanta B; Kowacs, Pedro A; Andrade, Nelson O; Santos, Heraldo L; Narata, Ana Paula; Bacchi, Ana Paula; Silva, Erasmo B; Simão, Cristiane; Hunhevicz, Sonival C

    2005-09-01

    Forty-three patients with drug-resistant epilepsy underwent temporal lobe epilepsy surgery at the Instituto de Neurologia de Curitiba from 1998 to 2003. Thirty-nine patients (90.6%) had mesial temporal sclerosis, and four had brain tumors. According to Engel's rating, 83.7% of the 37 patients with complete postoperative evaluation were classified as Class I (free of disabling seizures). Postoperative complications (18.6%) were evaluated: one case of surgical wound infection, one case of hydrocephalus, one case of cerebrospinal fluid fistula, two cases of transient palsy of the trochlear nerve, and one case of transient hemiparesis. No death related to epilepsy surgery occurred in our study.

  12. Bilateral Diabetic Knee Neuroarthropathy in a Forty-Year-Old Patient

    Directory of Open Access Journals (Sweden)

    Patrick Goetti

    2016-01-01

    Full Text Available Diabetic osteoarthropathy is a rare cause of neuropathic joint disease of the knee; bilateral involvement is even more exceptional. Diagnosis is often made late due to its unspecific symptoms and appropriate surgical management still needs to be defined, due to lack of evidence because of the disease’s low incidence. We report the case of a forty-year-old woman with history of diabetes type I who developed bilateral destructive Charcot knee arthropathy. Bilateral total knee arthroplasty was performed in order to achieve maximal functional outcome. Follow-up was marked by bilateral tibial periprosthetic fractures treated by osteosynthesis with a satisfactory outcome. The diagnosis of Charcot arthropathy should always be in mind when dealing with atraumatic joint destruction in diabetic patients. Arthroplasty should be considered as an alternative to arthrodesis in bilateral involvement in young patients.

  13. Bilateral Diabetic Knee Neuroarthropathy in a Forty-Year-Old Patient

    Science.gov (United States)

    Gallusser, Nicolas; Borens, Olivier

    2016-01-01

    Diabetic osteoarthropathy is a rare cause of neuropathic joint disease of the knee; bilateral involvement is even more exceptional. Diagnosis is often made late due to its unspecific symptoms and appropriate surgical management still needs to be defined, due to lack of evidence because of the disease's low incidence. We report the case of a forty-year-old woman with history of diabetes type I who developed bilateral destructive Charcot knee arthropathy. Bilateral total knee arthroplasty was performed in order to achieve maximal functional outcome. Follow-up was marked by bilateral tibial periprosthetic fractures treated by osteosynthesis with a satisfactory outcome. The diagnosis of Charcot arthropathy should always be in mind when dealing with atraumatic joint destruction in diabetic patients. Arthroplasty should be considered as an alternative to arthrodesis in bilateral involvement in young patients. PMID:27668112

  14. History of wheat cultivars released by Embrapa in forty years of research

    Directory of Open Access Journals (Sweden)

    Eduardo Caierão

    2014-11-01

    Full Text Available In forty years of genetic breeding of wheat, Embrapa (Brazilian Agricultural Research Corporation) has developed over a hundred new cultivars for different regions of Brazil. Information regarding the identification of these cultivars is often requested from Embrapa breeders. Data on the year of release, the name of the pre-commercial line, the cross made, and the company unit responsible for indication of the cultivar are not always easily accessible and are often scattered throughout different documents. The aim of this study was to conduct a historical survey of all the wheat cultivars released by Embrapa, aggregating the information in a single document. Since 1974, Embrapa has released 112 wheat cultivars, including 12 by Embrapa Soybean - CNPSo (Londrina, PR), 14 by Embrapa Cerrado - CPAC (Brasília, DF), 9 by Embrapa Agropecuária Oeste - CPAO (Dourados, MS), and 77 by Embrapa Wheat - CNPT (Passo Fundo, RS).

  15. Ecoimmunity in Darwin's finches: invasive parasites trigger acquired immunity in the medium ground finch (Geospiza fortis.

    Directory of Open Access Journals (Sweden)

    Sarah K Huber

    Full Text Available BACKGROUND: Invasive parasites are a major threat to island populations of animals. Darwin's finches of the Galápagos Islands are under attack by introduced pox virus (Poxvirus avium) and nest flies (Philornis downsi). We developed assays for parasite-specific antibody responses in Darwin's finches (Geospiza fortis), to test for relationships between adaptive immune responses to novel parasites and spatial-temporal variation in the occurrence of parasite pressure among G. fortis populations. METHODOLOGY/PRINCIPAL FINDINGS: We developed enzyme-linked immunosorbent assays (ELISAs) for the presence of antibodies in the serum of Darwin's finches specific to pox virus or Philornis proteins. We compared antibody levels between bird populations with and without evidence of pox infection (visible lesions), and among birds sampled before nesting (prior to nest-fly exposure) versus during nesting (with fly exposure). Birds from the Pox-positive population had higher levels of pox-binding antibodies. Philornis-binding antibody levels were higher in birds sampled during nesting. Female birds, which occupy the nest, had higher Philornis-binding antibody levels than males. The study was limited by an inability to confirm pox exposure independent of obvious lesions. However, the lasting effects of pox infection (e.g., scarring and lost digits) were expected to be reliable indicators of prior pox infection. CONCLUSIONS/SIGNIFICANCE: This is the first demonstration, to our knowledge, of parasite-specific antibody responses to multiple classes of parasites in a wild population of birds. Darwin's finches initiated acquired immune responses to novel parasites. Our study has vital implications for invasion biology and ecological immunology. The adaptive immune response of Darwin's finches may help combat the negative effects of parasitism. Alternatively, the physiological cost of mounting such a response could outweigh any benefits, accelerating population decline. Tests

  16. Ecoimmunity in Darwin's finches: invasive parasites trigger acquired immunity in the medium ground finch (Geospiza fortis).

    Science.gov (United States)

    Huber, Sarah K; Owen, Jeb P; Koop, Jennifer A H; King, Marisa O; Grant, Peter R; Grant, B Rosemary; Clayton, Dale H

    2010-01-06

    Invasive parasites are a major threat to island populations of animals. Darwin's finches of the Galápagos Islands are under attack by introduced pox virus (Poxvirus avium) and nest flies (Philornis downsi). We developed assays for parasite-specific antibody responses in Darwin's finches (Geospiza fortis), to test for relationships between adaptive immune responses to novel parasites and spatial-temporal variation in the occurrence of parasite pressure among G. fortis populations. We developed enzyme-linked immunosorbent assays (ELISAs) for the presence of antibodies in the serum of Darwin's finches specific to pox virus or Philornis proteins. We compared antibody levels between bird populations with and without evidence of pox infection (visible lesions), and among birds sampled before nesting (prior to nest-fly exposure) versus during nesting (with fly exposure). Birds from the Pox-positive population had higher levels of pox-binding antibodies. Philornis-binding antibody levels were higher in birds sampled during nesting. Female birds, which occupy the nest, had higher Philornis-binding antibody levels than males. The study was limited by an inability to confirm pox exposure independent of obvious lesions. However, the lasting effects of pox infection (e.g., scarring and lost digits) were expected to be reliable indicators of prior pox infection. This is the first demonstration, to our knowledge, of parasite-specific antibody responses to multiple classes of parasites in a wild population of birds. Darwin's finches initiated acquired immune responses to novel parasites. Our study has vital implications for invasion biology and ecological immunology. The adaptive immune response of Darwin's finches may help combat the negative effects of parasitism. Alternatively, the physiological cost of mounting such a response could outweigh any benefits, accelerating population decline. Tests of the fitness implications of parasite-specific immune responses in Darwin

  17. Analyzing Dyadic Data With Multilevel Modeling Versus Structural Equation Modeling: A Tale of Two Methods.

    Science.gov (United States)

    Ledermann, Thomas; Kenny, David A

    2017-02-06

    Multilevel modeling (MLM) and structural equation modeling (SEM) are the dominant methods for the analysis of dyadic data. Both methods are extensively reviewed for the widely used actor-partner interdependence model and the dyadic growth curve model, as well as other less frequently adopted models, including the common fate model and the mutual influence model. For each method, we discuss the analysis of distinguishable and indistinguishable members, the treatment of missing data, the standardization of effects, and tests of mediation. Even though there has been some blending of the 2 methods, each method has its own advantages and disadvantages, thus both should be in the toolbox of dyadic researchers. (PsycINFO Database Record

  18. Dynamical Monte Carlo method for stochastic epidemic models

    CERN Document Server

    Aiello, O E

    2002-01-01

    A new approach to dynamical Monte Carlo methods is introduced to simulate Markovian processes. We apply this approach to formulate and study an epidemic generalized SIRS model. The results are in excellent agreement with the fourth-order Runge-Kutta method in a region of deterministic solution. When local stochastic interactions are introduced, the Runge-Kutta method is not applicable, and we solve and check the model self-consistently with a stochastic version of the Euler method. The results are also analyzed under the herd-immunity concept.
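
    A dynamical Monte Carlo simulation of an SIRS-type epidemic can be written in a few lines with a Gillespie-style algorithm: draw an exponential waiting time from the total event rate, then pick the event in proportion to its rate. The sketch below is a standard stochastic simulation with illustrative parameters, not necessarily the authors' exact scheme.

```python
# Gillespie-style dynamical Monte Carlo for the SIRS model with rates
# beta (infection), gamma (recovery), xi (loss of immunity). Illustrative.
import numpy as np

rng = np.random.default_rng(4)
N, beta, gamma, xi = 1000, 0.3, 0.1, 0.05
S, I, R, t = 990, 10, 0, 0.0

while t < 200 and I > 0:
    rates = np.array([beta * S * I / N,    # S -> I
                      gamma * I,           # I -> R
                      xi * R])             # R -> S
    total = rates.sum()
    t += rng.exponential(1.0 / total)      # waiting time to the next event
    event = rng.choice(3, p=rates / total) # which event fires
    if event == 0:   S, I = S - 1, I + 1
    elif event == 1: I, R = I - 1, R + 1
    else:            R, S = R - 1, S + 1

print(f"t = {t:.1f}: S={S}, I={I}, R={R}")
```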

  19. Mathematical model for corundum single crystal growth by Verneuil method

    Science.gov (United States)

    Grzymkowski, Radosław; Mochnacki, Bohdan; Suchy, Józef

    1983-05-01

    A mathematical model which attempts to describe the complex process of monocrystallization by the Verneuil method is presented. The problem has been solved by the method of finite differences, making use of a certain modification of the mathematical description of Stefan's problem called the alternating phase truncation method [9]. The elaborated algorithm and the examples of solutions given at the end of the present study point to the usefulness of the presented method of numerical simulation for modern design and control of crystal production processes.

  20. A Novel Fast Method for Point-sampled Model Simplification

    Directory of Open Access Journals (Sweden)

    Cao Zhi

    2016-01-01

    Full Text Available A novel fast simplification method for point-sampled statue models is proposed. Simplification for 3D model reconstruction is a hot topic in the field of 3D surface construction, but it is difficult because the point clouds of many 3D models are very large, so running times become very long. In this paper, a two-stage simplification method is proposed. First, a feature-preserving non-uniform simplification method for cloud points is presented, which thins the data set to remove redundancy while preserving the features of the model. Second, an affinity clustering simplification method is used to classify each point of the cloud as a sharp point or a simple point. The advantage of Affinity Propagation clustering is that it passes messages among data points and processes them quickly. Together with re-sampling, it can dramatically reduce the duration of the process while keeping memory cost low. Both theoretical analysis and experimental results show that the proposed method is efficient and that the details of the surface are preserved well after simplification.
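
    The affinity-clustering stage can be sketched as follows: Affinity Propagation selects exemplar points by message passing, and the exemplars form the simplified cloud. This is a minimal stand-in using a synthetic point cloud and scikit-learn; the preference and damping values are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(1)
# Stand-in point cloud: noisy samples on a sphere (a real statue scan
# would be loaded here instead).
pts = rng.normal(size=(600, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts += rng.normal(scale=0.01, size=pts.shape)

# Affinity Propagation picks exemplar points by passing messages between
# points; the exemplars are kept as the simplified cloud.
# A lower (more negative) `preference` yields fewer exemplars.
ap = AffinityPropagation(damping=0.9, preference=-5.0, random_state=0).fit(pts)
simplified = ap.cluster_centers_

print(f"{len(pts)} points reduced to {len(simplified)} exemplars")
```

    Note that plain Affinity Propagation is quadratic in the number of points, which is why the paper applies it only after the feature-preserving pre-simplification stage.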

  1. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering...... that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what...... are the limitations of different types of models? This paper will provide examples of models that have been published in the literature for use across bioreactor scales, including computational fluid dynamics (CFD) and population balance models. Furthermore, the importance of good modeling practice...

  2. Modeling method and preliminary model of Asteroid Toutatis from Chang'E-2 optical images

    Science.gov (United States)

    Li, Xiang-Yu; Qiao, Dong

    2014-06-01

    Shape modeling is fundamental to the analysis of the dynamic environment and motion around an asteroid. Chang'E-2 successfully made a flyby of Asteroid 4179 Toutatis and obtained plenty of high-resolution images during the mission. In this paper, the modeling method and a preliminary model of Asteroid Toutatis are discussed. First, the optical images obtained by Chang'E-2 are analyzed, and terrain and silhouette features in the images are described. Then, a modeling method based on the previous radar model and preliminary information from the optical images is proposed, and a preliminary polyhedron model of Asteroid Toutatis is established. Finally, the spherical harmonic coefficients of Asteroid Toutatis based on the polyhedron model are obtained, and some parameters of the model are analyzed and compared. Although the model proposed in this paper is only a preliminary one, this work offers a valuable reference for future high-resolution models.

  3. A Pansharpening Method Based on HCT and Joint Sparse Model

    Directory of Open Access Journals (Sweden)

    XU Ning

    2016-04-01

    Full Text Available A novel fusion method based on the hyperspherical color transformation (HCT) and a joint sparsity model is proposed for further decreasing the spectral distortion of the fused image. In the method, an intensity component and the angles of each band of the multispectral image are first obtained by HCT, and then the intensity component is fused with the panchromatic image through the wavelet transform and the joint sparsity model. In the joint sparsity model, the redundant and complementary information of the different images can be efficiently extracted and employed to yield high quality results. Finally, the fused multispectral image is obtained by inverse wavelet and HCT transforms on the new lower frequency image and the angle components, respectively. Experimental results on Pleiades-1 and WorldView-2 satellite images indicate that the proposed method achieves remarkable results.

  4. Numerical Methods for the Lévy LIBOR model

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Skovmand, David

    The aim of this work is to provide fast and accurate approximation schemes for the Monte-Carlo pricing of derivatives in the Lévy LIBOR model of Eberlein and Özkan (2005). Standard methods can be applied to solve the stochastic differential equations of the successive LIBOR rates but the methods ...... This enables simultaneous calculation of derivative prices of different maturities using parallel computing. Secondly, the product terms occurring in the drift of a LIBOR market model driven by a jump process grow exponentially as a function of the number of rates, quickly rendering the model intractable. We ...... reduce this growth from exponential to quadratic in an approximation using truncated expansions of the product terms. We include numerical illustrations of the accuracy and speed of our method pricing caplets, swaptions and forward rate agreements........

  5. Quantitative Analysis of Polarimetric Model-Based Decomposition Methods

    Directory of Open Access Journals (Sweden)

    Qinghua Xie

    2016-11-01

    Full Text Available In this paper, we analyze the robustness of the parameter inversion provided by general polarimetric model-based decomposition methods from the perspective of a quantitative application. The general model and algorithm we have studied is the method proposed recently by Chen et al., which makes use of the complete polarimetric information and outperforms traditional decomposition methods in terms of feature extraction from land covers. Nevertheless, a quantitative analysis of the retrieved parameters from that approach suggests that further investigations are required in order to fully confirm the links between a physically-based model (i.e., approaches derived from the Freeman–Durden concept) and its outputs as intermediate products before any biophysical parameter retrieval is addressed. To this aim, we propose some modifications of the optimization algorithm employed for model inversion, including redefined boundary conditions, transformation of variables, and a different strategy for value initialization. A number of Monte Carlo simulation tests for typical scenarios are carried out and show that the parameter estimation accuracy of the proposed method is significantly increased with respect to the original implementation. Fully polarimetric airborne datasets at L-band acquired by the German Aerospace Center's (DLR's) experimental synthetic aperture radar (E-SAR) system were also used for testing purposes. The results show different qualitative descriptions of the same cover from six different model-based methods. According to the Bragg coefficient ratio (i.e., β), they are prone to provide wrong numerical inversion results, which could prevent any subsequent quantitative characterization of specific areas in the scene. Besides the particular improvements proposed over an existing polarimetric inversion method, this paper is aimed at pointing out the necessity of checking quantitatively the accuracy of model-based PolSAR techniques for a

  6. Numerical modeling of spray combustion with an advanced VOF method

    Science.gov (United States)

    Chen, Yen-Sen; Shang, Huan-Min; Shih, Ming-Hsin; Liaw, Paul

    1995-01-01

    This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model which can be employed to analyze general multiphase flow problems with free surface mechanism. The gas-liquid interface mass, momentum and energy conservation relationships are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach into a predictor-corrector algorithm and to demonstrate the effectiveness of the present approach by simulating benchmark problems including laminar impinging jets, shear coaxial jet atomization and shear coaxial spray combustion flows.

  7. Methods of mathematical modelling continuous systems and differential equations

    CERN Document Server

    Witelski, Thomas

    2015-01-01

    This book presents mathematical modelling and the integrated process of formulating sets of equations to describe real-world problems. It describes methods for obtaining solutions of challenging differential equations stemming from problems in areas such as chemical reactions, population dynamics, mechanical systems, and fluid mechanics. Chapters 1 to 4 cover essential topics in ordinary differential equations, transport equations and the calculus of variations that are important for formulating models. Chapters 5 to 11 then develop more advanced techniques including similarity solutions, matched asymptotic expansions, multiple scale analysis, long-wave models, and fast/slow dynamical systems. Methods of Mathematical Modelling will be useful for advanced undergraduate or beginning graduate students in applied mathematics, engineering and other applied sciences.

  8. A Review on Evaluation Methods of Climate Modeling

    Institute of Scientific and Technical Information of China (English)

    ZHAO; Zong-Ci; LUO; Yong; HUANG; Jian-Bin

    2013-01-01

    There is scientific progress in the evaluation methods of recent Earth system models (ESMs). Methods range from single-variable to multi-variable, multi-process, multi-phenomenon quantitative evaluations in five layers (spheres) of the Earth system; from climatic mean assessment to climate change (such as trends, periodicity, interdecadal variability), extreme values, abnormal characters and quantitative evaluations of phenomena; and from qualitative assessment to quantitative calculation of reliability and uncertainty for model simulations. Researchers have started considering independence and similarity between models in multi-model use, as well as the quantitative evaluation of climate prediction and projection effects and quantitative analysis of uncertainty contributions. In this manuscript, the simulations and projections by both CMIP5 and CMIP3 that have been published after 2007 are reviewed and summarized.

  9. Variable cluster analysis method for building neural network model

    Institute of Scientific and Technical Information of China (English)

    王海东; 刘元东

    2004-01-01

    To address the problems that input variables should be reduced as much as possible while still fully explaining the output variables when building a neural network model of a complicated system, a variable selection method based on cluster analysis was investigated. A similarity coefficient which describes the mutual relation of variables was defined. The methods of highest contribution rate, part replacing whole, and variable replacement are put forward and derived from information theory. The software for the neural network based on cluster analysis, which can provide many kinds of methods for defining the variable similarity coefficient, clustering system variables, and evaluating variable clusters, was developed and applied to build a neural network forecast model of cement clinker quality. The results show that the network scale, training time, and prediction accuracy are all satisfactory. The practical application demonstrates that this method of selecting variables for neural networks is feasible and effective.
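
    A minimal sketch of the idea, assuming absolute correlation as the similarity coefficient and approximating the "highest contribution rate" rule by correlation with the output; the data, cluster threshold, and selection rule are synthetic stand-ins, not the cement clinker application.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# Synthetic stand-in for process variables (columns) and a quality index y.
X = rng.normal(size=(200, 8))
X[:, 1] = X[:, 0] + 0.05*rng.normal(size=200)   # redundant copy of variable 0
X[:, 4] = X[:, 3] + 0.05*rng.normal(size=200)   # redundant copy of variable 3
y = 2*X[:, 0] - X[:, 3] + X[:, 6] + 0.1*rng.normal(size=200)

# Similarity coefficient: absolute correlation between variables,
# turned into a distance so we can cluster.
corr = np.abs(np.corrcoef(X, rowvar=False))
dist = 1 - corr
Z = linkage(dist[np.triu_indices(8, k=1)], method="average")
labels = fcluster(Z, t=0.3, criterion="distance")

# Keep one representative per cluster: the variable most correlated with
# the output (a stand-in for the "highest contribution rate" rule).
target_corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(8)])
keep = []
for c in np.unique(labels):
    members = np.where(labels == c)[0]
    keep.append(int(members[np.argmax(target_corr[members])]))
print("selected input variables:", sorted(keep))
```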

  10. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error is measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R{sup 2}. The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
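
    A small sketch of this workflow with SciPy: fit a two-term Gaussian model to a synthetic daily radiation profile and report RMSE and R². The profile and starting values are illustrative assumptions, not UTP measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
# Synthetic hourly "global solar radiation" profile over daylight hours.
t = np.linspace(7, 19, 60)
data = 900*np.exp(-((t - 13)/3.2)**2) + 20*rng.normal(size=t.size)

def gauss2(t, a1, b1, c1, a2, b2, c2):
    """Two-term Gaussian model, the family reported to fit best."""
    return a1*np.exp(-((t - b1)/c1)**2) + a2*np.exp(-((t - b2)/c2)**2)

p0 = [800, 13, 3, 100, 10, 2]          # rough initial guess
popt, _ = curve_fit(gauss2, t, data, p0=p0, maxfev=20000)

fit = gauss2(t, *popt)
rmse = np.sqrt(np.mean((data - fit)**2))
r2 = 1 - np.sum((data - fit)**2)/np.sum((data - data.mean())**2)
print(f"RMSE = {rmse:.1f} W/m^2, R^2 = {r2:.3f}")
```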

  11. Modeling Enzymatic Transition States by Force Field Methods

    DEFF Research Database (Denmark)

    Hansen, Mikkel Bo; Jensen, Hans Jørgen Aagaard; Jensen, Frank

    2009-01-01

    The SEAM method, which models a transition structure as a minimum on the seam of two diabatic surfaces represented by force field functions, has been used to generate 20 transition structures for the decarboxylation of orotidine by the orotidine-5'-monophosphate decarboxylase enzyme. The dependence...... by various electronic structure methods, where part of the enzyme is represented by a force field description and the effects of the solvent are represented by a continuum model. The relative energies vary by several hundreds of kJ/mol between the transition structures, and tests showed that a large part...... of this variation is due to changes in the enzyme structure at distances more than 5 Å from the active site. There are significant differences between the results obtained by pure quantum methods and those from mixed quantum and molecular mechanics methods....

  12. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error is measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  13. Discrete gradient methods for solving variational image regularisation models

    Science.gov (United States)

    Grimm, V.; McLachlan, Robert I.; McLaren, David I.; Quispel, G. R. W.; Schönlieb, C.-B.

    2017-07-01

    Discrete gradient methods are well-known methods of geometric numerical integration, which preserve the dissipation of gradient systems. In this paper we show that this property of discrete gradient methods can be interesting in the context of variational models for image processing, that is, where the processed image is computed as a minimiser of an energy functional. Numerical schemes for computing minimisers of such energies are desired to inherit the dissipative property of the gradient system associated to the energy and consequently guarantee a monotonic decrease of the energy along iterations, avoiding situations in which more computational work might lead to less optimal solutions. Under appropriate smoothness assumptions on the energy functional we prove that discrete gradient methods guarantee a monotonic decrease of the energy towards stationary states, and we promote their use in image processing by exhibiting experiments with convex and non-convex variational models for image deblurring, denoising, and inpainting.
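
    To make the monotonic-decrease property concrete, the sketch below applies the midpoint (Gonzalez) discrete gradient to a simple quadratic denoising energy, for which the discrete gradient is exact and each step reduces to a linear solve. The 1-D signal, λ, and τ are illustrative assumptions; the paper's non-convex examples would require an implicit nonlinear solve instead.

```python
import numpy as np

rng = np.random.default_rng(4)
n, lam, tau = 100, 10.0, 0.5
f = np.sign(np.sin(np.linspace(0, 6, n))) + 0.3*rng.normal(size=n)  # noisy 1-D signal

# Quadratic (Tikhonov) denoising energy E(x) = 0.5||x-f||^2 + 0.5*lam*||Dx||^2,
# i.e. E(x) = 0.5 x^T A x - f^T x + const with A = I + lam * D^T D.
D = np.diff(np.eye(n), axis=0)              # forward-difference matrix
A = np.eye(n) + lam * D.T @ D
E = lambda x: 0.5*np.sum((x - f)**2) + 0.5*lam*np.sum((D @ x)**2)

# Midpoint (Gonzalez) discrete gradient: for a quadratic E it equals
# A (x + x')/2 - f, so the implicit step (x' - x)/tau = -dg(x, x')
# becomes the linear system (I + tau*A/2) x' = (I - tau*A/2) x + tau*f.
M_lhs = np.eye(n) + 0.5*tau*A
M_rhs = np.eye(n) - 0.5*tau*A
x = f.copy()
energies = [E(x)]
for _ in range(30):
    x = np.linalg.solve(M_lhs, M_rhs @ x + tau*f)
    energies.append(E(x))

# The discrete gradient structure guarantees E(x_{k+1}) <= E(x_k).
assert all(e1 >= e2 - 1e-10 for e1, e2 in zip(energies, energies[1:]))
print(f"E: {energies[0]:.2f} -> {energies[-1]:.2f}")
```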

  14. A method of 3D modeling and codec

    Institute of Scientific and Technical Information of China (English)

    QI Yue; YANG Shen; CAI Su; HOU Fei; SHEN XuKun; ZHAO QinPing

    2009-01-01

    3D modeling and codecs of real objects are hot issues in the field of virtual reality. In this paper, we propose a method for automatically registering two range images, together with a cycle-based automatic global registration algorithm, for rapidly and automatically registering all range images and constructing a realistic 3D model. Besides, to meet the requirement of huge data transmission over the Internet, we present a 3D mesh encoding/decoding method for encoding geometry, topology and attribute data with a high compression ratio and support for progressive transmission. The research results have already been applied successfully in a digital museum.

  15. A meshless method for modeling convective heat transfer

    Energy Technology Data Exchange (ETDEWEB)

    Carrington, David B [Los Alamos National Laboratory

    2010-01-01

    A meshless method is used in a projection-based approach to solve the primitive equations for fluid flow with heat transfer. The method is easy to implement in MATLAB. Radial basis functions are used to solve two benchmark test cases: natural convection in a square enclosure and flow with forced convection over a backward-facing step. The results are compared with those of two popular and widely used commercial codes: COMSOL, a finite element model, and FLUENT, a finite volume-based model.
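
    A minimal flavour of RBF collocation (sketched here in Python on a 1-D Poisson problem rather than the paper's convection cases, which it does not reproduce): Gaussian RBFs are centred at the nodes, the differential equation is enforced at interior points, and boundary values at the ends. The shape parameter and node count are assumptions.

```python
import numpy as np

# Collocation points double as RBF centres on [0, 1].
n, eps = 25, 5.0
x = np.linspace(0, 1, n)
phi   = lambda x, c: np.exp(-(eps*(x - c))**2)
d2phi = lambda x, c: (4*eps**4*(x - c)**2 - 2*eps**2) * np.exp(-(eps*(x - c))**2)

# Model problem u'' = f with u(0) = u(1) = 0; exact solution sin(pi x).
f = lambda x: -np.pi**2 * np.sin(np.pi*x)

X, C = np.meshgrid(x, x, indexing="ij")
A = d2phi(X, C)           # interior rows enforce the PDE at each node
A[0, :] = phi(x[0], x)    # boundary rows enforce the Dirichlet conditions
A[-1, :] = phi(x[-1], x)
rhs = f(x)
rhs[0] = rhs[-1] = 0.0

coef = np.linalg.solve(A, rhs)
u = phi(X, C) @ coef      # reconstruct the solution at the nodes
print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi*x)).max())
```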

  16. Integrated Modeling and Intelligent Control Methods of Grinding Process

    Directory of Open Access Journals (Sweden)

    Jie-sheng Wang

    2013-01-01

    Full Text Available The grinding process is a typical complex nonlinear multivariable process with strong coupling and large time delays. Based on data-driven modeling theory, an integrated modeling and intelligent control method for the grinding process is presented in this paper, which includes a soft-sensor model of economic and technical indexes, an optimized set-point model utilizing case-based reasoning, and a self-tuning PID decoupling controller. For forecasting the key technology indicators of the grinding process (grinding granularity and mill discharge rate), an adaptive soft-sensor modeling method based on a wavelet neural network optimized by the improved shuffled frog leaping algorithm (ISFLA) is proposed. Then, a set-point optimization control strategy based on case-based reasoning (CBR) is adopted to obtain the optimized velocity set-points of ore feed and pump water feed in the grinding process control loops. Finally, an optimized self-tuning PID decoupling controller is used to control the grinding process. Simulation results and industrial application experiments clearly show the feasibility and effectiveness of the control methods, which satisfy the real-time control requirements of the grinding process.

  17. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear

  18. A new approach of high speed cutting modelling: SPH method

    OpenAIRE

    LIMIDO, Jérôme; Espinosa, Christine; Salaün, Michel; Lacome, Jean-Luc

    2006-01-01

    The purpose of this study is to introduce a new approach to high speed cutting numerical modelling. A Lagrangian Smoothed Particle Hydrodynamics (SPH) based model is carried out using the Ls-Dyna software. SPH is a meshless method, thus large material distortions that occur in the cutting problem are easily managed and SPH contact control permits a “natural” workpiece/chip separation. Estimated chip morphology and cutting forces are compared to machining dedicated code results and experimenta...

  19. SPH method applied to high speed cutting modelling

    OpenAIRE

    LIMIDO, Jérôme; Espinosa, Christine; Salaün, Michel; Lacome, Jean-Luc

    2007-01-01

    The purpose of this study is to introduce a new approach to high speed cutting numerical modelling. A Lagrangian smoothed particle hydrodynamics (SPH)-based model is carried out using the Ls-Dyna software. SPH is a meshless method, thus large material distortions that occur in the cutting problem are easily managed and SPH contact control permits a "natural" workpiece/chip separation. The developed approach is compared to machining dedicated code results and experimental data. The SPH cutting...

  20. Solvent effect modelling of isocyanuric products synthesis by chemometric methods

    OpenAIRE

    Havet, Jean-Louis; Billiau-Loreau, Myriam; Porte, Catherine; Delacroix, Alain

    2002-01-01

    Chemometric tools were used to model solvent effects on the N-alkylation of an isocyanuric acid salt. The method proceeded from a central composite design applied to the Carlson solvent classification using principal components analysis. The selectivity of the reaction was studied from the production of different substituted isocyanuric derivatives. Response graphs were obtained for each compound and used to devise a strategy for solvent selection. The prediction models wer...

  1. Propagator-based methods for recursive subspace model identification

    OpenAIRE

    Mercère, Guillaume; Bako, Laurent; Lecoeuche, Stéphane

    2008-01-01

    The problem of the online identification of multi-input multi-output (MIMO) state-space models in the framework of discrete-time subspace methods is considered in this paper. Several algorithms, based on a recursive formulation of the MIMO output error state-space (MOESP) identification class, are developed. The main goals of the proposed methods are to circumvent the huge complexity of the eigenvalue or singular value decomposition techniques used by the offline algorit...

  2. [The use of a modelling method in craniometry].

    Science.gov (United States)

    Abramov, S S; Boldyrev, N I; Vishniakov, G N; Levin, G G; Naumov, A A

    1998-01-01

    Laser interferometry is proposed for accurate measurement of the external parameters and recording of the relief of the human skull surface. The method creates a detailed three-dimensional computer model of the object of investigation, which can be used in automated systems of personal identification based on investigation of the skull and lifetime photographs. Further development of the method opens new vistas in the automation of trasological and ballistic identification.

  3. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  4. On the use of simplex methods in constructing quadratic models

    Institute of Scientific and Technical Information of China (English)

    Qing-hua ZHOU

    2007-01-01

    In this paper, we investigate quadratic approximation methods. After studying the basic idea of simplex methods, we construct several new search directions by combining the local information progressively obtained during the iterates of the algorithm to form new subspaces, and the quadratic model is solved in these new subspaces. The motivation is to use the information disclosed by the former steps to construct more promising directions. For most tested problems, the number of function evaluations has been reduced considerably by our algorithms.

  5. Propensity score modelling in observational studies using dimension reduction methods.

    Science.gov (United States)

    Ghosh, Debashis

    2011-07-01

    Conditional independence assumptions are very important in causal inference modelling as well as in dimension reduction methodologies. These are two very strikingly different statistical literatures, and we study links between the two in this article. The concept of covariate sufficiency plays an important role, and we provide theoretical justification when dimension reduction and partial least squares methods will allow for valid causal inference to be performed. The methods are illustrated with application to a medical study and to simulated data.

  6. TOWARDS A SYSTEM DYNAMICS MODELING METHOD BASED ON DEMATEL

    Directory of Open Access Journals (Sweden)

    Fadwa Chaker

    2015-05-01

    Full Text Available If System Dynamics (SD) models are constructed based solely on decision makers' mental models and understanding of the context under study, then the resulting systems must necessarily bear some degree of deficiency due to the subjective, limited, and internally inconsistent mental models which led to the conception of these systems. As such, a systematic method for constructing SD models could be essentially helpful in overcoming the biases dictated by the human mind's limited understanding and conceptualization of complex systems. This paper proposes a novel combined method to support SD model construction. The classical Decision Making Trial and Evaluation Laboratory (DEMATEL) technique is used to define causal relationships among variables of a system and to construct the corresponding Impact Relation Maps (IRMs). The novelty of this paper stems from the use of the resulting total influence matrix to derive the system dynamics Causal Loop Diagram (CLD) and then define variable weights in the stock-flow chart equations. This new method makes it possible to overcome the subjectivity bias of SD modeling while projecting DEMATEL into a more dynamic simulation environment, which could significantly improve the strategic choices made by analysts and policy makers.
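
    The DEMATEL core that the method builds on is compact enough to sketch: normalise a direct-influence matrix and compute the total influence matrix T = N (I - N)^-1, from which the IRM edges and the cause/effect grouping follow. The 4x4 expert score matrix and the mean-based threshold are illustrative assumptions.

```python
import numpy as np

# Illustrative direct-influence matrix for four system variables
# (0-4 expert scores); entry (i, j) = how strongly i influences j.
D = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [0, 1, 0, 3],
              [2, 0, 1, 0]], dtype=float)

# Classical DEMATEL: normalise, then total influence T = N (I - N)^-1.
N = D / max(D.sum(axis=1).max(), D.sum(axis=0).max())
T = N @ np.linalg.inv(np.eye(4) - N)

# Prominence (r + c) and relation (r - c) separate causes from effects;
# thresholding T gives the Impact Relation Map edges used to draw the CLD.
r, c = T.sum(axis=1), T.sum(axis=0)
threshold = T.mean()
irm_edges = np.argwhere(T > threshold)
print("cause variables:", np.where(r - c > 0)[0], "| IRM edges:", len(irm_edges))
```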

  7. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-05-01

    Full Text Available Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to comprehensive objective evaluation metrics. Different from traditional optimization methods, two extra steps, one that determines parameter sensitivity and one that chooses the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
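
    A toy sketch of the three-step idea with an analytic stand-in for the model-skill metric (running a GCM is out of scope here): screen parameter sensitivity one at a time, choose initial values of the sensitive parameters on a coarse grid, then hand them to the downhill simplex (Nelder-Mead). All numbers are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Analytic stand-in for the objective evaluation metrics (lower = better);
# the "true" optimum and the weights are invented for illustration.
truth = np.array([0.4, 1.2, 0.0, -0.7])
weights = np.array([5.0, 3.0, 0.05, 2.0])   # parameter 2 is nearly insensitive

def skill(p):
    return float(np.sum(weights * (p - truth)**2))

x0 = np.zeros(4)

# Step 1: one-at-a-time perturbations identify the sensitive parameters.
base = skill(x0)
sens = np.array([abs(skill(x0 + 0.5*np.eye(4)[i]) - base) for i in range(4)])
active = np.where(sens > 0.1*sens.max())[0]

# Step 2: choose optimum initial values of the sensitive parameters on a grid.
best = x0.copy()
for i in active:
    grid = np.linspace(-1.5, 1.5, 7)
    scores = []
    for g in grid:
        trial = best.copy()
        trial[i] = g
        scores.append(skill(trial))
    best[i] = grid[int(np.argmin(scores))]

# Step 3: downhill simplex over the sensitive parameters only.
def reduced(p_act):
    p = best.copy()
    p[active] = p_act
    return skill(p)

res = minimize(reduced, best[active], method="Nelder-Mead")
best[active] = res.x
print("tuned parameters:", best.round(3), "| metric:", round(res.fun, 4))
```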

  8. Short Polymer Modeling using Self-Consistent Integral Equation Method

    Science.gov (United States)

    Kim, Yeongyoon; Park, So Jung; Kim, Jaeup

    2014-03-01

    Self-consistent field theory (SCFT) is an excellent mean field theoretical tool for predicting the morphologies of polymer based materials. In the standard SCFT, the polymer is modeled as a Gaussian chain which is suitable for a polymer of high molecular weight, but not necessarily for a polymer of low molecular weight. In order to overcome this limitation, Matsen and coworkers have recently developed SCFT of discrete polymer chains in which one polymer is modeled as finite number of beads joined by freely jointed bonds of fixed length. In their model, the diffusion equation of the canonical SCFT is replaced by an iterative integral equation, and the full spectral method is used for the production of the phase diagram of short block copolymers. In this study, for the finite length chain problem, we apply pseudospectral method which is the most efficient numerical scheme to solve the iterative integral equation. We use this new numerical method to investigate two different types of polymer bonds: spring-beads model and freely-jointed chain model. By comparing these results with those of the Gaussian chain model, the influences on the morphologies of diblock copolymer melts due to the chain length and the type of bonds are examined. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (no. 2012R1A1A2043633).

  9. An alternative method for centrifugal compressor loading factor modelling

    Science.gov (United States)

    Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.

    2017-08-01

    The loading factor at the design point is calculated by one or another empirical formula in classical design methods; performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus flow coefficient at the impeller exit has a linear character independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The proper formulae include empirical coefficients. A good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova had proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed; the calculation error lies in the range of ±1.5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.

  10. "Storm Alley" on Saturn and "Roaring Forties" on Earth: two bright phenomena of the same origin

    Science.gov (United States)

    Kochemasov, G. G.

    2009-04-01

    "Storm Alley" on Saturn and "Roaring Forties' on Earth: two bright phenomena of the same origin. G. Kochemasov IGEM of the Russian Academy of Sciences, Moscow, Russia, kochem.36@mail.ru Persisting swirling storms around 35 parallel of the southern latitude in the Saturnian atmosphere and famous "Roaring Forties" of the terrestrial hydro- and atmosphere are two bright phenomena that should be explained by the same physical law. The saturnian "Storm Alley" (as it is called by the Cassini scientists) is a stable feature observed also by "Voyager". The Earth's "Roaring Forties" are well known to navigators from very remote times. The wave planetology [1-3 & others] explains this similarity by a fact that both atmospheres belong to rotating globular planets. This means that the tropic and extra-tropic belts of these bodies have differing angular momenta. Belonging to one body these belts, naturally, tend to equilibrate their angular momenta mainly by redistribution of masses and densities [4]. But a perfect equilibration is impossible as long as a rotating body (Saturn or Earth or any other) keeps its globular shape due to mighty gravity. So, a contradiction of tropics and extra-tropics will be forever and the zone mainly between 30 to 50 degrees in both hemispheres always will be a zone of friction, turbulence and strong winds. Some echoes of these events will be felt farther poleward up to 70 degrees. On Earth the Roaring Forties (40˚-50˚) have a continuation in Furious Fifties (50˚-60˚) and Shrieking (Screaming) Sixties (below 60˚, close to Antarctica). Below are some examples of excited atmosphere of Saturn imaged by Cassini. PIA09734 - storms within 46˚ south; PIA09778 - monitoring the Maelstrom, 44˚ north; PIA09787 - northern storms, 59˚ north; PIA09796 - cloud details, 44˚ north; PIA10413 - storms of the high north, 70˚ north; PIA10411 - swirling storms, "Storm Alley", 35˚ south; PIA10457 - keep it rolling, "Storm Alley", 35˚ south; PIA10439 - dance

  11. Role Modeling: A Modeling Method for Software Pattern at Knowledge Level

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Based on the dominant degree of the role model among the viewpoints of the object-oriented modeling process, it is argued that role modeling is a modeling method for software patterns at the knowledge level. After giving some examples of modeling design patterns and analysis patterns at the knowledge level using role models, the paper presents a process for refining a design pattern from a role model into a class model and event trace diagram of UML. In this paper, we advocate role modeling before object modeling with UML.

  12. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to the understanding of network modeling, investigating its structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, non-stability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and satisfying some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  13. Singularity of Some Software Reliability Models and Parameter Estimation Method

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    According to the principle, “The failure data is the basis of software reliability analysis”, we built a software reliability expert system (SRES) by adopting the artificial intelligence technology. By reasoning out the conclusion from the fitting results of failure data of a software project, the SRES can recommend users “the most suitable model” as a software reliability measurement model. We believe that the SRES can overcome the inconsistency in applications of software reliability models well. We report investigation results of singularity and parameter estimation methods of experimental models in SRES.

  14. An Updating Method for Structural Dynamics Models with Uncertainties

    Directory of Open Access Journals (Sweden)

    B. Faverjon

    2008-01-01

    Full Text Available One challenge in the numerical simulation of industrial structures is model validation based on experimental data. Among the indirect or parametric methods available, one is based on the “mechanical” concept of constitutive relation error estimator introduced in order to quantify the quality of finite element analyses. In the case of uncertain measurements obtained from a family of quasi-identical structures, parameters need to be modeled randomly. In this paper, we consider the case of a damped structure modeled with stochastic variables. Polynomial chaos expansion and reduced bases are used to solve the stochastic problems involved in the calculation of the error.

  15. Construct Method of Predicting Satisfaction Model Based on Technical Characteristics

    Institute of Scientific and Technical Information of China (English)

    YANG Xiao-an; DENG Qian; SUN Guan-long; ZHANG Wei-she

    2011-01-01

    In order to construct an objective mapping relationship model between customer requirements and product technical characteristics, a novel approach based on customer satisfaction information mined from case products and expert satisfaction information on technical characteristics is put forward in this paper. Technical characteristics evaluation values are expressed by rough numbers, and the technical characteristics target sequence is determined on the basis of efficiency type, cost type and middle type in this method. The calculated satisfactions of customers and technical characteristics are used as input and output elements to construct a BP network model, and MATLAB software is used to simulate this BP network model on the case of electric bicycles.

  16. Toric Lego: A method for modular model building

    CERN Document Server

    Balasubramanian, Vijay; García-Etxebarria, Iñaki

    2010-01-01

    Within the context of local type IIB models arising from branes at toric Calabi-Yau singularities, we present a systematic way of joining any number of desired sectors into a consistent theory. The different sectors interact via massive messengers with masses controlled by tunable parameters. We apply this method to a toy model of the minimal supersymmetric standard model (MSSM) interacting via gauge mediation with a metastable supersymmetry breaking sector and an interacting dark matter sector. We discuss how a mirror procedure can be applied in the type IIA case, allowing us to join certain intersecting brane configurations through massive mediators.

  17. Toric Lego: a method for modular model building

    Science.gov (United States)

    Balasubramanian, Vijay; Berglund, Per; García-Etxebarria, Iñaki

    2010-01-01

    Within the context of local type IIB models arising from branes at toric Calabi-Yau singularities, we present a systematic way of joining any number of desired sectors into a consistent theory. The different sectors interact via massive messengers with masses controlled by tunable parameters. We apply this method to a toy model of the minimal supersymmetric standard model (MSSM) interacting via gauge mediation with a metastable supersymmetry breaking sector and an interacting dark matter sector. We discuss how a mirror procedure can be applied in the type IIA case, allowing us to join certain intersecting brane configurations through massive mediators.

  18. Study on Turbulent Modeling in Gas Entrainment Evaluation Method

    Science.gov (United States)

    Ito, Kei; Ohshima, Hiroyuki; Nakamine, Yoshiaki; Imai, Yasutomo

    Suppression of the gas entrainment (GE) phenomena caused by free surface vortices is very important for establishing an economically superior design of the sodium-cooled fast reactor in Japan (JSFR). However, due to the non-linearity and/or locality of the GE phenomena, it is not easy to evaluate their occurrence accurately. In other words, the onset condition of the GE phenomena in the JSFR is not easily predicted based on scaled-model and/or partial-model experiments. Therefore, the authors are developing a CFD-based evaluation method in which the non-linearity and locality of the GE phenomena can be considered. In the evaluation method, macroscopic vortex parameters, e.g. circulation, are determined by three-dimensional CFD, and then GE-related parameters, e.g. gas core (GC) length, are calculated using the Burgers vortex model. This procedure is efficient for evaluating the GE phenomena in the JSFR. However, it is well known that the Burgers vortex model tends to overestimate the GC length due to the lack of consideration of some physical mechanisms. Therefore, in this study, the authors develop a turbulent vortex model to evaluate the GE phenomena more accurately. The improved GE evaluation method with the turbulent viscosity model is then validated by analyzing the GC lengths observed in a simple experiment. The evaluation results show that the GC lengths analyzed by the improved method are shorter than those from the original method and give better agreement with the experimental data.

  19. DIVA: an iterative method for building modular integrated models

    Science.gov (United States)

    Hinkel, J.

    2005-08-01

    Integrated modelling of global environmental change impacts faces the challenge that knowledge from the domains of Natural and Social Science must be integrated. This is complicated by often incompatible terminology and the fact that the interactions between subsystems are usually not fully understood at the start of the project. While a modular modelling approach is necessary to address these challenges, it is not sufficient. The remaining question is how the modelled system should be divided into modules. While no generic answer can be given to this question, communication tools can be provided to support the process of modularisation and integration. Along those lines of thought, a method for building modular integrated models was developed within the EU project DINAS-COAST and applied to construct a first model, which assesses the vulnerability of the world's coasts to climate change and sea-level rise. The method focuses on the development of a common language and offers domain experts an intuitive interface to code their knowledge in the form of modules. However, instead of rigorously defining interfaces between the subsystems at the project's beginning, an iterative model development process is defined and tools to facilitate communication and collaboration are provided. This flexible approach has the advantage that increased understanding about subsystem interactions, gained during the project's lifetime, can immediately be reflected in the model.

  20. Novel Extrapolation Method in the Monte Carlo Shell Model

    CERN Document Server

    Shimizu, Noritaka; Mizusaki, Takahiro; Otsuka, Takaharu; Abe, Takashi; Honma, Michio

    2010-01-01

    We propose an extrapolation method utilizing the energy variance in the Monte Carlo shell model in order to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full $pf$-shell calculation of $^{56}$Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the $pf$+$g_{9/2}$-shell calculation of $^{64}$Ge.

  1. Equation oriented method for Rectisol wash modeling and analysis☆

    Institute of Scientific and Technical Information of China (English)

    Ning Gao; Chi Zhai; Wei Sun; Xingyu Zhang

    2015-01-01

    The Rectisol process is more efficient than other physical or chemical absorption methods for gas purification. To implement a real-time simulation of the Rectisol process, a thermodynamic model and a simulation strategy are needed. In this paper, a modified statistical associating fluid theory with perturbation theory is used to predict the thermodynamic behavior of the process. As the Rectisol process is highly heat-integrated with many loops, an equation-oriented strategy with sequential quadratic programming as the solver is used, and the process converges perfectly. Analyses are then conducted with this simulator.

  2. Neural Network method for Inverse Modeling of Material Deformation

    Energy Technology Data Exchange (ETDEWEB)

    Allen, J.D., Jr.; Ivezic, N.D.; Zacharia, T.

    1999-07-10

    A method is described for inverse modeling of material deformation in applications of importance to the sheet metal forming industry. The method was developed in order to assess the feasibility of utilizing empirical data in the early stages of the design process as an alternative to conventional prototyping methods. Because properly prepared and employed artificial neural networks (ANN) were known to be capable of codifying and generalizing large bodies of empirical data, they were the natural choice for the application. The product of the work described here is a desktop ANN system that can produce in one pass an accurate die design for a user-specified part shape.
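
    A minimal stand-in for this inverse-modeling setup, assuming a synthetic forward "forming process" in place of the real empirical data: the network is trained backwards, from part profiles to die parameters, here with scikit-learn's MLPRegressor (the original desktop system predates such libraries).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Synthetic stand-in for empirical forming data: 3 die parameters produce
# an 8-point part profile through an invented nonlinear forward process.
die = rng.uniform(-1, 1, size=(2000, 3))
W, V = rng.normal(size=(8, 3)), rng.normal(size=(8, 3))
profile = die @ W.T + 0.3*np.sin(die @ V.T) + 0.01*rng.normal(size=(2000, 8))

# Inverse model: learn the map from desired part shape back to die parameters.
X_train, X_test, y_train, y_test = train_test_split(profile, die, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X_train, y_train)
print("held-out R^2 of the inverse model:", round(net.score(X_test, y_test), 3))
```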

  3. A method to manage the model base in DSS

    Institute of Scientific and Technical Information of China (English)

    孙成双; 李桂君

    2004-01-01

    How to manage and use models in a DSS is a most important subject. Generally, it costs a lot of money and time to develop a model base management system during DSS development, and most such systems are simple in function or cannot be used efficiently in practice. Making use of the interfaces of professional computer software to develop a model base management system is a very effective, applicable, and economical choice. This paper presents a method that uses MATLAB, a well-known numerical computing package, as the development platform of a model base management system. The main functional framework of a MATLAB-based model base management system is discussed. Finally, its feasible application is illustrated in the field of construction projects.

  4. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among...... other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...
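
    For a feel of the model itself, the sketch below solves a tiny set partitioning instance by brute force: every task must be covered by exactly one chosen schedule, at minimum total cost. The instance is invented; realistic crew problems require the integer programming machinery the thesis develops.

```python
from itertools import combinations

# Tiny crew-scheduling instance: tasks are rows, feasible work schedules
# (subsets of tasks) are columns, each with a cost. Numbers are illustrative.
tasks = {0, 1, 2, 3, 4}
schedules = [({0, 1}, 5), ({2, 3}, 4), ({4}, 3), ({0, 2, 4}, 7),
             ({1, 3}, 4), ({0}, 2), ({1, 2, 3, 4}, 9)]

# Set partitioning: choose schedules whose task sets are pairwise disjoint
# and together cover every task exactly once, minimising total cost.
best = None
for k in range(1, len(schedules) + 1):
    for combo in combinations(schedules, k):
        sets = [s for s, _ in combo]
        disjoint_cover = (sum(len(s) for s in sets) == len(tasks)
                          and set().union(*sets) == tasks)
        if disjoint_cover:
            cost = sum(c for _, c in combo)
            if best is None or cost < best[0]:
                best = (cost, sets)

print("optimal partition:", best)
```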

  5. The Quantum Inverse Scattering Method for Hubbard-like Models

    CERN Document Server

    Martins, M J

    1997-01-01

    This work is concerned with various aspects of the formulation of the quantum inverse scattering method for the one-dimensional Hubbard model. We first establish the essential tools to solve the eigenvalue problem for the transfer matrix of the classical ``covering'' Hubbard model within the algebraic Bethe Ansatz framework. The fundamental commutation rules exhibit a hidden 6-vertex symmetry which plays a crucial role in the whole algebraic construction. Next we apply this formalism to study the SU(2) highest weights properties of the eigenvectors and the solution of a related coupled spin model with twisted boundary conditions. The machinery developed in this paper is applicable to many other models, and as an example we present the algebraic solution of the Bariev XY coupled model.

  6. Involving stakeholders in building integrated fisheries models using Bayesian methods.

    Science.gov (United States)

    Haapasaari, Päivi; Mäntyniemi, Samu; Kuikka, Sakari

    2013-06-01

    A participatory Bayesian approach was used to investigate how the views of stakeholders could be utilized to develop models to help understand the Central Baltic herring fishery. In task one, we applied the Bayesian belief network methodology to elicit the causal assumptions of six stakeholders on factors that influence natural mortality, growth, and egg survival of the herring stock in probabilistic terms. We also integrated the expressed views into a meta-model using the Bayesian model averaging (BMA) method. In task two, we used influence diagrams to study qualitatively how the stakeholders frame the management problem of the herring fishery and elucidate what kind of causalities the different views involve. The paper combines these two tasks to assess the suitability of the methodological choices to participatory modeling in terms of both a modeling tool and participation mode. The paper also assesses the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective to knowledge, that is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology provides a flexible tool that can be adapted to different kinds of needs and challenges of participatory modeling. The ability of the approach to deal with small data sets makes it cost-effective in participatory contexts. However, the BMA methodology used in modeling the biological uncertainties is so complex that it needs further development before it can be introduced to wider use in participatory contexts.

  7. Involving Stakeholders in Building Integrated Fisheries Models Using Bayesian Methods

    Science.gov (United States)

    Haapasaari, Päivi; Mäntyniemi, Samu; Kuikka, Sakari

    2013-06-01

    A participatory Bayesian approach was used to investigate how the views of stakeholders could be utilized to develop models to help understand the Central Baltic herring fishery. In task one, we applied the Bayesian belief network methodology to elicit the causal assumptions of six stakeholders on factors that influence natural mortality, growth, and egg survival of the herring stock in probabilistic terms. We also integrated the expressed views into a meta-model using the Bayesian model averaging (BMA) method. In task two, we used influence diagrams to study qualitatively how the stakeholders frame the management problem of the herring fishery and elucidate what kind of causalities the different views involve. The paper combines these two tasks to assess the suitability of the methodological choices to participatory modeling in terms of both a modeling tool and participation mode. The paper also assesses the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective to knowledge, that is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology provides a flexible tool that can be adapted to different kinds of needs and challenges of participatory modeling. The ability of the approach to deal with small data sets makes it cost-effective in participatory contexts. However, the BMA methodology used in modeling the biological uncertainties is so complex that it needs further development before it can be introduced to wider use in participatory contexts.

  8. PREDETERMINATION OF NATURAL ILLUMINATION BY THE MODEL TESTING METHOD.

    Science.gov (United States)

    PENA, WILLIAM A.

    New educational specifications have caused architects to use new forms with their resulting natural lighting problems. The problem can be engineered with the use of models. Prediction of lighting performance in a building can be made early in planning. This method provides for the testing of a variety of trial schemes economically and rapidly.…

  9. Modeling Enzymatic Transition States by Force Field Methods

    DEFF Research Database (Denmark)

    Hansen, Mikkel Bo; Jensen, Hans Jørgen Aagaard; Jensen, Frank

    2009-01-01

    The SEAM method, which models a transition structure as a minimum on the seam of two diabatic surfaces represented by force field functions, has been used to generate 20 transition structures for the decarboxylation of orotidine by the orotidine-5'-monophosphate decarboxylase enzyme. The dependence...

  10. Measuring balance and model selection in propensity score methods

    NARCIS (Netherlands)

    Belitser, S.; Martens, Edwin P.; Pestman, Wiebe R.; Groenwold, Rolf H.H.; De Boer, Anthonius; Klungel, Olaf H.

    2011-01-01

    Background: Propensity score (PS) methods focus on balancing confounders between groups to estimate an unbiased treatment or exposure effect. However, there is a lack of attention to actually measuring, reporting and using the information on balance, for instance for model selection. Objectives: To de

  11. Spectral density method to Anderson-Holstein model

    Science.gov (United States)

    Chebrolu, Narasimha Raju; Chatterjee, Ashok

    2015-06-01

    Two-parameter spectral density function of a magnetic impurity electron in a non-magnetic metal is calculated within the framework of the Anderson-Holstein model using the spectral density approximation method. The effect of electron-phonon interaction on the spectral function is investigated.

  12. A simple flow-concentration modelling method for integrating water ...

    African Journals Online (AJOL)

    DRINIE

    2003-07-03

    Jul 3, 2003 ... a useful screening tool for identifying sites where, without reduction of pollution, the water ... or "Q-C" modelling method) developed to inter-relate water quality ...

  13. The method of characteristics applied to analyse 2DH models

    NARCIS (Netherlands)

    Sloff, C.J.

    1992-01-01

    To gain insight into the physical behaviour of 2D hydraulic models (mathematically formulated as a system of partial differential equations), the method of characteristics is used to analyse the propagation of physically meaningful disturbances. These disturbances propagate as wave fronts along bichar

  14. Models and Methods for Assessing Refugee Mental Health Needs.

    Science.gov (United States)

    Deinard, Amos S.; And Others

    This background paper on refugee needs assessment discusses the assumptions, goals, objectives, strategies, models, and methods that the state refugee programs can consider in designing their strategies for assessing the mental health needs of refugees. It begins with a set of background assumptions about the ethnic profile of recent refugee…

  15. Methods and models for the construction of weakly parallel tests

    NARCIS (Netherlands)

    Adema, Jos J.

    1992-01-01

    Several methods are proposed for the construction of weakly parallel tests [i.e., tests with the same test information function (TIF)]. A mathematical programming model that constructs tests containing a prespecified TIF and a heuristic that assigns items to tests with information functions that are

  16. Review: Optimization methods for groundwater modeling and management

    Science.gov (United States)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
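    To make the management formulation concrete, the sketch below poses a pumping-cost minimization as a linear program under response-matrix drawdown constraints, one of the mathematical-programming techniques the review covers; the well costs, response matrix, and bounds are invented for illustration and are not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 3 wells, superposition (response-matrix) drawdown model.
cost = np.array([1.0, 1.2, 0.9])           # unit pumping cost per well
R = np.array([[0.30, 0.10, 0.05],           # R[i, j]: drawdown at control
              [0.10, 0.25, 0.08],           # point i per unit rate at well j
              [0.05, 0.08, 0.28]])
s_max = np.array([2.0, 2.0, 2.5])           # allowable drawdowns [m]
demand = 10.0                               # total supply target

# minimize cost @ q  s.t.  R q <= s_max,  sum(q) >= demand,  0 <= q <= 6
res = linprog(cost,
              A_ub=np.vstack([R, -np.ones((1, 3))]),
              b_ub=np.concatenate([s_max, [-demand]]),
              bounds=[(0, 6.0)] * 3)
print(res.x, res.fun)                       # optimal rates and cost
```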

  17. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    Directory of Open Access Journals (Sweden)

    Frantisek Jelenciak

    2015-12-01

    Full Text Available This article describes the projection equivalent method (PEM) as a specific and relatively simple approach for the modelling of aircraft dynamics. By the PEM it is possible to obtain a mathematical model of the aerodynamic forces and momentums acting on different kinds of aircraft during flight. A characteristic of the PEM is that, in principle, it provides an acceptable regression model of aerodynamic forces and momentums which exhibits reasonable and plausible behaviour from a dynamics viewpoint. The principle of this method is based on applying Newton's mechanics, which are then combined with a specific form of the finite element method to cover additional effects. The main advantage of the PEM is that it is not necessary to carry out measurements in a wind tunnel for the identification of the model's parameters. The plausible dynamical behaviour of the model can be achieved by specific correction parameters, which can be determined on the basis of experimental data obtained during the flight of the aircraft. In this article, we present the PEM as applied to an airship as well as a comparison of the data calculated by the PEM and experimental flight data.

  18. Method for modeling post-mortem biometric 3D fingerprints

    Science.gov (United States)

    Rajeev, Srijith; Shreyas, Kamath K. M.; Agaian, Sos S.

    2016-05-01

    Despite the advancements of fingerprint recognition in the 2-D and 3-D domains, authenticating deformed/post-mortem fingerprints continues to be an important challenge. Prior cleansing and reconditioning of the deceased finger are required before acquisition of the fingerprint. The victim's finger needs to be precisely and carefully manipulated to record the fingerprint impression. This process may damage the structure of the finger, which subsequently leads to higher false rejection rates. This paper proposes a non-invasive method to perform 3-D deformed/post-mortem finger modeling, which produces a 2-D rolled-equivalent fingerprint for automated verification. The presented novel modeling method involves masking, filtering, and unrolling. Computer simulations were conducted on finger models with different depth variations obtained from Flashscan3D LLC. Results illustrate that the modeling scheme provides a viable 2-D fingerprint of deformed models for automated verification. The quality and adaptability of the obtained unrolled 2-D fingerprints were analyzed using NIST fingerprint software. Eventually, the presented method could be extended to other biometric traits such as palm, foot, tongue, etc. for security and administrative applications.

  19. Hybrid Perturbation methods based on Statistical Time Series models

    CERN Document Server

    San-Juan, Juan Félix; Pérez, Iván; López, Rosario

    2016-01-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of a...

  20. Computational mathematics models, methods, and analysis with Matlab and MPI

    CERN Document Server

    White, Robert E

    2004-01-01

    Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box," you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu./~white. This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...

  1. Modeling of electromigration salt removal methods in building materials

    DEFF Research Database (Denmark)

    Johannesson, Björn; Ottosen, Lisbeth M.

    2008-01-01

    A model is established for predicting the effect of salt removal from building materials using electromigration. Salt-induced decay of building materials, such as masonry and sandstone, is a serious threat to our cultural heritage. Electromigration of salts from building materials sensitive to salt attack of various kinds is one potential method to preserve old building envelopes. By establishing a model for ionic multi-species diffusion which also accounts for externally applied electric fields, it is proposed that an important complement to the experimental tests can be obtained, and that verification...... One important issue is to be able to optimize the salt-removing electromigration method in the field by first studying it theoretically. Another benefit is that models can give some answers concerning the effect of the inner surfaces of the material on the diffusion mechanisms...
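    As a rough illustration of the kind of transport the model describes, the sketch below time-steps a one-dimensional Nernst-Planck balance (diffusion plus electromigration under a constant applied field) for a single ionic species; all parameter values and boundary conditions are illustrative assumptions, not the paper's multi-species model.

```python
import numpy as np

# Minimal 1-D Nernst-Planck sketch under an assumed constant field E.
D, z, E = 1e-9, 1.0, 50.0                 # diffusivity [m2/s], valence, field [V/m]
F, R, T = 96485.0, 8.314, 298.0
mu = z * F * D / (R * T)                  # electromigration mobility term

nx, dx, dt = 101, 1e-3, 10.0              # grid spacing [m] and time step [s]
c = np.ones(nx)                           # initial salt concentration profile
assert D * dt / dx**2 < 0.5               # explicit-scheme stability check

for _ in range(5000):
    dcdx = np.gradient(c, dx)
    d2cdx2 = np.gradient(dcdx, dx)
    c[1:-1] += dt * (D * d2cdx2 + mu * E * dcdx)[1:-1]
    c[0], c[-1] = 0.0, 0.0                # ions removed at the electrodes
```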

  2. Models and methods for hot spot safety work

    DEFF Research Database (Denmark)

    Vistisen, Dorte

    2002-01-01

    Despite the fact that millions of DKK each year are spent on improving road safety in Denmark, funds for traffic safety are limited. It is therefore vital to spend the resources as effectively as possible. This thesis is concerned with the area of traffic safety denoted "hot spot safety work", which is the task of improving road safety through alterations of the geometrical and environmental characteristics of the existing road network. The presently applied models and methods in hot spot safety work on the Danish road network were developed about two decades ago, when data was more limited and software and statistical methods less developed. The purpose of this thesis is to contribute to improving "State of the art" in Denmark. Basis for the systematic hot spot safety work are the models describing the variation in accident counts on the road network. In the thesis hierarchical models disaggregated on time......

  3. A constructive model potential method for atomic interactions

    Science.gov (United States)

    Bottcher, C.; Dalgarno, A.

    1974-01-01

    A model potential method is presented that can be applied to many-electron single-centre and two-centre systems. The development leads to a Hamiltonian with terms arising from core polarization that depend parametrically upon the positions of the valence electrons. Some of the terms have been introduced empirically in previous studies. Their significance is clarified by an analysis of a similar model in classical electrostatics. The explicit forms of the expectation values of operators at large separations of two atoms given by the model potential method are shown to be equivalent to the exact forms when the assumption is made that the energy level differences of one atom are negligible compared to those of the other.

  5. A Modeling Method of Fluttering Leaves Based on Point Cloud

    Directory of Open Access Journals (Sweden)

    J. Tang

    2017-09-01

    Full Text Available Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy the real-time needs of practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  6. Outcome modelling strategies in epidemiology: traditional methods and basic alternatives.

    Science.gov (United States)

    Greenland, Sander; Daniel, Rhian; Pearce, Neil

    2016-04-01

    Controlling for too many potential confounders can lead to or aggravate problems of data sparsity or multicollinearity, particularly when the number of covariates is large in relation to the study size. As a result, methods to reduce the number of modelled covariates are often deployed. We review several traditional modelling strategies, including stepwise regression and the 'change-in-estimate' (CIE) approach to deciding which potential confounders to include in an outcome-regression model for estimating effects of a targeted exposure. We discuss their shortcomings, and then provide some basic alternatives and refinements that do not require special macros or programming. Throughout, we assume the main goal is to derive the most accurate effect estimates obtainable from the data and commercial software. Allowing that most users must stay within standard software packages, this goal can be roughly approximated using basic methods to assess, and thereby minimize, mean squared error (MSE).
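    To make the change-in-estimate idea concrete, the sketch below applies the CIE heuristic the authors review: a covariate is dropped only when removing it shifts the exposure coefficient by less than 10%. The data, variable names, and threshold are simulated stand-ins, not the paper's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({"exposure": rng.normal(size=n),
                  "age": rng.normal(size=n),
                  "smoking": rng.binomial(1, 0.3, n)})
y = 1.5 * X["exposure"] + 0.8 * X["age"] + rng.normal(size=n)

kept = ["age", "smoking"]
beta = sm.OLS(y, sm.add_constant(X[["exposure"] + kept])).fit().params["exposure"]
for cov in list(kept):
    trial = [c for c in kept if c != cov]
    b = sm.OLS(y, sm.add_constant(X[["exposure"] + trial])).fit().params["exposure"]
    if abs((b - beta) / beta) < 0.10:      # change-in-estimate below threshold
        kept, beta = trial, b              # covariate judged ignorable
print("retained confounders:", kept)
```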

  7. Developing energy forecasting model using hybrid artificial intelligence method

    Institute of Scientific and Technical Information of China (English)

    Shahram Mollaiy-Berneti

    2015-01-01

    An important problem in demand planning for energy consumption is developing an accurate energy forecasting model. In fact, it is not possible to allocate energy resources in an optimal manner without having accurate demand values. A new energy forecasting model was proposed based on a back-propagation (BP) neural network and the imperialist competitive algorithm. The proposed method offers the advantages of the local search ability of the BP technique and the global search ability of the imperialist competitive algorithm. Two types of empirical data regarding energy demand (gross domestic product (GDP), population, import, export and energy demand) in Turkey from 1979 to 2005 and electricity demand (population, GDP, total revenue from exporting industrial products and electricity consumption) in Thailand from 1986 to 2010 were investigated to demonstrate the applicability and merits of the present method. The performance of the proposed model is found to be better than that of a conventional back-propagation neural network, with lower mean absolute error.

  8. Unicriterion Model: A Qualitative Decision Making Method That Promotes Ethics

    Directory of Open Access Journals (Sweden)

    Fernando Guilherme Silvano Lobo Pimentel

    2011-06-01

    Full Text Available Management decision making methods frequently adopt quantitative models of several criteria that bypass the question of why some criteria are considered more important than others, which makes more difficult the task of delivering a transparent view of preference structure priorities that might promote ethics and learning and serve as a basis for future decisions. To tackle this particular shortcoming of usual methods, an alternative qualitative methodology of aggregating preferences based on the ranking of criteria is proposed. Such an approach delivers a simple and transparent model for the solution of each preference conflict faced during the management decision making process. The method proceeds by breaking the decision problem into 'two criteria - two alternatives' scenarios, and translating the problem of choice between alternatives to a problem of choice between criteria whenever appropriate. The unicriterion model method is illustrated by its application in a car purchase and a house purchase decision problem.

  9. Model parameterization as method for data analysis in dendroecology

    Science.gov (United States)

    Tychkov, Ivan; Shishov, Vladimir; Popkova, Margarita

    2017-04-01

    There is no argument about the usefulness of process-based models in ecological studies; the only limitations are how well the model's algorithm is developed and how it is applied in research. Simulation of tree-ring growth based on climate provides valuable information on the response of tree-ring growth to different environmental conditions, and also sheds light on species-specific aspects of the tree-ring growth process. Visual parameterization of the Vaganov-Shashkin model allows estimation of the non-linear response of tree-ring growth from daily climate data: daily temperature, estimated day length and soil moisture. Previous use of the VS-Oscilloscope (a software tool for visual parameterization) has shown a good ability to recreate unique patterns of tree-ring growth for coniferous species in Siberian Russia, the USA, China, Mediterranean Spain and Tunisia. However, the models are mostly used one-sidedly, to better understand different tree growth processes, in contrast to statistical methods of analysis (e.g. Generalized Linear Models, Mixed Models, Structural Equations), which can be used for reconstruction and forecasting. Usually the models are used either for checking new hypotheses or for quantitative assessment of physiological tree growth data to reveal growth-process mechanisms, while statistical methods are used for data-mining assessment and as a study tool in themselves. The high sensitivity of the model's VS-parameters reflects the ability of the model to simulate tree-ring growth and to evaluate the value of climate factors limiting growth. Precise parameterization with the VS-Oscilloscope provides valuable information about the growth processes of trees and the conditions under which these processes occur (e.g. day of growing-season onset, length of the season, minimum/maximum temperature values for tree-ring growth, formation of wide or narrow rings, etc.). The work was supported by the Russian Science Foundation (RSF # 14-14-00219).

  10. Stable isotope separation in calutrons: Forty years of production and distribution

    Energy Technology Data Exchange (ETDEWEB)

    Bell, W.A.; Tracy, J.G.

    1987-11-01

    The stable isotope separation program, established in 1945, has operated continually to provide enriched stable isotopes and selected radioactive isotopes, including the actinides, for use in research, medicine, and industrial applications. This report summarizes the first forty years of effort in the production and distribution of stable isotopes. Evolution of the program along with the research and development, chemical processing, and production efforts are highlighted. A total of 3.86 million separator hours has been utilized to separate 235 isotopes of 56 elements. Relative effort expended toward processing each of these elements is shown. Collection rates (mg/separator h), which vary by a factor of 20,000 from the highest to the lowest ((205)Tl to (46)Ca), and the attainable isotopic purity for each isotope are presented. Policies related to isotope pricing, isotope distribution, and support for the enrichment program are discussed. Changes in government funding, coupled with large variations in sales revenue, have resulted in 7-fold perturbations in production levels.

  11. Forty Years of E/PO: Can You Have it All? (Invited)

    Science.gov (United States)

    Reiff, P. H.

    2013-12-01

    In forty years of education and public outreach (E/PO), 25 years of which have been funded by various NSF and NASA programs, several lessons (some tough) have been learned. We have done teacher workshops, teacher semester-long courses, student summer programs, and outreach fairs and exhibits, and generally the response of the participants has been very high. Generally, the longer programs reach fewer people but with greater depth and impact; the shorter programs reach more, but with less depth. This paper shows some of the statistics of learning in our various venues, including teacher courses, online material, and planetarium shows. We also performed an online survey of users of NASA materials and contrasted the results with a random group of 144 adults. We find that nearly all teachers and museum educators have been "significantly" or "changed my life" impacted by NASA educational materials, and even 24% of the general public have as well, with 14% of the general public reporting that NASA encouraged them to study STEM and go into STEM careers. Virtually all said that NASA should continue producing educational materials. Some of the stumbling blocks include the difficulty of obtaining funds, the general lack of recognition for outreach in tenure decisions, the difficulty of trying to keep active in research while also active in outreach, and the general problem of "having a life" while juggling many responsibilities. Yet it is worth it!

  12. Sixty Days Remaining, Forty Years of CERN, Two Brothers, One Exclusive Interview

    CERN Multimedia

    2001-01-01

    Twins Marcel and Daniel Genolin, while sharing memories of their CERN experiences, point out just how much smaller the Meyrin site once was. In a place such as CERN, where the physical sciences are in many ways the essence of our daily lives and where technological advancement is an everyday occurrence, it is easy to lose track of the days, months, and even years. But last week twin brothers Daniel and Marcel Genolin, hired in the early sixties and getting ready to end their eventful forty-year CERN experiences, made it clear that the winds of time bluster past us whether we are aware or not. 'CERN was very small when we started,' says Marcel, who has worked in transport during his entire time here. A lot has changed. 'When I got here there were no phones in peoples' houses,' he recalls. 'When there were problems in the control room with the PS (Proton Synchrotron) they used to get a megaphone and tell us [the transport service] to go and get the necessary physicists from their homes in the area. We had to lo...

  13. The Effectiveness of Hard Martial Arts in People over Forty: An Attempted Systematic Review

    Directory of Open Access Journals (Sweden)

    Gaby Pons van Dijk

    2014-04-01

    Full Text Available The objective was to assess the effect of hard martial arts on physical fitness components such as balance, flexibility, gait, strength and cardiorespiratory function, and on several mental functions, in people over forty. A computerized literature search was carried out. Studies were selected when they had an experimental design, the age of the study population was >40, one of the interventions was a hard martial art, and at least balance and cardiorespiratory functions were used as outcome measures. We included four studies, with, in total, 112 participants, aged between 51 and 93 years. The intervention consisted of Taekwondo or Karate. Total training duration varied from 17 to 234 h. All four studies reported beneficial effects, such as improvement in balance, in reaction tests, and in duration of single-leg stance. We conclude that, because of serious methodological shortcomings in all four studies, there is currently suggestive but insufficient evidence that hard martial arts practice improves physical fitness functions in healthy people over 40. However, considering the importance of such effects and the low costs of the intervention, the potential beneficial health effects of age-adapted, hard martial arts training in people over 40 warrant further study.

  14. Cognition improvement in Taekwondo novices over forty. Results from the SEKWONDO Study.

    Directory of Open Access Journals (Sweden)

    Gaby ePons Van Dijk

    2013-11-01

    Full Text Available Age-related cognitive decline is associated with increased risk of disability, dementia and death. Recent studies suggest improvement in cognitive speed, attention and executive functioning with physical activity. However, whether such improvements are activity specific is unclear. Therefore, we aimed to study the effect of one year of age-adapted Taekwondo training on several cognitive functions, including reaction/motor time, information processing speed, and working and executive memory, in 24 healthy volunteers over forty. Reaction and motor time decreased by 41.2 seconds and 18.4 seconds (p=0.004 and p=0.015, respectively). The digit symbol coding task improved by a mean of 3.7 digits (p=0.017). Digit span, letter fluency, and trail-making-test task-completion time all improved, but not statistically significantly. The questionnaire reported better reaction time in 10 and unchanged reaction time in 9 of the nineteen study compliers. In conclusion, our data suggest that age-adapted Taekwondo training improves various aspects of cognitive function in people over 40, and may therefore offer a cheap, safe and enjoyable way to mitigate age-related cognitive decline.

  15. Ontogeny of learning walks and the acquisition of landmark information in desert ants, Cataglyphis fortis.

    Science.gov (United States)

    Fleischmann, Pauline N; Christian, Marcelo; Müller, Valentin L; Rössler, Wolfgang; Wehner, Rüdiger

    2016-10-01

    At the beginning of their foraging lives, desert ants (Cataglyphis fortis) are for the first time exposed to the visual world within which they henceforth must accomplish their navigational tasks. Their habitat, North African salt pans, is barren, and the nest entrance, a tiny hole in the ground, is almost invisible. Although natural landmarks are scarce and the ants mainly depend on path integration for returning to the starting point, they can also learn and use landmarks successfully to navigate through their largely featureless habitat. Here, we studied how the ants acquire this information at the beginning of their outdoor lives within a nest-surrounding array of three artificial black cylinders. Individually marked 'newcomers' exhibit a characteristic sequence of learning walks. The meandering learning walks covering all directions of the compass first occur only within a few centimeters of the nest entrance, but then increasingly widen, until after three to seven learning walks, foraging starts. When displaced to a distant test field in which an identical array of landmarks has been installed, the ants shift their search density peaks more closely to the fictive goal position, the more learning walks they have performed. These results suggest that learning of a visual landmark panorama around a goal is a gradual rather than an instantaneous process. © 2016. Published by The Company of Biologists Ltd.

  16. The Modeling Library of Eavesdropping Methods in Quantum Cryptography Protocols by Model Checking

    Science.gov (United States)

    Yang, Fan; Yang, Guowu; Hao, Yujie

    2016-07-01

    The most crucial issue for quantum cryptography protocols is their security. There exist many ways to attack the quantum communication process. In this paper, we present a model checking method for modeling eavesdropping in quantum information protocols. Thus, when the security properties of a certain protocol need to be verified, we can directly use the models that have already been built. Here we adopt the probabilistic model checking tool PRISM to model these attack methods. The verification results show that the detection rate of eavesdropping approaches 1 when enough photons are transmitted.

  17. Coulomb Collision for Plasma Simulations: Modelling and Numerical Methods

    Science.gov (United States)

    Geiser, Juergen

    2016-09-01

    We are motivated to model weakly ionized plasma applications. The modeling problem is based on explicit velocity-dependent small-angle Coulomb collision terms incorporated into a Fokker-Planck equation. Such collisions are computed with so-called test and field particles, which are scattered stochastically based on a Langevin equation. In these model approaches the transport part is handled with kinetic equations, while the collision part is handled via Langevin equations; we present a splitting of these models. Such a splitting allows us to combine different modeling parts. For the transport part, we can apply particle models and solve them with particle methods, e.g., PIC, while for the collision part, we can apply the explicit Coulomb collision model, e.g., with fast stochastic differential equation solvers. Additionally, we apply multiscale approaches for the different parts of the transport part, e.g., different time scales of an explicit electric field, and model-order reduction approaches. We present first numerical results for particle simulations with the deterministic-stochastic splitting schemes. Such ideas can be applied to sputtering problems or plasma applications with dominant Coulomb collisions.
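    As an illustrative sketch of the stochastic collision step, the snippet below takes Euler-Maruyama steps of a Langevin-type velocity update (drag plus diffusion) for an ensemble of test particles; the constant coefficients are toy values, not the velocity-dependent Coulomb coefficients of the full model.

```python
import numpy as np

rng = np.random.default_rng(0)
nu, D_v, dt = 0.5, 0.1, 1e-3            # drag rate, velocity-space diffusion, step
v = rng.normal(size=(10_000, 3))        # ensemble of test-particle velocities

for _ in range(1000):
    dW = rng.normal(scale=np.sqrt(dt), size=v.shape)   # Wiener increments
    v += -nu * v * dt + np.sqrt(2.0 * D_v) * dW        # Euler-Maruyama step

# the ensemble relaxes toward a Maxwellian with variance D_v / nu per axis
print(v.var(axis=0))
```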

  18. Synthetic-Eddy Method for Urban Atmospheric Flow Modelling

    Science.gov (United States)

    Pavlidis, D.; Gorman, G. J.; Gomes, J. L. M. A.; Pain, C. C.; Apsimon, H.

    2010-08-01

    The computational fluid dynamics code Fluidity, with anisotropic mesh adaptivity, is used as a multi-scale obstacle-accommodating meteorological model. A novel method for generating realistic inlet boundary conditions based on the view of turbulence as a superposition of synthetic eddies is adopted. It is able to reproduce prescribed first-order and second-order one-point statistics and turbulence length scales. The aim is to simulate an urban boundary layer. The model is validated against two standard benchmark tests: a plane channel flow numerical simulation and a flow past a cube physical simulation. The performed large-eddy simulations are in good agreement with both reference models giving confidence that the model can be used to successfully simulate urban atmospheric flows.

  19. A general method for modeling biochemical and biomedical response

    Science.gov (United States)

    Ortiz, Roberto; Lerd Ng, Jia; Hughes, Tyler; Abou Ghantous, Michel; Bouhali, Othmane; Arredouani, Abdelilah; Allen, Roland

    2012-10-01

    The impressive achievements of biomedical science have come mostly from experimental research with human subjects, animal models, and sophisticated laboratory techniques. Additionally, theoretical chemistry has been a major aid in designing new drugs. Here we introduce a method which is similar to others already well known in theoretical systems biology, but which specifically addresses biochemical changes as the human body responds to medical interventions. It is common in systems biology to use first-order differential equations to model the time evolution of various chemical concentrations, and we as physicists can make a significant impact through designing realistic models and then solving the resulting equations. Biomedical research is rapidly advancing, and the technique presented in this talk can be applied in arbitrarily large models containing tens, hundreds, or even thousands of interacting species, to determine what beneficial effects and side effects may result from pharmaceuticals or other medical interventions.
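    Since the talk's technique rests on first-order ODE systems for chemical concentrations, a minimal worked instance may help: the sketch below integrates a toy two-species response model (a drug cleared at a first-order rate driving production of a response protein). The species names and rate constants are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_e, k_p, k_d = 0.3, 1.0, 0.5          # assumed elimination/production/decay rates

def rhs(t, y):
    D, P = y
    dD = -k_e * D                      # first-order drug elimination
    dP = k_p * D - k_d * P             # drug-stimulated production and decay
    return [dD, dP]

# initial dose D(0)=10, no response protein at t=0; simulate 24 time units
sol = solve_ivp(rhs, (0.0, 24.0), [10.0, 0.0], dense_output=True)
print(sol.y[:, -1])                    # final concentrations
```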

  20. Storm surge model based on variational data assimilation method

    Institute of Scientific and Technical Information of China (English)

    Shi-li HUANG; Jian XU; De-guan WANG; Dong-yan LU

    2010-01-01

    By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution, meant to improve the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variational model was developed and validated through data assimilation tests in an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. Then, the actual storm surge induced by Typhoon 0515 was forecast by the developed model, and the results demonstrate its efficiency in practical application.
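    A toy analogue of the control-parameter identification may clarify the idea: below, a wind-stress drag coefficient Cd is recovered by minimizing a quadratic misfit between a simple steady wind-setup formula and synthetic observations. The surge formula, parameter values, and noise are all illustrative assumptions, not the paper's unstructured-grid model or its adjoint.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rho_a, rho_w, g, h, L = 1.2, 1025.0, 9.81, 10.0, 5e4   # air/water density, depth, fetch
wind = np.array([10.0, 15.0, 20.0, 25.0])              # wind speeds [m/s]

def surge(cd, U):                       # steady wind-setup estimate [m]
    return cd * rho_a * U**2 * L / (rho_w * g * h)

cd_true = 2.5e-3
obs = surge(cd_true, wind) + np.random.default_rng(1).normal(0, 0.01, 4)

cost = lambda cd: np.sum((surge(cd, wind) - obs) ** 2)  # variational cost J(Cd)
res = minimize_scalar(cost, bounds=(1e-4, 1e-2), method="bounded")
print(res.x)                            # recovered coefficient, close to cd_true
```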

  1. Differential expression of microRNAs in the non-permissive schistosome host Microtus fortis under schistosome infection.

    Directory of Open Access Journals (Sweden)

    Hongxiao Han

    Full Text Available The reed vole Microtus fortis is the only mammal known in China in which the growth, development and maturation of schistosomes (Schistosoma japonicum) is prevented. The anti-schistosomiasis mechanisms of M. fortis may be associated with microRNA-mediated gene expression, given that the latter has been found to be involved in gene regulation in eukaryotes. In the present study, the differences between pathological changes in tissues of M. fortis and of mice (Mus musculus) post-schistosome infection were observed using hematoxylin-eosin staining. In addition, a microarray technique was applied to identify differentially expressed miRNAs in the same tissues before and post-infection, to analyze the potential roles of miRNAs in schistosome infection in these two different types of host. Histological analyses showed that S. japonicum infection in M. fortis resulted in a more intensive inflammatory response and pathological change than in mice. The microarray analysis revealed that 162 miRNAs were expressed in both species, with 12 in liver, 32 in spleen and 34 in lung being differentially expressed in M. fortis. The functions of the differentially expressed miRNAs mainly involve nutrient metabolism, immune regulation, etc. Further analysis revealed that important signaling pathways were triggered after infection by S. japonicum in M. fortis but not in the mice. These results provide new insights into the general mechanisms of regulation in the non-permissive schistosome host M. fortis that exploit potential miRNA regulatory networks. Such information will help improve current understanding of schistosome development and host-parasite interactions.

  2. Differential expression of microRNAs in the non-permissive schistosome host Microtus fortis under schistosome infection.

    Science.gov (United States)

    Han, Hongxiao; Peng, Jinbiao; Han, Yanhui; Zhang, Min; Hong, Yang; Fu, Zhiqiang; Yang, Jianmei; Tao, Jianping; Lin, Jiaojiao

    2013-01-01

    The reed vole Microtus fortis is the only mammal known in China in which the growth, development and maturation of schistosomes (Schistosoma japonicum) is prevented. The anti-schistosomiasis mechanisms of M. fortis may be associated with microRNA-mediated gene expression, given that the latter has been found to be involved in gene regulation in eukaryotes. In the present study, the differences between pathological changes in tissues of M. fortis and of mice (Mus musculus) post-schistosome infection were observed using hematoxylin-eosin staining. In addition, a microarray technique was applied to identify differentially expressed miRNAs in the same tissues before and post-infection, to analyze the potential roles of miRNAs in schistosome infection in these two different types of host. Histological analyses showed that S. japonicum infection in M. fortis resulted in a more intensive inflammatory response and pathological change than in mice. The microarray analysis revealed that 162 miRNAs were expressed in both species, with 12 in liver, 32 in spleen and 34 in lung being differentially expressed in M. fortis. The functions of the differentially expressed miRNAs mainly involve nutrient metabolism, immune regulation, etc. Further analysis revealed that important signaling pathways were triggered after infection by S. japonicum in M. fortis but not in the mice. These results provide new insights into the general mechanisms of regulation in the non-permissive schistosome host M. fortis that exploits potential miRNA regulatory networks. Such information will help improve current understanding of schistosome development and host-parasite interactions.

  3. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
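    To illustrate the prediction component, the sketch below fits an additive Holt-Winters model to a residual series and forecasts ahead, the role the statistical part plays in the hybrid propagator; the synthetic residual series (drift plus periodic term) and the season length are illustrative assumptions, not the authors' data.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Assumed stand-in for the error between a reference orbit and a low-order
# analytical theory, sampled at 400 epochs with a 50-epoch periodicity.
t = np.arange(400)
residuals = 0.002 * t + 0.5 * np.sin(2 * np.pi * t / 50)

fit = ExponentialSmoothing(residuals[:300], trend="add",
                           seasonal="add", seasonal_periods=50).fit()
correction = fit.forecast(100)          # predicted "missing dynamics"
# hybrid prediction = analytical_theory(t[300:]) + correction  (schematic)
print(np.max(np.abs(correction - residuals[300:])))
```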

  5. Soybean yield modeling using bootstrap methods for small samples

    Energy Technology Data Exchange (ETDEWEB)

    Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.

    2016-11-01

    One of the problems that occurs when working with regression models concerns the sample size: since the statistical methods used in inferential analyses are asymptotic, if the sample is small the analysis may be compromised because the estimates will be biased. An alternative is to use the bootstrap methodology, which in its non-parametric version does not need to guess or know the probability distribution that generated the original sample. In this work we used a small sample of soybean yield data and physical and chemical soil properties to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points, and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties that were significant in the construction of the soybean yield regression model, construct the confidence intervals of the parameters, and identify the points that had great influence on the estimated parameters. (Author)
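    A minimal sketch of the case-resampling bootstrap for regression coefficients follows; the simulated data stand in for the soil-property/yield sample, and the resample count and confidence level are ordinary defaults, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 30                                    # deliberately small sample
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ [2.0, 0.8, -0.5] + rng.normal(0, 0.3, n)

def ols(Xb, yb):
    return np.linalg.lstsq(Xb, yb, rcond=None)[0]

# resample cases with replacement and refit 2000 times
boot = np.array([ols(X[idx], y[idx])
                 for idx in rng.integers(0, n, size=(2000, n))])
ci = np.percentile(boot, [2.5, 97.5], axis=0)   # percentile CI per coefficient
print(ci)
```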

  6. A contoured continuum surface force model for particle methods

    Science.gov (United States)

    Duan, Guangtao; Koshizuka, Seiichi; Chen, Bin

    2015-10-01

    A surface tension model is essential to simulate multiphase flows with deformed interfaces. This study develops a contoured continuum surface force (CCSF) model for particle methods. A color function that varies sharply across the interface to mark different fluid phases is smoothed in the transition region, where the local contour curvature can be regarded as the interface curvature. The local contour passing through each reference particle in the transition region is extracted from the local profile of the smoothed color function. The local contour curvature is calculated based on the Taylor series expansion of the smoothed color function, whose derivatives are calculated accurately according to the definition of the smoothed color function. Two schemes are proposed to specify the smoothing radius: a fixed scheme, where 2×re (re = particle interaction radius) is assigned to all particles in the transition region; and a varied scheme, where re and 2×re are assigned to the central and edge particles in the transition region, respectively. Numerical examples, including curvature calculation for static circle and ellipse interfaces, deformation of a square droplet to a circle (2D and 3D), droplet deformation in shear flow, and droplet coalescence, are simulated to verify the CCSF model and compare its performance with those of other methods. The CCSF model with the fixed scheme is proven to produce the most accurate curvature and lowest parasitic currents among the tested methods.

  7. A hierarchical network modeling method for railway tunnels safety assessment

    Science.gov (United States)

    Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin

    2017-02-01

    Using network theory to model risk-related knowledge on accidents is regarded as potentially very helpful in risk management. A large amount of defect-detection data for railway tunnels is collected in autumn every year in China, and it is extremely important to discover the regularities hidden in these databases. In this paper, based on network theories and using data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account the tunnel structures, tunnel defects, potential failures and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. An algorithm is then presented to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers both the actual defects and the possible risks of defects gained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and critical risks of defects. This paper further develops accident-causation network modeling methods, which can provide guidance for specific maintenance measures.
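    For orientation, the sketch below runs a plain Apriori-style frequent-itemset pass over hypothetical (structure, defect) transactions; the records, item names, and support threshold are made up, and none of the paper's algorithmic improvements are reproduced.

```python
# Minimal Apriori-style frequent-pattern mining over toy defect records.
records = [{"lining_crack", "leakage", "ballast_fouling"},
           {"lining_crack", "leakage"},
           {"leakage", "ballast_fouling"},
           {"lining_crack", "leakage", "cavity"}]
min_support = 2

# level 1: frequent single items
freq = [{frozenset([i]) for r in records for i in r
         if sum(i in s for s in records) >= min_support}]
# join step: grow itemsets by one item, keep those meeting min_support
while freq[-1]:
    candidates = {a | b for a in freq[-1] for b in freq[-1]
                  if len(a | b) == len(a) + 1}
    freq.append({c for c in candidates
                 if sum(c <= s for s in records) >= min_support})
for level in freq:
    print(sorted(map(sorted, level)))
```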

  8. Geomatic methods at the service of water resources modelling

    Science.gov (United States)

    Molina, José-Luis; Rodríguez-Gonzálvez, Pablo; Molina, Mª Carmen; González-Aguilera, Diego; Espejo, Fernando

    2014-02-01

    Acquisition, management and/or use of spatial information are crucial for the quality of water resources studies. In this sense, several geomatic methods arise at the service of water modelling, aiming at the generation of cartographic products, especially 3D models and orthophotos. They may also perform as tools for problem solving and decision making. However, choosing the right geomatic method is still a challenge in this field, mostly due to the complexity of the different applications and the variables involved in water resources management. This study aims to provide a guide to best practices in this context by undertaking a deep review of geomatic methods and assessing their suitability for the following study types: Surface Hydrology, Groundwater Hydrology, Hydraulics, Agronomy, Morphodynamics and Geotechnical Processes. This assessment is driven by several decision variables, grouped in two categories and classified by their nature as geometric or radiometric. As a result, the reader comes away with the best choice or choices of method to use, depending on the type of water resources modelling study at hand.

  9. A novel model reduction method based on balanced truncation

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    The main goal of this paper is to construct an efficient reduced-order model (ROM) for unsteady aerodynamic force modeling. Balanced truncation (BT) is presented to address the problem. For the conventional BT method, it is necessary to compute exact controllability and observability grammians. Although it is relatively straightforward to compute these matrices in a control setting where the system order is moderate, the technique does not extend easily to high-order systems. In response to this challenge, the snapshots-BT (S-BT) method is introduced for ROM construction of high-order systems. The key idea of the S-BT method is that snapshots of the primary and dual systems approximate the controllability and observability grammians in the frequency domain. The method has been demonstrated for three high-order systems: (1) unsteady motion of a two-dimensional airfoil in response to gust, (2) the AGARD 445.6 wing aeroelastic system, and (3) the BACT (benchmark active control technology) standard aeroservoelastic system. All the results indicate that the S-BT based ROM is efficient and accurate enough to provide a powerful tool for unsteady aerodynamic force modeling.
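    For reference, the sketch below implements textbook square-root balanced truncation (Laub's method) for a small stable LTI system; it is the generic grammian-based procedure, not the paper's S-BT variant, which instead approximates the grammians from frequency-domain snapshots.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    # grammians from the continuous Lyapunov equations (A must be stable)
    P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability
    Lp, Lq = cholesky(P, lower=True), cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                     # s: Hankel singular values
    Sr = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ Sr                        # right projection basis
    Ti = Sr @ U[:, :r].T @ Lq.T                   # left projection basis
    return Ti @ A @ T, Ti @ B, C @ T

A = np.array([[-1.0, 0.1, 0.0], [0.0, -2.0, 0.2], [0.0, 0.0, -5.0]])
B = np.array([[1.0], [0.5], [0.2]])
C = np.array([[1.0, 0.0, 1.0]])
Ar, Br, Cr = balanced_truncation(A, B, C, r=2)    # order-2 reduced model
```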

  10. Numerical Methods for the Lévy LIBOR Model

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Skovmand, David

    The aim of this work is to provide fast and accurate approximation schemes for the Monte-Carlo pricing of derivatives in the Lévy LIBOR model of Eberlein and Özkan (2005). Standard methods can be applied to solve the stochastic differential equations of the successive LIBOR rates, but these methods are generally slow. We propose an alternative approximation scheme based on Picard iterations. Our approach is similar in accuracy to the full numerical solution, but with the feature that each rate is, unlike in the standard method, evolved independently of the other rates in the term structure. This enables simultaneous calculation of derivative prices of different maturities using parallel computing. We include numerical illustrations of the accuracy and speed of our method pricing caplets.

  11. Quantum Monte Carlo methods algorithms for lattice models

    CERN Document Server

    Gubernatis, James; Werner, Philipp

    2016-01-01

    Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...

  12. Numerical method of slope failure probability based on Bishop model

    Institute of Scientific and Technical Information of China (English)

    SU Yong-hua; ZHAO Ming-hua; ZHANG Yue-ying

    2008-01-01

    Based on Bishop's model and applying the first- and second-order mean-deviation method, an approximate solution method for the first- and second-order partial derivatives of the performance function was deduced according to numerical analysis theory. After the complicated multi-variable implicit function was simplified to a single-variable implicit function, and the rule for differentiating composite functions was combined with the principle of the mean-deviation method, an approximate solution format for the implicit performance function was established through Taylor series expansion, and an iterative solution approach for the reliability index was given. An engineering example was analyzed with the method. The result shows its absolute error is only 0.78% compared with the accurate solution.
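    A hedged sketch of the underlying first-order mean-value computation follows: the performance function is evaluated at the variable means, its partial derivatives are approximated numerically, and the reliability index is the mean-to-standard-deviation ratio. The explicit performance function used here is a simple stand-in, not Bishop's implicit safety-factor function.

```python
import numpy as np

def fosm_beta(g, mu, sigma, h=1e-6):
    """First-order second-moment reliability index for independent variables."""
    mu = np.asarray(mu, float)
    # central-difference partial derivatives at the mean point
    grad = np.array([(g(mu + h * e) - g(mu - h * e)) / (2 * h)
                     for e in np.eye(len(mu))])
    var_g = np.sum((grad * np.asarray(sigma)) ** 2)
    return g(mu) / np.sqrt(var_g)

# toy performance function: resistance minus load
beta = fosm_beta(lambda x: x[0] - x[1], mu=[12.0, 8.0], sigma=[1.5, 1.0])
print(beta)   # (12-8)/sqrt(1.5**2 + 1.0**2) ~ 2.22
```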

  13. Tramp Ship Routing and Scheduling - Models, Methods and Opportunities

    DEFF Research Database (Denmark)

    Vilhelmsen, Charlotte; Larsen, Jesper; Lusby, Richard Martin

    Due to mergers, pooling, and collaboration efforts between shipping companies, fleet sizes have grown to a point where manual planning is no longer adequate in a market with tough competition and low freight rates. The aim of this paper is to provide a comprehensive introduction to tramp ship routing and scheduling. This includes a review of existing literature, modelling approaches and solution methods, as well as an analysis of the current status and future opportunities of research within tramp ship routing and scheduling. We argue that rather than developing new solution methods for the basic routing and scheduling problem, focus should now be on extending this basic problem to include additional real-world complexities and on developing suitable solution methods for those extensions. Such extensions will enable more tramp operators to benefit from the solution methods while simultaneously creating new...

  14. New Models and Methods for the Electroweak Scale

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, Linda

    2017-09-26

    This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak-scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much minimal-model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and annihilation in space. Accomplishments include creating new tools for analyses of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac

  15. A MODEL AND CONTROLLER REDUCTION METHOD FOR ROBUST CONTROL DESIGN.

    Energy Technology Data Exchange (ETDEWEB)

    YUE,M.; SCHLUETER,R.

    2003-10-20

    A bifurcation subsystem based model and controller reduction approach is presented. Using this approach, a robust μ-synthesis SVC control is designed for interarea oscillation and voltage control, based on a small reduced-order bifurcation subsystem model of the full system. The control synthesis problem is posed by structured uncertainty modeling and control configuration formulation using the bifurcation subsystem knowledge of the nature of the interarea oscillation caused by a specific uncertainty parameter. The bifurcation subsystem method plays a key role in this paper because it provides (1) a bifurcation parameter for uncertainty modeling; (2) a criterion to reduce the order of the resulting MSVC control; and (3) a low-order model for a bifurcation subsystem based SVC (BMSVC) design. The use of the model of the bifurcation subsystem to produce a low-order controller simplifies the control design and reduces the computation effort so significantly that the robust μ-synthesis control can be applied to large systems where the computation otherwise makes robust control design impractical. The RGA analysis and time simulation show that the reduced BMSVC control design captures the center manifold dynamics and uncertainty structure of the full system model and is capable of stabilizing the full system and achieving satisfactory control performance.

  16. A Parsimonious Bootstrap Method to Model Natural Inflow Energy Series

    Directory of Open Access Journals (Sweden)

    Fernando Luiz Cyrino Oliveira

    2014-01-01

    Full Text Available The Brazilian energy generation and transmission system is quite peculiar in its dimension and characteristics; as such, it can be considered unique in the world. It is a high-dimension hydrothermal system with huge participation of hydro plants. Such strong dependency on hydrological regimes implies uncertainties related to energy planning, requiring adequate modeling of the hydrological time series. This is carried out via stochastic simulations of monthly inflow series using the family of Periodic Autoregressive models, PAR(p), one for each period (month) of the year. In this paper the problems in fitting these models with the current system are shown, particularly the identification of the autoregressive order "p" and the corresponding parameter estimation. This is followed by a proposal of a new approach to set both the model order and the parameter estimation of the PAR(p) models, using a nonparametric computational technique known as the bootstrap. This technique allows the estimation of reliable confidence intervals for the model parameters. The obtained results using the Parsimonious Bootstrap Method of Moments (PBMOM) produced not only more parsimonious model orders but also adherent stochastic scenarios and, in the long range, lead to a better use of water resources in energy operation planning.
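    To fix ideas, the sketch below fits a PAR(1) model in this spirit: one autoregressive coefficient per calendar month, estimated by per-month least squares on demeaned synthetic inflows. Order selection and the bootstrap confidence intervals of the PBMOM proposal are omitted, and all data are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
years, months = 60, 12
x = rng.lognormal(mean=3.0, sigma=0.3, size=years * months)  # toy inflows

mu = x.reshape(years, months).mean(axis=0)    # monthly means
d = x - np.tile(mu, years)                    # deviations from monthly mean

phi = np.zeros(months)
for m in range(months):
    idx = np.arange(years) * months + m
    idx = idx[idx > 0]                        # each month regressed on the one before
    phi[m] = np.linalg.lstsq(d[idx - 1][:, None], d[idx], rcond=None)[0][0]
print(np.round(phi, 3))                       # one AR(1) coefficient per month
```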

  17. A Method to Identify Flight Obstacles on Digital Surface Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Min; LIN Xinggang; SUN Shouyu; WANG Youzhi

    2005-01-01

    In modern low-altitude terrain-following guidance, a construction method for the digital surface model (DSM) is presented in this paper to reduce the threat posed to flying vehicles by tall surface features and so ensure safe flight. The relationship between the size of an isolated obstacle and the intervals of the vertical and cross sections in the DSM model is established. The definition and classification of isolated obstacles are proposed, and a method for determining such isolated obstacles in the DSM model is given. The simulation of a typical urban district shows that when the vertical- and cross-section DSM intervals are between 3 m and 25 m, the threat to terrain-following flight at low altitude is reduced greatly, and the amount of data required by the DSM model for monitoring a flying vehicle in real time is also smaller. Experiments show that the optimal results are obtained for an interval of 12.5 m in the vertical and cross sections in the DSM model, with a 1:10 000 DSM scale grade.

  18. A Relation-Based Modeling Method for Workshop Reconfiguration

    Institute of Scientific and Technical Information of China (English)

    LI Pan-jing; QIN Xian-sheng; WANG Ke-qin; LI Min

    2004-01-01

    To respond rapidly to changes in the market, the workshop has become an ever-changing dynamic environment with regard to personnel change, organization alternation, etc. It is therefore necessary to reconfigure the workshop system. In this paper, we present the point of view that the closer the relations are among elements in the system, the closer those elements should be connected when they are integrated in the design and structural modeling of the workshop system. First, the paper discusses the relationships among elements in the workshop system and the events describing them, and provides a technical overview of the expression, definition and classification of relationships. The paper focuses on the steps and algorithm used to evaluate the degree of closeness of relations among elements in systems, and emphasizes the modeling methods for workshop reconfiguration by use of a fuzzy cluster. Following these steps and methods, the types and contents of basic relationships among elements are determined, and a standard relation tree is set up. Then, correlation coefficients are calculated from the standard relation tree, and a fuzzy relation matrix is built. After that, the structural modeling of the workshop equipment system can be completed through a fuzzy cluster. The paper ends with an application to FMS (Flexible Manufacturing System) function system modeling. Results of the modeling and calculations are presented.
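
    A minimal sketch of the fuzzy-cluster machinery described above: build a fuzzy relation matrix of closeness coefficients, take its max-min transitive closure, and read off clusters at a lambda-cut. The closeness values are invented for illustration.

        import numpy as np

        def transitive_closure(R):
            # Max-min transitive closure of a fuzzy relation matrix
            while True:
                comp = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
                R2 = np.maximum(R, comp)       # R union (R o R)
                if np.allclose(R2, R):
                    return R2
                R = R2

        def lambda_cut_clusters(R, lam):
            # After closure, the lambda-cut of a fuzzy equivalence relation
            # partitions the elements into equivalence classes
            labels, cluster = -np.ones(len(R), dtype=int), 0
            for i in range(len(R)):
                if labels[i] < 0:
                    labels[np.where(R[i] >= lam)[0]] = cluster
                    cluster += 1
            return labels

        R = np.array([[1.0, 0.8, 0.3, 0.2, 0.2],
                      [0.8, 1.0, 0.4, 0.2, 0.3],
                      [0.3, 0.4, 1.0, 0.7, 0.6],
                      [0.2, 0.2, 0.7, 1.0, 0.9],
                      [0.2, 0.3, 0.6, 0.9, 1.0]])   # hypothetical closeness matrix
        print(lambda_cut_clusters(transitive_closure(R), lam=0.6))   # -> [0 0 1 1 1]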

  19. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that can be used in retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  20. A qualitative model structure sensitivity analysis method to support model selection

    Science.gov (United States)

    Van Hoey, S.; Seuntjens, P.; van der Kwast, J.; Nopens, I.

    2014-11-01

    The selection and identification of a suitable hydrological model structure is a more challenging task than fitting parameters of a fixed model structure to reproduce a measured hydrograph. The suitable model structure is highly dependent on various criteria, i.e. the modeling objective, the characteristics and scale of the system under investigation, and the available data. Flexible environments for model building are available, but need to be assisted by proper diagnostic tools for model structure selection. This paper introduces a qualitative method for model component sensitivity analysis. Traditionally, model sensitivity is evaluated for model parameters. In this paper, the concept is translated into an evaluation of model structure sensitivity. Similarly to the one-factor-at-a-time (OAT) methods for parameter sensitivity, this method varies the model structure components one at a time and evaluates the change in sensitivity towards the output variables. As such, the effect of model component variations can be evaluated towards different objective functions or output variables. The methodology is presented for a simple lumped hydrological model environment, introducing different possible model building variations. By comparing the effect of changes in model structure for different model objectives, model selection can be better evaluated. Based on the presented component sensitivity analysis of a case study, some suggestions with regard to model selection are formulated for the system under study: (1) a non-linear storage component is recommended, since it ensures more sensitive (identifiable) parameters for this component and less parameter interaction; (2) interflow is mainly important for the low-flow criteria; (3) the excess infiltration process is most influential when focusing on the lower flows; (4) a simpler routing component is advisable; and (5) baseflow parameters have in general low sensitivity values, except for the low-flow criteria.
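
    A toy illustration of one-structure-at-a-time evaluation (not the authors' code): a lumped bucket model whose storage outflow law is swapped between a linear and a nonlinear variant, with each variant scored against an overall and a low-flow objective.

        import numpy as np

        def simulate(rain, storage="linear", k=0.05, n=1.5):
            # One-bucket model; only the outflow component is varied
            s, q = 0.0, []
            for r in rain:
                out = k * s if storage == "linear" else k * s ** n
                out = min(out, s + r)
                s = s + r - out
                q.append(out)
            return np.array(q)

        rng = np.random.default_rng(0)
        rain = rng.exponential(2.0, 365)
        obs = simulate(rain, "nonlinear") + rng.normal(0, 0.05, 365)

        for variant in ["linear", "nonlinear"]:
            q = simulate(rain, variant)
            rmse = np.sqrt(np.mean((q - obs) ** 2))
            low = obs < np.median(obs)                       # low-flow subset
            rmse_low = np.sqrt(np.mean((q[low] - obs[low]) ** 2))
            print(f"{variant:9s}  RMSE={rmse:.3f}  low-flow RMSE={rmse_low:.3f}")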

  1. A Comparison of Two Smoothing Methods for Word Bigram Models

    CERN Document Server

    Petö, L B

    1994-01-01

    Word bigram models estimated from text corpora require smoothing methods to estimate the probabilities of unseen bigrams. The deleted estimation method uses the formula Pr(i|j) = λ f_i + (1 − λ) f_{i|j}, where f_i and f_{i|j} are the relative frequency of i and the conditional relative frequency of i given j, respectively, and λ is an optimized parameter. MacKay (1994) proposes a Bayesian approach using Dirichlet priors, which yields a different formula: Pr(i|j) = (α/(F_j + α)) m_i + (1 − α/(F_j + α)) f_{i|j}, where F_j is the count of j and α and m_i are optimized parameters. This thesis describes an experiment in which the two methods were trained on a two-million-word corpus taken from the Canadian _Hansard_ and compared on the basis of the experimental perplexity that they assigned to a shared test corpus. The methods proved to be about equally ...
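
    The two estimators are easy to compare side by side in code. In the sketch below, lambda and alpha are fixed by hand, whereas the thesis optimizes them; the toy corpus is obviously not the Hansard.

        from collections import Counter

        def deleted_estimation(bigrams, unigrams, N, lam=0.7):
            # Pr(i|j) = lam * f_i + (1 - lam) * f_{i|j}
            def pr(i, j):
                f_i = unigrams[i] / N
                f_ij = bigrams[(j, i)] / unigrams[j] if unigrams[j] else 0.0
                return lam * f_i + (1 - lam) * f_ij
            return pr

        def dirichlet_smoothing(bigrams, unigrams, N, alpha=100.0):
            # Pr(i|j) = w * m_i + (1 - w) * f_{i|j},  with w = alpha / (F_j + alpha)
            def pr(i, j):
                m_i = unigrams[i] / N        # unigram distribution as the prior mean
                F_j = unigrams[j]
                f_ij = bigrams[(j, i)] / F_j if F_j else 0.0
                w = alpha / (F_j + alpha)
                return w * m_i + (1 - w) * f_ij
            return pr

        words = "the cat sat on the mat the cat ran".split()
        unigrams, N = Counter(words), len(words)
        bigrams = Counter(zip(words, words[1:]))
        for name, pr in [("deleted", deleted_estimation(bigrams, unigrams, N)),
                         ("dirichlet", dirichlet_smoothing(bigrams, unigrams, N))]:
            print(name, pr("cat", "the"))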

  2. A liquid drop model for embedded atom method cluster energies

    Science.gov (United States)

    Finley, C. W.; Abel, P. B.; Ferrante, J.

    1996-01-01

    Minimum energy configurations for homonuclear clusters containing from two to twenty-two atoms of six metals, Ag, Au, Cu, Ni, Pd, and Pt have been calculated using the Embedded Atom Method (EAM). The average energy per atom as a function of cluster size has been fit to a liquid drop model, giving estimates of the surface and curvature energies. The liquid drop model gives a good representation of the relationship between average energy and cluster size. As a test the resulting surface energies are compared to EAM surface energy calculations for various low-index crystal faces with reasonable agreement.
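
    A minimal sketch of such a liquid-drop fit: regress the average energy per atom on N^(-1/3) and N^(-2/3) terms, whose coefficients estimate the volume, surface, and curvature energies. The energies below are synthetic, not EAM results.

        import numpy as np

        N = np.arange(2, 23, dtype=float)
        rng = np.random.default_rng(0)
        E = -2.9 + 1.8 * N ** (-1 / 3) + 0.6 * N ** (-2 / 3) + rng.normal(0, 0.01, N.size)

        # liquid-drop form: E(N) = E_vol + E_surf * N^(-1/3) + E_curv * N^(-2/3)
        A = np.column_stack([np.ones_like(N), N ** (-1 / 3), N ** (-2 / 3)])
        (e_vol, e_surf, e_curv), *_ = np.linalg.lstsq(A, E, rcond=None)
        print(f"volume {e_vol:.3f}, surface {e_surf:.3f}, curvature {e_curv:.3f} eV/atom")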

  3. Modelling application for cognitive reliability and error analysis method

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2013-10-01

    Full Text Available The automation of production systems has delegated to machines the execution of highly repetitive and standardized tasks. In the last decade, however, the failure of the fully automatic factory model has led to partially automated configurations of production systems. In this scenario, the centrality and responsibility of the role entrusted to human operators are heightened, because the role requires problem-solving and decision-making ability. The human operator is thus the core of a cognitive process that leads to decisions and influences the safety of the whole system as a function of his or her reliability. The aim of this paper is to propose a modelling application for the cognitive reliability and error analysis method.

  4. Modeling of Methods to Control Heat-Consumption Efficiency

    Science.gov (United States)

    Tsynaeva, E. A.; Tsynaeva, A. A.

    2016-11-01

    In this work, consideration has been given to thermophysical processes in automated heat consumption control systems (AHCCSs) of buildings, flow diagrams of these systems, and mathematical models describing the thermophysical processes during the systems' operation; an analysis of the adequacy of the mathematical models is presented. A comparison has been made of the operating efficiency of the systems and of the methods to control that efficiency. It has been determined that the operating efficiency of an AHCCS depends on its diagram and the temperature chart of central quality control (CQC), and also on the temperature of the low-grade heat source for the system with a heat pump.

  5. Variable Neighborhood Simplex Search Methods for Global Optimization Models

    Directory of Open Access Journals (Sweden)

    Pongchanun Luangpaiboon

    2012-01-01

    Full Text Available Problem statement: Many optimization problems of practical interest are encountered in various fields of the chemical, engineering and management sciences. They are computationally intractable, so a practical approach is to employ approximation algorithms that can find nearly optimal solutions within a reasonable amount of computational time. Approach: In this study, hybrid methods combining Variable Neighborhood Search (VNS) and simplex-family methods are proposed to deal with global optimization problems for noisy continuous functions, including constrained models. The simplex methods offer a search scheme that requires no gradient information, whereas VNS has better searching ability through a systematic change of the neighborhood of the current solution within a local search. Results: The VNS modified simplex method has better searching ability for optimization problems with noise. It also outperforms on average on the characteristics of intensity and diversity during the design-point moving stage for constrained optimization. Conclusion: The adaptive hybrid versions have proved to obtain significantly better results than the conventional methods. The amount of computational effort required for successful optimization is very sensitive to the rate of noise decrease of the process yields. Under circumstances of constrained optimization with gradually increasing noise during optimization, the most preferred approach is the VNS modified simplex method.
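
    The record does not give the exact hybrid algorithm, so the sketch below shows one plausible VNS/simplex combination: random shaking in progressively larger neighbourhoods, each shake followed by a Nelder-Mead local search on a noisy objective. The radii, iteration budget and test function are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        def noisy_objective(x, rng):
            # Rosenbrock function with additive observation noise
            return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2 + rng.normal(0, 0.1)

        def vns_simplex(x0, radii=(0.1, 0.5, 1.0, 2.0), iters=30, seed=0):
            rng = np.random.default_rng(seed)
            f = lambda x: noisy_objective(x, rng)
            best_x, best_f, k = np.asarray(x0, float), f(x0), 0
            for _ in range(iters):
                trial = best_x + rng.uniform(-radii[k], radii[k], best_x.size)  # shake
                res = minimize(f, trial, method="Nelder-Mead",
                               options={"maxiter": 200, "xatol": 1e-3, "fatol": 1e-3})
                if res.fun < best_f:
                    best_x, best_f, k = res.x, res.fun, 0   # move, restart neighbourhoods
                else:
                    k = min(k + 1, len(radii) - 1)          # try a larger neighbourhood
            return best_x, best_f

        print(vns_simplex([-1.5, 2.0]))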

  6. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of these data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statisticians.

  7. On the use of simplex methods in constructing quadratic models

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, we investigate quadratic approximation methods. After studying the basic idea of simplex methods, we construct several new search directions by combining the local information progressively obtained during the iterates of the algorithm to form new subspaces, and the quadratic model is solved in the new subspaces. The motivation is to use the information disclosed by the former steps to construct more promising directions. For most tested problems, the number of function evaluations has been reduced considerably by our algorithms.

  8. (Environmental and geophysical modeling, fracture mechanics, and boundary element methods)

    Energy Technology Data Exchange (ETDEWEB)

    Gray, L.J.

    1990-11-09

    Technical discussions at the various sites visited centered on the application of boundary integral methods for environmental modeling, seismic analysis, and computational fracture mechanics in composite and "smart" materials. The traveler also attended the International Association for Boundary Element Methods Conference in Rome, Italy. While many aspects of boundary element theory and applications were discussed in the papers, the dominant topic was the analysis and application of hypersingular equations. This has been the focus of recent work by the author, and thus the conference was highly relevant to research at ORNL.

  9. DIRECT INTEGRATION METHODS WITH INTEGRAL MODEL FOR DYNAMIC SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    吕和祥; 于洪洁; 裘春航

    2001-01-01

    A new approach, a direct integration method with an integral model (DIM-IM), to solve dynamic governing equations is presented. The governing equations are integrated into integral equations. An algorithm that is explicit, predictor-corrector, self-starting and fourth-order accurate is given for integrating the integral equations. Theoretical analysis and numerical examples show that the DIM-IM described in this paper, suitable for strongly nonlinear and non-conservative systems, has higher accuracy than the central difference, Houbolt, Newmark and Wilson-θ methods.

  10. [Models and computation methods of EEG forward problem].

    Science.gov (United States)

    Zhang, Yinghcun; Zou, Ling; Zhu, Shanan

    2004-04-01

    The research of EEG is of great significance and clinical importance in studying the cognitive function and neural activity of the brain. There are two key problems in the field of EEG: the forward problem and the inverse problem. The EEG forward problem, which aims to obtain the distribution of the scalp potential due to a known current distribution in the brain, is the basis of the EEG inverse problem. Generally, the EEG inverse problem depends on the accuracy and efficiency of the computational method used for the forward problem. This paper reviews the head models and corresponding computational methods for the EEG forward problem studied in recent years.
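
    For orientation, the simplest forward kernel is the potential of a current dipole in an infinite homogeneous conductor; realistic head models replace this with multi-shell spheres or BEM/FEM solutions. The numbers below are illustrative.

        import numpy as np

        def dipole_potential(r, r0, p, sigma=0.33):
            # V(r) = p . (r - r0) / (4 * pi * sigma * |r - r0|^3)
            d = np.asarray(r, float) - np.asarray(r0, float)
            return np.dot(p, d) / (4 * np.pi * sigma * np.linalg.norm(d) ** 3)

        p = np.array([0.0, 0.0, 1e-8])            # 10 nA*m radial dipole
        r0 = np.array([0.0, 0.0, 0.07])           # source 7 cm from the centre
        electrode = np.array([0.0, 0.0, 0.09])    # electrode on a 9 cm "scalp" sphere
        print(f"{dipole_potential(electrode, r0, p) * 1e6:.2f} microvolts")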

  11. Space Object Tracking Method Based on a Snake Model

    Science.gov (United States)

    Zhan-wei, Xu; Xin, Wang

    2016-04-01

    In this paper, to address the unstable tracking of low-orbit space objects of varying brightness, an improved GVF (Gradient Vector Flow) Snake algorithm based on an active contour model is proposed to search the real object contour on the CCD image in real time. Combined with a Kalman filter for prediction, a new adaptive tracking method is proposed for space objects. Experiments show that this method can overcome the tracking error caused by a fixed window and improve the tracking robustness.
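
    A minimal sketch of the prediction step paired with the snake: a constant-velocity Kalman filter whose predicted centroid can centre the contour search window. The matrices and noise levels are illustrative.

        import numpy as np

        dt = 1.0                                   # state: [x, y, vx, vy] on the image plane
        F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
        H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # the snake returns a centroid
        Q, R = np.eye(4) * 1e-2, np.eye(2) * 2.0   # process / measurement noise

        def predict(x, P):
            return F @ x, F @ P @ F.T + Q

        def update(x, P, z):
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

        x, P = np.array([100.0, 100.0, 0.0, 0.0]), np.eye(4) * 10
        for z in [np.array([102.0, 101.0]), np.array([104.5, 102.2])]:
            x, P = predict(x, P)     # centre the snake's search window at x[:2]
            x, P = update(x, P, z)   # correct with the extracted contour centroid
        print(x)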

  12. Transfer matrix methods in the Blume-Emery-Griffiths model

    Science.gov (United States)

    Koza, Zbigniew; Jasiukiewicz, Czesław; Pȩkalski, Andrzej

    1990-03-01

    The critical properties of the plane Blume-Emery-Griffiths (BEG) model are analyzed using two transfer matrix approaches. The two methods and the domains of their applicability are discussed. The phase diagram is derived and compared with the one obtained by the position-space renormalization group (PSRG). The critical indices η_i and the conformal anomaly c are computed at Ising-like and Potts-like critical points, and good agreement with the conformal invariance predictions is found. A new, very effective method of estimating critical points is introduced and an attempt to estimate critical end points is also made.
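
    The paper treats the two-dimensional model; as a compact reminder of how transfer matrices work, the sketch below builds the 3x3 transfer matrix of a one-dimensional BEG chain and extracts the free energy and correlation length from its two leading eigenvalues. The couplings are arbitrary.

        import numpy as np

        def beg_transfer_matrix(J=1.0, K=0.5, Delta=0.2, beta=1.0):
            # H = -sum_i [J s_i s_{i+1} + K s_i^2 s_{i+1}^2] + Delta * sum_i s_i^2
            s = np.array([-1.0, 0.0, 1.0])
            return np.exp(beta * (J * np.outer(s, s) + K * np.outer(s**2, s**2)
                                  - Delta * (s[:, None]**2 + s[None, :]**2) / 2))

        T = beg_transfer_matrix()
        lam = np.sort(np.linalg.eigvalsh(T))[::-1]   # T is symmetric
        f = -np.log(lam[0])                          # free energy per site (units of kT)
        xi = 1.0 / np.log(lam[0] / abs(lam[1]))      # correlation length (lattice units)
        print(f"free energy/site = {f:.4f}, correlation length = {xi:.3f}")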

  13. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools...... of the framework. The issue of commercial simulators or software providing the necessary features for product-process synthesis-design as opposed to their development by the academic PSE community will also be discussed. An example of a successful collaboration between academia-industry for the development...

  14. Alternative modeling methods for plasma-based Rf ion sources

    Energy Technology Data Exchange (ETDEWEB)

    Veitzer, Seth A., E-mail: veitzer@txcorp.com; Kundrapu, Madhusudhan, E-mail: madhusnk@txcorp.com; Stoltz, Peter H., E-mail: phstoltz@txcorp.com; Beckwith, Kristian R. C., E-mail: beckwith@txcorp.com [Tech-X Corporation, Boulder, Colorado 80303 (United States)

    2016-02-15

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H⁻ source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H⁻ ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models.

  15. Alternative modeling methods for plasma-based Rf ion sources

    Science.gov (United States)

    Veitzer, Seth A.; Kundrapu, Madhusudhan; Stoltz, Peter H.; Beckwith, Kristian R. C.

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H- source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H- ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models.

  16. Models and methods for wind effect prediction; Modeller og metoder til prediktion af vindeffekt

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.

    1997-12-31

    In this report, methods and models for predicting the power produced by windmills are considered. Several methods are suggested and investigated on actual observations of wind speed and the corresponding power. In order to improve the predictions, meteorological forecasts are used in the formulation of the models. The methods applied cover non-parametric identification, least squares estimation and local regression. It was found that the meteorological forecasts significantly improved the predictions, and that a combination of non-parametric and parametric modelling proved to be successful. (au) 38 refs.
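
    One of the named techniques, local regression, is easy to sketch: a Gaussian-weighted local linear fit of produced power on forecast wind speed. The power-curve data are synthetic.

        import numpy as np

        def local_linear_predict(x_train, y_train, x0, bandwidth=1.5):
            # Locally weighted least squares: fit a line around x0, predict at x0
            w = np.exp(-0.5 * ((x_train - x0) / bandwidth) ** 2)
            X = np.column_stack([np.ones_like(x_train), x_train])
            beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y_train))
            return beta[0] + beta[1] * x0

        rng = np.random.default_rng(0)
        speed = rng.uniform(3, 20, 300)                            # forecast wind speed (m/s)
        power = 2000 / (1 + np.exp(-(speed - 11))) + rng.normal(0, 60, 300)   # power (kW)
        for s in [6.0, 11.0, 16.0]:
            print(f"forecast {s:4.1f} m/s -> {local_linear_predict(speed, power, s):7.1f} kW")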

  17. RF tunable devices and subsystems: methods of modeling, analysis, and applications

    CERN Document Server

    Gu, Qizheng

    2015-01-01

    This book serves as a hands-on guide to RF tunable devices, circuits and subsystems. An innovative method of modeling for tunable devices and networks is described, along with a new tuning algorithm, an adaptive matching network control approach, and a novel filter frequency automatic control loop. The author provides readers with the necessary background and methods for designing and developing tunable RF networks/circuits and tunable RF front-ends, with an emphasis on applications to cellular communications.
    · Discusses the methods of characterizing, modeling, analyzing, and applying RF tunable devices and subsystems;
    · Explains the necessary methods of utilizing RF tunable devices and subsystems, rather than discussing the RF tunable devices themselves;
    · Presents and applies methods for MEMS tunable capacitors, which can be used for any RF tunable device;
    · Uses analytic methods wherever possible and provides numerous closed-form solutions;
    · Includ...

  18. IR Laboratory Astrophysics at Forty: Some Highlights and a Look to the Future

    Science.gov (United States)

    Allamandola, Louis John

    2016-06-01

    Space was thought to be chemically barren until about forty years ago. Astrochemistry was in its infancy, the composition of interstellar dust was largely guessed at, the presence of mixed molecular ices in dense molecular clouds was not taken seriously, and the notion of large, gas-phase, carbon-rich molecules (PAHs) abundant and widespread throughout the interstellar medium (ISM) was inconceivable. The rapid development of infrared astronomy between 1970 and 1985, especially observations made by the Kuiper Airborne Observatory (KAO) and the Infrared Astronomical Satellite (IRAS), which made it possible to measure mid-infrared spectra between 2.5 and 14 µm, changed all that. Since then, observations made from ground-based, airborne and orbiting IR telescopes, together with radio and submm observations, have revealed that we live not in a hydrogen-dominated physicist's paradise, but in a molecular Universe with complex molecules directly interwoven into its fabric. Today we recognize that molecules are an abundant and important component of astronomical objects at all stages of their evolution and that they play important roles in many processes that contribute to the structure and evolution of galaxies. Furthermore, many of these organic molecules are thought to be delivered to habitable planets such as Earth, and their composition may be related to the origin of life. Laboratory astrophysics has been key to making this great progress; progress which has only been made possible thanks to the close collaboration of laboratory experimentalists with astronomers and theoreticians. These collaborations are essential to meet the growing interdisciplinary challenges posed by astrophysics. This talk will touch on some of the milestones that have been reached in IR astrospectroscopy over the past four decades, focusing on the experimental work that revealed the widespread presence of interstellar PAHs and the composition of interstellar/precometary ices

  19. How to find home backwards? Navigation during rearward homing of Cataglyphis fortis desert ants.

    Science.gov (United States)

    Pfeffer, Sarah E; Wittlinger, Matthias

    2016-07-15

    Cataglyphis ants are renowned for their impressive navigation skills, which have been studied in numerous experiments during forward locomotion. However, the ants' navigational performance during backward homing, when dragging large food loads, had not been investigated until now. During backward locomotion, the odometer has to deal with unsteady motion and irregularities in inter-leg coordination, and the legs' sensory feedback during backward walking is not just a simple reversal of the forward stepping movements. Compared with forward homing, ants face the opposite direction during backward dragging, so the compass system has to cope with a flipped celestial view (in terms of the polarization pattern and the position of the sun) and an inverted retinotopic image of the visual panorama and landmark environment. The same is true for wind and olfactory cues. In this study we analyze backward-homing ants for the first time and evaluate their navigational performance in channel and open-field experiments. Backward-homing Cataglyphis fortis desert ants show remarkable similarities in homing performance compared with forward-walking ants. Despite the numerous challenges that backward walking poses for the navigational system, the ants performed quite well in our experiments: direction and distance gauging was comparable to that of the forward-walking control groups. Interestingly, we found that backward-homing ants often put down the food item and performed foodless search loops around it. These search loops were mainly centred on the drop-off position (and not on the nest position) and increased in length the closer the ants came to their fictive nest site. © 2016. Published by The Company of Biologists Ltd.

  20. Differential growth forms of the sponge Biemna fortis govern the abundance of its associated brittle star Ophiactis modesta

    Science.gov (United States)

    Dahihande, Azraj S.; Thakur, Narsinh L.

    2017-08-01

    Marine intertidal regions are physically stressful habitats. In such an environment, facilitator species and positive interactions mitigate unfavorable conditions to the benefit of less tolerant organisms. In the sponge-brittle star association, sponges effectively shelter brittle stars from biotic and abiotic stresses. The sponge Biemna fortis (Topsent, 1897) was examined at two intertidal sites, Anjuna and Mhapan, along the Central West Coast of India for its associated brittle star Ophiactis modesta (Brock, 1888) during 2013-2014. The study sites differed in suspended particulate matter (SPM). B. fortis at the high-SPM habitat (Anjuna) had a partially buried growth form, while at the low-SPM habitat (Mhapan) it had a massive growth form. O. modesta was abundantly associated with the massive growth form (50-259 individuals per 500 ml of sponge) but rarely occurred in association with the partially buried growth form (6-16 individuals per 500 ml of sponge). In a laboratory choice assay, O. modesta showed equal preference for chemical cues from both growth forms of B. fortis. In addition, O. modesta showed a significant preference for B. fortis compared with other sympatric sponges. These observations highlight the involvement of chemical cues in host recognition by O. modesta. Massive growth forms transplanted to the high-SPM habitat were unable to survive, but partially buried growth forms transplanted to the low-SPM habitat survived. The differential growth forms of the host sponge B. fortis under different abiotic stresses thus affect the abundance of the associated brittle star O. modesta.

  1. The Z3 model with the density of states method

    CERN Document Server

    Mercado, Ydalia Delgado; Gattringer, Christof

    2014-01-01

    In this contribution we apply a new variant of the density of states method to the Z3 spin model at finite density. We use restricted expectation values evaluated with Monte Carlo simulations and study their dependence on a control parameter lambda. We show that a sequence of one-parameter fits to the Monte Carlo data as a function of lambda is sufficient to completely determine the density of states. We expect that this method has smaller statistical errors than other approaches, since all generated Monte Carlo data are used in the determination of the density. We compare results for the magnetization and susceptibility to a reference simulation in the dual representation of the Z3 spin model and find good agreement for a wide range of parameters.

  2. Modelling of Granular Materials Using the Discrete Element Method

    DEFF Research Database (Denmark)

    Ullidtz, Per

    1997-01-01

    With the Discrete Element Method it is possible to model materials that consist of individual particles, where a particle may roll or slide on other particles. This is interesting because most of the deformation in granular materials is due to rolling or sliding rather than compression...... of the grains. This is true even of the resilient (or reversible) deformations. It is also interesting because the Discrete Element Method models resilient and plastic deformations as well as failure in a single process. The paper describes two types of calculations: one on a small sample of angular elements...... subjected to a pulsating (repeated) biaxial loading, and another on a larger sample of circular elements subjected to a plate load. Both cases are two-dimensional, i.e. plane strain. The repeated biaxial loading showed a large increase in plastic strain for the first load pulse at a given load level...

  3. Models and Methods for Urban Power Distribution Network Planning

    Institute of Scientific and Technical Information of China (English)

    余贻鑫; 王成山; 葛少云; 肖俊; 严雪飞; 黄纯华

    2004-01-01

    The models, methods and application experiences of a practical GIS (geographic information system)-based computer decision-making support system for urban power distribution network planning, termed CNP, with seven subsystems are described. Each subsystem contains at least one practical mathematical method or set of methods, and some new models and mathematical methods have been introduced. In the development of CNP, the idea of cognitive systems engineering has been followed, which holds that human and computer intelligence should be combined to solve complex engineering problems cooperatively. Practical applications have shown that not only can an optimal plan be reached automatically with many complicated factors considered, but the computation, analysis and graphic drawing burden can also be reduced considerably.

  4. Procedures and Methods of Digital Modeling in Representation Didactics

    Science.gov (United States)

    La Mantia, M.

    2011-09-01

    At the Bachelor degree course in Engineering/Architecture of the University "La Sapienza" of Rome, the courses of Design and Survey, in addition to covering methods of representation and the application of descriptive geometry and survey in order to expand the vision and spatial conception of the student, pay particular attention to the use of information technology for the preparation of design and survey drawings, achieving their goals through an educational path of "learning techniques, procedures and methods of modeling architectural structures." The fields of application involve two different educational areas, analysis and survey, spanning from the acquisition of metric data (design or survey) to the development of the three-dimensional virtual model.

  5. HyPEP FY06 Report: Models and Methods

    Energy Technology Data Exchange (ETDEWEB)

    DOE report

    2006-09-01

    The Department of Energy envisions the next-generation very high-temperature gas-cooled reactor (VHTR) as a single-purpose or dual-purpose facility that produces hydrogen and electricity. The Ministry of Science and Technology (MOST) of the Republic of Korea also selected the VHTR for the Nuclear Hydrogen Development and Demonstration (NHDD) Project. This research project aims at developing a user-friendly program for evaluating and optimizing cycle efficiencies of producing hydrogen and electricity in a Very-High-Temperature Reactor (VHTR). Systems for producing electricity and hydrogen are complex, and the calculations associated with optimizing these systems are intensive, involving a large number of operating parameter variations and many different system configurations. This research project will produce the HyPEP computer model, which is specifically designed to be an easy-to-use and fast-running tool for evaluating nuclear hydrogen and electricity production facilities. The model accommodates flexible system layouts, and its cost models will make HyPEP well-suited for system optimization. Specific activities of this research are designed to develop the HyPEP model into a working tool, including (a) identifying major systems and components for modeling, (b) establishing system operating parameters and calculation scope, (c) establishing the overall calculation scheme, (d) developing component models, (e) developing cost and optimization models, and (f) verifying and validating the program. Once the HyPEP model is fully developed and validated, it will be used to execute calculations on candidate system configurations. The FY-06 report includes a description of reference designs, the methods used in this study, and the models and computational strategies developed for the first-year effort. Results from computer codes such as HYSYS and GASS/PASS-H, used by Idaho National Laboratory and Argonne National Laboratory respectively, will be benchmarked with HyPEP results in the

  6. A Method of Upgrading a Hydrostatic Model to a Nonhydrostatic Model

    Directory of Open Access Journals (Sweden)

    Chi-Sann Liou

    2009-01-01

    Full Text Available As the sigma-p coordinate under the hydrostatic approximation can be interpreted as the mass coordinate without the hydrostatic approximation, we propose a method that upgrades a hydrostatic model to a nonhydrostatic model with relatively little effort. The method adds to the primitive equations the extra terms omitted by the hydrostatic approximation and two prognostic equations, for the vertical speed w and the nonhydrostatic part of the pressure p'. With properly formulated governing equations, at each time step the dynamic part of the model is first integrated as in the original hydrostatic model, and then nonhydrostatic contributions are added as corrections to the hydrostatic solutions. In applying physical parameterizations after the dynamic part integration, all physics packages of the original hydrostatic model can be used directly in the nonhydrostatic model, since the upgraded nonhydrostatic model shares the same vertical coordinates with the original hydrostatic model. In this way, the majority of the code of the nonhydrostatic model comes from the original hydrostatic model; extra code is only needed for the calculations additional to the primitive equations. In order to handle sound waves, we use smaller time steps in the nonhydrostatic part of the dynamic time integration, with a split-explicit scheme for horizontal momentum and temperature and a semi-implicit scheme for w and p'. Simulations of 2-dimensional mountain waves and density flows associated with a cold bubble have been used to test the method. The idealized case tests demonstrate that the proposed method realistically simulates the nonhydrostatic effects on different atmospheric circulations that are revealed in theoretical solutions and simulations from other nonhydrostatic models. This method can be used to upgrade any global or mesoscale model from hydrostatic to nonhydrostatic.

  7. Asymptotic-Preserving methods and multiscale models for plasma physics

    CERN Document Server

    Degond, Pierre

    2016-01-01

    The purpose of the present paper is to provide an overview of Asymptotic-Preserving methods for multiscale plasma simulations by addressing three singular perturbation problems. First, the quasi-neutral limit of fluid and kinetic models is investigated in the framework of non-magnetized as well as magnetized plasmas. Second, the drift limit for fluid descriptions of thermal plasmas under large magnetic fields is addressed. Finally, efficient numerical resolution of anisotropic elliptic or diffusion equations arising in magnetized plasma simulation is reviewed.

  8. Unitary transformation method for solving generalized Jaynes-Cummings models

    Indian Academy of Sciences (India)

    Sudha Singh

    2006-03-01

    Two fully quantized generalized Jaynes-Cummings models for the interaction of a two-level atom with radiation field are treated, one involving intensity dependent coupling and the other involving multiphoton interaction between the field and the atom. The unitary transformation method presented here not only solves the time dependent problem but also allows a determination of the eigensolutions of the interacting Hamiltonian at the same time.

  9. Numerical methods used in fusion science numerical modeling

    Science.gov (United States)

    Yagi, M.

    2015-04-01

    The dynamics of burning plasma are very complicated physics, dominated by multi-scale and multi-physics phenomena. To understand such phenomena, numerical simulations are indispensable. Fundamentals of the numerical methods used in fusion science numerical modeling are briefly discussed in this paper. In addition, parallelization techniques such as open multi-processing (OpenMP) and message passing interface (MPI) parallel programming are introduced, and loop-level parallelization is shown as an example.

  10. Chebyshev super spectral viscosity method for a fluidized bed model

    CERN Document Server

    Sarra, S A

    2003-01-01

    A Chebyshev super spectral viscosity method and operator splitting are used to solve a hyperbolic system of conservation laws with a source term modeling a fluidized bed. The fluidized bed displays a slugging behavior which corresponds to shocks in the solution. A modified Gegenbauer postprocessing procedure is used to obtain a solution which is free of oscillations caused by the Gibbs-Wilbraham phenomenon in the spectral viscosity solution. Conservation is maintained by working with unphysical negative particle concentrations.

  11. Quality of Methods Reporting in Animal Models of Colitis

    Science.gov (United States)

    Bramhall, Michael; Flórez-Vargas, Oscar; Stevens, Robert; Brass, Andy

    2015-01-01

    Background: Current understanding of the onset of inflammatory bowel diseases relies heavily on data derived from animal models of colitis. However, the omission of information concerning the method used makes the interpretation of studies difficult or impossible. We assessed the current quality of methods reporting in 4 animal models of colitis that are used to inform clinical research into inflammatory bowel disease: dextran sulfate sodium, interleukin-10−/−, CD45RBhigh T cell transfer, and 2,4,6-trinitrobenzene sulfonic acid (TNBS). Methods: We performed a systematic review based on PRISMA guidelines, using a PubMed search (2000–2014) to obtain publications that used a microarray to describe gene expression in colitic tissue. Methods reporting quality was scored against a checklist of essential and desirable criteria. Results: Fifty-eight articles were identified and included in this review (29 dextran sulfate sodium, 15 interleukin-10−/−, 5 T cell transfer, and 16 TNBS; some articles use more than 1 colitis model). A mean of 81.7% (SD = ±7.038) of criteria were reported across all models. Only 1 of the 58 articles reported all essential criteria on our checklist. Animal age, gender, housing conditions, and mortality/morbidity were all poorly reported. Conclusions: Failure to include all essential criteria is a cause for concern; this failure can have a large impact on the quality and replicability of published colitis experiments. We recommend adoption of our checklist as a requirement for publication to improve the quality, comparability, and standardization of colitis studies; this will make interpretation and translation of data to human disease more reliable. PMID:25989337

  12. A model based security testing method for protocol implementation.

    Science.gov (United States)

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of a protocol implementation.

  13. Regularization method for calibrated POD reduced-order models

    Directory of Open Access Journals (Sweden)

    El Majd Badr Abou

    2014-01-01

    Full Text Available In this work we present a regularization method to improve the accuracy of reduced-order models based on Proper Orthogonal Decomposition. The benchmark configuration retained corresponds to a case of relatively simple dynamics: a two-dimensional flow around a cylinder at a Reynolds number of 200. We show for this flow configuration that the procedure is efficient in terms of error reduction.
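
    For context, a minimal POD sketch (the record does not detail the regularization step itself): the modes are the left singular vectors of the snapshot matrix, and the singular values give the energy captured. The snapshot data are synthetic.

        import numpy as np

        def pod_basis(snapshots, n_modes):
            # snapshots: space x time matrix; a thin SVD gives the POD modes
            U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
            energy = np.cumsum(s**2) / np.sum(s**2)
            return U[:, :n_modes], energy[n_modes - 1]

        x = np.linspace(0, 2 * np.pi, 200)
        t = np.linspace(0, 10, 80)
        snaps = (np.sin(x[:, None] - t[None, :])                 # travelling wave
                 + 0.3 * np.sin(3 * (x[:, None] + t[None, :]))   # weaker counter-wave
                 + 0.01 * np.random.default_rng(0).normal(size=(200, 80)))

        modes, captured = pod_basis(snaps, n_modes=4)
        coeffs = modes.T @ snaps                  # reduced-order coordinates
        print(f"4 modes capture {captured:.1%} of the snapshot energy")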

  14. Models and methods for hot spot safety work

    OpenAIRE

    Vistisen, Dorte; Thyregod, Poul; Laursen, Jan Grubb

    2002-01-01

    Despite the fact that millions of DKK are spent each year on improving road safety in Denmark, funds for traffic safety are limited. It is therefore vital to spend the resources as effectively as possible. This thesis is concerned with the area of traffic safety denoted "hot spot safety work", which is the task of improving road safety through alterations of the geometrical and environmental characteristics of the existing road network. The presently applied models and methods in hot spot safety ...

  15. A Model Based Security Testing Method for Protocol Implementation

    Directory of Open Access Journals (Sweden)

    Yu Long Fu

    2014-01-01

    Full Text Available The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of a protocol implementation.

  16. A Two-stage Polynomial Method for Spectrum Emissivity Modeling

    OpenAIRE

    Qiu, Qirong; Liu, Shi; Teng, Jing; Yan, Yong

    2015-01-01

    Spectral emissivity is key to temperature measurement by radiation methods, but it is not easy to determine in a combustion environment, due to the interrelated influence of the temperature and wavelength of the radiation. In multi-wavelength radiation thermometry, knowing the spectral emissivity of the material is a prerequisite. However, in many circumstances such a property is a complex function of temperature and wavelength, and reliable models are yet to be sought. In this study, a two-stage...

  17. The Quadrotor Dynamic Modeling and Indoor Target Tracking Control Method

    Directory of Open Access Journals (Sweden)

    Dewei Zhang

    2014-01-01

    Full Text Available A reliable nonlinear dynamic model of the quadrotor is presented. The nonlinear dynamic model includes the actuator dynamics and aerodynamic effects. Since the rotors run near a constant hovering speed, the dynamic model is simplified at the hovering operating point. Based on the simplified nonlinear dynamic model, PID controllers with feedback linearization and feedforward control are proposed using the backstepping method. These controllers are used to control both the attitude and position of the quadrotor. A fully custom quadrotor was developed to verify the correctness of the dynamic model and control algorithms. The attitude of the quadrotor is measured by an inertial measurement unit (IMU). The position of the quadrotor in a GPS-denied environment, especially an indoor environment, is estimated from downward camera and ultrasonic sensor measurements. The validity and effectiveness of the proposed dynamic model and control algorithms are demonstrated by experimental results. It is shown that the vehicle achieves robust vision-based hovering and moving target tracking control.
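
    A toy version of a single inner loop (the paper's feedback linearization and backstepping design is not reproduced here): a PID controller on a roll axis linearised about hover. The inertia, gains, and time step are hypothetical.

        import numpy as np

        I, dt = 5e-3, 0.002                  # roll inertia (kg m^2), time step (s)
        kp, ki, kd = 0.8, 0.2, 0.12          # illustrative PID gains

        phi, phi_dot, integral = 0.0, 0.0, 0.0
        target = np.deg2rad(10.0)            # commanded roll angle
        for _ in range(2000):                # 4 s of simulated flight
            err = target - phi
            integral += err * dt
            torque = kp * err + ki * integral - kd * phi_dot   # derivative on measurement
            phi_dot += (torque / I) * dt     # I * phi_ddot = torque near hover
            phi += phi_dot * dt
        print(f"roll after 4 s: {np.rad2deg(phi):.2f} deg")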

  18. Modeling Soil Water Retention Curve with a Fractal Method

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Many empirical models have been developed to describe the soil water retention curve (SWRC). In this study, a fractal model for the SWRC was derived with a specially constructed Menger sponge to describe the fractal scaling behavior of soil; relationships were established among the fractal dimension of the SWRC, the fractal dimension of the soil mass, and soil texture; and the model was used to estimate the SWRC, with the estimated results compared to experimental data for verification. The derived fractal model has a power-law form, similar to the Brooks-Corey and Campbell empirical functions. Experimental data on particle size distribution (PSD), texture, and soil water retention for 10 soils collected at different places in China were used to estimate the fractal dimension of the SWRC and the mass fractal dimension. The fractal dimension of the SWRC and the mass fractal dimension were linearly related. Both fractal dimensions were also dependent on soil texture, i.e., clay and sand contents, and expressions were proposed to quantify these relationships. Based on the relationships, four methods were used to determine the fractal dimension of the SWRC, and the model was applied to estimate soil water content over a wide range of tension values. The estimated results compared well with the measured data, with relative errors of less than 10% for over 60% of the measurements. Thus, this model, estimating the fractal dimension from soil textural data, offers an alternative for predicting the SWRC.
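
    A sketch of one estimation route: with an assumed air-entry value, the fractal dimension D follows from a log-log linear fit of the power-law retention form, and the fitted curve can then be checked against the measurements. The data below are invented for illustration.

        import numpy as np

        h = np.array([2.0, 5.0, 10.0, 33.0, 100.0, 500.0, 1500.0])     # tension (kPa)
        theta = np.array([0.42, 0.36, 0.31, 0.24, 0.18, 0.12, 0.09])   # water content

        # power-law (fractal / Brooks-Corey-like) form:
        #   theta = theta_s * (h / h_a)^(D - 3)  for h > h_a
        h_a, theta_s = 2.0, 0.42                  # assumed air-entry value and saturation
        slope, _ = np.polyfit(np.log(h / h_a), np.log(theta / theta_s), 1)
        D = slope + 3.0
        print(f"fitted fractal dimension D = {D:.3f}")

        theta_hat = theta_s * (h / h_a) ** (D - 3.0)
        print("relative errors (%):", np.round(np.abs(theta_hat - theta) / theta * 100, 1))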

  19. Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model

    Science.gov (United States)

    Zhu, Ningning; Jia, Yonghong; Luo, Lun

    2016-06-01

    The large number of bolts and screws attached to the subway shield ring plates, along with the many metal supports and electrical equipment accessories mounted on the tunnel walls, cause the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a searching algorithm is given to extract the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally, and then fitted by iteration as a smooth elliptic cylindrical surface. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the method based on the elliptic cylindrical model can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of the all-around deformation of tunnel sections in routine subway operation and maintenance.
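
    A simplified stand-in for the section-fitting step (the paper fits an elliptic cylinder iteratively; here a single 2-D section is fitted algebraically): a least-squares conic fit followed by a residual cut that discards non-points. The geometry and tolerances are invented.

        import numpy as np

        def fit_conic(x, y):
            # least-squares fit of A x^2 + B xy + C y^2 + D x + E y = 1
            M = np.column_stack([x**2, x * y, y**2, x, y])
            coef, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
            return coef

        rng = np.random.default_rng(0)
        t = rng.uniform(0, 2 * np.pi, 500)                   # tunnel wall points
        x = 2.7 * np.cos(t) + rng.normal(0, 0.005, t.size)   # semi-axes 2.7 m / 2.6 m
        y = 2.6 * np.sin(t) + rng.normal(0, 0.005, t.size)
        x = np.concatenate([x, 2.2 + rng.normal(0, 0.02, 10)])   # a few "bolt" points
        y = np.concatenate([y, 0.4 + rng.normal(0, 0.02, 10)])

        coef = fit_conic(x, y)
        res = np.abs(np.column_stack([x**2, x * y, y**2, x, y]) @ coef - 1)
        keep = res < 3 * res.std()                           # simple residual cut
        print(f"kept {keep.sum()} of {x.size} points")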

  20. Sparse aerosol models beyond the quadrature method of moments

    Science.gov (United States)

    McGraw, Robert

    2013-05-01

    This study examines a class of sparse aerosol models (SAMs) derived from linear programming (LP). The widely used quadrature method of moments (QMOM) is shown to fall into this class. Here it is shown how other sparse aerosol models can be constructed that are not based on moments of the particle size distribution. The new methods enable one to bound atmospheric aerosol physical and optical properties using arbitrary combinations of model parameters and measurements. Rigorous upper and lower bounds, e.g. on the number of aerosol particles that can activate to form cloud droplets, can be obtained this way from measurement constraints that may include total particle number concentration and size distribution moments. The new LP-based methods allow a much wider range of aerosol properties, such as the light backscatter or extinction coefficient, which are not easily connected to particle size moments, to also be assimilated into a list of constraints. Finally, it is shown that many of these more general aerosol properties can be tracked directly in an aerosol dynamics simulation, using SAMs, in much the same way that moments are tracked directly in the QMOM.
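
    A minimal sketch of the LP bounding idea: on a discretized size grid, minimize and maximize a linear property (here the number of particles above an activation size) subject to measured moment constraints. All numbers are hypothetical.

        import numpy as np
        from scipy.optimize import linprog

        d = np.geomspace(0.01, 1.0, 60)          # size grid (micrometres)
        active = (d > 0.08).astype(float)        # property: number above activation size

        moments = np.vstack([np.ones_like(d), d, d**2])   # constraints: N, M1, M2
        b = np.array([1000.0, 120.0, 25.0])               # "measured" moment values

        lower = linprog(c=active,  A_eq=moments, b_eq=b, bounds=(0, None))
        upper = linprog(c=-active, A_eq=moments, b_eq=b, bounds=(0, None))
        print(f"activated number bounded by [{lower.fun:.1f}, {-upper.fun:.1f}]")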

  1. Arima model and exponential smoothing method: A comparison

    Science.gov (United States)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study shows a comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data on The Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and The Price of SMR 20 Rubber Type (cents/kg), three different time series, are used in the comparison process. The forecasting accuracy of each model is then measured by examining the prediction error, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model produces a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to the next, as in the Exchange Rates series. Conversely, the Exponential Smoothing Method produces better forecasts for the Exchange Rates, which have a narrow range from one point to the next, while it cannot produce a better prediction for a longer forecasting period.
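
    A compact version of such a comparison on a synthetic series, using statsmodels (the study's data and model orders are not reproduced): fit ARIMA and simple exponential smoothing on a training window and score the forecasts with MSE, MAPE, and MAD.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from statsmodels.tsa.holtwinters import SimpleExpSmoothing

        rng = np.random.default_rng(0)
        y = 100 + np.cumsum(rng.normal(0.1, 1.0, 200))   # random-walk-like price series
        train, test = y[:180], y[180:]

        arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(len(test))
        ses_fc = SimpleExpSmoothing(train).fit().forecast(len(test))

        def scores(fc):
            e = test - fc
            return (np.mean(e**2),                     # MSE
                    np.mean(np.abs(e / test)) * 100,   # MAPE (%)
                    np.mean(np.abs(e)))                # MAD
        for name, fc in [("ARIMA(1,1,1)", arima_fc), ("SES", ses_fc)]:
            print(name, ["%.3f" % v for v in scores(fc)])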

  2. A New Method of Comparing Forcing Agents in Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Kravitz, Benjamin S.; MacMartin, Douglas; Rasch, Philip J.; Jarvis, Andrew

    2015-10-14

    We describe a new method of comparing different climate forcing agents (e.g., CO2, CH4, and solar irradiance) that avoids many of the ambiguities introduced by temperature-related climate feedbacks. This is achieved by introducing an explicit feedback loop external to the climate model that adjusts one forcing agent to balance another while keeping global mean surface temperature constant. Compared to current approaches, this method has two main advantages: (i) the need to define radiative forcing is bypassed and (ii) by maintaining roughly constant global mean temperature, the effects of state dependence on internal feedback strengths are minimized. We demonstrate this approach for several different forcing agents and derive the relationships between these forcing agents in two climate models; comparisons between forcing agents are highly linear in concordance with predicted functional forms. Transitivity of the relationships between the forcing agents appears to hold within a wide range of forcing. The relationships between the forcing agents obtained from this method are consistent across both models but differ from relationships that would be obtained from calculations of radiative forcing, highlighting the importance of controlling for surface temperature feedback effects when separating radiative forcing and climate response.
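
    A toy version of the feedback-loop idea on a zero-dimensional energy balance model: an integral controller adjusts a solar forcing term until it balances an imposed CO2-like forcing at constant temperature, and the converged ratio of the two forcings is the quantity of interest. All parameters are illustrative.

        import numpy as np

        C, lam, dt = 8.0, 1.2, 0.1     # heat capacity (W yr m-2 K-1), feedback (W m-2 K-1)
        F_co2, ki = 3.7, 0.5           # imposed forcing (W m-2), integral controller gain

        T, F_solar = 0.0, 0.0
        for _ in range(5000):          # C dT/dt = F_co2 + F_solar - lam * T
            T += dt / C * (F_co2 + F_solar - lam * T)
            F_solar -= ki * T * dt     # external loop drives T back toward zero
        print(f"residual warming {T:.4f} K, balancing solar forcing {F_solar:.3f} W/m^2")
        print(f"forcing ratio F_solar/F_co2 = {F_solar / F_co2:.3f}")

    In this toy model the two forcings enter the equation identically, so the ratio converges to -1; in a full climate model, differing efficacies make the converged ratio informative.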

  3. TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL

    Directory of Open Access Journals (Sweden)

    N. Zhu

    2016-06-01

    Full Text Available The large number of bolts and screws attached to the subway shield ring plates, along with the many metal supports and electrical equipment accessories mounted on the tunnel walls, cause the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a searching algorithm is given to extract the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally, and then fitted by iteration as a smooth elliptic cylindrical surface. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the method based on the elliptic cylindrical model can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of the all-around deformation of tunnel sections in routine subway operation and maintenance.

  4. Thermal Modeling Method Improvements for SAGE III on ISS

    Science.gov (United States)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; McLeod, Shawn

    2015-01-01

    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle. A detailed thermal model of the SAGE III payload, which consists of multiple subsystems, has been developed in Thermal Desktop (TD). Many innovative analysis methods have been used in developing this model; these are described in the paper. This paper builds on a paper presented at TFAWS 2013, which described some of the initial development of efficient methods for SAGE III; the current paper describes additional improvements made since that time. To expedite the correlation of the model to thermal vacuum (TVAC) testing, the chambers and GSE for both TVAC chambers at Langley used to test the payload were incorporated within the thermal model. This allowed the TVAC predictions and correlations to be run within the flight model, eliminating the need for separate TVAC models. In one TVAC test, radiant lamps were used, which necessitated shooting rays from the lamps and running in both solar and IR wavebands. A new Dragon model was incorporated, which entailed a change in orientation; that change was made using an assembly, so that any potential new Dragon orbits could be added in the future without modification of the model. The Earth orbit parameters such as albedo and Earth infrared flux were incorporated as time-varying values that change over the course of the orbit; despite being required in one of the ISS documents, this had not been done before by any previous payload. All parameters such as initial temperature, heater voltage, and location of the payload are defined based on the case definition. For one component, testing was performed in both air and vacuum; incorporating the air convection in a submodel that was

  5. Modeling Barrier Tissues In Vitro: Methods, Achievements, and Challenges

    Directory of Open Access Journals (Sweden)

    Courtney M. Sakolish

    2016-03-01

    Full Text Available Organ-on-a-chip devices have gained attention in the field of in vitro modeling due to their superior ability to recapitulate tissue environments compared to traditional multiwell methods. These constructed growth environments support tissue differentiation and mimic tissue–tissue, tissue–liquid, and tissue–air interfaces in a variety of conditions. By closely simulating the in vivo biochemical and biomechanical environment, it is possible to study human physiology in an organ-specific context and create more accurate models of healthy and diseased tissues, allowing for observations of disease progression and treatment. These chip devices have the ability to help direct, and perhaps in the distant future even replace, animal-based drug efficacy and toxicity studies, which have questionable relevance to human physiology. Here, we review recent developments in the in vitro modeling of barrier tissue interfaces with a focus on the use of novel and complex microfluidic device platforms.

  6. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  7. Methods to model-check parallel systems software.

    Energy Technology Data Exchange (ETDEWEB)

    Matlin, O. S.; McCune, W.; Lusk, E.

    2003-12-15

    We report on an effort to develop methodologies for formal verification of parts of the Multi-Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of communicating processes. While the individual components of the collection execute simple algorithms, their interaction leads to unexpected errors that are difficult to uncover by conventional means. Two verification approaches are discussed here: the standard model checking approach using the software model checker SPIN and the nonstandard use of a general-purpose first-order resolution-style theorem prover OTTER to conduct the traditional state space exploration. We compare modeling methodology and analyze performance and scalability of the two methods with respect to verification of MPD.

  8. A Method for Modeling of Floating Vertical Axis Wind Turbine

    DEFF Research Database (Denmark)

    Wang, Kai; Hansen, Martin Otto Laver; Moan, Torgeir

    2013-01-01

    It is of interest to investigate the potential advantages of a floating vertical axis wind turbine (FVAWT) due to its economical installation and maintenance. A novel 5 MW vertical axis wind turbine concept with a Darrieus rotor mounted on a semi-submersible support structure is proposed in this paper. In order to assess the technical and economic feasibility of this novel concept, a comprehensive simulation tool for modeling of the floating vertical axis wind turbine is needed. This work presents the development of a coupled method for modeling of the dynamics of a floating vertical axis wind turbine. This integrated dynamic model takes into account the wind inflow, aerodynamics, hydrodynamics, structural dynamics (wind turbine, floating platform and the mooring lines) and a generator control. This approach calculates dynamic equilibrium at each time step and takes account of the interaction between the rotor …

  9. Adjoint Methods for Guiding Adaptive Mesh Refinement in Tsunami Modeling

    Science.gov (United States)

    Davis, B. N.; LeVeque, R. J.

    2016-12-01

    One difficulty in developing numerical methods for tsunami modeling is the fact that solutions contain time-varying regions where much higher resolution is required than elsewhere in the domain, particularly when tracking a tsunami propagating across the ocean. The open source GeoClaw software deals with this issue by using block-structured adaptive mesh refinement to selectively refine around propagating waves. For problems where only a target area of the total solution is of interest (e.g., one coastal community), a method that allows identifying and refining the grid only in regions that influence this target area would significantly reduce the computational cost of finding a solution. In this work, we show that solving the time-dependent adjoint equation and using a suitable inner product with the forward solution allows more precise refinement of the relevant waves. We present the adjoint methodology first in one space dimension for illustration and in a broad context since it could also be used in other adaptive software, and potentially for other tsunami applications beyond adaptive refinement. We then show how this adjoint method has been integrated into the adaptive mesh refinement strategy of the open source GeoClaw software and present tsunami modeling results showing that the accuracy of the solution is maintained and the computational time required is significantly reduced through the integration of the adjoint method into adaptive mesh refinement.
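    A schematic of the flagging criterion in Python, for illustration only; the dense array layout and the threshold are assumptions, and the actual GeoClaw implementation works on block-structured patch hierarchies:

```python
import numpy as np

def refinement_flags(q, qhat, tol):
    """q:    forward solution, shape (nt, nx)
    qhat: adjoint solution integrated backward from the target area,
          stored so that qhat[k] corresponds to time T - t_k.
    A cell is flagged when the inner product of the forward solution
    with the time-matched adjoint exceeds tol: only waves that will
    influence the target area trigger refinement."""
    return np.abs(q * qhat[::-1]) > tol      # boolean flags, shape (nt, nx)
```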

  10. A novel duplicate images detection method based on PLSA model

    Science.gov (United States)

    Liao, Xiaofeng; Wang, Yongji; Ding, Liping; Gu, Jian

    2012-01-01

    Web image search results usually contain duplicate copies. This paper considers the problem of detecting and clustering duplicate images contained in web image search results. Detecting and clustering the duplicate images together facilitates users' viewing. A novel method is presented in this paper to detect and cluster duplicate images by measuring the similarity between their topics. More specifically, images are viewed as documents consisting of visual words formed by vector-quantizing the affine-invariant visual features. Then a statistical model widely used in the text domain, the PLSA (Probabilistic Latent Semantic Analysis) model, is utilized to map images into a probabilistic latent semantic space. Because the main content remains unchanged despite small digital alterations, duplicate images will be close to each other in the derived semantic space. Based on this, a simple clustering process can successfully detect duplicate images and cluster them together. Compared with methods based on comparing hash values of visual words, this method is more robust to visual-feature-level alterations of the images. Experiments demonstrate the effectiveness of the method.
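    A rough sketch of the pipeline, assuming local descriptors have already been extracted per image; scikit-learn's NMF with Kullback-Leibler loss is used here as a PLSA-like factorization (closely related to, but not identical with, the paper's PLSA):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

def topic_vectors(descriptor_sets, n_words=200, n_topics=20):
    """descriptor_sets: list of (n_i, d) arrays of local image features."""
    vocab = KMeans(n_clusters=n_words, n_init=4).fit(np.vstack(descriptor_sets))
    counts = np.zeros((len(descriptor_sets), n_words))
    for i, d in enumerate(descriptor_sets):
        words, freq = np.unique(vocab.predict(d), return_counts=True)
        counts[i, words] = freq              # bag-of-visual-words histogram
    # NMF with KL loss plays the role of PLSA here (a known equivalence
    # up to normalization, used as a convenient stand-in)
    nmf = NMF(n_components=n_topics, beta_loss='kullback-leibler',
              solver='mu', max_iter=500)
    return nmf.fit_transform(counts)         # image-by-topic matrix

def duplicate_groups(topics, thresh=0.95):
    """Images whose topic vectors are nearly parallel are duplicates."""
    sim = cosine_similarity(topics)
    return [np.flatnonzero(row > thresh) for row in sim]
```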

  11. Polarized Molecular Orbital Model Chemistry. II. The PMO Method.

    Science.gov (United States)

    Zhang, Peng; Fiedler, Luke; Leverentz, Hannah R; Truhlar, Donald G; Gao, Jiali

    2011-04-12

    We present a new semiempirical molecular orbital method based on neglect of diatomic differential overlap. This method differs from previous NDDO-based methods in that we include p orbitals on hydrogen atoms to provide a more realistic modeling of polarizability. As in AM1-D and PM3-D, we also include damped dispersion. The formalism is based on the original MNDO one, but in the process of parametrization we make specific changes to some of the functional forms. The present article demonstrates the capability of the new approach and presents a successful parametrization for compounds composed only of hydrogen and oxygen atoms, including the important case of water clusters.

  12. An updating method for structural dynamics models with unknown excitations

    Energy Technology Data Exchange (ETDEWEB)

    Louf, F; Charbonnel, P E; Ladeveze, P [LMT-Cachan (ENS Cachan/CNRS/Paris 6 University) 61, avenue du Président Wilson, F-94235 Cachan Cedex (France); Gratien, C [Astrium (EADS space transportation) - Service TE 343 66, Route de Verneuil, 78133 Les Mureaux Cedex (France)], E-mail: charbonnel@lmt.ens-cachan.fr, E-mail: ladeveze@lmt.ens-cachan.fr, E-mail: louf@lmt.ens-cachan.fr, E-mail: christian.gratien@astrium.eads.net

    2008-11-01

    This paper presents an extension of the Constitutive Relation Error (CRE) updating method to complex industrial structures, such as space launchers, for which tests carried out in the functional context can provide significant amounts of information. Indeed, since several sources of excitation are involved simultaneously, a flight test can be viewed as a multiple test. However, there is a serious difficulty in that these sources of excitation are partially unknown. The CRE updating method enables one to obtain an estimate of these excitations. We present a first application of the method using a very simple finite element model of the Ariane V launcher along with measurements performed at the end of an atmospheric flight.

  13. Inertisation options for BG method and optimisation using CFD modelling

    Institute of Scientific and Technical Information of China (English)

    Morla Ramakrishna; Balusu Rao; Tanguturi Krishna; Ting Ren

    2015-01-01

    Spontaneous combustion (sponcom) is one of the issues of concern with the blasting gallery (BG) method of coal mining: it has the potential to cause fires, affecting production and safety, increasing greenhouse gas (GHG) emissions, and incurring huge costs to control the aftermath. Some of the research attempts made to prevent and control coal mine fires and spontaneous combustion in thick seams worked with bord-and-pillar mining methods are presented in this paper. In the study, computational fluid dynamics (CFD) modelling techniques were used to simulate and assess the effects of various mining methods, layouts, designs, and different operational and ventilation parameters on the flow of goaf gases in BG panels. A wide range of parametric studies was conducted to develop proactive strategies to control and prevent the ingress of oxygen into the goaf area, thereby preventing spontaneous combustion and mine fires.

  14. Intersecting dilated convex polyhedra method for modeling complex particles in discrete element method

    Science.gov (United States)

    Nye, Ben; Kulchitsky, Anton V; Johnson, Jerome B

    2014-01-01

    This paper describes a new method for representing concave polyhedral particles in a discrete element method as unions of convex dilated polyhedra. This method offers an efficient way to simulate systems with a large number of (generally concave) polyhedral particles. The method also allows spheres, capsules, and dilated triangles to be combined with polyhedra using the same approach. The computational efficiency of the method is tested in two different simulation setups using different efficiency metrics for seven particle types: spheres, clusters of three spheres, clusters of four spheres, tetrahedra, cubes, unions of two octahedra (concave), and a model of a computer tomography scan of a lunar simulant GRC-3 particle. It is shown that the computational efficiency of the simulations degrades much more slowly than the complexity of the particles in the system increases. The efficiency of the method rests on the time coherence of the system and on an efficient and robust method for computing distances between polyhedra, since the core polyhedra of dilated particles never intersect.
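    A minimal illustration of why dilated convex shapes are convenient: the narrow-phase contact test reduces to a distance computation between the convex cores. The small quadratic program below is a generic formulation, not the paper's optimized algorithm:

```python
import numpy as np
from scipy.optimize import minimize

def convex_distance(V, W):
    """Distance between the convex hulls of vertex sets V (n,3) and W (m,3),
    found as a small QP over convex combinations of the vertices."""
    n, m = len(V), len(W)
    x0 = np.concatenate([np.full(n, 1.0 / n), np.full(m, 1.0 / m)])
    gap = lambda x: np.sum((x[:n] @ V - x[n:] @ W)**2)
    cons = [{'type': 'eq', 'fun': lambda x: x[:n].sum() - 1.0},
            {'type': 'eq', 'fun': lambda x: x[n:].sum() - 1.0}]
    res = minimize(gap, x0, bounds=[(0.0, 1.0)] * (n + m), constraints=cons)
    return np.sqrt(res.fun)

def dilated_overlap(V, W, r1, r2):
    """A dilated polyhedron is the Minkowski sum of a core polyhedron and
    a sphere, so contact reduces to: core distance <= sum of radii."""
    d = convex_distance(V, W)
    return d <= r1 + r2, d - (r1 + r2)
```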

  15. IMPROVED NUMERICAL METHODS FOR MODELING RIVER-AQUIFER INTERACTION.

    Energy Technology Data Exchange (ETDEWEB)

    Tidwell, Vincent Carroll; Sue Tillery; Phillip King

    2008-09-01

    A new option for Local Time-Stepping (LTS) was developed for use in conjunction with the multiple-refined-area grid capability of the U.S. Geological Survey's (USGS) groundwater modeling program, MODFLOW-LGR (MF-LGR). The LTS option allows each local, refined-area grid to simulate multiple stress periods within each stress period of a coarser, regional grid. This option is an alternative to the current method of MF-LGR, whereby the refined grids are required to have the same stress period and time-step structure as the coarse grid. The MF-LGR method for simulating multiple refined grids essentially defines each grid as a complete model, then, for each coarse grid time-step, iteratively runs each model until the head and flux changes at the interfacing boundaries of the models are less than some specified tolerances. Use of the LTS option is illustrated in two hypothetical test cases, consisting of a dual well pumping system and a hydraulically connected stream-aquifer system, and one field application. Each of the hypothetical test cases was simulated with multiple scenarios, including an LTS scenario, which combined a monthly stress period for a coarse grid model with a daily stress period for a refined grid model. The other scenarios simulated various combinations of grid spacing and temporal refinement using standard MODFLOW model constructs. The field application simulated an irrigated corridor along the Lower Rio Grande River in New Mexico, with refinement of a small agricultural area in the irrigated corridor. The results from the LTS scenarios for the hypothetical test cases closely replicated the results from the true scenarios in the refined areas of interest. The head errors of the LTS scenarios were much smaller than those from the other scenarios in relation to the true solution, and the run times for the LTS models were three to six times faster than the true models for the dual well and stream-aquifer test cases, respectively. The results of the field
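    A toy 1-D diffusion analogue of the LTS idea, where a refined window subcycles within each coarse step using time-interpolated boundary values; grid sizes and coefficients are illustrative choices, not MODFLOW-LGR's:

```python
import numpy as np

def lts_demo(nx=101, nt=20, sub=10, alpha=0.2):
    """A coarse 1-D diffusion model takes one large step per stress period;
    a refined window over the middle third takes `sub` small steps, with
    its boundary values interpolated in time from the coarse solution."""
    h = np.linspace(1.0, 0.0, nx)            # initial head profile
    lo, hi = nx // 3, 2 * nx // 3            # extent of the refined window
    fine = h[lo:hi + 1].copy()
    for _ in range(nt):
        h_old = h.copy()
        h[1:-1] += alpha * (h_old[2:] - 2 * h_old[1:-1] + h_old[:-2])
        for j in range(sub):                 # local time stepping
            w = (j + 1) / sub
            fine[0] = (1 - w) * h_old[lo] + w * h[lo]
            fine[-1] = (1 - w) * h_old[hi] + w * h[hi]
            fine[1:-1] += (alpha / sub) * (fine[2:] - 2 * fine[1:-1]
                                           + fine[:-2])
    return h, fine
```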

  16. Epidemiologic trends in chronic renal replacement therapy over forty years: A Swiss dialysis experience

    Directory of Open Access Journals (Sweden)

    Lehmann Petra

    2012-07-01

    Full Text Available Abstract Background: Long-term longitudinal data on epidemiological characteristics and patient outcomes in patients on maintenance dialysis are scarce, especially in Switzerland. We examined changes in the epidemiology of patients undergoing renal replacement therapy by either hemodialysis or peritoneal dialysis over four decades. Methods: Single-center retrospective study including all patients who initiated dialysis treatment for ESRD between 1970 and 2008. Analyses were performed for subgroups according to dialysis vintage, based on stratification into quartiles of the date of first treatment. A multivariate model predicting death and survival time, using time-dependent Cox regression, was developed. Results: 964 patients were investigated. Incident mean age progressively increased from 48 ± 14 to 64 ± 15 years from the 1st to the 4th quartile (p … Discussion: We document an increase of a predominantly elderly incident and prevalent dialysis population, with progressively shortened survival after initiation of renal replacement over four decades, and, nevertheless, a prolonged lifespan. Analysis of the data is limited by the lack of information on comorbidity in the study population. Conclusions: Survival in patients on renal replacement therapy seems to be affected not only by medical and technical advances in dialysis therapy, but may mostly reflect progressively lower mortality of individuals with cardiovascular and metabolic complications, as well as a policy of accepting older and polymorbid patients for dialysis in more recent times. This is relevant for making demographic predictions in the face of the ESRD epidemic that nephrologists and policy makers are facing in industrialized countries.

  17. Forty Years of Forensic Interviewing of Children Suspected of Sexual Abuse, 1974–2014: Historical Benchmarks

    Directory of Open Access Journals (Sweden)

    Kathleen Coulborn Faller

    2014-12-01

    Full Text Available This article describes the evolution of forensic interviewing as a method to determine whether or not a child has been sexually abused, focusing primarily on the United States. It notes that forensic interviewing practices are challenged to successfully identify children who have been sexually abused and to successfully exclude children who have not. It describes models for child sexual abuse investigation, early writings and practices related to child interviews, and the development of forensic interview structures from scripted, to semi-structured, to flexible. The article discusses the controversies related to appropriate questions and the use of media (e.g., anatomical dolls and drawings). It summarizes the characteristics of four important interview structures and describes their impact on the field of forensic interviewing. The article describes forensic interview training and the challenge of implementing training in forensic practice. It concludes with a summary of progress and remaining controversies and with future challenges for the field of forensic interviewing.

  18. Modeling of Unsteady Flow through the Canals by Semiexact Method

    Directory of Open Access Journals (Sweden)

    Farshad Ehsani

    2014-01-01

    Full Text Available The study of free-surface and pressurized water flows in channels has many interesting applications, one of the most important being the modeling of phenomena in natural water systems (rivers, estuaries) as well as in man-made systems (canals, pipes). For the development of major river engineering projects, such as flood prevention and flood control, there is an essential need for an instrument able to model and predict the consequences of any possible phenomenon on the environment, and in particular the new hydraulic characteristics of the system. The basic equations expressing hydraulic principles were formulated in the 19th century by Barré de Saint-Venant and Valentin Joseph Boussinesq. The original hydraulic model of the Saint-Venant equations is written as a system of two partial differential equations and is derived under the assumptions that the flow is one-dimensional, the cross-sectional velocity is uniform, the streamline curvature is small and the pressure distribution is hydrostatic. The Saint-Venant equations must be solved together with the continuity equation. To date, no general analytical solution of the Saint-Venant equations has been presented. In this paper the Saint-Venant equations and the continuity equation are solved with the homotopy perturbation method (HPM) and compared with an explicit forward finite difference method (FDM). To reduce the discrepancy between HPM and FDM, the Saint-Venant equations and the continuity equation are also solved by the homotopy analysis method (HAM). The HAM contains the auxiliary parameter ℏ, which allows us to adjust and control the convergence region of the solution series. The study highlights the efficiency and capability of HAM in solving the Saint-Venant equations and modeling unsteady flow through a rectangular canal, which is the goal of this paper, as well as other kinds of canals.
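    For reference, a standard form of the one-dimensional Saint-Venant system (textbook notation, not reproduced from the paper; A is the wetted cross-section, Q the discharge, h the depth, g gravity, S_0 the bed slope and S_f the friction slope):

```latex
\begin{aligned}
&\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0
&&\text{(mass / continuity)}\\[2pt]
&\frac{\partial Q}{\partial t}
 + \frac{\partial}{\partial x}\!\left(\frac{Q^{2}}{A}\right)
 + gA\,\frac{\partial h}{\partial x} = gA\left(S_{0}-S_{f}\right)
&&\text{(momentum)}
\end{aligned}
```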

  19. Towards methodical modelling: Differences between the structure and output dynamics of multiple conceptual models

    Science.gov (United States)

    Knoben, Wouter; Woods, Ross; Freer, Jim

    2016-04-01

    Conceptual hydrologic models consist of a particular arrangement of stores, fluxes and transformation functions intended to capture a catchment's spatial and temporal dynamics, depending on the modeller's choices and intended use. They have the advantages of being computationally efficient, of being relatively easy model structures to reconfigure and of having relatively low input data demands. This makes them well suited for large-scale and large-sample hydrology, where appropriately representing the dominant hydrologic functions of a catchment is a main concern. Given these requirements, the number of parameters in the model cannot be too high, to avoid equifinality and identifiability issues. This limits the number and level of complexity of the dominant hydrologic processes the model can represent. Specific purposes and places thus require a specific model, and this has led to an abundance of conceptual hydrologic models. No structured overview of these models exists and there is no clear method for selecting appropriate model structures for different catchments. This study is a first step towards creating an overview of the elements that make up conceptual models, which may later assist a modeller in finding an appropriate model structure for a given catchment. To this end, this study brings together over 30 past and present conceptual models. The reviewed model structures are simply different configurations of three basic model elements (stores, fluxes and transformation functions), depending on the hydrologic processes the models are intended to represent. Differences also exist in the inner workings of the stores, fluxes and transformations, i.e. the mathematical formulations that describe each model element's intended behaviour. We investigate the hypothesis that different model structures can produce similar behavioural simulations. This can clarify the overview of model elements by grouping elements which are similar, which can improve model structure selection.
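    As an illustration of the three basic elements, a minimal one-store conceptual model; the storage capacity and recession constant are arbitrary example values, not taken from any of the reviewed models:

```python
import numpy as np

def single_bucket(precip, pet, smax=200.0, k=0.05, dt=1.0):
    """One store (s), three fluxes (rain in, ET out, discharge out) and
    two transformation functions (moisture-limited ET, linear reservoir)."""
    s, q = 0.0, []
    for p, e in zip(precip, pet):
        et = e * min(1.0, s / smax)    # transformation: ET limited by storage
        qout = k * s                   # transformation: linear store-discharge
        s = max(0.0, s + (p - et - qout) * dt)
        q.append(qout)
    return np.array(q)
```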

  20. Variational methods to estimate terrestrial ecosystem model parameters

    Science.gov (United States)

    Delahaies, Sylvain; Roulstone, Ian

    2016-04-01

    Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Soil chemistry and a non-negligible amount of time then transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVAR) for estimating model parameters and initial carbon stocks for DALEC and for quantifying the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.

  1. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  2. A decomposition method based on a model of continuous change.

    Science.gov (United States)

    Horiuchi, Shiro; Wilmoth, John R; Pletcher, Scott D

    2008-11-01

    A demographic measure is often expressed as a deterministic or stochastic function of multiple variables (covariates), and a general problem (the decomposition problem) is to assess contributions of individual covariates to a difference in the demographic measure (dependent variable) between two populations. We propose a method of decomposition analysis based on an assumption that covariates change continuously along an actual or hypothetical dimension. This assumption leads to a general model that logically justifies the additivity of covariate effects and the elimination of interaction terms, even if the dependent variable itself is a nonadditive function. A comparison with earlier methods illustrates other practical advantages of the method: in addition to an absence of residuals or interaction terms, the method can easily handle a large number of covariates and does not require a logically meaningful ordering of covariates. Two empirical examples show that the method can be applied flexibly to a wide variety of decomposition problems. This study also suggests that when data are available at multiple time points over a long interval, it is more accurate to compute an aggregated decomposition based on multiple subintervals than to compute a single decomposition for the entire study period.
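    A numerical sketch of the continuous-change idea: each covariate's contribution is the line integral of the corresponding partial derivative along a gradual path between the two populations. The straight path, midpoint rule and central differences are illustrative choices:

```python
import numpy as np

def continuous_change_decomposition(f, x1, x2, n=1000):
    """Split f(x2) - f(x1) into per-covariate contributions by
    integrating the partial derivatives along a straight path
    from x1 to x2 (midpoint rule, central differences)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    contrib = np.zeros_like(x1)
    for t in (np.arange(n) + 0.5) / n:
        x = x1 + t * (x2 - x1)
        for i in range(len(x)):
            h = 1e-6 * max(1.0, abs(x[i]))
            xp, xm = x.copy(), x.copy()
            xp[i] += h
            xm[i] -= h
            contrib[i] += (f(xp) - f(xm)) / (2 * h) * (x2[i] - x1[i]) / n
    return contrib   # contributions sum (approximately) to f(x2) - f(x1)

# e.g. f = lambda x: x[0] * x[1]: the two contributions are additive,
# with no residual or interaction term, even though f is nonadditive
```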

  3. A unified view of generative models for networks: models, methods, opportunities, and challenges

    CERN Document Server

    Jacobs, Abigail Z

    2014-01-01

    Research on probabilistic models of networks now spans a wide variety of fields, including physics, sociology, biology, statistics, and machine learning. These efforts have produced a diverse ecology of models and methods. Despite this diversity, many of these models share a common underlying structure: pairwise interactions (edges) are generated with probability conditional on latent vertex attributes. Differences between models generally stem from different philosophical choices about how to learn from data or different empirically-motivated goals. The highly interdisciplinary nature of work on these generative models, however, has inhibited the development of a unified view of their similarities and differences. For instance, novel theoretical models and optimization techniques developed in machine learning are largely unknown within the social and biological sciences, which have instead emphasized model interpretability. Here, we describe a unified view of generative models for networks that draws togethe...

  4. Method of Numerical Modeling for Constitutive Relations of Clay

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In order to study the method of numerical modeling for the constitutive relations of clay, on the basis of the principle of interaction between plastic volumetric strain and plastic generalized shear strain, two constitutive functionals that include a stress-path function were used as the basic framework of the constitutive model; they are able to represent stress-path dependence. Two partial differential cross terms appear in the expression of the stress-strain increment relation and represent the interaction between plastic volumetric strain and plastic generalized shear strain. The elastoplastic constitutive models of clay under two kinds of stress paths, CTC and TC, were constructed using triaxial test results. The three basic characteristics of soil deformation (pressure sensitivity, dilatancy, and stress-path dependence) are well explained by these two models. Using visualization, the three-dimensional surfaces of shear and volumetric strains in the whole stress field under the CTC and TC stress paths are given. In addition, the two families of shear and volumetric yield loci under the CTC and TC paths are plotted. By comparing the deformation results under these two stress paths, it has been found that there are obvious differences in the strain peaks, the shapes of the strain surfaces, and the trends of variation of the volumetric yield loci, while both families of shear yield loci are similar. These results demonstrate that the influence of the stress path on the constitutive relations of clay is considerably large and not negligible. A numerical modeling method that can sufficiently reflect stress-path dependence is therefore superior to the traditional one.

  5. Spinal posture and pelvic position in three hundred forty-five elementary school children: a rasterstereographic pilot study

    Directory of Open Access Journals (Sweden)

    Thimm Christoph Furian

    2013-03-01

    Full Text Available Children's posture has been of growing concern due to observations that it seems to be impaired compared with previous generations. So far no reference data for spinal posture and pelvic position in healthy children are available. The purpose of this pilot study was to determine rasterstereographic posture values in children during their second growth phase. Three hundred and forty-five pupils were measured with a rasterstereographic device in a neutral standing position with hanging arms. To further analyse changes in spinal posture during growth, the children were divided into 12-month age clusters. A mean kyphotic angle of 47.1° ± 7.5 and a mean lordotic angle of 42.1° ± 9.9 were measured. Trunk imbalance in girls (5.85 mm ± 0.74) and boys (7.48 mm ± 0.83) varied only little between the age groups, with boys showing slightly higher values than girls. Trunk inclination did not show any significant differences between the age groups in boys or girls. Girls' inclination was 2.53° ± 1.96, with a tendency toward decreasing angles with age, and was therefore slightly smaller than in boys (2.98° ± 2.18). Lateral deviation (4.8 mm) and pelvic position (tilt: 2.75 mm; torsion: 1.53°; inclination: 19.8° ± 19.8) were comparable for all age groups and genders. This study provides the first systematic rasterstereographic analysis of spinal posture in children between 6 and 11 years. With rasterstereography a reliable three-dimensional analysis of spinal posture and pelvic position is possible. Spinal posture and pelvic position did not change significantly with increasing age in this collective of children during the second growth phase.

  6. Optimization methods and silicon solar cell numerical models

    Science.gov (United States)

    Girardini, K.; Jacobsen, S. E.

    1986-01-01

    An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
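    A compact sketch of the coupling, with a cheap analytic stand-in for the expensive numerical cell model; a derivative-free optimizer is used since each "simulation" call is assumed costly, and the design variables and their optimum are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def cell_efficiency(d):
    """Cheap analytic stand-in for the expensive simulation (SCAP1D in
    the paper): d = [log10 doping, junction depth (um), thickness (um)]."""
    return 0.20 - 0.002 * (d[0] - 16.5)**2 - 0.01 * (d[1] - 0.4)**2 \
                - 1e-6 * (d[2] - 220.0)**2

calls = {"n": 0}
def objective(d):
    calls["n"] += 1              # count how often the 'simulation' is run
    return -cell_efficiency(d)   # maximize efficiency

res = minimize(objective, x0=[16.0, 0.8, 300.0], method='Nelder-Mead')
print(res.x, -res.fun, calls["n"])   # design, efficiency, model calls
```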

  7. AUTONOMOUS CT REPLACEMENT METHOD FOR THE SKULL PROSTHESIS MODELLING

    Directory of Open Access Journals (Sweden)

    Marcelo Rudek

    2015-12-01

    Full Text Available The geometric modeling of prostheses is a complex task from the medical and engineering viewpoints. A method based on CT replacement is proposed in order to circumvent the problems related to the information missing for modeling. The method is based on digital image processing and a swarm intelligence algorithm. In this approach, a missing region of the defective skull is represented by curvature descriptors. The main function of the descriptors is to simplify the skull's contour geometry; they are defined from cubic Bézier curves using a meta-heuristic process for parameter estimation. The Artificial Bee Colony (ABC) optimization technique is applied in order to evaluate the best solution. The descriptors from a defective CT slice image are the search parameters in medical image databases, and a similar image, i.e. one with similar descriptors, can be retrieved and used to replace the defective slice. Thus, a prosthesis piece is automatically modeled with information extracted from distinct skulls with similar anatomical characteristics.
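    A toy illustration of the descriptor idea, assuming a 2-D contour segment has been extracted from a slice; the cubic Bézier control points are fitted here by linear least squares rather than by the paper's ABC meta-heuristic:

```python
import numpy as np

def bernstein_matrix(t):
    """Cubic Bernstein basis evaluated at parameter values t in [0, 1]."""
    t = np.asarray(t)[:, None]
    return np.hstack([(1 - t)**3, 3 * t * (1 - t)**2,
                      3 * t**2 * (1 - t), t**3])

def fit_cubic_bezier(points):
    """Least-squares control points of a cubic Bezier through a contour
    segment (n, 2); the 4 control points act as the segment's
    curvature descriptor."""
    t = np.linspace(0.0, 1.0, len(points))   # chord-length param. is better
    B = bernstein_matrix(t)
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl                              # shape (4, 2)

def descriptor_distance(c1, c2):
    """Two slices match when their descriptors are close."""
    return np.linalg.norm(c1 - c2)
```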

  8. Dynamic airspace configuration method based on a weighted graph model

    Directory of Open Access Journals (Sweden)

    Chen Yangzhou

    2014-08-01

    Full Text Available This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, in which the vertices represent key points such as airports and waypoints, and the edges represent air routes. The vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Then, aircraft counts are computed for each cell and for each air route. By assigning both the vertices and the edges these aircraft counts, a weighted graph model is obtained, and the airspace configuration problem is described as a weighted graph partitioning problem. The problem is solved by a graph partitioning algorithm, which is a mixture of a general weighted graph cuts algorithm, an optimal dynamic load balancing algorithm and a heuristic algorithm. After the cuts algorithm partitions the model into sub-graphs, the load balancing algorithm together with the heuristic algorithm transfers aircraft counts to balance the workload among the sub-graphs. Lastly, airspace configuration is completed by determining the sector boundaries. The simulation results show that the designed sectors satisfy not only the workload balancing condition but also constraints such as convexity, connectivity, and the minimum distance constraint.
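    A reduced sketch of the graph set-up using networkx; Kernighan-Lin bisection stands in for the paper's mixed cuts/load-balancing/heuristic strategy, and note that it balances node counts rather than the weighted workload:

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def bisect_airspace(points, routes, counts):
    """points: waypoint/airport ids; routes: (u, v, aircraft_count)
    tuples; counts: aircraft count per point (vertex weight)."""
    G = nx.Graph()
    for p in points:
        G.add_node(p, weight=counts.get(p, 0))
    for u, v, c in routes:
        G.add_edge(u, v, weight=c)    # edge weight = traffic on the route
    # bisection minimizes the weight of cut edges; a stand-in for the
    # paper's load-balancing and heuristic refinement stages
    return kernighan_lin_bisection(G, weight='weight')

# sector boundaries would then follow the Voronoi cells of each node set
```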

  9. Dynamic airspace configuration method based on a weighted graph model

    Institute of Scientific and Technical Information of China (English)

    Chen Yangzhou; Zhang Defu

    2014-01-01

    This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, in which the vertices represent key points such as airports and waypoints, and the edges represent air routes. The vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Then, aircraft counts are computed for each cell and for each air route. By assigning both the vertices and the edges these aircraft counts, a weighted graph model is obtained, and the airspace configuration problem is described as a weighted graph partitioning problem. The problem is solved by a graph partitioning algorithm, which is a mixture of a general weighted graph cuts algorithm, an optimal dynamic load balancing algorithm and a heuristic algorithm. After the cuts algorithm partitions the model into sub-graphs, the load balancing algorithm together with the heuristic algorithm transfers aircraft counts to balance the workload among the sub-graphs. Lastly, airspace configuration is completed by determining the sector boundaries. The simulation results show that the designed sectors satisfy not only the workload balancing condition but also constraints such as convexity, connectivity, and the minimum distance constraint.

  10. A Successive Selection Method for finite element model updating

    Science.gov (United States)

    Gou, Baiyong; Zhang, Weijie; Lu, Qiuhai; Wang, Bo

    2016-03-01

    Finite Element (FE) models can be updated effectively and efficiently by using the Response Surface Method (RSM). However, this often involves performance trade-offs: high computational cost for better accuracy, or loss of efficiency when many design parameters must be updated. This paper proposes a Successive Selection Method (SSM), which is based on a linear Response Surface (RS) function and orthogonal design. SSM rewrites the linear RS function as a number of linear equations in order to adjust the Design of Experiments (DOE) after every FE calculation. SSM aims to interpret the implicit information provided by the FE analysis, to locate the DOE points more quickly and accurately, and thereby to alleviate the computational burden. This paper introduces the SSM and its application, describes the solution steps of point selection for the DOE in detail, and analyzes SSM's efficiency and accuracy in FE model updating. A numerical example of a simply supported beam and a practical example of a vehicle brake disc show that the SSM can provide higher speed and precision in FE model updating for engineering problems than the traditional RSM.

  11. Block-Krylov component synthesis method for structural model reduction

    Science.gov (United States)

    Craig, Roy R., Jr.; Hale, Arthur L.

    1988-01-01

    A new analytical method is presented for generating component shape vectors, or Ritz vectors, for use in component synthesis. Based on the concept of a block-Krylov subspace, easily derived recurrence relations generate blocks of Ritz vectors for each component. The subspace spanned by the Ritz vectors is called a block-Krylov subspace. The synthesis uses the new Ritz vectors rather than component normal modes to reduce the order of large, finite-element component models. An advantage of the Ritz vectors is that they involve significantly less computation than component normal modes. Both 'free-interface' and 'fixed-interface' component models are derived. They yield block-Krylov formulations paralleling the concepts of free-interface and fixed-interface component modal synthesis. Additionally, block-Krylov reduced-order component models are shown to have special disturbability/observability properties. Consequently, the method is attractive in active structural control applications, such as large space structures. The new fixed-interface methodology is demonstrated by a numerical example. The accuracy is found to be comparable to that of fixed-interface component modal synthesis.
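    A condensed numerical sketch of the block-Krylov recurrence using generic stiffness and mass matrices; the static solve followed by repeated application of K⁻¹M, with re-orthonormalization, is a simplified stand-in for the paper's free- and fixed-interface formulations:

```python
import numpy as np
from scipy.linalg import qr, solve

def block_krylov_basis(K, M, B, n_blocks):
    """Blocks of Ritz vectors spanning the block-Krylov subspace
    span{K^-1 B, (K^-1 M) K^-1 B, ...} for a component with stiffness K,
    mass M and force/interface distribution B (n x m)."""
    V = solve(K, B)                      # static response block
    blocks = []
    for _ in range(n_blocks):
        V, _ = qr(V, mode='economic')    # keep each block well conditioned
        blocks.append(V)
        V = solve(K, M @ V)              # recurrence: apply K^-1 M
    Q, _ = qr(np.hstack(blocks), mode='economic')
    return Q                             # reduced basis for synthesis
```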

  12. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, technique and method in both modeling and solution issues. It especially presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. This monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; students at an advanced undergraduate, master’s level in information systems, business administration, or the application of computer science.  

  13. Revisiting a model-independent dark energy reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)

    2012-09-15

    In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes; that is, the state of the art does not allow us to regard them on the same quality basis as SNeIa. We find there is considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. We then try to establish the current redshift range for which one can make solid predictions on dark energy evolution. Finally, we strengthen the former view that this model is modest in the sense that it provides only a picture of the global trend and has to be managed very carefully. On the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)

  14. The forty years of vermicular graphite cast iron development in China (Part Ⅱ)

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    2 Manufacturing methods and vermicularisers. 2.1 Manufacturing methods. From the long period of practical experience of VGCI production, the following four points are very important for the successful production of VGCI:

  15. Statistical Models and Methods for Network Meta-Analysis.

    Science.gov (United States)

    Madden, L V; Piepho, H-P; Paul, P A

    2016-08-01

    Meta-analysis, the methodology for analyzing the results from multiple independent studies, has grown tremendously in popularity over the last four decades. Although most meta-analyses involve a single effect size (summary result, such as a treatment difference) from each study, there are often multiple treatments of interest across the network of studies in the analysis. Multi-treatment (or network) meta-analysis can be used for simultaneously analyzing the results from all the treatments. However, the methodology is considerably more complicated than for the analysis of a single effect size, and there have not been adequate explanations of the approach for agricultural investigations. We review the methods and models for conducting a network meta-analysis based on frequentist statistical principles, and demonstrate the procedures using a published multi-treatment plant pathology data set. A major advantage of network meta-analysis is that correlations of estimated treatment effects are automatically taken into account when an appropriate model is used. Moreover, treatment comparisons may be possible in a network meta-analysis that are not possible in a single study because all treatments of interest may not be included in any given study. We review several models that consider the study effect as either fixed or random, and show how to interpret model-fitting output. We further show how to model the effect of moderator variables (study-level characteristics) on treatment effects, and present one approach to test for the consistency of treatment effects across the network. Online supplemental files give explanations on fitting the network meta-analytical models using SAS.
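    A minimal frequentist fixed-effect network meta-analysis by weighted least squares, shown in Python rather than the paper's SAS; the data layout (one row per study contrast) is an assumption for illustration, and random study effects and consistency checks are omitted:

```python
import numpy as np

def fixed_effect_nma(contrasts, n_treat):
    """contrasts: (i, j, y, var) tuples, y = observed effect of treatment
    j relative to i; treatment 0 is the reference. Returns the basic
    parameters d_1..d_{n-1} (effects vs reference) and their covariance."""
    X, y, w = [], [], []
    for i, j, yij, vij in contrasts:
        row = np.zeros(n_treat - 1)
        if j > 0:
            row[j - 1] += 1.0
        if i > 0:
            row[i - 1] -= 1.0
        X.append(row); y.append(yij); w.append(1.0 / vij)
    X, y, W = np.array(X), np.array(y), np.diag(w)
    cov = np.linalg.inv(X.T @ W @ X)      # correlations handled automatically
    beta = cov @ (X.T @ W @ y)
    return beta, cov

# e.g. fixed_effect_nma([(0, 1, 0.5, 0.04), (0, 2, 0.8, 0.05),
#                        (1, 2, 0.2, 0.06)], n_treat=3)
```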

  16. Modeling patient safety incidents knowledge with the Categorial Structure method.

    Science.gov (United States)

    Souvignet, Julien; Bousquet, Cédric; Lewalle, Pierre; Trombert-Paviot, Béatrice; Rodrigues, Jean Marie

    2011-01-01

    Following the WHO initiative named World Alliance for Patient Safety (PS), launched in 2004, a conceptual framework developed by PS national reporting experts summarized the available knowledge. As a second step, the Department of Public Health team of the University of Saint-Etienne elaborated a Categorial Structure (a semi-formal structure not related to an upper-level ontology) identifying the elements of the semantic structure underpinning the broad concepts contained in the framework for patient safety. This knowledge engineering method was developed to enable the modeling of patient safety information as a prerequisite for subsequent full ontology development. The present article describes the semantic dissection of the concepts, the elicitation of the ontology requirements and the domain constraints of the conceptual framework. This ontology includes 134 concepts and 25 distinct relations and will serve as a basis for an Information Model for Patient Safety.

  17. A model for explaining fusion suppression using classical trajectory method

    Science.gov (United States)

    Phookan, C. K.; Kalita, K.

    2015-01-01

    We adopt a semi-classical approach to explain projectile breakup and above-barrier fusion suppression for the reactions 6Li+152Sm and 6Li+144Sm. The cut-off impact parameter for fusion is determined by employing quantum mechanical ideas. Within this cut-off impact parameter for fusion, the fraction of projectiles undergoing breakup is determined using the method of classical trajectories in two dimensions. To obtain the initial conditions of the equations of motion, a simplified model of the 6Li nucleus has been proposed. We introduce a simple formula to explain fusion suppression. We find excellent agreement between the experimental and calculated fusion cross sections. A slight modification of the above formula for fusion suppression is also proposed for a three-dimensional model.

  18. A model for explaining fusion suppression using classical trajectory method

    Directory of Open Access Journals (Sweden)

    Phookan C. K.

    2015-01-01

    Full Text Available We adopt a semi-classical approach to explain projectile breakup and above-barrier fusion suppression for the reactions 6Li+152Sm and 6Li+144Sm. The cut-off impact parameter for fusion is determined by employing quantum mechanical ideas. Within this cut-off impact parameter for fusion, the fraction of projectiles undergoing breakup is determined using the method of classical trajectories in two dimensions. To obtain the initial conditions of the equations of motion, a simplified model of the 6Li nucleus has been proposed. We introduce a simple formula to explain fusion suppression. We find excellent agreement between the experimental and calculated fusion cross sections. A slight modification of the above formula for fusion suppression is also proposed for a three-dimensional model.

  19. Computational methods of the Advanced Fluid Dynamics Model

    Energy Technology Data Exchange (ETDEWEB)

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.; Berthier, J.; Maudlin, P.J.; Schmuck, P.; Goutagny, L.; Ichikawa, S.; Ninokata, H.; Luck, L.B.

    1987-01-01

    To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development.

  20. GEOMETRIC METHOD OF SEQUENTIAL ESTIMATION RELATED TO MULTINOMIAL DISTRIBUTION MODELS

    Institute of Scientific and Technical Information of China (English)

    WEI Bocheng; LI Shouye

    1995-01-01

    In the 1980s, differential geometric methods were successfully used to study curved exponential families and normal nonlinear regression models. This paper presents a new geometric structure to study multinomial distribution models which contain a set of nonlinear parameters. Based on this geometric structure, the authors study several asymptotic properties of sequential estimation. The bias, the variance and the information loss of the sequential estimates are given from a geometric viewpoint, and a limit theorem connecting the observed and expected Fisher information is obtained in terms of curvature measures. The results show that the sequential estimation procedure has some better properties, which are generally impossible for nonsequential estimation procedures.

  1. Unsteady aerodynamic modeling based on POD-observer method

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    A new hybrid approach to constructing reduced-order models (ROM) of unsteady aerodynamics applicable to aeroelastic analysis is presented, using proper orthogonal decomposition (POD) in combination with observer techniques. Fluid modes are generated through POD by sampling observations of solutions derived from the full-order model. The response in the POD training is projected onto the fluid modes to determine the time history of the modal amplitudes. The resulting data are used to extract the Markov parameters of the low-dimensional model for modal amplitudes using a related deadbeat observer. The state-space realization is synthesized from the system's Markov parameters, which are processed with the eigensystem realization algorithm. The POD-observer method is applied to a two-dimensional airfoil system in a subsonic flow field. The results predicted by the ROM are in general agreement with those from the full-order system. The ROM obtained by the hybrid approach captures the essence of the fluid system and produces a vast reduction in both degrees of freedom and computational time relative to the full-order model.
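    The POD stage of the hybrid approach in a few lines; the snapshot layout is an assumption, and the subsequent deadbeat-observer/ERA identification step is omitted:

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """snapshots: (n_dof, n_samples) array of full-order solutions.
    Returns the leading fluid modes and the modal amplitude histories
    (the inputs to the subsequent observer/ERA identification)."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :n_modes]
    amplitudes = s[:n_modes, None] * Vt[:n_modes]   # = modes.T @ snapshots
    return modes, amplitudes
```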

  2. Methods for the Update and Verification of Forest Surface Model

    Science.gov (United States)

    Rybansky, M.; Brenova, M.; Zerzan, P.; Simon, J.; Mikita, T.

    2016-06-01

    The digital terrain model (DTM) represents the bare-ground earth's surface without any objects such as vegetation and buildings. In contrast to a DTM, a digital surface model (DSM) represents the earth's surface including all objects on it. The DTM mostly does not change as frequently as the DSM. The most important changes of the DSM occur in forest areas due to vegetation growth. Using LIDAR technology, the canopy height model (CHM) is obtained by subtracting the DTM from the corresponding DSM. The DSM is calculated from the first-pulse echo data and the DTM from the last-pulse echo data. The main problem in using DSM and CHM data is the currency of the airborne laser scanning. This paper describes a method of calculating changes in the CHM and DSM data using the relation between canopy height and tree age. To obtain a current reference model of the canopy height, photogrammetric and trigonometric measurements of single trees were used. By comparing the heights of corresponding trees on aerial photographs of various ages, statistical sets of tree growth rates were obtained. These statistical data and the LIDAR data were compared with the growth curve of a spruce forest corresponding to a similar natural environment (soil quality, climate characteristics, geographic location, etc.) to obtain the updating characteristics.

  3. An elastic mechanics model and computation method for geotechnical material

    Institute of Scientific and Technical Information of China (English)

    Zheng Yingren; Gao Hong; Zheng Lushi

    2010-01-01

    The internal friction characteristic is one of the basic properties of geotechnical materials and is present in mechanical elements at all times. However, until now internal friction has only been considered in limit analysis and plastic mechanics, not in the elastic theory of rocks and soils. We consider that internal friction exists in both the elastic and the plastic state of geotechnical materials, and so the mechanical unit of a friction material is constituted. Based on soil test results, the paper also proposes that cohesion takes effect first and that internal friction develops gradually with increasing deformation. By assuming that the friction coefficient is proportional to the strain, the internal friction is computed. Finally, by analogy with linear elastic mechanics, a nonlinear elastic mechanics model of friction material is established, in which the shear modulus G is not a constant. The new model and the traditional elastic model are used simultaneously to analyze an elastic foundation. The results indicate that the displacements computed with the new model are smaller than those from the traditional method, which agrees with the facts and shows that the mechanical units of friction material are suitable for geotechnical materials.

  4. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.
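    A small Python analogue of what the paper does in WinBUGS: a three-state Markov cost model whose transition probabilities are drawn from Beta distributions (posterior-like inputs), propagating parameter uncertainty to the output; all states, probabilities and costs are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def markov_cost(p_prog, p_die, n_cycles=40, cost=(100.0, 800.0, 0.0)):
    """Expected total cost of a 3-state cohort (well -> sick -> dead)
    for one draw of the transition probabilities."""
    P = np.array([[1 - p_prog, p_prog, 0.0],
                  [0.0, 1 - p_die, p_die],
                  [0.0, 0.0, 1.0]])
    state, total = np.array([1.0, 0.0, 0.0]), 0.0
    for _ in range(n_cycles):
        state = state @ P
        total += state @ np.array(cost)
    return total

# probabilistic simulation: Beta draws stand in for posterior samples
draws = [markov_cost(rng.beta(20, 80), rng.beta(10, 90))
         for _ in range(5000)]
print(np.mean(draws), np.percentile(draws, [2.5, 97.5]))
```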

  5. Methods for Developing Emissions Scenarios for Integrated Assessment Models

    Energy Technology Data Exchange (ETDEWEB)

    Prinn, Ronald [MIT; Webster, Mort [MIT

    2007-08-20

    The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.

  6. Modelling a gamma irradiation process using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Soares, Gabriela A.; Pereira, Marcio T., E-mail: gas@cdtn.br, E-mail: mtp@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2011-07-01

    In gamma irradiation services, evaluation of the absorbed dose is of great importance in order to guarantee service quality. When the physical structure and human resources for performing dosimetry on each irradiated product are not available, the application of mathematical models may be a solution. These make it possible to predict the dose delivered to a specific product, irradiated in a specific position for a certain period of time, provided the models are validated with dosimetry tests. At the gamma irradiation facility of CDTN, equipped with a Cobalt-60 source, the Monte Carlo method was applied to simulate product irradiations, and the results were compared with Fricke dosimeters irradiated under the same conditions as the simulations. The first results obtained showed the applicability of the method, with a linear relation between simulation and experimental results. (author)
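
    For flavor, here is a deliberately crude Monte Carlo sketch of the kind of calculation such simulations perform: photons from a Co-60 source are tracked into a water slab and first-interaction energy is tallied. Production codes track scattering and secondary particles; this toy treats every interaction as local absorption, and the slab thickness is an assumed value.

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy Monte Carlo: energy deposited in a water slab by Co-60 photons.
      # Every interaction is treated as local absorption, a deliberately
      # crude assumption made only to keep the sketch short.
      MU = 0.063          # cm^-1, approx. attenuation of ~1.25 MeV photons in water
      E_PHOTON = 1.25     # MeV, average Co-60 gamma energy
      THICKNESS = 10.0    # cm, assumed product thickness
      N = 1_000_000

      depth = rng.exponential(1.0 / MU, size=N)   # sampled free path to interaction
      absorbed = depth < THICKNESS                # photons interacting inside the slab
      deposited = absorbed.sum() * E_PHOTON       # MeV deposited, local absorption

      print(f"fraction interacting: {absorbed.mean():.3f}")
      print(f"energy deposited: {deposited:.3e} MeV per {N} incident photons")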

  7. A compound compensation method for car-following model

    Science.gov (United States)

    Zhu, Wen-Xing; Jun, Du; Zhang, Li-Dong

    2016-10-01

    A compound compensation mechanism was introduced into the car-following system. Two basic compensation methods were combined to generate a compound control strategy to improve the performance of the traffic flow system. The compensation effect was analyzed with the unit step response in the time domain and the Bode diagram in the frequency domain, respectively. Two lemmas and one theorem were proved with the use of the Routh criterion and the small gain theorem. Numerical simulations were conducted in two situations under three types of condition. The simulation results confirm that increasing the compensation parameters strengthens the stability of the car-following system, and the numerical results are in accordance with the analytical results. In general, the performance of a car-following model can be improved with an exterior control method.
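
    A minimal sketch of the idea, assuming a standard optimal-velocity car-following model with an added feedback term on the velocity difference standing in for the compensator; the paper's actual compound compensator and calibrated parameters are not reproduced here.

      import numpy as np

      # Optimal-velocity car-following model (Bando-type velocity function)
      # with an extra feedback gain k on the velocity difference acting as
      # a simple compensation term; all parameters are illustrative.
      def V(dx):
          return 16.8 * (np.tanh(0.086 * (dx - 25.0)) + 0.913)

      a, k = 0.8, 0.4                 # sensitivity and compensation gain
      N, dt, steps = 10, 0.1, 3000

      x = np.arange(N)[::-1] * 25.0   # platoon with 25 m headways, car 0 leads
      v = np.full(N, V(25.0))

      for t in range(steps):
          if t == 100:                # leader brakes once: the disturbance
              v[0] *= 0.5
          dx = np.empty(N); dv = np.empty(N)
          dx[0], dv[0] = 1e9, 0.0     # leader is unconstrained
          dx[1:] = x[:-1] - x[1:]
          dv[1:] = v[:-1] - v[1:]
          acc = a * (V(dx) - v) + k * dv
          acc[0] = 0.0                # leader keeps its speed
          v = v + acc * dt
          x = x + v * dt

      print("final speeds (m/s):", np.round(v, 2))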

  8. Comparison of operation optimization methods in energy system modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2013-01-01

    … possibilities for decoupling production constraints may be valuable. Introduction of heat pumps in the district heating network may provide this ability. In order to evaluate whether the introduction of heat pumps is economically viable, we develop calculation methods for the operation patterns of each of the used … energy technologies. In the paper, three frequently used operation optimization methods are examined with respect to their impact on operation management of the combined technologies. One of the investigated approaches utilises linear programming for optimisation, one uses linear programming with binary … operation constraints, while the third approach uses nonlinear programming. In the present case the non-linearity occurs in the boiler efficiency of the power plants and the cv-value of an extraction plant. The linear programming model is used as a benchmark, as this type is frequently used, and has the lowest …
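
    As a toy counterpart to the linear-programming benchmark mentioned in the abstract, the following sketch dispatches a boiler and a heat pump to meet a heat demand at least cost with SciPy's linprog. All prices, efficiencies, capacities, and the demand are invented.

      import numpy as np
      from scipy.optimize import linprog

      # Minimal linear dispatch: meet a heat demand with a gas boiler and an
      # electric heat pump at least cost. All numbers are invented; the
      # benchmark role of the LP formulation is the point.
      heat_demand = 80.0                   # MWh heat over the period
      gas_price, el_price = 25.0, 60.0     # EUR per MWh fuel / electricity
      boiler_eff, cop = 0.95, 3.0

      # Decision variables: [heat from boiler, heat from heat pump], in MWh.
      c = [gas_price / boiler_eff, el_price / cop]   # cost per MWh of heat
      A_eq = [[1.0, 1.0]]
      b_eq = [heat_demand]
      bounds = [(0.0, 100.0), (0.0, 50.0)]           # assumed capacities

      res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
      print("boiler, heat pump [MWh]:", np.round(res.x, 1),
            "cost [EUR]:", round(res.fun, 1))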

  9. An Improved Model Facet Method to Support EA Alignment

    Directory of Open Access Journals (Sweden)

    Jonathan Pepin

    2016-12-01

    Full Text Available Information System evolution requires a well-structured Enterprise Architecture and its rigorous management. The alignment of the elements in the architecture across various abstraction layers may contribute to this management, but appropriate tools are needed. We propose improvements to the Facet technique and develop accompanying tools to master the difficulties of aligning the models used to structure an Enterprise Architecture. This technique has been tested on many real-life cases to demonstrate the effectiveness of our EA alignment method. The tools are already integrated in the Eclipse EMF Facet project.

  10. Modeling intraindividual variability with repeated measures data methods and applications

    CERN Document Server

    Hershberger, Scott L

    2013-01-01

    This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features a minimum of mathematical detail and an empirical example.

  11. Convergence of a finite difference method for combustion model problems

    Institute of Scientific and Technical Information of China (English)

    YING; Long'an

    2004-01-01

    We study a finite difference scheme for a combustion model problem. A projection scheme near the combustion wave, and the standard upwind finite difference scheme away from the combustion wave are applied. Convergence to weak solutions with a combustion wave is proved under the normal Courant-Friedrichs-Lewy condition. Some conditions on the ignition temperature are given to guarantee the solution containing a strong detonation wave or a weak detonation wave. Convergence to strong detonation wave solutions for the random projection method is also proved.
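
    The following Python sketch illustrates the basic ingredients named in the abstract, an upwind finite difference step plus an ignition-temperature switch in the reaction term, on a toy scalar problem under the usual CFL restriction. It does not reproduce the paper's projection scheme; the equation and all constants are assumptions.

      import numpy as np

      # Upwind finite differences for a toy advection-reaction problem:
      #   u_t + c u_x = q (1 - u) where u exceeds the ignition value, else 0,
      # stepped under the usual CFL restriction.
      c, q, u_ign = 1.0, 5.0, 0.5
      nx, L = 400, 4.0
      dx = L / nx
      dt = 0.4 * dx / c                  # CFL number 0.4

      x = np.linspace(0.0, L, nx)
      u = np.where(x < 0.5, 1.0, 0.0)    # burnt region on the left

      for _ in range(500):
          react = np.where(u > u_ign, q * (1.0 - u), 0.0)
          u[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1]) + dt * react[1:]
          u[0] = 1.0                     # inflow held at the burnt state

      print("front location ~", x[np.argmax(u < 0.5)])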

  12. 3D virtual human rapid modeling method based on top-down modeling mechanism

    Directory of Open Access Journals (Sweden)

    LI Taotao

    2017-01-01

    Full Text Available Aiming to satisfy the vast demand for custom-made 3D virtual human characters and for rapid modeling in the field of 3D virtual reality, a new top-down rapid modeling method for virtual humans is put forward in this paper, based on a systematic analysis of the current state and shortcomings of virtual human modeling technology. After the top-level design of the virtual human hierarchical structure frame, modular expression of the virtual human and parameter design for each module are achieved gradually, level by level, downwards. While the relationships of connectors and mapping restraints among different modules are established, the definition of the size and texture parameters is also completed. A standardized process is meanwhile produced to support and adapt the practical operation of top-down rapid modeling of virtual humans. Finally, a modeling application, which takes a Chinese captain character as an example, is carried out to validate the virtual human rapid modeling method based on the top-down modeling mechanism. The result demonstrates high modeling efficiency and provides a new concept for 3D virtual human geometric modeling and texture modeling.

  13. Implementation of splitting methods for air pollution modeling

    Directory of Open Access Journals (Sweden)

    M. Schlegel

    2011-11-01

    Full Text Available Explicit time integration methods are characterized by a small numerical effort per time step. In the application to multiscale problems in atmospheric modeling, this benefit is often more than compensated by stability problems and step size restrictions resulting from stiff chemical reaction terms and from a locally varying Courant-Friedrichs-Lewy (CFL) condition for the advection terms. Splitting methods may be applied to efficiently combine implicit and explicit methods (IMEX splitting). Complementarily, multirate time integration schemes allow for a local adaptation of the time step size to the grid size. In combination, these approaches lead to schemes which are efficient in terms of evaluations of the right-hand side. Special challenges arise when these methods are to be implemented. For an efficient implementation it is crucial to locate and exploit redundancies. Furthermore, the more complex program flow may lead to computational overhead which, in the worst case, more than compensates the theoretical gain in efficiency. We present a general splitting approach which allows both for IMEX splittings and for local time step adaptation. The main focus is on an efficient implementation of this approach for parallel computation on computer clusters.
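
    A minimal IMEX sketch, assuming a linear advection equation with a stiff relaxation source: the advection term is advanced with an explicit upwind step (CFL-limited), while the stiff term is treated implicitly, so the time step is not constrained by the large rate constant k. All parameters are illustrative.

      import numpy as np

      # IMEX sketch for u_t + a u_x = -k (u - u_eq) with a large rate k:
      # explicit upwind advection, implicit (exactly solvable) relaxation,
      # so the step is limited by the CFL condition only, not by k.
      a, k, u_eq = 1.0, 1e4, 0.2
      nx, L = 200, 1.0
      dx = L / nx
      dt = 0.8 * dx / a

      x = np.linspace(0.0, L, nx)
      u = np.exp(-100.0 * (x - 0.3) ** 2)    # initial bump

      for _ in range(100):
          # explicit advection step (upwind, a > 0); u[0] acts as inflow
          u[1:] = u[1:] - a * dt / dx * (u[1:] - u[:-1])
          # implicit reaction step: solve (u_new - u)/dt = -k (u_new - u_eq)
          u = (u + dt * k * u_eq) / (1.0 + dt * k)

      print("max deviation from equilibrium:", float(np.abs(u - u_eq).max()))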

  14. An uncertain multidisciplinary design optimization method using interval convex models

    Science.gov (United States)

    Li, Fangyi; Luo, Zhen; Sun, Guangyong; Zhang, Nong

    2013-06-01

    This article proposes an uncertain multi-objective multidisciplinary design optimization methodology, which employs the interval model to represent the uncertainties of uncertain-but-bounded parameters. The interval number programming method is applied to transform each uncertain objective function into two deterministic objective functions, and a satisfaction degree of intervals is used to convert both the uncertain inequality and equality constraints to deterministic inequality constraints. In doing so, an unconstrained deterministic optimization problem will be constructed in association with the penalty function method. The design will be finally formulated as a nested three-loop optimization, a class of highly challenging problems in the area of engineering design optimization. An advanced hierarchical optimization scheme is developed to solve the proposed optimization problem based on the multidisciplinary feasible strategy, which is a well-studied method able to reduce the dimensions of multidisciplinary design optimization problems by using the design variables as independent optimization variables. In the hierarchical optimization system, the non-dominated sorting genetic algorithm II, sequential quadratic programming method and Gauss-Seidel iterative approach are applied to the outer, middle and inner loops of the optimization problem, respectively. Typical numerical examples are used to demonstrate the effectiveness of the proposed methodology.

  15. A Probabilistic Recommendation Method Inspired by Latent Dirichlet Allocation Model

    Directory of Open Access Journals (Sweden)

    WenBo Xie

    2014-01-01

    Full Text Available The recent decade has witnessed an increasing popularity of recommendation systems, which help users acquire relevant knowledge, commodities, and services from an overwhelming information ocean on the Internet. Latent Dirichlet Allocation (LDA), originally presented as a graphical model for text topic discovery, has now found application in many other disciplines. In this paper, we propose an LDA-inspired probabilistic recommendation method by taking the user-item collecting behavior as a two-step process: every user first becomes a member of one latent user-group with a certain probability, and each user-group then collects various items with different probabilities. Gibbs sampling is employed to approximate all the probabilities in the two-step process. The experimental results on three real-world data sets, MovieLens, Netflix, and Last.fm, show that our method exhibits a competitive performance on precision, coverage, and diversity in comparison with four other typical recommendation methods. Moreover, we present an approximate strategy to reduce the computing complexity of our method with a slight degradation of the performance.
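
    A compact sketch of the two-step model with collapsed Gibbs sampling, treating users as 'documents' and collected items as 'words' in the LDA analogy. Data sizes, hyperparameters, and the toy collections are invented, and the paper's evaluation protocol is not reproduced.

      import numpy as np

      # Collapsed Gibbs sampler for the two-step model: users belong to
      # latent groups, groups collect items. Sizes and hyperparameters are
      # toy values.
      rng = np.random.default_rng(7)
      n_users, n_items, K = 20, 12, 3
      alpha, beta = 0.5, 0.1

      collections = [rng.integers(0, n_items, size=rng.integers(3, 8)).tolist()
                     for _ in range(n_users)]

      z = [[int(rng.integers(K)) for _ in items] for items in collections]
      n_uk = np.zeros((n_users, K))          # user-group counts
      n_ki = np.zeros((K, n_items))          # group-item counts
      n_k = np.zeros(K)
      for u, items in enumerate(collections):
          for j, i in enumerate(items):
              g = z[u][j]
              n_uk[u, g] += 1; n_ki[g, i] += 1; n_k[g] += 1

      for _ in range(200):                   # Gibbs sweeps
          for u, items in enumerate(collections):
              for j, i in enumerate(items):
                  g = z[u][j]                # remove current assignment
                  n_uk[u, g] -= 1; n_ki[g, i] -= 1; n_k[g] -= 1
                  p = (n_uk[u] + alpha) * (n_ki[:, i] + beta) / (n_k + n_items * beta)
                  g = int(rng.choice(K, p=p / p.sum()))
                  z[u][j] = g                # resample from the conditional
                  n_uk[u, g] += 1; n_ki[g, i] += 1; n_k[g] += 1

      # recommend: score item i for user u as sum_k theta[u, k] * phi[k, i]
      theta = (n_uk + alpha) / (n_uk.sum(1, keepdims=True) + K * alpha)
      phi = (n_ki + beta) / (n_ki.sum(1, keepdims=True) + n_items * beta)
      print("top items for user 0:", np.argsort(-(theta[0] @ phi))[:3])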

  16. OBJECT ORIENTED MODELLING, A MODELLING METHOD OF AN ECONOMIC ORGANIZATION ACTIVITY

    Directory of Open Access Journals (Sweden)

    TĂNĂSESCU ANA

    2014-05-01

    Full Text Available Now, most economic organizations use different types of information systems in order to facilitate their activity. There are different methodologies, methods and techniques that can be used to design information systems. In this paper, I propose to present the advantages of using object oriented modelling in the information system design of an economic organization. Thus, I have modelled the activity of a photo studio, using Visual Paradigm for UML as a modelling tool. For this purpose, I have identified the use cases for the analyzed system and presented the use case diagram. I have also carried out the static and dynamic modelling of the system, through the best-known UML diagrams.

  17. A novel method to establish a rat ED model using internal iliac artery ligation combined with hyperlipidemia.

    Directory of Open Access Journals (Sweden)

    Chao Hu

    Full Text Available OBJECTIVE: To investigate a novel method, namely using bilateral internal iliac artery ligation combined with a high-fat diet (BCH), for establishing a rat model of erectile dysfunction (ED) that, compared to classical approaches, more closely mimics the chronic pathophysiology of human ED after acute ischemic insult. MATERIALS AND METHODS: Forty 4-month-old male Sprague Dawley rats were randomly placed into five groups (n = 8 per group): normal control (NC), bilateral internal iliac artery ligation (BIIAL), high-fat diet (HFD), BCH, and mock surgery (MS). All rats were induced for 12 weeks. Copulatory behavior, intracavernosal pressure (ICP), ICP/mean arterial pressure, hematoxylin-eosin staining, Masson's trichrome staining, serum lipid levels, and endothelial and neuronal nitric oxide synthase immunohistochemical staining of the cavernous smooth muscle and endothelium were assessed. Data were analyzed by SAS 8.0 for Windows. RESULTS: Serum total cholesterol and triglyceride levels were significantly higher in the HFD and BCH groups than the NC and MS groups. High density lipoprotein levels were significantly lower in the HFD and BCH groups than the NC and MS groups. The ICP values and mount and intromission numbers were significantly lower in the BIIAL, HFD, and BCH groups than in the NC and MS groups. ICP was significantly lower in the BCH group than in the BIIAL and HFD groups. Cavernous smooth muscle and endothelial damage increased in the HFD and BCH groups. The cavernous smooth muscle to collagen ratio and nNOS and eNOS staining decreased significantly in the BIIAL, HFD, and BCH groups compared to the NC and MS groups. CONCLUSIONS: The novel BCH model mimics the chronic pathophysiology of ED in humans and avoids the drawbacks of traditional ED models.

  18. Modeling the Performance of Fast Multipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-04-06

    The current trend in high performance computing is pushing towards exascale computing. To achieve this exascale performance, future systems will have between 100 million and 1 billion cores, assuming gigahertz cores. Currently, there are many efforts studying the hardware and software bottlenecks for building an exascale system. It is important to understand and meet these bottlenecks in order to attain 10 PFLOPS performance. On the applications side, there is an urgent need to model application performance and to understand what changes need to be made to ensure continued scalability at this scale. Fast multipole methods (FMM) were originally developed for accelerating N-body problems in particle-based methods. Nowadays, FMM is more than an N-body solver; recent trends in HPC have been to use FMMs in unconventional application areas. FMM is likely to be a main player at exascale due to its hierarchical nature and the techniques used to access the data via a tree structure, which allow many operations to happen simultaneously at each level of the hierarchy. In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. The ultimate aim of this thesis is to ensure the scalability of FMM on future exascale machines.

  19. Approximation by randomly weighting method in censored regression model

    Institute of Scientific and Technical Information of China (English)

    WANG ZhanFeng; WU YaoHua; ZHAO LinCheng

    2009-01-01

    Censored regression ("Tobit") models have been in common use, and their linear hypothesis testings have been widely studied. However, the critical values of these tests are usually related to quantities of an unknown error distribution and estimators of nuisance parameters. In this paper, we propose a randomly weighting test statistic and take its conditional distribution as an approximation to the null distribution of the test statistic. It is shown that, under both the null and local alternative hypotheses, the conditionally asymptotic distribution of the randomly weighting test statistic is the same as the null distribution of the test statistic. Therefore, the critical values of the test statistic can be obtained by the randomly weighting method without estimating the nuisance parameters. At the same time, we also achieve the weak consistency and asymptotic normality of the randomly weighting least absolute deviation estimate in the censored regression model. Simulation studies illustrate that the performance of our proposed resampling test method is better than that of the central chi-square distribution under the null hypothesis.
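
    A hedged sketch of the random-weighting idea on a much simpler problem (a studentized mean rather than censored regression): the statistic is recomputed under i.i.d. exponential weights on centered data, and the resulting conditional distribution stands in for the null distribution. The weight law and the toy data are assumptions, not the paper's setup.

      import numpy as np

      # Random-weighting approximation to the null distribution of a
      # studentized mean; Exp(1) weights are a common, assumed choice.
      rng = np.random.default_rng(3)
      x = rng.standard_t(df=5, size=100)        # H0: mean equals 0 holds here

      n = len(x)
      stat = np.sqrt(n) * x.mean() / x.std(ddof=1)

      B = 2000
      boot = np.empty(B)
      centered = x - x.mean()
      for b in range(B):
          w = rng.exponential(1.0, size=n)
          w /= w.mean()                          # normalized random weights
          m = np.average(centered, weights=w)
          s = np.sqrt(np.average((centered - m) ** 2, weights=w))
          boot[b] = np.sqrt(n) * m / s

      p_value = np.mean(np.abs(boot) >= abs(stat))
      print(f"statistic {stat:.3f}, random-weighting p-value {p_value:.3f}")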

  1. Optimization methods and silicon solar cell numerical models

    Science.gov (United States)

    Girardini, K.

    1986-01-01

    The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junctions depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the value associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the cpu time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current) and the solution from previous calculations is used to initiate the next solution.

  2. Methodical fitting for mathematical models of rubber-like materials

    Science.gov (United States)

    Destrade, Michel; Saccomandi, Giuseppe; Sgura, Ivonne

    2017-02-01

    A great variety of models can describe the nonlinear response of rubber to uniaxial tension. Yet an in-depth understanding of the successive stages of large extension is still lacking. We show that the response can be broken down into three stages, which we delineate by relying on a simple formatting of the data, the so-called Mooney plot transform. First, the small-to-moderate regime, where the polymeric chains unfold easily and the Mooney plot is almost linear. Second, the strain-hardening regime, where blobs of bundled chains unfold to stiffen the response in correspondence to the 'upturn' of the Mooney plot. Third, the limiting-chain regime, with a sharp stiffening occurring as the chains extend towards their limit. We provide strain-energy functions with terms accounting for each stage that (i) give an accurate local and then global fitting of the data; (ii) are consistent with weak nonlinear elasticity theory; and (iii) can be interpreted in the framework of statistical mechanics. We apply our method to Treloar's classical experimental data and also to some more recent data. Our method not only provides models that describe the experimental data with a very low quantitative relative error, but also shows that the theory of nonlinear elasticity is much more robust than it seemed at first sight.
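
    The first, small-to-moderate stage can be made concrete with the Mooney plot transform: for an incompressible uniaxial test, the reduced stress P / (2(λ - λ^-2)) plotted against 1/λ is a straight line C1 + C2/λ for a Mooney-Rivlin solid, so a linear fit on that plot recovers the moduli. The sketch below fits that line to synthetic data; the moduli and noise level are invented, and this is not Treloar's data.

      import numpy as np

      # Mooney plot fit on synthetic uniaxial data. For an incompressible
      # Mooney-Rivlin solid the nominal stress is
      #   P = 2 (lam - lam**-2) (C1 + C2 / lam),
      # so the reduced stress g = P / (2 (lam - lam**-2)) is linear in 1/lam.
      lam = np.linspace(1.2, 3.0, 15)
      C1, C2 = 0.18, 0.05                       # assumed moduli, MPa
      P = 2.0 * (lam - lam ** -2) * (C1 + C2 / lam)
      P += np.random.default_rng(0).normal(0.0, 1e-3, lam.size)   # noise

      g = P / (2.0 * (lam - lam ** -2))         # Mooney transform
      slope, intercept = np.polyfit(1.0 / lam, g, 1)
      print(f"fitted C1 = {intercept:.3f} MPa, C2 = {slope:.3f} MPa")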

  3. A global parallel model based design of experiments method to minimize model output uncertainty.

    Science.gov (United States)

    Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E

    2012-03-01

    Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.

  4. Use of the method of Boolean models to study different methods of drilling wells of large diameter

    Energy Technology Data Exchange (ETDEWEB)

    Selivanov, A.N.; Ryabinin, A.I.

    1981-01-01

    The method of statistical modeling using Boolean models is described, with the processing and analysis of the results of drilling large-diameter wells by different methods as an example. It is shown that, compared with traditional methods of multiple-factor analysis, this method makes it possible to obtain a complete qualitative and quantitative characterization of the studied phenomenon from a comparatively small volume of experimental data.

  5. A copula method for modeling directional dependence of genes

    Directory of Open Access Journals (Sweden)

    Park Changyi

    2008-05-01

    Full Text Available Abstract Background Genes interact with each other as basic building blocks of life, forming a complicated network. The relationship between groups of genes with different functions can be represented as gene networks. With the deposition of huge microarray data sets in public domains, study of gene networking is now possible. In recent years, there has been an increasing interest in the reconstruction of gene networks from gene expression data. Recent work includes linear models, Boolean network models, and Bayesian networks. Among them, Bayesian networks seem to be the most effective in constructing gene networks. A major problem with the Bayesian network approach is the excessive computational time. This problem is due to the interactive feature of the method, which requires a large search space. Since fitting a model by using copulas does not require iterations, elicitation of priors, or complicated calculations of posterior distributions, the need for reference to extensive search spaces can be eliminated, leading to manageable computational effort. The Bayesian network approach produces a discrete expression of conditional probabilities. Discreteness of the characteristics is not required in the copula approach, which uses a uniform representation of the continuous random variables. Our method is able to overcome the limitation of the Bayesian network method for gene-gene interaction, i.e. information loss due to binary transformation. Results We analyzed the gene interactions for two gene data sets (one group is eight histone genes and the other group is 19 genes which include DNA polymerases, DNA helicase, type B cyclin genes, DNA primases, radiation sensitive genes, repair-related genes, replication protein A encoding gene, DNA replication initiation factor, securin gene, nucleosome assembly factor, and a subunit of the cohesin complex) by adopting a measure of directional dependence based on a copula function. We have compared

  6. Three dimensional wavefield modeling using the pseudospectral method; Pseudospectral ho ni yoru sanjigen hadoba modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sato, T.; Matsuoka, T. [Japan Petroleum Exploration Corp., Tokyo (Japan); Saeki, T. [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1997-05-27

    Discussed in this report is wavefield simulation for 3-dimensional seismic surveys. As exploration targets become deeper and more complicated in structure, survey methods are turning 3-dimensional. There are several modeling methods for the numerical calculation of 3-dimensional wavefields, such as the finite difference method and the pseudospectral method, all of which demand an exorbitantly large memory and long calculation times, and are costly. Such methods have of late become feasible, however, thanks to the advent of the parallel computer. Compared with the finite difference method, the pseudospectral method requires a smaller computer memory and shorter computation time, and is more flexible in accepting models. It outputs full waveforms just like the finite difference method, and does not suffer from numerical dispersion of the wavefield. As the computation platform, the parallel computer nCUBE-2S is used. The object domain is divided among the processors, each of which takes care only of its own share, so that the computation as a whole proceeds at very high speed. By use of the pseudospectral method, a 3-dimensional simulation is completed within a tolerable computation time. 7 refs., 3 figs., 1 tab.
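
    A one-dimensional toy version of the pseudospectral idea, with spatial derivatives computed in the Fourier domain and a standard second-order time step; grid size, velocity, and the source pulse are assumed values, and no parallel domain decomposition is shown.

      import numpy as np

      # 1D pseudospectral toy: spatial derivative of the wavefield via FFT,
      # explicit second-order time stepping of u_tt = c^2 u_xx.
      nx, L, c, dt = 256, 2000.0, 1500.0, 5e-4
      x = np.linspace(0.0, L, nx, endpoint=False)
      k = 2.0 * np.pi * np.fft.fftfreq(nx, d=L / nx)

      u = np.exp(-((x - L / 2) / 40.0) ** 2)    # initial pressure pulse
      u_prev = u.copy()                         # zero initial velocity

      def d2dx2(f):
          # second derivative: multiply by (ik)^2 in the wavenumber domain
          return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(f)))

      for _ in range(2000):
          u_next = 2.0 * u - u_prev + (c * dt) ** 2 * d2dx2(u)
          u_prev, u = u, u_next

      print("wavefield energy:", float(np.sum(u ** 2)))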

  7. A new CFD modeling method for flow blockage accident investigations

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Wenyuan, E-mail: fanwy@mail.ustc.edu.cn; Peng, Changhong, E-mail: pengch@ustc.edu.cn; Chen, Yangli, E-mail: chenyl@mail.ustc.edu.cn; Guo, Yun, E-mail: guoyun79@ustc.edu.cn

    2016-07-15

    Highlights: • Porous-jump treatment is applied to CFD simulation on flow blockages. • Porous-jump treatment predicts consistent results with direct CFD treatment. • Relap5 predicts abnormal flow rate profiles in MTR SFA blockage scenario. • Relap5 fails to simulate annular heat flux in blockage case of annular assembly. • Porous-jump treatment provides reasonable and generalized CFD results. - Abstract: Inlet flow blockages in both flat and annular plate-type fuel assemblies are simulated by (Computational Fluid Dynamics) CFD and system analysis methods, with blockage ratio ranging from 60 to 90%. For all the blockage scenarios, mass flow rate of the blocked channel drops dramatically as blockage ratio increases, while mass flow rates of non-blocked channels are almost steady. As a result of over-simplifications, the system code fails to capture details of mass flow rate profiles of non-blocked channels and power redistribution of fuel plates. In order to acquire generalized CFD results, a new blockage modeling method is developed by using the porous-jump condition. For comparisons, direct CFD simulations are conducted toward postulated blockages. For the porous-jump treatment, conservative flow and heat transfer conditions are predicted for the blocked channel, while consistent predictions are obtained for non-blocked channels. Besides, flow fields in the blocked channel, asymmetric power redistributions of fuel plates, and complex heat transfer phenomena in annular fuel assembly are obtained and discussed. The present study indicates that the porous-jump condition is a reasonable blockage modeling method, which predicts generalized CFD results for flow blockages.

  8. Asthma: a comparison of animal models using stereological methods

    Directory of Open Access Journals (Sweden)

    D. M. Hyde

    2006-12-01

    Full Text Available Asthma is a worldwide health problem that affects 300 million people, as estimated by the World Health Organization. A key question in light of this statistic is: "what is the most appropriate laboratory animal model for human asthma?" The present authors used stereological methods to assess airways in adults and during post-natal development, and their response to inhaled allergens, in order to compare responses in rodents and nonhuman primates with those in humans. An epithelial-mesenchymal trophic unit was defined in which all of the compartments interact with each other. Asthma manifests itself by altering not only the epithelial compartment but also other compartments (e.g. interstitial, vascular, immunological and nervous). All of these compartments show significant alteration in an airway generation-specific manner in rhesus monkeys, but alterations are limited to the proximal airways in mice. The rhesus monkey model shares many of the key features of human allergic asthma, including the following: 1) allergen-specific immunoglobulin (IgE) and skin-test positivity; 2) eosinophils and IgE+ cells in airways; 3) a T-helper type 2 cytokine profile in airways; 4) mucus cell hyperplasia; 5) subepithelial fibrosis; 6) basement membrane thickening; and 7) persistent baseline hyperreactivity to histamine or methacholine. In conclusion, the unique responses to inhaled allergens shown in rhesus monkeys make it the most appropriate animal model of human asthma.

  9. Correction of placement error in EBL using model based method

    Science.gov (United States)

    Babin, Sergey; Borisov, Sergey; Militsin, Vladimir; Komagata, Tadashi; Wakatsuki, Tetsuro

    2016-10-01

    The main source of placement error in maskmaking using electron beam is charging. DISPLACE software provides a method to correct placement errors for any layout, based on a physical model. The charge of a photomask and multiple discharge mechanisms are simulated to find the charge distribution over the mask. The beam deflection is calculated for each location on the mask, creating data for the placement correction. The software considers the mask layout, EBL system setup, resist, and writing order, as well as other factors such as fogging and proximity effects correction. The output of the software is the data for placement correction. Unknown physical parameters such as fogging can be found from calibration experiments. A test layout on a single calibration mask was used to calibrate physical parameters used in the correction model. The extracted model parameters were used to verify the correction. As an ultimate test for the correction, a sophisticated layout was used for verification that was very different from the calibration mask. The placement correction results were predicted by DISPLACE, and the mask was fabricated and measured. A good correlation of the measured and predicted values of the correction all over the mask with the complex pattern confirmed the high accuracy of the charging placement error correction.

  10. Determining $H_0$ with a model-independent method

    CERN Document Server

    Wu, Puxun; Yu, Hongwei

    2015-01-01

    In this letter, by using the type Ia supernovae (SNIa) to provide the luminosity distance (LD) directly, which is dependent on the value of the Hubble constant $H_0= 100 h\\; {\\rm km\\; s^{-1}\\; Mpc^{-1}}$, and the angular diameter distance from galaxy clusters or baryon acoustic oscillations (BAOs) to give the derived LD according to the distance duality relation, we propose a model-independent method to determine $h$ from the fact that different observations should give the same LD at a redshift. Combining the Union 2.1 SNIa and galaxy cluster data, we obtain that at the $1\\sigma$ confidence level (CL) $h=0.589\\pm0.030$ for the sample of the elliptical $\\beta$ model for galaxy clusters, and $h=0.635\\pm0.029$ for that of the spherical $\\beta$ model. The former is smaller than the values from other observations, while the latter is consistent with the Planck result at the $1\\sigma$ CL and agrees very well with the value reconstructed directly from the $H(z)$ data. With the Union 2.1 SNIa and BAO measurements, a...

  11. Discrete Method of Images for 3D Radio Propagation Modeling

    Science.gov (United States)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  12. NIST Combinatorial Methods Center: Model for Industrial Outreach

    Science.gov (United States)

    Amis, Eric J.; Karim, Alamgir

    2002-03-01

    The measurements, standards, and test methods developed by NIST, in partnership with other organizations, often help unlock the potential of new discoveries and budding technologies. Combinatorial methods are a textbook example. These emerging tools can speed innovation in many fields - pharmaceuticals, chemistry, and, most recently, materials. In the diverse realm of materials, combinatorial methods hold promise for all classes, including metals, polymers, ceramics, and biomaterials. NIST has established the NCMC as a model for collaboration, in order to share expertise, facilities, resources, and information thereby reducing obstacles to participating in this fast-moving and instrument-intensive area. Although collaborations with multiple partners can be difficult, the goal is to foster cross-fertilization of ideas and research strategies, and to spur progress on many fronts by crossing boundaries of organizations, disciplines, and interests. Members have access to technical workshops, short courses, data libraries, and electronic bulletin boards; they can participate in non-proprietary focused projects; and they can enter into specific cooperative research and development agreements with controlled intellectual property.

  13. Study on Simplex Method Modelling

    Institute of Scientific and Technical Information of China (English)

    宋占奎

    2011-01-01

    A mathematical model of linear programming was established using the simplex method and the optimal solution was obtained. Since the simplex table reflects all the information about a linear programming problem, the optimal solution can easily be obtained with the simplex method. The basic procedure is as follows: first convert the linear programming problem into standard form; then, based on the standard form of the problem, perform elementary row transformations so that the pivot element of the pivot column becomes 1 and all other elements in that column become 0; when the values of the basic variables are all non-negative, the optimal solution to the problem has been obtained.
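
    For a concrete counterpart, the sketch below solves a classic two-variable linear program with SciPy's linprog, whose default HiGHS backend includes a simplex-type solver; the example problem itself is a textbook one and not from the article.

      from scipy.optimize import linprog

      # A textbook two-variable LP:
      #   maximize 3x + 5y
      #   subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0
      c = [-3.0, -5.0]                 # linprog minimizes, so negate
      A_ub = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
      b_ub = [4.0, 12.0, 18.0]

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, None), (0.0, None)])
      print("optimal (x, y):", res.x, "objective:", -res.fun)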

  14. Non-classical method of modelling of vibrating mechatronic systems

    Science.gov (United States)

    Białas, K.; Buchacz, A.

    2016-08-01

    This work presents a non-classical method of modelling mechatronic systems by using polar graphs. The use of such a method enables the analysis and synthesis of mechatronic systems irrespective of the type and number of the elements of the system. The method is connected with the algebra of structural numbers. This paper also introduces the synthesis of mechatronic systems, which is the reverse task of dynamics. The result of synthesis is a system meeting the defined requirements. This approach is understood as the design of mechatronic systems. The synthesis may also be applied to modify already existing systems in order to achieve a desired result. The system considered here consisted of mechanical and electrical elements, with the electrical elements used as a subsystem reducing unwanted vibration of the mechanical system. The majority of vibration occurring in devices and machines is harmful and has a disadvantageous effect on their condition. The harmful impact of vibration is caused by the occurrence of increased stresses and the loss of energy, which results in faster wear of machinery. Vibration, particularly low-frequency vibration, also has a negative influence on the human organism. For this reason many scientists in various research centres conduct research aimed at the reduction or total elimination of vibration.

  15. Application of blocking diagnosis methods to general circulation models. Part II: model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal); Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)

    2010-12-15

    A previously defined automatic method is applied to reanalysis and present-day (1950-1989) forced simulations of the ECHO-G model in order to assess its performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, critical parameters and thresholds to estimate blocking occurrence in the model are not calibrated with an observed reference, but objectively derived from the simulated climatology. The choice of model dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of systematic errors in the model. The model captures reasonably the main blocking features (location, amplitude, annual cycle and persistence) found in observations, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. Blocking underestimation mostly arises from the model inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into in the Eurasian continent, that are more prominent in cold and warm seasons and account for much of Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method. This suggests that in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the

  16. Statistical methods in joint modeling of longitudinal and survival data

    Science.gov (United States)

    Dempsey, Walter

    Survival studies often generate not only a survival time for each patient but also a sequence of health measurements at annual or semi-annual check-ups while the patient remains alive. Such a sequence of random length accompanied by a survival time is called a survival process. Ordinarily robust health is associated with longer survival, so the two parts of a survival process cannot be assumed independent. The first part of the thesis is concerned with a general technique---reverse alignment---for constructing statistical models for survival processes. A revival model is a regression model in the sense that it incorporates covariate and treatment effects into both the distribution of survival times and the joint distribution of health outcomes. The revival model also determines a conditional survival distribution given the observed history, which describes how the subsequent survival distribution is determined by the observed progression of health outcomes. The second part of the thesis explores the concept of a consistent exchangeable survival process---a joint distribution of survival times in which the risk set evolves as a continuous-time Markov process with homogeneous transition rates. A correspondence with the de Finetti approach of constructing an exchangeable survival process by generating iid survival times conditional on a completely independent hazard measure is shown. Several specific processes are detailed, showing how the number of blocks of tied failure times grows asymptotically with the number of individuals in each case. In particular, we show that the set of Markov survival processes with weakly continuous predictive distributions can be characterized by a two-dimensional family called the harmonic process. The outlined methods are then applied to data, showing how they can be easily extended to handle censoring and inhomogeneity among patients.

  17. Theoretical Modelling Methods for Thermal Management of Batteries

    Directory of Open Access Journals (Sweden)

    Bahman Shabani

    2015-09-01

    Full Text Available The main challenge associated with renewable energy generation is the intermittency of the renewable source of power. Because of this, back-up generation sources fuelled by fossil fuels are required. In stationary applications, whether it is a back-up diesel generator or connection to the grid, these systems are yet to be truly emissions-free. One solution to the problem is the utilisation of electrochemical energy storage systems (ESS) to store the excess renewable energy and then reuse this energy when the renewable energy source is insufficient to meet the demand. The performance of an ESS, amongst other things, is affected by the design, the materials used, and the operating temperature of the system. The operating temperature is critical, since operating an ESS at low ambient temperatures affects its capacity and charge acceptance, while operating the ESS at high ambient temperatures affects its lifetime and poses safety risks. Safety risks are magnified in renewable energy storage applications given the scale of the ESS required to meet the energy demand. This necessity has propelled significant effort to model the thermal behaviour of ESS. Understanding and modelling the thermal behaviour of these systems is a crucial consideration before designing an efficient thermal management system that would operate safely and extend the lifetime of the ESS. This is vital in order to eliminate intermittency and add value to renewable sources of power. This paper concentrates on reviewing theoretical approaches used to simulate the operating temperatures of ESS and the subsequent endeavours of modelling thermal management systems for these systems. The intent of this review is to present some of the different methods of modelling the thermal behaviour of ESS, highlighting the advantages and disadvantages of each approach.

  18. Infinitesimal dividing modeling method for dual suppliers inventory model with random lead times

    Institute of Scientific and Technical Information of China (English)

    Ji Pengcheng; Song Shiji; Wu Cheng

    2009-01-01

    As one of the basic inventory cost models, the (Q, r) inventory cost model of dual suppliers with random procurement lead time is mostly formulated using the concepts of "effective lead time" and "lead time demand", which may lead to an imprecise inventory cost. Through real-time statistics of the inventory quantities, this paper considers a precise (Q, r) inventory cost model for dual-supplier procurement by using an infinitesimal dividing method. The traditional modeling method of the inventory cost for dual-supplier procurement involves complex procedures. To reduce this complexity effectively, the presented method investigates the real-time statistical properties of the inventory quantities with the application of the infinitesimal dividing method. It is proved that the optimal holding and shortage costs of dual-supplier procurement are less than those of single-supplier procurement, respectively. With the assumption that both suppliers have the same distribution of lead times, the convexity of the cost function per unit time is proved, so the optimal solution can be easily obtained by applying classical convex optimization methods. Numerical examples are given to verify the main conclusions.
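
    To make the cost structure tangible, here is a simple simulation sketch of a (Q, r) policy with two suppliers sharing one lead-time distribution, estimating holding and shortage costs per day. It does not implement the paper's infinitesimal dividing derivation; policy parameters, demand, lead-time law, and costs are all invented.

      import numpy as np

      # Day-by-day simulation of a (Q, r) policy with two suppliers that
      # share the same lead-time distribution; all numbers are illustrative.
      rng = np.random.default_rng(11)
      Q, r = 40, 15                 # order quantity and reorder point
      h, p = 1.0, 10.0              # holding / shortage cost per unit per day
      T = 10_000                    # simulated days

      inv = Q                       # net inventory
      outstanding = []              # arrival days of orders already placed
      supplier = 0                  # which supplier receives the next order
      hold_cost = short_cost = 0.0

      for day in range(T):
          inv += Q * sum(1 for d in outstanding if d <= day)   # receive orders
          outstanding = [d for d in outstanding if d > day]

          inv -= rng.poisson(2.0)                              # daily demand

          position = inv + Q * len(outstanding)                # inventory position
          if position <= r:                                    # reorder
              lead = rng.gamma(shape=4.0, scale=1.5)           # same law either way
              outstanding.append(day + lead)
              supplier = 1 - supplier                          # alternate suppliers

          hold_cost += h * max(inv, 0)
          short_cost += p * max(-inv, 0)

      print(f"avg holding cost/day {hold_cost / T:.2f}, "
            f"avg shortage cost/day {short_cost / T:.2f}")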

  19. Earthquake Source Modeling using Time-Reversal or Adjoint Methods

    Science.gov (United States)

    Hjorleifsdottir, V.; Liu, Q.; Tromp, J.

    2007-12-01

    In recent years there have been great advances in earthquake source modeling. Despite the effort, many questions about earthquake source physics remain unanswered. In order to address some of these questions, it is useful to reconstruct what happens on the fault during an event. In this study we focus on determining the slip distribution on a fault plane, or a moment-rate density, as a function of time and space. This is a difficult process involving many trade-offs between model parameters. The difficulty lies in the fact that earthquakes are not a controlled experiment: we don't know when and where they will occur, and therefore we have only limited control over what data will be acquired for each event. As a result, much of the advance that can be made is by extracting more information out of the data that is routinely collected. Here we use a technique that uses 3D waveforms to invert for the slip on a fault plane during rupture. By including 3D waveforms we can use parts of the waveforms that are often discarded, as they are altered by structural effects in ways that cannot be accurately predicted using 1D Earth models. However, generating 3D synthetics is computationally expensive. Therefore we turn to an 'adjoint' method (Tarantola, Geophysics 1984; Tromp et al., GJI 2005) that reduces the computational cost relative to methods that use Green's function libraries. In its simplest form, an adjoint method for inverting for source parameters can be viewed as a time-reversal experiment performed with a wave-propagation code (McMechan, GJRAS 1982). The recorded seismograms are inserted as simultaneous sources at the locations of the receivers, and the computed wavefield (which we call the adjoint wavefield) is recorded on an array around the earthquake location. Here we show, mathematically, that for source inversions for a (distributed) moment tensor source, the time integral of the adjoint strain is the quantity to monitor. We present the results of time

  20. Forty Lines of Evidence for Condensed Matter — The Sun on Trial: Liquid Metallic Hydrogen as a Solar Building Block

    Directory of Open Access Journals (Sweden)

    Robitaille P.-M.

    2013-10-01

    Full Text Available Our Sun has confronted humanity with overwhelming evidence that it is comprised of condensed matter. Dismissing this reality, the standard solar models continue to be anchored on the gaseous plasma. In large measure, the endurance of these theories can be attributed to 1) the mathematical elegance of the equations for the gaseous state, 2) the apparent success of the mass-luminosity relationship, and 3) the long-lasting influence of leading proponents of these models. Unfortunately, no direct physical finding supports the notion that the solar body is gaseous. Without exception, all observations are most easily explained by recognizing that the Sun is primarily comprised of condensed matter. However, when a physical characteristic points to condensed matter, a posteriori arguments are invoked to account for the behavior using the gaseous state. In isolation, many of these treatments appear plausible. As a result, the gaseous models continue to be accepted. There seems to be an overarching belief in solar science that the problems with the gaseous models are few and inconsequential. In reality, they are numerous and, while often subtle, they are sometimes daunting. The gaseous equations of state have introduced far more dilemmas than they have solved. Many of the conclusions derived from these approaches are likely to have led solar physics down unproductive avenues, as deductions have been accepted which bear little or no relationship to the actual nature of the Sun. It could be argued that, for more than 100 years, the gaseous models have prevented mankind from making real progress relative to understanding the Sun and the universe. Hence, the Sun is now placed on trial. Forty lines of evidence will be presented that the solar body is comprised of, and surrounded by, condensed matter. These 'proofs' can be divided into seven broad categories: 1) Planckian, 2) spectroscopic, 3) structural, 4) dynamic, 5) helioseismic, 6) elemental, and 7) earthly.