WorldWideScience

Sample records for model methods forty

  1. Forty years of 90Sr in situ migration: importance of soil characterization in modeling transport phenomena

    International Nuclear Information System (INIS)

    Fernandez, J.M.; Piault, E.; Macouillard, D.; Juncos, C.

    2006-01-01

    In 1960, experiments were carried out on the transfer of 90Sr between soil, grapes and wine. The experiments were conducted in situ on a plot of land bounded by two control strips. The 90Sr migration over the last 40 years was studied by performing radiological and physico-chemical characterizations of the soil on eight 70 cm deep cores. Modeling the vertical migration of 90Sr required the definition of a triple-layer conceptual model integrating rainwater infiltration at constant flux as the only external factor of influence. The importance of detailed soil characterization for modeling is then discussed: a satisfactory simulation of the 90Sr vertical transport was obtained, with a calculated migration rate of about 1.0 cm year⁻¹, in full agreement with the in situ measured values. The discussion covers key parameters such as granulometry, organic matter content (used in determining the Van Genuchten parameters), Kd, and the effective rainwater infiltration. Beyond the experimental data, simplifying modeling assumptions such as the water-soil redistribution calculation and factual discontinuities in the conceptual model are examined.
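
    As a rough illustration of how a migration rate of this order falls out of an advection model with Kd retardation, here is a minimal Python sketch; every parameter value is an invented placeholder, not the study's soil data.

```python
# Minimal sketch of the kind of 1-D transport estimate the record
# describes: steady advection of 90Sr slowed by linear (Kd) sorption.
# All parameter values below are illustrative, not the study's data.

RHO_B = 1.4    # bulk density, g/cm^3 (assumed)
THETA = 0.30   # volumetric water content, dimensionless (assumed)
KD = 20.0      # sorption distribution coefficient, mL/g (assumed)
Q = 30.0       # effective rainwater infiltration, cm/year (assumed)

# Linear-sorption retardation factor: R = 1 + rho_b * Kd / theta
retardation = 1.0 + RHO_B * KD / THETA

# Solute front velocity, slowed by sorption: v = q / (theta * R)
velocity_cm_per_year = Q / (THETA * retardation)

# Depth reached by the 90Sr front after 40 years of migration
print(f"retardation factor R = {retardation:.1f}")
print(f"migration rate = {velocity_cm_per_year:.2f} cm/year")
print(f"front depth after 40 years = {40 * velocity_cm_per_year:.1f} cm")
```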

  2. 26 CFR 7.48-2 - Election of forty-percent method of determining investment credit for movie and television films...

    Science.gov (United States)

    2010-04-01

    ... investment credit for movie and television films placed in service in a taxable year beginning before January... Election of forty-percent method of determining investment credit for movie and television films placed in... the Tax Reform Act of 1976 (90 Stat. 1595), taxpayers who placed movie or television films (here...

  3. Energy Return on Investment (EROI) for Forty Global Oilfields Using a Detailed Engineering-Based Model of Oil Production

    Science.gov (United States)

    Brandt, Adam R.; Sun, Yuchi; Bharadwaj, Sharad; Livingston, David; Tan, Eugene; Gordon, Deborah

    2015-01-01

    Studies of the energy return on investment (EROI) for oil production generally rely on aggregated statistics for large regions or countries. In order to better understand the drivers of the energy productivity of oil production, we use a novel approach that applies a detailed field-level engineering model of oil and gas production to estimate energy requirements of drilling, producing, processing, and transporting crude oil. We examine 40 global oilfields, utilizing detailed data for each field from hundreds of technical and scientific data sources. Resulting net energy return (NER) ratios for studied oil fields range from ≈2 to ≈100 MJ crude oil produced per MJ of total fuels consumed. External energy return (EER) ratios, which compare energy produced to energy consumed from external sources, exceed 1000:1 for fields that are largely self-sufficient. The lowest energy returns are found to come from thermally-enhanced oil recovery technologies. Results are generally insensitive to reasonable ranges of assumptions explored in sensitivity analysis. Fields with very large associated gas production are sensitive to assumptions about surface fluids processing due to the shifts in energy consumed under different gas treatment configurations. This model does not currently include energy invested in building oilfield capital equipment (e.g., drilling rigs), nor does it include other indirect energy uses such as labor or services. PMID:26695068
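
    The two ratios are simple quotients of energy flows; the sketch below shows the arithmetic with made-up numbers for a hypothetical field (the paper's field-level inputs are not reproduced here).

```python
# Net energy return (NER) vs. external energy return (EER) for a
# hypothetical oilfield. All numbers are invented for illustration.

crude_out_mj = 1000.0      # energy content of crude delivered (MJ)
fuel_self_mj = 95.0        # fuels consumed that were produced on-site (MJ)
fuel_external_mj = 5.0     # fuels purchased from external sources (MJ)

# NER counts every MJ of fuel consumed, regardless of origin.
ner = crude_out_mj / (fuel_self_mj + fuel_external_mj)

# EER counts only externally sourced energy, so a largely
# self-sufficient field can score in the hundreds or thousands.
eer = crude_out_mj / fuel_external_mj

print(f"NER = {ner:.0f}:1")   # -> 10:1
print(f"EER = {eer:.0f}:1")   # -> 200:1
```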

  6. Scattered light characterization of FORTIS

    Science.gov (United States)

    McCandliss, Stephan R.; Carter, Anna; Redwine, Keith; Teste, Stephane; Pelton, Russell; Hagopian, John; Kutyrev, Alexander; Li, Mary J.; Moseley, S. Harvey

    2017-08-01

    We describe our efforts to build a Wide-Field Lyman alpha Geocoronal simulator (WFLaGs) for characterizing the end-to-end sensitivity of FORTIS (Far-UV Off Rowland-circle Telescope for Imaging and Spectroscopy) to scattered Lyman α emission from outside of the nominal (1/2 degree)² field-of-view. WFLaGs is a 50 mm diameter F/1 aluminum parabolic collimator fed by a hollow cathode discharge lamp with an 80 mm clear MgF2 window housed in a vacuum skin. It creates emission over a 10 degree FOV. WFLaGs will allow us to validate and refine a recently developed scattered light model and verify our scattered-light mitigation strategies, which will incorporate low-scatter baffle materials, and possibly 3-D printed light traps, covering exposed scatter centers. We present measurements of scattering intensity of Lyman alpha as a function of angle with respect to the specular reflectance direction for several candidate baffle materials. Initial testing of WFLaGs will be described.

  7. Forty Thousand Years of Advertisement

    Directory of Open Access Journals (Sweden)

    Konstantin Lidin

    2006-05-01

    Full Text Available The roots of advertisement are connected with reclamations, claims and arguments. No surprise that many people treat it with distrust, suspicion and irritation. Nobody loves advertisement (except its authors and those who order it), nobody watches it, everybody despises it and is annoyed by it. But newspapers, magazines, television and the city economy in general cannot do without it. One keeps on arguing whether to prohibit advertisement, to restrict its expansion, or to bring in stricter regulations on advertisement… If something attracts attention, intrigues, promises to make dreams come true and arouses the desire to join in, it should be considered advertisement. This definition allows us to say without doubt: yes, advertisement did exist in the most ancient cultures. Advertisement is as old as human civilization. There have always been objects to be advertised, and different methods appeared to reach those goals. Advertisement techniques and topics appear, get forgotten and appear again in other places and other times. Sometimes the author of an advertisement image has no idea about his forerunners and believes he is the discoverer. A skillful designer with a high level of professionalism deliberately uses images from past centuries; such a professional is easily guided by historical prototypes. But there is another type of advertisement, whose prototypes cannot be found in museums. It commands no respect, because it is built on a scornful attitude towards the spectator. However, advertisement is basically made by professional designers, and in this case ignorance is inadmissible. Even if we appeal many times to Irkutsk designers to raise the cultural level of their advertisements, orders will always be made by those who pay. Unless His Majesty the Ruble stands for Culture, those appeals are of no use.

  8. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, treating a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit state based on an idealized mechanical model, adapted to the original limit state by the model correction factor. Reliable approximations are obtained by iterative use of gradient information on the original limit state function, analogously to previous response surface approaches. The strength of the model correction factor method, however, is that in its simpler form, not using gradient information on the original limit state function or using this information only once, a drastic reduction in the number of limit state evaluations is obtained, together with good approximations of the reliability. Methods…
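
    A minimal sketch of the idea in its simplest form, using an invented toy limit state (not the paper's structural model): a cheap idealized response is corrected by a scalar factor fitted at the current design point, so the expensive model is called only once per iteration.

```python
import numpy as np
from scipy.optimize import minimize

# Invented toy problem: failure when the response exceeds a threshold.
# r_elaborate stands in for the expensive "black box"; r_ideal is the
# cheap idealized mechanical model.
THRESHOLD = 3.0

def r_elaborate(u):            # pretend each call is expensive
    return u[0] + 0.3 * u[1] + 0.05 * u[0] ** 2

def r_ideal(u):                # cheap idealized model
    return u[0] + 0.3 * u[1]

def design_point(limit_state, u0):
    # FORM design point: closest point to the origin (in standard
    # normal space) lying on the corrected limit-state surface.
    cons = {"type": "eq", "fun": limit_state}
    return minimize(lambda u: u @ u, u0, constraints=cons).x

nu, u_star = 1.0, np.array([1.0, 1.0])
for _ in range(10):
    g_corrected = lambda u, nu=nu: THRESHOLD - nu * r_ideal(u)
    u_star = design_point(g_corrected, u_star)
    nu_new = r_elaborate(u_star) / r_ideal(u_star)  # correction factor
    if abs(nu_new - nu) < 1e-6:
        break
    nu = nu_new

beta = np.linalg.norm(u_star)  # reliability index of corrected model
print(f"correction factor = {nu:.4f}, beta = {beta:.4f}")
```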

  9. Getting started with FortiGate

    CERN Document Server

    Fabbri, Rosato

    2013-01-01

    This book is a step-by-step tutorial that will teach you everything you need to know about the deployment and management of FortiGate, including high availability, complex routing, various kinds of VPNs, user authentication, security rules, controls on applications, and mail and Internet access. This book is intended for network administrators, security managers, and IT pros. It is a great starting point if you have to administer or configure a FortiGate unit, especially if you have no previous experience. For people who have never managed a FortiGate unit, the book helpfully walks t…

  10. The first forty years, 1947-1987

    Energy Technology Data Exchange (ETDEWEB)

    Rowe, M.S. (ed.); Cohen, A.; Petersen, B.

    1987-01-01

    This report commemorates the fortieth anniversary of Brookhaven National Laboratory by presenting a historical overview of research at the facility. The chapters of the report are entitled: The First Forty Years; Brookhaven: A National Resource; Fulfilling a Mission - Brookhaven's Mighty Machines; Marketing the Milestones in Basic Research; Meeting National Needs; Making a Difference in Everyday Life; and Looking Forward.

  11. The first forty years, 1947-1987

    International Nuclear Information System (INIS)

    Rowe, M.S.; Cohen, A.; Petersen, B.

    1987-01-01

    This report commemorates the fortieth anniversary of Brookhaven National Laboratory by presenting a historical overview of research at the facility. The chapters of the report are entitled: The First Forty Years; Brookhaven: A National Resource; Fulfilling a Mission - Brookhaven's Mighty Machines; Marketing the Milestones in Basic Research; Meeting National Needs; Making a Difference in Everyday Life; and Looking Forward.

  12. Forty Ninth Refresher Course in Experimental Physics

    Indian Academy of Sciences (India)

    IAS Admin

    This forty-ninth Course will be held from 6 to 21 June 2013, at its premises in Bangalore. Participants in this course will gain hands-on experience with about 25 experiments, some at the BSc level and some at the MSc level, with a low-cost kit developed for the Indian Academy of Sciences, Bangalore, and manufactured by ...

  13. Explorative methods in linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2004-01-01

    The author has developed the H-method of mathematical modeling, which builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.

  14. Forty cases of maxillary sinus carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Go; Yamada, Shoichiro; Sawatsubashi, Motohiro; Miyazaki, Junji; Tsuda, Kuniyoshi; Inokuchi, Akira [Saga Medical School (Japan)

    2002-01-01

    Forty patients with squamous cell carcinoma of the maxillary sinus were investigated between 1989 and 1999. They consisted of 28 males and 12 females, with ages ranging from 18 to 84 years (mean, 62 years). According to the 1987 UICC TNM classification system, 3 patients were classified as stage II, 3 as stage III, and 34 as stage IV. The overall three-year and five-year survival rates were 52% and 44%, respectively. Local recurrence was observed in 11 stage IV cases, and 10 of them were not controlled. To further improve the prognosis of such patients, new techniques such as skull base surgery, superselective intraarterial chemotherapy, and concurrent chemoradiation should be included in the treatment regimen. (author)

  15. Synthesis of the elements in stars: forty years of progress

    Energy Technology Data Exchange (ETDEWEB)

    Wallerstein, G. [Department of Astronomy, University of Washington, Seattle, Washington 98195 (United States); Iben, I. Jr. [University of Illinois, 1002 West Green Street, Urbana, Illinois 61801 (United States); Parker, P. [Yale University, New Haven, Connecticut 06520-8124 (United States); Boesgaard, A.M. [Institute for Astronomy, 2680 Woodlawn Drive, Honolulu, Hawaii 96822 (United States); Hale, G.M. [Los Alamos National Laboratory, Los Alamos, New Mexico 87544 (United States); Champagne, A.E. [University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27594 (United States); Triangle Universities Nuclear Laboratory, Duke University, Durham, North Carolina 27706 (United States); Barnes, C.A. [California Institute of Technology, Pasadena, California 91125 (United States); Kaeppeler, F. [Forschungszentrum, Karlsruhe, D-76021 (Germany); Smith, V.V. [University of Texas at El Paso, El Paso, Texas 79968-0515 (United States); Hoffman, R.D. [Steward Observatory, University of Arizona, Tucson, Arizona 85721 (United States); Timmes, F.X. [University of California at Santa Cruz, California 95064 (United States); Sneden, C. [University of Texas, Austin, Texas 78712 (United States); Boyd, R.N. [Ohio State University, Columbus, Ohio 43210 (United States); Meyer, B.S. [Clemson University, Clemson, South Carolina 29630 (United States); Lambert, D.L. [University of Texas, Austin, Texas 78712 (United States)

    1997-10-01

    Forty years ago Burbidge, Burbidge, Fowler, and Hoyle combined what we would now call fragmentary evidence from nuclear physics, stellar evolution and the abundances of elements and isotopes in the solar system, as well as a few stars, into a synthesis of remarkable ingenuity. Their review provided a foundation for forty years of research in all of the aspects of low energy nuclear experiments and theory, stellar modeling over a wide range of mass and composition, and abundance studies of many hundreds of stars, many of which have shown distinct evidence of the processes suggested by B²FH. In this review we summarize progress in each of these fields with emphasis on the most recent developments. © 1997 The American Physical Society

  16. Developing the Business Modelling Method

    NARCIS (Netherlands)

    Meertens, Lucas Onno; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, B; Shishkov, Boris

    2011-01-01

    Currently, business modelling is an art rather than a science, as no scientific method for business modelling exists. This, and the lack of use of business models altogether, causes many projects to end after the pilot stage, unable to fulfil their apparent promise. We propose a structured method to…

  17. Methods of statistical model estimation

    CERN Document Server

    Hilbe, Joseph

    2013-01-01

    Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. Th…

  18. History, Archaeology and the Bible Forty Years after "Historicity"

    DEFF Research Database (Denmark)

    In History, Archaeology and the Bible Forty Years after “Historicity”, Hjelm and Thompson argue that a ‘crisis’ broke in the 1970s, when several new studies of biblical history and archaeology were published, questioning the historical-critical method of biblical scholarship. The crisis formed the background for this volume, which combines articles from some of the field’s best scholars with comprehensive discussion of historical, archaeological, anthropological, cultural and literary approaches to the Hebrew Bible and Palestine’s history. The essays question: “How does biblical history relate to the archaeological history of Israel…

  19. Graph modeling systems and methods

    Science.gov (United States)

    Neergaard, Mike

    2015-10-13

    An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
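
    The patent's implementation is not public; as a hedged illustration of the idea, the sketch below builds a small graph with networkx and flags critical points of failure as articulation points. All node names are invented.

```python
# Minimal sketch of the graph-model idea using networkx. A vertex is a
# critical point of failure if removing it disconnects the network,
# i.e. it is an articulation point of the graph.
import networkx as nx

g = nx.Graph()
# Terminating vertices = nodes of the physical network; "midspan-1" is
# a non-terminating vertex standing for a non-nodal vulnerability
# along the substation-A <-> substation-B transmission path.
g.add_edges_from([
    ("plant", "substation-A"),
    ("substation-A", "midspan-1"),
    ("midspan-1", "substation-B"),
    ("substation-B", "feeder-1"),
    ("substation-B", "feeder-2"),
    ("plant", "feeder-1"),       # redundant path
])

critical = set(nx.articulation_points(g))
for v in sorted(g.nodes):
    print(f"{v:14s} critical: {v in critical}")
```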

  20. Variational methods in molecular modeling

    CERN Document Server

    2017-01-01

    This book presents tutorial overviews for many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems ranging from inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes, microemulsions, magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. With emphasis placed on physical unders...

  1. Forty years of a psychiatric day hospital

    Directory of Open Access Journals (Sweden)

    Rosário Curral

    2014-03-01

    Full Text Available INTRODUCTION: Day hospitals in psychiatry are a major alternative to inpatient care today, acting as key components of community and social psychiatry. OBJECTIVE: To study trends in the use of psychiatric day hospitals over the last decades of the 20th century and the first decade of the 21st century, focusing on patient age, sex, and diagnostic group, using data from Centro Hospitalar São João, Porto, Portugal. METHODS: Data corresponding to years 1970 to 2009 were collected from patient files. Patients were classified into seven diagnostic groups considering their primary diagnoses only. RESULTS: Mean age upon admission rose from 32.7±12.1 years in the second half of the 1970s to 43.5±12.2 years in 2005-2009 (p for trend < 0.001). Most patients were female (63.2%); however, their proportion decreased from nearly 70% in the 1970s to 60% in the first decade of the 21st century. In males, until the late 1980s, neurotic disorders (E) were the most common diagnosis, accounting for more than one third of admissions. In the subsequent years, this proportion decreased, and the number of admissions for schizophrenia (C) exceeded 50% in 2004-2009. In females, until the late 1980s, affective disorders (D) and neurotic disorders (E), similarly distributed, accounted for most admissions. From the 1990s on, the proportion of neurotic disorders (E) substantially decreased, and affective disorders (D) came to represent more than 50% of all admissions. CONCLUSIONS: Mean age upon admission rose with time, as did the percentage of female admissions, even though the latter tendency weakened in the last 10 years assessed. There was also an increase in the proportion of patients with schizophrenia.

  2. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  3. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1991-01-01

    Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reproduce a remarkably wide variety of tokamak data reasonably well.

  4. The biology of memory: a forty-year perspective.

    Science.gov (United States)

    Kandel, Eric R

    2009-10-14

    In the forty years since the Society for Neuroscience was founded, our understanding of the biology of memory has progressed dramatically. From a historical perspective, one can discern four distinct periods of growth in neurobiological research during that time. Here I use that chronology to chart a personalized and selective course through forty years of extraordinary advances in our understanding of the biology of memory storage.

  5. Canada's uranium future, based on forty years of development

    International Nuclear Information System (INIS)

    Aspin, N.; Dakers, R.G.

    1982-09-01

    Canada's role as a major supplier of uranium has matured through the cyclical markets of the past forty years. Present resource estimates would support a potential production capability by the late 1980s 50 per cent greater than the peak production of 12 200 tonnes of uranium in 1959. New and improved exploration techniques are being developed as uranium deposits become more difficult to discover. Radiometric prospecting of glacial boulder fields and the use of improved airborne and ground geophysical methods have contributed significantly to recent discoveries in Saskatchewan. Advances have also been made in the use of airborne radiometric reconnaissance, borehole logging, emanometry (radon and helium gas) and multi-element regional geochemistry techniques. Higher productivity in uranium mining has been achieved through automation and mechanization, while improved ventilation systems in conjunction with underground environmental monitoring have contributed to worker health and safety. Improved efficiency is being achieved in all phases of ore processing. Factors contributing to the increased time required to develop uranium mines and mills, from a minimum of three years in the 1950s to the ten years typical of today, are discussed. The ability of Canada's uranium refinery to manufacture ceramic-grade UO2 powder to consistent standards has been a major factor in the successful development of high-density natural uranium fuel for the CANDU (CANada Deuterium Uranium) reactor. Over 400 000 fuel assemblies have been manufactured by three companies. The refinery is undertaking a major expansion of its capacity.

  6. Analytical methods used at model facility

    International Nuclear Information System (INIS)

    Wing, N.S.

    1984-01-01

    A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy

  7. French in Lesotho schools forty years after independence ...

    African Journals Online (AJOL)

    Most independent African states are now, like Lesotho, about forty years old. What has become of foreign languages such as French that once thrived under colonial rule albeit mostly in schools targeting non-indigenous learners? In Lesotho French seems to be the preserve of private or “international” schools. Can African ...

  8. Institute of fundamental research: forty years of research

    International Nuclear Information System (INIS)

    1986-01-01

    This document is aimed at illustrating forty years of fundamental research at the CEA. It has not been conceived to give an exhaustive view of current research at the IRF, but to illustrate this research for non-specialists, and even non-scientists. [fr]

  9. Energy models: methods and trends

    International Nuclear Information System (INIS)

    Reuter, A.; Kuehner, R.; Wohlgemuth, N.

    1996-01-01

    Energy, environmental and economic systems do not allow for experimentation, since this would be dangerous, too expensive or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling will be pointed out. We distinguish between three reasons for new developments: progress in computer technology, methodological progress, and novel tasks of energy system analysis and planning.

  10. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO project "Intelligent wind power prediction systems" (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines…

  11. Model correction factor method for system analysis

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Johannesen, Johannes M.

    2000-01-01

    The Model Correction Factor Method is an intelligent response surface method based on simplified modeling. MCFM is aimed at reliability analysis in case of a limit state defined by an elaborate model. Herein it is demonstrated that the method is applicable for elaborate limit state surfaces on which several locally most central points exist without there being a simple geometric definition of the corresponding failure modes, such as is the case for collapse mechanisms in rigid-plastic hinge models for frame structures. Taking as simplified idealized model a model of similarity with the elaborate model… surface than existing in the idealized model.

  12. Modelling Method of Recursive Entity

    Science.gov (United States)

    Amal, Rifai; Messoussi, Rochdi

    2012-01-01

    With the development of Information and Communication Technologies, great masses of information are published on the Web. In order to reuse, share, and organise them in distance training and e-learning frameworks, several research projects have been carried out and various standards and modelling languages developed. In our previous…

  13. Developing a TQM quality management method model

    OpenAIRE

    Zhang, Zhihai

    1997-01-01

    From an extensive review of total quality management literature, the external and internal environment affecting an organization's quality performance and the eleven primary elements of TQM are identified. Based on the primary TQM elements, a TQM quality management method model is developed. This model describes the primary quality management methods which may be used to assess an organization's present strengths and weaknesses with regard to its use of quality management methods. This model ...

  14. Spectral methods applied to Ising models

    International Nuclear Information System (INIS)

    DeFacio, B.; Hammer, C.L.; Shrauner, J.E.

    1980-01-01

    Several applications of Ising models are reviewed. A 2-d Ising model is studied, and the problem of describing an interface boundary in a 2-d Ising model is addressed. Spectral methods are used to formulate a soluble model for the surface tension of a many-Fermion system
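
    The spectral methods themselves are not reproduced here; purely as a reference point for the model under discussion, below is a minimal Metropolis sampler for the 2-d Ising model (a different, standard technique; lattice size and temperature are arbitrary choices).

```python
# Minimal Metropolis sampler for the 2-d Ising model on a periodic
# square lattice. Not the spectral method of the abstract; just the
# model it studies, made concrete.
import numpy as np

rng = np.random.default_rng(0)
L, beta, sweeps = 32, 0.44, 400          # beta chosen near criticality
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(sweeps * L * L):
    i, j = rng.integers(L, size=2)
    # Sum of the four nearest neighbours with periodic boundaries
    nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
          spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    dE = 2.0 * spins[i, j] * nb           # energy cost of flipping
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        spins[i, j] *= -1                 # accept the flip

print("magnetization per site:", spins.mean())
```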

  15. A business case method for business models

    OpenAIRE

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of the two. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model alternatives and choose the best one. In this article, we develop a business case method to objectively compare business models. It is an eight-step method, starting with business drivers and ending wit…

  16. Residual-based model diagnosis methods for mixture cure models.

    Science.gov (United States)

    Peng, Yingwei; Taylor, Jeremy M G

    2017-06-01

    Model diagnosis, an important issue in statistical modeling, has not yet been addressed adequately for cure models. We focus on mixture cure models in this work and propose some residual-based methods to examine the fit of the mixture cure model, particularly the fit of the latency part of the mixture cure model. The new methods extend the classical residual-based methods to the mixture cure model. Numerical work shows that the proposed methods are capable of detecting lack-of-fit of a mixture cure model, particularly in the latency part, such as outliers, improper covariate functional form, or nonproportionality in hazards if the proportional hazards assumption is employed in the latency part. The methods are illustrated with two real data sets that were previously analyzed with mixture cure models. © 2016, The International Biometric Society.
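
    For orientation, the standard mixture cure formulation that such residual diagnostics target can be written as follows (a textbook form, not notation taken from the paper; here π is the probability of being uncured):

```latex
% Population survival in a mixture cure model: a fraction 1 - \pi(z)
% is cured and never experiences the event; S_u is the latency part.
\[
  S_{\mathrm{pop}}(t \mid x, z) \;=\; 1 - \pi(z) \;+\; \pi(z)\, S_u(t \mid x)
\]
% Incidence: logistic regression for the probability of being uncured.
\[
  \pi(z) = \frac{\exp(\gamma^{\top} z)}{1 + \exp(\gamma^{\top} z)}
\]
% Latency: e.g. a proportional hazards model for the uncured --
% the part the proposed residuals are designed to check.
\[
  S_u(t \mid x) = S_0(t)^{\exp(\beta^{\top} x)}
\]
```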

  17. Exploring Several Methods of Groundwater Model Selection

    Science.gov (United States)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with Model Muse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using the following four approaches: (1) rank the models using the root mean square error (RMSE) obtained after UCODE-based model calibration, (2) calculate model probability using the GLUE method, (3) evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) evaluate model weights using the Fuzzy Multi-Criteria-Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting appropriate groundwater flow models. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).
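
    For readers unfamiliar with the criteria mentioned, the sketch below shows the information-criteria step under Gaussian errors (AIC, AICc, and BIC only; KIC additionally requires the Fisher information matrix). The RSS values and parameter counts are invented, not the study's results.

```python
# Information criteria from calibration residuals, Gaussian-likelihood
# forms (up to an additive constant). All inputs below are invented.
import numpy as np

n = 50  # number of head observations (assumed)

def criteria(rss, k, n):
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, aicc, bic

models = {  # model name: (residual sum of squares, no. of parameters)
    "model-1": (4.1, 6), "model-3": (2.2, 10), "model-6": (2.0, 15),
}
for name, (rss, k) in models.items():
    aic, aicc, bic = criteria(rss, k, n)
    print(f"{name}: AIC={aic:7.1f}  AICc={aicc:7.1f}  BIC={bic:7.1f}")
# A lower value is better; extra parameters must buy enough fit to
# offset the complexity penalty, which is the point of the abstract.
```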

  18. Mechatronic Systems Design Methods, Models, Concepts

    CERN Document Server

    Janschek, Klaus

    2012-01-01

    In this textbook, fundamental methods for model-based design of mechatronic systems are presented in a systematic, comprehensive form. The method framework presented here comprises domain-neutral methods for modeling and performance analysis: multi-domain modeling (energy/port/signal-based), simulation (ODE/DAE/hybrid systems), robust control methods, stochastic dynamic analysis, and quantitative evaluation of designs using system budgets. The model framework is composed of analytical dynamic models for important physical and technical domains of realization of mechatronic functions, such as multibody dynamics, digital information processing and electromechanical transducers. Building on the modeling concept of a technology-independent generic mechatronic transducer, concrete formulations for electrostatic, piezoelectric, electromagnetic, and electrodynamic transducers are presented. More than 50 fully worked out design examples clearly illustrate these methods and concepts and enable independent study of th...

  19. Soil Fertility Management a Century Ago in Farmers of Forty Centuries

    Directory of Open Access Journals (Sweden)

    Joseph R. Heckman

    2013-06-01

    Full Text Available Published just over a century ago, Farmers of Forty Centuries, or Permanent Agriculture in China, Korea, and Japan served to document the viability and productivity of traditional agricultural systems that relied on composting and the complete recycling of all types of natural waste materials as a means of sustaining soil fertility. This cardinal rule of waste management and organic soil husbandry became known in organic farming as "the law of return". With regard to nutrient management, organic farming uses restorative cultural practices, including the law-of-return principle, which encourages the closure of nutrient cycles. In these respects, organic farming methods are arguably more firmly grounded in ecology and sustainability than the promotions of the chemical fertilizer industry, which has largely displaced traditional soil fertility practices. Farmers of Forty Centuries is a classic with valuable lessons and experience to offer for teaching modern concepts in sustainable agriculture.

  20. Twitter's tweet method modelling and simulation

    Science.gov (United States)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    This paper seeks to propose the concept of Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them. It uses the design science research methodology for the proof of concept of the models and modelling processes. The following models have been developed for a Twitter marketing agent/company and tested in real circumstances and with real numbers. These models were finalized through a number of revisions and iterations of the design, develop, simulate, test and evaluate cycle. The paper also addresses the methods best suited to organized promotion through targeting on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are authenticated by the management of the company organization. It implements system dynamics concepts of Twitter marketing method modelling and produces models of various Twitter marketing situations. The Tweet method that Twitter provides can be adjusted, depending on the situation, in order to maximize the profit of the company/agent.
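
    The paper's iThink model structure is not reproduced in the abstract, so the sketch below is only a generic stock-and-flow loop in the same system-dynamics spirit; every rate constant is invented.

```python
# Toy stock-and-flow simulation: followers are the stock; tweet
# impressions drive an inflow, churn drives an outflow. All rates
# below are made up for illustration.
followers = 1_000.0          # stock: audience of the marketing agent
reach_per_tweet = 0.05       # fraction of followers who see a tweet
conversion = 0.02            # new followers gained per impression
churn = 0.001                # fraction of followers lost per day
tweets_per_day = 4

for day in range(1, 31):
    impressions = followers * reach_per_tweet * tweets_per_day
    followers += impressions * conversion - followers * churn
    if day % 10 == 0:
        print(f"day {day:2d}: followers = {followers:,.0f}")
```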

  1. Forty Four Years of Debate: The Impact of Race, Community and Conflict

    OpenAIRE

    Robert Moore

    2011-01-01

    Race, Community and Conflict by John Rex and Robert Moore was published in 1967 and had a considerable public impact through press and TV. Forty-four years later it is still widely cited in research on British urban society and 'race relations'. It is used in teaching research methods, theory, urban sociology and 'race relations' to undergraduates. This article describes and explains the immediate impact of the book and its more lasting contribution to sociology. Race, Community and Conflict...

  2. Model Uncertainty Quantification Methods In Data Assimilation

    Science.gov (United States)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of data assimilation methods in high-dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is model uncertainty quantification; the outcome of any data assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real-world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.
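
    As one concrete example of the machinery involved (a textbook stochastic ensemble Kalman filter update, not the author's proposed method), dimensions and error covariances below are invented:

```python
# Stochastic (perturbed-observation) EnKF analysis step.
import numpy as np

rng = np.random.default_rng(1)
n_state, n_obs, n_ens = 40, 10, 50

H = np.zeros((n_obs, n_state))           # observe every 4th state variable
H[np.arange(n_obs), np.arange(0, n_state, 4)] = 1.0
R = 0.5 * np.eye(n_obs)                  # observation-error covariance

ensemble = rng.normal(size=(n_state, n_ens))      # forecast ensemble
y = rng.normal(size=n_obs)                        # observation vector

# Sample forecast covariance from the ensemble anomalies
X = ensemble - ensemble.mean(axis=1, keepdims=True)
P = X @ X.T / (n_ens - 1)

# Kalman gain, then update each member with a perturbed observation
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
for m in range(n_ens):
    y_pert = y + rng.multivariate_normal(np.zeros(n_obs), R)
    ensemble[:, m] += K @ (y_pert - H @ ensemble[:, m])

print("analysis spread:", ensemble.std(axis=1).mean())
```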

  3. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes, as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as the analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions.

  4. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus. Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimation for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a…

  5. Level Crossing Methods in Stochastic Models

    CERN Document Server

    Brill, Percy H

    2008-01-01

    Since its inception in 1974, the level crossing approach for analyzing a large class of stochastic models has become increasingly popular among researchers. This volume traces the evolution of level crossing theory for obtaining probability distributions of state variables and demonstrates solution methods in a variety of stochastic models including: queues, inventories, dams, renewal models, counter models, pharmacokinetics, and the natural sciences. Results for both steady-state and transient distributions are given, and numerous examples help the reader apply the method to solve problems fa…

  6. A matter of meaning: reflections on forty years of JCL.

    Science.gov (United States)

    Nelson, Katherine

    2014-07-01

    The entry into language via first words, and the acquisition of word meanings, are considered from the perspective of publications in the Journal of Child Language over the past forty years. Problems in achieving word meanings include the disparate and sparse concepts available to the child from past prelanguage experience. Variability in beginning word learning, and in its progress along a number of dimensions, suggests the problems that children may encounter, as well as the strategies and styles they adopt to make progress. Social context and adult practices are vitally involved in the success of this process. Whereas much headway has been made over the past decades, much remains to be revealed through dynamic systems theory and developmental semiotic analyses, as well as laboratory research aimed at social context conditions.

  7. Forty years of training program in the JAERI

    International Nuclear Information System (INIS)

    1998-03-01

    This report compiles the past training programs for researchers, engineers and regulatory members at the NuTEC (Nuclear Technology and Education Center) of the Japan Atomic Energy Research Institute, together with the past basic seminars for the public and advice and perspectives on future programs from relevant experts, in commemoration of the forty years of the NuTEC. It covers the past five years of educational courses and seminars in the utilization of radioisotopes and nuclear energy, for domestic and international training, provided at the Tokyo and Tokai Education Centers, and covers the activity of Asia-Pacific nuclear technology transfer, including the activity of various committees and meetings. In particular, fifty-six experts and authorities have contributed to the report with advice and perspectives on the training program in the 21st century, based on their reminiscences. (author)

  8. Numerical methods and modelling for engineering

    CERN Document Server

    Khoury, Richard

    2016-01-01

    This textbook provides a step-by-step approach to numerical methods in engineering modelling. The authors provide a consistent treatment of the topic, from the ground up, to reinforce for students that numerical methods are a set of mathematical modelling tools which allow engineers to represent real-world systems and compute features of these systems with a predictable error rate. Each method presented addresses a specific type of problem, namely root-finding, optimization, integral, derivative, initial value problem, or boundary value problem, and each one encompasses a set of algorithms to solve the problem given some information and to a known error bound. The authors demonstrate that after developing a proper model and understanding of the engineering situation they are working on, engineers can break down a model into a set of specific mathematical problems, and then implement the appropriate numerical methods to solve these problems. Uses a “building-block” approach, starting with simpler mathemati...
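
    As a small instance of the book's "predictable error rate" framing (this example is ours, not taken from the text): bisection halves its bracket each step, so after n steps the root is localized to within (b - a) / 2^n.

```python
# Root-finding with a known error bound: classic bisection.
def bisect(f, a, b, tol=1e-10):
    """Find a root of f in [a, b], assuming f(a) and f(b) differ in sign."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m            # root lies in the left half
        else:
            a, fa = m, f(m)  # root lies in the right half
    return (a + b) / 2

# Example: the positive root of x^2 - 2, i.e. sqrt(2)
print(bisect(lambda x: x * x - 2, 0.0, 2.0))
```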

  9. Modeling complex work systems - method meets reality

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the…

  10. The housing market: modeling and assessment methods

    Directory of Open Access Journals (Sweden)

    Zapadnjuk Evgenij Aleksandrovich

    2016-10-01

    Full Text Available This paper analyzes the theoretical foundations of an econometric simulation model that can be used to study the housing sector. It shows methods for the practical use of correlation and regression models in analyzing the status and prospects of development of the housing market.

  11. Measurement error models, methods, and applications

    CERN Document Server

    Buonaccorsi, John P

    2010-01-01

    Over the last 20 years, comprehensive strategies for treating measurement error in complex models and accounting for the use of extra data to estimate measurement error parameters have emerged. Focusing on both established and novel approaches, "Measurement Error: Models, Methods, and Applications" provides an overview of the main techniques and illustrates their application in various models. It describes the impacts of measurement errors on naive analyses that ignore them and presents ways to correct for them across a variety of statistical models, from simple one-sample problems to regres…
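
    A classic effect this literature corrects for is attenuation: additive error in a regressor shrinks the naive OLS slope by the reliability ratio. A hedged simulation (all parameters invented; the error variance is assumed known for the correction):

```python
# Attenuation bias: lambda = var(x) / (var(x) + var(u)), so with equal
# variances the naive slope is roughly half the true slope.
import numpy as np

rng = np.random.default_rng(7)
n, true_slope = 100_000, 2.0
x = rng.normal(0.0, 1.0, n)              # true covariate, var = 1
y = true_slope * x + rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, 1.0, n)          # observed covariate, error var = 1

naive_slope = np.cov(w, y)[0, 1] / np.var(w)
print(f"naive slope:   {naive_slope:.3f}")                 # ~ 1.0
print(f"expected:      {true_slope * 1.0 / (1.0 + 1.0):.3f}")

# Method-of-moments correction: divide out the reliability ratio,
# here using the (assumed known) error variance of 1.0.
corrected = naive_slope * np.var(w) / (np.var(w) - 1.0)
print(f"corrected:     {corrected:.3f}")                   # ~ 2.0
```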

  12. Antenatal glucocorticoids: where are we after forty years?

    Science.gov (United States)

    McKinlay, C J D; Dalziel, S R; Harding, J E

    2015-04-01

    Since their introduction more than forty years ago, antenatal glucocorticoids have become a cornerstone in the management of preterm birth and have been responsible for substantial reductions in neonatal mortality and morbidity. Clinical trials conducted over the past decade have shown that these benefits may be increased further through administration of repeat doses of antenatal glucocorticoids in women at ongoing risk of preterm birth and in those undergoing elective cesarean at term. At the same time, a growing body of experimental animal evidence and observational data in humans has linked fetal overexposure to maternal glucocorticoids with increased risk of cardiovascular, metabolic and other disorders in later life. Despite these concerns, and somewhat surprisingly, there has been little evidence to date from randomized trials of longer-term harm from clinical doses of synthetic glucocorticoids. However, with wider clinical application of antenatal glucocorticoid therapy there has been greater need to consider the potential for later adverse effects. This paper reviews current evidence for the short- and long-term health effects of antenatal glucocorticoids and discusses the apparent discrepancy between data from randomized clinical trials and other studies.

  13. Global Optimization Ensemble Model for Classification Methods

    Directory of Open Access Journals (Sweden)

    Hina Anwar

    2014-01-01

    Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, like the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30%, depending upon the algorithm complexity.
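
    GMC itself is only described at a high level here, so the sketch below shows the generic ensemble idea it builds on, using scikit-learn's soft-vote combiner on a synthetic dataset (all choices illustrative, not the paper's setup):

```python
# Compare three individual classifiers with a soft-voting ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
members = [("lr", LogisticRegression(max_iter=1000)),
           ("nb", GaussianNB()),
           ("dt", DecisionTreeClassifier(max_depth=5, random_state=0))]

for name, clf in members:
    print(name, cross_val_score(clf, X, y, cv=5).mean())

# Averaging predicted probabilities often beats each member alone.
ensemble = VotingClassifier(members, voting="soft")
print("vote", cross_val_score(ensemble, X, y, cv=5).mean())
```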

  14. Modelling methods for milk intake measurements

    International Nuclear Information System (INIS)

    Coward, W.A.

    1999-01-01

    One component of the first Research Coordination Programme was a tutorial session on modelling in in-vivo tracer kinetic methods. This section describes the principles that are involved and how these can be translated into spreadsheets using Microsoft Excel and the SOLVER function to fit the model to the data. The purpose of this section is to describe the system developed within the RCM, and how it is used
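
    The record describes Excel plus SOLVER; the equivalent least-squares fit in Python uses scipy.optimize.curve_fit. The mono-exponential tracer model and the data points below are invented for illustration, not from the RCM material.

```python
# Fit model parameters by minimizing squared error against tracer
# data -- the same job Excel's SOLVER does in the spreadsheet approach.
import numpy as np
from scipy.optimize import curve_fit

def model(t, c0, k):
    # Mono-exponential tracer disappearance: c(t) = c0 * exp(-k t)
    return c0 * np.exp(-k * t)

t_days = np.array([1.0, 2.0, 4.0, 7.0, 10.0, 14.0])
enrichment = np.array([120.0, 95.0, 62.0, 31.0, 16.0, 6.5])  # invented

params, _ = curve_fit(model, t_days, enrichment, p0=(100.0, 0.1))
c0, k = params
print(f"c0 = {c0:.1f}, k = {k:.3f} /day, half-life = {np.log(2)/k:.2f} days")
```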

  15. Modelling asteroid brightness variations. I - Numerical methods

    Science.gov (United States)

    Karttunen, H.

    1989-01-01

    A method for generating lightcurves of asteroid models is presented. The effects of the shape of the asteroid and the scattering law of a surface element are distinctly separable, being described by chosen functions that can easily be changed. The shape is specified by means of two functions that yield the length of the radius vector and the normal vector of the surface at a given point. The general shape must be convex, but spherical concavities producing macroscopic shadowing can also be modeled.
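
    To make the "separable shape and scattering" point concrete, here is a hedged sketch (our illustration, not the paper's algorithm): facet normals come from a shape function, and the scattering law is a plug-in function. The ellipsoid, the crude equal-area weighting, and the geometry are all invented.

```python
import numpy as np

def brightness(normals, areas, sun, obs, scatter):
    mu0, mu = normals @ sun, normals @ obs   # incidence / emission cosines
    lit = (mu0 > 0) & (mu > 0)               # facet both lit and visible
    return np.sum(areas[lit] * scatter(mu0[lit], mu[lit]))

lambert = lambda mu0, mu: mu0 * mu           # swap in any scattering law

# Crude triaxial-ellipsoid "shape function": random surface samples,
# outward normal from the gradient of (x/a)^2 + (y/b)^2 + (z/c)^2.
# For a convex body no facet shadows another, so this sum suffices.
rng = np.random.default_rng(3)
a, b, c = 1.0, 0.8, 0.6
p = rng.normal(size=(5000, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
p *= np.array([a, b, c])                     # points on the ellipsoid
n = p / np.array([a**2, b**2, c**2])         # gradient direction
n /= np.linalg.norm(n, axis=1, keepdims=True)
areas = np.full(len(p), 4 * np.pi / len(p))  # crude equal-area weights

sun = np.array([1.0, 0.3, 0.2]); sun /= np.linalg.norm(sun)
obs = np.array([1.0, 0.0, 0.0])
for phase in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    cp, sp = np.cos(phase), np.sin(phase)
    R = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])  # spin about z
    flux = brightness(n @ R.T, areas, sun, obs, lambert)
    print(f"rotation {np.degrees(phase):5.1f} deg -> {flux:.3f}")
```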

  16. Modeling Storm Surges Using Discontinuous Galerkin Methods

    Science.gov (United States)

    2016-06-01

    Storm surges have a… model. One of the governing systems of equations used to model storm surges' effects is the Shallow Water Equations (SWE). In this thesis, we solve the… fundamental truth, we found the error norm of the implicit method to be minimal. This study focuses on the impacts of a simulated storm surge in La Push…

  17. Habitat evaluation for outbreak of Yangtze voles (Microtus fortis) and management implications.

    Science.gov (United States)

    Xu, Zhenggang; Zhao, Yunlin; Li, Bo; Zhang, Meiwen; Shen, Guo; Wang, Yong

    2015-05-01

    Rodent pests severely damage agricultural crops. Outbreak risk models of rodent pests often do not include sufficient information regarding geographic variation. Habitat plays an important role in rodent-pest outbreak risk, and more information about the relationship between habitat and crop protection is urgently needed. The goal of the present study was to provide an outbreak risk map for the Dongting Lake region and to understand the relationship between rodent-pest outbreak variation and habitat distribution. The main rodent pests in the Dongting Lake region are Yangtze voles (Microtus fortis). These pests cause massive damage in outbreak years, most notably in 2007. Habitat evaluation and ecological details were obtained by analyzing the correlation between habitat suitability and outbreak risk, as indicated by population density and historical events. For the source-sink population, 96.18% of Yangtze vole disaster regions were covered by a 10-km buffer zone of suitable habitat in 2007. Historical outbreak frequency and peak population density were significantly correlated with the proportion of land covered by suitable habitat (r = 0.68, P = 0.04 and r = 0.76, P = 0.03, respectively). The Yangtze vole population tends to migrate approximately 10 km in outbreak years. Here, we propose a practical method for habitat evaluation that can be used to create integrated pest management plans for rodent pests when combined with basic information on the biology, ecology and behavior of the target species. © 2014 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and Wiley Publishing Asia Pty Ltd.

  18. Models and Methods for Free Material Optimization

    DEFF Research Database (Denmark)

    Weldeyesus, Alemseged Gebrehiwot

    …conditions for physical attainability, in the sense that it has to be symmetric and positive semidefinite. FMO problems have been studied for the last two decades in many articles that led to the development of a wide range of models, methods, and theories. As the design variables in FMO are the local… programs. The method has successfully obtained solutions to large-scale classical FMO problems of simultaneous analysis and design, nested and dual formulations. The second goal is to extend the method and the FMO problem formulations to general laminated shell structures. The thesis additionally addresses…

  19. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).

  20. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based......The geomagnetic field varies on a variety of time- and length scales, which are only rudimentarily considered in most present field models. The part of the observed field that cannot be explained by a given model, the model residuals, is often considered an estimate of the data uncertainty (which...... on 5 years of Ørsted and CHAMP data, and includes secular variation and acceleration, as well as low-degree external (magnetospheric) and induced fields. The analysis is done in order to find the statistical behaviour of the space-time structure of the residuals, as a proxy for the data covariances...
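
One standard geostatistical tool for studying the space-time structure of residuals is the empirical variogram. A minimal sketch on synthetic one-dimensional data (the study itself analysed Ørsted/CHAMP field-model residuals):

```python
# Sketch of a basic geostatistical tool of the kind the record alludes to: an
# empirical (semi-)variogram of residuals versus separation distance. The data
# here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 300)                           # 1-D "positions" of residuals
z = np.sin(x / 10.0) + 0.2 * rng.standard_normal(300)  # spatially correlated residuals

def empirical_variogram(x, z, bins):
    """gamma(h) = 0.5 * mean (z_i - z_j)^2 over pairs with |x_i - x_j| in each bin."""
    dx = np.abs(x[:, None] - x[None, :])
    dz2 = (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(x), k=1)                  # count each pair once
    h, g = dx[iu], 0.5 * dz2[iu]
    idx = np.digitize(h, bins)
    return np.array([g[idx == k].mean() for k in range(1, len(bins))])

bins = np.linspace(0, 50, 11)
print(empirical_variogram(x, z, bins))                 # rises with lag, then levels off
```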

  1. Developing a TQM quality management method model

    NARCIS (Netherlands)

    Zhang, Zhihai

    1997-01-01

    From an extensive review of total quality management literature, the external and internal environment affecting an organization's quality performance and the eleven primary elements of TQM are identified. Based on the primary TQM elements, a TQM quality management method model is developed. This

  2. Railway Track Allocation: Models and Methods

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    2011-01-01

    Efficiently coordinating the movement of trains on a railway network is a central part of the planning process for a railway company. This paper reviews models and methods that have been proposed in the literature to assist planners in finding train routes. Since the problem of routing trains on ...

  3. Railway Track Allocation: Models and Methods

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

Efficiently coordinating the movement of trains on a railway network is a central part of the planning process for a railway company. This paper reviews models and methods that have been proposed in the literature to assist planners in finding train routes. Since the problem of routing trains on a rai...

  4. Acceleration methods and models in Sn calculations

    International Nuclear Information System (INIS)

    Sbaffoni, M.M.; Abbate, M.J.

    1984-01-01

In some neutron transport problems solved by the discrete ordinates method, it is relatively common to observe certain peculiarities, for example, the generation of negative fluxes, slow and unreliable convergence, and solution instabilities. The models commonly used for neutron flux calculation and the acceleration methods included in the most widely used codes were analyzed with regard to their use in problems characterized by a strong upscattering effect. Some conclusions derived from this analysis are presented, as well as a new method of upscattering scaling for solving the aforementioned problems in such cases. This method has been included in the DOT3.5 code (two-dimensional discrete ordinates radiation transport code), generating a new version of wider applicability. (Author)

  5. Alternative methods of modeling wind generation using production costing models

    International Nuclear Information System (INIS)

    Milligan, M.R.; Pang, C.K.

    1996-08-01

    This paper examines the methods of incorporating wind generation in two production costing models: one is a load duration curve (LDC) based model and the other is a chronological-based model. These two models were used to evaluate the impacts of wind generation on two utility systems using actual collected wind data at two locations with high potential for wind generation. The results are sensitive to the selected wind data and the level of benefits of wind generation is sensitive to the load forecast. The total production cost over a year obtained by the chronological approach does not differ significantly from that of the LDC approach, though the chronological commitment of units is more realistic and more accurate. Chronological models provide the capability of answering important questions about wind resources which are difficult or impossible to address with LDC models

  6. Mathematical methods and models in composites

    CERN Document Server

    Mantic, Vladislav

    2014-01-01

    This book provides a representative selection of the most relevant, innovative, and useful mathematical methods and models applied to the analysis and characterization of composites and their behaviour on micro-, meso-, and macroscale. It establishes the fundamentals for meaningful and accurate theoretical and computer modelling of these materials in the future. Although the book is primarily concerned with fibre-reinforced composites, which have ever-increasing applications in fields such as aerospace, many of the results presented can be applied to other kinds of composites. The topics cover

  7. Intelligent structural optimization: Concept, Model and Methods

    International Nuclear Information System (INIS)

    Lu, Dagang; Wang, Guangyuan; Peng, Zhang

    2002-01-01

Structural optimization has many characteristics of Soft Design, and so it is necessary to apply the experience of human experts to solving the uncertain and multidisciplinary optimization problems in large-scale and complex engineering systems. With the development of artificial intelligence (AI) and computational intelligence (CI), the theory of structural optimization is now developing in the direction of intelligent optimization. In this paper, the concept of Intelligent Structural Optimization (ISO) is proposed. A design process model of ISO is then put forward, in which each design sub-process model is discussed. Finally, the design methods of ISO are presented.

  8. Mathematical Models and Methods for Living Systems

    CERN Document Server

    Chaplain, Mark; Pugliese, Andrea

    2016-01-01

    The aim of these lecture notes is to give an introduction to several mathematical models and methods that can be used to describe the behaviour of living systems. This emerging field of application intrinsically requires the handling of phenomena occurring at different spatial scales and hence the use of multiscale methods. Modelling and simulating the mechanisms that cells use to move, self-organise and develop in tissues is not only fundamental to an understanding of embryonic development, but is also relevant in tissue engineering and in other environmental and industrial processes involving the growth and homeostasis of biological systems. Growth and organization processes are also important in many tissue degeneration and regeneration processes, such as tumour growth, tissue vascularization, heart and muscle functionality, and cardio-vascular diseases.

  9. The Schwarzschild Method for Building Galaxy Models

    Science.gov (United States)

    de Zeeuw, P. T.

    1998-09-01

Martin Schwarzschild is most widely known as one of the towering figures of the theory of stellar evolution. However, from the early fifties onward he displayed a strong interest in dynamical astronomy, and in particular in its application to the structure of star clusters and galaxies. This resulted in a string of remarkable investigations, including the discovery of what became known as the Spitzer-Schwarzschild mechanism, the invention of the strip count method for mass determinations, the demonstration of the existence of dark matter on large scales, and the study of the nucleus of M31, based on his own Stratoscope II balloon observations. With his retirement approaching, he decided to leave the field of stellar evolution, to make his life-long hobby of stellar dynamics a full-time occupation, and to tackle the problem of self-consistent equilibria for elliptical galaxies, which by then were suspected to have a triaxial shape. Rather than following classical methods, which already had trouble dealing with axisymmetric systems, he invented a simple numerical technique, which seeks to populate individual stellar orbits in the galaxy potential so as to reproduce the associated model density. This is now known as Schwarzschild's method. He showed by numerical calculation that most stellar orbits in a triaxial potential relevant for elliptical galaxies have two effective integrals of motion in addition to the classical energy integral, and then constructed the first ever self-consistent equilibrium model for a realistic triaxial galaxy. This provided a very strong stimulus to research in the dynamics of flattened galaxies. This talk will review how Schwarzschild's method is used today, in problems ranging from the existence of equilibrium models as a function of shape, central cusp slope, tumbling rate, and presence of a central point mass, to modeling of individual galaxies to find stellar dynamical evidence for dark matter in extended halos, and/or massive
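
In modern practice, the orbit-superposition step of Schwarzschild's method is commonly posed as a constrained linear inverse problem: find non-negative orbit weights that reproduce the model density. A minimal sketch under that formulation, with an invented toy orbit library:

```python
# Sketch of the linear-algebra core of Schwarzschild's method, under the common
# formulation: find non-negative orbit weights w such that the orbit-library
# densities reproduce the target density, A @ w ~= d with w >= 0. The tiny A and
# d below are invented placeholders, not a real orbit library.
import numpy as np
from scipy.optimize import nnls

A = np.array([[0.9, 0.1, 0.3],     # A[i, j] = mass that orbit j deposits in cell i
              [0.1, 0.8, 0.3],
              [0.0, 0.1, 0.4]])
d = np.array([1.0, 1.0, 0.3])      # target model density per spatial cell

w, rnorm = nnls(A, d)              # non-negative least squares
print("orbit weights:", np.round(w, 3), " residual norm:", round(rnorm, 4))
```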

  10. An integrated modeling method for wind turbines

    Science.gov (United States)

    Fadaeinedjad, Roohollah

Simulink environment to study the flicker contribution of the wind turbine in the wind-diesel system. By using a new wind power plant representation method, a large wind farm (consisting of 96 fixed-speed wind turbines) is modelled to study the power quality of the wind power system. The flicker contribution of the wind farm is also studied with different numbers of wind turbines, using the flickermeter model. Keywords: Simulink, FAST, TurbSim, AeroDyn, wind energy, doubly-fed induction generator, variable-speed wind turbine, voltage sag, tower vibration, power quality, flicker, fixed-speed wind turbine, wind shear, tower shadow, and yaw error.

  11. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  12. Correlations between cutaneous malignant melanoma and other cancers: An ecological study in forty European countries

    Directory of Open Access Journals (Sweden)

    Pablo Fernandez-Crehuet Serrano

    2016-01-01

Full Text Available Background: The presence of noncutaneous neoplasms does not seem to increase the risk of cutaneous malignant melanoma; however, it seems to be associated with the development of other hematological, brain, breast, uterine, and prostatic neoplasms. An ecological transversal study was conducted to study the geographic association between cutaneous malignant melanoma and 24 localizations of cancer in forty European countries. Methods: Cancer incidence rates were extracted from the GLOBOCAN database of the International Agency for Research on Cancer. We analyzed the age-adjusted and gender-stratified incidence rates for different localizations of cancer in forty European countries and calculated their correlation using Pearson's correlation test. Results: In males, significant correlations were found between cutaneous malignant melanoma and testicular cancer (r = 0.83 [95% confidence interval (CI): 0.68-0.89]), myeloma (r = 0.68 [95% CI: 0.46-0.81]), prostatic carcinoma (r = 0.66 [95% CI: 0.43-0.80]), and non-Hodgkin lymphoma (NHL) (r = 0.63 [95% CI: 0.39-0.78]). In females, significant correlations were found between cutaneous malignant melanoma and breast cancer (r = 0.80 [95% CI: 0.64-0.88]), colorectal cancer (r = 0.72 [95% CI: 0.52-0.83]), and NHL (r = 0.71 [95% CI: 0.50-0.83]). Conclusions: These correlations call for new studies on the epidemiology of cancer in general and on cutaneous malignant melanoma risk factors in particular.

  13. ACTIVE AND PARTICIPATORY METHODS IN BIOLOGY: MODELING

    Directory of Open Access Journals (Sweden)

    Brînduşa-Antonela SBÎRCEA

    2011-01-01

    Full Text Available By using active and participatory methods it is hoped that pupils will not only come to a deeper understanding of the issues involved, but also that their motivation will be heightened. Pupil involvement in their learning is essential. Moreover, by using a variety of teaching techniques, we can help students make sense of the world in different ways, increasing the likelihood that they will develop a conceptual understanding. The teacher must be a good facilitator, monitoring and supporting group dynamics. Modeling is an instructional strategy in which the teacher demonstrates a new concept or approach to learning and pupils learn by observing. In the teaching of biology the didactic materials are fundamental tools in the teaching-learning process. Reading about scientific concepts or having a teacher explain them is not enough. Research has shown that modeling can be used across disciplines and in all grade and ability level classrooms. Using this type of instruction, teachers encourage learning.

  14. Surface physics theoretical models and experimental methods

    CERN Document Server

    Mamonova, Marina V; Prudnikova, I A

    2016-01-01

    The demands of production, such as thin films in microelectronics, rely on consideration of factors influencing the interaction of dissimilar materials that make contact with their surfaces. Bond formation between surface layers of dissimilar condensed solids-termed adhesion-depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...

  15. Wind turbine noise modeling : a comparison of modeling methods

    International Nuclear Information System (INIS)

    Wang, L.; Strasser, A.

    2009-01-01

All wind turbine arrays must undergo a noise impact assessment. DataKustik GmbH developed the Computer Aided Noise Abatement (Cadna/A) modeling software for calculating noise propagation to meet accepted protocols and international standards such as the CONCAWE and ISO 9613 standards. The developer of Cadna/A recommended the following 3 models for simulating wind turbine noise: a disk of point sources; a ring of point sources located at the tip of each blade; and a point source located at the top of the wind turbine tower hub. This paper presented an analytical comparison of the 3 models used for a typical wind turbine with a hub tower containing 3 propeller blades, a drive-train and top-mounted generator, as well as a representative wind farm, using Cadna/A. AUC, ISO and IEC criteria requirements for the meteorological input to Cadna/A for wind farm noise were also discussed. The noise prediction modelling approach was as follows: the simplest model, positioning a single point source at the top of the hub, can be used to predict sound levels for a typical wind turbine if receptors are located 250 m from the hub; A-weighted sound power levels of a wind turbine at cut-in and cut-out wind speeds should be used in the models; 20 by 20 or 50 by 50 meter terrain parameters are suitable for large wind farm modeling; and ISO 9613-2 methods are recommended to predict wind farm noise with various meteorological inputs based on local conditions. The study showed that the predicted sound level differences of the 3 wind turbine models using Cadna/A are less than 0.2 dB at receptors located greater than 250 m from the wind turbine hub, which falls within the accuracy range of the calculation method. All 3 models of wind turbine noise meet ISO 9613-2 standards for noise prediction using Cadna/A. However, the single point source model was found to be the most efficient in terms of modeling run-time among the 3 models. 7 refs., 3 tabs., 15 figs.
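
The convergence of the three source models with distance follows from simple geometric divergence. A back-of-envelope sketch, assuming free-field spherical spreading (the 20·lg(d) + 11 dB divergence term of ISO 9613-2) and invented turbine parameters; it illustrates the shrinking level difference with distance rather than reproducing the paper's 0.2 dB figure:

```python
# Rough illustration of why a single point source at the hub suffices at large
# distances: under free-field spherical spreading, sources separated by up to a
# rotor radius converge to nearly the same level. Sound power level and rotor
# radius below are assumed example values, not data from the record.
import math

def spl(lw: float, r: float) -> float:
    """Sound pressure level at distance r (m) from a point source of power lw (dB)."""
    return lw - 20.0 * math.log10(r) - 11.0   # geometric divergence, free field

LW = 104.0           # assumed A-weighted sound power level, dB(A)
ROTOR_RADIUS = 40.0  # assumed blade length, m

for d in (100.0, 250.0, 500.0):
    hub = spl(LW, d)
    tip = spl(LW, d + ROTOR_RADIUS)           # worst-case blade-tip offset
    print(f"{d:5.0f} m: hub {hub:5.1f} dB, tip-offset {tip:5.1f} dB, "
          f"spread {hub - tip:4.2f} dB")
```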

  16. Statistical Models and Methods for Lifetime Data

    CERN Document Server

    Lawless, Jerald F

    2011-01-01

    Praise for the First Edition"An indispensable addition to any serious collection on lifetime data analysis and . . . a valuable contribution to the statistical literature. Highly recommended . . ."-Choice"This is an important book, which will appeal to statisticians working on survival analysis problems."-Biometrics"A thorough, unified treatment of statistical models and methods used in the analysis of lifetime data . . . this is a highly competent and agreeable statistical textbook."-Statistics in MedicineThe statistical analysis of lifetime or response time data is a key tool in engineering,

  17. Mechanics, Models and Methods in Civil Engineering

    CERN Document Server

    Maceri, Franco

    2012-01-01

"Mechanics, Models and Methods in Civil Engineering" collects leading papers dealing with actual Civil Engineering problems. The approach is in the line of the Italian-French school and therefore deeply couples mechanics and mathematics, creating new predictive theories, enhancing clarity in understanding, and improving effectiveness in applications. The authors of the contributions collected here belong to the Lagrange Laboratory, a European research network that has been active for many years. This book will be of major interest to readers aware of modern Civil Engineering.

  18. The forward tracking, an optical model method

    CERN Document Server

    Benayoun, M

    2002-01-01

    This Note describes the so-called Forward Tracking, and the underlying optical model, developed in the context of LHCb-Light studies. Starting from Velo tracks, cheated or found by real pattern recognition, the tracks are found in the ST1-3 chambers after the magnet. The main ingredient to the method is a parameterisation of the track in the ST1-3 region, based on the Velo track parameters and an X seed in one ST station. Performance with the LHCb-Minus and LHCb-Light setups is given.

  19. Experimental modeling methods in Industrial Engineering

    Directory of Open Access Journals (Sweden)

    Peter Trebuňa

    2009-03-01

Full Text Available Dynamic approaches to the management systems of present industrial practice force businesses to address the continuous in-house improvement of production and non-production processes. Experience has repeatedly demonstrated the need for a systems approach not only in analysis but also in the planning and actual implementation of these processes. The contribution is therefore focused on describing modeling in industrial practice through a systems approach, in order to avoid carrying erroneous decisions into the implementation phase, and thus to prevent the continued application of trial-and-error methods.

  20. Finite element modeling methods for photonics

    CERN Document Server

    Rahman, B M Azizur

    2013-01-01

    The term photonics can be used loosely to refer to a vast array of components, devices, and technologies that in some way involve manipulation of light. One of the most powerful numerical approaches available to engineers developing photonic components and devices is the Finite Element Method (FEM), which can be used to model and simulate such components/devices and analyze how they will behave in response to various outside influences. This resource provides a comprehensive description of the formulation and applications of FEM in photonics applications ranging from telecommunications, astron

  1. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

Growth curve models are widely used in the social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss of efficiency in standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
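
A minimal sketch of the core idea, explicitly specifying a non-normal error distribution, using a toy linear growth model, Student-t errors and a hand-rolled Metropolis sampler (the paper itself uses latent growth curve models and the MCMC procedure of SAS):

```python
# Toy illustration of Bayesian growth-curve fitting with explicitly specified
# non-normal (Student-t) errors. Data, priors and sampler settings are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t = np.tile(np.arange(5.0), 20)                        # 20 subjects, 5 waves
y = 2.0 + 0.8 * t + stats.t.rvs(df=3, size=t.size, random_state=rng)

def log_post(theta):
    b0, b1, log_s = theta
    # Student-t likelihood (df fixed at 3) with flat priors on b0, b1, log sigma
    return stats.t.logpdf(y - b0 - b1 * t, df=3, scale=np.exp(log_s)).sum()

theta = np.array([0.0, 0.0, 0.0])
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal(3)       # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:           # Metropolis accept step
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[1000:])                        # drop burn-in
print("posterior means (b0, b1, sigma):",
      post[:, 0].mean().round(2), post[:, 1].mean().round(2),
      np.exp(post[:, 2]).mean().round(2))
```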

  2. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.

  3. Functional methods in the generalized Dicke model

    International Nuclear Information System (INIS)

    Alcalde, M. Aparicio; Lemos, A.L.L. de; Svaiter, N.F.

    2007-01-01

The Dicke model describes an ensemble of N identical two-level atoms (qubits) coupled to a single quantized mode of a bosonic field. The fermion Dicke model is obtained by replacing the atomic pseudo-spin operators with a linear combination of Fermi operators. The generalized fermion Dicke model is defined by introducing different coupling constants between the single mode of the bosonic field and the reservoir, g₁ and g₂ for the rotating and counter-rotating terms respectively. In the limit N → ∞, the thermodynamics of the fermion Dicke model can be analyzed using the path integral approach with functional methods. The system exhibits a second-order phase transition from the normal to the superradiant phase at some critical temperature, with the presence of a condensate. We evaluate the critical transition temperature and present the spectrum of the collective bosonic excitations for the general case (g₁ ≠ 0 and g₂ ≠ 0). There is quantum critical behavior when the coupling constants g₁ and g₂ satisfy g₁ + g₂ = (ω₀Ω)^(1/2), where ω₀ is the frequency of the mode of the field and Ω is the energy gap between the energy eigenstates of the qubits. Two particular situations are analyzed. First, we present the spectrum of the collective bosonic excitations in the case g₁ ≠ 0 and g₂ = 0, recovering the well-known results. Second, the case g₁ = 0 and g₂ ≠ 0 is studied. In this last case, it is possible to have a superradiant phase when only virtual processes are introduced in the interaction Hamiltonian. Here a quantum phase transition also appears at the critical coupling g₂ = (ω₀Ω)^(1/2), and for larger values of the coupling the system enters this superradiant phase with a Goldstone mode. (author)
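
Restating the abstract's criticality condition as a clean math block (a minimal LaTeX sketch; the special-case readings of g₁ and g₂ follow the abstract's identification of rotating and counter-rotating couplings):

```latex
% Quantum critical line of the generalized fermion Dicke model (as quoted above):
\[
    g_1 + g_2 = \sqrt{\omega_0 \, \Omega},
\]
% where \omega_0 is the field-mode frequency and \Omega the qubit energy gap.
% Special cases discussed in the record:
%   g_2 = 0 (rotating terms only):         critical coupling g_1 = \sqrt{\omega_0 \Omega}
%   g_1 = 0 (counter-rotating terms only): critical coupling g_2 = \sqrt{\omega_0 \Omega}
```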

  4. Mathematical models and methods for planet Earth

    CERN Document Server

    Locatelli, Ugo; Ruggeri, Tommaso; Strickland, Elisabetta

    2014-01-01

    In 2013 several scientific activities have been devoted to mathematical researches for the study of planet Earth. The current volume presents a selection of the highly topical issues presented at the workshop “Mathematical Models and Methods for Planet Earth”, held in Roma (Italy), in May 2013. The fields of interest span from impacts of dangerous asteroids to the safeguard from space debris, from climatic changes to monitoring geological events, from the study of tumor growth to sociological problems. In all these fields the mathematical studies play a relevant role as a tool for the analysis of specific topics and as an ingredient of multidisciplinary problems. To investigate these problems we will see many different mathematical tools at work: just to mention some, stochastic processes, PDE, normal forms, chaos theory.

  5. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    Full Text Available Abstract The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The Current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  6. FDTD method and models in optical education

    Science.gov (United States)

    Lin, Xiaogang; Wan, Nan; Weng, Lingdong; Zhu, Hao; Du, Jihe

    2017-08-01

In this paper, the finite-difference time-domain (FDTD) method is proposed as a pedagogical tool in optical education. Meanwhile, FDTD Solutions, a simulation software package based on the FDTD algorithm, is presented as a new tool that helps beginners to build optical models and to analyze optical problems. The core of the FDTD algorithm is that the time-dependent Maxwell's equations are discretized in their space and time partial derivatives and then used to simulate the interaction between an electromagnetic pulse and an ideal conductor or a semiconductor. Because the electromagnetic field is solved in the time domain, memory usage is reduced and broadband simulation results can be obtained easily. Thus, promoting the FDTD algorithm in optical education is feasible and efficient. FDTD enables us to design, analyze and test modern passive and nonlinear photonic components (such as bio-particles, nanoparticles and so on) for wave propagation, scattering, reflection, diffraction, polarization and nonlinear phenomena. The different FDTD models can help teachers and students solve almost all of the optical problems in optical education. Additionally, the GUI of FDTD Solutions is so friendly to beginners that learners can master it quickly.
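
A minimal one-dimensional Yee-scheme sketch of the update loop the abstract describes, in normalized units with a hard-coded Gaussian source; packages such as FDTD Solutions implement far more complete three-dimensional models:

```python
# Minimal 1-D FDTD (Yee) sketch: Maxwell's curl equations discretized in space
# and time and marched forward in the time domain. Free space, normalized
# units, perfectly conducting walls; all parameters are illustrative.
import numpy as np

n, steps = 200, 400
ez = np.zeros(n)           # electric field E_z on integer grid points
hy = np.zeros(n - 1)       # magnetic field H_y on half grid points
c = 0.5                    # Courant number (<= 1 for stability in 1-D)

for t in range(steps):
    hy += c * np.diff(ez)                       # update H from the curl of E
    ez[1:-1] += c * np.diff(hy)                 # update E from the curl of H
    ez[50] += np.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source
    ez[0] = ez[-1] = 0.0                        # PEC walls reflect the pulse

print("peak |Ez| after propagation:", float(np.abs(ez).max()))
```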

  7. The accuracy of a method for printing three-dimensional spinal models.

    Directory of Open Access Journals (Sweden)

    Ai-Min Wu

Full Text Available To study the morphology of the human spine and new spinal fixation methods, scientists require cadaveric specimens, which are dependent on donation. However, in most countries, the number of people willing to donate their body is low. A 3D printed model could be an alternative method for morphology research, but the accuracy of the morphology of a 3D printed model has not been determined. Forty-five computed tomography (CT) scans of cervical, thoracic and lumbar spines were obtained, and 44 parameters of the cervical spine, 120 parameters of the thoracic spine, and 50 parameters of the lumbar spine were measured. The CT scan data in DICOM format were imported into Mimics software v10.01 for 3D reconstruction, and the data were saved in .STL format and imported into Cura software. After a 3D digital model was formed, it was saved in G-code format and exported to a 3D printer for printing. After the 3D printed models were obtained, the above-referenced parameters were measured again. Paired t-tests were used to determine significance, with the threshold set to P < 0.05, and intraclass correlation coefficients (ICCs) were calculated; most ICC values were > 0.800, the other ICC values were > 0.600, and none were < 0.600. In this study, we provide a protocol for printing accurate 3D spinal models for surgeons and researchers. The resulting 3D printed model is inexpensive and easily obtained for spinal fixation research.

  8. Free wake models for vortex methods

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, K. [Technical Univ. Berlin, Aerospace Inst. (Germany)

    1997-08-01

The blade element method is fast and works well. For some problems (rotor shapes or flow conditions) it can be better to use vortex methods. Different methods for calculating a wake geometry will be presented. (au)

  9. 77 FR 54930 - Carlyle Plastics and Resins, Formerly Known as Fortis Plastics, A Subsidiary of Plastics...

    Science.gov (United States)

    2012-09-06

... Employment and Training Administration Carlyle Plastics and Resins, Formerly Known as Fortis Plastics, A Subsidiary of Plastics Acquisitions Inc., Including On-Site Leased Workers From Kelly Services and Shelley... Adjustment Assistance on July 3, 2012, applicable to workers and former workers of Fortis Plastics...

  10. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    . To illustrate these concepts a number of examples are used. These include models of polymer membranes, distillation and catalyst behaviour. Some detailed considerations within these models are stated and discussed. Model generation concepts are introduced and ideas of a reference model are given that shows...

  11. GREENSCOPE: A Method for Modeling Chemical Process ...

    Science.gov (United States)

    Current work within the U.S. Environmental Protection Agency’s National Risk Management Research Laboratory is focused on the development of a method for modeling chemical process sustainability. The GREENSCOPE methodology, defined for the four bases of Environment, Economics, Efficiency, and Energy, can evaluate processes with over a hundred different indicators. These indicators provide a means for realizing the principles of green chemistry and green engineering in the context of sustainability. Development of the methodology has centered around three focal points. One is a taxonomy of impacts that describe the indicators and provide absolute scales for their evaluation. The setting of best and worst limits for the indicators allows the user to know the status of the process under study in relation to understood values. Thus, existing or imagined processes can be evaluated according to their relative indicator scores, and process modifications can strive towards realizable targets. A second area of focus is in advancing definitions of data needs for the many indicators of the taxonomy. Each of the indicators has specific data that is necessary for their calculation. Values needed and data sources have been identified. These needs can be mapped according to the information source (e.g., input stream, output stream, external data, etc.) for each of the bases. The user can visualize data-indicator relationships on the way to choosing selected ones for evalua

  12. Model reduction methods for vector autoregressive processes

    CERN Document Server

    Brüggemann, Ralf

    2004-01-01

1.1 Objective of the Study Vector autoregressive (VAR) models have become one of the dominant research tools in the analysis of macroeconomic time series during the last two decades. The great success of this modeling class started with Sims' (1980) critique of the traditional simultaneous equation models (SEM). Sims criticized the use of 'too many incredible restrictions' based on 'supposed a priori knowledge' in the large-scale macroeconometric models which were popular at that time. Therefore, he advocated largely unrestricted reduced-form multivariate time series models, unrestricted VAR models in particular. Ever since his influential paper these models have been employed extensively to characterize the underlying dynamics in systems of time series. In particular, tools to summarize the dynamic interaction between the system variables, such as impulse response analysis or forecast error variance decompositions, have been developed over the years. The econometrics of VAR models and related quantities i...

  13. A business case method for business models

    NARCIS (Netherlands)

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model

  14. Numerical methods in Markov chain modeling

    Science.gov (United States)

    Philippe, Bernard; Saad, Youcef; Stewart, William J.

    1989-01-01

Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.
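
The simplest route to the stationary distribution the survey discusses is power iteration on the transition matrix; the Krylov methods it presents become necessary for large sparse chains. A dense toy sketch:

```python
# Sketch of the underlying eigenvector problem: compute the stationary
# distribution pi (with pi P = pi) of a small Markov chain by power iteration.
# The 3-state transition matrix below is an invented example.
import numpy as np

P = np.array([[0.90, 0.075, 0.025],   # row-stochastic transition matrix
              [0.15, 0.800, 0.050],
              [0.25, 0.250, 0.500]])

pi = np.full(3, 1.0 / 3.0)            # start from the uniform distribution
for _ in range(200):
    nxt = pi @ P                      # one step of the chain
    if np.abs(nxt - pi).max() < 1e-12:
        break
    pi = nxt

print("stationary distribution:", pi.round(6), " check pi @ P:", (pi @ P).round(6))
```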

  15. Dynamic spatial panels : models, methods, and inferences

    NARCIS (Netherlands)

    Elhorst, J. Paul

    This paper provides a survey of the existing literature on the specification and estimation of dynamic spatial panel data models, a collection of models for spatial panels extended to include one or more of the following variables and/or error terms: a dependent variable lagged in time, a dependent

  16. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the ...

  17. Combining static and dynamic modelling methods: a comparison of four methods

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1995-01-01

A conceptual model of a system is an explicit description of the behaviour required of the system. Methods for conceptual modelling include entity-relationship (ER) modelling, data flow modelling, Jackson System Development (JSD) and several object-oriented analysis methods. Given the current

  18. A Pattern-Oriented Approach to a Methodical Evaluation of Modeling Methods

    Directory of Open Access Journals (Sweden)

    Michael Amberg

    1996-11-01

Full Text Available The paper describes a pattern-oriented approach to evaluate modeling methods and to compare various methods with each other from a methodical viewpoint. A specific set of principles (the patterns) is defined by investigating the notations and the documentation of comparable modeling methods. Each principle helps to examine some parts of the methods from a specific point of view. All principles together lead to an overall picture of the method under examination. First the core ("method-neutral") meaning of each principle is described. Then the methods are examined regarding the principle. Afterwards the method-specific interpretations are compared with each other and with the core meaning of the principle. By this procedure, the strengths and weaknesses of modeling methods regarding methodical aspects are identified. The principles are described uniformly using a principle description template according to descriptions of object-oriented design patterns. The approach is demonstrated by evaluating a business process modeling method.

  19. Resampling methods for evaluating classification accuracy of wildlife habitat models

    Science.gov (United States)

    Verbyla, David L.; Litvaitis, John A.

    1989-11-01

Predictive models of wildlife-habitat relationships often have been developed without being tested. The apparent classification accuracy of such models can be optimistically biased and misleading. Data resampling methods exist that yield a more realistic estimate of model classification accuracy. These methods are simple and require no new sample data. We illustrate these methods (cross-validation, jackknife resampling, and bootstrap resampling) with computer simulation to demonstrate the increase in precision of the estimate. The bootstrap method is then applied to field data as a technique for model comparison. We recommend that biologists use some resampling procedure to evaluate wildlife habitat models prior to field evaluation.
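
A minimal sketch of the bootstrap procedure applied to classification accuracy, with an invented two-class data set and a nearest-centroid classifier standing in for a habitat model:

```python
# Sketch of the bootstrap idea the paper applies: accuracy measured on the same
# sample used to fit a model is optimistic; refitting on bootstrap resamples and
# testing on the left-out cases gives a less biased estimate. Data and the
# nearest-centroid "habitat model" are invented stand-ins.
import numpy as np

rng = np.random.default_rng(7)
x = np.r_[rng.normal(0, 1, (30, 2)), rng.normal(1.2, 1, (30, 2))]
y = np.r_[np.zeros(30, int), np.ones(30, int)]

def fit_predict(x_tr, y_tr, x_te):
    c0, c1 = x_tr[y_tr == 0].mean(0), x_tr[y_tr == 1].mean(0)
    d0 = ((x_te - c0) ** 2).sum(1)   # squared distance to each class centroid
    d1 = ((x_te - c1) ** 2).sum(1)
    return (d1 < d0).astype(int)

apparent = (fit_predict(x, y, x) == y).mean()

accs = []
for _ in range(500):                 # bootstrap: refit on resampled data,
    b = rng.integers(0, len(y), len(y))          # test on the out-of-bag cases
    oob = np.setdiff1d(np.arange(len(y)), b)
    if oob.size == 0:
        continue
    accs.append((fit_predict(x[b], y[b], x[oob]) == y[oob]).mean())

print(f"apparent accuracy {apparent:.2f}, bootstrap estimate {np.mean(accs):.2f}")
```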

  20. A catalog of automated analysis methods for enterprise models.

    Science.gov (United States)

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

Enterprise models are created for documenting and communicating the structure and state of the Business and Information Technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, the process can be complicated, making omissions or miscalculations very likely. This situation has fostered research into automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents the work of compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.

  1. A Comparison of Two Balance Calibration Model Building Methods

    Science.gov (United States)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.

  2. A Systematic Identification Method for Thermodynamic Property Modelling

    DEFF Research Database (Denmark)

    Ana Perederic, Olivia; Cunico, Larissa; Sarup, Bent

    2017-01-01

    In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria prediction by group contribution based property prediction models. The method is applied to lipid systems where the Original UNIFAC...

  3. Data mining concepts models methods and algorithms

    CERN Document Server

    Kantardzic, Mehmed

    2011-01-01

    This book reviews state-of-the-art methodologies and techniques for analyzing enormous quantities of raw data in high-dimensional data spaces, to extract new information for decision making. The goal of this book is to provide a single introductory source, organized in a systematic way, in which we could direct the readers in analysis of large data sets, through the explanation of basic concepts, models and methodologies developed in recent decades.

  4. Ensemble Learning Method for Hidden Markov Models

    Science.gov (United States)

    2014-12-01

The method combines the outputs of individual hidden Markov models using a decision-level fusion method such as an artificial neural network or a hierarchical mixture of experts. Our approach was evaluated on... For the fusion step, techniques such as simple algebraic methods [63], artificial neural networks (ANN) [1], and hierarchical mixtures of experts (HME) [46] can be used.

  5. A Versatile Nonlinear Method for Predictive Modeling

    Science.gov (United States)

    Liou, Meng-Sing; Yao, Weigang

    2015-01-01

As computational fluid dynamics techniques and tools become widely accepted for real-world practice today, it is intriguing to ask: in what areas can they be utilized to their potential in the future. Some promising areas include design optimization and the exploration of fluid dynamics phenomena (the concept of a numerical wind tunnel), both of which share the common feature that some parameters are varied repeatedly and the computation can be costly. We are especially interested in the need for an accurate and efficient approach for handling these applications: (1) capturing the complex nonlinear dynamics inherent in a system under consideration and (2) versatility (robustness) to encompass a range of parametric variations. In our previous paper, we proposed to use first-order Taylor expansions collected at numerous sampling points along a trajectory and assembled together via nonlinear weighting functions. The validity and performance of this approach was demonstrated for a number of problems with vastly different input functions. In this study, we are especially interested in enhancing the method's accuracy; we extend it to include the second-order Taylor expansion, which however requires a complicated evaluation of Hessian matrices for a system of equations, as in fluid dynamics. We propose a method to avoid these Hessian matrices while maintaining the accuracy. Results based on the method are presented to confirm its validity.
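
A one-dimensional sketch of the surrogate idea described above: first-order Taylor expansions anchored at sampled points, blended by nonlinear (here Gaussian) weighting functions. The target function and weight shape are assumptions, not the paper's aerodynamic systems:

```python
# Toy version of a Taylor-expansion surrogate: expansions about sample points
# along a "trajectory", assembled with normalized nonlinear weights, so the
# surrogate tracks a nonlinear response without Hessians.
import numpy as np

f  = np.sin                       # stand-in for an expensive model
df = np.cos                       # its derivative at the sample points

xs = np.linspace(0.0, 2.0 * np.pi, 8)     # sampling points

def surrogate(x, width=0.6):
    # First-order Taylor expansion about each sample point x_k ...
    taylor = f(xs) + df(xs) * (x[:, None] - xs)
    # ... blended with normalized Gaussian weighting functions.
    w = np.exp(-((x[:, None] - xs) / width) ** 2)
    return (w * taylor).sum(1) / w.sum(1)

xq = np.linspace(0.3, 5.8, 12)
err = np.abs(surrogate(xq) - f(xq)).max()
print(f"max surrogate error on test points: {err:.3e}")
```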

  6. Trajectories of Marijuana Use from Adolescence to Adulthood as Predictors of Unemployment Status in the Early Forties

    Science.gov (United States)

    Zhang, Chenshu; Brook, Judith S.; Leukefeld, Carl G.; Brook, David W.

    2016-01-01

Objectives To study the degree to which individuals in different trajectories of marijuana use are similar or different in terms of unemployment status at mean age 43. Methods We gathered longitudinal data on a prospective cohort taken from a community sample (N = 548). Forty-nine percent of the original participants were female. Over 90% of the participants were white. The participants were followed from adolescence to early midlife. The mean ages of participants at the follow-up interviews were 14.1, 16.3, 22.3, 27.0, 31.9, 36.6, and 43.0, respectively. We used the growth mixture modeling (GMM) approach to identify the trajectories of marijuana use over a 29-year period. Results Five trajectories of marijuana use were identified: chronic users/decreasers (8.3%), quitters (18.6%), increasing users (7.3%), chronic occasional users (25.6%), and nonusers/experimenters (40.2%). Compared with nonusers/experimenters, chronic users/decreasers had a significantly higher likelihood of unemployment at mean age 43 (Adjusted Odds Ratio = 3.51, 95% Confidence Interval = 1.13–10.91), even after controlling for the covariates. Conclusions and Scientific Significance The results of the associations between the distinct trajectories of marijuana use and unemployment in early midlife indicate that it is important to develop intervention programs targeting chronic marijuana use as well as unemployment in individuals at this stage of development. Results from this study should encourage clinicians, teachers, and parents to assess and treat chronic marijuana use in adolescents. PMID:26991779

  7. Diffusion in condensed matter methods, materials, models

    CERN Document Server

    Kärger, Jörg

    2005-01-01

Diffusion as the process of particle transport due to stochastic movement is a phenomenon of crucial relevance for a large variety of processes and materials. This comprehensive, handbook-style survey of diffusion in condensed matter gives detailed insight into diffusion as the process of particle transport due to stochastic movement. Leading experts in the field describe in 23 chapters the different aspects of diffusion, covering microscopic and macroscopic experimental techniques and exemplary results for various classes of solids, liquids and interfaces as well as several theoretical concepts and models. Students and scientists in physics, chemistry, materials science, and biology will benefit from this detailed compilation.

  8. Extrudate Expansion Modelling through Dimensional Analysis Method

    DEFF Research Database (Denmark)

    to describe the extrudates expansion. From the three dimensionless groups, an equation with three experimentally determined parameters is derived to express the extrudate expansion. The model is evaluated with whole wheat flour and aquatic feed extrusion experimental data. The average deviations...... of the correlation are respectively 5.9% and 9% for the whole wheat flour and the aquatic feed extrusion. An alternative 4-coefficient equation is also suggested from the 3 dimensionless groups. The average deviations of the alternative equation are respectively 5.8% and 2.5% in correlation with the same set...

  9. Reverberation Modelling Using a Parabolic Equation Method

    Science.gov (United States)

    2012-10-01

results obtained by other authors and methods. Abstract: ... DRDC Atlantic has developed an acoustic echo-clutter model based on adiabatic normal modes ... (PE for parabolic equation), to determine the feasibility of computing the acoustic field and the reverberation of target echoes in different ... 2012. Introduction or background: DRDC Atlantic has developed an acoustic echo-clutter model based on adiabatic normal modes to ...

  10. A Hybrid 3D Colon Segmentation Method Using Modified Geometric Deformable Models

    Directory of Open Access Journals (Sweden)

    S. Falahieh Hamidpour

    2007-06-01

Full Text Available Introduction: Nowadays virtual colonoscopy has become a reliable and efficient method for detecting primary stages of colon cancer, such as polyp detection. One of the most important and crucial stages of virtual colonoscopy is colon segmentation, because an incorrect segmentation may lead to a misdiagnosis. Materials and Methods: In this work, a hybrid method based on Geometric Deformable Models (GDM) in combination with advanced region-growing and thresholding methods is proposed. GDM are found to be an attractive tool for structure-based image segmentation, particularly for extracting objects with complicated topology. There are two main parameters influencing the overall performance of the GDM algorithm: the distance between the initial contour and the actual object's contours, and the stopping term which controls the deformation. To overcome these limitations, a two-stage hybrid segmentation method is suggested, extracting rough but precise initial contours in the first stage of the segmentation. The extracted boundaries are smoothed and improved using a modified GDM algorithm, improving the stopping terms of the algorithm based on the gradient value of image voxels. Results: The proposed algorithm was implemented on forty data sets, each containing 400-480 slices. The results show an improvement in the accuracy and smoothness of the extracted boundaries. The improvement obtained in the accuracy of segmentation is about 6% in comparison to that achieved by the methods based on thresholding and region growing only. Discussion and Conclusion: The extracted contours using the modified GDM are smoother and finer. The improvements achieved in this work in the stopping function of the GDM model, together with the two-stage segmentation of boundaries, have resulted in a great improvement in the computational efficiency of the GDM algorithm while producing smoother and finer colon borders.

  11. Current status of uncertainty analysis methods for computer models

    International Nuclear Information System (INIS)

    Ishigami, Tsutomu

    1989-11-01

    This report surveys several existing uncertainty analysis methods for estimating computer output uncertainty caused by input uncertainties, illustrating application examples of those methods to three computer models, MARCH/CORRAL II, TERFOC and SPARC. Merits and limitations of the methods are assessed in the application, and recommendation for selecting uncertainty analysis methods is provided. (author)

  12. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, however it is not always clear which method is the appropriate one to choose. To this end, three approaches to estimation in the theta...... Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...

  13. Laser filamentation mathematical methods and models

    CERN Document Server

    Lorin, Emmanuel; Moloney, Jerome

    2016-01-01

This book is focused on the nonlinear theoretical and mathematical problems associated with ultrafast intense laser pulse propagation in gases and, in particular, in air. With the aim of understanding the physics of filamentation in gases, solids, the atmosphere, and even biological tissue, specialists in nonlinear optics and filamentation from both physics and mathematics attempt to rigorously derive and analyze relevant non-perturbative models. Modern laser technology allows the generation of ultrafast (few-cycle) laser pulses, with intensities exceeding the internal electric field in atoms and molecules (E = 5×10⁹ V/cm, or intensity I = 3.5×10¹⁶ W/cm²). The interaction of such pulses with atoms and molecules leads to new, highly nonlinear non-perturbative regimes, where new physical phenomena, such as High Harmonic Generation (HHG), occur, and from which the shortest (attosecond - the natural time scale of the electron) pulses have been created. One of the major experimental discoveries in this nonlinear...

  14. A compositional method to model dependent failure behavior based on PoF models

    Directory of Open Access Journals (Sweden)

    Zhiguo ZENG

    2017-10-01

Full Text Available In this paper, a new method is developed to model dependent failure behavior among failure mechanisms. Unlike existing methods, the developed method models the root cause of the dependency explicitly, so that a deterministic model, rather than a probabilistic one, can be established. The developed method comprises three steps. First, physics-of-failure (PoF) models are utilized to model each failure mechanism. Then, interactions among failure mechanisms are modeled as a combination of three basic relations: competition, superposition and coupling. This is the reason why the method is referred to as the "compositional method". Finally, the PoF models and the interaction model are combined to develop a deterministic model of the dependent failure behavior. As a demonstration, the method is applied to an actual spool and the developed failure behavior model is validated by a wear test. The result demonstrates that the compositional method is an effective way to model dependent failure behavior.
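
A hedged sketch of two of the three composition relations, competition and superposition, with invented wear and fatigue PoF laws standing in for the paper's spool models:

```python
# Sketch of the "compositional" idea: model each failure mechanism with its own
# physics-of-failure (PoF) law, then compose them. Competition = the mechanism
# reaching its threshold first fails the part; superposition = degradations of
# the same kind add up. All laws, limits and numbers below are invented.
import numpy as np

t = np.linspace(0.0, 1e4, 10_001)            # operating hours

wear    = 2.0e-4 * t                         # PoF model 1: linear wear, mm
fatigue = 1.0e-12 * t ** 3                   # PoF model 2: crack growth, mm

# Superposition: two wear-type contributions on the same surface add up.
total_wear = wear + 0.5e-4 * t

# Competition: the component fails when the first mechanism hits its limit.
t_wear    = t[np.argmax(total_wear >= 1.0)]  # 1.0 mm wear limit
t_fatigue = t[np.argmax(fatigue >= 0.5)]     # 0.5 mm critical crack length
print(f"wear life {t_wear:.0f} h, fatigue life {t_fatigue:.0f} h, "
      f"component life {min(t_wear, t_fatigue):.0f} h")
```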

  15. METHODICAL MODEL FOR TEACHING BASIC SKI TURN

    Directory of Open Access Journals (Sweden)

    Danijela Kuna

    2013-07-01

    With the aim of forming an expert model of the most important operators for teaching the basic ski turn in ski schools, an experiment was conducted on a sample of 20 ski experts from different countries (Croatia, Bosnia and Herzegovina, and Slovenia). From the group of the most commonly used operators for teaching the basic ski turn, the experts picked the 6 most important: uphill turn and jumping into snowplough, basic turn with hand sideways, basic turn with clapping, ski poles in front, ski poles on neck, and uphill turn with active ski guiding. Afterwards, ranking and selection of the most efficient operators was carried out. In line with the aim of the research, a chi-square test was used to examine the differences between frequencies of chosen operators, differences between values of the most important operators, and differences between experts by nationality. Statistically significant differences were found between frequencies of chosen operators (χ² = 24.61; p = 0.01), while differences between values of the most important operators were not (χ² = 1.94; p = 0.91). Differences between experts by nationality were only noticeable in the expert evaluation of the ski poles on neck operator (χ² = 7.83; p = 0.02). The results of the current research provide useful information about the methodological principles of organizing basic ski turn learning in ski schools.

  16. A Parametric Modelling Method for Dexterous Finger Reachable Workspaces

    Directory of Open Access Journals (Sweden)

    Wenzhen Yang

    2016-03-01

    Well-known algorithms, such as the graphic method, the analytical method and the numerical method, have some defects when modelling the dexterous-finger workspace, which is a significant kinematical feature of dexterous hands and valuable for grasp planning, motion control and mechanical design. A novel modelling method with convenient and parametric performance is introduced to generate the dexterous-finger reachable workspace. This method constructs the geometric topology of the dexterous-finger reachable workspace, and uses a joint feature recognition algorithm to extract the kinematical parameters of the dexterous finger. Compared with the graphic, analytical and numerical methods, this parametric modelling method can automatically and conveniently construct a more vivid form and contour of the dexterous-finger workspace. The main contribution of this paper is that a workspace-modelling tool with high interactive efficiency is developed for designers to precisely visualize the dexterous-finger reachable workspace, which is valuable for analysing the flexibility of the dexterous finger.

  17. Modeling shallow water flows using the discontinuous Galerkin method

    CERN Document Server

    Khan, Abdul A

    2014-01-01

    Replacing the Traditional Physical Model Approach: Computational models offer promise in improving the modeling of shallow water flows. As new techniques are considered, the process continues to change and evolve. Modeling Shallow Water Flows Using the Discontinuous Galerkin Method examines a technique that focuses on hyperbolic conservation laws and includes one-dimensional and two-dimensional shallow water flows and pollutant transports. Combines the Advantages of Finite Volume and Finite Element Methods: This book explores the discontinuous Galerkin (DG) method, also known as the discontinuous finite element method, in depth. It introduces the DG method and its application to shallow water flows, as well as background information for implementing and applying this method for natural rivers. It considers dam-break problems, shock wave problems, and flows in different regimes (subcritical, supercritical, and transcritical). Readily Adaptable to the Real World: While the DG method has been widely used in the fie...

  19. Hydrogeological characterization of Back Forty area, Albany Research Center, Albany, Oregon

    International Nuclear Information System (INIS)

    Tsai, S.Y.; Smith, W.H.

    1983-12-01

    Radiological surveys were conducted to determine the potential migration of radionuclides from the waste area to the area commonly referred to as the Back Forty, located in the southern portion of the ARC site. The survey results indicated that parts of the Back Forty contain soils contaminated with uranium, thorium, and their associated decay products. A hydrogeologic characterization study was conducted at the Back Forty as part of an effort to more thoroughly assess radionuclide migration in the area. The objectives of the study were: (1) to define the soil characteristics and stratigraphy at the site, (2) to describe the general conditions of each geologic unit, and (3) to determine the direction and hydraulic gradient of areal groundwater flow. The site investigation activities included a literature review of existing hydrogeological data for the Albany area, onsite borehole drilling, and measurement of groundwater levels. 7 references, 9 figures, 2 tables

  20. "Method, system and storage medium for generating virtual brick models"

    DEFF Research Database (Denmark)

    2009-01-01

    An exemplary embodiment is a method for generating a virtual brick model. The virtual brick models are generated by users and uploaded to a centralized host system. Users can build virtual models themselves or download and edit another user's virtual brick models while retaining the identity...... of the original virtual brick model. Routines are provided for both storing user created building steps in and generating automated building instructions for virtual brick models, generating a bill of materials for a virtual brick model and ordering physical bricks corresponding to a virtual brick model....

  1. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or to modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high-risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of: (1) the optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) the optimal pressure load model to be
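
    For contrast with the decision-theoretic approach advocated above, the sketch below shows the classical, purely data-driven route the report argues can be unreliable when data is limited: choosing the order of a polynomial model by Akaike's information criterion. The synthetic data and the candidate class are invented for illustration; the report's utility-based formulation is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 40)
        y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0, 0.1, x.size)  # synthetic data

        def aic_for_order(p):
            """Gaussian AIC for a least-squares polynomial fit of order p."""
            coeffs = np.polyfit(x, y, p)
            resid = y - np.polyval(coeffs, x)
            sigma2 = np.mean(resid**2)              # ML estimate of noise variance
            loglik = -0.5 * x.size * (np.log(2 * np.pi * sigma2) + 1)
            k = p + 2                               # polynomial coefficients + variance
            return 2 * k - 2 * loglik

        aics = {p: aic_for_order(p) for p in range(1, 7)}
        best = min(aics, key=aics.get)              # candidate with the lowest AIC
        print(best, {p: round(a, 1) for p, a in aics.items()})

    A decision-theoretic selection would replace the AIC score with the expected utility of each candidate under the intended model use.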

  2. IDEF method-based simulation model design and development framework

    Directory of Open Access Journals (Sweden)

    Ki-Young Jeong

    2009-09-01

    The purpose of this study is to provide an IDEF method-based integrated framework for business process simulation models, reducing model development time by increasing communication and knowledge reusability during a simulation project. In this framework, simulation requirements are collected by a function modeling method (IDEF0) and a process modeling method (IDEF3). Based on these requirements, a common data model is constructed using the IDEF1X method. From this reusable data model, multiple simulation models are automatically generated using a database-driven simulation model development approach. The framework is claimed to help both the requirement collection and experimentation phases of a simulation project by improving system knowledge, model reusability, and maintainability through the systematic use of three descriptive IDEF methods and the features of relational database technologies. A complex semiconductor fabrication case study was used as a testbed to evaluate and illustrate the concepts and the framework. Two different simulation software products were used to develop and control the semiconductor model from the same knowledge base. The case study empirically showed that this framework could help improve simulation project processes by using IDEF-based descriptive models and relational database technology. The authors also concluded that this framework could easily be applied to other analytical model generation by separating the logic from the data.

  3. Analysis of image findings in forty-one patients with primary lymphoma of the bone

    International Nuclear Information System (INIS)

    Yu Baohai; Liu Jie; Zhong Zhiwei; Zhao Jingpin; Peng Zhigang; Liu Jicun; Wu Wenjuan

    2011-01-01

    Objective: To analyze the imaging features of primary lymphoma of the bone, and to discuss the special 'floating ice sign'. Methods: Forty-one cases of primary lymphoma of the bone seen in our unit from January 1963 to June 2009 were retrospectively studied. All 41 patients underwent X-ray examination, 20 underwent CT examination, and 12 underwent MR examination (3 of them with contrast enhancement). Results: Involvement of flat bones was seen in 12 cases, the vertebral column was affected in 8 cases, 17 cases showed lesions in long bones, and irregular bones were involved in 4 cases. The most common location was the femur (10, 24.4%), followed by the ilium (8, 19.5%). Lesions were found in the metaphyses of the long bones in 11 cases (64.7%). The 'floating ice sign' was seen in the calcaneus of 2 patients and in the lumbar vertebrae of 2 cases, accounting for 9.8% of all cases. Slight bone destruction with a soft tissue mass on CT images was found in 12 cases, while an obvious soft tissue mass was found in 9 cases. No periosteal reaction was found in 37 cases (90.2%). MRI examinations of 12 patients revealed a soft tissue mass in 10 patients, and the extent of the lesion was larger on MR than on CT. One case showed extensive bone destruction on MR but inconspicuous bone destruction on the X-ray plain film and CT scan. Conclusion: Slight bone destruction with a conspicuous soft tissue mass, or conspicuous bone destruction on MR with slight or inconspicuous bone destruction on X-ray film and CT, strongly suggests the diagnosis of primary lymphoma of the bone. The 'floating ice sign' is a special imaging feature of primary lymphoma of the bone, which can be used as a clue for the diagnosis of lymphoma. (authors)

  4. Simulation of arc models with the block modelling method

    NARCIS (Netherlands)

    Thomas, R.; Lahaye, D.J.P.; Vuik, C.; Van der Sluis, L.

    2015-01-01

    Simulation of current interruption is currently performed with non-ideal switching devices for large power systems. Nevertheless, for small networks, non-ideal switching devices can be substituted by arc models. However, this substitution has a negative impact on the computation time. At the same

  5. Advanced methods of solid oxide fuel cell modeling

    CERN Document Server

    Milewski, Jaroslaw; Santarelli, Massimo; Leone, Pierluigi

    2011-01-01

    Fuel cells are widely regarded as the future of the power and transportation industries. Intensive research in this area now requires new methods of fuel cell operation modeling and cell design. Typical mathematical models are based on the physical process description of fuel cells and require a detailed knowledge of the microscopic properties that govern both chemical and electrochemical reactions. "Advanced Methods of Solid Oxide Fuel Cell Modeling" proposes the alternative methodology of generalized artificial neural network (ANN) solid oxide fuel cell (SOFC) modeling. "Advanced Methods...

  6. Fuzzy Clustering Methods and their Application to Fuzzy Modeling

    DEFF Research Database (Denmark)

    Kroszynski, Uri; Zhou, Jianjun

    1999-01-01

    Fuzzy modeling techniques based upon the analysis of measured input/output data sets result in a set of rules that allow prediction of system outputs from given inputs. Fuzzy clustering methods for system modeling and identification result in relatively small rule-bases, allowing fast, yet accurate...... A method to obtain an optimized number of clusters is outlined. Based upon the cluster's characteristics, a behavioural model is formulated in terms of a rule-base and an inference engine. The article reviews several variants for the model formulation. Some limitations of the methods are listed......
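
    As a concrete point of reference for the clustering step, here is a minimal fuzzy c-means iteration in NumPy. The synthetic two-blob data, the cluster count c = 2 and the fuzzifier m = 2 are illustrative choices, not taken from the article.

        import numpy as np

        def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
            """Minimal fuzzy c-means: returns cluster centers and memberships U."""
            rng = np.random.default_rng(seed)
            U = rng.random((X.shape[0], c))
            U /= U.sum(axis=1, keepdims=True)       # memberships sum to 1 per point
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / (d ** (2 / (m - 1)))      # standard FCM membership update
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        # Two synthetic blobs; each row of U gives a point's degree of membership.
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
        centers, U = fuzzy_c_means(X, c=2)
        print(centers.round(2))

    Each cluster would then be turned into one rule of the behavioural model, with the membership functions derived from the cluster's characteristics.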

  7. [Model transfer method based on support vector machine].

    Science.gov (United States)

    Xiong, Yu-hong; Wen, Zhi-yu; Liang, Yu-qian; Chen, Qin; Zhang, Bo; Liu, Yu; Xiang, Xian-yi

    2007-01-01

    Model transfer is a basic method for making spectrometer data universal and comparable by seeking a mathematical transformation relation among different spectrometers. Because nonlinear effects and small calibration sample sets are common in practice, it is important to solve the model transfer problem under conditions of pronounced nonlinearity and limited samples. This paper summarizes support vector machine theory, puts forward a model transfer method based on the support vector machine and piecewise direct standardization, and uses computer simulation to give an example explaining the method and comparing it with an artificial neural network.
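
    A minimal sketch of the idea, assuming paired measurements of the same standardization samples on a master and a slave instrument and using scikit-learn's SVR as the nonlinear transfer function. The data and kernel settings are invented, and the paper's piecewise direct standardization step is not reproduced.

        import numpy as np
        from sklearn.svm import SVR

        # Hypothetical paired responses of the same samples on a slave and a master
        # spectrometer (transfer is often done channel-by-channel or on windows).
        rng = np.random.default_rng(0)
        slave = rng.uniform(0, 1, (30, 1))
        master = 0.9 * slave[:, 0] + 0.05 * np.sin(6 * slave[:, 0])  # nonlinear offset

        # Fit an SVR that maps slave responses onto the master instrument's scale.
        transfer = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(slave, master)

        new_slave = np.array([[0.25], [0.75]])
        print(transfer.predict(new_slave))  # slave readings on the master scale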

  8. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

    Models are playing important roles in design and analysis of chemicals/bio-chemicals based products and the processes that manufacture them. Model-based methods and tools have the potential to decrease the number of experiments, which can be expensive and time consuming, and point to candidates......, where the experimental effort could be focused. In this project a general modelling framework for systematic model building through modelling templates, which supports the reuse of existing models via its new model import and export capabilities, has been developed. The new feature for model transfer...... has been developed by establishing a connection with an external modelling environment for code generation. The main contribution of this thesis is the creation of modelling templates and their connection with other modelling tools within a modelling framework. The goal was to create a user...

  9. Systems and methods for modeling and analyzing networks

    Science.gov (United States)

    Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W

    2013-10-29

    The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models from data, and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables, such as a compound of interest, and dependent variables, such as measured DNA alterations and changes in mRNA, protein, and metabolites, to phenotypic readouts of efficacy and toxicity.

  10. Model-Based Methods in the Biopharmaceutical Process Lifecycle.

    Science.gov (United States)

    Kroll, Paul; Hofer, Alexandra; Ulonska, Sophia; Kager, Julian; Herwig, Christoph

    2017-12-01

    Model-based methods are increasingly used in all areas of biopharmaceutical process technology. They can be applied in the fields of experimental design, process characterization, process design, monitoring and control. Benefits of these methods are lower experimental effort, process transparency, clear rationality behind decisions and increased process robustness. The possibility of applying methods adopted from different scientific domains accelerates this trend further. In addition, model-based methods can help to implement regulatory requirements as suggested by recent Quality by Design and validation initiatives. The aim of this review is to give an overview of the state of the art of model-based methods, their applications, further challenges and possible solutions in the biopharmaceutical process lifecycle. Today, despite these advantages, the potential of model-based methods is still not fully exploited in bioprocess technology. This is due to a lack of (i) acceptance by users, (ii) user-friendly tools provided by existing methods, (iii) implementation in existing process control systems and (iv) clear workflows for setting up specific process models. We propose that model-based methods be applied throughout the lifecycle of a biopharmaceutical process, starting with the set-up of a process model, which is used for monitoring and control of process parameters, and ending with continuous and iterative process improvement via data mining techniques.

  11. Comparison of surrogate models with different methods in ...

    Indian Academy of Sciences (India)

    ... and kriging methods were compared for building surrogate models of a multiphase flow simulation model in a simplified ... Keywords: surrogate modelling; simulation optimization; groundwater remediation; polynomial regression; radial basis ... silty clay with a thickness of 1–2 m, while the lower part is made up of ...

  12. Modeling of Landslides with the Material Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2008-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  13. Comparison of surrogate models with different methods in ...

    Indian Academy of Sciences (India)

    Surrogate modelling is an effective tool for reducing computational burden of simulation optimization. In this article, polynomial regression (PR), radial basis function artificial neural network (RBFANN), and kriging methods were compared for building surrogate models of a multiphase flow simulation model in a simplified ...

  14. Two Undergraduate Process Modeling Courses Taught Using Inductive Learning Methods

    Science.gov (United States)

    Soroush, Masoud; Weinberger, Charles B.

    2010-01-01

    This manuscript presents a successful application of inductive learning in process modeling. It describes two process modeling courses that use inductive learning methods such as inquiry learning and problem-based learning, among others. The courses include a novel collection of multi-disciplinary complementary process modeling examples. They were…

  15. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  17. Method of moments estimation of GO-GARCH models

    NARCIS (Netherlands)

    Boswijk, H.P.; van der Weide, R.

    2009-01-01

    We propose a new estimation method for the factor loading matrix in generalized orthogonal GARCH (GO-GARCH) models. The method is based on the eigenvectors of a suitably defined sample autocorrelation matrix of squares and cross-products of the process. The method can therefore be easily applied to

  18. Design of nuclear power generation plants adopting model engineering method

    International Nuclear Information System (INIS)

    Waki, Masato

    1983-01-01

    The utilization of model engineering as a design method began about ten years ago in nuclear power generation plants. With this method, the result of a design can be confirmed three-dimensionally before actual production, and it is a quick and sure way of meeting the various needs in design promptly. The adoption of models aims mainly at improving the quality of design, since high safety is required of nuclear power plants in spite of their complex structure. The layout of nuclear power plants and piping design require model engineering to arrange an enormous quantity of items rationally in a limited period. Model engineering takes two forms, the use of check models and the use of design models; recently, the latter has mainly been adopted. The procedure of model manufacturing and engineering is explained. After model engineering has been completed, the model information must be expressed in drawings, and the automation of this process has been attempted by various methods. The computer processing of design is in progress, and its role is explained (CAD system). (Kako, I.)

  19. Method of modeling the cognitive radio using Opnet Modeler

    OpenAIRE

    Yakovenko, I. V.; Poshtarenko, V. M.; Kostenko, R. V.

    2012-01-01

    This article reviews the first wireless standard based on cognitive radio networks and motivates the need for wireless networks based on cognitive radio technology. An example of the use of the IEEE 802.22 standard in a WiMAX network was implemented in the Opnet Modeler simulation environment, with plots checking the performance of the HTTP and FTP protocols in the CR network. The simulation results justify the use of the IEEE 802.22 standard in wireless networks.

  20. Promoting Educational Success Forty-Five Years after "Brown." Special Issue on Education.

    Science.gov (United States)

    Southern Changes, 1999

    1999-01-01

    Forty-five years after the "Brown v. Board of Education" decision, the United States still faces the realities of institutional resistance to change. This special issue reviews the past decade of work by the Southern Regional Council to overcome inequality in education in the context of that organization's long struggle. Selections…

  1. Non-monotonic modelling from initial requirements: a proposal and comparison with monotonic modelling methods

    NARCIS (Netherlands)

    Marincic, J.; Mader, Angelika H.; Wupper, H.; Wieringa, Roelf J.

    2008-01-01

    Researchers make a significant effort to develop new modelling languages and tools. However, they spend less effort developing methods for constructing models using these languages and tools. We are developing a method for building an embedded system model for formal verification. Our method

  2. Domain decomposition methods in FVM approach to gravity field modelling.

    Science.gov (United States)

    Macák, Marek

    2017-04-01

    The finite volume method (FVM) can be straightforwardly implemented as a numerical method for global or local gravity field modelling. This discretization method solves the geodetic boundary value problems in a space domain. In order to obtain precise numerical solutions, it usually requires a very refined discretization, leading to large-scale parallel computations. To optimize such computations, we present a special class of numerical techniques that are based on a physical decomposition of the global solution domain. Domain decomposition (DD) methods such as the Multiplicative Schwarz Method and the Additive Schwarz Method are very efficient methods for solving partial differential equations. We briefly present their mathematical formulations and test their efficiency. The numerical experiments presented deal with gravity field modelling. Since there is no need to solve special interface problems between neighbouring subdomains, in our applications we use the overlapping DD methods.
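
    The flavor of an overlapping Schwarz iteration can be seen on a toy one-dimensional Poisson problem, a stand-in chosen for brevity rather than the geodetic boundary value problem of the abstract. Both subdomain solves below use the previous global iterate, as in the parallel/additive variant; the grid size, overlap and right-hand side are arbitrary choices.

        import numpy as np

        # Solve -u'' = 1 on (0,1) with u(0) = u(1) = 0 on two overlapping subdomains.
        n = 101
        h = 1.0 / (n - 1)
        f = np.ones(n)
        u = np.zeros(n)

        sub1 = np.arange(1, 65)     # interior nodes of subdomain 1
        sub2 = np.arange(36, 100)   # interior nodes of subdomain 2 (overlap: 36..64)

        def local_solve(idx, u_glob):
            """Solve -u'' = f on the nodes idx with Dirichlet data from u_glob."""
            m = idx.size
            A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
            b = f[idx].copy()
            b[0]  += u_glob[idx[0] - 1] / h**2    # left interface value
            b[-1] += u_glob[idx[-1] + 1] / h**2   # right interface value
            return np.linalg.solve(A, b)

        for _ in range(50):
            v1 = local_solve(sub1, u)   # both subproblems use the same previous
            v2 = local_solve(sub2, u)   # global iterate (parallel Schwarz)
            u_new = u.copy()
            u_new[sub1] = v1
            u_new[sub2] = v2            # overlap region takes subdomain 2's values
            u = u_new

        xg = np.linspace(0, 1, n)
        exact = 0.5 * xg * (1 - xg)
        print(np.max(np.abs(u - exact)))  # converges toward the exact solution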

  3. A RECREATION OPTIMIZATION MODEL BASED ON THE TRAVEL COST METHOD

    OpenAIRE

    Hof, John G.; Loomis, John B.

    1983-01-01

    A recreation allocation model is developed which efficiently selects recreation areas and degree of development from an array of proposed and existing sites. The model does this by maximizing the difference between gross recreation benefits and travel, investment, management, and site-opportunity costs. The model presented uses the Travel Cost Method for estimating recreation benefits within an operations research framework. The model is applied to selection of potential wilderness areas in C...

  4. Numerical methods for modeling photonic-crystal VCSELs

    DEFF Research Database (Denmark)

    Dems, Maciej; Chung, Il-Sug; Nyakas, Peter

    2010-01-01

    We show a comparison of four different numerical methods for simulating photonic-crystal (PC) VCSELs. We present the theoretical basis behind each method and analyze the differences by studying a benchmark VCSEL structure, where the PC structure penetrates all VCSEL layers, the entire top-mirror DBR...... to the effective index method. The simulation results elucidate the strengths and weaknesses of the analyzed methods and outline the limits of applicability of the different models....

  5. A Model-Driven Development Method for Management Information Systems

    Science.gov (United States)

    Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki

    Traditionally, a Management Information System (MIS) has been developed without using formal methods. With informal methods, the MIS is developed over its lifecycle without any models, which causes many problems, such as a lack of reliability in system design specifications. In order to overcome these problems, a model theory approach was proposed. The approach is based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly respond to changes in business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems, applying the model-driven development method to a component of the model theory approach. The experiment showed that the method reduces development effort by more than 30%.

  6. Extension of local front reconstruction method with controlled coalescence model

    Science.gov (United States)

    Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.

    2018-02-01

    The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulating such flows with standard fixed-grid methods, due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed-grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, the LFRM predicts the droplet collisions better than other fixed-grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method), especially at high velocity. When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that the LFRM coupled with film drainage models is much better at predicting the collision dynamics than the traditional methods.

  7. Prospective Mathematics Teachers' Opinions about Mathematical Modeling Method and Applicability of This Method

    Science.gov (United States)

    Akgün, Levent

    2015-01-01

    The aim of this study is to identify prospective secondary mathematics teachers' opinions about the mathematical modeling method and the applicability of this method in high schools. The case study design, which is among the qualitative research methods, was used in the study. The study was conducted with six prospective secondary mathematics…

  8. Stencil method: a Markov model for transport in porous media

    Science.gov (United States)

    Delgoshaie, A. H.; Tchelepi, H.; Jenny, P.

    2016-12-01

    In porous media the transport of fluid is dominated by flow-field heterogeneity resulting from the underlying transmissibility field. Since the transmissibility is highly uncertain, many realizations of a geological model are used to describe the statistics of the transport phenomena in a Monte Carlo framework. One possible way to avoid the high computational cost of physics-based Monte Carlo simulations is to model the velocity field as a Markov process and use Markov chain Monte Carlo. In previous works, multiple Markov models for discrete velocity processes have been proposed. These models can be divided into two general classes: Markov models in time and Markov models in space. Both of these choices have been shown to be effective to some extent. However, some studies have suggested that the Markov property cannot be confirmed for a temporal Markov process; therefore there is no consensus about the validity and value of Markov models in time. Moreover, previous spatial Markov models have only been used for modeling transport on structured networks and cannot be readily applied to model transport in unstructured networks. In this work we propose a novel approach for constructing a Markov model in time (the stencil method) for a discrete velocity process. The results from the stencil method are compared to previously proposed spatial Markov models for structured networks. The stencil method is also applied to unstructured networks and can successfully describe the dispersion of particles in this setting. Our conclusion is that both temporal and spatial Markov models for discrete velocity processes can be valid for a range of model parameters. Moreover, we show that the stencil model can be more efficient in many practical settings and is suited to model dispersion both on structured and unstructured networks.
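
    To make the temporal Markov idea concrete, the sketch below fits a transition matrix for a discretized velocity process from a training trajectory and then reuses the fitted chain to simulate dispersion cheaply. The three velocity classes and transition probabilities are invented; the paper's actual stencil construction is more elaborate.

        import numpy as np

        rng = np.random.default_rng(0)
        states = np.array([-1.0, 0.5, 2.0])          # hypothetical velocity classes
        P_true = np.array([[0.80, 0.15, 0.05],
                           [0.20, 0.60, 0.20],
                           [0.05, 0.25, 0.70]])

        def simulate(P, n_steps, s0=0):
            """Sample a state path from the Markov chain with transition matrix P."""
            s, path = s0, [s0]
            for _ in range(n_steps):
                s = rng.choice(3, p=P[s])
                path.append(s)
            return np.array(path)

        train = simulate(P_true, 20000)               # synthetic training trajectory

        # Maximum-likelihood estimate of the transition matrix from state pairs.
        P_hat = np.zeros((3, 3))
        for a, b in zip(train[:-1], train[1:]):
            P_hat[a, b] += 1
        P_hat /= P_hat.sum(axis=1, keepdims=True)

        # Use the fitted chain to model particle dispersion (unit time steps).
        positions = np.array([states[simulate(P_hat, 200)].sum() for _ in range(500)])
        print(P_hat.round(2), positions.var().round(1))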

  9. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea...

  10. A test method for verifying a system survival probability assessment model

    International Nuclear Information System (INIS)

    Jia Rui; Wu Qiang; Fu Jiwei; Cao Leituan; Zhang Junnan

    2014-01-01

    Subject to the limitations of funding and test conditions, the number of sub-samples in tests of large complex systems is often small. Under single-sample conditions, making an accurate evaluation of performance is important for the reinforcement of complex systems. The technical maturity of an assessment model can be significantly improved if the model can be experimentally validated. This paper presents a test method for verifying a system survival probability assessment model; using sample test results from the test system, the method verifies the correctness of the assessment model and its a priori information. (authors)

  11. Modeling of indoor/outdoor fungi relationships in forty-four homes

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, M.J.

    1996-12-31

    From April through October 1994, a study was conducted in the Moline, Illinois-Bettendorf, Iowa area to measure bioaerosol concentrations in 44 homes housing a total of 54 asthmatic individuals. Air was sampled 3 to 10 times at each home over a period of seven months. A total of 852 pairs of individual samples were collected indoors at up to three locations (basement, kitchen, bedroom, or living room) and outside within two meters of each house.

  12. Models and estimation methods for clinical HIV-1 data

    Science.gov (United States)

    Verotta, Davide

    2005-12-01

    Clinical HIV-1 data include many individual factors, such as compliance to treatment, pharmacokinetics, variability in respect to viral dynamics, race, sex, income, etc., which might directly influence or be associated with clinical outcome. These factors need to be taken into account to achieve a better understanding of clinical outcome and mathematical models can provide a unifying framework to do so. The first objective of this paper is to demonstrate the development of comprehensive HIV-1 dynamics models that describe viral dynamics and also incorporate different factors influencing such dynamics. The second objective of this paper is to describe alternative estimation methods that can be applied to the analysis of data with such models. In particular, we consider: (i) simple but effective two-stage estimation methods, in which data from each patient are analyzed separately and summary statistics derived from the results, (ii) more complex nonlinear mixed effect models, used to pool all the patient data in a single analysis. Bayesian estimation methods are also considered, in particular: (iii) maximum posterior approximations, MAP, and (iv) Markov chain Monte Carlo, MCMC. Bayesian methods incorporate prior knowledge into the models, thus avoiding some of the model simplifications introduced when the data are analyzed using two-stage methods, or a nonlinear mixed effect framework. We demonstrate the development of the models and the different estimation methods using real AIDS clinical trial data involving patients receiving multiple drugs regimens.
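
    A minimal sketch of the two-stage approach (i), assuming a simple single-phase exponential viral decay model and synthetic patients. The model, time grid and population values are invented; the paper's comprehensive models and mixed effect/Bayesian machinery are not reproduced.

        import numpy as np
        from scipy.optimize import curve_fit

        def decay(t, logV0, delta):
            """Log10 viral load under first-order decay: V(t) = V0 * exp(-delta t)."""
            return logV0 - delta * t

        rng = np.random.default_rng(0)
        t = np.array([0, 2, 4, 7, 14, 21], dtype=float)   # sampling days
        estimates = []
        for _ in range(12):                                # 12 synthetic patients
            true_logV0 = rng.normal(5.0, 0.3)
            true_delta = rng.normal(0.35, 0.05)
            y = decay(t, true_logV0, true_delta) + rng.normal(0, 0.1, t.size)
            # Stage 1: fit each patient separately.
            (logV0_hat, delta_hat), _ = curve_fit(decay, t, y, p0=[5.0, 0.3])
            estimates.append([logV0_hat, delta_hat])

        # Stage 2: summarize the individual estimates at the population level.
        estimates = np.array(estimates)
        print("mean:", estimates.mean(axis=0).round(3),
              "sd:", estimates.std(axis=0, ddof=1).round(3))

    A nonlinear mixed effect or Bayesian analysis would instead pool all patients in one model, borrowing strength across individuals rather than summarizing after the fact.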

  13. Compositions and methods for modeling Saccharomyces cerevisiae metabolism

    DEFF Research Database (Denmark)

    2012-01-01

    The invention provides an in silico model for determining a S. cerevisiae physiological function. The model includes a data structure relating a plurality of S. cerevisiae reactants to a plurality of S. cerevisiae reactions, a constraint set for the plurality of S. cerevisiae reactions, and commands for determining a distribution of flux through the reactions that is predictive of a S. cerevisiae physiological function. A model of the invention can further include a gene database containing information characterizing the associated gene or genes. The invention further provides methods for making an in silico S. cerevisiae model and methods for determining a S. cerevisiae physiological function using a model of the invention.

  14. Models and methods for hot spot safety work

    DEFF Research Database (Denmark)

    Vistisen, Dorte

    2002-01-01

    Hot spot safety work is the task of improving road safety through alterations of the geometrical and environmental characteristics of the existing road network. The presently applied models and methods in hot spot safety work on the Danish road network were developed about two decades ago, when data was more limited and software and statistical methods less developed. The purpose of this thesis is to contribute to improving the state of the art in Denmark. The basis for systematic hot spot safety work are the models describing the variation in accident counts on the road network. In the thesis, hierarchical models disaggregated on time...... are derived. The proposed models are shown to describe variation in accident counts better than the models currently in use in Denmark. The parameters of the models are estimated for the national and regional road network using data from the Road Sector Information system, VIS. No specific accident models......

  15. Mean photon number dependent variational method to the Rabi model

    International Nuclear Information System (INIS)

    Liu, Maoxin; Ying, Zu-Jian; Luo, Hong-Gang; An, Jun-Hong

    2015-01-01

    We present a mean photon number dependent variational method, which works well in the whole coupling regime if the photon energy is dominant over the spin-flipping, to evaluate the properties of the Rabi model for both the ground state and excited states. For the ground state, it is shown that the previous approximate methods, the generalized rotating-wave approximation (only working well in the strong coupling limit) and the generalized variational method (only working well in the weak coupling limit), can be recovered in the corresponding coupling limits. The key point of our method is to tailor the merits of these two existing methods by introducing a mean photon number dependent variational parameter. For the excited states, our method yields considerable improvements over the generalized rotating-wave approximation. The variational method proposed could be readily applied to more complex models, for which it is difficult to formulate an analytic formula. (paper)
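
    For reference, the quantum Rabi Hamiltonian in its standard textbook form (notation assumed here, not copied from the paper), where ω is the photon energy, Ω the level splitting and g the coupling strength; the regime the method targets is ω dominant over the spin-flipping term:

        H = \omega\, a^{\dagger} a + \frac{\Omega}{2}\,\sigma_z + g\,\sigma_x \left(a^{\dagger} + a\right)

    The variational ansatz then makes its parameters depend on the mean photon number of the trial state, which is what lets it interpolate between the weak- and strong-coupling approximations.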

  16. Physical Model Method for Seismic Study of Concrete Dams

    Directory of Open Access Journals (Sweden)

    Bogdan Roşca

    2008-01-01

    The study of the dynamic behaviour of concrete dams by means of the physical model method is very useful for understanding the failure mechanism of these structures under the action of strong earthquakes. The physical model method consists of two main processes. First, a study model must be designed through a physical modeling process using dynamic modeling theory, the result being a system of equations for dimensioning the physical model. After the construction and instrumentation of the scale physical model, a structural analysis based on experimental means is performed, and the experimental results are gathered and made available for analysis. Depending on the aim of the research, either an elastic or a failure physical model may be designed. The requirements for constructing an elastic model are easier to fulfil than those for a failure model, but the results obtained provide narrower information. In order to study the behaviour of concrete dams under strong seismic action, failure physical models are required that can accurately simulate the possible opening of joints, sliding between concrete blocks and the cracking of concrete. The design relations for both elastic and failure physical models are based on dimensional analysis and consist of similitude relations among the physical quantities involved in the phenomenon. The use of physical models of large or medium dimensions, together with their instrumentation, offers great advantages, but this operation involves a large amount of financial, logistic and time resources.

  17. Extending product modeling methods for integrated product development

    DEFF Research Database (Denmark)

    Bonev, Martin; Wörösch, Michael; Hauksdóttir, Dagný

    2013-01-01

    Despite great efforts within the modeling domain, the majority of methods often address the uncommon design situation of an original product development. However, studies illustrate that development tasks are predominantly related to redesigning, improving, and extending already existing products...

  18. Gas/Aerosol partitioning: a simplified method for global modeling

    NARCIS (Netherlands)

    Metzger, S.M.

    2000-01-01

    The main focus of this thesis is the development of a simplified method to routinely calculate gas/aerosol partitioning of multicomponent aerosols and aerosol associated water within global atmospheric chemistry and climate models. Atmospheric aerosols are usually multicomponent mixtures,

  19. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%) but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream
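
    A stripped-down illustration of the hierarchical idea, written as a Gibbs sampler for regional means with fixed variances. The data are synthetic and the model is a toy; this is not the SPARROW streamflow model.

        import numpy as np

        # Hierarchical normal model:
        #   y_ij ~ N(theta_j, sigma^2),  theta_j ~ N(mu, tau^2),  flat prior on mu.
        # sigma and tau are fixed here for brevity.
        rng = np.random.default_rng(0)
        sigma, tau = 1.0, 0.5
        data = [rng.normal(m, sigma, size=20) for m in (1.0, 1.5, 3.0, 2.2)]  # 4 regions

        n_iter, J = 2000, len(data)
        mu, theta = 0.0, np.zeros(J)
        draws = np.zeros((n_iter, J + 1))
        for it in range(n_iter):
            for j, y in enumerate(data):
                # Conjugate update: theta_j given mu and region j's data.
                prec = len(y) / sigma**2 + 1 / tau**2
                mean = (y.sum() / sigma**2 + mu / tau**2) / prec
                theta[j] = rng.normal(mean, prec**-0.5)
            # Conjugate update of the population mean given the theta_j.
            mu = rng.normal(theta.mean(), tau / np.sqrt(J))
            draws[it] = np.append(theta, mu)

        print(draws[500:].mean(axis=0).round(2))  # posterior means after burn-in

    The regional means are partially pooled toward the population mean, which is the mechanism behind the "borrowing of strength" that improves predictions in data-poor regions.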

  20. Multifunctional Collaborative Modeling and Analysis Methods in Engineering Science

    Science.gov (United States)

    Ransom, Jonathan B.; Broduer, Steve (Technical Monitor)

    2001-01-01

    Engineers are challenged to produce better designs in less time and for less cost. Hence, to investigate novel and revolutionary design concepts, accurate, high-fidelity results must be assimilated rapidly into the design, analysis, and simulation process. This assimilation should consider diverse mathematical modeling and multi-discipline interactions necessitated by concepts exploiting advanced materials and structures. Integrated high-fidelity methods with diverse engineering applications provide the enabling technologies to assimilate these high-fidelity, multi-disciplinary results rapidly at an early stage in the design. These integrated methods must be multifunctional, collaborative, and applicable to the general field of engineering science and mechanics. Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each of the method's strengths are utilized. The multifunctional methodology presented provides an effective mechanism by which domains with diverse idealizations are
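
    Generic weighted-residual interface conditions of the kind described can be written as follows (the notation is assumed, not taken from the paper): continuity of the primary variable u and equilibrium of the secondary variable t across the subdomain interface Γ_I are enforced weakly,

        \int_{\Gamma_I} \lambda \,\bigl(u^{(1)} - u^{(2)}\bigr)\, d\Gamma = 0,
        \qquad
        \int_{\Gamma_I} \delta u \,\bigl(t^{(1)} + t^{(2)}\bigr)\, d\Gamma = 0,

    where the superscripts denote the two subdomains, λ is a Lagrange-multiplier weighting function and δu a test function on the interface. This form allows each subdomain to carry its own idealization (finite difference or finite element, coarse or fine) while keeping the coupled solution consistent.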

  1. Isolation of acetylated bile acids from the sponge Siphonochalina fortis and DNA damage evaluation by the comet assay.

    Science.gov (United States)

    Patiño Cano, Laura P; Bartolotta, Susana A; Casanova, Natalia A; Siless, Gastón E; Portmann, Erika; Schejter, Laura; Palermo, Jorge A; Carballo, Marta A

    2013-10-01

    From the organic extracts of the sponge Siphonochalina fortis, collected at Bahía Bustamante, Chubut, Argentina, three major compounds were isolated and identified as deoxycholic acid 3, 12-diacetate (1), cholic acid 3, 7, 12-triacetate (2) and cholic acid, 3, 7, 12-triacetate. (3). This is the first report of acetylated bile acids in sponges and the first isolation of compound 3 as a natural product. The potential induction of DNA lesions by the isolated compounds was investigated using the comet assay in lymphocytes of human peripheral blood as in vitro model. The results showed that the administration of the bile acid derivatives would not induce DNA damages, indicating that acetylated bile acids are nontoxic metabolites at the tested concentrations. Since the free bile acids were not detected, it is unlikely that the acetylated compounds may be part of the sponge cells detoxification mechanisms. These results may suggest a possible role of acetylated bile acids as a chemical defense mechanism, product of a symbiotic relationship with microorganisms, which would explain their seasonal and geographical variation, and their influence on the previously observed genotoxicity of the organic extract of S. fortis. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Turbulence modeling methods for the compressible Navier-Stokes equations

    Science.gov (United States)

    Coakley, T. J.

    1983-01-01

    Turbulence modeling methods for the compressible Navier-Stokes equations, including several zero- and two-equation eddy-viscosity models, are described and applied. Advantages and disadvantages of the models are discussed with respect to mathematical simplicity, conformity with physical theory, and numerical compatibility with methods. A new two-equation model is introduced which shows advantages over other two-equation models with regard to numerical compatibility and the ability to predict low-Reynolds-number transitional phenomena. Calculations of various transonic airfoil flows are compared with experimental results. A new implicit upwind-differencing method is used which enhances numerical stability and accuracy, and leads to rapidly convergent steady-state solutions.
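
    For orientation, a standard two-equation eddy-viscosity closure of the k-epsilon type (the textbook form, not necessarily the new model introduced in the paper) computes the eddy viscosity from the turbulent kinetic energy k and its dissipation rate ε:

        \nu_t = C_\mu \frac{k^2}{\varepsilon}, \qquad
        \frac{Dk}{Dt} = P_k - \varepsilon
            + \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right], \qquad
        \frac{D\varepsilon}{Dt} = \frac{\varepsilon}{k}\left(C_{\varepsilon 1} P_k - C_{\varepsilon 2}\,\varepsilon\right)
            + \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]

    Here P_k is the turbulence production term and C_μ, C_ε1, C_ε2, σ_k, σ_ε are model constants (typically C_μ ≈ 0.09 in the standard model).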

  3. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    We extend the nonstandard finite difference method of solution to the study of pharmacokinetic–pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusion) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process, and compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validates the efficiency of the nonstandard finite difference scheme as the method of choice.
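
    A minimal sketch of the nonstandard idea on the simplest PK building block, one-compartment first-order elimination dC/dt = -kC after an I.V. bolus. The rate constant, step size and dose below are invented, and the paper's full PK/PD system is not reproduced. The NSFD scheme replaces the raw step size h in the Euler update by a denominator function φ(h) = (1 - e^(-kh))/k, which makes the discrete decay exact for any h.

        import numpy as np

        k, h, C0 = 0.8, 0.5, 10.0        # assumed rate constant, step size, dose
        n_steps = 20

        def euler(C):
            return C - h * k * C                          # standard forward Euler

        def nsfd(C):
            phi = (1 - np.exp(-k * h)) / k                # denominator function
            return C - phi * k * C                        # equals C * exp(-k h)

        c_e, c_n = [C0], [C0]
        for _ in range(n_steps):
            c_e.append(euler(c_e[-1]))
            c_n.append(nsfd(c_n[-1]))

        t = h * np.arange(n_steps + 1)
        exact = C0 * np.exp(-k * t)
        print(np.abs(np.array(c_e) - exact).max())   # Euler drifts for coarse h
        print(np.abs(np.array(c_n) - exact).max())   # NSFD matches the exact decay

    The same denominator-function idea is what gives NSFD schemes their dynamical consistency (positivity, correct fixed points) on more complex PK/PD systems.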

  4. SELECTION MOMENTS AND GENERALIZED METHOD OF MOMENTS FOR HETEROSKEDASTIC MODELS

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2016-06-01

    In this paper, the authors describe methods for selecting moment conditions and the application of the generalized method of moments (GMM) to heteroskedastic models. The utility of GMM estimators lies in the study of financial market models. The moment selection criteria are applied for the efficient GMM estimation of univariate time series with martingale difference errors, similar to those studied so far by Kuersteiner.

  5. Thermal Efficiency Degradation Diagnosis Method Using Regression Model

    International Nuclear Information System (INIS)

    Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol

    2011-01-01

    This paper proposes an approach to diagnosing thermal efficiency degradation in turbine cycles, based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression models can inversely map a superficial state observed from a power plant back to the associated intrinsic state. The proposed diagnosis method comprises three processes: 1) simulations of degradation conditions to obtain the measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs that include various root causes and/or boundary conditions, whereas the inverse what-if method calculates the inverse matrix from the given superficial states, that is, the component degradation modes.
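
    A toy version of the what-if / inverse what-if pair, assuming a hypothetical 3-input, 5-output linear plant response rather than the actual turbine-cycle model: fit the forward map from simulated degradation runs, then invert it with a pseudo-inverse for diagnosis.

        import numpy as np

        rng = np.random.default_rng(0)
        A_true = rng.normal(size=(5, 3))    # plant response, unknown in practice

        # What-if direction: simulate measured outputs Y for sampled degradation
        # inputs X (here a linear response plus small noise stands in for the
        # turbine cycle simulator).
        X = rng.uniform(0, 1, (40, 3))
        Y = X @ A_true.T + rng.normal(0, 0.01, (40, 5))

        A_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)   # fitted linear model
        # A_hat maps inputs to outputs: Y ≈ X @ A_hat

        # Inverse what-if direction: infer the intrinsic state behind a new
        # plant observation via the pseudo-inverse of the fitted map.
        y_obs = np.array([0.2, 0.5, 0.1]) @ A_hat       # a synthetic observation
        x_inferred = y_obs @ np.linalg.pinv(A_hat)
        print(x_inferred.round(2))                      # recovers ≈ [0.2, 0.5, 0.1]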

  6. 3D Face modeling using the multi-deformable method.

    Science.gov (United States)

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-09-25

    In this paper, we focus on the accuracy of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method not only shows highly accurate 3D face shape results when compared with the ground truth, but is also robust to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper.

  7. FORTY PLUS CLUBS AND WHITE-COLLAR MANHOOD DURING THE GREAT DEPRESSION

    Directory of Open Access Journals (Sweden)

    Gregory Wood

    2008-01-01

    Full Text Available As scholars of gender and labor have argued, chronic unemployment during the Great Depression precipitated a “crisis” of masculinity, compelling men to turn towards new industrial unions and the New Deal as ways to affirm work, breadwinning, and patriarchy as bases for manhood. But did all men experience this crisis? During the late 1930s, white-collar men organized groups called “Forty Plus Clubs” in response to their worries about joblessness and manhood. The clubs made it possible for unemployed executives to find new jobs, while at the same time recreating the male-dominated culture of the white-collar office. For male executives, Forty Plus Clubs precluded the Depression-era crisis of manhood, challenging the idea that the absence of paid employment was synonymous with the loss of masculinity.

  8. Risoe National Laboratory - Forty years of research in a changing society

    International Nuclear Information System (INIS)

    Nielsen, H.; Nielsen, K.; Petersen, F.; Siggaard Jensen, H.

    1998-01-01

    The creation of Risoe forty years ago was one of the largest single investments in Danish research. The intention was to realise Niels Bohr's visions of the peaceful use in Denmark of nuclear energy for electricity production and other purposes. Risoe decided to take the opportunity of its 40th anniversary in 1998 to have its history written in a form that would contribute to the history of modern Denmark. The result was a book in Danish entitled Til samfundets tarv - Forskningscenter Risoes historie. The present text is a slightly reworked translation of the last chapter of that book. It contains a summary of Risoe's history and some reflections on forty years of change: change in Danish society at large, in research policy, in energy policy, in technological expectations; changes at Risoe, in leadership, in organisational structure, in strategy and in fields of research. Some of Risoe's largest projects are briefly characterised. (LN)

  9. Approximating methods for intractable probabilistic models: Applications in neuroscience

    DEFF Research Database (Denmark)

    Højen-Sørensen, Pedro

    2002-01-01

    This thesis investigates various methods for carrying out approximate inference in intractable probabilistic models. By capturing the relationships between random variables, the framework of graphical models hints at which sets of random variables pose a problem to the inferential step. The appro...

  10. Automated Model Fit Method for Diesel Engine Control Development

    NARCIS (Netherlands)

    Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and a structured approach to fit the required combustion model parameters. Only a data set is required that is

  11. Attitude Research in Science Education: Contemporary Models and Methods.

    Science.gov (United States)

    Crawley, Frank E.; Kobala, Thomas R., Jr.

    1994-01-01

    Presents a summary of models and methods of attitude research which are embedded in the theoretical tenets of social psychology and in the broader framework of constructivism. Focuses on the construction of social reality rather than the construction of physical reality. Models include theory of reasoned action, theory of planned behavior, and…

  12. Introduction to Discrete Element Methods: Basics of Contact Force Models

    NARCIS (Netherlands)

    Luding, Stefan

    2008-01-01

    One challenge of today's research is the realistic simulation of granular materials, like sand or powders, consisting of millions of particles. In this article, the discrete element method (DEM), as based on molecular dynamics methods, is introduced. Contact models are at the physical basis of DEM.
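
    A minimal sketch of the standard linear spring-dashpot normal contact force that forms the physical basis of many DEM codes; the stiffness and damping values here are illustrative, not taken from the article.

```python
import numpy as np

def contact_force(x_i, x_j, v_i, v_j, r_i, r_j, k=1e4, eta=5.0):
    """Linear spring-dashpot normal contact force on particle i from particle j.

    Overlap delta = r_i + r_j - |x_i - x_j|; a force acts only while delta > 0.
    """
    d = x_i - x_j
    dist = np.linalg.norm(d)
    delta = r_i + r_j - dist                 # overlap depth
    if delta <= 0.0:
        return np.zeros_like(d)              # particles not in contact
    n = d / dist                             # unit normal pointing toward particle i
    v_n = np.dot(v_i - v_j, n)               # normal relative velocity
    f_n = k * delta - eta * v_n              # elastic repulsion + viscous dissipation
    return max(f_n, 0.0) * n                 # no attractive force at contact

# Two overlapping particles approaching each other:
f = contact_force(np.array([0.0, 0.0]), np.array([0.018, 0.0]),
                  np.array([0.1, 0.0]), np.array([-0.1, 0.0]), r_i=0.01, r_j=0.01)
print(f)   # repulsive force on particle i, pointing in the -x direction
```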

  13. Hierarchical modelling for the environmental sciences statistical methods and applications

    CERN Document Server

    Clark, James S

    2006-01-01

    New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.

  14. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  15. Forty years on and still going strong : the use of hominin-cercopithecid comparisons in palaeoanthropology.

    OpenAIRE

    Elton, S.

    2006-01-01

    Hominin-cercopithecid comparisons have been used in palaeoanthropology for over forty years. Fossil cercopithecids can be used as a ‘control group’ to contextualize the adaptations and evolutionary trends of hominins. Observations made on modern cercopithecids can also be applied to questions about human evolution. This article reviews the history of hominin-cercopithecid comparisons, assesses the strengths and weaknesses of cercopithecids as comparators in studies of human evolution, and use...

  16. Vortex Tube Modeling Using the System Identification Method

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jaeyoung; Jeong, Jiwoong; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of); Im, Seokyeon [Tongmyong Univ., Busan (Korea, Republic of)

    2017-05-15

    In this study, a vortex tube system model is developed to predict the temperatures of the hot and cold sides. The vortex tube model is developed based on the system identification method, and the model used in this work to design the vortex tube is of the ARX type (Auto-Regressive with eXtra inputs). The derived polynomial model is validated against experimental data to verify the overall model accuracy. It is also shown that the derived model passes the stability test. It is confirmed that the derived model closely mimics the physical behavior of the vortex tube in both static and dynamic numerical experiments in which the angle of the low-temperature-side throttle valve is changed, clearly showing temperature separation. These results imply that system-identification-based modeling can be a promising approach for the prediction of complex physical systems, including the vortex tube.
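
    A minimal sketch of ARX identification by least squares, with a synthetic single-input system standing in for the valve-angle-to-temperature data (which the abstract does not provide); the model orders and coefficients are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic input/output data: u = valve angle, y = cold-side temperature (stand-ins).
N = 500
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(2, N):   # "true" system, used only to generate data
    y[k] = 1.2 * y[k-1] - 0.4 * y[k-2] + 0.5 * u[k-1] + 0.1 * u[k-2] \
           + 0.01 * rng.standard_normal()

# ARX(na=2, nb=2): y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + e[k]
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])   # regressors for k = 2..N-1
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(theta)   # approximately [1.2, -0.4, 0.5, 0.1]

# Stability test: poles of A(q) = 1 - a1*q^-1 - a2*q^-2 must lie inside the unit circle.
poles = np.roots([1.0, -theta[0], -theta[1]])
print("stable:", np.all(np.abs(poles) < 1.0))
```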

  17. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    OpenAIRE

    Frantisek Jelenciak; Michael Gerke; Ulrich Borgolte

    2015-01-01

    This article describes the projection equivalent method (PEM) as a specific and relatively simple approach for the modelling of aircraft dynamics. By the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. It is characteristic of the PEM that, in principle, it provides an acceptable regression model of aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a...

  18. A discontinuous Galerkin method on kinetic flocking models

    OpenAIRE

    Tan, Changhui

    2014-01-01

    We study kinetic representations of flocking models. They arise from agent-based models for self-organized dynamics, such as the Cucker-Smale and Motsch-Tadmor models. We prove flocking behavior for the kinetic descriptions of flocking systems, which indicates concentration in the velocity variable in infinite time. We propose a discontinuous Galerkin method to treat the asymptotic $\delta$-singularity, and construct a high-order positivity-preserving scheme to solve kinetic flocking systems.

  19. Projection methods for the numerical solution of Markov chain models

    Science.gov (United States)

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span{v, Av, ..., A^(m-1)v}. These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
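
    A minimal sketch of the projection idea on a toy Markov chain: build a Krylov subspace for A = P^T with the Arnoldi process, solve the small projected eigenproblem, and take the Ritz vector for the eigenvalue closest to 1 as the stationary distribution. The example chain and subspace size are illustrative.

```python
import numpy as np

def arnoldi(A, v0, m):
    """Arnoldi process: orthonormal basis V of span{v, Av, ..., A^(m-1)v}, Hessenberg H."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # breakdown: invariant subspace found
            return V[:, : j + 1], H[: j + 1, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

# Toy row-stochastic transition matrix P; the stationary pi solves pi P = pi.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
A = P.T                                      # pi is the eigenvector of P^T for eigenvalue 1

V, H = arnoldi(A, np.ones(3), m=3)
evals, evecs = np.linalg.eig(H)              # small m-by-m eigenproblem
k = np.argmin(np.abs(evals - 1.0))           # Ritz value closest to 1
pi = np.real(V @ evecs[:, k])
pi /= pi.sum()                               # normalize to a probability vector
print(pi, pi @ P)                            # pi @ P should reproduce pi
```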

  20. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)

  1. An image segmentation method based on network clustering model

    Science.gov (United States)

    Jiao, Yang; Wu, Jianshe; Jiao, Licheng

    2018-01-01

    Network clustering phenomena are ubiquitous in nature and human society. In this paper, a method involving a network clustering model is proposed for mass segmentation in mammograms. First, the watershed transform is used to divide an image into regions, and features of the image are computed. Then a graph is constructed from the obtained regions and features. The network clustering model is applied to realize clustering of nodes in the graph. Compared with two classic methods, the algorithm based on the network clustering model performs more effectively in experiments.

  2. Alternative methods to model frictional contact surfaces using NASTRAN

    Science.gov (United States)

    Hoang, Joseph

    1992-01-01

    Elongated (slotted) holes have been used extensively for the integration of equipment into Spacelab racks. In the past, this type of interface has been modeled assuming that there is no slippage between contact surfaces, or that there is no load transfer in the direction of the slot. Since the contact surfaces are bolted together, the contact friction provides a load path determined by the normal applied force (bolt preload) and the coefficient of friction. Three alternative methods that utilize spring elements, externally applied couples, and stress-dependent elements are examined to model the contact surfaces. Results of these methods are compared with results obtained from methods that use GAP elements and rigid elements.

  3. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  4. Effects of Sample Size, Estimation Methods, and Model Specification on Structural Equation Modeling Fit Indexes.

    Science.gov (United States)

    Fan, Xitao; Wang, Lin; Thompson, Bruce

    1999-01-01

    A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)

  5. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

    Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  6. Coarse Analysis of Microscopic Models using Equation-Free Methods

    DEFF Research Database (Denmark)

    Marschler, Christian

    The goal of this thesis is to investigate high-dimensional multiscale models and extract relevant low-dimensional information from them. Recently developed mathematical tools allow us to reach this goal: a combination of so-called equation-free methods with numerical bifurcation analysis...... using short simulation bursts of computationally expensive complex models. That information is subsequently used to construct bifurcation diagrams that show the parameter dependence of solutions of the system. The methods developed for this thesis have been applied to a wide range of relevant problems....... Applications include the learning behavior in the barn owl’s auditory system, traffic jam formation in an optimal velocity model for circular car traffic, and oscillating behavior of pedestrian groups in a counter-flow through a corridor with a narrow door. The methods do not only quantify interesting properties

  7. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...

  8. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools...... need to be integrated with work-flows and data-flows for specific product-process synthesis-design problems within a computer-aided framework. The framework therefore should be able to manage knowledge-data, models and the associated methods and tools needed by specific synthesis-design work...... of model based methods and tools within a computer aided framework for product-process synthesis-design will be highlighted....

  9. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  10. Method and apparatus for modeling, visualization and analysis of materials

    KAUST Repository

    Aboulhassan, Amal

    2016-08-25

    A method, apparatus, and computer readable medium are provided for modeling of materials and visualization of properties of the materials. An example method includes receiving data describing a set of properties of a material, and computing, by a processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling, by the processor, the material using the geometric features and the extracted particle paths. The example method further includes generating, by the processor and based on the geometric modeling of the material, one or more visualizations regarding the material, and causing display, by a user interface, of the one or more visualizations.

  11. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  12. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
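
    A minimal sketch of the model-based idea under assumed numbers: given motor-side estimates of speed and shaft power (as a frequency converter could provide), use affinity-scaled characteristic curves of the pump to estimate flow rate and head. The curve coefficients and nominal speed are hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical characteristic curves at the nominal speed n0 (from a pump datasheet):
# shaft power P(Q) and head H(Q) as polynomials in flow rate Q [m^3/h].
n0 = 1450.0                                    # nominal speed [rpm]
p_coeff = np.array([0.0002, 0.05, 1.0])        # P(Q), monotonically increasing [kW]
h_coeff = np.array([-0.0020, 0.010, 20.0])     # H(Q) [m]

def estimate_operating_point(n, P_shaft):
    """Estimate flow and head from speed n [rpm] and shaft power P_shaft [kW]."""
    r = n / n0
    # Affinity laws: P ~ r^3 and Q ~ r, so scale the measured power back
    # to the nominal-speed curve before solving for the flow rate.
    P_at_n0 = P_shaft / r**3
    a, b, c = p_coeff
    roots = np.roots([a, b, c - P_at_n0])      # solve P(Q0) = P_at_n0
    Q0 = max(q.real for q in roots if abs(q.imag) < 1e-9 and q.real > 0)
    Q = Q0 * r                                 # flow scales linearly with speed
    H = np.polyval(h_coeff, Q0) * r**2         # head scales with the square of speed
    return Q, H

Q, H = estimate_operating_point(n=1300.0, P_shaft=3.0)
print(f"estimated flow: {Q:.1f} m^3/h, head: {H:.1f} m")
```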

  13. Rocket and Laboratory Experiments in Astrophysics— Validation and Verification of the Next Generation FORTIS

    Science.gov (United States)

    McCandliss, Stephan

    We submit herein a proposal describing plans for further development of a Next Generation Far-UV Off Rowland-circle Telescope for Imaging and Spectroscopy (FORTIS). The goal of the proposal is to demonstrate the scientific utility of multiobject spectroscopy over wide angular fields in the far-UV with investigations of: the blue straggler population in the globular cluster M10; low-metallicity star formation in the Magellanic Bridge; shock structures in the Cygnus Loop supernova remnant; a search for unidentified emissions in star-forming galaxies; and potentially an, as yet, unnamed comet as a target of opportunity. FORTIS is a pathfinder for developing the technologies necessary to enable far-UV spectroscopic surveys. Such surveys will allow us to probe problems relevant to the formation of large-scale structures, the origin and evolution of galaxies, and the formation and evolution of stars from interstellar gas. In combination with existing and future spectroscopic surveys, they will provide a complete and compelling panchromatic picture of the observable universe. The next generation FORTIS will fly as a sounding-rocket-borne instrument and incorporate a number of unique technologies, including the Next Generation MicroShutter Array (NGMSA), which provides for the simultaneous acquisition of spectra from multiple objects within a wide angular field. The NGMSA will be controlled by an autonomous targeting system capable of identifying multiple objects on-the-fly for further spectral analysis in the short time (about 400 seconds) afforded to far-UV observations from a sounding rocket. We will also incorporate long-life microchannel plate (MCP) detectors that have high open-area ratios, providing increased quantum efficiency and improved resistance to gain sag, allowing operation at higher count rates. Recent flight experience with the first generation FORTIS has provided guidance for improving the science return of the next generation FORTIS. Our plans for a rigorous

  14. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  15. Modeling Music Emotion Judgments Using Machine Learning Methods.

    Science.gov (United States)

    Vempala, Naresh N; Russo, Frank A

    2017-01-01

    Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments, including neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.

  16. Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models

    Science.gov (United States)

    Marquette, Michele L.; Sognier, Marguerite A.

    2013-01-01

    An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (i.e., bone and cardiac) from corresponding precursor cells.

  17. [Comparison of Two Methods of Lidocaine Administration for Neuroprotection in a Rabbit Model of Subarachnoid Hemorrhage].

    Science.gov (United States)

    Shi, Xian-Qing; Fu, Yong-Jian; Zheng, Li-Rong

    2017-01-01

    To compare the neuroprotective effects of two methods of lidocaine administration in a rabbit model of subarachnoid hemorrhage. Forty New Zealand white rabbits were randomly divided into a sham group, a subarachnoid hemorrhage group (SAH), a lidocaine intravenous injection group (L1), and a lidocaine intracisternal administration group (L2). The rabbits were given general anaesthesia, then 1.5 mL of autologous nonheparinized arterial blood was injected into the cisterna magna to establish the SAH model, while 1.5 mL of saline was used in the sham group. Thirty minutes later, the rabbits in the L1 and L2 groups received 0.3 mL of 2% lidocaine by intravenous and intracisternal injection, respectively. All animals were sacrificed at 72 h after SAH. Samples of basilar artery and hippocampus tissue were processed for morphometric analysis. At pre-operation and 72 h after SAH, the level of interleukin-6 (IL-6) in serum was measured. HE staining and c-fos immunohistochemical staining were performed in the L1 and L2 groups. The artery area and artery diameter of the basilar arteries, normal neuron density, and c-fos-positive cells in the hippocampus were measured at 72 h after SAH. The baseline level of IL-6 was not significantly different among the four groups (P > 0.05). The level of IL-6 at 72 h after SAH was significantly higher than that at pre-operation in the SAH, L1 and L2 groups (P < 0.05), and the level in the L1 group was higher than that in the L2 group (P < 0.05). Compared with the L2 group, the cross-section area and diameter of the basilar artery were smaller in the SAH and L1 groups, while the normal neuron density of the hippocampus was lower (P < 0.05). Intracisternal administration of lidocaine could provide neuroprotection in a rabbit model of subarachnoid hemorrhage.

  18. Modeling of piezoelectric devices with the finite volume method.

    Science.gov (United States)

    Bolborici, Valentin; Dawson, Francis; Pugh, Mary

    2010-07-01

    A partial differential equation (PDE) model for the dynamics of a thin piezoelectric plate in an electric field is presented. This PDE model is discretized via the finite volume method (FVM), resulting in a system of coupled ordinary differential equations. A static analysis and an eigenfrequency analysis are done with results compared with those provided by a commercial finite element (FEM) package. We find that fewer degrees of freedom are needed with the FVM model to reach a specified degree of accuracy. This suggests that the FVM model, which also has the advantage of an intuitive interpretation in terms of electrical circuits, may be a better choice in control situations.

  19. Methods and models in mathematical biology deterministic and stochastic approaches

    CERN Document Server

    Müller, Johannes

    2015-01-01

    This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and  branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.

  20. Theory of model Hamiltonians and method of functional integration

    International Nuclear Information System (INIS)

    Popov, V.N.

    1990-01-01

    Results on the application of the functional integration method to statistical physics systems with the Dicke and Bardeen-Cooper-Schrieffer (BCS) model Hamiltonians are presented. Functional-integral representations of the partition functions (statistical sums) of these models are obtained. Asymptotic formulae (in the thermodynamic limit N → ∞) for the partition functions of various modifications of the Dicke model, as well as for the Green functions and the collective spectrum of Bose excitations, are proved exactly. Analogous results, without rigorous proof, are obtained for the partition functions and the Bose-excitation spectrum of the BCS model. 21 refs

  1. Comparison of Different Calibration Methods in a Non-invasive ICP Assessment Model.

    Science.gov (United States)

    Schmidt, Bernhard; Cardim, Danilo; Weinhold, Marco; Streif, Stefan; McLeod, Damian D; Czosnyka, Marek; Klingelhöfer, Jürgen

    2018-01-01

    Previously we described a method of continuous intracranial pressure (ICP) estimation using arterial blood pressure (ABP) and cerebral blood flow velocity (CBFV). The model was constructed using reference patient data. Various individual calibration strategies were used in the current attempt to improve the accuracy of this non-invasive ICP (nICP) assessment tool. Forty-one patients (mean, 52 years; range, 18-77 years) with severe brain injuries were studied. CBFV in the middle cerebral artery (MCA), ABP and invasively assessed ICP were simultaneously recorded for 1 h. Recording was repeated at days 2, 4 and 7. In the first recording, invasively assessed ICP was recorded to calibrate the nICP procedure by means of either a constant shift of nICP (snICP), a constant shift of the nICP/ABP ratio (anICP), or by including this recording in a model reconstruction (cnICP). At the follow-up days, the calibrated nICP procedures were applied and the results compared to the original nICP. In 76 follow-up recordings, the mean differences (bias), the SD and the mean absolute differences (ΔICP) between ICP and the nICP methods were (in mmHg): nICP, -5.6 ± 5.72, 6.5; snICP, +0.7 ± 6.98, 5.5, n.s.; anICP, +1.0 ± 7.22, 5.6, n.s.; cnICP, -3.4 ± 5.68, 5.4, p < 0.05. The original nICP overestimated ICP. This overestimation could be reduced by cnICP calibration, but not completely avoided. Constant-shift calibrations (snICP, anICP) decrease the bias relative to ICP, but increase the SD and, therefore, increase the 95% confidence interval (CI = 2 × SD). This calibration method cannot be recommended. Compared to nICP, the cnICP method reduced the bias, slightly reduced the SD, and showed significantly decreased ΔICP. Compared to snICP and anICP, the bias was higher. This effect was probably caused by the patients with craniotomy. The cnICP calibration method, using initial recordings for model reconstruction, showed the best results.

  2. Interactive Modelling of Shapes Using the Level-Set Method

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Christensen, Niels Jørgen

    2002-01-01

    In this paper, we propose a technique for intuitive, interactive modelling of 3D shapes. The technique is based on the Level-Set Method (LSM), which has the virtue of easily handling changes to the topology of the represented solid. Sculpting operations which are suitable for shape modelling are proposed. However, normally these would result in tools that would affect the entire model. To facilitate local changes to the model, we introduce a windowing scheme which constrains the LSM to affect only a small part of the model. The LSM-based sculpting tools have been incorporated in our sculpting system, which also includes facilities for volumetric CSG and several techniques for visualization.

  3. A statistical method for discriminating between alternative radiobiological models

    International Nuclear Information System (INIS)

    Kinsella, I.A.; Malone, J.F.

    1977-01-01

    Radiobiological models assist understanding of the development of radiation damage, and may provide a basis for extrapolating dose-effect curves from high to low dose regions. Many models have been proposed, such as multitarget and its modifications, enzymatic models, and those with a quadratic dose-response relationship (i.e. αD + βD² forms). It is difficult to distinguish between these because the statistical techniques used are almost always limited, in that one method can rarely be applied to the whole range of models. A general statistical procedure for parameter estimation (the maximum likelihood method) has been found applicable to a wide range of radiobiological models. The curve parameters are estimated using a computerised search that continues until the set of values most likely to fit the data is obtained. When the search is complete, two procedures are carried out. First, a goodness-of-fit test is applied which examines the applicability of an individual model to the data. Secondly, an index is derived which provides an indication of the adequacy of any model compared with alternative models. Thus the models may be ranked according to how well they fit the data. For example, with one set of data, multitarget types were found to be more suitable than quadratic types (αD + βD²). This method should be of assistance in evaluating various models. It may also be profitably applied to selection of the most appropriate model to use, when it is necessary to extrapolate from high to low doses
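
    A minimal sketch of the maximum-likelihood idea on synthetic survival data, assuming a linear-quadratic response S(D) = exp(-(αD + βD²)) and binomially distributed counts of surviving cells; the data and model choice are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic survival experiment: n cells plated per dose, k survivors observed.
doses = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])           # dose [Gy]
n = np.full_like(doses, 1000, dtype=int)
true_a, true_b = 0.15, 0.03
rng = np.random.default_rng(1)
k = rng.binomial(n, np.exp(-(true_a * doses + true_b * doses**2)))

def neg_log_likelihood(params):
    a, b = params
    s = np.exp(-(a * doses + b * doses**2))                 # model survival fraction
    s = np.clip(s, 1e-12, 1 - 1e-12)
    # Binomial log-likelihood (constant terms dropped).
    return -np.sum(k * np.log(s) + (n - k) * np.log(1 - s))

res = minimize(neg_log_likelihood, x0=[0.1, 0.01], method="Nelder-Mead")
print(res.x)   # estimated (alpha, beta), close to (0.15, 0.03)
```

    Fitting a competing model (for example, a multitarget form) the same way and comparing the maximized likelihoods gives the kind of ranking index the abstract describes.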

  4. Modeling Water Quality Parameters Using Data-driven Methods

    Directory of Open Access Journals (Sweden)

    Shima Soleimani

    2017-02-01

    Full Text Available Introduction: Surface water bodies are the most easily accessible water resources. Increased use of surface water and wastewater discharge into it cause drastic changes in surface water quality. The importance of water quality, as the most vulnerable and important water supply resource, is absolutely clear. Unfortunately, in recent years, because of urban population growth, economic development, and increased industrial production, the entry of pollutants into water bodies has increased. Water quality parameters express the physical, chemical, and biological features of water, so the importance of water quality monitoring is greater than ever. Each of the various uses of water, such as agriculture, drinking, industry, and aquaculture, needs water of a particular quality. On the other hand, exact estimation of the concentration of water quality parameters is significant. Materials and Methods: In this research, two input-variable selection methods (namely, the correlation coefficient and principal component analysis) were first applied to select the model inputs. Data processing consists of three steps: (1) examining the data, (2) identifying the input data that affect the output data, and (3) selecting the training and testing data. A Genetic Algorithm-Least Squares Support Vector Regression (GA-LSSVR) algorithm was developed to model the water quality parameters. In the LSSVR method, it is assumed that the relationship between input and output variables is nonlinear, but by using a nonlinear mapping a space can be created, named the feature space, in which the relationship between input and output variables is defined as linear. The developed algorithm is able to maximize the accuracy of the LSSVR method with automatically tuned LSSVR parameters. The genetic algorithm (GA) is an evolutionary algorithm which can automatically find the optimum coefficients of Least Squares Support Vector Regression (LSSVR). The GA-LSSVR algorithm was employed to
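
    A minimal sketch of the GA-LSSVR idea under assumed forms: an LSSVR (solved in its standard dual form with an RBF kernel) whose hyperparameters (gamma, sigma) are tuned by a tiny genetic algorithm. The toy data, GA design, and fitness choice are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf_kernel(X1, X2, sigma):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvr_fit(X, y, gamma, sigma):
    """LSSVR dual solution: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                      # bias b, coefficients alpha

def lssvr_predict(X_train, b, alpha, X_new, sigma):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy "water quality" data: one predictor, noisy nonlinear response.
X = rng.uniform(0, 5, (80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)
X_tr, y_tr, X_te, y_te = X[:60], y[:60], X[60:], y[60:]

def fitness(params):                            # negative hold-out RMSE as fitness
    gamma, sigma = params
    b, alpha = lssvr_fit(X_tr, y_tr, gamma, sigma)
    pred = lssvr_predict(X_tr, b, alpha, X_te, sigma)
    return -np.sqrt(np.mean((pred - y_te) ** 2))

# Tiny GA over (gamma, sigma): truncation selection plus Gaussian mutation.
pop = rng.uniform([1.0, 0.1], [100.0, 2.0], (20, 2))
for _ in range(30):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]     # keep the best half
    children = parents + rng.normal(0, [5.0, 0.1], parents.shape)
    pop = np.vstack([parents, np.abs(children) + 1e-3])  # mutated offspring, kept positive
best = pop[np.argmax([fitness(p) for p in pop])]
print("gamma, sigma =", best)
```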

  5. Statistical learning modeling method for space debris photometric measurement

    Science.gov (United States)

    Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen

    2016-03-01

    Photometric measurement is an important way to identify space debris, but present methods of photometric measurement impose many constraints on the star image and need complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, and the statistical information of star images is used to eliminate the measurement noise. First, the known stars in the star image are divided into training stars and testing stars. Then, the training stars are used in a least-squares fit to construct the photometric measurement model, and the testing stars are used to calculate the measurement accuracy of the photometric measurement model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
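
    A minimal sketch of the train/test idea with a simple photometric calibration model: fit instrumental magnitude against catalog magnitude by least squares on the training stars, then check residuals on the testing stars. The linear zero-point model and all numbers are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stars: catalog magnitudes and noisy instrumental magnitudes
# related by an (assumed) global zero-point offset of the image.
m_cat = rng.uniform(8.0, 14.0, 40)
m_inst = m_cat + 21.3 + 0.05 * rng.standard_normal(40)   # zero point 21.3 mag + noise

train, test = np.arange(30), np.arange(30, 40)           # split of the known stars

# Least-squares fit of the image-wide model m_inst = a * m_cat + b
# using only the training stars (global consistency of the star image).
A = np.column_stack([m_cat[train], np.ones(train.size)])
(a, b), *_ = np.linalg.lstsq(A, m_inst[train], rcond=None)

# Measurement accuracy evaluated on the testing stars.
resid = m_inst[test] - (a * m_cat[test] + b)
print(f"a={a:.3f}, b={b:.3f}, rms residual={resid.std():.3f} mag")
```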

  6. Numerical modeling of spray combustion with an advanced VOF method

    Science.gov (United States)

    Chen, Yen-Sen; Shang, Huan-Min; Shih, Ming-Hsin; Liaw, Paul

    1995-01-01

    This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model which can be employed to analyze general multiphase flow problems with free surface mechanism. The gas-liquid interface mass, momentum and energy conservation relationships are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach into a predictor-corrector algorithm and to demonstrate the effectiveness of the present approach by simulating benchmark problems including laminar impinging jets, shear coaxial jet atomization and shear coaxial spray combustion flows.

  7. Methods of mathematical modelling continuous systems and differential equations

    CERN Document Server

    Witelski, Thomas

    2015-01-01

    This book presents mathematical modelling and the integrated process of formulating sets of equations to describe real-world problems. It describes methods for obtaining solutions of challenging differential equations stemming from problems in areas such as chemical reactions, population dynamics, mechanical systems, and fluid mechanics. Chapters 1 to 4 cover essential topics in ordinary differential equations, transport equations and the calculus of variations that are important for formulating models. Chapters 5 to 11 then develop more advanced techniques including similarity solutions, matched asymptotic expansions, multiple scale analysis, long-wave models, and fast/slow dynamical systems. Methods of Mathematical Modelling will be useful for advanced undergraduate or beginning graduate students in applied mathematics, engineering and other applied sciences.

  8. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  9. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  10. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
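
    A minimal sketch of the fitting comparison using SciPy, with synthetic hourly irradiance standing in for the UTP measurements (which the abstracts do not reproduce); the two-term Gaussian and two-term sine forms follow the usual curve-fitting conventions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

# Synthetic daily irradiance profile (W/m^2) over hours 6..18.
t = np.linspace(6, 18, 49)
y = 900 * np.exp(-((t - 12) / 3.0) ** 2) + 20 * rng.standard_normal(t.size)

def gauss2(t, a1, b1, c1, a2, b2, c2):
    """Two-term Gaussian: a1*exp(-((t-b1)/c1)^2) + a2*exp(-((t-b2)/c2)^2)."""
    return a1 * np.exp(-((t - b1) / c1) ** 2) + a2 * np.exp(-((t - b2) / c2) ** 2)

def sine2(t, a1, b1, c1, a2, b2, c2):
    """Two-term sine series: a1*sin(b1*t + c1) + a2*sin(b2*t + c2)."""
    return a1 * np.sin(b1 * t + c1) + a2 * np.sin(b2 * t + c2)

pg, _ = curve_fit(gauss2, t, y, p0=[800, 12, 3, 100, 10, 2], maxfev=20000)
ps, _ = curve_fit(sine2, t, y, p0=[800, 0.3, -1, 100, 0.6, 0], maxfev=20000)

# Compare the fits with RMSE and R^2 goodness-of-fit statistics.
for name, f, p in [("gauss2", gauss2, pg), ("sine2", sine2, ps)]:
    yhat = f(t, *p)
    rmse = np.sqrt(np.mean((y - yhat) ** 2))
    r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"{name}: RMSE={rmse:.1f}, R^2={r2:.3f}")
```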

  11. Modeling Enzymatic Transition States by Force Field Methods

    DEFF Research Database (Denmark)

    Hansen, Mikkel Bo; Jensen, Hans Jørgen Aagaard; Jensen, Frank

    2009-01-01

    The SEAM method, which models a transition structure as a minimum on the seam of two diabatic surfaces represented by force field functions, has been used to generate 20 transition structures for the decarboxylation of orotidine by the orotidine-5'-monophosphate decarboxylase enzyme. The dependence...... by various electronic structure methods, where part of the enzyme is represented by a force field description and the effects of the solvent are represented by a continuum model. The relative energies vary by several hundreds of kJ/mol between the transition structures, and tests showed that a large part...

  12. Evaluation of radiological processes in the Ternopil region by the box models method

    Directory of Open Access Journals (Sweden)

    І.В. Матвєєва

    2006-02-01

    Full Text Available Results of analyses of Sr-90 radionuclide flows in the ecosystem of Kotsubinchiky village, Ternopil oblast, are presented. A block scheme of the ecosystem and its mathematical model were constructed using the box models method. This allowed us to evaluate how the dose loadings from internal irradiation form for different population groups (working people, retirees and children) and to predict the dynamics of these loadings over the years following the Chernobyl accident.
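
    A minimal sketch of a box (compartment) model under assumed transfer coefficients: Sr-90 activity moves between soil, plant and animal-product boxes as linear first-order transfers with radioactive decay, the kind of structure from which ingestion dose loadings can then be derived. All rate constants are hypothetical; only the Sr-90 half-life (about 28.8 years) is a physical constant.

```python
import numpy as np
from scipy.integrate import solve_ivp

LAMBDA = np.log(2) / 28.8          # Sr-90 decay constant [1/yr]
k_sp, k_pa, k_ps = 0.05, 0.2, 0.5  # assumed transfers: soil->plant, plant->animal, plant->soil [1/yr]

def boxes(t, y):
    soil, plant, animal = y
    return [
        -k_sp * soil + k_ps * plant - LAMBDA * soil,            # soil box
        k_sp * soil - (k_pa + k_ps) * plant - LAMBDA * plant,   # plant box
        k_pa * plant - LAMBDA * animal,                         # animal-product box
    ]

# Initial deposition: all activity in the soil (arbitrary units), 40-year horizon.
sol = solve_ivp(boxes, (0, 40), [1.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 40, 9)
for ti, (s, p, a) in zip(t, sol.sol(t).T):
    print(f"t={ti:4.1f} yr  soil={s:.3f}  plant={p:.3f}  animal={a:.3f}")
```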

  13. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear

  14. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  15. Effect of population density on reproduction in Microtus fortis under laboratory conditions.

    Science.gov (United States)

    Han, Qunhua; Zhang, Meiwen; Guo, Cong; Shen, Guo; Wang, Yong; Li, Bo; Xu, Zhenggang

    2014-06-01

    Between December 2011 and March 2012, the reproductive characteristics of Microtus fortis reared in the laboratory at different population densities were assessed. In all, 258 male and female voles were randomly divided into 4 groups and reared at densities of 2, 4, 6, and 8 animals per cage (sex ratio 1:1). The results showed that the pregnancy rate (χ² = 21.671, df = 3, P < 0.05) differed significantly among the population density groups, but the mean litter size (mean ± SD) did not (F = 2.669, df = 3, P > 0.05). In particular, the reproductive index and sex hormone levels showed significant differences among the density groups studied.

  16. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, which show how the method can be used: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of the real networks, generating the artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to the understanding of network modeling, investigating its structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomizing and satisfying some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  18. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon a set of assumptions incompletely formulated or of value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; decision theory in presence of uncertainty and multiple objectives. Purpose and prospect of comparative studies are assessed in view of probable diminishing returns for large generic comparisons [fr

  19. A delay financial model with stochastic volatility; martingale method

    Science.gov (United States)

    Lee, Min-Ku; Kim, Jeong-Hoon; Kim, Joocheol

    2011-08-01

    In this paper, we extend a delayed geometric Brownian model by adding a stochastic volatility term driven by a hidden fast mean-reverting diffusion process. Combining a martingale approach and an asymptotic method, we develop a theory for option pricing under this hybrid model. The core result obtained by our work is a proof that a discounted approximate option price can be decomposed as a martingale part plus a small term. Subsequently, a correction effect on the European option price is demonstrated both theoretically and numerically, in good agreement with practical results.
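
    A minimal Monte Carlo sketch of a hybrid model of this type under assumed dynamics: a delayed geometric Brownian motion whose volatility is driven by a fast mean-reverting Ornstein-Uhlenbeck process, used to price a European call by simulation. The specific SDE forms, parameters, and the exponential volatility function are illustrative choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed parameters: delay tau, fast OU volatility driver, European call.
S0, r, tau, T, K = 100.0, 0.02, 0.1, 1.0, 100.0
alpha, m, nu = 50.0, np.log(0.2), 0.3        # dY = alpha*(m - Y)*dt + nu*sqrt(2*alpha)*dW2
dt, n_steps, n_paths = 1e-3, 1000, 5000
lag = int(tau / dt)

S = np.full((n_paths, n_steps + 1), S0)      # price paths, constant history before t = 0
Y = np.full(n_paths, m)                      # hidden fast mean-reverting process
for i in range(n_steps):
    dW1 = np.sqrt(dt) * rng.standard_normal(n_paths)
    dW2 = np.sqrt(dt) * rng.standard_normal(n_paths)
    S_lag = S[:, max(i - lag, 0)]            # delayed state S(t - tau)
    sigma = np.exp(Y)                        # volatility as a function of the hidden process
    S[:, i + 1] = S[:, i] + r * S[:, i] * dt + sigma * S_lag * dW1   # Euler-Maruyama step
    Y += alpha * (m - Y) * dt + nu * np.sqrt(2.0 * alpha) * dW2

payoff = np.maximum(S[:, -1] - K, 0.0)       # European call payoff
print("call price ~", np.exp(-r * T) * payoff.mean())
```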

  20. Finite analytic method for modeling variably saturated flows.

    Science.gov (United States)

    Zhang, Zaiyong; Wang, Wenke; Gong, Chengcheng; Yeh, Tian-Chyi Jim; Wang, Zhoufeng; Wang, Yu-Li; Chen, Li

    2018-04-15

    This paper develops a finite analytic method (FAM) for solving the two-dimensional Richards' equation. The FAM incorporates the analytic solution in local elements to formulate the algebraic representation of the partial differential equation of unsaturated flow so as to effectively control both numerical oscillation and dispersion. The FAM model is then verified using four examples, in which the numerical solutions are compared with analytical solutions, solutions from VSAFT2, and observational data from a field experiment. These numerical experiments show that the method is not only accurate but also efficient, when compared with other numerical methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Finite element method for incompressible two-fluid model using a fractional step method

    International Nuclear Information System (INIS)

    Uchiyama, Tomomi

    1997-01-01

    This paper presents a finite element method for an incompressible two-fluid model. The solution algorithm is based on the fractional step method, which is frequently used in finite element calculations for single-phase flows. The calculation domain is divided into quadrilateral elements with four nodes. The Galerkin method is applied to derive the finite element equations. Air-water two-phase flows around a square cylinder are calculated by the finite element method. The calculation demonstrates the close relation between the volumetric fraction of the gas phase and the vortices shed from the cylinder, which compares favorably with existing data. It is also confirmed that the present method requires less CPU time than the SMAC finite element method proposed in my previous paper. (author)
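
    The fractional step idea the paper builds on (predict a velocity ignoring pressure, solve a pressure Poisson equation, then project toward a divergence-free field) can be sketched with finite differences on a periodic grid. This is a generic Chorin-type sketch, not the paper's Galerkin two-fluid scheme; grid, viscosity, time step, and initial data are all invented.

```python
import numpy as np

# Generic Chorin-type fractional step sketch on a 2-D periodic grid:
#   step 1: provisional velocity without pressure,
#   step 2: pressure Poisson equation (solved here with the FFT),
#   step 3: projection toward a divergence-free velocity.
N, dx, dt, nu = 64, 1.0 / 64, 1e-3, 1e-2
u = 0.1 * np.sin(2 * np.pi * np.arange(N) * dx)[None, :] * np.ones((N, 1))
v = np.zeros((N, N))

def ddx(f): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dx)
def ddy(f): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
def lap(f):
    return (np.roll(f, -1, 0) + np.roll(f, 1, 0) +
            np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / dx**2

k = 2 * np.pi * np.fft.fftfreq(N, dx)
k2 = k[None, :]**2 + k[:, None]**2
k2[0, 0] = 1.0                                   # avoid division by zero

for step in range(100):
    us = u + dt * (-u * ddx(u) - v * ddy(u) + nu * lap(u))   # step 1
    vs = v + dt * (-u * ddx(v) - v * ddy(v) + nu * lap(v))
    div = ddx(us) + ddy(vs)
    p = np.fft.ifft2(np.fft.fft2(div / dt) / (-k2)).real     # step 2
    u = us - dt * ddx(p)                                     # step 3
    v = vs - dt * ddy(p)

# a small residual remains because spectral and finite-difference operators mix
print("max |div u| after projection:", np.abs(ddx(u) + ddy(v)).max())
```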

  2. Comparison of methods for modelling geomagnetically induced currents

    Science.gov (United States)

    Boteler, D. H.; Pirjola, R. J.

    2014-09-01

    Assessing the geomagnetic hazard to power systems requires reliable modelling of the geomagnetically induced currents (GIC) produced in the power network. This paper compares the Nodal Admittance Matrix method with the Lehtinen-Pirjola method and shows them to be mathematically equivalent. GIC calculation using the Nodal Admittance Matrix method involves three steps: (1) using the voltage sources in the lines representing the induced geoelectric field to calculate equivalent current sources and summing these to obtain the nodal current sources, (2) performing the inversion of the admittance matrix and multiplying by the nodal current sources to obtain the nodal voltages, (3) using the nodal voltages to determine the currents in the lines and in the ground connections. In the Lehtinen-Pirjola method, steps 2 and 3 of the Nodal Admittance Matrix calculation are combined into one matrix expression. This involves inversion of a more complicated matrix but yields the currents to ground directly from the nodal current sources. To calculate GIC in multiple voltage levels of a power system, it is necessary to model the connections between voltage levels, not just the transmission lines and ground connections considered in traditional GIC modelling. Where GIC flow to ground through both the high-voltage and low-voltage windings of a transformer, they share a common path through the substation grounding resistance. This has been modelled previously by including non-zero, off-diagonal elements in the earthing impedance matrix of the Lehtinen-Pirjola method. However, this situation is more easily handled in both the Nodal Admittance Matrix method and the Lehtinen-Pirjola method by introducing a node at the neutral point.
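
    The three Nodal Admittance Matrix steps enumerated above condense into a few lines of linear algebra. Below is a toy sketch for an invented 3-node network with one grounded node; all resistances and induced EMFs are made up.

```python
import numpy as np

# Toy illustration of the Nodal Admittance Matrix steps described above.
# Lines: (from, to, resistance in ohms, induced EMF in volts along the line).
lines = [(0, 1, 3.0, 10.0), (1, 2, 2.0, -5.0), (0, 2, 4.0, 2.0)]
rg = {2: 0.5}                     # grounding resistance at node 2 (assumed)
n = 3

Y = np.zeros((n, n))              # nodal admittance matrix
J = np.zeros(n)                   # nodal current sources

for a, b, r, emf in lines:
    y = 1.0 / r
    Y[a, a] += y; Y[b, b] += y
    Y[a, b] -= y; Y[b, a] -= y
    # Step 1: EMF in series with r becomes an equivalent (Norton) current source
    J[a] -= emf * y
    J[b] += emf * y
for node, r in rg.items():
    Y[node, node] += 1.0 / r      # ground connection admittance

V = np.linalg.solve(Y, J)         # Step 2: nodal voltages
for a, b, r, emf in lines:        # Step 3: line and ground currents
    print(f"I[{a}->{b}] = {(V[a] - V[b] + emf) / r:+.3f} A")
for node, r in rg.items():
    print(f"I[{node}->ground] = {V[node] / r:+.3f} A")
```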

  3. Comparison of methods for modelling geomagnetically induced currents

    Directory of Open Access Journals (Sweden)

    D. H. Boteler

    2014-09-01

    Assessing the geomagnetic hazard to power systems requires reliable modelling of the geomagnetically induced currents (GIC) produced in the power network. This paper compares the Nodal Admittance Matrix method with the Lehtinen–Pirjola method and shows them to be mathematically equivalent. GIC calculation using the Nodal Admittance Matrix method involves three steps: (1) using the voltage sources in the lines representing the induced geoelectric field to calculate equivalent current sources and summing these to obtain the nodal current sources, (2) performing the inversion of the admittance matrix and multiplying by the nodal current sources to obtain the nodal voltages, (3) using the nodal voltages to determine the currents in the lines and in the ground connections. In the Lehtinen–Pirjola method, steps 2 and 3 of the Nodal Admittance Matrix calculation are combined into one matrix expression. This involves inversion of a more complicated matrix but yields the currents to ground directly from the nodal current sources. To calculate GIC in multiple voltage levels of a power system, it is necessary to model the connections between voltage levels, not just the transmission lines and ground connections considered in traditional GIC modelling. Where GIC flow to ground through both the high-voltage and low-voltage windings of a transformer, they share a common path through the substation grounding resistance. This has been modelled previously by including non-zero, off-diagonal elements in the earthing impedance matrix of the Lehtinen–Pirjola method. However, this situation is more easily handled in both the Nodal Admittance Matrix method and the Lehtinen–Pirjola method by introducing a node at the neutral point.

  4. Involving stakeholders in building integrated fisheries models using Bayesian methods

    DEFF Research Database (Denmark)

    Haapasaari, Päivi Elisabet; Mäntyniemi, Samu; Kuikka, Sakari

    2013-01-01

    A participatory Bayesian approach was used to investigate how the views of stakeholders could be utilized to develop models to help understand the Central Baltic herring fishery. In task one, we applied the Bayesian belief network methodology to elicit the causal assumptions of six stakeholders on factors that influence natural mortality, growth, and egg survival of the herring stock in probabilistic terms. We also integrated the expressed views into a meta-model using the Bayesian model averaging (BMA) method. In task two, we used influence diagrams to study qualitatively how the stakeholders frame the management problem, and we assessed the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective on knowledge, which is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks objective truth. The methodology…

  5. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model, which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown…
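
    A set partitioning model of the kind described can be stated and solved in a few lines with an off-the-shelf integer programming package. The sketch below assumes the PuLP package and invents a tiny crew scheduling instance: each flight must be covered by exactly one selected pairing.

```python
import pulp

# Minimal set partitioning sketch for crew scheduling (all data invented).
flights = ["F1", "F2", "F3", "F4"]
pairings = {                      # pairing -> (cost, flights covered)
    "P1": (8, {"F1", "F2"}),
    "P2": (6, {"F2", "F3"}),
    "P3": (5, {"F3", "F4"}),
    "P4": (9, {"F1", "F4"}),
    "P5": (16, {"F1", "F2", "F3", "F4"}),
}

prob = pulp.LpProblem("crew_set_partitioning", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", pairings, cat="Binary")
prob += pulp.lpSum(cost * x[p] for p, (cost, _) in pairings.items())
for f in flights:                 # partitioning: cover each flight exactly once
    prob += pulp.lpSum(x[p] for p, (_, cov) in pairings.items() if f in cov) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([p for p in pairings if x[p].value() == 1])   # expected: ['P1', 'P3']
```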

  6. Modeling of Information Security Strategic Planning Methods and Expert Assessments

    Directory of Open Access Journals (Sweden)

    Alexander Panteleevich Batsula

    2014-09-01

    The article addresses the problem of increasing the level of information security. As a result, a method for increasing the level of information security is developed through modeling of strategic planning using SWOT analysis and expert assessments.

  7. Ethnographic Decision Tree Modeling: A Research Method for Counseling Psychology.

    Science.gov (United States)

    Beck, Kirk A.

    2005-01-01

    This article describes ethnographic decision tree modeling (EDTM; C. H. Gladwin, 1989) as a mixed method design appropriate for counseling psychology research. EDTM is introduced and located within a postpositivist research paradigm. Decision theory that informs EDTM is reviewed, and the 2 phases of EDTM are highlighted. The 1st phase, model…

  8. Site Structure and User Navigation: Models, Measures and Methods

    NARCIS (Netherlands)

    Herder, E.; van Dijk, Elisabeth M.A.G.; Chen, S.Y; Magoulas, G.D.

    2004-01-01

    The analysis of the structure of Web sites and patterns of user navigation through these sites is gaining attention from different disciplines, as it enables unobtrusive discovery of user needs. In this chapter we give an overview of models, measures, and methods that can be used for this analysis.

  9. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

    The calculation of unsteady air loads is an essential step in any aeroelastic analysis. The subsonic doublet lattice method (DLM) is used extensively for this purpose due to its simplicity and reliability. The body models available with the popular…

  10. Application of the simplex method of linear programming model to ...

    African Journals Online (AJOL)

    This work discussed how the simplex method of linear programming could be used to maximize the profit of any business firm, using Saclux Paint Company as a case study. It equally elucidated the effect that variation in the optimal result obtained from the linear programming model will have on any given firm. It was demonstrated…
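
    A profit-maximization LP of this kind can be reproduced with any simplex implementation; the sketch below uses SciPy's linprog with invented coefficients (not Saclux Paint's actual data), negating the objective since linprog minimizes.

```python
from scipy.optimize import linprog

# Hypothetical two-product profit maximization (all numbers invented):
# unit profits 5 and 4, two shared resources with limited availability.
c = [-5.0, -4.0]                       # negated unit profits (maximize -> minimize)
A_ub = [[6.0, 4.0],                    # resource 1 usage per unit of each product
        [1.0, 2.0]]                    # resource 2 usage per unit of each product
b_ub = [24.0, 6.0]                     # available resource amounts

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)],
              method="highs")          # recent SciPy dispatches to HiGHS
print("optimal plan:", res.x, "max profit:", -res.fun)
# expected: plan approximately [3, 1.5], profit 21
```

    The binding constraints at the optimum identify the scarce resources, which is where the variation (sensitivity) analysis mentioned above would begin.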

  11. A comparison of two methods for fitting the INDCLUS model

    NARCIS (Netherlands)

    Ten Berge, Jos M.F.; Kiers, Henk A.L.

    2005-01-01

    Chaturvedi and Carroll have proposed the SINDCLUS method for fitting the INDCLUS model. It is based on splitting the two appearances of the cluster matrix in the least squares fit function and relying on convergence to a solution where both cluster matrices coincide. Kiers has proposed an

  12. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Riboswitches, which are located within certain noncoding RNA regions, function as genetic "switches", regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict structures and structural changes of the aptamer domains. Although aptamers often form a complex structure, computational approaches, such as RNAComposer and Rosetta, have already been applied to model the tertiary (three-dimensional, 3D) structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide a new tool to design synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.

  13. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    Directory of Open Access Journals (Sweden)

    Frantisek Jelenciak

    2015-12-01

    This article describes the projection equivalent method (PEM) as a specific and relatively simple approach to the modelling of aircraft dynamics. With the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. A characteristic of the PEM is that, in principle, it provides an acceptable regression model of aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a dynamics viewpoint. The principle of this method is based on applying Newtonian mechanics, combined with a specific form of the finite element method to cover additional effects. The main advantage of the PEM is that it is not necessary to carry out wind tunnel measurements to identify the model's parameters. Plausible dynamical behaviour of the model can be achieved by specific correction parameters, which can be determined on the basis of experimental data obtained during flight of the aircraft. In this article, we present the PEM as applied to an airship, as well as a comparison of the data calculated by the PEM with experimental flight data.

  14. Acoustic 3D modeling by the method of integral equations

    Science.gov (United States)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2018-02-01

    This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. Tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system equations, and also for parallelizing across multiple sources. Practical examples and efficiency tests are presented as well.
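
    The core computational trick, a dense Toeplitz-structured IE matrix that is never stored but applied through the FFT inside an iterative solver, can be shown in one dimension. The sketch below uses an invented second-kind operator A = I + T rather than the seismic kernel, and assumes a recent SciPy (gmres with the rtol keyword; older versions use tol).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# A Toeplitz matrix-vector product evaluated as a circular convolution via
# circulant embedding, wrapped in a LinearOperator and solved with GMRES.
n = 256
kernel = 1.0 / (1.0 + np.arange(-(n - 1), n) ** 2)   # Toeplitz generator t_{i-j}
pad = 2 * n - 1
# first column of the embedding circulant: [t_0..t_{n-1}, t_{-(n-1)}..t_{-1}]
k_hat = np.fft.fft(np.concatenate([kernel[n - 1:], kernel[:n - 1]]))

def matvec(x):
    y = np.fft.ifft(k_hat * np.fft.fft(x, pad))      # O(n log n) instead of O(n^2)
    return x + y[:n].real                            # operator A = I + T

A = LinearOperator((n, n), matvec=matvec)
b = np.ones(n)
x, info = gmres(A, b, rtol=1e-10)
print("converged:", info == 0, "residual:", np.linalg.norm(matvec(x) - b))
```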

  15. Cognitive psychology and self-reports: models and methods.

    Science.gov (United States)

    Jobe, Jared B

    2003-05-01

    This article describes the models and methods that cognitive psychologists and survey researchers use to evaluate and experimentally test cognitive issues in questionnaire design and subsequently improve self-report instruments. These models and methods assess the cognitive processes underlying how respondents comprehend and generate answers to self-report questions. Cognitive processing models are briefly described. Non-experimental methods--expert cognitive review, cognitive task analysis, focus groups, and cognitive interviews--are described. Examples are provided of how these methods were effectively used to identify cognitive self-report issues. Experimental methods--cognitive laboratory experiments, field tests, and experiments embedded in field surveys--are described. Examples are provided of: (a) how laboratory experiments were designed to test the capability and accuracy of respondents in performing the cognitive tasks required to answer self-report questions, (b) how a field experiment was conducted in which a cognitively designed questionnaire was effectively tested against the original questionnaire, and (c) how a cognitive experiment embedded in a field survey was conducted to test cognitive predictions.

  16. A new method to determine the number of experimental data using statistical modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)

    2017-06-15

    For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using the information on the underlying distribution, the sequential statistical modeling (SSM) approach, and the kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations, using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data describing the fatigue strength coefficient of SAE 950X is used for demonstrating the performance of the obtained statistical models that use a pre-determined number of experimental data in predicting the probability of failure for a target fatigue life.
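
    A bare-bones version of the area-metric convergence loop might look as follows. The tolerance, the model family (normal), and the synthetic data stream are all assumptions for illustration, not the paper's settings.

```python
import numpy as np
from scipy import stats

# Monitor the area between the empirical CDF of the data gathered so far and
# the currently fitted model CDF; stop collecting once the metric stabilizes.
rng = np.random.default_rng(1)

def area_metric(samples, cdf):
    x = np.sort(samples)
    grid = np.linspace(x[0], x[-1], 2000)
    ecdf = np.searchsorted(x, grid, side="right") / len(x)
    gap = np.abs(ecdf - cdf(grid))
    return np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(grid))  # trapezoid rule

stream = rng.normal(10.0, 2.0, 500)           # stand-in experimental stream
prev = None
for n in range(10, 501, 10):
    data = stream[:n]
    mu, sd = data.mean(), data.std(ddof=1)    # fitted statistical model
    a = area_metric(data, stats.norm(mu, sd).cdf)
    if prev is not None and abs(a - prev) < 1e-3:  # assumed convergence tolerance
        print(f"area metric stabilized at n = {n} (area = {a:.4f})")
        break
    prev = a
else:
    print("did not stabilize within 500 samples")
```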

  17. A constructive model potential method for atomic interactions

    Science.gov (United States)

    Bottcher, C.; Dalgarno, A.

    1974-01-01

    A model potential method is presented that can be applied to many electron single centre and two centre systems. The development leads to a Hamiltonian with terms arising from core polarization that depend parametrically upon the positions of the valence electrons. Some of the terms have been introduced empirically in previous studies. Their significance is clarified by an analysis of a similar model in classical electrostatics. The explicit forms of the expectation values of operators at large separations of two atoms given by the model potential method are shown to be equivalent to the exact forms when the assumption is made that the energy level differences of one atom are negligible compared to those of the other.

  18. Unicriterion Model: A Qualitative Decision Making Method That Promotes Ethics

    Directory of Open Access Journals (Sweden)

    Fernando Guilherme Silvano Lobo Pimentel

    2011-06-01

    Full Text Available Management decision making methods frequently adopt quantitativemodels of several criteria that bypass the question of whysome criteria are considered more important than others, whichmakes more difficult the task of delivering a transparent viewof preference structure priorities that might promote ethics andlearning and serve as a basis for future decisions. To tackle thisparticular shortcoming of usual methods, an alternative qualitativemethodology of aggregating preferences based on the rankingof criteria is proposed. Such an approach delivers a simpleand transparent model for the solution of each preference conflictfaced during the management decision making process. Themethod proceeds by breaking the decision problem into ‘two criteria– two alternatives’ scenarios, and translating the problem ofchoice between alternatives to a problem of choice between criteriawhenever appropriate. The unicriterion model method is illustratedby its application in a car purchase and a house purchasedecision problem.

  19. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    Science.gov (United States)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

    The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems and its accuracy directly influences the system veracity. However, current IR smoke models cannot provide high veracity, because certain physical characteristics are frequently ignored in fluid simulation; simplifying the discrete phase as a continuous phase and ignoring the IR decoy missile-body spinning. To address this defect, this paper proposes a dynamic modeling method for IR smoke, based on an enhanced discrete phase model (DPM). A mathematical simulation model based on an enhanced DPM is built and a dynamic computing fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes a dynamic method for modeling IR smoke with higher veracity.

  20. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Science.gov (United States)

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
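
    The underlying signal-model idea, a per-frequency path-loss model inverted for position, can be sketched with a log-distance model and nonlinear least squares. Everything here (coordinates, exponents, reference powers) is invented; this is not the MFAM implementation, only the generic multi-frequency fusion it builds on.

```python
import numpy as np
from scipy.optimize import least_squares

# Log-distance path-loss model per frequency band, fitted for an unknown
# position from RSSI measured at several access points on two bands.
aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [10.0, 8.0]])
p0 = np.array([-40.0, -30.0])     # RSSI at 1 m for bands 0 and 1 (assumed)
n_exp = np.array([2.2, 1.9])      # path-loss exponents per band (assumed)

def rssi_model(pos, ap, band):
    d = np.maximum(np.linalg.norm(pos - ap), 0.1)
    return p0[band] - 10.0 * n_exp[band] * np.log10(d)

true_pos = np.array([3.0, 5.0])
rng = np.random.default_rng(7)
meas = [(ap, band, rssi_model(true_pos, ap, band) + rng.normal(0, 1.0))
        for ap in aps for band in (0, 1)]          # both frequencies fused

def residuals(pos):
    return [rssi_model(pos, ap, band) - r for ap, band, r in meas]

est = least_squares(residuals, x0=[5.0, 4.0]).x
print("estimated position:", est.round(2),
      "error:", np.linalg.norm(est - true_pos).round(2), "m")
```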

  1. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, a calculation model needs to be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, it is time-consuming and error-prone to describe models manually in GDML. Automatic modeling methods have been developed recently, but problems remain in most existing modeling programs; in particular, some are not accurate or are adapted only to specific CAD formats. To convert CAD models to GDML accurately, a Geant4 Computer Aided Design (CAD) based modeling method was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is dealing with CAD models represented by boundary representation (B-REP) and GDML models represented by constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling. (authors)

  2. Model parameterization as method for data analysis in dendroecology

    Science.gov (United States)

    Tychkov, Ivan; Shishov, Vladimir; Popkova, Margarita

    2017-04-01

    There is no argument about the usefulness of process-based models in ecological studies; the only limitations are how well a model's algorithm is developed and how it is applied in research. Simulation of tree-ring growth based on climate provides valuable information on the tree-ring growth response to different environmental conditions, and also sheds light on species-specific features of the tree-ring growth process. Visual parameterization of the Vaganov-Shashkin model allows estimation of the non-linear response of tree-ring growth based on daily climate data: daily temperature, estimated day light, and soil moisture. Previous use of the VS-Oscilloscope (a software tool for visual parameterization) has shown a good ability to recreate unique patterns of tree-ring growth for coniferous species in Siberian Russia, the USA, China, Mediterranean Spain, and Tunisia. But the use of such models is mostly one-sided, toward a better understanding of different tree growth processes, in contrast to statistical methods of analysis (e.g., generalized linear models, mixed models, structural equations), which can be used for reconstruction and forecast. Usually the models are used either for checking new hypotheses or for quantitative assessment of physiological tree growth data to reveal growth process mechanisms, while statistical methods are used for data mining assessment and as a study tool in themselves. The high sensitivity of the model's VS-parameters reflects the ability of the model to simulate tree-ring growth and to evaluate the limiting climatic factors of growth. Precise parameterization of the VS-Oscilloscope provides valuable information about the growth processes of trees and the conditions under which these processes occur (e.g., day of growth season onset, length of season, minimal/maximum temperature values for tree-ring growth, formation of wide or narrow rings, etc.). The work was supported by the Russian Science Foundation (RSF # 14-14-00219).

  3. Modeling of radionuclide migration through porous material with meshless method

    International Nuclear Information System (INIS)

    Vrankar, L.; Turk, G.; Runovc, F.

    2005-01-01

    To assess the long term safety of a radioactive waste disposal system, mathematical models are used to describe groundwater flow, chemistry and potential radionuclide migration through geological formations. A number of processes need to be considered when predicting the movement of radionuclides through the geosphere. The most important input data are obtained from field measurements, which are not completely available for all regions of interest. For example, the hydraulic conductivity as an input parameter varies from place to place. In such cases geostatistical science offers a variety of spatial estimation procedures. Methods for solving the solute transport equation can also be classified as Eulerian, Lagrangian and mixed. The numerical solution of partial differential equations (PDE) is usually obtained by finite difference methods (FDM), finite element methods (FEM), or finite volume methods (FVM). Kansa introduced the concept of solving partial differential equations using radial basis functions (RBF) for hyperbolic, parabolic and elliptic PDEs. Our goal was to present a relatively new approach to the modelling of radionuclide migration through the geosphere using radial basis function methods in Eulerian and Lagrangian coordinates. Radionuclide concentrations will also be calculated in heterogeneous and partly heterogeneous 2D porous media. We compared the meshless method with the traditional finite difference scheme. (author)
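
    Kansa's collocation idea mentioned above is easy to demonstrate on a 1-D Poisson problem with multiquadric RBFs: boundary rows of the collocation matrix enforce the boundary conditions, interior rows enforce the PDE. The shape parameter and the manufactured solution are assumptions; the cited work treats 2-D transport in Eulerian and Lagrangian frames, which is not reproduced here.

```python
import numpy as np

# Kansa-type RBF collocation for u''(x) = f(x) on [0,1], u(0) = u(1) = 0.
N, c = 30, 0.15                          # centers and shape parameter (assumed)
xc = np.linspace(0.0, 1.0, N)            # collocation points = RBF centers

phi = lambda x, xj: np.sqrt((x - xj) ** 2 + c ** 2)             # multiquadric
d2phi = lambda x, xj: c ** 2 / ((x - xj) ** 2 + c ** 2) ** 1.5  # its 2nd derivative

f = lambda x: -np.pi ** 2 * np.sin(np.pi * x)   # manufactured right-hand side

A = np.empty((N, N)); b = np.empty(N)
for i, x in enumerate(xc):
    boundary = (i == 0) or (i == N - 1)
    A[i] = phi(x, xc) if boundary else d2phi(x, xc)   # BC rows vs PDE rows
    b[i] = 0.0 if boundary else f(x)

lam = np.linalg.solve(A, b)                   # RBF expansion coefficients
u = lambda x: phi(np.asarray(x)[:, None], xc[None, :]) @ lam

xt = np.linspace(0, 1, 201)
print("max error vs sin(pi x):", np.abs(u(xt) - np.sin(np.pi * xt)).max())
```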

  4. A review of distributed parameter groundwater management modeling methods

    Science.gov (United States)

    Gorelick, Steven M.

    1983-01-01

    Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall in two general categories: hydraulics or policy evaluation and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence upon regional groundwater use of institutional policies such as taxes and quotas. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.

  5. Semantic Model Driven Architecture Based Method for Enterprise Application Development

    Science.gov (United States)

    Wu, Minghui; Ying, Jing; Yan, Hui

    Enterprise applications must meet the requirements of dynamic business processes and adopt the latest technologies flexibly, while solving the problems caused by their heterogeneous nature. Service-Oriented Architecture (SOA) is becoming a leading paradigm for business process integration. This research work focuses on business process modeling and proposes a semantic model-driven development method named SMDA, combining Ontology and Model-Driven Architecture (MDA) technologies. The architecture of SMDA is presented in three orthogonal perspectives. (1) The vertical axis is the four MDA layers, with the focus on UML profiles in M2 (the meta-model layer) for ontology modeling, and on three abstraction levels: CIM, PIM, and PSM modeling, respectively. (2) The horizontal axis covers the different concerns involved in the development: Process, Application, Information, Organization, and Technology. (3) The traversal axis refers to aspects that influence the other models across the cutting axes: Architecture, Semantics, Aspect, and Pattern. The paper also introduces the modeling and transformation process in SMDA and briefly describes dynamic service composition support.

  6. Modeling of uncertainty in atmospheric transport system using hybrid method

    International Nuclear Information System (INIS)

    Pandey, M.; Ranade, Ashok; Brij Kumar; Datta, D.

    2012-01-01

    Atmospheric dispersion models are routinely used at nuclear and chemical plants to estimate exposure to members of the public and occupational workers due to releases of hazardous contaminants into the atmosphere. Atmospheric dispersion is a stochastic phenomenon and, in general, the concentration of the contaminant estimated at a given time and at a predetermined location downwind of a source cannot be predicted precisely. Uncertainty in atmospheric dispersion model predictions is associated with: 'data' or 'parameter' uncertainty, resulting from errors in the data used to execute and evaluate the model, uncertainties in empirical model parameters, and initial and boundary conditions; 'model' or 'structural' uncertainty, arising from inaccurate treatment of dynamical and chemical processes, approximate numerical solutions, and internal model errors; and 'stochastic' uncertainty, which results from the turbulent nature of the atmosphere as well as from the unpredictability of human activities related to emissions. The possibility theory based on fuzzy measures has been proposed in recent years as an alternative approach to address knowledge uncertainty of a model in situations where the available information is too vague to represent the parameters statistically. The paper presents a novel approach (called the Hybrid Method) to model knowledge uncertainty in a physical system by a combination of probabilistic and possibilistic representations of parametric uncertainties. As a case study, the proposed approach is applied to estimating the ground level concentration of a hazardous contaminant in air due to atmospheric releases through the stack (chimney) of a nuclear plant. The application illustrates the potential of the proposed approach. (author)
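
    A bare-bones version of the hybrid idea pairs Monte Carlo sampling of the probabilistic inputs with alpha-cut intervals for a fuzzy input. The plume formula and every parameter below are assumptions; evaluating only the endpoints of each cut further assumes monotonicity over the interval, which a real implementation would replace by optimization over the cut.

```python
import numpy as np

# Probabilistic inputs sampled Monte-Carlo style; a fuzzy input propagated
# through alpha-cut intervals, yielding bands of possible concentrations.
rng = np.random.default_rng(0)

Q, H = 1.0, 50.0                                  # source term, stack height
def plume(u, sy, sz):                             # ground-level, centerline
    return Q / (np.pi * u * sy * sz) * np.exp(-H**2 / (2 * sz**2))

alphas = [0.0, 0.5, 1.0]
# fuzzy triangular sigma_z: support [18, 30], core 24 -> alpha-cut intervals
sz_cut = lambda a: (18 + 6 * a, 30 - 6 * a)

n_mc = 2000
low = {a: [] for a in alphas}; high = {a: [] for a in alphas}
for _ in range(n_mc):
    u = rng.lognormal(mean=1.5, sigma=0.3)        # probabilistic wind speed
    sy = rng.normal(35.0, 3.0)                    # probabilistic sigma_y
    for a in alphas:
        lo, hi = sz_cut(a)
        vals = (plume(u, sy, lo), plume(u, sy, hi))  # endpoint evaluation
        low[a].append(min(vals)); high[a].append(max(vals))

for a in alphas:                                  # p-box style summary
    print(f"alpha={a}: 95th pct of C in "
          f"[{np.percentile(low[a], 95):.3e}, {np.percentile(high[a], 95):.3e}]")
```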

  7. Evaluation of internal noise methods for Hotelling observer models

    International Nuclear Information System (INIS)

    Zhang Yani; Pham, Binh T.; Eckstein, Miguel P.

    2007-01-01

    The inclusion of internal noise in model observers is a common method to allow for quantitative comparisons between human and model observer performance in visual detection tasks. In this article, we studied two different strategies for inserting internal noise into Hotelling model observers. In the first strategy, internal noise was added to the output of individual channels: (a) independent nonuniform channel noise, (b) independent uniform channel noise. In the second strategy, internal noise was added to the decision variable arising from the combination of channel responses. The standard deviation of the zero-mean internal noise was either constant or proportional to: (a) the decision variable's standard deviation due to the external noise, (b) the decision variable's variance caused by the external noise, (c) the decision variable magnitude on a trial-to-trial basis. We tested three model observers: the square window Hotelling observer (HO), the channelized Hotelling observer (CHO), and the Laguerre-Gauss Hotelling observer (LGHO), using a four-alternative forced-choice (4AFC) signal-known-exactly-but-variable task with a simulated signal embedded in real x-ray coronary angiogram backgrounds. The results showed that the internal noise method that led to the best prediction of human performance differed across the studied model observers. The CHO model best predicted human observer performance with the channel internal noise. The HO and LGHO best predicted human observer performance with the decision variable internal noise. The present results might guide researchers with the choice of methods to include internal noise into Hotelling model observers when evaluating and optimizing medical image quality.
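
    The decision-variable internal-noise strategy is the easiest to sketch: compute the channelized Hotelling decision variable, then add zero-mean noise whose standard deviation is proportional to that induced by the external (image) noise. Channels, images, and the proportionality constant below are invented for illustration.

```python
import numpy as np

# Toy channelized Hotelling observer with decision-variable internal noise.
rng = np.random.default_rng(3)
npix, nchan = 64, 6

U = rng.normal(size=(npix, nchan))                 # stand-in channel templates
signal = np.zeros(npix); signal[28:36] = 0.5       # known signal profile

def channel_outputs(present):
    img = rng.normal(size=npix) + (signal if present else 0.0)
    return U.T @ img

# Train: channel mean difference and covariance -> Hotelling template w
v0 = np.array([channel_outputs(False) for _ in range(2000)])
v1 = np.array([channel_outputs(True) for _ in range(2000)])
w = np.linalg.solve(np.cov(np.vstack([v0, v1]).T), v1.mean(0) - v0.mean(0))

t0, t1 = v0 @ w, v1 @ w
sigma_ext = t0.std()                               # external-noise-induced std
gamma = 0.8                                        # internal/external ratio (assumed)
t0n = t0 + rng.normal(0, gamma * sigma_ext, len(t0))
t1n = t1 + rng.normal(0, gamma * sigma_ext, len(t1))

# Detectability index d' before and after adding internal noise
d = lambda a, b: (b.mean() - a.mean()) / np.sqrt(0.5 * (a.var() + b.var()))
print(f"d' ideal CHO: {d(t0, t1):.2f}, with internal noise: {d(t0n, t1n):.2f}")
```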

  8. Storm surge model based on variational data assimilation method

    Directory of Open Access Journals (Sweden)

    Shi-li Huang

    2010-06-01

    By combining computational and observational information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution, meant to improve the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variation-based model was developed and validated through data assimilation tests in an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. Then, the actual storm surge induced by Typhoon 0515 was forecast by the developed model, and the results demonstrate its efficiency in practical application.
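
    The control idea, treating the drag coefficient as the control variable of a least-squares cost on observed surge levels, reduces to a one-dimensional minimization once the surge model is replaced by a deliberately trivial stand-in; the real model is a 2-D unstructured-grid solver, not shown here, and all numbers below are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Schematic variational calibration of a wind stress drag coefficient Cd.
wind = np.array([12.0, 18.0, 25.0, 22.0, 15.0])   # wind speeds (assumed)

def surge_model(cd):
    return cd * 1.2e-2 * wind**2                  # toy quadratic stress response

cd_true = 2.5e-3
obs = surge_model(cd_true) + np.random.default_rng(4).normal(0, 0.05, wind.size)

cost = lambda cd: np.sum((surge_model(cd) - obs) ** 2)   # variational cost J(Cd)
res = minimize_scalar(cost, bounds=(1e-4, 1e-2), method="bounded")
print(f"identified Cd = {res.x:.2e} (true {cd_true:.1e})")
```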

  9. Modeling crime events by d-separation method

    Science.gov (United States)

    Aarthee, R.; Ezhilmaran, D.

    2017-11-01

    Problematic legal cases have recently called for a scientifically founded method of dealing with the qualitative and quantitative roles of evidence in a case [1]. To deal with the quantitative role, we propose a d-separation method for modeling crime events. D-separation is a graphical criterion for identifying independence in a directed acyclic graph. By developing a d-separation method, we aim to lay the foundations for the development of a software support tool that can deal with evidential reasoning in legal cases. Such a tool is meant to be used by a judge or juror, in alliance with various experts who can provide information about the details. This will hopefully improve the communication between judges or jurors and experts. The proposed method can uncover more valid independencies than other graphical criteria.
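
    D-separation itself is mechanical to check. The sketch below implements the classic moral-graph criterion over a hypothetical evidential DAG (nodes and edges invented); recent NetworkX versions also ship this as a built-in (d_separated, renamed is_d_separator in newer releases).

```python
import networkx as nx

# X is d-separated from Y given Z iff X and Y are disconnected in the
# moralized ancestral graph after deleting the nodes in Z.
def d_separated(G, X, Y, Z):
    nodes = set(X) | set(Y) | set(Z)
    anc = set(nodes)
    for node in nodes:                             # ancestral subgraph
        anc |= nx.ancestors(G, node)
    A = G.subgraph(anc)
    M = nx.Graph(A.edges())                        # drop edge directions
    for node in A.nodes():                         # moralize: marry co-parents
        parents = list(A.predecessors(node))
        M.add_edges_from((p, q) for i, p in enumerate(parents)
                         for q in parents[i + 1:])
    M.add_nodes_from(A.nodes())
    M.remove_nodes_from(Z)                         # condition on Z
    return all(not nx.has_path(M, x, y) for x in X for y in Y
               if x in M and y in M)

G = nx.DiGraph([("motive", "crime"), ("opportunity", "crime"),
                ("crime", "dna_match"), ("lab_error", "dna_match")])
print(d_separated(G, {"motive"}, {"lab_error"}, set()))          # True
print(d_separated(G, {"motive"}, {"lab_error"}, {"dna_match"}))  # False (collider)
```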

  10. Numerical methods for the Lévy LIBOR model

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Skovmand, David

    2010-01-01

    The aim of this work is to provide fast and accurate approximation schemes for the Monte Carlo pricing of derivatives in the Lévy LIBOR model of Eberlein and Özkan (2005). Standard methods can be applied to solve the stochastic differential equations of the successive LIBOR rates, but the methods are generally slow. We propose an alternative approximation scheme based on Picard iterations. Our approach is similar in accuracy to the full numerical solution, but with the feature that each rate is, unlike in the standard method, evolved independently of the other rates in the term structure. This enables simultaneous calculation of derivative prices of different maturities using parallel computing. We include numerical illustrations of the accuracy and speed of our method pricing caplets.

  11. Numerical Methods for the Lévy LIBOR Model

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Skovmand, David

    The aim of this work is to provide fast and accurate approximation schemes for the Monte Carlo pricing of derivatives in the Lévy LIBOR model of Eberlein and Özkan (2005). Standard methods can be applied to solve the stochastic differential equations of the successive LIBOR rates, but the methods are generally slow. We propose an alternative approximation scheme based on Picard iterations. Our approach is similar in accuracy to the full numerical solution, but with the feature that each rate is, unlike in the standard method, evolved independently of the other rates in the term structure. This enables simultaneous calculation of derivative prices of different maturities using parallel computing. We include numerical illustrations of the accuracy and speed of our method pricing caplets.
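
    The Picard idea both records describe, evolving each quantity from the previous iterate of the whole path so that the components decouple, can be shown on a scalar SDE with invented coefficients: on a fixed Brownian path, the fixed point of the iteration is the usual Euler path.

```python
import numpy as np

# Picard iteration for an SDE on a fixed Brownian path:
#   X^{k+1}(t) = X0 + int a(X^k) ds + int b(X^k) dW
# Each iterate is a whole path built from the previous one, so in a system
# the components could be updated independently (hence in parallel).
rng = np.random.default_rng(5)
T, n = 1.0, 1000
dt = T / n
dW = rng.normal(0, np.sqrt(dt), n)

a = lambda x: 0.05 * x                 # drift (invented)
b = lambda x: 0.2 * x                  # diffusion (invented)
X0 = 1.0

X = np.full(n + 1, X0)                 # Picard iterate 0: constant path
for k in range(4):                     # a few iterations suffice in practice
    X_new = np.empty(n + 1); X_new[0] = X0
    incr = a(X[:-1]) * dt + b(X[:-1]) * dW
    X_new[1:] = X0 + np.cumsum(incr)
    print(f"iteration {k}: max change {np.abs(X_new - X).max():.2e}")
    X = X_new
```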

  12. Soybean yield modeling using bootstrap methods for small samples

    Energy Technology Data Exchange (ETDEWEB)

    Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.

    2016-11-01

    One of the problems that occurs when working with regression models concerns the sample size: since the statistical methods used in inferential analyses are asymptotic, if the sample is small the analysis may be compromised because the estimates will be biased. An alternative is to use the bootstrap methodology, which in its non-parametric version does not need to guess or know the probability distribution that generated the original sample. In this work we used a set of soybean yield data and physical and chemical soil properties, formed with few samples, to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points, and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties that were significant in the construction of the soybean yield regression model, construct the confidence intervals of the parameters, and identify the points that had great influence on the estimated parameters. (Author)
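
    The non-parametric (case-resampling) bootstrap for regression coefficients is compact to sketch. The data below are synthetic stand-ins for the yield/soil-property measurements, not the paper's data set.

```python
import numpy as np

# Case-resampling bootstrap: percentile confidence intervals for the
# coefficients of a small-sample multiple linear regression.
rng = np.random.default_rng(42)
n = 25                                            # deliberately small sample
X = np.column_stack([np.ones(n),                  # intercept
                     rng.normal(6.0, 0.5, n),     # e.g. soil pH (invented)
                     rng.normal(30.0, 5.0, n)])   # e.g. clay content % (invented)
beta_true = np.array([1.0, 0.4, 0.05])
y = X @ beta_true + rng.normal(0, 0.3, n)

B = 5000
boot = np.empty((B, 3))
for b in range(B):
    idx = rng.integers(0, n, n)                   # resample cases with replacement
    boot[b] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for j, name in enumerate(["intercept", "pH", "clay"]):
    print(f"{name}: 95% bootstrap CI [{lo[j]:+.3f}, {hi[j]:+.3f}]")
```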

  13. A hierarchical network modeling method for railway tunnels safety assessment

    Science.gov (United States)

    Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin

    2017-02-01

    Using network theory to model risk-related knowledge of accidents is regarded as potentially very helpful in risk management. A large amount of defect detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theories and data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account tunnel structures, tunnel defects, potential failures, and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. An algorithm is then presented to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers the actual defects and the possible risks of defects gained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and critical risks of defects. This paper further develops accident-causation network modeling methods, which can provide guidance for specific maintenance measures.

  14. A Kriging Model Based Finite Element Model Updating Method for Damage Detection

    Directory of Open Access Journals (Sweden)

    Xiuming Yang

    2017-10-01

    Model updating is an effective means of damage identification, and surrogate modeling has attracted considerable attention for saving computational cost in finite element (FE) model updating, especially for large-scale structures. In this context, a surrogate model of frequency is normally constructed for damage identification, while the frequency response function (FRF) is rarely used, as it usually changes dramatically with the updating parameters. This paper presents a new surrogate-model-based model updating method that takes advantage of the measured FRFs. The Frequency Domain Assurance Criterion (FDAC) is used to build the objective function, whose nonlinear response surface is constructed by the Kriging model. Then, the efficient global optimization (EGO) algorithm is introduced to obtain the model updating results. The proposed method has good accuracy and robustness, which have been verified by a numerical simulation of a cantilever and by experimental test data of a laboratory three-story structure.
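
    The surrogate loop, fitting a Kriging model to (parameter, objective) samples and picking the next sample by maximizing Expected Improvement as in EGO, is sketched below with scikit-learn's Gaussian process. A cheap 1-D stand-in objective replaces the FDAC-based FE mismatch; everything else is generic.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(theta):                       # stand-in for the FE model mismatch
    return (theta - 0.6) ** 2 + 0.05 * np.sin(15 * theta)

X = np.array([[0.0], [0.5], [1.0]])         # initial design points
y = objective(X).ravel()
grid = np.linspace(0, 1, 401)[:, None]      # candidate updating-parameter values

for it in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    imp = y.min() - mu                      # improvement over the incumbent
    with np.errstate(divide="ignore", invalid="ignore"):
        z = imp / sd
        ei = np.where(sd > 0, imp * norm.cdf(z) + sd * norm.pdf(z), 0.0)
    x_next = grid[np.argmax(ei)]            # Expected Improvement maximizer
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print(f"updated parameter ~ {X[np.argmin(y), 0]:.3f}, objective {y.min():.4f}")
```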

  15. Review of Methods for Buildings Energy Performance Modelling

    Science.gov (United States)

    Krstić, Hrvoje; Teni, Mihaela

    2017-10-01

    The research presented in this paper gives a brief review of the methods used for modelling the energy performance of buildings. It also gives a comprehensive review of the advantages and disadvantages of the available methods, as well as of the input parameters used for modelling buildings' energy performance. The European EPBD Directive obliges the implementation of an energy certification procedure, which gives insight into buildings' energy performance via the existing energy certificate databases. Some of the methods for modelling buildings' energy performance mentioned in this paper were developed by employing data sets of buildings which have already undergone an energy certification procedure. Such a database is used in this paper, where the majority of buildings in the database have already undergone some form of partial retrofitting – replacement of windows or installation of thermal insulation – but still have poor energy performance. The case study presented in this paper utilizes an energy certificate database obtained from residential units in Croatia (over 400 buildings) in order to determine the dependence between buildings' energy performance and the variables in the database, using statistical dependency tests. Building energy performance in the database is expressed as a building energy efficiency rate (from A+ to G), which is based on the specific annual energy needed for heating, for referential climatic data [kWh/(m2a)]. The independent variables in the database are the surface and volume of the conditioned part of the building, the building shape factor, the energy used for heating, CO2 emission, building age, and year of reconstruction. The research results presented in this paper give insight into the possibilities of the methods used for modelling buildings' energy performance, and provide an analysis of the dependencies between buildings' energy performance as the dependent variable and the independent variables from the database. The presented results could be used for the development of new building energy performance models.

  16. A model and controller reduction method for robust control design

    Energy Technology Data Exchange (ETDEWEB)

    Yue, M.; Schlueter, R.

    2003-10-20

    A bifurcation subsystem based model and controller reduction approach is presented. Using this approach, a robust μ-synthesis SVC control is designed for interarea oscillation and voltage control, based on a small reduced-order bifurcation subsystem model of the full system. The control synthesis problem is posed by structured uncertainty modeling and control configuration formulation using the bifurcation subsystem knowledge of the nature of the interarea oscillation caused by a specific uncertainty parameter. The bifurcation subsystem method plays a key role in this paper because it provides (1) a bifurcation parameter for uncertainty modeling; (2) a criterion to reduce the order of the resulting MSVC control; and (3) a low-order model for a bifurcation subsystem based SVC (BMSVC) design. The use of the model of the bifurcation subsystem to produce a low-order controller simplifies the control design and reduces the computational effort so significantly that the robust μ-synthesis control can be applied to large systems where the computation would otherwise make robust control design impractical. The RGA analysis and time simulation show that the reduced BMSVC control design captures the center manifold dynamics and uncertainty structure of the full system model and is capable of stabilizing the full system and achieving satisfactory control performance.

  17. A Parsimonious Bootstrap Method to Model Natural Inflow Energy Series

    Directory of Open Access Journals (Sweden)

    Fernando Luiz Cyrino Oliveira

    2014-01-01

    The Brazilian energy generation and transmission system is quite peculiar in its dimension and characteristics, and as such can be considered unique in the world. It is a high-dimension hydrothermal system with huge participation of hydro plants. Such strong dependency on hydrological regimes implies uncertainties related to energy planning, requiring adequate modeling of the hydrological time series. This is carried out via stochastic simulations of monthly inflow series using the family of Periodic Autoregressive models, PAR(p), one for each period (month) of the year. This paper shows the problems in fitting these models in the current system, particularly the identification of the autoregressive order "p" and the corresponding parameter estimation. It then proposes a new approach to setting both the model order and the parameter estimates of the PAR(p) models, using a nonparametric computational technique known as the bootstrap. This technique allows the estimation of reliable confidence intervals for the model parameters. The results obtained with the Parsimonious Bootstrap Method of Moments (PBMOM) produced not only more parsimonious model orders but also adherent stochastic scenarios and, in the long range, lead to a better use of water resources in energy operation planning.

  18. Modeling Music Emotion Judgments Using Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Naresh N. Vempala

    2018-01-01

    Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments, including neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.

  19. New Models and Methods for the Electroweak Scale

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics

    2017-09-26

    This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak-scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and in annihilation in space. Accomplishments include creating new tools for analyses of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac…

  20. Impacts modeling using the SPH particulate method. Case study

    International Nuclear Information System (INIS)

    Debord, R.

    1999-01-01

    The aim of this study is the modeling of the impact of melted metal on the reactor vessel head in the case of a core-meltdown accident. Modeling with the classical finite element method alone is not sufficient; it requires coupling with particle methods in order to take into account the behaviour of the corium. After a general introduction to particle methods, the Nabor and SPH (smoothed particle hydrodynamics) methods are described. The theoretical and numerical reliability of the SPH method is then assessed using simple cases. In particular, the number of neighbours significantly influences the precision of the calculations. Also, the mesh of the structure must be adapted to the mesh of the fluid in order to reduce edge effects. Finally, this study has shown that the values of the artificial viscosity coefficients used in the simulation of the BERDA test performed by FZK Karlsruhe (Germany) are not correct. The domain of use of these coefficients was specified for a low-speed impact. (J.S.)
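
    The SPH building block itself, particle-carried field values reconstructed with a smoothing kernel, is compact to show. Below, a 1-D cubic-spline kernel interpolates an invented particle field; the coupling to a finite element structure used for the vessel-head studies above is not modelled here.

```python
import numpy as np

# 1-D cubic-spline smoothing kernel and the standard SPH interpolation
#   f(x) ~ sum_j (m_j / rho_j) f_j W(x - x_j, h)
def w_cubic(r, h):
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                    # 1-D normalization constant
    return sigma * np.where(q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
                   np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

h = 0.1                                        # smoothing length (assumed)
x_p = (np.arange(20) + 0.5) / 20               # particle positions (cell centers)
m, rho = 1.0 / 20, 1.0                         # equal masses, unit density
f_p = np.sin(2 * np.pi * x_p)                  # field carried by the particles

x_eval = np.linspace(0.0, 1.0, 101)
f_sph = np.array([np.sum(m / rho * f_p * w_cubic(x - x_p, h)) for x in x_eval])
err = np.abs(f_sph[20:-20] - np.sin(2 * np.pi * x_eval[20:-20])).max()
print(f"max interior interpolation error: {err:.3f}")
```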

  1. Seamless Method- and Model-based Software and Systems Engineering

    Science.gov (United States)

    Broy, Manfred

    Today, engineering software-intensive systems is still more or less a handicraft or, at most, at the level of manufacturing. Many steps are done ad hoc and not in a fully systematic way. Applied methods, if any, are not scientifically justified or justified by empirical data, and as a result carrying out large software projects is still an adventure. However, there is no reason why the development of software-intensive systems cannot in the future be done with the same precision and scientific rigor as in established engineering disciplines. To do that, however, a number of scientific and engineering challenges have to be mastered. The first aims at a deep understanding of the essentials of carrying out such projects, which includes appropriate models and effective management methods. What is needed is a portfolio of models and methods, coming together with comprehensive tool support, as well as deep insights into the obstacles of developing software-intensive systems, and a portfolio of established and proven techniques and methods with clear profiles and rules that indicate when each method is ready for application. In the following we argue that there is scientific evidence and enough research results so far to be confident that solid engineering of software-intensive systems can be achieved in the future. However, quite a number of scientific research problems still have to be solved.

  2. Finite-element method modeling of hyper-frequency structures

    International Nuclear Information System (INIS)

    Zhang, Min

    1990-01-01

    The modeling of microwave propagation problems, including eigenvalue problems and scattering problems, is accomplished by the finite element method with vector and scalar functionals. For the eigenvalue problem, propagation modes in waveguides and resonant modes in cavities can be calculated for an arbitrarily shaped structure with inhomogeneous material. Several microwave structures are solved in order to verify the program. One drawback associated with the vector functional is the appearance of spurious, or non-physical, solutions. A penalty function method has been introduced to reduce the spurious solutions. The adaptive charge method is originally proposed in this thesis to solve the waveguide scattering problem. This method, similar to the VSWR measuring technique, is more efficient for obtaining the reflection coefficient than the matrix method. Two waveguide discontinuity structures are calculated by the two methods and their results are compared. The adaptive charge method is also applied to a microwave plasma excitor. It allows us to understand the role of the different physical parameters of the excitor in the coupling of microwave energy to the plasma mode and the mode without plasma. (author) [fr]

  3. Modeling of Methods to Control Heat-Consumption Efficiency

    Science.gov (United States)

    Tsynaeva, E. A.; Tsynaeva, A. A.

    2016-11-01

    In this work, consideration has been given to thermophysical processes in automated heat consumption control systems (AHCCSs) of buildings, flow diagrams of these systems, and mathematical models describing the thermophysical processes during the systems' operation; an analysis of adequacy of the mathematical models has been presented. A comparison has been made of the operating efficiency of the systems and the methods to control the efficiency. It has been determined that the operating efficiency of an AHCCS depends on its diagram and the temperature chart of central quality control (CQC) and also on the temperature of a low-grade heat source for the system with a heat pump.

  4. A Method for Modeling of Floating Vertical Axis Wind Turbine

    DEFF Research Database (Denmark)

    Wang, Kai; Hansen, Martin Otto Laver; Moan, Torgeir

    2013-01-01

    It is of interest to investigate the potential advantages of the floating vertical axis wind turbine (FVAWT) due to its economical installation and maintenance. A novel 5 MW vertical axis wind turbine concept with a Darrieus rotor mounted on a semi-submersible support structure is proposed in this paper. In order to assess the technical and economic feasibility of this novel concept, a comprehensive simulation tool for modeling the floating vertical axis wind turbine is needed. This work presents the development of a coupled method for modeling the dynamics of a floating vertical axis wind turbine…

  5. (Environmental and geophysical modeling, fracture mechanics, and boundary element methods)

    Energy Technology Data Exchange (ETDEWEB)

    Gray, L.J.

    1990-11-09

    Technical discussions at the various sites visited centered on application of boundary integral methods for environmental modeling, seismic analysis, and computational fracture mechanics in composite and "smart" materials. The traveler also attended the International Association for Boundary Element Methods Conference at Rome, Italy. While many aspects of boundary element theory and applications were discussed in the papers, the dominant topic was the analysis and application of hypersingular equations. This has been the focus of recent work by the author, and thus the conference was highly relevant to research at ORNL.

  6. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    … have primarily been based on a Bayesian paradigm, i.e., prior information on the parameters is a prerequisite, but this raises questions about undesirable side effects from the priors. We present a method, based on MCMC, that approximates profile log-likelihood functions in directed graphical models … whether a tendency to foetal loss is heritable. The data possess a complicated dependence structure due to replicate pregnancies for the same woman and a given family pattern. We conclude that a tendency to foetal loss is heritable. The model is of great interest in genetic epidemiology, because it considers both …
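
    Since the record leans on MCMC as its computational engine, here is a minimal, hedged sketch of the basic building block, a random-walk Metropolis sampler for a toy one-dimensional target; the thesis's graphical-model and profile-likelihood machinery is not reproduced.

```python
# Minimal random-walk Metropolis sketch (generic, not the thesis's method):
# draws samples from an unnormalized log-density, the building block behind
# MCMC-based likelihood approximations. The target here is a toy N(0, 1).
import numpy as np

def log_target(x):
    return -0.5 * x * x          # log of an unnormalized N(0, 1) density

def metropolis(n_samples, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal()
        # accept with probability min(1, target(proposal) / target(x))
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

draws = metropolis(10_000)
print("mean ~ 0:", draws.mean(), "var ~ 1:", draws.var())
```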

  7. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex, high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of these data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statisticians…

  8. Forty-Seven DJs, Four Women: Meritocracy, Talent, and Postfeminist Politics

    Directory of Open Access Journals (Sweden)

    Tami Gadir

    2017-11-01

    Full Text Available In 2016, only four of the forty-seven DJs booked for Musikkfest, a festival in Oslo, Norway, were women. Following this, a local DJ published an objection to the imbalance in a local arts and entertainment magazine. Her editorial provoked booking agents to defend their position on the grounds that they prioritise skill and talent when booking DJs, and, by implication, that they do not prioritise equality. The booking agents' responses, on social media and in interviews I conducted, highlight their perpetuation of a status quo in dance music cultures in which men disproportionately dominate the role of DJing. Labour laws do not align with this cultural attitude: gender equality legislation in Norway's recent history contrasts with the postfeminist attitudes expressed by dance music's cultural intermediaries such as DJs and booking agents. The Musikkfest case ultimately shows that gender politics in dance music cultures do not necessarily correspond to dance music's historical associations with egalitarianism.

  9. Forty years abuse of baking soda, rhabdomyolysis, glomerulonephritis, hypertension leading to renal failure: a case report.

    Science.gov (United States)

    Forslund, Terje; Koistinen, Arvo; Anttinen, Jorma; Wagner, Bodo; Miettinen, Marja

    2008-01-01

    We present a patient who had ingested sodium bicarbonate at increasing doses for forty years as treatment for alcoholic dyspepsia. During the last year he had used more than 50 grams daily. He presented with metabolic alkalosis, epileptic convulsions, subdural hematoma, hypertension, and rhabdomyolysis with end-stage renal failure, for which he had to be given regular intermittent hemodialysis treatment. Untreated hypertension and glomerulonephritis were probably present prior to these acute incidents. Examination of the kidney biopsy revealed mesangial proliferative glomerulonephritis and arterial wall thickening causing nephrosclerosis, together with interstitial calcinosis. The combination of these pathologic changes might be responsible for the development of progressive chronic renal failure, ending in the need for chronic intermittent hemodialysis treatment.

  10. RF tunable devices and subsystems methods of modeling, analysis, and applications methods of modeling, analysis, and applications

    CERN Document Server

    Gu, Qizheng

    2015-01-01

    This book serves as a hands-on guide to RF tunable devices, circuits and subsystems. An innovative method of modeling tunable devices and networks is described, along with a new tuning algorithm, an adaptive matching network control approach, and a novel filter frequency automatic control loop. The author provides readers with the necessary background and methods for designing and developing tunable RF networks/circuits and tunable RF front-ends, with an emphasis on applications to cellular communications. The book: discusses the methods of characterizing, modeling, analyzing, and applying RF tunable devices and subsystems; explains the necessary methods of utilizing RF tunable devices and subsystems, rather than discussing the RF tunable devices themselves; presents and applies methods for MEMS tunable capacitors, which can be used for any RF tunable device; uses analytic methods wherever possible and provides numerous closed-form solutions; includ…

  11. Alternative wind power modeling methods using chronological and load duration curve production cost models

    Energy Technology Data Exchange (ETDEWEB)

    Milligan, M R

    1996-04-01

    As an intermittent resource, capturing the temporal variation of wind power is an important issue in the context of utility production cost modeling. Many production cost models use a method that creates a cumulative probability distribution outside the time domain. The purpose of this report is to examine two production cost models that represent the two major model types: chronological and load duration curve models. This report is part of the ongoing research undertaken by the Wind Technology Division of the National Renewable Energy Laboratory in utility modeling and wind system integration.
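
    The distinction between the two model types can be made concrete with a small sketch: a chronological treatment subtracts wind from load hour by hour, while a load-duration-curve treatment works on the sorted load and loses the wind-load timing correlation. All numbers below are synthetic illustrations, not NREL data.

```python
# Sketch contrasting chronological and load-duration-curve (LDC) treatments
# of wind on synthetic hourly data. A chronological model keeps load and
# wind in the time domain, so their correlation matters; sorting the load
# into an LDC discards timing.
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(8760)
load = 800 + 200 * np.sin(2 * np.pi * hours / 24) + 30 * rng.standard_normal(8760)
wind = np.clip(
    150 + 100 * np.sin(2 * np.pi * hours / 24 + np.pi) + 60 * rng.standard_normal(8760),
    0, None)                                      # wind anticorrelated with load

net_load_chrono = np.clip(load - wind, 0, None)   # chronological: hour by hour

ldc = np.sort(load)[::-1]                         # load duration curve (sorted load)
net_ldc = np.clip(ldc - wind.mean(), 0, None)     # LDC: same mean wind, timing lost

print("thermal energy, chronological:", net_load_chrono.sum())
print("thermal energy, LDC approx.  :", net_ldc.sum())
```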

  12. Ecoimmunity in Darwin's finches: invasive parasites trigger acquired immunity in the medium ground finch (Geospiza fortis).

    Directory of Open Access Journals (Sweden)

    Sarah K Huber

    Full Text Available BACKGROUND: Invasive parasites are a major threat to island populations of animals. Darwin's finches of the Galápagos Islands are under attack by introduced pox virus (Poxvirus avium) and nest flies (Philornis downsi). We developed assays for parasite-specific antibody responses in Darwin's finches (Geospiza fortis) to test for relationships between adaptive immune responses to novel parasites and spatial-temporal variation in the occurrence of parasite pressure among G. fortis populations. METHODOLOGY/PRINCIPAL FINDINGS: We developed enzyme-linked immunosorbent assays (ELISAs) for the presence of antibodies in the serum of Darwin's finches specific to pox virus or Philornis proteins. We compared antibody levels between bird populations with and without evidence of pox infection (visible lesions), and among birds sampled before nesting (prior to nest-fly exposure) versus during nesting (with fly exposure). Birds from the pox-positive population had higher levels of pox-binding antibodies. Philornis-binding antibody levels were higher in birds sampled during nesting. Female birds, which occupy the nest, had higher Philornis-binding antibody levels than males. The study was limited by an inability to confirm pox exposure independent of obvious lesions. However, the lasting effects of pox infection (e.g., scarring and lost digits) were expected to be reliable indicators of prior pox infection. CONCLUSIONS/SIGNIFICANCE: This is the first demonstration, to our knowledge, of parasite-specific antibody responses to multiple classes of parasites in a wild population of birds. Darwin's finches initiated acquired immune responses to novel parasites. Our study has vital implications for invasion biology and ecological immunology. The adaptive immune response of Darwin's finches may help combat the negative effects of parasitism. Alternatively, the physiological cost of mounting such a response could outweigh any benefits, accelerating population decline. Tests…

  13. Procedures and Methods of Digital Modeling in Representation Didactics

    Science.gov (United States)

    La Mantia, M.

    2011-09-01

    At the Bachelor degree course in Engineering/Architecture of the University "La Sapienza" of Rome, the courses of Design and Survey, in addition to covering methods of representation and the application of descriptive geometry and survey so as to expand the student's vision and spatial conception, pay particular attention to the use of information technology for the preparation of design and survey drawings, achieving their goals through an educational path of "learning techniques, procedures and methods of modeling architectural structures." The fields of application involved two different educational areas, analysis and survey, both ranging from the acquisition of the metric data (design or survey) to the development of the three-dimensional virtual model.

  14. Optimization Method of Fusing Model Tree into Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Yu Fang

    2017-01-01

    Full Text Available Partial Least Squares (PLS) cannot adapt to the characteristics of data in many fields that feature multiple independent variables, multiple dependent variables, and non-linearity. Model Tree (MT), however, adapts well to nonlinear functions, being composed of many multiple-linear segments. Based on this, a new method combining PLS and MT to analyze and predict data is proposed: it builds an MT from the principal components and explanatory variables extracted by PLS, and repeatedly extracts residual information to build further model trees until a satisfactory accuracy condition is met. Using data on the monarch drug of the maxingshigan decoction used to treat asthma and cough, and two sample sets from the UCI Machine Learning Repository, the experimental results show that the explanatory and predictive ability of the new method is improved.
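
    A hedged sketch of the combination the abstract describes follows: fit PLS first, then fit a regression tree to the PLS residuals and add the two predictions. Component counts, tree depth and the stopping rule are assumptions for illustration, not the paper's exact algorithm.

```python
# Hedged sketch: PLS captures the linear part, a regression tree fit to the
# PLS residuals supplies a piecewise correction; data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 10))
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2]) + 0.1 * rng.standard_normal(300)

pls = PLSRegression(n_components=3).fit(X, y)
resid = y - pls.predict(X).ravel()           # what the linear PLS part misses

tree = DecisionTreeRegressor(max_depth=4).fit(X, resid)  # residual correction
y_hat = pls.predict(X).ravel() + tree.predict(X)

print("PLS-only R2 :", 1 - np.var(resid) / np.var(y))
print("PLS+tree R2 :", 1 - np.var(y - y_hat) / np.var(y))
```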

  15. A Method of Upgrading a Hydrostatic Model to a Nonhydrostatic Model

    Directory of Open Access Journals (Sweden)

    Chi-Sann Liou

    2009-01-01

    Full Text Available As the sigma-p coordinate under the hydrostatic approximation can be interpreted as the mass coordinate without the hydrostatic approximation, we propose a method that upgrades a hydrostatic model to a nonhydrostatic model with relatively little effort. The method adds to the primitive equations the extra terms omitted by the hydrostatic approximation, plus two prognostic equations for the vertical speed w and the nonhydrostatic part of pressure p'. With properly formulated governing equations, at each time step the dynamic part of the model is first integrated as in the original hydrostatic model, and nonhydrostatic contributions are then added as corrections to the hydrostatic solutions. In applying physical parameterizations after the dynamic part integration, all physics packages of the original hydrostatic model can be used directly in the nonhydrostatic model, since the upgraded nonhydrostatic model shares the same vertical coordinates with the original hydrostatic model. In this way, the majority of the nonhydrostatic model's code comes from the original hydrostatic model; extra code is needed only for the calculations additional to the primitive equations. In order to handle sound waves, we use smaller time steps in the nonhydrostatic part of the dynamic time integration, with a split-explicit scheme for horizontal momentum and temperature and a semi-implicit scheme for w and p'. Simulations of 2-dimensional mountain waves and density flows associated with a cold bubble have been used to test the method. The idealized case tests demonstrate that the proposed method realistically simulates the nonhydrostatic effects on different atmospheric circulations that are revealed in theoretical solutions and simulations from other nonhydrostatic models. This method can be used to upgrade any global or mesoscale model from hydrostatic to nonhydrostatic.

  16. Multigrid Methods for A Mixed Finite Element Method of The Darcy-Forchheimer Model.

    Science.gov (United States)

    Huang, Jian; Chen, Long; Rui, Hongxing

    2018-01-01

    An efficient nonlinear multigrid method for a mixed finite element method of the Darcy-Forchheimer model is constructed in this paper. A Peaceman-Rachford-type iteration is used as a smoother to decouple the nonlinearity from the divergence constraint. The nonlinear equation can be solved element-wise with a closed formula. The linear saddle point system for the constraint is reduced to a symmetric positive definite system of Poisson type. Furthermore, an empirical choice of the parameter used in the splitting is proposed, and the resulting multigrid method is robust to the so-called Forchheimer number, which controls the strength of the nonlinearity. By comparing the number of iterations and CPU time of different solvers in several numerical experiments, our multigrid method is shown to converge at a rate independent of the mesh size and the Forchheimer number and with a nearly linear computational cost.

  17. Characterization of Forty Seven Years of Particulate Chemical Composition in the Finnish Arctic

    Science.gov (United States)

    Laing, James

    Forty-seven years of weekly total suspended particle filters, collected at Kevo, Finland from October 1964 through 2010 by the Finnish Meteorological Institute, were analyzed for near-total trace elements, soluble trace elements, black carbon (BC), major ions, and methane sulfonic acid (MSA). Kevo is located in northern Finland, 350 km north of the Arctic Circle. The samples from 1964-1978 were collected on Whatman 42 cellulose filters and the samples from 1979-2010 on Whatman GF/A glass-fiber filters. A portion of each filter was microwave acid-digested (ad) and analyzed for near-total trace elements by inductively coupled plasma mass spectrometry (ICP-MS). Another portion was water-extracted (we) and analyzed for soluble trace elements by ICP-MS and for ionic species by ion chromatography (IC). BC was determined using optical and thermal-optical techniques at SUNY Albany. A clear seasonal trend, with winter/spring maxima and summer minima, is observed for most species, attributed to enhanced transport of pollutants from anthropogenic mid-latitude sources to the Arctic in the winter and early spring. Compared to more remote Arctic sampling sites, species of anthropogenic origin (V, Co, Cu, Ni, As, Cd, Pb, SO4) have significantly higher concentrations and a less pronounced seasonality. High concentrations of Cu (14.1 ng/m3), Ni (0.97 ng/m3), and Co (0.04 ng/m3) indicate the influence of the non-ferrous metal smelters on the Kola Peninsula, although Cu unexpectedly did not correlate with Ni or Co, which were highly correlated with each other. Significant long-term decreasing trends were detected for most species: all constituents except Sn-ad, Re-ad, Sn-we, Mo-we, and V-we have significant (p < 0.05) decreasing trends. Non-sea-salt SO4 concentrations were found to follow a trend very similar to European and Former Soviet Union SO2 emissions; SO4 concentrations declined dramatically in the early 1990s as a result of the collapse of the Soviet Union. Potential source contribution…

  18. HyPEP FY06 Report: Models and Methods

    Energy Technology Data Exchange (ETDEWEB)

    DOE report

    2006-09-01

    The Department of Energy envisions the next-generation very high-temperature gas-cooled reactor (VHTR) as a single-purpose or dual-purpose facility that produces hydrogen and electricity. The Ministry of Science and Technology (MOST) of the Republic of Korea also selected the VHTR for the Nuclear Hydrogen Development and Demonstration (NHDD) Project. This research project aims at developing a user-friendly program for evaluating and optimizing cycle efficiencies of producing hydrogen and electricity in a Very-High-Temperature Reactor (VHTR). Systems for producing electricity and hydrogen are complex, and the calculations associated with optimizing these systems are intensive, involving a large number of operating parameter variations and many different system configurations. This research project will produce the HyPEP computer model, which is specifically designed to be an easy-to-use and fast-running tool for evaluating nuclear hydrogen and electricity production facilities. The model accommodates flexible system layouts, and its cost models will make HyPEP well-suited for system optimization. Specific activities of this research are designed to develop the HyPEP model into a working tool, including (a) identifying major systems and components for modeling, (b) establishing system operating parameters and calculation scope, (c) establishing the overall calculation scheme, (d) developing component models, (e) developing cost and optimization models, and (f) verifying and validating the program. Once the HyPEP model is fully developed and validated, it will be used to execute calculations on candidate system configurations. This FY-06 report includes a description of reference designs, the methods used in this study, and the models and computational strategies developed for the first-year effort. Results from computer codes such as HYSYS and GASS/PASS-H, used by Idaho National Laboratory and Argonne National Laboratory respectively, will be benchmarked against HyPEP results in the…

  19. Chebyshev super spectral viscosity method for a fluidized bed model

    International Nuclear Information System (INIS)

    Sarra, Scott A.

    2003-01-01

    A Chebyshev super spectral viscosity method and operator splitting are used to solve a hyperbolic system of conservation laws with a source term modeling a fluidized bed. The fluidized bed displays a slugging behavior which corresponds to shocks in the solution. A modified Gegenbauer postprocessing procedure is used to obtain a solution which is free of oscillations caused by the Gibbs-Wilbraham phenomenon in the spectral viscosity solution. Conservation is maintained by working with unphysical negative particle concentrations
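
    The core trick in spectral viscosity methods, damping high-order coefficients to tame Gibbs-Wilbraham oscillations, can be sketched in a few lines. The filter below is a generic exponential filter applied to a Chebyshev expansion of a discontinuous function; it illustrates the filtering step only, not the full solver or the Gegenbauer postprocessing.

```python
# Sketch: exponential filtering of Chebyshev coefficients, the mechanism
# behind (super) spectral viscosity. The test function and filter order are
# illustrative choices, not the paper's configuration.
import numpy as np
from numpy.polynomial import chebyshev as C

N = 64
x = np.cos(np.pi * np.arange(N + 1) / N)   # Chebyshev-Gauss-Lobatto points
f = np.sign(x)                             # discontinuous test function

coeffs = C.chebfit(x, f, N)                # Chebyshev expansion coefficients
k = np.arange(N + 1)
sigma = np.exp(-35.0 * (k / N) ** 8)       # 8th-order exponential filter

xf = np.linspace(-1.0, 1.0, 1001)          # fine grid to expose Gibbs overshoot
print("max |raw|     :", np.abs(C.chebval(xf, coeffs)).max())
print("max |filtered|:", np.abs(C.chebval(xf, coeffs * sigma)).max())
```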

  20. A Model Based Security Testing Method for Protocol Implementation

    Directory of Open Access Journals (Sweden)

    Yu Long Fu

    2014-01-01

    Full Text Available The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specification, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of a security protocol implementation, and then combine them to generate suitable test cases for verifying the security of the protocol implementation.

  2. Methods for landslide susceptibility modelling in Lower Austria

    Science.gov (United States)

    Bell, Rainer; Petschko, Helene; Glade, Thomas; Leopold, Philip; Heiss, Gerhard; Proske, Herwig; Granica, Klaus; Schweigl, Joachim; Pomaroli, Gilbert

    2010-05-01

    Landslide susceptibility modelling and implementation of the resulting maps is still a challenge for geoscientists and spatial and infrastructure planners. Particularly on a regional scale, landslide processes and their dynamics are poorly understood. Furthermore, the availability of appropriate spatial data in high resolution is often a limiting factor in modelling high-quality landslide susceptibility maps for large study areas. However, these maps form an important basis for preventive spatial planning measures. Thus, new methods have to be developed, especially focussing on the implementation of the final maps in spatial planning processes. The main objective of the project "MoNOE" (Method development for landslide susceptibility modelling in Lower Austria) is to design a method for landslide susceptibility modelling for a large study area (about 10,200 km²) and to produce landslide susceptibility maps which are finally implemented in the spatial planning strategies of the federal state of Lower Austria. The project focuses primarily on the landslide types fall and slide. To enable susceptibility modelling, landslide inventories for the respective landslide types must be compiled, and relevant data have to be gathered, prepared and homogenized. Based on these data, new methods must be developed to meet the needs of the spatial planning strategies. Considerable effort will also be spent on the validation of the resulting maps for each landslide type. A great challenge will be the combination of the susceptibility maps for slides and falls into one single susceptibility map (which is requested by the government) and the definition of the final visualisation. Since numerous landslides have been favoured or even triggered by human impact, the human influence on landslides will also have to be investigated, and possibilities to integrate the respective findings in regional susceptibility modelling will be explored. According to these objectives the project is…

  3. Semi-Lagrangian methods in air pollution models

    Directory of Open Access Journals (Sweden)

    A. B. Hansen

    2011-06-01

    Full Text Available Various semi-Lagrangian methods are tested with respect to advection in air pollution modeling. The aim is to find a method fulfilling as many as possible of the desirable properties listed by Rasch and Williamson (1990) and Machenhauer et al. (2008). The focus in this study is on accuracy and local mass conservation.

    The methods tested are, first, classical semi-Lagrangian cubic interpolation (see e.g. Durran, 1999); second, semi-Lagrangian cubic cascade interpolation (Nair et al., 2002); third, semi-Lagrangian cubic interpolation with modified interpolation weights, the Locally Mass Conserving Semi-Lagrangian method (LMCSL; Kaas, 2008); and last, semi-Lagrangian cubic interpolation with a locally mass conserving monotonic filter (Kaas and Nielsen, 2010).

    Semi-Lagrangian (SL) interpolation is a classical method for atmospheric modeling; cascade interpolation is computationally more efficient; the modified interpolation weights ensure mass conservation; and the locally mass conserving monotonic filter imposes monotonicity.

    All schemes are tested with advection alone or with advection and chemistry together, under both typical rural and urban conditions, using different temporal and spatial resolutions. The methods are compared with a current state-of-the-art scheme, Accurate Space Derivatives (ASD; see Frohn et al., 2002), presently used at the National Environmental Research Institute (NERI) in Denmark. To enable a consistent comparison, only non-divergent flow configurations are tested.

    The test cases are based either on the traditional slotted cylinder or on the rotating cone, challenging the schemes' ability to model both steep gradients and smooth slopes.

    The tests showed that the locally mass conserving monotonic filter improved the results significantly for some of the test cases, though not for all. It was found that the semi-Lagrangian schemes, in almost every case, were not able to outperform the current ASD scheme…
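
    The first of the tested schemes, classical semi-Lagrangian cubic interpolation, can be sketched compactly for a 1D periodic domain: trace each grid point back along the wind and interpolate the old field at the departure point. Grid, wind and time step below are illustrative choices, not the paper's setup.

```python
# Sketch of classical semi-Lagrangian cubic-interpolation advection on a
# 1D periodic domain with a constant wind.
import numpy as np
from scipy.interpolate import CubicSpline

nx = 200
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = 0.3                                   # constant advecting wind
dt = 0.01
c = np.exp(-200 * (x - 0.5) ** 2)         # initial tracer blob

for _ in range(100):
    # periodic cubic spline of the current field
    spline = CubicSpline(np.append(x, 1.0), np.append(c, c[0]),
                         bc_type="periodic")
    departure = (x - u * dt) % 1.0        # departure points of the trajectories
    c = spline(departure)

print("peak after advection:", c.max())   # ~1 if the scheme is accurate
```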

  4. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology.

    Science.gov (United States)

    Krishnamoorthi, Shankarjee; Perotti, Luigi E; Borgstrom, Nils P; Ajijola, Olujimi A; Frid, Anna; Ponnaluri, Aditya V; Weiss, James N; Qu, Zhilin; Klug, William S; Ennis, Daniel B; Garfinkel, Alan

    2014-01-01

    We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations.

  6. Sparse aerosol models beyond the quadrature method of moments

    Science.gov (United States)

    McGraw, Robert

    2013-05-01

    This study examines a class of sparse aerosol models (SAMs) derived from linear programming (LP). The widely used quadrature method of moments (QMOM) is shown to fall into this class. Here it is shown how other sparse aerosol models can be constructed which are not based on moments of the particle size distribution. The new methods enable one to bound atmospheric aerosol physical and optical properties using arbitrary combinations of model parameters and measurements. Rigorous upper and lower bounds, e.g. on the number of aerosol particles that can activate to form cloud droplets, can be obtained this way from measurement constraints that may include total particle number concentration and size distribution moments. The new LP-based methods allow a much wider range of aerosol properties, such as light backscatter or extinction coefficient, which are not easily connected to particle size moments, to also be assimilated into a list of constraints. Finally, it is shown that many of these more general aerosol properties can be tracked directly in an aerosol dynamics simulation, using SAMs, in much the same way that moments are tracked directly in the QMOM.
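
    The LP-bounding idea is easy to sketch: given a few "measured" moment constraints on a discretized size distribution, linear programs give rigorous upper and lower bounds on any property that is linear in the distribution. The size grid, property vector and constraint values below are invented for illustration.

```python
# Sketch: bound a linear aerosol property above and below by LP, subject to
# moment constraints on a nonnegative discretized size distribution.
import numpy as np
from scipy.optimize import linprog

r = np.linspace(0.01, 1.0, 60)            # discrete particle radii (um)
moments = np.vstack([r**0, r**1, r**2])   # number, mean-radius, surface-like moments
b = np.array([1.0, 0.3, 0.15])            # "measured" moment values (made up)

activation = (r > 0.05).astype(float)     # toy property: fraction able to activate

lo = linprog(c=activation,  A_eq=moments, b_eq=b, bounds=(0, None))
hi = linprog(c=-activation, A_eq=moments, b_eq=b, bounds=(0, None))
print("activated-number bounds:", lo.fun, -hi.fun)
```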

  7. The Quadrotor Dynamic Modeling and Indoor Target Tracking Control Method

    Directory of Open Access Journals (Sweden)

    Dewei Zhang

    2014-01-01

    Full Text Available A reliable nonlinear dynamic model of the quadrotor is presented. The nonlinear dynamic model includes actuator dynamics and aerodynamic effects. Since the rotors run near a constant hovering speed, the dynamic model is simplified at the hovering operating point. Based on the simplified nonlinear dynamic model, PID controllers with feedback linearization and feedforward control are proposed using the backstepping method. These controllers are used to control both the attitude and the position of the quadrotor. A fully custom quadrotor is developed to verify the correctness of the dynamic model and control algorithms. The attitude of the quadrotor is measured by an inertial measurement unit (IMU). The position of the quadrotor in a GPS-denied environment, especially an indoor environment, is estimated from downward camera and ultrasonic sensor measurements. The validity and effectiveness of the proposed dynamic model and control algorithms are demonstrated by experimental results. It is shown that the vehicle achieves robust vision-based hovering and moving-target tracking control.
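
    As a hedged, single-axis illustration of the PID attitude control described (not the paper's identified quadrotor model), the sketch below closes a PID loop around toy rigid-body roll dynamics.

```python
# Single-axis (roll) PID attitude loop around toy rigid-body dynamics with
# inertia J; gains are hand-tuned for this toy model only.
import numpy as np

J, dt = 0.02, 0.002                 # inertia (kg m^2), time step (s)
kp, ki, kd = 4.0, 2.0, 0.6          # PID gains (illustrative)

phi, rate, integ = 0.0, 0.0, 0.0    # roll angle, roll rate, error integral
setpoint = 0.3                      # desired roll angle (rad)

for step in range(5000):
    err = setpoint - phi
    integ += err * dt
    torque = kp * err + ki * integ - kd * rate   # rate feedback as derivative term
    rate += (torque / J) * dt                    # integrate rigid-body dynamics
    phi += rate * dt

print("final roll angle:", phi)     # settles near the 0.3 rad setpoint
```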

  8. Thermal Modeling Method Improvements for SAGE III on ISS

    Science.gov (United States)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; McLeod, Shawn

    2015-01-01

    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle. A detailed thermal model of the SAGE III payload, which consists of multiple subsystems, has been developed in Thermal Desktop (TD). Many innovative analysis methods have been used in developing this model; these are described in the paper, which builds on a paper presented at TFAWS 2013 describing some of the initial development of efficient methods for SAGE III, and covers additional improvements made since that time. To expedite the correlation of the model to thermal vacuum (TVAC) testing, the chambers and ground support equipment for both TVAC chambers at Langley used to test the payload were incorporated within the thermal model. This allowed TVAC predictions and correlations to be run within the flight model, eliminating the need for separate TVAC models. In one TVAC test, radiant lamps were used, which necessitated shooting rays from the lamps and running in both solar and IR wavebands. A new Dragon model was incorporated, which entailed a change in orientation; that change was made using an assembly, so that any potential new Dragon orbits could be added in the future without modification of the model. The Earth orbit parameters, such as albedo and Earth infrared flux, were incorporated as time-varying values that change over the course of the orbit; despite being required in one of the ISS documents, this had not been done before by any previous payload. All parameters, such as initial temperature, heater voltage, and location of the payload, are defined by the case definition. For one component, testing was performed in both air and vacuum; incorporating the air convection in a submodel that was…

  9. Modelling of packet traffic with matrix analytic methods

    DEFF Research Database (Denmark)

    Andersen, Allan T.

    1995-01-01

    … did not reveal any adverse behaviour; in fact the observed traffic seemed very close to what would be expected from Poisson traffic. The Changeover/Changeback procedure in SS7, which is used to redirect traffic in case of link failure, has been analyzed. The transient behaviour during a Changeover scenario was modelled using Markovian models, and the Ordinary Differential Equations arising from these models were solved numerically. The results obtained seemed very similar to those obtained using a different method in previous work by Akinpelu & Skoog (1985). Recent measurement studies of packet traffic … is found by noting the close relationship with the expressions for the corresponding infinite queue. For the special case of a batch Poisson arrival process, this observation makes it possible to express the queue length at an arbitrary instant in terms of the corresponding queue lengths for the infinite case.

  10. Methods to model-check parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O. S.; McCune, W.; Lusk, E.

    2003-01-01

    We report on an effort to develop methodologies for formal verification of parts of the Multi-Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of communicating processes. While the individual components of the collection execute simple algorithms, their interaction leads to unexpected errors that are difficult to uncover by conventional means. Two verification approaches are discussed here: the standard model checking approach using the software model checker SPIN and the nonstandard use of a general-purpose first-order resolution-style theorem prover OTTER to conduct the traditional state space exploration. We compare modeling methodology and analyze performance and scalability of the two methods with respect to verification of MPD

  11. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  12. Quantum Monte Carlo method for models of molecular nanodevices

    Science.gov (United States)

    Arrachea, Liliana; Rozenberg, Marcelo J.

    2005-07-01

    We introduce a quantum Monte Carlo technique to calculate exactly at finite temperatures the Green function of a fermionic quantum impurity coupled to a bosonic field. While the algorithm is general, we focus on the single impurity Anderson model coupled to a Holstein phonon as a schematic model for a molecular transistor. We compute the density of states at the impurity in a large range of parameters, to demonstrate the accuracy and efficiency of the method. We also obtain the conductance of the impurity model and analyze different regimes. The results show that even in the case when the effective attractive phonon interaction is larger than the Coulomb repulsion, a Kondo-like conductance behavior might be observed.

  13. Image to Point Cloud Method of 3D-MODELING

    Science.gov (United States)

    Chibunichev, A. G.; Galakhov, V. P.

    2012-07-01

    This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of the digital image. To do this, corresponding points between the image and the point cloud must be found. Before the search for corresponding points, a quasi-image of the point cloud is generated; the SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is the construction of the vector object model. Vectorization is performed by a PC operator in an interactive mode using a single image; spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available: edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
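
    The correspondence step lends itself to a short sketch with OpenCV: detect SIFT keypoints on the quasi-image and the photograph, then keep ratio-test matches. File names are placeholders, and the subsequent pose solve from the matches is not shown.

```python
# Sketch: SIFT correspondences between a rendered "quasi-image" of the point
# cloud and a photograph. The exterior-orientation solve (e.g. PnP) would
# follow from these matches.
import cv2

quasi = cv2.imread("quasi_image.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
photo = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)        # placeholder file

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(quasi, None)
kp2, des2 = sift.detectAndCompute(photo, None)

# Ratio-test matching (Lowe's criterion) keeps only distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences")
```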

  14. A novel duplicate images detection method based on PLSA model

    Science.gov (United States)

    Liao, Xiaofeng; Wang, Yongji; Ding, Liping; Gu, Jian

    2012-01-01

    Web image search results usually contain duplicate copies. This paper considers the problem of detecting and clustering duplicate images contained in web image search results; clustering the duplicates together facilitates users' viewing. A novel method is presented to detect and cluster duplicate images by measuring the similarity between their topics. More specifically, images are viewed as documents consisting of visual words formed by vector-quantizing affine-invariant visual features. A statistical model widely used in the text domain, the PLSA (Probabilistic Latent Semantic Analysis) model, is then utilized to map images into a probabilistic latent semantic space. Because the main content remains unchanged despite small digital alterations, duplicate images will be close to each other in the derived semantic space, so a simple clustering process can successfully detect duplicate images and cluster them together. Compared to methods based on comparing hash values of visual words, this method is more robust to alterations at the visual feature level. Experiments demonstrate the effectiveness of the method.

  15. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities for analysing and estimating unemployment and its spatial distribution across any region. The survey chooses, according to a pre-established sampling criterion, a certain number of dwellings across the nation and surveys the number of unemployed in those dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sample sizes in small areas, tend to produce fairly large sampling variation; therefore model-based methods, which tend to…

  16. Reflexion on linear regression trip production modelling method for ensuring good model quality

    Science.gov (United States)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model still has to be used, in which having a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are having a sample capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood and applied in trip production modelling. Therefore, it is necessary to investigate the trip production modelling practice in Indonesia and to formulate a better modelling method for ensuring model quality. Statistics provides a method to calculate the span of predicted values at a certain confidence level for linear regression, called the confidence interval of the predicted value. The common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to the sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value and that the sample composition can significantly change the model. Hence, a good R2 value does not, in fact, always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. A quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
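
    The proposed quality measure, a good R² together with a tight confidence interval of the predicted value, can be computed directly with statsmodels, as in the hedged sketch below on synthetic zone data.

```python
# Sketch: fit a linear trip production model and report both R^2 and the
# confidence interval of a predicted value; data are synthetic stand-ins.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
households = rng.uniform(50, 500, 40)                    # zone household counts
trips = 1.8 * households + 30 * rng.standard_normal(40)  # observed trip productions

X = sm.add_constant(households)
model = sm.OLS(trips, X).fit()
print("R2:", model.rsquared)

# 95% confidence interval of the predicted mean at a new zone size
new_zone = sm.add_constant(np.array([300.0]), has_constant="add")
pred = model.get_prediction(new_zone)
print(pred.conf_int(alpha=0.05))        # narrow interval => reliable prediction
```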

  17. Multicomponent gas mixture air bearing modeling via lattice Boltzmann method

    Science.gov (United States)

    Tae Kim, Woo; Kim, Dehee; Hari Vemuri, Sesha; Kang, Soo-Choon; Seung Chung, Pil; Jhon, Myung S.

    2011-04-01

    As the demand for ultrahigh recording density increases, development of an integrated head-disk interface (HDI) modeling tool, which considers the air bearing and lubricant film morphology simultaneously, is of paramount importance. To overcome the shortcomings of existing models based on the modified Reynolds equation (MRE), the lattice Boltzmann method (LBM) is a natural choice for modeling high Knudsen number (Kn) flows owing to its advantages over conventional methods; its transient and parallel nature makes the LBM an attractive tool for next-generation air bearing design. Although the LBM has been successfully applied to single-component systems, multicomponent system analysis has been thwarted by the complexity of coupling the terms for each component. Previous studies have shown good results in modeling immiscible component mixtures through the use of an interparticle potential. In this paper, we extend our LBM model to predict the flow rate of high-Kn pressure-driven flows in multicomponent gas mixture air bearings, such as the air-helium system. For accurate modeling of slip conditions near the wall, we adopt our LBM scheme with spatially dependent relaxation times for air bearings in HDIs. To verify the accuracy of our code, we tested our scheme on simple two-dimensional benchmark flows. In the pressure-driven flow of an air-helium mixture, we found that a simple linear combination of pure-helium and pure-air flow rates, based on helium and air mole fractions, gives considerable error when compared to our LBM calculation. Hybridization with the existing MRE database can be adopted with the procedure reported here to develop state-of-the-art slider design software.

  18. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    Science.gov (United States)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model with acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate and model the final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multi-regression response surface equation (RSE) model. Data from a major airline's operations of a passenger transport aircraft type into Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over existing methods. Consequently, a neural network, which relies on a nonlinear regression technique, is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model's errors represents more than a 5% reduction compared to the RSE model errors, and at least a 10% reduction relative to the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
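
    The two modeling approaches named above can be contrasted on synthetic data: a quadratic response-surface regression versus a small neural network. Features and targets below are invented stand-ins for the airline operational variables.

```python
# Sketch: response-surface (quadratic polynomial) regression vs. a small
# neural network on synthetic landing-speed data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (600, 3))                  # e.g. weight, wind, flap setting
y = 130 + 25 * X[:, 0] - 10 * X[:, 1] * X[:, 2] + 2 * rng.standard_normal(600)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rse = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(Xtr, ytr)
nn = make_pipeline(StandardScaler(),
                   MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                                random_state=0)).fit(Xtr, ytr)

for name, m in [("RSE", rse), ("NN", nn)]:
    err = yte - m.predict(Xte)
    print(name, "error std dev:", err.std())
```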

  19. Setting at point on critical assembly of modelling methods for fast neutron power reactors

    International Nuclear Information System (INIS)

    Zhukov, A.V.; Kazanskij, Y.A.; Kochetkov, A.L.; Matveev, V.I.; Mironovich, Y.N.

    1986-01-01

    In this report the authors examine two modelling methods. In the first method, the model faithfully reproduces the flux distribution. In the second method, the reactor model consists of a central mixed-oxide fuel zone surrounded by uranium.

  20. IMPROVED NUMERICAL METHODS FOR MODELING RIVER-AQUIFER INTERACTION.

    Energy Technology Data Exchange (ETDEWEB)

    Tidwell, Vincent Carroll; Sue Tillery; Phillip King

    2008-09-01

    A new option for local time-stepping (LTS) was developed for use in conjunction with the multiple-refined-area grid capability of the U.S. Geological Survey's (USGS) groundwater modeling program, MODFLOW-LGR (MF-LGR). The LTS option allows each local, refined-area grid to simulate multiple stress periods within each stress period of a coarser, regional grid. This option is an alternative to the current MF-LGR method, whereby the refined grids are required to have the same stress period and time-step structure as the coarse grid. The MF-LGR method for simulating multiple refined grids essentially defines each grid as a complete model, then, for each coarse-grid time-step, iteratively runs each model until the head and flux changes at the interfacing boundaries of the models are less than specified tolerances. Use of the LTS option is illustrated in two hypothetical test cases, consisting of a dual-well pumping system and a hydraulically connected stream-aquifer system, and in one field application. Each of the hypothetical test cases was simulated with multiple scenarios, including an LTS scenario that combined a monthly stress period for the coarse-grid model with a daily stress period for the refined-grid model. The other scenarios simulated various combinations of grid spacing and temporal refinement using standard MODFLOW model constructs. The field application simulated an irrigated corridor along the Lower Rio Grande River in New Mexico, with refinement of a small agricultural area within the corridor. The results from the LTS scenarios for the hypothetical test cases closely replicated the results from the true scenarios in the refined areas of interest. The head errors of the LTS scenarios were much smaller than those of the other scenarios in relation to the true solution, and the run times for the LTS models were three to six times faster than those of the true models for the dual-well and stream-aquifer test cases, respectively. The results of the field…

  1. "Storm Alley" on Saturn and "Roaring Forties" on Earth: two bright phenomena of the same origin

    Science.gov (United States)

    Kochemasov, G. G.

    2009-04-01

    "Storm Alley" on Saturn and "Roaring Forties' on Earth: two bright phenomena of the same origin. G. Kochemasov IGEM of the Russian Academy of Sciences, Moscow, Russia, kochem.36@mail.ru Persisting swirling storms around 35 parallel of the southern latitude in the Saturnian atmosphere and famous "Roaring Forties" of the terrestrial hydro- and atmosphere are two bright phenomena that should be explained by the same physical law. The saturnian "Storm Alley" (as it is called by the Cassini scientists) is a stable feature observed also by "Voyager". The Earth's "Roaring Forties" are well known to navigators from very remote times. The wave planetology [1-3 & others] explains this similarity by a fact that both atmospheres belong to rotating globular planets. This means that the tropic and extra-tropic belts of these bodies have differing angular momenta. Belonging to one body these belts, naturally, tend to equilibrate their angular momenta mainly by redistribution of masses and densities [4]. But a perfect equilibration is impossible as long as a rotating body (Saturn or Earth or any other) keeps its globular shape due to mighty gravity. So, a contradiction of tropics and extra-tropics will be forever and the zone mainly between 30 to 50 degrees in both hemispheres always will be a zone of friction, turbulence and strong winds. Some echoes of these events will be felt farther poleward up to 70 degrees. On Earth the Roaring Forties (40˚-50˚) have a continuation in Furious Fifties (50˚-60˚) and Shrieking (Screaming) Sixties (below 60˚, close to Antarctica). Below are some examples of excited atmosphere of Saturn imaged by Cassini. PIA09734 - storms within 46˚ south; PIA09778 - monitoring the Maelstrom, 44˚ north; PIA09787 - northern storms, 59˚ north; PIA09796 - cloud details, 44˚ north; PIA10413 - storms of the high north, 70˚ north; PIA10411 - swirling storms, "Storm Alley", 35˚ south; PIA10457 - keep it rolling, "Storm Alley", 35˚ south; PIA10439 - dance

  2. Modeling of Unsteady Flow through the Canals by Semiexact Method

    Directory of Open Access Journals (Sweden)

    Farshad Ehsani

    2014-01-01

    Full Text Available The study of free-surface and pressurized water flows in channels has many interesting applications, one of the most important being the modeling of phenomena in natural water systems (rivers, estuaries) as well as in man-made systems (canals, pipes). For the development of major river engineering projects, such as flood prevention and flood control, there is an essential need for an instrument able to model and predict the consequences of any possible phenomenon on the environment and, in particular, the new hydraulic characteristics of the system. The basic equations expressing these hydraulic principles were formulated in the 19th century by Barré de Saint-Venant and Valentin Joseph Boussinesq. The original hydraulic model of the Saint-Venant equations is written as a system of two partial differential equations, derived under the assumptions that the flow is one-dimensional, the cross-sectional velocity is uniform, the streamline curvature is small, and the pressure distribution is hydrostatic. The Saint-Venant equations must be solved together with the continuity equation. To date, no analytical solution of the Saint-Venant equations has been presented. In this paper the Saint-Venant and continuity equations are solved with the homotopy perturbation method (HPM) and compared with an explicit forward finite difference method (FDM). To decrease the remaining error between HPM and FDM, the equations are also solved by the homotopy analysis method (HAM), which contains the auxiliary parameter ħ that allows us to adjust and control the convergence region of the solution series. The study highlights the efficiency and capability of HAM in solving the Saint-Venant equations and modeling unsteady flow through a rectangular canal, which is the goal of this paper, as well as other kinds of canals.
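
    As a hedged illustration of the explicit finite-difference baseline mentioned (not the HPM/HAM series solutions), the sketch below advances the 1D Saint-Venant equations in conservation form with a Lax-Friedrichs step for a frictionless, horizontal rectangular canal.

```python
# Sketch: Lax-Friedrichs step for the 1D Saint-Venant (shallow water)
# equations in conservation form; geometry and initial bump are illustrative.
import numpy as np

g, nx, L = 9.81, 200, 100.0
dx = L / nx
xg = np.linspace(0.0, L, nx)
h = np.where(np.abs(xg - L / 2) < 10, 1.2, 1.0)  # depth with a central bump
q = np.zeros(nx)                                 # discharge per unit width, q = h*u

def flux(h, q):
    u = q / h
    return q, q * u + 0.5 * g * h * h            # mass and momentum fluxes

t, t_end = 0.0, 2.0
while t < t_end:
    dt = 0.4 * dx / np.max(np.abs(q / h) + np.sqrt(g * h))  # CFL condition
    f_h, f_q = flux(h, q)
    # Lax-Friedrichs update on interior points
    h[1:-1] = 0.5 * (h[2:] + h[:-2]) - dt / (2 * dx) * (f_h[2:] - f_h[:-2])
    q[1:-1] = 0.5 * (q[2:] + q[:-2]) - dt / (2 * dx) * (f_q[2:] - f_q[:-2])
    h[0], h[-1], q[0], q[-1] = h[1], h[-2], q[1], q[-2]     # open boundaries
    t += dt

print("min/max depth after 2 s:", h.min(), h.max())
```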

  3. Modeling methods for merging computational and experimental aerodynamic pressure data

    Science.gov (United States)

    Haderlie, Jacob C.

    This research describes a process to model surface-pressure data sets as a function of wing geometry from computational and wind-tunnel sources and then merge them into a single predicted value. The described merging process will enable engineers to integrate these data sets with the goal of exploiting the advantages of each data source while overcoming the limitations of both, providing a single, combined data set to support analysis and design. The main challenge in this process is accurately representing each data source everywhere on the wing. Additionally, this effort demonstrates methods to model wind-tunnel pressure data as a function of angle of attack, as an initial step towards a merging process that uses both location on the wing and flow conditions (e.g., angle of attack, flow velocity or Reynolds number) as independent variables. This surrogate model of pressure as a function of angle of attack can be useful for engineers who need to predict the location of zero-order discontinuities, e.g., flow separation or normal shocks. Because, to the author's best knowledge, there is no published, well-established merging method for aerodynamic pressure data (here, the pressure coefficient Cp), this work identifies promising modeling and merging methods and then makes a critical comparison of them. Surrogate models represent the pressure data for both data sets: cubic B-spline surrogate models represent the computational simulation results, while machine learning and multi-fidelity surrogate models represent the experimental data. This research compares three surrogates for the experimental data (sequential, a.k.a. online, Gaussian processes; batch Gaussian processes; and a multi-fidelity additive corrector) on the merits of accuracy and computational cost. The Gaussian process (GP) methods employ the cubic B-spline CFD surrogates as a model basis function to build a surrogate model of the WT data, and this usage of the CFD surrogate in building the WT…
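
    The additive-corrector flavor of the merge can be sketched with scikit-learn: treat a smooth "CFD" curve as the trend and fit a Gaussian process to the wind-tunnel residuals, so the merged prediction is trend plus learned discrepancy. All data and the CFD curve below are synthetic placeholders, not the dissertation's surrogates.

```python
# Sketch: multi-fidelity additive corrector. A smooth stand-in for the CFD
# surrogate is the trend; a GP learns the WT-minus-CFD discrepancy.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def cfd_cp(x):                       # stand-in for the cubic B-spline CFD surrogate
    return -1.2 * np.sin(np.pi * x)

rng = np.random.default_rng(5)
x_wt = rng.uniform(0, 1, 25)         # chordwise tap locations of WT data
cp_wt = cfd_cp(x_wt) + 0.15 * (x_wt - 0.5) + 0.02 * rng.standard_normal(25)

# GP on the discrepancy WT - CFD, i.e. the additive corrector
gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-4),
                              normalize_y=True)
gp.fit(x_wt.reshape(-1, 1), cp_wt - cfd_cp(x_wt))

x = np.linspace(0, 1, 101).reshape(-1, 1)
merged = cfd_cp(x.ravel()) + gp.predict(x)   # merged Cp prediction
print("merged Cp at mid-chord:", merged[50])
```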

  4. Methods for the development of in silico GPCR models

    Science.gov (United States)

    Morales, Paula; Hurst, Dow P.; Reggio, Patricia H.

    2018-01-01

    The Reggio group has constructed computer models of the inactive and G-protein-activated states of the cannabinoid CB1 and CB2 receptors, as well as of several orphan receptors that recognize a subset of cannabinoid compounds, including GPR55 and GPR18. These models have been used to design ligands, mutations, and covalent labeling studies, and the resultant second-generation models have been used to design ligands with improved affinity, efficacy, and subtype selectivity. Herein, we provide a guide for the development of GPCR models using the most recent orphan receptor studied in our lab, GPR3. GPR3 is an orphan receptor that belongs to the Class A family of G-protein coupled receptors. It shares high sequence similarity with GPR6, GPR12, the lysophospholipid receptors, and the cannabinoid receptors. GPR3 is predominantly expressed in mammalian brain and oocytes, and it is known as a Gαs-coupled receptor that is constitutively active in cells. GPR3 represents a possible target for the treatment of different pathological conditions such as Alzheimer's disease, oocyte maturation, or neuropathic pain. However, the lack of potent and selective GPR3 ligands is delaying the exploitation of this promising therapeutic target. In this context, we aim to develop a homology model that helps us elucidate the structural determinants governing ligand-receptor interactions at GPR3. In this chapter, we detail the methods and rationale behind the construction of the GPR3 active- and inactive-state models. These homology models will enable the rational design of novel ligands, which may serve as research tools for further understanding of the biological role of GPR3.

  5. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
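
    As a toy illustration of the test loop described above, the following sketch (with an invented two-parameter energy model and synthetic degree-day data, not the paper's BPI-2400/ASHRAE-14 workflow) generates surrogate bills from a "true" model, calibrates a perturbed model to them, and reports the three figures of merit.

```python
# Minimal sketch (hypothetical model and numbers): test a calibration technique
# against synthetic "truth" generated by the simulation program itself.
import numpy as np
from scipy.optimize import least_squares

def energy_model(params, hdd):
    """Toy monthly energy model: base load + heating slope * degree-days."""
    base, slope = params
    return base + slope * hdd

hdd = np.array([600, 500, 380, 220, 90, 20, 5, 10, 80, 250, 420, 560.0])
true_params = np.array([400.0, 1.5])          # "true" inputs (known here)
bills = energy_model(true_params, hdd)        # synthetic utility bills

# Calibration: start from audit estimates and tune to the synthetic bills.
audit_guess = np.array([500.0, 1.1])
fit = least_squares(lambda p: energy_model(p, hdd) - bills, audit_guess)

# Figures of merit: (1) retrofit-savings accuracy, (2) closure on the true
# inputs, (3) goodness of fit to the bills.
retrofit = lambda p: energy_model([p[0], 0.7 * p[1]], hdd)  # e.g. insulation
savings_true = bills.sum() - retrofit(true_params).sum()
savings_cal = energy_model(fit.x, hdd).sum() - retrofit(fit.x).sum()
print("savings error:", savings_cal - savings_true)
print("parameter error:", fit.x - true_params)
print("bill RMSE:", np.sqrt(np.mean(fit.fun ** 2)))
```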

  6. Pursuing the method of multiple working hypotheses for hydrological modeling

    Science.gov (United States)

    Clark, M. P.; Kavetski, D.; Fenicia, F.

    2012-12-01

    Ambiguities in the representation of environmental processes have manifested themselves in a plethora of hydrological models, differing in almost every aspect of their conceptualization and implementation. The current overabundance of models is symptomatic of an insufficient scientific understanding of environmental dynamics at the catchment scale, which can be attributed to difficulties in measuring and representing the heterogeneity encountered in natural systems. This presentation advocates using the method of multiple working hypotheses for systematic and stringent testing of model alternatives in hydrology. We discuss how the multiple hypothesis approach provides the flexibility to formulate alternative representations (hypotheses) describing both individual processes and the overall system. When combined with incisive diagnostics to scrutinize multiple model representations against observed data, this provides hydrologists with a powerful and systematic approach for model development and improvement. Multiple hypothesis frameworks also support a broader coverage of the model hypothesis space and hence improve the quantification of predictive uncertainty arising from system and component non-identifiabilities. As part of discussing the advantages and limitations of multiple hypothesis frameworks, we critically review major contemporary challenges in hydrological hypothesis-testing, including exploiting different types of data to investigate the fidelity of alternative process representations, accounting for model structure ambiguities arising from major uncertainties in environmental data, quantifying regional differences in dominant hydrological processes, and the grander challenge of understanding the self-organization and optimality principles that may functionally explain and describe the heterogeneities evident in most environmental systems. We assess recent progress in these research directions, and how new advances are possible using multiple hypothesis frameworks.

  7. Investigating the performance of directional boundary layer model through staged modeling method

    Science.gov (United States)

    Jeong, Moon-Gyu; Lee, Won-Chan; Yang, Seung-Hune; Jang, Sung-Hoon; Shim, Seong-Bo; Kim, Young-Chang; Suh, Chun-Suk; Choi, Seong-Woon; Kim, Young-Hee

    2011-04-01

    Generally speaking, the models used in optical proximity effect correction (OPC) can be divided into three parts: the mask part, the optic part, and the resist part. For an OPC model of excellent quality, each part should be described by first principles. However, an OPC model can't incorporate all of the first principles, since it must cover full-chip-level calculation during the correction. Moreover, the calculation has to be done iteratively during the correction until the cost function we want to minimize converges. Normally the optic part of an OPC model is described with the sum of coherent systems (SOCS[1]) method. Thanks to this method we can calculate the aerial image very quickly without significant loss of accuracy. As for the resist part, the first principles are too complex to implement in detail, so the resist is normally expressed in a simple way, such as an approximation of the first principles or a linear combination of factors which are highly correlated with the chemistries in the resist. The quality of this kind of resist model depends on how well we train the model by fitting it to empirical data. The most popular way of making the mask function is based on Kirchhoff's thin-mask approximation. This method works well when the feature size on the mask is sufficiently large, but as the line width of the semiconductor circuit becomes smaller, it causes significant error due to the mask topography effect. To consider the mask topography effect accurately, we have to use rigorous methods of calculating the mask function, such as the finite difference time domain (FDTD[2]) method and rigorous coupled-wave analysis (RCWA[3]). But these methods are too time-consuming to be used as part of an OPC model. Until now many alternatives have been suggested as efficient ways of considering the mask topography effect. Among them we focus on the boundary layer model (BLM) in this paper. We mainly investigated the way of optimization of the parameters for the
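
    For readers unfamiliar with SOCS, the sketch below shows the basic decomposition the abstract refers to: the aerial image is approximated as a weighted sum of coherent images. The kernels and eigenvalues here are arbitrary Gaussians for illustration only; real SOCS kernels come from decomposing the optical system's transmission cross-coefficients.

```python
# Hypothetical SOCS sketch: aerial image as a weighted sum of coherent systems,
# I(x, y) = sum_k sigma_k * |conv(mask, phi_k)(x, y)|^2, with placeholder kernels.
import numpy as np
from scipy.signal import fftconvolve

n = 256
mask = np.zeros((n, n))
mask[:, 120:136] = 1.0                      # a single line feature on the mask

yy, xx = np.mgrid[-32:32, -32:32]
def gaussian_kernel(sx, sy):
    k = np.exp(-(xx ** 2 / (2 * sx ** 2) + yy ** 2 / (2 * sy ** 2)))
    return k / k.sum()

# Placeholder SOCS kernels and eigenvalues (real ones come from the optical
# system's transmission cross-coefficient decomposition).
kernels = [gaussian_kernel(4, 4), gaussian_kernel(8, 3), gaussian_kernel(3, 8)]
sigmas = [1.0, 0.3, 0.3]

aerial = sum(s * np.abs(fftconvolve(mask, kern, mode="same")) ** 2
             for s, kern in zip(sigmas, kernels))
```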

  8. Comparative examination of two methods for modeling autoimmune uveitis

    Directory of Open Access Journals (Sweden)

    Svetlana V. Aksenova

    2017-09-01

    Introduction: Uveitis is a disease of the uveal tract, characterized by a variety of causes and clinical manifestations. Internal antigens often predominate in the pathogenesis of the disease and give rise to so-called autoimmune reactions. Treatment of uveitis has important medico-social significance because of the high prevalence of the disease, its significant rate in young people, and the high disability it causes. The article compares the efficiency of two methods for modeling autoimmune uveitis. Materials and Methods: The research was conducted on 6 rabbits of the Chinchilla breed (12 eyes). Two models of experimental uveitis were reproduced in rabbits using normal horse serum. A clinical examination of the course of the inflammatory process in the eyes was carried out by biomicroscopy using a slit lamp and a direct ophthalmoscope. Histological and immunological examinations were conducted by the authors of the article. Results: A faster-developing and more vivid clinical picture of the disease was observed in the second group. Obvious changes in the immunological status of the animals were also noted: an increase in the number of leukocytes, neutrophils, and HCT-active neutrophils, and activation of phagocytosis. Discussion and Conclusions: The research has shown that the second model of uveitis is the more convenient working variant, being characterized by high activity and long duration of the inflammatory process in the eye.

  9. Dynamic airspace configuration method based on a weighted graph model

    Directory of Open Access Journals (Sweden)

    Chen Yangzhou

    2014-08-01

    This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, where the vertices represent key points such as airports and waypoints, and the edges represent air routes. The vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Then, aircraft counts for each cell and for each air route are computed. By assigning these aircraft counts to the vertices and edges, a weighted graph model comes into being, and the airspace configuration problem is accordingly described as a weighted graph partitioning problem. The problem is then solved by a graph partitioning algorithm that mixes a general weighted graph-cuts algorithm, an optimal dynamic load-balancing algorithm, and a heuristic algorithm. After the cuts algorithm partitions the model into sub-graphs, the load-balancing algorithm together with the heuristic algorithm transfers aircraft counts to balance workload among the sub-graphs. Lastly, airspace configuration is completed by determining the sector boundaries. The simulation results show that the designed sectors satisfy not only the workload-balancing condition but also constraints such as convexity, connectivity, and minimum distance.
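
    A heavily simplified version of this partitioning step can be sketched with a stock algorithm; here networkx's Kernighan-Lin bisection stands in for the paper's mixed cuts/load-balancing/heuristic algorithm, and the waypoints, routes, and aircraft counts are invented.

```python
# Simplified sketch of the sectorization idea: a graph of waypoints weighted
# by aircraft counts, bisected while minimizing the cut across busy routes.
# (Kernighan-Lin balances partition sizes, not workload, so it is only a
# stand-in for the paper's load-balancing step.)
import networkx as nx

G = nx.Graph()
# Hypothetical vertices: (waypoint, aircraft count in its Voronoi cell).
for wp, count in [("A", 12), ("B", 7), ("C", 15), ("D", 5), ("E", 9), ("F", 11)]:
    G.add_node(wp, count=count)
# Hypothetical air routes with aircraft counts as edge weights.
G.add_weighted_edges_from([("A", "B", 8), ("B", "C", 3), ("C", "D", 10),
                           ("D", "E", 2), ("E", "F", 6), ("F", "A", 4),
                           ("B", "E", 5)])

left, right = nx.algorithms.community.kernighan_lin_bisection(G, weight="weight")
workload = lambda part: sum(G.nodes[v]["count"] for v in part)
print(sorted(left), workload(left), "|", sorted(right), workload(right))
```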

  10. Optimization methods and silicon solar cell numerical models

    Science.gov (United States)

    Girardini, K.; Jacobsen, S. E.

    1986-01-01

    An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.

  11. Dimensionality reduction method based on a tensor model

    Science.gov (United States)

    Yan, Ronghua; Peng, Jinye; Ma, Dongmei; Wen, Desheng

    2017-04-01

    Dimensionality reduction is a preprocessing step for hyperspectral image (HSI) classification. Principal component analysis reduces the spectral dimension but does not utilize the spatial information of an HSI. Both spatial and spectral information are used when an HSI is modeled as a tensor; that is, noise in the spatial dimensions is decreased and the spectral dimension is reduced simultaneously. However, this model does not consider factors affecting the spectral signatures of ground objects, which makes further improvement of the classification very difficult. The authors propose that the spectral signatures of ground objects are the composite result of multiple factors, such as illumination, mixture, atmospheric scattering, radiation, and so on. Since these factors are very difficult to distinguish, they are synthesized as within-class factors. Within-class factors, class factors, and pixels are then selected to model a third-order tensor. Experimental results indicate that the classification accuracy of the new method is higher than that of previous methods.
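
    As a generic illustration of tensor-based reduction (not the authors' within-class/class/pixel construction), the sketch below performs a truncated higher-order SVD on a toy cube, compressing only the spectral mode.

```python
# Hedged sketch: truncated higher-order SVD (HOSVD) of a third-order
# HSI-like tensor (rows x cols x bands).
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncate(T, ranks):
    """Leading left singular vectors per mode, plus the core tensor."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = np.einsum("ijk,ia,jb,kc->abc", T, U[0], U[1], U[2])
    return core, U

T = np.random.rand(32, 32, 100)             # toy hyperspectral cube
core, U = hosvd_truncate(T, (32, 32, 10))   # reduce only the spectral mode
T_hat = np.einsum("abc,ia,jb,kc->ijk", core, U[0], U[1], U[2])
print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```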

  12. Outcome modelling strategies in epidemiology: traditional methods and basic alternatives.

    Science.gov (United States)

    Greenland, Sander; Daniel, Rhian; Pearce, Neil

    2016-04-01

    Controlling for too many potential confounders can lead to or aggravate problems of data sparsity or multicollinearity, particularly when the number of covariates is large in relation to the study size. As a result, methods to reduce the number of modelled covariates are often deployed. We review several traditional modelling strategies, including stepwise regression and the 'change-in-estimate' (CIE) approach to deciding which potential confounders to include in an outcome-regression model for estimating effects of a targeted exposure. We discuss their shortcomings, and then provide some basic alternatives and refinements that do not require special macros or programming. Throughout, we assume the main goal is to derive the most accurate effect estimates obtainable from the data and commercial software. Allowing that most users must stay within standard software packages, this goal can be roughly approximated using basic methods to assess, and thereby minimize, mean squared error (MSE). © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association.

  13. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, techniques and methods, covering both modeling and solution issues. It especially presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. This monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms for each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; and students at advanced undergraduate or master's level in information systems, business administration, or the application of computer science.

  14. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This rests on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs of most high-dimensional complex systems. The proposed method is first illustrated on multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software package (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising accuracy.
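
    The low-order expansion at the heart of HDMR can be illustrated in a few lines; the sketch below implements a first-order cut-HDMR about a reference point for an invented three-variable function (the paper applies the same idea to fuzzy finite element responses via α-cuts).

```python
# Illustrative sketch of a first-order (cut-)HDMR expansion about a reference
# point: f(x) ~ f0 + sum_i [f(x_i along its axis, rest at reference) - f0].
# It assumes weak high-order interactions, as the HDMR representation does.
import numpy as np

def f(x):
    """Placeholder expensive model (e.g. one finite element response)."""
    return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[0] * x[2]

x_ref = np.zeros(3)        # the "cut point" of the expansion
f0 = f(x_ref)

def hdmr1(x):
    total = f0
    for i in range(x.size):
        xi = x_ref.copy()
        xi[i] = x[i]       # vary one input, hold the others at the cut point
        total += f(xi) - f0
    return total

x_test = np.array([0.4, -0.3, 0.8])
print("true:", f(x_test), " first-order HDMR:", hdmr1(x_test))
```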

  15. Revisiting a model-independent dark energy reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)

    2012-09-15

    In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes; that is, the state of the art does not allow us to regard them on the same quality basis as SNeIa. We find a considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. We then try to establish the redshift range over which one can currently make solid predictions on dark energy evolution. Finally, we strengthen the former view that this method is modest in the sense that it provides only a picture of the global trend and has to be managed very carefully. On the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)

  16. 'Dem DEMs: Comparing Methods of Digital Elevation Model Creation

    Science.gov (United States)

    Rezza, C.; Phillips, C. B.; Cable, M. L.

    2017-12-01

    Topographic details of Europa's surface yield implications for large-scale processes that occur on the moon, including surface strength, modification, composition, and formation mechanisms for geologic features. In addition, small-scale details in these data are imperative for future exploration of Europa's surface, such as by a potential Europa Lander mission. A comparison of different methods of Digital Elevation Model (DEM) creation, and of the variations between them, can help us quantify the relative accuracy of each model and improve our understanding of Europa's surface. In this work, we used data provided by Phillips et al. (2013, AGU Fall meeting, abs. P34A-1846) and Schenk and Nimmo (2017, in prep.) to compare DEMs that were created using Ames Stereo Pipeline (ASP), SOCET SET, and Paul Schenk's own method. We began by locating areas of the surface with multiple overlapping DEMs, and our initial comparisons were performed near the craters Manannan, Pwyll, and Cilix. For each region, we used ArcGIS to draw profile lines across matching features to determine elevation. Some of the DEMs had vertical or skew offsets and thus had to be corrected. The vertical corrections were applied by adding or subtracting the global minimum of the data set to create a common zero-point. The skewed data sets were corrected by rotating the profile so that it had a global slope of zero and then subtracting a zero-point vertical offset. Once corrections were made, we plotted the three methods on one graph for each profile of each region. Upon analysis, we found relatively good feature correlation between the three methods. The smoothness of a DEM depends on both the input set of images and the stereo processing methods used. In our comparison, the DEMs produced by SOCET SET were less smoothed than those from ASP or Schenk. Height comparisons show that ASP and Schenk's model appear similar, alternating in maximum height. SOCET SET has more topographic variability due to its
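
    The two profile corrections described above are simple to state in code; the sketch below (with an invented toy profile) applies the zero-point shift and the global de-skew.

```python
# Hedged sketch of the two corrections described in the abstract: a zero-point
# shift to a common datum, and a de-skew that removes a global linear slope.
import numpy as np

def zero_point(elev):
    """Shift a profile so its global minimum sits at zero elevation."""
    return elev - np.nanmin(elev)

def deskew(dist, elev):
    """Remove a global linear trend fitted across the whole profile."""
    slope, intercept = np.polyfit(dist, elev, 1)
    return elev - (slope * dist + intercept)

dist = np.linspace(0.0, 5000.0, 200)                  # metres along profile
elev = 30 * np.sin(dist / 400) + 0.01 * dist + 250.0  # toy skewed DEM profile
corrected = zero_point(deskew(dist, elev))
```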

  17. Numerical simulations of multicomponent ecological models with adaptive methods.

    Science.gov (United States)

    Owolabi, Kolade M; Patidar, Kailash C

    2016-01-08

    The study of dynamic relationships in multi-species models has gained a huge amount of scientific interest over the years and will continue to maintain its dominance in both ecology and mathematical ecology in the years to come, due to its practical relevance and universal existence. Some of its emergent phenomena include spatiotemporal patterns, oscillating solutions, multiple steady states and spatial pattern formation. Many time-dependent partial differential equations are found to combine low-order nonlinear terms with higher-order linear terms. In an attempt to obtain reliable results for such problems, it is desirable to use higher-order methods in both space and time. Most computations heretofore have been restricted to second order in time, due to difficulties introduced by the combination of stiffness and nonlinearity. The dynamics of the reaction-diffusion models considered in this paper permit the use of two classic mathematical ideas: we introduce higher-order finite difference approximations for the spatial discretization, and advance the resulting system of ODEs with a family of exponential time differencing schemes. We present the stability properties of these methods along with extensive numerical simulations for a number of multi-species models. When the diffusivity is small, many of the models considered in this paper are found to exhibit a form of localized spatiotemporal patterns. Such patterns are correctly captured in the local analysis of the model equations. Extended 2D results are presented that agree with typical Turing patterns, such as stripes and spots, as well as irregular snakelike structures. We finally show that the designed schemes are dynamically consistent. The dynamic complexities of some ecological models are studied by considering their linear stability analysis. Based on the choices of parameters in transforming the system into a dimensionless form, we were able to obtain a well-balanced system that
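
    To make the time-stepping idea concrete, here is a minimal first-order exponential time differencing (ETD1) step for a 1-D logistic reaction-diffusion model, with the stiff diffusion operator treated exactly in Fourier space; the paper itself uses higher-order ETD variants and higher-order spatial discretizations.

```python
# Hedged sketch: ETD1 for u_t = diff*u_xx + u(1-u), diffusion handled exactly
# in Fourier space, nonlinearity treated with a first-order phi-function.
import numpy as np

n, L, diff, dt = 256, 50.0, 0.1, 0.1
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
lam = -diff * k ** 2                           # diffusion eigenvalues

E = np.exp(lam * dt)                           # exact linear propagator
# phi1(z) = (e^z - 1)/z, with the lam -> 0 limit (value 1) handled explicitly.
phi1 = np.where(lam == 0.0, 1.0,
                (E - 1.0) / np.where(lam == 0.0, 1.0, lam * dt))

u = 0.5 + 0.1 * np.exp(-(x - L / 2) ** 2)      # initial bump
for _ in range(1000):
    nl = u * (1.0 - u)                         # logistic reaction term N(u)
    u_hat = E * np.fft.fft(u) + dt * phi1 * np.fft.fft(nl)
    u = np.real(np.fft.ifft(u_hat))
```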

  18. Modeling cometary photopolarimetric characteristics with Sh-matrix method

    Science.gov (United States)

    Kolokolova, L.; Petrov, D.

    2017-12-01

    Cometary dust is dominated by particles of complex shape and structure, which are often considered as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very demanding of computer time and memory. We present a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method builds on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it was found that the shape-dependent factors can be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive-index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all the advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves can be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique for simulating light scattering by particles of complex shape and surface structure. In this paper, we represent cometary dust as an ensemble of Gaussian random particles. The shape of these particles is described by a log-normal distribution of their radius length and direction (Muinonen, EMP, 72, 1996). By changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles, from spheres to particles of random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulting from light scattering by such particles, studying how they depend on particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to cometary observations.

  19. Statistical Models and Methods for Network Meta-Analysis.

    Science.gov (United States)

    Madden, L V; Piepho, H-P; Paul, P A

    2016-08-01

    Meta-analysis, the methodology for analyzing the results from multiple independent studies, has grown tremendously in popularity over the last four decades. Although most meta-analyses involve a single effect size (summary result, such as a treatment difference) from each study, there are often multiple treatments of interest across the network of studies in the analysis. Multi-treatment (or network) meta-analysis can be used for simultaneously analyzing the results from all the treatments. However, the methodology is considerably more complicated than for the analysis of a single effect size, and there have not been adequate explanations of the approach for agricultural investigations. We review the methods and models for conducting a network meta-analysis based on frequentist statistical principles, and demonstrate the procedures using a published multi-treatment plant pathology data set. A major advantage of network meta-analysis is that correlations of estimated treatment effects are automatically taken into account when an appropriate model is used. Moreover, treatment comparisons may be possible in a network meta-analysis that are not possible in a single study because all treatments of interest may not be included in any given study. We review several models that consider the study effect as either fixed or random, and show how to interpret model-fitting output. We further show how to model the effect of moderator variables (study-level characteristics) on treatment effects, and present one approach to test for the consistency of treatment effects across the network. Online supplemental files give explanations on fitting the network meta-analytical models using SAS.

  20. Tail modeling in a stretched magnetosphere. I - Methods and transformations

    Science.gov (United States)

    Stern, David P.

    1987-01-01

    A new method is developed for representing the magnetospheric field B as a distorted dipole field. Because ∇·B = 0 must be maintained, such a distortion may be viewed as a transformation of the vector potential A. The simplest form is a one-dimensional 'stretch transformation' along the x axis, concisely represented by the 'stretch function' f(x), which is also a convenient tool for representing features of the substorm cycle. One-dimensional stretch transformations are extended to spherical, cylindrical, and parabolic coordinates and then to arbitrary coordinates. It is shown that distortion transformations can be viewed as mappings of field lines from one pattern to another; the final result only requires knowledge of the field and not of the potentials. General transformations in Cartesian and arbitrary coordinates are derived, and applications to field modeling, field line motion, MHD modeling, and incompressible fluid dynamics are considered.

  1. Engineering models and methods for industrial cell control

    DEFF Research Database (Denmark)

    Lynggaard, Hans Jørgen Birk; Alting, Leo

    1997-01-01

    This paper is concerned with the engineering, i.e. the designing and making, of industrial cell control systems. The focus is on automated robot welding cells in the shipbuilding industry. The industrial research project defines models and methods for design and implementation of computer-based control and monitoring systems for production cells. The project participants are The Danish Academy of Technical Sciences, the Institute of Manufacturing Engineering at the Technical University of Denmark and ODENSE STEEL SHIPYARD Ltd. The manufacturing environment and the current practice for engineering of cell control systems have been analysed, as well as automation software enablers. A number of problems related to these issues are identified. In order to support engineering of cell control systems by the use of enablers, a generic cell control data model and an architecture have been defined...

  2. Computational methods of the Advanced Fluid Dynamics Model

    International Nuclear Information System (INIS)

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.

    1987-01-01

    To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development

  3. Scattering of surface waves modelled by the integral equation method

    Science.gov (United States)

    Lu, Laiyu; Maupin, Valerie; Zeng, Rongsheng; Ding, Zhifeng

    2008-09-01

    The integral equation method is used to model the propagation of surface waves in 3-D structures. The wavefield is represented by the Fredholm integral equation, and the scattered surface waves are calculated by solving the integral equation numerically. The integration of the Green's function elements is given analytically by treating the singularity of the Hankel function at R = 0, based on the proper expression of the Green's function and the addition theorem of the Hankel function. No far-field or Born approximations are made. We investigate the scattering of surface waves propagating in layered reference models embedding a heterogeneity with density as well as Lamé-constant contrasts, both in the frequency and time domains, for incident plane waves and point sources.

  4. A model for explaining fusion suppression using the classical trajectory method

    Directory of Open Access Journals (Sweden)

    Phookan C. K.

    2015-01-01

    We adopt a semi-classical approach to explain projectile breakup and above-barrier fusion suppression for the reactions 6Li+152Sm and 6Li+144Sm. The cut-off impact parameter for fusion is determined by employing quantum mechanical ideas. Within this cut-off impact parameter, the fraction of projectiles undergoing breakup is determined using the classical trajectory method in two dimensions. For obtaining the initial conditions of the equations of motion, a simplified model of the 6Li nucleus is proposed. We introduce a simple formula for the explanation of fusion suppression and find excellent agreement between the experimental and calculated fusion cross sections. A slight modification of the formula for fusion suppression is also proposed for a three-dimensional model.

  5. Modeling of Cracked Beams by the Experimental Design Method

    Directory of Open Access Journals (Sweden)

    M. Serier

    Understanding phenomena, whatever their nature, rests on the experimental results obtained. In most cases, this requires a large number of tests in order to arrive at reliable and useful observations that can subsequently be applied to solving technical problems. This paper casts the combinations of independent variables obtained from experimentation into a mathematical formulation. Indeed, mathematical modeling gives us the advantage of optimizing and predicting the right choices without passing each case through experiment. In this work we apply the experimental design method to the experimental results found by Deokar (2011) concerning the effect of the size and position of a crack on the measured frequency of a cantilever beam, and validate the mathematical model to predict other frequencies.

  6. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. A mathematical model and numerical method for thermoelectric DNA sequencing

    Science.gov (United States)

    Shi, Liwei; Guilbeau, Eric J.; Nestorova, Gergana; Dai, Weizhong

    2014-05-01

    Single nucleotide polymorphisms (SNPs) are single base pair variations within the genome that are important indicators of genetic predisposition towards specific diseases. This study explores the feasibility of SNP detection using a thermoelectric sequencing method that measures the heat released when DNA polymerase inserts a deoxyribonucleoside triphosphate into a DNA strand. We propose a three-dimensional mathematical model that governs the DNA sequencing device, with a reaction zone containing the DNA template/primer complex immobilized on the surface of the lower channel wall. The model is then solved numerically, and the concentrations of reactants and the temperature distribution are obtained. Results indicate that when the nucleoside is complementary to the next base in the DNA template, polymerization occurs, lengthening the complementary strand and releasing thermal energy with a measurable temperature change. This implies that the thermoelectric conceptual device for sequencing DNA may be feasible for identifying specific genes in individuals.

  8. Modeling of electromigration salt removal methods in building materials

    DEFF Research Database (Denmark)

    Johannesson, Björn; Ottosen, Lisbeth M.

    2008-01-01

    and the effect of the composition of the ionic constituents on the overall behavior of the salt removal process. The model is obtained by adopting a Fick's-law type of assumption for each ionic species considered and also assuming that all ions are affected by the applied external electrical field in accordance with their ionic mobility properties. It is further assumed that Gauss's law can be used to calculate the internal electrical field induced by the diffusion itself. In this manner, the applied external electrical field can be modeled simply by assigning proper boundary conditions for the equation calculating the electrical field. A tailor-made finite element code is written, capable of solving the transient non-linear coupled set of differential equations numerically. A truly implicit time integration scheme is used together with a modified Newton-Raphson method to tackle the non-linearities.
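
    A single-species, constant-field sketch of the transport building block described here might look as follows (hypothetical parameters; the paper's model couples several species and computes the field from Gauss's law rather than holding it fixed).

```python
# Simplified 1-D sketch: one ionic species moving by Fickian diffusion plus
# electromigration in a constant applied field E,
#   dc/dt = D d2c/dx2 + mu*E dc/dx,
# solved with explicit finite differences. All parameter values are invented.
import numpy as np

nx, dx, dt = 200, 1e-3, 0.5               # grid spacing (m), time step (s)
D, mu, E = 1e-9, 4e-8, 50.0               # diffusivity, mobility, field (V/m)

c = np.zeros(nx)
c[:20] = 1.0                              # salt initially near one face

for _ in range(20000):
    d2c = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx ** 2
    dc = (np.roll(c, -1) - np.roll(c, 1)) / (2 * dx)
    c += dt * (D * d2c + mu * E * dc)
    c[0], c[-1] = c[1], c[-2]             # approximate zero-flux boundaries
```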

  9. Methods for Developing Emissions Scenarios for Integrated Assessment Models

    Energy Technology Data Exchange (ETDEWEB)

    Prinn, Ronald [MIT; Webster, Mort [MIT

    2007-08-20

    The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.

  10. The Effectiveness of Hard Martial Arts in People over Forty: An Attempted Systematic Review

    Directory of Open Access Journals (Sweden)

    Gaby Pons van Dijk

    2014-04-01

    The objective was to assess the effect of hard martial arts on physical fitness components such as balance, flexibility, gait, strength and cardiorespiratory function, as well as several mental functions, in people over forty. A computerized literature search was carried out. Studies were selected when they had an experimental design, the age of the study population was >40, one of the interventions was a hard martial art, and at least balance and cardiorespiratory functions were used as outcome measures. We included four studies, with, in total, 112 participants, aged between 51 and 93 years. The intervention consisted of Taekwondo or Karate. Total training duration varied from 17 to 234 h. All four studies reported beneficial effects, such as improvement in balance, in reaction tests, and in duration of single-leg stance. We conclude that because of serious methodological shortcomings in all four studies, there is currently suggestive but insufficient evidence that hard martial arts practice improves physical fitness functions in healthy people over 40. However, considering the importance of such effects and the low costs of the intervention, the potential beneficial health effects of age-adapted, hard martial arts training in people over 40 warrant further study.

  11. Cognition improvement in Taekwondo novices over forty. Results from the SEKWONDO Study.

    Directory of Open Access Journals (Sweden)

    Gaby ePons Van Dijk

    2013-11-01

    Age-related cognitive decline is associated with increased risk of disability, dementia and death. Recent studies suggest improvement in cognitive speed, attention and executive functioning with physical activity. However, whether such improvements are activity specific is unclear. We therefore aimed to study the effect of one year of age-adapted Taekwondo training on several cognitive functions, including reaction/motor time, information processing speed, and working and executive memory, in 24 healthy volunteers over forty. Reaction time and motor time decreased by 41.2 ms and 18.4 ms (p=0.004 and p=0.015, respectively). Performance on the digit symbol coding task improved by a mean of 3.7 digits (p=0.017). Digit span, letter fluency, and trail making test completion time all improved, but not statistically significantly. The questionnaire reported reaction time as better in 10 and unchanged in 9 of the nineteen study compliers. In conclusion, our data suggest that age-adapted Taekwondo training improves various aspects of cognitive function in people over 40 and may therefore offer a cheap, safe and enjoyable way to mitigate age-related cognitive decline.

  12. Forty years increase of the air ambient temperature in Greece: The impact on buildings

    International Nuclear Information System (INIS)

    Kapsomenakis, J.; Kolokotsa, D.; Nikolaou, T.; Santamouris, M.; Zerefos, S.C.

    2013-01-01

    Highlights: • Forty years of hourly data series from nine meteorological stations in Greece are analysed. • The air temperature increase influences buildings' energy demand. • A typical office building's energy demand is examined. • The heating load has decreased by about 1 kWh/m² per decade. • The cooling load has increased by about 5 kWh/m² per decade. - Abstract: Air temperatures in urban areas continue to increase because of the heat island phenomenon (UHI) and the undeniable warming of the lower atmosphere during the past few decades. The observed high ambient air temperatures intensify the energy demand in cities, deteriorate urban comfort conditions, endanger the vulnerable population and amplify pollution problems, especially in regions with hot climatic conditions. The present paper analyses 40 years of hourly data series from nine meteorological stations in Greece in order to understand the impact of air temperature and relative humidity trends on the energy consumption of buildings. Using a typical office building, the analysis showed that over the period in question the heating load in the Greek building sector decreased by about 1 kWh/m² per decade, while the cooling load increased by about 5 kWh/m² per decade. This phenomenon has major environmental, economic and social consequences, which will be amplified in the upcoming decades in view of the expected man-made climatic changes in this geographic area.

  13. Sixty Days Remaining, Forty Years of CERN, Two Brothers, One Exclusive Interview

    CERN Multimedia

    2001-01-01

    Twins Marcel and Daniel Genolin, while sharing memories of their CERN experiences, point out just how much smaller the Meyrin site once was. In a place such as CERN, where the physical sciences are in many ways the essence of our daily lives and where technological advancement is an everyday occurrence, it is easy to lose track of the days, months, and even years. But last week twin brothers Daniel and Marcel Genolin, hired in the early sixties and getting ready to end their eventful forty-year CERN experiences, made it clear that the winds of time bluster past us whether we are aware or not. 'CERN was very small when we started,' says Marcel, who has worked in transport during his entire time here. A lot has changed. 'When I got here there were no phones in people's houses,' he recalls. 'When there were problems in the control room with the PS (Proton Synchrotron) they used to get a megaphone and tell us [the transport service] to go and get the necessary physicists from their homes in the area. We had to lo...

  14. Triterpenic content and chemometric analysis of virgin olive oils from forty olive cultivars.

    Science.gov (United States)

    Allouche, Yosra; Jiménez, Antonio; Uceda, Marino; Aguilera, M Paz; Gaforio, José Juan; Beltrán, Gabriel

    2009-05-13

    Forty olive cultivars (Olea europaea, L.) from the World Olive Germplasm Bank Collection of Cordoba (Spain) were studied for the triterpenic dialcohol (uvaol and erythrodiol) and acid (oleanolic, ursolic, maslinic) composition of their oils. Dialcohol content ranged from 8.15 to 85.05 mg/kg, erythrodiol being the most predominant (from 5.89 to 73.78 mg/kg), whereas uvaol was found at lower levels (from 1.50 to 19.35 mg/kg). Triterpenic acid concentration ranged from 8.90 to 112.36 mg/kg. Among them, ursolic acid was found at trace levels, while the mean values of oleanolic and maslinic acids ranged from 3.39 to 78.83 mg/kg and 3.93 to 49.81 mg/kg, respectively. The variability observed in both triterpenic dialcohol and acid content was emphasized by principal component and cluster analyses. Both analyses were able to discriminate between oil samples, especially by erythrodiol, oleanolic acid, and maslinic acid. In view of these results, we conclude that the virgin olive oil triterpenic fraction can be considered a useful tool to characterize monovarietal virgin olive oils.

  15. Parathyroid autotransplantation in forty-four patients with primary hyperparathyroidism: the role of thallium scanning

    International Nuclear Information System (INIS)

    McCall, A.R.; Calandra, D.; Lawrence, A.M.; Henkin, R.; Paloyan, E.

    1986-01-01

    Forty-four patients with primary hyperparathyroidism were followed for 18 to 126 months after subtotal or total parathyroidectomy and parathyroid autotransplantation. Indications for autotransplantation included the devascularization of parathyroid glands during concomitant thyroid lobectomy or total thyroidectomy and the excision of the only remaining parathyroid tissue in patients with persistent hyperparathyroidism after previous unsuccessful parathyroidectomies. Before implantation, all parathyroid tissue was histologically evaluated by frozen-section light microscopy with hematoxylin and eosin stain. Fifteen patients had histologically normal implants; to date none of these patients have developed recurrent hyperparathyroidism. Twenty-nine patients had either adenomatous or hyperplastic parathyroid tissue used for implants; two of these patients developed graft-dependent recurrent hyperparathyroidism 4 and 7 years later. In both patients the grafts were preoperatively localized by thallium scanning and their resection restored eucalcemia. One hundred thirty-one patients from 11 series in the current literature had a cumulative incidence of 17.5% for presumed graft-dependent recurrence and a 9.2% incidence of graft excision followed by eucalcemia. In comparison, in the present series the incidence of graft-dependent recurrent hyperparathyroidism in patients with either adenomatous or hyperplastic implants stands at 6.9%. In contrast, in 15 patients with normal parathyroid tissue implants, the incidence was zero

  16. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added the modern progress in information technology, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs to the economic model.
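
    The flavour of such probabilistic Markov simulations can be conveyed with a small numpy sketch; the three-state model, Dirichlet draws (standing in for WinBUGS posterior samples), and costs below are all invented placeholders.

```python
# Hedged sketch of a probabilistic Markov cohort model: uncertain transition
# rows are drawn from Dirichlet distributions (posterior-like), the cohort is
# propagated, and the cost distribution is summarized. All numbers invented.
import numpy as np

rng = np.random.default_rng(42)
n_sim, n_cycles = 5000, 20
costs = np.array([1000.0, 5000.0, 0.0])        # well / sick / dead, per cycle

total_cost = np.empty(n_sim)
for s in range(n_sim):
    P = np.vstack([
        rng.dirichlet([80, 15, 5]),            # transitions from "well"
        rng.dirichlet([10, 70, 20]),           # transitions from "sick"
        [0.0, 0.0, 1.0],                       # "dead" is absorbing
    ])
    cohort = np.array([1.0, 0.0, 0.0])         # everyone starts well
    cost = 0.0
    for _ in range(n_cycles):
        cohort = cohort @ P
        cost += cohort @ costs
    total_cost[s] = cost

print("mean cost:", total_cost.mean(),
      " 95% interval:", np.percentile(total_cost, [2.5, 97.5]))
```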

  17. Engineering models and methods for industrial cell control

    DEFF Research Database (Denmark)

    Lynggaard, Hans Jørgen Birk; Alting, Leo

    1997-01-01

    This paper is concerned with the engineering, i.e. the designing and making, of industrial cell control systems. The focus is on automated robot welding cells in the shipbuilding industry. The industrial research project defines models and methods for design and implementation of computer-based control and monitoring systems for production cells. Further, an engineering methodology is defined. The three elements (enablers, architecture and methodology) constitute the Cell Control Engineering concept, which has been defined and evaluated through the implementation of two cell control systems for robot welding cells in production at ODENSE STEEL SHIPYARD Ltd.

  18. Modeling intraindividual variability with repeated measures data methods and applications

    CERN Document Server

    Hershberger, Scott L

    2013-01-01

    This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical examp

  19. A Probabilistic Recommendation Method Inspired by Latent Dirichlet Allocation Model

    Directory of Open Access Journals (Sweden)

    WenBo Xie

    2014-01-01

    The recent decade has witnessed an increasing popularity of recommendation systems, which help users acquire relevant knowledge, commodities, and services from an overwhelming information ocean on the Internet. Latent Dirichlet Allocation (LDA), originally presented as a graphical model for text topic discovery, has now found application in many other disciplines. In this paper, we propose an LDA-inspired probabilistic recommendation method that takes the user-item collecting behavior as a two-step process: every user first becomes a member of one latent user-group with a certain probability, and each user-group then collects various items with different probabilities. Gibbs sampling is employed to approximate all the probabilities in the two-step process. The experimental results on three real-world data sets, MovieLens, Netflix, and Last.fm, show that our method exhibits competitive performance on precision, coverage, and diversity in comparison with four typical recommendation methods. Moreover, we present an approximate strategy to reduce the computational complexity of our method with only a slight degradation of performance.
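
    Once the two sets of probabilities have been estimated (by Gibbs sampling in the paper), scoring reduces to mixing over latent groups. The sketch below shows that scoring step with tiny invented probability matrices in place of sampled ones.

```python
# Hedged sketch of the two-step scoring this abstract describes:
#   p(item | user) = sum_g p(g | user) * p(item | g),
# with invented matrices standing in for Gibbs-sampled estimates.
import numpy as np

theta = np.array([[0.8, 0.2],            # p(group | user), one row per user
                  [0.1, 0.9],
                  [0.5, 0.5]])
phi = np.array([[0.4, 0.3, 0.2, 0.1],    # p(item | group), one row per group
                [0.05, 0.15, 0.3, 0.5]])

scores = theta @ phi                     # p(item | user), users x items
collected = np.array([[1, 0, 0, 0],      # items each user already has
                      [0, 0, 0, 1],
                      [0, 1, 0, 0]])
scores[collected == 1] = 0.0             # never re-recommend collected items
print(scores.argmax(axis=1))             # top remaining item per user
```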

  20. 3D virtual human rapid modeling method based on top-down modeling mechanism

    Directory of Open Access Journals (Sweden)

    LI Taotao

    2017-01-01

    Aiming to satisfy the vast demand for custom-made 3D virtual human characters and for rapid modeling in the field of 3D virtual reality, a new top-down rapid modeling method for virtual humans is put forward in this paper, based on a systematic analysis of the current state and shortcomings of virtual human modeling technology. After the top-level design of the virtual human hierarchical structure frame, modular expression of the virtual human and parameter design for each module are achieved level by level downwards. While the relationships of connectors and mapping restraints among different modules are established, the definition of the size and texture parameters is also completed. A standardized process is meanwhile produced to support and adapt the practical operation of virtual human top-down rapid modeling. Finally, a modeling application taking a Chinese captain character as an example is carried out to validate the virtual human rapid modeling method based on the top-down modeling mechanism. The result demonstrates high modeling efficiency and provides a new concept for 3D virtual human geometric and texture modeling.

  1. A method for increasing the abrasive wear resistance of parts produced by casting on gasifying models

    Science.gov (United States)

    Sedukhin, V. V.; Anikeev, A. N.; Chumanov, I. V.

    2017-11-01

    A method for hardening the working layer of parts operating under highly abrasive conditions is examined in this work: a blend of refractory WC and TiC particles in a 70/30 wt.% ratio is prepared beforehand and applied to the polystyrene pattern in the casting mould. After the metal is poured into the mould and held for crystallization, a study is carried out. Examination of the macro- and microstructure of the samples obtained shows that the thickness and structure of the hardened layer depend on the duration of interaction between the carbide blend and the liquid metal. Dispersed particles of different kinds are observed to interact with the matrix metal in different ways under the same conditions. Tests of the abrasive wear resistance of the materials obtained were conducted under laboratory conditions by the residual-mass method. The wear resistance results showed that producing a hard coating from the tungsten carbide/titanium carbide blend, by applying it to the surface of the foam polystyrene pattern before moulding, yields parts whose surfaces have a wear resistance 2.5 times higher than that of parts of the same steel without the coating. Moreover, the energy required to reduce a unit mass of the hardened layer to powder is 2.06 times higher than for the uncoated material.

  2. OBJECT ORIENTED MODELLING, A MODELLING METHOD OF AN ECONOMIC ORGANIZATION ACTIVITY

    Directory of Open Access Journals (Sweden)

    TĂNĂSESCU ANA

    2014-05-01

    Nowadays, most economic organizations use different types of information systems in order to facilitate their activity. There are different methodologies, methods and techniques that can be used to design information systems. In this paper, I present the advantages of using object-oriented modelling in the design of an economic organization's information system. Thus, I have modelled the activity of a photo studio, using Visual Paradigm for UML as a modelling tool. For this purpose, I have identified the use cases for the analyzed system and presented the use case diagram. I have also carried out static and dynamic modelling of the system, using the best-known UML diagrams.

  3. Modeling the Performance of the Fast Multipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-04-06

    The current trend in high performance computing is pushing towards exascale computing. To achieve this exascale performance, future systems will have between 100 million and 1 billion cores, assuming gigahertz cores. Currently, there are many efforts studying the hardware and software bottlenecks for building an exascale system. It is important to understand and meet these bottlenecks in order to attain 10 PFLOPS performance. On the applications side, there is an urgent need to model application performance and to understand what changes need to be made to ensure continued scalability at this scale. Fast multipole methods (FMM) were originally developed for accelerating N-body problems for particle-based methods. Nowadays, FMM is more than an N-body solver; recent trends in HPC have been to use FMMs in unconventional application areas. FMM is likely to be a main player at exascale due to its hierarchical nature and the techniques used to access the data via a tree structure, which allow many operations to happen simultaneously at each level of the hierarchy. In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. The ultimate aim of this thesis is to ensure the scalability of FMM on future exascale machines.

  4. Three dimensional wavefield modeling using the pseudospectral method; Pseudospectral ho ni yoru sanjigen hadoba modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sato, T.; Matsuoka, T. [Japan Petroleum Exploration Corp., Tokyo (Japan); Saeki, T. [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1997-05-27

    Discussed in this report is wavefield simulation for the 3-dimensional seismic survey. With exploration targets growing deeper and more complicated in structure, survey methods are now turning 3-dimensional. There are several modelling methods for the numerical calculation of 3-dimensional wavefields, such as the finite difference method and the pseudospectral method, all of which demand an exorbitantly large memory and long calculation times, and are costly. Such methods have of late become feasible, however, thanks to the advent of the parallel computer. Compared with the finite difference method, the pseudospectral method requires a smaller computer memory and shorter computation time, and is more flexible in accepting models. It outputs full waveforms just like the finite difference method, and does not introduce numerical dispersion into the wavefield. As the computation platform, the parallel computer nCUBE-2S is used. The computational domain is divided among the processors, each taking care only of its share, so that the parallel computation as a whole achieves very high speed. By use of the pseudospectral method, a 3-dimensional simulation is completed within a tolerable computation time. 7 refs., 3 figs., 1 tab.
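
    The core trick the abstract relies on, spectral differentiation by FFT, fits in a few lines; the sketch below differentiates a smooth 1-D field and checks it against the analytic derivative (a toy stand-in for the 3-D seismic code).

```python
# Minimal sketch of the pseudospectral idea: spatial derivatives computed by
# FFT, so few grid points per wavelength are needed and no numerical
# dispersion is introduced by differencing.
import numpy as np

n, L = 128, 10.0
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

u = np.exp(-((x - L / 2) ** 2))                      # smooth test field
du_spec = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
du_true = -2 * (x - L / 2) * u                       # analytic derivative
print("max error:", np.abs(du_spec - du_true).max())  # ~machine precision
```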

  5. Multinomial Response Models for Modeling and Determining Important Factors in Different Contraceptive Methods in Women

    Directory of Open Access Journals (Sweden)

    E Haji Nejad

    2001-06-01

    Different aspects of multinomial statistical modelling and its classifications have been studied. In this type of problem, Y is a qualitative random variable with T possible states, which are regarded as classes, and the goal is the prediction of Y based on a random vector X ∈ IR^m. Many methods for analyzing such problems have been considered. One modern and general method of classification is Classification and Regression Trees (CART). Another is recursive partitioning, which is closely related to nonparametric regression. Classical discriminant analysis is a standard method for analyzing this type of data. Flexible discriminant analysis combines nonparametric regression with discriminant analysis, and classification using splines includes least squares regression and additive cubic splines. Neural networks are a further advanced statistical method for analyzing these types of data. In this paper, the properties of multinomial logistic regression were investigated, and this method was used to model the factors affecting the choice of contraceptive method among married women aged 15-49 in Ghom province. The response variable has a tetranomial distribution, with levels: none, pills, traditional methods, and a collection of other contraceptive methods. The significant independent variables were place, the woman's age, education, history of pregnancy, and family size. Age at menstruation and age at marriage were not statistically significant.
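
    A hedged sketch of the modelling recipe in the abstract: multinomial logistic regression of a four-level contraceptive-method response on the reported covariates. The file and column names are hypothetical placeholders, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("contraception.csv")            # hypothetical data set
y = pd.Categorical(df["method"]).codes           # none / pills / traditional / other
X = sm.add_constant(df[["place", "age", "education",
                        "pregnancy_history", "family_size"]])

fit = sm.MNLogit(y, X).fit()
print(fit.summary())    # Wald tests indicate which factors are significant
```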

  6. A copula method for modeling directional dependence of genes

    Directory of Open Access Journals (Sweden)

    Park Changyi

    2008-05-01

    Abstract Background Genes interact with each other as basic building blocks of life, forming a complicated network. The relationships between groups of genes with different functions can be represented as gene networks. With the deposition of huge microarray data sets in public domains, the study of gene networks is now possible. In recent years, there has been increasing interest in the reconstruction of gene networks from gene expression data. Recent work includes linear models, Boolean network models, and Bayesian networks. Among them, Bayesian networks seem to be the most effective in constructing gene networks. A major problem with the Bayesian network approach is its excessive computational time, due to the iterative nature of the method, which requires a large search space. Since fitting a model with copulas requires no iterations, no elicitation of priors, and no complicated calculations of posterior distributions, the need to traverse extensive search spaces is eliminated, leading to manageable computational effort. The Bayesian network approach also produces a discrete expression of conditional probabilities; such discreteness is not required in the copula approach, which uses a uniform representation of the continuous random variables. Our method is thus able to overcome a limitation of the Bayesian network method for gene-gene interaction, namely the information loss due to binary transformation. Results We analyzed the gene interactions for two gene data sets (one of eight histone genes, and one of 19 genes that include DNA polymerases, DNA helicase, type B cyclin genes, DNA primases, radiation-sensitive genes, repair-related genes, a replication protein A encoding gene, a DNA replication initiation factor, the securin gene, a nucleosome assembly factor, and a subunit of the cohesin complex) by adopting a measure of directional dependence based on a copula function. We have compared
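
    A simplified sketch of the general idea (not the paper's exact estimator): expression values are rank-transformed onto the copula scale, and a regression-type directional-dependence measure, Var(E[V|U])/Var(V), is estimated by binning. Asymmetry between the two directions suggests a preferred direction of dependence. The data are synthetic.

```python
import numpy as np

def to_uniform(x):
    """Empirical probability-integral transform onto (0, 1) (the copula scale)."""
    ranks = np.argsort(np.argsort(x))
    return (ranks + 1) / (len(x) + 1)

def directional_dependence(u, v, bins=10):
    """Estimate Var(E[V|U]) / Var(V) by binning the copula-scale scores."""
    edges = np.quantile(u, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(u, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([v[idx == b].mean() for b in range(bins)])
    weights = np.array([(idx == b).mean() for b in range(bins)])
    return np.sum(weights * (cond_means - v.mean()) ** 2) / v.var()

rng = np.random.default_rng(0)
x = rng.normal(size=500)                    # "gene 1" expression (synthetic)
y = x ** 2 + 0.3 * rng.normal(size=500)    # "gene 2" driven by gene 1
u, v = to_uniform(x), to_uniform(y)
print(directional_dependence(u, v), directional_dependence(v, u))
```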

  7. A new CFD modeling method for flow blockage accident investigations

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Wenyuan, E-mail: fanwy@mail.ustc.edu.cn; Peng, Changhong, E-mail: pengch@ustc.edu.cn; Chen, Yangli, E-mail: chenyl@mail.ustc.edu.cn; Guo, Yun, E-mail: guoyun79@ustc.edu.cn

    2016-07-15

    Highlights: • Porous-jump treatment is applied to CFD simulation of flow blockages. • Porous-jump treatment predicts results consistent with direct CFD treatment. • Relap5 predicts abnormal flow rate profiles in the MTR SFA blockage scenario. • Relap5 fails to simulate annular heat flux in the blockage case of an annular assembly. • Porous-jump treatment provides reasonable and generalized CFD results. - Abstract: Inlet flow blockages in both flat and annular plate-type fuel assemblies are simulated by Computational Fluid Dynamics (CFD) and system analysis methods, with blockage ratios ranging from 60 to 90%. For all blockage scenarios, the mass flow rate of the blocked channel drops dramatically as the blockage ratio increases, while the mass flow rates of non-blocked channels remain almost steady. As a result of over-simplifications, the system code fails to capture the details of the mass flow rate profiles of non-blocked channels and the power redistribution of fuel plates. In order to obtain generalized CFD results, a new blockage modeling method is developed using the porous-jump condition. For comparison, direct CFD simulations of the postulated blockages are conducted. The porous-jump treatment predicts conservative flow and heat transfer conditions for the blocked channel, and predictions consistent with direct simulation for non-blocked channels. In addition, flow fields in the blocked channel, asymmetric power redistributions of fuel plates, and complex heat transfer phenomena in the annular fuel assembly are obtained and discussed. The present study indicates that the porous-jump condition is a reasonable blockage modeling method that predicts generalized CFD results for flow blockages.
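
    A small sketch of the porous-jump idea: the blockage is replaced by a thin membrane imposing a pressure drop of the Darcy-plus-inertial form used by common CFD codes, dp = (mu/alpha * v + C2 * rho * v^2 / 2) * dm. All coefficient values below are illustrative assumptions, not the paper's.

```python
def porous_jump_dp(v, mu=8.9e-4, rho=998.0, alpha=1e-9, c2=1e4, dm=2e-3):
    """Pressure drop (Pa) across a membrane of thickness dm (m) at face
    velocity v (m/s): viscous (Darcy) term plus inertial term."""
    return (mu / alpha * v + c2 * 0.5 * rho * v ** 2) * dm

for v in (0.5, 1.0, 2.0):          # face velocities in m/s
    print(v, porous_jump_dp(v))    # stronger blockage -> larger alpha^-1, c2
```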

  8. Space Environment Modelling with the Use of Artificial Intelligence Methods

    Science.gov (United States)

    Lundstedt, H.; Wintoft, P.; Wu, J.-G.; Gleisner, H.; Dovheden, V.

    1996-12-01

    Space-based technological systems are affected by space weather in many ways, and several severe satellite failures have been reported at times of space storms. Our society also increasingly depends on satellites for communication, navigation, exploration, and research. Predictions of the conditions in the satellite environment have therefore become very important. We present here predictions made with artificial intelligence (AI) techniques, such as artificial neural networks (ANN) and hybrids of AI methods. We are developing a space weather model based on intelligent hybrid systems (IHS). The model consists of different forecast modules, each predicting the space weather on a specific time-scale; the time-scales range from minutes to months, with fundamental time-scales of 1-5 minutes, 1-3 hours, 1-3 days, and 27 days. Solar and solar wind data are used as input. From solar magnetic field measurements, made either on the ground at the Wilcox Solar Observatory (WSO) at Stanford or from space by the SOHO satellite, solar wind parameters can be predicted and modelled with ANN and MHD models. Magnetograms from WSO are available on a daily basis, whereas SOHO magnetograms will be available every 90 minutes; SOHO magnetograms as input to ANNs will therefore make it possible to predict even solar transient events. Geomagnetic storm activity can today be predicted with very high accuracy by ANN methods using solar wind input data. However, at present real-time solar wind data are only available during part of the day, from the WIND satellite. With the launch of ACE in 1997, solar wind data will instead be available 24 hours per day. The conditions of the satellite environment are disturbed not only at times of geomagnetic storms but also at times of intense solar radiation and highly energetic particles. These events are associated with increased solar activity. Predictions of these events are therefore
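
    A toy sketch of the ANN forecasting approach: predict a geomagnetic activity index from solar-wind speed, density and IMF Bz. Synthetic data stand in for WIND/ACE measurements, and the "coupling function" generating the target is an arbitrary stand-in, not a validated model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
v = rng.uniform(300, 800, n)               # solar wind speed (km/s)
dens = rng.uniform(1, 20, n)               # proton density (cm^-3)
bz = rng.normal(0, 5, n)                   # IMF Bz (nT)
# crude stand-in: activity grows with speed and southward (negative) Bz
index = 0.01 * v * np.clip(-bz, 0, None) + 0.5 * dens + rng.normal(0, 5, n)

X = np.column_stack([v, dens, bz])
X_tr, X_te, y_tr, y_te = train_test_split(X, index, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("R^2 on held-out data:", net.score(X_te, y_te))
```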

  9. Computational Methods for Physical Model Information Management: Opening the Aperture

    International Nuclear Information System (INIS)

    Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.

    2015-01-01

    The volume, velocity and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel with the growth in data, technologies have been developed to efficiently process and store those data, and to effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (Safeguards' standard reference of the nuclear fuel cycle). This paper describes and demonstrates an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural language models, Bayesian methods and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information with the appropriate process step in the Physical Model. The paper details how this capability will consume open source and controlled information sources, be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission-focused applications. (author)
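
    A hedged sketch of the document-routing step only: a TF-IDF representation with a (naive) Bayes classifier mapping text to Physical Model steps. The labels and training snippets are invented placeholders; the IAEA service described above additionally uses ontological and semantic reasoning.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# hypothetical snippets and fuel-cycle step labels, purely for illustration
train_docs = ["uranium ore leaching and milling operations",
              "centrifuge cascade for isotope separation",
              "fuel rod cladding and assembly fabrication"]
train_steps = ["mining_milling", "enrichment", "fuel_fabrication"]

router = make_pipeline(TfidfVectorizer(), MultinomialNB())
router.fit(train_docs, train_steps)
print(router.predict(["new report on gas centrifuge rotors"]))
```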

  10. Semisupervised learning of hidden Markov models via a homotopy method.

    Science.gov (United States)

    Ji, Shihao; Watson, Layne T; Carin, Lawrence

    2009-02-01

    Hidden Markov model (HMM) classifier design is considered for the analysis of sequential data, incorporating both labeled and unlabeled data for training; the balance between the two is controlled by an allocation parameter λ ∈ [0, 1], where λ = 0 corresponds to purely supervised HMM learning (based only on the labeled data) and λ = 1 corresponds to unsupervised HMM-based clustering (based only on the unlabeled data). The associated estimation problem can typically be reduced to solving a set of fixed-point equations in the form of a "natural-parameter homotopy." This paper applies a homotopy method to track a continuous path of solutions, starting from a local supervised solution (λ = 0) and ending at a local unsupervised solution (λ = 1). The homotopy method is guaranteed to track with probability one from λ = 0 to λ = 1 if the λ = 0 solution is unique; this condition is not satisfied for the HMM, since the maximum-likelihood supervised solution (λ = 0) is characterized by many local optima. A modified form of the homotopy map for HMMs assures a track from λ = 0 to λ = 1. Following this track leads to a formulation for selecting λ ∈ (0, 1) for a semisupervised solution, and it also provides a tool for selecting from among multiple locally optimal supervised solutions. The results of applying the proposed method to measured and synthetic sequential data verify its robustness and feasibility compared to the conventional EM approach for semisupervised HMM training.
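
    A generic sketch of natural-parameter homotopy tracking, the numerical idea behind the paper: start from the known solution at λ = 0 and follow the solution path of H(x, λ) = 0 to λ = 1 with small steps in λ plus Newton correction. The scalar target function below is an arbitrary stand-in, not the HMM fixed-point map (which is vector-valued).

```python
import numpy as np

def H(x, lam):
    # (1 - lam)*(x - 2.0) vanishes at the known start x = 2;
    # lam*(x**3 - x - 1) defines the target problem at lam = 1.
    return (1 - lam) * (x - 2.0) + lam * (x ** 3 - x - 1)

def dH_dx(x, lam):
    return (1 - lam) + lam * (3 * x ** 2 - 1)

x = 2.0                                  # unique solution at lambda = 0
for lam in np.linspace(0.0, 1.0, 101):   # small steps along the homotopy path
    for _ in range(20):                  # Newton correction at this lambda
        step = H(x, lam) / dH_dx(x, lam)
        x -= step
        if abs(step) < 1e-12:
            break
print("tracked root of x^3 - x - 1 at lambda = 1:", x)   # ~1.3247
```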

  11. Extrathymic malignancies associated with thymoma: a forty-year experience at a single institution.

    Science.gov (United States)

    Kamata, Toshiko; Yoshida, Shigetoshi; Wada, Hironobu; Fujiwara, Taiki; Suzuki, Hidemi; Nakajima, Takahiro; Iwata, Takekazu; Nakatani, Yukio; Yoshino, Ichiro

    2017-04-01

    Patients with thymoma are reported to have an increased risk of developing second malignancies. The aim of this study was to assess the incidence of second malignancies among patients with thymoma, focusing especially on the impact of lung cancer on survival in these patients. Three hundred and thirty-five patients who underwent surgery for thymoma in Chiba University Hospital from January 1971 to November 2012 were included in this study. Patient records were reviewed retrospectively for data on background, treatment, second malignancies and clinical outcome. Fourteen patients had a history of malignancy before the time of operation, with an additional 20 diagnosed simultaneously with the thymoma. Forty-three malignant lesions in 33 patients were found post-thymectomy. Lung cancer was diagnosed in 17 patients, far exceeding the expected number in the cohort as calculated from Japanese national data. The median survival time of the thymoma patients who had lung cancer at the time of surgery was 5.8 years, and the survival of patients with thymoma and lung cancer was poor in comparison with that of the others. Secondary lung cancer is frequently found in thymoma patients and could be one of the factors limiting survival. We recommend an annual computed tomographic scan of the thorax to detect not only recurrent thymoma but also lung cancer at an early stage, in order to improve the survival of these patients.

  12. Application of blocking diagnosis methods to general circulation models. Part II: model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal); Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)

    2010-12-15

    A previously defined automatic method is applied to reanalysis and to present-day (1950-1989) forced simulations of the ECHO-G model in order to assess the model's performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, the critical parameters and thresholds used to estimate blocking occurrence in the model are not calibrated against an observed reference, but objectively derived from the simulated climatology. The choice of model-dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of the model's systematic errors. The model captures the main blocking features (location, amplitude, annual cycle and persistence) found in observations reasonably well, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. The underestimation mostly arises from the model's inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into the Eurasian continent; these biases are more prominent in the cold and warm seasons and account for much of the Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method. This suggests that, in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the
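
    For contrast with the model-adapted detection above, a sketch of the classic fixed-threshold Tibaldi-Molteni blocking index: at each longitude, the 500 hPa geopotential-height gradients south (GHGS) and north (GHGN) of a central latitude are tested against fixed thresholds. z500 is assumed to be a (lat, lon) array; this is the kind of "widely used index" the paper argues may misdiagnose model realism.

```python
import numpy as np

def blocked_longitudes(z500, lats, delta=0.0):
    """Boolean mask of blocked longitudes for one daily 500 hPa height field."""
    phi_n, phi_0, phi_s = 80.0 + delta, 60.0 + delta, 40.0 + delta
    i_n, i_0, i_s = (np.argmin(np.abs(lats - p)) for p in (phi_n, phi_0, phi_s))
    ghgs = (z500[i_0] - z500[i_s]) / (phi_0 - phi_s)   # m per degree latitude
    ghgn = (z500[i_n] - z500[i_0]) / (phi_n - phi_0)
    return (ghgs > 0.0) & (ghgn < -10.0)               # True where blocked
```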

  13. Theoretical Modelling Methods for Thermal Management of Batteries

    Directory of Open Access Journals (Sweden)

    Bahman Shabani

    2015-09-01

    The main challenge associated with renewable energy generation is the intermittency of the renewable power source. Because of this, back-up generation sources fuelled by fossil fuels are required. In stationary applications, whether the back-up is a diesel generator or a connection to the grid, such systems are yet to be truly emissions-free. One solution to the problem is to use electrochemical energy storage systems (ESS) to store the excess renewable energy and then reuse it when the renewable source is insufficient to meet demand. The performance of an ESS, among other things, is affected by its design, the materials used, and the operating temperature of the system. The operating temperature is critical: operating an ESS at low ambient temperatures affects its capacity and charge acceptance, while operating it at high ambient temperatures affects its lifetime and poses safety risks. Safety risks are magnified in renewable energy storage applications given the scale of the ESS required to meet the energy demand. This necessity has propelled significant effort to model the thermal behaviour of ESS. Understanding and modelling the thermal behaviour of these systems is a crucial consideration before designing an efficient thermal management system that will operate safely and extend the lifetime of the ESS; this is vital in order to eliminate intermittency and add value to renewable sources of power. This paper concentrates on reviewing theoretical approaches used to simulate the operating temperatures of ESS and the subsequent endeavours to model thermal management systems for them, highlighting the advantages and disadvantages of each approach.
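
    A minimal lumped-capacitance sketch of the simplest class of thermal model reviewed above: a single cell at uniform temperature with ohmic heat generation and convective loss. All parameter values are illustrative assumptions, not data from the paper.

```python
m, cp = 1.0, 900.0           # cell mass (kg) and specific heat (J/kg.K)
h, A = 10.0, 0.05            # convection coefficient (W/m^2.K) and area (m^2)
I, R = 20.0, 0.02            # discharge current (A), internal resistance (ohm)
T_amb, T, dt = 25.0, 25.0, 1.0

for _ in range(3600):        # one hour of discharge in 1 s steps
    q_gen = I ** 2 * R                 # ohmic heating (W)
    q_loss = h * A * (T - T_amb)       # convective loss to ambient (W)
    T += dt * (q_gen - q_loss) / (m * cp)
print("temperature after 1 h: %.1f C" % T)   # approaches T_amb + q_gen/(h*A)
```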

  14. Forty Years of Forensic Interviewing of Children Suspected of Sexual Abuse, 1974–2014: Historical Benchmarks

    Directory of Open Access Journals (Sweden)

    Kathleen Coulborn Faller

    2014-12-01

    This article describes the evolution of forensic interviewing as a method to determine whether or not a child has been sexually abused, focusing primarily on the United States. It notes that forensic interviewing practices are challenged both to identify children who have been sexually abused and to exclude children who have not. It describes models for child sexual abuse investigation, early writings and practices related to child interviews, and the development of forensic interview structures from scripted, to semi-structured, to flexible. The article discusses the controversies related to appropriate questions and the use of media (e.g., anatomical dolls and drawings). It summarizes the characteristics of four important interview structures and describes their impact on the field of forensic interviewing. The article also describes forensic interview training and the challenge of implementing that training in forensic practice, and concludes with a summary of progress and remaining controversies, and with future challenges for the field.

  15. A Single-Station Modeling Method for the BDS Triple-Frequency Pseudo-range Correlated Stochastic Model

    Directory of Open Access Journals (Sweden)

    HUANG Lingyong

    2017-05-01

    In order to provide a reliable pseudo-range stochastic model, a method is studied for estimating the correlated stochastic model of BDS triple-frequency pseudo-range observations, based on three BDS triple-frequency pseudo-range-minus-carrier (GIF) combinations and using the data of a single station. In this algorithm, a low-order polynomial fit of each GIF combination is first used to remove errors and constant terms other than the pseudo-range noise. Multiple linear regression analysis is then used to model the stochastic function of the three linearly independent GIF combinations. Finally, the correlated stochastic model of the original BDS triple-frequency pseudo-range observations is obtained by linear transformation. Verification with BDS triple-frequency data shows that this algorithm can obtain a single-station correlated stochastic model of the BDS triple-frequency pseudo-range observations, which is advantageous for providing an accurate stochastic model for navigation, positioning, and integrity monitoring.
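
    A loose, simplified sketch of the single-station recipe (the real method builds three independent GIF combinations and maps the fitted variances back to the raw observables by linear transformation): detrend one pseudo-range-minus-carrier series with a low-order polynomial, then regress the residual variance on an elevation-dependent term. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(2880.0)                          # 24 h of 30 s epochs
elev = 15 + 60 * np.sin(np.pi * t / t[-1])     # synthetic elevation arc (deg)
noise = rng.normal(0.0, 0.3 / np.sin(np.radians(elev)))  # elevation-dependent
gif = 0.5 + 1e-3 * t - 2e-7 * t ** 2 + noise   # slow drift + constants + noise

# low-order polynomial fit removes everything except the pseudo-range noise
resid = gif - np.polyval(np.polyfit(t, gif, 3), t)

# bin residual variance by elevation, then fit sigma^2 = a + b / sin(e)^2
bins = np.digitize(elev, np.arange(20, 80, 10))
ev = [elev[bins == b].mean() for b in np.unique(bins)]
var = [resid[bins == b].var() for b in np.unique(bins)]
A = np.column_stack([np.ones(len(ev)), 1 / np.sin(np.radians(ev)) ** 2])
a, b = np.linalg.lstsq(A, np.array(var), rcond=None)[0]
print(f"sigma^2(e) = {a:.4f} + {b:.4f} / sin^2(e)")
```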

  16. Spatial autocorrelation method using AR model; Kukan jiko sokanho eno AR model no tekiyo

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, H.; Obuchi, T.; Saito, T. [Iwate University, Iwate (Japan). Faculty of Engineering

    1996-05-01

    This study examines the applicability of the AR model to the spatial autocorrelation (SAC) method, which analyzes the surface-wave phase velocity in microtremors for the estimation of underground structure. Microtremor data recorded in Morioka City, Iwate Prefecture, were used. In the SAC method, a spatial autocorrelation function, with frequency as a variable, is determined from microtremor data observed by circular arrays; the Bessel function is then fitted to the spatial autocorrelation coefficient, with the distance between seismographs as a variable, to determine the phase velocity. The result of applying the AR model in this study was compared with the results of the conventional BPF and FFT methods. The phase velocities obtained by the BPF and FFT methods were found to be more scattered than those obtained by the AR model. The scatter in the BPF method is attributed to the bandwidth used in the band-pass filter and, in the FFT method, to the impact of the bandwidth on the smoothing of the cross spectrum. 2 refs., 7 figs.
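
    A sketch of the SPAC phase-velocity step common to all three variants: the azimuthally averaged autocorrelation coefficient of a circular array of radius r follows J0(2πfr/c(f)), so c can be recovered by fitting. For brevity, a single frequency-independent velocity is fitted here to synthetic coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j0

r = 100.0                                  # array radius (m)
freqs = np.linspace(1.0, 8.0, 15)          # Hz
c_true = 400.0                             # "unknown" phase velocity (m/s)
rng = np.random.default_rng(3)
rho_obs = j0(2 * np.pi * freqs * r / c_true) + 0.02 * rng.normal(size=freqs.size)

def spac_model(f, c):
    """Theoretical SPAC coefficient for a circular array of radius r."""
    return j0(2 * np.pi * f * r / c)

(c_est,), _ = curve_fit(spac_model, freqs, rho_obs, p0=[300.0])
print("estimated phase velocity: %.1f m/s" % c_est)
```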

  17. ON MODELING METHODS OF REPRODUCTION OF FIXED ASSETS IN DYNAMIC INPUT - OUTPUT MODELS

    Directory of Open Access Journals (Sweden)

    Baranov A. O.

    2014-12-01

    The article presents a comparative study of methods for modeling the reproduction of fixed assets in various types of dynamic input-output models developed at Novosibirsk State University and at the Institute of Economics and Industrial Engineering of the Siberian Division of the Russian Academy of Sciences. The study compares the techniques of information provision for the investment blocks of the models. The mathematical description of the fixed-assets reproduction block is considered in detail for the Dynamic Input-Output Model included in the KAMIN system and for the optimization interregional input-output model, and the peculiarities of the information support of their investment and fixed-assets blocks are analyzed. In conclusion, the article offers suggestions for the joint use of the analyzed models in forecasting the development of the Russian economy: the KAMIN system's models for short-term and middle-term forecasting, and the optimization interregional input-output model for long-term forecasts based on the spatial structure of the economy.

  18. Differential growth forms of the sponge Biemna fortis govern the abundance of its associated brittle star Ophiactis modesta

    Science.gov (United States)

    Dahihande, Azraj S.; Thakur, Narsinh L.

    2017-08-01

    Marine intertidal regions are physically stressful habitats. In such an environment, facilitator species and positive interactions mitigate unfavorable conditions to the benefit of less tolerant organisms. In the sponge-brittle star association, sponges effectively shelter brittle stars from biotic and abiotic stresses. The sponge Biemna fortis (Topsent, 1897) was examined during 2013-2014 for its associated brittle star Ophiactis modesta (Brock, 1888) at two intertidal regions, Anjuna and Mhapan, along the central west coast of India. The study sites varied in suspended particulate matter (SPM): B. fortis had a partially buried growth form at the high-SPM habitat (Anjuna) and a massive growth form at the low-SPM habitat (Mhapan). O. modesta was abundantly associated with the massive growth form (50-259 individuals per 500 ml of sponge) but rarely occurred in association with the partially buried growth form (6-16 individuals per 500 ml of sponge). In a laboratory choice assay, O. modesta showed equal preference for chemical cues from the two growth forms of B. fortis, and a significant preference for B. fortis over other sympatric sponges. These observations highlight the involvement of chemical cues in host recognition by O. modesta. Massive growth forms transplanted to the high-SPM habitat were unable to survive, whereas partially buried growth forms transplanted to the low-SPM habitat survived. The differential growth forms of the host sponge B. fortis under different abiotic stresses thus affect the abundance of the associated brittle star O. modesta.

  19. Atomistic Modeling of Nanostructures via the BFS Quantum Approximate Method

    Science.gov (United States)

    Bozzolo, Guillermo; Garces, Jorge E.; Noebe, Ronald D.; Farias, D.

    2003-01-01

    Ideally, computational modeling techniques for nanoscopic physics would be free of limitations on the type and number of elements, while providing accuracy comparable to that achieved for bulk or surface problems. Computational efficiency is also desirable, if not mandatory, for properly dealing with the complexity of typical nano-structured systems. A quantum approximate technique that attempts to meet these demands, the BFS method for alloys, is introduced for calculating the energetics of nanostructures. The versatility of the technique is demonstrated through the analysis of diverse systems, including multi-phase precipitation in a five-element Ni-Al-Ti-Cr-Cu alloy and the formation of mixed-composition Co-Cu islands on a metallic Cu(111) substrate.

  20. Level set methods for modelling field evaporation in atom probe.

    Science.gov (United States)

    Haley, Daniel; Moody, Michael P; Smith, George D W

    2013-12-01

    Atom probe is a nanoscale technique for creating three-dimensional, spatially and chemically resolved point datasets, primarily of metallic or semiconductor materials. While atom probe can achieve high resolution locally, the spatial coherence of the technique is highly dependent on the evaporative physics of the material, and large geometric distortions often result in experimental data. The distortions originate from uncertainties in the projection function between the field-evaporating specimen and the ion detector. Here we explore the possibility of continuum numerical approximations to the evaporative behavior during an atom probe experiment, and the subsequent propagation of ions to the detector, with particular emphasis on the solution of axisymmetric systems, such as isolated particles and multilayer systems. Ultimately, this method may prove critical for rapid modeling of tip-shape evolution in atom probe tomography, which is itself a key factor in the rapid generation of spatially accurate reconstructions of atom probe datasets.

  1. Comparison of operation optimization methods in energy system modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2013-01-01

    In areas with large shares of Combined Heat and Power (CHP) production, significant introduction of intermittent renewable power production may lead to an increased number of operational constraints. As the operation pattern of each utility plant is determined by optimization of economics, possibilities for decoupling production constraints may be valuable, and the introduction of heat pumps in the district heating network may provide this ability. In order to evaluate whether the introduction of heat pumps is economically viable, we develop calculation methods for the operation patterns of each of the used … operation constraints, while the third approach uses nonlinear programming. In the present case the non-linearity occurs in the boiler efficiency of the power plants and the cv-value of an extraction plant. The linear programming model is used as a benchmark, as this type is frequently used, and has the lowest
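
    A minimal sketch of the linear-programming benchmark idea: dispatch a back-pressure CHP unit, an electric heat pump and a peak boiler to cover one hour's heat and power demand at minimum cost. All prices, efficiencies, capacities and the cb (power-to-heat) ratio are illustrative assumptions, not the paper's data.

```python
from scipy.optimize import linprog

cb = 0.55                  # power-to-heat ratio of the back-pressure CHP unit
cop = 3.0                  # heat pump coefficient of performance
# variables: x = [q_chp, q_hp, q_boiler, p_import]   (MWh heat, MWh power)
cost = [20.0 / 0.9,        # CHP fuel cost per unit heat (fuel 20, efficiency 0.9)
        0.0,               # heat pump pays through imported power below
        35.0 / 0.95,       # boiler fuel cost per unit heat
        50.0]              # electricity import price

heat_demand, power_demand = 100.0, 40.0
A_eq = [[1.0, 1.0, 1.0, 0.0],                 # heat balance
        [cb, -1.0 / cop, 0.0, 1.0]]           # power balance (HP consumes power)
b_eq = [heat_demand, power_demand]
res = linprog(cost, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 80), (0, 50), (0, 100), (0, None)])
print(res.x, res.fun)      # dispatch of CHP / heat pump / boiler / import, cost
```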

  2. METHODS OF SELECTING THE EFFECTIVE MODELS OF BUILDINGS REPROFILING PROJECTS

    Directory of Open Access Journals (Sweden)

    Александр Иванович МЕНЕЙЛЮК

    2016-02-01

    The article highlights an important task of project management in the reprofiling of buildings: it is expedient to pay attention to selecting effective engineering solutions that reduce project duration and cost in the construction industry. The article presents a methodology for the selection of efficient organizational and technical solutions for the reconstruction of buildings being reprofiled. The method is based on compiling project variants in Microsoft Project and on experimental statistical analysis using the program COMPEX. Introducing this technique into the reprofiling of buildings allows efficient project models to be chosen, depending on the given constraints. The technique can also be used for various other construction projects.

  3. PREFACE: Theory, Modelling and Computational methods for Semiconductors

    Science.gov (United States)

    Migliorato, Max; Probert, Matt

    2010-04-01

    These conference proceedings contain the written papers of the contributions presented at the 2nd International Conference on: Theory, Modelling and Computational methods for Semiconductors. The conference was held at the St Williams College, York, UK on 13th-15th Jan 2010. The previous conference in this series took place in 2008 at the University of Manchester, UK. The scope of this conference embraces modelling, theory and the use of sophisticated computational tools in Semiconductor science and technology, where there is a substantial potential for time saving in R&D. The development of high speed computer architectures is finally allowing the routine use of accurate methods for calculating the structural, thermodynamic, vibrational and electronic properties of semiconductors and their heterostructures. This workshop ran for three days, with the objective of bringing together UK and international leading experts in the field of theory of group IV, III-V and II-VI semiconductors together with postdocs and students in the early stages of their careers. The first day focused on providing an introduction and overview of this vast field, aimed particularly at students at this influential point in their careers. We would like to thank all participants for their contribution to the conference programme and these proceedings. We would also like to acknowledge the financial support from the Institute of Physics (Computational Physics group and Semiconductor Physics group), the UK Car-Parrinello Consortium, Accelrys (distributors of Materials Studio) and Quantumwise (distributors of Atomistix). The Editors Acknowledgements Conference Organising Committee: Dr Matt Probert (University of York) and Dr Max Migliorato (University of Manchester) Programme Committee: Dr Marco Califano (University of Leeds), Dr Jacob Gavartin (Accelrys Ltd, Cambridge), Dr Stanko Tomic (STFC Daresbury Laboratory), Dr Gabi Slavcheva (Imperial College London) Proceedings edited and compiled by Dr

  4. Methods of Modelling Marketing Activity on Software Sales

    Directory of Open Access Journals (Sweden)

    Bashirov Islam H.

    2013-11-01

    The article studies the topical issue of developing methods for modelling marketing activity in software sales, so as to achieve the efficient functioning of an enterprise. On the basis of an analysis of the market for the studied product, CloudLinux OS, the article identifies the market structure type as monopolistic competition. To provide the information basis for marketing activity in the target market segment, the article proposes the survey method and provides a questionnaire, with specific questions about the studied market segment of hosting services, for an online survey conducted with the Survio service. In accordance with the systems approach, CloudLinux OS has the properties of a system, notably diversity. Economic differences are non-price indicators that have no numeric expression and are qualitative descriptions; they are obtained from the market analysis and the conducted survey, and the combination of price and non-price indicators provides a complete description of the product's properties. To calculate an integral indicator of competitiveness, the article proposes a model based on the direct algebraic addition of the weighted measures of individual indicators, normalisation of the formalised indicators, and the use of fuzzy sets to identify the non-formalised indicators. The calculated indicator allows not only assessment of the current level of competitiveness, but also identification of the influence of changes in the various indicators, which improves the efficiency of marketing decisions. Also, having identified the target customers of the hosting OS and formalised the non-price parameters, it is possible to search for a set of optimal product characteristics. The result is an optimal strategy for advancing the product to the market.

  5. Three-Component Forward Modeling for Transient Electromagnetic Method

    Directory of Open Access Journals (Sweden)

    Bin Xiong

    2010-01-01

    In general, only the time derivative of the vertical magnetic field is considered in the data interpretation of the transient electromagnetic (TEM) method. However, for surveys in complex geological structures, this conventional technique has gradually become unable to satisfy the demands of field exploration. To improve the precision of integrated TEM interpretation, it is necessary to study three-component forward modeling and inversion. In this paper, a three-component forward algorithm for 2.5D TEM, based on independent electric and magnetic fields, has been developed. The main advantage of the new scheme is that it reduces the size of the global system matrix to the utmost extent: the present matrix is only one quarter of that of the conventional algorithm. In order to illustrate the feasibility and usefulness of the algorithm, the TEM responses produced by loop sources at the air-earth interface are presented for several typical geoelectric models. The numerical experiments show that the computation speed of the present scheme is markedly increased, and that three-component interpretation gets the most out of the collected data, allowing the spatial characteristics of the anomalous object to be analyzed and interpreted more comprehensively.

  6. Solving Cocoa Pod Sigmoid Growth Model with Newton Raphson Method

    Science.gov (United States)

    Chang, Albert Ling Sheng; Maisin, Navies

    Cocoa pod growth modelling is useful in crop management, pest and disease management, and yield forecasting. Recently, the Beta Growth Function has been used to model pod growth, because it is uniquely suited to plant organ growth, having zero growth rate at both the start and end of a precisely defined growth period. A specific pod size (7 cm to 10 cm in length) is relevant in cocoa pod borer (CPB) management for pod sleeving or pesticide spraying. The Beta Growth Function was well fitted to the pod growth data of four different cocoa clones, as a non-linear function of time (t), with pod length and diameter measured weekly from 8 weeks after fertilization until the pods ripened. However, the same pod length in different clones does not indicate the same pod age, since the morphological characteristics of cocoa pods vary among the clones. Relying on pod size alone as a guideline in CPB management gives no information on pod age, so it is important to determine the pod age at specific pod sizes for each clone. Hence, the Newton-Raphson method is used to solve the non-linear equation of the Beta Growth Function for each of the four groups of cocoa pods at a specific pod size.
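
    A sketch of the inversion step: the Beta Growth Function of Yin et al. (2003) gives pod size as a function of time, and Newton-Raphson solves for the age at which a pod reaches a management-relevant size (e.g., 7 cm). The parameter values (wmax, te, tm) are illustrative assumptions, not the fitted clone values.

```python
def beta_growth(t, wmax=20.0, te=20.0, tm=12.0):
    """Pod length (cm) at t weeks; growth rate is zero at t = 0 and t = te."""
    return wmax * (1 + (te - t) / (te - tm)) * (t / te) ** (te / (te - tm))

def newton_solve(target, t0=10.0, tol=1e-8, h=1e-6):
    """Find t with beta_growth(t) == target via Newton-Raphson
    (derivative approximated by a central difference)."""
    t = t0
    for _ in range(50):
        f = beta_growth(t) - target
        dfdt = (beta_growth(t + h) - beta_growth(t - h)) / (2 * h)
        t_new = t - f / dfdt
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

print("pod age at 7 cm: %.2f weeks" % newton_solve(7.0))
```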

  7. Mathematical modellings and computational methods for structural analysis of LMFBR's

    International Nuclear Information System (INIS)

    Liu, W.K.; Lam, D.

    1983-01-01

    In this paper, two aspects of nuclear reactor problems are discussed: modelling techniques and computational methods for large-scale linear and nonlinear analyses of LMFBRs. For nonlinear fluid-structure interaction problems with large deformation, an arbitrary Lagrangian-Eulerian description is applicable. For certain linear fluid-structure interaction problems, the structural response spectrum can be found via the 'added mass' approach: in a sense, the fluid inertia is accounted for by a mass matrix added to the structural mass. The fluid/structural modes of certain fluid-structure problems can be uncoupled to obtain the reduced added mass. The advantage of this approach is that it can account for the many repeated structures of a nuclear reactor. As regards nonlinear dynamic problems, the coupled nonlinear fluid-structure equations usually have to be solved by direct time integration, and the computation can be very expensive and time consuming. It is therefore desirable to balance accuracy against computational effort by using an implicit-explicit mixed time integration method. (orig.)
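
    A small sketch of the 'added mass' approach mentioned above: the wet (in-fluid) modes are approximated by solving K x = ω² (M_s + M_a) x, with the fluid entering only through an added-mass matrix M_a. The 3-DOF matrices below are illustrative stand-ins, not a reactor model.

```python
import numpy as np
from scipy.linalg import eigh

K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]]) * 1e6          # stiffness matrix (N/m)
M_s = np.diag([10.0, 10.0, 10.0])                 # structural mass (kg)
M_a = np.diag([ 4.0,  6.0,  8.0])                 # added (fluid) mass (kg)

w2_dry, _ = eigh(K, M_s)                          # generalized eigenproblems
w2_wet, _ = eigh(K, M_s + M_a)
print("dry frequencies (Hz):", np.sqrt(w2_dry) / (2 * np.pi))
print("wet frequencies (Hz):", np.sqrt(w2_wet) / (2 * np.pi))  # lowered by fluid
```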

  8. Modelling methods in stress analysis of pipe coupling clamps

    International Nuclear Information System (INIS)

    Dutta, B.K.; Kushwaha, H.S.; Kakodkar, A.

    1987-01-01

    Pipe couplings using clamps are becoming increasingly popular components in nuclear power plants, used in place of conventional flanges because of their compactness, easy maintenance and greater reliability. They are used in large numbers in Pressurised Heavy Water Reactors (PHWR), for example at the joint between the feeder pipe and the end-fitting, and in the F/M housing. The integrity of these clamps has a direct effect on the overall safety of nuclear power plants, which necessitates their proper design, fabrication, installation and maintenance. The proper design of these clamps is a challenge to the designer, because of the changing boundary conditions at the interface with the hub during the various stages of loading. A detailed stress analysis of the clamps, considering the changing boundary conditions under various loadings, can be performed using the finite element technique. In the following sections, two finite element modelling methods for simulating clamps together with their hubs are described. Both methods assume the absence of friction between the clamp and the hubs during bolting, and the absence of relative movement between them during the other stages of loading. (orig.)

  9. USA: OSTI Joins In Celebrating the Forty-Fifth Anniversary of INIS

    International Nuclear Information System (INIS)

    Cutler, Debbie

    2015-01-01

    Forty-five years ago, nations around the world saw their dream for a more efficient way to share nuclear-related information reach fruition through the creation of a formal international collaboration. This was accomplished without the Internet, email, or websites. It was the right thing to do for public safety, education, and the further advancement of science. It was also a necessary way forward as the volume of research and information about nuclear-related science, even back then, was skyrocketing and exceeded the capacity for any one country to go it alone. And the Department of Energy (DOE) Office of Scientific and Technical Information (OSTI) was part of the collaboration from its initial planning stages. The International Nuclear Information System, or INIS, as it is commonly known, was approved by the Governing Board of the United Nations’ International Atomic Energy Agency (IAEA) in 1969 and began operations in 1970. The primary purpose of INIS was, and still is, to collect and share information about the peaceful uses of nuclear science and technology, with participating nations sharing efforts to build a centralized resource. OSTI grew out of the United States’ post-World War II initiative to make the scientific research of the Manhattan Project as freely available to the public as possible. Thus, OSTI had been building the premier Nuclear Science Abstracts (NSA) publication since the late 1940s and was perfectly positioned to provide information gathering and organizing expertise to help the INIS concept coalesce into reality. OSTI was a key player in formative working group discussions at the IAEA in Vienna, Austria in the 1966-67 timeframe, and led many of the subsequent discussions and teams that finalized INIS policy guidance, common exchange formats, and more. To this day, OSTI has continued to represent the U.S. as the official INIS Liaison Officer (ILO) organization, contributing database content, helping disseminate INIS content more widely

  10. A novel method to establish a rat ED model using internal iliac artery ligation combined with hyperlipidemia.

    Directory of Open Access Journals (Sweden)

    Chao Hu

    OBJECTIVE: To investigate a novel method, bilateral internal iliac artery ligation combined with a high-fat diet (BCH), for establishing a rat model of erectile dysfunction (ED) that, compared to classical approaches, more closely mimics the chronic pathophysiology of human ED after an acute ischemic insult. MATERIALS AND METHODS: Forty 4-month-old male Sprague Dawley rats were randomly placed into five groups (n = 8 per group): normal control (NC), bilateral internal iliac artery ligation (BIIAL), high-fat diet (HFD), BCH, and mock surgery (MS). All models were induced over 12 weeks. Copulatory behavior, intracavernosal pressure (ICP), ICP/mean arterial pressure, hematoxylin-eosin staining, Masson's trichrome staining, serum lipid levels, and endothelial and neuronal nitric oxide synthase (eNOS and nNOS) immunohistochemical staining of the cavernous smooth muscle and endothelium were assessed. Data were analyzed with SAS 8.0 for Windows. RESULTS: Serum total cholesterol and triglyceride levels were significantly higher, and high-density lipoprotein levels significantly lower, in the HFD and BCH groups than in the NC and MS groups. The ICP values and mount and intromission numbers were significantly lower in the BIIAL, HFD, and BCH groups than in the NC and MS groups, and ICP was significantly lower in the BCH group than in the BIIAL and HFD groups. Cavernous smooth muscle and endothelial damage increased in the HFD and BCH groups, while the cavernous smooth muscle to collagen ratio and nNOS and eNOS staining decreased significantly in the BIIAL, HFD, and BCH groups compared to the NC and MS groups. CONCLUSIONS: The novel BCH model mimics the chronic pathophysiology of ED in humans and avoids the drawbacks of traditional ED models.

  11. Nonperturbative stochastic method for driven spin-boson model

    Science.gov (United States)

    Orth, Peter P.; Imambekov, Adilet; Le Hur, Karyn

    2013-01-01

    We introduce and apply a numerically exact method for investigating the real-time dissipative dynamics of quantum impurities embedded in a macroscopic environment beyond the weak-coupling limit. We focus on the spin-boson Hamiltonian that describes a two-level system interacting with a bosonic bath of harmonic oscillators. This model is archetypal for investigating dissipation in quantum systems, and tunable experimental realizations exist in mesoscopic and cold-atom systems. It finds abundant applications in physics ranging from the study of decoherence in quantum computing and quantum optics to extended dynamical mean-field theory. Starting from the real-time Feynman-Vernon path integral, we derive an exact stochastic Schrödinger equation that allows us to compute the full spin density matrix and spin-spin correlation functions beyond weak coupling. We greatly extend our earlier work [P. P. Orth, A. Imambekov, and K. Le Hur, Phys. Rev. A 82, 032118 (2010)] by fleshing out the core concepts of the method and by presenting a number of interesting applications. Methodologically, we present an analogy between the dissipative dynamics of a quantum spin and that of a classical spin in a random magnetic field. This analogy is used to recover the well-known noninteracting-blip approximation in the weak-coupling limit. We explain in detail how to compute spin-spin autocorrelation functions. As interesting applications of our method, we explore the non-Markovian effects of the initial spin-bath preparation on the dynamics of the coherence σx(t) and of σz(t) under a Landau-Zener sweep of the bias field. We also compute to a high precision the asymptotic long-time dynamics of σz(t) without bias and demonstrate the wide applicability of our approach by calculating the spin dynamics at nonzero bias and different temperatures.

  12. A RANGE BASED METHOD FOR COMPLEX FACADE MODELING

    Directory of Open Access Journals (Sweden)

    A. Adami

    2012-09-01

    homogeneous point cloud of the complex architecture. From the point cloud we can extract a false-colour map based on the distance of each point from the average plane; in this way, each point of the facade can be represented in a grayscale height map. In this operation it is important to define the scale of the final result, in order to set the correct pixel size in the map. The following step concerns the use of a modifier well known in computer graphics: the Displacement modifier, which simulates on a planar surface the original roughness of the object according to a grayscale map. The gray value is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Unlike the similar bump map, the displacement modifier does not merely simulate the effect but actually deforms the planar surface, so the 3D model can be used not only in static representations but also in dynamic animations or interactive applications. The setting of the plane to be deformed is the most important step in this process. In 3ds Max the planar surface has to be characterized by the real dimensions of the facade and also by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3D raster representation where each quadrangular face (corresponding to a traditional pixel) is displaced according to its gray value (the distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as building facades in city models or large-scale representations, but also to represent particular effects such as the deformation of walls in a fully 3D way.
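
    A sketch of the height-map step described above: project facade points onto the average plane, grid the signed distances at a chosen pixel size, and normalise to a grayscale image that a Displacement modifier can read. The point cloud here is synthetic; a real one would come from laser scanning or photogrammetry.

```python
import numpy as np

rng = np.random.default_rng(4)
# synthetic facade points: x, y on the plane, z = signed distance from it (m)
pts = rng.uniform([0, 0, -0.05], [10, 6, 0.05], size=(20000, 3))

pixel = 0.05                                       # pixel size in metres
nx, ny = int(10 / pixel), int(6 / pixel)
ix = np.clip((pts[:, 0] / pixel).astype(int), 0, nx - 1)
iy = np.clip((pts[:, 1] / pixel).astype(int), 0, ny - 1)

height = np.zeros((ny, nx))
count = np.zeros((ny, nx))
np.add.at(height, (iy, ix), pts[:, 2])             # accumulate signed distances
np.add.at(count, (iy, ix), 1)
height = np.divide(height, count, out=np.zeros_like(height), where=count > 0)

# normalise to 8-bit grayscale: mid-gray corresponds to the average plane
gray = np.uint8(255 * (height - height.min()) / (height.ptp() or 1.0))
```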

  13. Electromagnetic sunscreen model: implementation and comparison between several methods: step-film model, differential method, Mie scattering, and scattering by a set of parallel cylinders.

    Science.gov (United States)

    Lécureux, Marie; Enoch, Stefan; Deumié, Carole; Tayeb, Gérard

    2014-10-01

    Sunscreens protect from UV radiation, a carcinogen that is also responsible for sunburn and age-associated dryness. In order to predict the transmission of light through UV-protection products containing scattering particles, we implement electromagnetic models, using numerical methods to solve Maxwell's equations. After validating our models, we compare several calculation methods: the differential method, scattering by a set of parallel cylinders, and Mie scattering. The field of application and the benefits of each method are studied, and examples using the appropriate method are described.

  14. Equal Pay as a Moving Target: International perspectives on forty-years of addressing the gender pay gap

    OpenAIRE

    Jacqueline O’Reilly; Mark Smith; Simon Deakin; Brendan Burchell

    2015-01-01

    This paper provides an overview of the key factors impacting upon the gender pay gap in the UK, Europe and Australia. Forty years after the implementation of the first equal pay legislation, the pay gap remains a key aspect of the inequalities women face in the labour market. While the overall pay gap has tended to fall in many countries over the past forty years, it has not closed; in some countries it has been stubbornly resistant, or has even widened. In reviewing the collection of papers ...

  15. Spinal posture and pelvic position in three hundred forty-five elementary school children: a rasterstereographic pilot study

    Directory of Open Access Journals (Sweden)

    Thimm Christoph Furian

    2013-03-01

    Children's posture has been of growing concern due to observations that it seems to be impaired compared to previous generations, yet so far no reference data for spinal posture and pelvic position in healthy children are available. The purpose of this pilot study was to determine rasterstereographic posture values in children during their second growth phase. Three hundred and forty-five pupils were measured with a rasterstereographic device in a neutral standing position with hanging arms; to analyse for changes in spinal posture during growth, the children were divided into 12-month age clusters. A mean kyphotic angle of 47.1°±7.5 and a mean lordotic angle of 42.1°±9.9 were measured. Trunk imbalance in girls (5.85 mm±0.74) and boys (7.48 mm±0.83) varied only little between the age groups, with boys showing slightly higher values than girls. Trunk inclination did not differ significantly between the age groups in boys or girls: girls' inclination was 2.53°±1.96, with a tendency towards decreasing angles with age, and was therefore slightly smaller than that of boys (2.98°±2.18). Lateral deviation (4.8 mm) and pelvic position (tilt: 2.75 mm; torsion: 1.53°; inclination: 19.8°±19.8) were comparable for all age groups and genders. This study provides the first systematic rasterstereographic analysis of spinal posture in children between 6 and 11 years, and shows that rasterstereography permits a reliable three-dimensional analysis of spinal posture and pelvic position. Spinal posture and pelvic position did not change significantly with increasing age in this collective of children during the second growth phase.

  16. Deformation data modeling through numerical models: an efficient method for tracking magma transport

    Science.gov (United States)

    Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.

    2017-12-01

    Nowadays, multivariate collected data and robust physical models at volcano observatories are becoming crucial for providing effective volcano monitoring. Nevertheless, the forecasting of volcanic eruptions is notoriously difficult. Within this frame, one of the most promising approaches to evaluating volcano hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic features of the medium, such as topography and crustal heterogeneities, to be included, although solving the inverse problem for near-real-time interpretations is still very time consuming. Here, we present a method that can be used to efficiently estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose an FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually will lead to a better description of the state of the volcanic area. The number of Green-function computations is reduced here to the number of observation points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a Genetic Algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool to magma tracking during the 2007 Kilauea volcano intrusion and eruption, and show how data inversion with numerical models can speed up the estimation of source parameters for a volcano showing signs of unrest.
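
    A compact sketch of the inversion idea under strong simplifications: an analytic point-source (Mogi) half-space formula stands in for the paper's FE forward model with reciprocity-based Green functions, and scipy's differential evolution stands in for the Genetic Algorithm. All source parameters and noise levels are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

def mogi_uz(xs, ys, d, dV, x, y, nu=0.25):
    """Vertical surface displacement of a point pressure source at depth d
    with volume change dV in a homogeneous elastic half-space."""
    r2 = (x - xs) ** 2 + (y - ys) ** 2
    return (1 - nu) * dV / np.pi * d / (r2 + d ** 2) ** 1.5

x = np.linspace(-5000, 5000, 21)
X, Y = np.meshgrid(x, x)
truth = (500.0, -300.0, 2000.0, 1e6)          # xs, ys, depth (m), dV (m^3)
obs = mogi_uz(*truth, X, Y) + np.random.default_rng(5).normal(0, 1e-3, X.shape)

def misfit(p):
    return np.sum((mogi_uz(*p, X, Y) - obs) ** 2)

bounds = [(-5000, 5000), (-5000, 5000), (500, 5000), (1e4, 1e7)]
res = differential_evolution(misfit, bounds, seed=0)
print(res.x)   # should recover the true source parameters closely
```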

  17. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights: • We develop a method that bi-converts between CAD models and primitive solids. • The method improves on the conversion between CAD models and half-spaces. • The method was tested with the ITER model, validating its correctness and efficiency. • The method is integrated in SuperMC and can produce models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time consuming and error prone to describe a primitive-solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD tools, an automatic method for accurate and prompt conversion between CAD models and primitive solids is needed. Such an automatic modeling method was developed, which can bi-convert between CAD models and Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive-solid model, the CAD model is decomposed into several convex solid sets, and the corresponding primitive solids are generated and exported. When converting from a primitive-solid model to a CAD model, the basic primitive solids are created and the related operations are performed. This method was integrated in SuperMC and was benchmarked with the ITER benchmark model, demonstrating its correctness and efficiency.

  18. Underwater Optical Wireless Channel Modeling Using Monte-Carlo Method

    Science.gov (United States)

    Saini, P. Sri; Prince, Shanthi

    2011-10-01

    At present, there is much interest in the functioning of the marine environment. Unmanned or Autonomous Underwater Vehicles (UUVs or AUVs) are used in the exploration of underwater resources, pollution monitoring, disaster prevention, etc. Underwater, where radio waves do not propagate, acoustic communication is used; however, underwater communication is moving towards optical communication, which offers higher bandwidth than acoustic communication, albeit at comparatively shorter range. Underwater Optical Wireless Communication (OWC) is mainly affected by the absorption and scattering of the optical signal. In coastal waters, both inherent and apparent optical properties (IOPs and AOPs) are influenced by a wide array of physical, biological and chemical processes, leading to optical variability. Scattering has two effects, attenuation of the signal and Inter-Symbol Interference (ISI); ISI, however, is ignored in the present paper. In order to build an efficient underwater OWC link, it is therefore necessary to model the channel efficiently. In this paper, the underwater optical channel is modeled using the Monte Carlo method, which provides the most general and flexible technique for numerically solving the equations of radiative transfer. The attenuation coefficient of the light signal is studied as a function of the absorption (a) and scattering (b) coefficients. It is observed that for pure sea water and low-chlorophyll conditions, blue wavelengths are absorbed least, whereas in chlorophyll-rich environments red wavelengths are absorbed less than blue and green.
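
    A minimal Monte Carlo sketch of the channel model: photon free paths are exponential in the beam attenuation c = a + b, each interaction is an absorption with probability a/c, and scattering redraws the direction (isotropic here for brevity; real seawater phase functions are strongly forward-peaked). The coefficients are rough assumed coastal-water values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(6)
a, b = 0.053, 0.257          # absorption, scattering coefficients (1/m), assumed
c = a + b                    # beam attenuation coefficient
L, n_photons = 10.0, 50000   # link length (m) and photon budget

received = 0
for _ in range(n_photons):
    z, mu = 0.0, 1.0         # position along the link, direction cosine
    while True:
        z += mu * rng.exponential(1.0 / c)   # exponential free path
        if z >= L:
            received += 1                    # photon reaches receiver plane
            break
        if z < 0:                            # escaped back past the source
            break
        if rng.random() < a / c:             # absorbed at this interaction
            break
        mu = rng.uniform(-1.0, 1.0)          # isotropic scatter (simplification)
print("fraction received:", received / n_photons)
```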

  19. Data Mining Methods to Generate Severe Wind Gust Models

    Directory of Open Access Journals (Sweden)

    Subana Shanmuganathan

    2014-01-01

    Gaining knowledge of weather patterns and trends, and of the influence of their extremes on various crop production yields and quality, continues to be a quest of scientists, agriculturists, and managers. Precise and timely information aids decision-making, which is widely accepted as intrinsically necessary for increased production and improved quality. Studies in this research domain, especially those related to data mining and interpretation, are being carried out by the authors and their colleagues; some of this work, relating to data definition, description, analysis, and modelling, is described in this paper. This includes studies that have evaluated extreme dry/wet weather events against reported yield at different scales. They indicate the effects of weather extremes such as prolonged high temperatures, heavy rainfall, and severe wind gusts, occurrences of which are among the main weather extremes impacting many crops worldwide. Wind gusts are difficult to anticipate due to their rapid manifestation, yet they can have catastrophic effects on crops and buildings. This paper examines the use of data mining methods to reveal patterns in weather conditions, such as time of day, month of the year, wind direction, speed, and severity, using a data set from a single location. Case study data are used to provide examples of how the methods can elicit meaningful information and depict it in a form usable for management decision-making. Historical weather data acquired between 2008 and 2012 from telemetry devices installed in a vineyard in the north of New Zealand have been used for this study. The results show that data mining techniques applied to local weather conditions, such as relative pressure, temperature, wind direction and speed recorded at irregular intervals, can produce new knowledge of wind gust patterns for vineyard management decision-making.
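
    A small sketch of the data-mining step: a decision tree learns interpretable rules separating severe-gust records from the rest using time of day, month, pressure and wind direction. The file, column names and severity threshold are hypothetical stand-ins for the vineyard telemetry described above.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("vineyard_weather.csv")      # hypothetical telemetry export
X = df[["hour", "month", "rel_pressure", "wind_dir_deg", "temperature"]]
y = (df["gust_speed"] > 15.0).astype(int)     # 'severe' threshold (assumed, m/s)

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50, random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # human-readable rules
```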

  20. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
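
    A minimal sketch of the LASSO approach under repeated cross-validation is given below, using synthetic stand-in features (the xerostomia dataset is not public); scikit-learn's L1-penalized logistic regression plays the role of the LASSO learner.

```python
# A minimal sketch of L1-penalized (LASSO-style) NTCP modeling with repeated
# cross-validation; data is a synthetic stand-in for dose/clinical features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=200, n_features=25, n_informative=4,
                           random_state=0)      # stand-in predictors/outcome
lasso_ntcp = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
auc = cross_val_score(lasso_ntcp, X, y, cv=cv, scoring="roc_auc")
print(f"repeated-CV AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```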

  1. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van't [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  2. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    Science.gov (United States)

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.

  3. Detection of Internal Short Circuit in Lithium Ion Battery Using Model-Based Switching Model Method

    Directory of Open Access Journals (Sweden)

    Minhwan Seo

    2017-01-01

    Early detection of an internal short circuit (ISCr) in a Li-ion battery can prevent it from undergoing thermal runaway, and thereby ensure battery safety. In this paper, a model-based switching model method (SMM) is proposed to detect the ISCr in the Li-ion battery. The SMM updates the model of the Li-ion battery with ISCr to improve the accuracy of ISCr resistance R_ISCf estimates. The open circuit voltage (OCV) and the state of charge (SOC) are estimated by applying the equivalent circuit model, and by using the recursive least squares algorithm and the relation between OCV and SOC. As a fault index, R_ISCf is estimated from the estimated OCVs and SOCs to detect the ISCr, and used to update the model; this process yields accurate estimates of OCV and R_ISCf. The next R_ISCf is then estimated and used to update the model iteratively. Simulation data from a MATLAB/Simulink model and experimental data verify that this algorithm gives highly accurate R_ISCf estimates for detecting the ISCr, thereby helping the battery management system to fulfill early detection of the ISCr.
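
    The sketch below shows a generic recursive least squares (RLS) update of the kind the SMM relies on, fitting an open circuit voltage and an internal resistance from synthetic current/voltage samples; it is not the authors' full switching-model algorithm, and all parameter values are illustrative.

```python
# A generic recursive least squares (RLS) sketch of the estimation step the
# SMM builds on, fitting v = OCV - R*i from synthetic noisy samples. The
# 'true' OCV and R values below are illustrative, not battery data.
import numpy as np

rng = np.random.default_rng(0)
ocv_true, r_true = 3.7, 0.05   # hypothetical open circuit voltage, resistance
theta = np.zeros(2)            # estimate of [OCV, R]
P = np.eye(2) * 1e3            # large initial covariance = weak prior
lam = 0.99                     # forgetting factor

for _ in range(500):
    i_k = rng.uniform(0.0, 2.0)                           # measured current, A
    v_k = ocv_true - r_true * i_k + rng.normal(0, 1e-3)   # measured voltage, V
    phi = np.array([1.0, -i_k])                           # regressor for [OCV, R]
    K = P @ phi / (lam + phi @ P @ phi)                   # RLS gain
    theta = theta + K * (v_k - phi @ theta)               # innovation update
    P = (P - np.outer(K, phi @ P)) / lam                  # covariance update

print("estimated [OCV, R]:", theta)                       # approaches [3.7, 0.05]
```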

  4. Comparison of surrogate models with different methods in ...

    Indian Academy of Sciences (India)

    Since groundwater remediation is a time consuming and costly ... and sea water intrusion management problems. Hemker et al. .... Case study. 3.1 Site overview. To evaluate the advantages and disadvantages of different surrogate models of groundwater simulation model, three different surrogate models (PR model ...

  5. Neural node network and model, and method of teaching same

    Science.gov (United States)

    Parlos, Alexander G. (Inventor); Atiya, Amir F. (Inventor); Fernandez, Benito (Inventor); Tsai, Wei K. (Inventor); Chong, Kil T. (Inventor)

    1995-01-01

    The present invention is a fully connected feed forward network that includes at least one hidden layer 16. The hidden layer 16 includes nodes 20 in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device 24 occurring in the feedback path 22 (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit 36 from all the other nodes within the same layer 16. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented in analog or digital form or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2) more preferably a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on- and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing.

  6. Methods for Geometric Data Validation of 3d City Models

    Science.gov (United States)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is however a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence their quality, plus the idea of quality itself is application dependent. Thus, concepts for definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point would be to have correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon level checks to validate the correctness of each polygon, i.e. closeness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, along with self-intersections outside of defined corner points and edges.
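
    As an illustration of one polygon-level check mentioned above, the sketch below tests planarity against a tolerance via a least-squares plane fit; this is a generic formulation, not CityDoctor's actual implementation, and the tolerance and vertex data are illustrative.

```python
# A sketch of a polygon-level planarity check via a least-squares plane fit
# (SVD); tolerance and vertex coordinates are illustrative, and this is a
# generic formulation rather than CityDoctor's implementation.
import numpy as np

def is_planar(vertices: np.ndarray, tol: float = 1e-3) -> bool:
    """True if every vertex lies within `tol` metres of the best-fit plane."""
    pts = vertices - vertices.mean(axis=0)   # centre the polygon
    _, _, vt = np.linalg.svd(pts)            # last right singular vector = normal
    dist = pts @ vt[-1]                      # signed distances to the plane
    return float(np.abs(dist).max()) <= tol

roof = np.array([[0.0, 0.0, 10.0],
                 [5.0, 0.0, 12.0],
                 [5.0, 8.0, 12.0],
                 [0.0, 8.0, 10.01]])         # last vertex sags ~1 cm out of plane
print(is_planar(roof))                       # False: sag exceeds the 1 mm tolerance
```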

  7. Modeling methods for high-fidelity rotorcraft flight mechanics simulation

    Science.gov (United States)

    Mansur, M. Hossein; Tischler, Mark B.; Chaimovich, Menahem; Rosen, Aviv; Rand, Omri

    1992-01-01

    The cooperative effort being carried out under the agreements of the United States-Israel Memorandum of Understanding is discussed. Two different models of the AH-64 Apache Helicopter, which differ in their approach to modeling the main rotor, are presented. The first model, the Blade Element Model for the Apache (BEMAP), was developed at Ames Research Center, and is the only model of the Apache to employ a direct blade element approach to calculating the coupled flap-lag motion of the blades and the rotor force and moment. The second model was developed at the Technion-Israel Institute of Technology and uses a harmonic approach to analyze the rotor. The approach allows two different levels of approximation, ranging from the 'first harmonic' (similar to a tip-path-plane model) to 'complete high harmonics' (comparable to a blade element approach). The development of the two models is outlined and the two are compared using available flight test data.

  8. Pursuing the method of multiple working hypotheses for hydrological modeling

    NARCIS (Netherlands)

    Clark, M.P.; Kavetski, D.; Fenicia, F.

    2011-01-01

    Ambiguities in the representation of environmental processes have manifested themselves in a plethora of hydrological models, differing in almost every aspect of their conceptualization and implementation. The current overabundance of models is symptomatic of an insufficient scientific understanding.

  9. Methods for modeling chinese hamster ovary (cho) cell metabolism

    DEFF Research Database (Denmark)

    2015-01-01

    Embodiments of the present invention generally relate to the computational analysis and characterization biological networks at the cellular level in Chinese Hamster Ovary (CHO) cells. Based on computational methods utilizing a hamster reference genome, the invention provides methods...

  10. Parametric Anatomical Modeling: A method for modeling the anatomical layout of neurons and their projections

    Directory of Open Access Journals (Sweden)

    Martin ePyka

    2014-09-01

    Computational models of neural networks can be based on a variety of different parameters. These parameters include, for example, the 3d shape of neuron layers, the neurons' spatial projection patterns, spiking dynamics and neurotransmitter systems. While many well-developed approaches are available to model, for example, the spiking dynamics, there is a lack of approaches for modeling the anatomical layout of neurons and their projections. We present a new method, called Parametric Anatomical Modeling (PAM), to fill this gap. PAM can be used to derive network connectivities and conduction delays from anatomical data, such as the position and shape of the neuronal layers and the dendritic and axonal projection patterns. Within the PAM framework, several mapping techniques between layers can account for a large variety of connection properties between pre- and post-synaptic neuron layers. PAM is implemented as a Python tool and integrated in the 3d modeling software Blender. We demonstrate on a 3d model of the hippocampal formation how PAM can help reveal complex properties of the synaptic connectivity and conduction delays, properties that might be relevant to uncover the function of the hippocampus. Based on these analyses, two experimentally testable predictions arose: (i) the number of neurons and the spread of connections is heterogeneously distributed across the main anatomical axes, (ii) the distribution of connection lengths in CA3-CA1 differ qualitatively from those between DG-CA3 and CA3-CA3. Models created by PAM can also serve as an educational tool to visualize the 3d connectivity of brain regions. The low-dimensional yet biologically plausible parameter space renders PAM suitable to analyse allometric and evolutionary factors in networks and to model the complexity of real networks with comparatively little effort.

  11. Parametric Anatomical Modeling: a method for modeling the anatomical layout of neurons and their projections.

    Science.gov (United States)

    Pyka, Martin; Klatt, Sebastian; Cheng, Sen

    2014-01-01

    Computational models of neural networks can be based on a variety of different parameters. These parameters include, for example, the 3d shape of neuron layers, the neurons' spatial projection patterns, spiking dynamics and neurotransmitter systems. While many well-developed approaches are available to model, for example, the spiking dynamics, there is a lack of approaches for modeling the anatomical layout of neurons and their projections. We present a new method, called Parametric Anatomical Modeling (PAM), to fill this gap. PAM can be used to derive network connectivities and conduction delays from anatomical data, such as the position and shape of the neuronal layers and the dendritic and axonal projection patterns. Within the PAM framework, several mapping techniques between layers can account for a large variety of connection properties between pre- and post-synaptic neuron layers. PAM is implemented as a Python tool and integrated in the 3d modeling software Blender. We demonstrate on a 3d model of the hippocampal formation how PAM can help reveal complex properties of the synaptic connectivity and conduction delays, properties that might be relevant to uncover the function of the hippocampus. Based on these analyses, two experimentally testable predictions arose: (i) the number of neurons and the spread of connections is heterogeneously distributed across the main anatomical axes, (ii) the distribution of connection lengths in CA3-CA1 differ qualitatively from those between DG-CA3 and CA3-CA3. Models created by PAM can also serve as an educational tool to visualize the 3d connectivity of brain regions. The low-dimensional yet biologically plausible parameter space renders PAM suitable to analyse allometric and evolutionary factors in networks and to model the complexity of real networks with comparatively little effort.

  12. Purpose of neuronal method for modeling of solar collector

    Energy Technology Data Exchange (ETDEWEB)

    Salah, Hanini; Moussa, Cherif Si [LBMPt, Universite Yahia Fares de Medea, Quartier Ain D' heb, 2600, Medea (Algeria); Hamid, Abdi [SEEs/MS, B.P. 478, Route de Reggane, Adrar (Algeria); Tariq, Omari [LBMPT, Universite Yahia Fares de Medea, Quartier Ain D' Heb, 2600, Medea (Algeria); SEES/MS, B.P. 478, Route de Reggane, Adrar (Algeria); Unite de developpement des equipments solaires, Bou-Ismail, Tipaza (Algeria)

    2012-07-01

    Artificial Neural Networks (ANN) are widely accepted as a technology offering an alternative way to tackle complex and ill-defined problems. They have been used in diverse applications and have been shown to be particularly effective in system identification and modeling, as they are fault tolerant and can learn from examples. On the other hand, ANN are able to deal with non-linear problems and, once trained, can perform prediction at high speed. The objective of this work is the characterization of the integrated collector-storage solar water heater (ICSSWH) by determination of the daytime thermal (and optical) properties and the night-time heat loss coefficient, using experimental temperatures and temperatures predicted by the ANN. For this purpose, an ANN has been trained using data for three types of systems, all employing the same collector panel under varying weather conditions. In this way the network was trained to accept and handle a number of unusual cases. The data presented as input were the working mode (day or night), the type of system, the year, the month, the day, the time, the ambient air temperature, and the solar radiation. The network output is the temperature of the four tanks of the storage unit. The correlation coefficients (R2-values) obtained for the training data set were equal to 0.997, 0.998, 0.998, and 0.996 for the four tank temperatures. The results obtained in this work indicate that the proposed method can successfully be used for the characterization of the ICSSWH.
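
    The sketch below mirrors the described setup with scikit-learn's multilayer perceptron on synthetic stand-in data (the Medea/Adrar measurements are not reproduced here): weather-style inputs in, four tank temperatures out.

```python
# A hedged sketch of an ANN predicting four tank temperatures from weather
# inputs; the data and the 'toy physics' below are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([rng.uniform(0, 23, n),      # hour of day
                     rng.uniform(5, 40, n),      # ambient temperature, deg C
                     rng.uniform(0, 1000, n)])   # solar radiation, W/m^2
# toy relation: tanks warm with radiation and ambient temperature (+ noise)
T_tanks = 20 + 0.03 * X[:, [2]] + 0.5 * X[:, [1]] + rng.normal(0, 0.5, (n, 4))

X_tr, X_te, y_tr, y_te = train_test_split(X, T_tanks, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000,
                   random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out data: {ann.score(X_te, y_te):.3f}")
```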

  13. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  14. Stochastic Approximation Methods for Latent Regression Item Response Models

    Science.gov (United States)

    von Davier, Matthias; Sinharay, Sandip

    2010-01-01

    This article presents an application of a stochastic approximation expectation maximization (EM) algorithm using a Metropolis-Hastings (MH) sampler to estimate the parameters of an item response latent regression model. Latent regression item response models are extensions of item response theory (IRT) to a latent variable model with covariates…

  15. Review of forest landscape models: types, methods, development and applications

    Science.gov (United States)

    Weimin Xi; Robert N. Coulson; Andrew G. Birt; Zong-Bo Shang; John D. Waldron; Charles W. Lafon; David M. Cairns; Maria D. Tchakerian; Kier D. Klepzig

    2009-01-01

    Forest landscape models simulate forest change through time using spatially referenced data across a broad spatial scale (i.e. landscape scale) generally larger than a single forest stand. Spatial interactions between forest stands are a key component of such models. These models can incorporate other spatio-temporal processes such as...

  16. A New Method of Building Scale-Model Houses

    Science.gov (United States)

    Richard N. Malcolm

    1978-01-01

    Scale-model houses are used to display new architectural and construction designs. Some scale-model houses will not withstand the abuse of shipping and handling. This report describes how to build a solid-core model house which is rigid, lightweight, and sturdy.

  17. Forty Lines of Evidence for Condensed Matter — The Sun on Trial: Liquid Metallic Hydrogen as a Solar Building Block

    Directory of Open Access Journals (Sweden)

    Robitaille P.-M.

    2013-10-01

    Our Sun has confronted humanity with overwhelming evidence that it is comprised of condensed matter. Dismissing this reality, the standard solar models continue to be anchored on the gaseous plasma. In large measure, the endurance of these theories can be attributed to (1) the mathematical elegance of the equations for the gaseous state, (2) the apparent success of the mass-luminosity relationship, and (3) the long-lasting influence of leading proponents of these models. Unfortunately, no direct physical finding supports the notion that the solar body is gaseous. Without exception, all observations are most easily explained by recognizing that the Sun is primarily comprised of condensed matter. However, when a physical characteristic points to condensed matter, a posteriori arguments are invoked to account for the behavior using the gaseous state. In isolation, many of these treatments appear plausible. As a result, the gaseous models continue to be accepted. There seems to be an overarching belief in solar science that the problems with the gaseous models are few and inconsequential. In reality, they are numerous and, while often subtle, they are sometimes daunting. The gaseous equations of state have introduced far more dilemmas than they have solved. Many of the conclusions derived from these approaches are likely to have led solar physics down unproductive avenues, as deductions have been accepted which bear little or no relationship to the actual nature of the Sun. It could be argued that, for more than 100 years, the gaseous models have prevented mankind from making real progress relative to understanding the Sun and the universe. Hence, the Sun is now placed on trial. Forty lines of evidence will be presented that the solar body is comprised of, and surrounded by, condensed matter. These ‘proofs’ can be divided into seven broad categories: (1) Planckian, (2) spectroscopic, (3) structural, (4) dynamic, (5) helioseismic, (6) elemental, and (7) earthly.

  18. Modern methods in collisional-radiative modeling of plasmas

    CERN Document Server

    2016-01-01

    This book provides a compact yet comprehensive overview of recent developments in collisional-radiative (CR) modeling of laboratory and astrophysical plasmas. It describes advances across the entire field, from basic considerations of model completeness to validation and verification of CR models to calculation of plasma kinetic characteristics and spectra in diverse plasmas. Various approaches to CR modeling are presented, together with numerous examples of applications. A number of important topics, such as atomic models for CR modeling, atomic data and its availability and quality, radiation transport, non-Maxwellian effects on plasma emission, ionization potential lowering, and verification and validation of CR models, are thoroughly addressed. Strong emphasis is placed on the most recent developments in the field, such as XFEL spectroscopy. Written by leading international research scientists from a number of key laboratories, the book offers a timely summary of the most recent progress in this area. It ...

  19. Adapting Language Modeling Methods for Expert Search to Rank Wikipedia Entities

    Science.gov (United States)

    Jiang, Jiepu; Lu, Wei; Rong, Xianqian; Gao, Yangyan

    In this paper, we propose two methods to adapt language modeling methods for expert search to the INEX entity ranking task. In our experiments, we notice that language modeling methods for expert search, if directly applied to the INEX entity ranking task, cannot effectively distinguish entity types. Thus, our proposed methods aim at resolving this problem. First, we propose a method to take into account the INEX category query field. Second, we use an interpolation of two language models to rank entities, which can solely work on the text query. Our experiments indicate that both methods can effectively adapt language modeling methods for expert search to the INEX entity ranking task.

  20. From micro data to causality: Forty years of empirical labor economics

    NARCIS (Netherlands)

    van der Klaauw, B.

    2014-01-01

    This overview describes the development of methods for empirical research in the field of labor economics during the past four decades. This period is characterized by the use of micro data to answer policy-relevant research questions. Prominent in the literature is the search for exogenous variation.

  1. Simplified method for numerical modeling of fiber lasers.

    Science.gov (United States)

    Shtyrina, O V; Yarutkina, I A; Fedoruk, M P

    2014-12-29

    A simplified numerical approach to the modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding the periodic solutions of the system of nonlinear ordinary differential equations describing the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.
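
    The round-trip iteration idea can be sketched as follows: treat one cavity pass as a map F and iterate it to a fixed point. The toy linear ODE below merely stands in for the dissipative-soliton parameter equations, which the abstract does not reproduce.

```python
# A sketch of the round-trip iteration idea: evolve a parameter vector through
# one 'cavity pass' (a toy ODE standing in for the actual intra-cavity
# equations) and iterate the map to its fixed point, i.e. the periodic state.
import numpy as np
from scipy.integrate import solve_ivp

def round_trip(u):
    """One cavity pass: evolve the parameter vector through a toy linear ODE."""
    rhs = lambda t, y: np.array([-0.5 * y[0] + 0.1 * y[1], -0.3 * y[1] + 0.2])
    sol = solve_ivp(rhs, (0.0, 1.0), u, rtol=1e-9, atol=1e-12)
    return sol.y[:, -1]

u = np.array([1.0, 1.0])          # initial guess for the pulse parameters
for k in range(200):
    u_next = round_trip(u)
    if np.linalg.norm(u_next - u) < 1e-10:
        break                     # converged to the periodic solution
    u = u_next
print(f"fixed point after {k + 1} passes:", u)
```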

  2. Age replacement models: A summary with new perspectives and methods

    International Nuclear Information System (INIS)

    Zhao, Xufeng; Al-Khalifa, Khalifa N.; Magid Hamouda, Abdel; Nakagawa, Toshio

    2017-01-01

    Age replacement models are fundamental to maintenance theory. This paper summarizes our new perspectives and methods in age replacement models: First, we optimize the expected cost rate for a required availability level and vice versa. Second, an asymptotic model with simple calculation is proposed by using the cumulative hazard function skillfully. Third, we challenge the established theory that preventive replacement should be non-random and only corrective replacement should be made for a unit with exponential failure. Fourth, three replacement policies with random working cycles are discussed, which are called overtime replacement, replacement first, and replacement last, respectively. Fifth, the policies of replacement first and last are formulated with general models. Sixth, age replacement is modified for the situation when the economical life cycle of the unit is a random variable with probability distribution. Finally, models of a parallel system with constant and random numbers of units are taken into consideration. The models of expected cost rates are obtained, and optimal replacement times to minimize them are discussed analytically and computed numerically. Further studies and potential applications are also indicated at the end of the discussion of the above models. - Highlights: • Optimization of cost rate for availability level is discussed and vice versa. • Asymptotic and random replacement models are discussed. • Overtime replacement, replacement first and replacement last are surveyed. • Replacement policy with random life cycle is given. • A parallel system with random number of units is modeled.
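
    As a worked example of the classical age replacement trade-off (not reproduced in the abstract above), the sketch below minimizes the standard cost rate C(T) = [c_p R(T) + c_f (1 - R(T))] / integral_0^T R(t) dt for Weibull lifetimes; the cost and distribution parameters are illustrative.

```python
# A worked sketch of the classical age-replacement cost rate
#   C(T) = (c_p*R(T) + c_f*(1 - R(T))) / integral_0^T R(t) dt
# with Weibull lifetimes; costs and shape/scale parameters are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

beta, eta = 2.5, 10.0    # Weibull shape/scale (years)
c_p, c_f = 1.0, 10.0     # preventive vs corrective replacement cost

R = lambda t: np.exp(-(t / eta) ** beta)   # survival function

def cost_rate(T):
    mean_cycle = quad(R, 0.0, T)[0]        # expected cycle length
    return (c_p * R(T) + c_f * (1.0 - R(T))) / mean_cycle

opt = minimize_scalar(cost_rate, bounds=(0.1, 30.0), method="bounded")
print(f"optimal replacement age: {opt.x:.2f} years, cost rate {opt.fun:.4f}")
```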

  3. Forty years trends in timing of pubertal growth spurt in 157,000 Danish school children

    DEFF Research Database (Denmark)

    Aksglæde, Lise; Olsen, Lina Wøhlk; Sørensen, Thorkild I.A.

    2008-01-01

    The aim of this study was to determine if the age at onset of pubertal growth spurt (OGS) and at peak height velocity (PHV) during puberty show secular trends during four decades in a large cohort of school children. METHODS AND FINDINGS: Annual measurements of height were available in all children born from 1930...... to 1969 who attended primary school in the Copenhagen Municipality. 135,223 girls and 21,612 boys fulfilled the criteria for determining age at OGS and age at PHV. These physiological events were used as markers of pubertal development in our computerized method in order to evaluate any secular trends...... a secular trend towards earlier sexual maturation of Danish children born between 1930 and 1969. Only minor changes were observed in duration of puberty assessed by the difference in ages at OGS and PHV. Publication date: 2008.

  4. Forty years of increasing suicide mortality in Poland: Undercounting amidst a hanging epidemic?

    Directory of Open Access Journals (Sweden)

    Höfer Peter

    2012-08-01

    Background: Suicide rate trends for Poland, one of the most populous countries in Europe, are not well documented. Moreover, the quality of the official Polish suicide statistics is unknown and requires in-depth investigation. Methods: Population and mortality data disaggregated by sex, age, manner, and cause were obtained from the Polish Central Statistics Office for the period 1970-2009. Suicides and deaths categorized as ‘undetermined injury intent,’ ‘unknown causes,’ and ‘unintentional poisonings’ were analyzed to estimate the reliability and sensitivity of suicide certification in Poland over three periods covered by ICD-8, ICD-9 and ICD-10, respectively. Time trends were assessed by the Spearman test for trend. Results: The official suicide rate increased by 51.3% in Poland between 1970 and 2009. There was an increasing excess suicide rate for males, culminating in a male-to-female ratio of 7:1. The dominant method, hanging, comprised 90% of all suicides by 2009. Factoring in deaths of undetermined intent only, estimated sensitivity of suicide certification was 77% overall, but lower for females than males. Not increasing linearly with age, the suicide rate peaked at ages 40-54 years. Conclusion: The suicide rate is increasing in Poland, which calls for a national prevention initiative. Hangings are the predominant suicide method based on official registration. However, suicide among females appears grossly underestimated given their lower estimated sensitivity of suicide certification, greater use of “soft” suicide methods, and the very high 7:1 male-to-female rate ratio. Changes in the ICD classification system resulted in a temporary suicide data blackout in 1980-1982, and significant modifications of the death categories of senility and unknown causes after 1997 suggest the need for data quality surveillance.

  5. Modeling and Analysis of Supplier Selection Method Using ...

    African Journals Online (AJOL)

    However, in these parts of the world the application of tools and models for supplier selection problem is yet to surface and the banking and finance industry here in Ethiopia is no exception. Thus, the purpose of this research was to address supplier selection problem through modeling and application of analytical hierarchy ...

  6. The Interval Market Model in Mathematical Finance : Game Theoretic Methods

    NARCIS (Netherlands)

    Bernhard, P.; Engwerda, J.C.; Roorda, B.; Schumacher, J.M.; Kolokoltsov, V.; Saint-Pierre, P.; Aubin, J.P.

    2013-01-01

    Toward the late 1990s, several research groups independently began developing new, related theories in mathematical finance. These theories did away with the standard stochastic geometric diffusion “Samuelson” market model (also known as the Black-Scholes model because it is used in that most famous

  7. Involving stakeholders in building integrated fisheries models using Bayesian methods

    DEFF Research Database (Denmark)

    Haapasaari, Päivi Elisabet; Mäntyniemi, Samu; Kuikka, Sakari

    2013-01-01

    the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective on knowledge, which is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology...

  8. Empirical methods for modeling landscape change, ecosystem services, and biodiversity

    Science.gov (United States)

    David Lewis; Ralph. Alig

    2009-01-01

    The purpose of this paper is to synthesize recent economics research aimed at integrating discrete-choice econometric models of land-use change with spatially-explicit landscape simulations and quantitative ecology. This research explicitly models changes in the spatial pattern of landscapes in two steps: 1) econometric estimation of parcel-scale transition...

  9. Material characterization models and test methods for historic building materials

    DEFF Research Database (Denmark)

    Hansen, Tessa Kvist; Peuhkuri, Ruut Hannele; Møller, Eva B.

    2017-01-01

    ways for estimation of these. A case study of a brick wall was used to create and validate a hygrothermal simulation model; a parameter study with five different parameters was performed on this model to determine decisive parameters. Furthermore, a clustering technique has been proposed to estimate...

  10. Disruption Management in the Airline Industry - Concepts, Models and Methods

    DEFF Research Database (Denmark)

    Clausen, Jens; Larsen, Allan; Larsen, Jesper

    2005-01-01

    The airline industry is notably one of the success stories with respect to the use of optimization based methods and tools in planning. Both in planning of the assignment of available aircraft to flights and in crew scheduling, these methods play a major role. Plans are usually made several months...

  11. Application of Probability Methods to Assess Crash Modeling Uncertainty

    Science.gov (United States)

    Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.

    2007-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.

  12. Sparse QSAR modelling methods for therapeutic and regenerative medicine

    Science.gov (United States)

    Winkler, David A.

    2018-02-01

    The quantitative structure-activity relationships method was popularized by Hansch and Fujita over 50 years ago. The usefulness of the method for drug design and development has been shown in the intervening years. As it was developed initially to elucidate which molecular properties modulated the relative potency of putative agrochemicals, and at a time when computing resources were scarce, there is much scope for applying modern mathematical methods to improve the QSAR method and to extending the general concept to the discovery and optimization of bioactive molecules and materials more broadly. I describe research over the past two decades where we have rebuilt the unit operations of the QSAR method using improved mathematical techniques, and have applied this valuable platform technology to new important areas of research and industry such as nanoscience, omics technologies, advanced materials, and regenerative medicine. This paper was presented as the 2017 ACS Herman Skolnik lecture.

  13. A New Method for Modeling Spatial Prestressing Tendons

    Science.gov (United States)

    Li, Yi; Wang, Yuqian; Liu, Gao

    2010-05-01

    As a standard simulation procedure for curved lines and surfaces, splines have been widely used in the domain of computer-aided design. This paper presents a simple but relatively accurate procedure for the description of prestressing tendons. Cubic splines, instead of conventional parabolic ones, are introduced to obtain the characteristic parameters of the curved tendon profiles. The direct internal load method is adopted to obtain the equivalent load and the loss of tendon force. In comparison with traditional methods, cubic splines need fewer parameters in the pre-processor and lead to higher accuracy in calculation. The direct internal load method can demonstrate the regularity of the prestressing force acting on the structure, which modifies the prevalent equivalent load method. The results of the analysis presented in this paper indicate that the proposed method turns out to be convenient and reasonably accurate in the analysis of prestressed concrete bridges.
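
    The cubic-spline idea can be sketched as follows: fit the tendon profile with a cubic spline and read an equivalent transverse load off its second derivative (q ≈ P·y'' under a shallow-profile approximation). The geometry and prestressing force below are illustrative, and this is not the authors' full direct internal load method.

```python
# A sketch of the cubic-spline idea: interpolate the tendon profile and take
# the equivalent transverse load from its second derivative (q ~ P * y'' for
# a shallow profile). Geometry and prestressing force are illustrative.
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 5.0, 10.0, 15.0, 20.0])      # stations along a 20 m span
y = np.array([0.0, -0.35, -0.50, -0.35, 0.0])   # tendon eccentricity, m
P = 2000.0                                      # prestressing force, kN

profile = CubicSpline(x, y)
xs = np.linspace(0.0, 20.0, 9)
q = P * profile(xs, 2)     # equivalent load, kN/m (shallow-profile approximation)
for xi, qi in zip(xs, q):
    print(f"x = {xi:5.1f} m   q = {qi:7.2f} kN/m")
```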

  14. New Methods for Kinematic Modelling and Calibration of Robots

    DEFF Research Database (Denmark)

    Søe-Knudsen, Rune

    2014-01-01

    Improving a robot's accuracy increases its ability to solve certain tasks, and is therefore valuable. Practical ways of achieving this improved accuracy, even after robot repair, is also valuable. In this work, we introduce methods that improve the robot's accuracy and make it possible to maintain...... the accuracy in an easy and accessible way. The required equipment is accessible, since the cost is held to a minimum and can be made with conventional processing equipment. Our first method calibrates the kinematics of a robot using known relative positions measured with the robot itself and a plate...... with holes matching the robot tool flange. The second method calibrates the kinematics using two robots. This method allows the robots to carry out the collection of measurements and the adjustment, by themselves, after the robots have been connected. Furthermore, we also propose a method for restoring...

  15. Sparse QSAR modelling methods for therapeutic and regenerative medicine.

    Science.gov (United States)

    Winkler, David A

    2018-04-01

    The quantitative structure-activity relationships method was popularized by Hansch and Fujita over 50 years ago. The usefulness of the method for drug design and development has been shown in the intervening years. As it was developed initially to elucidate which molecular properties modulated the relative potency of putative agrochemicals, and at a time when computing resources were scarce, there is much scope for applying modern mathematical methods to improve the QSAR method and to extending the general concept to the discovery and optimization of bioactive molecules and materials more broadly. I describe research over the past two decades where we have rebuilt the unit operations of the QSAR method using improved mathematical techniques, and have applied this valuable platform technology to new important areas of research and industry such as nanoscience, omics technologies, advanced materials, and regenerative medicine. This paper was presented as the 2017 ACS Herman Skolnik lecture.

  16. Estimation of inbreeding rates and extinction risk of forty one Belgian chicken breeds in 2005 and 2010

    OpenAIRE

    Moula, Nassim; Philippe, François-Xavier; Antoine-Moussiaux, Nicolas; Leroy, Pascal; Michaux, Charles

    2014-01-01

    In Belgium, as generally in Europe, the dominant position of high-producing commercial strains specialized in meat or egg production threatens the local traditional breeds with extinction. In this work, a follow-up of the changes in population size and the rates of inbreeding of the Belgian poultry breeds was carried out in 2005 and 2010. About forty breeds were concerned. The Belgian hen breeds are overwhelmingly under threat of extinction, because of the low number of individu...

  17. A default method to specify skeletons for Bayesian model averaging continual reassessment method for phase I clinical trials.

    Science.gov (United States)

    Pan, Haitao; Yuan, Ying

    2017-01-30

    The Bayesian model averaging continual reassessment method (CRM) is a Bayesian dose-finding design. It improves the robustness and overall performance of the continual reassessment method (CRM) by specifying multiple skeletons (or models) and then using Bayesian model averaging to automatically favor the best-fitting model for better decision making. Specifying multiple skeletons, however, can be challenging for practitioners. In this paper, we propose a default way to specify skeletons for the Bayesian model averaging CRM. We show that skeletons that appear rather different may actually lead to equivalent models. Motivated by this, we define a nonequivalence measure to index the difference among skeletons. Using this measure, we extend the model calibration method of Lee and Cheung (2009) to choose the optimal skeletons that maximize the average percentage of correct selection of the maximum tolerated dose and ensure sufficient nonequivalence among the skeletons. Our simulation study shows that the proposed method has desirable operating characteristics. We provide software to implement the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Small-angle physics at the intersecting storage rings forty years later

    International Nuclear Information System (INIS)

    Amaldi, Ugo

    2012-01-01

    It is often said that the ISR did not have the detectors needed to discover fundamental phenomena made accessible by its large and new energy range. This is certainly true for ‘high-momentum-transfer physics’, which, since the end of the 1960s, became a main focus of research, but the statement does not apply to the field that is the subject of this paper. In fact, looking back to the results obtained at the ISR by the experiments that were programmed to study ‘small-angle physics’, one can safely say that the detectors were very well suited to the tasks and performed much better than foreseen. As far as the results are concerned, in this particular corner of hadron–hadron physics, new phenomena were discovered, unexpected scaling laws were found and the first detailed studies of that elusive concept, which goes under the name ‘pomeron’, were performed, opening the way to phenomena that we hope will be observed at the LHC. Moreover, some techniques and methods have had a lasting influence: all colliders had and have their Roman pots, and the different methods developed at the ISR for measuring the luminosity are still in use.

  19. Remote sensing models and methods for image processing

    CERN Document Server

    Schowengerdt, Robert A

    1997-01-01

    This book is a completely updated, greatly expanded version of the previously successful volume by the author. The Second Edition includes new results and data, and discusses a unified framework and rationale for designing and evaluating image processing algorithms.Written from the viewpoint that image processing supports remote sensing science, this book describes physical models for remote sensing phenomenology and sensors and how they contribute to models for remote-sensing data. The text then presents image processing techniques and interprets them in terms of these models. Spectral, s

  20. Economic-mathematical methods and models under uncertainty

    CERN Document Server

    Aliyev, A G

    2013-01-01

    Brief Information on Finite-Dimensional Vector Space and its Application in Economics; Bases of Piecewise-Linear Economic-Mathematical Models with Regard to Influence of Unaccounted Factors in Finite-Dimensional Vector Space; Piecewise Linear Economic-Mathematical Models with Regard to Unaccounted Factors Influence in Three-Dimensional Vector Space; Piecewise-Linear Economic-Mathematical Models with Regard to Unaccounted Factors Influence on a Plane; Bases of Software for Computer Simulation and Multivariant Prediction of Economic Even at Uncertainty Conditions on the Base of N-Comp...

  1. CAD ACTIVE MODELS: AN INNOVATIVE METHOD IN ASSEMBLY ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    NADDEO Alessandro

    2010-07-01

    The aim of this work is to show the use and versatility of active models in different applications. An active model of a cylindrical spring has been realized and applied in two mechanisms that differ in typology and in the loads applied. The first example is a dynamometer in which the cylindrical spring is loaded by traction forces, while the second is a pressure valve in which the cylindrical-conical spring works under compression. In both cases, the imposition of the loads has allowed us to evaluate the model of the mechanism in different working conditions, including in the assembly environment.

  2. Modelling and simulation of diffusive processes methods and applications

    CERN Document Server

    Basu, SK

    2014-01-01

    This book addresses the key issues in the modeling and simulation of diffusive processes from a wide spectrum of different applications across a broad range of disciplines. Features: discusses diffusion and molecular transport in living cells and suspended sediment in open channels; examines the modeling of peristaltic transport of nanofluids, and isotachophoretic separation of ionic samples in microfluidics; reviews thermal characterization of non-homogeneous media and scale-dependent porous dispersion resulting from velocity fluctuations; describes the modeling of nitrogen fate and transport

  3. Congestion cost allocation method in a pool model

    International Nuclear Information System (INIS)

    Jung, H.S.; Hur, D.; Park, J.K.

    2003-01-01

    The congestion cost caused by transmission capacities and voltage limit is an important issue in a competitive electricity market. To allocate the congestion cost equitably, the active constraints in a constrained dispatch and the sequence of these constraints should be considered. A multi-stage method is proposed which reflects the effects of both the active constraints and the sequence. In a multi-stage method, the types of congestion are analysed in order to consider the sequence, and the relationship between congestion and the active constraints is derived in a mathematical way. The case study shows that the proposed method can give more accurate and equitable signals to customers. (Author)

  4. Thruster Modelling for Underwater Vehicle Using System Identification Method

    Directory of Open Access Journals (Sweden)

    Mohd Shahrieel Mohd Aras

    2013-05-01

    This paper describes a study of thruster modelling for a remotely operated underwater vehicle (ROV) by system identification using the Microbox 2000/2000C. The Microbox 2000/2000C is an xPC Target machine device used to interface an ROV thruster with the MATLAB 2009 software. In this project, a model of the thruster is developed first so that the system identification toolbox in MATLAB can be used. This project also presents a comparison of mathematical and empirical modelling. The experiments were carried out by using a mini compressor as a dummy depth pressure applied to a pressure sensor. The thruster will thrust and submerge until it reaches a set point and then maintain the set-point depth, with depth based on the pressure sensor measurement. A conventional proportional controller was used in this project and the results gathered justified its selection.
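
    A minimal sketch of the system-identification step is shown below: a first-order ARX model depth[k+1] = a*depth[k] + b*u[k] fitted by least squares. The synthetic data stands in for the Microbox/MATLAB workflow, and the 'true' coefficients are assumptions for demonstration.

```python
# A sketch of the identification step: fit a first-order ARX model
# depth[k+1] = a*depth[k] + b*u[k] by least squares on synthetic data that
# stands in for the thruster experiments; a_true/b_true are assumptions.
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 0.95, 0.08
u = rng.uniform(-1, 1, 500)        # thruster command sequence
d = np.zeros(501)                  # depth response
for k in range(500):
    d[k + 1] = a_true * d[k] + b_true * u[k] + rng.normal(0, 1e-3)

Phi = np.column_stack([d[:-1], u])                 # regressors [depth[k], u[k]]
a_hat, b_hat = np.linalg.lstsq(Phi, d[1:], rcond=None)[0]
print(f"identified: depth[k+1] = {a_hat:.3f}*depth[k] + {b_hat:.3f}*u[k]")
```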

  5. Essential medicines: an overview of some milestones in the last forty years (1975-2013

    Directory of Open Access Journals (Sweden)

    José Antonio Pagés

    2013-06-01

    Despite progress in the last four decades in terms of access to essential medicines, more than a third of the world population, especially in the poorest countries, has serious difficulties in accessing the medicines they need at an affordable price and with the right quality. Already in 1975 the 28th World Health Assembly discussed the need to set recommendations regarding the selection and acquisition of medicines at reasonable prices and of proven quality to meet national health needs. Consequently, in 1977, the first WHO Model List of Essential Medicines was prepared by an expert committee. Since then, the list has been subject to a series of updating and dissemination processes, together with discussion about the cost, patents and quality of medicines, as well as information on the safety and effectiveness of each drug that is listed. The article addresses how this process has evolved from the beginning to the present day.

  6. Volumetric fast multipole method for modeling Schroedinger's equation

    International Nuclear Information System (INIS)

    Zhao, Zhiqin; Kovvali, Narayan; Lin, Wenbin; Ahn, Chang-Hoi; Couchman, Luise; Carin, Lawrence

    2007-01-01

    A volume integral equation method is presented for solving Schroedinger's equation for three-dimensional quantum structures. The method is applicable to problems with arbitrary geometry and potential distribution, with unknowns required only in the part of the computational domain for which the potential is different from the background. Two different Green's functions are investigated based on different choices of the background medium. It is demonstrated that one of these choices is particularly advantageous in that it significantly reduces the storage and computational complexity. Solving the volume integral equation directly involves O(N^2) complexity. In this paper, the volume integral equation is solved efficiently via a multi-level fast multipole method (MLFMM) implementation, requiring O(N log N) memory and computational cost. We demonstrate the effectiveness of this method for rectangular and spherical quantum wells, and the quantum harmonic oscillator, and present preliminary results of interest for multi-atom quantum phenomena

  7. A numerical method for eigenvalue problems in modeling liquid crystals

    Energy Technology Data Exchange (ETDEWEB)

    Baglama, J.; Farrell, P.A.; Reichel, L.; Ruttan, A. [Kent State Univ., OH (United States); Calvetti, D. [Stevens Inst. of Technology, Hoboken, NJ (United States)

    1996-12-31

    Equilibrium configurations of liquid crystals in finite containments are minimizers of the thermodynamic free energy of the system. It is important to be able to track the equilibrium configurations as the temperature of the liquid crystals decreases. The path of the minimal energy configuration at bifurcation points can be computed from the null space of a large sparse symmetric matrix. We describe a new variant of the implicitly restarted Lanczos method that is well suited for the computation of extreme eigenvalues of a large sparse symmetric matrix, and we use this method to determine the desired null space. Our implicitly restarted Lanczos method adaptively determines a polynomial filter by using Leja shifts, and does not require factorization of the matrix. The storage requirement of the method is small, and this makes it attractive to use for the present application.

  8. Review of methods for modelling forest fire risk and hazard

    African Journals Online (AJOL)

    user

    -Leal et al., 2006). Stolle and Lambin (2003) noted that flammable fuel depends on ... advantages over conventional fire detection and fire monitoring methods because of its repetitive and consistent coverage over large areas of land (Martin et ...

  9. The use of solvent extraction in the nuclear fuel cycle, forty years of progress

    International Nuclear Information System (INIS)

    Germain, M.

    1990-01-01

    The high degree of purity required for the fissile and fertile elements used as fuels in nuclear reactors has made solvent extraction the purification method of choice in the different steps of the fuel cycle. This technique, owing to its specificity and its adaptability both to continuous multistage processes and to remote control, has served to achieve the requisite purities with safe, reliable operation. A review of the different steps of the cycle, including uranium and thorium production, uranium enrichment, reprocessing, and the recovery of transuranics, highlights the diversity of the solvents used and the improvements made to the processes and the equipment. According to the different authors, this technique is capable of meeting future needs aimed at reducing the harmful effects associated with the nuclear fuel cycle to the lowest possible levels

  10. Forty-five-year follow-up on the renal function after spinal cord injury

    DEFF Research Database (Denmark)

    Elmelund, M; Oturai, P S; Toson, B

    2016-01-01

    STUDY DESIGN: Retrospective chart review. OBJECTIVES: To investigate the extent of renal deterioration in patients with spinal cord injury (SCI) and to identify risk indicators associated with renal deterioration. SETTING: Clinic for Spinal Cord Injuries, Rigshospitalet, Hornbæk, Denmark. METHODS: This study included 116 patients admitted to our clinic with a traumatic SCI sustained between 1956 and 1975. Results from renography and (51)Cr-EDTA plasma clearance were collected from medical records from time of injury until 2012, and the occurrence of renal deterioration was analysed by cumulative...... increased the risk of moderate and severe renal deterioration. CONCLUSION: Renal deterioration occurs at any time after injury, suggesting that lifelong follow-up examinations of the renal function are important, especially in patients with dilatation of UUT and/or renal/ureter stones.

  11. The Forty-Sixth Euro Congress on Drug Synthesis and Analysis: Snapshot †

    Directory of Open Access Journals (Sweden)

    Pavel Mucaji

    2017-10-01

    The 46th EuroCongress on Drug Synthesis and Analysis (ECDSA-2017) was arranged within the celebration of the 65th Anniversary of the Faculty of Pharmacy at Comenius University in Bratislava, Slovakia, from 5–8 September 2017 to bring together specialists in medicinal chemistry, organic synthesis, pharmaceutical analysis, screening of bioactive compounds, pharmacology and drug formulations; to promote the exchange of scientific results, methods and ideas; and to encourage cooperation between researchers from all over the world. The topic of the conference, “Drug Synthesis and Analysis,” meant that the symposium welcomed all pharmacists and/or researchers (chemists, analysts, biologists) and students interested in scientific work dealing with investigations of biologically active compounds as potential drugs. The authors of this manuscript were plenary speakers and other participants of the symposium and members of their research teams. The following summary highlights the major points/topics of the meeting.

  12. Transient thermal modelling of ball bearing using finite element method

    OpenAIRE

    Sibilli, Thierry; Igie, Uyioghosa

    2017-01-01

    Gas turbines are fitted with rolling element bearings, which transfer loads and support the shafts. The interaction between the rotating and stationary parts in the bearing causes a conversion of some of the power into heat, influencing the thermal behaviour of the entire bearing chamber. To improve thermal modelling of bearing chambers, this work focused on modelling the heat generated and dissipated around the bearings, in terms of magnitude and location, and the interaction with the co...

  13. A Traceability-based Method to Support Conceptual Model Evolution

    OpenAIRE

    Ruiz Carmona, Luz Marcela

    2014-01-01

    Renewing software systems is one of the most cost-effective ways to protect software investment, which saves time and money and ensures uninterrupted access to technical support and product upgrades. There are several motivations to promote investment and scientific effort in specifying systems by means of conceptual models and supporting their evolution. As an example, the software engineering community is addressing solutions for supporting model traceability, continuous improvement of busi...

  14. Diffusion models in metamorphic thermo chronology: philosophy and methods

    International Nuclear Information System (INIS)

    Munha, Jose Manuel; Tassinari, Colombo Celso Gaeta

    1999-01-01

    Understanding the kinetics of diffusion is of major importance to the interpretation of isotopic ages in metamorphic rocks. This paper provides a review of the concepts and methodologies involved in the various diffusion models that can be applied to radiogenic systems in cooling rocks. The central concept of closure temperature is critically discussed and quantitative estimates for the various diffusion models are evaluated, in order to illustrate the controlling factors and the limits of their practical application. (author)

  15. Forty research issues for the redesign of animal production systems in the 21st century.

    Science.gov (United States)

    Dumont, B; González-García, E; Thomas, M; Fortun-Lamothe, L; Ducrot, C; Dourmad, J Y; Tichit, M

    2014-08-01

    Agroecology offers a scientific and operational framework for redesigning animal production systems (APS) so that they better cope with the coming challenges. Grounded in the stimulation and valorization of natural processes to reduce inputs and pollutions in agroecosystems, it opens a challenging research agenda for the animal science community. In this paper, we identify key research issues that define this agenda. We first stress the need to assess animal robustness by measurable traits, to analyze trade-offs between production and adaptation traits at within-breed and between-breed level, and to better understand how group selection, epigenetics and animal learning shape performance. Second, we propose research on the nutritive value of alternative feed resources, including the environmental impacts of producing these resources and their associated non-provisioning services. Third, we look at how the design of APS based on agroecological principles valorizes interactions between system components and promotes biological diversity at multiple scales to increase system resilience. Addressing such challenges requires a collection of theories and models (concept-knowledge theory, viability theory, companion modeling, etc.). Acknowledging the ecology of contexts and analyzing the rationales behind traditional small-scale systems will increase our understanding of mechanisms contributing to the success or failure of agroecological practices and systems. Fourth, the large-scale development of agroecological products will require analysis of resistance to change among farmers and other actors in the food chain. Certifications and market-based incentives could be an important lever for the expansion of agroecological alternatives in APS. Finally, we question the suitability of current agriculture extension services and public funding mechanisms for scaling-up agroecological practices and systems.

  16. Clear-Air Propagation Modeling using Parabolic Equation Method

    Directory of Open Access Journals (Sweden)

    V. Kvicera

    2003-12-01

    Full Text Available Propagation of radio waves under clear-air conditions is affected by the distribution of atmospheric refractivity between the transmitter and the receiver. Refractivity measurements were carried out on the TV Tower Prague to assess the evolution of the refractivity profile. In this paper, the parabolic equation method is used to model microwave propagation using the measured data. The paper briefly describes the method and shows some practical results of simulating microwave propagation using real vertical profiles of atmospheric refractivity.
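
    The parabolic equation method referred to here is commonly solved with a split-step Fourier range march. The sketch below is a minimal illustration of that scheme with a hypothetical linear refractivity profile and antenna; it is not based on the Prague measurements.

        import numpy as np

        # Narrow-angle parabolic equation, split-step Fourier march:
        #   u(x+dx) = exp(i*k0*(n^2-1)*dx/2) * IFFT[ exp(-i*kz^2*dx/(2*k0)) * FFT(u) ]
        f = 10e9                            # frequency: 10 GHz
        k0 = 2 * np.pi * f / 3.0e8          # free-space wavenumber (rad/m)
        nz, dz, dx, nx = 2048, 0.5, 50.0, 200
        z = np.arange(nz) * dz
        kz = 2 * np.pi * np.fft.fftfreq(nz, dz)

        n = 1.0 + (315.0 - 0.040 * z) * 1e-6     # hypothetical linear N-profile
        u = np.exp(-((z - 100.0) / 10.0) ** 2)   # Gaussian aperture at 100 m height

        refract = np.exp(1j * k0 * (n ** 2 - 1.0) * dx / 2.0)   # phase screen
        diffract = np.exp(-1j * kz ** 2 * dx / (2.0 * k0))      # free-space step

        for _ in range(nx):                 # march 10 km in range
            u = refract * np.fft.ifft(diffract * np.fft.fft(u))

        field_dB = 20 * np.log10(np.abs(u) + 1e-12)  # relative field vs height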

  17. The Empirical Economist's Toolkit: From Models to Methods

    OpenAIRE

    Panhans, Matthew T.; Singleton, John D.

    2015-01-01

    While historians of economics have noted the transition toward empirical work in economics since the 1970s, less understood is the shift toward “quasi-experimental” methods in applied microeconomics. Angrist and Pischke (2010) trumpet the wide application of these methods as a “credibility revolution” in econometrics that has finally provided persuasive answers to a diverse set of questions. Particularly influential in the applied areas of labor, education, public, and health economics, the meth…

  18. Study on geological environment model using geostatistics method

    International Nuclear Information System (INIS)

    Honda, Makoto; Suzuki, Makoto; Sakurai, Hideyuki; Iwasa, Kengo; Matsui, Hiroya

    2005-03-01

    The purpose of this study is to develop a geostatistical procedure for modeling geological environments and to evaluate the quantitative relationship between the amount of information and the reliability of the model, using the data sets obtained in the surface-based investigation phase (Phase 1) of the Horonobe Underground Research Laboratory Project. The study runs for three years, from FY2004 to FY2006; this report covers the research in FY2005, the second year of the study. In the FY2005 research, the hydrogeological model was built, as in the FY2004 research, using the data obtained from the deep boreholes (HDB-6, 7 and 8) and the ground magnetotelluric (AMT) survey executed in FY2004, in addition to the data sets used in the first year of the study. Above all, the relationship between the amount of information and the reliability of the model was demonstrated through a comparison of the models at each step, which corresponds to the investigation stage in each fiscal year. Furthermore, a statistical test was applied to detect differences in the basic statistics of various data due to geological features, with a view to taking geological information into the modeling procedures. (author)
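
    A core tool in such geostatistical modelling is the experimental semivariogram, which quantifies how data reliability depends on sampling density. A minimal sketch on synthetic data (not the Horonobe data sets):

        import numpy as np

        def semivariogram(coords, values, lags, tol):
            # Experimental semivariogram: gamma(h) = (1/(2*N(h))) * sum (z_i - z_j)^2
            # over point pairs whose separation falls within each lag bin.
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            dz2 = (values[:, None] - values[None, :]) ** 2
            gamma = []
            for h in lags:
                mask = np.triu(np.abs(d - h) < tol, k=1)  # count each pair once
                gamma.append(0.5 * dz2[mask].mean() if mask.any() else np.nan)
            return np.array(gamma)

        rng = np.random.default_rng(0)
        coords = rng.uniform(0, 100, (200, 2))   # synthetic borehole locations
        values = np.sin(coords[:, 0] / 15) + 0.1 * rng.normal(size=200)
        print(semivariogram(coords, values, lags=np.arange(5, 50, 5), tol=2.5))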

  19. The Blended Finite Element Method for Multi-fluid Plasma Modeling

    Science.gov (United States)

    2016-07-01

    Briefing charts, dates covered 07 June 2016 – 01 July 2016. The Blended Finite Element Method for Multi-fluid Plasma Modeling, Éder M. Sousa (ERC Inc., In-Space Propulsion Branch (RQRS), Air Force Research Laboratory) and Uri Shumlak. The briefing describes a blended finite element method that combines a nodal continuous Galerkin discretization with a modal discontinuous Galerkin discretization for a multi-fluid plasma model.

  20. A Default Method to Specify Skeletons for Bayesian Model Averaging Continual Reassessment Method for Phase I Clinical Trials

    Science.gov (United States)

    Pan, Haitao; Yuan, Ying

    2016-01-01

    The Bayesian model averaging continual reassessment method (BMA-CRM) is an extension of the continual reassessment method (CRM) for dose finding. The BMA-CRM improves the robustness and overall performance of the CRM by specifying multiple skeletons (or models) and then using Bayesian model averaging to automatically favor the best fitting model for robust decision making. Specifying multiple skeletons, however, can be challenging for practitioners. In this paper, we propose a default way to specify skeletons for the BMA-CRM. We show that skeletons that appear rather different may actually lead to equivalent models. Motivated by this, we define a nonequivalence measure to index the difference among skeletons. Using this measure, we extend the model calibration method of Lee and Cheung (2009) to choose the optimal skeletons that maximize the average percentage of correct selection of the maximum tolerated dose and ensure sufficient nonequivalence among the skeletons. Our simulation study shows that the proposed method has desirable operating characteristics. We provide software to implement the proposed method. PMID:26991076
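
    A minimal sketch of the model-averaging step, assuming the standard one-parameter power-model CRM with a normal prior on the exponent; the skeletons and trial data below are hypothetical, not from the paper:

        import numpy as np

        def bma_crm_weights(skeletons, n_pat, n_tox, prior_sd=np.sqrt(2.0)):
            # Posterior model probabilities for CRM skeletons under the power model
            #   p_j = skeleton_j ** exp(a),  a ~ N(0, prior_sd**2),
            # with equal prior probability per skeleton; the marginal likelihood
            # is computed by a simple Riemann sum over a grid of a-values.
            a = np.linspace(-6.0, 6.0, 2001)
            da = a[1] - a[0]
            prior = np.exp(-0.5 * (a / prior_sd) ** 2)
            n_pat, n_tox = np.asarray(n_pat), np.asarray(n_tox)
            marg = []
            for skel in skeletons:
                p = np.asarray(skel)[:, None] ** np.exp(a)[None, :]   # doses x grid
                lik = np.prod(p ** n_tox[:, None] *
                              (1 - p) ** (n_pat - n_tox)[:, None], axis=0)
                marg.append(np.sum(lik * prior) * da)
            w = np.array(marg)
            return w / w.sum()

        # two hypothetical skeletons; 3+3+6 patients, 2 toxicities at dose 3
        skeletons = [(0.05, 0.12, 0.25, 0.40), (0.10, 0.25, 0.40, 0.55)]
        print(bma_crm_weights(skeletons, n_pat=[3, 3, 6, 0], n_tox=[0, 0, 2, 0]))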

  1. Forty million years of mutualism: Evidence for Eocene origin of the yucca-yucca moth association

    Science.gov (United States)

    Pellmyr, Olle; Leebens-Mack, James

    1999-01-01

    The obligate mutualism between yuccas and yucca moths is a major model system for the study of coevolving species interactions. Exploration of the processes that have generated current diversity and associations within this mutualism requires robust phylogenies and timelines for both moths and yuccas. Here we establish a molecular clock for the moths based on mtDNA and use it to estimate the time of major life history events within the yucca moths. Colonization of yuccas had occurred by 41.5 ± 9.8 million years ago (Mya), with rapid life history diversification and the emergence of pollinators within 0–6 My after yucca colonization. A subsequent burst of diversification 3.2 ± 1.8 Mya coincided with evolution of arid habitats in western North America. Derived nonpollinating cheater yucca moths evolved 1.26 ± 0.96 Mya. The estimated age of the moths far predates the host fossil record, but is consistent with suggested host age based on paleobotanical, climatological, biogeographical, and geological data, and a tentative estimation from an rbcL-based molecular clock for yuccas. The moth data are used to establish three alternative scenarios of how the moths and plants have coevolved. They yield specific predictions that can be tested once a robust plant phylogeny becomes available. PMID:10430916
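
    The timeline rests on the standard molecular-clock relation t = d/(2r); a tiny sketch with hypothetical numbers, not the paper's calibration:

        def divergence_time(pairwise_distance, rate):
            # Molecular clock: t = d / (2r), where d is pairwise sequence
            # divergence (substitutions per site), r the substitution rate per
            # site per Myr; the factor 2 counts both diverging lineages.
            return pairwise_distance / (2.0 * rate)

        # hypothetical: 9% mtDNA divergence at ~1.1% per lineage per Myr
        print(divergence_time(0.09, 0.011))  # ~4.1 Myr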

  2. From Precaution to Peril: Public Relations Across Forty Years of Genetic Engineering.

    Science.gov (United States)

    Hogan, Andrew J

    2016-12-01

    The Asilomar conference on genetic engineering in 1975 has long been pointed to by scientists as a model for internal regulation and public engagement. In 2015, the organizers of the International Summit on Human Gene Editing in Washington, DC looked to Asilomar as they sought to address the implications of the new CRISPR gene editing technique. Like at Asilomar, the conveners chose to limit the discussion to a narrow set of potential CRISPR applications, involving inheritable human genome editing. The adoption by scientists in 2015 of an Asilomar-like script for discussing genetic engineering offers historians the opportunity to analyze the adjustments that have been made since 1975, and to identify the blind spots that remain in public engagement. Scientists did take important lessons from the fallout of their limited engagement with public concerns at Asilomar. Nonetheless, the scientific community has continued to overlook some of the longstanding public concerns about genetic engineering, in particular the broad and often covert genetic modification of food products. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Forty years of erratic insecticide resistance evolution in the mosquito Culex pipiens.

    Directory of Open Access Journals (Sweden)

    Pierrick Labbé

    2007-11-01

    Full Text Available One view of adaptation is that it proceeds by the slow and steady accumulation of beneficial mutations with small effects. It is difficult to test this model, since in most cases the genetic basis of adaptation can only be studied a posteriori with traits that have evolved for a long period of time through an unknown sequence of steps. In this paper, we show how ace-1, a gene involved in resistance to organophosphorous insecticides in the mosquito Culex pipiens, has evolved during 40 years of an insecticide control program. Initially, a major resistance allele with strong deleterious side effects spread through the population. Later, a duplication combining a susceptible and a resistant ace-1 allele began to spread but did not replace the original resistance allele, as it is sublethal when homozygous. Last, a second duplication (also sublethal when homozygous) began to spread because heterozygotes for the two duplications do not exhibit deleterious pleiotropic effects. Double overdominance now maintains these four alleles across treated and nontreated areas. Thus, ace-1 evolution does not proceed via the steady accumulation of beneficial mutations. Instead, resistance evolution has been an erratic combination of mutation, positive selection, and the rearrangement of existing variation, leading to complex genetic architecture.
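
    The way heterozygote advantage maintains several alleles can be illustrated with the textbook one-locus viability-selection recursion; this is a sketch of the general mechanism only, not the paper's four-allele, two-duplication model:

        def select(p, w11, w12, w22):
            # One-locus, two-allele viability selection:
            #   p' = p*(p*w11 + q*w12) / w_bar,  with q = 1 - p.
            q = 1.0 - p
            w_bar = p * p * w11 + 2 * p * q * w12 + q * q * w22
            return p * (p * w11 + q * w12) / w_bar

        # overdominance: the heterozygote is fittest, so both alleles persist
        p = 0.01
        for _ in range(200):
            p = select(p, w11=0.7, w12=1.0, w22=0.8)
        print(p)  # converges to s2/(s1+s2) = 0.2/(0.3+0.2) = 0.4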

  4. Theory, Solution Methods, and Implementation of the HERMES Model

    Energy Technology Data Exchange (ETDEWEB)

    Reaugh, John E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); White, Bradley W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Curtis, John P. [Atomic Weapons Establishment (AWE), Reading, Berkshire (United Kingdom); Univ. College London (UCL), Gower Street, London (United Kingdom); Springer, H. Keo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-07-13

    The HERMES (high explosive response to mechanical stimulus) model was developed over the past decade to enable computer simulation of the mechanical and subsequent energetic response of explosives and propellants to mechanical insults such as impacts, perforations, drops, and falls. The model is embedded in computer simulation programs that solve the non-linear, large deformation equations of compressible solid and fluid flow in space and time. It is implemented as a user-defined model, which returns the updated stress tensor and composition that result from the simulation supplied strain tensor change. Although it is multi-phase, in that gas and solid species are present, it is single-velocity, in that the gas does not flow through the porous solid. More than 70 time-dependent variables are made available for additional analyses and plotting. The model encompasses a broad range of possible responses: mechanical damage with no energetic response, and a continuous spectrum of degrees of violence including delayed and prompt detonation. This paper describes the basic workings of the model.

  5. The International Nuclear Information System. The first forty years 1970-2010

    International Nuclear Information System (INIS)

    Todeschini, Claudio

    2010-10-01

    The Statute of the IAEA came into force in July 1957. It was with the desire to more adequately fulfill the statutory function that during the 1960's the Agency began exploring the possibility of establishing a scheme that would provide computerized access to a comprehensive collection of references to the world's nuclear literature. The outcome of these efforts was the establishment of the International Nuclear Information System (INIS) that produced its first products in May 1970. The system was designed as an international cooperative venture, requiring the active participation of its members. It started operations with 25 members and the success and usefulness of the system has been proven by the fact that present membership is 146. The present report describes the road that led to the creation of INIS. It also describes the present operation of the system, the current methods used to collect and process the data on nuclear literature and the various products and services that the system places at the disposal of its users. Furthermore, it gives insights into current thinking for future developments that will facilitate access to an increasing variety of nuclear related information available from the IAEA, bibliographic and numerical data, full text of published and 'grey literature', multilingual nuclear terminology information as well as facilitate access to other sources of nuclear related information maintained outside the IAEA

  6. The International Nuclear Information System. The first forty years 1970-2010 (Translated document)

    International Nuclear Information System (INIS)

    Itabashi, Keizo

    2010-10-01

    The Statute of the IAEA came into force in July 1957. It was with the desire to more adequately fulfill the statutory function that during the 1960's the Agency began exploring the possibility of establishing a scheme that would provide computerized access to a comprehensive collection of references to the world's nuclear literature. The outcome of these efforts was the establishment of the International Nuclear Information System (INIS), which produced its first products in May 1970. The system was designed as an international cooperative venture, requiring the active participation of its members. It started operations with 25 members and the success and usefulness of the system has been proven by the fact that present membership is 146. The present report describes the road that led to the creation of INIS. It also describes the present operation of the system, the current methods used to collect and process the data on nuclear literature and the various products and services that the system places at the disposal of its users. Furthermore, it gives insights into current thinking for future developments that will facilitate access to an increasing variety of nuclear related information available from the IAEA, bibliographic and numerical data, full text of published and 'grey literature', multilingual nuclear terminological information as well as facilitate access to other sources of nuclear related information maintained outside the IAEA. (author)

  7. Coupled Finite Volume Methods and Extended Finite Element Methods for the Dynamic Crack Propagation Modelling with the Pressurized Crack Surfaces

    Directory of Open Access Journals (Sweden)

    Shouyan Jiang

    2017-01-01

    Full Text Available We model the fluid flow within the crack as one-dimensional laminar flow, assume the fluid is incompressible, and account for the time-dependent rate of crack opening. The flow equation is discretised by finite volume methods. The extended finite element method is used for solving the solid medium with a crack under dynamic loads. Having constructed the approximation of the dynamic extended finite element method, the derivation of the governing equation for the dynamic extended finite element method is presented. An implicit time algorithm is elaborated for the temporal discretisation of the governing equation. In addition, the interaction integral method is given for evaluating stress intensity factors. The coupled model for hydraulic fracture can then be established by the extended finite element method and the finite volume method. We compare our numerical results with our experimental results to verify the proposed model. Finally, we investigate the water pressure distribution along the crack surface and its effect on the fracture properties.
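
    One-dimensional laminar flow in a crack is commonly described by the lubrication ("cubic-law") equation, and its finite-volume discretisation is straightforward. The sketch below assembles a steady-state version for a hypothetical aperture profile; it is a simplification for illustration, not the paper's transient coupled model.

        import numpy as np

        # Steady lubrication flow in a crack, finite-volume form:
        #   d/dx ( w(x)^3 / (12*mu) * dp/dx ) = 0,
        # with face transmissibilities taken as harmonic means of cell values.
        mu, L, n = 1e-3, 1.0, 50                       # Pa*s, m, number of cells
        x = (np.arange(n) + 0.5) * L / n
        dx = L / n
        w = 1e-3 * np.sqrt(1 - (2 * x / L - 1) ** 2) + 1e-5  # elliptic aperture
        T = w ** 3 / (12 * mu * dx)                    # cell transmissibility
        Tf = 2 * T[:-1] * T[1:] / (T[:-1] + T[1:])     # harmonic mean at faces

        A = np.zeros((n, n))
        b = np.zeros(n)
        for i in range(n - 1):                         # interior face fluxes
            A[i, i] += Tf[i]; A[i, i + 1] -= Tf[i]
            A[i + 1, i + 1] += Tf[i]; A[i + 1, i] -= Tf[i]
        A[0, 0] += 2 * T[0]; b[0] += 2 * T[0] * 1e6    # inlet pressure 1 MPa
        A[-1, -1] += 2 * T[-1]                         # outlet pressure 0
        p = np.linalg.solve(A, b)                      # pressure in each cell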

  8. Learning Methods for Dynamic Topic Modeling in Automated Behavior Analysis.

    Science.gov (United States)

    Isupova, Olga; Kuzin, Danil; Mihaylova, Lyudmila

    2017-09-27

    Semisupervised and unsupervised systems provide operators with invaluable support and can tremendously reduce the operators' load. In the light of the necessity to process large volumes of video data and provide autonomous decisions, this paper proposes new learning algorithms for activity analysis in video. The activities and behaviors are described by a dynamic topic model. Two novel learning algorithms based on the expectation maximization approach and variational Bayes inference are proposed. Theoretical derivations of the posterior estimates of model parameters are given. The designed learning algorithms are compared with the Gibbs sampling inference scheme introduced earlier in the literature. A detailed comparison of the learning algorithms is presented on real video data. We also propose an anomaly localization procedure, elegantly embedded in the topic modeling framework. It is shown that the developed learning algorithms can achieve 95% success rate. The proposed framework can be applied to a number of areas, including transportation systems, security, and surveillance.

  9. Modeling Multi-commodity Trade Information Exchange Methods

    CERN Document Server

    Traczyk, Tomasz

    2012-01-01

    Market mechanisms are entering into new fields of economy, in which some constraints of physical world, e.g. Kirchoffs Law in power grid, must be taken into account during trading. On such markets, some of commodities, like telecommunication bandwidth or electrical energy, appear to be non-storable, and must be exchanged in real-time. On the other hand, the markets tend to react at shortest possible time, so an idea to delegate some competency to autonomous software agents is very attractive. Multi-commodity mechanism addresses the aforementioned requirements. Modeling the relationships between the commodities allows to formulate new, more sophisticated models and mechanisms, which reflect decision situations in a better manner. Application of multi-commodity approach requires solving several issues related to data modeling, communication, semantics aspects of communication, reliability, etc. This book answers some of the questions and points out promising paths for implementation and development. Presented s...

  10. A method for studying stability domains in physical models

    Science.gov (United States)

    Gallas, Jason A. C.

    1994-10-01

    We present a method for investigating the simultaneous movement of all zeros of equations of motions defined by discrete mappings. The method is used to show that knowledge of the interplay of all zeros is of fundamental importance for establishing periodicities and relative stability properties of the various possible physical solutions. The method is also used (i) to show that the Frontière set of Fatou is defined primarily by zeros of functions leading to an entire invariant limiting function which underlies every dynamical system, (ii) to identify cyclotomic polynomials as components of the limiting function obtained for a parameter value supporting a particular superstable orbit of the quadratic map, (iii) to describe highly symmetric periodic cycles embedded in these components, and (iv) to provide a unified picture about which mathematical objects form basin boundaries of dynamical systems in general: the closure of all zeros not belonging to “stable” orbits.

  11. An immersed boundary method for modeling a dirty geometry data

    Science.gov (United States)

    Onishi, Keiji; Tsubokura, Makoto

    2017-11-01

    We present a robust, fast, and low-preparation-cost immersed boundary method (IBM) for simulating incompressible high-Reynolds-number flow around highly complex geometries. The method disperses the momentum by an axial linear projection and uses an approximate-domain assumption that satisfies mass conservation around the wall-containing cells. The methodology has been verified against analytical theory and wind tunnel experiment data. Next, we simulate flow around a rotating object and demonstrate the applicability of the methodology to moving-geometry problems. The methodology shows promise as a way of obtaining quick solutions on next-generation large-scale supercomputers. This research was supported by MEXT as ``Priority Issue on Post-K computer'' (Development of innovative design and production processes) and used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science.

  12. A method to couple HEM and HRM two-phase flow models

    International Nuclear Information System (INIS)

    Herard, J.M.; Hurisse, O.; Hurisse, O.; Ambroso, A.

    2009-01-01

    We present a method for the unsteady coupling of two distinct two-phase flow models (namely the Homogeneous Relaxation Model and the Homogeneous Equilibrium Model) through a thin interface. The basic approach relies on recent works devoted to the interfacial coupling of CFD models, and thus requires the introduction of an interface model. A number of numerical test cases make it possible to investigate the stability of the coupling method. (authors)

  13. Methods of mathematical modeling using polynomials of algebra of sets

    Science.gov (United States)

    Kazanskiy, Alexandr; Kochetkov, Ivan

    2018-03-01

    The article deals with the construction of discrete mathematical models for solving applied problems arising from the operation of building structures. Security issues in modern high-rise buildings are extremely serious and relevant, and there is no doubt that interest in them will only increase. The territory of the building is divided into zones that must be kept under observation. Zones can overlap and have different priorities. Such situations can be described using formulas of the algebra of sets. The formulas can be programmed, which makes it possible to work with them using computer models.
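
    Zone overlap and priorities expressed through the algebra of sets map directly onto programmable set operations, as the record notes. A minimal sketch with a hypothetical zone layout:

        # Surveillance zones as sets of discrete grid cells (hypothetical layout).
        zone_a = {(x, y) for x in range(0, 6) for y in range(0, 4)}    # priority 2
        zone_b = {(x, y) for x in range(4, 10) for y in range(2, 6)}   # priority 1

        overlap = zone_a & zone_b      # cells watched by both zones
        only_a = zone_a - zone_b       # cells covered by zone A alone
        covered = zone_a | zone_b      # the full monitored territory

        # highest-priority zone responsible for each covered cell
        priority = {cell: ("B" if cell in zone_b else "A") for cell in covered}
        print(len(overlap), len(only_a), len(covered))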

  14. CAD ACTIVE MODELS: AN INNOVATIVE METHOD IN ASSEMBLY ENVIRONMENT

    OpenAIRE

    NADDEO Alessandro; CAPPETTI Nicola; PAPPALARDO Michele

    2010-01-01

    The aim of this work is to show the use and the versatility of active models in different applications. An active model of a cylindrical spring has been created and applied in two mechanisms that differ in typology and in backlash loads. The first example is a dynamometer in which the cylindrical spring is loaded by traction forces, while the second example consists of a pressure valve in which the cylindrical-conic spring works under compression. The imposition of t…

  15. Orbital Trajectory Simulation on Twin Stars System in Ifs Fractal Model Based on Hybrid Animation Method

    OpenAIRE

    Darmanto, Tedjo; Suwardi, Iping Supriana; Munir, Rinaldi

    2015-01-01

    IFS (Iterated Function Systems) is a method for modeling fractal objects based on affine transformation functions. The rotation effect of a star-like object in the IFS fractal model can be exhibited by a metamorphic method, as a replacement for the affine rotation method of a non-metamorphic animation. The advantage of the metamorphic animation method over the non-metamorphic one is that the object's relative position to the fixed point as an absolute centroid is absolute. Therefore, the …
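
    An IFS fractal built from affine maps can be rendered with the classic chaos game. The sketch below uses Barnsley's well-known fern coefficients purely as a stand-in; the paper's star-like model would use a different affine set.

        import random

        # Chaos game for an IFS of affine maps (x, y) -> (a*x + b*y + e, c*x + d*y + f).
        # The coefficients below are Barnsley's classic fern; any affine IFS
        # works the same way.
        maps = [  # (a, b, c, d, e, f, probability)
            (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
            (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
            (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
            (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
        ]

        x, y, points = 0.0, 0.0, []
        for _ in range(100_000):
            a, b, c, d, e, f, _p = random.choices(maps,
                                                  weights=[m[-1] for m in maps])[0]
            x, y = a * x + b * y + e, c * x + d * y + f
            points.append((x, y))   # scatter-plotting these reveals the attractor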

  16. [Revista de Saúde Pública: forty years of Brazilian scientific production].

    Science.gov (United States)

    Pereira, Júlio Cesar Rodrigues

    2006-08-01

    To recognize the characteristics and path taken by the Revista through analysis of the scientific production it has published over the period from 1967 to 2005. Scientometric methods were used to analyze reference data on the articles published in the Revista, retrieved from the databases ISI/Thomson Scientific (Web of Science), National Library of Medicine (PubMed) and Scientific Electronic Library Online (SciELO). The Revista is the only Brazilian publication in the field of public health that is indexed by ISI/Thomson Scientific. It is prominent as a medium for publishing Brazilian scientific production in public health and is displaying a geometric increase in publication and citation, with annual rates of 4.4% and 12.7%, respectively. The mean number of authors per paper has risen from 2 to 3.5 over recent years. Although original research articles predominate, the numbers of reviews, multicenter studies, clinical trials and validation studies have been increasing. The number of articles published in foreign languages has also increased, accounting for 13% of the total; the leading countries originating these are the UK, USA, Argentina and Mexico. The number and diversity of journals citing the Revista have been increasing, many of them non-Brazilian. The distribution of papers per author shows a good fit to Lotka's Law, but the parameters suggest greater concentration and less dispersion than would be expected. Among the fields of interest of published papers, the following topics account for more than 50% of the total volume: infectious-parasitic diseases and vectors; health promotion, policies and administration; and epidemiology, surveillance and disease control. The Revista shows great dynamism, without signs of abating or reaching a plateau any time soon. There are signs of progressively increasing complexity in the studies published, and more multidisciplinary work. The Revista seems to be widening its outreach and recognition, while remaining faithful to the field of
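
    The Lotka's Law fit mentioned in this record is typically obtained by least squares on log-log axes. A sketch with synthetic authorship counts (not the Revista data):

        import numpy as np

        def fit_lotka(papers_per_author):
            # Lotka's Law: the number of authors with n papers ~ C * n**(-k).
            # Fit k and C by least squares on log-log axes.
            n, counts = np.unique(papers_per_author, return_counts=True)
            slope, intercept = np.polyfit(np.log(n), np.log(counts), 1)
            return -slope, np.exp(intercept)  # exponent k (~2 classically), constant C

        rng = np.random.default_rng(1)
        sample = rng.zipf(2.0, size=5000)     # synthetic papers-per-author counts
        k, C = fit_lotka(sample[sample <= 50])
        print(k, C)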

  17. Consistent and Clear Reporting of Results from Diverse Modeling Techniques: The A3 Method

    Directory of Open Access Journals (Sweden)

    Scott Fortmann-Roe

    2015-08-01

    Full Text Available The measurement and reporting of model error is of basic importance when constructing models. Here, a general method and an R package, A3, are presented to support the assessment and communication of the quality of a model fit along with metrics of variable importance. The presented method is accurate, robust, and adaptable to a wide range of predictive modeling algorithms. The method is described along with case studies and a usage guide. It is shown how the method can be used to obtain more accurate models for prediction and how this may simultaneously lead to altered inferences and conclusions about the impact of potential drivers within a system.

  18. A method to determine a viable energy business model

    NARCIS (Netherlands)

    Austin D' Souza

    2014-01-01

    A business model describes how business is carried out; it includes a description of the stakeholders, their roles, value proposition for the stakeholders involved, and the underlying logic of value creation, value exchange, and value capture at an organisational level, and at a network level

  19. Essays on Quantitative Marketing Models and Monte Carlo Integration Methods

    NARCIS (Netherlands)

    R.D. van Oest (Rutger)

    2005-01-01

    The last few decades have led to an enormous increase in the availability of large, detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for
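
    The Monte Carlo integration methods such essays build on reduce to estimating an integral as a sample mean. A minimal, generic sketch (not tied to the thesis's marketing models):

        import numpy as np

        # Plain Monte Carlo: integral of f over [a, b] ~ (b - a) * mean(f(U)),
        # with U uniform on [a, b]; the standard error shrinks as 1/sqrt(N).
        rng = np.random.default_rng(42)
        f = lambda x: np.exp(-x ** 2)

        a, b, N = 0.0, 2.0, 100_000
        u = rng.uniform(a, b, N)
        vals = (b - a) * f(u)
        estimate = vals.mean()
        stderr = vals.std(ddof=1) / np.sqrt(N)
        print(estimate, stderr)  # exact value is sqrt(pi)/2 * erf(2) ~ 0.8821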

  20. Methods of Detecting Outliers in A Regression Analysis Model ...

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-06-01

    This is the type of linear regression that involves only two variables, one independent and one dependent, plus the random error term. The simple linear regression model assumes that there is a straight-line (linear) relationship between the dependent variable Y and the independent variable X. This can be …
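
    Although the snippet breaks off, a standard outlier diagnostic for the simple linear regression model it describes is the internally studentized residual. A sketch, with a deliberately contaminated observation:

        import numpy as np

        def studentized_residuals(x, y):
            # Internally studentized residuals for y = b0 + b1*x + e;
            # |r_i| > 2 is a common rule-of-thumb outlier flag.
            X = np.column_stack([np.ones_like(x), x])
            H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat matrix
            resid = y - H @ y
            s2 = resid @ resid / (len(y) - 2)          # residual variance
            return resid / np.sqrt(s2 * (1 - np.diag(H)))

        x = np.arange(10.0)
        y = 2.0 + 0.5 * x + np.array([0, .1, -.2, .1, 0, -.1, .2, 5.0, -.1, 0])
        r = studentized_residuals(x, y)
        print(np.where(np.abs(r) > 2)[0])  # flags the contaminated observation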

  1. Modelling regional land use: the quest for the appropriate method

    NARCIS (Netherlands)

    Soltani Largani, A.

    2013-01-01

    The demand for spatially-explicit predictions of regional crop-yield patterns is increasing. One approach for assessing, a priori, the spatial yield patterns of alternative and/or future scenarios at the regional scale is the application of mechanistic crop growth simulation models (CGSMs) (e.g.

  2. Overview of Computer Simulation Modeling Approaches and Methods

    Science.gov (United States)

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  3. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    Science.gov (United States)

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
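
    For two sources and a single isotope, the mixing model reduces to a linear system with a closed-form solution. A sketch with hypothetical signatures:

        def two_source_mix(d_mix, d_a, d_b):
            # Two-source, one-isotope mixing model:
            #   d_mix = fa*d_a + fb*d_b,  fa + fb = 1
            #   =>  fa = (d_mix - d_b) / (d_a - d_b)
            fa = (d_mix - d_b) / (d_a - d_b)
            return fa, 1.0 - fa

        # hypothetical delta-15N signatures (per mil)
        print(two_source_mix(d_mix=8.0, d_a=4.0, d_b=12.0))  # -> (0.5, 0.5)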

  4. A new analytical modeling method for photovoltaic solar cells based

    African Journals Online (AJOL)

    Zieba Falama R, Dadjé A, Djongyang N and Doka S.Y

    2016-05-01

    … network. However, its exploitation requires the design and implementation of a production system, which also requires proper sizing in order to avoid energy losses or shortfalls. The evaluation of the maximum power produced by a PV generator is very important for sizing a PV system. Several models have been …
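
    The record is truncated, but the standard analytical description of a PV cell that such work builds on is the single-diode equation, which is implicit in the current. A sketch solving it by bisection, with hypothetical parameters:

        import math

        def single_diode_current(V, Iph=8.0, I0=1e-9, n=1.3,
                                 Rs=0.2, Rsh=300.0, T=298.15):
            # Single-diode PV model (all parameters hypothetical):
            #   I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
            # The residual g(I) is strictly decreasing, so bisection is robust.
            k, q = 1.380649e-23, 1.602176634e-19
            Vt = k * T / q
            def g(I):
                return (Iph - I0 * (math.exp((V + I * Rs) / (n * Vt)) - 1)
                        - (V + I * Rs) / Rsh - I)
            lo, hi = -Iph, Iph           # bracket the root, then bisect
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                if g(mid) > 0:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        voltages = [0.05 * i for i in range(14)]        # 0 .. 0.65 V
        powers = [v * single_diode_current(v) for v in voltages]
        print(max(powers))  # maximum power point of the hypothetical cell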

  5. Analysis of spin and gauge models with variational methods

    International Nuclear Information System (INIS)

    Dagotto, E.; Masperi, L.; Moreo, A.; Della Selva, A.; Fiore, R.

    1985-01-01

    Since independent-site (link) or independent-link (plaquette) variational states enhance the order or the disorder, respectively, in the treatment of spin (gauge) models, we prove that mixed states are able to improve the critical coupling while giving the qualitatively correct behavior of the relevant parameters

  6. Mapping research questions about translation to methods, measures, and models

    NARCIS (Netherlands)

    Berninger, V.; Rijlaarsdam, G.; Fayol, M.L.; Fayol, M.; Alamargot, D.; Berninger, V.W.

    2012-01-01

    About the book: Translation of cognitive representations into written language is one of the most important processes in writing. This volume provides a long-awaited updated overview of the field. The contributors discuss each of the commonly used research methods for studying translation; theorize

  7. A Survey of Procedural Methods for Terrain Modelling

    NARCIS (Netherlands)

    Smelik, R.M.; Kraker, J.K. de; Groenewegen, S.A.; Tutenel, T.; Bidarra, R.

    2009-01-01

    Procedural methods are a promising but underused alternative to manual content creation. Commonly heard drawbacks are the randomness of and the lack of control over the output and the absence of integrated solutions, although more recent publications increasingly address these issues. This paper

  8. Model films of cellulose. I. Method development and initial results

    NARCIS (Netherlands)

    Gunnars, S.; Wågberg, L.; Cohen Stuart, M.A.

    2002-01-01

    This report presents a new method for the preparation of thin cellulose films. NMMO (N-methylmorpholine-N-oxide) was used to dissolve cellulose and addition of DMSO (dimethyl sulfoxide) was used to control viscosity of the cellulose solution. A thin layer of the cellulose solution is spin-coated

  9. Numerical modeling of isothermal compositional grading by convex splitting methods

    KAUST Repository

    Li, Yiteng

    2017-04-09

    In this paper, an isothermal compositional grading process is simulated based on convex splitting methods with the Peng-Robinson equation of state. We first present a new form of the gravity/chemical equilibrium condition by minimizing the total energy, which consists of Helmholtz free energy and gravitational potential energy, and incorporating Lagrange multipliers for mass conservation. The time-independent equilibrium equations are transformed into a system of transient equations as our solution strategy. It is proved that our time-marching scheme is unconditionally energy stable by the semi-implicit convex splitting method, in which the convex part of the Helmholtz free energy and its derivative are treated implicitly and the concave parts are treated explicitly. With a relaxation factor controlling the Newton iteration, our method is able to converge to a solution with satisfactory accuracy if a good initial estimate of mole compositions is provided. More importantly, it helps us automatically split the unstable single phase into two phases, determine the existence of a gas-oil contact (GOC) and locate its position if the GOC does exist. A number of numerical examples are presented to show the performance of our method.
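
    The convex-splitting idea is easiest to see on a scalar gradient flow. The sketch below applies an Eyre-type splitting to a double-well potential; it illustrates the general technique only, not the paper's Peng-Robinson system.

        # Convex-splitting time stepping for the scalar gradient flow
        #   du/dt = -F'(u),  F(u) = (u**2 - 1)**2 / 4
        #                        = [u**4/4] (convex) - [u**2/2] (concave) + 1/4.
        # Eyre's scheme treats the convex part implicitly, the concave explicitly:
        #   (u_new - u_old)/dt = -u_new**3 + u_old
        # which is energy stable for any dt > 0.

        def convex_splitting_step(u_old, dt):
            # solve u_new + dt*u_new**3 = u_old + dt*u_old by Newton iteration
            rhs = u_old + dt * u_old
            u = u_old
            for _ in range(50):
                f = u + dt * u ** 3 - rhs
                u -= f / (1.0 + 3.0 * dt * u ** 2)
            return u

        u, dt = 0.3, 10.0          # a huge time step, still stable
        for _ in range(20):
            u = convex_splitting_step(u, dt)
        print(u)                   # relaxes toward the energy minimum u = 1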

  10. Review of methods for modelling forest fire risk and hazard

    African Journals Online (AJOL)

    user

    … monitoring methods because of its repetitive and consistent coverage over large areas of land (Martin et al., 1999). … situations however, precipitation does not follow this rule. A similar use of the elevation factor for forest fire estimation … point in the formulation of an emergency plan. In carrying out the risk assessment it will be …

  11. The forty years of vermicular graphite cast iron development in China (Part I)

    Directory of Open Access Journals (Sweden)

    CHEN Zheng-de

    2007-05-01

    Full Text Available In China, the research and development of vermicular graphite cast iron (VGCI) as a new type of engineering material started in the same period as in other developed countries; however, its actual industrial application was even earlier. In China, deep and intensive studies on VGCI began as early as the 1960s. According to incomplete statistics to date, more than 600 papers on VGCI have been published by Chinese researchers and scholars at national and international conferences and in technical journals. More than ten types of production methods and more than thirty types of treatment alloy have been studied. Formulae for calculating the critical addition of treatment alloy required to produce VGCI have been put forward, and mechanisms explaining the formation of dross during treatment were proposed. The casting properties, metallographic structure, mechanical and physical properties and machining performance of VGCI, as well as the relationships between them, have all been studied in detail. The Chinese Standards for VGCI and VGCI metallographic structure have been issued. In China, the primary crystallization of VGCI has been studied by many researchers and scholars. The properties of VGCI can be improved by heat treatment and the addition of alloying elements, enabling its applications to be further expanded. Hundreds of kinds of VGCI castings have been produced and used in vehicles, engines, mining equipment, metallurgical products serviced under alternating thermal load, machinery, hydraulic components, textile machine parts and military applications. The heaviest VGCI casting produced is 38 tons and the lightest is only 1 kg. Currently, the annual production of VGCI in China is about 200 000 tons. The majority of castings are made from cupola iron without pre-treatment; however, they are also produced from electric furnaces and by duplex melting in cupola-electric furnace or blast furnace-electric furnace combinations.

  12. The forty years of vermicular graphite cast iron development in China (Part III)

    Directory of Open Access Journals (Sweden)

    QIU Han-quan

    2007-11-01


  13. The forty years of vermicular graphite cast iron development in China (Part 2)

    Directory of Open Access Journals (Sweden)

    CHEN Zheng-de

    2007-08-01


  14. Runners in their forties dominate ultra-marathons from 50 to 3,100 miles

    Science.gov (United States)

    Zingg, Matthias Alexander; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald; Knechtle, Beat

    2014-01-01

    OBJECTIVES: This study investigated performance trends and the age of peak running speed in ultra-marathons from 50 to 3,100 miles. METHODS: The running speed and age of the fastest competitors in 50-, 100-, 200-, 1,000- and 3,100-mile events held worldwide from 1971 to 2012 were analyzed using single- and multi-level regression analyses. RESULTS: The number of events and competitors increased exponentially in 50- and 100-mile events. For the annual fastest runners, women improved in 50-mile events, but not men. In 100-mile events, both women and men improved their performance. In 1,000-mile events, men became slower. For the annual top ten runners, women improved in 50- and 100-mile events, whereas the performance of men remained unchanged in 50- and 3,100-mile events but improved in 100-mile events. The age of the annual fastest runners was approximately 35 years for both women and men in 50-mile events and approximately 35 years for women in 100-mile events. For men, the age of the annual fastest runners in 100-mile events was higher at 38 years. For the annual fastest runners of 1,000-mile events, the women were approximately 43 years of age, whereas for men, the age increased to 48 years of age. For the annual fastest runners of 3,100-mile events, the age in women decreased to 35 years and was approximately 39 years in men. CONCLUSION: The running speed of the fastest competitors increased for both women and men in 100-mile events but only for women in 50-mile events. The age of peak running speed increased in men with increasing race distance to approximately 45 years in 1,000-mile events, whereas it decreased to approximately 39 years in 3,100-mile events. In women, the upper age of peak running speed increased to approximately 51 years in 3,100-mile events. PMID:24626948

  15. Reconstructing Holocene geomagnetic field variation: new methods, models and implications

    Science.gov (United States)

    Nilsson, Andreas; Holme, Richard; Korte, Monika; Suttie, Neil; Hill, Mimi

    2014-07-01

    Reconstructions of the Holocene geomagnetic field and how it varies on millennial timescales are important for understanding processes in the core but may also be used to study long-term solar-terrestrial relationships and as relative dating tools for geological and archaeological archives. Here, we present a new family of spherical harmonic geomagnetic field models spanning the past 9000 yr based on magnetic field directions and intensity stored in archaeological artefacts, igneous rocks and sediment records. A new modelling strategy introduces alternative data treatments with a focus on extracting more information from sedimentary data. To reduce the influence of a few individual records, all sedimentary data are resampled in 50-yr bins, which also means that more weight is given to archaeomagnetic data during the inversion. The sedimentary declination data are treated as relative values and adjusted iteratively based on prior information. Finally, an alternative way of treating the sediment data chronologies has enabled us both to assess the likely range of age uncertainties, often up to and possibly exceeding 500 yr, and to adjust the timescale of each record based on comparisons with predictions from a preliminary model. As a result of the data adjustments, power has been shifted from quadrupole and octupole to higher degrees compared with previous Holocene geomagnetic field models. We find evidence for dominantly westward drift of northern high latitude high intensity flux patches at the core mantle boundary for the last 4000 yr. The new models also show intermittent occurrence of reversed flux at the edge of or inside the inner core tangent cylinder, possibly originating from the equator.

  16. Do different methods of modeling statin treatment effectiveness influence the optimal decision?

    NARCIS (Netherlands)

    B.J.H. van Kempen (Bob); B.S. Ferket (Bart); A. Hofman (Albert); S. Spronk (Sandra); E.W. Steyerberg (Ewout); M.G.M. Hunink (Myriam)

    2012-01-01

    Purpose: Modeling studies that evaluate statin treatment for the prevention of cardiovascular disease (CVD) use different methods to model the effect of statins. The aim of this study was to evaluate the impact of using different modeling methods on the optimal decision found in such

  17. A Generic Bilevel Formalism for Unifying and Extending Model Reduction Methods

    Science.gov (United States)

    2000-09-29

    An abstract, algebraic bilevel version of conventional multigrid methods has been developed that formally unifies and extends the reduced basis … method, plays the role of a fine grid model. Conventional multigrid methods can be thought of as an extension of the coarse grid model beyond the

  18. Comparison of model reference and map based control method for vehicle stability enhancement

    NARCIS (Netherlands)

    Baek, S.; Son, M.; Song, J.; Boo, K.; Kim, H.

    2012-01-01

    A map-based control method to improve vehicle lateral stability is proposed in this study and compared with the conventional method, a model-reference controller. A model-reference controller determines the compensated yaw moment using the sliding mode method, but the proposed map-based

  19. Using the QUAIT Model to Effectively Teach Research Methods Curriculum to Master's-Level Students

    Science.gov (United States)

    Hamilton, Nancy J.; Gitchel, Dent

    2017-01-01

    Purpose: To apply Slavin's model of effective instruction to teaching research methods to master's-level students. Methods: Barriers to the scientist-practitioner model (student research experience, confidence, and utility value pertaining to research methods as well as faculty research and pedagogical incompetencies) are discussed. Results: The…

  20. Quick Method for Aeroelastic and Finite Element Modeling of Wind Turbine Blades

    DEFF Research Database (Denmark)

    Bennett, Jeffrey; Bitsche, Robert; Branner, Kim

    2014-01-01

    In this paper a quick method for modeling composite wind turbine blades is developed for aeroelastic simulations and finite element analyses. The method reduces the time to model a wind turbine blade by automating the creation of a shell finite element model and running it through a cross … the user has two models of the same blade, one for performing a structural finite element analysis and one for aeroelastic simulations. Here, the method is implemented and applied to reverse engineer a structural layup for the NREL 5MW reference blade. The model is verified by comparing natural …

  1. A service based estimation method for MPSoC performance modelling

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand

    2008-01-01

    This paper presents an abstract service-based estimation method for MPSoC performance modelling which allows fast, cycle-accurate design space exploration of complex architectures, including multiprocessor configurations, at a very early stage in the design phase. The modelling method uses a service-oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced …

  2. Forty years of improvements in European air quality: regional policy-industry interactions with global impacts

    Directory of Open Access Journals (Sweden)

    M. Crippa

    2016-03-01

    … role that technology has played in reducing emissions in 2010. However, stagnation of energy consumption at 1970 levels, but with the 2010 fuel mix and energy efficiency, and assuming current (year 2010) technology and emission control standards, would have lowered today's NOx emissions by ca. 38 %, SO2 by 50 % and PM2.5 by 12 % in Europe. A reduced-form chemical transport model is applied to calculate regional and global levels of aerosol and ozone concentrations and to assess the associated impact of air quality improvements on human health and crop yield loss, showing substantial impacts of EU technologies and standards inside as well as outside Europe. We assess that the interplay of policy and technological advance in Europe had substantial benefits in Europe, but also led to an important improvement of particulate matter air quality in other parts of the world.

  3. Forty years experience in developing and using rainfall simulators under tropical and Mediterranean conditions

    Science.gov (United States)

    Pla-Sentís, Ildefonso; Nacci, Silvana

    2010-05-01

    Rainfall simulation has been used as a practical tool for evaluating the interaction of falling water drops with the soil surface, to measure both the stability of soil aggregates against drop impact and water infiltration rates. In both cases the aim is to simulate the effects of natural rainfall, which usually occurs at very different, variable and erratic rates and intensities. One of the main arguments against the use of rainfall simulators is the difficulty of reproducing the size, final velocity and kinetic energy of the drops in natural rainfall. Since the early 1970s we have been developing and using different kinds of rainfall simulators, both at laboratory and field levels, under tropical and Mediterranean soil and climate conditions, in flat and sloping lands. They have been mainly used to evaluate the relative effects of different land uses and management practices, including different cropping systems, tillage practices, surface soil conditioning, surface covers, etc., on soil water infiltration, runoff and erosion. Our experience is that in any case it is impossible to reproduce the variable size distribution and terminal velocity of raindrops, and the variable changes in intensity of natural storms, under a particular climate condition. In spite of this, the use of rainfall simulators can provide very good information which, if properly interpreted in relation to each particular condition (land and crop management, rainfall characteristics, measurement conditions, etc.), may be used as one of the parameters for deducing and modelling the soil water balance and soil moisture regime under different land use and management and variable climate conditions. Due to the better control of the intensity of simulated rainfall and of the size of water drops, and the possibility of making repeated measurements under very variable soil and land conditions, both in the laboratory and especially in the field, the better results have been

  4. Radiative modelling by the zonal method and WSGG model in inhomogeneous axisymmetric cylindrical enclosure

    International Nuclear Information System (INIS)

    Méchi, Rachid; Farhat, Habib; Said, Rachid

    2016-01-01

    Nongray radiation calculations are carried out for a case problem available in the literature. The problem is a non-isothermal and inhomogeneous CO2–H2O–N2 gas mixture confined within an axisymmetric cylindrical furnace. The numerical procedure is based on the zonal method associated with the weighted-sum-of-gray-gases (WSGG) model. The effect of the wall emissivity on the heat flux losses is discussed. It is shown that this property strongly affects the furnace efficiency and that the most important heat fluxes are those leaving through the circumferential boundary. The numerical procedure adopted in this work is found to be effective and may be relied on to simulate coupled turbulent combustion-radiation in fired furnaces. (paper)
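
    At the heart of the WSGG model used in this procedure is the expression of total emissivity as a weighted sum of a few gray gases. The sketch below illustrates that formula; the coefficients are placeholders for illustration, not a published WSGG set.

        import numpy as np

        def wsgg_emissivity(T, pL, kappa, b):
            # WSGG total emissivity: eps = sum_i a_i(T) * (1 - exp(-kappa_i * pL)),
            # with polynomial temperature weights a_i(T) = sum_j b[i][j] * T**j.
            # kappa in 1/(atm*m); pL = partial pressure * path length (atm*m).
            a = np.array([np.polyval(bi[::-1], T) for bi in b])  # b[i] = [c0, c1, ...]
            return float(np.sum(a * (1.0 - np.exp(-np.array(kappa) * pL))))

        kappa = [0.4, 7.0, 80.0]                          # placeholder absorption coeffs
        b = [[0.35, 5e-5], [0.25, -2e-5], [0.12, -1e-5]]  # placeholder a_i(T) = c0 + c1*T
        print(wsgg_emissivity(T=1200.0, pL=0.3, kappa=kappa, b=b))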

  5. Improved Model Calibration From Genetically Adaptive Multi-Method Search

    Science.gov (United States)

    Vrugt, J. A.; Robinson, B. A.

    2006-12-01

    Evolutionary optimization is a subject of intense interest in many fields of study, including computational chemistry, biology, bio-informatics, economics, computational science, geophysics and environmental science. The goal is to determine values for model parameters or state variables that provide the best possible solution to a predefined cost or objective function, or a set of optimal trade-off values in the case of two or more conflicting objectives. However, locating optimal solutions often turns out to be painstakingly tedious, or even completely beyond current or projected computational capacity. Here we present an innovative concept of genetically adaptive multi-algorithm optimization. Benchmark results show that this new optimization technique is significantly more efficient than current state-of-the-art evolutionary algorithms, approaching a factor of ten improvement for the more complex, higher dimensional optimization problems. Our new algorithm provides new opportunities for solving previously intractable environmental model calibration problems.

  6. Autonomous guided vehicles methods and models for optimal path planning

    CERN Document Server

    Fazlollahtabar, Hamed

    2015-01-01

      This book provides readers with extensive information on path planning optimization for both single and multiple Autonomous Guided Vehicles (AGVs), and discusses practical issues involved in advanced industrial applications of AGVs. After discussing previously published research in the field and highlighting the current gaps, it introduces new models developed by the authors with the goal of reducing costs and increasing productivity and effectiveness in the manufacturing industry. The new models address the increasing complexity of manufacturing networks, due for example to the adoption of flexible manufacturing systems that involve automated material handling systems, robots, numerically controlled machine tools, and automated inspection stations, while also considering the uncertainty and stochastic nature of automated equipment such as AGVs. The book discusses and provides solutions to important issues concerning the use of AGVs in the manufacturing industry, including material flow optimization with A...

  7. Zero modes method and form factors in quantum integrable models

    Directory of Open Access Journals (Sweden)

    S. Pakuliak

    2015-04-01

    Full Text Available We study integrable models solvable by the nested algebraic Bethe ansatz and possessing a GL(3)-invariant R-matrix. Assuming that the monodromy matrix of the model can be expanded into series with respect to the inverse spectral parameter, we define zero modes of the monodromy matrix entries as the first nontrivial coefficients of this series. Using these zero modes we establish new relations between form factors of the elements of the monodromy matrix. We prove that all of them can be obtained from the form factor of a diagonal matrix element in special limits of Bethe parameters. As a result we obtain determinant representations for form factors of all the entries of the monodromy matrix.

  8. The spectral cell method in nonlinear earthquake modeling

    Science.gov (United States)

    Giraldo, Daniel; Restrepo, Doriam

    2017-12-01

    This study examines the applicability of the spectral cell method (SCM) to compute the nonlinear earthquake response of complex basins. SCM combines fictitious-domain concepts with the spectral version of the finite element method to solve the wave equations in heterogeneous geophysical domains. Nonlinear behavior is considered by implementing the Mohr-Coulomb and Drucker-Prager yielding criteria. We illustrate the performance of SCM with numerical examples of nonlinear basins exhibiting physically and computationally challenging conditions. The numerical experiments are benchmarked against overkill solutions and against MIDAS GTS NX, a finite element package for geotechnical applications, and show good agreement between the two sets of results. Traditional spectral element implementations allow points per wavelength as low as PPW = 4.5 for high-order polynomials; our findings show that in the presence of nonlinearity, high-order polynomials (p ≥ 3) require mesh resolutions of PPW ≥ 10 to keep displacement errors below 10%.
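
    The abstract names the yield criteria without formulas; a hedged sketch of the Drucker-Prager check on a stress state is below (the material constants alpha and k are inputs, often fitted to Mohr-Coulomb cohesion and friction angle, and sign conventions vary between codes):

```python
import numpy as np

def drucker_prager(stress, alpha, k):
    """Return f = alpha*I1 + sqrt(J2) - k; f < 0 means the state is elastic.

    stress: 3x3 Cauchy stress tensor (compression-negative convention assumed).
    alpha, k: material constants.
    """
    i1 = np.trace(stress)                    # first stress invariant
    dev = stress - i1 / 3.0 * np.eye(3)      # deviatoric part
    j2 = 0.5 * np.tensordot(dev, dev)        # second deviatoric invariant
    return alpha * i1 + np.sqrt(j2) - k

sigma = np.diag([-10.0, -12.0, -15.0])  # MPa, illustrative stress state
print(drucker_prager(sigma, alpha=0.2, k=5.0))
```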

  9. Modeling of Airfoil Trailing Edge Flap with Immersed Boundary Method

    DEFF Research Database (Denmark)

    Zhu, Wei Jun; Shen, Wen Zhong; Sørensen, Jens Nørkær

    2011-01-01

    The present work considers incompressible flow over a 2D airfoil with a deformable trailing edge. The aerodynamic characteristics of an airfoil with a trailing edge flap are numerically investigated using computational fluid dynamics. A novel hybrid immersed boundary (IB) technique is applied to simulate the moving part of the trailing edge. Over the main fixed part of the airfoil the Navier-Stokes (NS) equations are solved using a standard body-fitted finite volume technique, whereas the moving trailing edge flap is simulated with the immersed boundary method on a curvilinear mesh. The obtained results show that the hybrid approach is an efficient and accurate method for solving turbulent flows past airfoils with a trailing edge flap, and that flow control using a trailing edge flap is an efficient way to regulate the aerodynamic loading on airfoils.

  10. Model-independent determination of dissociation energies: method and applications

    International Nuclear Information System (INIS)

    Vogel, Manuel; Hansen, Klavs; Herlert, Alexander; Schweikhard, Lutz

    2003-01-01

    A number of methods are available for extracting the dissociation energies of polyatomic particles. Many of these techniques relate the rate of disintegration at a known excitation energy to the value of the dissociation energy. However, such a determination is susceptible to systematic uncertainties, mainly due to the unknown thermal properties of the particles and the potential existence of 'dark' channels, such as radiative cooling. These problems can be avoided with a recently developed procedure, which applies energy-dependent reactions of the decay products as an uncalibrated thermometer. It thus allows a direct measurement of dissociation energies, without any assumptions about the properties of the system or the details of the disintegration process. The experiments have been performed in a Penning trap, where both rate constants and branching ratios have been measured. The dissociation energies determined with different versions of the method yield identical values within a small uncertainty.
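
    The rate-to-energy relation such techniques exploit is typically of Arrhenius form; schematically (the paper's finite-heat-bath corrections are not given in the abstract):

```latex
k(E) \approx A \exp\!\left( - \frac{D}{k_{\mathrm{B}}\, T_{\mathrm{e}}(E)} \right),
```

    where k(E) is the decay rate at excitation energy E, D the dissociation energy, and T_e(E) the emission temperature; the unknown prefactor A and the thermal properties entering T_e are precisely the systematic uncertainties the trap-based procedure avoids.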

  11. Method of modeling transmissions for real-time simulation

    Science.gov (United States)

    Hebbale, Kumaraswamy V.

    2012-09-25

    A transmission modeling system includes an in-gear module that determines an in-gear acceleration when a vehicle is in gear. A shift module determines a shift acceleration based on a clutch torque when the vehicle is shifting between gears. A shaft acceleration determination module determines a shaft acceleration based on at least one of the in-gear acceleration and the shift acceleration.
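
    The patent abstract describes a module structure rather than equations; a minimal structural sketch of that arrangement (class names, signals, and the toy formulas are invented for illustration, not taken from the patent) is:

```python
class TransmissionModel:
    """Toy arrangement of the three modules named in the abstract."""

    def in_gear_acceleration(self, engine_torque, gear_ratio, inertia):
        # In-gear module: acceleration from engine torque through the engaged ratio
        return engine_torque * gear_ratio / inertia

    def shift_acceleration(self, clutch_torque, inertia):
        # Shift module: acceleration governed by clutch torque during a shift
        return clutch_torque / inertia

    def shaft_acceleration(self, shifting, **signals):
        # Shaft acceleration determination module: select which estimate applies
        if shifting:
            return self.shift_acceleration(signals["clutch_torque"], signals["inertia"])
        return self.in_gear_acceleration(
            signals["engine_torque"], signals["gear_ratio"], signals["inertia"]
        )

model = TransmissionModel()
print(model.shaft_acceleration(False, engine_torque=150.0, gear_ratio=3.5, inertia=2.0))
```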

  12. A narrative review of research impact assessment models and methods.

    Science.gov (United States)

    Milat, Andrew J; Bauman, Adrian E; Redman, Sally

    2015-03-18

    Research funding agencies continue to grapple with assessing research impact. Theoretical frameworks are useful tools for describing and understanding research impact. The purpose of this narrative literature review was to synthesize evidence that describes processes and conceptual models for assessing policy and practice impacts of public health research. The review involved keyword searches of electronic databases, including MEDLINE, CINAHL, PsycINFO, EBM Reviews, and Google Scholar in July/August 2013. Review search terms included 'research impact', 'policy and practice', 'intervention research', 'translational research', 'health promotion', and 'public health'. The review included theoretical and opinion pieces, case studies, descriptive studies, frameworks and systematic reviews describing processes, and conceptual models for assessing research impact. The review was conducted in two phases: initially, abstracts were retrieved and assessed against the review criteria, followed by the retrieval and assessment of full papers against the review criteria. Thirty-one primary studies and one systematic review met the review criteria, with 88% of studies published since 2006. Studies comprised assessments of the impacts of a wide range of health-related research, including basic and biomedical research, clinical trials, health service research, as well as public health research. Six studies had an explicit focus on assessing impacts of health promotion or public health research and one had a specific focus on intervention research impact assessment. A total of 16 different impact assessment models were identified, with the 'payback model' the most frequently used conceptual framework. Typically, impacts were assessed across multiple dimensions using mixed methodologies, including publication and citation analysis, interviews with principal investigators, peer assessment, case studies, and document analysis. The vast majority of studies relied on principal investigator

  13. Design and implementation of an indoor modeling method through crowdsensing

    OpenAIRE

    Reichelt, Daniel

    2017-01-01

    While automatic modeling and mapping of outdoor environments is well established, the indoor equivalent, the automated generation of building floor plans, poses a challenge. In fact, outdoor localization is commonly available and inexpensive through the existing satellite positioning systems, such as GPS and Galileo. However, these technologies are not applicable in indoor environments, since a direct line of sight to the satellites orbiting the globe is required. As a substitute, the techn...

  14. 1-g model loading tests: methods and results

    Czech Academy of Sciences Publication Activity Database

    Feda, Jaroslav

    1999-01-01

    Vol. 2, No. 4 (1999), pp. 371-381. ISSN 1436-6517. [Int. Conf. on Soil-Structure Interaction in Urban Civil Engineering. Darmstadt, 08.10.1999-09.10.1999] R&D Projects: GA MŠk OC C7.10 Keywords: shallow foundation * model tests * sandy subsoil * bearing capacity * subsoil failure * volume deformation Subject RIV: JM - Building Engineering

  15. Models, measures, and methods: variability in aging research.

    Science.gov (United States)

    Miller, Edward Alan; Weissert, William G

    2003-01-01

    The purpose of this paper is to review the models and measurement strategies used in studies evaluating the predictors of nursing home placement, hospitalization, functional impairment, and mortality. To do so, we examine 167 multivariate equations abstracted from 78 longitudinal studies published between 1985 and 1998 that assess the risk factors of one or more adverse outcomes. We find that both comparatively straightforward concepts such as age and income and widely used scales such as activities of daily living and the Short Portable Mental Status Questionnaire display considerable variability in operationalization and coding. We also find that few researchers employ explicit conceptual models to assist with variable choice, and that some predictors (demographics, physical and cognitive functioning) were studied much more frequently than others (service, market, and policy characteristics). Variability in measurement highlights the lack of standardization in this area of aging research and leaves room for improvements in validity and reliability. Limited use of conceptual models has led researchers to include some predictors in their analyses to the exclusion of others.

  16. Advanced physical models and monitoring methods for in situ bioremediation

    Energy Technology Data Exchange (ETDEWEB)

    Simon, K.; Chalmer, P.

    1996-05-30

    Numerous reports have indicated that contamination at DOE facilities is widespread and pervasive. Existing technology is often too costly or ineffective in remediating these contamination problems. An effective method to address one class of contamination, petroleum hydrocarbons, is in situ bioremediation. This project was designed to provide tools and approaches for increasing the reliability of in situ bioremediation. An example of the recognition within DOE of the need to develop these tools is the FY-1995 Technology Development Needs Summary of the Office of Technology Development of the US DOE. This document identifies specific needs addressed by this research. For example, Section 3.3 Need Statement IS-3 identifies the need for a "rapid method to detect in situ biodegradation products." Also, BW-I identifies the need to recognize boundaries between clean and contaminated materials and soils; metabolic activity could identify these boundaries. Measuring rates of in situ microbial activity is critical to the fundamental understanding of subsurface microbiology and in selecting natural attenuation as a remediation option. Given the complexity and heterogeneity of subsurface environments, a significant cost incurred during bioremediation is the characterization of microbial activity, in part because so many intermediate end points (biomass, gene frequency, laboratory measurements of activity, etc.) must be used to infer in situ activity. A fast, accurate, real-time, and cost-effective method is needed to determine the success of bioremediation at DOE sites.

  17. A Data Pre-Processing Model for the Topsis Method

    Directory of Open Access Journals (Sweden)

    Kobryń Andrzej

    2016-12-01

    Full Text Available TOPSIS is one of the most popular methods of multi-criteria decision making (MCDM). Its fundamental role is to rank the chosen alternatives by their distance from the ideal and the negative-ideal solution. Three primary versions of the TOPSIS method are distinguished: classical, interval, and fuzzy, whose calculation algorithms are adjusted to the character of the input ratings of the decision-making alternatives (real numbers, interval data, or fuzzy numbers). Various specialist publications describe the use of particular versions of the TOPSIS method in the decision-making process; the fuzzy version is particularly popular. It should be noticed, however, that depending on the character of the accepted criteria, the ratings of the alternatives can be heterogeneous. The present paper suggests a way of proceeding when the set of criteria covers criteria characteristic of each of the mentioned versions of TOPSIS, so that the ratings of the alternatives are of mixed character. The calculation procedure is illustrated by an adequate numerical example.
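
    Since the paper builds on the classical algorithm, a compact sketch of classical TOPSIS may help orient the reader: vector normalization, weighting, then ranking by relative closeness to the ideal solution. The decision matrix and weights below are made up.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Classical TOPSIS: rank alternatives by closeness to the ideal solution.

    matrix:  alternatives x criteria (real numbers)
    weights: criterion weights summing to 1
    benefit: True for benefit criteria, False for cost criteria
    """
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalization
    v = m * weights                                  # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)       # distance to ideal solution
    d_minus = np.linalg.norm(v - worst, axis=1)      # distance to negative-ideal
    return d_minus / (d_plus + d_minus)              # relative closeness, higher is better

scores = topsis(
    np.array([[250.0, 16.0, 12.0], [200.0, 20.0, 8.0], [300.0, 11.0, 16.0]]),
    weights=np.array([0.5, 0.3, 0.2]),
    benefit=np.array([False, True, True]),           # cost, benefit, benefit criteria
)
print(scores.argsort()[::-1])  # ranking, best first
```

    The relative closeness in the last line is the classical ranking index; the paper's contribution concerns what to do before this step when the ratings are of mixed type.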

  18. New Methods for Air Quality Model Evaluation with Satellite Data

    Science.gov (United States)

    Holloway, T.; Harkey, M.

    2015-12-01

    Despite major advances in the ability of satellites to detect gases and aerosols in the atmosphere, there remains significant, untapped potential to apply space-based data to air quality regulatory applications. Here, we showcase research findings geared toward increasing the relevance of satellite data to support operational air quality management, focused on model evaluation. Particular emphasis is given to nitrogen dioxide (NO2) and formaldehyde (HCHO) from the Ozone Monitoring Instrument aboard the NASA Aura satellite, and evaluation of simulations from the EPA Community Multiscale Air Quality (CMAQ) model. This work is part of the NASA Air Quality Applied Sciences Team (AQAST), and is motivated by ongoing dialog with state and federal air quality management agencies. We present the response of satellite-derived NO2 to meteorological conditions, satellite-derived HCHO:NO2 ratios as an indicator of ozone production regime, and the ability of models to capture these sensitivities over the continental U.S. In the case of NO2-weather sensitivities, we find boundary layer height, wind speed, temperature, and relative humidity to be the most important variables in determining near-surface NO2 variability. CMAQ agreed with relationships observed in satellite data, as well as in ground-based data, over most regions. However, we find that the southwest U.S. is a problem area for CMAQ, where modeled NO2 responses to insolation, boundary layer height, and other variables are at odds with the observations. Our analyses utilize software developed by our team, the Wisconsin Horizontal Interpolation Program for Satellites (WHIPS): a free, open-source program designed to make satellite-derived air quality data more usable. WHIPS interpolates level 2 satellite retrievals onto a user-defined fixed grid, in effect creating a custom-gridded level 3 satellite product. Currently, WHIPS can process the following data products: OMI NO2 (NASA retrieval); OMI NO2 (KNMI retrieval); OMI
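
    WHIPS itself is not reproduced here; its core step, placing level 2 retrievals onto a user-defined fixed grid, can be sketched as a simple bin-average (the real tool additionally handles pixel footprints, weighting, and quality flags):

```python
import numpy as np

def grid_l2(lats, lons, values, lat_edges, lon_edges):
    """Bin-average scattered level 2 retrievals onto a regular level 3 grid."""
    sums = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
    counts = np.zeros_like(sums)
    i = np.digitize(lats, lat_edges) - 1
    j = np.digitize(lons, lon_edges) - 1
    ok = (i >= 0) & (i < sums.shape[0]) & (j >= 0) & (j < sums.shape[1])
    np.add.at(sums, (i[ok], j[ok]), values[ok])
    np.add.at(counts, (i[ok], j[ok]), 1)
    with np.errstate(invalid="ignore"):
        return sums / counts          # NaN where a cell received no pixels

# Illustrative swath of NO2 columns onto a 1-degree grid
lats = np.array([40.2, 40.7, 41.3]); lons = np.array([-89.8, -89.2, -88.6])
vals = np.array([2.1e15, 2.4e15, 1.9e15])
print(grid_l2(lats, lons, vals, np.arange(40, 43), np.arange(-90, -87)))
```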

  19. Integral equation models for image restoration: high accuracy methods and fast algorithms

    International Nuclear Information System (INIS)

    Lu, Yao; Shen, Lixin; Xu, Yuesheng

    2010-01-01

    Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models and hence inevitably impose model errors. We propose to work directly with continuous models for image restoration, aiming at suppressing the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models, which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms of high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images.
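
    In schematic form (reconstructed from the abstract; the paper's kernels and function spaces are more specific), the out-of-focus model is a Fredholm integral equation of the first kind, and the two regularizations convert it to equations of the second kind:

```latex
(Ku)(x) = \int_{\Omega} k(x,y)\, u(y)\, \mathrm{d}y = f(x),
\qquad
\alpha u_{\alpha} + K u_{\alpha} = f \;\; \text{(Lavrentiev)},
\qquad
\alpha u_{\alpha} + K^{*} K u_{\alpha} = K^{*} f \;\; \text{(Tikhonov)}.
```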

  20. Development of a mathematical model for coffee decaffeination with the leaching method

    Directory of Open Access Journals (Sweden)

    Sukrisno Widyotomo

    2011-08-01

    Full Text Available A simple mathematical model for the kinetics of caffeine during the extraction (leaching) of coffee beans was developed. A non-steady diffusion equation coupled with a macroscopic mass-transfer equation for the solvent was formulated and then solved analytically. The kinetics of caffeine extraction from coffee beans depend on the initial caffeine content, the final caffeine content, the caffeine content at a given time, the mass-transfer coefficient, the solvent volume and concentration, the surface area and radius of the coffee beans, the process time, the leaching rate of caffeine, the caffeine diffusivity and model constants, the activation energy, the absolute temperature, and the gas constant. Caffeine internal mass diffusivity was estimated by fitting the model to experiments using acetic acid and the liquid waste of cocoa bean fermentation as solvents. A prediction equation for the leaching rate of caffeine in coffee beans was obtained: with acetic acid as solvent, Dk (m2/sec) = 1.345x10^-7 to 4.1638x10^-7 and kL (m/sec) = 2.445x10^-5 to 5.551x10^-5, depending on temperature and solvent concentration. A prediction equation for the time needed to reduce the initial caffeine content to a given concentration was also developed. With the liquid waste of cocoa bean fermentation as solvent, the caffeine diffusivity (Dk) and mass-transfer coefficient (kL) were found to be 1.591x10^-7 to 2.122x10^-7 m2/sec and 4.897x10^-5 to 6.529x10^-5 m/sec, respectively, again depending on temperature and solvent concentration. Key words: coffee, caffeine, decaffeination, leaching, mathematical model.
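
    The governing equations are only named in the abstract; a generic version of the pair (non-steady diffusion in a spherical bean coupled to solvent-side mass transfer, with the symbols used above; the geometry and signs are assumptions) is:

```latex
\frac{\partial C}{\partial t}
  = D_k \left( \frac{\partial^2 C}{\partial r^2}
  + \frac{2}{r}\, \frac{\partial C}{\partial r} \right),
\qquad
\left. -D_k\, \frac{\partial C}{\partial r} \right|_{r=R}
  = k_L \left( C_s - C_b \right),
```

    where C(r,t) is the caffeine concentration in the bean, R the bean radius, C_s the concentration at the bean surface, and C_b the bulk solvent concentration; fitting the measured extraction curve to the analytical solution of this pair is what yields the reported Dk and kL ranges.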