WorldWideScience

Sample records for model methods forty

  1. Forty years of Fanger's model of thermal comfort: comfort for all?

    Science.gov (United States)

    van Hoof, J

    2008-06-01

    The predicted mean vote (PMV) model of thermal comfort, created by Fanger in the late 1960s, is used worldwide to assess thermal comfort. Fanger based his model on college-aged students for use in invariant environmental conditions in air-conditioned buildings in moderate thermal climate zones. Environmental engineering practice calls for a predictive method that is applicable to all types of people in any kind of building in every climate zone. In this publication, existing support and criticism, as well as modifications to the PMV model, are discussed in light of the requirements of environmental engineering practice in the 21st century in order to move from a predicted mean vote to comfort for all. Improved prediction of thermal comfort can be achieved through improving the validity of the PMV model, better specification of the model's input parameters, and accounting for outdoor thermal conditions and special groups. The application range of the PMV model can be enlarged, for instance, by using the model to assess the effects of the thermal environment on productivity and behavior, its interactions with other indoor environmental parameters, and the use of information and communication technologies. Even with such modifications to thermal comfort evaluation, thermal comfort for all can only be achieved when occupants have effective control over their own thermal environment. The paper treats the assessment of thermal comfort using the PMV model of Fanger, and deals with the strengths and limitations of this model. Readers are made familiar with some opportunities for use in the 21st-century information society.
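
    For readers who want to see the model concretely, the sketch below implements a PMV calculation in the style of the ISO 7730 reference algorithm (it is not code from this paper); the example inputs and the convergence tolerance are assumptions chosen for illustration only.

      # Minimal PMV sketch in the style of the ISO 7730 reference algorithm.
      import math

      def pmv(ta, tr, vel, rh, met, clo, wme=0.0):
          """ta: air temp [C], tr: mean radiant temp [C], vel: air speed [m/s],
          rh: relative humidity [%], met: metabolic rate [met], clo: clothing [clo],
          wme: external work [met]."""
          pa = rh * 10.0 * math.exp(16.6536 - 4030.183 / (ta + 235.0))  # vapour pressure [Pa]
          icl = 0.155 * clo                  # clothing insulation [m2.K/W]
          m, w = met * 58.15, wme * 58.15    # metabolic rate, external work [W/m2]
          mw = m - w
          fcl = 1.0 + 1.29 * icl if icl <= 0.078 else 1.05 + 0.645 * icl
          hcf = 12.1 * math.sqrt(vel)        # forced convection coefficient
          taa, tra = ta + 273.0, tr + 273.0
          # Iterate for the clothing surface temperature tcl.
          p1 = icl * fcl
          p2, p3, p4 = p1 * 3.96, p1 * 100.0, p1 * taa
          p5 = 308.7 - 0.028 * mw + p2 * (tra / 100.0) ** 4
          xn = (taa + (35.5 - ta) / (3.5 * icl + 0.1)) / 100.0
          xf = 2.0 * xn
          for _ in range(150):
              if abs(xn - xf) <= 1.5e-4:
                  break
              xf = (xf + xn) / 2.0
              hc = max(hcf, 2.38 * abs(100.0 * xf - taa) ** 0.25)
              xn = (p5 + p4 * hc - p2 * xf ** 4) / (100.0 + p3 * hc)
          tcl = 100.0 * xn - 273.0
          # Heat losses: skin diffusion, sweating, respiration (latent and dry),
          # radiation, and convection.
          hl1 = 3.05e-3 * (5733.0 - 6.99 * mw - pa)
          hl2 = 0.42 * (mw - 58.15) if mw > 58.15 else 0.0
          hl3 = 1.7e-5 * m * (5867.0 - pa)
          hl4 = 0.0014 * m * (34.0 - ta)
          hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100.0) ** 4)
          hl6 = fcl * hc * (tcl - ta)
          ts = 0.303 * math.exp(-0.036 * m) + 0.028
          return ts * (mw - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)

      # Example: still air, 60% RH, light office activity, light clothing.
      print(round(pmv(ta=22.0, tr=22.0, vel=0.1, rh=60.0, met=1.2, clo=0.5), 2))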

  2. Forty years of Fanger's model of thermal comfort: Comfort for all?

    NARCIS (Netherlands)

    Hoof, van J.

    2008-01-01

    The predicted mean vote (PMV) model of thermal comfort, created by Fanger in the late 1960s, is used worldwide to assess thermal comfort. Fanger based his model on college-aged students for use in invariant environmental conditions in air-conditioned buildings in moderate thermal climate zones.

  3. Forty years of ⁹⁰Sr in situ migration: importance of soil characterization in modeling transport phenomena

    International Nuclear Information System (INIS)

    Fernandez, J.M.; Piault, E.; Macouillard, D.; Juncos, C.

    2006-01-01

    In 1960, experiments were carried out on the transfer of ⁹⁰Sr between soil, grapes and wine. The experiments were conducted in situ on a piece of land bounded by two control strips. The ⁹⁰Sr migration over the last 40 years was studied by performing radiological and physico-chemical characterizations of the soil on eight 70 cm deep cores. The vertical migration modeling of ⁹⁰Sr required the definition of a triple-layer conceptual model integrating rainwater infiltration at constant flux as the only external factor of influence. The importance of a detailed soil characterization for modeling is then discussed; a satisfactory simulation of the ⁹⁰Sr vertical transport was obtained, showing a calculated migration rate of about 1.0 cm year⁻¹, in full agreement with the in situ measured values. The discussion covers key parameters such as granulometry, organic matter content (in the Van Genuchten parameter determination), Kd and the efficient rainwater infiltration. Besides the experimental data, simplifying assumptions in the modeling, such as the water-soil redistribution calculation and factual discontinuities in the conceptual model, are also examined.
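
    As a concrete illustration of why Kd, water content and infiltration control the outcome, the sketch below evaluates the textbook retarded-advection estimate of the migration rate (R = 1 + ρb·Kd/θ); it is not the authors' triple-layer model, and every parameter value is an assumed placeholder.

      # Standard Kd-based retardation estimate of vertical migration (illustrative only).
      q = 0.20        # effective rainwater infiltration flux [m/yr] (assumed)
      theta = 0.30    # volumetric water content [-] (assumed)
      rho_b = 1500.0  # soil bulk density [kg/m3] (assumed)
      kd = 0.02       # 90Sr distribution coefficient [m3/kg] (assumed)

      pore_velocity = q / theta             # average pore-water velocity [m/yr]
      R = 1.0 + rho_b * kd / theta          # retardation factor [-]
      rate_cm_per_yr = 100.0 * pore_velocity / R
      print(f"R = {R:.0f}, migration rate = {rate_cm_per_yr:.2f} cm/yr")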

  4. 26 CFR 7.48-2 - Election of forty-percent method of determining investment credit for movie and television films...

    Science.gov (United States)

    2010-04-01

    ... investment credit for movie and television films placed in service in a taxable year beginning before January... Election of forty-percent method of determining investment credit for movie and television films placed in... the Tax Reform Act of 1976 (90 Stat. 1595), taxpayers who placed movie or television films (here...

  5. Fabrication of FORTIS

    Science.gov (United States)

    McCandliss, Stephan R.; Fleming, Brian; Kaiser, Mary Elizabeth; Kruk, Jeffrey; Feldman, Paul D.; Kutyrev, Alexander S.; Li, Mary J.; Goodwin, Phillip A.; Rapchun, David; Lyness, Eric; Brown, Ari D.; Moseley, Harvey; Siegmund, Oswald; Vallerga, John

    2010-07-01

    The Johns Hopkins University sounding rocket group is building the Far-ultraviolet Off Rowland-circle Telescope for Imaging and Spectroscopy (FORTIS), which is a Gregorian telescope with rulings on the secondary mirror. FORTIS will be launched on a sounding rocket from White Sands Missile Range to study the relationship between Lyman alpha escape and the local gas-to-dust ratio in star forming galaxies with non-zero redshifts. It is designed to acquire images of a 30' x 30' field and provide fully redundant "on-the-fly" spectral acquisition of 43 separate targets in the field with a bandpass of 900 - 1800 Angstroms. FORTIS is an enabling scientific and technical activity for future cutting-edge far- and near-UV survey missions seeking to: search for Lyman continuum radiation leaking from star forming galaxies, determine the epoch of He II reionization, and characterize baryon acoustic oscillations using the Lyman forest. In addition to the high efficiency "two bounce" dual-order spectro-telescope design, FORTIS incorporates a number of innovative technologies including: an image dissecting microshutter array developed by GSFC; a large area (~ 45 mm x 170 mm) microchannel plate detector with central imaging and "outrigger" spectral channels provided by Sensor Sciences; and an autonomous targeting microprocessor incorporating commercially available field-programmable gate arrays. We discuss progress to date in developing our pathfinder instrument.

  6. Energy Return on Investment (EROI) for Forty Global Oilfields Using a Detailed Engineering-Based Model of Oil Production

    Science.gov (United States)

    Brandt, Adam R.; Sun, Yuchi; Bharadwaj, Sharad; Livingston, David; Tan, Eugene; Gordon, Deborah

    2015-01-01

    Studies of the energy return on investment (EROI) for oil production generally rely on aggregated statistics for large regions or countries. In order to better understand the drivers of the energy productivity of oil production, we use a novel approach that applies a detailed field-level engineering model of oil and gas production to estimate energy requirements of drilling, producing, processing, and transporting crude oil. We examine 40 global oilfields, utilizing detailed data for each field from hundreds of technical and scientific data sources. Resulting net energy return (NER) ratios for studied oil fields range from ≈2 to ≈100 MJ crude oil produced per MJ of total fuels consumed. External energy return (EER) ratios, which compare energy produced to energy consumed from external sources, exceed 1000:1 for fields that are largely self-sufficient. The lowest energy returns are found to come from thermally-enhanced oil recovery technologies. Results are generally insensitive to reasonable ranges of assumptions explored in sensitivity analysis. Fields with very large associated gas production are sensitive to assumptions about surface fluids processing due to the shifts in energy consumed under different gas treatment configurations. This model does not currently include energy invested in building oilfield capital equipment (e.g., drilling rigs), nor does it include other indirect energy uses such as labor or services. PMID:26695068
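
    Both metrics are simple ratios, which the sketch below spells out with invented field values (not data from the study); it shows how a largely self-sufficient field, burning mostly its own produced gas, can post an EER above 1000:1 while its NER stays modest.

      # Energy-return ratios as defined above; all numbers are hypothetical.
      def ner(out_mj, internal_mj, external_mj):
          """Net energy return: MJ produced per MJ of ALL fuels consumed."""
          return out_mj / (internal_mj + external_mj)

      def eer(out_mj, external_mj):
          """External energy return: MJ produced per MJ of EXTERNAL energy only."""
          return out_mj / external_mj

      out, internal, external = 1.0e6, 5.0e4, 8.0e2   # MJ over some period
      print(f"NER = {ner(out, internal, external):.1f}, EER = {eer(out, external):.0f}")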

  7. Energy Return on Investment (EROI) for Forty Global Oilfields Using a Detailed Engineering-Based Model of Oil Production.

    Directory of Open Access Journals (Sweden)

    Adam R Brandt

    Studies of the energy return on investment (EROI) for oil production generally rely on aggregated statistics for large regions or countries. In order to better understand the drivers of the energy productivity of oil production, we use a novel approach that applies a detailed field-level engineering model of oil and gas production to estimate energy requirements of drilling, producing, processing, and transporting crude oil. We examine 40 global oilfields, utilizing detailed data for each field from hundreds of technical and scientific data sources. Resulting net energy return (NER) ratios for studied oil fields range from ≈2 to ≈100 MJ crude oil produced per MJ of total fuels consumed. External energy return (EER) ratios, which compare energy produced to energy consumed from external sources, exceed 1000:1 for fields that are largely self-sufficient. The lowest energy returns are found to come from thermally-enhanced oil recovery technologies. Results are generally insensitive to reasonable ranges of assumptions explored in sensitivity analysis. Fields with very large associated gas production are sensitive to assumptions about surface fluids processing due to the shifts in energy consumed under different gas treatment configurations. This model does not currently include energy invested in building oilfield capital equipment (e.g., drilling rigs), nor does it include other indirect energy uses such as labor or services.

  8. Modeling Methods

    Science.gov (United States)

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics. Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
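
    The calibration loop described in the last sentences can be made concrete with a toy example. Below, recharge in an assumed steady-state unconfined (Dupuit) aquifer is adjusted until simulated heads best match measured heads; the geometry and the "observations" are invented for illustration and do not come from the chapter.

      # Toy recharge calibration: fit recharge so simulated heads match observed heads.
      import numpy as np
      from scipy.optimize import minimize_scalar

      K, L, hL = 10.0, 1000.0, 20.0            # conductivity [m/d], half-width [m], stream head [m]
      x_obs = np.array([100.0, 400.0, 700.0])  # observation wells [m from divide]
      h_obs = np.array([21.1, 20.9, 20.5])     # hypothetical measured heads [m]

      def simulated_heads(recharge):
          """Steady Dupuit solution: no-flow divide at x=0, fixed head at x=L."""
          return np.sqrt(hL**2 + (recharge / K) * (L**2 - x_obs**2))

      def misfit(recharge):
          return np.sum((simulated_heads(recharge) - h_obs) ** 2)

      best = minimize_scalar(misfit, bounds=(1e-6, 1e-2), method="bounded")
      print(f"model-generated recharge estimate: {best.x:.2e} m/d")   # the 'best fit'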

  9. Scattered light characterization of FORTIS

    Science.gov (United States)

    McCandliss, Stephan R.; Carter, Anna; Redwine, Keith; Teste, Stephane; Pelton, Russell; Hagopian, John; Kutyrev, Alexander; Li, Mary J.; Moseley, S. Harvey

    2017-08-01

    We describe our efforts to build a Wide-Field Lyman alpha Geocoronal simulator (WFLaGs) for characterizing the end-to-end sensitivity of FORTIS (Far-UV Off Rowland-circle Telescope for Imaging and Spectroscopy) to scattered Lyman α emission from outside of the nominal (1/2 degree)² field-of-view. WFLaGs is a 50 mm diameter F/1 aluminum parabolic collimator fed by a hollow cathode discharge lamp with an 80 mm clear MgF2 window housed in a vacuum skin. It creates emission over a 10 degree FOV. WFLaGs will allow us to validate and refine a recently developed scattered light model and verify our scattered-light mitigation strategies, which will incorporate low-scatter baffle materials, and possibly 3-D printed light traps covering exposed scatter centers. We present measurements of the scattering intensity of Lyman alpha as a function of angle with respect to the specular reflectance direction for several candidate baffle materials. Initial testing of WFLaGs will be described.

  10. Fabrication and calibration of FORTIS

    Science.gov (United States)

    Fleming, Brian T.; McCandliss, Stephan R.; Kaiser, Mary Elizabeth; Kruk, Jeffery; Feldman, Paul D.; Kutyrev, Alexander S.; Li, Mary J.; Rapchun, David A.; Lyness, Eric; Moseley, S. H.; Siegmund, Oswald; Vallerga, John; Martin, Adrian

    2011-09-01

    The Johns Hopkins University sounding rocket group is entering the final fabrication phase of the Far-ultraviolet Off Rowland-circle Telescope for Imaging and Spectroscopy (FORTIS), a sounding rocket borne multi-object spectro-telescope designed to provide spectral coverage of 43 separate targets in the 900 - 1800 Angstrom bandpass over a 30' x 30' field-of-view. Using "on-the-fly" target acquisition and spectral multiplexing enabled by a GSFC microshutter array, FORTIS will be capable of observing the brightest regions in the far-UV of nearby low redshift (z ~ 0.002 - 0.02) star forming galaxies to search for Lyman alpha escape, and to measure the local gas-to-dust ratio. A large area (~ 45 mm x 170 mm) microchannel plate detector built by Sensor Sciences provides an imaging channel for targeting flanked by two redundant spectral outrigger channels. The grating is ruled directly onto the secondary mirror to increase efficiency. In this paper, we discuss the recent progress made in the development and fabrication of FORTIS, as well as the results of early calibration and characterization of our hardware, including mirror/grating measurements, detector performance, and early operational tests of the microshutter arrays.

  11. Forty Years of Excellence, 1946 to 1986. Volume XXXVIII

    National Research Council Canada - National Science Library

    1986-01-01

    ... at providing the base for future technology. But, of more lasting importance, ONR developed policies and procedures forty years ago which became the organizational models for the National Science Foundation and other research agencies involving...

  12. Forty Thousand Years of Advertisement

    Directory of Open Access Journals (Sweden)

    Konstantin Lidin

    2006-05-01

    The roots of advertisement are connected with reclamations, claims and arguments. No surprise that many people treat it with distrust, suspicion and irritation. Nobody loves advertisement (except its authors and those who order it), nobody watches it, everybody despises it and gets annoyed by it. But newspapers, magazines, television and the city economy in general cannot do without it. One keeps on arguing whether to prohibit advertisement, to restrict its expansion, to bring in stricter regulations on advertisement... If something attracts attention, intrigues, promises to make dreams come true and arouses the desire to join in, it should be considered advertisement. This definition allows us to say without doubt: yes, advertisement existed in the most ancient cultures. Advertisement is as old as human civilization. There have always been objects to be advertised, and different methods appeared to reach those goals. Advertisement techniques and topics appear, get forgotten and appear again in other places and other times. Sometimes the author of an advertising image has no idea about his forerunners and believes he is the discoverer. A skillful designer with a high level of professionalism deliberately uses images from past centuries; the professional is easily guided by historical prototypes. But there is another type of advertisement, whose prototypes cannot be found in museums. It commands no respect, because it is built on a scornful attitude towards the spectator. However, advertisement is basically made by professional designers, and in this case ignorance is inadmissible. Even if we appeal many times to Irkutsk designers to raise the cultural level of their advertisements, orders will always be made by those who pay. Unless His Majesty the Ruble stands for Culture, those appeals are of no use.

  13. Getting started with FortiGate

    CERN Document Server

    Fabbri, Rosato

    2013-01-01

    This book is a step-by-step tutorial that will teach you everything you need to know about the deployment and management of FortiGate, including high availability, complex routing, various kinds of VPNs, user authentication, security rules, controls on applications, and mail and Internet access. This book is intended for network administrators, security managers, and IT professionals. It is a great starting point if you have to administer or configure a FortiGate unit, especially if you have no previous experience. For people who have never managed a FortiGate unit, the book helpfully walks them through the basics.

  14. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, treating a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit... An advantage of the model correction factor method is that, in its simpler form, not using gradient information on the original limit state function (or using this information only once), a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...

  15. TRAC methods and models

    International Nuclear Information System (INIS)

    Mahaffy, J.H.; Liles, D.R.; Bott, T.F.

    1981-01-01

    The numerical methods and physical models used in the Transient Reactor Analysis Code (TRAC) versions PD2 and PF1 are discussed. Particular emphasis is placed on TRAC-PF1, the version specifically designed to analyze small-break loss-of-coolant accidents.

  16. Calibration and flight qualification of FORTIS

    Science.gov (United States)

    Fleming, Brian T.; McCandliss, Stephan R.; Redwine, Keith; Kaiser, Mary Elizabeth; Kruk, Jeffery; Feldman, Paul D.; Kutyrev, Alexander S.; Li, Mary J.; Moseley, S. H.; Siegmund, Oswald; Vallerga, John; Martin, Adrian

    2013-09-01

    The Johns Hopkins University sounding rocket group has completed the assembly and calibration of the Far-ultraviolet Off Rowland-circle Telescope for Imaging and Spectroscopy (FORTIS), a sounding rocket borne multi-object spectro-telescope designed to provide spectral coverage of up to 43 separate targets in the 900 - 1800 Angstrom bandpass over a 30' x 30' field-of-view. FORTIS is capable of selecting the far-UV brightest regions of the target area by utilizing an autonomous targeting system. Medium resolution (R ~ 400) spectra are recorded in redundant dual-order spectroscopic channels with ~40 cm² of effective area at 1216 Å. The maiden launch of FORTIS occurred on May 10, 2013 out of the White Sands Missile Range, targeting the extended spiral galaxy M61 and nearby companion NGC 4301. We report on the final flight calibrations of the instrument, as well as the flight results.

  17. The first forty years, 1947-1987

    Energy Technology Data Exchange (ETDEWEB)

    Rowe, M.S. (ed.); Cohen, A.; Petersen, B.

    1987-01-01

    This report commemorates the fortieth anniversary of Brookhaven National Laboratory by presenting a historical overview of research at the facility. The chapters of the report are entitled: The First Forty Years, Brookhaven: A National Resource, Fulfilling a Mission - Brookhaven's Mighty Machines, Marketing the Milestones in Basic Research, Meeting National Needs, Making a Difference in Everyday Life, and Looking Forward.

  18. The first forty years, 1947-1987

    International Nuclear Information System (INIS)

    Rowe, M.S.; Cohen, A.; Petersen, B.

    1987-01-01

    This report commemorates the fortieth anniversary of Brookhaven National Laboratory by presenting a historical overview of research at the facility. The chapters of the report are entitled: The First Forty Years, Brookhaven: A National Resource, Fulfilling a Mission - Brookhaven's Mighty Machines, Marketing the Milestones in Basic Research, Meeting National Needs, Making a Difference in Everyday Life, and Looking Forward.

  19. Forty cases of maxillary sinus carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Go; Yamada, Shoichiro; Sawatsubashi, Motohiro; Miyazaki, Junji; Tsuda, Kuniyoshi; Inokuchi, Akira [Saga Medical School (Japan)

    2002-01-01

    Forty patients with squamous cell carcinoma in the maxillary sinus were investigated between 1989 and 1999. They consisted of 28 males and 12 females. Their ages ranged from 18 to 84 years (mean 62 years). According to the 1987 UICC TNM classification system, 3 patients were classified as stage II, 3 were stage III and 34 were stage IV. The overall three-year and five-year survival rates were 52% and 44%, respectively. Local recurrence was observed in 11 stage IV cases and 10 of them were not controlled. To further improve the prognosis of such patients, new techniques such as skull base surgery, superselective intraarterial chemotherapy, and concurrent chemo-radiation should be included in the treatment regimen. (author)

  20. Explorative methods in linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2004-01-01

    The author has developed the H-method of mathematical modeling, which builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.
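
    As a plain illustration of the Ridge Regression setting (standard ridge, not the author's H-method), the sketch below shows how an increasing penalty shrinks and stabilises coefficients on near-collinear predictors.

      # Standard ridge regression on near-collinear data (illustrative only).
      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(0)
      X = rng.standard_normal((100, 5))
      X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(100)  # two nearly identical columns
      y = X @ np.array([1.0, 1.0, 0.0, 0.5, -0.5]) + 0.1 * rng.standard_normal(100)

      for alpha in (0.01, 1.0, 100.0):   # heavier penalty -> stronger shrinkage
          print(alpha, np.round(Ridge(alpha=alpha).fit(X, y).coef_, 2))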

  1. Models and methods in thermoluminescence

    International Nuclear Information System (INIS)

    Furetta, C.

    2005-01-01

    This work presents a course on the principles of the luminescence phenomena and the mathematical treatment of thermoluminescent light emission, covering the Randall-Wilkins model, the Garlick-Gibson model, the Adirovitch model, the May-Partridge model, the Braunlich-Scharman model and mixed first- and second-order kinetics, as well as the methods for evaluating the kinetics parameters, such as the initial rise method, the various heating rates method, the isothermal decay method and those methods based on the analysis of the glow curve shape. (Author)
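
    A short numerical sketch of the first-order (Randall-Wilkins) glow curve may help fix ideas; the trap depth, frequency factor and heating rate below are assumed values, not parameters from the conference text.

      # Randall-Wilkins first-order glow curve, integrated numerically.
      import numpy as np

      k = 8.617e-5            # Boltzmann constant [eV/K]
      E, s = 1.0, 1.0e12      # trap depth [eV], frequency factor [1/s] (assumed)
      beta, n0 = 1.0, 1.0     # heating rate [K/s], initial trapped charge (normalised)

      T = np.linspace(300.0, 600.0, 3001)    # linear temperature ramp [K]
      p = np.exp(-E / (k * T))               # Boltzmann escape factor
      # Trapezoidal running integral of the escape factor over temperature.
      integral = np.concatenate(([0.0], np.cumsum(0.5 * (p[1:] + p[:-1]) * np.diff(T))))
      I = n0 * s * p * np.exp(-(s / beta) * integral)   # TL intensity
      print(f"glow peak near {T[np.argmax(I)]:.0f} K")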

  2. Models and methods in thermoluminescence

    Energy Technology Data Exchange (ETDEWEB)

    Furetta, C. [ICN, UNAM, A.P. 70-543, Mexico D.F. (Mexico)

    2005-07-01

    This work presents a course on the principles of the luminescence phenomena and the mathematical treatment of thermoluminescent light emission, covering the Randall-Wilkins model, the Garlick-Gibson model, the Adirovitch model, the May-Partridge model, the Braunlich-Scharman model and mixed first- and second-order kinetics, as well as the methods for evaluating the kinetics parameters, such as the initial rise method, the various heating rates method, the isothermal decay method and those methods based on the analysis of the glow curve shape. (Author)

  3. History, Archaeology and the Bible Forty Years after "Historicity"

    DEFF Research Database (Denmark)

    In History, Archaeology and the Bible Forty Years after “Historicity”, Hjelm and Thompson argue that a ‘crisis’ broke in the 1970s, when several new studies of biblical history and archaeology were published, questioning the historical-critical method of biblical scholarship. The crisis formed...... articles from some of the field’s best scholars with comprehensive discussion of historical, archaeological, anthropological, cultural and literary approaches to the Hebrew Bible and Palestine’s history. The essays question: “How does biblical history relate to the archaeological history of Israel...

  4. Multivariate analysis: models and method

    International Nuclear Information System (INIS)

    Sanz Perucha, J.

    1990-01-01

    Data treatment techniques are increasingly used as computer methods become more widely accessible. Multivariate analysis consists of a group of statistical methods that are applied to study objects or samples characterized by multiple values. The final goal is decision making. The paper describes the models and methods of multivariate analysis.

  5. Graph modeling systems and methods

    Science.gov (United States)

    Neergaard, Mike

    2015-10-13

    An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
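
    A rough sketch of the general idea (not the patented implementation): splice a shared conduit into the graph as a non-terminating vertex along two transmission paths, then flag critical points of failure as articulation points. The small network below is invented.

      # Vulnerability sketch: articulation points as critical points of failure.
      import networkx as nx

      G = nx.Graph()
      G.add_edges_from([("A", "B"), ("B", "C"), ("A", "C")])  # terminating vertices (nodes)
      # Two separate links share one duct: model it as non-terminating vertex "duct1".
      G.add_edges_from([("B", "duct1"), ("C", "duct1"), ("duct1", "D"), ("D", "E")])

      for v in sorted(nx.articulation_points(G)):
          kind = "non-nodal vulnerability" if v == "duct1" else "network node"
          print(f"critical point of failure: {v} ({kind})")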

  6. ADOxx Modelling Method Conceptualization Environment

    Directory of Open Access Journals (Sweden)

    Nesat Efendioglu

    2017-04-01

    The importance of Modelling Methods Engineering is rising together with the importance of domain-specific languages (DSL) and individual modelling approaches. In order to capture the relevant semantic primitives for a particular domain, it is necessary to involve both (a) domain experts, who identify relevant concepts, and (b) method engineers, who compose a valid and applicable modelling approach. This process consists of a conceptual design of a formal or semi-formal modelling method as well as a reliable, migratable, maintainable and user-friendly software development of the resulting modelling tool. The Modelling Method Engineering cycle is often under-estimated, as the conceptual architecture requires formal verification and the tool implementation requires practical usability; hence we propose a guideline and corresponding tools to support actors with different backgrounds along this complex engineering process. Based on practical experience in business, more than twenty research projects within the EU framework programmes and a number of bilateral research initiatives, this paper introduces the phases, a corresponding toolbox and lessons learned with the aim of supporting the engineering of a modelling method. The proposed approach is illustrated and validated within use cases from three different EU-funded research projects in the fields of (1) Industry 4.0, (2) e-learning and (3) cloud computing. The paper discusses the approach, the evaluation results and derived outlooks.

  7. Diverse methods for integrable models

    NARCIS (Netherlands)

    Fehér, G.

    2017-01-01

    This thesis is centered around three topics, sharing integrability as a common theme. This thesis explores different methods in the field of integrable models. The first two chapters are about integrable lattice models in statistical physics. The last chapter describes an integrable quantum chain.

  8. Iterative method for Amado's model

    International Nuclear Information System (INIS)

    Tomio, L.

    1980-01-01

    A recently proposed iterative method for solving scattering integral equations is applied to the spin-doublet and spin-quartet neutron-deuteron scattering in the Amado model. The method is tested numerically in the calculation of scattering lengths and phase shifts, and the results are found to be better than those obtained using the conventional Pade technique. (Author) [pt]

  9. Variational methods in molecular modeling

    CERN Document Server

    2017-01-01

    This book presents tutorial overviews for many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems ranging from inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes, microemulsions, magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. With emphasis placed on physical understanding...

  10. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1993-01-01

    This report documents progress to date under a three-year contract for developing "Methods for Testing Transport Models." The work described includes (1) choice of best methods for producing "code emulators" for analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information due to previous empirical and/or theoretical analyses, (4) extension of code emulator methodology to profile data, (5) application of nonlinear least squares estimators to simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases.

  11. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.; Dean, D.J.; Langanke, K.

    1997-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)

  12. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  13. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1991-01-01

    Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reasonably well produce a remarkably wide variety of tokamak data

  14. Evaluation of Forty-Nine Patients with Abdominal Tuberculosis

    Directory of Open Access Journals (Sweden)

    Murat Kilic

    2014-12-01

    Aim: Abdominal tuberculosis is an uncommon form of extrapulmonary infection. In this study, we aimed to highlight the nonspecific clinical presentations and diagnostic difficulties of abdominal tuberculosis. Material and Method: Clinical features, diagnostic methods, and therapeutic outcomes of 49 patients diagnosed with abdominal tuberculosis between 2003 and 2014 were retrospectively analyzed. Results: The patients were classified into four subgroups: peritoneal (28), nodal (14), intestinal (5), and solid organ tuberculosis (2). The most frequent symptoms were abdominal pain, abdominal distention and fatigue. Ascites appeared to be the most frequent clinical finding. Ascites and enlarged abdominal lymph nodes were the most frequent findings on ultrasonography and tomography. Diagnosis of abdominal tuberculosis mainly depended on histopathology of ascitic fluid and biopsies from the peritoneum, abdominal lymph nodes or colonoscopic materials. Forty patients healed with standard 6-month therapy, while extended treatment for 9-12 months was needed in 8 who had discontinued drug therapy and had persistent symptoms and signs. One patient died within the treatment period due to disseminated infection. Discussion: The diagnosis of abdominal tuberculosis is often difficult due to its diverse clinical presentations. The presence of ascites, a personal/familial/contact history of tuberculosis, and coexisting active extraabdominal tuberculosis are the most significant clues in diagnosis. Diagnostic laparoscopy and tissue sampling seem to be the best diagnostic approach for abdominal tuberculosis.

  15. Network modelling methods for FMRI.

    Science.gov (United States)

    Smith, Stephen M; Miller, Karla L; Salimi-Khorshidi, Gholamreza; Webster, Matthew; Beckmann, Christian F; Nichols, Thomas E; Ramsey, Joseph D; Woolrich, Mark W

    2011-01-15

    There is great interest in estimating brain "networks" from FMRI data. This is often attempted by identifying a set of functional "nodes" (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution. Copyright © 2010 Elsevier Inc. All rights reserved.
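
    One headline result, that full correlation also picks up indirect connections while partial correlation (here taken from a lightly regularised inverse covariance) suppresses them, can be reproduced on a toy three-node chain; the simulated data below are ours, not the paper's FMRI simulations.

      # Full vs partial correlation on a chain x -> y -> z (no direct x-z link).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 2000
      x = rng.standard_normal(n)
      y = 0.8 * x + 0.6 * rng.standard_normal(n)
      z = 0.8 * y + 0.6 * rng.standard_normal(n)
      ts = np.column_stack([x, y, z])

      corr = np.corrcoef(ts, rowvar=False)
      prec = np.linalg.inv(np.cov(ts, rowvar=False) + 1e-3 * np.eye(3))
      d = np.sqrt(np.diag(prec))
      pcorr = -prec / np.outer(d, d)                # partial correlations (off-diagonal)
      print(f"full corr x-z:    {corr[0, 2]:.2f}")  # sizeable, though indirect
      print(f"partial corr x-z: {pcorr[0, 2]:.2f}") # near zero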

  16. Canada's uranium future, based on forty years of development

    International Nuclear Information System (INIS)

    Aspin, N.; Dakers, R.G.

    1982-09-01

    Canada's role as a major supplier of uranium has matured through the cyclical markets of the past forty years. Present resource estimates would support a potential production capability by the late 1980s that is 50 per cent greater than the peak production of 12 200 tonnes uranium in 1959. New and improved exploration techniques are being developed as uranium deposits become more difficult to discover. Radiometric prospecting of glacial boulder fields and the use of improved airborne and ground geophysical methods have contributed significantly to recent discoveries in Saskatchewan. Advances have also been made in the use of airborne radiometric reconnaissance, borehole logging, emanometry (radon and helium gas) and multi-element regional geochemistry techniques. Higher productivity in uranium mining has been achieved through automation and mechanization, while improved ventilation systems in conjunction with underground environmental monitoring have contributed to worker health and safety. Improved efficiency is being achieved in all phases of ore processing. Factors contributing to the increased time required to develop uranium mines and mills, from a minimum of three years in the 1950s to the ten years typical of today, are discussed. The ability of Canada's uranium refinery to manufacture ceramic-grade UO₂ powder to consistent standards has been a major factor in the successful development of high-density natural uranium fuel for the CANDU (CANada Deuterium Uranium) reactor. Over 400 000 fuel assemblies have been manufactured by three companies. The refinery is undertaking a major expansion of its capacity.

  17. Institute of fundamental research: forty years of research

    International Nuclear Information System (INIS)

    1986-01-01

    This document is aimed at illustrating forty years of fundamental research at the CEA. It has not been conceived to give an exhaustive view of current research at the IRF, but to illustrate this research for non-specialists, and even non-scientists. [fr]

  18. French in Lesotho schools forty years after independence ...

    African Journals Online (AJOL)

    Most independent African states are now, like Lesotho, about forty years old. What has become of foreign languages such as French that once thrived under colonial rule albeit mostly in schools targeting non-indigenous learners? In Lesotho French seems to be the preserve of private or “international” schools. Can African ...

  19. Forty years of uranium resources, production and demand in perspective

    International Nuclear Information System (INIS)

    Price, R.; Barthel, F.; Blaise, J.R.; McMurray, J.

    2006-01-01

    The NEA has been collecting and analysing data on uranium for forty years. The data and experience provide a number of answers to the questions being asked today, as many countries begin to look at nuclear energy with renewed interest. In terms of uranium resources, the lessons of the past give confidence that uranium supply will remain adequate to meet demand. (authors)

  20. Identification of forty-five gene-derived polymorphic microsatellite ...

    Indian Academy of Sciences (India)

    [Chen M., Gao L., Zhang W., You H., Sun Q. and Chang Y. 2013 Identification of forty-five ... including Japan, China, Korea and far-eastern Russia (Chang et al. 2009). Because of ... tory using the Illumina HiSeq 2000 Sequencing Technology.

  1. Analytical methods used at model facility

    International Nuclear Information System (INIS)

    Wing, N.S.

    1984-01-01

    A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy

  2. Energy models: methods and trends

    Energy Technology Data Exchange (ETDEWEB)

    Reuter, A [Division of Energy Management and Planning, Verbundplan, Klagenfurt (Austria); Kuehner, R [IER Institute for Energy Economics and the Rational Use of Energy, University of Stuttgart, Stuttgart (Germany); Wohlgemuth, N [Department of Economy, University of Klagenfurt, Klagenfurt (Austria)

    1997-12-31

    Energy, environmental and economic systems do not allow for experimentation since this would be dangerous, too expensive or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling will be pointed out. We distinguish between 3 reasons for new developments: progress in computer technology, methodological progress and novel tasks of energy system analysis and planning. 2 figs., 19 refs.

  3. Energy models: methods and trends

    International Nuclear Information System (INIS)

    Reuter, A.; Kuehner, R.; Wohlgemuth, N.

    1996-01-01

    Energy, environmental and economic systems do not allow for experimentation since this would be dangerous, too expensive or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling will be pointed out. We distinguish between 3 reasons for new developments: progress in computer technology, methodological progress and novel tasks of energy system analysis and planning

  4. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO project called "Intelligent wind power prediction systems" (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  5. Structural modeling techniques by finite element method

    International Nuclear Information System (INIS)

    Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong

    1991-01-01

    This book covers: Chapter 1, Finite Element Idealization (introduction; summary of the finite element method; equilibrium and compatibility in the finite element solution; degrees of freedom; symmetry and antisymmetry; modeling guidelines; local analysis; example; references); Chapter 2, Static Analysis (structural geometry; finite element models; analysis procedure; modeling guidelines; references); and Chapter 3, Dynamic Analysis (models for dynamic analysis; dynamic analysis procedures; modeling guidelines).

  6. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application-specific models that are fit for purpose. There is a range of computer-aided modelling tools available that help to define the model...

  7. A business case method for business models

    OpenAIRE

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model alternatives and choose the best one. In this article, we develop a business case method to objectively compare business models. It is an eight-step method, starting with business drivers and ending with...

  8. Mechatronic Systems Design Methods, Models, Concepts

    CERN Document Server

    Janschek, Klaus

    2012-01-01

    In this textbook, fundamental methods for model-based design of mechatronic systems are presented in a systematic, comprehensive form. The method framework presented here comprises domain-neutral methods for modeling and performance analysis: multi-domain modeling (energy/port/signal-based), simulation (ODE/DAE/hybrid systems), robust control methods, stochastic dynamic analysis, and quantitative evaluation of designs using system budgets. The model framework is composed of analytical dynamic models for important physical and technical domains of realization of mechatronic functions, such as multibody dynamics, digital information processing and electromechanical transducers. Building on the modeling concept of a technology-independent generic mechatronic transducer, concrete formulations for electrostatic, piezoelectric, electromagnetic, and electrodynamic transducers are presented. More than 50 fully worked out design examples clearly illustrate these methods and concepts and enable independent study of the...

  9. Forty years of training program in the JAERI

    International Nuclear Information System (INIS)

    1998-03-01

    This report compiles the past training program for researchers, engineers and regulatory members at the NuTEC (Nuclear Technology and Education Center) of the Japan Atomic Energy Research Institute, together with the past basic seminars for the public and with advice and perspectives on the future program from relevant experts, in commemoration of the forty years of the NuTEC. It covers the past five years of educational courses and seminars on the utilization of radioisotopes and nuclear energy, provided for domestic and international training at the Tokyo and Tokai Education Centers, and the Asia-Pacific nuclear technology transfer activity, including the activity of various committees and meetings. In particular, fifty-six experts and authorities have contributed to the report with advice and perspectives on the training program in the 21st century based on their reminiscences. (author)

  10. Coherence method of identifying signal noise model

    International Nuclear Information System (INIS)

    Vavrin, J.

    1981-01-01

    The noise analysis method is discussed for identifying perturbance models and their parameters by stochastic analysis of the noise model of variables measured on a reactor. The correlation analysis is made in the frequency domain using coherence analysis methods. In identifying an actual specific perturbance, its model should be determined and recognized within a compound model of the perturbance system using the results of observation. The determination of the optimum estimate of the perturbance system model is based on estimates of the related spectral densities, which are determined from the spectral density matrix of the measured variables. Partial and multiple coherence, partial transfers, and the power spectral densities of the input and output variables of the noise model are determined from the related spectral densities. The possibilities of applying the coherence identification methods were tested on a simple case of a simulated stochastic system. Good agreement was found between the initial analytic frequency filters and the identified transfers. (B.S.)
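
    The frequency-domain step, estimating coherence between an input perturbation and a system output from their auto- and cross-spectral densities, can be sketched with scipy; the first-order "plant" and all numbers below are invented stand-ins, not the reactor noise models of the paper.

      # Coherence between a white-noise input and a low-pass filtered, noisy output.
      import numpy as np
      from scipy import signal

      rng = np.random.default_rng(1)
      fs = 100.0                            # sampling frequency [Hz]
      u = rng.standard_normal(60000)        # input perturbation (white noise)
      b, a = signal.butter(2, 5.0, fs=fs)   # stand-in low-pass plant dynamics
      y = signal.lfilter(b, a, u) + 0.3 * rng.standard_normal(u.size)  # output + sensor noise

      f, Cxy = signal.coherence(u, y, fs=fs, nperseg=1024)  # Welch spectral estimates
      print(f"coherence at  1 Hz: {Cxy[np.argmin(abs(f - 1.0))]:.2f}")   # in-band: high
      print(f"coherence at 20 Hz: {Cxy[np.argmin(abs(f - 20.0))]:.2f}")  # out-of-band: low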

  11. Model Uncertainty Quantification Methods In Data Assimilation

    Science.gov (United States)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high-dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real-world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.

  12. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as the analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions.

  13. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus. Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimation for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a...

  14. Numerical methods and modelling for engineering

    CERN Document Server

    Khoury, Richard

    2016-01-01

    This textbook provides a step-by-step approach to numerical methods in engineering modelling. The authors provide a consistent treatment of the topic, from the ground up, to reinforce for students that numerical methods are a set of mathematical modelling tools which allow engineers to represent real-world systems and compute features of these systems with a predictable error rate. Each method presented addresses a specific type of problem, namely root-finding, optimization, integral, derivative, initial value problem, or boundary value problem, and each one encompasses a set of algorithms to solve the problem given some information and to a known error bound. The authors demonstrate that after developing a proper model and understanding of the engineering situation they are working on, engineers can break down a model into a set of specific mathematical problems, and then implement the appropriate numerical methods to solve these problems. Uses a “building-block” approach, starting with simpler mathematical...
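
    In the book's framing, each numerical method solves one problem type to a known error bound; bisection for root-finding is the simplest example of such a guarantee. The sketch below is generic, not taken from the text.

      # Bisection: after n steps the root is bracketed within (hi - lo) / 2**n.
      def bisect(f, lo, hi, tol=1e-10):
          assert f(lo) * f(hi) < 0, "root must be bracketed"
          while hi - lo > tol:
              mid = (lo + hi) / 2.0
              if f(lo) * f(mid) <= 0:
                  hi = mid
              else:
                  lo = mid
          return (lo + hi) / 2.0

      # Solve x**2 = 2 with a guaranteed error below 1e-10.
      print(bisect(lambda x: x * x - 2.0, 1.0, 2.0))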

  15. Review of various dynamic modeling methods and development of an intuitive modeling method for dynamic systems

    International Nuclear Information System (INIS)

    Shin, Seung Ki; Seong, Poong Hyun

    2008-01-01

    Conventional static reliability analysis methods are inadequate for modeling dynamic interactions between components of a system. Various techniques such as dynamic fault tree, dynamic Bayesian networks, and dynamic reliability block diagrams have been proposed for modeling dynamic systems based on improvement of the conventional modeling methods. In this paper, we review these methods briefly and introduce dynamic nodes to the existing Reliability Graph with General Gates (RGGG) as an intuitive modeling method to model dynamic systems. For a quantitative analysis, we use a discrete-time method to convert an RGGG to an equivalent Bayesian network and develop a software tool for generation of probability tables

  16. Forty years of experience on closed-cycle gas turbines

    International Nuclear Information System (INIS)

    Keller, C.

    1978-01-01

    Forty years of experience on closed-cycle gas turbines (CCGT) is emphasized to substantiate the claim that this prime-mover technology is well established. European fossil-fired plants with air as the working fluid have been individually operated over 100,000 hours, have demonstrated very high availability and reliability, and have been economically successful. Following the initial success of the small air closed cycle gas turbine plants, the next step was the exploitation of helium as the working fluid for plants above 50 MWe. The first fossil fired combined power and heat plant at Oberhausen, using a helium turbine, plays an important role for future nuclear systems and this is briefly discussed. The combining of an HTGR and an advanced proven power conversion system (CCGT) represents the most interesting and challenging project. The key to acceptance of the CCGT in the near term is the introduction of a small nuclear cogeneration plant (100 to 300 MWe) that utilizes the waste heat, demonstrating a very high fuel utilization efficiency: aspects of such a plant are outlined. (author)

  17. [Infertility over forty: Pros and cons of IVF].

    Science.gov (United States)

    Belaisch-Allart, J; Maget, V; Mayenga, J-M; Grefenstette, I; Chouraqui, A; Belaid, Y; Kulski, O

    2015-09-01

    The population attempting pregnancy and having babies is ageing. The declining fertility potential and the late age of motherhood are significantly increasing the number of patients over forty consulting infertility specialists. Assisted reproductive technologies (ART) cannot compensate for the natural decline in fertility with age. In France, in public hospitals, ART is free of charge for women until 43 years; over 43, social insurance does not reimburse ART. Hence, 43 years is the usual limit, but between 40 and 42, is ART useful? The answer varies according to physicians, couples or society. On the medical level, the etiology of the infertility must be taken into account. If there is an explanation for the infertility (male or tubal infertility), ART is better than abstention. If the infertility is due only to age, the question remains open. In France, reimbursement by society of a technique with very low success rates is debated. However, efficacy is not absolutely compulsory in medicine. Conversely, giving false hopes may be questioned too. Reaching a reasonable consensus is rather difficult. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  18. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...
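
    One basic geostatistical diagnostic for such residuals is an empirical semivariogram: if the residuals really are uncorrelated, it is roughly flat at the residual variance over all lags. The sketch below uses synthetic 1-D data and only illustrates the technique; it is not the Oersted(09d/04) analysis itself.

      # Empirical semivariogram of residuals (NumPy, synthetic data).
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.sort(rng.uniform(0, 100, 300))   # observation locations (synthetic)
      resid = rng.normal(0, 1, 300)           # stand-in for field-model residuals

      def semivariogram(x, z, bins):
          """gamma(h) = 0.5 * mean((z_i - z_j)^2) over pairs with lag in each bin."""
          dx = np.abs(x[:, None] - x[None, :])
          dz2 = (z[:, None] - z[None, :]) ** 2
          gamma = []
          for lo, hi in zip(bins[:-1], bins[1:]):
              mask = (dx > lo) & (dx <= hi)
              gamma.append(0.5 * dz2[mask].mean() if mask.any() else np.nan)
          return np.array(gamma)

      bins = np.linspace(0, 30, 11)
      print(semivariogram(x, resid, bins))    # roughly flat near 1.0 if uncorrelated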

  19. Modeling complex work systems - method meets reality

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the

  20. Cache memory modelling method and system

    OpenAIRE

    Posadas Cobo, Héctor; Villar Bonet, Eugenio; Díaz Suárez, Luis

    2011-01-01

    The invention relates to a method for modelling a data cache memory of a destination processor, in order to simulate the behaviour of said data cache memory during the execution of a software code on a platform comprising said destination processor. According to the invention, the simulation is performed on a native platform having a processor different from the destination processor comprising the aforementioned data cache memory to be modelled, said modelling being performed by means of the...
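
    The patent text above is generic, but the core of any such model is a software structure that reproduces the hit/miss behaviour of the target cache for a given access trace. A minimal sketch with an assumed direct-mapped organization and illustrative sizes, not the patented method itself:

      # Direct-mapped data cache model; parameters are illustrative only.
      class DirectMappedCache:
          def __init__(self, n_lines=256, line_bytes=32):
              self.n_lines, self.line_bytes = n_lines, line_bytes
              self.tags = [None] * n_lines
              self.hits = self.misses = 0

          def access(self, address):
              line = address // self.line_bytes
              index = line % self.n_lines
              tag = line // self.n_lines
              if self.tags[index] == tag:
                  self.hits += 1
              else:
                  self.misses += 1
                  self.tags[index] = tag    # fill the line on a miss

      cache = DirectMappedCache()
      for addr in range(0, 64 * 1024, 4):     # simple sequential access trace
          cache.access(addr)
      print(cache.hits, cache.misses)         # one miss per 32-byte line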

  1. A survey of real face modeling methods

    Science.gov (United States)

    Liu, Xiaoyue; Dai, Yugang; He, Xiangzhen; Wan, Fucheng

    2017-09-01

    The face model has always been a research challenge in computer graphics, as it involves the coordination of multiple organs in the face. This article explains two kinds of face modeling methods, one data-driven and one based on parameter control, analyzes their content and background, summarizes their advantages and disadvantages, and concludes that the muscle model, which is based on anatomical principles, has higher veracity and is easier to drive.

  2. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
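
    The central bookkeeping of such a BMA tree reduces to a few lines: the posterior-weighted mean, the weighted within-model variance, and the between-model variance of the predictions. The numbers below are toy values, not the groundwater application of the paper.

      # BMA mean and variance decomposition (NumPy, toy values).
      import numpy as np

      weights = np.array([0.5, 0.3, 0.2])    # posterior model probabilities (assumed)
      means = np.array([10.0, 12.0, 9.0])    # each model's prediction (assumed)
      variances = np.array([1.0, 2.0, 1.5])  # each model's predictive variance (assumed)

      bma_mean = np.sum(weights * means)
      within = np.sum(weights * variances)                  # within-model variance
      between = np.sum(weights * (means - bma_mean) ** 2)   # between-model variance
      total = within + between                              # total model variance
      print(bma_mean, within, between, total)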

  3. Habitat evaluation for outbreak of Yangtze voles (Microtus fortis) and management implications.

    Science.gov (United States)

    Xu, Zhenggang; Zhao, Yunlin; Li, Bo; Zhang, Meiwen; Shen, Guo; Wang, Yong

    2015-05-01

    Rodent pests severely damage agricultural crops. Outbreak risk models of rodent pests often do not include sufficient information regarding geographic variation. Habitat plays an important role in rodent-pest outbreak risk, and more information about the relationship between habitat and crop protection is urgently needed. The goal of the present study was to provide an outbreak risk map for the Dongting Lake region and to understand the relationship between rodent-pest outbreak variation and habitat distribution. The main rodent pests in the Dongting Lake region are Yangtze voles (Microtus fortis). These pests cause massive damage in outbreak years, most notably in 2007. Habitat evaluation and ecological details were obtained by analyzing the correlation between habitat suitability and outbreak risk, as indicated by population density and historical events. For the source-sink population, 96.18% of Yangtze vole disaster regions were covered by a 10-km buffer zone of suitable habitat in 2007. Historical outbreak frequency and peak population density were significantly correlated with the proportion of land covered by suitable habitat (r = 0.68, P = 0.04 and r = 0.76, P = 0.03, respectively). The Yangtze vole population tends to migrate approximately 10 km in outbreak years. Here, we propose a practical method for habitat evaluation that can be used to create integrated pest management plans for rodent pests when combined with basic information on the biology, ecology and behavior of the target species. © 2014 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and Wiley Publishing Asia Pty Ltd.

  4. Accurate Modeling Method for Cu Interconnect

    Science.gov (United States)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method of the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) are fully incorporated and universally expressed. In addition, we have developed specific test patterns for the extraction of the model parameters, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  5. Taekwondo training improves balance in volunteers over forty.

    Directory of Open Access Journals (Sweden)

    Gaby ePons Van Dijk

    2013-03-01

    Full Text Available Balance deteriorates with age, and may eventually lead to falling accidents which may threaten independent living. As taekwondo contains various highly dynamic movement patterns, taekwondo practice may sustain or improve balance. Therefore, in 24 middle-aged healthy volunteers (40-71 years) we investigated the effects of age-adapted taekwondo training of one hour a week during one year on various balance parameters, such as: motor orientation ability (primary outcome measure), postural and static balance tests, single leg stance, one leg hop test, and a questionnaire. Motor orientation ability significantly increased in favor of the antero-posterior direction, with a difference of 0.62 degrees towards anterior compared to the pre-training measurement, when participants corrected the tilted platform rather towards the posterior direction; female gender was an independent outcome predictor. On postural balance measurements sway path improved in all 19 participants, with a median of 9.3 mm/sec (range 0.71-45.86), and sway area in 15 participants with 4.2 mm²/sec (range 1.22-17.39). Static balance improved with an average of 5.34 seconds for the right leg, and with almost 4 seconds for the left. Median single leg stance duration increased in 17 participants with 5 seconds (range 1-16), and in 13 participants with 8 seconds (range 1-18). The average one leg hop test distance increased (not statistically significantly) by 9.5 cm. The questionnaire reported a better ‘ability to maintain balance’ in sixteen participants. In conclusion, our data suggest that age-adapted taekwondo training improves various aspects of balance control in healthy people over the age of forty.

  6. Forty years of mutation breeding in Japan. Research and fruits

    International Nuclear Information System (INIS)

    Yamaguchi, Isao

    2003-01-01

    The radiation source used for breeding in the early years was mainly X rays. After the 2nd World War, gamma ray sources such as 60 Co and 137 Cs came to take a leading role in radiation breeding. The Institute of Radiation Breeding (IRB) of the Ministry of Agriculture, Forestry and Fisheries (MAFF) was established on April 16, 1960. A gamma field with 2000Ci of a 60 Co source, the main irradiation facility of the IRB, was installed to study the genetic responses of crop plants to chronic exposure to ionizing radiation and their practical application to plant breeding. This paper consists of 'forty years of research on radiobiology and mutation breeding in Japan', 'topics of mutation breeding research in IRB', 'outline of released varieties by mutation breeding' and 'future of mutation breeding'. The number of varieties released by the direct use of induced mutation in Japan amounts to 163 as of November 2001. Crops in which mutant varieties have been released range widely: rice and other cereals, industrial crops, forage crops, vegetables, ornamentals, mushrooms and fruit trees, the number of which reaches 48. The number of mutant varieties is highest (31) in chrysanthemum, followed by 22 in rice and 13 in soybean. By the indirect use of mutants, a total of 15 varieties of wheat, barley, soybean, mat rush and tomato have been registered by MAFF. Recent advances in biotechnological techniques have made it possible to determine DNA sequences of mutant genes. Accumulating information of DNA sequences and other molecular aspects of many mutant genes will throw light on the mechanisms of mutation induction and develop a new field of mutation breeding. (S.Y.)

  7. Global Optimization Ensemble Model for Classification Methods

    Science.gov (United States)

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity. PMID:24883382

  8. Global Optimization Ensemble Model for Classification Methods

    Directory of Open Access Journals (Sweden)

    Hina Anwar

    2014-01-01

    Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity.
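
    The GMC model itself is more elaborate, but the basic idea of combining heterogeneous classifiers to lift accuracy can be sketched with a plain majority-vote ensemble (scikit-learn, synthetic data; illustrative only):

      # Majority-vote ensemble versus a single classifier.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import VotingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=500, n_features=20, random_state=0)
      ensemble = VotingClassifier([
          ("lr", LogisticRegression(max_iter=1000)),
          ("dt", DecisionTreeClassifier(max_depth=5)),
          ("nb", GaussianNB()),
      ], voting="hard")

      for name, clf in [("tree alone", DecisionTreeClassifier(max_depth=5)),
                        ("ensemble", ensemble)]:
          print(name, cross_val_score(clf, X, y, cv=5).mean())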

  9. Modelling methods for milk intake measurements

    International Nuclear Information System (INIS)

    Coward, W.A.

    1999-01-01

    One component of the first Research Coordination Programme was a tutorial session on modelling in in-vivo tracer kinetic methods. This section describes the principles that are involved and how these can be translated into spreadsheets using Microsoft Excel and the SOLVER function to fit the model to the data. The purpose of this section is to describe the system developed within the RCM, and how it is used
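
    The same least-squares fitting that the tutorial implements with Excel's SOLVER can be written in a few lines of SciPy. The mono-exponential washout below, with invented enrichment values, stands in for the full mother-infant tracer model.

      # Fitting a tracer washout curve by least squares (hypothetical data).
      import numpy as np
      from scipy.optimize import curve_fit

      days = np.array([0, 1, 2, 4, 7, 10, 14], dtype=float)
      enrichment = np.array([1000, 905, 820, 672, 495, 368, 245], dtype=float)  # ppm excess (invented)

      def washout(t, c0, k):
          return c0 * np.exp(-k * t)

      (c0, k), _ = curve_fit(washout, days, enrichment, p0=(1000.0, 0.1))
      print(f"c0 = {c0:.1f} ppm, k = {k:.3f} per day")   # k reflects water turnover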

  10. Diffuse interface methods for multiphase flow modeling

    International Nuclear Information System (INIS)

    Jamet, D.

    2004-01-01

    Full text of publication follows: Nuclear reactor safety programs need to get a better description of some stages of identified incident or accident scenarios. For some of them, such as the reflooding of the core or the dryout of fuel rods, the heat, momentum and mass transfers taking place at the scale of droplets or bubbles are part of the key physical phenomena for which a better description is needed. Experiments are difficult to perform at these very small scales, and direct numerical simulation is viewed as a promising way to give new insight into these complex two-phase flows. This type of simulation requires numerical methods that are accurate, efficient and easy to run in three space dimensions and on parallel computers. Despite many years of development, direct numerical simulation of two-phase flows is still very challenging, mostly because it requires solving moving boundary problems. To avoid this major difficulty, a new class of numerical methods is arising, called diffuse interface methods. These methods are based on physical theories dating back to van der Waals and mostly used in materials science. In these methods, interfaces separating two phases are modeled as continuous transition zones instead of surfaces of discontinuity. Since all the physical variables encounter possibly strong but nevertheless always continuous variations across the interfacial zones, these methods virtually eliminate the difficult moving boundary problem. We show that these methods lead to a single-phase-like system of equations, which makes it easier to code in 3D and to parallelize than more classical methods. The first method presented is dedicated to liquid-vapor flows with phase-change. It is based on the van der Waals' theory of capillarity. This method has been used to study nucleate boiling of a pure fluid and of dilute binary mixtures. We discuss the importance of the choice and the meaning of the order parameter, i.e. a scalar which discriminates one
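
    The essence of a diffuse interface can be shown in one dimension: the phase variable varies continuously across the front, so no moving boundary is tracked explicitly. Below is a minimal Allen-Cahn relaxation with a double-well potential and illustrative parameters; it is a generic phase-field sketch, not the van der Waals capillarity model of the paper.

      # 1-D Allen-Cahn diffuse-interface sketch (explicit time stepping).
      import numpy as np

      n, dx, dt, eps = 200, 1.0, 0.1, 2.0
      x = np.arange(n) * dx
      phi = np.tanh((x - x.mean()) / eps)     # initial diffuse interface

      for _ in range(2000):
          lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
          dW = phi**3 - phi                   # derivative of the double-well potential
          phi += dt * (eps**2 * lap - dW)     # relaxation toward phi = -1 / +1

      print(phi[::20].round(2))               # smooth tanh-like profile, no sharp jump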

  11. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).
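
    The logic of analytical redundancy relations is easy to show on a toy system: each relation among sensor readings should evaluate to approximately zero when its sensors are healthy, and the pattern of violated relations points to the suspect sensors. The tank system, relations and tolerance below are hypothetical, not the NASA implementation.

      # ARR-style sensor consistency check (illustrative only).
      def check_arrs(readings, tol=0.05):
          """Each ARR should be ~0 when the sensors it involves are healthy."""
          arrs = {
              # mass balance: inflow - outflow should match the level change rate
              "arr1 (f_in, f_out, dlevel)": readings["f_in"] - readings["f_out"] - readings["dlevel"],
              # duplicated outflow sensor: the two readings should agree
              "arr2 (f_out, f_out_b)": readings["f_out"] - readings["f_out_b"],
          }
          return {name: abs(r) > tol for name, r in arrs.items()}

      readings = {"f_in": 2.0, "f_out": 1.5, "f_out_b": 1.5, "dlevel": 0.9}
      print(check_arrs(readings))
      # arr1 fires while arr2 does not, so f_in or dlevel is logically suspect;
      # f_out is corroborated by its duplicate.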

  12. Developing a TQM quality management method model

    NARCIS (Netherlands)

    Zhang, Zhihai

    1997-01-01

    From an extensive review of total quality management literature, the external and internal environment affecting an organization's quality performance and the eleven primary elements of TQM are identified. Based on the primary TQM elements, a TQM quality management method model is developed. This

  13. Acceleration methods and models in Sn calculations

    International Nuclear Information System (INIS)

    Sbaffoni, M.M.; Abbate, M.J.

    1984-01-01

    In some neutron transport problems solved by the discrete ordinates method, it is relatively common to observe peculiarities such as the generation of negative fluxes, slow and unreliable convergence, and solution instabilities. The models commonly used for neutron flux calculation and the acceleration methods included in the most widely used codes were analyzed, in view of their use in problems characterized by a strong upscattering effect. Some special conclusions derived from this analysis are presented, as well as a new method to perform the upscattering scaling for solving the aforementioned problems in such cases. This method has been included in the DOT3.5 code (two-dimensional discrete ordinates radiation transport code), generating a new version of wider application. (Author) [es

  14. Alternative methods of modeling wind generation using production costing models

    International Nuclear Information System (INIS)

    Milligan, M.R.; Pang, C.K.

    1996-08-01

    This paper examines the methods of incorporating wind generation in two production costing models: one is a load duration curve (LDC) based model and the other is a chronological-based model. These two models were used to evaluate the impacts of wind generation on two utility systems using actual collected wind data at two locations with high potential for wind generation. The results are sensitive to the selected wind data and the level of benefits of wind generation is sensitive to the load forecast. The total production cost over a year obtained by the chronological approach does not differ significantly from that of the LDC approach, though the chronological commitment of units is more realistic and more accurate. Chronological models provide the capability of answering important questions about wind resources which are difficult or impossible to address with LDC models
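
    The difference between the two approaches is easiest to see in code: a chronological model dispatches against the hour-by-hour net load, while an LDC-style model works on the sorted net-load curve. In the toy two-unit system below (hypothetical costs, synthetic data), the totals coincide because a simple merit-order dispatch depends only on the distribution of net-load values; differences appear once unit commitment and ramping constraints enter, which is consistent with the paper's finding that total costs are close while chronological commitment is more realistic.

      # Chronological versus duration-curve production cost (toy example).
      import numpy as np

      rng = np.random.default_rng(1)
      hours = 168
      load = 800 + 200 * np.sin(np.arange(hours) * 2 * np.pi / 24) + rng.normal(0, 30, hours)
      wind = np.clip(rng.normal(150, 80, hours), 0, None)
      net = np.clip(load - wind, 0, None)          # wind is netted off the load

      def production_cost(net_load):
          """Dispatch a 600 MW base unit ($20/MWh) then a peaker ($60/MWh)."""
          base = np.minimum(net_load, 600.0)
          peak = net_load - base
          return (20.0 * base + 60.0 * peak).sum()

      chrono = production_cost(net)                # hour-by-hour order preserved
      ldc = production_cost(np.sort(net)[::-1])    # duration-curve order
      print(f"chronological: ${chrono:,.0f}, LDC: ${ldc:,.0f}")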

  15. Mathematical methods and models in composites

    CERN Document Server

    Mantic, Vladislav

    2014-01-01

    This book provides a representative selection of the most relevant, innovative, and useful mathematical methods and models applied to the analysis and characterization of composites and their behaviour on micro-, meso-, and macroscale. It establishes the fundamentals for meaningful and accurate theoretical and computer modelling of these materials in the future. Although the book is primarily concerned with fibre-reinforced composites, which have ever-increasing applications in fields such as aerospace, many of the results presented can be applied to other kinds of composites. The topics cover

  16. Intelligent structural optimization: Concept, Model and Methods

    International Nuclear Information System (INIS)

    Lu, Dagang; Wang, Guangyuan; Peng, Zhang

    2002-01-01

    Structural optimization has many characteristics of Soft Design, and so it is necessary to apply the experience of human experts to solving the uncertain and multidisciplinary optimization problems in large-scale and complex engineering systems. With the development of artificial intelligence (AI) and computational intelligence (CI), the theory of structural optimization is now developing in the direction of intelligent optimization. In this paper, a concept of Intelligent Structural Optimization (ISO) is proposed. A design process model of ISO is then put forward in which each design sub-process model is discussed. Finally, the design methods of ISO are presented.

  17. Electromagnetic modeling method for eddy current signal analysis

    International Nuclear Information System (INIS)

    Lee, D. H.; Jung, H. K.; Cheong, Y. M.; Lee, Y. S.; Huh, H.; Yang, D. J.

    2004-10-01

    An electromagnetic modeling method for eddy current signal analysis is necessary before an experiment is performed. Electromagnetic modeling methods consist of analytical methods and numerical methods. The numerical methods can be divided into the Finite Element Method (FEM), the Boundary Element Method (BEM) and the Volume Integral Method (VIM). Each modeling method has merits and demerits, so a suitable modeling method can be chosen by considering the characteristics of each. This report explains the principle and application of each modeling method and compares the corresponding modeling programs.

  18. Mathematical Models and Methods for Living Systems

    CERN Document Server

    Chaplain, Mark; Pugliese, Andrea

    2016-01-01

    The aim of these lecture notes is to give an introduction to several mathematical models and methods that can be used to describe the behaviour of living systems. This emerging field of application intrinsically requires the handling of phenomena occurring at different spatial scales and hence the use of multiscale methods. Modelling and simulating the mechanisms that cells use to move, self-organise and develop in tissues is not only fundamental to an understanding of embryonic development, but is also relevant in tissue engineering and in other environmental and industrial processes involving the growth and homeostasis of biological systems. Growth and organization processes are also important in many tissue degeneration and regeneration processes, such as tumour growth, tissue vascularization, heart and muscle functionality, and cardio-vascular diseases.

  19. Correlations between cutaneous malignant melanoma and other cancers: An ecological study in forty European countries

    Directory of Open Access Journals (Sweden)

    Pablo Fernandez-Crehuet Serrano

    2016-01-01

    Full Text Available Background: The presence of noncutaneous neoplasms does not seem to increase the risk of cutaneous malignant melanoma; however, it seems to be associated with the development of other hematological, brain, breast, uterine, and prostatic neoplasms. An ecological cross-sectional study was conducted to study the geographic association between cutaneous malignant melanoma and 24 localizations of cancer in forty European countries. Methods: Cancer incidence rates were extracted from the GLOBOCAN database of the International Agency for Research on Cancer. We analyzed the age-adjusted and gender-stratified incidence rates for different localizations of cancer in forty European countries and calculated their correlation using Pearson's correlation test. Results: In males, significant correlations were found between cutaneous malignant melanoma and testicular cancer (r = 0.83 [95% confidence interval (CI): 0.68-0.89]), myeloma (r = 0.68 [95% CI: 0.46-0.81]), prostatic carcinoma (r = 0.66 [95% CI: 0.43-0.80]), and non-Hodgkin lymphoma (NHL) (r = 0.63 [95% CI: 0.39-0.78]). In females, significant correlations were found between cutaneous malignant melanoma and breast cancer (r = 0.80 [95% CI: 0.64-0.88]), colorectal cancer (r = 0.72 [95% CI: 0.52-0.83]), and NHL (r = 0.71 [95% CI: 0.50-0.83]). Conclusions: These correlations call for new studies on the epidemiology of cancer in general and cutaneous malignant melanoma risk factors in particular.
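
    For readers unfamiliar with the statistic reported above, the sketch below computes Pearson's r with a 95% confidence interval via the Fisher z-transformation for forty synthetic country-level rate pairs; it is not the GLOBOCAN data.

      # Pearson correlation with a Fisher-z confidence interval (synthetic data).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      melanoma = rng.normal(10, 3, 40)                   # 40 "countries" (synthetic)
      testis = 0.8 * melanoma + rng.normal(0, 1.5, 40)   # correlated rates (synthetic)

      r, p = stats.pearsonr(melanoma, testis)
      z = np.arctanh(r)                                  # Fisher transform
      se = 1 / np.sqrt(len(melanoma) - 3)
      lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
      print(f"r = {r:.2f} (95% CI {lo:.2f}-{hi:.2f}), p = {p:.1e}")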

  20. New method dynamically models hydrocarbon fractionation

    Energy Technology Data Exchange (ETDEWEB)

    Kesler, M.G.; Weissbrod, J.M.; Sheth, B.V. [Kesler Engineering, East Brunswick, NJ (United States)

    1995-10-01

    A new method for calculating distillation column dynamics can be used to model time-dependent effects of independent disturbances for a range of hydrocarbon fractionation. It can model crude atmospheric and vacuum columns, with relatively few equilibrium stages and a large number of components, to C3 splitters, with few components and up to 300 equilibrium stages. Simulation results are useful for operations analysis, process-control applications and closed-loop control in petroleum, petrochemical and gas processing plants. The method is based on an implicit approach, where the time-dependent variations of inventory, temperatures, liquid and vapor flows and compositions are superimposed at each time step on the steady-state solution. Newton-Raphson (N-R) techniques are then used to simultaneously solve the resulting finite-difference equations of material, equilibrium and enthalpy balances that characterize distillation dynamics. The important innovation is component-aggregation and tray-aggregation to contract the equations without compromising accuracy. This contraction increases the N-R calculations' stability. It also significantly increases calculational speed, which is particularly important in dynamic simulations. This method provides a sound basis for closed-loop, supervisory control of distillation--directly or via multivariable controllers--based on a rigorous, phenomenological column model.
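
    The implicit, Newton-Raphson-per-time-step structure described above can be shown on a single nonlinear balance equation. One backward-Euler step of a hypothetical holdup balance dM/dt = F_in - k*sqrt(M) is solved for the new holdup by N-R; the full method does the same simultaneously for the aggregated material, equilibrium and enthalpy balances.

      # One-equation analogue of an implicit dynamic step (illustrative).
      import math

      def backward_euler_step(M_old, F_in, k, dt, tol=1e-10):
          M = M_old                      # initial guess: previous holdup
          for _ in range(50):
              g = M - M_old - dt * (F_in - k * math.sqrt(M))   # residual g(M) = 0
              dg = 1.0 + dt * k / (2.0 * math.sqrt(M))         # dg/dM
              step = g / dg
              M -= step                  # Newton-Raphson update
              if abs(step) < tol:
                  return M
          raise RuntimeError("Newton-Raphson did not converge")

      M = 100.0
      for _ in range(5):                 # march the dynamic response
          M = backward_euler_step(M, F_in=8.0, k=1.0, dt=1.0)
          print(round(M, 3))             # decays toward the steady state M = 64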

  1. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first
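
    A minimal sketch of the encoding the claim describes: each construction element as a data structure listing typed connection elements, plus a proximity test that decides which pairs to check for connectivity. The types, coordinates, compatibility table and tolerance below are invented for illustration.

      # Construction elements with typed connectors (illustrative data model).
      from dataclasses import dataclass
      from math import dist

      @dataclass
      class Connector:
          kind: str          # connection type, e.g. "stud" or "tube" (assumed)
          position: tuple    # world coordinates (x, y, z)

      @dataclass
      class Element:
          name: str
          connectors: list

      COMPATIBLE = {("stud", "tube")}    # assumed connection-type table

      def find_connections(a, b, tol=0.1):
          pairs = []
          for ca in a.connectors:
              for cb in b.connectors:
                  close = dist(ca.position, cb.position) < tol
                  ok = ((ca.kind, cb.kind) in COMPATIBLE
                        or (cb.kind, ca.kind) in COMPATIBLE)
                  if close and ok:
                      pairs.append((ca, cb))
          return pairs

      brick1 = Element("brick1", [Connector("stud", (0, 0, 1))])
      brick2 = Element("brick2", [Connector("tube", (0, 0, 1))])
      print(find_connections(brick1, brick2))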

  2. Forty Years of String Theory: Reflecting on the Foundations

    NARCIS (Netherlands)

    de Haro, S.; Dieks, D.G.B.J.; t Hooft, G.; Verlinde, E.

    2013-01-01

    The history of string theory started around 1970 when Nambu, Nielsen, and Susskind realized that Veneziano’s 1968 dual model, devised to explain the particle spectrum of the strong interactions, actually describes the properties of quantum mechanical strings. A few years later, QCD appeared as a

  3. Engineering design of systems models and methods

    CERN Document Server

    Buede, Dennis M

    2009-01-01

    The ideal introduction to the engineering design of systems-now in a new edition. The Engineering Design of Systems, Second Edition compiles a wealth of information from diverse sources to provide a unique, one-stop reference to current methods for systems engineering. It takes a model-based approach to key systems engineering design activities and introduces methods and models used in the real world. Features new to this edition include: * The addition of Systems Modeling Language (SysML) to several of the chapters, as well as the introduction of new terminology * Additional material on partitioning functions and components * More descriptive material on usage scenarios based on literature from use case development * Updated homework assignments * The software product CORE (from Vitech Corporation) is used to generate the traditional SE figures and the software product MagicDraw UML with SysML plugins (from No Magic, Inc.) is used for the SysML figures This book is designed to be an introductory reference ...

  4. Railway Track Allocation: Models and Methods

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    2011-01-01

    Efficiently coordinating the movement of trains on a railway network is a central part of the planning process for a railway company. This paper reviews models and methods that have been proposed in the literature to assist planners in finding train routes. Since the problem of routing trains on a railway network entails allocating the track capacity of the network (or part thereof) over time in a conflict-free manner, all studies that model railway track allocation in some capacity are considered relevant. We hence survey work on the train timetabling, train dispatching, train platforming, and train routing problems, group them by railway network type, and discuss track allocation from a strategic, tactical, and operational level.

  6. ACTIVE AND PARTICIPATORY METHODS IN BIOLOGY: MODELING

    Directory of Open Access Journals (Sweden)

    Brînduşa-Antonela SBÎRCEA

    2011-01-01

    Full Text Available By using active and participatory methods it is hoped that pupils will not only come to a deeper understanding of the issues involved, but also that their motivation will be heightened. Pupil involvement in their learning is essential. Moreover, by using a variety of teaching techniques, we can help students make sense of the world in different ways, increasing the likelihood that they will develop a conceptual understanding. The teacher must be a good facilitator, monitoring and supporting group dynamics. Modeling is an instructional strategy in which the teacher demonstrates a new concept or approach to learning and pupils learn by observing. In the teaching of biology the didactic materials are fundamental tools in the teaching-learning process. Reading about scientific concepts or having a teacher explain them is not enough. Research has shown that modeling can be used across disciplines and in all grade and ability level classrooms. Using this type of instruction, teachers encourage learning.

  7. Boundary element method for modelling creep behaviour

    International Nuclear Information System (INIS)

    Zarina Masood; Shah Nor Basri; Abdel Majid Hamouda; Prithvi Raj Arora

    2002-01-01

    A two-dimensional initial strain direct boundary element method is proposed to numerically model creep behaviour. The boundary of the body is discretized into quadratic elements and the domain into quadratic quadrilaterals. The variables are also assumed to have a quadratic variation over the elements. The boundary integral equation is solved for each boundary node and assembled into a matrix. This matrix is solved by Gauss elimination with partial pivoting to obtain the variables on the boundary and in the interior. Due to the time-dependent nature of creep, the solution has to be derived over increments of time. An automatic time incrementation technique and the backward Euler method for updating the variables are implemented to assure stability and accuracy of the results. A flowchart of the solution strategy is also presented. (Author)
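
    The time-stepping half of such a scheme can be illustrated without the boundary-element machinery: creep strain integrated implicitly while the stress relaxes as an elastic bar unloads. The Norton power-law constants below are invented, and the implicit equation is solved by simple fixed-point iteration rather than a full solver.

      # Backward-Euler integration of Norton creep in a relaxing bar (illustrative).
      A, n, E = 1e-20, 5.0, 200e3     # Norton constants, Young's modulus; MPa units (assumed)
      eps_total = 0.002               # total applied strain, held fixed
      eps_creep, t, dt = 0.0, 0.0, 10.0

      for step in range(100):
          # backward Euler: eps_new = eps_creep + dt * A * sigma(eps_new)**n,
          # solved by fixed-point iteration on the implicit equation
          eps_new = eps_creep
          for _ in range(50):
              sigma = E * (eps_total - eps_new)
              eps_new = eps_creep + dt * A * sigma**n
          eps_creep, t = eps_new, t + dt

      print(f"t = {t:.0f} h: creep strain = {eps_creep:.2e}, "
            f"stress = {E * (eps_total - eps_creep):.1f} MPa")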

  8. Surface physics theoretical models and experimental methods

    CERN Document Server

    Mamonova, Marina V; Prudnikova, I A

    2016-01-01

    The demands of production, such as thin films in microelectronics, rely on consideration of factors influencing the interaction of dissimilar materials that make contact with their surfaces. Bond formation between surface layers of dissimilar condensed solids-termed adhesion-depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...

  9. Experimental modeling methods in Industrial Engineering

    Directory of Open Access Journals (Sweden)

    Peter Trebuňa

    2009-03-01

    Full Text Available Dynamic approaches to management systems in present industrial practice force businesses to address management issues through in-house continuous improvement of production and non-production processes. Experience has repeatedly demonstrated the need for a system approach not only in analysis but also in the planning and actual implementation of these processes. The contribution therefore focuses on describing modeling in industrial practice by a system approach, in order to avoid carrying erroneous decisions into the implementation phase and thus prevent the continued use of 'trial and error' methods.

  10. Mechanics, Models and Methods in Civil Engineering

    CERN Document Server

    Maceri, Franco

    2012-01-01

    „Mechanics, Models and Methods in Civil Engineering” collects leading papers dealing with actual Civil Engineering problems. The approach is in the line of the Italian-French school and therefore deeply couples mechanics and mathematics, creating new predictive theories, enhancing clarity in understanding, and improving effectiveness in applications. The authors of the contributions collected here belong to the Lagrange Laboratory, a European research network that has been active for many years. This book will be of major interest for readers aware of modern Civil Engineering.

  11. The forward tracking, an optical model method

    CERN Document Server

    Benayoun, M

    2002-01-01

    This Note describes the so-called Forward Tracking, and the underlying optical model, developed in the context of LHCb-Light studies. Starting from Velo tracks, cheated or found by real pattern recognition, tracks are found in the ST1-3 chambers after the magnet. The main ingredient of the method is a parameterisation of the track in the ST1-3 region, based on the Velo track parameters and an X seed in one ST station. Performance with the LHCb-Minus and LHCb-Light setups is given.

  12. Statistical Models and Methods for Lifetime Data

    CERN Document Server

    Lawless, Jerald F

    2011-01-01

    Praise for the First Edition"An indispensable addition to any serious collection on lifetime data analysis and . . . a valuable contribution to the statistical literature. Highly recommended . . ."-Choice"This is an important book, which will appeal to statisticians working on survival analysis problems."-Biometrics"A thorough, unified treatment of statistical models and methods used in the analysis of lifetime data . . . this is a highly competent and agreeable statistical textbook."-Statistics in MedicineThe statistical analysis of lifetime or response time data is a key tool in engineering,

  13. A Bayesian statistical method for quantifying model form uncertainty and two model combination methods

    International Nuclear Information System (INIS)

    Park, Inseok; Grandhi, Ramana V.

    2014-01-01

    Apart from parametric uncertainty, model form uncertainty as well as prediction error may be involved in the analysis of an engineering system. Model form uncertainty, inherent in selecting the best approximation from a model set, cannot be ignored, especially when the predictions of competing models show significant differences. In this research, a methodology based on maximum likelihood estimation is presented to quantify model form uncertainty using the measured differences between experimental and model outcomes, and is compared with a fully Bayesian estimation to demonstrate its effectiveness. While a method called the adjustment factor approach is utilized to propagate model form uncertainty alone into the prediction of a system response, a method called model averaging is utilized to incorporate both model form uncertainty and prediction error into it. A numerical problem of concrete creep is used to demonstrate the processes for quantifying model form uncertainty and implementing the adjustment factor approach and model averaging. Finally, the presented methodology is applied to characterize the engineering benefits of a laser peening process.
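
    The two combination steps can be shown numerically: likelihood-based weights computed from measured model-versus-experiment differences, then a weighted (averaged) prediction whose between-model spread expresses the model form uncertainty. The predictions, data and error level below are toy values, not the laser-peening study.

      # Likelihood-weighted model combination (toy numbers).
      import numpy as np

      data = np.array([1.02, 0.98, 1.05])               # experimental outcomes (invented)
      preds = {"model_A": np.array([1.00, 1.00, 1.00]),
               "model_B": np.array([1.10, 1.08, 1.12])}
      sigma = 0.05                                      # assumed measurement error

      # maximum-likelihood style weights from Gaussian error terms
      logL = {m: -0.5 * np.sum((data - p) ** 2) / sigma**2 for m, p in preds.items()}
      mx = max(logL.values())
      w = {m: np.exp(l - mx) for m, l in logL.items()}
      total = sum(w.values())
      w = {m: v / total for m, v in w.items()}

      avg = sum(w[m] * preds[m].mean() for m in preds)            # averaged prediction
      between = sum(w[m] * (preds[m].mean() - avg) ** 2 for m in preds)
      print(w, avg, between ** 0.5)   # weights, combined mean, model-form spread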

  14. Effect of defuzzification method of fuzzy modeling

    Science.gov (United States)

    Lapohos, Tibor; Buchal, Ralph O.

    1994-10-01

    Imprecision can arise in fuzzy relational modeling as a result of fuzzification, inference and defuzzification. These three sources of imprecision are difficult to separate. We have determined through numerical studies that an important source of imprecision is the defuzzification stage. This imprecision adversely affects the quality of the model output. The most widely used defuzzification algorithm is known by the name of `center of area' (COA) or `center of gravity' (COG). In this paper, we show that this algorithm not only maps the near limit values of the variables improperly but also introduces errors for middle domain values of the same variables. Furthermore, the behavior of this algorithm is a function of the shape of the reference sets. We compare the COA method to the weighted average of cluster centers (WACC) procedure in which the transformation is carried out based on the values of the cluster centers belonging to each of the reference membership functions instead of using the functions themselves. We show that this procedure is more effective and computationally much faster than the COA. The method is tested for a family of reference sets satisfying certain constraints, that is, for any support value the sum of reference membership function values equals one and the peak values of the two marginal membership functions project to the boundaries of the universe of discourse. For all the member sets of this family of reference sets the defuzzification errors do not get bigger as the linguistic variables tend to their extreme values. In addition, the more reference sets that are defined for a certain linguistic variable, the less the average defuzzification error becomes. In case of triangle shaped reference sets there is no defuzzification error at all. Finally, an alternative solution is provided that improves the performance of the COA method.
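
    The contrast between the two defuzzification schemes is compact in code: COA clips each reference membership function at its firing strength, aggregates, and takes the centroid of the result, while WACC simply averages the reference-set centres weighted by activation. The triangular sets below satisfy the sum-to-one constraint described in the abstract; the firing strengths are invented.

      # COA versus weighted-average-of-cluster-centres (WACC) defuzzification.
      import numpy as np

      xs = np.linspace(0.0, 1.0, 1001)
      firing = {0.0: 0.2, 0.5: 0.8, 1.0: 0.0}   # activation of each reference set (assumed)

      def tri(x, c, half=0.5):
          """Triangular membership centred at c; this family sums to one."""
          return np.clip(1.0 - np.abs(x - c) / half, 0.0, 1.0)

      # COA: clip each set at its firing strength, aggregate by max, take the centroid
      agg = np.max([np.minimum(tri(xs, c), f) for c, f in firing.items()], axis=0)
      coa = float((agg * xs).sum() / agg.sum())

      # WACC: weighted average of the set centres
      wacc = sum(c * f for c, f in firing.items()) / sum(firing.values())

      print(f"COA = {coa:.3f}, WACC = {wacc:.3f}")   # the two outputs differ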

  15. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
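
    The paper's point can be demonstrated outside SAS with a maximum-likelihood stand-in: fit the same linear growth curve under a normal error model and under a heavier-tailed Student-t error model and compare the log-likelihoods. The data below are synthetic with heavy-tailed errors; this is not the Bayesian MCMC analysis of the paper.

      # Normal versus Student-t error models for a linear growth curve.
      import numpy as np
      from scipy import optimize, stats

      rng = np.random.default_rng(3)
      t = np.tile(np.arange(5), 20).astype(float)             # 5 waves, 20 children (synthetic)
      y = 2.0 + 1.5 * t + rng.standard_t(df=3, size=t.size)   # heavy-tailed errors

      def negloglik(params, dist):
          b0, b1, log_s = params
          resid = (y - b0 - b1 * t) / np.exp(log_s)
          return -(dist.logpdf(resid) - log_s).sum()          # location-scale likelihood

      for name, dist in [("normal", stats.norm), ("t(df=3)", stats.t(df=3))]:
          res = optimize.minimize(negloglik, x0=[0.0, 1.0, 0.0], args=(dist,))
          print(name, res.x[:2].round(2), "logLik =", round(-res.fun, 1))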

  16. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.

  17. The accuracy of a method for printing three-dimensional spinal models.

    Directory of Open Access Journals (Sweden)

    Ai-Min Wu

    Full Text Available To study the morphology of the human spine and new spinal fixation methods, scientists require cadaveric specimens, which are dependent on donation. However, in most countries, the number of people willing to donate their body is low. A 3D printed model could be an alternative method for morphology research, but the accuracy of the morphology of a 3D printed model has not been determined. Forty-five computed tomography (CT) scans of cervical, thoracic and lumbar spines were obtained, and 44 parameters of the cervical spine, 120 parameters of the thoracic spine, and 50 parameters of the lumbar spine were measured. The CT scan data in DICOM format were imported into Mimics software v10.01 for 3D reconstruction, and the data were saved in .STL format and imported to Cura software. After a 3D digital model was formed, it was saved in Gcode format and exported to a 3D printer for printing. After the 3D printed models were obtained, the above-referenced parameters were measured again. Paired t-tests were used to determine significance, set to P<0.05. Most ICC values were >0.800; the others were >0.600, and none were <0.600. In this study, we provide a protocol for printing accurate 3D spinal models for surgeons and researchers. The resulting 3D printed model is inexpensive and easily obtained for spinal fixation research.

  18. Mathematical models and methods for planet Earth

    CERN Document Server

    Locatelli, Ugo; Ruggeri, Tommaso; Strickland, Elisabetta

    2014-01-01

    In 2013 several scientific activities have been devoted to mathematical researches for the study of planet Earth. The current volume presents a selection of the highly topical issues presented at the workshop “Mathematical Models and Methods for Planet Earth”, held in Roma (Italy), in May 2013. The fields of interest span from impacts of dangerous asteroids to the safeguard from space debris, from climatic changes to monitoring geological events, from the study of tumor growth to sociological problems. In all these fields the mathematical studies play a relevant role as a tool for the analysis of specific topics and as an ingredient of multidisciplinary problems. To investigate these problems we will see many different mathematical tools at work: just to mention some, stochastic processes, PDE, normal forms, chaos theory.

  19. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    Full Text Available The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  20. FDTD method and models in optical education

    Science.gov (United States)

    Lin, Xiaogang; Wan, Nan; Weng, Lingdong; Zhu, Hao; Du, Jihe

    2017-08-01

    In this paper, the finite-difference time-domain (FDTD) method is proposed as a pedagogical tool in optical education. Meanwhile, FDTD Solutions, a simulation software package based on the FDTD algorithm, is presented as a new tool that helps beginners to build optical models and to analyze optical problems. The core of the FDTD algorithm is that the time-dependent Maxwell's equations are discretized in their space and time partial derivatives, which is then used to simulate the response of the interaction between an electromagnetic pulse and an ideal conductor or semiconductor. Because the electromagnetic field is solved in the time domain, memory usage is reduced and broadband simulation results can be obtained easily. Thus, promoting the FDTD algorithm in optical education is feasible and efficient. FDTD enables us to design, analyze and test modern passive and nonlinear photonic components (such as bio-particles, nanoparticles and so on) for wave propagation, scattering, reflection, diffraction, polarization and nonlinear phenomena. The different FDTD models can help teachers and students solve almost all of the optical problems in optical education. Additionally, the GUI of FDTD Solutions is so friendly to beginners that learners can master it quickly.
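
    A classroom-sized example of the algorithm is short enough to type in: a Yee-style leapfrog update of E and H on a 1-D grid, a soft Gaussian source, and an ideal conductor modelled by forcing E to zero at one boundary. Units are normalized and all parameters are illustrative; this is plain Python, not the FDTD Solutions package.

      # Minimal 1-D FDTD update loop (normalized units, Courant number 0.5).
      import numpy as np

      n, steps = 400, 800
      Ez, Hy = np.zeros(n), np.zeros(n - 1)

      for t in range(steps):
          Hy += np.diff(Ez) * 0.5                      # H update (leapfrog)
          Ez[1:-1] += np.diff(Hy) * 0.5                # E update (leapfrog)
          Ez[50] += np.exp(-((t - 60) / 20.0) ** 2)    # soft Gaussian source
          Ez[-1] = 0.0                                 # ideal conductor: total reflection

      print(np.abs(Ez).max())                          # pulse still bouncing in the grid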

  1. Free wake models for vortex methods

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, K. [Technical Univ. Berlin, Aerospace Inst. (Germany)

    1997-08-01

    The blade element method works fast and well. For some problems (rotor shapes or flow conditions) it can be better to use vortex methods. Different methods for calculating a wake geometry will be presented. (au)

  2. Model reduction methods for vector autoregressive processes

    CERN Document Server

    Brüggemann, Ralf

    2004-01-01

    1.1 Objective of the Study Vector autoregressive (VAR) models have become one of the dominant research tools in the analysis of macroeconomic time series during the last two decades. The great success of this modeling class started with Sims' (1980) critique of the traditional simultaneous equation models (SEM). Sims criticized the use of 'too many incredible restrictions' based on 'supposed a priori knowledge' in large scale macroeconometric models which were popular at that time. Therefore, he advocated largely unrestricted reduced form multivariate time series models, unrestricted VAR models in particular. Ever since his influential paper these models have been employed extensively to characterize the underlying dynamics in systems of time series. In particular, tools to summarize the dynamic interaction between the system variables, such as impulse response analysis or forecast error variance decompositions, have been developed over the years. The econometrics of VAR models and related quantities i...
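
    As a minimal reminder of the model class the book reduces: simulate a bivariate VAR(1) and recover its coefficient matrix by equation-wise OLS; the near-zero estimated entries are exactly the candidates that reduction (restriction) methods set to zero. Toy coefficients, synthetic shocks.

      # Simulating and estimating a bivariate VAR(1) (illustrative).
      import numpy as np

      rng = np.random.default_rng(4)
      A = np.array([[0.5, 0.1],
                    [0.0, 0.8]])            # true coefficient matrix (assumed)
      T = 500
      y = np.zeros((T, 2))
      for t in range(1, T):
          y[t] = A @ y[t - 1] + rng.normal(0, 1, 2)

      X, Y = y[:-1], y[1:]                  # lagged regressors and targets
      A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
      print(A_hat.round(2))                 # close to A; the small entry invites a restriction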

  3. A business case method for business models

    NARCIS (Netherlands)

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model

  4. How Qualitative Methods Can be Used to Inform Model Development.

    Science.gov (United States)

    Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna

    2017-06-01

    Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.

  5. A case study of systemic curricular reform: A forty-year history

    Science.gov (United States)

    Laubach, Timothy Alan

    What follows is a description of the development of a particular inquiry-based elementary school science curriculum program and how its theoretical underpinnings positively influenced a school district's (K-12) science program and also impacted district- and state-wide curriculum reform initiatives. The district's science program has evolved since the inception of the inquiry-based elementary school science curriculum reform forty years ago. Therefore, a historical case study, which incorporated grounded theory methodology, was used to convey the forty-year development of a science curriculum reform effort and its systemic influences. Data for this study were collected primarily through artifacts, such as technical and non-technical documents, and supported and augmented with interviews. Fifteen people comprised the interview consortium with professional responsibilities including (a) administrative roles, such as superintendents, assistant superintendents, principals, and curriculum consultants/coordinators; (b) classroom roles, such as elementary and secondary school teachers who taught science; (c) partnership roles, such as university faculty who collaborated with those in administrative and classroom positions within the district; and (d) the co-director of SCIS who worked with the SCIS trial center director. Data were analyzed and coded using the constant comparative method. The analysis of data uncovered five categories or levels in which the curriculum reform evolved throughout its duration. These themes are Initiation, Education, Implementation, Confirmation, and Continuation. These five categories lead to several working hypotheses that supported the sustaining and continuing of a K-12 science curriculum reform effort. These components are a committed visionary; a theory base of education; forums promoting the education of the theory base components; shared-decision making; a university-school partnership; a core group of committed educators and teachers

  6. Identification of forty cases with alkaptonuria in one village in Jordan.

    Science.gov (United States)

    Al-Sbou, Mohammed; Mwafi, Nesrin; Lubad, Mohammad Abu

    2012-12-01

    Alkaptonuria (AKU) is one of the four initially identified inborn errors of metabolism. The prevalence of AKU in Jordan is unknown. Therefore, a research project was started in April 2009 at the Faculty of Medicine/Mutah University in southern Jordan. The aims of the project were to identify people with AKU, to screen all family members with a history of AKU, and to increase awareness of the disease among health care professionals and the community in southern Jordan. A targeted family screening method was used to identify patients with AKU. In this paper, we present preliminary results of screening 17 families with a history of AKU in a single village in the southern region of Jordan. Forty cases of AKU were identified in this village (age range, 1-60 years). Early cases of AKU were diagnosed throughout this study; two-thirds of patients (n = 28) were under the age of thirty. Interestingly, nine cases of AKU were identified in one family. Our experience suggests that for the identification of AKU cases where consanguinity is common, screening should be extended to all family members. The prevalence of AKU among Jordanians is likely to be greater than prevalence rates worldwide due to high rates of consanguineous marriage. Further studies and effective screening programs are needed to detect undiagnosed cases of AKU, to provide genetic counseling, and ultimately to prevent the occurrence of new cases of AKU in Jordan.

  7. Dynamic spatial panels : models, methods, and inferences

    NARCIS (Netherlands)

    Elhorst, J. Paul

    This paper provides a survey of the existing literature on the specification and estimation of dynamic spatial panel data models, a collection of models for spatial panels extended to include one or more of the following variables and/or error terms: a dependent variable lagged in time, a dependent

  8. Methods of Medical Guidelines Modelling in GLIF.

    Czech Academy of Sciences Publication Activity Database

    Buchtela, David; Anger, Z.; Peleška, Jan (ed.); Tomečková, Marie; Veselý, Arnošt; Zvárová, Jana

    2005-01-01

    Roč. 11, - (2005), s. 1529-1532 ISSN 1727-1983. [EMBEC'05. European Medical and Biomedical Conference /3./. Prague, 20.11.2005-25.11.2005] Institutional research plan: CEZ:AV0Z10300504 Keywords : medical guidelines * knowledge modelling * GLIF model Subject RIV: BD - Theory of Information

  9. Fluid Methods for Modeling Large, Heterogeneous Networks

    National Research Council Canada - National Science Library

    Towsley, Don; Gong, Weibo; Hollot, Kris; Liu, Yong; Misra, Vishal

    2005-01-01

    .... The resulting fluid models were used to develop novel active queue management mechanisms resulting in more stable TCP performance and novel rate controllers for the purpose of providing minimum rate...

  10. Combining static and dynamic modelling methods: a comparison of four methods

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1995-01-01

    A conceptual model of a system is an explicit description of the behaviour required of the system. Methods for conceptual modelling include entity-relationship (ER) modelling, data flow modelling, Jackson System Development (JSD) and several object-oriented analysis methods. Given the current

  11. A Pattern-Oriented Approach to a Methodical Evaluation of Modeling Methods

    Directory of Open Access Journals (Sweden)

    Michael Amberg

    1996-11-01

    The paper describes a pattern-oriented approach to evaluating modeling methods and comparing various methods with each other from a methodical viewpoint. A specific set of principles (the patterns) is defined by investigating the notations and the documentation of comparable modeling methods. Each principle helps to examine some parts of the methods from a specific point of view. All principles together lead to an overall picture of the method under examination. First, the core ("method-neutral") meaning of each principle is described. Then the methods are examined with regard to the principle. Afterwards, the method-specific interpretations are compared with each other and with the core meaning of the principle. By this procedure, the strengths and weaknesses of modeling methods regarding methodical aspects are identified. The principles are described uniformly using a principle description template, analogous to the descriptions of object-oriented design patterns. The approach is demonstrated by evaluating a business process modeling method.

  12. Accurate Electromagnetic Modeling Methods for Integrated Circuits

    NARCIS (Netherlands)

    Sheng, Z.

    2010-01-01

    The present development of modern integrated circuits (ICs) is characterized by a number of critical factors that make their design and verification considerably more difficult than before. This dissertation addresses the important questions of modeling all electromagnetic behavior of features on

  13. Reduced Order Modeling Methods for Turbomachinery Design

    Science.gov (United States)

    2009-03-01

    and Materials Conference, May 2006. [45] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin, Bayesian Data Analysis. New York, NY: Chapman & Hall... Macian-Juan, and R. Chawla, “A statistical methodology for quantification of uncertainty in best estimate code physical models,” Annals of Nuclear En

  14. Introduction to mathematical models and methods

    Energy Technology Data Exchange (ETDEWEB)

    Siddiqi, A. H.; Manchanda, P. [Gautam Budha University, Gautam Budh Nagar-201310 (India); Department of Mathematics, Guru Nanak Dev University, Amritsar (India)

    2012-07-17

    Some well-known mathematical models in the form of partial differential equations representing real world systems are introduced along with fundamental concepts of image processing. Notions such as seismic texture, seismic attributes, core data, well logging, seismic tomography and reservoir simulation are discussed.

  15. A catalog of automated analysis methods for enterprise models.

    Science.gov (United States)

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

    Enterprise models are created for documenting and communicating the structure and state of the Business and Information Technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity that typically relies on human skills, and due to the size and complexity of the models, the process can be complicated, making omissions or miscalculations very likely. This situation has fostered research into automated analysis methods to support analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents the compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.

  16. Modeling Storm Surges Using Discontinuous Galerkin Methods

    Science.gov (United States)

    2016-06-01

    layer non-reflecting boundary condition (NRBC) on the right wall of the model. An NRBC is created when an artificial boundary, B, truncates the... applications,” Journal of Computational Physics, 2004. [30] P. L. Butzer and R. Weis, “On the Lax equivalence theorem equipped with orders,” Journal of... closer to the shoreline. In our simulation, we also learned of the effects spurious waves can have on the results. Due to boundary conditions, a

  17. A Versatile Nonlinear Method for Predictive Modeling

    Science.gov (United States)

    Liou, Meng-Sing; Yao, Weigang

    2015-01-01

    As computational fluid dynamics techniques and tools become widely accepted for real-world practice today, it is intriguing to ask: in what areas can they be utilized to their full potential in the future? Some promising areas include design optimization and exploration of fluid dynamics phenomena (the concept of a numerical wind tunnel), both of which share the feature that some parameters are varied repeatedly and the computation can be costly. We are especially interested in the need for an accurate and efficient approach for handling these applications: (1) capturing complex nonlinear dynamics inherent in a system under consideration and (2) versatility (robustness) to encompass a range of parametric variations. In our previous paper, we proposed to use first-order Taylor expansions collected at numerous sampling points along a trajectory and assembled together via nonlinear weighting functions. The validity and performance of this approach was demonstrated for a number of problems with vastly different input functions. In this study, we are especially interested in enhancing the method's accuracy; we extend it to include the second-order Taylor expansion, which however requires a complicated evaluation of Hessian matrices for a system of equations, as in fluid dynamics. We propose a method to avoid these Hessian matrices while maintaining the accuracy. Results based on the method are presented to confirm its validity.
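
    The core construction named in the record above (local first-order Taylor expansions blended by nonlinear weighting functions) can be illustrated compactly. The sketch below is not the authors' implementation; the toy model, the Gaussian weight form, and all parameter values are assumptions for illustration.

        import numpy as np

        # Toy model and its derivative, standing in for an expensive CFD solver.
        f = lambda x: np.sin(3 * x) + 0.5 * x**2
        df = lambda x: 3 * np.cos(3 * x) + x

        # Sampling points along a "trajectory" where full solves are available.
        xs = np.linspace(-2.0, 2.0, 9)
        fs, dfs = f(xs), df(xs)

        def surrogate(x, width=0.5):
            """Blend local first-order Taylor expansions with Gaussian weights."""
            taylor = fs + dfs * (x - xs)          # each local expansion evaluated at x
            w = np.exp(-((x - xs) / width) ** 2)  # nonlinear weighting (assumed form)
            return np.sum(w * taylor) / np.sum(w)

        print(surrogate(0.3), f(0.3))  # surrogate vs. true value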

  18. A Hybrid 3D Colon Segmentation Method Using Modified Geometric Deformable Models

    Directory of Open Access Journals (Sweden)

    S. Falahieh Hamidpour

    2007-06-01

    Introduction: Virtual colonoscopy has become a reliable and efficient method for detecting early stages of colon cancer, such as polyps. One of the most crucial stages of virtual colonoscopy is colon segmentation, because an incorrect segmentation may lead to a misdiagnosis. Materials and Methods: In this work, a hybrid method based on Geometric Deformable Models (GDM) in combination with advanced region growing and thresholding methods is proposed. GDM are an attractive tool for structure-based image segmentation, particularly for extracting objects with complicated topology. Two main parameters influence the overall performance of the GDM algorithm: the distance between the initial contour and the actual object contours, and the stopping term that controls the deformation. To overcome these limitations, a two-stage hybrid segmentation method is suggested that extracts rough but reliable initial contours in the first stage. The extracted boundaries are then smoothed and improved using a modified GDM algorithm whose stopping term is based on the gradient value of image voxels. Results: The proposed algorithm was applied to forty data sets, each containing 400-480 slices. The results show an improvement in the accuracy and smoothness of the extracted boundaries; segmentation accuracy improved by about 6% compared with methods based on thresholding and region growing only. Discussion and Conclusion: The contours extracted using the modified GDM are smoother and finer. The improved stopping function, together with the two-stage boundary segmentation, greatly improved the computational efficiency of the GDM algorithm while producing smoother and finer colon borders.
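
    The modification highlighted in the record above is a stopping term driven by the image gradient. A common edge-stopping function for geometric deformable models is g = 1/(1 + |∇(G_σ * I)|²), which approaches zero at strong edges and halts the contour there; the sketch below is a generic form of this idea, not the authors' exact formulation.

        import numpy as np
        from scipy import ndimage

        def stopping_function(image, sigma=1.5):
            """Edge-stopping term for a geometric deformable model:
            near 1 in flat regions, near 0 at strong edges."""
            smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
            gy, gx = np.gradient(smoothed)
            return 1.0 / (1.0 + gx**2 + gy**2)

        # Usage: g multiplies the speed of the evolving level-set contour,
        # so evolution slows and stops near object boundaries.
        img = np.random.rand(64, 64)   # placeholder for a CT slice
        g = stopping_function(img)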

  19. Hydrogeological characterization of Back Forty area, Albany Research Center, Albany, Oregon

    International Nuclear Information System (INIS)

    Tsai, S.Y.; Smith, W.H.

    1983-12-01

    Radiological surveys were conducted to determine the potential migration of radionuclides from the waste area to the area commonly referred to as the Back Forty, located in the southern portion of the ARC site. The survey results indicated that parts of the Back Forty contain soils contaminated with uranium, thorium, and their associated decay products. A hydrogeologic characterization study was conducted at the Back Forty as part of an effort to more thoroughly assess radionuclide migration in the area. The objectives of the study were: (1) to define the soil characteristics and stratigraphy at the site, (2) to describe the general conditions of each geologic unit, and (3) to determine the direction and hydraulic gradient of areal groundwater flow. The site investigation activities included a literature review of existing hydrogeological data for the Albany area, onsite borehole drilling, and measurement of groundwater levels. 7 references, 9 figures, 2 tables

  20. Apoptosis Governs the Elimination of Schistosoma japonicum from the Non-Permissive Host Microtus fortis

    Science.gov (United States)

    Peng, Jinbiao; Gobert, Geoffrey N.; Hong, Yang; Jiang, Weibin; Han, Hongxiao; McManus, Donald P.; Wang, Xinzhi; Liu, Jinming; Fu, Zhiqiang; Shi, Yaojun; Lin, Jiaojiao

    2011-01-01

    The reed vole, Microtus fortis, is the only known mammalian host in which schistosomes of Schistosoma japonicum are unable to mature and cause significant pathogenesis. However, little is known about how Schistosoma japonicum maturation (and, therefore, the development of schistosomiasis) is prevented in M. fortis. In the present study, the ultrastructures of 10-day post-infection schistosomula from BALB/c mice and M. fortis were first compared using scanning electron microscopy and transmission electron microscopy. Electron microscopic investigations showed growth retardation and ultrastructural differences in the tegument and sub-tegumental tissues as well as in the parenchymal cells of schistosomula from M. fortis compared with those from BALB/c mice. Microarray analysis then revealed significant differential expression between the schistosomula from the two rodents, with 3,293 down-regulated (≥2-fold) and 71 up-regulated (≥2-fold) genes in schistosomula from the former. The up-regulated genes included a proliferation-related gene encoding granulin (Grn) and tropomyosin. Genes that were down-regulated in schistosomula from M. fortis included apoptosis-inhibiting genes encoding a baculoviral IAP repeat-containing protein (SjIAP) and a cytokine-induced apoptosis inhibitor (SjCIAP), and genes encoding molecules involved in insulin metabolism, long-chain fatty acid metabolism, signal transduction, the transforming growth factor (TGF) pathway, the Wnt pathway, and development. TUNEL (terminal deoxynucleotidyl transferase dUTP nick end labeling) and PI/Annexin V-FITC assays, caspase 3/7 activity analysis, and flow cytometry revealed that the percentages of early apoptotic and late apoptotic and/or necrotic cells, as well as the level of caspase activity, in schistosomula from M. fortis were all significantly higher than in those from BALB/c mice. PMID:21731652

  1. Diffusion in condensed matter methods, materials, models

    CERN Document Server

    Kärger, Jörg

    2005-01-01

    Diffusion as the process of particle transport due to stochastic movement is a phenomenon of crucial relevance for a large variety of processes and materials. This comprehensive, handbook-style survey of diffusion in condensed matter gives detailed insight into diffusion as the process of particle transport due to stochastic movement. Leading experts in the field describe in 23 chapters the different aspects of diffusion, covering microscopic and macroscopic experimental techniques and exemplary results for various classes of solids, liquids and interfaces as well as several theoretical concepts and models. Students and scientists in physics, chemistry, materials science, and biology will benefit from this detailed compilation.

  2. Continual integration method in the polaron model

    International Nuclear Information System (INIS)

    Kochetov, E.A.; Kuleshov, S.P.; Smondyrev, M.A.

    1981-01-01

    The article is devoted to the investigation of a polaron system on the basis of a variational approach formulated in the language of continuum integration. A variational method generalizing Feynman's to the case of nonzero total momentum of the system has been formulated. The polaron state has been investigated at zero temperature. A problem of the bound state of two polarons exchanging quanta of a scalar field, as well as a problem of polaron scattering by an external field in the Born approximation, has been considered. Thermodynamics of the polaron system has been investigated; namely, high-temperature expansions for the mean energy and effective polaron mass have been studied.

  3. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    Energy Technology Data Exchange (ETDEWEB)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both retrospectively and for predictive insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempts at modeling conflict as a result of system-level interactions. This study presents modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  4. Analysis of image findings in forty-one patients with primary lymphoma of the bone

    International Nuclear Information System (INIS)

    Yu Baohai; Liu Jie; Zhong Zhiwei; Zhao Jingpin; Peng Zhigang; Liu Jicun; Wu Wenjuan

    2011-01-01

    Objective: To analyze the imaging features of primary lymphoma of the bone, and to discuss the special feature of the 'floating ice sign'. Methods: Forty-one cases of primary lymphoma of the bone seen in our unit from January 1963 to June 2009 were retrospectively studied. All 41 patients underwent X-ray examination, 20 underwent CT examination, and 12 underwent MR examination (3 with contrast enhancement). Results: Involvement of the flat bones was seen in 12 cases, the vertebral column was affected in 8 cases, 17 cases showed lesions in long bones, and irregular bones were involved in 4 cases. The most common location was the femur (10, 24.4%), followed by the ilium (8, 19.5%). Lesions were found in the metaphyses of the long bones in 11 cases (64.7%). The 'floating ice sign' was shown in the calcaneus of 2 patients and in the lumbar vertebra of 2 cases, accounting for 9.8% of all cases. Slight bone destruction with a soft tissue mass on CT images was found in 12 cases, while an obvious soft tissue mass was found in 9 cases. No periosteal reaction was found in 37 cases (90.2%). MRI examinations of 12 patients revealed a soft tissue mass in 10 patients, and the extent of the lesion was larger on MR than on CT. One case showed extensive bone destruction on MR but inconspicuous bone destruction on X-ray plain film and CT scan. Conclusion: Slight bone destruction with a conspicuous soft tissue mass, or conspicuous bone destruction on MR with slight or inconspicuous bone destruction on X-ray film and CT, strongly suggests the diagnosis of primary lymphoma of the bone. The 'floating ice sign' is a special imaging feature of primary lymphoma of the bone, which can be used as a clue for the diagnosis of lymphoma. (authors)

  5. "Method, system and storage medium for generating virtual brick models"

    DEFF Research Database (Denmark)

    2009-01-01

    An exemplary embodiment is a method for generating a virtual brick model. The virtual brick models are generated by users and uploaded to a centralized host system. Users can build virtual models themselves or download and edit another user's virtual brick models while retaining the identity...

  6. A Systematic Identification Method for Thermodynamic Property Modelling

    DEFF Research Database (Denmark)

    Ana Perederic, Olivia; Cunico, Larissa; Sarup, Bent

    2017-01-01

    In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria prediction by group contribution based property prediction models. The method is applied to lipid systems, where the Original UNIFAC... model is used. Estimating the interaction parameters from VLE data alone yielded a better phase equilibria prediction for both VLE and SLE. The results were validated and compared with the original model performance...

  7. Laser filamentation mathematical methods and models

    CERN Document Server

    Lorin, Emmanuel; Moloney, Jerome

    2016-01-01

    This book is focused on the nonlinear theoretical and mathematical problems associated with ultrafast intense laser pulse propagation in gases and in particular, in air. With the aim of understanding the physics of filamentation in gases, solids, the atmosphere, and even biological tissue, specialists in nonlinear optics and filamentation from both physics and mathematics attempt to rigorously derive and analyze relevant non-perturbative models. Modern laser technology allows the generation of ultrafast (few cycle) laser pulses, with intensities exceeding the internal electric field in atoms and molecules (E = 5x10^9 V/cm, or intensity I = 3.5x10^16 W/cm^2). The interaction of such pulses with atoms and molecules leads to new, highly nonlinear nonperturbative regimes, where new physical phenomena, such as High Harmonic Generation (HHG), occur, and from which the shortest (attosecond - the natural time scale of the electron) pulses have been created. One of the major experimental discoveries in this nonlinear...

  8. Models and methods of emotional concordance.

    Science.gov (United States)

    Hollenstein, Tom; Lanteigne, Dianna

    2014-04-01

    Theories of emotion generally posit the synchronized, coordinated, and/or emergent combination of psychophysiological, cognitive, and behavioral components of the emotion system--emotional concordance--as a functional definition of emotion. However, the empirical support for this claim has been weak or inconsistent. As an introduction to this special issue on emotional concordance, we consider three domains of explanations as to why this theory-data gap might exist. First, theory may need to be revised to more accurately reflect past research. Second, there may be moderating factors such as emotion regulation, context, or individual differences that have obscured concordance. Finally, the methods typically used to test theory may be inadequate. In particular, we review a variety of potential issues: intensity of emotions elicited in the laboratory, nonlinearity, between- versus within-subject associations, the relative timing of components, bivariate versus multivariate approaches, and diversity of physiological processes. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Theoretical methods and models for mechanical properties of soft biomaterials

    Directory of Open Access Journals (Sweden)

    Zhonggang Feng

    2017-06-01

    We review the most commonly used theoretical methods and models for the mechanical properties of soft biomaterials, which include phenomenological hyperelastic and viscoelastic models, structural biphasic and network models, and the structural alteration theory. We emphasize basic concepts and recent developments. In consideration of the current progress and needs of mechanobiology, we introduce methods and models for tackling micromechanical problems and their applications to cell biology. Finally, the challenges and perspectives in this field are discussed.

  10. METHODICAL MODEL FOR TEACHING BASIC SKI TURN

    Directory of Open Access Journals (Sweden)

    Danijela Kuna

    2013-07-01

    With the aim of forming an expert model of the most important operators for teaching the basic ski turn in ski schools, an experiment was conducted on a sample of 20 ski experts from different countries (Croatia, Bosnia and Herzegovina, and Slovenia). From the group of the most commonly used operators for teaching the basic ski turn, the experts picked the 6 most important: uphill turn and jumping into snowplough, basic turn with hand sideways, basic turn with clapping, ski poles in front, ski poles on neck, and uphill turn with active ski guiding. Afterwards, ranking and selection of the most efficient operators was carried out. In line with the aim of the research, chi-square tests were used to examine differences between the frequencies of chosen operators, differences between the values of the most important operators, and differences between experts by nationality. Statistically significant differences were found between the frequencies of chosen operators (χ² = 24.61; p = 0.01), while differences between the values of the most important operators were not (χ² = 1.94; p = 0.91). Differences between experts by nationality appeared only in the expert evaluation of the 'ski poles on neck' operator (χ² = 7.83; p = 0.02). The results provide useful information about the methodological principles of organizing basic ski turn instruction in ski schools.
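
    For readers who want to reproduce the style of analysis reported above, a goodness-of-fit chi-square test on operator-choice frequencies takes only a few lines; the counts below are invented for illustration, not the study's data.

        from scipy.stats import chisquare

        # Hypothetical frequencies with which experts picked six teaching operators.
        observed = [18, 15, 12, 8, 5, 2]

        # Null hypothesis: all six operators are chosen equally often.
        stat, p = chisquare(observed)
        print(f"chi2 = {stat:.2f}, p = {p:.3f}")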

  11. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean approach suffers from a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are conducted with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case considering four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
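
    The heating coefficient described above is the tempering exponent β in the power posterior p_β(θ) ∝ p(θ) L(θ)^β; thermodynamic integration recovers log p(D) as the integral over β of the expected log-likelihood under p_β. A minimal sketch on a conjugate toy problem, with a plain random-walk Metropolis sampler assumed in place of the authors' MCMC setup:

        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.normal(1.0, 1.0, size=50)          # toy data from N(1, 1)

        def log_like(theta):
            return -0.5 * np.sum((data - theta) ** 2) \
                   - 0.5 * len(data) * np.log(2 * np.pi)

        def log_prior(theta):
            return -0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi)   # N(0, 1)

        def sample_power_posterior(beta, n=4000, step=0.3):
            """Random-walk Metropolis targeting prior * likelihood**beta."""
            theta = 0.0
            lp = log_prior(theta) + beta * log_like(theta)
            draws = []
            for _ in range(n):
                prop = theta + step * rng.normal()
                lp_prop = log_prior(prop) + beta * log_like(prop)
                if np.log(rng.uniform()) < lp_prop - lp:
                    theta, lp = prop, lp_prop
                draws.append(theta)
            return np.array(draws[n // 2:])            # drop burn-in

        # log p(D) = integral over beta of E_beta[log likelihood]
        betas = np.linspace(0.0, 1.0, 11)
        means = [np.mean([log_like(t) for t in sample_power_posterior(b)])
                 for b in betas]
        print("log marginal likelihood ~", np.trapz(means, betas))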

  12. Comparison of Transmission Line Methods for Surface Acoustic Wave Modeling

    Science.gov (United States)

    Wilson, William; Atkinson, Gary

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs: the Impulse Response Method (a first-order model) and two second-order matrix methods, namely the conventional matrix approach and a modified matrix approach that is extended to include internal finger reflections. The second-order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.
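
    In the first-order impulse response model mentioned above, a uniform interdigital transducer with N_p finger pairs has a frequency response magnitude that follows a sinc envelope centred on the synchronous frequency, with fractional bandwidth roughly 1/N_p. The sketch below shows only that normalized envelope; material constants and the specific devices of the paper are omitted, and the parameter values are assumptions.

        import numpy as np

        f0, n_p = 100e6, 40          # synchronous frequency (Hz), finger pairs
        f = np.linspace(0.9 * f0, 1.1 * f0, 1001)

        x = n_p * np.pi * (f - f0) / f0
        h = np.abs(np.sinc(x / np.pi))   # np.sinc(y) = sin(pi*y)/(pi*y), so h = |sin(x)/x|
        # h peaks at f0 and its first nulls sit at f0 * (1 +/- 1/n_p),
        # the classic first-order (impulse response) result.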

  13. Modeling shallow water flows using the discontinuous Galerkin method

    CERN Document Server

    Khan, Abdul A

    2014-01-01

    Replacing the traditional physical model approach: computational models offer promise in improving the modeling of shallow water flows. As new techniques are considered, the process continues to change and evolve. Modeling Shallow Water Flows Using the Discontinuous Galerkin Method examines a technique that focuses on hyperbolic conservation laws and includes one-dimensional and two-dimensional shallow water flows and pollutant transports. Combining the advantages of finite volume and finite element methods, this book explores the discontinuous Galerkin (DG) method, also known as the discontinuous finite element method, in depth. It introduces the DG method and its application to shallow water flows, as well as background information for implementing and applying this method for natural rivers. It considers dam-break problems, shock wave problems, and flows in different regimes (subcritical, supercritical, and transcritical). Readily adaptable to the real world: while the DG method has been widely used in the fie...

  14. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lower asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite bus system and compared with a method where both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
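
    The alternation described above (state estimation, then parameter re-estimation, repeated until convergence) can be shown on a deliberately simplified scalar system. The sketch below is not the paper's synchronous machine model: it uses a linear Kalman filter where the paper uses an EKF, and it approximates the M-step with filtered rather than smoothed moments, so the parameter estimate is only approximate.

        import numpy as np

        rng = np.random.default_rng(1)
        a_true, q, r = 0.8, 0.1, 0.2      # dynamics parameter, noise variances
        x, ys = 0.0, []
        for _ in range(500):              # simulate x_t = a*x_{t-1} + w, y_t = x_t + v
            x = a_true * x + rng.normal(0, np.sqrt(q))
            ys.append(x + rng.normal(0, np.sqrt(r)))
        ys = np.array(ys)

        a = 0.2                           # poor initial guess of the parameter
        for _ in range(20):
            # E-step: Kalman filter under the current parameter estimate
            m, P, ms, Ps = 0.0, 1.0, [], []
            for y in ys:
                m_pred, P_pred = a * m, a * a * P + q
                K = P_pred / (P_pred + r)
                m, P = m_pred + K * (y - m_pred), (1 - K) * P_pred
                ms.append(m); Ps.append(P)
            ms, Ps = np.array(ms), np.array(Ps)
            # M-step: maximum-likelihood style update of a from state moments
            a = np.sum(ms[1:] * ms[:-1]) / np.sum(ms[:-1] ** 2 + Ps[:-1])

        print("estimated a:", round(a, 3), "true a:", a_true)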

  15. On Angular Sampling Methods for 3-D Spatial Channel Models

    DEFF Research Database (Denmark)

    Fan, Wei; Jämsä, Tommi; Nielsen, Jesper Ødum

    2015-01-01

    This paper discusses generating three-dimensional (3D) spatial channel models with emphasis on the angular sampling methods. Three angular sampling methods, i.e. modified uniform power sampling, modified uniform angular sampling, and random pairing methods, are proposed and investigated in detail.... The random pairing method, which uses only twenty sinusoids in the ray-based model for generating the channels, presents good results if the spatial channel cluster has a small elevation angle spread. For spatial clusters with large elevation angle spreads, however, the random pairing method would fail... and the other two methods should be considered....
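
    The 'twenty sinusoids' mentioned above refer to a ray-based (sum-of-sinusoids) channel construction in which each ray pairs an angle sample with a random phase. The sketch below is a generic 2D sum-of-sinusoids fading generator, not the authors' exact 3D pairing scheme; the Doppler frequency and ray count are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        n_rays, f_d = 20, 50.0            # rays per cluster, max Doppler (Hz)
        t = np.linspace(0.0, 0.2, 2000)   # time axis (s)

        # Random pairing: each ray pairs an angle-of-arrival draw with a phase draw.
        aoa = rng.uniform(-np.pi, np.pi, n_rays)
        phase = rng.uniform(0.0, 2.0 * np.pi, n_rays)

        # Complex channel gain as a sum of 20 sinusoids (Clarke-type model).
        h = np.sum(np.exp(1j * (2 * np.pi * f_d * np.cos(aoa)[:, None] * t[None, :]
                                + phase[:, None])), axis=0) / np.sqrt(n_rays)
        # |h| is a Rayleigh-like fading envelope built from the paired rays.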

  16. Black and blue gas, the Gaz de France story during the last forty years

    International Nuclear Information System (INIS)

    Beltran, A.; Williot, J.P.

    1992-01-01

    This book narrates the Gaz de France story over the last forty years. The author describes the major events, such as the change from coal gas to methane (black and blue gas), the building of a national distribution network, the promotion of natural gas, the negotiation of major supply contracts, and research programs. 61 refs., 12 figs., 29 photos

  17. Forty years of the Department of Nuclear Physics, 1961-2001

    International Nuclear Information System (INIS)

    Anon

    2001-01-01

    A brief report of the activities of the Department of Nuclear Physics and Biophysics, Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava during its forty years of history is given. A review of personnel, research programmes, graduates and master's theses, the curriculum of the master's study, as well as important scientific projects is given.

  18. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data are limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high-risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of: (1) the optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) the optimal pressure load model to be
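
    The decision-theoretic selection described above reduces to picking the candidate model that maximizes expected utility under the model probabilities implied by the available information. A schematic sketch with invented numbers (the probabilities and utility matrix are placeholders, not values from the report):

        import numpy as np

        # Probability that each candidate model is the adequate one (invented).
        p_model = np.array([0.5, 0.3, 0.2])

        # utility[i, j]: payoff of selecting model i when model j is adequate.
        # Off-diagonal entries penalize using an inappropriate model, reflecting
        # the model use (e.g., design of a high-risk system).
        utility = np.array([
            [ 1.0, -2.0, -5.0],
            [-1.0,  1.0, -3.0],
            [-1.0, -1.0,  1.0],
        ])

        expected = utility @ p_model          # expected utility of each selection
        print("expected utilities:", expected)
        print("select model", int(np.argmax(expected)))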

  19. SELECT NUMERICAL METHODS FOR MODELING THE DYNAMICS SYSTEMS

    Directory of Open Access Journals (Sweden)

    Tetiana D. Panchenko

    2016-07-01

    The article deals with the creation of methodical support for the mathematical modeling of dynamic processes in elements of systems and complexes. Ordinary differential equations are used as the mathematical models; the coefficients of the model equations can be nonlinear functions of the process. The projection-grid method is used as the main tool. Iterative algorithms that take the approximate solution prior to the first iteration into account are described, and adaptive control of the computing process is proposed. An original method for estimating the error of the calculated solutions is offered, as well as a way of configuring the adaptive solution method for a given error level. The proposed method can be used for distributed computing.

  20. Comparative analysis of various methods for modelling permanent magnet machines

    NARCIS (Netherlands)

    Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.

    2017-01-01

    In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air

  1. Advanced methods of solid oxide fuel cell modeling

    CERN Document Server

    Milewski, Jaroslaw; Santarelli, Massimo; Leone, Pierluigi

    2011-01-01

    Fuel cells are widely regarded as the future of the power and transportation industries. Intensive research in this area now requires new methods of fuel cell operation modeling and cell design. Typical mathematical models are based on the physical process description of fuel cells and require a detailed knowledge of the microscopic properties that govern both chemical and electrochemical reactions. "Advanced Methods of Solid Oxide Fuel Cell Modeling" proposes the alternative methodology of generalized artificial neural network (ANN) solid oxide fuel cell (SOFC) modeling. "Advanced Methods

  2. Extending product modeling methods for integrated product development

    DEFF Research Database (Denmark)

    Bonev, Martin; Wörösch, Michael; Hauksdóttir, Dagný

    2013-01-01

    Despite great efforts within the modeling domain, the majority of methods address the uncommon design situation of original product development. However, studies illustrate that development tasks are predominantly related to redesigning, improving, and extending already existing products... and PVM methods; with the presented Product Requirement Development model, some of the individual drawbacks of each method could be overcome. Based on the UML standard, the model enables the representation of complex hierarchical relationships in a generic product model. At the same time it uses matrix.... Updated design requirements then have to be made explicit and mapped against the existing product architecture. In this paper, existing methods are adapted and extended by linking updated requirements to suitable product models. By combining several established modeling techniques, such as the DSM

  3. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists; however, it is not always clear which method is appropriate to choose. To this end, three approaches to estimation in the theta-logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...
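
    The benchmark model named above, the theta-logistic state-space model, couples a log-scale process equation with a Gaussian observation equation. The simulation sketch below makes the two-equation structure concrete; all parameter values are assumptions, and the estimation step (HMM, ADMB, or BUGS) is deliberately left out.

        import numpy as np

        rng = np.random.default_rng(42)
        r0, K, theta = 0.5, 1000.0, 1.0     # growth rate, capacity, shape (assumed)
        sig_proc, sig_obs = 0.1, 0.2        # process / observation noise s.d.

        T = 100
        x = np.empty(T)
        x[0] = np.log(100.0)                # log population size
        for t in range(T - 1):              # process: theta-logistic growth + noise
            x[t + 1] = (x[t] + r0 * (1.0 - (np.exp(x[t]) / K) ** theta)
                        + rng.normal(0.0, sig_proc))
        y = x + rng.normal(0.0, sig_obs, T) # observation: noisy log abundance

        # y is what the ecologist observes; the three benchmarked methods
        # reconstruct x and the parameters (r0, K, theta) from y alone.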

  4. Architecture oriented modeling and simulation method for combat mission profile

    Directory of Open Access Journals (Sweden)

    CHEN Xia

    2017-05-01

    In order to effectively analyze the system behavior and system performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from architecture modeling, this paper describes the mission profile based on the definitions from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form an executable mission profile model. Finally, taking an air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. This provides methodological guidance for combat mission profile design.

  5. Modelling a coal subcrop using the impedance method

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, G.A.; Thiel, D.V.; O'Keefe, S.G. [Griffith University, Nathan, Qld. (Australia). School of Microelectronic Engineering

    2000-07-01

    An impedance model was generated for two coal subcrops in the Biloela and Middlemount areas (Queensland, Australia). The model results were compared with actual surface impedance data. It was concluded that the impedance method satisfactorily modelled the surface response of the coal subcrops in two dimensions. There were some discrepancies between the field data and the model results, due to factors such as the method of discretization of the solution space in the impedance model and the lack of consideration of the three-dimensional nature of the coal outcrops. 10 refs., 8 figs.

  6. Systems and methods for modeling and analyzing networks

    Science.gov (United States)

    Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W

    2013-10-29

    The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models, from data and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables such as a compound of interest and dependent variables such as measured DNA alterations, changes in mRNA, protein, and metabolites to phenotypic readouts of efficacy and toxicity.

  7. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...
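
    As a baseline for the techniques surveyed in the record above, a plain Monte Carlo price of a European call under geometric Brownian motion fits in a few lines; multilevel Monte Carlo then reduces the cost of such estimators by combining coarse and fine path discretizations. All market parameters below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(7)
        s0, k, r, sigma, t, n = 100.0, 105.0, 0.02, 0.25, 1.0, 200_000

        # Terminal prices under risk-neutral GBM (exact one-step simulation).
        z = rng.standard_normal(n)
        st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)

        payoff = np.maximum(st - k, 0.0)
        price = np.exp(-r * t) * payoff.mean()
        stderr = np.exp(-r * t) * payoff.std(ddof=1) / np.sqrt(n)
        print(f"call price ~ {price:.3f} +/- {stderr:.3f}")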

  8. Two Undergraduate Process Modeling Courses Taught Using Inductive Learning Methods

    Science.gov (United States)

    Soroush, Masoud; Weinberger, Charles B.

    2010-01-01

    This manuscript presents a successful application of inductive learning in process modeling. It describes two process modeling courses that use inductive learning methods such as inquiry learning and problem-based learning, among others. The courses include a novel collection of multi-disciplinary complementary process modeling examples. They were…

  9. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-11-01

    simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.

  10. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    Directed graphical models represent data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models h

  11. Solving the nuclear shell model with an algebraic method

    International Nuclear Information System (INIS)

    Feng, D.H.; Pan, X.W.; Guidry, M.

    1997-01-01

    We illustrate algebraic methods in the nuclear shell model through a concrete example, the fermion dynamical symmetry model (FDSM). We use this model to introduce important concepts such as dynamical symmetry, symmetry breaking, effective symmetry, and diagonalization within a higher-symmetry basis. (orig.)

  12. Modeling of Landslides with the Material Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2008-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  13. Modelling of Landslides with the Material-point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...
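
    Both landslide records above rest on the Mohr-Coulomb yield criterion. For reference, its standard form relates the shear strength on a failure plane to the normal stress through the cohesion c and the friction angle φ (compression-positive convention assumed; this is the textbook statement, not a detail taken from the papers):

        \tau_f = c + \sigma_n \tan\varphi

    Yielding occurs when the mobilized shear stress τ on some plane reaches τ_f; in the elasto-plastic material model, stress states are constrained to satisfy τ ≤ c + σ_n tan φ.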

  14. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

    implementations of the DLM are, however, not very versatile in terms of geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...

  15. Design of nuclear power generation plants adopting model engineering method

    International Nuclear Information System (INIS)

    Waki, Masato

    1983-01-01

    The use of model engineering as a design method in nuclear power generation plants began about ten years ago. With this method, the result of a design can be confirmed three-dimensionally before actual production, and it is a quick and reliable way to meet various design needs promptly. The adoption of models aims mainly at improving the quality of design, since high safety is required of nuclear power plants despite their complex structure. The layout of nuclear power plants and piping design require model engineering to arrange an enormous quantity of items rationally within a limited period. Model engineering may use either check models or design models; recently, the latter approach has mainly been taken. The procedure of manufacturing models and engineering is explained. After model engineering has been completed, the model information must be expressed in drawings, and the automation of this process has been attempted by various methods. The computer processing of design is in progress, and its role is explained (CAD system). (Kako, I.)

  16. Method of modeling the cognitive radio using Opnet Modeler

    OpenAIRE

    Yakovenko, I. V.; Poshtarenko, V. M.; Kostenko, R. V.

    2012-01-01

    This article is a review of the first wireless standard based on cognitive radio networks. It discusses the need for wireless networks based on cognitive radio technology. An example of the use of the IEEE 802.22 standard in a WiMAX network was implemented in the Opnet Modeler simulation environment, with graphs verifying the performance of the HTTP and FTP protocols in the cognitive radio network. The simulation results justify the use of the IEEE 802.22 standard in wireless networks.

  17. Modeling of indoor/outdoor fungi relationships in forty-four homes

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, M.J.

    1996-12-31

    From April through October 1994, a study was conducted in the Moline, Illinois-Bettendorf, Iowa area to measure bioaerosol concentrations in 44 homes housing a total of 54 asthmatic individuals. Air was sampled 3 to 10 times at each home over a period of seven months. A total of 852 pairs of individual samples were collected indoors at up to three locations (basement, kitchen, bedroom, or living room) and outside within two meters of each house.

  18. Far-UV observations of comet C/2012 S1 (ISON) with FORTIS

    Science.gov (United States)

    McCandliss, Stephan R.; Feldman, Paul D.; Weaver, Harold A.; Fleming, Brian; Redwine, Keith; Li, Mary J.; Kutyrev, Alexander; Moseley, Samuel H.

    2015-01-01

    Far-UV imagery and objective grating spectroscopy of comet C/2012 S1 (ISON) were acquired from NASA sounding rocket 36.296 UG, launched on 20 November 2013 at 04:40 MST (20.48 Nov 2013 UT), 8.32 days pre-perihelion, from the White Sands Missile Range, NM. The comet was 0.1° below ground horizon, 0.44 AU from the Sun, 0.86 AU from the Earth, and at a solar elongation of 26.3°. The payload reached an apogee of 279 km and the total time pointed at the comet was 353 s. At the time of launch ISON was undergoing a factor of 5 increase in water production rate, going from 3.5e29 to 19.6e29 molecules s^-1 between 19.6 and 21.6 Nov (Combi et al. 2014), marking what is thought to be a final fragmentation event (Sekanina & Kracht 2014). Our instrument, a wide-field multi-object spectro-telescope called FORTIS (Far-UV Off Rowland-circle Telescope for Imaging and Spectroscopy), observed Lyα emissions in an objective grating mode through an open microshutter array, developed at the Goddard Space Flight Center, over a (1/2°)^2 field-of-view. After accounting for slit losses and deadtime corrections we find a preliminary lower limit to the Lyα surface brightness of ~ 400 kilorayleighs, yielding a hydrogen production rate of QH ~ 5e29 atoms s^-1, in reasonable agreement with the Combi result. We also acquired a broadband image of the comet in the 1280 to 1900 Å bandpass. This image shows a drop in count rate proportional to altitude caused by increased absorption of cometary emissions by terrestrial O2 located in the lower thermosphere. O2 absorption acts as a selective time-dependent filter that attenuates cometary emissions from different atomic and molecular species at different rates during descent. Preliminary analysis suggests that the dominant species in a (1e5 km)^2 nuclear region is neutral carbon. The radial profile in comparison to a Haser model suggests that the C parent molecule had a lifetime (at 1 AU) ~ 1e5 s; much shorter than the expected lifetime of CO. We
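
    The Haser model invoked in the radial-profile comparison has a standard closed form for the number density of a parent species produced at rate Q, flowing radially outward at speed v, and photodissociating with lifetime τ (scale length γ = vτ); this is the textbook expression, not a formula quoted from the abstract:

        n(r) = \frac{Q}{4 \pi v r^{2}} \, e^{-r/(v\tau)}

    Fitting the observed radial profile to this form is what constrains the parent lifetime quoted above (~ 1e5 s at 1 AU).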

  19. A RECREATION OPTIMIZATION MODEL BASED ON THE TRAVEL COST METHOD

    OpenAIRE

    Hof, John G.; Loomis, John B.

    1983-01-01

    A recreation allocation model is developed which efficiently selects recreation areas and degree of development from an array of proposed and existing sites. The model does this by maximizing the difference between gross recreation benefits and travel, investment, management, and site-opportunity costs. The model presented uses the Travel Cost Method for estimating recreation benefits within an operations research framework. The model is applied to selection of potential wilderness areas in C...

  20. Continuum methods of physical modeling continuum mechanics, dimensional analysis, turbulence

    CERN Document Server

    Hutter, Kolumban

    2004-01-01

    The book unifies classical continuum mechanics and turbulence modeling, i.e. the same fundamental concepts are used to derive model equations for material behaviour and turbulence closure, and complements these with methods of dimensional analysis. The intention is to equip the reader with the ability to understand the complex nonlinear modeling in material behaviour and turbulence closure as well as to derive or invent his own models. Examples are mostly taken from environmental physics and geophysics.

  1. Numerical methods for modeling photonic-crystal VCSELs

    DEFF Research Database (Denmark)

    Dems, Maciej; Chung, Il-Sug; Nyakas, Peter

    2010-01-01

    We show a comparison of four different numerical methods for simulating photonic-crystal (PC) VCSELs. We present the theoretical basis behind each method and analyze the differences by studying a benchmark VCSEL structure, where the PC structure penetrates all VCSEL layers, the entire top-mirror DBR... to the effective index method. The simulation results elucidate the strengths and weaknesses of the analyzed methods and outline the limits of applicability of the different models....

  2. A Model-Driven Development Method for Management Information Systems

    Science.gov (United States)

    Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki

    Traditionally, a Management Information System (MIS) has been developed without using formal methods. With informal methods, the MIS is developed over its lifecycle without any models, which causes many problems, such as unreliable system design specifications. In order to overcome these problems, a model theory approach was proposed, based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly respond to changes in business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems that applies the model-driven development method to a component of the model theory approach. The experiment has shown an effort reduction of more than 30%.

  3. Extension of local front reconstruction method with controlled coalescence model

    Science.gov (United States)

    Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.

    2018-02-01

    The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulating such flows with standard fixed-grid methods, due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed-grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, the LFRM method predicts the droplet collisions better, especially at high velocity, than other fixed-grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method). When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that the LFRM coupled with film drainage models predicts the collision dynamics much better than the traditional methods.

  4. Prospective Mathematics Teachers' Opinions about Mathematical Modeling Method and Applicability of This Method

    Science.gov (United States)

    Akgün, Levent

    2015-01-01

    The aim of this study is to identify prospective secondary mathematics teachers' opinions about the mathematical modeling method and the applicability of this method in high schools. The case study design, which is among the qualitative research methods, was used in the study. The study was conducted with six prospective secondary mathematics…

  5. A Comparison of Surface Acoustic Wave Modeling Methods

    Science.gov (United States)

    Wilson, W. C.; Atkinson, G. M.

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method, a first-order model, and two second-order matrix methods: the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second-order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented along with measured data from devices.
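    In first-order impulse-response models of this kind, a uniform interdigital transducer's magnitude response follows a sinc shape set by the number of finger pairs and the synchronous frequency. The minimal Python sketch below illustrates that idea; the transducer parameters (50 finger pairs, 100 MHz centre frequency) are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def idt_response(f, f0=100e6, n_pairs=50):
        """First-order impulse-response estimate of a uniform IDT's
        magnitude response: |H| ~ Np * sinc(X), X = Np*pi*(f - f0)/f0."""
        x = n_pairs * np.pi * (f - f0) / f0
        return n_pairs * np.abs(np.sinc(x / np.pi))  # np.sinc(t) = sin(pi t)/(pi t)

    f = np.linspace(80e6, 120e6, 1001)
    h = idt_response(f)
    print(f"peak near {f[h.argmax()] / 1e6:.1f} MHz")
    ```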

  6. Object Oriented Modeling : A method for combining model and software development

    NARCIS (Netherlands)

    Van Lelyveld, W.

    2010-01-01

    When requirements for a new model cannot be met by available modeling software, new software can be developed for a specific model. Methods for the development of both model and software exist, but a method for combined development has not been found. A compatible way of thinking is required to…

  7. FORTY PLUS CLUBS AND WHITE-COLLAR MANHOOD DURING THE GREAT DEPRESSION

    Directory of Open Access Journals (Sweden)

    Gregory Wood

    2008-01-01

    As scholars of gender and labor have argued, chronic unemployment during the Great Depression precipitated a “crisis” of masculinity, compelling men to turn towards new industrial unions and the New Deal as ways to affirm work, breadwinning, and patriarchy as bases for manhood. But did all men experience this crisis? During the late 1930s, white-collar men organized groups called “Forty Plus Clubs” in response to their worries about joblessness and manhood. The clubs made it possible for unemployed executives to find new jobs, while at the same time recreating the male-dominated culture of the white-collar office. For male executives, Forty Plus Clubs precluded the Depression-era crisis of manhood, challenging the idea that the absence of paid employment was synonymous with the loss of masculinity.

  8. Risoe National Laboratory - Forty years of research in a changing society

    International Nuclear Information System (INIS)

    Nielsen, H.; Nielsen, K.; Petersen, F.; Siggaard Jensen, H.

    1998-01-01

    The creation of Risoe forty years ago was one of the largest single investments in Danish research. The intention was to realise Niels Bohr's visions of the peaceful use of nuclear energy in Denmark for electricity production and other purposes. Risoe decided to take the opportunity of its 40th anniversary in 1998 to have its history written in a form that would contribute to the history of modern Denmark. The result was a book in Danish entitled Til samfundets tarv - Forskningscenter Risoes historie. The present text is a slightly reworked translation of the last chapter of that book. It contains a summary of Risoe's history and some reflections on forty years of change: change in Danish society at large, in research policy, in energy policy, and in technological expectations; changes at Risoe, in leadership, in organisational structure, in strategy and in fields of research. Some of Risoe's largest projects are briefly characterised. (LN)

  9. Method for modeling social care processes for national information exchange.

    Science.gov (United States)

    Miettinen, Aki; Mykkänen, Juha; Laaksonen, Maarit

    2012-01-01

    Finnish social services comprise 21 service commissions of social welfare, including Adoption counselling, Income support, Child welfare, Services for immigrants, and Substance abuse care. This paper describes the method used for process modeling in the National project for IT in Social Services in Finland (Tikesos). The process modeling in the project aimed to support common national target-state processes from the perspective of a national electronic archive, increased interoperability between systems, and electronic client documents. The process steps and other aspects of the method are presented. The method was developed, used and refined during the three years of process modeling in the national project.

  10. [A new method of fabricating photoelastic model by rapid prototyping].

    Science.gov (United States)

    Fan, Li; Huang, Qing-feng; Zhang, Fu-qiang; Xia, Yin-pei

    2011-10-01

    To explore a novel method of fabricating photoelastic models using the rapid prototyping technique. A mandible model was made by rapid prototyping with computerized three-dimensional reconstruction; the photoelastic model with teeth was then fabricated by traditional impression duplicating and mould casting. The photoelastic model of the mandible with teeth, fabricated indirectly by rapid prototyping, was very similar to the prototype in geometry and physical parameters. The model had high optical sensitivity and met the experimental requirements. A photoelastic model of the mandible with teeth indirectly fabricated by rapid prototyping meets the photoelastic experimental requirements well.

  11. Forty years on and still going strong : the use of hominin-cercopithecid comparisons in palaeoanthropology.

    OpenAIRE

    Elton, S.

    2006-01-01

    Hominin-cercopithecid comparisons have been used in palaeoanthropology for over forty years. Fossil cercopithecids can be used as a ‘control group’ to contextualize the adaptations and evolutionary trends of hominins. Observations made on modern cercopithecids can also be applied to questions about human evolution. This article reviews the history of hominin-cercopithecid comparisons, assesses the strengths and weaknesses of cercopithecids as comparators in studies of human evolution, and use...

  12. Stencil method: a Markov model for transport in porous media

    Science.gov (United States)

    Delgoshaie, A. H.; Tchelepi, H.; Jenny, P.

    2016-12-01

    In porous media the transport of fluid is dominated by flow-field heterogeneity resulting from the underlying transmissibility field. Since the transmissibility is highly uncertain, many realizations of a geological model are used to describe the statistics of the transport phenomena in a Monte Carlo framework. One possible way to avoid the high computational cost of physics-based Monte Carlo simulations is to model the velocity field as a Markov process and use Markov Chain Monte Carlo. In previous works, multiple Markov models for discrete velocity processes have been proposed. These models can be divided into two general classes: Markov models in time and Markov models in space. Both of these choices have been shown to be effective to some extent. However, some studies have suggested that the Markov property cannot be confirmed for a temporal Markov process; therefore, there is no consensus about the validity and value of Markov models in time. Moreover, previous spatial Markov models have only been used for modeling transport on structured networks and cannot readily be applied to model transport in unstructured networks. In this work we propose a novel approach for constructing a Markov model in time (the stencil method) for a discrete velocity process. The results from the stencil method are compared to previously proposed spatial Markov models for structured networks. The stencil method is also applied to unstructured networks and can successfully describe the dispersion of particles in this setting. Our conclusion is that both temporal Markov models and spatial Markov models for discrete velocity processes can be valid for a range of model parameters. Moreover, we show that the stencil model can be more efficient in many practical settings and is suited to model dispersion both on structured and unstructured networks.
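    The core idea of a temporal Markov model for a discrete velocity process can be pictured with a toy example: a particle's velocity hops between a few discrete states according to a transition matrix, and displacement statistics emerge from many sampled paths. The sketch below is a generic illustration, not the paper's stencil construction; the three velocity states and the transition matrix are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative 3-state discrete velocity process (slow/medium/fast);
    # the transition matrix P is made up for demonstration.
    v_states = np.array([0.1, 1.0, 2.5])
    P = np.array([[0.80, 0.15, 0.05],
                  [0.10, 0.80, 0.10],
                  [0.05, 0.15, 0.80]])

    def sample_path(n_steps, dt=1.0):
        s, x = rng.integers(3), 0.0
        for _ in range(n_steps):
            x += v_states[s] * dt          # advect with current velocity
            s = rng.choice(3, p=P[s])      # Markov jump to next state
        return x

    positions = np.array([sample_path(200) for _ in range(2000)])
    print("mean displacement:", positions.mean())
    print("longitudinal spread (std):", positions.std())
    ```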

  13. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for researchers.

  14. A numerical method for a transient two-fluid model

    International Nuclear Information System (INIS)

    Le Coq, G.; Libmann, M.

    1978-01-01

    The transient boiling two-phase flow is studied. In nuclear reactors, the driving conditions for transient boiling are a pump power decay and/or an increase in heating power. The physical model adopted for the two-phase flow is the two-fluid model with the assumption that the vapor remains at saturation. The numerical method for solving the thermohydraulic problems is a shooting method; this method is highly implicit. A particular problem exists at the boiling and condensation front. A computer code using this numerical method allows the calculation of a transient boiling initiated from a steady state for a PWR or for an LMFBR.
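    As a generic picture of the shooting idea (not the code described in the record): a boundary-value problem is converted into an initial-value problem with an unknown initial condition, which is then adjusted until the far boundary condition is met. A minimal sketch on a toy problem, using standard SciPy routines:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    # Toy boundary-value problem y'' = -y, y(0) = 0, y(1) = 1, solved by
    # shooting on the unknown initial slope s = y'(0).
    def residual(s):
        sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, 1.0), [0.0, s],
                        rtol=1e-8, atol=1e-10)
        return sol.y[0, -1] - 1.0  # mismatch at the far boundary

    s_star = brentq(residual, 0.0, 5.0)        # root-find on the mismatch
    print(f"required initial slope: {s_star:.6f}")  # exact: 1/sin(1) = 1.1884...
    ```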

  15. Physical Model Method for Seismic Study of Concrete Dams

    Directory of Open Access Journals (Sweden)

    Bogdan Roşca

    2008-01-01

    The study of the dynamic behaviour of concrete dams by means of the physical model method is very useful for understanding the failure mechanism of these structures under the action of strong earthquakes. The physical model method consists of two main processes. First, a study model must be designed by a physical modeling process using dynamic modeling theory; the result is a system of equations for dimensioning the physical model. After the construction and instrumentation of the scale physical model, a structural analysis based on experimental means is performed, and the experimental results are gathered for analysis. Depending on the aim of the research, either an elastic or a failure physical model may be designed. The requirements for constructing an elastic model are easier to meet than those for a failure model, but the results obtained provide only limited information. In order to study the behaviour of concrete dams under strong seismic action, failure physical models are required that can accurately simulate the possible opening of joints, sliding between concrete blocks and the cracking of concrete. The design relations for both elastic and failure physical models are based on dimensional analysis and consist of similitude relations among the physical quantities involved in the phenomenon. The use of physical models of large or medium dimensions, as well as their instrumentation, brings great advantages, but also involves a large amount of financial, logistic and time resources.
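    The similitude relations the abstract refers to follow from dimensional analysis; for gravity-dominated hydraulic and seismic models a common simplified choice is Froude similitude. The sketch below lists the resulting prototype-to-model scale ratios for a geometric scale 1:L, assuming the same material density and gravity in model and prototype (assumptions that real dam models often relax).

    ```python
    # Froude-similitude scale factors for a 1:L geometric scale, assuming
    # identical density and gravity in model and prototype (a common,
    # simplified choice; not the specific relations used in the paper).
    def froude_scales(L):
        return {
            "length": L,
            "time": L ** 0.5,
            "velocity": L ** 0.5,
            "acceleration": 1.0,
            "frequency": L ** -0.5,
            "force": L ** 3,
            "stress": L,
        }

    for quantity, ratio in froude_scales(100).items():  # e.g. a 1:100 model
        print(f"{quantity:12s} prototype/model ratio = {ratio:g}")
    ```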

  16. A simple flow-concentration modelling method for integrating water ...

    African Journals Online (AJOL)

    A simple flow-concentration modelling method for integrating water quality and ... flow requirements are assessed for maintenance low flow, drought low flow ... the instream concentrations of chemical constituents that will arise from different ...

  17. Comparison of surrogate models with different methods in ...

    Indian Academy of Sciences (India)

    In this article, polynomial regression (PR), radial basis function artificial neural network (RBFANN), and kriging … 10 kriging models with different parameters were also obtained. … shapes using stochastic optimization methods and com…
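    The record above is truncated, but its subject, comparing surrogate models, is easy to illustrate. The self-contained sketch below (not from the article) fits a cubic polynomial regression and a Gaussian RBF interpolant to a handful of samples of a toy function and compares their test errors; the function, sample count, and RBF shape parameter are all arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    f = lambda x: np.sin(3 * x) + 0.5 * x          # "expensive" function
    xs = np.sort(rng.uniform(0, 3, 12))            # design sites
    ys = f(xs)

    # Surrogate 1: cubic polynomial regression
    coef = np.polyfit(xs, ys, 3)

    # Surrogate 2: Gaussian RBF interpolation (shape parameter is a guess)
    eps = 1.5
    K = np.exp(-(eps * (xs[:, None] - xs[None, :])) ** 2)
    w = np.linalg.solve(K, ys)
    rbf = lambda x: np.exp(-(eps * (x[:, None] - xs[None, :])) ** 2) @ w

    xt = np.linspace(0, 3, 200)
    print("poly RMSE:", np.sqrt(np.mean((np.polyval(coef, xt) - f(xt)) ** 2)))
    print("RBF  RMSE:", np.sqrt(np.mean((rbf(xt) - f(xt)) ** 2)))
    ```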

  18. Method and apparatus for modeling, visualization and analysis of materials

    KAUST Repository

    Aboulhassan, Amal; Hadwiger, Markus

    2016-01-01

    A method, apparatus, and computer readable medium are provided for modeling of materials and visualization of properties of the materials. An example method includes receiving data describing a set of properties of a material, and computing, by a processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling…

  19. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream…
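    As a concrete, if highly simplified, picture of the hierarchical idea: regional (group-level) parameters are drawn around a shared hyper-parameter, and both levels are updated from data. The sketch below runs a two-level Gibbs sampler for a normal hierarchical model with known variances; it is a generic illustration, not the SPARROW model, and all values are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic "regional" data: J groups sharing a common hyper-mean.
    J, n = 8, 20
    true_mu, tau, sigma = 2.0, 0.5, 1.0
    theta_true = rng.normal(true_mu, tau, J)
    y = rng.normal(theta_true[:, None], sigma, (J, n))

    ybar, mu = y.mean(axis=1), 0.0
    draws = []
    for it in range(5000):
        # group-level means given the hyper-mean (conjugate normal update)
        prec = n / sigma**2 + 1 / tau**2
        mean = (n * ybar / sigma**2 + mu / tau**2) / prec
        theta = rng.normal(mean, prec ** -0.5)
        # hyper-mean given the group means (flat hyperprior)
        mu = rng.normal(theta.mean(), tau / np.sqrt(J))
        if it >= 1000:                       # discard burn-in
            draws.append(mu)
    print("posterior mean of hyper-mean:", np.mean(draws))
    ```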

  20. Multifunctional Collaborative Modeling and Analysis Methods in Engineering Science

    Science.gov (United States)

    Ransom, Jonathan B.; Broduer, Steve (Technical Monitor)

    2001-01-01

    Engineers are challenged to produce better designs in less time and for less cost. Hence, to investigate novel and revolutionary design concepts, accurate, high-fidelity results must be assimilated rapidly into the design, analysis, and simulation process. This assimilation should consider diverse mathematical modeling and multi-discipline interactions necessitated by concepts exploiting advanced materials and structures. Integrated high-fidelity methods with diverse engineering applications provide the enabling technologies to assimilate these high-fidelity, multi-disciplinary results rapidly at an early stage in the design. These integrated methods must be multifunctional, collaborative, and applicable to the general field of engineering science and mechanics. Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations, including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each method's strengths are utilized. The multifunctional methodology presented provides an effective mechanism by which domains with diverse idealizations are…

  1. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    We extend the nonstandard finite difference method of solution to the study of pharmacokinetic–pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response of these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
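    For a one-compartment elimination model dC/dt = -kC, the classic nonstandard trick (due to Mickens) replaces the step size h in the forward difference by the denominator function φ(h) = (1 − e^(−kh))/k, which makes the discrete decay factor exactly e^(−kh). The sketch below contrasts this with standard forward Euler at a deliberately large step; it is a generic illustration of the NSFD idea, and the rate constant and dose are made-up numbers.

    ```python
    import numpy as np

    k, C0, h, T = 0.8, 10.0, 1.5, 15.0   # elimination rate, dose, big step
    steps = int(T / h)

    def euler():
        C = C0
        for _ in range(steps):
            C += -k * C * h              # standard forward Euler
        return C

    def nsfd():
        phi = (1 - np.exp(-k * h)) / k   # Mickens denominator function
        C = C0
        for _ in range(steps):
            C += -k * C * phi            # nonstandard finite difference
        return C

    print("exact :", C0 * np.exp(-k * T))
    print("euler :", euler())            # oscillates/goes negative when k*h > 1
    print("nsfd  :", nsfd())             # reproduces the exact decay for any h
    ```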

  2. 3D Face modeling using the multi-deformable method.

    Science.gov (United States)

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-09-25

    In this paper, we focus on the accuracy of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces for individuals at the end of the paper.

  3. Thermal Efficiency Degradation Diagnosis Method Using Regression Model

    International Nuclear Information System (INIS)

    Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol

    2011-01-01

    This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression models can then be inverted to infer the intrinsic state associated with a superficial state observed from a power plant. The diagnosis method proposed herein comprises three processes: 1) simulations of degradation conditions to obtain measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs including various root causes and/or boundary conditions, whereas the inverse what-if method is the process of calculating the inverse matrix from the given superficial states, that is, component degradation modes.
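    In spirit, the what-if stage generates (intrinsic state, measured state) pairs from the cycle simulator, a linear model is fitted to them, and the inverse what-if stage inverts that model for an observed measurement. The sketch below reproduces this loop with a made-up linear plant standing in for the turbine-cycle simulator, using an ordinary least-squares fit and a pseudo-inverse; it is an illustration of the scheme, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # "What-if" stage: simulate measured states y for sampled degradation
    # states x (a made-up 3 -> 4 linear plant replaces the simulator).
    A_true = rng.normal(size=(4, 3))
    X = rng.uniform(0, 1, (200, 3))                # degradation conditions
    Y = X @ A_true.T + 0.01 * rng.normal(size=(200, 4))

    # Fit the linear regression model Y ~ X A^T from the simulations.
    A_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)  # shape (3, 4) == A^T

    # "Inverse what-if" stage: recover the intrinsic state from a new
    # superficial (measured) observation with the pseudo-inverse.
    x_new = np.array([0.2, 0.7, 0.4])
    y_obs = A_true @ x_new
    x_est = np.linalg.pinv(A_hat.T) @ y_obs
    print("true degradation state:", x_new)
    print("estimated             :", x_est)
    ```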

  4. Dynamic model based on Bayesian method for energy security assessment

    International Nuclear Information System (INIS)

    Augutis, Juozas; Krikštolaitis, Ričardas; Pečiulytė, Sigita; Žutautaitė, Inga

    2015-01-01

    Highlights: • Methodology for dynamic indicator model construction and forecasting of indicators. • Application of the dynamic indicator model to energy system development scenarios. • Expert judgement involvement using the Bayesian method. - Abstract: The methodology for dynamic indicator model construction and forecasting of indicators for the assessment of the energy security level is presented in this article. An indicator is a special index which provides numerical values for factors important to the investigated area. In real life, models of different processes take into account various factors that are time-dependent and dependent on each other. Thus, it is advisable to construct a dynamic model in order to describe these dependences. The energy security indicators are used as factors in the dynamic model. Usually, the values of indicators are obtained from statistical data. The developed dynamic model enables forecasting of the indicators' variation, taking into account changes in system configuration. Energy system development is usually based on the construction of a new object. Since the parameters of the changes brought by the new system are not exactly known, information about their influence on the indicators cannot be incorporated into the model by deterministic methods. Thus, the dynamic indicator model based on historical data is adjusted by a probabilistic model of the influence of new factors on the indicators, using the Bayesian method.
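    The Bayesian adjustment described above can be pictured with the simplest conjugate case: treat the historical, data-driven forecast of an indicator as a normal prior and combine it with an expert's judgement, also expressed as a normal distribution. All numbers below are illustrative and not taken from the paper.

    ```python
    # Conjugate normal-normal update: the statistical forecast of an
    # indicator is the prior; an expert judgement about a planned new
    # facility supplies the "likelihood". Values are invented.
    def bayes_update(prior_mean, prior_var, expert_mean, expert_var):
        post_prec = 1.0 / prior_var + 1.0 / expert_var
        post_mean = (prior_mean / prior_var
                     + expert_mean / expert_var) / post_prec
        return post_mean, 1.0 / post_prec

    mean, var = bayes_update(prior_mean=0.62, prior_var=0.010,
                             expert_mean=0.55, expert_var=0.005)
    print(f"adjusted indicator forecast: {mean:.3f} (variance {var:.4f})")
    ```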

  5. Two updating methods for dissipative models with non symmetric matrices

    International Nuclear Information System (INIS)

    Billet, L.; Moine, P.; Aubry, D.

    1997-01-01

    In this paper the feasibility of extending two updating methods to rotating machinery models is considered; the particularity of rotating machinery models is the use of non-symmetric stiffness and damping matrices. It is shown that the two methods described here, the inverse eigensensitivity method and the error in constitutive relation method, can be adapted to such models given some modification. As far as the inverse sensitivity method is concerned, an error function based on the difference between calculated and measured right-hand eigenmode shapes and calculated and measured eigenvalues is used. Concerning the error in constitutive relation method, the equation which defines the error has to be modified because the stiffness matrix is not positive definite. The advantage of this modification is that, in some cases, it is possible to focus the updating process on some specific model parameters. Both methods were validated on a simple test model consisting of a two-bearing and disc rotor system. (author)

  6. A sediment graph model based on SCS-CN method

    Science.gov (United States)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    This paper proposes new conceptual sediment graph models based on the coupling of popular and extensively used methods, viz., the Nash-model-based instantaneous unit sediment graph (IUSG), the Soil Conservation Service curve number (SCS-CN) method, and a power law. These models vary in their complexity, and this paper tests their performance using data from the Nagwan watershed (area = 92.46 km²) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distributions) as well as total sediment yield.
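    The SCS-CN component of such models rests on the standard curve-number runoff equation: potential maximum retention S = 25400/CN − 254 (in mm), initial abstraction Ia = 0.2S, and direct runoff Q = (P − Ia)²/(P − Ia + S) for P > Ia. A minimal sketch of that equation (the storm depth and curve number are arbitrary example values):

    ```python
    def scs_cn_runoff(P, CN, lam=0.2):
        """Direct runoff depth Q (mm) from rainfall P (mm) by the SCS-CN
        method; S in mm, Ia = lam * S (lam = 0.2 is the customary value)."""
        S = 25400.0 / CN - 254.0      # potential maximum retention (mm)
        Ia = lam * S                  # initial abstraction (mm)
        return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

    # e.g. a 75 mm storm on a watershed with CN = 70
    print(f"Q = {scs_cn_runoff(75.0, 70):.1f} mm")
    ```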

  7. Automated Model Fit Method for Diesel Engine Control Development

    NARCIS (Netherlands)

    Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is…

  8. Fuzzy Clustering Methods and their Application to Fuzzy Modeling

    DEFF Research Database (Denmark)

    Kroszynski, Uri; Zhou, Jianjun

    1999-01-01

    Fuzzy modeling techniques based upon the analysis of measured input/output data sets result in a set of rules that allow to predict system outputs from given inputs. Fuzzy clustering methods for system modeling and identification result in relatively small rule-bases, allowing fast, yet accurate....... An illustrative synthetic example is analyzed, and prediction accuracy measures are compared between the different variants...
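    The record is truncated, but the best-known fuzzy clustering algorithm used for rule-base generation is fuzzy c-means, in which each sample receives a graded membership in every cluster and cluster centres are membership-weighted means. A minimal sketch follows; the data, cluster count and fuzzifier m = 2 are arbitrary, and the source may use a different variant (e.g. Gustafson-Kessel).

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def fcm(X, c=2, m=2.0, iters=100):
        """Plain fuzzy c-means: returns cluster centres and memberships U,
        where U[i, j] is the degree to which sample i belongs to cluster j."""
        U = rng.dirichlet(np.ones(c), size=len(X))   # random fuzzy partition
        for _ in range(iters):
            W = U ** m
            centres = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None] - centres[None, :], axis=2) + 1e-12
            ratio = (d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))
            U = 1.0 / ratio.sum(axis=2)
        return centres, U

    # Two blobby clusters; each cluster could then seed one fuzzy rule.
    X = np.vstack([rng.normal(0, .3, (50, 2)), rng.normal(2, .3, (50, 2))])
    centres, U = fcm(X)
    print("centres:\n", centres.round(2))
    ```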

  9. Automated model fit method for diesel engine control development

    NARCIS (Netherlands)

    Seykens, X.L.J.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.J.H.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is…

  10. Attitude Research in Science Education: Contemporary Models and Methods.

    Science.gov (United States)

    Crawley, Frank E.; Koballa, Thomas R., Jr.

    1994-01-01

    Presents a summary of models and methods of attitude research which are embedded in the theoretical tenets of social psychology and in the broader framework of constructivism. Focuses on the construction of social reality rather than the construction of physical reality. Models include theory of reasoned action, theory of planned behavior, and…

  11. Approximating methods for intractable probabilistic models: Applications in neuroscience

    DEFF Research Database (Denmark)

    Højen-Sørensen, Pedro

    2002-01-01

    This thesis investigates various methods for carrying out approximate inference in intractable probabilistic models. By capturing the relationships between random variables, the framework of graphical models hints at which sets of random variables pose a problem to the inferential step. The appro...

  12. Hierarchical modelling for the environmental sciences statistical methods and applications

    CERN Document Server

    Clark, James S

    2006-01-01

    New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.

  13. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  14. Vortex Tube Modeling Using the System Identification Method

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jaeyoung; Jeong, Jiwoong; Yu, Sangseok [Chungnam Nat'l Univ., Daejeon (Korea, Republic of)]; Im, Seokyeon [Tongmyong Univ., Busan (Korea, Republic of)]

    2017-05-15

    In this study, a vortex tube system model is developed to predict the temperatures of the hot and cold sides. The vortex tube model is developed based on the system identification method, and the model utilized in this work to design the vortex tube is of the ARX type (Auto-Regressive with eXogenous inputs). The derived polynomial model is validated against experimental data to verify the overall model accuracy. It is also shown that the derived model passes the stability test. It is confirmed that the derived model closely mimics the physical behavior of the vortex tube in both static and dynamic numerical experiments in which the angle of the low-temperature-side throttle valve is changed, clearly showing temperature separation. These results imply that system-identification-based modeling can be a promising approach for the prediction of complex physical systems, including the vortex tube.
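    An ARX model regresses the current output on past outputs and past inputs, so its parameters can be estimated by ordinary least squares on a regressor matrix of lagged signals. The sketch below identifies a second-order ARX model from a simulated input/output record; the "plant" is an invented stand-in for the vortex tube, not the authors' data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Simulate a toy SISO plant (valve angle -> temperature difference);
    # the true dynamics are invented for demonstration.
    N = 500
    u = rng.uniform(-1, 1, N)
    y = np.zeros(N)
    for t in range(2, N):
        y[t] = (1.2 * y[t-1] - 0.5 * y[t-2]
                + 0.4 * u[t-1] + 0.1 * u[t-2] + 0.01 * rng.normal())

    # ARX(2, 2) identification:
    # y(t) = a1 y(t-1) + a2 y(t-2) + b1 u(t-1) + b2 u(t-2)
    Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
    theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
    print("estimated [a1, a2, b1, b2]:", theta.round(3))
    ```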

  15. Large-signal modeling method for power FETs and diodes

    Energy Technology Data Exchange (ETDEWEB)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping, E-mail: sunlu_1019@126.co [School of Electromechanical Engineering, Xidian University, Xi'an 710071 (China)

    2009-06-01

    Under a large-signal drive level, a frequency-domain black-box model based on the nonlinear scattering function is introduced for power FETs and diodes. A time-domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering function of semiconductor devices. The extracted models reflect the real electrical performance of semiconductor devices and provide a new large-signal model for the design of microwave semiconductor circuits.

  16. Large-signal modeling method for power FETs and diodes

    International Nuclear Information System (INIS)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping

    2009-01-01

    Under a large-signal drive level, a frequency-domain black-box model based on the nonlinear scattering function is introduced for power FETs and diodes. A time-domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering function of semiconductor devices. The extracted models reflect the real electrical performance of semiconductor devices and provide a new large-signal model for the design of microwave semiconductor circuits.

  17. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    OpenAIRE

    J. Tang; Y. Wang; Y. Zhao; Y. Zhao; W. Hao; X. Ning; K. Lv; Z. Shi; M. Zhao

    2017-01-01

    Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and falling-leaf models have wide applications in animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape, the weight of leaves and the wind speed, three basic trajectories of leaves falling are defined, which ar...

  18. Optimization Models and Methods Developed at the Energy Systems Institute

    OpenAIRE

    N.I. Voropai; V.I. Zorkaltsev

    2013-01-01

    The paper briefly presents some optimization models of energy system operation and expansion that have been created at the Energy Systems Institute of the Siberian Branch of the Russian Academy of Sciences. Consideration is given to optimization models of energy development in Russia, a software package intended for analysis of power system reliability, and a model of flow distribution in hydraulic systems. A general idea of the optimization methods developed at the Energy Systems Institute...

  19. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    OpenAIRE

    Frantisek Jelenciak; Michael Gerke; Ulrich Borgolte

    2015-01-01

    This article describes the projection equivalent method (PEM) as a specific and relatively simple approach for the modelling of aircraft dynamics. By the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. It is characteristic of the PEM that, in principle, it provides an acceptable regression model of aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a...

  20. A discontinuous Galerkin method on kinetic flocking models

    OpenAIRE

    Tan, Changhui

    2014-01-01

    We study kinetic representations of flocking models. They arise from agent-based models for self-organized dynamics, such as the Cucker-Smale and Motsch-Tadmor models. We prove flocking behavior for the kinetic descriptions of flocking systems, which indicates a concentration in the velocity variable in infinite time. We propose a discontinuous Galerkin method to treat the asymptotic δ-singularity, and construct a high-order positivity-preserving scheme to solve kinetic flocking systems.
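    The agent-based Cucker-Smale system underlying these kinetic models steers each agent's velocity toward a communication-weighted average of the others, which produces the velocity concentration (flocking) the abstract mentions. A small sketch with the usual kernel ψ(r) = (1 + r²)^(−β); all parameter values below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Agent-based Cucker-Smale flocking (the microscopic model behind the
    # kinetic description); psi is the standard communication kernel.
    N, dt, steps, beta = 50, 0.05, 400, 0.25
    x = rng.uniform(-1, 1, (N, 2))
    v = rng.normal(0, 1, (N, 2))

    def psi(r):
        return (1.0 + r ** 2) ** (-beta)

    for _ in range(steps):
        r = np.linalg.norm(x[:, None] - x[None, :], axis=2)
        w = psi(r)
        # dv_i = (1/N) * sum_j psi(|x_j - x_i|) * (v_j - v_i)
        dv = (w[:, :, None] * (v[None, :, :] - v[:, None, :])).mean(axis=1)
        v += dt * dv
        x += dt * v

    print("velocity spread (shrinks -> flocking):", v.std(axis=0))
    ```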

  1. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of… kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function…

  2. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)

  3. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  4. Modelling viscoacoustic wave propagation with the lattice Boltzmann method.

    Science.gov (United States)

    Xia, Muming; Wang, Shucheng; Zhou, Hui; Shan, Xiaowen; Chen, Hanming; Li, Qingqing; Zhang, Qingchen

    2017-08-31

    In this paper, the lattice Boltzmann method (LBM) is employed to simulate wave propagation in viscous media. LBM is a kind of microscopic method for modelling waves by tracking the evolution states of a large number of discrete particles. By choosing different relaxation times in LBM experiments and using the spectral ratio method, we can reveal the relationship between the quality factor Q and the parameter τ in LBM. A two-dimensional (2D) homogeneous model and a two-layered model are tested in the numerical experiments, and the LBM results are compared against the reference solution of the viscoacoustic equations based on the Kelvin-Voigt model calculated by the finite difference method (FDM). The wavefields and amplitude spectra obtained by LBM coincide with those by FDM, which demonstrates the capability of the LBM with one relaxation time. The new scheme is relatively simple and efficient to implement compared with the traditional lattice methods. In addition, through a large number of experiments, we find that the relaxation time of LBM has a quantitative relationship with Q. Such a novel scheme offers an alternative forward modelling kernel for seismic inversion and a new model to describe the underground media.
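    At its core, an LBM time step is collide-and-stream on a small set of discrete particle populations; the relaxation time τ fixes the viscosity through ν = c_s²(τ − 1/2), which is why τ can be tied to attenuation and hence Q. Below is a minimal single-relaxation-time D2Q9 sketch of an acoustic pulse in a periodic box, a generic toy rather than the paper's viscoacoustic setup.

    ```python
    import numpy as np

    # Minimal D2Q9 BGK lattice-Boltzmann sketch: a small density pulse
    # relaxing in a periodic box. tau sets the viscosity, nu = cs^2*(tau-0.5).
    nx, ny, tau, steps = 100, 100, 0.8, 200
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)

    def equilibrium(rho, ux, uy):
        cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
        usq = ux**2 + uy**2
        return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    rho = 1.0 + 0.01 * np.exp(-((X - nx/2)**2 + (Y - ny/2)**2) / 25.0)
    f = equilibrium(rho, np.zeros_like(rho), np.zeros_like(rho))

    for _ in range(steps):
        rho = f.sum(axis=0)
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        f += -(f - equilibrium(rho, ux, uy)) / tau           # collision
        for i in range(9):                                   # streaming
            f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

    print("density range after propagation:", rho.min(), rho.max())
    ```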

  5. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...

  6. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

    Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  7. Generalized framework for context-specific metabolic model extraction methods

    Directory of Open Access Journals (Sweden)

    Semidán Robaina Estévez

    2014-09-01

    Genome-scale metabolic models are increasingly applied to investigate the physiology not only of simple prokaryotes, but also of eukaryotes such as plants, characterized by compartmentalized cells of multiple types. While genome-scale models aim at including the entirety of known metabolic reactions, mounting evidence has indicated that only a subset of these reactions is active in a given context, including developmental stage, cell type, or environment. As a result, several methods have been proposed to reconstruct context-specific models from existing genome-scale models by integrating various types of high-throughput data. Here we present a mathematical framework that puts all existing methods under one umbrella and provides the means to better understand their functioning, highlight similarities and differences, and help users in selecting the most suitable method for an application.

  8. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  9. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  10. Method and apparatus for modeling, visualization and analysis of materials

    KAUST Repository

    Aboulhassan, Amal

    2016-08-25

    A method, apparatus, and computer readable medium are provided for modeling of materials and visualization of properties of the materials. An example method includes receiving data describing a set of properties of a material, and computing, by a processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling, by the processor, the material using the geometric features and the extracted particle paths. The example method further includes generating, by the processor and based on the geometric modeling of the material, one or more visualizations regarding the material, and causing display, by a user interface, of the one or more visualizations.

  11. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools need to be integrated with work-flows and data-flows for specific product-process synthesis-design problems within a computer-aided framework. The framework therefore should be able to manage knowledge-data, models and the associated methods and tools needed by specific synthesis-design work...... of model based methods and tools within a computer aided framework for product-process synthesis-design will be highlighted.

  12. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    Science.gov (United States)

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply enough accurate information for the EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established by considering this paradigm principle. In the feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of humans. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050
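    A linear classifier in a kernel-induced feature space, trained on bivalent (like/dislike) judgements, can be illustrated with a kernel perceptron. The sketch below is a generic stand-in for the paper's model, with a synthetic "user" supplying the labels; the kernel width and data are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Kernel perceptron: a linear classifier in the RBF-kernel feature
    # space, trained on bivalent (like = +1 / dislike = -1) judgements.
    def k(A, b, gamma=2.0):
        return np.exp(-gamma * np.sum((A - b) ** 2, axis=-1))

    X = rng.uniform(-1, 1, (80, 2))                    # candidate designs
    y = np.where(np.sum(X ** 2, axis=1) < 0.5, 1, -1)  # hidden preference

    alpha = np.zeros(len(X))                           # dual weights
    for _ in range(20):                                # perceptron epochs
        for i in range(len(X)):
            score = np.sum(alpha * y * k(X, X[i]))
            if (1.0 if score >= 0 else -1.0) != y[i]:
                alpha[i] += 1.0                        # update on mistakes

    test = rng.uniform(-1, 1, (5, 2))
    print([1 if np.sum(alpha * y * k(X, t)) >= 0 else -1 for t in test])
    ```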

  13. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
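    One common way to realize such model-based estimation is to combine the pump's nominal-speed characteristic curve with the affinity laws, using the shaft power and rotational speed the frequency converter already estimates. The sketch below inverts an assumed QP (flow-power) curve; the curve coefficients and operating point are invented for illustration, not taken from the paper.

    ```python
    import numpy as np

    # Estimate pump flow rate from converter-provided shaft power and
    # speed via the nominal-speed QP curve and the affinity laws
    # (Q ~ n, P ~ n^3). Curve coefficients below are illustrative only.
    qp_nom = np.poly1d([0.0002, 0.10, 2.0])   # P [kW] vs Q [l/s] at n0
    n0 = 1450.0                               # nominal speed [rpm]

    def estimate_flow(P_meas, n):
        P_at_n0 = P_meas * (n0 / n) ** 3      # refer measured power to n0
        roots = (qp_nom - P_at_n0).roots      # invert the QP curve
        q0 = next(r.real for r in roots
                  if abs(r.imag) < 1e-9 and r.real > 0)
        return q0 * n / n0                    # refer flow back to speed n

    print(f"estimated flow: {estimate_flow(P_meas=4.2, n=1200.0):.1f} l/s")
    ```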

  14. Maxillary distraction osteogenesis at Le Fort-I level induces bone apposition at infraorbital rim.

    Science.gov (United States)

    Rattan, Vidya; Jena, Ashok Kumar; Singh, Satinder Pal; Utreja, Ashok Kumar

    2014-09-01

    The aim of this study is to evaluate whether there is any remodeling of bone at the infraorbital rim following maxillary distraction osteogenesis (DO) at the Le Fort-I level. Twelve adult subjects in the age range of 17-21 years with complete unilateral cleft lip and palate underwent advancement of the maxilla by DO. The effect of maxillary DO on infraorbital rim remodeling was evaluated from lateral cephalograms recorded prior to DO (T0), at the end of DO (T1), and at least 2 years after DO (T2) by Walker's analysis. ANOVA and the two-tailed t test were used, and a probability value (P value) below 0.05 was considered statistically significant. There was anterior movement of the maxilla by 9.22 ± 3.27 mm and 7.67 ± 3.99 mm at the end of immediate (T1) and long-term (T2) follow-up of maxillary DO, respectively. Walker's analysis showed 1.49 ± 1.22 mm and 2.31 ± 1.81 mm anterior movement of the infraorbital margin (Orbitale point) at the end of T1 and T2, respectively (P < 0.05). Maxillary distraction osteogenesis at the Le Fort-I level induced significant bone apposition at the infraorbital rim. Patients with mild midface hypoplasia, who might otherwise be candidates for osteotomy at the Le Fort-II or Le Fort-III level, may benefit from maxillary distraction at the Le Fort-I level.

  15. Neuropeptides in the desert ant Cataglyphis fortis: Mass spectrometric analysis, localization, and age-related changes.

    Science.gov (United States)

    Schmitt, Franziska; Vanselow, Jens T; Schlosser, Andreas; Wegener, Christian; Rössler, Wolfgang

    2017-03-01

    Cataglyphis desert ants exhibit an age-related polyethism, with ants performing tasks in the dark nest for the first ∼4 weeks of their adult life before they switch to visually based long-distance navigation to forage. Although behavioral and sensory aspects of this transition have been studied, the internal factors triggering the behavioral changes are largely unknown. We suggest the neuropeptide families allatostatin A (AstA), allatotropin (AT), short neuropeptide F (sNPF), and tachykinin (TK) as potential candidates. Based on a neuropeptidomic analysis in Camponotus floridanus, nano-LC-ESI MS/MS was used to identify these neuropeptides biochemically in Cataglyphis fortis. Furthermore, we show that all identified peptide families are present in the central brain and ventral ganglia of C. fortis whereas in the retrocerebral complex only sNPF could be detected. Immunofluorescence staining against AstA, AT, and TK in the brain revealed arborizations of AstA- and TK-positive neurons in primary sensory processing centers and higher order integration centers, whereas AT immunoreactivity was restricted to the central complex, the antennal mechanosensory and motor center, and the protocerebrum. For artificially dark-kept ants, we found that TK distribution changed markedly in the central complex from days 1 and 7 to day 14 after eclosion. Based on functional studies in Drosophila, this age-related variation of TK is suggestive of a modulatory role in locomotion behavior in C. fortis. We conclude that the general distribution and age-related changes in neuropeptides indicate a modulatory role in sensory input regions and higher order processing centers in the desert ant brain. J. Comp. Neurol. 525:901-918, 2017. © 2016 Wiley Periodicals, Inc.

  16. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  17. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  18. Methods improvements incorporated into the SAPHIRE ASP models

    International Nuclear Information System (INIS)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.

    1995-01-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements

  19. Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models

    Science.gov (United States)

    Marquette, Michele L.; Sognier, Marguerite A.

    2013-01-01

    An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (i.e., bone and cardiac) from corresponding precursor cells.

  20. Methods and models in mathematical biology deterministic and stochastic approaches

    CERN Document Server

    Müller, Johannes

    2015-01-01

    This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and  branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.

  1. Modes of failure of Osteonics constrained tripolar implants: a retrospective analysis of forty-three failed implants.

    Science.gov (United States)

    Guyen, Olivier; Lewallen, David G; Cabanela, Miguel E

    2008-07-01

    The Osteonics constrained tripolar implant has been one of the most commonly used options to manage recurrent instability after total hip arthroplasty. Mechanical failures were expected and have been reported. The purpose of this retrospective review was to identify the observed modes of failure of this device. Forty-three failed Osteonics constrained tripolar implants were revised at our institution between September 1997 and April 2005. All revisions related to the constrained acetabular component only were considered as failures. All of the devices had been inserted for recurrent or intraoperative instability during revision procedures. Seven different methods of implantation were used. Operative reports and radiographs were reviewed to identify the modes of failure. The average time to failure of the forty-three implants was 28.4 months. A total of five modes of failure were observed: failure at the bone-implant interface (type I), which occurred in eleven hips; failure at the mechanisms holding the constrained liner to the metal shell (type II), in six hips; failure of the retaining mechanism of the bipolar component (type III), in ten hips; dislocation of the prosthetic head at the inner bearing of the bipolar component (type IV), in three hips; and infection (type V), in twelve hips. The mode of failure remained unknown in one hip that had been revised at another institution. The Osteonics constrained tripolar total hip arthroplasty implant is a complex device involving many parts. We showed that failure of this device can occur at most of its interfaces. It would therefore appear logical to limit its application to salvage situations.

  2. A Pansharpening Method Based on HCT and Joint Sparse Model

    Directory of Open Access Journals (Sweden)

    XU Ning

    2016-04-01

    A novel fusion method based on the hyperspherical color transformation (HCT) and a joint sparsity model is proposed to further decrease the spectral distortion of the fused image. In the method, an intensity component and the angles of each band of the multispectral image are first obtained by the HCT, and then the intensity component is fused with the panchromatic image through the wavelet transform and the joint sparsity model. In the joint sparsity model, the redundant and complementary information of the different images can be efficiently extracted and employed to yield high quality results. Finally, the fused multispectral image is obtained by inverse transforms of the wavelet and HCT on the new lower frequency image and the angle components, respectively. Experimental results on Pleiades-1 and WorldView-2 satellites indicate that the proposed method achieves remarkable results.

  3. Continuum-Kinetic Models and Numerical Methods for Multiphase Applications

    Science.gov (United States)

    Nault, Isaac Michael

    This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro or meso-scopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation could be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper-elasto-plastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale Crystal Plasticity model.

  4. Statistical learning modeling method for space debris photometric measurement

    Science.gov (United States)

    Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen

    2016-03-01

    Photometric measurement is an important way to characterize space debris, but present methods of photometric measurement place many constraints on the star image and need complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, and the statistical information of star images is used to suppress measurement noise. First, the known stars in the star image are divided into training stars and testing stars. Then, the training stars are used to fit the parameters of the photometric measurement model by least squares, and the testing stars are used to evaluate the measurement accuracy of the model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
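
    The abstract does not give the paper's exact model; the sketch below illustrates the general recipe it describes, assuming a simple linear calibration between instrumental and catalog magnitudes: fit the parameters on training stars by least squares, then estimate the measurement accuracy from the residual scatter on testing stars. The data and the calibration form are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical instrumental vs. catalog magnitudes of known stars.
m_catalog = rng.uniform(8.0, 14.0, size=60)
m_instr = 1.02 * m_catalog - 3.5 + rng.normal(0.0, 0.1, size=60)

# Split the known stars into training and testing sets.
train, test = np.arange(40), np.arange(40, 60)

# Least-squares fit of the calibration model m_catalog = a * m_instr + b.
A = np.vstack([m_instr[train], np.ones(train.size)]).T
(a, b), *_ = np.linalg.lstsq(A, m_catalog[train], rcond=None)

# Measurement accuracy estimated from the testing stars.
resid = m_catalog[test] - (a * m_instr[test] + b)
print(f"a={a:.3f}, b={b:.3f}, test RMS = {resid.std():.3f} mag")
```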

  5. Efficient model learning methods for actor-critic control.

    Science.gov (United States)

    Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik

    2012-06-01

    We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
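
    As a hedged illustration of the building block both algorithms rely on, here is a minimal local linear regression (LLR) predictor: for a query point, fit an affine model to the k nearest stored samples and evaluate it. The pendulum-style data are invented, and the paper's memory management and actor-critic updates are not shown.

```python
import numpy as np

def llr_predict(X, Y, x_query, k=10):
    """Local linear regression: fit an affine model to the k nearest samples."""
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(d)[:k]
    # Affine model y = [x, 1] @ W fitted by least squares on the neighbours.
    A = np.hstack([X[idx], np.ones((k, 1))])
    W, *_ = np.linalg.lstsq(A, Y[idx], rcond=None)
    return np.append(x_query, 1.0) @ W

# Hypothetical memory of (state, next state) samples for a process model.
rng = np.random.default_rng(2)
X = rng.uniform(-np.pi, np.pi, size=(500, 2))        # (angle, velocity)
Y = X + 0.05 * np.c_[X[:, 1], -np.sin(X[:, 0])]      # crude pendulum step
print(llr_predict(X, Y, np.array([0.3, -0.1])))
```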

  6. Methods of mathematical modelling continuous systems and differential equations

    CERN Document Server

    Witelski, Thomas

    2015-01-01

    This book presents mathematical modelling and the integrated process of formulating sets of equations to describe real-world problems. It describes methods for obtaining solutions of challenging differential equations stemming from problems in areas such as chemical reactions, population dynamics, mechanical systems, and fluid mechanics. Chapters 1 to 4 cover essential topics in ordinary differential equations, transport equations and the calculus of variations that are important for formulating models. Chapters 5 to 11 then develop more advanced techniques including similarity solutions, matched asymptotic expansions, multiple scale analysis, long-wave models, and fast/slow dynamical systems. Methods of Mathematical Modelling will be useful for advanced undergraduate or beginning graduate students in applied mathematics, engineering and other applied sciences.

  7. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my, E-mail: balbir@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: samsul-ariffin@petronas.com.my, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
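
    For readers who want to reproduce the general procedure, a minimal sketch using scipy's curve_fit with a two-term sine model and the same goodness-of-fit statistics (RMSE and R²) follows; the synthetic radiation data and initial guesses are illustrative, not the UTP measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def sine2(t, a1, b1, c1, a2, b2, c2):
    """Two-term sine model, analogous to a 'sin2' fit type."""
    return a1 * np.sin(b1 * t + c1) + a2 * np.sin(b2 * t + c2)

# Hypothetical half-hourly global solar radiation over one day (W/m^2).
t = np.linspace(0, 24, 49)
y = np.clip(900 * np.sin(np.pi * (t - 6) / 12), 0, None)
y = y + np.random.default_rng(3).normal(0, 20, t.size)

p0 = [900, 0.26, -1.6, 100, 0.5, 0.0]     # rough initial guess
p, _ = curve_fit(sine2, t, y, p0=p0, maxfev=20000)

yhat = sine2(t, *p)
rmse = np.sqrt(np.mean((y - yhat) ** 2))
r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"RMSE = {rmse:.1f}, R^2 = {r2:.3f}")
```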

  8. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several type of curve fitting method to smooth the global solar radiation data. After the data have been fitted by using curve fitting method, the mathematical model of global solar radiation will be developed. The error measurement was calculated by using goodness-fit statistics such as root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of mathematical modeling of solar radiation received in Universiti Teknologi PETRONAS (UTP) Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) gives better results as compare with the other fitting methods.

  9. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several type of curve fitting method to smooth the global solar radiation data. After the data have been fitted by using curve fitting method, the mathematical model of global solar radiation will be developed. The error measurement was calculated by using goodness-fit statistics such as root mean square error (RMSE) and the value of R 2 . The best fitting methods will be used as a starting point for the construction of mathematical modeling of solar radiation received in Universiti Teknologi PETRONAS (UTP) Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) gives better results as compare with the other fitting methods

  10. Discrete gradient methods for solving variational image regularisation models

    International Nuclear Information System (INIS)

    Grimm, V; McLachlan, Robert I; McLaren, David I; Quispel, G R W; Schönlieb, C-B

    2017-01-01

    Discrete gradient methods are well-known methods of geometric numerical integration, which preserve the dissipation of gradient systems. In this paper we show that this property of discrete gradient methods can be interesting in the context of variational models for image processing, that is where the processed image is computed as a minimiser of an energy functional. Numerical schemes for computing minimisers of such energies are desired to inherit the dissipative property of the gradient system associated to the energy and consequently guarantee a monotonic decrease of the energy along iterations, avoiding situations in which more computational work might lead to less optimal solutions. Under appropriate smoothness assumptions on the energy functional we prove that discrete gradient methods guarantee a monotonic decrease of the energy towards stationary states, and we promote their use in image processing by exhibiting experiments with convex and non-convex variational models for image deblurring, denoising, and inpainting. (paper)
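
    A minimal illustration of the dissipation-preserving idea, on a quadratic energy rather than an image functional: with the midpoint (mean-value) discrete gradient, the implicit update is linear in the new iterate and the energy decreases monotonically by construction. The energy and step size below are invented for the sketch.

```python
import numpy as np

# Quadratic energy V(x) = 0.5 x^T A x - b^T x, a stand-in for a smooth,
# convex variational energy (the paper treats image functionals).
rng = np.random.default_rng(4)
M = rng.normal(size=(20, 20))
A = M.T @ M + np.eye(20)           # symmetric positive definite
b = rng.normal(size=20)
V = lambda x: 0.5 * x @ A @ x - b @ x

# Midpoint (mean-value) discrete gradient step: the implicit relation
#   x_new = x - tau * (A (x + x_new)/2 - b)
# is linear in x_new and can be solved exactly.
tau, x = 0.5, np.zeros(20)
I = np.eye(20)
for _ in range(30):
    x_new = np.linalg.solve(I + 0.5 * tau * A, (I - 0.5 * tau * A) @ x + tau * b)
    assert V(x_new) <= V(x) + 1e-12    # monotone energy decrease, by construction
    x = x_new
print("final energy:", V(x))
```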

  11. A meshless method for modeling convective heat transfer

    Energy Technology Data Exchange (ETDEWEB)

    Carrington, David B [Los Alamos National Laboratory

    2010-01-01

    A meshless method is used in a projection-based approach to solve the primitive equations for fluid flow with heat transfer. The method is easy to implement in a MATLAB format. Radial basis functions are used to solve two benchmark test cases: natural convection in a square enclosure and flow with forced convection over a backward facing step. The results are compared with two popular and widely used commercial codes: COMSOL, a finite element model, and FLUENT, a finite volume-based model.
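
    The sketch below shows the radial basis function collocation idea on a much simpler problem than the paper's (a 1D Poisson equation rather than the flow equations with heat transfer), using multiquadric basis functions; the problem, parameters, and basis choice are assumptions for illustration.

```python
import numpy as np

# Kansa-type RBF collocation for u''(x) = f(x), u(0) = u(1) = 0,
# with multiquadric basis phi(r) = sqrt(r^2 + c^2).
n, c = 25, 0.1
x = np.linspace(0.0, 1.0, n)
f = lambda x: -np.pi**2 * np.sin(np.pi * x)   # exact solution: sin(pi x)

dx = x[:, None] - x[None, :]
phi = np.sqrt(dx**2 + c**2)
phi_xx = c**2 / phi**3                        # d^2/dx^2 of the multiquadric

# Interior rows enforce the PDE, boundary rows enforce u = 0.
A = phi_xx.copy()
rhs = f(x)
A[[0, -1], :] = phi[[0, -1], :]
rhs[[0, -1]] = 0.0

lam = np.linalg.solve(A, rhs)
u = phi @ lam
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```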

  12. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems Optimization modeling and algorithms are key components to problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process. Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear

  13. Evaluation of radiological processes in the Ternopil region by the box models method

    Directory of Open Access Journals (Sweden)

    І.В. Матвєєва

    2006-02-01

    Results of analyses of Sr-90 radionuclide flows in the ecosystem of Kotsubinchiky village, Ternopil oblast, are presented. A block scheme of the ecosystem and its mathematical model were constructed using the box models method. This allowed us to evaluate how the dose loadings from internal irradiation form for various population groups – working people, retirees, children – and to predict the dynamics of these loadings over the years following the Chernobyl accident.
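
    A minimal sketch of a box model of this kind: compartments coupled by first-order transfer coefficients plus radioactive decay, integrated as linear ODEs. The compartments and all transfer rates below are hypothetical; only the Sr-90 half-life is a physical constant.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal box model: Sr-90 activity moving soil -> plants -> people,
# with radioactive decay in every compartment. Rates (1/yr) are hypothetical.
LAMBDA = np.log(2) / 28.8                 # Sr-90 decay, T1/2 = 28.8 yr
k_soil_plant, k_plant_human, k_loss = 0.05, 0.02, 0.10

def rhs(t, y):
    soil, plant, human = y
    return [
        -(LAMBDA + k_soil_plant) * soil,
        k_soil_plant * soil - (LAMBDA + k_plant_human) * plant,
        k_plant_human * plant - (LAMBDA + k_loss) * human,
    ]

sol = solve_ivp(rhs, (0, 40), [1.0, 0.0, 0.0])   # 40 years after deposition
print("relative activities after 40 yr:", sol.y[:, -1])
```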

  14. The Langevin method and Hubbard-like models

    International Nuclear Information System (INIS)

    Gross, M.; Hamber, H.

    1989-01-01

    The authors reexamine the difficulties associated with application of the Langevin method to numerical simulation of models with non-positive definite statistical weights, including the Hubbard model. They show how to avoid the violent crossing of the zeroes of the weight and how to move those nodes away from the real axis. However, it still appears necessary to keep track of the sign (or phase) of the weight

  15. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  16. An alternative method for centrifugal compressor loading factor modelling

    Science.gov (United States)

    Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.

    2017-08-01

    The loading factor at the design point is calculated by one or another empirical formula in classical design methods; performance modelling as a whole is out of consideration. Test data from compressor stages demonstrate that the loading factor versus flow coefficient at the impeller exit has a linear character, independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function – the loading factor at the design point and at zero flow rate. The proper formulae include empirical coefficients. A good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations are proposed with universal empirical coefficients. The calculation error lies in the range of ±1.5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.
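
    A tiny sketch of the two-point construction described above: a linear loading factor versus flow coefficient, fixed by the value at zero flow rate and the value at the design point. All numbers are illustrative, not the paper's coefficients.

```python
# Linear loading-factor performance defined by two points: the value at
# zero flow rate and the value at the design point (illustrative numbers).
psi_0 = 0.72                      # loading factor at zero flow rate
phi_des, psi_des = 0.065, 0.58    # design point

slope = (psi_des - psi_0) / phi_des

def loading_factor(phi):
    """Loading factor vs. flow coefficient at the impeller exit."""
    return psi_0 + slope * phi

for phi in (0.0, 0.03, 0.065, 0.08):
    print(f"phi = {phi:.3f} -> psi = {loading_factor(phi):.3f}")
```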

  17. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern: estimating whether an approximation over- or under-fits the original model; invalidating an approximation; and ranking possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  18. Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper studies a multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics systems that can carry out multi-angle, multi-level and multi-stage description of aerospace general embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is an object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.

  19. Annular dispersed flow analysis model by Lagrangian method and liquid film cell method

    International Nuclear Information System (INIS)

    Matsuura, K.; Kuchinishi, M.; Kataoka, I.; Serizawa, A.

    2003-01-01

    A new annular dispersed flow analysis model was developed. In this model, both droplet behavior and liquid film behavior were simultaneously analyzed. Droplet behavior in turbulent flow was analyzed by the Lagrangian method with a refined stochastic model. On the other hand, liquid film behavior was simulated by the boundary condition of a moving rough wall and a liquid film cell model, which was used to estimate the liquid film flow rate. The height of the moving rough wall was estimated by a disturbance wave height correlation. In each liquid film cell, the liquid film flow rate was calculated by considering the droplet deposition and entrainment flow rates. The droplet deposition flow rate was calculated by the Lagrangian method and the entrainment flow rate was calculated by an entrainment correlation. For the verification of the moving rough wall model, turbulent flow analysis results under the annular flow condition were compared with the experimental data. Agreement between analysis results and experimental results was fairly good. Furthermore, annular dispersed flow experiments were analyzed in order to verify the droplet behavior model and the liquid film cell model. The experimental results for the radial distribution of droplet mass flux were compared with analysis results. The agreement was good under low liquid flow rate conditions and poor under high liquid flow rate conditions. But by modifying the entrainment rate correlation, the agreement became good even at high liquid flow rates. This means that the basic analysis method for droplet and liquid film behavior was sound. In future work, verification calculations should be carried out under different experimental conditions and the entrainment ratio correlation should also be corrected
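
    The liquid film cell bookkeeping can be sketched as a marching mass balance: each cell updates the film flow rate with the deposition and entrainment contributions. In the paper the deposition rate comes from the Lagrangian droplet tracking and the entrainment rate from a correlation; here both are hypothetical constants.

```python
# March along the channel: in each liquid film cell the film mass flow
# rate changes by (deposition - entrainment) over the cell area.
n_cells, dz, perimeter = 50, 0.02, 0.05   # geometry (m), illustrative
m_film = 0.010                            # inlet film flow rate (kg/s)

dep_flux = 0.8     # deposition mass flux (kg/m^2 s), hypothetical constant
ent_coeff = 2.0    # entrainment rate per unit film flow (1/m), hypothetical

profile = []
for _ in range(n_cells):
    deposition = dep_flux * perimeter * dz   # from Lagrangian tracking in the paper
    entrainment = ent_coeff * m_film * dz    # from a correlation in the paper
    m_film = max(m_film + deposition - entrainment, 0.0)
    profile.append(m_film)

print("outlet film flow rate: %.4f kg/s" % profile[-1])
```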

  20. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to the real networks, generating the artificial networks at different scales under special conditions, investigating a network dynamics, reconstructing missing data, predicting network response, detecting anomalies and other tasks. Network generation, reconstruction, and prediction of its future topology are central issues of this field. In this project, we address the questions related to the understanding of the network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization such as R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes and modeling that addresses the problem of preserving a set of simplified properties do not fit accurately enough the real networks. Among the unsatisfactory features are numerically inadequate results, non-stability of algorithms on real (artificial) data, that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  1. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon a set of assumptions incompletely formulated or of value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; decision theory in presence of uncertainty and multiple objectives. Purpose and prospect of comparative studies are assessed in view of probable diminishing returns for large generic comparisons

  2. Toric Lego: A method for modular model building

    CERN Document Server

    Balasubramanian, Vijay; García-Etxebarria, Iñaki

    2010-01-01

    Within the context of local type IIB models arising from branes at toric Calabi-Yau singularities, we present a systematic way of joining any number of desired sectors into a consistent theory. The different sectors interact via massive messengers with masses controlled by tunable parameters. We apply this method to a toy model of the minimal supersymmetric standard model (MSSM) interacting via gauge mediation with a metastable supersymmetry breaking sector and an interacting dark matter sector. We discuss how a mirror procedure can be applied in the type IIA case, allowing us to join certain intersecting brane configurations through massive mediators.

  3. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what...

  4. Novel extrapolation method in the Monte Carlo shell model

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2010-01-01

    We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of 56Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g9/2-shell calculation of 64Ge.

  5. Moments Method for Shell-Model Level Density

    International Nuclear Information System (INIS)

    Zelevinsky, V; Horoi, M; Sen'kov, R A

    2016-01-01

    The modern form of the Moments Method applied to the calculation of the nuclear shell-model level density is explained and examples of the method at work are given. The calculated level density practically exactly coincides with the result of full diagonalization when the latter is feasible. The method provides the pure level density for given spin and parity with spurious center-of-mass excitations subtracted. The presence and interplay of all correlations leads to the results different from those obtained by the mean-field combinatorics. (paper)

  6. Methods improvements incorporated into the SAPHIRE ASP models

    International Nuclear Information System (INIS)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.; Smith, C.L.; Rasmuson, D.M.

    1994-01-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methodology, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements

  7. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...
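
    To make the set partitioning model concrete, the sketch below chooses a minimum-cost subset of candidate schedules such that every task is covered exactly once. The schedules, tasks, and costs are invented, and real crew problems are solved with column generation or branch-and-price rather than enumeration.

```python
from itertools import combinations

# Set partitioning: pick schedules so every task is covered exactly once.
tasks = {1, 2, 3, 4}
schedules = {            # candidate schedule -> (tasks covered, cost)
    "A": ({1, 2}, 5.0),
    "B": ({3, 4}, 6.0),
    "C": ({1, 3}, 4.0),
    "D": ({2, 4}, 4.5),
    "E": ({1, 2, 3, 4}, 12.0),
}

best = None
for r in range(1, len(schedules) + 1):
    for combo in combinations(schedules, r):
        covered = [t for s in combo for t in schedules[s][0]]
        if sorted(covered) == sorted(tasks):      # exact cover, no overlaps
            cost = sum(schedules[s][1] for s in combo)
            if best is None or cost < best[1]:
                best = (combo, cost)
print(best)   # (('C', 'D'), 8.5)
```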

  8. Modelling of Granular Materials Using the Discrete Element Method

    DEFF Research Database (Denmark)

    Ullidtz, Per

    1997-01-01

    With the Discrete Element Method it is possible to model materials that consist of individual particles, where a particle may roll or slide on other particles. This is interesting because most of the deformation in granular materials is due to rolling or sliding rather than compression of the grains.

  9. Moderation instead of modelling: some arguments against formal engineering methods

    NARCIS (Netherlands)

    Rauterberg, G.W.M.; Sikorski, M.; Rauterberg, G.W.M.

    1998-01-01

    The more formal the engineering techniques used, the fewer non-technical facts can be captured. Several business process reengineering and software development projects fail because project management concentrates too much on formal methods and modelling approaches. A successful change of

  10. The research methods and model of protein turnover in animal

    International Nuclear Information System (INIS)

    Wu Xilin; Yang Feng

    2002-01-01

    The author discussed the concept and research methods of protein turnover in the animal body. The existing problems and the research results on animal protein turnover in recent years were presented. Meanwhile, measures to improve the models of animal protein turnover were analyzed

  11. Methods and models for the construction of weakly parallel tests

    NARCIS (Netherlands)

    Adema, J.J.; Adema, Jos J.

    1992-01-01

    Several methods are proposed for the construction of weakly parallel tests [i.e., tests with the same test information function (TIF)]. A mathematical programming model that constructs tests containing a prespecified TIF and a heuristic that assigns items to tests with information functions that are

  12. Ethnographic Decision Tree Modeling: A Research Method for Counseling Psychology.

    Science.gov (United States)

    Beck, Kirk A.

    2005-01-01

    This article describes ethnographic decision tree modeling (EDTM; C. H. Gladwin, 1989) as a mixed method design appropriate for counseling psychology research. EDTM is introduced and located within a postpositivist research paradigm. Decision theory that informs EDTM is reviewed, and the 2 phases of EDTM are highlighted. The 1st phase, model…

  13. Heat bath method for the twisted Eguchi-Kawai model

    International Nuclear Information System (INIS)

    Fabricius, K.; Haan, O.

    1984-01-01

    We reformulate the twisted Eguchi-Kawai model in a way that allows us to use the heat bath method for the updating procedure of the link matrices. This new formulation is more efficient by a factor of 2.5 in computer time and of 2.3 in memory need. (orig.)

  14. Heat bath method for the twisted Eguchi-Kawai model

    Energy Technology Data Exchange (ETDEWEB)

    Fabricius, K.; Haan, O.

    1984-08-16

    We reformulate the twisted Eguchi-Kawai model in a way that allows us to use the heat bath method for the updating procedure of the link matrices. This new formulation is more efficient by a factor of 2.5 in computer time and of 2.3 in memory need.

  15. Methods and models for the construction of weakly parallel tests

    NARCIS (Netherlands)

    Adema, J.J.; Adema, Jos J.

    1990-01-01

    Methods are proposed for the construction of weakly parallel tests, that is, tests with the same test information function. A mathematical programing model for constructing tests with a prespecified test information function and a heuristic for assigning items to tests such that their information

  16. Arctic curves in path models from the tangent method

    Science.gov (United States)

    Di Francesco, Philippe; Lapa, Matthew F.

    2018-04-01

    Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.

  17. Application of the simplex method of linear programming model to ...

    African Journals Online (AJOL)

    This work discusses how the simplex method of linear programming can be used to maximize the profit of a business firm, using Saclux Paint Company as a case study. It equally elucidates the effect that variation in the optimal result obtained from a linear programming model will have on any given firm. It was demonstrated ...
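
    A hedged illustration of the kind of profit-maximization LP described, solved here with scipy's linprog (whose default HiGHS backend includes a simplex-type solver); the products, coefficients, and resource limits are invented, not Saclux Paint Company's data.

```python
from scipy.optimize import linprog

# Maximize profit 5 x1 + 4 x2 (two paint products) subject to raw-material
# and labour limits. linprog minimizes, so the objective is negated.
c = [-5.0, -4.0]
A_ub = [[6.0, 4.0],    # raw material: 6 x1 + 4 x2 <= 24
        [1.0, 2.0]]    # labour:         x1 + 2 x2 <= 6
b_ub = [24.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("production plan:", res.x, "max profit:", -res.fun)
```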

  18. Accident Analysis Methods and Models — a Systematic Literature Review

    NARCIS (Netherlands)

    Wienen, Hans Christian Augustijn; Bukhsh, Faiza Allah; Vriezekolk, E.; Wieringa, Roelf J.

    2017-01-01

    As part of our co-operation with the Telecommunication Agency of the Netherlands, we want to formulate an accident analysis method and model for use in incidents in telecommunications that cause service unavailability. In order to not re-invent the wheel, we wanted to first get an overview of all

  19. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    Directory of Open Access Journals (Sweden)

    Frantisek Jelenciak

    2015-12-01

    This article describes the projection equivalent method (PEM) as a specific and relatively simple approach to the modelling of aircraft dynamics. With the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. A characteristic of the PEM is that, in principle, it provides an acceptable regression model of aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a dynamics viewpoint. The principle of this method is based on applying Newton's mechanics, combined with a specific form of the finite element method to cover additional effects. The main advantage of the PEM is that it is not necessary to carry out measurements in a wind tunnel to identify the model's parameters. The plausible dynamical behaviour of the model can be achieved by specific correction parameters, which can be determined on the basis of experimental data obtained during flight of the aircraft. In this article, we present the PEM as applied to an airship, as well as a comparison of the data calculated by the PEM with experimental flight data.

  20. Method for modeling post-mortem biometric 3D fingerprints

    Science.gov (United States)

    Rajeev, Srijith; Shreyas, Kamath K. M.; Agaian, Sos S.

    2016-05-01

    Despite the advancements of fingerprint recognition in the 2-D and 3-D domains, authenticating deformed/post-mortem fingerprints continues to be an important challenge. Prior cleansing and reconditioning of the deceased finger is required before acquisition of the fingerprint. The victim's finger needs to be precisely and carefully handled to record the fingerprint impression. This process may damage the structure of the finger, which subsequently leads to higher false rejection rates. This paper proposes a non-invasive method for 3-D deformed/post-mortem finger modeling, which produces a 2-D rolled-equivalent fingerprint for automated verification. The presented novel modeling method involves masking, filtering, and unrolling. Computer simulations were conducted on finger models with different depth variations obtained from Flashscan3D LLC. Results illustrate that the modeling scheme provides a viable 2-D fingerprint of deformed models for automated verification. The quality and adaptability of the obtained unrolled 2-D fingerprints were analyzed using NIST fingerprint software. Eventually, the presented method could be extended to other biometric traits such as the palm, foot, and tongue for security and administrative applications.

  1. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Riboswitches, which are located within certain noncoding RNA regions, function as genetic “switches”, regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict the structures and structural changes of the aptamer domains. Although aptamers often form a complex structure, computational approaches, such as RNAComposer and Rosetta, have already been applied to model the tertiary (three-dimensional, 3D) structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide a new tool to design synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.

  2. Review: Optimization methods for groundwater modeling and management

    Science.gov (United States)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.

  3. Acoustic 3D modeling by the method of integral equations

    Science.gov (United States)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2018-02-01

    This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. A tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method could accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system equations, and also for parallelizing across multiple sources. Practical examples and efficiency tests are presented as well.
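
    The core of the fast matrix-vector multiplication can be sketched for a translation-invariant kernel: the dense (Toeplitz) matvec inside the iterative solver reduces to an FFT-based convolution with zero padding. The 1D example below is illustrative only; the paper's operator is three-dimensional and handles layered host media.

```python
import numpy as np

def conv_matvec(kernel, x):
    """Toeplitz matvec y[i] = sum_j kernel[(i - j) + n - 1] x[j] via FFT."""
    n = x.size
    m = 2 * n - 1                        # padded length, avoids wrap-around
    y = np.fft.irfft(np.fft.rfft(kernel, m) * np.fft.rfft(x, m), m)
    return y[n - 1:2 * n - 1]            # central part = Toeplitz product

# Check against the explicit dense Toeplitz matrix.
rng = np.random.default_rng(5)
n = 64
g = rng.normal(size=2 * n - 1)           # kernel values for lags -(n-1)..(n-1)
x = rng.normal(size=n)

T = np.array([[g[(i - j) + n - 1] for j in range(n)] for i in range(n)])
assert np.allclose(T @ x, conv_matvec(g, x))
print("FFT matvec matches the dense product")
```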

  4. An efficient method for model refinement in diffuse optical tomography

    Science.gov (United States)

    Zirak, A. R.; Khademi, M.

    2007-11-01

    Diffuse optical tomography (DOT) is a non-linear, ill-posed, boundary value and optimization problem which necessitates regularization. Bayesian methods are also suitable, because measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive and overdetermined system of equations, so model-retrieving criteria, especially total least squares (TLS), must be applied to refine the model error. The use of TLS is limited to linear systems, which is not achievable when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) for treating the linearized DOT problem, having a maximum a posteriori (MAP) estimator and a Tikhonov regulator. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations and then using RTLS on the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves the image reconstruction performance and localizes the abnormality well.

  5. A new method to determine the number of experimental data using statistical modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)

    2017-06-15

    For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using the information on the underlying distribution, the sequential statistical modeling (SSM) approach, and the kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations, using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data describing the fatigue strength coefficient of SAE 950X is used for demonstrating the performance of the obtained statistical models that use a pre-determined number of experimental data in predicting the probability of failure for a target fatigue life.
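
    A minimal sketch of the convergence idea: compute the area metric, the integral of the absolute difference between the empirical CDF of the data and the fitted model's CDF, as experiments accumulate, and stop once it settles below a tolerance. The distribution and sample sizes are invented for illustration.

```python
import numpy as np
from scipy import stats

def area_metric(samples, cdf, grid):
    """Integral of |empirical CDF - model CDF| over a grid."""
    ecdf = np.searchsorted(np.sort(samples), grid, side="right") / samples.size
    return np.trapz(np.abs(ecdf - cdf(grid)), grid)

# Hypothetical "true" property distribution (e.g., a fatigue coefficient).
true = stats.norm(loc=100.0, scale=10.0)
grid = np.linspace(50, 150, 2001)
rng = np.random.default_rng(6)

# The metric between the data and a fitted model typically shrinks as data
# accumulate; stop adding experiments once the change is below a tolerance.
for n in (5, 10, 20, 40, 80):
    x = true.rvs(size=n, random_state=rng)
    fitted = stats.norm(loc=x.mean(), scale=x.std(ddof=1))
    print(n, round(area_metric(x, fitted.cdf, grid), 3))
```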

  6. Models and methods for hot spot safety work

    DEFF Research Database (Denmark)

    Vistisen, Dorte

    2002-01-01

    Despite the fact that millions of DKK are spent each year on improving road safety in Denmark, funds for traffic safety are limited. It is therefore vital to spend the resources as effectively as possible. This thesis is concerned with the area of traffic safety denoted "hot spot safety work", which is the task of improving road safety through alterations of the geometrical and environmental characteristics of the existing road network. The presently applied models and methods in hot spot safety work on the Danish road network were developed about two decades ago, when data was more limited and software and statistical methods less developed. The purpose of this thesis is to contribute to improving "State of the art" in Denmark. The basis for systematic hot spot safety work is the models describing the variation in accident counts on the road network. In the thesis, hierarchical models disaggregated on time...
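
    One standard family of models used for hot spot identification is the Poisson-gamma (negative binomial) hierarchy, for which the empirical Bayes safety estimate is a weighted average of the model prediction and the observed count. The sketch below uses this textbook formula with invented numbers and is not necessarily the thesis's exact hierarchical model.

```python
# Empirical Bayes "hot spot" screening with a Poisson-gamma model:
# the safety estimate shrinks the observed count toward the accident
# model's prediction; sites with the highest estimates are flagged.
sites = [
    # (observed accidents, predicted accidents from an accident model)
    (9, 3.2), (2, 2.8), (5, 4.9), (14, 6.1), (1, 0.9),
]
k = 2.0   # negative binomial overdispersion parameter (assumed)

for obs, mu in sites:
    w = 1.0 / (1.0 + mu / k)          # weight on the model prediction
    eb = w * mu + (1.0 - w) * obs     # empirical Bayes safety estimate
    print(f"observed {obs:2d}, predicted {mu:4.1f} -> EB estimate {eb:5.2f}")
```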

  7. a Modeling Method of Fluttering Leaves Based on Point Cloud

    Science.gov (United States)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling, and screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time needs in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  8. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    Directory of Open Access Journals (Sweden)

    J. Tang

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling, and screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time needs in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  9. Computational mathematics models, methods, and analysis with Matlab and MPI

    CERN Document Server

    White, Robert E

    2004-01-01

    Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box, " you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu./~white.This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...

  10. Model of coupling with core in the Green function method

    International Nuclear Information System (INIS)

    Kamerdzhiev, S.P.; Tselyaev, V.I.

    1983-01-01

    Models of coupling with the core in the Green function method are considered; these generalize the conventional random phase approximation by taking into account configurations more complex than one-particle-one-hole (1p1h) ones. Odd nuclei are studied only to the extent that the problem for the odd nucleus is solved in terms of the neighbouring even-even nucleus. A microscopic model accounting for retardation effects in the mass operator M = M(epsilon) is considered, which corresponds to taking into account only the influence of these effects on the change of quasiparticle behaviour in a magic nucleus as compared with their behaviour described by the pure core model. These effects result in fragmentation of single-particle levels, which is the main effect, and in the necessity of using a new basis, corresponding to the bare quasiparticles, in place of the shell-model one. In deriving the formulas, no concrete form of the mass operator M(epsilon) is used

  11. Developing energy forecasting model using hybrid artificial intelligence method

    Institute of Scientific and Technical Information of China (English)

    Shahram Mollaiy-Berneti

    2015-01-01

    An important problem in demand planning for energy consumption is developing an accurate energy forecasting model. In fact, it is not possible to allocate the energy resources in an optimal manner without having accurate demand value. A new energy forecasting model was proposed based on the back-propagation (BP) type neural network and imperialist competitive algorithm. The proposed method offers the advantage of local search ability of BP technique and global search ability of imperialist competitive algorithm. Two types of empirical data regarding the energy demand (gross domestic product (GDP), population, import, export and energy demand) in Turkey from 1979 to 2005 and electricity demand (population, GDP, total revenue from exporting industrial products and electricity consumption) in Thailand from 1986 to 2010 were investigated to demonstrate the applicability and merits of the present method. The performance of the proposed model is found to be better than that of conventional back-propagation neural network with low mean absolute error.

  12. Enzymatic degradation of hybrid iota-/nu-carrageenan by Alteromonas fortis iota-carrageenase.

    Science.gov (United States)

    Jouanneau, Diane; Boulenguer, Patrick; Mazoyer, Jacques; Helbert, William

    2010-05-07

    Hybrid iota-/nu-carrageenan was water-extracted from Eucheuma denticulatum and incubated with Alteromonas fortis iota-carrageenase. The degradation products were then separated by anion-exchange chromatography. The three most abundant fractions of hybrid iota-/nu-carrageenan oligosaccharides were purified and their structures were analyzed by NMR. The smallest hybrid was an octasaccharide with an iota-iota-nu-iota structure. The second fraction was composed of two decasaccharides with iota-iota-iota-nu-iota and iota-[iota/nu]-iota-iota structures. The third fraction was a mixture of dodecasaccharides which contained at least an iota-iota-iota-iota-nu-iota oligosaccharide. The carbon and proton NMR spectra of the octasaccharides were completely assigned, thereby attributing the nu-carrabiose moiety for the first time.

  13. Forty years abuse of baking soda, rhabdomyolysis, glomerulonephritis, hypertension leading to renal failure: a case report.

    Science.gov (United States)

    Forslund, Terje; Koistinen, Arvo; Anttinen, Jorma; Wagner, Bodo; Miettinen, Marja

    2008-01-01

    We present a patient who had ingested sodium bicarbonate for treatment of alcoholic dyspepsia during forty years at increasing doses. During the last year he had used more than 50 grams daily. He presented with metabolic alkalosis, epileptic convulsions, subdural hematoma, hypertension and rhabdomyolysis with end stage renal failure, for which he had to be given regular intermittent hemodialysis treatment. Untreated hypertension and glomerulonephritis was probably present prior to all these acute incidents. Examination of the kidney biopsy revealed mesangial proliferative glomerulonephritis and arterial wall thickening causing nephrosclerosis together with interstitial calcinosis. The combination of all these pathologic changes might be responsible for the development of progressive chronic renal failure ending up with the need for continuous intermittent hemodialysis treatment.

  14. Forty Years Abuse of Baking Soda, Rhabdomyolysis, Glomerulonephritis, Hypertension Leading to Renal Failure: A Case Report

    Directory of Open Access Journals (Sweden)

    Terje Forslund M.D., Ph.D.

    2008-01-01

    Full Text Available We present a patient who had ingested sodium bicarbonate for the treatment of alcoholic dyspepsia over forty years, at increasing doses; during the last year he had used more than 50 grams daily. He presented with metabolic alkalosis, epileptic convulsions, subdural hematoma, hypertension, and rhabdomyolysis with end-stage renal failure, for which he had to be given regular intermittent hemodialysis treatment. Untreated hypertension and glomerulonephritis were probably present prior to all these acute incidents. Examination of the kidney biopsy revealed mesangial proliferative glomerulonephritis and arterial wall thickening causing nephrosclerosis, together with interstitial calcinosis. The combination of all these pathologic changes might be responsible for the development of the progressive chronic renal failure that ended with the need for regular intermittent hemodialysis treatment.

  15. History of wheat cultivars released by Embrapa in forty years of research

    Directory of Open Access Journals (Sweden)

    Eduardo Caierão

    2014-11-01

    Full Text Available In forty years of genetic breeding of wheat, Embrapa (Brazilian Agricultural Research Corporation) has developed over a hundred new cultivars for different regions of Brazil. Information regarding the identification of these cultivars is often requested from Embrapa breeders. Data on year of release, name of the pre-commercial line, the cross made, and the company unit responsible for indication of the cultivar are not always easily accessible and are often scattered throughout different documents. The aim of this study was to conduct a historical survey of all the wheat cultivars released by Embrapa, aggregating the information in a single document. Since 1974, Embrapa has released 112 wheat cultivars, including 12 by Embrapa Soybean - CNPSo (Londrina, PR), 14 by Embrapa Cerrado - CPAC (Brasília, DF), 9 by Embrapa Agropecuária Oeste - CPAO (Dourados, MS), and 77 by Embrapa Wheat - CNPT (Passo Fundo, RS).

  16. My Forty-Year Adventure in the Wonderful World of Fiber Optics!

    Science.gov (United States)

    Hodara, Henri

    2016-11-01

    In the first part of this presentation, I review the key technology developments of the last century up to the present. These developments are what led us to the communication and information revolution. This is followed by a discussion of how the use of optical fibers brought about the fusion of these two elements and the resultant proliferation of smart phones and social networks. In the second part, I recollect some of my work in fiber optics over a period of forty years in the context of those key developments. In particular, I stress what it takes for a company small in comparison to the giants in the field to capture niche markets. I also discuss the criteria that are needed to justify the application of a new technology like optical fibers to existing communication and sensing systems, and make it cost effective. I end this presentation with a few personal considerations regarding technology developments and innovation.

  17. Forty-Seven DJs, Four Women: Meritocracy, Talent, and Postfeminist Politics

    Directory of Open Access Journals (Sweden)

    Tami Gadir

    2017-11-01

    Full Text Available In 2016, only four of forty-seven DJs booked for Musikkfest, a festival in Oslo, Norway, were women. Following this, a local DJ published an objection to the imbalance in a local arts and entertainment magazine. Her editorial provoked booking agents to defend their position on the grounds that they prioritise skill and talent when booking DJs and, by implication, that they do not prioritise equality. The booking agents' responses, on social media and in interviews I conducted, highlight their perpetuation of a status quo in dance music cultures in which men disproportionately dominate DJing. Labour laws do not align with this cultural attitude: gender equality legislation in Norway's recent history contrasts with the postfeminist attitudes expressed by dance music's cultural intermediaries, such as DJs and booking agents. The Musikkfest case ultimately shows that gender politics in dance music cultures do not necessarily correspond to dance music's historical associations with egalitarianism.

  18. Unicriterion Model: A Qualitative Decision Making Method That Promotes Ethics

    Directory of Open Access Journals (Sweden)

    Fernando Guilherme Silvano Lobo Pimentel

    2011-06-01

    Full Text Available Management decision making methods frequently adopt quantitative models of several criteria that bypass the question of why some criteria are considered more important than others, which makes more difficult the task of delivering a transparent view of preference structure priorities that might promote ethics and learning and serve as a basis for future decisions. To tackle this particular shortcoming of usual methods, an alternative qualitative methodology of aggregating preferences based on the ranking of criteria is proposed. Such an approach delivers a simple and transparent model for the solution of each preference conflict faced during the management decision making process. The method proceeds by breaking the decision problem into 'two criteria - two alternatives' scenarios, and translating the problem of choice between alternatives to a problem of choice between criteria whenever appropriate. The unicriterion model method is illustrated by its application in a car purchase and a house purchase decision problem.

  19. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    Science.gov (United States)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

    The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems, and its accuracy directly influences the system veracity. However, current IR smoke models cannot provide high veracity, because certain physical characteristics are frequently ignored in the fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address this defect, this paper proposes a dynamic modeling method for IR smoke based on an enhanced discrete phase model (DPM). A mathematical simulation model based on the enhanced DPM is built and a dynamic computational fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes a dynamic method for modeling IR smoke with higher veracity.

  20. Ecoimmunity in Darwin's finches: invasive parasites trigger acquired immunity in the medium ground finch (Geospiza fortis).

    Directory of Open Access Journals (Sweden)

    Sarah K Huber

    Full Text Available BACKGROUND: Invasive parasites are a major threat to island populations of animals. Darwin's finches of the Galápagos Islands are under attack by introduced pox virus (Poxvirus avium) and nest flies (Philornis downsi). We developed assays for parasite-specific antibody responses in Darwin's finches (Geospiza fortis) to test for relationships between adaptive immune responses to novel parasites and spatial-temporal variation in the occurrence of parasite pressure among G. fortis populations. METHODOLOGY/PRINCIPAL FINDINGS: We developed enzyme-linked immunosorbent assays (ELISAs) for the presence of antibodies in the serum of Darwin's finches specific to pox virus or Philornis proteins. We compared antibody levels between bird populations with and without evidence of pox infection (visible lesions), and among birds sampled before nesting (prior to nest-fly exposure) versus during nesting (with fly exposure). Birds from the pox-positive population had higher levels of pox-binding antibodies. Philornis-binding antibody levels were higher in birds sampled during nesting. Female birds, which occupy the nest, had higher Philornis-binding antibody levels than males. The study was limited by an inability to confirm pox exposure independent of obvious lesions. However, the lasting effects of pox infection (e.g., scarring and lost digits) were expected to be reliable indicators of prior pox infection. CONCLUSIONS/SIGNIFICANCE: This is the first demonstration, to our knowledge, of parasite-specific antibody responses to multiple classes of parasites in a wild population of birds. Darwin's finches initiated acquired immune responses to novel parasites. Our study has vital implications for invasion biology and ecological immunology. The adaptive immune response of Darwin's finches may help combat the negative effects of parasitism. Alternatively, the physiological cost of mounting such a response could outweigh any benefits, accelerating population decline. Tests

  1. Target of Opportunity - Far-UV Observations of Comet ISON with FORTIS

    Science.gov (United States)

    McCandliss, Stephan

    The goal of this one-year program is to acquire spectra and imagery of the sungrazing Oort cloud comet known as ISON in the far-UV bandpass between 800 and 1950 Angstroms over a 1/2 degree field-of-view (FOV), during its ingress to and egress from the sun. This bandpass and FOV provide access to a particularly rich set of spectral diagnostics for determining the volatile production rates of CO, H, C, C+, O and S, and for searching for previously undetected atomic and molecular species such as Ar, N, N+, N2, O+ and O5+. We are particularly interested in searching for compositional changes associated with the intense heating episode at the comet's perihelion, to address an outstanding question in cometary research: do Oort cloud comets carry a chemical composition similar to the proto-stellar molecular cloud from which the Solar System formed? Sounding rockets are uniquely suited to observing cometary emissions in the far-UV, as they can point to within 25 degrees of the sun, whereas HST is limited to observations at angles greater than 50 degrees. The projected ephemeris of this comet shows that on ingress it is expected to reach ~ +4 mag at 25 degrees from the sun on 21 November 2013 and, should it survive its trip to within 2.7 Rsun of the sun, it is expected to reach a similar magnitude during egress at 25 degrees on 08 December 2013. This will be a reflight of the JHU sounding-rocket-borne spectro-telescope called FORTIS, currently scheduled to fly in May of 2013 on NASA sounding rocket 36.268 UG. The instrumental configuration of FORTIS is uniquely suited to accomplishing the goals of this task.

  2. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Science.gov (United States)

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
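
    A toy sketch of the model-based, multi-frequency idea under stated assumptions: received signal strength (RSS) is predicted at candidate positions with a simple log-distance path-loss model for two bands, and the position minimizing the joint misfit is selected. The access-point layout, path-loss exponents, and noise levels are invented for illustration; the real MFAM uses an architectural model and adapts continuously.

        import numpy as np

        rng = np.random.default_rng(3)

        aps = np.array([[0.0, 0.0], [8.0, 0.0], [4.0, 6.0]])   # access points (m)

        def rss_map(points, p0, n_exp):
            """RSS (dB) from every AP at every candidate point -> (n_points, n_aps)."""
            d = np.maximum(np.linalg.norm(points[:, None, :] - aps[None, :, :], axis=2), 0.1)
            return p0 - 10.0 * n_exp * np.log10(d)             # log-distance model

        true_pos = np.array([[5.0, 3.0]])
        # Two bands with different propagation parameters (e.g. 2.4 GHz vs 868 MHz).
        meas24 = rss_map(true_pos, -30.0, 2.2)[0] + rng.normal(0, 1.0, 3)
        meas868 = rss_map(true_pos, -25.0, 1.8)[0] + rng.normal(0, 1.0, 3)

        # Grid search over the floor plan; both bands contribute to the misfit.
        xs, ys = np.meshgrid(np.linspace(0, 8, 161), np.linspace(0, 6, 121))
        grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
        cost = ((rss_map(grid, -30.0, 2.2) - meas24) ** 2).sum(axis=1) \
             + ((rss_map(grid, -25.0, 1.8) - meas868) ** 2).sum(axis=1)

        est = grid[np.argmin(cost)]
        print("estimated position:", est, "error (m):", np.linalg.norm(est - true_pos[0]))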

  3. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    Directory of Open Access Journals (Sweden)

    Jure Tuta

    2018-03-01

    Full Text Available This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  4. Model parameterization as method for data analysis in dendroecology

    Science.gov (United States)

    Tychkov, Ivan; Shishov, Vladimir; Popkova, Margarita

    2017-04-01

    There is no arguing the usefulness of process-based models in ecological studies; the only limitations are how well the model's algorithm is developed and how it is applied in research. Simulating tree-ring growth from climate provides valuable information on the response of tree-ring growth to different environmental conditions, and also sheds light on the species-specific aspects of the growth process. Visual parameterization of the Vaganov-Shashkin model allows the estimation of the non-linear response of tree-ring growth from daily climate data: daily temperature, estimated daylight, and soil moisture. Previous use of the VS-Oscilloscope (a software tool for visual parameterization) has shown a good ability to recreate unique patterns of tree-ring growth for coniferous species in Siberian Russia, the USA, China, Mediterranean Spain, and Tunisia. However, such models are mostly used one-sidedly, to better understand tree growth processes, in contrast to statistical methods of analysis (e.g., generalized linear models, mixed models, structural equations), which can be used for reconstruction and forecasting. Usually the models are used either for testing new hypotheses or for the quantitative assessment of physiological tree-growth data to reveal growth-process mechanisms, while statistical methods are used for data-mining assessment and as study tools in themselves. The high sensitivity of the model's VS-parameters reflects the ability of the model to simulate tree-ring growth and to evaluate the climate factors limiting growth. Precise parameterization with the VS-Oscilloscope provides valuable information about the growth processes of trees and the conditions under which these processes occur (e.g., day of growing season onset, length of the season, minimum/maximum temperatures for tree-ring growth, formation of wide or narrow rings, etc.). The work was supported by the Russian Science Foundation (RSF # 14-14-00219)

  5. Modeling of radionuclide migration through porous material with meshless method

    International Nuclear Information System (INIS)

    Vrankar, L.; Turk, G.; Runovc, F.

    2005-01-01

    To assess the long term safety of a radioactive waste disposal system, mathematical models are used to describe groundwater flow, chemistry and potential radionuclide migration through geological formations. A number of processes need to be considered when predicting the movement of radionuclides through the geosphere. The most important input data are obtained from field measurements, which are not completely available for all regions of interest. For example, the hydraulic conductivity as an input parameter varies from place to place. In such cases geostatistical science offers a variety of spatial estimation procedures. Methods for solving the solute transport equation can also be classified as Eulerian, Lagrangian and mixed. The numerical solution of partial differential equations (PDE) is usually obtained by finite difference methods (FDM), finite element methods (FEM), or finite volume methods (FVM). Kansa introduced the concept of solving partial differential equations using radial basis functions (RBF) for hyperbolic, parabolic and elliptic PDEs. Our goal was to present a relatively new approach to the modelling of radionuclide migration through the geosphere using radial basis function methods in Eulerian and Lagrangian coordinates. Radionuclide concentrations will also be calculated in heterogeneous and partly heterogeneous 2D porous media. We compared the meshless method with the traditional finite difference scheme. (author)
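
    To illustrate Kansa's collocation idea in its simplest form, the sketch below solves a 1-D Poisson problem with multiquadric RBFs; the test equation, shape parameter, and point count are illustrative and much simpler than the transport problems treated in the paper.

        import numpy as np

        # Kansa-style collocation with multiquadric RBFs for -u''(x) = f(x),
        # u(0) = u(1) = 0, whose exact solution is sin(pi*x).
        N = 25
        x = np.linspace(0.0, 1.0, N)           # collocation points = RBF centers
        c = 0.2                                # multiquadric shape parameter

        def phi(r):    return np.sqrt(r**2 + c**2)
        def phi_xx(r): return c**2 / (r**2 + c**2) ** 1.5   # 1-D second derivative

        r = np.abs(x[:, None] - x[None, :])    # pairwise distances

        # PDE rows in the interior, Dirichlet rows on the boundary.
        A = -phi_xx(r)
        A[0, :] = phi(r[0, :])
        A[-1, :] = phi(r[-1, :])

        rhs = np.pi**2 * np.sin(np.pi * x)
        rhs[0] = rhs[-1] = 0.0

        lam = np.linalg.solve(A, rhs)          # RBF expansion coefficients
        u = phi(r) @ lam                       # numerical solution at the nodes

        print("max error vs exact:", np.max(np.abs(u - np.sin(np.pi * x))))

    Note that no mesh is needed: only pairwise distances between scattered nodes enter the linear system, which is what makes the approach attractive for heterogeneous geological domains.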

  6. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    The full text of publication follows. Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, it is time-consuming and error-prone to describe models manually in GDML. Automatic modeling methods have been developed recently, but problems exist in most current modeling programs: some are not accurate or are adapted only to specific CAD formats. To convert CAD models into GDML accurately, a CAD-based modeling method for Geant4 was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is the translation between the boundary representation (B-REP) used by CAD models and the constructive solid geometry (CSG) used by GDML models. First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells. Corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is assembled with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM) and tested on several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling. (authors)

  7. Evaluation of internal noise methods for Hotelling observer models

    International Nuclear Information System (INIS)

    Zhang Yani; Pham, Binh T.; Eckstein, Miguel P.

    2007-01-01

    The inclusion of internal noise in model observers is a common method to allow for quantitative comparisons between human and model observer performance in visual detection tasks. In this article, we studied two different strategies for inserting internal noise into Hotelling model observers. In the first strategy, internal noise was added to the output of individual channels: (a) Independent nonuniform channel noise, (b) independent uniform channel noise. In the second strategy, internal noise was added to the decision variable arising from the combination of channel responses. The standard deviation of the zero mean internal noise was either constant or proportional to: (a) the decision variable's standard deviation due to the external noise, (b) the decision variable's variance caused by the external noise, (c) the decision variable magnitude on a trial to trial basis. We tested three model observers: square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO) using a four alternative forced choice (4AFC) signal known exactly but variable task with a simulated signal embedded in real x-ray coronary angiogram backgrounds. The results showed that the internal noise method that led to the best prediction of human performance differed across the studied model observers. The CHO model best predicted human observer performance with the channel internal noise. The HO and LGHO best predicted human observer performance with the decision variable internal noise. The present results might guide researchers with the choice of methods to include internal noise into Hotelling model observers when evaluating and optimizing medical image quality
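
    A small sketch of the second strategy, under simplifying assumptions (white-noise backgrounds, a 2AFC rather than 4AFC task, and a plain matched-filter template standing in for the Hotelling observer): zero-mean internal noise with a standard deviation proportional to the decision variable's external-noise standard deviation is added to the template responses, degrading proportion correct as the ratio grows.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy 2AFC detection task: signal in white-noise backgrounds and internal
        # noise added to the decision variable. All parameters are illustrative.
        n_pix, n_trials = 64, 20000
        signal = np.zeros(n_pix); signal[28:36] = 0.4

        def decision_vars(alpha):
            g_sig = rng.normal(size=(n_trials, n_pix)) + signal   # signal present
            g_noise = rng.normal(size=(n_trials, n_pix))          # signal absent
            lam_s = g_sig @ signal                                 # template responses
            lam_n = g_noise @ signal
            sigma_ext = lam_n.std()                                # external-noise spread
            # Internal noise: zero mean, std proportional to the external std.
            lam_s = lam_s + rng.normal(scale=alpha * sigma_ext, size=n_trials)
            lam_n = lam_n + rng.normal(scale=alpha * sigma_ext, size=n_trials)
            return lam_s, lam_n

        for alpha in (0.0, 0.5, 1.0):
            lam_s, lam_n = decision_vars(alpha)
            pc = np.mean(lam_s > lam_n)    # proportion correct in 2AFC
            print(f"internal/external noise ratio {alpha}: Pc = {pc:.3f}")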

  8. A Review of Distributed Parameter Groundwater Management Modeling Methods

    Science.gov (United States)

    Gorelick, Steven M.

    1983-04-01

    Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall into two general categories: hydraulic management, and policy evaluation and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence of institutional policies, such as taxes and quotas, upon regional groundwater use. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real-world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification comprises steady-state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.

  9. Characterization of Forty Seven Years of Particulate Chemical Composition in the Finnish Arctic

    Science.gov (United States)

    Laing, James

    Forty-seven years of weekly total suspended particle filters, collected at Kevo, Finland from October 1964 through 2010 by the Finnish Meteorological Institute, were analyzed for near-total trace elements, soluble trace elements, black carbon (BC), major ions, and methane sulfonic acid (MSA). Kevo is located in northern Finland, 350 km north of the Arctic Circle. The samples from 1964-1978 were collected on Whatman 42 cellulose filters and the samples from 1979-2010 on Whatman GF/A glass-fiber filters. A portion of each filter was microwave acid-digested (ad), and near-total trace elements were determined by inductively coupled plasma mass spectrometry (ICP-MS). Another portion was water-extracted (we) and analyzed for soluble trace elements by ICP-MS and for ionic species by ion chromatography (IC). Black carbon was determined using optical and thermal-optical techniques at SUNY Albany. A clear seasonal trend, with winter/spring maxima and summer minima, is observed for most species, attributed to enhanced transport of pollutants from anthropogenic mid-latitude sources to the Arctic in the winter and early spring. Compared to more remote Arctic sampling sites, species of anthropogenic origin (V, Co, Cu, Ni, As, Cd, Pb, SO4) have significantly higher concentrations and a less pronounced seasonality. High concentrations of Cu (14.1 ng/m3), Ni (0.97 ng/m3), and Co (0.04 ng/m3) indicate the influence of the non-ferrous metal smelters on the Kola Peninsula, although Cu unexpectedly did not correlate with Ni or Co; Ni and Co were highly correlated. Significant long-term decreasing trends were detected for most species: all constituents except Sn-ad, Re-ad, Sn-we, Mo-we, and V-we show statistically significant declines. Non-sea-salt SO4 concentrations were found to have a trend very similar to European and Former Soviet Union SO2 emissions; SO4 concentrations declined dramatically in the early 1990s as a result of the collapse of the Soviet Union. Potential source contribution

  10. Storm surge model based on variational data assimilation method

    Directory of Open Access Journals (Sweden)

    Shi-li Huang

    2010-06-01

    Full Text Available By combining computational and observational information, the variational data assimilation method can eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution, with the aim of improving the forecasting accuracy of the storm surge. Taking the wind stress drag coefficient as the control variable, the variational model was developed and validated through data assimilation tests on an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. The actual storm surge induced by Typhoon 0515 was then forecast by the developed model, and the results demonstrate its efficiency in practical application.
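
    A zero-dimensional twin experiment sketching the variational idea, with an invented toy surge equation in place of the unstructured-grid model: synthetic observations are generated with a "true" wind stress drag coefficient, and a quadratic misfit functional is minimized by gradient descent to recover it. All constants and the wind record are illustrative.

        import numpy as np

        # Toy surge model d(eta)/dt = a*Cd*W^2 - k*eta, integrated with Euler steps.
        dt, nsteps = 600.0, 144
        a, k = 3.0e-5, 1.0e-4
        wind = 15.0 + 10.0 * np.sin(np.linspace(0, np.pi, nsteps))   # wind speed (m/s)

        def run(cd):
            eta, out = 0.0, []
            for w in wind:
                eta += dt * (a * cd * w**2 - k * eta)
                out.append(eta)
            return np.array(out)

        cd_true = 2.6e-3
        obs = run(cd_true)                     # synthetic "observations"

        def cost(cd):                          # quadratic misfit functional
            return 0.5 * np.sum((run(cd) - obs) ** 2)

        cd = 1.0e-3                            # first guess
        for _ in range(100):
            eps = 1e-6                         # finite-difference gradient of J(Cd)
            grad = (cost(cd + eps) - cost(cd - eps)) / (2 * eps)
            cd -= 1e-7 * grad                  # fixed-step gradient descent

        print(f"identified Cd = {cd:.4e} (true {cd_true:.4e})")

    In a real assimilation system the gradient would come from an adjoint model rather than finite differences, but the structure of the minimization is the same.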

  11. Coarse Analysis of Microscopic Models using Equation-Free Methods

    DEFF Research Database (Denmark)

    Marschler, Christian

    Although the state space of these models might be high-dimensional, the properties of interest are usually macroscopic and low-dimensional in nature. Examples are numerous and not necessarily restricted to computer models; for instance, the power output, energy consumption, and temperature of engines are interesting quantities. Applications include the learning behavior in the barn owl's auditory system, traffic jam formation in an optimal velocity model for circular car traffic, and the oscillating behavior of pedestrian groups in a counter-flow through a corridor with a narrow door. The methods do not only quantify interesting properties of these models (learning outcome, traffic jam density, oscillation period), but also allow the investigation of unstable solutions, which are important for determining the basins of attraction of stable solutions and thereby reveal information on the long-term behavior of an initial state.

  12. Numerical methods for the Lévy LIBOR model

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Skovmand, David

    2010-01-01

    The aim of this work is to provide fast and accurate approximation schemes for the Monte-Carlo pricing of derivatives in the Lévy LIBOR model of Eberlein and Özkan (2005). Standard methods can be applied to solve the stochastic differential equations of the successive LIBOR rates, but the methods are generally slow. We propose an alternative approximation scheme based on Picard iterations. Our approach is similar in accuracy to the full numerical solution, but with the feature that each rate is, unlike in the standard method, evolved independently of the other rates in the term structure. This enables simultaneous calculation of derivative prices of different maturities using parallel computing. We include numerical illustrations of the accuracy and speed of our method pricing caplets.

  13. Numerical Methods for the Lévy LIBOR Model

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Skovmand, David

    The aim of this work is to provide fast and accurate approximation schemes for the Monte-Carlo pricing of derivatives in the Lévy LIBOR model of Eberlein and Özkan (2005). Standard methods can be applied to solve the stochastic differential equations of the successive LIBOR rates, but the methods are generally slow. We propose an alternative approximation scheme based on Picard iterations. Our approach is similar in accuracy to the full numerical solution, but with the feature that each rate is, unlike in the standard method, evolved independently of the other rates in the term structure. This enables simultaneous calculation of derivative prices of different maturities using parallel computing. We include numerical illustrations of the accuracy and speed of our method pricing caplets.

  14. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical, or semianalytical theory, generates an initial approximation that contains some inaccuracies, since, in order to simplify the expressions and subsequent computations, not all the forces involved are taken into account and only low-order terms are considered; moreover, mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical, and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, each formed by the combination of one of three orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to reproduce the effect produced by the flattening of the Earth. The three analytical components are the integration of the Kepler problem, a first-order analytical theory, and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
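
    The sketch below illustrates the hybrid principle on synthetic data: a deliberately crude "analytical" propagation leaves a structured error series, a hand-rolled additive Holt-Winters model is fitted to that series, and its forecast corrects the analytical solution beyond the fitting window. Signals, smoothing constants, and horizons are all illustrative.

        import numpy as np

        t = np.arange(400)
        period = 50
        truth = np.sin(2 * np.pi * t / period) + 0.002 * t     # "exact" dynamics
        analytic = 0.97 * np.sin(2 * np.pi * t / period)       # crude low-order theory
        eps = truth - analytic                                 # error time series

        def holt_winters_additive(x, m, alpha=0.3, beta=0.05, gamma=0.3, horizon=100):
            """Fit an additive Holt-Winters model to x, forecast `horizon` steps."""
            level, trend = x[0], x[1] - x[0]
            season = list(x[:m] - x[:m].mean())                # initial seasonal indices
            for i in range(m, len(x)):
                s = season[i - m]
                new_level = alpha * (x[i] - s) + (1 - alpha) * (level + trend)
                trend = beta * (new_level - level) + (1 - beta) * trend
                level = new_level
                season.append(gamma * (x[i] - new_level) + (1 - gamma) * s)
            return np.array([level + (h + 1) * trend + season[len(x) - m + h % m]
                             for h in range(horizon)])

        n_fit = 300
        forecast = holt_winters_additive(eps[:n_fit], m=period)
        hybrid = analytic[n_fit:] + forecast                   # corrected propagation

        err_plain = np.sqrt(np.mean((analytic[n_fit:] - truth[n_fit:]) ** 2))
        err_hybrid = np.sqrt(np.mean((hybrid - truth[n_fit:]) ** 2))
        print(f"RMS error, analytical only: {err_plain:.4f}")
        print(f"RMS error, hybrid         : {err_hybrid:.4f}")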

  15. Soybean yield modeling using bootstrap methods for small samples

    Energy Technology Data Exchange (ETDEWEB)

    Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.

    2016-11-01

    One of the problems that occurs when working with regression models concerns sample size: since the statistical methods used in inferential analyses are asymptotic, a small sample may compromise the analysis because the estimates will be biased. An alternative is the bootstrap methodology, which in its non-parametric version does not require guessing or knowing the probability distribution that generated the original sample. In this work we used a small-sample data set of soybean yield and physical and chemical soil properties to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points, and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled the selection of the physical and chemical soil properties that were significant in the construction of the soybean yield regression model, the construction of confidence intervals for the parameters, and the identification of the points with great influence on the estimated parameters. (Author)
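
    A compact sketch of the case-resampling percentile bootstrap for regression coefficients, with synthetic stand-ins for the soil covariates and yield; the paper's actual variable-selection and influence diagnostics are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(42)

        # Synthetic small sample: two soil covariates and a yield response.
        n = 20
        soil = rng.normal(size=(n, 2))
        yield_ = 3.0 + 1.5 * soil[:, 0] - 0.8 * soil[:, 1] + rng.normal(scale=0.5, size=n)
        X = np.column_stack([np.ones(n), soil])

        def fit(Xb, yb):
            return np.linalg.lstsq(Xb, yb, rcond=None)[0]

        # Resample cases with replacement and refit, B times.
        B = 5000
        betas = np.empty((B, X.shape[1]))
        for b in range(B):
            idx = rng.integers(0, n, size=n)
            betas[b] = fit(X[idx], yield_[idx])

        # Percentile confidence intervals: no distributional assumption needed.
        lo, hi = np.percentile(betas, [2.5, 97.5], axis=0)
        for j, name in enumerate(["intercept", "soil_1", "soil_2"]):
            print(f"{name}: 95% CI [{lo[j]:+.2f}, {hi[j]:+.2f}]")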

  16. A hierarchical network modeling method for railway tunnels safety assessment

    Science.gov (United States)

    Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin

    2017-02-01

    Using network theory to model risk-related knowledge on accidents is regarded as potentially very helpful in risk management. A large amount of defect-detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theories and data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model, which takes into account tunnel structures, tunnel defects, potential failures, and accidents, is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects (the basic Apriori pass is sketched below). An algorithm is then presented to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers the actual defects and the possible risks of defects obtained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and critical risks of defects. This paper further develops accident-causation network modeling methods and can provide guidance for specific maintenance measures.
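
    A minimal Apriori pass, with invented structure/defect codes in place of the real inspection records, showing the candidate-generation and pruning steps that the improved algorithm builds upon.

        from itertools import combinations

        # Toy "tunnel inspection" transactions: co-occurring defect codes.
        records = [
            {"lining_crack", "seepage", "ballast_fouling"},
            {"lining_crack", "seepage"},
            {"seepage", "ballast_fouling"},
            {"lining_crack", "seepage", "spalling"},
            {"spalling", "ballast_fouling"},
        ]
        min_support = 2  # absolute count

        def frequent_itemsets(records, min_support):
            items = {frozenset([i]) for r in records for i in r}
            level = {s for s in items if sum(s <= r for r in records) >= min_support}
            result = {}
            k = 1
            while level:
                for s in level:
                    result[s] = sum(s <= r for r in records)
                # Join k-itemsets into (k+1)-candidates, then keep only those
                # whose k-subsets are all frequent (the Apriori pruning step).
                candidates = {a | b for a in level for b in level if len(a | b) == k + 1}
                candidates = {c for c in candidates
                              if all(frozenset(sub) in result for sub in combinations(c, k))}
                level = {c for c in candidates
                         if sum(c <= r for r in records) >= min_support}
                k += 1
            return result

        for itemset, count in sorted(frequent_itemsets(records, min_support).items(),
                                     key=lambda kv: (-len(kv[0]), -kv[1])):
            print(sorted(itemset), count)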

  17. A Kriging Model Based Finite Element Model Updating Method for Damage Detection

    Directory of Open Access Journals (Sweden)

    Xiuming Yang

    2017-10-01

    Full Text Available Model updating is an effective means of damage identification, and surrogate modeling has attracted considerable attention for saving computational cost in finite element (FE) model updating, especially for large-scale structures. In this context, a surrogate model of frequency is normally constructed for damage identification, while the frequency response function (FRF) is rarely used, as it usually changes dramatically with the updating parameters. This paper presents a new surrogate-model-based model updating method taking advantage of the measured FRFs. The Frequency Domain Assurance Criterion (FDAC) is used to build the objective function, whose nonlinear response surface is constructed by the Kriging model. Then, the efficient global optimization (EGO) algorithm is introduced to obtain the model updating results. The proposed method has good accuracy and robustness, which have been verified by a numerical simulation of a cantilever and by experimental test data from a laboratory three-story structure.
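
    A toy illustration of the surrogate step, with an invented 1-D objective standing in for the FDAC-based discrepancy: a Kriging (Gaussian-process) interpolant is fitted to a few expensive evaluations and its mean is searched for a minimum. The expected-improvement sampling of full EGO is omitted for brevity.

        import numpy as np

        def J(k):                                   # "expensive" objective (illustrative)
            return (k - 0.6) ** 2 + 0.05 * np.sin(15 * k)

        X = np.linspace(0.0, 1.0, 8)                # a few expensive evaluations
        y = J(X)

        def k_rbf(a, b, ell=0.15):                  # squared-exponential kernel
            return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

        K = k_rbf(X, X) + 1e-8 * np.eye(X.size)     # jitter for numerical stability
        alpha = np.linalg.solve(K, y)

        xs = np.linspace(0.0, 1.0, 1001)
        mean = k_rbf(xs, X) @ alpha                 # Kriging predictor (zero prior mean)

        best = xs[np.argmin(mean)]
        print("surrogate minimum:", best, " vs fine-grid true minimum:", xs[np.argmin(J(xs))])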

  18. Character expansion methods for matrix models of dually weighted graphs

    International Nuclear Information System (INIS)

    Kazakov, V.A.; Staudacher, M.; Wynter, T.

    1996-01-01

    We consider generalized one-matrix models in which external fields allow control over the coordination numbers on both the original and dual lattices. We rederive in a simple fashion a character expansion formula for these models originally due to Itzykson and Di Francesco, and then demonstrate how to take the large N limit of this expansion. The relationship to the usual matrix model resolvent is elucidated. Our methods give as a by-product an extremely simple derivation of the Migdal integral equation describing the large N limit of the Itzykson-Zuber formula. We illustrate and check our methods by analysing a number of models solvable by traditional means. We then proceed to solve a new model: a sum over planar graphs possessing even coordination numbers on both the original and the dual lattice. We conclude by formulating equations for the case of arbitrary sets of even, self-dual coupling constants. This opens the way for studying the deep problem of phase transitions from random to flat lattices. (orig.). With 4 figs

  19. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that can be used in retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Satellite System) [2]. As part of our ongoing research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  20. A Method to Identify Flight Obstacles on Digital Surface Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Min; LIN Xinggang; SUN Shouyu; WANG Youzhi

    2005-01-01

    To reduce the threat posed by tall surface features to flying vehicles in modern low-altitude terrain-following guidance, a method for constructing a digital surface model (DSM) is presented in this paper. The relationship between the size of an isolated obstacle and the vertical- and cross-section intervals in the DSM is established. A definition and classification of isolated obstacles are proposed, and a method for detecting such isolated obstacles in the DSM is given. The simulation of a typical urban district shows that when the vertical- and cross-section DSM intervals are between 3 m and 25 m, the threat to low-altitude terrain-following flight is greatly reduced, and the amount of data required by the DSM for monitoring a flying vehicle in real time is also smaller. Experiments show that the optimal result is obtained with a 12.5 m interval in the vertical- and cross-sections of the DSM, at a 1:10 000 DSM scale grade.

  1. Impacts modeling using the SPH particulate method. Case study

    International Nuclear Information System (INIS)

    Debord, R.

    1999-01-01

    The aim of this study is the modeling of the impact of melted metal on the reactor vessel head in the case of a core-meltdown accident. Modeling using the classical finite-element method alone is not sufficient; it requires coupling with particulate methods in order to take into account the behaviour of the corium. After a general introduction to particulate methods, the Nabor and SPH (smoothed particle hydrodynamics) methods are described. The theoretical and numerical reliability of the SPH method is then assessed using simple cases. In particular, the number of neighbours significantly influences the precision of the calculations, and the mesh of the structure must be matched to that of the fluid in order to reduce edge effects. Finally, this study showed that the values of the artificial viscosity coefficients used in the simulation of the BERDA test performed by FZK Karlsruhe (Germany) are not correct; the domain of validity of these coefficients was determined for a low-speed impact. (J.S.)
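
    The kernel-weighted particle summation at the heart of SPH can be shown in a few lines; the 1-D cubic-spline kernel and uniform particle layout below are textbook choices, not the configuration of the study.

        import numpy as np

        def cubic_spline_w(r, h):
            """Standard 1-D cubic spline SPH kernel with support 2h."""
            q = np.abs(r) / h
            sigma = 2.0 / (3.0 * h)                 # 1-D normalization constant
            w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
            return sigma * w

        n = 100
        x = np.linspace(0.0, 1.0, n)                # particle positions
        m = np.full(n, 1.0 / n)                     # equal particle masses
        h = 2.0 * (x[1] - x[0])                     # smoothing length

        # rho_i = sum_j m_j W(x_i - x_j, h): each particle's density is a
        # kernel-weighted sum over its neighbours -- no mesh required.
        rho = (m[None, :] * cubic_spline_w(x[:, None] - x[None, :], h)).sum(axis=1)

        print("interior density ~", rho[n // 2], "(expected ~1 for unit mass on [0,1])")

    The number of neighbours inside the 2h support is exactly the quantity the abstract flags as controlling the precision of the calculations.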

  2. A Parsimonious Bootstrap Method to Model Natural Inflow Energy Series

    Directory of Open Access Journals (Sweden)

    Fernando Luiz Cyrino Oliveira

    2014-01-01

    Full Text Available The Brazilian energy generation and transmission system is quite peculiar in its dimension and characteristics; as such, it can be considered unique in the world. It is a high-dimension hydrothermal system with a huge share of hydro plants. Such strong dependency on hydrological regimes implies uncertainties in energy planning, requiring adequate modeling of the hydrological time series. This is carried out via stochastic simulations of monthly inflow series using the family of Periodic Autoregressive models, PAR(p), one for each period (month) of the year. This paper shows the problems in fitting these models in the current system, particularly the identification of the autoregressive order "p" and the corresponding parameter estimation. This is followed by a proposal of a new approach to setting both the model order and the parameter estimates of the PAR(p) models, using a nonparametric computational technique known as the bootstrap. This technique allows the estimation of reliable confidence intervals for the model parameters. The results obtained using the Parsimonious Bootstrap Method of Moments (PBMOM) produced not only more parsimonious model orders but also adherent stochastic scenarios and, in the long range, lead to a better use of water resources in energy operation planning.

  3. Modeling Music Emotion Judgments Using Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Naresh N. Vempala

    2018-01-01

    Full Text Available Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments, including neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.

  4. Finite-element method modeling of hyper-frequency structures

    International Nuclear Information System (INIS)

    Zhang, Min

    1990-01-01

    The modeling of microwave propagation problems, including eigenvalue problems and scattering problems, is accomplished by the finite element method with vector and scalar functionals. For the eigenvalue problem, propagation modes in waveguides and resonant modes in cavities can be calculated for an arbitrarily shaped structure with inhomogeneous material. Several microwave structures are solved in order to verify the program. One drawback associated with the vector functional is the appearance of spurious, or non-physical, solutions; a penalty function method has been introduced to reduce them. The adaptive charge method is originally proposed in this thesis to solve the waveguide scattering problem. This method, similar to the VSWR measuring technique, is more efficient at obtaining the reflection coefficient than the matrix method. Two waveguide discontinuity structures are calculated by the two methods and their results are compared. The adaptive charge method is also applied to a microwave plasma exciter, allowing us to understand the role of the different physical parameters of the exciter in the coupling of microwave energy to the plasma mode and to the mode without plasma. (author) [fr]

  5. New Models and Methods for the Electroweak Scale

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics

    2017-09-26

    This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak-scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much minimal-model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and in annihilation in space. Accomplishments include new tools for the analysis of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac

  6. Modeling of Methods to Control Heat-Consumption Efficiency

    Science.gov (United States)

    Tsynaeva, E. A.; Tsynaeva, A. A.

    2016-11-01

    In this work, consideration has been given to the thermophysical processes in automated heat-consumption control systems (AHCCSs) of buildings, the flow diagrams of these systems, and the mathematical models describing the thermophysical processes during the systems' operation; an analysis of the adequacy of the mathematical models has been presented. A comparison has been made of the operating efficiency of the systems and of the methods to control that efficiency. It has been determined that the operating efficiency of an AHCCS depends on its flow diagram and on the temperature chart of central quality control (CQC), as well as on the temperature of the low-grade heat source for the variant with a heat pump.

  7. Modeling of electromigration salt removal methods in building materials

    DEFF Research Database (Denmark)

    Johannesson, Björn; Ottosen, Lisbeth M.

    2008-01-01

    Electromigration-based salt removal from building materials, which suffer salt attack of various kinds, is one potential method to preserve old building envelopes. By establishing a model for ionic multi-species diffusion that also accounts for externally applied electrical fields, it is proposed that simulation can serve as an important complement to the experimental tests and their verification. Each ionic species is assigned its own ionic mobility properties. It is, further, assumed that Gauss's law can be used to calculate the internal electrical field induced by the diffusion itself. In this manner the externally applied electrical field can be modeled, simply, by assigning proper boundary conditions for the equation.

  8. (Environmental and geophysical modeling, fracture mechanics, and boundary element methods)

    Energy Technology Data Exchange (ETDEWEB)

    Gray, L.J.

    1990-11-09

    Technical discussions at the various sites visited centered on the application of boundary integral methods for environmental modeling, seismic analysis, and computational fracture mechanics in composite and "smart" materials. The traveler also attended the International Association for Boundary Element Methods Conference in Rome, Italy. While many aspects of boundary element theory and applications were discussed in the papers, the dominant topic was the analysis and application of hypersingular equations. This has been the focus of recent work by the author, and thus the conference was highly relevant to research at ORNL.

  9. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of these data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statisticians.

  10. Stress description model by non destructive magnetic methods

    International Nuclear Information System (INIS)

    Flambard, C.; Grossiord, J.L.; Tourrenc, P.

    1983-01-01

    For several years, CETIM has been investigating possibilities for materials analysis by developing a method based on the observation of ferromagnetic noise. Experiments have revealed clear correlations between the state of the material and the recorded signal. These correlations open the way to industrial applications for measuring stresses and strains in the elastic and plastic ranges. This article starts with a brief historical account and the theoretical background of the method. The experimental frame of this research is described and the main results are analyzed. A theoretical model was built up and is presented; it seems in agreement with some experimental observations. The main results concerning applied stress and thermal and surface treatments (decarburizing) are presented [fr

  11. Alternative wind power modeling methods using chronological and load duration curve production cost models

    Energy Technology Data Exchange (ETDEWEB)

    Milligan, M R

    1996-04-01

    Because wind power is an intermittent resource, capturing its temporal variation is an important issue in the context of utility production cost modeling. Many production cost models use a method that creates a cumulative probability distribution outside the time domain. The purpose of this report is to examine two production cost models that represent the two major model types: chronological models and load duration curve models. This report is part of ongoing research undertaken by the Wind Technology Division of the National Renewable Energy Laboratory into utility modeling and wind system integration.
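
    The difference between the two model types can be made concrete in a few lines: a chronological net-load series (load minus wind) keeps the hour-to-hour correlation between wind and load, while its load duration curve sorts the same values and discards time ordering. The series below are synthetic placeholders.

        import numpy as np

        hours = np.arange(8760)
        load = 1000 + 300 * np.sin(2 * np.pi * hours / 24) \
                    + 150 * np.sin(2 * np.pi * hours / 8760)          # daily + annual cycle
        rng = np.random.default_rng(7)
        wind = np.clip(200 + 120 * rng.standard_normal(8760), 0, None)  # intermittent

        net_chrono = load - wind                    # chronological representation
        net_ldc = np.sort(net_chrono)[::-1]         # load duration curve (sorted)

        # The LDC preserves the distribution (same energy, same peak) but not the
        # hour-to-hour correlation between wind and load.
        print("energy (GWh), both views:", net_chrono.sum() / 1e3, net_ldc.sum() / 1e3)
        print("peak net load (MW):      ", net_ldc[0])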

  12. Modeling Enzymatic Transition States by Force Field Methods

    DEFF Research Database (Denmark)

    Hansen, Mikkel Bo; Jensen, Hans Jørgen Aagaard; Jensen, Frank

    2009-01-01

    The SEAM method, which models a transition structure as a minimum on the seam of two diabatic surfaces represented by force field functions, has been used to generate 20 transition structures for the decarboxylation of orotidine by the orotidine-5'-monophosphate decarboxylase enzyme. The dependence of the TS geometry on the flexibility of the system has been probed by fixing layers of atoms around the active site and using increasingly larger nonbonded cutoffs. The variability over the 20 structures is found to decrease as the system is made more flexible. Relative energies have been calculated by various electronic structure methods, where part of the enzyme is represented by a force field description and the effects of the solvent are represented by a continuum model. The relative energies vary by several hundreds of kJ/mol between the transition structures, and tests showed that a large part

  13. Optimization Method of Fusing Model Tree into Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Yu Fang

    2017-01-01

    Full Text Available Partial Least Squares (PLS) cannot adapt to the characteristics of the data in many fields, owing to the features of such data: multiple independent variables, multiple dependent variables, and non-linearity. A Model Tree (MT), however, has good adaptability to nonlinear functions, since it is made up of many piecewise linear segments. Based on this, a new method combining PLS and MT to analyze and predict data is proposed, which builds a Model Tree from the principal components and explanatory variables extracted by PLS, and repeatedly extracts residual information to build further Model Trees until a satisfactory accuracy condition is met. Using data on the monarch drug of the maxingshigan decoction used to treat asthma or cough, and two sample sets from the UCI Machine Learning Repository, the experimental results show that the explanatory and predictive ability of the new method is improved.

  14. A Method of Upgrading a Hydrostatic Model to a Nonhydrostatic Model

    Directory of Open Access Journals (Sweden)

    Chi-Sann Liou

    2009-01-01

    Full Text Available As the sigma-p coordinate under the hydrostatic approximation can be interpreted as the mass coordinate without the hydrostatic approximation, we propose a method that upgrades a hydrostatic model to a nonhydrostatic model with relatively little effort. The method adds to the primitive equations the extra terms omitted by the hydrostatic approximation and two prognostic equations, for the vertical speed w and the nonhydrostatic part of the pressure p'. With properly formulated governing equations, at each time step the dynamic part of the model is first integrated as in the original hydrostatic model, and then the nonhydrostatic contributions are added as corrections to the hydrostatic solutions. In applying physical parameterizations after the dynamic part of the integration, all physics packages of the original hydrostatic model can be used directly in the nonhydrostatic model, since the upgraded nonhydrostatic model shares the same vertical coordinates with the original hydrostatic model. In this way, the majority of the code of the nonhydrostatic model comes from the original hydrostatic model; extra code is only needed for the calculations additional to the primitive equations. In order to handle sound waves, we use smaller time steps in the nonhydrostatic part of the dynamic time integration, with a split-explicit scheme for horizontal momentum and temperature and a semi-implicit scheme for w and p'. Simulations of 2-dimensional mountain waves and density flows associated with a cold bubble have been used to test the method. The idealized case tests demonstrate that the proposed method realistically simulates the nonhydrostatic effects on different atmospheric circulations that are revealed in theoretical solutions and in simulations from other nonhydrostatic models. This method can be used to upgrade any global or mesoscale model from hydrostatic to nonhydrostatic.

  15. Linear facility location in three dimensions - Models and solution methods

    DEFF Research Database (Denmark)

    Brimberg, Jack; Juel, Henrik; Schöbel, Anita

    2002-01-01

    We consider the problem of locating a line or a line segment in three-dimensional space, such that the sum of distances from the facility represented by the line (segment) to a given set of points is minimized. An example is planning the drilling of a mine shaft, with access to ore deposits through...... horizontal tunnels connecting the deposits and the shaft. Various models of the problem are developed and analyzed, and efficient solution methods are given....

  16. Chebyshev super spectral viscosity method for a fluidized bed model

    International Nuclear Information System (INIS)

    Sarra, Scott A.

    2003-01-01

    A Chebyshev super spectral viscosity method and operator splitting are used to solve a hyperbolic system of conservation laws with a source term modeling a fluidized bed. The fluidized bed displays a slugging behavior which corresponds to shocks in the solution. A modified Gegenbauer postprocessing procedure is used to obtain a solution which is free of oscillations caused by the Gibbs-Wilbraham phenomenon in the spectral viscosity solution. Conservation is maintained by working with unphysical negative particle concentrations
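
    The filtering idea is easy to demonstrate in one dimension, as below: expand a discontinuous function in Chebyshev polynomials and damp the high modes with an exponential, viscosity-style filter. The filter order and strength are illustrative assumptions, and the Gegenbauer postprocessing step is not shown.

```python
# Minimal sketch: Chebyshev expansion of a step function, with an
# exponential filter suppressing the high (oscillatory) modes.
import numpy as np
from numpy.polynomial import chebyshev as C

N = 64
x = np.cos(np.pi * np.arange(N + 1) / N)   # Chebyshev-Lobatto points
f = np.sign(x)                              # step: triggers Gibbs oscillations

coeffs = C.chebfit(x, f, N)                 # Chebyshev coefficients
k = np.arange(N + 1) / N
sigma = np.exp(-36 * k ** 8)                # assumed exponential filter
f_filtered = C.chebval(x, coeffs * sigma)   # smoothed reconstruction
```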

  17. A model based security testing method for protocol implementation.

    Science.gov (United States)

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases for verifying the security of the protocol implementation.

  18. Semi-Lagrangian methods in air pollution models

    Directory of Open Access Journals (Sweden)

    A. B. Hansen

    2011-06-01

    Various semi-Lagrangian methods are tested with respect to advection in air pollution modeling. The aim is to find a method fulfilling as many of the desirable properties of Rasch and Williamson (1990) and Machenhauer et al. (2008) as possible. The focus in this study is on accuracy and local mass conservation.

    The methods tested are, first, classical semi-Lagrangian cubic interpolation, see e.g. Durran (1999); second, semi-Lagrangian cubic cascade interpolation, by Nair et al. (2002); third, semi-Lagrangian cubic interpolation with the modified interpolation weights, Locally Mass Conserving Semi-Lagrangian (LMCSL), by Kaas (2008); and last, semi-Lagrangian cubic interpolation with a locally mass conserving monotonic filter, by Kaas and Nielsen (2010).

    Semi-Lagrangian (SL) interpolation is a classical method for atmospheric modeling; cascade interpolation is more efficient computationally; the modified interpolation weights assure mass conservation; and the locally mass conserving monotonic filter imposes monotonicity.

    All schemes are tested with advection alone or with advection and chemistry together, under both typical rural and urban conditions, using different temporal and spatial resolutions. The methods are compared with a current state-of-the-art scheme, Accurate Space Derivatives (ASD), see Frohn et al. (2002), presently used at the National Environmental Research Institute (NERI) in Denmark. To enable a consistent comparison, only non-divergent flow configurations are tested.

    The test cases are based either on the traditional slotted cylinder or on the rotating cone, where the schemes' ability to model both steep gradients and slopes is challenged.

    The tests showed that the locally mass conserving monotonic filter improved the results significantly for some of the test cases, though not for all. It was found that the semi-Lagrangian schemes were, in almost every case, not able to outperform the current ASD scheme.
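
    As a point of reference for the first scheme listed above, the sketch below implements classical 1-D semi-Lagrangian advection with periodic boundaries and cubic Lagrange interpolation at the departure points; the grid, velocity, and step sizes are illustrative assumptions.

```python
# Minimal 1-D semi-Lagrangian advection with 4-point cubic interpolation.
import numpy as np

N, u, dt = 200, 1.0, 0.5
dx = 1.0
x = np.arange(N) * dx
q = np.exp(-((x - 50) / 5.0) ** 2)        # initial tracer profile

for _ in range(100):
    xd = (x - u * dt) % (N * dx)          # departure points
    j = np.floor(xd / dx).astype(int)     # grid index left of departure point
    a = xd / dx - j                       # fractional distance in [0, 1)
    qm1, q0 = q[(j - 1) % N], q[j % N]
    qp1, qp2 = q[(j + 1) % N], q[(j + 2) % N]
    # Cubic Lagrange weights for nodes at -1, 0, 1, 2 (local coordinate a)
    q = (-a * (a - 1) * (a - 2) / 6 * qm1
         + (a + 1) * (a - 1) * (a - 2) / 2 * q0
         - (a + 1) * a * (a - 2) / 2 * qp1
         + (a + 1) * a * (a - 1) / 6 * qp2)
```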

  19. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology.

    Directory of Open Access Journals (Sweden)

    Shankarjee Krishnamoorthi

    We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations.

  1. Arima model and exponential smoothing method: A comparison

    Science.gov (United States)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study presents a comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three different time series are used: the price of crude palm oil (RM/tonne), the exchange rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the price of SMR 20 rubber (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction errors, using the Mean Squared Error (MSE), the Mean Absolute Percentage Error (MAPE), and the Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce better predictions for long-term forecasting with limited data sources, but cannot produce better predictions for time series with a narrow range from one point to the next, as in the exchange rate series. Conversely, the Exponential Smoothing Method produces better forecasts for the exchange rate series, which has a narrow range from one point to the next, but cannot produce better predictions over a longer forecasting period.
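
    A rough sketch of such a comparison is given below using statsmodels; the simulated series, ARIMA order, and forecast horizon are illustrative assumptions, not the study's data or settings.

```python
# Compare ARIMA and simple exponential smoothing forecasts by MSE,
# MAPE, and MAD on a held-out tail of the series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.5, 1.0, 120))   # stand-in for a price series
train, test = series[:108], series[108:]
h = len(test)

arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(h)
ses_fc = SimpleExpSmoothing(train).fit().forecast(h)

def mse(e):     return np.mean(e ** 2)
def mad(e):     return np.mean(np.abs(e))
def mape(e, y): return np.mean(np.abs(e / y)) * 100

for name, fc in [("ARIMA", arima_fc), ("SES", ses_fc)]:
    err = test - fc
    print(name, mse(err), mape(err, test), mad(err))
```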

  2. TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL

    Directory of Open Access Journals (Sweden)

    N. Zhu

    2016-06-01

    The large number of bolts and screws attached to subway shield ring plates, along with the many metal brackets and pieces of electrical equipment mounted on the tunnel walls, causes laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), which affect the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a search algorithm is given to extract the edge points on both sides, which are then used to fit the tunnel's central axis. Along the axis the point cloud is segmented into regions and then fitted to a smooth elliptic cylindrical surface by iteration. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic-cylindrical-model-based method can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic, all-around deformation monitoring of tunnel sections in routine subway operation and maintenance.
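
    The per-section core of such a filter can be sketched as below: fit a conic (ellipse) to a ring of cross-section points by least squares and drop points with large algebraic residuals. The data and threshold are illustrative assumptions; the paper fits a full elliptic cylinder iteratively along the axis.

```python
# Fit a*y^2 + b*yz + c*z^2 + d*y + e*z = 1 and filter by residual.
import numpy as np

def fit_conic(y, z):
    A = np.column_stack([y**2, y*z, z**2, y, z])
    coef, *_ = np.linalg.lstsq(A, np.ones_like(y), rcond=None)
    return coef

def filter_section(y, z, tol=0.05):
    coef = fit_conic(y, z)
    resid = np.abs(np.column_stack([y**2, y*z, z**2, y, z]) @ coef - 1)
    keep = resid < tol                 # wall points fit the ellipse well
    return y[keep], z[keep]

# Example: a noisy elliptical ring plus simulated protruding "non-points".
t = np.linspace(0, 2 * np.pi, 400)
y = 2.7 * np.cos(t) + np.random.normal(0, 0.005, t.size)
z = 2.5 * np.sin(t) + np.random.normal(0, 0.005, t.size)
y[:20] += 0.3                          # brackets/bolts off the wall
y_f, z_f = filter_section(y, z)
```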

  3. Statistical methods for mechanistic model validation: Salt Repository Project

    International Nuclear Information System (INIS)

    Eggett, D.L.

    1988-07-01

    As part of the Department of Energy's Salt Repository Program, Pacific Northwest Laboratory (PNL) is studying the emplacement of nuclear waste containers in a salt repository. One objective of the SRP program is to develop an overall waste package component model which adequately describes such phenomena as container corrosion, waste form leaching, spent fuel degradation, etc., which are possible in the salt repository environment. The form of this model will be proposed, based on scientific principles and relevant salt repository conditions with supporting data. The model will be used to predict the future characteristics of the near field environment. This involves several different submodels such as the amount of time it takes a brine solution to contact a canister in the repository, how long it takes a canister to corrode and expose its contents to the brine, the leach rate of the contents of the canister, etc. These submodels are often tested in a laboratory and should be statistically validated (in this context, validate means to demonstrate that the model adequately describes the data) before they can be incorporated into the waste package component model. This report describes statistical methods for validating these models. 13 refs., 1 fig., 3 tabs

  4. Modern Methods for Modeling Change in Obesity Research in Nursing.

    Science.gov (United States)

    Sereika, Susan M; Zheng, Yaguang; Hu, Lu; Burke, Lora E

    2017-08-01

    Persons receiving treatment for weight loss often demonstrate heterogeneity in lifestyle behaviors and health outcomes over time. Traditional repeated measures approaches focus on the estimation and testing of an average temporal pattern, ignoring the interindividual variability about the trajectory. An alternate person-centered approach, group-based trajectory modeling, can be used to identify distinct latent classes of individuals following similar trajectories of behavior or outcome change as a function of age or time and can be expanded to include time-invariant and time-dependent covariates and outcomes. Another latent class method, growth mixture modeling, builds on group-based trajectory modeling to investigate heterogeneity within the distinct trajectory classes. In this applied methodologic study, group-based trajectory modeling for analyzing changes in behaviors or outcomes is described and contrasted with growth mixture modeling. An illustration of group-based trajectory modeling is provided using calorie intake data from a single-group, single-center prospective study for weight loss in adults who are either overweight or obese.
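
    Group-based trajectory models are usually fit with specialized software (e.g., SAS PROC TRAJ). As a loose Python stand-in, the sketch below summarizes each person's intake series by polynomial trend coefficients and clusters them with a Gaussian mixture; the simulated data and two-class structure are illustrative assumptions, not the study's analysis.

```python
# Cluster per-person trajectory shapes into latent classes.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
weeks = np.arange(12)
# Simulated calorie-intake series: two latent classes (steady vs. rebound).
steady = 1800 - 10 * weeks + rng.normal(0, 40, (60, 12))
rebound = 1800 - 40 * weeks + 3 * weeks**2 + rng.normal(0, 40, (40, 12))
series = np.vstack([steady, rebound])

# Per-person quadratic trend coefficients as trajectory features.
feats = np.array([np.polyfit(weeks, s, 2) for s in series])
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(feats)
```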

  5. The Quadrotor Dynamic Modeling and Indoor Target Tracking Control Method

    Directory of Open Access Journals (Sweden)

    Dewei Zhang

    2014-01-01

    A reliable nonlinear dynamic model of the quadrotor is presented. The nonlinear dynamic model includes actuator dynamics and aerodynamic effects. Since the rotors run near a constant hovering speed, the dynamic model is simplified at the hovering operating point. Based on the simplified nonlinear dynamic model, PID controllers with feedback linearization and feedforward control are designed using the backstepping method. These controllers are used to control both the attitude and the position of the quadrotor. A fully custom quadrotor was developed to verify the correctness of the dynamic model and the control algorithms. The attitude of the quadrotor is measured by an inertial measurement unit (IMU). The position of the quadrotor in a GPS-denied environment, especially an indoor environment, is estimated from downward-facing camera and ultrasonic sensor measurements. The validity and effectiveness of the proposed dynamic model and control algorithms are demonstrated by experimental results. It is shown that the vehicle achieves robust vision-based hovering and moving-target tracking control.
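
    A single hover-linearized PID attitude loop of the kind described can be sketched as below (roll axis only); the gains, inertia, and control period are illustrative assumptions rather than the paper's identified parameters, and the feedforward and backstepping terms are omitted.

```python
# One PID loop regulating roll angle about hover (toy simulation).
import numpy as np

Ix, dt = 0.02, 0.002            # assumed roll inertia [kg m^2], period [s]
kp, ki, kd = 6.0, 0.5, 1.2      # assumed PID gains

phi, phi_rate, integ = 0.2, 0.0, 0.0   # start with 0.2 rad roll error
for _ in range(2000):
    err = 0.0 - phi                     # setpoint: level hover
    integ += err * dt
    torque = kp * err + ki * integ - kd * phi_rate   # PID control torque
    phi_rate += (torque / Ix) * dt      # hover-linearized roll dynamics
    phi += phi_rate * dt
```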

  6. Modelling magnetic polarisation J 50 by different methods

    International Nuclear Information System (INIS)

    Yonamine, Taeko; Campos, Marcos F. de; Castro, Nicolau A.; Landgraf, Fernando J.G.

    2006-01-01

    Two different methods for modelling the angular behaviour of the magnetic polarisation at 5000 A/m (J 50 ) of electrical steels were evaluated and compared. Both methods are based upon crystallographic texture data. The texture of non-oriented electrical steels with silicon contents ranging from 0.11 to 3% Si was determined by X-ray diffraction. In the first method, J 50 was correlated with the calculated value of the average anisotropy energy in each direction, using the texture data. In the second method, the first three coefficients of the spherical harmonic series of the ODF and two experimental points were used to estimate the angular variation of J 50 . The first method allows the estimation of J 50 for samples with different textures and Si contents using only the texture data, with no need for magnetic measurements; this is advantageous because texture data can be acquired with less than 2 g of material. The second method may give a better fit in some situations, but besides the texture data it requires magnetic measurements in at least two directions, for example the rolling and transverse directions

  7. Thermal Modeling Method Improvements for SAGE III on ISS

    Science.gov (United States)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; McLeod, Shawn

    2015-01-01

    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle. A detailed thermal model of the SAGE III payload, which consists of multiple subsystems, has been developed in Thermal Desktop (TD). Many innovative analysis methods have been used in developing this model; these will be described in the paper. This paper builds on a paper presented at TFAWS 2013, which described some of the initial development of efficient methods for SAGE III. The current paper describes additional improvements that have been made since that time. To expedite the correlation of the model to thermal vacuum (TVAC) testing, the chambers and GSE for both TVAC chambers at Langley used to test the payload were incorporated within the thermal model. This allowed TVAC predictions and correlations to be run within the flight model, eliminating the need for separate TVAC models. In one TVAC test, radiant lamps were used, which necessitated shooting rays from the lamps and running in both solar and IR wavebands. A new Dragon model was incorporated, which entailed a change in orientation; that change was made using an assembly, so that any potential new Dragon orbits could be added in the future without modification of the model. The Earth orbit parameters such as albedo and Earth infrared flux were incorporated as time-varying values that change over the course of the orbit; despite being required in one of the ISS documents, this had not been done before by any previous payload. All parameters such as initial temperature, heater voltage, and location of the payload are defined based on the case definition. For one component, testing was performed in both air and vacuum; incorporating the air convection in a submodel that was

  8. Hybrid Modeling Method for a DEP Based Particle Manipulation

    Directory of Open Access Journals (Sweden)

    Mohamad Sawan

    2013-01-01

    In this paper, a new modeling approach for dielectrophoresis (DEP) based particle manipulation is presented. The proposed method fills missing links in finite element modeling between multiphysics simulation and biological behavior. This technique is among the first steps towards developing a more complex platform covering several types of manipulation, such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electric field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation, while MATLAB interprets the results to calculate the cell displacement and sends the new information to ANSYS for another iteration. The beta version of the proposed technique takes into account particle shape, weight and electrical properties. The first results obtained are consistent with experimental results.

  9. Nuclear-fuel-cycle optimization: methods and modelling techniques

    International Nuclear Information System (INIS)

    Silvennoinen, P.

    1982-01-01

    This book presents methods applicable to analyzing fuel-cycle logistics and optimization, as well as to evaluating the economics of different reactor strategies. After an introduction to the phases of a fuel cycle, uranium cost trends are assessed in a global perspective. Subsequent chapters deal with the fuel-cycle problems faced by a power utility. The fuel-cycle models cover the entire cycle, from the supply of uranium to the disposition of spent fuel. The chapter headings are: Nuclear Fuel Cycle, Uranium Supply and Demand, Basic Model of the LWR (light water reactor) Fuel Cycle, Resolution of Uncertainties, Assessment of Proliferation Risks, Multigoal Optimization, Generalized Fuel-Cycle Models, Reactor Strategy Calculations, and Interface with Energy Strategies. 47 references, 34 figures, 25 tables

  10. A Method for Modeling of Floating Vertical Axis Wind Turbine

    DEFF Research Database (Denmark)

    Wang, Kai; Hansen, Martin Otto Laver; Moan, Torgeir

    2013-01-01

    It is of interest to investigate the potential advantages of a floating vertical axis wind turbine (FVAWT) due to its economical installation and maintenance. A novel 5 MW vertical axis wind turbine concept with a Darrieus rotor mounted on a semi-submersible support structure is proposed in this paper. In order to assess the technical and economic feasibility of this novel concept, a comprehensive simulation tool for modeling the floating vertical axis wind turbine is needed. This work presents the development of a coupled method for modeling the dynamics of a floating vertical axis wind turbine. The integrated dynamic model takes into account the wind inflow, aerodynamics, hydrodynamics, structural dynamics (wind turbine, floating platform and the mooring lines) and generator control. The approach calculates the dynamic equilibrium at each time step and takes account of the interaction between the rotor...

  11. Research on Splicing Method of Digital Relic Fragment Model

    Science.gov (United States)

    Yan, X.; Hu, Y.; Hou, M.

    2018-04-01

    In the course of archaeological excavation, large numbers of cultural relic fragments are unearthed, and their restoration has traditionally been done manually by arts and crafts experts. In this process, the experts repeatedly trial-fit the existing fragments and then use adhesive to fix together those found to match, which can cause irreversible secondary damage to the relics. In order to minimize such damage, surveyors combine 3D laser scanning with computer technology and build digital models of the fragments so that the relics can be spliced virtually. Common commercial 3D software can perform model translation and rotation; using these two functions, the fragment models can be spliced manually, and the specific location of each fragment is recorded once the mosaic is complete, effectively reducing the damage caused by physical trial splicing.
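
    The geometric core of such virtual splicing can be sketched as a rigid alignment: given matched contact points on two fragment models, recover the rotation and translation that dock one onto the other (the Kabsch/SVD solution below). The point sets are illustrative assumptions; a real pipeline would first match fracture-surface features.

```python
# Kabsch alignment: least-squares rigid transform between matched points.
import numpy as np

def rigid_align(P, Q):
    """Return R, t minimizing ||R @ p + t - q|| over matched rows."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cQ - R @ cP
    return R, t

P = np.random.rand(50, 3)                    # contact points on fragment A
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.05])  # same points on fragment B
R, t = rigid_align(P, Q)                     # recovers R_true and the shift
```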

  12. Methods to model-check parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O. S.; McCune, W.; Lusk, E.

    2003-01-01

    We report on an effort to develop methodologies for formal verification of parts of the Multi-Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of communicating processes. While the individual components of the collection execute simple algorithms, their interaction leads to unexpected errors that are difficult to uncover by conventional means. Two verification approaches are discussed here: the standard model checking approach using the software model checker SPIN and the nonstandard use of a general-purpose first-order resolution-style theorem prover OTTER to conduct the traditional state space exploration. We compare modeling methodology and analyze performance and scalability of the two methods with respect to verification of MPD

  13. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  14. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    This article describes a method for constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of the digital image. For this operation, corresponding points between the image and the point cloud must be found. Before the search for corresponding points, a quasi-image of the point cloud is generated. The SIFT algorithm is then applied to the quasi-image and the real image; it allows the corresponding points to be found. The exterior orientation parameters of the image are calculated from the corresponding points. The second step is the construction of the vector object model. Vectorization is performed by an operator in an interactive mode using a single image. The spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available. Edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
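
    The SIFT correspondence step can be sketched with OpenCV as below: detect keypoints in the quasi-image and in the photograph, then keep matches that pass Lowe's ratio test. The file names and the 0.75 ratio threshold are illustrative assumptions.

```python
# SIFT keypoint matching between a point-cloud quasi-image and a photo.
import cv2

quasi = cv2.imread("quasi_image.png", cv2.IMREAD_GRAYSCALE)
photo = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(quasi, None)
kp2, des2 = sift.detectAndCompute(photo, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]  # ratio test
points = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
```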

  15. Multiscale modeling of porous ceramics using movable cellular automaton method

    Science.gov (United States)

    Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.

    2017-10-01

    The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, a particle method in the novel computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model the uniaxial compression of several representative samples, with an explicit account of pores of the same size but with unique positions in space. As a result, we obtain the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties, at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of the small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to obtain the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show the correct behavior of the model sample at the macroscale.

  16. Applicability of deterministic methods in seismic site effects modeling

    International Nuclear Information System (INIS)

    Cioflan, C.O.; Radulian, M.; Apostol, B.F.; Ciucu, C.

    2005-01-01

    The up-to-date information on the local geological structure of the Bucharest urban area has been integrated into complex analyses of seismic ground motion simulation using deterministic procedures. The data recorded for the Vrancea intermediate-depth large earthquakes are supplemented with synthetic computations over the whole city area. The hybrid method, with a double-couple seismic source approximation and relatively simple regional and local structure models, allows a satisfactory reproduction of the strong motion records in the frequency domain (0.05-1) Hz. The new geological information, together with a deterministic analytical method that combines the modal summation technique, applied to model the seismic wave propagation between the seismic source and the studied sites, with the mode coupling approach, used to model the seismic wave propagation through the local sedimentary structure of the target site, allows the modelling to be extended to higher frequencies of earthquake engineering interest. The results of these studies (synthetic time histories of the ground motion parameters, absolute and relative response spectra, etc.) for the last three strong Vrancea events (August 31, 1986, Mw = 7.1; May 30, 1990, Mw = 6.9; and October 27, 2004, Mw = 6.0) can complete the strong motion database used for microzonation purposes. Implications and integration of the deterministic results into urban planning and disaster management strategies are also discussed. (authors)

  17. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese Labour Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities for analysing and estimating unemployment and its spatial distribution across any region. The survey chooses, according to pre-established sampling criteria, a certain number of dwellings across the nation and surveys the number of unemployed in those dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures for smaller areas. Direct estimation methods, due to reduced sample sizes in small areas, tend to produce fairly large sampling variations; therefore model-based methods, which tend to

  18. Huffman and linear scanning methods with statistical language models.

    Science.gov (United States)

    Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris

    2015-03-01

    Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
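
    The coding idea underlying Huffman scanning can be sketched as below: derive binary codes from symbol probabilities so that likely letters need fewer switch activations. The toy probabilities are illustrative assumptions; a real interface would take them from a statistical language model conditioned on the typing history.

```python
# Build Huffman codes over symbol probabilities with a min-heap.
import heapq
from itertools import count

def huffman_codes(probs):
    tiebreak = count()                    # avoids comparing dicts on ties
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least-likely subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

codes = huffman_codes({"e": 0.4, "t": 0.25, "a": 0.2, "q": 0.15})
# Frequent symbols get short codes, i.e., fewer scan steps to select.
```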

  19. Statistical Method to Overcome Overfitting Issue in Rational Function Models

    Science.gov (United States)

    Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.

    2017-09-01

    Rational function models (RFMs) are known as one of the most appealing models and are extensively applied in the geometric correction of satellite images and in map production. Overfitting is a common issue in the case of terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study a fast and robust statistical approach is proposed and compared with the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical significance test is applied to search for the RFM parameters that are resistant to the overfitting issue. The performance of the proposed method was evaluated on two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. Indeed, this technique shows an improvement of 50-80% over TR.
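
    A generic sketch of significance-test-based parameter screening is given below with statsmodels OLS: fit, keep only coefficients whose p-values fall under a threshold, and refit. The synthetic design and the 0.05 threshold are illustrative assumptions; actual RFMs are ratios of cubic polynomials in latitude, longitude, and height.

```python
# Keep only statistically significant regression parameters, then refit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(100, 10)))
y = X[:, 1] * 2.0 - X[:, 3] * 0.8 + rng.normal(0, 0.1, 100)  # few true terms

full = sm.OLS(y, X).fit()
keep = full.pvalues < 0.05             # significance test per coefficient
keep[0] = True                         # always retain the intercept
reduced = sm.OLS(y, X[:, keep]).fit()  # refit with surviving parameters
```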

  20. Reflexion on linear regression trip production modelling method for ensuring good model quality

    Science.gov (United States)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model still has to be used, in which a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are that the sample must be capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood and used in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to try to formulate a better modelling method for ensuring model quality. The research results are as follows. Statistics provides a method for calculating the span of predicted values at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to the sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value and that the sample composition can significantly change the model. Hence a good R2 value does not, in fact, always mean good model quality. This leads to three basic ideas for ensuring good model quality, i.e. reformulating the quality measure, the calculation procedure, and the sampling method. A quality measure is defined as having a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
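
    The proposed quality check is easy to sketch with statsmodels, as below: fit a linear trip production model and report both R2 and the confidence interval of the predicted value. The household/trip data are illustrative assumptions.

```python
# Report R^2 together with the CI of the predicted mean value.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
households = rng.uniform(50, 500, 40)
trips = 30 + 2.1 * households + rng.normal(0, 40, 40)

X = sm.add_constant(households)
res = sm.OLS(trips, X).fit()
print("R^2:", res.rsquared)

x_new = sm.add_constant(np.array([100.0, 300.0]))
pred = res.get_prediction(x_new)
print(pred.conf_int(alpha=0.05))   # 95% CI of the predicted value
```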

  1. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    Science.gov (United States)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model with acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model the final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multi-regression model based on a response surface equation (RSE). Data obtained from the operations of a major airline, for one passenger transport aircraft type flying into Dallas/Fort Worth International Airport, are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the cases with a reduced number of variables, the standard deviation of the neural network model's errors represents an over 5% reduction compared to the RSE model's errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
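
    The two approaches compared above can be sketched generically with scikit-learn, as below: a quadratic response-surface regression versus a small neural network, scored by the standard deviation of the prediction error. The features and data are illustrative assumptions, not the airline data set.

```python
# Response-surface (quadratic) regression vs. a small neural network.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))            # e.g., weight, wind, flap setting
y = 135 + 4*X[:, 0] - 3*X[:, 1] + 2*X[:, 0]*X[:, 2] + rng.normal(0, 2, 500)
Xtr, Xte, ytr, yte = X[:400], X[400:], y[:400], y[400:]

poly = PolynomialFeatures(2)
rse = LinearRegression().fit(poly.fit_transform(Xtr), ytr)
rse_err = yte - rse.predict(poly.transform(Xte))

nn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                  random_state=0).fit(Xtr, ytr)
nn_err = yte - nn.predict(Xte)
print("RSE sd:", rse_err.std(), "NN sd:", nn_err.std())
```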

  2. Stable isotope separation in calutrons: Forty years of production and distribution

    International Nuclear Information System (INIS)

    Bell, W.A.; Tracy, J.G.

    1987-11-01

    The stable isotope separation program, established in 1945, has operated continually to provide enriched stable isotopes and selected radioactive isotopes, including the actinides, for use in research, medicine, and industrial applications. This report summarizes the first forty years of effort in the production and distribution of stable isotopes. Evolution of the program along with the research and development, chemical processing, and production efforts are highlighted. A total of 3.86 million separator hours has been utilized to separate 235 isotopes of 56 elements. Relative effort expended toward processing each of these elements is shown. Collection rates (mg/separator h), which vary by a factor of 20,000 from the highest to the lowest (205Tl to 46Ca), and the attainable isotopic purity for each isotope are presented. Policies related to isotope pricing, isotope distribution, and support for the enrichment program are discussed. Changes in government funding, coupled with large variations in sales revenue, have resulted in 7-fold perturbations in production levels

  3. Coronary artery calcification identified by CT in patients over forty years of age

    International Nuclear Information System (INIS)

    Woodring, J.H.; West, J.W.

    1989-01-01

    In a study of 100 unselected patients forty years of age or older, routine CT of the thorax demonstrated coronary artery calcification in 41%. Calcification of the left anterior descending artery was most common, occurring in 34%. For patients sixty years of age and over, clinical evidence of coronary artery disease was 1.7 times more common in those with calcification than in those without; for patients under 60, however, coronary artery disease was 5.5 times more common in those with calcification than in those without. Because of the strong relationship known to exist between coronary artery calcification and coronary arteriosclerosis, we believe that the incidental discovery of coronary artery calcification on routine CT of the thorax is significant. All patients under 60 with coronary artery calcification discovered on CT should be investigated for hyperlipidemia if this has not been done, and, if they are not known to have a history of coronary artery disease, they should have a stress test; if it is positive, arteriography may be warranted. 30 refs., 5 figs.

  4. The Effectiveness of Hard Martial Arts in People over Forty: An Attempted Systematic Review

    Directory of Open Access Journals (Sweden)

    Gaby Pons van Dijk

    2014-04-01

    The objective was to assess the effect of hard martial arts on physical fitness components such as balance, flexibility, gait, strength and cardiorespiratory function, as well as several mental functions, in people over forty. A computerized literature search was carried out. Studies were selected when they had an experimental design, the age of the study population was >40, one of the interventions was a hard martial art, and at least balance and cardiorespiratory functions were used as outcome measures. We included four studies, with in total 112 participants, aged between 51 and 93 years. The intervention consisted of Taekwondo or Karate. Total training duration varied from 17 to 234 h. All four studies reported beneficial effects, such as improvements in balance, in reaction tests, and in the duration of single-leg stance. We conclude that because of serious methodological shortcomings in all four studies, there is currently suggestive but insufficient evidence that hard martial arts practice improves physical fitness functions in healthy people over 40. However, considering the importance of such effects and the low costs of the intervention, the potential beneficial health effects of age-adapted hard martial arts training in people over 40 warrant further study.

  5. Sixty Days Remaining, Forty Years of CERN, Two Brothers, One Exclusive Interview

    CERN Multimedia

    2001-01-01

    Twins Marcel and Daniel Genolin, while sharing memories of their CERN experiences, point out just how much smaller the Meyrin site once was. In a place such as CERN, where the physical sciences are in many ways the essence of our daily lives and where technological advancement is an everyday occurrence, it is easy to lose track of the days, months, and even years. But last week twin brothers Daniel and Marcel Genolin, hired in the early sixties and getting ready to end their eventful forty-year CERN experiences, made it clear that the winds of time bluster past us whether we are aware of it or not. 'CERN was very small when we started,' says Marcel, who has worked in transport during his entire time here. A lot has changed. 'When I got here there were no phones in people's houses,' he recalls. 'When there were problems in the control room with the PS (Proton Synchrotron) they used to get a megaphone and tell us [the transport service] to go and get the necessary physicists from their homes in the area. We had to lo...

  6. Cognition improvement in Taekwondo novices over forty. Results from the SEKWONDO Study.

    Directory of Open Access Journals (Sweden)

    Gaby Pons van Dijk

    2013-11-01

    Age-related cognitive decline is associated with an increased risk of disability, dementia and death. Recent studies suggest improvements in cognitive speed, attention and executive functioning with physical activity. However, whether such improvements are activity specific is unclear. We therefore aimed to study the effect of one year of age-adapted Taekwondo training on several cognitive functions, including reaction/motor time, information processing speed, and working and executive memory, in 24 healthy volunteers over forty. Reaction and motor time decreased by 41.2 seconds and 18.4 seconds (p=0.004 and p=0.015, respectively). Performance on the digit symbol coding task improved by a mean of 3.7 digits (p=0.017). Digit span, letter fluency, and trail-making-test completion time all improved, but not statistically significantly. The questionnaire reported better reaction time in 10, and unchanged reaction time in 9, of the nineteen study compliers. In conclusion, our data suggest that age-adapted Taekwondo training improves various aspects of cognitive function in people over 40, and may therefore offer a cheap, safe and enjoyable way to mitigate age-related cognitive decline.

  7. Parathyroid autotransplantation in forty-four patients with primary hyperparathyroidism: the role of thallium scanning

    International Nuclear Information System (INIS)

    McCall, A.R.; Calandra, D.; Lawrence, A.M.; Henkin, R.; Paloyan, E.

    1986-01-01

    Forty-four patients with primary hyperparathyroidism were followed for 18 to 126 months after subtotal or total parathyroidectomy and parathyroid autotransplantation. Indications for autotransplantation included the devascularization of parathyroid glands during concomitant thyroid lobectomy or total thyroidectomy and the excision of the only remaining parathyroid tissue in patients with persistent hyperparathyroidism after previous unsuccessful parathyroidectomies. Before implantation, all parathyroid tissue was histologically evaluated by frozen-section light microscopy with hematoxylin and eosin stain. Fifteen patients had histologically normal implants; to date none of these patients have developed recurrent hyperparathyroidism. Twenty-nine patients had either adenomatous or hyperplastic parathyroid tissue used for implants; two of these patients developed graft-dependent recurrent hyperparathyroidism 4 and 7 years later. In both patients the grafts were preoperatively localized by thallium scanning and their resection restored eucalcemia. One hundred thirty-one patients from 11 series in the current literature had a cumulative incidence of 17.5% for presumed graft-dependent recurrence and a 9.2% incidence of graft excision followed by eucalcemia. In comparison, in the present series the incidence of graft-dependent recurrent hyperparathyroidism in patients with either adenomatous or hyperplastic implants stands at 6.9%. In contrast, in 15 patients with normal parathyroid tissue implants, the incidence was zero

  8. Modeling of Unsteady Flow through the Canals by Semiexact Method

    Directory of Open Access Journals (Sweden)

    Farshad Ehsani

    2014-01-01

    The study of free-surface and pressurized water flows in channels has many interesting applications, one of the most important being the modeling of phenomena in natural water systems (rivers, estuaries) as well as in man-made systems (canals, pipes). For the development of major river engineering projects, such as flood prevention and flood control, there is an essential need for an instrument able to model and predict the consequences of any possible phenomenon for the environment, and in particular the new hydraulic characteristics of the system. The basic equations expressing the hydraulic principles were formulated in the 19th century by Barre de Saint-Venant and Valentin Joseph Boussinesq. The original hydraulic model of the Saint-Venant equations is written as a system of two partial differential equations, derived under the assumptions that the flow is one-dimensional, the cross-sectional velocity is uniform, the streamline curvature is small and the pressure distribution is hydrostatic. The Saint-Venant equations must be solved together with the continuity equation. Until now, no analytical solution of the Saint-Venant equations has been presented. In this paper the Saint-Venant equations and the continuity equation are solved with the homotopy perturbation method (HPM) and compared with an explicit forward finite difference method (FDM). To decrease the error between HPM and FDM, the equations are then solved by the homotopy analysis method (HAM). The HAM contains the auxiliary parameter ħ, which allows us to adjust and control the convergence region of the solution series. The study has highlighted the efficiency and capability of HAM in solving the Saint-Venant equations and in modeling unsteady flow through a rectangular canal, which is the goal of this paper, as well as through other kinds of canals.
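
    The explicit finite-difference side of the comparison can be sketched with a Lax-Friedrichs step for the 1-D Saint-Venant (shallow water) system in conservative form, as below; the grid, time step, and initial surge are illustrative assumptions, and the HPM/HAM series solutions are not shown.

```python
# Lax-Friedrichs for h_t + (hu)_x = 0, (hu)_t + (hu^2 + g h^2/2)_x = 0.
import numpy as np

g, N, dx, dt = 9.81, 400, 1.0, 0.02
h = np.where(np.arange(N) < N // 2, 2.0, 1.0)   # initial depth step (surge)
q = np.zeros(N)                                  # discharge q = h*u

def flux(h, q):
    return np.array([q, q**2 / h + 0.5 * g * h**2])

for _ in range(200):
    F = flux(h, q)
    U = np.array([h, q])
    # Lax-Friedrichs update on interior points
    Un = 0.5 * (U[:, 2:] + U[:, :-2]) - dt / (2 * dx) * (F[:, 2:] - F[:, :-2])
    U[:, 1:-1] = Un
    U[:, 0], U[:, -1] = U[:, 1], U[:, -2]        # transmissive boundaries
    h, q = U
```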

  9. Microstrip natural wave spectrum mathematical model using partial inversion method

    International Nuclear Information System (INIS)

    Pogarsky, S.A.; Litvinenko, L.N.; Prosvirnin, S.L.

    1995-01-01

    It is generally agreed that both microstrip lines themselves and various microstrip-based discontinuities are among the most difficult problems for accurate electrodynamic analysis. In recent years much has been published about the principles and the accurate (full-wave) methods of microstrip line investigation. The growing interest in this problem may be explained by the application of microstrips in the millimeter-wave range for realizing interconnects and a variety of passive components. At these higher operating frequencies, accurate component modeling becomes more critical. The creation, examination and experimental verification of an accurate method for investigating the natural wave spectra of planar electrodynamic structures are the objects of this manuscript. The method of moments with partial operator inversion may be considered the basic way of solving this problem. The method is promising for the accurate analysis of various planar discontinuities in microstrip, such as step discontinuities, microstrip bends, Y- and X-junctions, steps in the substrate dielectric constant, and other types of anisotropy

  10. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  11. Towards methodical modelling: Differences between the structure and output dynamics of multiple conceptual models

    Science.gov (United States)

    Knoben, Wouter; Woods, Ross; Freer, Jim

    2016-04-01

    Conceptual hydrologic models arrange spatial and temporal dynamics into stores, fluxes and transformation functions, depending on the modeller's choices and intended use. They have the advantages of being computationally efficient, being relatively easy model structures to reconfigure and having relatively low input data demands. This makes them well suited for large-scale and large-sample hydrology, where appropriately representing the dominant hydrologic functions of a catchment is a main concern. Given these requirements, the number of parameters in the model cannot be too high, to avoid equifinality and identifiability issues. This limits the number and the level of complexity of the dominant hydrologic processes the model can represent. Specific purposes and places thus require a specific model, and this has led to an abundance of conceptual hydrologic models. No structured overview of these models exists, and there is no clear method for selecting appropriate model structures for different catchments. This study is a first step towards creating an overview of the elements that make up conceptual models, which may later assist a modeller in finding an appropriate model structure for a given catchment. To this end, this study brings together over 30 past and present conceptual models. The reviewed model structures are simply different configurations of three basic model elements (stores, fluxes and transformation functions), depending on the hydrologic processes the models are intended to represent. Differences also exist in the inner workings of the stores, fluxes and transformations, i.e. the mathematical formulations that describe each model element's intended behaviour. We investigate the hypothesis that different model structures can produce similar behavioural simulations. This can clarify the overview of model elements by grouping elements which are similar, which can improve model structure selection.

  12. Dynamic airspace configuration method based on a weighted graph model

    Directory of Open Access Journals (Sweden)

    Chen Yangzhou

    2014-08-01

    This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, in which the vertices represent key points such as airports and waypoints, and the edges represent air routes. The vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Then, aircraft counts are computed for each cell and for each air route. By assigning these aircraft counts to the vertices and edges, a weighted graph model is obtained, and the airspace configuration problem is accordingly described as a weighted graph partitioning problem. The problem is solved by a graph partitioning algorithm that is a mixture of a general weighted graph cuts algorithm, an optimal dynamic load balancing algorithm and a heuristic algorithm. After the cuts algorithm partitions the model into sub-graphs, the load balancing algorithm, together with the heuristic algorithm, transfers aircraft counts to balance the workload among the sub-graphs. Lastly, the airspace configuration is completed by determining the sector boundaries. The simulation results show that the designed sectors satisfy not only the workload balancing condition but also constraints such as convexity, connectivity and minimum distance.
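
    The weighted-graph step can be sketched with networkx, as below: per-cell aircraft counts become node attributes, per-route counts become edge weights, and a standard partitioning heuristic (Kernighan-Lin bisection, used here as a stand-in for the paper's mixed algorithm) splits the graph so that little route traffic crosses the sector boundary. The counts are illustrative assumptions.

```python
# Build the weighted airspace graph and bisect it.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

G = nx.Graph()
cells = {"A": 12, "B": 7, "C": 9, "D": 4, "E": 10}   # aircraft per cell
for cell, load in cells.items():
    G.add_node(cell, load=load)                       # vertex weight
routes = [("A", "B", 5), ("B", "C", 3), ("C", "D", 6),
          ("D", "E", 2), ("E", "A", 4), ("B", "E", 1)]
for u, v, aircraft in routes:
    G.add_edge(u, v, weight=aircraft)                 # edge weight

# Bisect so that few route-aircraft cross the sector boundary; a full
# solution would also rebalance the node loads between sectors.
sector1, sector2 = kernighan_lin_bisection(G, weight="weight")
```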

  13. Revisiting a model-independent dark energy reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)

    2012-09-15

    In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes; that is, the state of the art does not allow them to be regarded on the same quality footing as SNeIa. We find that there is considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. We then try to establish the current redshift range for which one can make solid predictions on dark energy evolution. Finally, we strengthen the earlier view that this method is modest, in the sense that it provides only a picture of the global trend and has to be handled very carefully. On the other hand, we believe it offers an interesting complement to other approaches, given that it works with minimal assumptions. (orig.)

  14. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for the fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs of most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software package (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
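
    The structure of a first-order (cut-)HDMR surrogate can be sketched as below: anchor all variables at a reference point and build one-dimensional component functions f_i(x_i) = f(c_1, ..., x_i, ..., c_n) - f(c). The test function and grids are illustrative assumptions; the paper combines this expansion with fuzzy α-cuts and finite element models.

```python
# First-order cut-HDMR: f(x) ~ f0 + sum_i f_i(x_i).
import numpy as np

def f(x):                                   # expensive model stand-in
    return np.sin(x[0]) + x[1] ** 2 + 0.1 * x[0] * x[2]

ref = np.zeros(3)                           # cut point (anchor)
f0 = f(ref)
grids = [np.linspace(-1, 1, 21) for _ in range(3)]

def component(i, xi):
    x = ref.copy()
    x[i] = xi
    return f(x) - f0                        # 1-D component function f_i

def hdmr1(x):                               # surrogate evaluation
    return f0 + sum(np.interp(x[i], grids[i],
                              [component(i, g) for g in grids[i]])
                    for i in range(3))

# The gap between the two values is the (neglected) interaction term.
print(hdmr1(np.array([0.3, -0.5, 0.8])), f(np.array([0.3, -0.5, 0.8])))
```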

  15. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, techniques and methods, covering both modeling and solution issues. In particular, it shows how a decision support system can help managers reach a solution to a multi-level decision problem in practice. The monograph combines decision theories, methods, algorithms and applications effectively, discussing in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leader, multi-follower, multi-objective, rule-set-based, and fuzzy-parameter problems. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; and students at advanced undergraduate or master's level in information systems, business administration, or the application of computer science.

  16. Investigating the performance of directional boundary layer model through staged modeling method

    Science.gov (United States)

    Jeong, Moon-Gyu; Lee, Won-Chan; Yang, Seung-Hune; Jang, Sung-Hoon; Shim, Seong-Bo; Kim, Young-Chang; Suh, Chun-Suk; Choi, Seong-Woon; Kim, Young-Hee

    2011-04-01

    Generally speaking, the models used in optical proximity effect correction (OPC) can be divided into three parts: the mask part, the optic part, and the resist part. For excellent OPC model quality, each part should be described from first principles. However, an OPC model cannot incorporate all of the first principles, since it must cover full-chip-level calculation during the correction. Moreover, the calculation has to be performed iteratively during the correction until the cost function to be minimized converges. Normally, the optic part of an OPC model is described with the sum of coherent systems (SOCS[1]) method; thanks to this method the aerial image can be calculated very quickly without significant loss of accuracy. As for the resist part, the first principles are too complex to implement in detail, so the resist is normally expressed in a simplified way, such as an approximation of the first principles or a linear combination of factors that are highly correlated with the chemistries in the resist. The quality of this kind of resist model depends on how well the model is trained by fitting it to empirical data. The most popular way of building the mask function is based on Kirchhoff's thin-mask approximation. This method works well when the feature size on the mask is sufficiently large, but as the line width of semiconductor circuits becomes smaller, it causes significant error due to the mask topography effect. To account for the mask topography effect accurately, rigorous methods of calculating the mask function, such as the finite difference time domain (FDTD[2]) method and rigorous coupled-wave analysis (RCWA[3]), have to be used. But these methods are too time-consuming to be used as part of an OPC model. Until now, many alternatives have been suggested as efficient ways of considering the mask topography effect. Among them, this paper focuses on the boundary layer model (BLM), and we mainly investigate the way of optimizing the parameters for the BLM.

  17. Modeling cometary photopolarimetric characteristics with Sh-matrix method

    Science.gov (United States)

    Kolokolova, L.; Petrov, D.

    2017-12-01

    Cometary dust is dominated by particles of complex shape and structure, which are often considered to be fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very demanding of computer time and memory. We present a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method builds on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it was found that the shape-dependent factors can be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all the advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves are solvable analytically for particles of any shape. This makes the Sh-matrix approach an effective technique for simulating light scattering by particles of complex shape and surface structure. In this paper, we present cometary dust as an ensemble of Gaussian random particles whose shape is described by a log-normal distribution of radius length and direction (Muinonen, EMP, 72, 1996). By changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles, from spheres to particles of random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulting from light scattering by such particles, studying how they depend on particle shape, size, and composition (including porous particles that simulate aggregates) to find the best fit to cometary observations.

  18. Modelling of complex heat transfer systems by the coupling method

    Energy Technology Data Exchange (ETDEWEB)

    Bacot, P.; Bonfils, R.; Neveu, A.; Ribuot, J. (Centre d' Energetique de l' Ecole des Mines de Paris, 75 (France))

    1985-04-01

    The coupling method proposed here is designed to reduce the size of the matrices that appear in the modelling of heat transfer systems. It consists in isolating the elements that can be modelled separately and, among the input variables of a component, identifying those which couple it to another component. By grouping these types of variables, one can identify a so-called coupling matrix of reduced size and relate it to the overall system. This matrix allows the calculation of the coupling temperatures as a function of external stresses and of the state of the overall system at the previous instant. The internal temperatures of the components are then determined from the previous ones. Two examples of application are presented: one concerning a dwelling unit, and the second a solar water heater.

  19. Modeling patient safety incidents knowledge with the Categorial Structure method.

    Science.gov (United States)

    Souvignet, Julien; Bousquet, Cédric; Lewalle, Pierre; Trombert-Paviot, Béatrice; Rodrigues, Jean Marie

    2011-01-01

    Following the WHO initiative named World Alliance for Patient Safety (PS), launched in 2004, a conceptual framework developed by PS national reporting experts summarized the available knowledge. As a second step, the Department of Public Health team of the University of Saint Etienne elaborated a Categorial Structure (a semi-formal structure not related to an upper-level ontology) identifying the elements of the semantic structure underpinning the broad concepts contained in the framework for patient safety. This knowledge engineering method has been developed to enable the modeling of patient safety information as a prerequisite for subsequent full ontology development. The present article describes the semantic dissection of the concepts, the elicitation of the ontology requirements, and the domain constraints of the conceptual framework. This ontology includes 134 concepts and 25 distinct relations and will serve as a basis for an Information Model for Patient Safety.

  20. Optimization of Excitation in FDTD Method and Corresponding Source Modeling

    Directory of Open Access Journals (Sweden)

    B. Dimitrijevic

    2015-04-01

    Full Text Available Source and excitation modeling in the FDTD formulation has a significant impact on the method's performance and the required simulation time. Since abrupt source introduction yields intensive numerical variations in the whole computational domain, a generally accepted solution is to introduce the source slowly, using appropriate shaping functions in time. The main goal of the optimization presented in this paper is to find a balance between two opposing demands: minimal required computation time and acceptable degradation of simulation performance. Reducing the time necessary for source activation and deactivation is an important issue, especially in the design of microwave structures, where the simulation is repeated intensively in the process of device parameter optimization. The optimized source models proposed here are realized and tested within an in-house FDTD simulation environment.
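    As an illustration of the slow-introduction idea, the following sketch applies a raised-cosine turn-on window to a sinusoidal excitation; the ramp length, time step, and frequency are assumed values, not the paper's optimized source models:

```python
# Illustrative time-shaping of an FDTD excitation: a raised-cosine ramp
# turns the sinusoidal source on smoothly to avoid the numerical
# transients caused by abrupt source introduction. The ramp length is the
# quantity one would trade off against total simulation time.
import numpy as np

def shaped_source(n, dt, freq, n_ramp):
    """Source value at time step n, with n_ramp steps of smooth activation."""
    t = n * dt
    carrier = np.sin(2 * np.pi * freq * t)
    if n < n_ramp:                                   # smooth turn-on
        w = 0.5 * (1 - np.cos(np.pi * n / n_ramp))
    else:
        w = 1.0
    return w * carrier

dt, freq = 1e-12, 10e9   # assumed time step (s) and source frequency (Hz)
samples = [shaped_source(n, dt, freq, n_ramp=200) for n in range(1000)]
```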

  1. Use of results from microscopic methods in optical model calculations

    International Nuclear Information System (INIS)

    Lagrange, C.

    1985-11-01

    A concept of vectorization for coupled-channel programs based upon conventional methods is first presented. This has been implemented in our program for use on the CRAY-1 computer. In the second part we investigate the capabilities of a semi-microscopic optical model involving fewer adjustable parameters than phenomenological ones. The two main ingredients of our calculations are, for spherical or well-deformed nuclei, the microscopic optical-model calculations of Jeukenne, Lejeune and Mahaux, and nuclear densities from Hartree-Fock-Bogoliubov calculations using the density-dependent force D1. For transitional nuclei, deformation-dependent nuclear structure wave functions are employed to weight the scattering potentials for different shapes and channels. [fr]

  2. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Modelling a gamma irradiation process using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Soares, Gabriela A.; Pereira, Marcio T., E-mail: gas@cdtn.br, E-mail: mtp@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2011-07-01

    In gamma irradiation services, the evaluation of absorbed dose is of great importance in order to guarantee service quality. When the physical structure and human resources for performing dosimetry on each irradiated product are not available, the application of mathematical models may be a solution. These make it possible to predict the dose delivered to a specific product, irradiated in a specific position during a certain period of time, provided the model is validated with dosimetry tests. At the gamma irradiation facility of CDTN, equipped with a Cobalt-60 source, the Monte Carlo method was applied to simulate product irradiations, and the results were compared with Fricke dosimeters irradiated under the same conditions as the simulations. The first results obtained showed the applicability of this method, with a linear relation between simulation and experimental results. (author)
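    A toy Monte Carlo sketch of the underlying idea, sampling photon interaction depths from the exponential attenuation law; the geometry and attenuation coefficient are assumed values, and the CDTN work would rely on a full transport code:

```python
# Toy Monte Carlo estimate of the fraction of ~1.25 MeV photons (Co-60)
# interacting within a product slab. mu and the thickness are assumptions.
import numpy as np

rng = np.random.default_rng(0)
mu = 0.0632          # cm^-1, approx. attenuation of water at ~1.25 MeV
thickness = 20.0     # cm, assumed product thickness
n_photons = 1_000_000

# Sample free path lengths from the exponential attenuation law.
paths = rng.exponential(scale=1.0 / mu, size=n_photons)
interacting = np.count_nonzero(paths < thickness)

print("MC fraction interacting:", interacting / n_photons)
print("analytic 1 - exp(-mu*t): ", 1 - np.exp(-mu * thickness))
```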

  4. Direct numerical methods of mathematical modeling in mechanical structural design

    International Nuclear Information System (INIS)

    Sahili, Jihad; Verchery, Georges; Ghaddar, Ahmad; Zoaeter, Mohamed

    2002-01-01

    Full text: Structural design and numerical methods are generally interactive, requiring optimization procedures as the structure is analyzed. This analysis leads to the definition of certain mathematical terms, such as the stiffness matrix, which result from the modeling and are then used in numerical techniques during the dimensioning procedure. These techniques, and many others, involve the calculation of the generalized inverse of the stiffness matrix, also called the 'compliance matrix'. The aim of this paper is first to introduce some existing mathematical procedures used to calculate the compliance matrix from the stiffness matrix, then to apply direct numerical methods to solve the obtained system with the lowest computational time, and finally to compare the obtained results. The results show a large difference in computational time between the different procedures.
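    A minimal sketch of the generalized-inverse step, assuming a made-up singular stiffness matrix; numpy's SVD-based pseudoinverse stands in for the procedures compared in the paper:

```python
# Obtain the compliance (generalized inverse) matrix from a singular
# stiffness matrix via the Moore-Penrose pseudoinverse, then solve K u = p.
import numpy as np

K = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])   # singular: rigid-body mode present

C = np.linalg.pinv(K)                # compliance matrix (SVD-based)
p = np.array([1.0, 0.0, -1.0])       # self-equilibrated load
u = C @ p                            # minimum-norm displacement solution

print(np.allclose(K @ u, p))         # True: p lies in the range of K
```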

  5. Nuclear fuel cycle optimization - methods and modelling techniques

    International Nuclear Information System (INIS)

    Silvennoinen, P.

    1982-01-01

    This book is aimed at presenting methods applicable in the analysis of fuel cycle logistics and optimization, as well as in evaluating the economics of different reactor strategies. After a succinct introduction to the phases of a fuel cycle, uranium cost trends are assessed from a global perspective, and subsequent chapters deal with the fuel cycle problems faced by a power utility. A fundamental material flow model is introduced first in the context of light water reactor fuel cycles. Besides the minimum cost criterion, the text also deals with other objectives, providing for a treatment of cost uncertainties and of the risk of proliferation of nuclear weapons. Methods to assess mixed reactor strategies, also comprising reactor types other than the light water reactor, are confined to cost minimization. In the final chapter, the integration of nuclear capacity within a generating system is examined. (author)

  6. Methods for Developing Emissions Scenarios for Integrated Assessment Models

    Energy Technology Data Exchange (ETDEWEB)

    Prinn, Ronald [MIT; Webster, Mort [MIT

    2007-08-20

    The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, the research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes to develop probability density functions (PDFs) for the appropriate parameters of two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space with a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.

  7. Modelling a gamma irradiation process using the Monte Carlo method

    International Nuclear Information System (INIS)

    Soares, Gabriela A.; Pereira, Marcio T.

    2011-01-01

    In gamma irradiation services, the evaluation of absorbed dose is of great importance in order to guarantee service quality. When the physical structure and human resources for performing dosimetry on each irradiated product are not available, the application of mathematical models may be a solution. These make it possible to predict the dose delivered to a specific product, irradiated in a specific position during a certain period of time, provided the model is validated with dosimetry tests. At the gamma irradiation facility of CDTN, equipped with a Cobalt-60 source, the Monte Carlo method was applied to simulate product irradiations, and the results were compared with Fricke dosimeters irradiated under the same conditions as the simulations. The first results obtained showed the applicability of this method, with a linear relation between simulation and experimental results. (author)

  8. How to find home backwards? Navigation during rearward homing of Cataglyphis fortis desert ants.

    Science.gov (United States)

    Pfeffer, Sarah E; Wittlinger, Matthias

    2016-07-15

    Cataglyphis ants are renowned for their impressive navigation skills, which have been studied in numerous experiments during forward locomotion. However, the ants' navigational performance during backward homing, when dragging large food loads, had not been investigated until now. During backward locomotion, the odometer has to deal with unsteady motion and irregularities in inter-leg coordination, and the legs' sensory feedback during backward walking is not simply a reversal of the forward stepping movements. Compared with forward homing, ants face the opposite direction during backward dragging; hence, the compass system has to cope with a flipped celestial view (in terms of the polarization pattern and the position of the sun) and an inverted retinotopic image of the visual panorama and landmark environment. The same is true for wind and olfactory cues. In this study we analyze backward-homing ants for the first time and evaluate their navigational performance in channel and open-field experiments. Backward-homing Cataglyphis fortis desert ants show remarkable similarities in homing performance compared with forward-walking ants. Despite the numerous challenges the navigational system faces during backward walking, the ants performed quite well in our experiments: direction and distance gauging were comparable to those of the forward-walking control groups. Interestingly, we found that backward-homing ants often put down the food item and performed foodless search loops around the item they had left. These search loops were mainly centred on the drop-off position (and not on the nest position), and increased in length the closer the ants came to their fictive nest site. © 2016. Published by The Company of Biologists Ltd.

  9. Modified network simulation model with token method of bus access

    Directory of Open Access Journals (Sweden)

    L.V. Stribulevich

    2013-08-01

    Full Text Available Purpose. To study the characteristics of a local network with the token method of bus access, a modified simulation model of the network was developed. Methodology. The network characteristics are obtained from the developed simulation model, which is based on the state diagram of a network station with a priority-processing mechanism, both in steady state and during the control procedures: initiation of the logical ring, and the entry of a station into and its exit from the logical ring. Findings. A simulation model was developed from which one can obtain the dependence of the maximum waiting time in the queue for different access classes, and of the reaction time and usable bandwidth, on the data rate, the number of network stations, the request generation rate, the number of frames transmitted per token holding time, and the frame length. Originality. A network simulation technique was proposed that reflects the network's operation both in steady state and during the control procedures, including the priority ranking and handling mechanism. Practical value. The developed simulation model enables the definition of network characteristics for real-time systems in railway transport.

  10. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs to the economic model.
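    A generic Python sketch of the kind of probabilistic Markov cohort simulation the paper builds in WinBUGS; the states, Dirichlet priors, and costs are invented for illustration:

```python
# Minimal probabilistic Markov cohort model: transition probabilities are
# drawn from Dirichlet priors each run, propagating parameter uncertainty
# through cumulative cost. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_cycles = 5000, 20
costs = np.array([100.0, 1500.0, 0.0])        # healthy, sick, dead (per cycle)

totals = np.empty(n_sims)
for s in range(n_sims):
    P = np.vstack([rng.dirichlet([80, 15, 5]),    # transitions from healthy
                   rng.dirichlet([10, 75, 15]),   # transitions from sick
                   [0.0, 0.0, 1.0]])              # dead is absorbing
    state = np.array([1.0, 0.0, 0.0])             # cohort starts healthy
    total = 0.0
    for _ in range(n_cycles):
        state = state @ P
        total += state @ costs
    totals[s] = total

print(totals.mean(), np.percentile(totals, [2.5, 97.5]))
```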

  11. Modeling intraindividual variability with repeated measures data methods and applications

    CERN Document Server

    Hershberger, Scott L

    2013-01-01

    This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods for modeling repeated measures data, as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the novice data analyst can easily apply the techniques. Each chapter features a minimum discussion of mathematical detail and an empirical example

  12. A Probabilistic Recommendation Method Inspired by Latent Dirichlet Allocation Model

    Directory of Open Access Journals (Sweden)

    WenBo Xie

    2014-01-01

    Full Text Available The recent decade has witnessed the increasing popularity of recommendation systems, which help users acquire relevant knowledge, commodities, and services from the overwhelming information ocean on the Internet. Latent Dirichlet Allocation (LDA), originally presented as a graphical model for text topic discovery, has now found application in many other disciplines. In this paper, we propose an LDA-inspired probabilistic recommendation method that treats the user-item collecting behavior as a two-step process: every user first becomes a member of one latent user group with a certain probability, and each user group then collects various items with different probabilities. Gibbs sampling is employed to approximate all the probabilities in the two-step process. Experimental results on three real-world data sets, MovieLens, Netflix, and Last.fm, show that our method exhibits competitive performance on precision, coverage, and diversity in comparison with four other typical recommendation methods. Moreover, we present an approximate strategy to reduce the computational complexity of our method with only a slight degradation of performance.
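    A toy illustration of the two-step scoring view, P(item | user) = sum over groups g of P(g | user) * P(item | g); the probability tables below are random stand-ins for the quantities the paper estimates with Gibbs sampling:

```python
# Two-step probabilistic scoring with placeholder probability tables.
import numpy as np

rng = np.random.default_rng(2)
n_users, n_groups, n_items = 4, 2, 6

theta = rng.dirichlet(np.ones(n_groups), size=n_users)   # user  -> group
phi = rng.dirichlet(np.ones(n_items), size=n_groups)     # group -> item

scores = theta @ phi          # P(item | user) for every user-item pair
user = 0
recommended = np.argsort(scores[user])[::-1][:3]
print("top-3 items for user 0:", recommended)
```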

  13. 3D virtual human rapid modeling method based on top-down modeling mechanism

    Directory of Open Access Journals (Sweden)

    LI Taotao

    2017-01-01

    Full Text Available Aiming to satisfy the vast demand for custom-made 3D virtual human characters and for rapid modeling in the field of 3D virtual reality, a new top-down rapid modeling method for virtual humans is put forward in this paper, based on a systematic analysis of the current state and shortcomings of virtual human modeling technology. After the top-level design of the virtual human hierarchical structure frame, modular expression of the virtual human and parameter design for each module are achieved gradually downwards. While the relationships of connectors and mapping constraints among the different modules are established, the definition of the size and texture parameters is also completed. A standardized process is produced to support the practical operation of virtual human top-down rapid modeling. Finally, a modeling application, which takes a Chinese captain character as an example, is carried out to validate the method. The result demonstrates high modeling efficiency and provides a new concept for 3D virtual human geometric modeling and texture modeling.

  14. A method for increasing the abrasive wear resistance of parts obtained by casting on gasifying models

    Science.gov (United States)

    Sedukhin, V. V.; Anikeev, A. N.; Chumanov, I. V.

    2017-11-01

    A method for optimizing the hardening of the working layer of parts operating in highly abrasive conditions is examined in this work: a blend of refractory particles of WC and TiC in the proportion 70/30 wt.%, prepared beforehand, is applied to the polystyrene model in the casting mould. Metal is then poured into the mould and held for crystallization, after which a study is carried out. Study of the macro- and microstructure of the received samples shows that the thickness and structure of the hardened layer depend on the duration of the interaction between the carbide blend and the liquid metal. Different characters of interaction between the various dispersed particles and the matrix metal are observed under the same conditions. Tests of the abrasive wear resistance of the received materials, by the method of calculating residual masses, were conducted under laboratory conditions. The wear resistance results show that obtaining a hard coating from a blend of tungsten carbide and titanium carbide, applied to the surface of the foam polystyrene model before moulding, allows parts to be produced whose surface wear resistance is 2.5 times higher than that of analogous steel parts without a coating. Moreover, the energy required to transform a unit mass of the material into powder is 2.06 times higher for the obtained hard layer than for uncoated materials.

  15. OBJECT ORIENTED MODELLING, A MODELLING METHOD OF AN ECONOMIC ORGANIZATION ACTIVITY

    Directory of Open Access Journals (Sweden)

    TĂNĂSESCU ANA

    2014-05-01

    Full Text Available Today, most economic organizations use different types of information systems in order to facilitate their activity. There are different methodologies, methods and techniques that can be used to design information systems. In this paper, I present the advantages of using object-oriented modelling in the design of an economic organization's information system. Thus, I have modelled the activity of a photo studio, using Visual Paradigm for UML as a modelling tool. For this purpose, I have identified the use cases for the analyzed system and presented the use case diagram. I have also realized the static and dynamic modelling of the system, through the best-known UML diagrams.

  16. Modeling granular phosphor screens by Monte Carlo methods

    International Nuclear Information System (INIS)

    Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S.

    2006-01-01

    The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies of granular phosphor materials, values of optical properties and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values; i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor, and the input values required to feed the model can easily be obtained from tabulated data. The complex refractive index was introduced, and microscopic probabilities for light interactions were produced using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray-to-light conversion process, number of emitted light photons, output light spatial distribution) with previously published experimental data on Gd2O2S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd2O2S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than conventional Gd2O2S:Tb screens under similar conditions (x-ray incident energy, screen thickness)

  17. Modeling the Performance of Fast Multipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-04-06

    The current trend in high performance computing is pushing towards exascale computing. To achieve this exascale performance, future systems will have between 100 million and 1 billion cores, assuming gigahertz cores. Currently, there are many efforts studying the hardware and software bottlenecks for building an exascale system. It is important to understand and meet these bottlenecks in order to attain 10 PFLOPS performance. On the applications side, there is an urgent need to model application performance and to understand what changes need to be made to ensure continued scalability at this scale. Fast multipole methods (FMM) were originally developed for accelerating N-body problems in particle-based methods. Nowadays, FMM is more than an N-body solver; recent trends in HPC have been to use FMMs in unconventional application areas. FMM is likely to be a main player at exascale due to its hierarchical nature and the techniques used to access the data via a tree structure, which allow many operations to happen simultaneously at each level of the hierarchy. In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. The ultimate aim of this thesis is to ensure the scalability of FMM on future exascale machines.

  18. Tail modeling in a stretched magnetosphere 1. Methods and transformations

    International Nuclear Information System (INIS)

    Stern, D.P.

    1987-01-01

    A new method is developed for representing the magnetospheric field B as a distorted dipole field. Because ∇·B = 0 must be maintained, such a distortion may be viewed as a transformation of the vector potential A. The simplest form is a one-dimensional ''stretch transformation'' along the x axis, a generalization of a method introduced by Voigt. The transformation is concisely represented by the ''stretch function'' f(x), which is also a convenient tool for representing features of the substorm cycle. One-dimensional stretch transformations are extended to spherical, cylindrical, and parabolic coordinates and then to arbitrary coordinates. It is next shown that distortion transformations can be viewed as mappings of field lines from one pattern to another: Euler potentials are used in the derivation, but the final result only requires knowledge of the field and not of the potentials. General transformations in Cartesian and arbitrary coordinates are then derived, and applications to field modeling, field line motion, MHD modeling, and incompressible fluid dynamics are considered. copyright American Geophysical Union 1987

  19. Three dimensional wavefield modeling using the pseudospectral method; Pseudospectral ho ni yoru sanjigen hadoba modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sato, T; Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan); Saeki, T [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1997-05-27

    Discussed in this report is wavefield simulation in 3-dimensional seismic surveys. With exploration targets growing deeper and more complicated in structure, survey methods are now turning 3-dimensional. There are several modelling methods for the numerical calculation of 3-dimensional wavefields, such as the finite difference method and the pseudospectral method, all of which demand an exorbitantly large memory and long calculation times, and are costly. Such methods have lately become feasible, however, thanks to the advent of parallel computers. Compared with the finite difference method, the pseudospectral method requires a smaller computer memory and shorter computation time, and is more flexible in accepting models. It outputs the full wavefield just like the finite difference method, and does not suffer from numerical dispersion of the wavefield. As the computation platform, the parallel computer nCUBE-2S is used. The computational domain is divided among the processors, each of which takes care only of its own share, so that the parallel computation as a whole achieves very high speed. By use of the pseudospectral method, a 3-dimensional simulation is completed within a tolerable computation time. 7 refs., 3 figs., 1 tab.
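    The core operator of the pseudospectral method can be sketched in a few lines: spatial derivatives are evaluated in the wavenumber domain via FFT. A 1-D Python illustration with an invented grid and field (the report's 3-D wavefields apply the same operator per axis):

```python
# Pseudospectral spatial derivative: exact (to machine precision) for
# band-limited fields on a periodic grid, which is what reduces memory
# and grid dispersion relative to finite differences.
import numpy as np

n, L = 128, 2 * np.pi
x = np.arange(n) * (L / n)
u = np.sin(3 * x)                               # sample field

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # angular wavenumbers
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

print(np.max(np.abs(du - 3 * np.cos(3 * x))))   # error near machine precision
```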

  20. Multinomial Response Models, for Modeling and Determining Important Factors in Different Contraceptive Methods in Women

    Directory of Open Access Journals (Sweden)

    E Haji Nejad

    2001-06-01

    Full Text Available Different aspects of multinomial statistical modeling and its classifications have been studied so far. In this type of problem, Y is a qualitative random variable with T possible states, which are considered as classes. The goal is the prediction of Y based on a random vector X ∈ IR^m. Many methods for analyzing these problems have been considered. One of the modern and general methods of classification is Classification and Regression Trees (CART). Another is recursive partitioning techniques, which have a close relationship with nonparametric regression. Classical discriminant analysis is a standard method for analyzing this type of data. Flexible discriminant analysis is a combination of nonparametric regression and discriminant analysis, and classification using splines includes least squares regression and additive cubic splines. Neural networks are an advanced statistical method for analyzing these types of data. In this paper, the properties of multinomial logistic regression were investigated, and this method was used for modeling the factors affecting the selection of contraceptive methods in Ghom province for married women aged 15-49. The response variable has a tetranomial distribution. The levels of this variable are: none, pills, traditional methods, and a collection of other contraceptive methods. The significant independent variables were: place of residence, age of the woman, education, history of pregnancy, and family size. Age at menstruation and age at marriage were not statistically significant.
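    A hedged sketch of fitting a multinomial logistic model to a four-level response; the synthetic data stand in for the survey covariates (residence, age, education, pregnancies, family size):

```python
# Multinomial logistic regression on synthetic data. With scikit-learn's
# default lbfgs solver, multi-class targets are fit as a multinomial model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 5))          # 5 covariates (synthetic)
y = rng.integers(0, 4, size=500)       # levels: none/pills/traditional/other

model = LogisticRegression(max_iter=1000)
model.fit(X, y)
print(model.predict_proba(X[:1]))      # probabilities over the 4 levels
```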

  1. A copula method for modeling directional dependence of genes

    Directory of Open Access Journals (Sweden)

    Park Changyi

    2008-05-01

    Full Text Available Abstract Background Genes interact with each other as basic building blocks of life, forming a complicated network. The relationships between groups of genes with different functions can be represented as gene networks. With the deposition of huge microarray data sets in public domains, the study of gene networking is now possible. In recent years, there has been increasing interest in the reconstruction of gene networks from gene expression data. Recent work includes linear models, Boolean network models, and Bayesian networks. Among them, Bayesian networks seem to be the most effective in constructing gene networks. A major problem with the Bayesian network approach is the excessive computational time, which is due to the interactive feature of the method requiring a large search space. Since fitting a model using copulas does not require iterations, elicitation of priors, or complicated calculations of posterior distributions, the need to explore extensive search spaces can be eliminated, leading to manageable computational efforts. The Bayesian network approach produces a discrete expression of conditional probabilities; discreteness of the characteristics is not required in the copula approach, which uses a uniform representation of continuous random variables. Our method is thus able to overcome a limitation of the Bayesian network method for gene-gene interaction, namely the information loss due to binary transformation. Results We analyzed the gene interactions for two gene data sets (one group of eight histone genes and another group of 19 genes, which include DNA polymerases, DNA helicase, type B cyclin genes, DNA primases, radiation sensitive genes, repair-related genes, the replication protein A encoding gene, the DNA replication initiation factor, the securin gene, the nucleosome assembly factor, and a subunit of the cohesin complex) by adopting a measure of directional dependence based on a copula function. We have compared
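    A minimal rank-based sketch of the copula flavor of this approach, estimating a Gaussian-copula correlation from Kendall's tau on pseudo-observations; the paper's directional-dependence measure is more elaborate, and the data here are synthetic:

```python
# Map two gene-expression vectors to pseudo-observations (ranks scaled
# to (0,1)) and estimate a Gaussian-copula correlation, iteration-free.
import numpy as np
from scipy.stats import kendalltau, rankdata

rng = np.random.default_rng(4)
x = rng.normal(size=200)
y = 0.7 * x + rng.normal(scale=0.5, size=200)   # toy co-expressed genes

u = rankdata(x) / (len(x) + 1)    # pseudo-observations in (0,1)
v = rankdata(y) / (len(y) + 1)

tau, _ = kendalltau(u, v)
rho = np.sin(np.pi * tau / 2)     # Gaussian-copula correlation from tau
print(rho)
```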

  2. Accuracy evaluation of dental models manufactured by CAD/CAM milling method and 3D printing method.

    Science.gov (United States)

    Jeong, Yoo-Geum; Lee, Wan-Sun; Lee, Kyu-Bok

    2018-06-01

    To evaluate the accuracy of models made using the computer-aided design/computer-aided manufacturing (CAD/CAM) milling method and the 3D printing method, and to confirm their applicability as working models for dental prosthesis production. First, a natural tooth model (ANA-4, Frasaco, Germany) was scanned using an oral scanner. The scan data obtained were then used as a CAD reference model (CRM) to produce 10 models each by the milling method and the 3D printing method. The 20 models were then scanned using a desktop scanner to form the CAD test models (CTM). The accuracy of the two groups was compared using dedicated software to calculate the root mean square (RMS) value after superimposing the CRM and the CTM. The RMS value (152±52 µm) of the models manufactured by the milling method was significantly higher than that (52±9 µm) of the models produced by the 3D printing method. The accuracy of the 3D printing method is superior to that of the milling method, but at present both methods are limited in their application as working models for prosthesis manufacture.
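    The accuracy metric itself is easy to sketch: after superimposition, compute the root mean square of point-wise deviations. The arrays below are placeholders for scanned mesh points:

```python
# RMS deviation between superimposed reference (CRM) and test (CTM) points.
import numpy as np

crm = np.random.default_rng(5).normal(size=(1000, 3))   # reference points
ctm = crm + np.random.default_rng(6).normal(scale=0.05, size=(1000, 3))

deviations = np.linalg.norm(ctm - crm, axis=1)          # per-point distance
rms = np.sqrt(np.mean(deviations**2))
print(f"RMS deviation: {rms:.4f}")
```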

  3. [Analytic methods for seed models with genotype x environment interactions].

    Science.gov (United States)

    Zhu, J

    1996-01-01

    Genetic models with genotype effects (G) and genotype x environment interaction effects (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into seed direct genetic effect (G0), cytoplasm genetic effect (C), and maternal plant genetic effect (Gm). The seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. The maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can likewise be partitioned into direct genetic by environment interaction effect (G0E), cytoplasm genetic by environment interaction effect (CE), and maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components. GmE can also be partitioned into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of genetic components are listed for parents, F1, F2 and backcrosses. A set of parents, together with their reciprocal F1 and F2 seeds, is applicable for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components, and unbiased estimation of covariance components between two traits can also be obtained by this method. Random genetic effects in seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimating the sampling variances of estimated variance and covariance components and of predicted genetic effects, which can be further used in t-tests of the parameters. Unbiasedness and efficiency for estimating variance components and predicting genetic effects are tested by

  4. Space Environment Modelling with the Use of Artificial Intelligence Methods

    Science.gov (United States)

    Lundstedt, H.; Wintoft, P.; Wu, J.-G.; Gleisner, H.; Dovheden, V.

    1996-12-01

    Space-based technological systems are affected by space weather in many ways. Several severe satellite failures have been reported at times of space storms. Our society also increasingly depends on satellites for communication, navigation, exploration, and research. Predictions of the conditions in the satellite environment have therefore become very important. We present here predictions made with artificial intelligence (AI) techniques, such as artificial neural networks (ANN) and hybrids of AI methods. We are developing a space weather model based on intelligent hybrid systems (IHS). The model consists of different forecast modules, each of which predicts the space weather on a specific time-scale. The time-scales range from minutes to months, with fundamental time-scales of 1-5 minutes, 1-3 hours, 1-3 days, and 27 days. Solar and solar wind data are used as input data. From solar magnetic field measurements, made either on the ground at the Wilcox Solar Observatory (WSO) at Stanford or from space by the SOHO satellite, solar wind parameters can be predicted and modelled with ANN and MHD models. Magnetograms from WSO are available on a daily basis, whereas from SOHO magnetograms will be available every 90 minutes; SOHO magnetograms as input to ANNs will therefore make it possible to predict even solar transient events. Geomagnetic storm activity can today be predicted with very high accuracy by means of ANN methods using solar wind input data. However, at present real-time solar wind data are only available during part of the day, from the WIND satellite. With the launch of ACE in 1997, solar wind data will instead be available 24 hours per day. The conditions of the satellite environment are disturbed not only at times of geomagnetic storms but also at times of intense solar radiation and highly energetic particles. These events are associated with increased solar activity. Predictions of these events are therefore
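    A hedged sketch of the ANN forecasting idea, mapping synthetic solar wind parameters to a geomagnetic activity index with a small multilayer perceptron; the paper's modules use real solar and solar wind measurements for each time-scale:

```python
# Toy ANN regression from solar wind inputs to a geomagnetic index.
# Inputs and target below are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
# Columns: solar wind speed, density, southward IMF Bz (all synthetic).
X = rng.normal(size=(2000, 3))
y = -2.0 * X[:, 0] * np.clip(X[:, 2], None, 0) + rng.normal(scale=0.3, size=2000)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X[:1500], y[:1500])
print("test R^2:", net.score(X[1500:], y[1500:]))
```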

  5. Computational Methods for Physical Model Information Management: Opening the Aperture

    International Nuclear Information System (INIS)

    Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.

    2015-01-01

    The volume, velocity and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel to the growth in data, technologies have been developed to efficiently process, store, and effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (Safeguard's standard reference of the Nuclear Fuel Cycle). This paper will describe and demonstrate an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural language models, Bayesian methods and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information to the appropriate process step in the Physical Model. The paper will detail how this capability will consume open source and controlled information sources and be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission focused applications. (author)

  6. Modeling of NiTiHf using finite difference method

    Science.gov (United States)

    Farjam, Nazanin; Mehrabi, Reza; Karaca, Haluk; Mirzaeifar, Reza; Elahinia, Mohammad

    2018-03-01

    NiTiHf is a high temperature, high strength shape memory alloy with transformation temperatures above 100°C. A constitutive model based on Gibbs free energy is developed to predict the behavior of this material. Two different irrecoverable strains, transformation-induced plastic strain (TRIP) and viscoplastic strain (VP), are considered when using high temperature shape memory alloys (HTSMAs). The first occurs during transformation at high stress levels, while the second is related to creep, which is rate-dependent. The developed model is implemented for NiTiHf under uniaxial loading, and the finite difference method is utilized to solve the proposed equations. The material parameters in the equations are calibrated from experimental data. Simulation results are presented to investigate the superelastic behavior of NiTiHf. The extracted results are compared with experimental isobaric heating and cooling tests at different stress levels, and with superelastic tests at different temperatures. Further results are generated to investigate the capability of the proposed model in predicting the irrecoverable strain after full transformation in HTSMAs.

  7. Mass Spectrometry Coupled Experiments and Protein Structure Modeling Methods

    Directory of Open Access Journals (Sweden)

    Lee Sael

    2013-10-01

    Full Text Available With the accumulation of next generation sequencing data, there is increasing interest in the study of intra-species differences in molecular biology, especially in relation to disease analysis. Furthermore, the dynamics of a protein are being identified as a critical factor in its function. Although the accuracy of protein structure prediction methods is high, provided there are structural templates, most methods are still insensitive to amino-acid differences at critical points that may change the overall structure. Also, predicted structures are inherently static and do not provide information about structural change over time. It is challenging to address the sensitivity and the dynamics by computational structure prediction alone. However, with the fast development of diverse mass spectrometry coupled experiments, low-resolution but fast and sensitive structural information can be obtained. This information can then be integrated into the structure prediction process to further improve the sensitivity and address the dynamics of protein structures. For this purpose, this article focuses on reviewing two aspects: the types of mass spectrometry coupled experiments and the structural data obtainable through those experiments; and the structure prediction methods that can utilize these data as constraints. A short review of current efforts to integrate experimental data in structural modeling is also provided.

  8. A robust absorbing layer method for anisotropic seismic wave modeling

    Energy Technology Data Exchange (ETDEWEB)

    Métivier, L., E-mail: ludovic.metivier@ujf-grenoble.fr [LJK, CNRS, Université de Grenoble, BP 53, 38041 Grenoble Cedex 09 (France); ISTerre, Université de Grenoble I, BP 53, 38041 Grenoble Cedex 09 (France); Brossier, R. [ISTerre, Université de Grenoble I, BP 53, 38041 Grenoble Cedex 09 (France); Labbé, S. [LJK, CNRS, Université de Grenoble, BP 53, 38041 Grenoble Cedex 09 (France); Operto, S. [Géoazur, Université de Nice Sophia-Antipolis, CNRS, IRD, OCA, Villefranche-sur-Mer (France); Virieux, J. [ISTerre, Université de Grenoble I, BP 53, 38041 Grenoble Cedex 09 (France)

    2014-12-15

    When applied to wave propagation modeling in anisotropic media, Perfectly Matched Layers (PML) exhibit instabilities. Incoming waves are amplified instead of being absorbed. Overcoming this difficulty is crucial as in many seismic imaging applications, accounting accurately for the subsurface anisotropy is mandatory. In this study, we present the SMART layer method as an alternative to PML approach. This method is based on the decomposition of the wavefield into components propagating inward and outward the domain of interest. Only outgoing components are damped. We show that for elastic and acoustic wave propagation in Transverse Isotropic media, the SMART layer is unconditionally dissipative: no amplification of the wavefield is possible. The SMART layers are not perfectly matched, therefore less accurate than conventional PML. However, a reasonable increase of the layer size yields an accuracy similar to PML. Finally, we illustrate that the selective damping strategy on which is based the SMART method can prevent the generation of spurious S-waves by embedding the source in a small zone where only S-waves are damped.

  9. A robust absorbing layer method for anisotropic seismic wave modeling

    International Nuclear Information System (INIS)

    Métivier, L.; Brossier, R.; Labbé, S.; Operto, S.; Virieux, J.

    2014-01-01

    When applied to wave propagation modeling in anisotropic media, Perfectly Matched Layers (PML) exhibit instabilities. Incoming waves are amplified instead of being absorbed. Overcoming this difficulty is crucial as in many seismic imaging applications, accounting accurately for the subsurface anisotropy is mandatory. In this study, we present the SMART layer method as an alternative to PML approach. This method is based on the decomposition of the wavefield into components propagating inward and outward the domain of interest. Only outgoing components are damped. We show that for elastic and acoustic wave propagation in Transverse Isotropic media, the SMART layer is unconditionally dissipative: no amplification of the wavefield is possible. The SMART layers are not perfectly matched, therefore less accurate than conventional PML. However, a reasonable increase of the layer size yields an accuracy similar to PML. Finally, we illustrate that the selective damping strategy on which is based the SMART method can prevent the generation of spurious S-waves by embedding the source in a small zone where only S-waves are damped

  10. USA: OSTI Joins In Celebrating the Forty-Fifth Anniversary of INIS

    International Nuclear Information System (INIS)

    Cutler, Debbie

    2015-01-01

    Forty-five years ago, nations around the world saw their dream for a more efficient way to share nuclear-related information reach fruition through the creation of a formal international collaboration. This was accomplished without the Internet, email, or websites. It was the right thing to do for public safety, education, and the further advancement of science. It was also a necessary way forward as the volume of research and information about nuclear-related science, even back then, was skyrocketing and exceeded the capacity for any one country to go it alone. And the Department of Energy (DOE) Office of Scientific and Technical Information (OSTI) was part of the collaboration from its initial planning stages. The International Nuclear Information System, or INIS, as it is commonly known, was approved by the Governing Board of the United Nations’ International Atomic Energy Agency (IAEA) in 1969 and began operations in 1970. The primary purpose of INIS was, and still is, to collect and share information about the peaceful uses of nuclear science and technology, with participating nations sharing efforts to build a centralized resource. OSTI grew out of the United States’ post-World War II initiative to make the scientific research of the Manhattan Project as freely available to the public as possible. Thus, OSTI had been building the premier Nuclear Science Abstracts (NSA) publication since the late 1940s and was perfectly positioned to provide information gathering and organizing expertise to help the INIS concept coalesce into reality. OSTI was a key player in formative working group discussions at the IAEA in Vienna, Austria in the 1966-67 timeframe, and led many of the subsequent discussions and teams that finalized INIS policy guidance, common exchange formats, and more. To this day, OSTI has continued to represent the U.S. as the official INIS Liaison Officer (ILO) organization, contributing database content, helping disseminate INIS content more widely

  11. Application of blocking diagnosis methods to general circulation models. Part II: model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal); Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)

    2010-12-15

    A previously defined automatic method is applied to reanalysis and present-day (1950-1989) forced simulations of the ECHO-G model in order to assess its performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, critical parameters and thresholds to estimate blocking occurrence in the model are not calibrated with an observed reference, but objectively derived from the simulated climatology. The choice of model dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of systematic errors in the model. The model captures reasonably the main blocking features (location, amplitude, annual cycle and persistence) found in observations, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. Blocking underestimation mostly arises from the model inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into in the Eurasian continent, that are more prominent in cold and warm seasons and account for much of Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method. This suggests that in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the

  12. Theoretical Modelling Methods for Thermal Management of Batteries

    Directory of Open Access Journals (Sweden)

    Bahman Shabani

    2015-09-01

    Full Text Available The main challenge associated with renewable energy generation is the intermittency of the renewable source of power. Because of this, back-up generation sources fuelled by fossil fuels are required. In stationary applications, whether it is a back-up diesel generator or a connection to the grid, these systems are yet to be truly emissions-free. One solution to the problem is the utilisation of electrochemical energy storage systems (ESS) to store the excess renewable energy and then reuse this energy when the renewable source is insufficient to meet the demand. The performance of an ESS, amongst other things, is affected by the design, the materials used, and the operating temperature of the system. The operating temperature is critical, since operating an ESS at low ambient temperatures affects its capacity and charge acceptance, while operating it at high ambient temperatures affects its lifetime and poses safety risks. Safety risks are magnified in renewable energy storage applications given the scale of the ESS required to meet the energy demand. This necessity has propelled significant effort to model the thermal behaviour of ESS. Understanding and modelling the thermal behaviour of these systems is a crucial consideration before designing an efficient thermal management system that will operate safely and extend the lifetime of the ESS. This is vital in order to eliminate intermittency and add value to renewable sources of power. This paper concentrates on reviewing the theoretical approaches used to simulate the operating temperatures of ESS and the subsequent endeavours to model thermal management systems for them. The intent of this review is to present some of the different methods of modelling the thermal behaviour of ESS, highlighting the advantages and disadvantages of each approach.

  13. A novel method to establish a rat ED model using internal iliac artery ligation combined with hyperlipidemia.

    Directory of Open Access Journals (Sweden)

    Chao Hu

    Full Text Available OBJECTIVE: To investigate a novel method, namely bilateral internal iliac artery ligation combined with a high-fat diet (BCH), for establishing a rat model of erectile dysfunction (ED) that, compared to classical approaches, more closely mimics the chronic pathophysiology of human ED after acute ischemic insult. MATERIALS AND METHODS: Forty 4-month-old male Sprague Dawley rats were randomly placed into five groups (n = 8 per group): normal control (NC), bilateral internal iliac artery ligation (BIIAL), high-fat diet (HFD), BCH, and mock surgery (MS). All models were induced over 12 weeks. Copulatory behavior, intracavernosal pressure (ICP), ICP/mean arterial pressure, hematoxylin-eosin staining, Masson's trichrome staining, serum lipid levels, and endothelial and neuronal nitric oxide synthase immunohistochemical staining of the cavernous smooth muscle and endothelium were assessed. Data were analyzed with SAS 8.0 for Windows. RESULTS: Serum total cholesterol and triglyceride levels were significantly higher in the HFD and BCH groups than in the NC and MS groups. High density lipoprotein levels were significantly lower in the HFD and BCH groups than in the NC and MS groups. The ICP values and mount and intromission numbers were significantly lower in the BIIAL, HFD, and BCH groups than in the NC and MS groups. ICP was significantly lower in the BCH group than in the BIIAL and HFD groups. Cavernous smooth muscle and endothelial damage increased in the HFD and BCH groups. The cavernous smooth muscle to collagen ratio and nNOS and eNOS staining decreased significantly in the BIIAL, HFD, and BCH groups compared to the NC and MS groups. CONCLUSIONS: The novel BCH model mimics the chronic pathophysiology of ED in humans and avoids the drawbacks of traditional ED models.

  14. Spatial autocorrelation method using AR model; Kukan jiko sokanho eno AR model no tekiyo

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, H; Obuchi, T; Saito, T [Iwate University, Iwate (Japan). Faculty of Engineering

    1996-05-01

    The applicability of the AR model to the spatial autocorrelation (SAC) method, which analyzes the surface wave phase velocity in a microtremor for the estimation of the underground structure, was examined. In this examination, microtremor data recorded in Morioka City, Iwate Prefecture, were used. In the SAC method, a spatial autocorrelation function with frequency as a variable is determined from microtremor data observed by circular arrays. Then, the Bessel function is fitted to the spatial autocorrelation coefficient, with the distance between seismographs as a variable, for the determination of the phase velocity. The result of the AR model application in this study and the results of the conventional BPF and FFT methods were compared. It was found that the phase velocities obtained by the BPF and FFT methods were more dispersed than those obtained by the AR model. The dispersion in the BPF method is attributed to the bandwidth used in the band-pass filter and, in the FFT method, to the impact of the bandwidth on the smoothing of the cross spectrum. 2 refs., 7 figs.
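    The fitting step described above is compact enough to sketch: at a fixed frequency f, the spatial autocorrelation coefficient as a function of station separation r follows J0(2*pi*f*r/c), so the phase velocity c can be recovered by least squares. The frequency, separations and data below are synthetic stand-ins, not the Morioka records.

      import numpy as np
      from scipy.special import j0
      from scipy.optimize import curve_fit

      # SPAC relation: rho(f, r) = J0(2*pi*f*r / c), fitted for c at fixed f.
      f = 2.0                                    # frequency, Hz (assumed)
      r = np.array([10.0, 20.0, 30.0, 40.0])     # station separations, m
      c_true = 400.0                             # used only to fabricate data
      rng = np.random.default_rng(0)
      rho = j0(2 * np.pi * f * r / c_true) + 0.01 * rng.standard_normal(r.size)

      spac = lambda r, c: j0(2 * np.pi * f * r / c)
      (c_est,), _ = curve_fit(spac, r, rho, p0=[300.0])
      print(f"estimated phase velocity: {c_est:.1f} m/s")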

  15. METHODS OF SELECTING THE EFFECTIVE MODELS OF BUILDINGS REPROFILING PROJECTS

    Directory of Open Access Journals (Sweden)

    Александр Иванович МЕНЕЙЛЮК

    2016-02-01

    Full Text Available The article highlights the important task of project management in the reprofiling of buildings. In construction project management it is expedient to pay attention to selecting effective engineering solutions that reduce project duration and cost. This article presents a methodology for selecting efficient organizational and technical solutions for the reconstruction of buildings undergoing reprofiling. The method is based on compiling project variants in Microsoft Project and on experimental statistical analysis using the program COMPEX. Introducing this technique in the reprofiling of buildings makes it possible to choose efficient project models depending on the given constraints. The technique can also be used for various other construction projects.

  16. [Hierarchy structuring for mammography technique by interpretive structural modeling method].

    Science.gov (United States)

    Kudo, Nozomi; Kurowarabi, Kunio; Terashita, Takayoshi; Nishimoto, Naoki; Ogasawara, Katsuhiko

    2009-10-20

    Participation in screening mammography is currently encouraged in Japan because of the increase in breast cancer morbidity. However, the pain and discomfort of mammography are recognized as a significant deterrent for women considering this examination. Thus quick procedures, sufficient experience, and advanced skills are required of radiologic technologists. The aim of this study was to make the key points of the imaging technique explicit and to help technologists understand the complicated procedure. We interviewed 3 technologists who were highly skilled in mammography, and 14 factors were retrieved by using brainstorming and the KJ method. We then applied Interpretive Structural Modeling (ISM) to the factors and developed a hierarchical concept structure. The result was a six-layer hierarchy whose top node was explanation of the entire mammography procedure. The presence of male technologists was identified as a negative factor. Factors concerned with explanation were at the upper nodes. Particular attention was given to X-ray techniques and related considerations. The findings will help beginners improve their skills.
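    The core ISM computation is easy to sketch: a binary direct-influence matrix is closed into a reachability matrix by Boolean multiplication, and hierarchy levels are then peeled off by comparing reachability and antecedent sets. The 4-factor influence matrix below is a made-up example, not the 14 mammography factors elicited in the study.

      import numpy as np

      A = np.array([[0, 1, 0, 0],      # factor 0 directly influences factor 1
                    [0, 0, 1, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]])
      n = A.shape[0]
      M = ((A + np.eye(n, dtype=int)) > 0).astype(int)
      for _ in range(n):                # Boolean transitive closure
          M = ((M + M @ M) > 0).astype(int)

      levels, remaining = [], set(range(n))
      while remaining:
          level = [i for i in remaining
                   if {j for j in remaining if M[i, j]}     # reachability set
                   <= {j for j in remaining if M[j, i]}]    # antecedent set
          levels.append(level)
          remaining -= set(level)
      print(levels)                     # first entry = top of the hierarchy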

  17. Engineering models and methods for industrial cell control

    DEFF Research Database (Denmark)

    Lynggaard, Hans Jørgen Birk; Alting, Leo

    1997-01-01

    This paper is concerned with the engineering, i.e. the designing and making, of industrial cell control systems. The focus is on automated robot welding cells in the shipbuilding industry. The industrial research project defines models and methods for design and implementation of computer based control and monitoring systems for production cells. The project participants are The Danish Academy of Technical Sciences, the Institute of Manufacturing Engineering at the Technical University of Denmark and ODENSE STEEL SHIPYARD Ltd. The manufacturing environment and the current practice … SHIPYARD. It is concluded that cell control technology provides for increased performance in production systems, and that the Cell Control Engineering concept reduces the effort for providing and operating high quality and high functionality cell control solutions for the industry.

  18. Methods of Modelling Marketing Activity on Software Sales

    Directory of Open Access Journals (Sweden)

    Bashirov Islam H.

    2013-11-01

    Full Text Available The article studies the topical issue of developing methods for modelling marketing activity in software sales to achieve efficient functioning of an enterprise. On the basis of an analysis of the market type for the studied CloudLinux OS product, the article identifies the market structure type: monopolistic competition. To provide the information basis for marketing activity in the target market segment, the article proposes the survey method and supplies a questionnaire, containing specific questions about the studied market segment of hosting services, for an online survey conducted with the Survio service. In accordance with the systems approach, CloudLinux OS has the properties of a system, notably diversity. Economic differences are non-price indicators that have no numeric expression and are quality descriptions; analysis of the market and the conducted survey allow them to be obtained. The combination of price and non-price indicators provides a complete description of the product properties. To calculate an integral indicator of competitiveness, the article proposes a model based on the direct algebraic addition of the weighted measures of individual indicators, normalization of formalised indicators, and use of the mechanism of fuzzy sets for the identification of non-formalised indicators. The calculated indicator allows not only assessment of the current level of competitiveness but also identification of the influence of changes in the various indicators, which increases the efficiency of marketing decisions. Also, having identified the target customers of the hosting OS and formalised the non-price parameters, it is possible to search for a set of optimal product characteristics. As a result, an optimal strategy for advancing the product to the market is formed.

  19. Non linear permanent magnets modelling with the finite element method

    International Nuclear Information System (INIS)

    Chavanne, J.; Meunier, G.; Sabonnadiere, J.C.

    1989-01-01

    In order to perform the calculation of permanent magnets with the finite element method, it is necessary to take into account the anisotropic behaviour of hard magnetic materials (ferrites, NdFeB, SmCo5). In linear cases, the permeability of a permanent magnet is a tensor, fully described by the permeabilities parallel and perpendicular to the easy axis of the magnet. In nonlinear cases, the model uses a texture function which represents the distribution of the local easy axes of the crystallites of the magnet. This function allows a good representation of the angular dependence of the coercive field of the magnet. As a result, it is possible to express the magnetic induction B and the tensor as functions of the field and the texture parameter. This model has been implemented in the software FLUX3D, where the tensor is used in the Newton-Raphson procedure. 3D demagnetization of a ferrite magnet by a NdFeB magnet is a suitable representative example. The results obtained for an ideally oriented ferrite magnet and for a real one, using a measured texture parameter, are analyzed.
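    For the linear case described above the tensor is straightforward to build: the parallel and perpendicular permeabilities are rotated from the easy-axis frame into global coordinates, and the remanence adds along the easy axis. The 2-D sketch below uses assumed material values and is a generic illustration, not the FLUX3D implementation.

      import numpy as np

      # Linear permanent-magnet law B = mu . H + Br * e_axis, with the
      # permeability tensor rotated from the easy-axis frame. All material
      # values are illustrative assumptions.
      mu0 = 4e-7 * np.pi
      mu_par, mu_perp = 1.05 * mu0, 1.2 * mu0   # assumed relative values x mu0
      theta = np.deg2rad(30.0)                  # easy-axis angle in the xy plane

      R = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
      mu_local = np.diag([mu_par, mu_perp])     # parallel / perpendicular
      mu_global = R @ mu_local @ R.T            # tensor in global coordinates

      H = np.array([100.0, 0.0])                # applied field, A/m
      Br = 0.4                                  # remanence, T (assumed)
      B = mu_global @ H + Br * np.array([np.cos(theta), np.sin(theta)])
      print(B)                                  # induction, T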

  20. Hybrid CMS methods with model reduction for assembly of structures

    Science.gov (United States)

    Farhat, Charbel

    1991-01-01

    Future on-orbit structures will be designed and built in several stages, each with specific control requirements. Therefore there must be a methodology which can predict the dynamic characteristics of the assembled structure, based on the dynamic characteristics of the subassemblies and their interfaces. The methodology developed by CSC to address this issue is Hybrid Component Mode Synthesis (HCMS). HCMS distinguishes itself from standard component mode synthesis algorithms in the following features: (1) it does not require the subcomponents to have displacement compatible models, which makes it ideal for analyzing the deployment of heterogeneous flexible multibody systems, (2) it incorporates a second-level model reduction scheme at the interface, which makes it much faster than other algorithms and therefore suitable for control purposes, and (3) it does answer specific questions such as 'how does the global fundamental frequency vary if I change the physical parameters of substructure k by a specified amount?'. Because it is based on an energy principle rather than displacement compatibility, this methodology can also help the designer to define an assembly process. Current and future efforts are devoted to applying the HCMS method to design and analyze docking and berthing procedures in orbital construction.

  1. Preequilibrium decay models and the quantum Green function method

    International Nuclear Information System (INIS)

    Zhivopistsev, F.A.; Rzhevskij, E.S.; Gosudarstvennyj Komitet po Ispol'zovaniyu Atomnoj Ehnergii SSSR, Moscow. Inst. Teoreticheskoj i Ehksperimental'noj Fiziki)

    1977-01-01

    The nuclear process mechanism and preequilibrium decay involving complex particles are expounded on the basis of the Green function formalism without weak-interaction assumptions. The Green function method is generalized to a general nuclear reaction: A+α → B+β+γ+...+ρ, where A is the target nucleus, α is a complex particle in the initial state, B is the final nucleus, and β, γ, ..., ρ are nuclear fragments in the final state. The relationship between the generalized Green function and the S_fi matrix is established. The resultant equations account for: 1) direct and quasi-direct processes responsible for the angular distribution asymmetry of the preequilibrium component; 2) the appearance of terms corresponding to the excitation of complex states of the final nucleus; and 3) the relationship between the preequilibrium decay model and the general models of nuclear reaction theories (Lippman-Schwinger formalism). The formulation of preequilibrium emission via the S(T) matrix makes it possible to account in succession for all the differential terms important to an investigation of the angular distribution asymmetry of emitted particles

  2. Three-Component Forward Modeling for Transient Electromagnetic Method

    Directory of Open Access Journals (Sweden)

    Bin Xiong

    2010-01-01

    Full Text Available In general, only the time derivative of the vertical magnetic field is considered in the data interpretation of the transient electromagnetic (TEM) method. However, for surveys in complex geological structures, this conventional technique has gradually become unable to satisfy the demands of field exploration. To improve the integrated interpretation precision of TEM, it is necessary to study three-component forward modeling and inversion. In this paper, a three-component forward algorithm for 2.5D TEM based on the independent electric and magnetic fields has been developed. The main advantage of the new scheme is that it reduces the size of the global system matrix to the utmost extent; that is, the matrix is only one fourth the size of that in the conventional algorithm. In order to illustrate the feasibility and usefulness of the present algorithm, several typical geoelectric models of the TEM responses produced by loop sources at the air-earth interface are presented. The results of the numerical experiments show that the computation speed of the present scheme is increased obviously and that three-component interpretation can get the most out of the collected data, from which the spatial characteristics of the anomalous object can be analyzed and interpreted more comprehensively.

  3. Modeling local extinction in turbulent combustion using an embedding method

    Science.gov (United States)

    Knaus, Robert; Pantano, Carlos

    2012-11-01

    Local regions of extinction in diffusion flames, called "flame holes," can reduce the efficiency of combustion and increase the production of certain pollutants. At sufficiently high speeds, a flame may also be lifted from the rim of the burner to a downstream location that may be stable. These two phenomena share a common underlying mechanism of propagation related to edge-flame dynamics, where chemistry and fluid mechanics are equally important. We present a formulation that describes the formation, propagation, and growth of flame holes on the stoichiometric surface using edge-flame dynamics. The boundary separating the flame from the quenched region is modeled using a progress variable defined on the moving stoichiometric surface that is embedded in the three-dimensional space using an extension algorithm. This Cartesian problem is solved using a high-order finite-volume WENO method extended to this nonconservative problem. The algorithm can track the dynamics of flame holes in a turbulent reacting shear layer and model flame liftoff without requiring full chemistry calculations.

  4. Biologic data, models, and dosimetric methods for internal emitters

    International Nuclear Information System (INIS)

    Weber, D.A.

    1990-01-01

    The absorbed radiation dose from internal emitters has been and will remain a pivotal factor in assessing risk and therapeutic utility when selecting radiopharmaceuticals for diagnosis and treatment. Although direct measurements of absorbed dose and dose distributions in vivo have been and will continue to be made in limited situations, the measurement of the biodistribution and clearance of radiopharmaceuticals in human subjects and the use of these data are likely to remain the primary means of calculating and estimating absorbed dose from internal emitters over the next decade. Since several approximations are used in these schema to calculate dose, attention must be given to inspecting and improving the application of this dosimetric method as better techniques are developed to assay body activity and as more experience is gained in applying these schema to calculating absorbed dose. A discussion of the need to consider small-scale dosimetry to calculate absorbed dose at the cellular level is presented in this paper. Other topics include dose estimates for internal emitters, biologic data, mathematical models and the dosimetric methods employed. 44 refs

  5. Mathematical modelling and computational methods for structural analysis of LMFBR's

    International Nuclear Information System (INIS)

    Liu, W.K.; Lam, D.

    1983-01-01

    In this paper, two aspects of nuclear reactor problems are discussed: modelling techniques and computational methods for large-scale linear and nonlinear analyses of LMFBRs. For nonlinear fluid-structure interaction problems with large deformation, the arbitrary Lagrangian-Eulerian description is applicable. For certain linear fluid-structure interaction problems, the structural response spectrum can be found via the 'added mass' approach. In a sense, the fluid inertia is accounted for by a mass matrix added to the structural mass. The fluid/structural modes of certain fluid-structure problems can be uncoupled to get the reduced added mass. The advantage of this approach is that it can account for the many repeated structures of a nuclear reactor. With regard to nonlinear dynamic problems, the coupled nonlinear fluid-structure equations usually have to be solved by direct time integration. The computation can be very expensive and time consuming for nonlinear problems. Thus, it is desirable to optimize accuracy and computational effort by using a mixed implicit-explicit time integration method. (orig.)

  6. Methods for MHC genotyping in non-model vertebrates.

    Science.gov (United States)

    Babik, W

    2010-03-01

    Genes of the major histocompatibility complex (MHC) are considered a paradigm of adaptive evolution at the molecular level and as such are frequently investigated by evolutionary biologists and ecologists. Accurate genotyping is essential for understanding the role that MHC variation plays in natural populations, but may be extremely challenging. Here, I discuss the DNA-based methods currently used for genotyping MHC in non-model vertebrates, as well as techniques likely to find widespread use in the future. I also highlight the aspects of MHC structure that are relevant for genotyping, and detail the challenges posed by the complex genomic organization and high sequence variation of MHC loci. Special emphasis is placed on designing appropriate PCR primers, accounting for artefacts, and the problem of genotyping alleles from multiple, co-amplifying loci, a strategy which is frequently necessary due to the structure of the MHC. The suitability of typing techniques is compared in various research situations, strategies for efficient genotyping are discussed, and areas of likely progress in the future are identified. This review addresses the well-established typing methods such as Single Strand Conformation Polymorphism (SSCP), Denaturing Gradient Gel Electrophoresis (DGGE), Reference Strand Conformational Analysis (RSCA) and cloning of PCR products. In addition, it includes the intriguing possibility of direct amplicon sequencing followed by the computational inference of alleles, and also next generation sequencing (NGS) technologies; the latter technique may, in the future, find widespread use in typing complex multilocus MHC systems. © 2009 Blackwell Publishing Ltd.

  7. Comparison of parametric methods for modeling corneal surfaces

    Science.gov (United States)

    Bouazizi, Hala; Brunette, Isabelle; Meunier, Jean

    2017-02-01

    Corneal topography is a medical imaging technique for obtaining the 3D shape of the cornea as a set of 3D points of its anterior and posterior surfaces. From these data, topographic maps can be derived to assist the ophthalmologist in the diagnosis of disorders. In this paper, we compare three different mathematical parametric representations of the corneal surfaces least-squares fitted to the data provided by corneal topography. The parameters obtained from these models reduce the dimensionality of the data from several thousand 3D points to only a few parameters and could eventually be useful for diagnosis, biometry, implant design, etc. The first representation is based on Zernike polynomials, which are commonly used in optics. A variant of these polynomials, named Bhatia-Wolf, will also be investigated. These two sets of polynomials are defined over a circular domain, which is convenient for modelling the elevation (height) of the corneal surface. The third representation uses Spherical Harmonics, which are particularly well suited for nearly-spherical object modeling, which is the case for the cornea. We compared the three methods using the following three criteria: the root-mean-square error (RMSE), the number of parameters, and the visual accuracy of the reconstructed topographic maps. A large dataset of more than 2000 corneal topographies was used. Our results showed that Spherical Harmonics were superior, with a mean RMSE lower than 2.5 microns with 36 coefficients (order 5) for normal corneas and lower than 5 microns for two diseases affecting the corneal shape: keratoconus and Fuchs' dystrophy.
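    The least-squares fitting step shared by all three representations reduces to solving a linear system once the basis is evaluated at the measured points. The sketch below fits a few low-order Zernike terms, written in Cartesian form on the unit disk, to synthetic elevation data; it illustrates the procedure, not the paper's implementation.

      import numpy as np

      # Fit low-order Zernike terms (piston, tilts, defocus, astigmatism)
      # to synthetic surface elevations by linear least squares.
      rng = np.random.default_rng(0)
      x, y = rng.uniform(-0.7, 0.7, (2, 2000))           # normalized coords
      z = 0.5 * (2 * (x**2 + y**2) - 1) + 0.02 * (x**2 - y**2) \
          + 0.001 * rng.standard_normal(x.size)          # synthetic elevation

      # design matrix: one column per Zernike term in Cartesian form
      Z = np.column_stack([np.ones_like(x), x, y,
                           2 * (x**2 + y**2) - 1, x**2 - y**2, 2 * x * y])
      coeffs, *_ = np.linalg.lstsq(Z, z, rcond=None)
      rmse = np.sqrt(np.mean((Z @ coeffs - z) ** 2))
      print(coeffs.round(4), f"RMSE = {rmse:.4f}")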

  8. Nonperturbative stochastic method for driven spin-boson model

    Science.gov (United States)

    Orth, Peter P.; Imambekov, Adilet; Le Hur, Karyn

    2013-01-01

    We introduce and apply a numerically exact method for investigating the real-time dissipative dynamics of quantum impurities embedded in a macroscopic environment beyond the weak-coupling limit. We focus on the spin-boson Hamiltonian that describes a two-level system interacting with a bosonic bath of harmonic oscillators. This model is archetypal for investigating dissipation in quantum systems, and tunable experimental realizations exist in mesoscopic and cold-atom systems. It finds abundant applications in physics ranging from the study of decoherence in quantum computing and quantum optics to extended dynamical mean-field theory. Starting from the real-time Feynman-Vernon path integral, we derive an exact stochastic Schrödinger equation that allows us to compute the full spin density matrix and spin-spin correlation functions beyond weak coupling. We greatly extend our earlier work [P. P. Orth, A. Imambekov, and K. Le Hur, Phys. Rev. A 82, 032118 (2010)] by fleshing out the core concepts of the method and by presenting a number of interesting applications. Methodologically, we present an analogy between the dissipative dynamics of a quantum spin and that of a classical spin in a random magnetic field. This analogy is used to recover the well-known noninteracting-blip approximation in the weak-coupling limit. We explain in detail how to compute spin-spin autocorrelation functions. As interesting applications of our method, we explore the non-Markovian effects of the initial spin-bath preparation on the dynamics of the coherence σx(t) and of σz(t) under a Landau-Zener sweep of the bias field. We also compute to a high precision the asymptotic long-time dynamics of σz(t) without bias and demonstrate the wide applicability of our approach by calculating the spin dynamics at nonzero bias and different temperatures.

  9. Generalized linear mixed models: modern concepts, methods and applications

    CERN Document Server

    Stroup, Walter W

    2012-01-01

    PART I: The Big Picture. Modeling Basics: What Is a Model? Two Model Forms: Model Equation and Probability Distribution. Types of Model Effects. Writing Models in Matrix Form. Summary: Essential Elements for a Complete Statement of the Model. Design Matters: Introductory Ideas for Translating Design and Objectives into Models. Describing "Data Architecture" to Facilitate Model Specification. From Plot Plan to Linear Predictor. Distribution Matters. More Complex Example: Multiple Factors with Different Units of Replication. Setting the Stage: Goals for Inference with Models: Overview. Basic Tools of Inference. Issue I: Data

  10. A Range Based Method for Complex Facade Modeling

    Science.gov (United States)

    Adami, A.; Fregonese, L.; Taffurelli, L.

    2011-09-01

    the complex architecture. From the point cloud we can extract a false colour map depending on the distance of each point from the average plane. In this way we can represent each point of the facades by a height map in grayscale. In this operation it is important to define the scale of the final result in order to set the correct pixel size in the map. The following step concerns the use of a modifier which is well known in computer graphics. In fact the Displacement modifier makes it possible to simulate on a planar surface the original roughness of the object according to a grayscale map. The value of gray is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Similar to the bump map, the displacement modifier does not only simulate the effect, but really deforms the planar surface. In this way the 3D model can be used not only in a static representation, but also in dynamic animation or interactive applications. The setting of the plane to be deformed is the most important step in this process. In 3d Max the planar surface has to be characterized by the real dimensions of the façade and also by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3D raster representation where each quadrangular face (corresponding to a traditional pixel) is displaced according to the value of gray (= distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as facades of architecture in city models or large scale representations. It can also be used to represent particular effects, such as the deformation of walls, in a fully 3D way.

  11. A RANGE BASED METHOD FOR COMPLEX FACADE MODELING

    Directory of Open Access Journals (Sweden)

    A. Adami

    2012-09-01

    homogeneous point cloud of the complex architecture. From the point cloud we can extract a false colour map depending on the distance of each point from the average plane. In this way we can represent each point of the facades by a height map in grayscale. In this operation it is important to define the scale of the final result in order to set the correct pixel size in the map. The following step concerns the use of a modifier which is well known in computer graphics. In fact the Displacement modifier makes it possible to simulate on a planar surface the original roughness of the object according to a grayscale map. The value of gray is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Similar to the bump map, the displacement modifier does not only simulate the effect, but really deforms the planar surface. In this way the 3D model can be used not only in a static representation, but also in dynamic animation or interactive applications. The setting of the plane to be deformed is the most important step in this process. In 3d Max the planar surface has to be characterized by the real dimensions of the façade and also by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3D raster representation where each quadrangular face (corresponding to a traditional pixel) is displaced according to the value of gray (= distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as facades of architecture in city models or large scale representations. It can also be used to represent particular effects, such as the deformation of walls, in a fully 3D way.

  12. Studies on sulfate attack: Mechanisms, test methods, and modeling

    Science.gov (United States)

    Santhanam, Manu

    The objective of this research study was to investigate various issues pertaining to the mechanism, testing methods, and modeling of sulfate attack in concrete. The study was divided into the following segments: (1) effect of gypsum formation on the expansion of mortars, (2) attack by the magnesium ion, (3) sulfate attack in the presence of chloride ions---differentiating seawater and groundwater attack, (4) use of admixtures to mitigate sulfate attack---entrained air, sodium citrate, silica fume, and metakaolin, (5) effects of temperature and concentration of the attack solution, (6) development of new test methods using concrete specimens, and (7) modeling of the sulfate attack phenomenon. Mortar specimens using portland cement (PC) and tricalcium silicate (C3S), with or without mineral admixtures, were prepared and immersed in different sulfate solutions. In addition, portland cement concrete specimens were also prepared and subjected to complete and partial immersion in sulfate solutions. Physical measurements, chemical analyses and microstructural studies were performed periodically on the specimens. Gypsum formation was seen to cause expansion of the C3S mortar specimens. Statistical analyses of the data also indicated that the quantity of gypsum was the most significant factor controlling the expansion of mortar bars. The attack by the magnesium ion was found to drive the reaction towards the formation of brucite. Decalcification of the C-S-H and its subsequent conversion to the non-cementitious M-S-H was identified as the mechanism of destruction in magnesium sulfate attack. Mineral admixtures were beneficial in combating sodium sulfate attack, while reducing the resistance to magnesium sulfate attack. Air entrainment did not change the measured physical properties, but reduced the visible distress of the mortars. Sodium citrate caused a substantial reduction in the rate of damage of the mortars due to its retarding effect. Temperature and

  13. Muhammed b. Mahmud Jamal al-Din al-Halwatî and His Manuscript Titled The annotation of forty hadith qudsy

    Directory of Open Access Journals (Sweden)

    Harun Reşit DEMİREL

    2017-12-01

    Full Text Available Unfortunately, we have no information about the birth and education of Jamal al-Din Aksarayī, who lived in the 15th century in the Ottoman State. However, his works show that he was a very knowledgeable person. Al-Aksarayi, who worked as a mudarris in the Zinciriyye Madrasah, was accomplished in hadith, interpretation, jurisprudence, the Arabic language and medicine. Among his works are: Risâle fî Hadiths “Innallāha Taālā khalaka âdama alā suretihi”, Risāletu’r-Rahīmiyye, Sharh Forty Hadīths, and Forty Hadīth. Aksarayi's work titled Forty Hadīth, interpreted by him with a mystical method and currently held as a manuscript in libraries, is studied in this article. We also try to demonstrate his scientific personality by identifying the sources of the hadiths he used and by examining his method.

  14. Deformation data modeling through numerical models: an efficient method for tracking magma transport

    Science.gov (United States)

    Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.

    2017-12-01

    Nowadays, multivariate collected data and robust physical models at volcano observatories are becoming crucial for providing effective volcano monitoring. Nevertheless, the forecasting of volcanic eruptions is notoriously difficult. Within this frame, one of the most promising methods to evaluate volcano hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic media features such as topography and crustal heterogeneities to be included, although it is still very time consuming to solve the inverse problem for near-real-time interpretations. Here, we present a method that can be efficiently used to estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose an FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually will lead to a better description of the status of the volcanic area. The number of Green functions is reduced here to the number of observation points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a genetic algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool to magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up source parameter estimation for a given volcano showing signs of unrest.
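    As a baseline for the point-source forward models such schemes search over, the sketch below implements the classical analytic Mogi solution for surface displacement above a small pressurized source in an elastic half-space. The depth, volume change and Poisson ratio are assumed values; this illustrates a point-source forward model, not the FE Green-function machinery of the study.

      import numpy as np

      # Mogi point source: u_r = C*r/R**3, u_z = C*d/R**3,
      # with C = (1 - nu) * dV / pi and R = sqrt(r**2 + d**2).
      def mogi(r, depth=2000.0, dV=1e6, nu=0.25):
          """Radial and vertical surface displacement at radial distance r (m)."""
          C = (1.0 - nu) * dV / np.pi
          R3 = (r**2 + depth**2) ** 1.5
          return C * r / R3, C * depth / R3     # (u_r, u_z) in metres

      r = np.linspace(0.0, 10000.0, 5)
      ur, uz = mogi(r)
      for ri, u in zip(r, uz):
          print(f"r = {ri:7.0f} m   uplift = {1000 * u:6.2f} mm")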

  15. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights: • We develop a method which bi-converts between CAD models and primitive solids. • The method was improved from a conversion method between CAD models and half spaces. • The method was tested with the ITER model, validating its correctness and efficiency. • The method was integrated in SuperMC and can produce models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time consuming and error prone to describe a primitive solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD tools, an automatic method for accurate prompt conversion between CAD models and primitive solids is needed. An automatic modeling method for Monte Carlo geometry described by primitive solids was developed which can bi-convert between a CAD model and a Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive solid model, the CAD model is decomposed into several convex solid sets, and corresponding primitive solids are generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids are created and the related operations are performed. This method was integrated in SuperMC and was benchmarked with the ITER benchmark model. The correctness and efficiency of the method were demonstrated.

  16. Data Mining Methods to Generate Severe Wind Gust Models

    Directory of Open Access Journals (Sweden)

    Subana Shanmuganathan

    2014-01-01

    Full Text Available Gaining knowledge on weather patterns, trends and the influence of their extremes on various crop production yields and quality continues to be a quest by scientists, agriculturists, and managers. Precise and timely information aids decision-making, which is widely accepted as intrinsically necessary for increased production and improved quality. Studies in this research domain, especially those related to data mining and interpretation are being carried out by the authors and their colleagues. Some of this work that relates to data definition, description, analysis, and modelling is described in this paper. This includes studies that have evaluated extreme dry/wet weather events against reported yield at different scales in general. They indicate the effects of weather extremes such as prolonged high temperatures, heavy rainfall, and severe wind gusts. Occurrences of these events are among the main weather extremes that impact on many crops worldwide. Wind gusts are difficult to anticipate due to their rapid manifestation and yet can have catastrophic effects on crops and buildings. This paper examines the use of data mining methods to reveal patterns in the weather conditions, such as time of the day, month of the year, wind direction, speed, and severity using a data set from a single location. Case study data is used to provide examples of how the methods used can elicit meaningful information and depict it in a fashion usable for management decision making. Historical weather data acquired between 2008 and 2012 has been used for this study from telemetry devices installed in a vineyard in the north of New Zealand. The results show that using data mining techniques and the local weather conditions, such as relative pressure, temperature, wind direction and speed recorded at irregular intervals, can produce new knowledge relating to wind gust patterns for vineyard management decision making.

  17. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    Science.gov (United States)

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.

  18. Detection of Internal Short Circuit in Lithium Ion Battery Using Model-Based Switching Model Method

    Directory of Open Access Journals (Sweden)

    Minhwan Seo

    2017-01-01

    Full Text Available Early detection of an internal short circuit (ISCr) in a Li-ion battery can prevent it from undergoing thermal runaway and thereby ensure battery safety. In this paper, a model-based switching model method (SMM) is proposed to detect an ISCr in a Li-ion battery. The SMM updates the model of the Li-ion battery with an ISCr to improve the accuracy of the ISCr resistance R_ISCf estimates. The open circuit voltage (OCV) and the state of charge (SOC) are estimated by applying the equivalent circuit model, using the recursive least squares algorithm and the relation between OCV and SOC. As a fault index, R_ISCf is estimated from the estimated OCVs and SOCs to detect the ISCr and is used to update the model; this process yields accurate estimates of OCV and R_ISCf. The next R_ISCf is then estimated and used to update the model iteratively. Simulation data from a MATLAB/Simulink model and experimental data verify that the algorithm estimates R_ISCf with high accuracy, thereby helping the battery management system to achieve early detection of an ISCr.
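    The recursive least squares step used above to track equivalent-circuit parameters can be sketched on a deliberately simplified model, V = OCV - I*R with theta = [OCV, R]. The data are synthetic and the forgetting factor is an assumed value, so this illustrates the estimator itself rather than the paper's full SMM.

      import numpy as np

      # Recursive least squares for theta = [OCV, R] in V = OCV - I*R.
      rng = np.random.default_rng(1)
      OCV_true, R_true = 3.7, 0.05
      I = rng.uniform(0.0, 5.0, 500)                   # current samples, A
      V = OCV_true - I * R_true + 0.002 * rng.standard_normal(I.size)

      theta = np.array([3.0, 0.0])                     # initial [OCV, R]
      P = np.eye(2) * 1e3                              # initial covariance
      lam = 0.99                                       # forgetting factor (assumed)
      for Ik, Vk in zip(I, V):
          phi = np.array([1.0, -Ik])                   # regressor
          K = P @ phi / (lam + phi @ P @ phi)          # gain
          theta = theta + K * (Vk - phi @ theta)       # parameter update
          P = (P - np.outer(K, phi) @ P) / lam         # covariance update

      print(f"OCV ~ {theta[0]:.3f} V, R ~ {theta[1] * 1000:.1f} mOhm")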

  19. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van 't [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  20. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  1. Neural node network and model, and method of teaching same

    Science.gov (United States)

    Parlos, Alexander G. (Inventor); Atiya, Amir F. (Inventor); Fernandez, Benito (Inventor); Tsai, Wei K. (Inventor); Chong, Kil T. (Inventor)

    1995-01-01

    The present invention is a fully connected feed forward network that includes at least one hidden layer 16. The hidden layer 16 includes nodes 20 in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device 24 occurring in the feedback path 22 (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit 36 from all the other nodes within the same layer 16. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented as analog or digital or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2) more preferably a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing.
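    The layer update described above is compact: each node combines the feed-forward input with its own unit-delayed output (local feedback) and the delayed outputs of the other nodes in the layer (crosstalk). The sketch below is a generic illustration with arbitrary sizes and random weights, not the patented teaching procedure.

      import numpy as np

      # One hidden layer with unit-delay local feedback and crosstalk.
      rng = np.random.default_rng(2)
      n_in, n_hid = 3, 5
      W_in = rng.standard_normal((n_hid, n_in)) * 0.5    # feed-forward weights
      w_self = rng.standard_normal(n_hid) * 0.1          # local feedback weights
      W_cross = rng.standard_normal((n_hid, n_hid)) * 0.1
      np.fill_diagonal(W_cross, 0.0)                     # crosstalk excludes self

      x_prev = np.zeros(n_hid)                           # delayed layer outputs
      for t in range(10):
          u = rng.standard_normal(n_in)                  # input at time t
          a = W_in @ u + w_self * x_prev + W_cross @ x_prev
          x_prev = np.tanh(a)                            # node transfer function
      print(x_prev)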

  2. Methods for Geometric Data Validation of 3d City Models

    Science.gov (United States)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data have a far wider range of aspects which influence their quality, and the idea of quality itself is application dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point is correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon-level checks to validate the correctness of each polygon, i.e. closeness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges
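    A planarity check of the kind described above can be sketched in a few lines: fit a plane to the polygon's vertices and compare the largest point-to-plane distance against a tolerance. The quad and the 1 cm tolerance below are made-up values for illustration, not the CityDoctor defaults.

      import numpy as np

      # Polygon planarity check via best-fit plane (SVD) and max deviation.
      def is_planar(vertices, tol=0.01):
          """vertices: (n, 3) array of polygon corner points in metres."""
          centered = vertices - vertices.mean(axis=0)
          _, _, vt = np.linalg.svd(centered)
          normal = vt[-1]                      # normal of the best-fit plane
          dist = np.abs(centered @ normal)     # point-to-plane distances
          return dist.max() <= tol

      quad = np.array([[0, 0, 0], [4, 0, 0], [4, 3, 0.05], [0, 3, 0]], float)
      print(is_planar(quad))   # False: the 5 cm warp exceeds the 1 cm tolerance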

  3. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
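    A minimal sketch of the LASSO approach recommended in these records: an L1-penalized logistic regression selects a sparse set of candidate predictors for a binary complication outcome. The data are synthetic and the penalty strength C is an assumed value, so this only illustrates the technique, not the xerostomia models of the study.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # Synthetic cohort: 15 candidate predictors, 2 truly informative.
      rng = np.random.default_rng(3)
      X = rng.standard_normal((200, 15))
      logit = 1.2 * X[:, 0] - 0.8 * X[:, 3]
      y = (logit + rng.standard_normal(200) > 0).astype(int)

      lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
      auc = cross_val_score(lasso, X, y, cv=5, scoring="roc_auc").mean()
      lasso.fit(X, y)
      kept = np.flatnonzero(lasso.coef_)       # indices of selected predictors
      print(f"cross-validated AUC = {auc:.2f}, selected predictors: {kept}")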

  4. From micro data to causality: Forty years of empirical labor economics

    NARCIS (Netherlands)

    van der Klaauw, B.

    2014-01-01

    This overview describes the development of methods for empirical research in the field of labor economics during the past four decades. This period is characterized by the use of micro data to answer policy-relevant research questions. Prominent in the literature is the search for exogenous variation

  5. Forty Cases of Insomnia Treated by Multi-output Electric Pulsation and Auricular Plaster Therapy

    Institute of Scientific and Technical Information of China (English)

    Liu Weizhe

    2007-01-01

    The writer has treated 40 cases of insomnia by the method of multi-output electric pulsation in combination with auricular plaster therapy (with a seed of Vaccariae segetalis 王不留行 taped tightly to a particular ear point and pressed) and obtained satisfactory therapeutic effects. A report follows.

  6. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

    and processes can be faster, cheaper and very efficient. The developed modelling framework involves five main elements: 1) a modelling tool that includes algorithms for model generation; 2) a template library, which provides building blocks for the templates (generic models previously developed); 3) computer …; 4) …-format and COM-objects, incorporated to allow the export and import of mathematical models; 5) a user interface that provides the work-flow and data-flow to guide the user through the different modelling tasks.

  7. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    Science.gov (United States)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

    The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique, and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. The inverse problem is often ill-posed, so a regularization method is required to replace the original problem with a well-posed one; a solution strategy then amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed that parameters and initial stocks directly related to fast processes were best estimated, with narrow confidence intervals, whereas those related to slow processes were poorly estimated, with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary
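    The regularization idea invoked above is easy to demonstrate on a linear toy problem: when H is ill-conditioned, naively solving Hx = y amplifies the observation noise, while the Tikhonov-regularized problem min ||Hx - y||^2 + alpha^2 ||x||^2 stays stable. The matrix, noise level and alpha below are arbitrary choices for the illustration.

      import numpy as np

      # Ill-conditioned linear inverse problem vs. Tikhonov regularization.
      rng = np.random.default_rng(4)
      n = 20
      H = np.vander(np.linspace(0, 1, n), n)    # notoriously ill-conditioned
      x_true = rng.standard_normal(n)
      y = H @ x_true + 1e-6 * rng.standard_normal(n)

      alpha = 1e-4                               # regularization weight (assumed)
      x_naive = np.linalg.solve(H, y)            # noise is strongly amplified
      x_tik = np.linalg.solve(H.T @ H + alpha**2 * np.eye(n), H.T @ y)
      print(f"naive error:    {np.linalg.norm(x_naive - x_true):.2e}")
      print(f"Tikhonov error: {np.linalg.norm(x_tik - x_true):.2e}")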

  8. Power systems with nuclear-electric generators - Modelling methods

    International Nuclear Information System (INIS)

    Valeca, Serban Constantin

    2002-01-01

    This is a wide-ranging analysis of the issue of sustainable nuclear power development, with direct conclusions regarding the Nuclear Programme of Romania. The work targets specialists and decision-making boards. Specific to nuclear power development is its public dimension, the public being most often misinformed by non-professional media. The following problems are debated thoroughly: - safety, and nuclear risk respectively, treated in chapters 1 and 7, aiming at highlighting the quality of nuclear power and consequently paving the way to public acceptance; - the environment, considered both as a resource of raw materials and as the medium essential for the continuation of life, which should be appropriately protected to ensure the healthy and sustainable development of human society; its analysis is also presented in chapters 1 and 7, where the problem of safe management of radioactive waste is addressed too; - investigation methods of nuclear systems based on information science, applied in carrying out nuclear strategy and planning, widely analyzed in chapters 2, 3 and 6; - optimization of processes by following up the structure of investment and operation costs and, generally, the management of nuclear units, treated in chapters 5 and 7; - nuclear weapon proliferation, as a possible consequence of nuclear power generation, treated as a legal issue. The development of the Romanian NPP at Cernavoda, practically the core of the National Nuclear Programme, is described in chapter 8. Actually, the originality of the present work consists in the selection and adaptation, from a multitude of mathematical models, of those applicable to the local and specific conditions of the nuclear power plant at Cernavoda. The development of the Romanian economy and power sector, oriented towards the reduction of fossil fuel consumption and the protection of the environment, most reliably ensured by nuclear power, is discussed in the frame of world trends in energy production. Various scenarios are

  9. Pursuing the method of multiple working hypotheses for hydrological modeling

    NARCIS (Netherlands)

    Clark, M.P.; Kavetski, D.; Fenicia, F.

    2011-01-01

    Ambiguities in the representation of environmental processes have manifested themselves in a plethora of hydrological models, differing in almost every aspect of their conceptualization and implementation. The current overabundance of models is symptomatic of an insufficient scientific understanding

  10. Decreasing Multicollinearity: A Method for Models with Multiplicative Functions.

    Science.gov (United States)

    Smith, Kent W.; Sasaki, M. S.

    1979-01-01

    A method is proposed for overcoming the problem of multicollinearity in multiple regression equations where multiplicative independent terms are entered. The method is not a ridge regression solution. (JKS)
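    The abstract does not spell the method out, so the sketch below shows the standard device for the situation it describes: mean-centering the variables before forming the product term sharply reduces the correlation between the main effects and the interaction. The data are synthetic.

      import numpy as np

      # Multicollinearity between a main effect and its interaction term,
      # before and after mean-centering.
      rng = np.random.default_rng(5)
      x1 = rng.normal(50, 5, 1000)                 # raw variables, nonzero means
      x2 = rng.normal(30, 4, 1000)

      raw_corr = np.corrcoef(x1, x1 * x2)[0, 1]
      c1, c2 = x1 - x1.mean(), x2 - x2.mean()      # centered variables
      cen_corr = np.corrcoef(c1, c1 * c2)[0, 1]
      print(f"corr(x1, x1*x2) raw:      {raw_corr:.3f}")   # substantial
      print(f"corr(c1, c1*c2) centered: {cen_corr:.3f}")   # near zero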

  11. Methods for modeling Chinese Hamster Ovary (CHO) cell metabolism

    DEFF Research Database (Denmark)

    2015-01-01

    Embodiments of the present invention generally relate to the computational analysis and characterization of biological networks at the cellular level in Chinese Hamster Ovary (CHO) cells. Based on computational methods utilizing a hamster reference genome, the invention provides methods for identifying…

  12. A Comprehensive Method for Comparing Mental Models of Dynamic Systems

    OpenAIRE

    Schaffernicht, Martin; Grösser, Stefan N.

    2011-01-01

    Mental models are the basis on which managers make decisions even though external decision support systems may provide help. Research has demonstrated that more comprehensive and dynamic mental models seem to be at the foundation for improved policies and decisions. Eliciting and comparing such models can systematically explicate key variables and their main underlying structures. In addition, superior dynamic mental models can be identified. This paper reviews existing studies which measure ...

  13. Improved modeling of clinical data with kernel methods.

    Science.gov (United States)

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, being a mix of variable types, each variable with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function--which takes into account the type and range of each variable--has been shown to be a better alternative for linear and non-linear classification problems.
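    A sketch of the per-variable construction described above: each continuous variable contributes a similarity of (r - |a - b|)/r, nominal variables contribute an exact-match indicator, and the clinical kernel is their average. The two toy patients and the variable metadata below are invented for the example.

      import numpy as np

      # Clinical kernel: average of per-variable similarities in [0, 1].
      def clinical_kernel(a, b, ranges, nominal):
          sims = []
          for i, r in enumerate(ranges):
              if nominal[i]:
                  sims.append(1.0 if a[i] == b[i] else 0.0)   # exact match
              else:
                  sims.append((r - abs(a[i] - b[i])) / r)     # range-scaled
          return np.mean(sims)

      # variables: age (range 18-90), parity (range 0-10), gender (nominal)
      ranges = [90 - 18, 10 - 0, None]
      nominal = [False, False, True]
      p1 = [34, 2, "F"]
      p2 = [51, 1, "F"]
      print(clinical_kernel(p1, p2, ranges, nominal))   # similarity in [0, 1]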

  14. On an Estimation Method for an Alternative Fractionally Cointegrated Model

    DEFF Research Database (Denmark)

    Carlini, Federico; Łasak, Katarzyna

    In this paper we consider the Fractional Vector Error Correction model proposed in Avarucci (2007), which is characterized by a richer lag structure than models proposed in Granger (1986) and Johansen (2008, 2009). We discuss the identification issues of the model of Avarucci (2007), following th...

  15. Uncertainty quantification in Rothermel's Model using an efficient sampling method

    Science.gov (United States)

    Edwin Jimenez; M. Yousuff Hussaini; Scott L. Goodrick

    2007-01-01

    The purpose of the present work is to quantify parametric uncertainty in Rothermel’s wildland fire spread model (implemented in software such as BehavePlus3 and FARSITE), which is undoubtedly among the most widely used fire spread models in the United States. This model consists of a nonlinear system of equations that relates environmental variables (input parameter...

  16. Bayesian inference method for stochastic damage accumulation modeling

    International Nuclear Information System (INIS)

    Jiang, Xiaomo; Yuan, Yong; Liu, Xian

    2013-01-01

    Damage accumulation based reliability model plays an increasingly important role in successful realization of condition based maintenance for complicated engineering systems. This paper developed a Bayesian framework to establish stochastic damage accumulation model from historical inspection data, considering data uncertainty. Proportional hazards modeling technique is developed to model the nonlinear effect of multiple influencing factors on system reliability. Different from other hazard modeling techniques such as normal linear regression model, the approach does not require any distribution assumption for the hazard model, and can be applied for a wide variety of distribution models. A Bayesian network is created to represent the nonlinear proportional hazards models and to estimate model parameters by Bayesian inference with Markov Chain Monte Carlo simulation. Both qualitative and quantitative approaches are developed to assess the validity of the established damage accumulation model. Anderson–Darling goodness-of-fit test is employed to perform the normality test, and Box–Cox transformation approach is utilized to convert the non-normality data into normal distribution for hypothesis testing in quantitative model validation. The methodology is illustrated with the seepage data collected from real-world subway tunnels.
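
    The Box-Cox and Anderson-Darling steps mentioned here are standard and simple to reproduce; a minimal SciPy sketch on synthetic skewed data (a stand-in, not the authors' tunnel seepage measurements) follows.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        seepage = rng.lognormal(mean=0.5, sigma=0.6, size=200)   # skewed stand-in data

        # Anderson-Darling statistic before and after a Box-Cox transform
        print("raw        :", stats.anderson(seepage, dist='norm').statistic)
        transformed, lam = stats.boxcox(seepage)
        print("lambda     :", lam)
        print("transformed:", stats.anderson(transformed, dist='norm').statistic)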

  17. United nations scientific committee on the effects of atomic radiation (UNSCEAR) and its forty-ninth session

    International Nuclear Information System (INIS)

    Pan Ziqiang; Xiu Binglin

    2000-01-01

    The author describes the brief history of the United Nations Scientific Committee on the Effects of Atomic Radiation and the main issues under discussion at the Forty-ninth session of UNSCEAR. During the session, UNSCEAR completed its 2000 Report and scientific Annexes to the General Assembly. The report with scientific Annexes will be published this year. The author discusses noteworthy aspects and makes suggestions for future work

  18. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
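
    The record does not say which selection methods were assessed, but a common baseline is permutation importance computed on held-out data; the sketch below (synthetic data, invented cut-off of five variables) illustrates the idea with scikit-learn.

        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        X, y = make_regression(n_samples=300, n_features=20, n_informative=5, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
        keep = imp.importances_mean.argsort()[::-1][:5]   # retain the top-ranked variables
        print("selected feature indices:", sorted(keep))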

  19. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model-based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from...... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was......, on the other hand, lighter than the single-step method....

  20. Modern methods in collisional-radiative modeling of plasmas

    CERN Document Server

    2016-01-01

    This book provides a compact yet comprehensive overview of recent developments in collisional-radiative (CR) modeling of laboratory and astrophysical plasmas. It describes advances across the entire field, from basic considerations of model completeness to validation and verification of CR models to calculation of plasma kinetic characteristics and spectra in diverse plasmas. Various approaches to CR modeling are presented, together with numerous examples of applications. A number of important topics, such as atomic models for CR modeling, atomic data and its availability and quality, radiation transport, non-Maxwellian effects on plasma emission, ionization potential lowering, and verification and validation of CR models, are thoroughly addressed. Strong emphasis is placed on the most recent developments in the field, such as XFEL spectroscopy. Written by leading international research scientists from a number of key laboratories, the book offers a timely summary of the most recent progress in this area. It ...

  1. Forty years trends in timing of pubertal growth spurt in 157,000 Danish school children

    DEFF Research Database (Denmark)

    Aksglæde, Lise; Olsen, Lina Wøhlk; Sørensen, Thorkild I.A.

    2008-01-01

    to 1969 who attended primary school in the Copenhagen Municipality. 135,223 girls and 21,612 boys fulfilled the criteria for determining age at OGS and age at PHV. These physiological events were used as markers of pubertal development in our computerized method in order to evaluate any secular trends...... in pubertal maturation during the study period (year of birth 1930 to 1969). In this period, age at OGS declined statistically significantly by 0.2 and 0.4 years in girls and boys, respectively, whereas age at PHV declined statistically significantly by 0.5 and 0.3 years in girls and boys, respectively...

  2. Forty Cases of Insomnia Treated by Suspended Moxibustion at Baihui (GV 20)

    Institute of Scientific and Technical Information of China (English)

    JU Yan-li; CHI Xu; LIU Jian-xin

    2009-01-01

    Objective: To observe the therapeutic effect of suspended moxibustion at Baihui (GV 20) for insomnia. Methods: 75 cases were divided randomly into two groups, with 40 cases in the treatment group treated by suspended moxibustion over Baihui (GV 20) and 35 cases in the control group treated by oral administration of Estazolam. Results: The difference in therapeutic effect between the two groups was not statistically significant (P>0.1). Conclusion: Suspended moxibustion at Baihui (GV 20) is as effective as Estazolam for insomnia.

  3. Numerical Modelling of the Special Light Source with Novel R-FEM Method

    Directory of Open Access Journals (Sweden)

    Pavel Fiala

    2008-01-01

    This paper presents information about new directions in the modelling of lighting systems and an overview of the available methods. The novel R-FEM method is described, which is a combination of the Radiosity method and the Finite Element Method (FEM). The paper contains modelling results and their verification by experimental measurements and by a Matlab simulation of the R-FEM method.

  4. Topic models: A novel method for modeling couple and family text data

    Science.gov (United States)

    Atkins, David C.; Rubin, Tim N.; Steyvers, Mark; Doeden, Michelle A.; Baucom, Brian R.; Christensen, Andrew

    2012-01-01

    Couple and family researchers often collect open-ended linguistic data, either through free-response questionnaire items or transcripts of interviews or therapy sessions. Because participants' responses are not forced into a set number of categories, text-based data can be very rich and revealing of psychological processes. At the same time, it is highly unstructured and challenging to analyze. Within family psychology, analyzing text data typically means applying a coding system, which can quantify text data but also has several limitations, including the time needed for coding, difficulties with inter-rater reliability, and defining a priori what should be coded. The current article presents an alternative method for analyzing text data called topic models (Steyvers & Griffiths, 2006), which has not yet been applied within couple and family psychology. Topic models have similarities with factor analysis and cluster analysis in that topic models identify underlying clusters of words with semantic similarities (i.e., the "topics"). In the present article, a non-technical introduction to topic models is provided, highlighting how these models can be used for text exploration and indexing (e.g., quickly locating text passages that share semantic meaning) and how output from topic models can be used to predict behavioral codes or other types of outcomes. Throughout the article a collection of transcripts from a large couple therapy trial (Christensen et al., 2004) is used as example data to highlight potential applications. Practical resources for learning more about topic models and how to apply them are discussed. PMID:22888778
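
    For readers who want to experiment, a toy topic-model fit can be run on a few invented "transcript" snippets with scikit-learn's LDA implementation; this is only an illustration, as the original study used different estimation software.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        docs = [
            "we argue about money and bills every week",
            "money and spending cause most of our fights",
            "the kids school schedule keeps us busy",
            "homework and school pickup for the kids",
        ]
        counts = CountVectorizer(stop_words="english").fit_transform(docs)
        lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
        print(lda.transform(counts))   # per-document topic proportions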

  5. Small-angle physics at the intersecting storage rings forty years later

    International Nuclear Information System (INIS)

    Amaldi, Ugo

    2012-01-01

    It is often said that the ISR did not have the detectors needed to discover fundamental phenomena made accessible by its large and new energy range. This is certainly true for ‘high-momentum-transfer physics’, which, since the end of the 1960s, became a main focus of research, but the statement does not apply to the field that is the subject of this paper. In fact, looking back to the results obtained at the ISR by the experiments that were programmed to study ‘small-angle physics’, one can safely say that the detectors were very well suited to the tasks and performed much better than foreseen. As far as the results are concerned, in this particular corner of hadron–hadron physics, new phenomena were discovered, unexpected scaling laws were found and the first detailed studies of that elusive concept, which goes under the name ‘pomeron’, were performed, opening the way to phenomena that we hope will be observed at the LHC. Moreover, some techniques and methods have had a lasting influence: all colliders had and have their Roman pots, and the different methods developed at the ISR for measuring the luminosity are still in use.

  6. Forty-five degree backscattering-mode nonlinear absorption imaging in turbid media.

    Science.gov (United States)

    Cui, Liping; Knox, Wayne H

    2010-01-01

    Two-color nonlinear absorption imaging has been previously demonstrated with endogenous contrast of hemoglobin and melanin in turbid media using transmission-mode detection and a dual-laser technology approach. For clinical applications, it would generally be preferable to use backscattering-mode detection and a simpler single-laser technology. We demonstrate that imaging in backscattering mode in turbid media using nonlinear absorption can be obtained with as little as 1 mW average power per beam with a single laser source. Images have been achieved with a detector receiving backscattered light at a 45-deg angle relative to the incoming beams' direction. We obtain images of capillary tube phantoms with resolution as high as 20 µm and penetration depth up to 0.9 mm for a 300-µm tube at SNR approximately 1 in calibrated scattering solutions. Simulation results of the backscattering and detection process using nonimaging optics are demonstrated. A Monte Carlo-based method shows that the nonlinear signal drops exponentially as the depth increases, which agrees well with our experimental results. Simulation also shows that with our current detection method, only 2% of the signal is typically collected with a 5-mm-radius detector.

  7. Model-based economic evaluation in Alzheimer's disease: a review of the methods available to model Alzheimer's disease progression.

    Science.gov (United States)

    Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P

    2011-01-01

    To consider the methods available to model Alzheimer's disease (AD) progression over time to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models have been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on people's lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  8. Underwater Sound Propagation Modeling Methods for Predicting Marine Animal Exposure.

    Science.gov (United States)

    Hamm, Craig A; McCammon, Diana F; Taillefer, Martin L

    2016-01-01

    The offshore exploration and production (E&P) industry requires comprehensive and accurate ocean acoustic models for determining the exposure of marine life to the high levels of sound used in seismic surveys and other E&P activities. This paper reviews the types of acoustic models most useful for predicting the propagation of undersea noise sources and describes current exposure models. The severe problems caused by model sensitivity to the uncertainty in the environment are highlighted to support the conclusion that it is vital that risk assessments include transmission loss estimates with statistical measures of confidence.

  9. Age replacement models: A summary with new perspectives and methods

    International Nuclear Information System (INIS)

    Zhao, Xufeng; Al-Khalifa, Khalifa N.; Magid Hamouda, Abdel; Nakagawa, Toshio

    2017-01-01

    Age replacement models are fundamental to maintenance theory. This paper summarizes our new perspectives and methods in age replacement models: First, we optimize the expected cost rate for a required availability level and vice versa. Second, an asymptotic model with simple calculation is proposed by making skillful use of the cumulative hazard function. Third, we challenge the established theory that preventive replacement should be non-random and that only corrective replacement should be made for a unit with exponential failure. Fourth, three replacement policies with random working cycles are discussed, called overtime replacement, replacement first, and replacement last, respectively. Fifth, the policies of replacement first and last are formulated with general models. Sixth, age replacement is modified for the situation where the economical life cycle of the unit is a random variable with a probability distribution. Finally, models of a parallel system with constant and random numbers of units are taken into consideration. The models of expected cost rates are obtained, and optimal replacement times to minimize them are discussed analytically and computed numerically. Further studies and potential applications are also indicated at the end of the discussion of the above models. - Highlights: • Optimization of cost rate for availability level is discussed and vice versa. • Asymptotic and random replacement models are discussed. • Overtime replacement, replacement first and replacement last are surveyed. • Replacement policy with random life cycle is given. • A parallel system with random number of units is modeled.
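
    For the classic (non-random) policy, the optimal preventive replacement age minimizes the expected cost rate C(T) = [c_p R(T) + c_f (1 - R(T))] / \int_0^T R(t) dt, with R the survival function and c_f > c_p. A minimal numeric sketch, assuming Weibull lifetimes and invented costs, follows.

        import numpy as np
        from scipy.integrate import quad
        from scipy.optimize import minimize_scalar

        k, lam = 2.5, 100.0                        # Weibull shape and scale (assumed)
        cp, cf = 1.0, 10.0                         # preventive vs. corrective cost (assumed)
        R = lambda t: np.exp(-(t / lam) ** k)      # survival function

        def cost_rate(T):
            # expected cost per renewal cycle divided by expected cycle length
            return (cp * R(T) + cf * (1.0 - R(T))) / quad(R, 0.0, T)[0]

        res = minimize_scalar(cost_rate, bounds=(1.0, 500.0), method='bounded')
        print("optimal replacement age:", res.x, " cost rate:", res.fun)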

  10. Modelling Of Flotation Processes By Classical Mathematical Methods - A Review

    Science.gov (United States)

    Jovanović, Ivana; Miljanović, Igor

    2015-12-01

    Flotation process modelling is not a simple task, mostly because of the process complexity, i.e. the presence of a large number of variables that (to a lesser or a greater extent) affect the final outcome of the separation of mineral particles based on the differences in their surface properties. Attempts to develop a quantitative predictive model that would fully describe the operation of an industrial flotation plant started in the middle of the past century and continue to this day. This paper gives a review of published research directed toward the development of flotation models based on classical mathematical rules. The description and systematization of classical flotation models were performed according to the available references, with emphasis given exclusively to the modelling of the flotation process, regardless of the model's application in a particular control system. In accordance with contemporary considerations, the models were classified as empirical, probabilistic, kinetic and population balance types. Each model type is presented through the aspects of flotation modelling at the macro and micro process levels.

  11. Nonuniform grid implicit spatial finite difference method for acoustic wave modeling in tilted transversely isotropic media

    KAUST Repository

    Chu, Chunlei; Stoffa, Paul L.

    2012-01-01

    sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced

  12. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.

    2012-01-01

    PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator
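
    As a sketch of one of the named methods, an L1-penalized (lasso) logistic regression performs variable selection while fitting an NTCP-style complication model; the data below are synthetic stand-ins, not the study's dose or clinical variables.

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegressionCV

        X, y = make_classification(n_samples=400, n_features=15, n_informative=4, random_state=0)
        model = LogisticRegressionCV(penalty='l1', solver='liblinear', Cs=10, cv=5).fit(X, y)
        print("predictors retained:", int((model.coef_ != 0).sum()))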

  13. Essential medicines: an overview of some milestones in the last forty years (1975-2013)

    Directory of Open Access Journals (Sweden)

    José Antonio Pagés

    2013-06-01

    Despite progress in the last four decades in terms of access to essential medicines, more than a third of the world's population, especially in the poorest countries, has serious difficulties in accessing the medicines they need at an affordable price and with the right quality. Already in 1975, the 28th World Health Assembly discussed the need to set recommendations regarding the selection and acquisition of medicines at reasonable prices and of proven quality to meet national health needs. Consequently, in 1977 the first WHO Model List of Essential Medicines was prepared by an expert committee. Since then, the list has been subjected to a series of updating and dissemination processes, together with discussion about the cost, patents and quality of medicines, as well as information on the safety and effectiveness of each drug that is listed. The article addresses how this process has evolved from the beginning to the present day.

  14. The use of solvent extraction in the nuclear fuel cycle, forty years of progress

    International Nuclear Information System (INIS)

    Germain, M.

    1990-01-01

    The high degree of purity required for the fissile and fertile elements used as fuels in nuclear reactors has made solvent extraction the purification method of choice in the different steps of the fuel cycle. This technique, owing to its specificity and its adaptability both to continuous multistage processes and to remote control, has served to achieve the requisite purities with safe, reliable operation. A review of the different steps of the cycle, including uranium and thorium production, uranium enrichment, reprocessing, and the recovery of transuranics, highlights the diversity of the solvents used and the improvements made to the processes and the equipment. According to the different authors, this technique is capable of meeting future needs aimed at reducing the harmful effects associated with the nuclear fuel cycle to the lowest possible levels

  15. The Forty-Sixth Euro Congress on Drug Synthesis and Analysis: Snapshot

    Directory of Open Access Journals (Sweden)

    Pavel Mucaji

    2017-10-01

    The 46th EuroCongress on Drug Synthesis and Analysis (ECDSA-2017) was arranged within the celebration of the 65th Anniversary of the Faculty of Pharmacy at Comenius University in Bratislava, Slovakia from 5–8 September 2017 to bring together specialists in medicinal chemistry, organic synthesis, pharmaceutical analysis, screening of bioactive compounds, pharmacology and drug formulations; promote the exchange of scientific results, methods and ideas; and encourage cooperation between researchers from all over the world. The topic of the conference, "Drug Synthesis and Analysis," meant that the symposium welcomed all pharmacists and/or researchers (chemists, analysts, biologists) and students interested in scientific work dealing with investigations of biologically active compounds as potential drugs. The authors of this manuscript were plenary speakers and other participants of the symposium and members of their research teams. The following summary highlights the major points/topics of the meeting.

  16. Forty years of medical education through the eyes of Medical Teacher: From chrysalis to butterfly.

    Science.gov (United States)

    Harden, Ronald M; Lilley, Pat; McLaughlin, Jake

    2018-04-01

    To mark the 40th Anniversary of Medical Teacher, issues this year will document changes in medical education that have taken place over the past 40 years in undergraduate, postgraduate and continuing education with regard to curriculum themes and approaches, teaching and learning methods, assessment techniques and management issues. Trends such as adaptive learning will be highlighted and one issue will look at the medical school of the future. An analysis of papers published in the journal has identified four general trends in medical education - increased collaboration, greater international interest, student engagement with the education process and a move to a more evidence-informed approach to medical education. These changes over the years have been dramatic.

  17. Conservative Eulerian-Lagrangian Methods and Mixed Finite Element Methods for Modeling of Groundwater Flow and Transport

    National Research Council Canada - National Science Library

    Russell, Thomas

    2000-01-01

    New, improved computational methods for modeling of groundwater flow and transport have been formulated and implemented, with the intention of incorporating them as user options into the DoD Ground...

  18. Who Produces Ianthelline? The Arctic Sponge Stryphnus fortis or its Sponge Epibiont Hexadella dedritifera: a Probable Case of Sponge-Sponge Contamination.

    Science.gov (United States)

    Cárdenas, Paco

    2016-04-01

    The bromotyrosine derivative ianthelline was isolated recently from the Atlantic boreo-arctic deep-sea sponge Stryphnus fortis, and shown to have clear antitumor and antifouling effects. However, chemosystematics, field observations, and targeted metabolic analyses (using UPLC-MS) suggest that ianthelline is not produced by S. fortis but by Hexadella dedritifera, a sponge that commonly grows on S. fortis. This case highlights the importance of combining taxonomic and ecological knowledge to the field of sponge natural products research.

  19. Empirical methods for modeling landscape change, ecosystem services, and biodiversity

    Science.gov (United States)

    David Lewis; Ralph. Alig

    2009-01-01

    The purpose of this paper is to synthesize recent economics research aimed at integrating discrete-choice econometric models of land-use change with spatially-explicit landscape simulations and quantitative ecology. This research explicitly models changes in the spatial pattern of landscapes in two steps: 1) econometric estimation of parcel-scale transition...

  20. The Interval Market Model in Mathematical Finance : Game Theoretic Methods

    NARCIS (Netherlands)

    Bernhard, P.; Engwerda, J.C.; Roorda, B.; Schumacher, J.M.; Kolokoltsov, V.; Saint-Pierre, P.; Aubin, J.P.

    2013-01-01

    Toward the late 1990s, several research groups independently began developing new, related theories in mathematical finance. These theories did away with the standard stochastic geometric diffusion “Samuelson” market model (also known as the Black-Scholes model because it is used in that most famous

  1. Involving stakeholders in building integrated fisheries models using Bayesian methods

    DEFF Research Database (Denmark)

    Haapasaari, Päivi Elisabet; Mäntyniemi, Samu; Kuikka, Sakari

    2013-01-01

    the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective to knowledge, that is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology...

  2. Compositions and methods for modeling Saccharomyces cerevisiae metabolism

    DEFF Research Database (Denmark)

    2012-01-01

    The invention provides an in silico model for determining a S. cerevisiae physiological function. The model includes a data structure relating a plurality of S. cerevisiae reactants to a plurality of S. cerevisiae reactions, a constraint set for the plurality of S. cerevisiae reactions, and comma...

  3. An Instructional Method for the AutoCAD Modeling Environment.

    Science.gov (United States)

    Mohler, James L.

    1997-01-01

    Presents a command organizer for AutoCAD to aid new users in operating within the 3-D modeling environment. Addresses analyzing the problem, visualization skills, nonlinear tools, a static view of a dynamic model, the AutoCAD organizer, environment attributes, and control of the environment. Contains 11 references. (JRH)

  4. Decision support for natural resource management; models and evaluation methods

    NARCIS (Netherlands)

    Wessels, J.; Makowski, M.; Nakayama, H.

    2001-01-01

    When managing natural resources or agrobusinesses, one always has to deal with autonomous processes. These autonomous processes play a core role in designing model-based decision support systems. This chapter tries to give insight into the question of which types of models might be used in which

  5. An improved cellular automaton method to model multispecies biofilms.

    Science.gov (United States)

    Tang, Youneng; Valocchi, Albert J

    2013-10-01

    Biomass-spreading rules used in previous cellular automaton methods to simulate multispecies biofilm introduced extensive mixing between different biomass species or resulted in spatially discontinuous biomass concentration and distribution; this caused results based on the cellular automaton methods to deviate from experimental results and those from the more computationally intensive continuous method. To overcome the problems, we propose new biomass-spreading rules in this work: Excess biomass spreads by pushing a line of grid cells that are on the shortest path from the source grid cell to the destination grid cell, and the fractions of different biomass species in the grid cells on the path change due to the spreading. To evaluate the new rules, three two-dimensional simulation examples are used to compare the biomass distribution computed using the continuous method and three cellular automaton methods, one based on the new rules and the other two based on rules presented in two previous studies. The relationship between the biomass species is syntrophic in one example and competitive in the other two examples. Simulation results generated using the cellular automaton method based on the new rules agree much better with the continuous method than do results using the other two cellular automaton methods. The new biomass-spreading rules are no more complex to implement than the existing rules. Copyright © 2013 Elsevier Ltd. All rights reserved.
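
    A scalar toy version of the proposed rule conveys the mechanics: breadth-first search finds the shortest path from the over-full source cell to the nearest non-full cell, and the line of cells on that path is shifted one step toward the destination. The sketch below is a single-species simplification with invented capacities; it ignores the species-fraction bookkeeping of the full rule.

        import numpy as np
        from collections import deque

        def push_excess(grid, src, cap=1.0):
            # relocate biomass above cap at src by pushing a line of cells along
            # a BFS shortest path from src to the nearest non-full cell
            rows, cols = grid.shape
            prev, seen, q = {}, {src}, deque([src])
            while q:
                cell = q.popleft()
                if cell != src and grid[cell] < cap:       # destination found
                    path = [cell]                          # dest ... back to src
                    while path[-1] != src:
                        path.append(prev[path[-1]])
                    for a, b in zip(path, path[1:]):       # shift the line toward dest
                        grid[a] = grid[b]
                    grid[path[-2]] = grid[src] - cap       # vacated cell takes the excess
                    grid[src] = cap
                    return
                r, c = cell
                for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= n[0] < rows and 0 <= n[1] < cols and n not in seen:
                        seen.add(n)
                        prev[n] = cell
                        q.append(n)

        grid = np.zeros((5, 5))
        grid[1:4, 1:4] = 1.0
        grid[2, 2] = 2.0            # one over-full cell in a full neighbourhood
        push_excess(grid, (2, 2))
        print(grid)

    Because each displaced value moves only to an adjacent cell on the path, total biomass is conserved and the concentration field stays spatially continuous, which is the property the new rules aim to preserve.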

  6. Assessing numerical methods used in nuclear aerosol transport models

    International Nuclear Information System (INIS)

    McDonald, B.H.

    1987-01-01

    Several computer codes are in use for predicting the behaviour of nuclear aerosols released into containment during postulated accidents in water-cooled reactors. Each of these codes uses numerical methods to discretize and integrate the equations that govern the aerosol transport process. Computers perform only algebraic operations and generate only numbers. It is in the numerical methods that sense can be made of these numbers and where they can be related to the actual solution of the equations. In this report, the numerical methods most commonly used in the aerosol transport codes are examined as special cases of a general solution procedure, the Method of Weighted Residuals. It would appear that the numerical methods used in the codes are all capable of producing reasonable answers to the mathematical problem when used with skill and care. 27 refs
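
    As a reminder of how the Method of Weighted Residuals unifies such schemes, its collocation variant (Dirac-delta weight functions) reduces a differential equation to an algebraic system. The toy problem below, u'' = -1 on (0,1) with u(0) = u(1) = 0, is solved exactly because the true solution x(1-x)/2 lies in the trial space; it illustrates the framework, not any particular aerosol code.

        import numpy as np

        # trial functions phi_j(x) = x**j * (1 - x) satisfy the boundary conditions;
        # phi_j'' = j*(j-1)*x**(j-2) - j*(j+1)*x**(j-1)
        def phi_dd(j, x):
            return j * (j - 1) * x ** (j - 2) - j * (j + 1) * x ** (j - 1)

        n = 3
        xc = np.linspace(0.2, 0.8, n)                    # collocation points
        A = np.array([[phi_dd(j, x) for j in range(1, n + 1)] for x in xc])
        c = np.linalg.solve(A, -np.ones(n))              # zero the residual at xc

        u = lambda x: sum(c[j - 1] * x ** j * (1 - x) for j in range(1, n + 1))
        print(u(0.5), "exact:", 0.5 * (1 - 0.5) / 2)     # both 0.125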

  7. New Methods for Kinematic Modelling and Calibration of Robots

    DEFF Research Database (Denmark)

    Søe-Knudsen, Rune

    2014-01-01

    the accuracy in an easy and accessible way. The required equipment is accessible, since the cost is held to a minimum and can be made with conventional processing equipment. Our first method calibrates the kinematics of a robot using known relative positions measured with the robot itself and a plate...... with holes matching the robot tool flange. The second method calibrates the kinematics using two robots. This method allows the robots to carry out the collection of measurements and the adjustment, by themselves, after the robots have been connected. Furthermore, we also propose a method for restoring......Improving a robot's accuracy increases its ability to solve certain tasks, and is therefore valuable. Practical ways of achieving this improved accuracy, even after robot repair, is also valuable. In this work, we introduce methods that improve the robot's accuracy and make it possible to maintain...

  8. Sparse QSAR modelling methods for therapeutic and regenerative medicine

    Science.gov (United States)

    Winkler, David A.

    2018-02-01

    The quantitative structure-activity relationships method was popularized by Hansch and Fujita over 50 years ago. The usefulness of the method for drug design and development has been shown in the intervening years. As it was developed initially to elucidate which molecular properties modulated the relative potency of putative agrochemicals, and at a time when computing resources were scarce, there is much scope for applying modern mathematical methods to improve the QSAR method and to extending the general concept to the discovery and optimization of bioactive molecules and materials more broadly. I describe research over the past two decades where we have rebuilt the unit operations of the QSAR method using improved mathematical techniques, and have applied this valuable platform technology to new important areas of research and industry such as nanoscience, omics technologies, advanced materials, and regenerative medicine. This paper was presented as the 2017 ACS Herman Skolnik lecture.

  9. Forty-five years of schizophrenia trials in Italy: a survey

    Directory of Open Access Journals (Sweden)

    Purgato Marianna

    2012-04-01

    Background: Well-designed and properly executed randomized controlled trials (RCTs) provide the best evidence on the efficacy of healthcare interventions. Mental health has a strong tradition of using trials to evaluate treatments, but the translation of research to clinical practice is not always easy. Even well-conducted trials do not necessarily address the needs of everyday care, and trials can reflect local needs and the specific culture in which they are undertaken. Generalizing results to other contexts can become problematic, but these trials may nevertheless be very helpful within their own context. Moreover, pathways for drug approval can differ depending on local regulatory agencies. Local trials are helpful for decision-making in the region from which they come, but should not be viewed in isolation. The quantity and quality of trials may vary across nations. The aim of this study is to quantify trialing activity in Italy from 1948 until 2009 and to describe the characteristics of these trials. In addition, we evaluated change over time in three key aspects: sample size, follow-up duration, and number of outcomes. Methods: We used the Cochrane Schizophrenia Group's register, which contains 16,000 citations to 13,000 studies relating only to people with schizophrenia or schizophrenia-like illness. Randomized controlled trials and controlled clinical trials undertaken in Italy and involving pharmacological interventions were included. Results: The original search identified 155 records of potentially eligible studies, 74 of which were excluded because they did not meet the inclusion criteria. A total of 81 studies were included in the analysis. The majority of trials were conducted in northern Italy and published in international journals between 1981 and 1995. The majority of studies (52 out of 81) used standardized diagnostic criteria for schizophrenia. They were defined as randomized and used blind methods to administer

  10. Forty years of erratic insecticide resistance evolution in the mosquito Culex pipiens.

    Directory of Open Access Journals (Sweden)

    Pierrick Labbé

    2007-11-01

    One view of adaptation is that it proceeds by the slow and steady accumulation of beneficial mutations with small effects. It is difficult to test this model, since in most cases the genetic basis of adaptation can only be studied a posteriori, with traits that have evolved over a long period of time through an unknown sequence of steps. In this paper, we show how ace-1, a gene involved in resistance to organophosphorous insecticides in the mosquito Culex pipiens, has evolved during 40 years of an insecticide control program. Initially, a major resistance allele with strong deleterious side effects spread through the population. Later, a duplication combining a susceptible and a resistance ace-1 allele began to spread but did not replace the original resistance allele, as it is sublethal when homozygous. Finally, a second duplication (also sublethal when homozygous) began to spread because heterozygotes for the two duplications do not exhibit deleterious pleiotropic effects. Double overdominance now maintains these four alleles across treated and nontreated areas. Thus, ace-1 evolution does not proceed via the steady accumulation of beneficial mutations. Instead, resistance evolution has been an erratic combination of mutation, positive selection, and the rearrangement of existing variation, leading to complex genetic architecture.

  11. From Precaution to Peril: Public Relations Across Forty Years of Genetic Engineering.

    Science.gov (United States)

    Hogan, Andrew J

    2016-12-01

    The Asilomar conference on genetic engineering in 1975 has long been pointed to by scientists as a model for internal regulation and public engagement. In 2015, the organizers of the International Summit on Human Gene Editing in Washington, DC looked to Asilomar as they sought to address the implications of the new CRISPR gene editing technique. Like at Asilomar, the conveners chose to limit the discussion to a narrow set of potential CRISPR applications, involving inheritable human genome editing. The adoption by scientists in 2015 of an Asilomar-like script for discussing genetic engineering offers historians the opportunity to analyze the adjustments that have been made since 1975, and to identify the blind spots that remain in public engagement. Scientists did take important lessons from the fallout of their limited engagement with public concerns at Asilomar. Nonetheless, the scientific community has continued to overlook some of the longstanding public concerns about genetic engineering, in particular the broad and often covert genetic modification of food products. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Forty Years of Research on Xeroderma Pigmentosum at the US National Institutes of Health

    Science.gov (United States)

    Kraemer, Kenneth H.; DiGiovanna, John J.

    2014-01-01

    In 1968, Dr. James Cleaver reported defective DNA repair in cultured cells from patients with xeroderma pigmentosum. This link between clinical disease and molecular pathophysiology has sparked interest in understanding not only the clinical characteristics of sun sensitivity, damage and cancer that occurred in XP patients but also the mechanisms underlying the damage and repair. While affected patients are rare, their exaggerated UV damage provides a window into the workings of DNA repair. These studies have clarified the importance of a functioning DNA repair system to the maintenance of skin and neurologic health in the general population. Understanding the role of damage in causing cancer, neurologic degeneration, hearing loss and internal cancers provides an opportunity for prevention and treatment. Characterizing complementation groups pointed to the importance of different underlying genes. Studying differences in cancer age of onset and underlying molecular signatures in cancers occurring either in XP patients or the general population has led to insights into differences in carcinogenic mechanisms. The accelerated development of cancers in XP has been used as a model to discover new cancer chemopreventive agents. An astute insight can be a “tipping point” triggering decades of productive inquiry. PMID:25220021

  14. Radiofrequency ablation of hepatic metastasis: Results of treatment in forty patients

    Directory of Open Access Journals (Sweden)

    Rath G

    2008-01-01

    Aim: To evaluate the local control of hepatic metastasis with radiofrequency ablation treatment. Materials and Methods: We performed a retrospective analysis of 40 patients treated with radiofrequency ablation for hepatic metastasis. The tumors ablated included up to two metastatic liver lesions, with primaries in the breast, gastrointestinal tract, cervix, etc. Radiofrequency ablation was performed under general anesthesia in all cases, using ultrasound guidance. The Radionics Cool-Tip RF System was used to deliver the treatment. Results: The median age of the patients treated was 49 years. There were 13 female and 27 male patients. The median tumor size ablated was 1.5 cm (0.75-4.0 cm). A total of 52 radiofrequency ablation cycles were delivered. Successful ablation was achieved in all patients with hepatic metastases less than 3 cm in size. Pain was the most common complication seen (75%). One patient developed skin burns. At 2-year follow-up, 7.5% of patients had locally recurrent disease. Conclusions: Radiofrequency ablation is a minimally invasive treatment modality. It can be useful in a select group of patients with solitary liver metastasis of less than 3 cm size.

  15. The International Nuclear Information System. The first forty years 1970-2010 (Translated document)

    International Nuclear Information System (INIS)

    Itabashi, Keizo

    2010-10-01

    The Statute of the IAEA came into force in July 1957. It was with the desire to more adequately fulfill the statutory function that during the 1960's the Agency began exploring the possibility of establishing a scheme that would provide computerized access to a comprehensive collection of references to the world's nuclear literature. The outcome of these efforts was the establishment of the International Nuclear Information System (INIS), which produced its first products in May 1970. The system was designed as an international cooperative venture, requiring the active participation of its members. It started operations with 25 members, and the success and usefulness of the system has been proven by the fact that present membership is 146. The present report describes the road that led to the creation of INIS. It also describes the present operation of the system, the current methods used to collect and process the data on nuclear literature and the various products and services that the system places at the disposal of its users. Furthermore, it gives insights into current thinking for future developments that will facilitate access to an increasing variety of nuclear related information available from the IAEA, bibliographic and numerical data, full text of published and 'grey literature', multilingual nuclear terminological information as well as facilitate access to other sources of nuclear related information maintained outside the IAEA. (author)

  17. The International Nuclear Information System. The first forty years 1970-2010

    International Nuclear Information System (INIS)

    Todeschini, Claudio

    2010-10-01

    The Statute of the IAEA came into force in July 1957. It was with the desire to more adequately fulfill the statutory function that during the 1960's the Agency began exploring the possibility of establishing a scheme that would provide computerized access to a comprehensive collection of references to the world's nuclear literature. The outcome of these efforts was the establishment of the International Nuclear Information System (INIS) that produced its first products in May 1970. The system was designed as an international cooperative venture, requiring the active participation of its members. It started operations with 25 members and the success and usefulness of the system has been proven by the fact that present membership is 146. The present report describes the road that led to the creation of INIS. It also describes the present operation of the system, the current methods used to collect and process the data on nuclear literature and the various products and services that the system places at the disposal of its users. Furthermore, it gives insights into current thinking for future developments that will facilitate access to an increasing variety of nuclear related information available from the IAEA, bibliographic and numerical data, full text of published and 'grey literature', multilingual nuclear terminology information as well as facilitate access to other sources of nuclear related information maintained outside the IAEA

  18. 4R Water Quality Impacts: An Assessment and Synthesis of Forty Years of Drainage Nitrogen Losses.

    Science.gov (United States)

    Christianson, L E; Harmel, R D

    2015-11-01

    The intersection of agricultural drainage and nutrient mobility in the environment has led to multiscale water quality concerns. This work reviewed and quantitatively analyzed nearly 1,000 site-years of subsurface tile drainage nitrogen (N) load data to develop a more comprehensive understanding of the impacts of 4R practices (application of the right source of nutrients, at the right rate and time, and in the right place) within drained landscapes across North America. Using drainage data newly compiled in the "Measured Annual Nutrient loads from AGricultural Environments" (MANAGE) database, relationships were developed across N application rates for nitrate N drainage loads and corn (Zea mays L.) yields. The lack of significant differences between N application timings or application methods was inconsistent with the current emphasis placed on application timing, in particular, as a water quality improvement strategy (P = 0.934 and 0.916, respectively). Broad-scale analyses such as this can help identify major trends for water quality, but accurate implementation of the 4R approach will require site-specific knowledge to balance agronomic and environmental goals. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  19. CAD ACTIVE MODELS: AN INNOVATIVE METHOD IN ASSEMBLY ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    NADDEO Alessandro

    2010-07-01

    The aim of this work is to show the use and versatility of active models in different applications. An active model of a cylindrical spring was created and applied in two mechanisms that differ in type and in the loads applied. The first example is a dynamometer in which the cylindrical spring is loaded by traction forces, while the second is a pressure valve in which the cylindrical-conical spring works under compression. Imposing the loads in both cases allowed us to evaluate the model of the mechanism in different working conditions, including in an assembly environment.

  20. Modelling and simulation of diffusive processes methods and applications

    CERN Document Server

    Basu, SK

    2014-01-01

    This book addresses the key issues in the modeling and simulation of diffusive processes from a wide spectrum of different applications across a broad range of disciplines. Features: discusses diffusion and molecular transport in living cells and suspended sediment in open channels; examines the modeling of peristaltic transport of nanofluids, and isotachophoretic separation of ionic samples in microfluidics; reviews thermal characterization of non-homogeneous media and scale-dependent porous dispersion resulting from velocity fluctuations; describes the modeling of nitrogen fate and transport

  1. Congestion cost allocation method in a pool model

    International Nuclear Information System (INIS)

    Jung, H.S.; Hur, D.; Park, J.K.

    2003-01-01

    The congestion cost caused by transmission capacities and voltage limits is an important issue in a competitive electricity market. To allocate the congestion cost equitably, the active constraints in a constrained dispatch and the sequence of these constraints should be considered. A multi-stage method is proposed that reflects the effects of both the active constraints and their sequence. In the multi-stage method, the types of congestion are analysed in order to take the sequence into account, and the relationship between congestion and the active constraints is derived mathematically. The case study shows that the proposed method can give more accurate and equitable signals to customers. (Author)

  2. Model independent method to deconvolve hard X-ray spectra

    Energy Technology Data Exchange (ETDEWEB)

    Polcaro, V.F.; Bazzano, A.; Ubertini, P.; La Padula, C. (Consiglio Nazionale delle Ricerche, Frascati (Italy). Lab. di Astrofisica Spaziale); Manchanda, R.K. (Tata Inst. of Fundamental Research, Bombay (India))

    1984-07-01

    A general purpose method to deconvolve the energy spectra detected by means of a hard X-ray telescope is described. The procedure does not assume any form of input spectrum: the observed energy-loss spectrum is directly deconvolved into the incident photon spectrum, the form of which can be determined independently of the physical interpretation of the data. Deconvolution of the hard X-ray spectrum of Her X-1, detected during the HXR 81M experiment, by the model independent method is presented.
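
    A minimal illustration of the idea (not the authors' actual procedure): if the detector response is encoded in a matrix R, the measured energy-loss spectrum m = R s can be unfolded into the incident photon spectrum s by a non-negative least-squares solve, with no parametric form imposed on s. The Gaussian-smearing response below is an invented stand-in.

        import numpy as np
        from scipy.optimize import nnls

        n = 40
        E = np.arange(n, dtype=float)
        R = np.exp(-0.5 * ((E[:, None] - E[None, :]) / 1.5) ** 2)
        R /= R.sum(axis=0)                         # each column: unit-area response

        s_true = np.exp(-E / 10.0)                 # assumed incident spectrum
        rng = np.random.default_rng(2)
        m = R @ s_true + rng.normal(0.0, 1e-3, n)  # simulated energy-loss counts

        s_hat, _ = nnls(R, m)                      # deconvolved photon spectrum
        print("max reconstruction error:", np.abs(s_hat - s_true).max())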

  3. Semigroup Method on a MX/G/1 Queueing Model

    Directory of Open Access Journals (Sweden)

    Alim Mijit

    2013-01-01

    By using the Hille-Yosida theorem, the Phillips theorem, and the Fattorini theorem from functional analysis, we prove that the MX/G/1 queueing model with vacation times has a unique nonnegative time-dependent solution.

  4. Modeling the Performance of Fast Mulipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-01-01

    In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. Our ultimate aim of this thesis

  5. A public health decision support system model using reasoning methods.

    Science.gov (United States)

    Mera, Maritza; González, Carolina; Blobel, Bernd

    2015-01-01

    Public health programs must be based on the real health needs of the population. However, the design of efficient and effective public health programs depends on the availability of information that allows users to identify, at the right time, the health issues that require special attention. The objective of this paper is to propose a case-based reasoning model to support decision-making in public health. The model integrates a decision-making process and case-based reasoning, reusing past experiences to promptly identify new population health priorities. A prototype implementation of the model was performed, deploying the case-based reasoning framework jColibri. The proposed model contributes to solving problems found today when designing public health programs in Colombia, where current programs are developed in uncertain environments, with the underlying analyses carried out on the basis of outdated and unreliable data.
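
    The retrieval step at the core of such a system fits in a few lines. The prototype cited uses the Java framework jColibri; the standalone nearest-neighbour sketch below, with invented health-indicator vectors, only illustrates the retrieve-and-reuse cycle.

        import numpy as np

        # each past case: vector of population health indicators -> priority chosen then
        cases = [
            (np.array([0.9, 0.2, 0.1]), "vector-borne disease program"),
            (np.array([0.1, 0.8, 0.3]), "maternal health program"),
            (np.array([0.2, 0.1, 0.9]), "nutrition program"),
        ]

        def retrieve(query, k=1):
            # return the k most similar past cases (Euclidean nearest neighbours)
            return sorted(cases, key=lambda c: np.linalg.norm(c[0] - query))[:k]

        new_situation = np.array([0.85, 0.25, 0.15])
        print(retrieve(new_situation)[0][1])   # reuse: adapt this past program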

  6. Adaptive Maneuvering Frequency Method of Current Statistical Model

    Institute of Scientific and Technical Information of China (English)

    Wei Sun; Yongjian Yang

    2017-01-01

    The current statistical model (CSM) performs well in maneuvering target tracking. However, a fixed maneuvering frequency degrades the tracking results, causing serious dynamic delay, slow convergence, and limited precision when the Kalman filter (KF) algorithm is used. In this study, a new current statistical model and a new Kalman filter are proposed to improve the performance of maneuvering target tracking. The new model, which employs an innovation-dominated subjection function to adaptively adjust the maneuvering frequency, performs better in tracking step-maneuvering targets, although a fluctuation phenomenon appears. To address this problem, a new adaptive fading Kalman filter is also proposed. In the new Kalman filter, the prediction values are amended in time by setting judgment and amendment rules, so that the tracking precision and the fluctuation phenomenon of the new current statistical model are improved. Simulation results indicate the effectiveness of the new algorithm and its practical significance.
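
    A generic fading-memory Kalman step gives the flavour of such a filter: a factor lam >= 1 inflates the predicted covariance so stale data are discounted when the target maneuvers. The sketch below is a textbook baseline with invented matrices; it does not reproduce the paper's specific judgment and amendment rules.

        import numpy as np

        def fading_kf_step(x, P, z, F, H, Q, Rm, lam=1.05):
            x_pred = F @ x
            P_pred = lam * (F @ P @ F.T) + Q          # fading factor discounts history
            S = H @ P_pred @ H.T + Rm
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

        dt = 1.0
        F = np.array([[1.0, dt], [0.0, 1.0]])         # constant-velocity motion model
        H = np.array([[1.0, 0.0]])                    # position-only measurements
        Q, Rm = 0.01 * np.eye(2), np.array([[0.5]])
        x, P = np.zeros(2), np.eye(2)
        for z in [1.0, 2.1, 3.2, 6.0, 9.1]:           # target starts maneuvering
            x, P = fading_kf_step(x, P, np.array([z]), F, H, Q, Rm)
        print(x)                                      # estimated position and velocity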

  7. On beam propagation methods for modelling in integrated optics

    NARCIS (Netherlands)

    Hoekstra, Hugo

    1997-01-01

    In this paper the main features of the Fourier transform and finite difference beam propagation methods are summarized. Limitations and improvements, related to the paraxial approximation, finite differencing and tilted structures are discussed.

  8. Review of methods for modelling forest fire risk and hazard

    African Journals Online (AJOL)


    -Leal et al., 2006). Stolle and Lambin (2003) noted that flammable fuel depends on ... advantages over conventional fire detection and fire monitoring methods because of its repetitive and consistent coverage over large areas of land (Martin et ...

  9. A numerical method for eigenvalue problems in modeling liquid crystals

    Energy Technology Data Exchange (ETDEWEB)

    Baglama, J.; Farrell, P.A.; Reichel, L.; Ruttan, A. [Kent State Univ., OH (United States); Calvetti, D. [Stevens Inst. of Technology, Hoboken, NJ (United States)

    1996-12-31

    Equilibrium configurations of liquid crystals in finite containments are minimizers of the thermodynamic free energy of the system. It is important to be able to track the equilibrium configurations as the temperature of the liquid crystals decreases. The path of the minimal energy configuration at bifurcation points can be computed from the null space of a large sparse symmetric matrix. We describe a new variant of the implicitly restarted Lanczos method that is well suited to the computation of extreme eigenvalues of a large sparse symmetric matrix, and we use this method to determine the desired null space. Our implicitly restarted Lanczos method determines a polynomial filter adaptively by using Leja shifts, and does not require factorization of the matrix. The storage requirement of the method is small, which makes it attractive for the present application.
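
    For orientation, the standard ARPACK route to the same eigenpairs (implicitly restarted Lanczos, as wrapped by SciPy) is shown below on a small singular test matrix. Unlike the Leja-shift polynomial filter described here it involves no adaptive filtering, and asking ARPACK directly for the smallest eigenvalues can converge slowly on large problems, which is part of the motivation for the authors' variant.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import eigsh

        n = 500
        main = 2.0 * np.ones(n)
        main[0] = main[-1] = 1.0                    # Neumann ends make A singular
        off = -np.ones(n - 1)
        A = diags([off, main, off], [-1, 0, 1], format='csr')   # A @ ones = 0

        vals, vecs = eigsh(A, k=3, which='SA')      # smallest algebraic eigenvalues
        print(vals)          # first is ~0; vecs[:, 0] spans the null space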

  10. Diffusion models in metamorphic thermochronology: philosophy and methods

    International Nuclear Information System (INIS)

    Munha, Jose Manuel; Tassinari, Colombo Celso Gaeta

    1999-01-01

    Understanding the kinetics of diffusion is of major importance for the interpretation of isotopic ages in metamorphic rocks. This paper provides a review of the concepts and methodologies involved in the various diffusion models that can be applied to radiogenic systems in cooling rocks. The central concept of closure temperature is critically discussed, and quantitative estimates for the various diffusion models are evaluated in order to illustrate the controlling factors and the limits of their practical application. (author)
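
    The closure-temperature concept at the centre of this review is usually quantified with Dodson's (1973) relation, reproduced below from the standard literature (not from this paper) to make the controlling factors explicit. Here E is the activation energy, R the gas constant, D0/a^2 the pre-exponential diffusivity over the squared effective diffusion radius, dT/dt the cooling rate at closure, and A a geometry factor (about 55 for a sphere, 27 for a cylinder, 8.7 for a plane sheet):

$$
T_c \;=\; \frac{E/R}{\ln\!\left(\dfrac{A\,R\,T_c^{2}\,(D_0/a^{2})}{E\,(\mathrm{d}T/\mathrm{d}t)}\right)}
$$

    Because the closure temperature appears on both sides, the relation is solved iteratively, which is one reason the controlling factors interact in the way such reviews discuss.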

  11. Study on geological environment model using geostatistics method

    International Nuclear Information System (INIS)

    Honda, Makoto; Suzuki, Makoto; Sakurai, Hideyuki; Iwasa, Kengo; Matsui, Hiroya

    2005-03-01

    The purpose of this study is to develop a geostatistical procedure for modeling geological environments and to evaluate the quantitative relationship between the amount of information and the reliability of the model, using the data sets obtained in the surface-based investigation phase (Phase 1) of the Horonobe Underground Research Laboratory Project. The study runs for three years, from FY2004 to FY2006, and this report covers the research in FY2005, the second year of the three-year study. In the FY2005 research, the hydrogeological model was built, as in FY2004, using the data obtained from the deep boreholes (HDB-6, 7 and 8) and the ground magnetotelluric (AMT) survey executed in FY2004, in addition to the data sets used in the first year of the study. Above all, the relationship between the amount of information and the reliability of the model was demonstrated through a comparison of the models at each step, each step corresponding to the investigation stage in the respective fiscal year. Furthermore, a statistical test was applied to detect differences in the basic statistics of the various data arising from geological features, with a view to incorporating geological information into the modeling procedures. (author)
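
    The report does not list its geostatistical algorithms; ordinary kriging with a fitted variogram is the standard workhorse for this kind of borehole-based modeling, so the following NumPy-only sketch interpolates a value at an unsampled location from point data. The exponential variogram, its parameters and all data are assumptions, not Horonobe values.

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng=300.0, nugget=0.05):
    """Assumed exponential variogram model gamma(h)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_krige(coords, values, target):
    """Ordinary kriging estimate and variance at `target`."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Kriging system: variogram matrix bordered by the unbiasedness constraint
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = exp_variogram(d)
    np.fill_diagonal(K[:n, :n], 0.0)     # gamma(0) = 0 by definition
    K[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = exp_variogram(np.linalg.norm(coords - target, axis=-1))
    w = np.linalg.solve(K, rhs)
    estimate = w[:n] @ values
    variance = w @ rhs                   # includes the Lagrange-multiplier term
    return estimate, variance

# Toy borehole data (invented): x, y in metres, measured hydraulic head
coords = np.array([[0.0, 0.0], [100.0, 50.0], [200.0, 0.0], [150.0, 180.0]])
values = np.array([10.2, 9.8, 9.1, 9.5])
est, var = ordinary_krige(coords, values, np.array([120.0, 80.0]))
print(est, var)
```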

  12. The forty years of vermicular graphite cast iron development in China (Part Ⅰ)

    Directory of Open Access Journals (Sweden)

    CHEN Zheng-de

    2007-05-01

    In China, the research and development of vermicular graphite cast iron (VGCI) as a new type of engineering material began in the same period as in other developed countries; however, its actual industrial application was even earlier. Deep and intensive studies on VGCI in China began as early as the 1960s. According to incomplete statistics to date, more than 600 papers on VGCI have been published by Chinese researchers and scholars at national and international conferences and in technical journals. More than ten types of production methods and more than thirty types of treatment alloy have been studied. Formulae for calculating the critical addition of treatment alloy required to produce VGCI have been put forward, and mechanisms explaining the formation of dross during treatment have been proposed. The casting properties, metallographic structure, mechanical and physical properties and machining performance of VGCI, as well as the relationships between them, have all been studied in detail. The Chinese standards for VGCI and for VGCI metallographic structure have been issued. In China, the primary crystallization of VGCI has been studied by many researchers and scholars. The properties of VGCI can be improved by heat treatment and the addition of alloying elements, enabling its applications to be further expanded. Hundreds of kinds of VGCI castings have been produced and used in vehicles, engines, mining equipment, metallurgical products serviced under alternating thermal load, machinery, hydraulic components, textile machine parts and military applications. The heaviest VGCI casting produced is 38 tons and the lightest is only 1 kg. Currently, the annual production of VGCI in China is about 200 000 tons. The majority of castings are made from cupola iron without pre-treatment; however, they are also produced from electric furnaces and by duplex melting from cupola-electric furnaces or blast furnace-electric furnace

  13. A Multistep Extending Truncation Method towards Model Construction of Infinite-State Markov Chains

    Directory of Open Access Journals (Sweden)

    Kemin Wang

    2014-01-01

    Model checking of infinite-state continuous-time Markov chains (CTMCs) inevitably encounters the state explosion problem when constructing the CTMC model. Our approach is to derive a truncated model of the infinite one. To obtain a truncation sufficient for checking system properties expressed in Continuous Stochastic Logic, we propose a multistep extending truncation method for model construction of CTMCs and implement it in the INFAMY model checker. Experimental results show that the method is effective.
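
    INFAMY's actual algorithm is not detailed in the abstract; the sketch below illustrates the underlying idea on a birth-death CTMC: repeatedly extend the truncation boundary and re-solve a transient analysis (here by uniformization) until the probability mass reaching the boundary falls below a tolerance. Rates, tolerance and the doubling schedule are invented.

```python
import numpy as np

def transient_birth_death(lmbda, mu, size, t):
    """Transient distribution at time t of a birth-death CTMC truncated to
    states 0..size-1, computed by uniformization, starting in state 0."""
    Q = np.zeros((size, size))
    for i in range(size):
        if i + 1 < size:
            Q[i, i + 1] = lmbda
        if i > 0:
            Q[i, i - 1] = mu
        Q[i, i] = -Q[i].sum()
    rate = np.abs(np.diag(Q)).max() + 1e-12
    P = np.eye(size) + Q / rate              # uniformized DTMC
    pi = np.zeros(size)
    pi[0] = 1.0
    dist, term, k = np.zeros(size), np.exp(-rate * t), 0
    while term > 1e-12 or k < rate * t:      # sum Poisson-weighted DTMC terms
        dist += term * pi
        pi = pi @ P
        k += 1
        term *= rate * t / k
    return dist

def extend_until_sufficient(lmbda, mu, t, eps=1e-6, size=8):
    """Multistep extension: grow the truncation until boundary mass < eps."""
    while True:
        dist = transient_birth_death(lmbda, mu, size, t)
        if dist[-1] < eps:                   # little mass at the boundary state
            return size, dist
        size *= 2                            # extend the truncated state space

size, dist = extend_until_sufficient(lmbda=2.0, mu=3.0, t=5.0)
print(size, dist[:5])
```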

  14. Forty years experience in developing and using rainfall simulators under tropical and Mediterranean conditions

    Science.gov (United States)

    Pla-Sentís, Ildefonso; Nacci, Silvana

    2010-05-01

    Rainfall simulation has been used as a practical tool for evaluating the interaction of falling water drops with the soil surface, to measure both the stability of soil aggregates under drop impact and water infiltration rates. In both cases the aim is to simulate the effects of natural rainfall, which usually occurs at very different, variable and erratic rates and intensities. One of the main arguments against the use of rainfall simulators is the difficulty of reproducing the size, terminal velocity and kinetic energy of the drops in natural rainfall. Since the early 1970s we have been developing and using different kinds of rainfall simulators, at both laboratory and field levels, under tropical and Mediterranean soil and climate conditions, in flat and sloping lands. They have been used mainly to evaluate the relative effects of different land use and management, including different cropping systems, tillage practices, surface soil conditioning, surface covers, etc., on soil water infiltration, runoff and erosion. Our experience is that in any case it is impossible to reproduce the variable size distribution and terminal velocity of raindrops, and the variable changes in intensity of natural storms, under a particular climate condition. In spite of this, rainfall simulators can provide very good information which, if properly interpreted in relation to each particular condition (land and crop management, rainfall characteristics, measurement conditions, etc.), may be used as one of the parameters for deducing and modelling soil water balance and soil moisture regime under different land use and management and variable climate conditions. Owing to the better control of the intensity of simulated rainfall and of the size of water drops, and the possibility of making more repeated measurements under very variable soil and land conditions, both in the laboratory and especially in the field, the better results have been

  15. Forty years of improvements in European air quality: regional policy-industry interactions with global impacts

    Directory of Open Access Journals (Sweden)

    M. Crippa

    2016-03-01

    role that technology has played in reducing emissions in 2010. However, stagnation of energy consumption at 1970 levels, but with the 2010 fuel mix and energy efficiency, and assuming current (year 2010) technology and emission control standards, would have lowered today's NOx emissions by ca. 38 %, SO2 by 50 % and PM2.5 by 12 % in Europe. A reduced-form chemical transport model is applied to calculate regional and global levels of aerosol and ozone concentrations and to assess the associated impact of air quality improvements on human health and crop yield loss, showing substantial impacts of EU technologies and standards both inside and outside Europe. We find that the interplay of policy and technological advance in Europe had substantial benefits within Europe, and also led to an important improvement of particulate matter air quality in other parts of the world.

  16. Theory, Solution Methods, and Implementation of the HERMES Model

    Energy Technology Data Exchange (ETDEWEB)

    Reaugh, John E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); White, Bradley W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Curtis, John P. [Atomic Weapons Establishment (AWE), Reading, Berkshire (United Kingdom); Univ. College London (UCL), Gower Street, London (United Kingdom); Springer, H. Keo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-07-13

    The HERMES (high explosive response to mechanical stimulus) model was developed over the past decade to enable computer simulation of the mechanical and subsequent energetic response of explosives and propellants to mechanical insults such as impacts, perforations, drops, and falls. The model is embedded in computer simulation programs that solve the non-linear, large-deformation equations of compressible solid and fluid flow in space and time. It is implemented as a user-defined model, which returns the updated stress tensor and composition that result from the simulation-supplied strain-tensor change. Although it is multi-phase, in that gas and solid species are present, it is single-velocity, in that the gas does not flow through the porous solid. More than 70 time-dependent variables are made available for additional analyses and plotting. The model encompasses a broad range of possible responses: mechanical damage with no energetic response, and a continuous spectrum of degrees of violence including delayed and prompt detonation. This paper describes the basic workings of the model.
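
    To make the "user-defined model" coupling concrete, here is a hypothetical Python sketch of the kind of per-cell, per-timestep interface such host codes call: the host supplies a strain-tensor increment and the model returns the updated stress tensor and species composition. Every name, field, modulus and rate law below is invented for illustration; none of it is the HERMES formulation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class MaterialState:
    stress: np.ndarray        # 3x3 Cauchy stress tensor
    solid_fraction: float     # mass fraction of unreacted solid
    gas_fraction: float       # mass fraction of product gas
    damage: float             # scalar mechanical damage, 0..1

def user_defined_update(state: MaterialState, dstrain: np.ndarray,
                        dt: float) -> MaterialState:
    """Hypothetical per-cell update: host passes the strain-tensor increment;
    the model returns updated stress and composition (toy physics only)."""
    shear_mod, bulk_mod = 5.0e9, 12.0e9      # assumed elastic moduli (Pa)
    dev = dstrain - np.trace(dstrain) / 3.0 * np.eye(3)
    # Elastic trial stress increment, degraded by accumulated damage
    dstress = (1.0 - state.damage) * (
        2.0 * shear_mod * dev + bulk_mod * np.trace(dstrain) * np.eye(3))
    stress = state.stress + dstress
    # Toy damage growth and solid-to-gas conversion driven by shear strain
    shear_measure = abs(np.tensordot(dev, dev)) ** 0.5
    damage = min(1.0, state.damage + 10.0 * shear_measure)
    burned = 0.1 * damage * state.solid_fraction * dt
    return MaterialState(stress, state.solid_fraction - burned,
                         state.gas_fraction + burned, damage)

state = MaterialState(np.zeros((3, 3)), 1.0, 0.0, 0.0)
dstrain = np.diag([1e-4, -5e-5, -5e-5])
state = user_defined_update(state, dstrain, dt=1e-7)
print(state.damage, state.gas_fraction)
```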

  17. Modeling Multi-commodity Trade Information Exchange Methods

    CERN Document Server

    Traczyk, Tomasz

    2012-01-01

    Market mechanisms are entering new fields of the economy in which some constraints of the physical world, e.g. Kirchhoff's law in a power grid, must be taken into account during trading. On such markets some commodities, like telecommunication bandwidth or electrical energy, are non-storable and must be exchanged in real time. On the other hand, the markets tend to react in the shortest possible time, so the idea of delegating some competency to autonomous software agents is very attractive. A multi-commodity mechanism addresses the aforementioned requirements. Modeling the relationships between the commodities makes it possible to formulate new, more sophisticated models and mechanisms, which reflect decision situations in a better manner. Application of the multi-commodity approach requires solving several issues related to data modeling, communication, semantic aspects of communication, reliability, etc. This book answers some of the questions and points out promising paths for implementation and development. Presented s...

  18. A method of shadow puppet figure modeling and animation

    Institute of Scientific and Technical Information of China (English)

    Xiao-fang HUANG; Shou-qian SUN; Ke-jun ZHANG; Tian-ning XU; Jian-feng WU; Bin ZHU

    2015-01-01

    To promote the development of shadow play, an item of the world's intangible cultural heritage, many studies have focused on shadow puppet modeling and interaction. Most shadow puppet figures are still imaginary designs handed down from the ancients, or are carved and painted by shadow puppet artists without consideration of the real dimensions or appearance of human bodies. This study proposes an algorithm to transform 3D human models into 2D puppet figures for shadow puppets, including automatic location of feature points, automatic segmentation of 3D models, automatic extraction of 2D contours, automatic clothes matching, and animation. Experiments show that more realistic and attractive shadow puppet figures and animations can be generated in real time with this algorithm.
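
    The paper's full pipeline (feature points, segmentation, clothes matching) is not reproduced here; the sketch below shows only the simplest step of the 3D-to-2D mapping: orthographically project the mesh vertices onto the puppet plane and take an outline of the projected points. A convex hull is used as a crude stand-in for true silhouette extraction, and the vertex data are invented.

```python
import numpy as np
from scipy.spatial import ConvexHull

def project_to_puppet_plane(vertices: np.ndarray) -> np.ndarray:
    """Orthographic projection of 3-D vertices onto the x-y (puppet) plane."""
    return vertices[:, :2]

def outline(points2d: np.ndarray) -> np.ndarray:
    """Crude 2-D contour of the projected figure: the convex hull boundary.
    (A real silhouette needs concave contours; this is only a stand-in.)"""
    hull = ConvexHull(points2d)
    return points2d[hull.vertices]

# Invented stand-in for a 3-D human model: random points in a torso-like box
rng = np.random.default_rng(0)
vertices = rng.uniform([-0.2, -0.5, -0.1], [0.2, 0.5, 0.1], size=(500, 3))
contour = outline(project_to_puppet_plane(vertices))
print(contour.shape)   # (k, 2): the 2-D puppet outline vertices
```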

  19. Learning Methods for Dynamic Topic Modeling in Automated Behavior Analysis.

    Science.gov (United States)

    Isupova, Olga; Kuzin, Danil; Mihaylova, Lyudmila

    2017-09-27

    Semisupervised and unsupervised systems provide operators with invaluable support and can tremendously reduce the operators' load. Given the need to process large volumes of video data and provide autonomous decisions, this paper proposes new learning algorithms for activity analysis in video. The activities and behaviors are described by a dynamic topic model. Two novel learning algorithms, based on the expectation-maximization approach and on variational Bayes inference, are proposed. Theoretical derivations of the posterior estimates of the model parameters are given. The designed learning algorithms are compared with the Gibbs sampling inference scheme introduced earlier in the literature. A detailed comparison of the learning algorithms is presented on real video data. We also propose an anomaly localization procedure, elegantly embedded in the topic modeling framework. It is shown that the developed learning algorithms can achieve a 95% success rate. The proposed framework can be applied to a number of areas, including transportation systems, security, and surveillance.
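
    The paper's dynamic topic model and its variational updates are beyond an abstract-sized example; as a minimal illustration of the expectation-maximization idea used there, the sketch below fits a mixture of multinomials over discrete "visual words": the E-step computes posterior topic responsibilities per clip, the M-step re-estimates topic weights and word distributions. All data are synthetic.

```python
import numpy as np

def em_mixture_multinomial(counts, n_topics, n_iter=100, seed=0):
    """EM for a mixture of multinomials; counts has shape (n_docs, n_words)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    pi = np.full(n_topics, 1.0 / n_topics)             # topic weights
    theta = rng.dirichlet(np.ones(n_words), n_topics)  # word dists per topic
    for _ in range(n_iter):
        # E-step: log r[d,k] ~ log pi_k + sum_w counts[d,w] * log theta[k,w]
        log_r = np.log(pi) + counts @ np.log(theta).T
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights and word distributions
        pi = r.mean(axis=0)
        theta = r.T @ counts + 1e-6                    # small prior for stability
        theta /= theta.sum(axis=1, keepdims=True)
    return pi, theta, r

# Synthetic activity data: 2 latent behaviors over a 6-word visual vocabulary
rng = np.random.default_rng(1)
t_true = np.array([[.6, .3, .05, .02, .02, .01], [.01, .02, .02, .05, .3, .6]])
docs = np.array([rng.multinomial(50, t_true[d % 2]) for d in range(40)])
pi, theta, resp = em_mixture_multinomial(docs, n_topics=2)
print(np.round(pi, 2))
```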

  20. Modelling methods for co-fired pulverised fuel furnaces

    Energy Technology Data Exchange (ETDEWEB)

    L. Ma; M. Gharebaghi; R. Porter; M. Pourkashanian; J.M. Jones; A. Williams [University of Leeds, Leeds (United Kingdom). Energy and Resources Research Institute

    2009-12-15

    Co-firing of biomass and coal can be beneficial in reducing the carbon footprint of energy production. Accurate modelling of co-fired furnaces is essential to discover potential problems that may occur during biomass firing and to mitigate the potential negative effects of biomass fuels, including lower efficiency due to lower burnout, and NOx formation issues. Existing coal combustion models should be modified to increase the reliability of predictions for biomass, including factors such as increased drag due to non-spherical particle shapes, and accounting for organic compounds and the effects they have on NOx emissions. Detailed biomass co-firing models have been developed and tested for a range of biomass fuels and show promising results. 32 refs., 4 figs., 3 tabs.
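
    The point about increased drag for non-spherical biomass particles is commonly handled in CFD codes with a sphericity-dependent drag correlation; the sketch below uses the Haider and Levenspiel (1989) form as an example, with the coefficients as usually tabulated. This is an illustration of the general approach, not necessarily the correlation these authors used.

```python
import numpy as np

def drag_coefficient(re, phi):
    """Haider-Levenspiel (1989) drag coefficient for non-spherical particles.
    re: particle Reynolds number; phi: sphericity (1.0 = perfect sphere)."""
    A = np.exp(2.3288 - 6.4581 * phi + 2.4486 * phi**2)
    B = 0.0964 + 0.5565 * phi
    C = np.exp(4.905 - 13.8944 * phi + 18.4222 * phi**2 - 10.2599 * phi**3)
    D = np.exp(1.4681 + 12.2584 * phi - 20.7322 * phi**2 + 15.8855 * phi**3)
    return 24.0 / re * (1.0 + A * re**B) + C / (1.0 + D / re)

# A flake-like biomass particle (phi ~ 0.6) vs. a near-spherical coal particle:
# the lower sphericity gives a markedly higher drag coefficient.
for phi in (1.0, 0.6):
    print(phi, drag_coefficient(re=100.0, phi=phi))
```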