WorldWideScience

Sample records for hughes comput methods

  1. Computing methods

    CERN Document Server

    Berezin, I S

    1965-01-01

Computing Methods, Volume 2 is a five-chapter text that presents the numerical methods of solving sets of several mathematical equations. This volume includes the computation of sets of linear algebraic equations, high-degree equations and transcendental equations, numerical methods of finding eigenvalues, and approximate methods of solving ordinary differential equations, partial differential equations and integral equations. The book is intended as a text-book for students in mechanical-mathematical and physics-mathematical faculties specializing in computer mathematics and persons interested in the

  2. Necrology: Hugh Carson Cutler

    Directory of Open Access Journals (Sweden)

    David L. Browman

    1999-05-01

Full Text Available Hugh Carson Cutler, former curator of Economic Botany at the Missouri Botanical Gardens, was one of the first-generation paleoethnobotanists in this country. A pioneer in the field, he was instrumental in getting American archaeologists to begin to employ flotation techniques for the recovery of botanical remains. Cutler, the son of Manuel and Mary Cutler, was born September 8, 1912 in Milwaukee, Wisconsin, and died September 22, 1998 in Topeka, Kansas.

  3. Clinical outcome of Fitz-Hugh-Curtis syndrome mimicking acute biliary disease

    Institute of Scientific and Technical Information of China (English)

    Seong Yong Woo; Jin Il Kim; Dae Young Cheung; Se Hyun Cho; Soo-Heon Park; Joon-Yeol Han; Jae Kwang Kim

    2008-01-01

AIM: To analyze the clinical characteristics of patients diagnosed with Fitz-Hugh-Curtis syndrome. METHODS: The clinical courses of patients who visited St. Mary's Hospital with abdominal pain from January 2005 to December 2006 and were diagnosed with Fitz-Hugh-Curtis syndrome were examined. RESULTS: Fitz-Hugh-Curtis syndrome was identified in 22 female patients of childbearing age; their mean age was 31.0 ± 8.1 years. Fourteen of these cases presented with pain in the upper right abdomen alone or together with pain in the lower abdomen, and six patients presented with pain only in the lower abdomen. The first impression at the time of visit was acute cholecystitis or cholangitis in 10 patients and acute appendicitis or pelvic inflammatory disease in eight patients. Twenty-one patients were diagnosed by abdominal computed tomography (CT), and the results of abdominal sonography were normal for 10 of these patients. Chlamydia trachomatis was isolated from 18 patients. Two patients underwent laparoscopic adhesiotomy and 20 patients were completely cured by antibiotic treatment. CONCLUSION: For women of childbearing age with acute pain in the upper right abdomen alone or together with pain in the lower abdomen, Fitz-Hugh-Curtis syndrome should be considered during differential diagnosis. Moreover, in cases suspected to be Fitz-Hugh-Curtis syndrome, abdominal CT, rather than abdominal sonography, assists in the diagnosis.

  4. Essential numerical computer methods

    CERN Document Server

    Johnson, Michael L

    2010-01-01

The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last two decades most basic algorithms have not changed; what has changed is the huge increase in computer speed and ease of use, along with the corresponding orders-of-magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence databases. While these are important applications, they only scratch the surface

  5. Analogue computing methods

    CERN Document Server

    Welbourne, D

    1965-01-01

    Analogue Computing Methods presents the field of analogue computation and simulation in a compact and convenient form, providing an outline of models and analogues that have been produced to solve physical problems for the engineer and how to use and program the electronic analogue computer. This book consists of six chapters. The first chapter provides an introduction to analogue computation and discusses certain mathematical techniques. The electronic equipment of an analogue computer is covered in Chapter 2, while its use to solve simple problems, including the method of scaling is elaborat

  6. The Many Worlds of Hugh Everett III

    CERN Document Server

    2011-01-01

    A review of Peter Byrne's biography of Hugh Everett III, "The Many Worlds of Hugh Everett III: Multiple Universes, Mutual Assured Destruction, and the Meltdown of a Nuclear Family", (Oxford University Press, 2010).

  7. On the Hughes model and numerical aspects

    KAUST Repository

    Gomes, Diogo A.

    2017-01-05

We study a crowd model proposed by R. Hughes in [11] and we describe a numerical approach to solve it. This model comprises a Fokker-Planck equation coupled with an eikonal equation with Dirichlet or Neumann data. First, we establish a priori estimates for the solutions. Second, we study radial solutions and identify a shock formation mechanism. Third, we illustrate the existence of congestion, the breakdown of the model, and the trend to equilibrium. Finally, we propose a new numerical method and consider two examples.
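
    The coupled structure described above can be made concrete with a small one-dimensional sketch. Everything below (the speed law f(rho) = 1 - rho, the parameters, the upwind discretization) is an illustrative assumption, not the scheme proposed in the paper:

        import numpy as np

        # Illustrative 1D sketch of a Hughes-type crowd model. Pedestrians move
        # toward the nearest exit at density-dependent speed f(rho) = 1 - rho;
        # the potential u solves the eikonal equation |u'(x)| = 1/f(rho) with
        # u = 0 at the exits x = 0 and x = 1.
        n = 200
        dx = 1.0 / n
        x = (np.arange(n) + 0.5) * dx
        rho = 0.8 * np.exp(-80.0 * (x - 0.5) ** 2)   # initial crowd in the middle
        eps, dt = 1e-3, 0.2 * dx                     # diffusion and time step

        def eikonal_1d(cost):
            # exact in 1D: integrate the cost from each exit, take the minimum
            left = np.cumsum(cost) * dx
            right = np.cumsum(cost[::-1])[::-1] * dx
            return np.minimum(left, right)

        for _ in range(400):
            f = np.clip(1.0 - rho, 1e-6, None)       # walking speed
            u = eikonal_1d(1.0 / f)
            s = -np.sign(np.diff(u, append=u[-1]))   # walk downhill in u
            v = s * f
            flux = rho * v
            # upwind numerical flux at the n - 1 interior cell interfaces
            inner = np.where(v[:-1] + v[1:] > 0.0, flux[:-1], flux[1:])
            left_out = flux[0] if v[0] < 0 else 0.0  # outflow through the exits
            right_out = flux[-1] if v[-1] > 0 else 0.0
            F = np.concatenate(([left_out], inner, [right_out]))
            lap = (np.pad(rho, 1, mode="edge")[2:] - 2 * rho
                   + np.pad(rho, 1, mode="edge")[:-2]) / dx**2
            rho = np.clip(rho - dt * (F[1:] - F[:-1]) / dx + dt * eps * lap, 0.0, 1.0)

        print("mass left in the corridor:", rho.sum() * dx)  # drops as people exit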

  8. A guide to Hughes' syndrome.

    Science.gov (United States)

    Sheehan, Tina Louise

Hughes' syndrome, or antiphospholipid syndrome, is thought to be the cause of one in four strokes in people aged less than 40 years. It is an autoimmune disorder in which the blood has a tendency to clot too quickly. It can affect any artery or vein in the body, and its main features are thrombosis, pregnancy loss and the presence of antiphospholipid antibodies. If detected, it can be treated effectively.

  9. The Blues Poetry of Langston Hughes

    Science.gov (United States)

    Waldron, Edward E.

    1971-01-01

    The author discusses the criteria of the blues as an American art form. He then shows how Langston Hughes captures the mood, the feeling, the rhythm and the impact of the blues in his poetry. (Author/LF)

  10. Preconditioned method in parallel computation

    Institute of Scientific and Technical Information of China (English)

    Wu Ruichan; Wei Jianing

    2006-01-01

The grid equations in a domain decomposed for parallel computation are solved, and a method of local orthogonalization for large-scale numerical computation is presented. It constructs the preconditioned iteration matrix by combining a simplified LU decomposition with local orthogonalization, and the convergence of the solution is proved. As the example indicates, this algorithm increases the rate of computation efficiently and is quite stable.
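
    As a generic illustration of a preconditioned iteration (not the authors' combination of simplified LU decomposition with local orthogonalization), the following sketch accelerates GMRES on a model grid equation with an incomplete-LU preconditioner in SciPy:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Generic illustration: ILU-preconditioned GMRES on a 2D Poisson grid
        # equation (not the paper's local-orthogonalization preconditioner).
        n = 50
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
        A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsc()
        b = np.ones(A.shape[0])

        ilu = spla.spilu(A, drop_tol=1e-4)            # approximate LU factors
        M = spla.LinearOperator(A.shape, ilu.solve)   # preconditioner M ~ A^-1

        x, info = spla.gmres(A, b, M=M, rtol=1e-10)   # rtol needs SciPy >= 1.12
        print(info, np.linalg.norm(A @ x - b))        # info == 0 means converged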

  11. Computational Methods in Plasma Physics

    CERN Document Server

    Jardin, Stephen

    2010-01-01

    Assuming no prior knowledge of plasma physics or numerical methods, Computational Methods in Plasma Physics covers the computational mathematics and techniques needed to simulate magnetically confined plasmas in modern magnetic fusion experiments and future magnetic fusion reactors. Largely self-contained, the text presents the basic concepts necessary for the numerical solution of partial differential equations. Along with discussing numerical stability and accuracy, the author explores many of the algorithms used today in enough depth so that readers can analyze their stability, efficiency,

  12. Computational methods in earthquake engineering

    CERN Document Server

    Plevris, Vagelis; Lagaros, Nikos

    2017-01-01

This is the third book in a series on Computational Methods in Earthquake Engineering. The purpose of this volume is to bring together the scientific communities of Computational Mechanics and Structural Dynamics, offering a wide coverage of timely issues in contemporary Earthquake Engineering. This volume will facilitate the exchange of ideas in topics of mutual interest and can serve as a platform for establishing links between research groups with complementary activities. The computational aspects are emphasized in order to address difficult engineering problems of great social and economic importance.

  13. Fitz-Hugh-Curtis syndrome: abdominal pain in a 26-year-old woman

    OpenAIRE

    Liseth Rivero-Sánchez; Elsa María López-Soriano; Luisa Guarner-Aguilar

    2011-01-01

Fitz-Hugh-Curtis syndrome is an inflammation of the liver capsule occurring as a complication of pelvic inflammatory disease, whose most common etiologic agent is C. trachomatis. The acute phase of the Fitz-Hugh-Curtis syndrome may present itself with pain in the right upper abdomen, commonly confused with other hepatobiliary and gastrointestinal tract diseases. Definitive diagnosis is now possible with non-invasive techniques such as ultrasound and computed tomography, as well as techniques to isolate th...

  14. Langston Hughes and his poem "Harlem"

    Institute of Scientific and Technical Information of China (English)

    郝红

    2005-01-01

James Langston Hughes was born February 1, 1902, in Joplin, Missouri. His parents divorced when he was a small child, and his father moved to Mexico. He was raised by his grandmother until he was thirteen, when he moved to Lincoln, Illinois, to live with his mother and her husband, eventually settling in Cleveland, Ohio.

  15. Hugh Maaskant : architect van de vooruitgang

    NARCIS (Netherlands)

    Provoost, M.

    2003-01-01

    Hugh Maaskant (1907–1977) is best known as the architect who made the biggest mark on the post-war reconstruction of Rotterdam with such buildings as the Groothandelsgebouw, the Hilton Hotel and the Lijnbaan flats. Beginning his career in 1937 as the partner of Willem van Tijen, Maaskant embarked on

  16. Responses to Hugh Heclo's "On Thinking Institutionally"

    Science.gov (United States)

    Fennell, Robert C.; Ascough, Richard S.; Liew, Tat-siong Benny; McLain, Michael; Westfield, Nancy Lynne

    2010-01-01

    Hugh Heclo's recent book "On Thinking Institutionally" (Paradigm Publishers, 2008) analyzes changes that have taken place in the past half century in how North Americans tend to think and act in institutions. The volume is receiving particular attention as it can be applied to higher education and to religious denominations, and so deserves…

  17. Computational methods in drug discovery

    Directory of Open Access Journals (Sweden)

    Sumudu P. Leelananda

    2016-12-01

Full Text Available The process of drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in expediting this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power, has made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  18. Combinatorial methods with computer applications

    CERN Document Server

    Gross, Jonathan L

    2007-01-01

Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course. After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp

  1. Forecasting methods for computer technology

    Energy Technology Data Exchange (ETDEWEB)

    Worlton, W.J.

    1978-01-01

    How well the computer site manager avoids future dangers and takes advantage of future opportunities depends to a considerable degree on how much anticipatory information he has available. People who rise in management are expected with each successive promotion to concern themselves with events further in the future. It is the function of technology projection to increase this stock of information about possible future developments in order to put planning and decision making on a more rational basis. Past efforts at computer technology projections have an accuracy that declines exponentially with time. Thus, precisely defined technology projections beyond about three years should be used with considerable caution. This paper reviews both subjective and objective methods of technology projection and gives examples of each. For an integrated view of future prospects in computer technology, a framework for technology projection is proposed.

  2. Computational methods for fluid dynamics

    CERN Document Server

    Ferziger, Joel H

    2002-01-01

In its 3rd revised and extended edition the book offers an overview of the techniques used to solve problems in fluid mechanics on computers and describes in detail those most often used in practice. Included are advanced methods in computational fluid dynamics, like direct and large-eddy simulation of turbulence, multigrid methods, parallel computing, moving grids, structured, block-structured and unstructured boundary-fitted grids, and free surface flows. The 3rd edition contains a new section dealing with grid quality and an extended description of discretization methods. The book shows common roots and basic principles for many different methods. The book also contains a great deal of practical advice for code developers and users; it is designed to be equally useful to beginners and experts. The issues of numerical accuracy, estimation and reduction of numerical errors are dealt with in detail, with many examples. A full-feature user-friendly demo-version of a commercial CFD software has been added, which ca...

  3. Some Numerical Aspects on Crowd Motion - The Hughes Model

    KAUST Repository

    Gomes, Diogo A.

    2016-01-06

Here, we study a crowd model proposed by R. Hughes in [5] and we describe a numerical approach to solve it. This model comprises a Fokker-Planck equation coupled with an eikonal equation with Dirichlet or Neumann data. First, we establish a priori estimates for the solution. Second, we study radial solutions and identify a shock formation mechanism. Third, we illustrate the existence of congestion, the breakdown of the model, and the trend to equilibrium. Finally, we propose a new numerical method and consider two numerical examples.

  4. Computer Architecture Performance Evaluation Methods

    CERN Document Server

    Eeckhout, Lieven

    2010-01-01

Performance evaluation is at the foundation of computer architecture research and development. Contemporary microprocessors are so complex that architects cannot design systems based on intuition and simple models only. Adequate performance evaluation methods are absolutely crucial to steer the research and development process in the right direction. However, rigorous performance evaluation is non-trivial as there are multiple aspects to performance evaluation, such as picking workloads, selecting an appropriate modeling or simulation approach, running the model and interpreting the results usi

  5. Computational methods for stellarator configurations

    Energy Technology Data Exchange (ETDEWEB)

    Betancourt, O.

    1992-01-01

This project had two main objectives. The first one was to continue to develop computational methods for the study of three dimensional magnetic confinement configurations. The second one was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three dimensional nonlinear effects in Tokamak configurations, these self consistent computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, D. Anderson and R. Talmage at Wisconsin as well as through participation in the Sherwood and APS meetings.

  6. Computational methods for molecular imaging

    CERN Document Server

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...

  7. Right Pleural Effusion in Fitz-Hugh-Curtis Syndrome

    Directory of Open Access Journals (Sweden)

Tajiri, Takuma

    2006-10-01

Full Text Available Right pleural effusion was diagnosed in a 36-year-old woman with right upper quadrant pain and fever. Enhanced pelvic computed tomography performed because of irregular genital bleeding revealed pelvic inflammatory disease. Upon further questioning, the patient confirmed that she had recently undergone therapy for Chlamydia trachomatis infection. She was therefore given an injection of tetracycline because we suspected Fitz-Hugh-Curtis syndrome (FHCS), a pelvic inflammatory disease characterized by perihepatitis associated with chlamydial infection. A remarkable clinical response to antibiotics was noted. The right upper quadrant pain was due to perihepatitis, and the final diagnosis was FHCS. Right pleural effusion may be caused by inflammation of the diaphragm associated with perihepatitis. Once chlamydial infection reaches the subphrenic liver, conditions in the closed space between the liver and diaphragm due to inflammatory adhesion may be conducive to chlamydial proliferation. The possibility of FHCS should be considered in patients and carefully distinguished from other abdominal diseases.

  8. Hugh Maaskant: architect van de vooruitgang

    OpenAIRE

    Provoost, M.

    2003-01-01

    Hugh Maaskant (1907–1977) is best known as the architect who made the biggest mark on the post-war reconstruction of Rotterdam with such buildings as the Groothandelsgebouw, the Hilton Hotel and the Lijnbaan flats. Beginning his career in 1937 as the partner of Willem van Tijen, Maaskant embarked on his most prolific period after establishing an independent practice in 1955. He produced the lion’s share of his work in the 1950s and ’60s, the very period architectural critics generally regard ...

  9. Donald Hugh Blocher (1928-2013).

    Science.gov (United States)

    Dowd, E Thomas

    2015-01-01

    This article memorializes Donald Hugh Blocher (1928-2013). Blocher, a giant in counseling psychology, was elected a fellow of the American Psychological Association (APA) in 1973, received a Distinguished Achievement Award from Harvard University in 1968, and served as president of APA Division 17 (Society of Counseling Psychology) in 1980. He obtained a Fulbright Lectureship at the University of Keele (United Kingdom) for 1968-1969, where he helped develop a counseling psychology program according to American standards. He also published many books, book chapters, and articles over the course of his professional career.

  10. Hugh Owen Thomas (1834-1891)

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

Hugh Owen Thomas is known as the father of British orthopaedics for his outstanding contributions to orthopaedics in Britain and worldwide. As early as the 17th century, bone-setting had flourished in Britain and France as a profession of relatively high social standing. Bone-setters' techniques were usually passed down within the family rather than taught to outsiders.

  11. Sandy and Hugh - a couple or not? / Hugh Grant, Sandra Bullock

    Index Scriptorium Estoniae

    Grant, Hugh, 1960-

    2003-01-01

In the romantic comedy "Two Weeks Notice" ("Kaks nädalat armumiseks"), directed by Marc Lawrence, two famous actors play the leads. The stars on their relationship: Hugh Grant: "Sandra is a genius"; Sandra Bullock: "We are like twins!"

  12. Acute Charles Bonnet Syndrome following Hughes procedure.

    Science.gov (United States)

    Wilson, Michelle E; Pointdujour-Lim, Renelle; Lally, Sara; Shields, Carol L; Rabinowitz, Michael P

    2016-10-01

    A 69-year-old male experienced monocular formed visual hallucinations after occlusion of the right eye following resection of eyelid basal cell carcinoma and reconstruction with a Hughes procedure (tarsoconjunctival flap). His symptoms included recurrent, well-defined, organized, complex, formed images of small children playing in the snow. These visual phenomena occurred only in the occluded eye, began several hours after surgery, and recurred intermittently several times daily for 4 days, lasting several minutes with each occurrence. The patient retained insight into the false nature of the images throughout the duration of his symptoms, and the hallucinations resolved spontaneously while the flap was still in place. To our knowledge, this is the first reported case of Charles Bonnet Syndrome (CBS) following a Hughes procedure in a patient with normal visual acuity in the non-occluded fellow eye. Unlike other reported cases of acute onset CBS following transient monocular occlusion, hallucinations in the occluded eye remitted prior to restoration of vision in the occluded eye. Ophthalmologists should be aware of the potential for CBS following even transient monocular occlusion and should consider warning patients about its potential to occur.

  13. Computational Methods and Function Theory

    CERN Document Server

    Saff, Edward; Salinas, Luis; Varga, Richard

    1990-01-01

    The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.

  14. Obituary: Lawrence Hugh Aller, 1913-2003

    Science.gov (United States)

    Kaler, James B.

    2003-12-01

Michigan, he taught a two-semester course in advanced general astronomy that covered nearly everything, in addition to a remarkable four-semester sequence in astrophysics (general, stellar atmospheres, nebular astrophysics, and stellar interiors). These were backed up by an extraordinary set of books. In 1943, Goldberg and he turned out the seminal tome Atoms, Stars, and Nebulae. (A solo third edition was published in 1991.) Then in 1953 arrived "The Atmospheres of the Sun and Stars" (revised a decade later), a tour de force on the physics of stellar plasmas and radiative transfer that became the bible of a generation of astronomers. "Nuclear Transformations, Stellar Interiors, and Nebulae" appeared a year later, and "Gaseous Nebulae" two years after that (rewritten in 1984 as "Physics of Thermal Gaseous Nebulae"). Not having a computer available in the early years, he used his students, creating mammoth "Aller Problems" that solved the equations for results that went into the books. Never formally published were two massive tomes of advanced general astronomy. To those of us lucky enough to have them, they serve as references to this day. His students, both undergraduate and graduate, are everywhere, their own students in turn carrying on Lawrence's ideas and work. In 1941, Lawrence married Rosalind Duncan Hall (who survives), and together they raised three children: Hugh, Gwen, and Raymond. Not only did one son become an astronomer, but so has one granddaughter (a dynasty established). Lawrence was absorbed by news and politics. He hated injustice of any kind, and let you know about it. He could entertain for hours with stories of his youth and of other astronomers, never realizing that he would also be the source of affectionate stories that would be told and retold by his own students. Of beautiful heart, he was a good father, both to his own children and to those he adopted as his students, none of whom, having been taught by him, will ever forget. Incredibly prolific, his

  15. IN MEMORIAM: Hugh P Kelly, 3 September 1931 - 29 June 1992

    Science.gov (United States)

    Hansen, Jørgen E.

    1993-01-01

    It is said that Racah was one of the few physicists who was able not only to read Hermann Weyl's book "The Theory of Groups and Quantum Mechanics" but also to understand enough to use it himself. Something similar applies to Hugh Kelly, who introduced the methods of Many-Body Perturbation Theory (MBPT), which was being developed by nuclear physicists like Keith Brueckner whom Hugh worked with as a post-doc, into atomic physics. Since atomic physics with its "known" forces is a perfect area in which to apply the methods of MBPT, it is clear that this would have happened sooner or later. Hugh Kelly's achievement is that it happened very early. Hugh's death this summer after a long battle with cancer is a great loss to atomic physics and to his many friends in the community. Hugh P Kelly got his PhD from the University of California at Berkeley in 1963 under the supervision of Kenneth Watson and in 1965 he was hired as assistant Professor at the University of Virginia. His appointment was to a large extent due to the support of M E Rose, also a nuclear physicist with interest in atoms as is clear from his book "Elementary Theory of Angular Momentum". Hugh remained at the University of Virginia for the rest of his life, over the years serving as Department Chairman, Faculty Dean and, during the last years, as University Provost, but throughout Hugh pursued his real passion which was Physics. It is a clear sign of his dedication that he is sole author on a large number of papers particularly of course in the early years when he alone was Atomic Theory in Charlottesville. Hugh contributed to many areas of atomic physics although early on photoionization became his favourite subject and the one where he made his main contribution: first for closed shell atoms, but later extending his techniques to open shell systems for which the normal MBPT techniques did not apply. Hugh had a good nose for finding projects that were of topical interest and many of his papers are

  16. Novel methods in computational finance

    CERN Document Server

    Günther, Michael; Maten, E

    2017-01-01

    This book discusses the state-of-the-art and open problems in computational finance. It presents a collection of research outcomes and reviews of the work from the STRIKE project, an FP7 Marie Curie Initial Training Network (ITN) project in which academic partners trained early-stage researchers in close cooperation with a broader range of associated partners, including from the private sector. The aim of the project was to arrive at a deeper understanding of complex (mostly nonlinear) financial models and to develop effective and robust numerical schemes for solving linear and nonlinear problems arising from the mathematical theory of pricing financial derivatives and related financial products. This was accomplished by means of financial modelling, mathematical analysis and numerical simulations, optimal control techniques and validation of models. In recent years the computational complexity of mathematical models employed in financial mathematics has witnessed tremendous growth. Advanced numerical techni...

  17. Advanced computational electromagnetic methods and applications

    CERN Document Server

    Li, Wenxing; Elsherbeni, Atef; Rahmat-Samii, Yahya

    2015-01-01

    This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. This book is designed to extend existing literature to the latest development in computational electromagnetic methods, which are of interest to readers in both academic and industrial areas. The topics include advanced techniques in MoM, FEM and FDTD, spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency and time domain integral equations, and statistics methods in bio-electromagnetics.

  18. Hugh Blair's Lectures on Rhetoric and Belles Lettres

    DEFF Research Database (Denmark)

    Schatz-Jakobsen, Claus

    1989-01-01

The article offers a deconstructive close reading of parts of the Scottish professor of rhetoric Hugh Blair's Lectures on Rhetoric and Belles Lettres (1783) and demonstrates the split between two widely different rhetorical and literary-historical interests, the neoclassical vs. the romantic.

  19. Hughes-Stovin syndrome (Síndrome de Hughes-Stovin)

    Directory of Open Access Journals (Sweden)

    Sonia Pankl

    2015-04-01

Full Text Available Hughes-Stovin syndrome is a rare entity characterized by deep vein thrombosis and pulmonary artery aneurysms; its etiology and pathogenesis are unknown. Some authors consider it a variant of Behcet's disease. Its natural course is generally fatal. It presents with cough, dyspnea, hemoptysis, chest pain and fever. Treatment ranges from steroids and cytotoxic agents to surgery. We present the case of a 41-year-old man who presented with dyspnea, hemoptysis and chest pain, and was diagnosed with deep vein thrombosis of the right lower limb, pulmonary thromboembolism and pulmonary artery aneurysms. He was treated with high-dose corticosteroids and six monthly 1-gram pulses of cyclophosphamide, with complete regression of the aneurysms and of the symptoms.

  1. Dynamical System Approach for Edge Detection Using Coupled FitzHugh-Nagumo Neurons.

    Science.gov (United States)

    Li, Shaobai; Dasmahapatra, Srinandan; Maharatna, Koushik

    2015-12-01

The prospect of emulating the impressive computational capabilities of biological systems has led to considerable interest in the design of analog circuits that are potentially implementable in very large scale integration CMOS technology and are guided by biologically motivated models. For example, simple image processing tasks, such as the detection of edges in binary and grayscale images, have been performed by networks of FitzHugh-Nagumo-type neurons using reaction-diffusion models. However, in these studies, the one-to-one mapping of image pixels to component neurons makes the size of the network a critical factor in any such implementation. In this paper, we develop a simplified version of the employed reaction-diffusion model in three steps. In the first step, we perform a detailed study to locate this threshold using continuous Lyapunov exponents from dynamical system theory. Furthermore, we render the diffusion in the system anisotropic, with the degree of anisotropy set by the gradients of grayscale values in each image. The final step involves a simplification of the model achieved by eliminating the terms that couple the membrane potentials of adjacent neurons. We apply our technique to detect edges in data sets of artificially generated and real images, and we demonstrate that the performance is as good as, if not better than, that of the previous methods without increasing the size of the network.
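
    A toy NumPy version of the general idea, one FitzHugh-Nagumo unit per pixel with diffusive coupling damped across strong grayscale gradients, may help; the parameters and the gradient weighting are illustrative assumptions, not the circuit model developed in the paper:

        import numpy as np

        # Toy FitzHugh-Nagumo reaction-diffusion edge detector (illustrative
        # parameters). One unit per pixel; diffusion between neighbors is damped
        # where the grayscale gradient is strong, a crude anisotropic coupling.
        def fhn_edges(img, steps=200, dt=0.05, a=0.7, b=0.8, tau=12.5, D=1.0):
            v = img.astype(float).copy()           # potentials seeded by pixels
            w = np.zeros_like(v)                   # recovery variables
            gy, gx = np.gradient(img.astype(float))
            damp = np.exp(-(gx**2 + gy**2) / 0.02) # small weight across edges
            for _ in range(steps):
                lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                       np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4.0 * v)
                v = v + dt * (v - v**3 / 3.0 - w + D * damp * lap)
                w = w + dt * (v + a - b * w) / tau
            ey, ex = np.gradient(v)                # edges: sharp changes in the
            return np.hypot(ex, ey)                # relaxed potential field

        img = np.zeros((64, 64))
        img[16:48, 16:48] = 1.0                    # synthetic test square
        edges = fhn_edges(img)
        print(edges.max(), int((edges > 0.5 * edges.max()).sum()))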

  2. Computational Methods for Ideal Magnetohydrodynamics

    Science.gov (United States)

    Kercher, Andrew D.

Numerical schemes for ideal magnetohydrodynamics (MHD) are widely used for modeling space weather and astrophysical flows. They are designed to resolve the different waves that propagate through a magnetohydrodynamic fluid, namely, the fast, Alfven, slow, and entropy waves. Numerical schemes for ideal magnetohydrodynamics that are based on the standard finite volume (FV) discretization exhibit pseudo-convergence, in which non-regular waves no longer exist only after heavy grid refinement. A method is described for obtaining solutions for coplanar and near-coplanar cases that consist of only regular waves, independent of grid refinement. The method, referred to as Compound Wave Modification (CWM), involves removing the flux associated with non-regular structures and can be used for simulations in two and three dimensions because it does not require explicitly tracking an Alfven wave. For a near-coplanar case, and for grids with 2^13 points or fewer, we find root-mean-square errors (RMSEs) that are as much as 6 times smaller. For the coplanar case, in which non-regular structures will exist at all levels of grid refinement for standard FV schemes, the RMSE is as much as 25 times smaller. A multidimensional ideal MHD code has been implemented for simulations on graphics processing units (GPUs). Performance measurements were conducted for both the NVIDIA GeForce GTX Titan and the Intel Xeon E5645 processor. The GPU is shown to perform one to two orders of magnitude faster than the CPU when using a single core, and two to three times faster than when run in parallel with OpenMP. Performance comparisons are made for two methods of storing data on the GPU. The first approach stores data as an Array of Structures (AoS), e.g., a point coordinate array of size 3 x n is iterated over. The second approach stores data as a Structure of Arrays (SoA), e.g., three separate arrays of size n are iterated over simultaneously. For an AoS, coalescing does not occur, reducing memory efficiency.
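
    The AoS versus SoA trade-off measured in the abstract can be mimicked on a CPU with NumPy, a structured (interleaved) array against separate contiguous arrays; this is only a rough analogue of the GPU memory-coalescing effect:

        import numpy as np
        import time

        # CPU analogue of the AoS vs SoA layouts compared in the record. In the
        # structured array the x/y/z fields are interleaved (strided access);
        # the separate arrays are each contiguous (unit-stride access).
        n = 10_000_000
        aos = np.zeros(n, dtype=[("x", "f4"), ("y", "f4"), ("z", "f4")])
        sx = np.zeros(n, dtype="f4")
        sy = np.zeros(n, dtype="f4")
        sz = np.zeros(n, dtype="f4")

        t0 = time.perf_counter()
        r_aos = aos["x"] + aos["y"] + aos["z"]
        t1 = time.perf_counter()
        r_soa = sx + sy + sz
        t2 = time.perf_counter()
        print(f"AoS: {t1 - t0:.3f} s   SoA: {t2 - t1:.3f} s")  # SoA typically wins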

  3. Young CAS biologist gets five-year grant from Howard Hughes Medical Institute

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    Dr. TANG Chun, an outstanding young biologist from the CAS Wuhan Institute of Physics and Mathematics (WIPM) was announced in January 2012 to have won the first International Early Career Award from Howard Hughes Medical Institute with a five-year grant totaling up to 650,000 US dollars. Dr. Tang is an expert in using novel nuclear magnetic resonance methods to study the behavior and function of proteins,

  4. Electromagnetic field computation by network methods

    CERN Document Server

    Felsen, Leopold B; Russer, Peter

    2009-01-01

    This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book presents new strong models by combining important computational methods. This is the last book of the late Leopold Felsen.

  5. Computational Chemistry Using Modern Electronic Structure Methods

    Science.gov (United States)

    Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

    2007-01-01

Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now easily be applied even to large molecules.

  6. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola

    2014-01-01

    Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of scientific method in computing is getting increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.

  7. Meshfree methods for computational fluid dynamics

    OpenAIRE

    Jícha M.; Čermák L.; Niedoba P.

    2013-01-01

The paper deals with the convergence problem of the SPH (Smoothed Particle Hydrodynamics) meshfree method for the solution of fluid dynamics tasks. In the introductory part, fundamental aspects of meshfree methods, their definition, computational approaches and classification are discussed. In the following part, the methods of local integral representation, where SPH belongs, are analyzed and specifically the method RKPM (Reproducing Kernel Particle Method) is described. In the contribution...

  8. Some methods of computational geometry applied to computer graphics

    NARCIS (Netherlands)

    Overmars, M.H.; Edelsbrunner, H.; Seidel, R.

    1984-01-01

Windowing a two-dimensional picture means to determine those line segments of the picture that are visible through an axis-parallel window. A study of some algorithmic problems involved in windowing a picture is offered. Some methods from computational geometry are exploited to store the

  9. Numerical bifurcation analysis of two coupled FitzHugh-Nagumo oscillators

    CERN Document Server

    Hoff, Anderson; Manchein, Cesar; Albuquerque, Holokx A

    2015-01-01

The behavior of neurons can be modeled by the FitzHugh-Nagumo oscillator model, consisting of two nonlinear differential equations, which simulates the behavior of nerve impulse conduction through the neuronal membrane. In this work, we numerically study the dynamical behavior of two coupled FitzHugh-Nagumo oscillators. We consider unidirectional and bidirectional couplings, for which Lyapunov and isoperiodic diagrams were constructed by calculating the Lyapunov exponents and the number of local maxima of a variable in one period interval of the time series, respectively. The bifurcation curves are also obtained for both couplings by a numerical continuation method. The dynamics of the networks investigated here are presented in terms of the variation between the coupling strength of the oscillators and other parameters of the system. For the network of two oscillators unidirectionally coupled, the results show the existence of Arnold tongues, self-organized sequentially in a branch of a Stern-Brocot tree and ...
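
    A minimal integration of two unidirectionally coupled FitzHugh-Nagumo oscillators is sketched below; the parameter values and the diffusive coupling form are illustrative assumptions, not the exact system scanned in the paper:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Two FitzHugh-Nagumo oscillators; unit 1 drives unit 2 with strength k.
        def rhs(t, y, a=0.7, b=0.8, tau=12.5, I=0.5, k=0.3):
            v1, w1, v2, w2 = y
            dv1 = v1 - v1**3 / 3 - w1 + I
            dw1 = (v1 + a - b * w1) / tau
            dv2 = v2 - v2**3 / 3 - w2 + I + k * (v1 - v2)  # driven unit
            dw2 = (v2 + a - b * w2) / tau
            return [dv1, dw1, dv2, dw2]

        sol = solve_ivp(rhs, (0, 500), [0.1, 0.0, -0.1, 0.0], max_step=0.1)
        v1, v2 = sol.y[0], sol.y[2]
        tail = sol.t > 250                 # crude synchronization measure
        print("mean |v1 - v2| on the tail:", np.abs(v1[tail] - v2[tail]).mean())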

  10. Computational techniques of the simplex method

    CERN Document Server

    Maros, István

    2003-01-01

    Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides a comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never discussed details of algorithmic elements and their implementation. On the basis of the book the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.
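
    For orientation, the following snippet shows what the machinery dissected in the book does at the user level, solving a small linear program with SciPy's linprog (the HiGHS backend includes a simplex implementation); this is a usage sketch, not an implementation of the book's techniques:

        from scipy.optimize import linprog

        # maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0;
        # linprog minimizes, so the objective is negated.
        res = linprog(c=[-3, -2],
                      A_ub=[[1, 1], [1, 3]],
                      b_ub=[4, 6],
                      bounds=[(0, None), (0, None)],
                      method="highs")
        print(res.x, -res.fun)   # optimal vertex (4, 0) and objective value 12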

  11. Fitz-Hugh-Curtis syndrome: abdominal pain in a 26-year-old woman

    Directory of Open Access Journals (Sweden)

    Liseth Rivero-Sánchez

    2011-10-01

Full Text Available Fitz-Hugh-Curtis syndrome is an inflammation of the liver capsule occurring as a complication of pelvic inflammatory disease, whose most common etiologic agent is C. trachomatis. The acute phase of the Fitz-Hugh-Curtis syndrome may present itself with pain in the right upper abdomen, commonly confused with other hepatobiliary and gastrointestinal tract diseases. Definitive diagnosis is now possible with non-invasive techniques such as ultrasound and computed tomography, as well as techniques to isolate the responsible germ, available in most centers.

  12. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  13. Methods of computing Campbell-Hausdorff formula

    Science.gov (United States)

    Sogo, Kiyoshi

    2016-11-01

A new method of computing the Campbell-Hausdorff formula is proposed using quantum moment-cumulant relations, which are given by Weyl-ordering symmetrization of the classical moment-cumulant relations. The method enables one to readily use symbolic-computation software to compute arbitrary terms in the formula, and explicit expressions up to the 6th order are obtained by way of illustration. Further, the symmetry C_odd(A, B) = C_odd(B, A), C_even(A, B) = -C_even(B, A) is found and proved. The operator differential method by Knapp is also examined for comparison.
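
    The low-order terms of the formula are easy to check numerically with random matrices; the snippet below verifies the classical third-order truncation and is not the moment-cumulant method proposed in the paper:

        import numpy as np
        from scipy.linalg import expm, logm

        rng = np.random.default_rng(0)
        # small random matrices so the series converges quickly
        A = 0.1 * rng.standard_normal((4, 4))
        B = 0.1 * rng.standard_normal((4, 4))

        def comm(X, Y):
            return X @ Y - Y @ X

        Z = logm(expm(A) @ expm(B))        # exact log(e^A e^B)
        # Campbell-Hausdorff series through third order:
        bch3 = (A + B + comm(A, B) / 2
                + comm(A, comm(A, B)) / 12 + comm(B, comm(B, A)) / 12)
        print(np.linalg.norm(Z - bch3))    # residual is fourth order and small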

  14. Computing discharge using the index velocity method

    Science.gov (United States)

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
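
    Reduced to code, the two-rating structure is just two fitted curves multiplied together. The sketch below uses made-up calibration numbers and simple polynomial rating forms purely for illustration:

        import numpy as np

        # Hypothetical calibration data: ADVM index velocity (m/s), stage (m),
        # and mean velocity / area from concurrent discharge measurements.
        v_index = np.array([0.20, 0.45, 0.80, 1.10, 1.40])
        v_mean  = np.array([0.18, 0.41, 0.74, 1.02, 1.31])
        stage   = np.array([1.2, 1.6, 2.1, 2.6, 3.0])
        area    = np.array([30.0, 41.0, 55.5, 70.2, 82.1])   # m^2

        # index velocity rating: V = b0 + b1 * v_index (simple linear form)
        b1, b0 = np.polyfit(v_index, v_mean, 1)
        # stage-area rating: quadratic in stage for a roughly trapezoidal section
        a2, a1, a0 = np.polyfit(stage, area, 2)

        def discharge(vi, h):
            V = b0 + b1 * vi               # mean channel velocity
            A = a0 + a1 * h + a2 * h * h   # cross-sectional area
            return V * A                   # Q = V * A, in m^3/s

        print(discharge(0.9, 2.3))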

  15. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth
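
    A one-dimensional instance of the gPC idea: expand f(Z) = exp(Z) of a standard Gaussian input in probabilists' Hermite polynomials and read the mean and variance off the coefficients. This toy example is ours, not the book's:

        import numpy as np
        from math import factorial
        from numpy.polynomial import hermite_e as He

        # Gauss-Hermite_e nodes integrate against exp(-z^2/2); dividing the
        # weights by sqrt(2*pi) turns quadrature sums into expectations of
        # Z ~ N(0, 1).
        z, w = He.hermegauss(40)
        w = w / np.sqrt(2.0 * np.pi)

        N = 8
        f = np.exp(z)
        # coefficients c_n = E[f(Z) He_n(Z)] / n!, since E[He_n(Z)^2] = n!
        c = np.array([np.sum(w * f * He.hermeval(z, np.eye(N + 1)[k])) / factorial(k)
                      for k in range(N + 1)])

        mean = c[0]
        var = sum(c[k] ** 2 * factorial(k) for k in range(1, N + 1))
        print(mean, np.exp(0.5))          # gPC mean vs exact E[exp(Z)] = e^(1/2)
        print(var, (np.e - 1.0) * np.e)   # gPC variance vs exact (e - 1) e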

  16. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art Computational Image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  17. Computer Vision Method in Human Motion Detection

    Institute of Scientific and Technical Information of China (English)

    FU Li; FANG Shuai; XU Xin-he

    2007-01-01

Human motion detection based on computer vision is a frontier research topic and is attracting increasing attention in the field of computer vision research. The wavelet transform is used to sharpen ambiguous edges in human motion images. The effect of shadows on image processing is also removed. Edge extraction can thus be successfully realized. This is an effective method for research on human motion analysis systems.
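
    A minimal stand-in for the wavelet sharpening step mentioned above, assuming PyWavelets is available: boost the detail sub-bands of a one-level Haar decomposition and reconstruct. The gain value and wavelet choice are illustrative assumptions:

        import numpy as np
        import pywt

        # Sharpen ambiguous edges by amplifying the detail sub-bands of a
        # single-level 2D Haar decomposition (a generic stand-in for the
        # paper's wavelet step, not its exact procedure).
        def wavelet_sharpen(img, gain=2.0):
            cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
            return pywt.idwt2((cA, (gain * cH, gain * cV, gain * cD)), "haar")

        frame = np.zeros((64, 64))
        frame[20:44, 20:44] = 1.0          # synthetic frame with a bright block
        sharp = wavelet_sharpen(frame)
        print(np.abs(np.gradient(sharp)[0]).max())  # edge response grows with gain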

  18. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  19. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    Science.gov (United States)

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  20. Radiologic diagnosis of Fitz-Hugh-Curtis syndrome

    Institute of Scientific and Technical Information of China (English)

    WANG Cheng-lin; GUO Xue-jun; YUAN Zhi-dong; SHI Qiao; HU Xiao-hong; FANG Lin

    2009-01-01

Fitz-Hugh-Curtis syndrome (FHCS) was reported by Curtis after he found a fibrous adhesion between the surface of the liver and the peritoneum in patients with gonococcal pelvic inflammation during laparoscopy in 1930, and the first report by Fitz-Hugh, describing acute gonococcal peritonitis in the right upper quadrant of the abdomen, was published in 1934 [1,2]. FHCS is believed to originate from an inflammation in the pelvis which may ascend along the right paracolic gutter toward the diaphragmatic surface of the liver, causing inflammation of the liver capsule with right upper abdominal pain [3-7].

  1. Francesca Hughes: Architecture of Error: Matter, Measure and the Misadventure of Precision

    DEFF Research Database (Denmark)

    Foote, Jonathan

    2016-01-01

Review of "Architecture of Error: Matter, Measure and the Misadventure of Precision" by Francesca Hughes (MIT Press, 2014).

  2. Computational botany methods for automated species identification

    CERN Document Server

    Remagnino, Paolo; Wilkin, Paul; Cope, James; Kirkup, Don

    2017-01-01

    This book discusses innovative methods for mining information from images of plants, especially leaves, and highlights the diagnostic features that can be implemented in fully automatic systems for identifying plant species. Adopting a multidisciplinary approach, it explores the problem of plant species identification, covering both the concepts of taxonomy and morphology. It then provides an overview of morphometrics, including the historical background and the main steps in the morphometric analysis of leaves together with a number of applications. The core of the book focuses on novel diagnostic methods for plant species identification developed from a computer scientist’s perspective. It then concludes with a chapter on the characterization of botanists' visions, which highlights important cognitive aspects that can be implemented in a computer system to more accurately replicate the human expert’s fixation process. The book not only represents an authoritative guide to advanced computational tools fo...

  3. A method to compute periodic sums

    CERN Document Server

    Gumerov, Nail A

    2013-01-01

In a number of problems in computational physics, a finite sum of kernel functions centered at $N$ particle locations located in a box in three dimensions must be extended by imposing periodic boundary conditions on the box boundaries. Even though the finite sum can be efficiently computed via fast summation algorithms, such as the fast multipole method (FMM), the periodized extension is usually treated via a different algorithm, Ewald summation, accelerated via the fast Fourier transform (FFT). A different approach to compute this periodized sum using just a blackbox finite fast summation algorithm is presented in this paper. The method splits the periodized sum into two parts. The first, comprising the contribution of all points outside a large sphere enclosing the box, and some of its neighbors, is approximated inside the box by a collection of kernel functions ("sources") placed on the surface of the sphere or using an expansion in terms of spectrally convergent local basis functions. The second part, compri...
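
    The object being accelerated is easy to state directly. The brute-force reference below sums a 1/r kernel over a truncated lattice of periodic images; it is the naive computation the paper's FMM-based splitting replaces, not the paper's method:

        import numpy as np

        # Brute-force periodic kernel sum: phi(y) = sum over sources j and over
        # periodic images n of q_j / |y - x_j - n*L|, truncated at |n_i| <= nmax.
        # (A true 1/r lattice sum is conditionally convergent; this truncation
        # is only illustrative.)
        rng = np.random.default_rng(1)
        L = 1.0
        X = rng.random((50, 3)) * L               # source particles in the box
        q = rng.standard_normal(50)               # source strengths

        def periodic_sum(y, nmax=2):
            total = 0.0
            shifts = np.arange(-nmax, nmax + 1) * L
            for nx in shifts:
                for ny in shifts:
                    for nz in shifts:
                        d = y - (X + np.array([nx, ny, nz]))
                        r = np.linalg.norm(d, axis=1)
                        r = np.where(r < 1e-12, np.inf, r)  # skip self-term
                        total += np.sum(q / r)              # 1/r kernel
            return total

        print(periodic_sum(np.array([0.5, 0.5, 0.5])))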

  4. Computational and instrumental methods in EPR

    CERN Document Server

    Bender, Christopher J

    2006-01-01

    Computational and Instrumental Methods in EPR Prof. Bender, Fordham University Prof. Lawrence J. Berliner, University of Denver Electron magnetic resonance has been greatly facilitated by the introduction of advances in instrumentation and better computational tools, such as the increasingly widespread use of the density matrix formalism. This volume is devoted to both instrumentation and computation aspects of EPR, while addressing applications such as spin relaxation time measurements, the measurement of hyperfine interaction parameters, and the recovery of Mn(II) spin Hamiltonian parameters via spectral simulation. Key features: Microwave Amplitude Modulation Technique to Measure Spin-Lattice (T1) and Spin-Spin (T2) Relaxation Times Improvement in the Measurement of Spin-Lattice Relaxation Time in Electron Paramagnetic Resonance Quantitative Measurement of Magnetic Hyperfine Parameters and the Physical Organic Chemistry of Supramolecular Systems New Methods of Simulation of Mn(II) EPR Spectra: Single Cryst...

  5. Jacobi method for signal subspace computation

    Science.gov (United States)

    Paul, Steffen; Goetze, Juergen

    1997-10-01

The Jacobi method for singular value decomposition is well suited for parallel architectures. Its application to signal subspace computations is well known. Basically, the subspace spanned by singular vectors of large singular values is separated from the subspace spanned by those of small singular values. The Jacobi algorithm computes the singular values and the corresponding vectors in random order. This requires sorting the result after convergence of the algorithm to select the signal subspace. A modification of the Jacobi method based on a linear objective function merges the sorting into the SVD algorithm at little extra cost. In fact, the complexity of the diagonal processor cells in a triangular array gets slightly larger. In this paper we present these extensions, in particular the modified algorithm for computing the rotation angles, and give an example of its usefulness for subspace separation.
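
    A compact one-sided (Hestenes) Jacobi sweep illustrates the baseline algorithm: it orthogonalizes column pairs until convergence, then the signal subspace is selected by thresholding the unsorted singular values. The paper's modification folds the sorting into the rotations themselves; that part is not reproduced here:

        import numpy as np

        def jacobi_svd(A, tol=1e-12, sweeps=30):
            # One-sided Jacobi: rotate column pairs of A until mutually orthogonal.
            U = A.astype(float).copy()
            n = U.shape[1]
            V = np.eye(n)
            for _ in range(sweeps):
                done = True
                for p in range(n - 1):
                    for q in range(p + 1, n):
                        alpha = U[:, p] @ U[:, p]
                        beta = U[:, q] @ U[:, q]
                        gamma = U[:, p] @ U[:, q]
                        if abs(gamma) <= tol * np.sqrt(alpha * beta):
                            continue
                        done = False
                        zeta = (beta - alpha) / (2 * gamma)
                        t = np.sign(zeta) / (abs(zeta) + np.sqrt(1 + zeta * zeta))
                        c = 1 / np.sqrt(1 + t * t)
                        s = c * t
                        G = np.array([[c, s], [-s, c]])   # plane rotation
                        U[:, [p, q]] = U[:, [p, q]] @ G
                        V[:, [p, q]] = V[:, [p, q]] @ G
                if done:
                    break
            sv = np.linalg.norm(U, axis=0)        # singular values (unsorted)
            return U / sv, sv, V

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 6)) @ np.diag([10, 9, 8, 0.1, 0.05, 0.01])
        U, sv, V = jacobi_svd(A)
        signal = V[:, sv > 1.0]                   # signal subspace: large values
        print(np.sort(sv)[::-1])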

  6. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method...

  7. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  8. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define ... a taxonomy of aspects around conservation, constraints and constitutive relations. Aspects of the ICAS-MoT toolbox are given to illustrate the functionality of a computer aided modelling tool, which incorporates an interface to MS Excel.

  9. Hugh Grant's Image Restoration Discourse: An Actor Apologizes.

    Science.gov (United States)

    Benoit, William L.

    1997-01-01

    Examines the strategies used by actor Hugh Grant (in his appearances on talk shows) to help restore his reputation after he was arrested for lewd behavior with a prostitute. Uses this case as a springboard to contrast entertainment image repair with political and corporate image repair, arguing that important situational differences can be…

  10. Additional Responses to Hugh Heclo's "On Thinking Institutionally"

    Science.gov (United States)

    Lincoln, Timothy D.; Fennell, Robert C.

    2011-01-01

    Issue 13:3 of this journal (July 2010) included a "Conversation" on Hugh Heclo's recent publication "On Thinking Institutionally" (Paradigm Publishers, 2008) with a book review by Robert Fennell and responses by Richard Ascough, Tat-siong Benny Liew, Michael McLain, and Lynne Westfield. Here we publish two additional responses to this same book.…

  11. Vernon Hughes and the Quest for the Proton's Spin

    Science.gov (United States)

    Jaffe, Robert L.

    2004-12-01

    Vernon Hughes dedicated much of the latter part of his career to the question "What carries the spin of the proton?" The question remains unanswered and near the top of the list of fascinating questions in QCD. I present a perspective on the question and Vernon's pursuit of an answer.

  12. Dr. Vernon W. Hughes, 81, authority on the subatomic

    CERN Multimedia

    Lavietes, S

    2002-01-01

    "Dr. Vernon W. Hughes, a Yale physicist whose investigation of particles called muons poked holes in standard subatomic theory and provided evidence for the existence of previously undetected matter, died at Yale-New Haven Hospital last Tuesday" (1/2 page).

  13. Hugh Hefner - the world's number one playboy / Neeme Raud

    Index Scriptorium Estoniae

    Raud, Neeme, 1969-

    2004-01-01

    On Hugh Hefner, founder of Playboy magazine, his philosophy of life, and his principles for shaping the content of the erotic publication. According to CEO Christie Hefner, the current goal of the international multimedia entertainment company is to fit the brand into different forms of media. Sidebar: What is Playboy?

  14. Differential Response: What to Make of the Existing Research? A Response to Hughes et al.

    Science.gov (United States)

    Drake, Brett

    2013-01-01

    This article is a response to "Issues in Differential Response", a review of the current evidence pertaining to differential response (DR) programs in child protective services (CPS). In my view, the Hughes, Rycus, Saunders-Adams, Hughes, and Hughes article suffers from several weaknesses. First, DR programs are critiqued as if they were…

  15. Computational Methods in Stochastic Dynamics Volume 2

    CERN Document Server

    Stefanou, George; Papadopoulos, Vissarion

    2013-01-01

    The considerable influence of inherent uncertainties on structural behavior has led the engineering community to recognize the importance of a stochastic approach to structural problems. Issues related to uncertainty quantification and its influence on the reliability of computational models are continuously gaining in significance. In particular, the problems of dynamic response analysis and reliability assessment of structures with uncertain system and excitation parameters have been the subject of continuous research over the last two decades as a result of the increasing availability of powerful computing resources and technology. This book is a follow-up to a previous book on the same subject (ISBN 978-90-481-9986-0) and focuses on advanced computational methods and software tools which can greatly assist in tackling complex problems in stochastic dynamic/seismic analysis and design of structures. The selected chapters are authored by some of the most active scholars in their respective areas and...

  16. Advances of evolutionary computation methods and operators

    CERN Document Server

    Cuevas, Erik; Oliva Navarro, Diego Alberto

    2016-01-01

    The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be effective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.

  17. Shifted power method for computing tensor eigenpairs.

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-10-01

    Recent work on eigenvalues and eigenvectors for tensors of order $m \ge 3$ has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form $Ax^{m-1} = \lambda x$ subject to $\|x\| = 1$, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
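
    A minimal NumPy sketch of the SS-HOPM iteration for the order-3 symmetric case is shown below. The fixed shift `alpha` and the simple convergence test are illustrative simplifications; the paper gives the precise condition on the shift that guarantees monotone convergence.

```python
import numpy as np

def ss_hopm(A, alpha=1.0, tol=1e-10, max_iter=1000, seed=0):
    """Shifted symmetric higher-order power method (SS-HOPM) sketch for a
    symmetric order-3 tensor A: iterate x <- normalize(A x x + alpha * x).
    For alpha large enough the iteration converges to an eigenpair
    (lambda, x) with A x x = lambda x and ||x|| = 1."""
    n = A.shape[0]
    x = np.random.default_rng(seed).standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        Axx = np.einsum('ijk,j,k->i', A, x, x)   # (A x^{m-1})_i for m = 3
        x_new = Axx + alpha * x                  # positive shift
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    lam = x @ np.einsum('ijk,j,k->i', A, x, x)   # generalized Rayleigh quotient
    return lam, x

# symmetrize a random tensor so the eigenpair definition applies
rng = np.random.default_rng(2)
T = rng.standard_normal((4, 4, 4))
perms = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
A = sum(T.transpose(p) for p in perms) / 6
print(ss_hopm(A))
```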

  18. Computational Methods for MOF/Polymer Membranes.

    Science.gov (United States)

    Erucar, Ilknur; Keskin, Seda

    2016-04-01

    Metal-organic framework (MOF)/polymer mixed matrix membranes (MMMs) have received significant interest in the last decade. MOFs are incorporated into polymers to make MMMs that exhibit improved gas permeability and selectivity compared with pure polymer membranes. The fundamental challenge in this area is to choose the appropriate MOF/polymer combinations for a gas separation of interest. Even if a single polymer is considered, there are thousands of MOFs that could potentially be used as fillers in MMMs. As a result, there has been a large demand for computational studies that can accurately predict the gas separation performance of MOF/polymer MMMs prior to experiments. We have developed computational approaches to assess gas separation potentials of MOF/polymer MMMs and used them to identify the most promising MOF/polymer pairs. In this Personal Account, we aim to provide a critical overview of current computational methods for modeling MOF/polymer MMMs. We give our perspective on the background, successes, and failures that led to developments in this area and discuss the opportunities and challenges of using computational methods for MOF/polymer MMMs.

  19. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  20. Authentication Methods in Cloud Computing: A Survey

    Directory of Open Access Journals (Sweden)

    Mahnoush Babaeizadeh

    2015-03-01

    Full Text Available This study presents a review of the various methods of authentication in the cloud environment. Authentication plays an important role in the security of Cloud Computing (CC). It protects Cloud Service Providers (CSPs) against various types of attacks, where the aim is to verify a user's identity when the user requests services from cloud servers. There are multiple authentication technologies that verify the identity of a user before granting access to resources.

  1. Computer Animation Based on Particle Methods

    Directory of Open Access Journals (Sweden)

    Rafal Wcislo

    1999-01-01

    Full Text Available The paper presents the main issues of a computer animation of a set of elastic macroscopic objects based on the particle method. The main goal of the generated animations is to achieve very realistic movements in a scene observed on the computer display. The objects (solid bodies) interact mechanically with each other. The movements and deformations of solids are calculated using the particle method. Phenomena connected with the behaviour of solids in the gravitational field, their deformations caused by collisions, and their interactions with an optional liquid medium are simulated. The simulation of the liquid is performed using the cellular automata method. The paper presents both simulation schemes (the particle method and cellular automata rules) and the method of combining them in a single animation program. In order to speed up the execution of the program, a parallel version based on a network of workstations was developed. The paper describes the methods of parallelization and considers the problems of load balancing, collision detection, process synchronization, and distributed control of the animation.
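
    The flavor of such a particle-method animation step is easy to sketch. Below, an elastic body is modelled as particles connected by damped springs and advanced with semi-implicit Euler, including a crude floor collision. This is a generic illustration, not the authors' code; all parameter values are assumptions.

```python
import numpy as np

# Elastic body as particles + damped springs, stepped with semi-implicit Euler.
g = np.array([0.0, -9.81])
k_spring, damping, dt, mass = 200.0, 1.0, 1e-3, 0.1

pos = np.array([[0.0, 1.0], [0.1, 1.0], [0.05, 1.1]])  # a small triangle
vel = np.zeros_like(pos)
springs = [(0, 1), (1, 2), (2, 0)]
rest = [np.linalg.norm(pos[i] - pos[j]) for i, j in springs]

for step in range(2000):                      # 2 seconds of simulated time
    force = np.tile(mass * g, (len(pos), 1)) - damping * vel
    for (i, j), L0 in zip(springs, rest):
        d = pos[j] - pos[i]
        L = np.linalg.norm(d)
        f = k_spring * (L - L0) * d / L       # Hooke spring along the edge
        force[i] += f
        force[j] -= f
    vel += dt * force / mass
    pos += dt * vel
    ground = pos[:, 1] < 0.0                  # simple collision with the floor
    vel[ground, 1] *= -0.5                    # inelastic bounce
    pos[ground, 1] = 0.0

print(pos)                                    # body at rest near the floor
```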

  2. COMPUTATIONALLY EFFICIENT ESPRIT METHOD FOR DIRECTION FINDING

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, a low-complexity ESPRIT algorithm based on the power method and Orthogonal-triangular (QR) decomposition is presented for direction finding. It requires neither a priori knowledge of the source number nor a predetermined threshold separating the signal and noise eigenvalues. Firstly, based on the estimate of the noise subspace obtained by the power method, a novel source number detection method without eigendecomposition is proposed using QR decomposition. Furthermore, the eigenvectors of the signal subspace can be determined from the Q matrix, and the directions of the signals can then be computed by the ESPRIT algorithm. To determine the source number and subspace, the computational complexity of the proposed algorithm is approximately $(2\log_2 n + 2.67)M^3$, where $n$ is the power of the covariance matrix and $M$ is the number of array elements. Compared with the Singular Value Decomposition (SVD) based algorithm, it offers substantial computational savings with comparable estimation performance. The simulation results demonstrate its effectiveness and robustness.
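
    For orientation, the sketch below implements the standard eigendecomposition-based ESPRIT for a uniform linear array; the paper's contribution is precisely to replace this eigendecomposition and the source-number threshold with a power-method/QR scheme. Array geometry, source angles and noise level are illustrative assumptions.

```python
import numpy as np

def esprit_doa(X, d):
    """Textbook ESPRIT for a uniform linear array with half-wavelength
    spacing: estimate d arrival angles from the snapshot matrix X (M x N)."""
    M, N = X.shape
    R = X @ X.conj().T / N                     # sample covariance
    w, E = np.linalg.eigh(R)
    Es = E[:, np.argsort(w)[::-1][:d]]         # signal subspace
    Phi = np.linalg.pinv(Es[:-1]) @ Es[1:]     # rotational invariance
    phases = np.angle(np.linalg.eigvals(Phi))
    return np.degrees(np.arcsin(phases / np.pi))

# two sources at -10 and 25 degrees, 8-element array, 200 snapshots
M, N = 8, 200
angles = np.radians([-10.0, 25.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))
rng = np.random.default_rng(3)
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
print(np.sort(esprit_doa(X, 2)))               # approx [-10, 25]
```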

  3. Accelerated Matrix Element Method with Parallel Computing

    CERN Document Server

    Schouten, Doug; Stelzer, Bernd

    2014-01-01

    The matrix element method utilizes ab initio calculations of probability densities as powerful discriminants for processes of interest in experimental particle physics. The method has already been used successfully at previous and current collider experiments. However, the computational complexity of this method for final states with many particles and degrees of freedom sets it at a disadvantage compared to supervised classification methods such as decision trees, k nearest-neighbour, or neural networks. This note presents a concrete implementation of the matrix element technique using graphics processing units. Due to the intrinsic parallelizability of multidimensional integration, dramatic speedups can be readily achieved, which makes the matrix element technique viable for general usage at collider experiments.

  4. Delamination detection using methods of computational intelligence

    Science.gov (United States)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable for preventing the potential risk of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, so such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e. changes in natural frequencies) are considered as inputs to identify the existence, location, and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets; k-means clustering is effectively employed to reduce the size of the datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.

  5. Markov chain Monte Carlo approach to parameter estimation in the FitzHugh-Nagumo model.

    Science.gov (United States)

    Jensen, Anders Chr; Ditlevsen, Susanne; Kessler, Mathieu; Papaspiliopoulos, Omiros

    2012-10-01

    Excitability is observed in a variety of natural systems, such as neuronal dynamics, cardiovascular tissues, or climate dynamics. The stochastic FitzHugh-Nagumo model is a prominent example of an excitable system. To validate the practical use of a model, the first step is to estimate model parameters from experimental data. This is not an easy task because of the inherent nonlinearity necessary to produce the excitable dynamics, and because the two coordinates of the model move on different time scales. Here we propose a Bayesian framework for parameter estimation, which can handle multidimensional nonlinear diffusions with large time scale separation. The estimation method is illustrated on simulated data.
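
    The sketch below shows the two ingredients in toy form: an Euler-Maruyama simulation of a stochastic FitzHugh-Nagumo model, and a random-walk Metropolis chain for its time-scale parameter. The parameterization and the crude fixed-seed likelihood stand-in are assumptions made for illustration; the paper's Bayesian framework uses a proper diffusion likelihood.

```python
import numpy as np

def simulate_fhn(eps, gamma=1.5, beta=0.8, s=0.3, sigma=0.1,
                 dt=0.01, n=2000, seed=4):
    """Euler-Maruyama path of a stochastic FitzHugh-Nagumo model:
      dV = (1/eps)(V - V^3 - U + s) dt + sigma dW,  dU = (gamma V - U + beta) dt.
    This parameterization is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    V, U = np.zeros(n), np.zeros(n)
    for t in range(n - 1):
        V[t + 1] = V[t] + dt / eps * (V[t] - V[t] ** 3 - U[t] + s) \
                   + sigma * np.sqrt(dt) * rng.standard_normal()
        U[t + 1] = U[t] + dt * (gamma * V[t] - U[t] + beta)
    return V

data = simulate_fhn(eps=0.1)

def log_like(eps):
    # crude fixed-seed stand-in for a likelihood (illustration only)
    return -np.sum((simulate_fhn(eps) - data) ** 2)

eps_cur, ll_cur = 0.2, log_like(0.2)
rng = np.random.default_rng(5)
chain = []
for _ in range(500):                      # random-walk Metropolis on eps
    eps_prop = float(np.clip(eps_cur + 0.02 * rng.standard_normal(), 0.02, 1.0))
    ll_prop = log_like(eps_prop)
    if np.log(rng.random()) < ll_prop - ll_cur:
        eps_cur, ll_cur = eps_prop, ll_prop
    chain.append(eps_cur)
print(np.mean(chain[250:]))               # should sit near the true eps = 0.1
```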

  6. Meshfree methods for computational fluid dynamics

    Directory of Open Access Journals (Sweden)

    Jícha M.

    2013-04-01

    Full Text Available The paper deals with the convergence problem of the SPH (Smoothed Particle Hydrodynamics) meshfree method for the solution of fluid dynamics tasks. In the introductory part, fundamental aspects of meshfree methods, their definition, computational approaches and classification are discussed. In the following part, the methods of local integral representation, to which SPH belongs, are analyzed, and the RKPM (Reproducing Kernel Particle Method) is described in particular. The contribution also analyzes the influence of boundary conditions on the consistency of the SPH approximation, which has a direct impact on the convergence of the method. A classical boundary condition in the form of virtual particles does not ensure a sufficient order of consistency near the boundary of the definition domain of the task. This problem is solved by using ghost particles as a boundary condition, which was implemented into the SPH code as part of this work. Further, several numerical aspects linked with the SPH method are described. In the concluding part, results of the application of the SPH method with ghost particles to the 2D shock tube example are presented, together with results of tests of several parameters and modifications of the SPH code.
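
    As a small illustration of the SPH machinery discussed above, the sketch below evaluates the summation density with the standard 2D cubic spline kernel. Near a domain boundary the kernel support is truncated and the estimate drops, which is the consistency defect that virtual or ghost particles are introduced to repair. Particle spacing and smoothing length are illustrative choices.

```python
import numpy as np

def cubic_spline_w(r, h):
    """Standard 2D cubic spline SPH kernel, normalization 10/(7*pi*h^2)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h ** 2)
    w = np.where(q < 1.0, 1 - 1.5 * q ** 2 + 0.75 * q ** 3,
                 np.where(q < 2.0, 0.25 * (2 - q) ** 3, 0.0))
    return sigma * w

def sph_density(pos, m, h):
    """Summation density: rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (m * cubic_spline_w(r, h)).sum(axis=1)

# particles on a regular grid: interior density should be close to m / dx^2
dx, m = 0.1, 1.0
xs = np.arange(0, 1, dx)
pos = np.array([[x, y] for x in xs for y in xs])
rho = sph_density(pos, m, h=1.3 * dx)
# the corner value is deficient because the kernel support is truncated
print("interior:", rho[5 * len(xs) + 5], "corner:", rho[0])
```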

  7. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first...

  8. Evolutionary Computing Methods for Spectral Retrieval

    Science.gov (United States)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seugwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Geivanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
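
    The following toy sketch mirrors the retrieval loop described above: a synthetic spectrum is generated from a two-parameter forward model, and simulated annealing minimizes the misfit (fitness function) between observed and synthetic spectra. The forward model, parameters, and cooling schedule are invented for illustration and are not the retrieval code described in the record.

```python
import numpy as np

def synthetic_spectrum(params, wavelengths):
    """Toy forward model: a Gaussian absorption line whose depth and centre
    stand in for, e.g., a trace-gas concentration (illustrative only)."""
    depth, centre = params
    return 1.0 - depth * np.exp(-0.5 * ((wavelengths - centre) / 0.02) ** 2)

def fitness(params, wl, observed):
    return np.sum((synthetic_spectrum(params, wl) - observed) ** 2)

rng = np.random.default_rng(6)
wl = np.linspace(1.0, 2.0, 400)
true = np.array([0.3, 1.4])
observed = synthetic_spectrum(true, wl) + 0.01 * rng.standard_normal(wl.size)

# simulated annealing over the 2-parameter model
x = np.array([0.5, 1.8])
f = fitness(x, wl, observed)
T = 1.0
for step in range(5000):
    cand = x + 0.02 * rng.standard_normal(2)
    fc = fitness(cand, wl, observed)
    if fc < f or rng.random() < np.exp((f - fc) / T):  # Metropolis acceptance
        x, f = cand, fc
    T *= 0.999                                         # geometric cooling
print(x)   # should approach [0.3, 1.4]
```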

  9. Langston Hughes and his poem “Harlem”

    Institute of Scientific and Technical Information of China (English)

    郝红

    2005-01-01

    James Langston Hughes was born February 1, 1902, in Joplin, Missouri. His parents divorced when he was a small child, and his father moved to Mexico. He was raised by his grandmother until he was thirteen, when he moved to Lincoln, Illinois, to live with his mother and her husband, eventually settling in Cleveland, Ohio. It was in Lincoln, Illinois, that Hughes began writing poetry. Following graduation, he spent a year in Mexico and a year at Columbia University. During these years, he held odd jobs as an assistant cook, launderer, and busboy, and traveled to Africa and Europe working as a seaman. In November 1924, he moved to Washington,

  10. Coastal Storm Surge Analysis: Computational System, Report 2: Intermediate Submission No. 1.2

    Science.gov (United States)

    2011-03-01

    [No abstract recovered: the record text consists of title-page and report-form fragments naming the Renaissance Computing Institute (100 Europa Drive, Suite 540, Chapel Hill, NC 27517), ARCADIS, the contributors Hugh Roberts, John Atkinson, and Shan Zou, and a citation to Proc., XIII Conf. on Computational Methods in Water Resources.]

  11. Seven Cases of Fitz-Hugh-Curtis Syndrome

    Institute of Scientific and Technical Information of China (English)

    焦浦生

    2000-01-01

    FHC syndrome (Fitz-Hugh-Curtis syndrome) is an inflammatory disease of the liver capsule secondary to pelvic infection. Its typical clinical presentation is right upper abdominal pain and fever accompanied by pelvic inflammatory disease. Seven cases of FHC syndrome diagnosed and treated at our hospital in recent years are summarized below.

  12. Bifurcations of the FitzHugh-Nagumo (FHN) System

    Directory of Open Access Journals (Sweden)

    Fernando Ongay Larios

    2011-01-01

    Full Text Available The parametric family of FitzHugh-Nagumo systems is rich in bifurcations (Rocsoreanu et al., 2000). In this article we study the saddle-node and Hopf bifurcations of this family from a mathematical point of view, and the bifurcation sets in parameter space are completely described.

  13. The Spin Structure of the Nucleon:. a Hughes Legacy

    Science.gov (United States)

    Cates, Gordon D.

    2004-12-01

    More than any other individual, Vernon Hughes can be pointed to as the father of the experimental investigation of nucleon spin structure. Even theoretical development in this area was spurred on by Vernon's pioneering efforts to make the control of spin degrees of freedom an experimental reality. This talk traces some of Vernon's work in this area, as well as examining, briefly and not in a complete fashion, some of the other work that can be looked upon as Vernon's legacy.

  14. Remembrance of Hugh E. Huxley, a founder of our field.

    Science.gov (United States)

    Pollard, Thomas D; Goldman, Yale E

    2013-09-01

    Hugh E. Huxley (1924-2013) carried out structural studies by X-ray fiber diffraction and electron microscopy that established how muscle contracts. Huxley's sliding-filament mechanism, with an ATPase motor protein taking steps along an actin filament, established the paradigm not only for muscle contraction but also for other motile systems: actin with unconventional myosins, microtubules with dynein, and microtubules with kinesin.

  15. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text familiarises readers with interval arithmetic and related tools for obtaining reliable, validated results and logically correct decisions in a variety of geometric computations, along with the means for alleviating the effects of rounding errors. It also considers computations on geometric point-sets, which are neither robust nor reliable when processed with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA, a new and powerful algorithm which improves many geometric computations and makes th...
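
    A toy interval type illustrating the enclosure idea is sketched below: if the interval enclosing a geometric predicate (here a 2x2 determinant) does not contain zero, its sign, and hence the geometric decision, is certain. In this example the interval straddles zero, so the sign is not certified and an exact fallback (such as ESSA) would be required. A real implementation must round the endpoints outward; plain floats are used here only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """Toy interval type; a production version needs directed (outward)
    rounding on the endpoints."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def contains(self, x):
        return self.lo <= x <= self.hi

# sign of a 2x2 determinant, a core primitive in geometric predicates
a, b, c, d = (Interval(1.1, 1.1), Interval(2.0, 2.0),
              Interval(0.9, 0.9), Interval(1.6, 1.7))
det = a * d - b * c
print(det.lo, det.hi, not det.contains(0.0))   # sign not certified here
```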

  16. Comparison of Methods of Height Anomaly Computation

    Science.gov (United States)

    Mazurova, E.; Lapshin, A.; Menshova, A.

    2012-04-01

    As of today, accurate determination of height anomaly is one of the most difficult problems of geodesy, even with the sustained refinement of mathematical methods and growth in computing power. The most effective methods of height anomaly computation are based on discrete linear transformations, such as the Fast Fourier Transform (FFT), the Short-Time Fourier Transform (STFT), and the Fast Wavelet Transform (FWT). The main drawback of the classical FFT is weak localization in the time domain. If it is necessary to identify the time interval in which a frequency is present, the STFT is used; it detects the presence of a frequency in the signal and the interval of its presence, which expands the possibilities of the method in comparison with the classical Fourier Transform. However, subject to Heisenberg's uncertainty principle, it is impossible to tell precisely which frequency is present at a given moment of time (one can speak only of a range of frequencies), and it is impossible to tell at precisely what moment of time a given frequency is present (one can speak only of a time span). A wavelet transform reduces the influence of Heisenberg's uncertainty principle on the obtained time-and-frequency representation of the signal: low frequencies are represented in more detail relative to time, and high frequencies in more detail relative to frequency. The paper summarizes the results of height anomaly calculations done by the FFT, STFT and FWT methods and presents 3-D models of the calculation results. Key words: Fast Fourier Transform (FFT), Short-Time Fourier Transform (STFT), Fast Wavelet Transform (FWT), Heisenberg's uncertainty principle.
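
    The time-localization point is easy to see in code. The sketch below is a plain Hann-windowed STFT: the peak frequency bin shifts between frames as the signal's frequency changes, which a single global FFT cannot reveal. Window length and hop size are arbitrary illustrative choices.

```python
import numpy as np

def stft(signal, win_len=128, hop=32):
    """Plain short-time Fourier transform with a Hann window: each column
    of the result is the spectrum of one windowed frame, trading frequency
    resolution for time localization (Heisenberg's trade-off)."""
    window = np.hanning(win_len)
    frames = [signal[i:i + win_len] * window
              for i in range(0, len(signal) - win_len + 1, hop)]
    return np.array([np.fft.rfft(f) for f in frames]).T

# test signal: 5 Hz for the first two seconds, 20 Hz afterwards
fs = 256
t = np.arange(0, 4, 1 / fs)
sig = np.where(t < 2, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 20 * t))
S = np.abs(stft(sig))
print(S.shape)                                 # (freq bins, time frames)
print(S[:, 0].argmax(), S[:, -1].argmax())     # peak bin moves from ~2 to ~10
```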

  17. Robust level set method for computer vision

    Science.gov (United States)

    Si, Jia-rui; Li, Xiao-pei; Zhang, Hong-wei

    2005-12-01

    The level set method provides powerful numerical techniques for analyzing and solving interface evolution problems based on partial differential equations. It is particularly appropriate for image segmentation and other computer vision tasks. However, every image contains noise, and noise is the main obstacle to image segmentation: in the level set method, the propagating fronts are apt to leak through gaps at locations of missing or fuzzy boundaries caused by noise. The robust level set method proposed in this paper is based on the adaptive Gaussian filter. The fast marching method provides a fast implementation of the level set method, and the adaptive Gaussian filter can adapt itself to the local characteristics of an image by adjusting its variance. Thus, different parts of an image can be smoothed in different ways according to the degree of noisiness and the type of edges. Experimental results demonstrate that the adaptive Gaussian filter can greatly reduce the noise without distorting the image and makes the level set method more robust and accurate.
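
    In the spirit of the adaptive Gaussian filter (though a simplification of a truly per-pixel variance), the sketch below blends a heavily smoothed and a lightly smoothed image according to local gradient magnitude: flat noisy regions receive strong smoothing while edges stay sharp. The two-sigma blend and all constants are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def adaptive_gaussian(img, sigma_flat=3.0, sigma_edge=0.5):
    """Edge-aware smoothing: weight a strongly blurred image (for flat,
    noisy regions) against a lightly blurred one (near edges) by the
    normalized local gradient magnitude."""
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    w = grad / (grad.max() + 1e-12)        # ~0 in flat areas, ~1 at edges
    return (1 - w) * gaussian_filter(img, sigma_flat) \
           + w * gaussian_filter(img, sigma_edge)

# noisy step edge: smoothing flattens the noise but keeps the edge sharp
rng = np.random.default_rng(7)
img = np.zeros((64, 64))
img[:, 32:] = 1.0
noisy = img + 0.2 * rng.standard_normal(img.shape)
print(np.abs(adaptive_gaussian(noisy) - img).mean())   # small residual
```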

  18. A computational method for sharp interface advection

    DEFF Research Database (Denmark)

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total...

  19. Computer-Aided Drug Design Methods.

    Science.gov (United States)

    Yu, Wenbo; MacKerell, Alexander D

    2017-01-01

    Computational approaches are useful tools to interpret and guide experiments to expedite the antibiotic drug design process. Structure-based drug design (SBDD) and ligand-based drug design (LBDD) are the two general types of computer-aided drug design (CADD) approaches in existence. SBDD methods analyze macromolecular target 3-dimensional structural information, typically of proteins or RNA, to identify key sites and interactions that are important for their respective biological functions. Such information can then be utilized to design antibiotic drugs that can compete with essential interactions involving the target and thus interrupt the biological pathways essential for survival of the microorganism(s). LBDD methods focus on known antibiotic ligands for a target to establish a relationship between their physiochemical properties and antibiotic activities, referred to as a structure-activity relationship (SAR), information that can be used for optimization of known drugs or guide the design of new drugs with improved activity. In this chapter, standard CADD protocols for both SBDD and LBDD will be presented with a special focus on methodologies and targets routinely studied in our laboratory for antibiotic drug discoveries.

  20. A computational method for determining XBT depths

    Directory of Open Access Journals (Sweden)

    J. Stark

    2011-08-01

    Full Text Available A new technique for determining the depth of expendable bathythermographs (XBTs) is developed. This new method uses a forward-stepping calculation which incorporates all of the forces on the XBT devices during their descent. Of particular note are the drag forces, which are calculated using a new drag coefficient expression. That expression, obtained entirely from computational fluid dynamic modeling, accounts for local variations in the ocean environment. Consequently, the method allows for accurate determination of depths for any local temperature environment. The results, which are entirely based on numerical simulation, are compared with an experimental descent of an LM-Sippican T-5 XBT. It is found that the calculated depths differ by less than 3% from depth estimates using the industry-standard fall-rate equation (FRE). Furthermore, the differences decrease with depth. The computational model allows an investigation of the fluid patterns along the outer surface of the probe as well as in the interior channel. The simulations take account of complex flow phenomena such as laminar-turbulent transition and flow separation.
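
    A bare-bones version of such a forward-stepping depth calculation is sketched below: weight, buoyancy and quadratic drag are balanced and integrated in time. All physical constants are rough illustrative values, and the constant drag coefficient is exactly what the paper replaces with an environment-dependent expression.

```python
import numpy as np

# Forward-stepping depth integration for a falling XBT probe.
# All parameter values are illustrative assumptions, not the paper's.
m, g = 0.8, 9.81             # probe mass (kg), gravity (m/s^2)
rho, V = 1025.0, 2.2e-4      # seawater density (kg/m^3), probe volume (m^3)
Cd, A = 0.45, 1.3e-3         # drag coefficient (constant here), frontal area (m^2)
dt = 0.01
v, z = 0.0, 0.0
times, depths = [0.0], [0.0]
for step in range(int(120 / dt)):            # two minutes of descent
    drag = 0.5 * rho * Cd * A * v * v
    a = (m * g - rho * V * g - drag) / m     # net downward acceleration
    v += a * dt
    z += v * dt
    times.append(times[-1] + dt)             # depth-vs-time profile
    depths.append(z)
print(f"terminal speed ~ {v:.2f} m/s, depth after 120 s ~ {z:.1f} m")
```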

  1. A computational method for determining XBT depths

    Directory of Open Access Journals (Sweden)

    J. Stark

    2011-11-01

    Full Text Available A new technique for determining the depth of expendable bathythermographs (XBTs is developed. This new method uses a forward-stepping calculation which incorporates all of the forces on the XBT devices during their descent. Of particular note are drag forces which are calculated using a new drag coefficient expression. That expression, obtained entirely from computational fluid dynamic modeling, accounts for local variations in the ocean environment. Consequently, the method allows for accurate determination of depths for any local temperature environment. The results, which are entirely based on numerical simulation, are compared with the experiments of LM Sippican T-5 XBT probes. It is found that the calculated depths differ by less than 3% from depth estimates using the standard fall-rate equation (FRE. Furthermore, the differences decrease with depth. The computational model allows an investigation of the fluid flow patterns along the outer surface of the probe as well as in the interior channel. The simulations take account of complex flow phenomena such as laminar-turbulent transition and flow separation.

  2. A computational method for sharp interface advection

    Science.gov (United States)

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-11-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM extension and is published as open source.
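
    A one-dimensional caricature of the VOF bookkeeping is sketched below: the fluid volume crossing each cell face per time step is computed with an upwind donor-cell estimate and used to update the volume fraction. It shows only the flux form and the conservation/boundedness bookkeeping; isoAdvector's actual novelty, the geometric isosurface reconstruction on polyhedral cells, has no 1D counterpart, and the upwind estimate here smears the interface in exactly the way the geometric step is designed to avoid.

```python
import numpy as np

# 1D volume-fraction advection with a uniform velocity u > 0.
N, dx, u, dt = 100, 1.0 / 100, 0.5, 0.005      # CFL = u*dt/dx = 0.25
alpha = np.zeros(N)
alpha[20:40] = 1.0                              # a slab of fluid 1

for step in range(200):
    # volume of fluid 1 crossing face i (between cells i-1 and i) per step,
    # normalized by the cell volume; donor cell is the upwind cell i-1
    flux = u * dt / dx * alpha[np.arange(N) - 1]    # periodic upwind donor
    alpha += flux - np.roll(flux, -1)               # inflow minus outflow
    alpha = np.clip(alpha, 0.0, 1.0)                # boundedness safeguard
print(alpha.sum() * dx)                             # total volume conserved ~ 0.2
```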

  3. New numerical analysis method in computational mechanics: composite element method

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A new type of FEM, called CEM (composite element method), is proposed to solve the static and dynamic problems of engineering structures with high accuracy and efficiency. The core of this method is to define two sets of coordinate systems for describing the DOFs after discretizing the structure, i.e. the nodal coordinate system UFEM(ξ) for employing the conventional FEM, and the field coordinate system UCT(ξ) for utilizing classical theory. Coupling these two sets of functional expressions then yields the composite displacement field U(ξ) of the CEM. The computations of the stiffness and mass matrices can follow the conventional procedure of the FEM. Since the CEM inherits some good properties of the conventional FEM and of classical analytical methods, it is highly versatile for complex geometric shapes and offers excellent approximation. Many examples are presented to demonstrate the ability of the CEM.

  4. New numerical analysis method in computational mechanics: composite element method

    Institute of Scientific and Technical Information of China (English)

    曾攀

    2000-01-01

    A new type of FEM, called CEM (composite element method), is proposed to solve the static and dynamic problems of engineering structures with high accuracy and efficiency. The core of this method is to define two sets of coordinate systems for describing the DOFs after discretizing the structure, i.e. the nodal coordinate system UFEM(ζ) for employing the conventional FEM, and the field coordinate system UCT(ζ) for utilizing classical theory. Coupling these two sets of functional expressions then yields the composite displacement field U(ζ) of the CEM. The computations of the stiffness and mass matrices can follow the conventional procedure of the FEM. Since the CEM inherits some good properties of the conventional FEM and of classical analytical methods, it is highly versatile for complex geometric shapes and offers excellent approximation. Many examples are presented to demonstrate the ability of the CEM.

  5. Computational methods applied to wind tunnel optimization

    Science.gov (United States)

    Lindsay, David

    This report describes computational methods developed for optimizing the nozzle of a three-dimensional subsonic wind tunnel. This requires determination of a shape that delivers flow to the test section, typically with a speed increase by a factor of 7 or more and a velocity uniformity of 0.25% or better, in a compact length, without introducing boundary layer separation. The need for high precision, smooth solutions, and three-dimensional modeling required the development of special computational techniques. These include: (1) alternative formulations to Neumann and Dirichlet boundary conditions, to deal with overspecified, ill-posed, or cyclic problems, and to reduce the discrepancy between numerical solutions and boundary conditions; (2) modification of the Finite Element Method to obtain solutions with numerically exact conservation properties; (3) a Matlab implementation of general degree Finite Element solvers for various element designs in two and three dimensions, exploiting vector indexing to obtain optimal efficiency; (4) derivation of optimal quadrature formulas for integration over simplexes in two and three dimensions, and development of a program for semi-automated generation of formulas for any degree and dimension; (5) a modification of a two-dimensional boundary layer formulation to provide accurate flow conservation in three dimensions, and modification of the algorithm to improve stability; (6) development of multi-dimensional spline functions to achieve smoother solutions in three dimensions by post-processing, new three-dimensional elements for C1 basis functions, and a program to assist in the design of elements with higher continuity; and (7) a development of ellipsoidal harmonics and Lamé's equation, with generalization to any dimension and a demonstration that Cartesian, cylindrical, spherical, spheroidal, and sphero-conical harmonics are all limiting cases. The report includes a description of the Finite Difference, Finite Volume, and domain remapping

  6. A Computational Method for Sharp Interface Advection

    CERN Document Server

    Roenby, Johan; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple 2D and 3D interface advection problems ...

  7. Computational Evaluation of the Traceback Method

    Science.gov (United States)

    Kol, Sheli; Nir, Bracha; Wintner, Shuly

    2014-01-01

    Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

  8. Computational methods for Gene Orthology inference

    Science.gov (United States)

    Kristensen, David M.; Wolf, Yuri I.; Mushegian, Arcady R.

    2011-01-01

    Accurate inference of orthologous genes is a pre-requisite for most comparative genomics studies, and is also important for functional annotation of new genomes. Identification of orthologous gene sets typically involves phylogenetic tree analysis, heuristic algorithms based on sequence conservation, synteny analysis, or some combination of these approaches. The most direct tree-based methods typically rely on the comparison of an individual gene tree with a species tree. Once the two trees are accurately constructed, orthologs are straightforwardly identified by the definition of orthology as those homologs that are related by speciation, rather than gene duplication, at their most recent point of origin. Although ideal for the purpose of orthology identification in principle, phylogenetic trees are computationally expensive to construct for large numbers of genes and genomes, and they often contain errors, especially at large evolutionary distances. Moreover, in many organisms, in particular prokaryotes and viruses, evolution does not appear to have followed a simple ‘tree-like’ mode, which makes conventional tree reconciliation inapplicable. Other, heuristic methods identify probable orthologs as the closest homologous pairs or groups of genes in a set of organisms. These approaches are faster and easier to automate than tree-based methods, with efficient implementations provided by graph-theoretical algorithms enabling comparisons of thousands of genomes. Comparisons of these two approaches show that, despite conceptual differences, they produce similar sets of orthologs, especially at short evolutionary distances. Synteny also can aid in identification of orthologs. Often, tree-based, sequence similarity- and synteny-based approaches can be combined into flexible hybrid methods. PMID:21690100

  9. The synchronization of FitzHugh-Nagumo neuron network coupled by gap junction

    Institute of Scientific and Technical Information of China (English)

    Zhan Yong; Zhang Su-Hua; Zhao Tong-Jun; An Hai-Long; Zhang Zhen-Dong; Han Ying-Rong; Liu Hui; Zhang Yu-Hong

    2008-01-01

    It is well known that strong coupling can synchronize a network of nonlinear oscillators, and synchronization provides a basis for the remarkable computational performance of the brain. In this paper a FitzHugh-Nagumo neuron network is constructed, and the dependence of the synchronization on the coupling strength, the noise intensity and the size of the neuron network is discussed. The results indicate that coupling among neurons works to improve the synchronization, while noise increases the neurons' random dynamics and the local fluctuations; the larger the network, the worse the synchronization. The dependence of the synchronization on the strengths of electrical-synapse coupling and chemical-synapse coupling is also discussed, showing that electrical-synapse coupling can greatly enhance the synchronization of the neuron network.
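
    A minimal sketch of such an experiment is given below, assuming a common textbook form of the FitzHugh-Nagumo equations and all-to-all gap-junction (diffusive, electrical) coupling; the across-network variance of the membrane variable serves as a simple desynchronization score. Equations and parameter values are assumptions, not taken from the paper.

```python
import numpy as np

# N FitzHugh-Nagumo neurons with diffusive gap-junction coupling and noise.
N, g_gap, sigma, dt, steps = 20, 0.5, 0.05, 0.01, 20000
rng = np.random.default_rng(8)
v = rng.uniform(-1, 1, N)          # membrane variables, random initial phases
w = np.zeros(N)                    # recovery variables
I = 0.5                            # constant drive that makes each unit spike
desync = []
for t in range(steps):
    coupling = g_gap * (v.mean() - v)       # all-to-all gap junctions
    dv = v - v ** 3 / 3 - w + I + coupling
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    v += dt * dv + sigma * np.sqrt(dt) * rng.standard_normal(N)
    w += dt * dw
    desync.append(v.var())
# time-averaged variance near 0 indicates full synchronization
print(np.mean(desync[steps // 2:]))
```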

  10. Hugh de Wardener - the Man and the Scientist.

    Science.gov (United States)

    Phillips, M E; de Wardener, S

    2016-02-01

    Hugh de Wardener died on 29th September 2013, ten days before his 98th birthday. He had a diverse upbringing and qualified in Medicine in 1939. He joined the army but was captured in 1942 and imprisoned in Singapore and Thailand until 1945. His clinical care of fellow prisoners was highly regarded. He preserved their clinical records and used them, post-war, to write two Lancet papers. One showed, for the first time, that Wernicke's encephalopathy could be caused by severe malnutrition and cured by small doses of vitamin B1. His later academic interests were based on the emphasis he placed on renal physiology. This applied to the topic most associated with his name: natriuretic hormone. Whilst de Wardener never isolated this hormone, his early experiments, demonstrating that a third factor other than GFR and aldosterone affected renal sodium transport, were substantiated by others. Hugh had many research interests: pyelonephritis, renal histology, maintenance dialysis and metabolic/renal bone disease. In his later years he conducted intensive research into the role of sodium and salt in the aetiology of essential hypertension. Hugh was president of the International Society of Nephrology (1969-72) and the UK Renal Association (1975-78). He received many awards and recognitions from across the world, many of them after his (so-called) retirement. Throughout his career he never neglected the care of his patients. As Bob Schrier wrote in his obituary of de Wardener in Kidney International, he was a caring physician whose dedication to his patients' welfare was exemplary.

  11. Computational Studies of Protein Hydration Methods

    Science.gov (United States)

    Morozenko, Aleksandr

    It is widely appreciated that water plays a vital role in proteins' functions. Long-range proton transfer inside proteins is usually carried out by the Grotthuss mechanism and requires a chain of hydrogen bonds composed of internal water molecules and amino acid residues of the protein. In other cases, water molecules can facilitate an enzyme's catalytic reactions by becoming a temporary proton donor/acceptor. Yet a reliable way of predicting water in the protein interior is still not available to the biophysics community. This thesis presents computational studies performed to gain insight into the problem of fast and accurate prediction of potential water sites inside the internal cavities of a protein. Specifically, we focus on achieving correspondence between results obtained from computational experiments and experimental data available from X-ray structures. An overview of existing methods for predicting water molecules in the interior of a protein, along with a discussion of the trustworthiness of these predictions, is a second major subject of this thesis. A description of the differences between water molecules in various media (gas, liquid, and the protein interior) and theoretical aspects of designing an adequate model of water for the protein environment are discussed in chapters 3 and 4. In chapter 5, we discuss recently developed methods for placing water molecules into the internal cavities of a protein. We propose a new methodology based on the principle of docking water molecules to the protein body, which achieves a higher degree of agreement with the experimental data reported in protein crystal structures than other techniques in the world of biophysical software. The new methodology is tested on a set of high-resolution crystal structures of the oligopeptide-binding protein (OppA) containing a large number of resolved internal water molecules, and applied to bovine heart cytochrome c oxidase in the fully...

  12. Computational Modelling in Cancer: Methods and Applications

    Directory of Open Access Journals (Sweden)

    Konstantina Kourou

    2015-01-01

    Full Text Available Computational modelling of diseases is an emerging field that has proven valuable for the diagnosis, prognosis and treatment of disease. Cancer is one of the diseases where computational modelling provides enormous advancements, allowing medical professionals to perform in silico experiments and gain insights prior to any in vivo procedure. In this paper, we review the most recent computational models that have been proposed for cancer. Well-known databases used for computational modelling experiments, as well as the various markup language representations, are discussed. In addition, recent state-of-the-art research studies related to tumour growth and angiogenesis modelling are presented.

  13. Identification of the FitzHugh-Nagumo Model Dynamics via Deterministic Learning

    Science.gov (United States)

    Dong, Xunde; Wang, Cong

    In this paper, a new method is proposed for the identification of the FitzHugh-Nagumo (FHN) model dynamics via deterministic learning. The FHN model is a classic and simple model for studying spiral waves in excitable media, such as cardiac tissue and biological neural networks. Firstly, the FHN model described by partial differential equations (PDEs) is transformed into a set of ordinary differential equations (ODEs) by using the finite difference method. Secondly, the dynamics of the ODEs is identified using deterministic learning theory. It is shown that, for the spiral waves generated by the FHN model, the dynamics underlying the recurrent trajectory corresponding to any spatial point can be accurately identified by using the proposed approach. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
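
    The PDE-to-ODE step can be sketched directly: a five-point finite-difference Laplacian turns the FitzHugh-Nagumo reaction-diffusion equations on an n-by-n grid into n^2 coupled ODEs, here integrated with explicit Euler. The parameter values are generic spiral-wave choices assumed for illustration, not the paper's.

```python
import numpy as np

# Method-of-lines discretization of FitzHugh-Nagumo reaction-diffusion.
n, h, D, dt = 64, 1.0, 1.0, 0.02
a, b, eps = 0.7, 0.8, 0.08
v = np.random.default_rng(9).uniform(-1, 1, (n, n))
w = np.zeros((n, n))

def laplacian(f):
    """Five-point stencil with no-flux (Neumann) boundaries via edge padding."""
    p = np.pad(f, 1, mode='edge')
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * f) / h ** 2

for step in range(5000):          # explicit Euler on the resulting ODE system
    dv = v - v ** 3 / 3 - w + D * laplacian(v)
    dw = eps * (v + a - b * w)
    v += dt * dv
    w += dt * dw
print(v.min(), v.max())           # bounded, spatially patterned field
```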

  14. Space-Time Fluid-Structure Interaction Computation of Flapping-Wing Aerodynamics

    Science.gov (United States)

    2013-12-01

    [No abstract recovered: the record text consists of reference-list fragments citing works in the Lecture Notes in Computational Science and Engineering series (Springer, 2006) by K.-U. Bletzinger, R. Wuchner, A. Kupzok, Y. Bazilevs, V.M. Calo, T.J.R. Hughes, Y. Zhang and others on fluid-structure interaction methods and their application to cerebral aneurysms.]

  15. Novel Principles and Methods for Computing with Attractors

    Directory of Open Access Journals (Sweden)

    Horia-Nicolai Teodorescu

    2001-08-01

    Full Text Available We briefly analyze several issues related to the "computing with attractors" domain. We present a point of view on the topic and several new concepts, methods, and techniques for computing with attractors. We discuss applications where this method may prove useful. We answer several questions related to the usefulness of this computing paradigm.

  16. Computational methods for corpus annotation and analysis

    CERN Document Server

    Lu, Xiaofei

    2014-01-01

    This book reviews computational tools for lexical, syntactic, semantic, pragmatic and discourse analysis, with instructions on how to obtain, install and use each tool. Covers studies using Natural Language Processing, and offers ideas for better integration.

  17. Cloud computing methods and practical approaches

    CERN Document Server

    Mahmood, Zaigham

    2013-01-01

    This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an

  18. Programs for Use in Teaching Research Methods for Small Computers

    Science.gov (United States)

    Halley, Fred S.

    1975-01-01

    Description of Sociology Library (SOLIB), presented as a package of computer programs designed for smaller computers used in research methods courses and by students performing independent research. (Author/ND)

  19. Surviving To Write and Writing To Survive: The Complex Case of Langston Hughes.

    Science.gov (United States)

    Ostrom, Hans

    Studying the life of Langston Hughes in the context of how to teach freshman composition can shed light on two sometimes conflicting pedagogies, the expressivist and the social-constructionist. A discouraging period of fierce criticism, illness, depression, and financial woes coincided with Hughes' 39th birthday, which his biographer Arnold…

  20. Stochastic bifurcation in FitzHugh Nagumo ensembles subjected to additive and/or multiplicative noises

    Science.gov (United States)

    Hasegawa, Hideo

    2008-02-01

    We have studied the dynamical properties of finite N-unit FitzHugh-Nagumo (FN) ensembles subjected to additive and/or multiplicative noises, reformulating the augmented moment method (AMM) with the Fokker-Planck equation (FPE) method [H. Hasegawa, J. Phys. Soc. Japan 75 (2006) 033001]. In the AMM, the original 2N-dimensional stochastic equations are transformed into eight-dimensional deterministic ones, and the dynamics is described in terms of averages and fluctuations of local and global variables. The stochastic bifurcation is discussed by a linear stability analysis of the deterministic AMM equations. The bifurcation transition diagram for multiplicative noise is rather different from that for additive noise: the former has a wider oscillating region than the latter. The synchronization in globally coupled FN ensembles is also investigated. Results of the AMM are in good agreement with those of direct simulations (DSs).

  1. Computational structural mechanics methods research using an evolving framework

    Science.gov (United States)

    Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.

    1990-01-01

    Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.

  2. Progress in Research on Fitz-Hugh-Curtis Syndrome

    Institute of Scientific and Technical Information of China (English)

    靳翠平; 尚玉敏

    2009-01-01

    Fitz-Hugh-Curtis syndrome is pelvic infection complicated by perihepatitis; it mainly involves the liver capsule without damage to the liver parenchyma and is relatively common in patients with chronic pelvic inflammatory disease. The main pathogens are Neisseria gonorrhoeae and Chlamydia. The principal clinical manifestation is right upper abdominal pain, and the syndrome can lead to complications including chronic abdominal pain, infertility and ectopic pregnancy. Diagnosis and differential diagnosis can be made by combining abdominal CT, relevant laboratory tests and laparoscopy, and effective treatment is provided by sensitive antibiotics and laparoscopic surgery. This article reviews the etiology and pathogenesis, clinical features, complications, diagnosis, treatment and significance of Fitz-Hugh-Curtis syndrome.

  3. New computation methods for geometrical optics

    CERN Document Server

    Lin, Psang Dain

    2014-01-01

    This book employs homogeneous coordinate notation to compute the first- and second-order derivative matrices of various optical quantities. It will be one of the important mathematical tools for automatic optical design. Traditional geometrical optics is based on raytracing only. It is very difficult, if not impossible, to compute the first- and second-order derivatives of a ray and of the optical path length with respect to system variables, since they are recursive functions. Consequently, current commercial software packages use a finite difference approximation methodology to estimate these derivatives for use in optical design and analysis. Furthermore, previous publications on geometrical optics use vector notation, which is comparatively awkward for computations on non-axially symmetric systems.

  4. Computer Literacy Systematic Literature Review Method

    NARCIS (Netherlands)

    Kegel, Roeland Hendrik,Pieter; Barth, Susanne; Klaassen, Randy; Wieringa, Roelf J.

    2017-01-01

    Although there have been many attempts to define the concept `computer literacy', no consensus has been reached: many variations of the concept exist within the literature. The majority of papers do not explicitly define the concept at all, instead using an unjustified subset of elements related to computer literacy.

  5. Parallel computation with the spectral element method

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Hong

    1995-12-01

    Spectral element models for the shallow water equations and the Navier-Stokes equations have been successfully implemented on a data parallel supercomputer, the Connection Machine model CM-5. The nonstaggered grid formulations for both models are described; these are shown to be especially efficient in a data-parallel computing environment.

  6. A Powerful Friendship: Theodore von Karman and Hugh L. Dryden

    Science.gov (United States)

    Gorn, Michael

    2003-01-01

    During their long personal friendship and professional association, Theodore von Karman (1882-1963) and Hugh L. Dryden (1898-1965) exercised a pivotal if somewhat elusive influence over American aeronautics and spaceflight. Both decisive figures in organizing scientists and engineers at home and abroad, both men of undisputed eminence in their technical fields, their range of contacts in government, academia, the armed forces, industry, and professional societies spanned the globe to an extent unparalleled then as now. Moreover, because they coordinated their activities closely, their combined influence far exceeded the sum of each one's individual contributions. This paper illustrates their personal origins as well as the foundations of their friendship, how their relationship became a professional alliance, and their joint impact on the world of aeronautics and astronautics during the twentieth century.

  7. Realistic model of compact VLSI FitzHugh-Nagumo oscillators

    Science.gov (United States)

    Cosp, Jordi; Binczak, Stéphane; Madrenas, Jordi; Fernández, Daniel

    2014-02-01

    In this article, we present a compact analogue VLSI implementation of the FitzHugh-Nagumo neuron model, intended to model large-scale, biologically plausible oscillator networks. As the model requires a series resistor and a parallel capacitor together with the inductor, which is the most complex part of the design, it is possible to greatly simplify the active inductor implementation, compared to the typical use of this device in filters, by allowing appreciable but well-modelled nonidealities. We model and obtain the parameters of the nonideal inductor model as an inductance in series with a parasitic resistor, together with a second-order low-pass filter with a large cut-off frequency. Post-layout simulations for a CMOS 0.35 μm double-poly technology using the MOSFET Spice BSIM3v3 model confirm the proper behaviour of the design.

  8. Ideal and computer mathematics applied to meshfree methods

    Science.gov (United States)

    Kansa, E.

    2016-10-01

    Early numerical methods for solving ordinary and partial differential equations relied upon human computers who used mechanical devices. The algorithms changed little over the evolution of electronic computers, achieving only low-order convergence rates. A meshfree scheme that converges exponentially was developed using the latest computational science toolkit.

  9. Computational Methods for Design, Control and Optimization

    Science.gov (United States)

    2007-10-01

    34scenario" that applies to channel flows ( Poiseuille flows , Couette flow ) and pipe flows . Over the past 75 years many complex "transition theories" have...other areas of flow control, optimization and aerodynamic design. approximate sensitivity calculations and optimization codes. The effort was built on a...for fluid flow problems. The improved robustness and computational efficiency of this approach makes it practical for a wide class of problems. The

  10. Computational Methods for Material Failure Processes

    Science.gov (United States)

    1994-02-01

    Belytschko, "Advances in Computational Mechanics," Nuclear Engineering and Design, 134, pp. 1-22, 1992. T. Belytschko and N. D. Gilbertsen, "Implementtion...band along the normal direction. 50 N4 N3 N4 N? N3 N4 N3 .41 Fision NS ’, Mz 46 Fusion NI NZ NI NS NZ NI NZ iM MI MZ Mi NI NZ Ni N3 NZ NI NZ Fig. 2.1

  11. Statistical methods and computing for big data

    Science.gov (United States)

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with a focus on the open-source R language and its packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay. PMID:27695593

  12. Computational complexity for the two-point block method

    Science.gov (United States)

    See, Phang Pei; Majid, Zanariah Abdul

    2014-12-01

    In this paper, we discuss and compare the computational complexity of the two-point block method and the one-point method of Adams type. The computational complexity of both methods is determined from the number of arithmetic operations performed and is expressed in O(n). The two methods are used to solve two-point second-order boundary value problems directly, implemented with a variable step size strategy adapted to the multiple shooting technique via a three-step iterative method. Two numerical examples are tested. The results show that the computational complexity of these methods gives a reliable estimate of their cost in terms of execution time. We conclude that the two-point block method has better computational performance compared to the one-point method as the total number of steps becomes larger.

  13. Near threshold computing technology, methods and applications

    CERN Document Server

    Silvano, Cristina

    2016-01-01

    This book explores near-threshold computing (NTC), a design space using techniques to run digital chips (processors) near the lowest possible voltage. Readers are equipped with specific techniques to design chips that are extremely robust, tolerating variability and remaining resilient against errors. Variability-aware voltage and frequency allocation schemes are presented that provide performance guarantees when moving toward near-threshold manycore chips. The book: provides an introduction to near-threshold computing, giving the reader a variety of tools to face the challenges of the power/utilization wall; demonstrates how to design efficient voltage regulation, so that each region of the chip can operate at the most efficient voltage and frequency point; and investigates how performance guarantees can be ensured when moving towards NTC manycores through variability-aware voltage and frequency allocation schemes.

  14. Computational methods for two-phase flow and particle transport

    CERN Document Server

    Lee, Wen Ho

    2013-01-01

    This book describes mathematical formulations and computational methods for solving two-phase flow problems with a computer code that calculates thermal hydraulic problems related to light water and fast breeder reactors. The physical model also handles the particle and gas flow problems that arise from coal gasification and fluidized beds. The second part of this book deals with the computational methods for particle transport.

  15. Analytical and computational methods in electromagnetics

    CERN Document Server

    Garg, Ramesh

    2008-01-01

    This authoritative resource offers a clear and complete explanation of essential electromagnetics knowledge, providing the analytical background needed to understand key approaches such as MoM (method of moments), FDTD (finite-difference time-domain), FEM (finite element method), and Green's functions. This comprehensive book includes all the mathematics necessary to master the material.

  16. Basic Methods for Computing Special Functions

    NARCIS (Netherlands)

    Gil, A.; Segura, J.; Temme, N.M.; Simos, T.E.

    2011-01-01

    This paper gives an overview of methods for the numerical evaluation of special functions, that is, the functions that arise in many problems from mathematical physics, engineering, probability theory, and other applied sciences. We consider in detail a selection of basic methods which are frequently used in the numerical evaluation of special functions.

  17. Lattice Boltzmann method fundamentals and engineering applications with computer codes

    CERN Document Server

    Mohamad, A A

    2014-01-01

    Introducing the Lattice Boltzmann Method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.

  18. Reference depth for geostrophic computation - A new method

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.; Sastry, J.S.

    Various methods are available for the determination of reference depth for geostrophic computation. A new method based on the vertical profiles of mean and variance of the differences of mean specific volume anomaly (delta x 10) for different layers...

  19. 12 CFR 227.25 - Unfair balance computation method.

    Science.gov (United States)

    2010-01-01

    § 227.25 Unfair balance computation method. (a) General rule. Except as provided in... ...under 12 CFR 226.12 or 12 CFR 226.13; or (2) Adjustments to finance charges as a result of the return of... (12 CFR Part 227, Banks and Banking, 2010 ed.)

  20. Classical versus Computer Algebra Methods in Elementary Geometry

    Science.gov (United States)

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra, such as Groebner bases of ideals and elimination of variables, make it possible to solve complex, elementary and non-elementary problems of geometry which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…

  1. Computational Methods for Modification of Metabolic Networks

    Directory of Open Access Journals (Sweden)

    Takeyuki Tamura

    2015-01-01

    In metabolic engineering, modification of metabolic networks is an important biotechnology and a challenging computational task. In metabolic network modification, networks are modified by newly adding enzymes and/or knocking out genes so as to maximize biomass production with minimum side effects. In this mini-review, we briefly review constraint-based formalizations of the Minimum Reaction Cut (MRC) problem, in which a minimum set of reactions is deleted so that the target compound becomes non-producible, from the viewpoints of flux balance analysis (FBA), elementary modes (EM), and Boolean models. The Minimum Reaction Insertion (MRI) problem, in which a minimum set of reactions is added so that the target compound newly becomes producible, is also explained with a similar formalization approach. The relation between the accuracy of the models and the risk of overfitting is also discussed.
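
    As a toy illustration of the Minimum Reaction Cut idea on a Boolean network model, the brute-force sketch below deletes the smallest set of reactions that makes a target compound unreachable. The function names and network encoding are hypothetical, and real MRC solvers use the constraint-based formulations reviewed in the paper rather than enumeration:

```python
from itertools import combinations

def minimum_reaction_cut(reactions, sources, target):
    """Brute-force MRC on a Boolean model.

    reactions: dict name -> (set of substrates, set of products)
    sources:   set of compounds assumed available
    target:    compound whose production we want to block
    Returns the smallest reaction set whose deletion makes `target`
    non-producible (None if impossible).
    """
    names = list(reactions)

    def producible(active):
        # Fixpoint: a reaction fires once all its substrates are available.
        have, changed = set(sources), True
        while changed:
            changed = False
            for r in active:
                subs, prods = reactions[r]
                if subs <= have and not prods <= have:
                    have |= prods
                    changed = True
        return have

    for k in range(len(names) + 1):
        for cut in combinations(names, k):
            if target not in producible(set(names) - set(cut)):
                return set(cut)
    return None

# Toy network: A -> B -> C, plus a bypass A -> C.
net = {"r1": ({"A"}, {"B"}), "r2": ({"B"}, {"C"}), "r3": ({"A"}, {"C"})}
print(minimum_reaction_cut(net, {"A"}, "C"))  # a minimum cut, e.g. {'r1', 'r3'}
```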

  2. Computational Intelligence Characterization Method of Semiconductor Device

    CERN Document Server

    Liau, Eric

    2011-01-01

    Characterization of semiconductor devices is used to gather as much data about the device as possible, to determine weaknesses in design or trends in the manufacturing process. In this paper, we propose a novel multiple-trip-point characterization concept to overcome the constraints of the single-trip-point concept in the device characterization phase. In addition, we use computational intelligence techniques (e.g. neural networks, fuzzy logic and genetic algorithms) to further manipulate these sets of multiple trip point values and tests based on semiconductor test equipment. Our experimental results demonstrate an excellent design parameter variation analysis in the device characterization phase, as well as detection of a set of worst-case tests that can provoke the worst-case variation, which the traditional approach was not capable of detecting.

  3. Basic Methods for Computing Special Functions

    OpenAIRE

    Gil, Amparo; Segura, Javier; Temme, Nico; Simos, T. E.

    2011-01-01

    This paper gives an overview of methods for the numerical evaluation of special functions, that is, the functions that arise in many problems from mathematical physics, engineering, probability theory, and other applied sciences. We consider in detail a selection of basic methods which are frequently used in the numerical evaluation of special functions: converging and asymptotic series, including Chebyshev expansions, linear recurrence relations, and numerical quadrature. Several other methods…

  4. COMSAC: Computational Methods for Stability and Control. Part 1

    Science.gov (United States)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: a NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is it Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: the Past, Today, and Future?

  5. Parallel Computing Methods For Particle Accelerator Design

    CERN Document Server

    Popescu, Diana Andreea; Hersch, Roger

    We present methods for parallelizing the transport map construction for multi-core processors and for Graphics Processing Units (GPUs). We provide an efficient implementation of the transport map construction. We describe a method for multi-core processors using the OpenMP framework which brings performance improvement over the serial version of the map construction. We developed a novel and efficient algorithm for multivariate polynomial multiplication for GPUs and we implemented it using the CUDA framework. We show the benefits of using the multivariate polynomial multiplication algorithm for GPUs in the map composition operation for high orders. Finally, we present an algorithm for map composition for GPUs.

  6. Comparison of Five Computational Methods for Computing Q Factors in Photonic Crystal Membrane Cavities

    DEFF Research Database (Denmark)

    Novitsky, Andrey; de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn

    2017-01-01

    Five state-of-the-art computational methods are benchmarked by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Specia...

  7. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

    A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as solutions to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first-principle methods without database information; (b) first-principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal in this work is to review the methods and computational strategies that are currently used in 3-D protein prediction.

  8. Electromagnetic computation methods for lightning surge protection studies

    CERN Document Server

    Baba, Yoshihiro

    2016-01-01

    This book is the first to consolidate current research and to examine the theories of electromagnetic computation methods in relation to lightning surge protection. The authors introduce and compare existing electromagnetic computation methods such as the method of moments (MOM), the partial element equivalent circuit (PEEC), the finite element method (FEM), the transmission-line modeling (TLM) method, and the finite-difference time-domain (FDTD) method. The application of the FDTD method to lightning protection studies is a topic that has matured through many practical applications in the past decade, and the authors explain the derivation of the Maxwell equations required by the FDTD, and the modeling of the various electrical components needed in computing lightning electromagnetic fields and surges with the FDTD method. The book describes the application of the FDTD method to current and emerging problems of lightning surge protection of continuously more complex installations, particularly in critical infrastructures of e...

  9. A connectionist computational method for face recognition

    Directory of Open Access Journals (Sweden)

    Pujol Francisco A.

    2016-06-01

    In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected by using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the result of an edge detector. After that, the position of the nodes, their relation with their neighbors and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is presented afterwards. Thus, the calculation of the winning neuron and the recognition process are performed by using a similarity function that takes into account both the geometric and texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.
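
    A minimal sketch of the kind of combined similarity the abstract describes, weighing Gabor-jet (texture) agreement against node displacement (geometry). The weighting scheme and function names here are assumptions for illustration, not the authors' exact formula:

```python
import numpy as np

def graph_similarity(jets_a, jets_b, pos_a, pos_b, alpha=0.5):
    """Hypothetical facial-graph similarity in the spirit of SOM-EBGM.

    jets_a, jets_b: lists of Gabor-jet vectors at corresponding nodes
    pos_a, pos_b:   (n, 2) arrays of node coordinates
    """
    # Texture term: mean cosine similarity between corresponding Gabor jets.
    tex = np.mean([np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
                   for a, b in zip(jets_a, jets_b)])
    # Geometry term: mean displacement of corresponding nodes (penalty).
    geo = np.mean(np.linalg.norm(np.asarray(pos_a) - np.asarray(pos_b), axis=1))
    return alpha * tex - (1.0 - alpha) * geo
```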

  10. Computational Methods for Physicists Compendium for Students

    CERN Document Server

    Sirca, Simon

    2012-01-01

    This book helps advanced undergraduate, graduate and postdoctoral students in their daily work by offering them a compendium of numerical methods. The choice of methods pays significant attention to error estimates, stability and convergence issues, as well as to ways of optimizing program execution speed. Many examples are given throughout the chapters, and each chapter is followed by at least a handful of more comprehensive problems which may be dealt with, for example, on a weekly basis in a one- or two-semester course. In these end-of-chapter problems the physics background is pronounced, and the main text preceding them is intended as an introduction or as a later reference. Less stress is given to the explanation of individual algorithms. The book tries to foster in the reader independent thinking and a certain amount of scepticism and scrutiny, instead of blindly following readily available commercial tools.

  11. Computational Methods for Probabilistic Target Tracking Problems

    Science.gov (United States)

    2007-09-01

    Undergraduate students: Angela Edwards, Bryahn Ivery, Dustin Lupton, James Pender, Terrell Felder, Krystal Knight... Two more graduate students, Ricardo Bernal and Alisha Williams, and two more undergraduate students, Krystal Knight and Terrell Felder, also took part... Presentations included "Using Tree Based Methods to Classify Messages", Terrell A. Felder, Math Awareness Mini-Conference, ...Technical State University, April 24, 2006.

  12. Computing Method of Forces on Rivet

    Directory of Open Access Journals (Sweden)

    Ion DIMA

    2014-03-01

    This article aims to provide a quick methodology for calculating the forces on a rivet in single shear using the finite element method (FEM) with NASTRAN/PATRAN. These forces can be used for bearing, inter-rivet buckling and riveting checks. To make the method efficient and fast, a macro has been developed based on the methodology described in the article. The macro was written in Visual Basic with an Excel interface. In the early phase of any aircraft project, when the rivet types and positions are not yet precisely known, rivets are modelled as attachment elements between items, node to node in the finite element model, without taking account of the rivet positions. Although the rivets are not modelled explicitly in the finite element model, this method together with the macro enables a quick extraction and calculation of the forces on the rivet. This calculation of forces on the rivet is intended for the critical case, selected from the NASTRAN stress plots for max./min. principal stress and shear.

  13. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  14. An Alternate Method for Computation of Transfer Function Matrix

    Directory of Open Access Journals (Sweden)

    Appukuttan K. K.

    2010-01-01

    A direct and simple numerical method is presented for calculating the transfer function matrix of a linear time-invariant multivariable system (A, B, C). The method is based on the matrix-determinant identity, and it involves operations with an auxiliary vector on the matrices. The method is computationally faster than the Leverrier and Danilevsky methods.
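
    For reference, the quantity being computed is the standard transfer function matrix of the state-space triple (A, B, C), which the matrix-determinant identity evaluates without an explicit inverse:

```latex
G(s) \;=\; C \,(sI - A)^{-1} B
     \;=\; \frac{C \,\operatorname{adj}(sI - A)\, B}{\det(sI - A)},
```

    so the computation amounts to building the numerator polynomial matrix together with the characteristic polynomial $\det(sI - A)$.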

  15. Hughes, Twain, Child, and Sanger: Four Who Locked Horns with the Censors

    Science.gov (United States)

    Meltzer, Milton

    1969-01-01

    A look at the lives and conflicts of four writers--Langston Hughes, Mark Twain, Lydia Maria Child, and Margaret Sanger--who faced public criticism and censorship because of their views on controversial issues. (RM)

  16. MODIFIED LEAST SQUARE METHOD ON COMPUTING DIRICHLET PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The singularity theory of dynamical systems is linked to the numerical computation of boundary value problems for differential equations. This turns out to be a modified least square method for the calculation of a variational problem defined on C^k(Ω), in which the basis functions are polynomials and the computation of the problem reduces to computing the coefficients of the basis functions. The theoretical treatment and some simple examples are provided for understanding the modification procedure of the method…

  17. Coarse-graining methods for computational biology.

    Science.gov (United States)

    Saunders, Marissa G; Voth, Gregory A

    2013-01-01

    Connecting the molecular world to biology requires understanding how molecular-scale dynamics propagate upward in scale to define the function of biological structures. To address this challenge, multiscale approaches, including coarse-graining methods, become necessary. We discuss here the theoretical underpinnings and history of coarse-graining and summarize the state of the field, organizing key methodologies based on an emerging paradigm for multiscale theory and modeling of biomolecular systems. This framework involves an integrated, iterative approach to couple information from different scales. The primary steps, which coincide with key areas of method development, include developing first-pass coarse-grained models guided by experimental results, performing numerous large-scale coarse-grained simulations, identifying important interactions that drive emergent behaviors, and finally reconnecting to the molecular scale by performing all-atom molecular dynamics simulations guided by the coarse-grained results. The coarse-grained modeling can then be extended and refined, with the entire loop repeated iteratively if necessary.

  18. Multiscale methods for computational RNA enzymology

    Science.gov (United States)

    Panteva, Maria T.; Dissanayake, Thakshila; Chen, Haoyuan; Radak, Brian K.; Kuechler, Erich R.; Giambaşu, George M.; Lee, Tai-Sung; York, Darrin M.

    2016-01-01

    RNA catalysis is of fundamental importance to biology and yet remains ill-understood due to its complex nature. The multi-dimensional “problem space” of RNA catalysis includes both local and global conformational rearrangements, changes in the ion atmosphere around nucleic acids and metal ion binding, dependence on potentially correlated protonation states of key residues and bond breaking/forming in the chemical steps of the reaction. The goal of this article is to summarize and apply multiscale modeling methods in an effort to target the different parts of the RNA catalysis problem space while also addressing the limitations and pitfalls of these methods. Classical molecular dynamics (MD) simulations, reference interaction site model (RISM) calculations, constant pH molecular dynamics (CpHMD) simulations, Hamiltonian replica exchange molecular dynamics (HREMD) and quantum mechanical/molecular mechanical (QM/MM) simulations will be discussed in the context of the study of RNA backbone cleavage transesterification. This reaction is catalyzed by both RNA and protein enzymes, and here we examine the different mechanistic strategies taken by the hepatitis delta virus ribozyme (HDVr) and RNase A. PMID:25726472

  19. Customizing computational methods for visual analytics with big data.

    Science.gov (United States)

    Choo, Jaegul; Park, Haesun

    2013-01-01

    The volume of available data has been growing exponentially, increasing the complexity and obscurity of data problems. In response, visual analytics (VA) has gained attention, yet its solutions haven't scaled well for big data. Computational methods can improve VA's scalability by giving users compact, meaningful information about the input data. However, the significant computation time these methods require hinders real-time interactive visualization of big data. By addressing crucial discrepancies between these methods and VA regarding precision and convergence, researchers have proposed ways to customize them for VA. These approaches, which include low-precision computation and iteration-level interactive visualization, ensure real-time interactive VA for big data.

  20. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    Science.gov (United States)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
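
    As a concrete example of the kind of evolutionary kernel such ECMs build on, here is a minimal differential evolution loop; the actual NASA tooling, its parameters, and its cluster parallelization are not specified in the record, so treat this as a sketch of the general technique only:

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9,
                           iters=200, rng=None):
    """Minimal differential evolution: mutate, crossover, greedy select.

    cost:   function mapping a parameter vector to a scalar to minimize
    bounds: list of (low, high) pairs, one per parameter
    """
    rng = rng or np.random.default_rng(0)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fit = np.array([cost(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct partners and build the mutant vector.
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, then greedy selection.
            trial = np.where(rng.random(len(lo)) < CR, mutant, pop[i])
            f = cost(trial)
            if f < fit[i]:
                pop[i], fit[i] = trial, f
    return pop[fit.argmin()], fit.min()

# Example: minimize a simple sphere function in 3 dimensions.
best, f = differential_evolution(lambda v: float(np.sum(v**2)), [(-5, 5)] * 3)
print(best, f)
```

    In an ECM setting the cost function would wrap a full engineering simulator, and each population member's evaluation would be farmed out to a separate cluster node.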

  1. Method to Compute CT System MTF

    Energy Technology Data Exchange (ETDEWEB)

    Kallman, Jeffrey S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-05-03

    The modulation transfer function (MTF) is the normalized spatial frequency representation of the point spread function (PSF) of the system. Point objects are hard to come by, so typically the PSF is determined by taking the numerical derivative of the system's response to an edge. This is the method we use, and we typically use it with cylindrical objects. Given a cylindrical object, we first put an active contour around it, as shown in Figure 1(a). The active contour lets us know where the boundary of the test object is. We next set a threshold (Figure 1(b)) and determine the center of mass of the above threshold voxels. For the purposes of determining the center of mass, each voxel is weighted identically (not by voxel value).
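
    A minimal sketch of the edge-based recipe described above, assuming a 1-D edge-spread function has already been extracted from the cylinder data (the active-contour and center-of-mass steps are specific to the report and omitted here):

```python
import numpy as np

def mtf_from_edge(esf, dx=1.0):
    """Estimate the MTF from a sampled edge-spread function (ESF).

    The line-spread function is the numerical derivative of the ESF,
    and the MTF is the magnitude of its Fourier transform normalized
    to unity at zero frequency.
    """
    lsf = np.gradient(esf, dx)            # derivative of the edge response
    lsf = lsf * np.hanning(len(lsf))      # window to suppress truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    return freqs, mtf / mtf[0]
```

    The windowing step is a common practical refinement and assumes the edge transition sits near the middle of the sampled profile.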

  2. Advanced Methods and Applications in Computational Intelligence

    CERN Document Server

    Nikodem, Jan; Jacak, Witold; Chaczko, Zenon; ACASE 2012

    2014-01-01

    This book offers an excellent presentation of intelligent engineering and informatics foundations for researchers in this field, as well as many examples with industrial application. It contains extended versions of selected papers presented at the inaugural ACASE 2012 Conference dedicated to the Applications of Systems Engineering. This conference was held from the 6th to the 8th of February 2012, at the University of Technology, Sydney, Australia, organized by the University of Technology, Sydney (Australia), Wroclaw University of Technology (Poland) and the University of Applied Sciences in Hagenberg (Austria). The book is organized into three main parts. Part I contains papers devoted to the heuristic approaches that are applicable in situations where the problem cannot be solved by exact methods, due to various characteristics or dimensionality problems. Part II covers essential issues of network management, and presents intelligent models of the next generation of networks and distributed systems…

  3. Computer systems and methods for visualizing data

    Science.gov (United States)

    Stolte, Chris; Hanrahan, Patrick

    2010-07-13

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.

  4. Investiture of the Hon. Hugh Scott Fogler as Doctor Honoris Causa

    OpenAIRE

    Diversos autors

    2016-01-01

    Investiture of Hugh Scott Fogler as doctor honoris causa. Extraordinary academic session, 15 April 2016. Laudation of the candidate delivered by Dr. Azael Fabregat Llagostera. Investiture address delivered by Hugh Scott Fogler. Words of welcome by Dr. Josep Anton Ferré Vidal, Rector Magnificus of the University.

  5. STS-40 Payload Specialist Millie Hughes-Fulford trains in JSC's SLS mockup

    Science.gov (United States)

    1987-01-01

    STS-40 Payload Specialist Millie Hughes-Fulford conducts Spacelab Life Sciences 1 (SLS-1) Experiment No. 198, Pulmonary Function During Weightlessness, in JSC's Life Sciences Project Division (LSPD) SLS mockup located in the Bioengineering and Test Support Facility Bldg 36. Hughes-Fulford monitors instruments and settings on Rack 8's panels. Behind her in the center aisle are the body mass measurement device (foreground) and the stowed bicycle ergometer.

  6. Computational methods for internal flows with emphasis on turbomachinery

    Science.gov (United States)

    Mcnally, W. D.; Sockol, P. M.

    1981-01-01

    Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications. The viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

  7. Information Fusion Methods in Computer Pan-vision System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Aiming at concrete information fusion tasks in a computer pan-vision (CPV) system, information fusion methods are studied thoroughly and some research progress is presented. Recognition of visual test objects is realized by fusing vision information with non-vision auxiliary information; applications include recognition of material defects, autonomous recognition of parts by intelligent robots, and automatic computer understanding and recognition of defect images.

  8. METHODOLOGICAL NOTES: Computer viruses and methods of combatting them

    Science.gov (United States)

    Landsberg, G. L.

    1991-02-01

    This article examines the current virus situation for personal computers and time-sharing computers. Basic methods of combatting viruses are presented. Specific recommendations are given to eliminate the most widespread viruses. A short description is given of a universal antiviral system, PHENIX, which has been developed.

  9. Computation of saddle-type slow manifolds using iterative methods

    DEFF Research Database (Denmark)

    Kristiansen, Kristian Uldall

    2015-01-01

    This paper presents an alternative approach for the computation of trajectory segments on slow manifolds of saddle type. This approach is based on iterative methods rather than collocation-type methods. Compared to collocation methods, which require mesh refinements to ensure uniform convergence with respect to the small parameter, appropriate estimates are directly attainable using the method of this paper. The method is applied to several examples, including a model for a pair of neurons coupled by reciprocal inhibition with two slow and two fast variables, and the computation of homoclinic connections in the FitzHugh-Nagumo model.

  10. Computation of Load Flow Problems with Homotopy Methods

    Institute of Scientific and Technical Information of China (English)

    陈玉荣; 蔡大用

    2001-01-01

    Load flow computations are the basis for voltage security assessments in power systems. All of the flow equation solutions must be computed to explore the mechanisms of voltage instability and voltage collapse. Conventional algorithms, such as Newton's method and its variations, are not very desirable because they cannot easily be used to find all of the solutions. This paper investigates homotopy methods, which can be used for numerically computing the set of all isolated solutions of the multivariate polynomial systems resulting from load flow computations. The results significantly reduce the number of paths being followed.
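
    Schematically, a homotopy method embeds the load flow system $F(x) = 0$ in a one-parameter family and tracks every solution path from an easy start system $G(x) = 0$ with known roots:

```latex
H(x, t) \;=\; (1 - t)\, G(x) \;+\; t\, F(x), \qquad t \in [0, 1],
```

    so each isolated solution of $F$ lies at the end of some path $x(t)$ satisfying $H(x(t), t) = 0$. For polynomial systems the number of paths is bounded by the Bézout number of the system, and results such as those cited here aim to cut down how many of these paths must actually be followed.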

  11. Method and computer program product for maintenance and modernization backlogging

    Science.gov (United States)

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
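
    The calculation itself is a three-term sum; a one-line sketch (the function and argument names are illustrative, not taken from the patent):

```python
def future_facility_condition(maintenance_cost, modernization_factor,
                              backlog_factor):
    """Future facility condition for one time period, per the description
    above: maintenance cost + modernization factor + backlog factor."""
    return maintenance_cost + modernization_factor + backlog_factor

print(future_facility_condition(1.2e6, 3.5e5, 8.0e4))  # hypothetical figures
```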

  12. Fibonacci’s Computation Methods vs Modern Algorithms

    Directory of Open Access Journals (Sweden)

    Ernesto Burattini

    2013-12-01

    In this paper we discuss some computational procedures given by Leonardo Pisano Fibonacci in his famous Liber Abaci, and we propose their translation into a modern computer language (C++). Among others, we describe the method of "cross" multiplication; we evaluate its computational complexity in algorithmic terms and we show the output of a C++ code that traces the development of the method applied to the product of two integers. In a similar way we show the operations performed on fractions introduced by Fibonacci. The possibility of reproducing Fibonacci's different computational procedures on a computer made it possible to identify some calculation errors present in the different versions of the original text.
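
    The "cross" multiplication Fibonacci describes collects, for each result digit, all digit products whose positions sum to that place, then propagates carries. A short sketch of that scheme (a modern reading for illustration, not a transcription of the Liber Abaci procedure, and written in Python rather than the paper's C++):

```python
def cross_multiply(x, y):
    """Digit-by-digit 'cross' multiplication: the k-th result digit collects
    all products a_i * b_j with i + j == k, plus the carry."""
    a = [int(d) for d in str(x)][::-1]   # least significant digit first
    b = [int(d) for d in str(y)][::-1]
    out, carry = [], 0
    for k in range(len(a) + len(b) - 1):
        s = carry + sum(a[i] * b[k - i]
                        for i in range(len(a)) if 0 <= k - i < len(b))
        out.append(s % 10)               # keep one digit, carry the rest
        carry = s // 10
    while carry:
        out.append(carry % 10)
        carry //= 10
    return int("".join(map(str, out[::-1])))

assert cross_multiply(48, 97) == 48 * 97  # 4656
```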

  13. Optimal Joint Multiple Resource Allocation Method for Cloud Computing Environments

    CERN Document Server

    Kuribayashi, Shin-ichi

    2011-01-01

    Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources. To provide cloud computing services economically, it is important to optimize resource allocation under the assumption that the required resource can be taken from a shared resource pool. In addition, to be able to provide processing ability and storage capacity, it is necessary to allocate bandwidth to access them at the same time. This paper proposes an optimal resource allocation method for cloud computing environments. First, this paper develops a resource allocation model of cloud computing environments, assuming both processing ability and bandwidth are allocated simultaneously to each service request and rented out on an hourly basis. The allocated resources are dedicated to each service request. Next, this paper proposes an optimal joint multiple resource allocation method, based on the above resource allocation model. It is demonstrated by simulation evaluation that the p...

  14. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented, and an application for smart cars is also introduced. The book can serve as a valuable reference work for researchers.

  15. Using boundary methods to compute the Casimir energy

    CERN Document Server

    Lombardo, F C; Villar, P I

    2010-01-01

    We discuss new approaches to computing numerically the Casimir interaction energy for waveguides of arbitrary section, based on the boundary methods traditionally used to compute eigenvalues of the 2D Helmholtz equation. These methods are combined with Cauchy's theorem in order to perform the sum over modes. As an illustration, we describe a point-matching technique to compute the vacuum energy for waveguides containing media with different permittivities. We present explicit numerical evaluations for perfectly conducting surfaces in the case of concentric corrugated cylinders and of a circular cylinder inside an elliptic one.

  16. An approximate 3D computational method for real-time computation of induction logging responses

    NARCIS (Netherlands)

    Bensdorp, S.; Petersen, S.A.; Van den Berg, P.M.; Fokkema, J.T.

    2014-01-01

    Over many years, induction logging systems have been used to create well formation logs. The major drawback to the utilization of these tools is the long simulation time for a single forward computation. We propose an efficient computational method based on a contrast-type integral equation for…

  17. Towards Qualitative Computer Science Education: Engendering Effective Teaching Methods

    Directory of Open Access Journals (Sweden)

    Basirat A. Adenowo

    2013-09-01

    An investigation into the teaching method(s) that can effectively yield qualitative computer science education in Basic Schools becomes necessary due to the Nigerian government policy on education. The government's policy stipulates that every graduate of Basic Schools or UBE (Universal Basic Education) should be computer literate. This policy intends to ensure her citizens are ICT (Information and Communication Technology) compliant. The foregoing thus necessitates the production of highly qualified manpower, grounded in computer knowledge, to implement the computer science education strand of the UBE curriculum. Accordingly, this research investigates the opinions of computer teacher-trainees on the teaching methods used while on training. Some of the teacher-trainees who taught computer study while on teaching practice were systematically sampled using a purposive sampling technique. The results show consensus in male and female teacher-trainees' views; both genders agreed that all the teaching methods used while on training will engender effective teaching of computer study. On the whole, the mean performance ratings of male teacher-trainees were found to be higher than those of females. However, this is not in accord with the target set by the Universal Basic Education Commission, which intends to eliminate gender disparity in the UBE programme. The results thus suggest the need for further investigation using a larger sample.

  18. THE ONE-DIMENSIONAL HUGHES MODEL FOR PEDESTRIAN FLOW: RIEMANN-TYPE SOLUTIONS

    Institute of Scientific and Technical Information of China (English)

    Debora Amadori; M. Di Francesco

    2012-01-01

    This paper deals with a coupled system consisting of a scalar conservation law and an eikonal equation, called the Hughes model. Introduced in [24], this model attempts to describe the motion of pedestrians in a densely crowded region, in which they are seen as a 'thinking' (continuum) fluid. The main mathematical difficulty is the discontinuous gradient of the solution to the eikonal equation appearing in the flux of the conservation law. On a one-dimensional interval with zero Dirichlet conditions (the two edges of the interval are interpreted as 'targets'), the model can be decoupled in a way that considers two classical conservation laws on two sub-domains separated by a turning point at which the pedestrians change their direction. We shall consider solutions with a possible jump discontinuity around the turning point. For simplicity, we shall assume they are locally constant on both sides of the discontinuity. We provide a detailed description of the local-in-time behavior of the solution in terms of a 'global' qualitative property of the pedestrian density (which we call the 'relative evacuation rate'), which can be interpreted as the attitude of the pedestrians to move towards the left or the right target. We complement our result with explicitly computable examples.
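
    For readers unfamiliar with the system, the one-dimensional Hughes model is commonly written as the coupled pair

```latex
\rho_t - \Bigl( \rho \, v(\rho) \, \frac{\phi_x}{|\phi_x|} \Bigr)_x = 0,
\qquad |\phi_x| = \frac{1}{v(\rho)},
\qquad v(\rho) = 1 - \rho,
```

    posed on an interval with $\phi = 0$ at both ends; $\phi_x / |\phi_x| = \mathrm{sign}(\phi_x)$ flips at the turning point $\xi(t)$, which is exactly where the jump discontinuity discussed above sits. (This is a standard presentation of the model; the paper's notation may differ in detail.)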

  19. Computation of electron energy loss spectra by an iterative method

    Energy Technology Data Exchange (ETDEWEB)

    Koval, Peter [Donostia International Physics Center (DIPC), Paseo Manuel de Lardizabal 4, E-20018 San Sebastián (Spain); Centro de Física de Materiales CFM-MPC, Centro Mixto CSIC-UPV/EHU, Paseo Manuel de Lardizabal 5, E-20018 San Sebastián (Spain); Ljungberg, Mathias Per [Donostia International Physics Center (DIPC), Paseo Manuel de Lardizabal 4, E-20018 San Sebastián (Spain); Foerster, Dietrich [LOMA, Université de Bordeaux 1, 351 Cours de la Liberation, 33405 Talence (France); Sánchez-Portal, Daniel [Donostia International Physics Center (DIPC), Paseo Manuel de Lardizabal 4, E-20018 San Sebastián (Spain); Centro de Física de Materiales CFM-MPC, Centro Mixto CSIC-UPV/EHU, Paseo Manuel de Lardizabal 5, E-20018 San Sebastián (Spain)

    2015-07-01

    A method is presented to compute the dielectric function for extended systems using linear response time-dependent density functional theory. Localized basis functions with finite support are used to expand both eigenstates and response functions. The electron-energy loss function is directly obtained by an iterative Krylov-subspace method. We apply our method to graphene and silicon and compare it to plane-wave based approaches. Finally, we compute the electron-energy loss spectrum of the C60 crystal to demonstrate the merits of the method for molecular crystals, where it will be most competitive.

  20. Pulmonary cytotoxicity of secondary metabolites of Stachybotrys chartarum (Ehrenb.) Hughes.

    Science.gov (United States)

    Pieckova, Elena; Hurbankova, Marta; Cerna, Silvia; Pivovarova, Zuzana; Kovacikova, Zuzana

    2006-01-01

    Damp dwellings provide suitable conditions for extensive indoor mould growth. The cellulolytic micromycete Stachybotrys chartarum (Ehrenb.) Hughes is considered a tertiary colonizer of surfaces in affected buildings. The known adverse health effects of S. chartarum result from its toxins--trichothecenes or atranones, as well as spirolactams. The mechanism of their potential pathological effects on the respiratory tract has not yet been sufficiently clarified. The cytotoxic effects on lung tissue of complex chloroform-extractable endometabolites (in biomass) and exometabolites (in cultivation medium) of an indoor S. chartarum isolate of the atranone chemotype, grown on a liquid medium with yeast extract and sucrose at 25 degrees C for 14 d, were evaluated in a 3-day experiment. For this purpose, 4 mg of toxicants were intratracheally instilled into 200 g male Wistar rats. The trichothecene mycotoxin diacetoxyscirpenol was used as the positive control. Bronchoalveolar lavage (BAL) parameters--viability and phagocytic activity of alveolar macrophages (AM), and activity of lactate dehydrogenase, acid phosphatase and cathepsin D in cell-free BAL fluid (BALF) as well as in BAL cells--were measured. Acute exposure to the metabolites caused statistically significant changes indicating lung tissue injury in the experimental animals. Decreased AM viability and increased activity of the lysosomal enzyme cathepsin D in BAL cells after fungal exometabolite exposure were the most pronounced. As the toxic principles were found predominantly in the growth medium, toxins were more likely responsible for the lung cell damage than, e.g., fungal cell wall components. S. chartarum toxic metabolites can thus contribute to the ill health of occupants of mouldy buildings after inhalation of contaminated aerosol.

  1. Research data collection methods: from paper to tablet computers.

    Science.gov (United States)

    Wilcox, Adam B; Gallagher, Kathleen D; Boden-Albala, Bernadette; Bakken, Suzanne R

    2012-07-01

    Primary data collection is a critical activity in clinical research. Even with significant advances in technical capabilities, clear benefits of use, and even user preferences for using electronic systems for collecting primary data, paper-based data collection is still common in clinical research settings. However, with recent developments in both clinical research and tablet computer technology, the comparative advantages and disadvantages of data collection methods should be determined. To describe case studies using multiple methods of data collection, including next-generation tablets, and consider their various advantages and disadvantages. We reviewed 5 modern case studies using primary data collection, using methods ranging from paper to next-generation tablet computers. We performed semistructured telephone interviews with each project, which considered factors relevant to data collection. We address specific issues with workflow, implementation and security for these different methods, and identify differences in implementation that led to different technology considerations for each case study. There remain multiple methods for primary data collection, each with its own strengths and weaknesses. Two recent methods are electronic health record templates and next-generation tablet computers. Electronic health record templates can link data directly to medical records, but are notably difficult to use. Current tablet computers are substantially different from previous technologies with regard to user familiarity and software cost. The use of cloud-based storage for tablet computers, however, creates a specific challenge for clinical research that must be considered but can be overcome.

  2. Big data mining analysis method based on cloud computing

    Science.gov (United States)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In the era of information explosion, the very large scale, discreteness and un-/semi-structured nature of big data have gone far beyond what traditional data management methods can handle. With the arrival of the cloud computing era, cloud computing provides a new technical way to analyze massive data, which can effectively solve the problem that traditional data mining methods cannot adapt to massive data. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology to realize data mining, designs a mining algorithm for association rules based on the MapReduce parallel processing architecture, and carries out experimental verification. The parallel association rule mining algorithm based on a cloud computing platform can greatly improve the execution speed of data mining.
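
    As a minimal illustration of the MapReduce shape of such an algorithm, the sketch below counts candidate itemset support in separate map and reduce phases; the paper's actual algorithm, data partitioning, and pruning details are not given in the abstract, so this is only the general pattern:

```python
from collections import Counter
from itertools import combinations

def map_phase(transactions, k=2):
    """Mapper: emit (itemset, 1) for every k-item combination in a chunk."""
    for t in transactions:
        for itemset in combinations(sorted(set(t)), k):
            yield itemset, 1

def reduce_phase(pairs):
    """Reducer: sum the counts per candidate itemset."""
    counts = Counter()
    for itemset, n in pairs:
        counts[itemset] += n
    return counts

# Each worker runs map_phase on its chunk; the framework shuffles pairs by
# key, and reduce_phase aggregates the support counts, from which rules
# above a support threshold are kept.
chunk = [["milk", "bread"], ["milk", "eggs", "bread"]]
print(reduce_phase(map_phase(chunk)))
```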

  3. Methods of Computer Algebra and the Many Bodies Algebra

    Science.gov (United States)

    Grebenikov, E. A.; Kozak-Skoworodkina, D.; Yakubiak, M.

    2001-07-01

    The monograph is concerned with qualitative methods in restricted problems of n > 3 bodies, treated by methods of computer algebra. The book consists of 4 chapters. The first two chapters contain the theory of homographic solutions in the many-body problem. The other two chapters concern the Lyapunov stability of new solutions of differential equations based on KAM theory. The computer implementation of Birkhoff's normalisation method for the Hamiltonians of the restricted 4-, 5-, 6- and 7-body problems is presented in detail. The book is designed for scientific researchers, doctoral students, and students of physical-mathematical departments. It could be used as well in university courses on the qualitative theory of differential equations.

  4. Fully consistent CFD methods for incompressible flow computations

    DEFF Research Database (Denmark)

    Kolmogorov, Dmitry; Shen, Wen Zhong; Sørensen, Niels N.

    2014-01-01

    Nowadays collocated-grid-based CFD methods are among the most efficient tools for computations of the flows past wind turbines. To ensure the robustness of the methods they require special attention to the well-known problem of pressure-velocity coupling. Many commercial codes to ensure the pressure…

  5. Computational mathematics models, methods, and analysis with Matlab and MPI

    CERN Document Server

    White, Robert E

    2004-01-01

    Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box," you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu./~white. This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...

  6. Proposed congestion control method for cloud computing environments

    CERN Document Server

    Kuribayashi, Shin-ichi

    2012-01-01

    As cloud computing services rapidly expand their customer base, it has become important to share cloud resources so as to provide them economically. In cloud computing services, multiple types of resources, such as processing ability, bandwidth and storage, need to be allocated simultaneously. If there is a surge of requests, competition will arise between these requests for the use of cloud resources. This leads to disruption of the service, and it is necessary to consider measures to avoid or relieve congestion in cloud computing environments. This paper proposes a new congestion control method for cloud computing environments which reduces the size of the required resource for the congested resource type, instead of restricting all service requests as in existing networks. Next, this paper proposes the user service specifications for the proposed congestion control method, and clarifies the algorithm for deciding the optimal size of the required resource to be reduced, based on the load offered to the system. I...

  7. Data Analysis through a Generalized Interactive Computer Animation Method (DATICAM)

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, J.N.; Schweider, D.H.

    1983-01-01

    DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process.

  8. Data analysis through interactive computer animation method (DATICAM)

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, J.N.; Schwieder, D.H.

    1983-01-01

    DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process.

  9. Computing the crystal growth rate by the interface pinning method

    DEFF Research Database (Denmark)

    Pedersen, Ulf Rørbæk; Hummel, Felix; Dellago, Christoph

    2015-01-01

    Two-phase configurations are stabilized by adding a spring-like bias field coupling to an order parameter that discriminates between the two phases. Crystal growth is a Smoluchowski process and the crystal growth rate can, therefore, be computed from the terminal exponential relaxation of the order parameter. The approach...... from first principles. A generalized version of the method may be used for computing the rates of crystal nucleation or other rare events....

  10. Water demand forecasting: review of soft computing methods.

    Science.gov (United States)

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, while ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.

  11. Multi-pattern Matching Methods Based on Numerical Computation

    Directory of Open Access Journals (Sweden)

    Lu Jun

    2013-01-01

    Full Text Available Multi-pattern matching methods based on numerical computation are advanced in this paper. First, a multiple-pattern matching algorithm based on added information is presented. In the process of accumulating information, the choice of the byte-accumulate operation affects the collision odds, which means that the methods or bytes involved in the different matching steps should differ as much as possible. In addition, a balanced binary tree can be used to manage the index and reduce the average number of searches, and the characteristics of a given pattern set can be exploited, by setting a collision field, to further eliminate collisions. In order to reduce the collision odds in the initial step, an information splicing method is advanced, which has a greater value space than the added-information method, thus greatly reducing the initial collision odds. Multi-pattern matching methods based on numerical computation fit large-scale multi-pattern matching.
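    The following Python sketch is one reading of such a numeric matching scheme, not the authors' algorithm: equal-length byte patterns are indexed by an additive byte fingerprint, the accumulator is updated in rolling fashion, and every fingerprint hit is verified so that collisions are eliminated.

        # Additive-fingerprint multi-pattern matching (illustrative sketch).
        def build_index(patterns):
            m = len(patterns[0])              # assume equal-length byte patterns
            index = {}
            for p in patterns:
                index.setdefault(sum(p), []).append(p)
            return index, m

        def search(text, index, m):
            hits = []
            acc = sum(text[:m])               # rolling additive accumulator
            for i in range(len(text) - m + 1):
                for p in index.get(acc, []):
                    if text[i:i + m] == p:    # verify to rule out collisions
                        hits.append((i, p))
                if i + m < len(text):
                    acc += text[i + m] - text[i]
            return hits

        # b"abc" and b"bca" have the same byte sum, so verification matters.
        idx, m = build_index([b"abc", b"bca"])
        print(search(b"xxabcax", idx, m))     # -> [(2, b'abc'), (3, b'bca')]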

  12. A Brief Review of Computational Gene Prediction Methods

    Institute of Scientific and Technical Information of China (English)

    Zhuo Wang; Yazhu Chen; Yixue Li

    2004-01-01

    With the development of genome sequencing for many organisms, more and more raw sequences need to be annotated. Gene prediction by computational methods for finding the location of protein coding regions is one of the essential issues in bioinformatics. Two classes of methods are generally adopted: similarity based searches and ab initio prediction. Here, we review the development of gene prediction methods, summarize the measures for evaluating predictor quality, highlight open problems in this area, and discuss future research directions.

  13. The spectral-element method, Beowulf computing, and global seismology.

    Science.gov (United States)

    Komatitsch, Dimitri; Ritsema, Jeroen; Tromp, Jeroen

    2002-11-29

    The propagation of seismic waves through Earth can now be modeled accurately with the recently developed spectral-element method. This method takes into account heterogeneity in Earth models, such as three-dimensional variations of seismic wave velocity, density, and crustal thickness. The method is implemented on relatively inexpensive clusters of personal computers, so-called Beowulf machines. This combination of hardware and software enables us to simulate broadband seismograms without intrinsic restrictions on the level of heterogeneity or the frequency content.

  14. A stochastic method for computing hadronic matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrou, Constantia [University of Cyprus, Department of Physics, P.O. Box 20537, Nicosia (Cyprus); The Cyprus Institute, Computation-based Science and Technology Research Center, Nicosia (Cyprus); Constantinou, Martha; Hadjiyiannakou, Kyriakos [University of Cyprus, Department of Physics, P.O. Box 20537, Nicosia (Cyprus); Dinter, Simon; Drach, Vincent; Jansen, Karl [NIC, DESY Zeuthen, Zeuthen (Germany); Renner, Dru B. [NIC, DESY Zeuthen, Zeuthen (Germany); Jefferson Lab., Newport News (United States); Collaboration: ETM Collaboration

    2014-01-15

    We present a stochastic method for the calculation of baryon three-point functions that is more versatile than the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume, and we find a favorable signal-to-noise ratio, suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements. (orig.)

  15. Curriculum modules, software laboratories, and an inexpensive hardware platform for teaching computational methods to undergraduate computer science students

    Science.gov (United States)

    Peck, Charles Franklin

    Computational methods are increasingly important to 21st century research and education; bioinformatics and climate change are just two examples of this trend. In this context computer scientists play an important role, facilitating the development and use of the methods and tools used to support computationally-based approaches. The undergraduate curriculum in computer science is one place where computational tools and methods can be introduced to facilitate the development of appropriately prepared computer scientists. To facilitate the evolution of the pedagogy, this dissertation identifies, develops, and organizes curriculum materials, software laboratories, and the reference design for an inexpensive portable cluster computer, all of which are specifically designed to support the teaching of computational methods to undergraduate computer science students. Keywords: computational science, computational thinking, computer science, undergraduate curriculum.

  16. [Models and computation methods of EEG forward problem].

    Science.gov (United States)

    Zhang, Yinghcun; Zou, Ling; Zhu, Shanan

    2004-04-01

    The research of EEG is of great significance and clinical importance in studying the cognitive function and neural activity of the brain. There are two key problems in the field of EEG: the EEG forward problem and the EEG inverse problem. The EEG forward problem, which aims to get the distribution of the scalp potential due to a known current distribution in the brain, is the basis of the EEG inverse problem. Generally, the EEG inverse problem depends on the accuracy and efficiency of the computational method used for the EEG forward problem. This paper reviews the head models and corresponding computational methods for the EEG forward problem studied in recent years.

  17. Numerical methods for solving ODEs on the infinity computer

    Science.gov (United States)

    Mazzia, F.; Sergeyev, Ya. D.; Iavernaro, F.; Amodio, P.; Mukhametzhanov, M. S.

    2016-10-01

    New algorithms for the numerical solution of Ordinary Differential Equations (ODEs) with initial conditions are proposed. They are designed for working on a new kind of supercomputer - the Infinity Computer - that is able to deal numerically with finite, infinite and infinitesimal numbers. Due to this fact, the Infinity Computer allows one to calculate the exact derivatives of functions using infinitesimal values of the stepsize. As a consequence, the new methods are able to work with the exact values of the derivatives, instead of their approximations. Within this context, variants of one-step multi-point methods closely related to the classical Taylor formulae and to the Obrechkoff methods are considered. To get numerical evidence of the theoretical results, test problems are solved by means of the new methods and the results compared with the performance of classical methods.
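    To make the flavor of such Taylor-type methods concrete, here is a small Python sketch in which symbolic differentiation (via sympy) stands in for the Infinity Computer's exact derivatives; the order and the test problem are arbitrary illustrative choices, not the authors' implementation.

        import sympy as sp

        # One step of an order-p Taylor method for y' = f(t, y). Symbolic
        # differentiation plays the role of exact derivative evaluation.
        def taylor_step(f, t, y, tn, yn, h, p=4):
            derivs = [f]                              # y', y'', ... as expressions
            for _ in range(p - 1):
                d = sp.diff(derivs[-1], t) + sp.diff(derivs[-1], y) * f
                derivs.append(sp.simplify(d))
            ynext = sp.Float(yn)
            for k, d in enumerate(derivs, start=1):
                ynext += h**k / sp.factorial(k) * d.subs({t: tn, y: yn})
            return float(ynext)

        t, y = sp.symbols("t y")
        print(taylor_step(y, t, y, 0.0, 1.0, 0.1))    # y' = y: ~exp(0.1)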

  18. Costs evaluation methodic of energy efficient computer network reengineering

    Directory of Open Access Journals (Sweden)

    S.A. Nesterenko

    2016-09-01

    Full Text Available A key direction of modern computer network reengineering is the transfer to the new energy-saving technology IEEE 802.3az. To make a reasoned decision about the transition to the new technology, a technique is needed that allows network engineers to answer the question of the economic feasibility of a network upgrade. Aim: The aim of this research is the development of a method for calculating the cost-effectiveness of energy-efficient computer network reengineering. Materials and Methods: The method uses analytical models for calculating the power consumption of a computer network port operating under the IEEE 802.3 standard and in the energy-efficient mode of the IEEE 802.3az standard. For the calculation of the frame transmission time in the communication channel, a queuing model is used. To determine the values of the network operation parameters, a multiagent network monitoring method is proposed. Results: The method allows calculating the economic impact of transferring a computer network to the energy-saving technology IEEE 802.3az. To determine the network performance parameters, network SNMP monitoring systems based on RMON MIB agents are proposed.
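    A back-of-the-envelope version of the economic estimate might look like the Python sketch below; the port power figures, idle fraction and electricity tariff are invented placeholders, not values from the article.

        # Rough yearly saving from moving idle ports to 802.3az low-power idle.
        def annual_saving(ports, p_active_w=0.4, p_lpi_w=0.1, idle_fraction=0.6,
                          hours_per_year=8760, price_per_kwh=0.15):
            saved_w = ports * idle_fraction * (p_active_w - p_lpi_w)
            return saved_w / 1000.0 * hours_per_year * price_per_kwh

        print(annual_saving(48))  # a 48-port switch, in currency units per year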

  19. Subthreshold and suprathreshold vibrational resonance in the FitzHugh-Nagumo neuron model

    Science.gov (United States)

    Zhu, Jinjie; Kong, Chen; Liu, Xianbin

    2016-09-01

    We study subthreshold and suprathreshold vibrational resonance in the FitzHugh-Nagumo neuron model. For the subthreshold situation, two cases where the stationary states are an equilibrium point and a limit cycle are considered, and different natures of vibrational resonance are observed via theoretical and numerical methods. In particular, when the frequency of the high-frequency driving force is near the so-called canard-resonance frequency, the firing rate can be significantly enhanced in the presence of noise. For the suprathreshold situation, we show that the local maxima of the response amplitude are located at the transition boundaries between different phase-locking patterns. The minimal amplitudes of the high-frequency forcing required for firing onset are just multiples of the spiking frequency. Furthermore, phase portraits and time series show that the presence of the global maxima of the response results not only from the suprathreshold but also from the subthreshold phase-locking modes. In spite of the distinct characteristics of the two stationary states under subthreshold oscillation, the suprathreshold vibrational resonance shows no qualitative difference between the two cases.
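    For readers who want to reproduce the qualitative effect, the following Python sketch integrates a common fast-slow form of the FitzHugh-Nagumo model under two-frequency driving and measures the response at the slow frequency; all parameter values are illustrative assumptions, not those of the paper.

        import numpy as np

        def fhn_response(B, A=0.05, omega=0.1, Omega=5.0, a=0.5, eps=0.05,
                         dt=1e-3, T=500.0):
            """Response amplitude Q at the slow frequency omega for a
            FitzHugh-Nagumo neuron driven by A*cos(omega*t) + B*cos(Omega*t)."""
            n = int(T / dt)
            t = np.arange(n) * dt
            v, w = -1.0, -0.6
            vs = np.empty(n)
            for i in range(n):
                drive = A * np.cos(omega * t[i]) + B * np.cos(Omega * t[i])
                v += dt * (v - v**3 / 3.0 - w + drive) / eps   # fast variable
                w += dt * (v + a)                              # slow variable
                vs[i] = v
            # Fourier amplitude of v at the slow driving frequency.
            return 2.0 / T * abs(np.trapz(vs * np.exp(-1j * omega * t), t))

        # Sweeping B and locating the maximum of Q reveals vibrational resonance.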

  20. Case of Fitz-Hugh-Curtis syndrome in male without presentation of sexually transmitted disease.

    Science.gov (United States)

    Yi, Haram; Shim, Chan Sup; Kim, Gyu Won; Kim, Jung Seok; Choi, In Zoo

    2015-11-16

    Fitz-Hugh-Curtis syndrome is a type of perihepatitis that causes liver capsular infection without infecting the hepatic parenchyma or pelvis. Fitz-Hugh-Curtis syndrome is known to occur commonly in women of childbearing age who do not use oral contraceptives and have sexual partners older than 25 years of age. However, the syndrome has rarely been reported in males. The clinical symptoms are right upper quadrant pain and tenderness, and pleuritic right-sided chest pain. The clinical presentation is similar in males and females. We experienced a case of Fitz-Hugh-Curtis syndrome in a 60-year-old man, who had a previous history of gonorrhea, with the chief complaint of right upper quadrant abdominal pain. We describe the improvement of symptoms and recovery with allopathic medicines, and report the present case with a literature review.

  1. Real time optical edge enhancement using a Hughes liquid crystal light valve

    Science.gov (United States)

    Chao, Tien-Hsin

    1989-01-01

    The discovery of an edge enhancement effect in using a Hughes CdS liquid crystal light valve (LCLV) is reported. An edge-enhanced version of the input writing image can be directly obtained by operating the LCLV at a lower bias frequency and bias voltage. Experimental conditions in which this edge enhancement effect can be optimized are described. Experimental results show that the SNR of the readout image using this technique is superior to that obtained using high-pass filtering. The repeatability of this effect is confirmed by obtaining an edge enhancement result using two different Hughes LCLVs. The applicability of this effect to improve discrimination capability in optical pattern recognition is addressed. The results show that the Hughes LCLV can be used in both continuous tone and edge-enhancing modes by simply adjusting its bias conditions.

  2. On the Hughes' model for pedestrian flow: The one-dimensional case

    KAUST Repository

    Di Francesco, Marco

    2011-02-01

    In this paper we investigate the mathematical theory of Hughes' model for the flow of pedestrians (cf. Hughes (2002) [17]), consisting of a non-linear conservation law for the density of pedestrians coupled with an eikonal equation for a potential modelling the common sense of the task. For such an approximated system we prove existence and uniqueness of entropy solutions (in one space dimension) in the sense of Kružkov (1970) [22], in which the boundary conditions are posed following the approach of Bardos et al. (1979) [7]. We use BV estimates on the density ρ and stability estimates on the potential Π in order to prove uniqueness. Furthermore, we analyze the evolution of characteristics for the original Hughes' model in one space dimension and study the behavior of simple solutions, in order to reproduce interesting phenomena related to the formation of shocks and rarefaction waves. The characteristic calculus is supported by numerical simulations. © 2010 Elsevier Inc.

  4. Computational Methods for Dynamic Stability and Control Derivatives

    Science.gov (United States)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2004-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  5. Computational methods for three-dimensional microscopy reconstruction

    CERN Document Server

    Frank, Joachim

    2014-01-01

    Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology.   Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.

  6. POINCARE-LIGHTHILL-KUO METHOD AND SYMBOLIC COMPUTATION

    Institute of Scientific and Technical Information of China (English)

    戴世强

    2001-01-01

    This paper elucidates the effectiveness of combining the Poincare-Lighthill-Kuo method (PLK method, for short) and symbolic computation. Firstly, the idea and history of the PLK method are briefly introduced. Then, the difficulty of intermediate expression swell, often encountered in symbolic computation, is outlined. For overcoming the difficulty, a semi-inverse algorithm was proposed by the author, with which the lengthy parts of intermediate expressions are first frozen in the form of symbols till the final stage of seeking perturbation solutions. To discuss the applications of the above algorithm, the related work of the author and his research group on nonlinear oscillations and waves is concisely reviewed. The computer-extended perturbation solution of the Duffing equation shows that the asymptotic solution obtained with the PLK method possesses a convergence radius of 1 and thus the range of validity of the solution is considerably enlarged. The studies on internal solitary waves in stratified fluid and on the head-on collision between two solitary waves in a hyperelastic rod indicate that by means of the presented methods, very complicated manipulation, inconceivable in hand calculation, can be conducted and thus result in higher-order evolution equations and asymptotic solutions. The examples illustrate that the algorithm helps to realize symbolic computation on microcomputers. Finally, it is concluded that with the aid of symbolic computation, the vitality of the PLK method is greatly strengthened and, at least for solutions to conservative systems of oscillations and waves, it is a powerful tool.

  7. Fast and accurate method for computing ATC with voltage stability

    CERN Document Server

    Eidiani, M; Vahedi, E

    2002-01-01

    Order 889 mandated each control area to compute ATC (Available Transfer Capability) and post it on a communication system called the Open Access Same-time Information System (OASIS). Approaches to computing ATC can be divided into the following groups: static and dynamic methods. This paper presents a fast method for ATC calculation with a voltage stability termination criterion. We use an estimate of the determinant of the Jacobian matrix for the assessment of voltage stability. This method is compared with the following methods: the difference between the energy at the SEP (Stable Equilibrium Point) and the UEP (Unstable Equilibrium Point), the ts index of Dr. Chiang, and continuation power flow. The ideas are demonstrated on 2-, 3-, 7- (CIGRE), 10-, 30- (IEEE) and 145-bus (Iowa State University) systems.
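    The termination criterion can be sketched generically: ramp the transfer level and stop once the power-flow Jacobian becomes numerically singular. In the Python sketch below, jacobian_at is an assumed user-supplied callback and the tolerances are illustrative; this is not the paper's estimator.

        import numpy as np

        def atc_estimate(jacobian_at, base_transfer=0.0, step=0.01,
                         tol=1e-6, max_transfer=10.0):
            """Ramp the transfer level lam until det(J(lam)) ~ 0, which
            signals the voltage-stability limit."""
            lam = base_transfer
            while lam < max_transfer:
                if abs(np.linalg.det(jacobian_at(lam))) < tol:
                    return lam          # transfer limited by voltage stability
                lam += step
            return max_transfer         # no instability found within the ramp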

  8. New Methods for Design and Computation of Freeform Optics

    Science.gov (United States)

    2015-07-09

    strategy for constructing weak solutions to nonlinear partial differential equations arising in design problems involving freeform optical surfaces [10... working in related areas of fully nonlinear partial differential equations, optics, and computational methods for optimization under extremely large... as a partial differential equation (PDE) of second order with nonstandard boundary conditions. The solution to this PDE problem is a scalar function

  9. pyro: Python-based tutorial for computational methods for hydrodynamics

    Science.gov (United States)

    Zingale, Michael

    2015-07-01

    pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.

  10. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.

  11. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or c

  12. Method and system for environmentally adaptive fault tolerant computing

    Science.gov (United States)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
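    A hypothetical toy version of the claimed decision step, with invented names and threshold, might look as follows in Python; the patent itself does not prescribe this rule.

        # Pick a redundancy mode from a measured environmental condition and
        # the system's measured sensitivity to it (names are illustrative).
        def choose_fault_tolerance(env_level, sensitivity, threshold=1.0):
            risk = env_level * sensitivity
            return "triple-modular-redundancy" if risk > threshold else "duplex"

        print(choose_fault_tolerance(env_level=2.5, sensitivity=0.6))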

  13. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  14. Symposium Festschrift Hughes (Vernon W) to Celebrate his 70th birthday

    CERN Document Server

    1992-01-01

    The contents of this book are derived from a celebration of the 70th birthday of Vernon W Hughes. Professor Hughes' career has touched on several areas in modern physics ranging from precision measurements of the fundamental properties of atoms to measurements of spin structure functions of the proton via deep inelastic muon scattering at the world's highest energy fixed target machines. This observance of his 70th birthday brings together experimental and theoretical physicists who are leaders of the many fields in which he has made contributions.

  15. 77 FR 1975 - Union Pacific Railroad Company-Discontinuance of Service Exemption-in Pittsburg, Hughes, and...

    Science.gov (United States)

    2012-01-12

    ..., Hughes, and Seminole Counties, Okla. (the line). The line traverses United States Postal Service Zip... discontinue service over a portion of a line of railroad known as the Shawnee Branch Line, between milepost... in Hughes County, which makes the line 0.19 miles longer than the terminal mileposts would...

  16. Rivers and Hughes's Construction of Black Culture in White America——Textual Analysis of "The Negro Speaks of Rivers"

    Institute of Scientific and Technical Information of China (English)

    曾慧

    2009-01-01

    Langston Hughes's central purpose in writing is "to explain and illuminate the Negro condition in America". By means of textual analysis, this thesis seeks to discover how the images of rivers in "The Negro Speaks of Rivers" construct black culture, and to find Hughes's identity in America.

  17. CT Diagnosis of Fitz-Hugh and Curtis Syndrome: Value of the Arterial Phase Scan

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Seung Ho; Kim, Myeong Jin; Lim, Joon Seok; Kim, Joo Hee; Kim, Ki Whang [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    2007-02-15

    We wanted to evaluate the role of the arterial phase (AP) scan together with the portal venous phase (PP) scan in the diagnosis of Fitz-Hugh-Curtis syndrome (FHCS) using computed tomography (CT). Twenty-five patients with FHCS and 25 women presenting with non-specifically diagnosed acute abdominal pain, all of whom underwent biphasic CT examinations, were evaluated. The AP scan included the upper abdomen, and the PP scan included the whole abdomen. Two radiologists blindly and retrospectively reviewed the PP scans first, and then they reviewed the AP plus PP scans. The diagnostic accuracy for FHCS on each image set was compared for each reader by analyzing the area under the receiver operating characteristic curve (Az). Weighted kappa (wk) statistics were used to measure the interobserver agreement for the presence of CT signs of pelvic inflammatory disease (PID) on the PP images and for FHCS as the diagnosis based on the increased perihepatic enhancement on both sets of images. The individual diagnostic accuracy for FHCS was higher on the biphasic images (Az = 0.905 and 0.942 for readers 1 and 2, respectively) than on the PP images alone (Az = 0.806 and 0.706, respectively). The interobserver agreement for the presence of PID on the PP images was moderate (wk = 0.530). The interobserver agreement for FHCS as the diagnosis was moderate on the PP images alone (wk = 0.413), but it was substantial on the biphasic images (wk = 0.719). Inclusion of the AP scan is helpful to depict the increased perihepatic enhancement, and it improves the diagnostic accuracy of FHCS on CT.

  18. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.

  19. A granular computing method for nonlinear convection-diffusion equation

    Directory of Open Access Journals (Sweden)

    Tian Ya Lan

    2016-01-01

    Full Text Available This paper introduces a method for solving the nonlinear convection-diffusion equation (NCDE), based on the combination of granular computing (GrC) and the characteristics finite element method (CFEM). The key idea of the proposed method (denoted GrC-CFEM) is to reconstruct the solution from the coarse-grained layer to the fine-grained layer. It first gets the nonlinear solution on the coarse-grained layer, and then a Taylor expansion is applied to linearize the NCDE on the fine-grained layer. Switching to the fine-grained layer, the linear solution is directly derived from the nonlinear solution. The full nonlinear problem is solved only on the coarse-grained layer. Numerical experiments show that the GrC-CFEM can accelerate the convergence and improve the computational efficiency without sacrificing the accuracy.

  20. A Method for Weight Multiplicity Computation Based on Berezin Quantization

    Directory of Open Access Journals (Sweden)

    David Bar-Moshe

    2009-09-01

    Full Text Available Let G be a compact semisimple Lie group and T be a maximal torus of G. We describe a method for weight multiplicity computation in unitary irreducible representations of G, based on the theory of Berezin quantization on G/T. Let Γ_{hol}(L^λ) be the reproducing kernel Hilbert space of holomorphic sections of the homogeneous line bundle L^λ over G/T associated with the highest weight λ of the irreducible representation π_λ of G. The multiplicity of a weight m in π_λ is computed from the functional analytical structure of the Berezin symbol of the projector in Γ_{hol}(L^λ) onto the subspace of weight m. We describe a method for the construction of this symbol and the evaluation of the weight multiplicity as the rank of a Hermitian form. The application of this method is described in a number of examples.

  1. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Science.gov (United States)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
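    The harmonic grouping step can be illustrated compactly. The Python sketch below builds a pitch energy spectrum from a magnitude spectrum and peak-picks it; it is a simplified stand-in for the RTFI-based procedure, with integer-bin harmonics as an assumption.

        import numpy as np

        def pitch_energy_spectrum(spec, lo_bin, n_cand, n_harm=5):
            """Candidate pitch k collects spectral energy at bins k, 2k, ..."""
            pes = np.zeros(n_cand)
            for j, k in enumerate(range(lo_bin, lo_bin + n_cand)):
                idx = np.arange(1, n_harm + 1) * k      # harmonic bin indices
                idx = idx[idx < len(spec)]
                pes[j] = spec[idx].sum()
            return pes

        def pick_peaks(pes, threshold):
            """Simple local-maximum peak picking above a threshold."""
            return [i for i in range(1, len(pes) - 1)
                    if pes[i] > threshold
                    and pes[i] >= pes[i - 1] and pes[i] >= pes[i + 1]]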

  2. Evolutionary Computation Methods and their applications in Statistics

    Directory of Open Access Journals (Sweden)

    Francesco Battaglia

    2013-05-01

    Full Text Available A brief discussion of the genesis of evolutionary computation methods, their relationship to artificial intelligence, and the contribution of genetics and Darwin’s theory of natural evolution is provided. Then, the main evolutionary computation methods are illustrated: evolution strategies, genetic algorithms, estimation of distribution algorithms, differential evolution, and a brief description of some evolutionary behavior methods such as ant colony and particle swarm optimization. We also discuss the role of the genetic algorithm for multivariate probability distribution random generation, rather than as a function optimizer. Finally, some relevant applications of genetic algorithm to statistical problems are reviewed: selection of variables in regression, time series model building, outlier identification, cluster analysis, design of experiments.
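    As a minimal illustration of the genetic algorithm in a statistical setting, the Python sketch below evolves bit strings such as those used for variable selection in regression (each bit marking whether a predictor is included); the operators and parameters are generic textbook choices, not those of any method reviewed in the article.

        import random

        def ga(fitness, n_bits=20, pop=30, gens=100, p_mut=0.02):
            P = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
            for _ in range(gens):
                P.sort(key=fitness, reverse=True)
                parents = P[:pop // 2]                  # truncation selection
                P = []
                while len(P) < pop:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, n_bits)   # one-point crossover
                    child = a[:cut] + b[cut:]
                    P.append([g ^ (random.random() < p_mut) for g in child])
            return max(P, key=fitness)

        best = ga(lambda bits: sum(bits))               # toy fitness: all ones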

  3. Computational methods to dissect cis-regulatory transcriptional networks

    Indian Academy of Sciences (India)

    Vibha Rani

    2007-12-01

    The formation of diverse cell types from an invariant set of genes is governed by biochemical and molecular processes that regulate gene activity. A complete understanding of the regulatory mechanisms of gene expression is the major function of genomics. Computational genomics is a rapidly emerging area for deciphering the regulation of metazoan genes as well as interpreting the results of high-throughput screening. The integration of computer science with biology has expedited molecular modelling and processing of large-scale data inputs such as microarrays, analysis of genomes, transcriptomes and proteomes. Many bioinformaticians have developed various algorithms for predicting transcriptional regulatory mechanisms from the sequence, gene expression and interaction data. This review contains compiled information of various computational methods adopted to dissect gene expression pathways.

  4. Coupled computation method of physics fields in aluminum reduction cells

    Institute of Scientific and Technical Information of China (English)

    周乃君; 梅炽; 姜昌伟; 周萍; 李劼

    2003-01-01

    Considering the importance of studying the physics fields of aluminum reduction cells by computer simulation, so as to optimize cell design and develop new types of cells, mathematical and physical models were established based on an analysis of the coupled relations among the physics fields, and a coupled computation method for the distribution of electric current and magnetic field, the temperature profile and the metal velocity in the cells was developed. The computational results for 82 kA prebaked cells agree well with the measured results, and the errors of the maximum values calculated for the three main physics fields are less than 10%, which proves that the model and algorithm are valid. The software developed can thus not only be applied to the optimization design of traditional aluminum reduction cells, but also provides a better technological basis for developing new drained aluminum reduction cells.

  5. Reducing Total Power Consumption Method in Cloud Computing Environments

    CERN Document Server

    Kuribayashi, Shin-ichi

    2012-01-01

    The widespread use of cloud computing services is expected to increase the power consumed by ICT equipment in cloud computing environments rapidly. This paper first identifies the need of the collaboration among servers, the communication network and the power network, in order to reduce the total power consumption by the entire ICT equipment in cloud computing environments. Five fundamental policies for the collaboration are proposed and the algorithm to realize each collaboration policy is outlined. Next, this paper proposes possible signaling sequences to exchange information on power consumption between network and servers, in order to realize the proposed collaboration policy. Then, in order to reduce the power consumption by the network, this paper proposes a method of estimating the volume of power consumption by all network devices simply and assigning it to an individual user.

  6. Method of Computer-aided Instruction in Situation Control Systems

    Directory of Open Access Journals (Sweden)

    Anatoliy O. Kargin

    2013-01-01

    Full Text Available The article considers the problem of computer-aided instruction in a context-chain-motivated situation control system for complex technical system behavior. The conceptual and formal models of situation control with practical instruction are considered. Acquisition of new behavior knowledge is presented as structural changes in system memory, in the form of a situational agent set. The model and method of computer-aided instruction represent a formalization based on the nondistinct theories of physiologists and cognitive psychologists. The formal instruction model describes situation and reaction formation and their dependence on different parameters affecting education, such as the reinforcement value and the time between the stimulus, the action and the reinforcement. The change of the contextual link between situational elements during use is formalized. The examples and results of computer instruction experiments with the robot device "LEGO MINDSTORMS NXT", equipped with ultrasonic distance, touch and light sensors, are presented.

  7. Computational methods for coupling microstructural and micromechanical materials response simulations

    Energy Technology Data Exchange (ETDEWEB)

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  8. Computing eigenvalues occurring in continuation methods with the Jacobi-Davidson QZ method

    NARCIS (Netherlands)

    Dorsselaer, J.L.M. van

    1997-01-01

    Continuation methods are a well-known technique for computing several stationary solutions of problems involving one or more physical parameters. In order to determine whether a stationary solution is stable, and to detect the bifurcation points of the problem, one has to compute the rightmost eigenvalues.

  9. THE DOMAIN DECOMPOSITION TECHNIQUES FOR THE FINITE ELEMENT PROBABILITY COMPUTATIONAL METHODS

    Institute of Scientific and Technical Information of China (English)

    LIU Xiaoqi

    2000-01-01

    In this paper, we shall study the domain decomposition techniques for the finite element probability computational methods. These techniques provide a theoretical basis for parallel probability computational methods.

  10. Methods for library-scale computational protein design.

    Science.gov (United States)

    Johnson, Lucas B; Huber, Thaddaus R; Snow, Christopher D

    2014-01-01

    Faced with a protein engineering challenge, a contemporary researcher can choose from myriad design strategies. Library-scale computational protein design (LCPD) is a hybrid method suitable for the engineering of improved protein variants with diverse sequences. This chapter discusses the background and merits of several practical LCPD techniques. First, LCPD methods suitable for delocalized protein design are presented in the context of example design calculations for cellobiohydrolase II. Second, localized design methods are discussed in the context of an example design calculation intended to shift the substrate specificity of a ketol-acid reductoisomerase Rossmann domain from NADPH to NADH.

  11. Practical methods to improve the development of computational software

    Energy Technology Data Exchange (ETDEWEB)

    Osborne, A. G.; Harding, D. W.; Deinert, M. R. [Department of Mechanical Engineering, University of Texas, Austin (United States)

    2013-07-01

    The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960s, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best-practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)

  12. Applications of meshless methods for damage computations with finite strains

    Science.gov (United States)

    Pan, Xiaofei; Yuan, Huang

    2009-06-01

    Material defects such as cavities have great effects on the damage process in ductile materials. Computations based on finite element methods (FEMs) often suffer from instability due to material failure as well as large distortions. To improve computational efficiency and robustness, the element-free Galerkin (EFG) method is applied within the micro-mechanical constitutive damage model proposed by Gurson and modified by Tvergaard and Needleman (the GTN damage model). The EFG algorithm is implemented in the general-purpose finite element code ABAQUS via the user interface UEL. With the help of the EFG method, damage processes in uniaxial tension specimens and notched specimens are analyzed and verified against experimental data. Computational results reveal that the damage which takes place in the interior of specimens extends to the exterior and causes fracture of the specimens; the damage is a fast process relative to the whole tensile loading. The EFG method provides a more stable and robust numerical solution compared with the FEM analysis.

  13. Computational methods to determine the structure of hydrogen storage materials

    Science.gov (United States)

    Mueller, Tim

    2009-03-01

    To understand the mechanisms and thermodynamics of material-based hydrogen storage, it is important to know the structure of the material and the positions of the hydrogen atoms within the material. Because hydrogen can be difficult to resolve experimentally, computational research has proven to be a valuable tool to address these problems. We discuss different computational methods for identifying the structure of hydrogen materials and the positions of hydrogen atoms, and we illustrate the methods with specific examples. Through the use of ab-initio molecular dynamics, we identify molecular hydrogen binding sites in the metal-organic framework commonly known as MOF-5 [1]. We present a method to identify the positions of atomic hydrogen in imide structures using a novel type of effective Hamiltonian. We apply this new method to lithium imide (Li2NH), a potentially important hydrogen storage material, and demonstrate that it predicts a new ground state structure [2]. We also present the results of a recent computational study of the room-temperature structure of lithium imide in which we suggest a new structure that reconciles the differences between previous experimental and theoretical studies. [4pt] [1] T. Mueller and G. Ceder, Journal of Physical Chemistry B 109, 17974 (2005). [0pt] [2] T. Mueller and G. Ceder, Physical Review B 74 (2006).

  14. Yeast ancestral genome reconstructions: the possibilities of computational methods II.

    Science.gov (United States)

    Chauve, Cedric; Gavranovic, Haris; Ouangraoua, Aida; Tannier, Eric

    2010-09-01

    Since the availability of assembled eukaryotic genomes, the first one being a budding yeast, many computational methods for the reconstruction of ancestral karyotypes and gene orders have been developed. The difficulty has always been to assess their reliability, since we often miss a good knowledge of the true ancestral genomes to compare their results to, as well as a good knowledge of the evolutionary mechanisms to test them on realistic simulated data. In this study, we propose some measures of reliability of several kinds of methods, and apply them to infer and analyse the architectures of two ancestral yeast genomes, based on the sequence of seven assembled extant ones. The pre-duplication common ancestor of S. cerevisiae and C. glabrata has been inferred manually by Gordon et al. (Plos Genet. 2009). We show why, in this case, a good convergence of the methods is explained by some properties of the data, and why results are reliable. In another study, Jean et al. (J. Comput Biol. 2009) proposed an ancestral architecture of the last common ancestor of S. kluyveri, K. thermotolerans, K. lactis, A. gossypii, and Z. rouxii inferred by a computational method. In this case, we show that the dataset does not seem to contain enough information to infer a reliable architecture, and we construct a higher resolution dataset which gives a good reliability on a new ancestral configuration.

  15. Computed Optical Interferometric Imaging: Methods, Achievements, and Challenges.

    Science.gov (United States)

    South, Fredrick A; Liu, Yuan-Zhi; Carney, P Scott; Boppart, Stephen A

    2016-01-01

    Three-dimensional high-resolution optical imaging systems are generally restricted by the trade-off between resolution and depth-of-field as well as imperfections in the imaging system or sample. Computed optical interferometric imaging is able to overcome these longstanding limitations using methods such as interferometric synthetic aperture microscopy (ISAM) and computational adaptive optics (CAO) which manipulate the complex interferometric data. These techniques correct for limited depth-of-field and optical aberrations without the need for additional hardware. This paper aims to outline these computational methods, making them readily available to the research community. Achievements of the techniques will be highlighted, along with past and present challenges in implementing the techniques. Challenges such as phase instability and determination of the appropriate aberration correction have been largely overcome so that imaging of living tissues using ISAM and CAO is now possible. Computed imaging in optics is becoming a mature technology poised to make a significant impact in medicine and biology.

  16. Characterization of Meta-Materials Using Computational Electromagnetic Methods

    Science.gov (United States)

    Deshpande, Manohar; Shin, Joon

    2005-01-01

    An efficient and powerful computational method is presented to synthesize a meta-material with specified electromagnetic properties. Using the periodicity of meta-materials, a Finite Element Methodology (FEM) is developed to estimate the reflection and transmission through the meta-material structure for normal plane wave incidence. For efficient computation of the reflection and transmission through a meta-material over a wide frequency band, a Finite Difference Time Domain (FDTD) approach is also developed. Using the Nicholson-Ross method and Genetic Algorithms, a robust procedure to extract the electromagnetic properties of a meta-material from the knowledge of its reflection and transmission coefficients is described. A few numerical examples are also presented to validate the present approach.

  17. Methods and computer codes for nuclear systems calculations

    Indian Academy of Sciences (India)

    B P Kochurov; A P Knyazev; A Yu Kwaretzkheli

    2007-02-01

    Some numerical methods for reactor cells, sub-critical systems and 3D models of nuclear reactors are presented. The methods are developed for steady-state and space-time calculations. The computer code TRIFON solves the space-energy problem in two-dimensional systems of finite height and calculates heterogeneous few-group matrix parameters of reactor cells. These parameters are used as input data in the computer code SHERHAN, which solves the 3D heterogeneous reactor equation for steady states and simulates 3D space-time neutron processes. A modification of TRIFON was developed for the simulation of space-time processes in sub-critical systems with external sources. An option of the SHERHAN code for systems with external sources is under development.

  18. An introduction to computer simulation methods applications to physical systems

    CERN Document Server

    Gould, Harvey; Christian, Wolfgang

    2007-01-01

    Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Contents: Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics. For all readers interested in developing programming habits in the context of doing phy...

  19. COMPUTATIONAL FLOW RATE FEEDBACK AND CONTROL METHOD IN HYDRAULIC ELEVATORS

    Institute of Scientific and Technical Information of China (English)

    Xu Bing; Ma Jien; Lin Jianjie

    2005-01-01

    The computational flow rate feedback and control method, which can be used in proportional-valve-controlled hydraulic elevators, is discussed and analyzed. In a hydraulic elevator with this method, a microprocessor receives pressure information from the pressure transducers and computes the flow rate through the proportional valve with a real-time pressure-flow conversion algorithm. This hydraulic elevator has lower cost and energy consumption than the conventional closed-loop-controlled hydraulic elevator, whose flow rate is measured by a flow meter. Experiments are carried out on a test rig which can simulate the load of a hydraulic elevator. Based on the experimental results, the means to modify the pressure-flow conversion algorithm are pointed out.
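    The pressure-flow conversion can be illustrated with the textbook orifice relation, used here in Python as a stand-in for the paper's calibrated real-time algorithm; the discharge coefficient, orifice area and oil density are assumptions.

        import math

        def valve_flow(p_up_pa, p_down_pa, cd=0.7, area_m2=1e-4, rho=870.0):
            """Volumetric flow [m^3/s] from the pressure drop across the valve."""
            dp = max(p_up_pa - p_down_pa, 0.0)
            return cd * area_m2 * math.sqrt(2.0 * dp / rho)

        print(valve_flow(8.0e6, 7.5e6))  # illustrative hydraulic pressures [Pa]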

  20. A new computational method for reacting hypersonic flows

    Science.gov (United States)

    Niculescu, M. L.; Cojocaru, M. G.; Pricop, M. V.; Fadgyas, M. C.; Pepelea, D.; Stoican, M. G.

    2017-07-01

    Hypersonic gas dynamics computations are challenging due to the difficulty of having reliable and robust chemistry models that are usually added to the Navier-Stokes equations. From the numerical point of view, it is very difficult to integrate the Navier-Stokes equations and the chemistry model equations together, because these partial differential equations have different characteristic time scales. For these reasons, almost all known finite volume methods quickly fail to solve this second-order partial differential system. Unfortunately, the heating of Earth reentry vehicles such as space shuttles and capsules is closely linked to endothermic chemical reactions. A better prediction of the wall heat flux leads to a smaller safety coefficient for the thermal shield of a space reentry vehicle; therefore, the size of the thermal shield decreases and the payload increases. For these reasons, the present paper proposes a new computational method based on chemical equilibrium, which gives accurate predictions of hypersonic heating in order to support Earth reentry capsule design.

  1. NATO Advanced Study Institute on Methods in Computational Molecular Physics

    CERN Document Server

    Diercksen, Geerd

    1992-01-01

    This volume records the lectures given at a NATO Advanced Study Institute on Methods in Computational Molecular Physics held in Bad Windsheim, Germany, from 22nd July until 2nd August, 1991. This NATO Advanced Study Institute sought to bridge the quite considerable gap which exists between the presentation of molecular electronic structure theory found in contemporary monographs such as, for example, McWeeny's Methods of Molecular Quantum Mechanics (Academic Press, London, 1989) or Wilson's Electron correlation in molecules (Clarendon Press, Oxford, 1984) and the realization of the sophisticated computational algorithms required for their practical application. It sought to underline the relation between the electronic structure problem and the study of nuclear motion. Software for performing molecular electronic structure calculations is now being applied in an increasingly wide range of fields in both the academic and the commercial sectors. Numerous applications are reported in areas as diverse as catalysi...

  2. Improved Method of Blind Speech Separation with Low Computational Complexity

    Directory of Open Access Journals (Sweden)

    Kazunobu Kondo

    2011-01-01

    The paper proposes a frame-wise spectral soft mask method based on an interchannel power ratio of tentative separated signals in the frequency domain. The soft mask cancels the transfer function between sources and separated signals. A theoretical analysis of the selection criteria and the soft mask is given. Performance and effectiveness are evaluated via source separation simulations and a computational estimate, and experimental results show the significantly improved performance of the proposed method. The segmental signal-to-noise ratio achieves 7 dB and 3 dB, and the cepstral distortion achieves 1 dB and 2.5 dB, in anechoic and reverberant conditions, respectively. Moreover, computational complexity is reduced by more than 80% compared with unmodified FDICA.
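
    The exact selection criteria and mask construction are given in the paper; as a rough stand-in, a soft mask can be built from the interchannel power ratio of two tentative separations, as sketched below with random spectrograms in place of real FDICA outputs (the exponent p is an assumed knob, not a value from the paper).

```python
import numpy as np

def power_ratio_soft_mask(Y1, Y2, p=2.0, eps=1e-12):
    """Frame-wise soft masks from the interchannel power ratio of two
    tentative separated spectrograms Y1, Y2 (freq x frames, complex).
    A simplified stand-in for the criteria analyzed in the paper."""
    P1, P2 = np.abs(Y1) ** p, np.abs(Y2) ** p
    m1 = P1 / (P1 + P2 + eps)   # soft mask for source 1, values in [0, 1]
    return m1, 1.0 - m1

# Toy usage with random arrays standing in for tentative separations
rng = np.random.default_rng(0)
shape = (257, 100)
Y1 = rng.normal(size=shape) + 1j * rng.normal(size=shape)
Y2 = rng.normal(size=shape) + 1j * rng.normal(size=shape)
m1, m2 = power_ratio_soft_mask(Y1, Y2)
S1_hat = m1 * Y1   # masked spectrogram estimate for source 1
```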

  3. Computer Simulation Modeling: A Method for Predicting the Utilities of Alternative Computer-Aided Threat Evaluation Algorithms

    Science.gov (United States)

    1990-09-01

    Technical Report 911: Computer Simulation Modeling: A Method for Predicting the Utilities of Alternative Computer-Aided Threat Evaluation Algorithms.

  4. 75 FR 24973 - United States v. Baker Hughes Inc., et al.

    Science.gov (United States)

    2010-05-06

    .... IV. Trade and Commerce A. Background 1. Overview of Drilling and Completion Process 11. Offshore... impose on each other's pricing. Post merger, Baker Hughes will likely find it profitable to raise the... transaction may present a profitable opportunity to remove one or two vessels from the Gulf, an opportunity...

  5. The Political Economy of Rhetorical Style: Hugh Blair's Response to the Civic-Commercial Dilemma

    Science.gov (United States)

    Longaker, Mark Garrett

    2008-01-01

    Recent scholarship treats Hugh Blair's "Lectures" on Rhetoric and Belles Lettres (1783) as an effort to endorse either the liberal or the civic political traditions in eighteenth-century Scotland. This essay questions this orthodoxy by reading the "Lectures", and in particular Blair's attention to considerations of rhetorical style, against their…

  6. Markov chain Monte Carlo approach to parameter estimation in the FitzHugh-Nagumo model

    DEFF Research Database (Denmark)

    Jensen, Anders Christian; Ditlevsen, Susanne; Kessler, Mathieu

    2012-01-01

    Excitability is observed in a variety of natural systems, such as neuronal dynamics, cardiovascular tissues, or climate dynamics. The stochastic FitzHugh-Nagumo model is a prominent example representing an excitable system. To validate the practical use of a model, the first step is to estimate m...
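
    The record is truncated, but the stochastic FitzHugh-Nagumo model it refers to is standard. As a minimal sketch (one common parameterization, with assumed coefficients and additive noise on the voltage variable only), an Euler-Maruyama scheme generates the kind of trajectories on which such parameter estimation operates.

```python
import numpy as np

def simulate_fhn(T=200.0, dt=0.01, eps=0.08, a=0.7, b=0.8, I=0.5,
                 sigma=0.3, seed=1):
    """Euler-Maruyama simulation of a stochastic FitzHugh-Nagumo model.
    Parameter values are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v, w = np.empty(n), np.empty(n)
    v[0], w[0] = -1.0, 1.0
    for k in range(n - 1):
        dB = rng.normal(0.0, np.sqrt(dt))   # Brownian increment
        v[k + 1] = v[k] + (v[k] - v[k] ** 3 / 3.0 - w[k] + I) * dt + sigma * dB
        w[k + 1] = w[k] + eps * (v[k] + a - b * w[k]) * dt
    return v, w

v, w = simulate_fhn()
print("upward crossings of v = 1:", int(((v[:-1] < 1) & (v[1:] >= 1)).sum()))
```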

  7. Principles and Practices Fostering Inclusive Excellence: Lessons from the Howard Hughes Medical Institute's Capstone Institutions

    Science.gov (United States)

    DiBartolo, Patricia Marten; Gregg-Jolly, Leslie; Gross, Deborah; Manduca, Cathryn A.; Iverson, Ellen; Cooke, David B., III; Davis, Gregory K.; Davidson, Cameron; Hertz, Paul E.; Hibbard, Lisa; Ireland, Shubha K.; Mader, Catherine; Pai, Aditi; Raps, Shirley; Siwicki, Kathleen; Swartz, Jim E.

    2016-01-01

    Best-practices pedagogy in science, technology, engineering, and mathematics (STEM) aims for inclusive excellence that fosters student persistence. This paper describes principles of inclusivity across 11 primarily undergraduate institutions designated as Capstone Awardees in Howard Hughes Medical Institute's (HHMI) 2012 competition. The Capstones…

  9. "Comments on Greenhow, Robelia, and Hughes": Expanding the New Literacies Conversation

    Science.gov (United States)

    Leu, Donald J.; O'Byrne, W. Ian; Zawilinski, Lisa; McVerry, J. Greg; Everett-Cacopardo, Heidi

    2009-01-01

    Using a popularized notion such as Web 2.0 limits research efforts by employing a binary construct, one initially prompted by commercial concerns. Instead, the authors of this article, commenting on Greenhow, Robelia, and Hughes (2009), suggest that continuous, not dichotomous, change in the technologies of literacy and learning defines the…

  10. Computational Methods for Predictive Simulation of Stochastic Turbulence Systems

    Science.gov (United States)

    2015-11-05

    AFRL report AFRL-AFOSR-VA-TR-2015-0363, Computational Methods for Predictive Simulation of Stochastic Turbulence Systems, AFOSR grant FA 9550-12-1-0191, William Layton and Catalin Trenchea, Department of Mathematics, University of Pittsburgh. Personnel supported during the grant include graduate students Nan Jian (currently a postdoc at FSU) and Sarah Khankan, University of Pittsburgh.

  11. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.

  12. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
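
    The three sampling schemes compared in the study are easy to replicate in miniature. The sketch below generates a toy 0/1 behavior stream (event counts and durations are assumed, not taken from the paper) and scores momentary time sampling, partial-interval recording and whole-interval recording against the true prevalence.

```python
import numpy as np

rng = np.random.default_rng(42)

# A 600 s observation period as a 0/1 behavior stream at 1 s resolution,
# with randomly placed events (toy parameters)
stream = np.zeros(600, dtype=bool)
for start in rng.integers(0, 590, size=15):
    stream[start:start + rng.integers(3, 20)] = True
true_prev = stream.mean()

interval = 10                        # 10 s observation intervals
chunks = stream.reshape(-1, interval)
mts = chunks[:, -1].mean()           # momentary time sampling
pir = chunks.any(axis=1).mean()      # partial-interval recording
wir = chunks.all(axis=1).mean()      # whole-interval recording

print(f"true={true_prev:.2f}  MTS={mts:.2f}  PIR={pir:.2f}  WIR={wir:.2f}")
```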

  13. A hierarchical method for molecular docking using cloud computing.

    Science.gov (United States)

    Kang, Ling; Guo, Quan; Wang, Xicheng

    2012-11-01

    Discovering small molecules that interact with protein targets will be a key part of future drug discovery efforts. Molecular docking of drug-like molecules is likely to be valuable in this field; however, the great number of such molecules makes the potential size of this task enormous. In this paper, a method to screen small-molecule databases using cloud computing is proposed. This method is called the hierarchical method for molecular docking and can be completed in a relatively short period of time. In this method, the optimization of molecular docking is divided into two subproblems based on their different effects on the protein-ligand interaction energy. An adaptive genetic algorithm is developed to solve the optimization problem, and a new docking program (FlexGAsDock) based on the hierarchical docking method has been developed. The implementation of docking on a cloud computing platform is then discussed. The docking results show that this method can be conveniently used for the efficient molecular design of drugs.

  14. A Review of Computational Methods for Predicting Drug Targets.

    Science.gov (United States)

    Huang, Guohua; Yan, Fengxia; Tan, Duoduo

    2016-11-14

    Drug discovery and development is not only a time-consuming and labor-intensive process but also full of risk. Identifying the targets of small molecules helps evaluate the safety of drugs and find new therapeutic applications. Modern biotechnologies measure a wide variety of properties related to drugs and targets from different perspectives, thus generating a large body of data. This undoubtedly provides a solid foundation for exploring relationships between drugs and targets. A large number of computational techniques have recently been developed for drug target prediction. In this paper, we summarize these computational methods and classify them into structure-based, molecular-activity-based, side-effect-based and multi-omics-based predictions according to the data used for inference. The multi-omics-based methods are further grouped into two types: classifier-based and network-based predictions. Furthermore, the advantages and limitations of each type of method are discussed. Finally, we point out future directions for computational prediction of drug targets.

  15. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    The last years have seen the advent and development of many devices able to record and store an always increasing amount of complex and high dimensional data; 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real time financial data, system control datasets. The analysis of this data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici

  16. Probabilistic Methods in Multi-Class Brain-Computer Interface

    Institute of Scientific and Technical Information of China (English)

    Ping Yang; Xu Lei; Tie-Jun Liu; Peng Xu; De-Zhong Yao

    2009-01-01

    Two probabilistic methods are extended to research multi-class motor imagery in brain-computer interfaces (BCI): support vector machine with posterior probability (PSVM) and Bayesian linear discriminant analysis with probabilistic output (PBLDA). A comparative evaluation of these two methods is conducted. The results show that: 1) probabilistic information can improve the performance of BCI for subjects with a high kappa coefficient, and 2) PSVM usually results in a stable kappa coefficient, whereas PBLDA is more efficient in estimating the model parameters.
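
    The PSVM ingredient, an SVM whose outputs are converted to class posteriors, can be sketched with off-the-shelf tools. The toy example below uses scikit-learn's Platt-scaled SVC on synthetic four-class features (stand-ins for the motor-imagery data, which the record does not provide) and scores it with the kappa coefficient mentioned above.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

# Synthetic four-class features standing in for motor-imagery EEG features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(100, 8)) for c in range(4)])
y = np.repeat(np.arange(4), 100)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# SVM with posterior probabilities (Platt scaling), in the spirit of PSVM
clf = SVC(kernel="rbf", probability=True).fit(Xtr, ytr)
proba = clf.predict_proba(Xte)   # per-class posterior probabilities
pred = proba.argmax(axis=1)
print("kappa:", round(cohen_kappa_score(yte, pred), 3))
```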

  17. A comparative study of two computer-aided measurement methods

    OpenAIRE

    Gronau, Franziska

    2010-01-01

    Growth and developmental disorders of the elbow joint are frequent causes of lameness of the thoracic limb in the dog. The Golden Retriever is one of the most commonly affected breeds. Two different computer-aided measurement methods are compared in this study. The aim is to find out whether one of these measurement methods is more suitable for distinguishing affected from unaffected joints and for recognizing a possible predisposition to elbow dysplasia (ED). X-rays of the elbow joints in the medio-la...

  18. A scalable method for computing quadruplet wave-wave interactions

    Science.gov (United States)

    Van Vledder, Gerbrant

    2017-04-01

    Non-linear four-wave interactions are a key physical process in the evolution of wind-generated ocean waves. The present generation of operational wave models uses the Discrete Interaction Approximation (DIA), but its accuracy is poor. It is now generally acknowledged that the DIA should be replaced with a more accurate method to improve predicted spectral shapes and derived parameters. The search for such a method is challenging, as one must find a balance between accuracy and computational requirements. Such a method is presented here in the form of a scalable and adaptive method that can mimic both the time-consuming exact Snl4 approach and the fast but inaccurate DIA, and everything in between. The method provides an elegant approach to improving the DIA, not by including more arbitrarily shaped wave number configurations, but by a mathematically consistent reduction of an exact method, viz. the WRT method. The adaptivity consists in adapting the abscissa of the locus integrand in relation to the magnitude of the known terms. It is extended to the highest level of the WRT method to select interacting wavenumber configurations in a hierarchical way according to their importance. This adaptivity results in a speed-up of one to three orders of magnitude, depending on the required measure of accuracy. This measure of accuracy should not be expressed in terms of the quality of the transfer integral for academic spectra, but rather in terms of wave model performance in a dynamic run. This has consequences for the balance between the required accuracy and the computational workload for evaluating these interactions. The performance of the scalable method on different scales is illustrated with results ranging from academic spectra and simple growth curves to more complicated field cases using a 3G-wave model.

  19. Computational methods for the innovative design of electrical devices

    Energy Technology Data Exchange (ETDEWEB)

    Wiak, Slawomir [Technical Univ. Lodz (Poland). Inst. of Mechatronics and Information; Napieralska-Juszczak, Ewa (eds.) [Univ. d' Artois, Bethune (FR). Lab. Systemes Electrotechniques et Environnement (LSEE)

    2010-07-01

    Computational Methods for the Innovative Design of Electrical Devices is entirely focused on the optimal design of various classes of electrical devices. Emerging new methods, like e.g. those based on genetic algorithms, are presented and applied in the design optimization of different devices and systems. Accordingly, the solution to field analysis problems is based on the use of finite element method, and analytical methods as well. An original aspect of the book is the broad spectrum of applications in the area of electrical engineering, especially electrical machines. This way, traditional design criteria of conventional devices are revisited in a critical way, and some innovative solutions are suggested. In particular, the optimization procedures developed are oriented to three main aspects: shape design, material properties identification, machine optimal behaviour. Topics covered include: - New parallel finite-element solvers - Response surface method - Evolutionary computing - Multiobjective optimization - Swarm intelligence - MEMS applications - Identification of magnetic properties of anisotropic laminations - Neural networks for non-destructive testing - Brushless DC motors, transformers - Permanent magnet disc motors, magnetic separators - Magnetic levitation systems (orig.)

  20. Approximation method to compute domain related integrals in structural studies

    Science.gov (United States)

    Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.

    2015-11-01

    Various engineering calculi use integral calculus in theoretical models, i.e. analytical and numerical models. For usual problems, integrals have exact mathematical solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which there are direct calculus relations; e.g., in strength of materials the bending moment may be computed at some discrete points using graphical integration of the shear force diagram, which usually has a simple shape. Another example is in mathematics, where the area of a subgraph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of the work is to introduce our studies about the calculus of integrals over transverse section domains, computer-aided solutions and a generalizing method. The aim of our research is to create general computer-based methods to execute the calculi in structural studies. Thus, we define a Boolean algebra which operates with ‘simple'-shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of every ‘simple' shape (-1 for the shapes to be subtracted). By ‘simple' or ‘basic' shape we mean either shapes for which there are direct calculus relations, or domains whose frontiers are approximated by known functions, for which the corresponding calculus is carried out using an algorithm. The ‘basic' shapes are linked to the calculus of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, the libraries of ‘basic' shapes include rectangles, ellipses and domains whose frontiers are approximated by spline functions. The domain triangularization methods suggested that another ‘basic' shape to be considered is the triangle. The subsequent phase was to deduce the exact relations for the
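
    The signed ‘simple'-shape algebra described above can be illustrated with a minimal sketch: each shape carries a +1 or -1 sign, and areas and first moments are accumulated accordingly. The dimensions below are hypothetical, and the shape set is far smaller than the libraries described in the paper.

```python
from dataclasses import dataclass
import math

@dataclass
class Shape:
    area: float
    yc: float    # centroid height of the shape
    sign: int    # +1 to add, -1 to subtract (e.g. a hole)

def composite_area_and_centroid(shapes):
    """Accumulate area and first moment over signed 'simple' shapes."""
    A = sum(s.sign * s.area for s in shapes)
    Q = sum(s.sign * s.area * s.yc for s in shapes)   # first moment of area
    return A, Q / A

# A 100 x 60 rectangle with a radius-20 circular hole centred at y = 40
rect = Shape(area=100 * 60, yc=30.0, sign=+1)
hole = Shape(area=math.pi * 20 ** 2, yc=40.0, sign=-1)
A, yc = composite_area_and_centroid([rect, hole])
print(f"net area = {A:.1f}, centroid height = {yc:.2f}")
```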

  1. COMSAC: Computational Methods for Stability and Control. Part 2

    Science.gov (United States)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    The unprecedented advances being made in computational fluid dynamic (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.

  2. Computational methods of the Advanced Fluid Dynamics Model

    Energy Technology Data Exchange (ETDEWEB)

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.; Berthier, J.; Maudlin, P.J.; Schmuck, P.; Goutagny, L.; Ichikawa, S.; Ninokata, H.; Luck, L.B.

    1987-01-01

    To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development.

  3. Computational Methods for Multi-dimensional Neutron Diffusion Problems

    Energy Technology Data Exchange (ETDEWEB)

    Song Han

    2009-10-15

    Lead-cooled fast reactor (LFR) has potential for becoming one of the advanced reactor types in the future. Innovative computational tools for system design and safety analysis on such NPP systems are needed. One of the most popular trends is coupling multi-dimensional neutron kinetics (NK) with thermal-hydraulic (T-H) to enhance the capability of simulation of the NPP systems under abnormal conditions or during rare severe accidents. Therefore, various numerical methods applied in the NK module should be reevaluated to adapt the scheme of coupled code system. In the author's present work a neutronic module for the solution of two dimensional steady-state multigroup diffusion problems in nuclear reactor cores is developed. The module can produce both direct fluxes as well as adjoints, i.e. neutron importances. Different numerical schemes are employed. A standard finite-difference (FD) approach is firstly implemented, mainly to serve as a reference for less computationally challenging schemes, such as transverse-integrated nodal methods (TINM) and boundary element methods (BEM), which are considered in the second part of the work. The validation of the methods proposed is carried out by comparisons of the results for some reference structures. In particular a critical problem for a homogeneous reactor for which an analytical solution exists is considered as a benchmark. The computational module is then applied to a fast spectrum system, having physical characteristics similar to the proposed European Lead-cooled System (ELSY) project. The results show the effectiveness of the numerical techniques presented. The flexibility and the possibility to obtain neutron importances allow the use of the module for parametric studies, design assessments and integral parameter evaluations, as well as for future sensitivity and perturbation analyses and as a shape solver for time-dependent procedures

  4. An analytical method for computing atomic contact areas in biomolecules.

    Science.gov (United States)

    Mach, Paul; Koehl, Patrice

    2013-01-15

    We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. Copyright © 2012 Wiley Periodicals, Inc.
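
    The dual-complex contact definition above requires a weighted Delaunay triangulation, which is beyond a short sketch. For contrast, a naive distance-based criterion (two atoms are in contact when their spheres overlap) takes a few lines with scipy. This is explicitly a simpler substitute, not the paper's alpha-shape method, and the coordinates and radii are random placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def sphere_overlap_contacts(centers, radii, probe=0.0):
    """Naive contact detection: atoms i, j are 'in contact' when their
    spheres, optionally inflated by a probe radius, overlap. A simple
    distance criterion, not the dual-complex definition of the paper."""
    tree = cKDTree(centers)
    cutoff = 2.0 * (radii.max() + probe)   # no pair farther apart can touch
    contacts = []
    for i, j in tree.query_pairs(r=cutoff):
        if np.linalg.norm(centers[i] - centers[j]) < radii[i] + radii[j] + 2 * probe:
            contacts.append((i, j))
    return contacts

rng = np.random.default_rng(3)
centers = rng.uniform(0.0, 10.0, size=(50, 3))
radii = rng.uniform(1.2, 1.9, size=50)
print(len(sphere_overlap_contacts(centers, radii)), "contact pairs")
```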

  5. Computation of multi-material interactions using point method

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Duan Z [Los Alamos National Laboratory; Ma, Xia [Los Alamos National Laboratory; Giguere, Paul T [Los Alamos National Laboratory

    2009-01-01

    Calculations of fluid flows are often based on an Eulerian description, while calculations of solid deformations are often based on a Lagrangian description of the material. When Eulerian descriptions are applied to problems of solid deformation, the state variables, such as stress and damage, need to be advected, causing significant numerical diffusion error. When Lagrangian methods are applied to problems involving large solid deformations or fluid flows, mesh distortion and entanglement are significant sources of error, and often lead to failure of the calculation. There are significant difficulties for either method when applied to problems involving large deformation of solids. To address these difficulties, the particle-in-cell (PIC) method was introduced in the 1960s. In this method, Eulerian meshes stay fixed and Lagrangian particles move through them during the material deformation. Since its introduction, many improvements to the method have been made. The work of Sulsky et al. (1995, Comput. Phys. Commun. v. 87, pp. 236) provides a mathematical foundation for an improved version, the material point method (MPM), of the PIC method. The unique advantages of the MPM method have led to many attempts to apply the method to problems involving interaction of different materials, such as fluid-structure interactions. These problems are multiphase flow or multimaterial deformation problems. In these problems pressures, material densities and volume fractions are determined by satisfying the continuity constraint. However, due to the difference in the approximations between the material point method and the Eulerian method, erroneous results for pressure will be obtained if the same scheme used in Eulerian methods for multiphase flows is used to calculate the pressure. To resolve this issue, we introduce a numerical scheme that satisfies the continuity requirement to higher order of accuracy in the sense of weak solutions for the continuity equations

  6. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases, the introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  7. Automated computational aberration correction method for broadband interferometric imaging techniques.

    Science.gov (United States)

    Pande, Paritosh; Liu, Yuan-Zhi; South, Fredrick A; Boppart, Stephen A

    2016-07-15

    Numerical correction of optical aberrations provides an inexpensive and simpler alternative to the traditionally used hardware-based adaptive optics techniques. In this Letter, we present an automated computational aberration correction method for broadband interferometric imaging techniques. In the proposed method, the process of aberration correction is modeled as a filtering operation on the aberrant image using a phase filter in the Fourier domain. The phase filter is expressed as a linear combination of Zernike polynomials with unknown coefficients, which are estimated through an iterative optimization scheme based on maximizing an image sharpness metric. The method is validated on both simulated data and experimental data obtained from a tissue phantom, an ex vivo tissue sample, and an in vivo photoreceptor layer of the human retina.
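
    The optimization loop described above can be shown in miniature. The sketch below is a simplified toy, not the authors' implementation: it uses a single quadratic (defocus-like) phase term instead of a full Zernike expansion, applies it as a Fourier-domain filter, and recovers the correcting coefficient by maximizing a common image sharpness metric over a coarse grid.

```python
import numpy as np

def apply_pupil_phase(img, coeff):
    """Filter an image with a quadratic (defocus-like) phase in the
    Fourier domain; a stand-in for one Zernike term."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    phase = coeff * (fx ** 2 + fy ** 2)
    return np.fft.ifft2(np.fft.fft2(img) * np.exp(1j * phase))

def sharpness(img):
    inten = np.abs(img) ** 2
    return np.sum(inten ** 2)   # sharper images concentrate intensity

# Toy "aberrated" image: random point sources blurred by a known phase
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[rng.integers(0, 64, 40), rng.integers(0, 64, 40)] = 1.0
aberrated = apply_pupil_phase(truth, coeff=500.0)

# Grid search for the correcting coefficient (should land near -500)
coeffs = np.linspace(-1000.0, 0.0, 201)
best = max(coeffs, key=lambda c: sharpness(apply_pupil_phase(aberrated, c)))
print("recovered correction coefficient:", best)
```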

  8. Computer-aided methods of determining thyristor thermal transients

    Energy Technology Data Exchange (ETDEWEB)

    Lu, E.; Bronner, G.

    1988-08-01

    An accurate tracing of the thyristor thermal response is investigated. This paper offers several alternatives for thermal modeling and analysis by using an electrical circuit analog: topological method, convolution integral method, etc. These methods are adaptable to numerical solutions and well suited to the use of the digital computer. The thermal analysis of thyristors was performed for the 1000 MVA converter system at the Princeton Plasma Physics Laboratory. Transient thermal impedance curves for individual thyristors in a given cooling arrangement were known from measurements and from manufacturer's data. The analysis pertains to almost any loading case, and the results are obtained in a numerical or a graphical format. 6 refs., 9 figs.
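
    The convolution-integral alternative mentioned in the abstract can be sketched compactly: if the transient thermal impedance Zth(t) is fitted by a Foster network, the junction temperature rise follows from convolving the loss power with the impulse response dZth/dt. The R and tau values below are placeholders, not data from the paper or any manufacturer.

```python
import numpy as np

# Foster-network fit of the transient thermal impedance Zth(t);
# the R/tau pairs are assumed placeholder values.
R = np.array([0.010, 0.035, 0.060])   # K/W
tau = np.array([0.01, 0.1, 1.0])      # s

dt = 1e-3
t = np.arange(0.0, 5.0, dt)

# Impulse response h(t) = dZth/dt = sum_i (R_i / tau_i) * exp(-t / tau_i)
h = np.sum(R[:, None] / tau[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

# Junction temperature rise for an arbitrary loss profile P(t), by
# discrete convolution (Duhamel's integral)
P = np.where(t < 2.0, 150.0, 0.0)     # 150 W pulse for 2 s, then off
dT = np.convolve(P, h)[: t.size] * dt
print(f"peak junction temperature rise: {dT.max():.2f} K")
```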

  9. A modified Henyey method for computing radiative transfer hydrodynamics

    Science.gov (United States)

    Karp, A. H.

    1975-01-01

    The implicit hydrodynamic code of Kutter and Sparks (1972), which is limited to optically thick regions and employs the diffusion approximation for radiative transfer, is modified to include radiative transfer effects in the optically thin regions of a model star. A modified Henyey method is used to include the solution of the radiative transfer equation in this implicit code, and the convergence properties of this method are proven. A comparison is made between two hydrodynamic models of a classical Cepheid with a 12-day period, one of which was computed with the diffusion approximation and the other with the modified Henyey method. It is found that the two models produce nearly identical light and velocity curves, but differ in the fact that the former never has temperature inversions in the atmosphere while the latter does when sufficiently strong shocks are present.

  10. The Piecewise Cubic Method (PCM) for computational fluid dynamics

    Science.gov (United States)

    Lee, Dongwook; Faller, Hugues; Reyes, Adam

    2017-07-01

    We present a new high-order finite volume reconstruction method for hyperbolic conservation laws. The method is based on a piecewise cubic polynomial which provides its solutions a fifth-order accuracy in space. The spatially reconstructed solutions are evolved in time with a fourth-order accuracy by tracing the characteristics of the cubic polynomials. As a result, our temporal update scheme provides a significantly simpler and computationally more efficient approach in achieving fourth order accuracy in time, relative to the comparable fourth-order Runge-Kutta method. We demonstrate that the solutions of PCM converges at fifth-order in solving 1D smooth flows described by hyperbolic conservation laws. We test the new scheme on a range of numerical experiments, including both gas dynamics and magnetohydrodynamics applications in multiple spatial dimensions.

  11. The Piecewise Cubic Method (PCM) for Computational Fluid Dynamics

    CERN Document Server

    Lee, Dongwook; Reyes, Adam

    2016-01-01

    We present a new high-order finite volume reconstruction method for hyperbolic conservation laws. The method is based on a piecewise cubic polynomial which provides its solutions a fifth-order accuracy in space. The spatially reconstructed solutions are evolved in time with a fourth-order accuracy by tracing the characteristics of the cubic polynomials. As a result, our temporal update scheme provides a significantly simpler and computationally more efficient approach in achieving fourth order accuracy in time, relative to the comparable fourth-order Runge-Kutta method. We demonstrate that the solutions of PCM converge at fifth order in solving 1D smooth flows described by hyperbolic conservation laws. We test the new scheme on a range of numerical experiments, including both gas dynamics and magnetohydrodynamics applications in multiple spatial dimensions.

  12. A computationally efficient method for hand-eye calibration.

    Science.gov (United States)

    Zhang, Zhiqiang; Zhang, Lin; Yang, Guang-Zhong

    2017-07-19

    Surgical robots with cooperative control and semiautonomous features have shown increasing clinical potential, particularly for repetitive tasks under imaging and vision guidance. Effective performance of an autonomous task requires accurate hand-eye calibration so that the transformation between the robot coordinate frame and the camera coordinates is well defined. In practice, due to changes in surgical instruments, online hand-eye calibration must be performed regularly. In order to ensure seamless execution of the surgical procedure without affecting the normal surgical workflow, it is important to derive fast and efficient hand-eye calibration methods. We present a computationally efficient iterative method for hand-eye calibration. In this method, a dual quaternion is introduced to represent the rigid transformation, and a two-step iterative method is proposed to recover the real and dual parts of the dual quaternion simultaneously, and thus the rotation and translation of the transformation. The proposed method was applied to determine the rigid transformation between the stereo laparoscope and the robot manipulator. Promising experimental and simulation results show a significant improvement in convergence speed, to 3 iterations from more than 30 with a standard optimization method, which illustrates the effectiveness and efficiency of the proposed method.

  13. Review methods for image segmentation from computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik [Faculty of Science Computer and Mathematics, Universiti Teknologi Mara Malaysia, 40450 Shah Alam Selangor (Malaysia); Mahmud, Rozi [Faculty of Medicine and Health Sciences, Universiti Putra Malaysia 43400 Serdang Selangor (Malaysia)

    2014-12-04

    Image segmentation is a challenging process when aiming for accuracy, automation and robustness, especially in medical images. Many segmentation methods exist that can be applied to medical images, but not all methods are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths and the problems incurred with them are defined and explained. It is necessary to know the suitable segmentation method in order to get accurate segmentation. This paper can be a guide for researchers in choosing the suitable segmentation method, especially for segmenting images from CT scans.
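
    As a concrete instance of the intensity-based techniques such a review covers, the sketch below applies Otsu's threshold, which picks the cut that minimizes intra-class intensity variance, to a synthetic noisy slice. No real CT data is assumed.

```python
import numpy as np
from skimage.filters import gaussian, threshold_otsu

# Synthetic stand-in for a noisy CT slice: a bright elliptical "organ"
# on a darker background
yy, xx = np.mgrid[0:256, 0:256]
slice_ = 1.0 * (((yy - 128) / 60.0) ** 2 + ((xx - 128) / 90.0) ** 2 < 1.0)
slice_ = gaussian(slice_, sigma=2)
slice_ += np.random.default_rng(0).normal(0.0, 0.15, slice_.shape)

# Otsu's method: a classical global-threshold segmentation
th = threshold_otsu(slice_)
mask = slice_ > th
print(f"threshold = {th:.3f}, segmented fraction = {mask.mean():.2%}")
```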

  14. Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Wei

    2014-05-15

    Understanding electromagnetic phenomena is the key in many scientific investigation and engineering designs such as solar cell designs, studying biological ion channels for diseases, and creating clean fusion energies, among other things. The objectives of the project are to develop high order numerical methods to simulate evanescent electromagnetic waves occurring in plasmon solar cells and biological ion-channels, where local field enhancement within random media in the former and long range electrostatic interactions in the latter are of major challenges for accurate and efficient numerical computations. We have accomplished these objectives by developing high order numerical methods for solving Maxwell equations such as high order finite element basis for discontinuous Galerkin methods, well-conditioned Nedelec edge element method, divergence free finite element basis for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmon solar cells. On the other hand, to treat long range electrostatic interaction in ion channels, we have developed image charge based method for a hybrid model in combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model will speed up the molecular dynamics simulation of transport in biological ion-channels.

  15. A Lattice-Boltzmann Method for Partially Saturated Computational Cells

    Science.gov (United States)

    Noble, D. R.; Torczynski, J. R.

    The lattice-Boltzmann (LB) method is applied to complex, moving geometries in which computational cells are partially filled with fluid. The LB algorithm is modified to include a term that depends on the percentage of the cell saturated with fluid. The method is useful for modeling suspended obstacles that do not conform to the grid. Another application is to simulations of flow through reconstructed media that are not easily segmented into solid and liquid regions. A detailed comparison is made with FIDAP simulation results for the flow about a periodic line of cylinders in a channel at a non-zero Reynolds number. Two cases are examined. In the first simulation, the cylinders are given a constant velocity along the axis of the channel, and the steady solution is acquired. The transient behavior of the system is then studied by giving the cylinders an oscillatory velocity. For both steady and oscillatory flows, the method provides excellent agreement with FIDAP simulation results, even at locations close to the surface of a cylinder. In contrast to step-like solutions produced using the "bounce-back" condition, the proposed condition gives close agreement with the smooth FIDAP predictions. Computed drag forces with the proposed condition exhibit apparent quadratic convergence with grid refinement rather than the linear convergence exhibited by other LB boundary conditions.

  16. Computational methods for long mean free path problems

    Science.gov (United States)

    Christlieb, Andrew Jason

    This document describes work being done on particle transport in long mean free path environments. Two non-statistical computational models are developed based on the method of propagators, which can have significant advantages in accuracy and efficiency over other methods. The first model has been developed primarily for charged particle transport and the second primarily for neutral particle transport. Both models are intended for application to transport in complex geometry using irregular meshes. The transport model for charged particles was inspired by the notion of obtaining a simulation that could handle complex geometry and resolve the bulk and sheath characteristics of a discharge, in a reasonable amount of computation time. The charged particle transport model has been applied in a self-consistent manner to the ion motion in a low density inductively coupled discharge. The electrons were assumed to have a Boltzmann density distribution for the computation of the electric field. This work assumes cylindrical geometry and focuses on charge exchange collisions as the primary ion collisional effect that takes place in the discharge. The results are compared to fluid simulations. The neutral transport model was constructed to solve the steady state Boltzmann equation on 3-D arbitrary irregular meshes. The neutral transport model was developed with the intent of investigating gas flow on the scale of micro-electro-mechanical systems (MEMS), and is meant for tracking multiple species. The advantage of these methods is that the step size is determined by the mean free path of the particles rather than the mesh employed in the simulation.

  17. Hugh E. Huxley: cambiando el paradigma de la contracción muscular, desde dentro. [Hugh E. Huxley: changing from inside the paradigm of muscle contraction].

    OpenAIRE

    Adolfo Araci

    2014-01-01

    From a physical point of view the muscle fiber can be understood as a motor, that is, a system capable of transforming chemical energy into mechanical energy, which is used to perform work. Therefore, to understand how this transformation process occurs it is necessary to know the ultrastructure of the muscle fiber. This is, without doubt, the main contribution to the scientific record of the recently deceased Hugh Esmor Huxley (1924-2013). Huxley graduated in Physics at Chr...

  19. Method and apparatus for managing transactions with connected computers

    Science.gov (United States)

    Goldsmith, Steven Y.; Phillips, Laurence R.; Spires, Shannon V.

    2003-01-01

    The present invention provides a method and apparatus that make use of existing computer and communication resources and that reduce the errors and delays common to complex transactions such as international shipping. The present invention comprises an agent-based collaborative work environment that assists geographically distributed commercial and government users in the management of complex transactions such as the transshipment of goods across the U.S.-Mexico border. Software agents can mediate the creation, validation and secure sharing of shipment information and regulatory documentation over the Internet, using the World-Wide Web to interface with human users.

  20. Numerical methods of computation of singular and hypersingular integrals

    Directory of Open Access Journals (Sweden)

    I. V. Boikov

    2001-01-01

    In many areas of science and technology one is faced with the necessity of calculating various singular integrals. Calculation of singular integrals in analytical form is possible only in exceptional cases. Therefore, approximate methods for the calculation of singular integrals are an actively developing direction of computational mathematics. This review is devoted to algorithms, optimal with respect to accuracy, for the calculation of singular integrals with fixed singularity, with Cauchy and Hilbert kernels, and of polysingular and many-dimensional singular integrals. A separate section is devoted to accuracy-optimal algorithms for the calculation of hypersingular integrals.
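
    For a flavor of what approximate calculation of singular integrals involves, the sketch below evaluates a Cauchy principal-value integral by the standard singularity-subtraction trick. This is a textbook device, not one of the accuracy-optimal algorithms surveyed in the review.

```python
import numpy as np
from scipy.integrate import quad

def cauchy_pv(f, x, a=-1.0, b=1.0):
    """Principal-value integral of f(t)/(t - x) over (a, b), a < x < b.
    Subtracting the singularity leaves the smooth integrand
    (f(t) - f(x))/(t - x); the removed part integrates in closed form
    to f(x) * log((b - x)/(x - a))."""
    fx = f(x)
    reg = lambda t: (f(t) - fx) / (t - x) if t != x else 0.0
    val, _ = quad(reg, a, b, points=[x])
    return val + fx * np.log((b - x) / (x - a))

# Check against the known value PV int_{-1}^{1} dt/(t - x) = log((1-x)/(1+x))
x = 0.3
print(cauchy_pv(lambda t: 1.0, x), np.log((1 - x) / (1 + x)))
```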

  1. Computer Aided Flowsheet Design using Group Contribution Methods

    DEFF Research Database (Denmark)

    Bommareddy, Susilpa; Eden, Mario R.; Gani, Rafiqul

    In this paper, a systematic group contribution based framework is presented for synthesis of process flowsheets from a given set of input and output specifications. Analogous to the group contribution methods developed for molecular design, the framework employs process groups to represent...... information of each flowsheet to minimize the computational load and information storage. The design variables for the selected flowsheet(s) are identified through a reverse simulation approach and are used as initial estimates for rigorous simulation to verify the feasibility and performance of the design....

  2. Applications of Computational Methods for Dynamic Stability and Control Derivatives

    Science.gov (United States)

    Green, Lawrence L.; Spence, Angela M.

    2004-01-01

    Initial steps in the application of a low-order panel method computational fluid dynamic (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and they include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions about some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.

  3. Computational methods for studying G protein-coupled receptors (GPCRs).

    Science.gov (United States)

    Kaczor, Agnieszka A; Rutkowska, Ewelina; Bartuzi, Damian; Targowska-Duda, Katarzyna M; Matosiuk, Dariusz; Selent, Jana

    2016-01-01

    The functioning of GPCRs is classically described by the ternary complex model as the interplay of three basic components: a receptor, an agonist, and a G protein. According to this model, receptor activation results from an interaction with an agonist, which translates into the activation of a particular G protein in the intracellular compartment that, in turn, is able to initiate particular signaling cascades. Extensive studies on GPCRs have led to new findings which open unexplored and exciting possibilities for drug design and safer and more effective treatments with GPCR targeting drugs. These include discovery of novel signaling mechanisms such as ligand promiscuity resulting in multitarget ligands and signaling cross-talks, allosteric modulation, biased agonism, and formation of receptor homo- and heterodimers and oligomers which can be efficiently studied with computational methods. Computer-aided drug design techniques can reduce the cost of drug development by up to 50%. In particular structure- and ligand-based virtual screening techniques are a valuable tool for identifying new leads and have been shown to be especially efficient for GPCRs in comparison to water-soluble proteins. Modern computer-aided approaches can be helpful for the discovery of compounds with designed affinity profiles. Furthermore, homology modeling facilitated by a growing number of available templates as well as molecular docking supported by sophisticated techniques of molecular dynamics and quantitative structure-activity relationship models are an excellent source of information about drug-receptor interactions at the molecular level.

  4. Multigrid Methods for the Computation of Propagators in Gauge Fields

    Science.gov (United States)

    Kalkreuter, Thomas

    Multigrid methods were invented for the solution of discretized partial differential equations in order to overcome the slowness of traditional algorithms by updates on various length scales. In the present work, generalizations of multigrid methods for propagators in gauge fields are investigated. Gauge fields are incorporated in the algorithms in a covariant way. The kernel C of the restriction operator which averages from one grid to the next coarser grid is defined by projection on the ground-state of a local Hamiltonian. The idea behind this definition is that the appropriate notion of smoothness depends on the dynamics. The ground-state projection choice of C can be used in arbitrary dimension and for arbitrary gauge group. We discuss proper averaging operations for bosons and for staggered fermions. The kernels C can also be used in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies. Actual numerical computations are performed in four-dimensional SU(2) gauge fields. We prove that our proposals for block spins are “good”, using renormalization group arguments. A central result is that the multigrid method works in arbitrarily disordered gauge fields, in principle. It is proved that computations of propagators in gauge fields without critical slowing down are possible when one uses an ideal interpolation kernel. Unfortunately, the idealized algorithm is not practical, but it was important to answer questions of principle. Practical methods are able to outperform the conjugate gradient algorithm in the case of bosons. The case of staggered fermions is harder. Multigrid methods give considerable speed-ups compared to conventional relaxation algorithms, but on lattices up to 18^4 conjugate gradient is superior.

  5. Computational analysis of methods for reduction of induced drag

    Science.gov (United States)

    Janus, J. M.; Chatterjee, Animesh; Cave, Chris

    1993-01-01

    The purpose of this effort was to perform a computational flow analysis of a design concept centered around induced drag reduction and tip-vortex energy recovery. The flow model solves the unsteady three-dimensional Euler equations, discretized as a finite-volume method, utilizing a high-resolution approximate Riemann solver for cell interface flux definitions. The numerical scheme is an approximately-factored block LU implicit Newton iterative-refinement method. Multiblock domain decomposition is used to partition the field into an ordered arrangement of blocks. Three configurations are analyzed: a baseline fuselage-wing, a fuselage-wing-nacelle, and a fuselage-wing-nacelle-propfan. Aerodynamic force coefficients, propfan performance coefficients, and flowfield maps are used to qualitatively assess design efficacy. Where appropriate, comparisons are made with available experimental data.

  6. Computer vision analysis of image motion by variational methods

    CERN Document Server

    Mitiche, Amar

    2014-01-01

    This book presents a unified view of image motion analysis under the variational framework. Variational methods, rooted in physics and mechanics, but appearing in many other domains, such as statistics, control, and computer vision, address a problem from an optimization standpoint, i.e., they formulate it as the optimization of an objective function or functional. The methods of image motion analysis described in this book use the calculus of variations to minimize (or maximize) an objective functional which transcribes all of the constraints that characterize the desired motion variables. The book addresses the four core subjects of motion analysis: Motion estimation, detection, tracking, and three-dimensional interpretation. Each topic is covered in a dedicated chapter. The presentation is prefaced by an introductory chapter which discusses the purpose of motion analysis. Further, a chapter is included which gives the basic tools and formulae related to curvature, Euler Lagrange equations, unconstrained de...

  7. Convex Optimization methods for computing the Lyapunov Exponent of matrices

    CERN Document Server

    Protasov, Vladimir Yu

    2012-01-01

    We introduce a new approach to evaluating the largest Lyapunov exponent of a family of nonnegative matrices. The method is based on using special positive homogeneous functionals on $R^{d}_+$, which give iterative lower and upper bounds for the Lyapunov exponent. They improve previously known bounds and converge to the real value. The rate of convergence is estimated and the efficiency of the algorithm is demonstrated on several problems from applications (in functional analysis, combinatorics, and language theory) and on numerical examples with randomly generated matrices. The method computes the Lyapunov exponent with a prescribed accuracy in relatively high dimensions (up to 60). We generalize this approach to all matrices, not necessarily nonnegative, derive a new universal upper bound for the Lyapunov exponent, and show that such a lower bound, in general, does not exist.
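
    The homogeneous-functional bounds of the paper are not reproduced here, but the quantity they bound is easy to illustrate: the sketch below estimates the largest Lyapunov exponent of random products of a matrix family by plain Monte Carlo with renormalization, the standard baseline such bounds are compared against.

```python
import numpy as np

def lyapunov_exponent_mc(mats, n_steps=20000, seed=0):
    """Monte Carlo estimate of the largest Lyapunov exponent of random
    products, lambda = lim (1/n) E[log ||A_n ... A_1 v||], with factors
    drawn uniformly and i.i.d. from the family."""
    rng = np.random.default_rng(seed)
    v = np.ones(mats[0].shape[0])
    acc = 0.0
    for _ in range(n_steps):
        v = mats[rng.integers(len(mats))] @ v
        norm = np.linalg.norm(v)
        acc += np.log(norm)
        v /= norm   # renormalize to avoid overflow
    return acc / n_steps

# Two random nonnegative 5 x 5 matrices as a toy family
rng = np.random.default_rng(1)
family = [rng.uniform(0.0, 1.0, (5, 5)) for _ in range(2)]
print("estimated Lyapunov exponent:", round(lyapunov_exponent_mc(family), 4))
```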

  8. Computational aeroacoustics applications based on a discontinuous Galerkin method

    Science.gov (United States)

    Delorme, Philippe; Mazet, Pierre; Peyret, Christophe; Ventribout, Yoan

    2005-09-01

    CAA simulation requires the calculation of the propagation of acoustic waves with low numerical dissipation and dispersion error, while taking into account complex geometries. To answer both challenges at the same time, a Discontinuous Galerkin Method is developed for Computational AeroAcoustics. The linearized Euler equations are solved with the Discontinuous Galerkin Method using flux splitting techniques. Boundary conditions are established for rigid walls, non-reflective boundaries and imposed values. A first validation, for in-duct propagation, is realized. Applications then illustrate: the Chu and Kovasznay decomposition of perturbations inside a uniform flow in terms of independent acoustic and rotational modes, the Kelvin-Helmholtz instability, and acoustic diffraction by an aircraft wing. To cite this article: Ph. Delorme et al., C. R. Mecanique 333 (2005).

  9. Data graphing methods, articles of manufacture, and computing devices

    Science.gov (United States)

    Wong, Pak Chung; Mackey, Patrick S.; Cook, Kristin A.; Foote, Harlan P.; Whiting, Mark A.

    2016-12-13

    Data graphing methods, articles of manufacture, and computing devices are described. In one aspect, a method includes accessing a data set, displaying a graphical representation including data of the data set which is arranged according to a first of different hierarchical levels, wherein the first hierarchical level represents the data at a first of a plurality of different resolutions which respectively correspond to respective ones of the hierarchical levels, selecting a portion of the graphical representation wherein the data of the portion is arranged according to the first hierarchical level at the first resolution, modifying the graphical representation by arranging the data of the portion according to a second of the hierarchal levels at a second of the resolutions, and after the modifying, displaying the graphical representation wherein the data of the portion is arranged according to the second hierarchal level at the second resolution.

  10. Improved Classification Methods for Brain Computer Interface System

    Directory of Open Access Journals (Sweden)

    YI Fang

    2012-03-01

    Full Text Available Brain computer interface (BCI) aims at providing a new communication channel that bypasses the brain's normal output pathways of nerves and muscles. Electroencephalography (EEG) is widely used for BCI systems because it is a non-invasive approach. For EEG signals of left- and right-hand motor imagery, event-related desynchronization (ERD) and event-related synchronization (ERS) are used as classification features in this paper. The raw data are transformed by nonlinear methods and classified by a Fisher classifier. Compared with linear methods, the classification accuracy increases markedly, to 86.25%. Two different nonlinear transforms were devised, one of which takes the correlation between two EEG channels into account. With these nonlinear transforms, the performance is also stable, with a balance between the two types of misclassification.
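
    A minimal sketch of the classification stage, assuming synthetic band-power features in place of real ERD/ERS measurements (the paper's specific nonlinear transforms are not reproduced here, so a log plus cross-channel-difference transform stands in):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Synthetic stand-ins for ERD/ERS band power on two motor-cortex channels:
# left- vs right-hand imagery suppresses power on opposite hemispheres.
n = 200
left = rng.normal(loc=[1.0, 2.0], scale=0.6, size=(n, 2))
right = rng.normal(loc=[2.0, 1.0], scale=0.6, size=(n, 2))
X = np.vstack([left, right])
y = np.array([0] * n + [1] * n)

# Nonlinear transform of the raw features; the cross-channel difference
# term mimics the paper's use of the relation between two EEG channels.
X_nl = np.column_stack([np.log(np.abs(X) + 1e-9), X[:, 0] - X[:, 1]])

clf = LinearDiscriminantAnalysis().fit(X_nl, y)   # Fisher-type classifier
print("training accuracy:", clf.score(X_nl, y))
```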

  11. Computational method for discovery of estrogen responsive genes

    DEFF Research Database (Denmark)

    Tang, Suisheng; Tan, Sin Lam; Ramadoss, Suresh Kumar;

    2004-01-01

    Estrogen has a profound impact on human physiology and affects numerous genes. The classical estrogen reaction is mediated by its receptors (ERs), which bind to the estrogen response elements (EREs) in the target gene's promoter region. Due to tedious and expensive experiments, a limited number of human genes are functionally well characterized. It is still unclear how many and which human genes respond to estrogen treatment. We propose a simple, economic, yet effective computational method to predict a subclass of estrogen responsive genes. Our method relies on the similarity of ERE frames across different promoters in the human genome. Matching ERE frames of a test set of 60 known estrogen responsive genes to the collection of over 18,000 human promoters, we obtained 604 candidate genes. Evaluating our result by comparison with the published microarray data and literature, we found...

  12. Matrix element method for high performance computing platforms

    Science.gov (United States)

    Grasseau, G.; Chamont, D.; Beaudette, F.; Bianchini, L.; Davignon, O.; Mastrolorenzo, L.; Ochando, C.; Paganini, P.; Strebler, T.

    2015-12-01

    Many efforts have been devoted by the ATLAS and CMS teams to improving the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations have attempted to cope with the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation feasible for entire CMS datasets at a moderate cost. In the article, we describe the status of two software projects under development, one focused on physics and one focused on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators and MIC co-processors. This lets us extrapolate that, with the help of 6 high-end accelerators, we should be able to reprocess the whole LHC Run 1 within 10 days, and that we have a satisfactory metric for the upcoming Run 2. Future work will consist in finalizing a single merged system including all the physics and all the parallelism infrastructure, thus optimizing the implementation for the best hardware platforms.
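
    The computational pattern is embarrassingly parallel: one phase-space integral per event. A hedged sketch with a toy integrand, using Python process-level parallelism as a stand-in for the MPI+OpenCL combination described above:

```python
import numpy as np
from multiprocessing import Pool

def mem_weight(event_seed):
    """Toy stand-in for one MEM weight: a Monte Carlo estimate of the
    phase-space integral P(event | hypothesis) = integral |M|^2 dPhi."""
    rng = np.random.default_rng(event_seed)
    phi = rng.random((100_000, 4))                    # toy phase-space points
    integrand = np.exp(-8.0 * ((phi - 0.5) ** 2).sum(axis=1))
    return integrand.mean()

if __name__ == "__main__":
    with Pool() as pool:                              # one task per event
        weights = pool.map(mem_weight, range(64))
    print("first weights:", np.round(weights[:4], 5))
```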

  13. Implicit extrapolation methods for multilevel finite element computations

    Energy Technology Data Exchange (ETDEWEB)

    Jung, M.; Ruede, U. [Technische Universitaet Chemnitz-Zwickau (Germany)

    1994-12-31

    The finite element package FEMGP has been developed to solve elliptic and parabolic problems arising in the computation of magnetic and thermomechanical fields. FEMGP implements various methods for the construction of hierarchical finite element meshes, a variety of efficient multilevel solvers, including multigrid and preconditioned conjugate gradient iterations, as well as pre- and post-processing software. Within FEMGP, multigrid {tau}-extrapolation can be employed to improve the finite element solution iteratively to higher order. This algorithm is based on an implicit extrapolation, so that the algorithm differs from a regular multigrid algorithm only by a slightly modified computation of the residuals on the finest mesh. Another advantage of this technique is that, in contrast to explicit extrapolation methods, it does not rely on the existence of global error expansions, and therefore requires neither uniform meshes nor global regularity assumptions. In the paper the authors analyse the {tau}-extrapolation algorithm and present experimental results in the context of the FEMGP package. Furthermore, the {tau}-extrapolation results are compared to higher-order finite element solutions.

  14. Computer Anxiety and Students' Preferred Feedback Methods in EFL Writing

    Science.gov (United States)

    Matsumura, Shoichi; Hann, George

    2004-01-01

    Computer-mediated instruction plays a significant role in foreign language education. The incorporation of computer technology into the classroom has also been accompanied by an increasing number of students who experience anxiety when interacting with computers. This study examined the effects of computer anxiety on students' choice of feedback…

  15. The fusion of the imagination and the material universe: Hugh Hood, Flying a red kite (1962

    Directory of Open Access Journals (Sweden)

    Aleksander Kustec

    1999-12-01

    Full Text Available In the first weeks of January 1957 Hugh (John Blagdon) Hood moved to Hartford, Connecticut, U.S.A., where he became Professor at Saint Joseph College. At the time Hood did not know that he was going to become one of Canada's greatest stylists and contemporary short story writers. Between January 1957 and March 1962, when he was making the final selection of short stories for his first book, Flying a Red Kite (FRK), Hugh Hood wrote thirty-eight short stories and two novels (God Rest You Merry and Hungry Generations). The numbers show that Hood was an extremely productive writer in that period. From the thirty-eight stories he chose eleven for FRK; fourteen were published in subsequent collections, in various journals and short story anthologies, while thirteen have not been published yet.

  16. The 2016 Hughes Lecture: What's new in maternal morbidity and mortality?

    Science.gov (United States)

    Arendt, K W

    2016-05-01

    Each year, the Board of Directors of the Society for Obstetric Anesthesia and Perinatology selects an individual to review a given year's published obstetric anesthesiology literature. This individual then produces a syllabus of the year's most influential publications, delivers the Ostheimer Lecture at the Society's annual meeting, the Hughes Lecture at the following year's Sol Shnider meeting, and writes corresponding review articles. This 2016 Hughes Lecture review article focuses specifically on the 2014 publications that relate to maternal morbidity and mortality. It begins by discussing the 2014 research that was published on severe maternal morbidity and maternal mortality in developed countries. This is followed by a discussion of specific coexisting diseases and specific causes of severe maternal mortality. The review ends with a discussion of worldwide maternal mortality and the 2014 publications that examined the successes and the shortfalls in the work to make childbirth safe for women throughout the entire world.

  17. Two-frequency self-oscillations in a FitzHugh-Nagumo neural network

    Science.gov (United States)

    Glyzin, S. D.; Kolesov, A. Yu.; Rozov, N. Kh.

    2017-01-01

    A new mathematical model of a one-dimensional array of FitzHugh-Nagumo neurons with resistive-inductive coupling between neighboring elements is proposed. The model relies on a chain of diffusively coupled three-dimensional systems of ordinary differential equations. It is shown that any finite number of coexisting stable invariant two-dimensional tori can be obtained in this chain by suitably increasing the number of its elements.
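
    A minimal numerical sketch of a diffusively coupled FitzHugh-Nagumo chain (the classical two-variable form; the paper's resistive-inductive model adds a third equation per node, and all parameter values here are illustrative only):

```python
import numpy as np

N, steps, dt = 50, 40_000, 0.01
a, b, eps, D, I = 0.7, 0.8, 0.08, 0.5, 0.5   # I puts nodes in the oscillatory regime
v = np.random.default_rng(2).normal(0.0, 0.1, N)   # activator
w = np.zeros(N)                                    # recovery variable

for _ in range(steps):                             # explicit Euler integration
    lap = np.roll(v, 1) - 2 * v + np.roll(v, -1)   # ring (periodic) coupling
    dv = v - v ** 3 / 3 - w + I + D * lap
    dw = eps * (v + a - b * w)
    v, w = v + dt * dv, w + dt * dw

print("activator range:", v.min(), v.max())
```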

  18. Selected results from LLNL-Hughes RAR for West Coast Scotland Experiment 1991

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S K; Johnston, B; Twogood, R; Wieting, M; Yorkey, T; Robey, H [Lawrence Livermore National Lab., CA (United States); Whelan, D; Nagele, R [Hughes Aircraft Co., Los Angeles, CA (United States)

    1993-01-05

    The joint US-UK 1991 West Coast Scotland Experiment (WCSEX) was held in two locations: from July 5 to 12, 1991, in Upper Loch Linnhe, and from July 18 to 26, 1991, in the Sound of Sleat. The LLNL-Hughes team fielded a fully polarimetric X-band hillside real aperture radar to collect internal wave wake data. We present here a sample data set from the best radar runs.

  19. Symmetric bursting behaviors in the generalized FitzHugh-Nagumo model.

    Science.gov (United States)

    Abbasian, A H; Fallah, H; Razvan, M R

    2013-08-01

    In the current paper, we have investigated the generalized FitzHugh-Nagumo model. We have shown that symmetric bursting behaviors of different types could be observed in this model with an appropriate recovery term. A modified version of this system is used to construct bursting activities. Furthermore, we have shown some numerical examples of delayed Hopf bifurcation and canard phenomenon in the symmetric bursting of super-Hopf/homoclinic type near its super-Hopf and homoclinic bifurcations, respectively.

  20. Initial-Boundary Value Problems for the Coupled Fitz-Hugh-Nagumo Model

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    We first study the asymptotic behaviour of the initial value problem for the coupled Fitz-Hugh-Nagumo model. Next, the basic behaviour of solutions of the corresponding equilibrium equations is discussed. Finally, the existence of a maximal attractor for classical solutions of the initial-boundary value problem of this model is proved for the first time.

  1. The Hughes Court and Radical Dissent: The Case of Dirk De Jonge and Angelo Herndon

    OpenAIRE

    Tushnet, Mark V.

    2012-01-01

    Scattered Supreme Court decisions in the early twentieth century dealt with the Constitution’s protection of freedom of speech. Radical dissent over United States participation in World War I and the nation’s intervention against the Bolshevik revolution in Russia led the Court to its first sustained engagement with free speech cases. By the time Chief Justice Hughes took the center chair, the national government largely had abandoned its pursuit of radical dissenters, some of whom played la...

  2. Percutaneous transhepatic venous embolization of pulmonary artery aneurysm in Hughes-Stovin syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyung Ah; Kim, Man Deuk; Oh, Do Yun; Park, Pil Won [Bundang CHA General Hospital, Pochon CHA University, Seongnam (Korea, Republic of)

    2007-08-15

    Hughes-Stovin syndrome is an extremely rare entity. We present the case of a 42-year-old man who developed deep vein and inferior vena cava (IVC) thrombosis, repeated internal bleeding and pulmonary artery aneurysms (PAAs). The patient presented with massive hemoptysis and PAAs of up to 2.5 cm in maximum diameter. We describe the successful percutaneous transhepatic venous embolization of the PAAs, undertaken because the common vascular pathways to the pulmonary artery were occluded.

  3. Innovative methods of teaching computer science on the basis of "dynamic method" communicative dialogue

    Directory of Open Access Journals (Sweden)

    Janna Ivanova

    2013-05-01

    Full Text Available The article considers ways of improving the efficiency of knowledge transfer in the traditional form of learning by using an interactive, discussion-based method of communication, with a view to better mastery of the material in computer science classes.

  4. A política de Hugh Dalton e o Bloqueio Económico (1940-1942)

    OpenAIRE

    Mariana Castro

    2015-01-01

    World War II was a stage of conflict and protagonists. Hugh Dalton was Britain's Minister of Economic Warfare between 1940 and 1942. He is noted for the economic blockade imposed on belligerent and neutral countries, proposing to control smuggling by prohibiting exports to and imports from the German enemy, in other words to 'dry it out internally and externally' in order to weaken it. This essay focuses on Hugh Dalton's policy, leaving a few clues and suggestions for further study ...

  5. Atomistic Method Applied to Computational Modeling of Surface Alloys

    Science.gov (United States)

    Bozzolo, Guillermo H.; Abel, Phillip B.

    2000-01-01

    The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier both for experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys and when, and even if, they form are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally. The computational tool, which is based on

  6. An elastic mechanics model and computation method for geotechnical material

    Institute of Scientific and Technical Information of China (English)

    Zheng Yingren; Gao Hong; Zheng Lushi

    2010-01-01

    Internal friction is one of the basic properties of geotechnical materials and is always present in their mechanical elements. However, until now internal friction has been considered only in limit analysis and plastic mechanics, not in the elastic theory of rocks and soils. We consider that internal friction exists in both the elastic and the plastic state of geotechnical materials, and on this basis the mechanical unit of a friction material is constituted. Based on soil test results, the paper also proposes that cohesion takes effect first and that internal friction develops gradually as deformation increases. By assuming that the friction coefficient is proportional to the strain, the internal friction is computed. Finally, by analogy with linear elastic mechanics, a nonlinear elastic mechanics model of friction material is established, in which the shear modulus G is not a constant. The new model and the traditional elastic model are used side by side to analyze an elastic foundation. The results indicate that the displacements computed by the new model are smaller than those from the traditional method, which agrees with observation and shows that the mechanical units of friction material are suitable for geotechnical materials.

  7. Search systems and computer-implemented search methods

    Energy Technology Data Exchange (ETDEWEB)

    Payne, Deborah A.; Burtner, Edwin R.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2017-03-07

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  8. Software Defects, Scientific Computation and the Scientific Method

    CERN Document Server

    CERN. Geneva

    2011-01-01

    Computation has rapidly grown in the last 50 years so that in many scientific areas it is the dominant partner in the practice of science. Unfortunately, unlike the experimental sciences, it does not adhere well to the principles of the scientific method as espoused by, for example, the philosopher Karl Popper. Such principles are built around the notions of deniability and reproducibility. Although much research effort has been spent on measuring the density of software defects, much less has been spent on the more difficult problem of measuring their effect on the output of a program. This talk explores these issues with numerous examples suggesting how this situation might be improved to match the demands of modern science. Finally it develops a theoretical model based on an amalgam of statistical mechanics and Hartley/Shannon information theory which suggests that software systems have strong implementation independent behaviour and supports the widely observed phenomenon that defects clust...

  9. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

    ...user-friendly system, which will make the model development process easier and faster and provide the way for unified and consistent model documentation. The modeller can use the template for their specific problem or to extend and/or adopt a model. This is based on the idea of model reuse, which emphasizes the use... and processes can be faster, cheaper and very efficient. The developed modelling framework involves five main elements: 1) a modelling tool, that includes algorithms for model generation; 2) a template library, which provides building blocks for the templates (generic models previously developed); 3) computer-aided methods and tools, that include procedures to perform model translation, model analysis, model verification/validation, model solution and model documentation; 4) model transfer – export/import to/from other application for further extension and application – several types of formats, such as XML...

  10. Bayesian Analysis of Multiple Populations I: Statistical and Computational Methods

    CERN Document Server

    Stenning, D C; Robinson, E; van Dyk, D A; von Hippel, T; Sarajedini, A; Stein, N

    2016-01-01

    We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations (van Dyk et al. 2009; Stein et al. 2013). Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties---age, metallicity, helium abundance, distance, absorption, and initial mass---are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and al...

  11. Search systems and computer-implemented search methods

    Energy Technology Data Exchange (ETDEWEB)

    Payne, Deborah A.; Burtner, Edwin R.; Bohn, Shawn J.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2015-12-22

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  12. Computational alanine scanning with linear scaling semiempirical quantum mechanical methods.

    Science.gov (United States)

    Diller, David J; Humblet, Christine; Zhang, Xiaohua; Westerhoff, Lance M

    2010-08-01

    Alanine scanning is a powerful experimental tool for understanding the key interactions in protein-protein interfaces. Linear scaling semiempirical quantum mechanical calculations are now sufficiently fast and robust to allow meaningful calculations on large systems such as proteins, RNA and DNA. In particular, they have proven useful in understanding protein-ligand interactions. Here we ask the question: can these linear scaling quantum mechanical methods developed for protein-ligand scoring be useful for computational alanine scanning? To answer this question, we assembled 15 protein-protein complexes with available crystal structures and sufficient alanine scanning data. In all, the data set contains ΔΔG values for 400 single-point alanine mutations of these 15 complexes. We show that with only one adjusted parameter the quantum mechanics-based methods outperform both buried accessible surface area and a potential of mean force and compare favorably to a variety of published empirical methods. Finally, we closely examined the outliers in the data set and discuss some of the challenges that arise from this examination.

  13. Matching wind turbine rotors and loads: Computational methods for designers

    Science.gov (United States)

    Seale, J. B.

    1983-04-01

    A comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications is reported. A procedure converts the data into useful results: (1) from turbine efficiency and load torque characteristics, turbine power is predicted as a function of windspeed; (2) it is decided how turbine power is to be governed to ensure the safety of all components; (3) mechanical conversion efficiency comes into play to predict how useful delivered power varies with windspeed; (4) wind statistics are used to predict long-term energy output. Most systems can be approximated by a graph-and-calculator approach. The method leads to energy predictions, and to insight into the modeled processes. A computer program provides more sophisticated calculations where a highly unusual system is to be modeled, where accuracy is at a premium, or where error analysis is required. The analysis is fleshed out with in-depth case studies for induction generator and inverter utility systems; battery chargers; resistance heaters; positive displacement pumps, including three different load compensation strategies; and centrifugal pumps with unregulated electric power transmission from turbine to pump.
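
    Steps (1) and (4) can be sketched compactly: fold an assumed power curve against Weibull wind statistics to obtain expected power and annual energy. All figures below are invented for illustration:

```python
import numpy as np

rho, area, cp = 1.225, 80.0, 0.40              # air density, rotor area, efficiency
rated = 20_000.0                               # rated power limit [W] (governing)

def power(v):
    p = 0.5 * rho * area * cp * v ** 3         # aerodynamic power [W]
    p[(v < 3.0) | (v > 25.0)] = 0.0            # cut-in / cut-out windspeeds
    return np.minimum(p, rated)

k, c = 2.0, 7.0                                # Weibull shape and scale [m/s]
v = np.linspace(0.0, 30.0, 3001)
pdf = (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))

dv = v[1] - v[0]
mean_power = np.sum(power(v) * pdf) * dv       # E[P] = integral of P(v) f(v) dv
print(f"expected power {mean_power / 1e3:.1f} kW,"
      f" annual energy {mean_power * 8760 / 1e6:.1f} MWh")
```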

  14. Computational methods for the detection of cis-regulatory modules.

    Science.gov (United States)

    Van Loo, Peter; Marynen, Peter

    2009-09-01

    Metazoan transcription regulation occurs through the concerted action of multiple transcription factors that bind co-operatively to cis-regulatory modules (CRMs). The annotation of these key regulators of transcription is lagging far behind the annotation of the transcriptome itself. Here, we give an overview of existing computational methods to detect these CRMs in metazoan genomes. We subdivide these methods into three classes: CRM scanners screen sequences for CRMs based on predefined models that often consist of multiple position weight matrices (PWMs). CRM builders construct models of similar CRMs controlling a set of co-regulated or co-expressed genes. CRM genome screeners screen sequences or complete genomes for CRMs as homotypic or heterotypic clusters of binding sites for any combination of transcription factors. We believe that CRM scanners are currently the most advanced methods, although their applicability is limited. Finally, we argue that CRM builders that make use of PWM libraries will benefit greatly from future advances and will prove to be most instrumental for the annotation of regulatory regions in metazoan genomes.
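
    The elementary operation shared by all three classes of methods is scoring a sequence window against a position weight matrix (PWM). A minimal sketch with an invented 4-bp motif:

```python
import numpy as np

# Toy motif "AGCT": each row gives A,C,G,T frequencies at one position.
freq = np.array([[0.7, 0.1, 0.1, 0.1],
                 [0.1, 0.1, 0.7, 0.1],
                 [0.1, 0.7, 0.1, 0.1],
                 [0.1, 0.1, 0.1, 0.7]])
pwm = np.log2(freq / 0.25)            # log-odds against a uniform background

idx = {"A": 0, "C": 1, "G": 2, "T": 3}
seq = "TTAGCTAGCTAAGCTTAGCT"
w = pwm.shape[0]
scores = [sum(pwm[j, idx[seq[i + j]]] for j in range(w))
          for i in range(len(seq) - w + 1)]
best = int(np.argmax(scores))
print(f"best hit {seq[best:best + w]} at offset {best},"
      f" score {scores[best]:.2f}")
```

    CRM scanners extend this idea by combining several PWMs and requiring the hits to cluster within a window, as in the homotypic and heterotypic clusters mentioned above.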

  15. Comparison of four computational methods for computing Q factors and resonance wavelengths in photonic crystal membrane cavities

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn; Burger, Sven

    2016-01-01

    We benchmark four state-of-the-art computational methods by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Special att...

  16. A comparison of computational methods for identifying virulence factors.

    Directory of Open Access Journals (Sweden)

    Lu-Lu Zheng

    Full Text Available Bacterial pathogens continue to threaten public health worldwide today. Identification of bacterial virulence factors can help to find novel drug/vaccine targets against pathogenicity. It can also help to reveal the mechanisms of the related diseases at the molecular level. With the explosive growth in protein sequences generated in the postgenomic age, it is highly desirable to develop computational methods for rapidly and effectively identifying virulence factors according to their sequence information alone. In this study, based on the protein-protein interaction networks from the STRING database, a novel network-based method was proposed for identifying the virulence factors in the proteomes of UPEC 536, UPEC CFT073, P. aeruginosa PAO1, L. pneumophila Philadelphia 1, C. jejuni NCTC 11168 and M. tuberculosis H37Rv. Evaluated on the same benchmark datasets derived from the aforementioned species, the identification accuracies achieved by the network-based method were around 0.9, significantly higher than those of sequence-based methods such as BLAST, feature selection and VirulentPred. Further analysis showed that functional associations such as gene neighborhood and co-occurrence were the primary associations between these virulence factors in the STRING database. The high success rates indicate that the network-based method is quite promising. The approach holds high potential for identifying virulence factors in many other organisms as well, because it can be easily extended to other bacterial species as long as the relevant statistical data are available.

  17. Computational methods in decision-making, economics and finance

    CERN Document Server

    Rustem, Berc; Siokos, Stavros

    2002-01-01

    Computing has become essential for the modeling, analysis, and optimization of systems. This book is devoted to algorithms, computational analysis, and decision models. The chapters are organized in two parts: optimization models of decisions, and models of pricing and equilibria.

  18. A computational method for dislocation-precipitate interaction

    Science.gov (United States)

    Takahashi, Akiyuki; Ghoniem, Nasr M.

    A new computational method for the elastic interaction between dislocations and precipitates is developed and applied to the solution of problems involving dislocation cutting and looping around precipitates. Based on the superposition principle, the solution to the dislocation-precipitate interaction problem is obtained as the sum of two solutions: (1) a dislocation problem with image stresses from interfaces between the dislocation and the precipitate, and (2) a correction solution for the elastic problem of a precipitate with an initial strain distribution. The current development is based on a combination of the parametric dislocation dynamics (PDD) and the boundary element method (BEM) with volume integrals. The method allows us to calculate the stress field both inside and outside precipitates of elastic moduli different from the matrix, and that may have initial coherency strain fields. The numerical results of the present method show good convergence and high accuracy when compared to a known analytical solution, and they are also in good agreement with molecular dynamics (MD) simulations. Sheared copper precipitates (2.5 nm in diameter) are shown to lose some of their resistance to dislocation motion after they are cut by leading dislocations in a pileup. Successive cutting of precipitates by the passage of a dislocation pileup reduces the resistance to about half its original value, when the number of dislocations in the pileup exceeds about 10. The transition from the shearable precipitate regime to the Orowan looping regime occurs for precipitate-to-matrix elastic modulus ratios above approximately 3-4, with some dependence on the precipitate size. The effects of precipitate size, spacing, and elastic modulus mismatch with the host matrix on the critical shear stress (CSS) to dislocation motion are presented.

  19. Non-unitary probabilistic quantum computing circuit and method

    Science.gov (United States)

    Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

    2009-01-01

    A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.

  20. Global Seabed Materials and Habitats Mapped: The Computational Methods

    Science.gov (United States)

    Jenkins, C. J.

    2016-02-01

    What the seabed is made of has proven difficult to map on the scale of whole ocean basins. Direct sampling and observation can be augmented with proxy-parameter methods such as acoustics. Both avenues are essential to obtain enough detail and coverage, and also to validate the mapping methods. We focus on the direct observations such as samplings, photo and video, probes, diver and sub reports, and surveyed features. These are often in word-descriptive form: over 85% of the records for site materials are in this form, whether as sample/view descriptions or classifications, or described parameters such as consolidation, color, odor, structures and components. Descriptions are absolutely necessary for unusual materials and for processes - in other words, for research. The dbSEABED project not only holds the largest collection of seafloor materials data worldwide, but it also uses advanced computational mathematics to obtain the best possible coverage and detail. Among these techniques are linguistic text analysis (e.g., Natural Language Processing, NLP), fuzzy set theory (FST), and machine learning (ML, e.g., Random Forest). These techniques allow efficient and accurate import of huge datasets, thereby making the most of the data that exist. They merge quantitative and qualitative types of data for rich parameter sets, and extrapolate where the data are sparse for best map production. The dbSEABED data resources are now very widely used worldwide in oceanographic research, environmental management, the geosciences, engineering and surveying.

  1. Semi-coarsening multigrid methods for parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    Jones, J.E.

    1996-12-31

    Standard multigrid methods are not well suited for problems with anisotropic coefficients which can occur, for example, on grids that are stretched to resolve a boundary layer. There are several different modifications of the standard multigrid algorithm that yield efficient methods for anisotropic problems. In the paper, we investigate the parallel performance of these multigrid algorithms. Multigrid algorithms which work well for anisotropic problems are based on line relaxation and/or semi-coarsening. In semi-coarsening multigrid algorithms a grid is coarsened in only one of the coordinate directions unlike standard or full-coarsening multigrid algorithms where a grid is coarsened in each of the coordinate directions. When both semi-coarsening and line relaxation are used, the resulting multigrid algorithm is robust and automatic in that it requires no knowledge of the nature of the anisotropy. This is the basic multigrid algorithm whose parallel performance we investigate in the paper. The algorithm is currently being implemented on an IBM SP2 and its performance is being analyzed. In addition to looking at the parallel performance of the basic semi-coarsening algorithm, we present algorithmic modifications with potentially better parallel efficiency. One modification reduces the amount of computational work done in relaxation at the expense of using multiple coarse grids. This modification is also being implemented with the aim of comparing its performance to that of the basic semi-coarsening algorithm.

  2. Comparison of different additive manufacturing methods using computed tomography

    Directory of Open Access Journals (Sweden)

    Paras Shah

    2016-11-01

    Full Text Available Additive manufacturing (AM) allows for fast fabrication of three-dimensional objects with the use of considerably fewer resources, less energy consumption and a shorter supply chain than would be the case in traditional manufacturing. AM has gained significance as a cost-effective method able to produce components with a previously unachievable level of geometric complexity, both in prototyping and in end-user industrial applications such as the aerospace, automotive and medical industries. However, these processes currently lack reproducibility and repeatability, with some 'prints' having a high probability of requiring rework or even scrapping due to out-of-specification dimensions or high porosity levels leading to failure under structural stresses. It is therefore imperative that robust quality systems be implemented so that the waste level of these processes can be significantly decreased. This study presents an artefact that is optimised for the characterisation of form using computed tomography (CT), with representative geometric dimensioning and tolerancing features and internal channels and structures comparable to cooling channels in heat exchangers. Furthermore, the optimisation of the CT acquisition conditions for this artefact is presented in light of feature dimensions and form analysis. This paper investigates the accuracy and capability of CT measurements compared with reference measurements from a coordinate measuring machine (CMM), and also focuses on the evaluation of different AM methods.

  3. Particular application of methods of AdaBoost and LBP to the problems of computer vision

    OpenAIRE

    Волошин, Микола Володимирович

    2012-01-01

    The application of the AdaBoost method and the local binary pattern (LBP) method to different areas of computer vision, such as person identification and computer iridology, is considered in the article. The goal of the research is to develop error-correcting methods and systems for computer vision applications, and for computer iridology in particular. The article also considers the problem of colour spaces, which are used as a filter and as a pre-processing step for images. Method of AdaB...
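
    A minimal sketch of the basic 3x3 local binary pattern operator referenced above (my own illustration, not necessarily the article's exact variant):

```python
import numpy as np

def lbp_8(img):
    """Threshold the 8 neighbours of each interior pixel against the
    centre pixel and pack the comparison bits into an 8-bit code."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

img = np.random.default_rng(3).integers(0, 256, (8, 8), dtype=np.uint8)
print(lbp_8(img))
```

    Histograms of these codes over image regions are the texture features typically fed to a boosted classifier such as AdaBoost.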

  4. Justification of computational methods to ensure information management systems

    Directory of Open Access Journals (Sweden)

    E. D. Chertov

    2016-01-01

    Full Text Available Summary. Due to the diversity and complexity of the organizational management tasks of a large enterprise, the construction of an information management system requires the creation of interconnected complexes of means that implement, in the most efficient way, the collection, transfer, accumulation and processing of the information needed by decision-makers of different ranks in the governance process. The main trends in the construction of integrated logistics management information systems can be considered to be: the creation of integrated data processing systems by centralizing the storage and processing of data arrays; the organization of computer systems with time-sharing; the aggregate-block principle of integrated logistics; and the use of a wide range of peripheral devices with unified information and hardware interfaces. The main attention is paid to systematic research of the complex of technical support, in particular to the definition of quality criteria for the operation of the technical complex, the development of methods for analysing the information base of management information systems, the definition of requirements for technical means, and methods for the structural synthesis of the major subsystems of integrated logistics. Thus, the aim is to study, on the basis of a systematic approach, the integrated logistics management information system and to develop a number of methods for the analysis and synthesis of logistics complexes that are suitable for use in the practice of engineering systems design. The objective function of the complex logistics management information system is to gather, transmit and process specified amounts of information in the regulated time intervals with the required degree of accuracy, while minimizing the reduced costs of establishing and operating the technical complex. Achieving this objective function requires a particular organization of the interaction of information flows.

  5. Method for fast computation of angular light scattering spectra from 2D periodic arrays

    CERN Document Server

    Pomplun, J; Zschiedrich, L; Gutsche, P; Schmidt, F

    2016-01-01

    An efficient numerical method for computing angle-resolved light scattering off periodic arrays is presented. The method combines finite-element discretization with a Schur complement solver. A significant speed-up of the computations in comparison to standard finite-element method computations is observed.

  6. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Science.gov (United States)

    2010-04-01

    Section 1.167(b)-0 of 26 CFR: Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of...
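
    For illustration only (figures invented, not drawn from the regulation), two commonly used consistent methods side by side:

```python
# Straight-line vs. 200%-declining-balance depreciation of one asset.
cost, salvage, years = 10_000.0, 1_000.0, 5

straight_line = [(cost - salvage) / years] * years   # equal annual charges

rate = 2.0 / years                                   # double-declining rate
book, declining = cost, []
for _ in range(years):
    charge = min(book * rate, book - salvage)        # never depreciate below salvage
    declining.append(charge)
    book -= charge

print(straight_line)                     # [1800.0, 1800.0, 1800.0, 1800.0, 1800.0]
print([round(c, 2) for c in declining])  # [4000.0, 2400.0, 1440.0, 864.0, 296.0]
```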

  7. Cell method a purely algebraic computational method in physics and engineering

    CERN Document Server

    Ferretti, Elena

    2014-01-01

    The Cell Method (CM) is a computational tool that maintains critical multidimensional attributes of physical phenomena in analysis. This information is neglected in the differential formulations of the classical approaches of finite element, boundary element, finite volume, and finite difference analysis, often leading to numerical instabilities and spurious results. This book highlights the central theoretical concepts of the CM that preserve a more accurate and precise representation of the geometric and topological features of variables for practical problem solving. Important applications occur in

  8. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for forming representations of modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that current trends towards strengthening the general-education and worldview functions of computer science call for additional research on the…

  9. Recent advances in computational structural reliability analysis methods

    Science.gov (United States)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonate vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  10. THE METHOD OF DESIGNING ASSISTED ON COMPUTER OF THE

    Directory of Open Access Journals (Sweden)

    LUCA Cornelia

    2015-05-01

    Full Text Available At the base of footwear sole design is the shoe last. Shoe lasts have irregular shapes, with various curves which cannot be represented by a simple mathematical function. In order to design footwear soles it is necessary to take some base contours from the shoe last. These contours are obtained with high precision in a 3D CAD system. The paper presents a computer-assisted method for designing soles for footwear. The shoe last is copied using a 3D digitizer: for digitizing, the spatial shape of the last is positioned on the data-gathering peripheral, which automatically follows the last's surface. The wire network obtained through digitizing is numerically interpolated with interpolating functions in order to obtain the spatial numerical shape of the last. The 3D design of the sole is carried out on the numerical shape of the last in the following steps: constructing the sole's surface, generating the lateral surface of the sole's shape, obtaining the linking surface between the lateral and plantar sides of the sole as well as the sole's margin, and designing the skid-proof area of the sole. The main advantages of the design method are its precision, the visualization of the sole in 3D space, and the possibility of taking the best decision regarding the acceptance of a new sole pattern.

  11. Computational Fluid Dynamics Methods and Their Applications in Medical Science

    OpenAIRE

    Kowalewski Wojciech; Roszak Magdalena; Kołodziejczak Barbara; Ren-Kurc Anna; Bręborowicz Andrzej

    2016-01-01

    As defined by the National Institutes of Health: “Biomedical engineering integrates physical, chemical, mathematical, and computational sciences and engineering principles to study biology, medicine, behavior, and health”. Many issues in this area are closely related to fluid dynamics. This paper provides an overview of the basic concepts concerning Computational Fluid Dynamics and its applications in medicine.

  12. Overview of Computer Simulation Modeling Approaches and Methods

    Science.gov (United States)

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  13. A "Queen of Hearts" trial of organ markets: why Scheper-Hughes's objections to markets in human organs fail.

    Science.gov (United States)

    Taylor, J S

    2007-04-01

    Nancy Scheper-Hughes is one of the most prominent critics of markets in human organs. Unfortunately, Scheper-Hughes rejects the view that markets should be used to solve the current (and chronic) shortage of transplant organs without engaging with the arguments in favour of them. Scheper-Hughes's rejection of such markets is of especial concern, given her influence over their future, for she holds, among other positions, the status of an adviser to the World Health Organization (Geneva) on issues related to global transplantation. Given her influence, it is important that Scheper-Hughes's moral condemnation of markets in human organs be subject to critical assessment. Such critical assessment, however, has not generally been forthcoming. A careful examination of Scheper-Hughes's anti-market stance shows that it is based on serious mischaracterisations of both the pro-market position and the medical and economic realities that underlie it. In this paper, the author will expose and correct these mischaracterisations and, in so doing, show that her objections to markets in human organs are unfounded.

  14. HARP model rotor test at the DNW. [Hughes Advanced Rotor Program

    Science.gov (United States)

    Dawson, Seth; Jordan, David; Smith, Charles; Ekins, James; Silverthorn, Lou

    1989-01-01

    Data from a test of a dynamically scaled model of the Hughes Advanced Rotor Program (HARP) bearingless model main rotor and 369K tail rotor are reported. The history of the HARP program and its goals are reviewed, and the main and tail rotor models are described. The test facilities and instrumentation are described, and wind tunnel test data are presented on hover, forward flight performance, and blade-vortex interaction. Performance data, acoustic data, and dynamic data from near field/far field and shear layer studies are presented.

  15. The spike timing precision of FitzHugh-Nagumo neuron network coupled by gap junctions

    Institute of Scientific and Technical Information of China (English)

    Zhang Su-Hua; Zhan Yong; Yu Hui; An Hai-Long; Zhao Tong-Jun

    2006-01-01

    It has been shown recently that spike timing can play an important role in information transmission, so in this paper we develop a network of N FitzHugh-Nagumo neurons coupled by gap junctions and discuss the dependence of the spike timing precision on the synaptic coupling strength, the noise intensity and the size of the neuron ensemble. The calculated results show that the spike timing precision decreases as the noise intensity increases, and that the ensemble spike timing precision increases with increasing coupling strength. Electrical synaptic coupling has a more important effect on the spike timing precision than chemical synaptic coupling.

  16. Synchronization in heterogeneous FitzHugh-Nagumo networks with hierarchical architecture

    Science.gov (United States)

    Plotnikov, S. A.; Lehnert, J.; Fradkov, A. L.; Schöll, E.

    2016-07-01

    We study synchronization in heterogeneous FitzHugh-Nagumo networks. It is well known that heterogeneities in the nodes hinder synchronization when becoming too large. Here we develop a controller to counteract the impact of these heterogeneities. We first analyze the stability of the equilibrium point in a ring network of heterogeneous nodes. We then derive a sufficient condition for synchronization in the absence of control. Based on these results we derive the controller providing synchronization for parameter values where synchronization without control is absent. We demonstrate our results in networks with different topologies. Particular attention is given to hierarchical (fractal) topologies, which are relevant for the architecture of the brain.

  17. Hub-enhanced noise-sustained synchronization of an externally forced FitzHugh-Nagumo ring

    Science.gov (United States)

    Sánchez, Alejandro D.; Izús, Gonzalo G.; dell'Erba, Matías G.; Deza, Roberto R.

    2017-02-01

    A ring of FitzHugh-Nagumo units with antiphase coupling between their activator fields, subjected to an adiabatic harmonic subthreshold signal, is in turn globally coupled in electrical mode with the activator field of a hub. Noise-sustained synchronization of neural activity with the signal is numerically observed and theoretically characterized. The different dynamical regimes are elucidated using the concept of nonequilibrium potential, and the hub is found to promote network synchronization. The minimum noise intensities triggering the activation and synchronization processes are estimated in the framework of a three-neuron model.

  18. Synchronization and associative memory of FitzHugh-Nagumo neuronal networks with randomly distributed time delays

    Energy Technology Data Exchange (ETDEWEB)

    Peng, J H; Wu, Y J [School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Yu, H J [Department of Mechanics, Shanghai Jiao Tong University, Shanghai 200240 (China)], E-mail: jhpeng@ecust.edu.cn

    2008-02-15

    Synchronization and associative memory in a neural network composed of the widely discussed FitzHugh-Nagumo neurons are investigated in this paper. Reflecting the microscopic biological structure of the neural system, the couplings among the neurons carry randomly distributed time delays, which model the times needed for pulses to propagate along the axons from the presynaptic to the postsynaptic neurons. The memory is represented in the spatiotemporal firing pattern of the neurons, and memory retrieval is accomplished through the fluctuations of the noise in the system.

  19. Asymptotic Behaviour of Spatially Discretized Fitz-Hugh-Nagumo Equations and Bistable Reaction-Diffusion Equations

    Institute of Scientific and Technical Information of China (English)

    黄建华; 路钢

    2000-01-01

    In this paper, using perturbation methods, we discuss the asymptotic behaviour of spatial discretizations of the Fitz-Hugh-Nagumo equations and of bistable reaction-diffusion equations with Neumann boundary conditions. The existence of invariant regions, absorbing sets and global attractors for the two lattice differential systems is proved, and an estimate of the Hausdorff dimension of the global attractor of the discrete Fitz-Hugh-Nagumo equations is given.

  20. Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture

    Science.gov (United States)

    Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA

    2011-10-11

    Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.

  1. Modelling of dusty plasma properties by computer simulation methods

    Energy Technology Data Exchange (ETDEWEB)

    Baimbetov, F B [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan); Ramazanov, T S [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan); Dzhumagulova, K N [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan); Kadyrsizov, E R [Institute for High Energy Densities of RAS, Izhorskaya 13/19, Moscow 125412 (Russian Federation); Petrov, O F [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan); Gavrikov, A V [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan)

    2006-04-28

    Computer simulation of dusty plasma properties is performed. The radial distribution functions, the diffusion coefficient are calculated on the basis of the Langevin dynamics. A comparison with the experimental data is made.

  2. Computational Fluid Dynamics. [numerical methods and algorithm development

    Science.gov (United States)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  3. Fast crustal deformation computing method for multiple computations accelerated by a graphics processing unit cluster

    Science.gov (United States)

    Yamaguchi, Takuma; Ichimura, Tsuyoshi; Yagi, Yuji; Agata, Ryoichiro; Hori, Takane; Hori, Muneo

    2017-08-01

    As high-resolution observational data become more common, the demand for numerical simulations of crustal deformation using 3-D high-fidelity modelling is increasing. To increase the efficiency of performing numerical simulations with high computation costs, we developed a fast solver using heterogeneous computing, with graphics processing units (GPUs) and central processing units, and then used the solver in crustal deformation computations. The solver was based on an iterative solver and was devised so that a large proportion of the computation was calculated more quickly using GPUs. To confirm the utility of the proposed solver, we demonstrated a numerical simulation of the coseismic slip distribution estimation, which requires 360 000 crustal deformation computations with 82 196 106 degrees of freedom.
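
    A hedged sketch of the kind of iterative solver involved (plain conjugate gradients on a small symmetric positive definite system; the production solver described above is far more elaborate and offloads the dominant kernels to GPUs):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                    # the dominant cost, worth GPU offload
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 200
M = np.random.default_rng(4).random((n, n))
A = M @ M.T + n * np.eye(n)           # SPD test matrix
b = np.ones(n)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```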

  4. Accuracy, resolution, and computational complexity of a discontinuous Galerkin finite element method

    NARCIS (Netherlands)

    Ven, van der H.; Vegt, van der J.J.W.; Cockburn, B.; Karniadakis, G.E.; Shu, C.-W.

    2000-01-01

    This series contains monographs of lecture notes type, lecture course material, and high-quality proceedings on topics described by the term "computational science and engineering". This includes theoretical aspects of scientific computing such as mathematical modeling, optimization methods, discret

  5. An alternative computational method for finding the minimum-premium insurance portfolio

    Science.gov (United States)

    Katsikis, Vasilios N.

    2016-06-01

    In this article, we design a computational method, which differs from the standard linear programming techniques, for computing the minimum-premium insurance portfolio. The corresponding algorithm as well as a Matlab implementation are provided.

  6. Pulse propagation and failure in the discrete FitzHugh-Nagumo model subject to high-frequency stimulation.

    Science.gov (United States)

    Ratas, Irmantas; Pyragas, Kestutis

    2012-10-01

    We investigate the effect of a homogeneous high-frequency stimulation (HFS) on a one-dimensional chain of coupled excitable elements governed by the FitzHugh-Nagumo equations. We eliminate the high-frequency term by the method of averaging and show that the averaged dynamics depends on the parameter A=a/ω equal to the ratio of the amplitude a to the frequency ω of the stimulating signal, so that for large frequencies an appreciable effect from the HFS is attained only at sufficiently large amplitudes. The averaged equations are analyzed by an asymptotic theory based on the different time scales of the recovery and excitable variables. As a result, we obtain the main characteristics of a propagating pulse as functions of the parameter A and derive an analytical criterion for the propagation failure. We show that depending on the parameter A, the HFS can either enhance or suppress pulse propagation and reveal the mechanism underlying these effects. The theoretical results are confirmed by numerical simulations of the original system with and without noise.
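
    A minimal sketch of the setting studied above: a chain of diffusively coupled FitzHugh-Nagumo elements driven by a homogeneous high-frequency term a*cos(wt), with a pulse launched at one end; all parameter values are illustrative, not the paper's.

```python
# Chain of coupled FitzHugh-Nagumo elements under high-frequency
# stimulation (HFS); checks whether a pulse crosses the chain.
import numpy as np

N, dt, T = 200, 0.005, 300.0
eps, b0, b1 = 0.08, 0.7, 0.8      # slow-variable parameters
D = 1.0                           # diffusive coupling strength
a, omega = 20.0, 50.0             # HFS amplitude and frequency (A = a/omega)

v = -1.2 * np.ones(N)
w = -0.62 * np.ones(N)
v[:5] = 2.0                       # kick the left end to launch a pulse

arrived = False
for k in range(int(T / dt)):
    lap = np.empty(N)
    lap[1:-1] = v[2:] - 2 * v[1:-1] + v[:-2]
    lap[0] = v[1] - v[0]          # no-flux boundaries
    lap[-1] = v[-2] - v[-1]
    hfs = a * np.cos(omega * k * dt)
    v += dt * (v - v**3 / 3 - w + D * lap + hfs)
    w += dt * eps * (v + b0 - b1 * w)
    if v[-1] > 1.0:
        arrived = True            # excitation reached the far end

print("pulse propagated across the chain:", arrived)
```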

  7. Reflections about Research in Computer Science regarding the Classification of Sciences and the Scientific Method

    OpenAIRE

    WAZLAWICK, R. S.

    2010-01-01

    This paper presents some observations about Computer Science and the Scientific Method. Initially, the paper discusses the different aspects of Computer Science regarding the classification of sciences. It is observed that different areas inside Computer Science can be classified as different Sciences. The paper presents the main philosophical schools that define what is understood as the Scientific Method, and their influence on Computer Science. Finally, the paper discusses the distinction ...

  8. Montage of a Queering Deferred: Memory, Ownership, and Archival Silencing in the Rhetorical Biography of Langston Hughes.

    Science.gov (United States)

    Summers, Ian

    2016-01-01

    This article explores the intersection of archival privilege and heteronormative bias in the queering of Langston Hughes. Although it has been a common belief in LGBTQ communities that Hughes was gay, the battle over how his sexuality is defined in various biographical texts involves broader issues of dominant representations of sexuality and who gets to speak for those no longer able to speak for themselves. As such, the article examines the texts Looking for Langston and The Life of Langston Hughes as well as the discourses that surrounded both. Through this case study, it is apparent that there are still numerous cultural challenges posed to historical queering and that scholars must take an inventive approach to overcome them.

  9. Improved Discontinuity-capturing Finite Element Techniques for Reaction Effects in Turbulence Computation

    Science.gov (United States)

    Corsini, A.; Rispoli, F.; Santoriello, A.; Tezduyar, T. E.

    2006-09-01

    Recent advances in turbulence modeling have brought more and more sophisticated turbulence closures (e.g., k-ε, k-ε-v²-f, second-moment closures), where the governing equations for the model parameters involve advection, diffusion and reaction terms. Numerical instabilities can be generated by the dominant advection or reaction terms. Classical stabilized formulations such as the Streamline-Upwind/Petrov-Galerkin (SUPG) formulation (Brooks and Hughes, Comput Methods Appl Mech Eng 32:199-255, 1982; Hughes and Tezduyar, Comput Methods Appl Mech Eng 45:217-284, 1984) are very well suited for preventing the numerical instabilities generated by the dominant advection terms. A different stabilization, however, is needed for instabilities due to the dominant reaction terms. An additional stabilization term, called the diffusion for reaction-dominated (DRD) term, was introduced by Tezduyar and Park (Comput Methods Appl Mech Eng 59:307-325, 1986) for that purpose and improves the SUPG performance. In recent years a new class of variational multiscale (VMS) stabilization (Hughes, Comput Methods Appl Mech Eng 127:387-401, 1995) has been introduced, and this approach, in principle, can deal with advection-diffusion-reaction equations. However, it was pointed out in Hanke (Comput Methods Appl Mech Eng 191:2925-2947) that this class of methods also needs some improvement in the presence of high reaction rates. In this work we show the benefits of using the DRD operator to enhance core stabilization techniques such as the SUPG and VMS formulations. We also propose a new operator called the DRDJ (DRD with the local variation jump) term, targeting the reduction of numerical oscillations in the presence of both high reaction rates and sharp solution gradients. The methods are evaluated in the context of two stabilized methods: the classical SUPG formulation and a recently-developed VMS formulation called the V-SGS (Corsini et al., Comput Methods Appl Mech Eng 194:4797-4823, 2005).
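
    For readers unfamiliar with SUPG, here is a compact sketch for the 1-D steady advection-diffusion model problem with linear finite elements and the standard coth-based stabilization parameter; it illustrates the core idea only, not the turbulence-model implementation evaluated in the paper.

```python
# 1-D steady advection-diffusion, -k u'' + a u' = 0 on (0,1) with
# u(0)=0, u(1)=1, discretized with linear elements plus SUPG.
import numpy as np

a, k, n = 1.0, 1e-3, 20          # advection, diffusion, elements
h = 1.0 / n
Pe = a * h / (2 * k)             # element Peclet number

# Optimal SUPG parameter for linear elements.
tau = (h / (2 * a)) * (1.0 / np.tanh(Pe) - 1.0 / Pe)

# For this model problem the SUPG term is equivalent to adding the
# artificial diffusion a^2 * tau to the physical diffusivity.
k_eff = k + a**2 * tau

A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
for e in range(n):               # assemble element matrices
    i, j = e, e + 1
    Ke = (k_eff / h) * np.array([[1.0, -1.0], [-1.0, 1.0]]) \
       + (a / 2.0) * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    A[np.ix_([i, j], [i, j])] += Ke

A[0, :], A[0, 0], b[0] = 0.0, 1.0, 0.0      # u(0) = 0
A[-1, :], A[-1, -1], b[-1] = 0.0, 1.0, 1.0  # u(1) = 1
u = np.linalg.solve(A, b)
print(u)   # monotone boundary layer, no Galerkin oscillations
```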

  10. Berta and Adelaide: the policy of consolidation of the royal power of Hugh of Arles

    Directory of Open Access Journals (Sweden)

    Giacomo Vignodelli

    2012-10-01

    Full Text Available Both twin dowers drawn up by Hugh of Provence, king of the Italic Kingdom, for his betrothal to Berta and for that of his son Lothair to Adelaide, have to be understood within the policy of strengthening royal power that the king pursued in the 930s: the dowers are in fact the crowning of that policy. The double dower constitution gave the king of Italy the opportunity to reserve for himself (and for the newly formed young royal couple) strong control of the centre of the Po valley around Pavia, and dismantled, to the benefit of the royal domain, the ducal power bases in Tuscia. The analysis of Hugh's work allows us to understand his policy towards the aristocracy and the mechanisms for the promotion of new families (Aleramici, Obertenghi, Canossa). The original dower documents were kept in the monastery of St. Salvatore of Pavia, founded by Adelaide; this location was due not to the monastery's endowment of goods but to the documents' importance as instruments of political legitimacy for the new Ottonian royal family.

  11. Frequency Effect of Harmonic Noise on the FitzHugh-Nagumo Neuron Model

    Institute of Scientific and Technical Information of China (English)

    宋艳丽

    2011-01-01

    Using harmonic noise, the frequency effect of noise on the FitzHugh-Nagumo neuron model is investigated. The results show that the neuron has a resonance characteristic and responds strongly to noise with a certain frequency at fixed power. Driven by noise with this frequency, the spike train is most regular and the coefficient of variation R has a minimum. Imperfect synchronization takes place, which, however, is optimal only for noise with an appropriate frequency. It is shown that there exists coherence resonance related to frequency.

  12. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-11

    Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.

  13. Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots

    Directory of Open Access Journals (Sweden)

    Ching-Long Shih

    2012-08-01

    Full Text Available This paper aims to demonstrate a clear relationship between Lagrange equations and Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method for using either symbolic or on-line numerical computations. Based on the decomposition approach and the cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that it can be used for both symbolic and on-line numeric computation purposes, and that it can also be applied to biped systems as well as some simple closed-chain robot systems.

  14. A fast computing method to distinguish the hyperbolic trajectory of an non-autonomous system

    Science.gov (United States)

    Jia, Meng; Fan, Yang-Yu; Tian, Wei-Jian

    2011-03-01

    Seeking a fast method for computing the distinguished hyperbolic trajectory (DHT), this study first proves that the errors of the stable DHT can be ignored in the normal direction when they are computed as the trajectories extend. This conclusion means that the perturbed stable flow approaches the real trajectory as it extends over time. Based on this theory and combined with the improved DHT computing method, this paper reports a new fast method for computing the DHT, which increases the computing speed without decreasing accuracy. Project supported by the National Natural Science Foundation of China (Grant No. 60872159).

  16. Progress Towards Computational Method for Circulation Control Airfoils

    Science.gov (United States)

    Swanson, R. C.; Rumsey, C. L.; Anders, S. G.

    2005-01-01

    The compressible Reynolds-averaged Navier-Stokes equations are solved for circulation control airfoil flows. Numerical solutions are computed with both structured and unstructured grid solvers. Several turbulence models are considered, including the Spalart-Allmaras model with and without curvature corrections, the shear stress transport model of Menter, and the k-enstrophy model. Circulation control flows with jet momentum coefficients of 0.03, 0.10, and 0.226 are considered. Comparisons are made between computed and experimental pressure distributions, velocity profiles, Reynolds stress profiles, and streamline patterns. Including curvature effects yields the closest agreement with the measured data.

  17. Method for simulating paint mixing on computer monitors

    Science.gov (United States)

    Carabott, Ferdinand; Lewis, Garth; Piehl, Simon

    2002-06-01

    Computer programs like Adobe Photoshop can generate a mixture of two 'computer' colors by using the Gradient control. However, the resulting colors diverge from the equivalent paint mixtures in both hue and value. This study examines why programs like Photoshop are unable to simulate paint or pigment mixtures, and offers a solution using Photoshop's existing tools. The article discusses how a library of colors simulating paint mixtures is created from 13 artists' colors. The mixtures can be imported into Photoshop as a color swatch palette of 1248 colors and as 78 continuous or stepped gradient files, all accessed in a new software package, Chromafile.
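
    As an illustration of the gap the authors describe, the sketch below contrasts a linear RGB blend (what a gradient control computes) with a crude multiplicative "subtractive" mix; the subtractive model is a deliberately simple stand-in and is not the Chromafile method.

```python
# Additive RGB interpolation versus a toy subtractive (paint-like) mix.
import numpy as np

def lerp_rgb(c1, c2, t):
    """Linear blend in RGB, as a screen gradient does."""
    return (1 - t) * np.asarray(c1, float) + t * np.asarray(c2, float)

def subtractive_mix(c1, c2, t):
    """Weighted geometric mean of reflectances: a toy pigment model."""
    r1 = np.asarray(c1, float) / 255.0
    r2 = np.asarray(c2, float) / 255.0
    return 255.0 * (r1 ** (1 - t)) * (r2 ** t)

yellow, blue = (240, 220, 40), (60, 80, 200)
print("RGB lerp:       ", lerp_rgb(yellow, blue, 0.5))         # greyish
print("subtractive mix:", subtractive_mix(yellow, blue, 0.5))  # darker, greener
```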

  18. FITZ-HUGH-CURTIS SYNDROME AS A FINDING DURING GYNECOLOGICAL SURGERY

    OpenAIRE

    Ricci A,Paolo; Solà D,Vicente; Pardo S,Jack

    2009-01-01

    Fitz-Hugh-Curtis syndrome is a perihepatitis produced by peritonitis secondary to ascending bacteria, as a result of pelvic inflammatory disease. In the chronic stage, adhesions between the abdominal wall and the hepatic surface can be observed, characterized by their resemblance to "violin strings". This image is considered a diagnostic criterion. We present a case of Fitz-Hugh-Curtis syndrome found during the routine inspection of the cavity...

  20. A new similarity computing method based on concept similarity in Chinese text processing

    Institute of Scientific and Technical Information of China (English)

    PENG Jing; YANG DongQing; TANG ShiWei; WANG TengJiao; GAO Jun

    2008-01-01

    The paper proposes a new text similarity computing method based on concept similarity in Chinese text processing. The new method first converts text to a word vector space model, and then splits words into a set of concepts. Through computing the inner products between concepts, it obtains the similarity between words. Finally, the method computes the similarity of texts based on the similarity of words. The contributions of the paper include: 1) a new similarity computing formula between words; 2) a new text similarity computing method based on word similarity; 3) a successful application of the method to similarity computing of Web news; and 4) a validation of the method through extensive experiments.
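
    A minimal sketch of the idea of lifting word similarity (inner products over concept weights) to text similarity; the tiny concept table is invented for illustration and is unrelated to the paper's lexicon.

```python
# Text similarity derived from word similarity over shared concepts.
import numpy as np

# Toy data: each word is a weight vector over a few concepts
# (here: vehicle, finance, speed).
concepts = {
    "car":  np.array([1.0, 0.0, 0.2]),
    "auto": np.array([0.9, 0.0, 0.3]),
    "loan": np.array([0.0, 1.0, 0.0]),
    "bank": np.array([0.0, 0.8, 0.0]),
}

def word_sim(w1, w2):
    v1, v2 = concepts[w1], concepts[w2]
    return (v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

def text_sim(t1, t2):
    # Average best word-to-word similarity, in both directions.
    s12 = np.mean([max(word_sim(a, b) for b in t2) for a in t1])
    s21 = np.mean([max(word_sim(a, b) for b in t1) for a in t2])
    return (s12 + s21) / 2.0

print(text_sim(["car", "loan"], ["auto", "bank"]))  # close to 1
```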

  1. Computer System Reliability Allocation Method and Supporting Tool

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper presents a computer system reliability allocation method, based on statistical theory and Markov chains, which can be used to allocate reliability to subsystems, hybrid systems and software modules. A relevant supporting tool built by us is introduced.

  2. Adaptive Methods and Parallel Computation for Partial Differential Equations

    Science.gov (United States)

    1992-05-01

    E. Batcher, W. C. Meilander, and J. L. Potter, Eds., Proceedings of the International Conference on Parallel Processing, Computer Society Press... 11. P. L. Baehmann, S. L. Wittchen, M. S. Shephard, K. R. Grice, and M. A. Yerry, Robust, geometrically based, automatic two-dimensional mesh...

  3. Forest Fire History... A Computer Method of Data Analysis

    Science.gov (United States)

    Romain M. Meese

    1973-01-01

    A series of computer programs is available to extract information from the individual Fire Reports (U.S. Forest Service Form 5100-29). The programs use a statistical technique to fit a continuous distribution to a set of sampled data. The goodness-of-fit program is applicable to data other than the fire history. Data summaries illustrate analysis of fire occurrence,...

  4. Applied Nonlinear Dynamics Analytical, Computational, and Experimental Methods

    CERN Document Server

    Nayfeh, Ali H

    1995-01-01

    A unified and coherent treatment of analytical, computational and experimental techniques of nonlinear dynamics with numerous illustrative applications. Features a discourse on geometric concepts such as Poincaré maps. Discusses chaos, stability and bifurcation analysis for systems of differential and algebraic equations. Includes scores of examples to facilitate understanding.

  5. Multigrid Methods on Parallel Computers: A Survey on Recent Developments

    Science.gov (United States)

    1990-12-01

    multi-color (red-black, four-color etc.) ordering of the grid points. Clearly, computation of defects, interpolation and restriction can be also... [Table 6: evaluated times; numeric columns not recoverable]

  6. Wavelet-based method for computing elastic band gaps of one-dimensional phononic crystals

    Institute of Scientific and Technical Information of China (English)

    YAN; ZhiZhong; WANG; YueSheng

    2007-01-01

    A wavelet-based method was developed to compute elastic band gaps of one-dimensional phononic crystals. The wave field was expanded in the wavelet basis and an equivalent eigenvalue problem was derived in a matrix form involving the adaptive computation of integrals of the wavelets. The method was then applied to a binary system. For comparison, the elastic band gaps of the same one-dimensional phononic crystals computed with the wavelet method and with the well-known plane wave expansion (PWE) method are both presented in this paper. The numerical results of the two methods are in good agreement, while the computation costs of the wavelet method are much lower than those of the PWE method. In addition, the adaptability of wavelets makes the method suitable for efficient band gap computation of more complex phononic structures.

  7. A new computer method for temperature measurement based on an optimal control problem

    NARCIS (Netherlands)

    Damean, N.; Houkes, Z.; Regtien, P.P.L.

    1996-01-01

    A new computer method to measure extreme temperatures is presented. The method reduces the measurement of the unknown temperature to the solving of an optimal control problem, using a numerical computer. Based on this method, a new device for temperature measurement is built. It consists of a hardwa

  8. A comparison of methods for the assessment of postural load and duration of computer use

    NARCIS (Netherlands)

    Heinrich, J.; Blatter, B.M.; Bongers, P.M.

    2004-01-01

    Aim: To compare two different methods for assessment of postural load and duration of computer use in office workers. Methods: The study population consisted of 87 computer workers. Questionnaire data about exposure were compared with exposures measured by a standardised or objective method. Measuring

  9. 34 CFR 682.304 - Methods for computing interest benefits and special allowance.

    Science.gov (United States)

    2010-07-01

    ...) PROGRAM Federal Payments of Interest and Special Allowance § 682.304 Methods for computing interest..., September 30, and December 31 of each year. A lender may use either the average daily balance method or the... shall use the average daily balance method to determine the balance on which the Secretary computes...

  10. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    Science.gov (United States)

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)
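
    A minimal sketch of the idea being taught: treat equilibrium as constrained minimization of the total Gibbs free energy subject to material balance. The species data below are rough standard values for the N2O4 = 2 NO2 system, and the setup is ours, not the article's program.

```python
# Chemical equilibrium by minimizing total Gibbs free energy subject
# to element balance (N2O4 <-> 2 NO2 at 298 K, ideal gas, 1 bar).
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 298.15
g0 = np.array([97.89e3, 51.31e3])   # G_f of [N2O4, NO2] in J/mol (approx.)

# Element-balance matrix: rows = elements (N, O), columns = species.
A = np.array([[2.0, 1.0],
              [4.0, 2.0]])
b = A @ np.array([1.0, 0.0])        # start from 1 mol of pure N2O4

def total_gibbs(n):
    n = np.clip(n, 1e-12, None)     # keep the logarithms finite
    return np.sum(n * (g0 + R * T * np.log(n / n.sum())))

res = minimize(total_gibbs, x0=[0.5, 1.0],
               bounds=[(0.0, None)] * 2,
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               method="SLSQP")
print("equilibrium moles [N2O4, NO2]:", res.x)
```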

  11. [Novel method of noise power spectrum measurement for computed tomography images with adaptive iterative reconstruction method].

    Science.gov (United States)

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Hara, Takanori; Terakawa, Shoichi; Yokomachi, Kazushi; Fujioka, Chikako; Kiguchi, Masao; Ishifuro, Minoru

    2012-01-01

    Adaptive iterative reconstruction techniques (IRs) can decrease image noise in computed tomography (CT) and are expected to contribute to reduction of the radiation dose. To evaluate the performance of IRs, the conventional two-dimensional (2D) noise power spectrum (NPS) is widely used. However, when an IR lowers the NPS value at all spatial frequencies (which resembles the NPS change produced by a dose increase), the conventional method cannot evaluate the noise property correctly, because it does not reflect the volumetric nature of CT image data. The purpose of our study was to develop a new method for NPS measurement that can be adapted to IRs. Our method utilizes thick multi-planar reconstruction (MPR) images. Thick images are generally made by averaging CT volume data in the direction perpendicular to the MPR plane (e.g. the z-direction for an axial MPR plane). By using this averaging technique as a cutter for the 3D NPS, we can obtain an adequate 2D-extracted NPS (eNPS) from the 3D NPS. We applied this method to IR images generated with adaptive iterative dose reduction 3D (AIDR-3D, Toshiba) to investigate its validity. A water phantom with a 24 cm diameter was scanned at 120 kV and 200 mAs with a 320-row CT scanner (Aquilion ONE, Toshiba). The results of the study showed that the adequate thickness of MPR images for eNPS was more than 25.0 mm. Our new NPS measurement method utilizing thick MPR images was accurate and effective for evaluating the noise reduction effects of IRs.
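
    For context, a conventional 2D NPS estimate looks like the sketch below (synthetic white noise standing in for noise-only CT images); the method described above would first average the volume along z into thick MPR slices before this step.

```python
# Conventional 2-D noise power spectrum (NPS) estimate from noise-only
# image ROIs; synthetic noise stands in for real CT data.
import numpy as np

rng = np.random.default_rng(1)
n_roi, size, px = 64, 128, 0.5   # number of ROIs, ROI size, pixel pitch (mm)

nps = np.zeros((size, size))
for _ in range(n_roi):
    roi = rng.normal(0.0, 10.0, (size, size))  # noise-only ROI (HU)
    roi -= roi.mean()                           # remove the mean level
    nps += np.abs(np.fft.fft2(roi)) ** 2
nps *= (px * px) / (size * size * n_roi)        # standard normalization

# Sanity check: the mean NPS equals variance times pixel area.
print("mean NPS:", nps.mean())                  # ~ 100 * 0.25 = 25
```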

  12. 3D modeling method for computer animation based on a modified weak structured light method

    Science.gov (United States)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2010-11-01

    A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models are playing an increasingly important role in many fields, such as computer animation, industrial design, artistic design and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In the field of computer animation, such optical measurement devices are too expensive to be widely adopted, while, on the other hand, precision is not as critical a factor in that situation. In this paper, a new cheap 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source and a straight stick rotating on a fixed axis. For an ordinary weak-structured-light configuration, one or two reference planes are required, and the shadows on these planes must be tracked during the scanning process, which destroys the convenience of the method. In the modified system, reference planes are unnecessary, and the size range of the scanned objects is expanded considerably. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints into a full description of the object and, after a series of operations, a NURBS surface model is generated in the end. A complex toy bear is used to verify the efficiency of the method, with errors ranging from 0.7783 mm to 1.4326 mm compared with the ground truth measurement.

  13. Dynamic Factor Method of Computing Dynamic Mathematical Model for System Simulation

    Institute of Scientific and Technical Information of China (English)

    老大中; 吴娟; 杨策; 蒋滋康

    2003-01-01

    Computational methods for a typical dynamic mathematical model that describes the differential element and the inertial element in system simulation are investigated, along with the stability of the model's numerical solutions. By means of theoretical analysis, the error formulas, the error-sign criteria and the error-relationship criterion of the implicit Euler method and the trapezoidal method are given; the dynamic factor affecting the computational accuracy is identified, and the formula and methods for computing the dynamic factor are provided. The computational accuracy of dynamic mathematical models of this kind can be improved by use of the dynamic factor.
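
    A minimal sketch comparing the two integrators named above on a first-order inertial element T*dy/dt + y = u with a step input; the gap between each scheme and the exact response is the kind of error the dynamic factor is meant to quantify. Parameters are illustrative.

```python
# Implicit Euler versus trapezoidal rule on an inertial element
# T*dy/dt + y = u (unit step input).
import numpy as np

T, dt, steps, u = 1.0, 0.2, 25, 1.0
y_ie = y_tr = 0.0

for k in range(1, steps + 1):
    # Implicit Euler: (y_new - y)/dt = (u - y_new)/T
    y_ie = (y_ie + dt * u / T) / (1.0 + dt / T)
    # Trapezoidal rule: average the slopes at both ends of the step
    y_tr = ((1.0 - dt / (2 * T)) * y_tr + dt * u / T) / (1.0 + dt / (2 * T))

t = steps * dt
exact = u * (1.0 - np.exp(-t / T))
print(f"t={t:.1f}  exact={exact:.5f}  "
      f"implicit Euler={y_ie:.5f}  trapezoidal={y_tr:.5f}")
```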

  14. Computational Nuclear Physics and Post Hartree-Fock Methods

    CERN Document Server

    Lietz, Justin; Jansen, Gustav R; Hagen, Gaute; Hjorth-Jensen, Morten

    2016-01-01

    We present a computational approach to infinite nuclear matter employing Hartree-Fock theory, many-body perturbation theory and coupled cluster theory. These lectures are closely linked with those of chapters 9, 10 and 11 and serve as input for the correlation functions employed in Monte Carlo calculations in chapter 9, the in-medium similarity renormalization group theory of dense fermionic systems of chapter 10 and the Green's function approach in chapter 11. We provide extensive code examples and benchmark calculations, allowing thereby an eventual reader to start writing her/his own codes. We start with an object-oriented serial code and end with discussions on strategies for porting the code to present and planned high-performance computing facilities.

  15. Inferring biological functions of guanylyl cyclases with computational methods

    KAUST Repository

    Alquraishi, May Majed

    2013-09-03

    A number of studies have shown that functionally related genes are often co-expressed and that computational co-expression analysis can be used to accurately identify functional relationships between genes and, by inference, their encoded proteins. Here we describe how a computational co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.

  16. A theoretical method for assessing disruptive computer viruses

    Science.gov (United States)

    Wu, Yingbo; Li, Pengdeng; Yang, Lu-Xing; Yang, Xiaofan; Tang, Yuan Yan

    2017-09-01

    To assess the prevalence of disruptive computer viruses in the situation that every node in a network has its own virus-related attributes, a heterogeneous epidemic model is proposed. A criterion for the global stability of the virus-free equilibrium and a criterion for the existence of a unique viral equilibrium are given, respectively. Furthermore, extensive simulation experiments are conducted, and some interesting phenomena are found from the experimental results. On this basis, some policies of suppressing disruptive viruses are recommended.
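
    A minimal sketch of a node-level heterogeneous epidemic model of the kind described, where every node carries its own infection and cure rate; the specific mean-field SIS form and all rates below are generic stand-ins, not the paper's model.

```python
# Node-level heterogeneous SIS dynamics on a random contact graph;
# x[i] is the probability that node i is infected.
import numpy as np

rng = np.random.default_rng(2)
n, dt, steps = 50, 0.01, 20000
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T                        # symmetric adjacency, no self-loops

beta = rng.uniform(0.05, 0.3, n)   # per-node infection rates
delta = rng.uniform(0.2, 0.6, n)   # per-node cure rates

x = np.zeros(n)
x[0] = 1.0                         # a single initially infected node
for _ in range(steps):
    x += dt * (beta * (1 - x) * (A @ x) - delta * x)
    x = np.clip(x, 0.0, 1.0)

print("endemic level (mean infection probability):", x.mean())
```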

  17. Computational Methods for Aerodynamic Design (Inverse) and Optimization

    Science.gov (United States)

    1990-01-01

    small local supersonic flow field has to be extended (Fig. 18), using a higher contour curvature near the end of the supersonic region and... TT - total temperature; ΔP - computed change in design parameters; x - NACA0012 airfoil axial coordinate; xn - position vectors of Bezier control points... x(u) - Bernstein-Bezier polynomial; x(u,v,w) - three-dimensional Bezier polynomial; y(x) - function which defines a NACA0012 airfoil. 1.0 INTRODUCTION

  18. Computational Systems Biology in Cancer: Modeling Methods and Applications

    Directory of Open Access Journals (Sweden)

    Wayne Materi

    2007-01-01

    Full Text Available In recent years it has become clear that carcinogenesis is a complex process, both at the molecular and cellular levels. Understanding the origins, growth and spread of cancer therefore requires an integrated or system-wide approach. Computational systems biology is an emerging sub-discipline in systems biology that utilizes the wealth of data from genomic, proteomic and metabolomic studies to build computer simulations of intra- and intercellular processes. Several useful descriptive and predictive models of the origin, growth and spread of cancers have been developed in an effort to better understand the disease and potential therapeutic approaches. In this review we describe and assess the practical and theoretical underpinnings of commonly-used modeling approaches, including ordinary and partial differential equations, Petri nets, cellular automata, agent-based models and hybrid systems. A number of computer-based formalisms have been implemented to improve the accessibility of the various approaches to researchers whose primary interest lies outside of model development. We discuss several of these and describe how they have led to novel insights into tumor genesis, growth, apoptosis, vascularization and therapy.

  19. Multiscale stochastic finite element method on random field modeling of geotechnical problems- a fast computing procedure

    Institute of Scientific and Technical Information of China (English)

    Xi F. XU

    2015-01-01

    The Green-function-based multiscale stochastic finite element method (MSFEM) has been formulated based on the stochastic variational principle. In this study a fast computing procedure based on the MSFEM is developed to solve random field geotechnical problems with a typical coefficient of variation less than 1. A unique fast computing advantage of the procedure is that computation is performed only at those locations of interest, therefore saving a great deal of computation. The numerical example on soil settlement shows that the procedure achieves significant computing efficiency compared with the Monte Carlo method.

  20. Reflections about Research in Computer Science regarding the Classification of Sciences and the Scientific Method

    Directory of Open Access Journals (Sweden)

    WAZLAWICK, R. S.

    2010-12-01

    Full Text Available This paper presents some observations about Computer Science and the Scientific Method. Initially, the paper discusses the different aspects of Computer Science regarding the classification of sciences. It is observed that different areas inside Computer Science can be classified as different Sciences. The paper presents the main philosophical schools that define what is understood as the Scientific Method, and their influence on Computer Science. Finally, the paper discusses the distinction between Science and Technology and the degrees of maturity in Computer Science research.

  1. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.

  2. ADVANCED METHODS FOR THE COMPUTATION OF PARTICLE BEAM TRANSPORT AND THE COMPUTATION OF ELECTROMAGNETIC FIELDS AND MULTIPARTICLE PHENOMENA

    Energy Technology Data Exchange (ETDEWEB)

    Alex J. Dragt

    2012-08-31

    Since 1980, under the grant DEFG02-96ER40949, the Department of Energy has supported the educational and research work of the University of Maryland Dynamical Systems and Accelerator Theory (DSAT) Group. The primary focus of this educational/research group has been on the computation and analysis of charged-particle beam transport using Lie algebraic methods, and on advanced methods for the computation of electromagnetic fields and multiparticle phenomena. This Final Report summarizes the accomplishments of the DSAT Group from its inception in 1980 through its end in 2011.

  3. FAST PARALLELIZABLE METHODS FOR COMPUTING INVARIANT SUBSPACES OF HERMITIAN MATRICES

    Institute of Scientific and Technical Information of China (English)

    Zhenyue Zhang; Hongyuan Zha; Wenlong Ying

    2007-01-01

    We propose a quadratically convergent algorithm for computing the invariant subspaces of an Hermitian matrix. Each iteration of the algorithm consists of one matrix-matrix multiplication and one QR decomposition. We present an accurate convergence analysis of the algorithm without using the big O notation. We also propose a general framework based on implicit rational transformations which allows us to make connections with several existing algorithms and to derive classes of extensions to our basic algorithm with faster convergence rates. Several numerical examples are given which compare some aspects of the existing algorithms and the new algorithms.
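
    The per-iteration structure described (one matrix-matrix product plus one QR factorization) is that of orthogonal subspace iteration, sketched below; the paper's algorithm adds the implicit rational transformations that raise the convergence rate from linear to quadratic.

```python
# Orthogonal subspace iteration on a real symmetric matrix: one
# matrix-matrix multiplication and one QR decomposition per step.
import numpy as np

rng = np.random.default_rng(3)
n, k = 200, 5
H = rng.normal(size=(n, n))
H = (H + H.T) / 2.0                 # Hermitian (real symmetric) test matrix

Q, _ = np.linalg.qr(rng.normal(size=(n, k)))
for _ in range(200):
    Z = H @ Q                       # matrix-matrix multiplication
    Q, _ = np.linalg.qr(Z)          # re-orthonormalize the basis

# Q now (approximately) spans an invariant subspace of H.
resid = H @ Q - Q @ (Q.T @ H @ Q)
print("invariance residual:", np.linalg.norm(resid))
```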

  4. Introduction to Computational Methods for Stability and Control (COMSAC)

    Science.gov (United States)

    Hall, Robert M.; Fremaux, C. Michael; Chambers, Joseph R.

    2004-01-01

    This Symposium is intended to bring together the often distinct cultures of the Stability and Control (S&C) community and the Computational Fluid Dynamics (CFD) community. The COMSAC program is itself a new effort by NASA Langley to accelerate the application of high-end CFD methodologies to the demanding job of predicting stability and control characteristics of aircraft. This talk is intended to set the stage by explaining the need for a program like COMSAC; it is not intended to give details of the program itself. The topics include: 1) S&C challenges; 2) aero prediction methodology; 3) CFD applications; 4) NASA COMSAC planning; 5) objectives of the symposium; and 6) closing remarks.

  5. Spatial Analysis Along Networks Statistical and Computational Methods

    CERN Document Server

    Okabe, Atsuyuki

    2012-01-01

    In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process

  6. Advanced Computational Methods for Thermal Radiative Heat Transfer.

    Energy Technology Data Exchange (ETDEWEB)

    Tencer, John; Carlberg, Kevin Thomas; Larsen, Marvin E.; Hogan, Roy E.,

    2016-10-01

    Participating media radiation (PMR) calculations in weapon safety analyses for abnormal thermal environments are too costly to perform routinely. This cost may be substantially reduced by applying reduced order modeling (ROM) techniques. The application of ROM to PMR is a new and unique approach for this class of problems. This approach was investigated by the authors and shown to provide significant reductions in the computational expense associated with typical PMR simulations. Once this technology is migrated into production heat transfer analysis codes, this capability will enable the routine use of PMR heat transfer in higher-fidelity simulations of weapon response in fire environments.

  7. Predicted PAR1 inhibitors from multiple computational methods

    Science.gov (United States)

    Wang, Ying; Liu, Jinfeng; Zhu, Tong; Zhang, Lujia; He, Xiao; Zhang, John Z. H.

    2016-08-01

    Multiple computational approaches are employed in order to find potentially strong binders of PAR1 from two molecular databases: the Specs database, containing more than 200,000 commercially available molecules, and the traditional Chinese medicine (TCM) database. By combining the use of popular docking scoring functions with detailed molecular dynamics simulation and protein-ligand free energy calculations, a total of fourteen molecules are found to be potentially strong binders of PAR1. The atomic details of the protein-ligand interactions of these molecules with PAR1 are analyzed to help understand the binding mechanism, which should be very useful in the design of new drugs.

  8. A method of computational magnetohydrodynamics defining stable Scyllac equilibria.

    Science.gov (United States)

    Betancourt, O; Garabedian, P

    1977-02-01

    A computer code has been developed for the numerical calculation of sharp boundary equilibria of a toroidal plasma with diffuse pressure profile. This generalizes earlier work that was done separately on the sharp boundary and diffuse models, and it allows for large amplitude distortions of the plasma in three-dimensional space. By running the code, equilibria that are stable to the so-called m = 1, k = 0 mode have been found for Scyllac, which is a high beta toroidal confinement device of very large aspect ratio.

  9. Lattice QCD computations: Recent progress with modern Krylov subspace methods

    Energy Technology Data Exchange (ETDEWEB)

    Frommer, A. [Bergische Universitaet GH Wuppertal (Germany)

    1996-12-31

    Quantum chromodynamics (QCD) is the fundamental theory of the strong interaction of matter. In order to compare the theory with results from experimental physics, the theory has to be reformulated as a discrete problem of lattice gauge theory using stochastic simulations. The computational challenge consists in solving several hundreds of very large linear systems with several right-hand sides. A considerable part of the world's supercomputer time is spent in such QCD calculations. This paper presents results on solving systems for Wilson fermions. Recent progress is reviewed on algorithms obtained in cooperation with partners from theoretical physics.

  11. Computational method validation: An OECD (Organization for Economic Cooperation and Development) working group report

    Energy Technology Data Exchange (ETDEWEB)

    Whitesides, G.E.

    1987-01-01

    Representatives from eleven Organization for Economic Co-operation and Development (OECD) member countries participated in an exercise to validate computer calculations used to evaluate criticality safety for several fissile material transport and handling situations. A procedure evolved from this work which has been shown to demonstrate whether a given computational method produces "valid" results. This procedure is expected to provide a basis for international acceptance of computational results by regulatory authorities through the comparison of methods used by the various countries. This work will also provide the framework for validating computational methods for other applications such as heat transfer and neutron/gamma shielding.

  12. Adaptive computational methods for SSME internal flow analysis

    Science.gov (United States)

    Oden, J. T.

    1986-01-01

    Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (Space Shuttle Main Engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving-mesh methods), in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for the steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given and some pertinent conclusions of the study are listed.

  13. An accurate and efficient computation method of the hydration free energy of a large, complex molecule.

    Science.gov (United States)

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-07

    The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as the sum of ⟨U_UV⟩/2 (where ⟨U_UV⟩ is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and a water reorganization term mainly reflecting the excluded volume effect. Since ⟨U_UV⟩ can readily be computed through an MD simulation of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can be calculated quantitatively using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of the solute, with the corresponding coefficients determined by the energy representation (ER) method. Since the MA enables us to finish the computation of the water reorganization term in less than 0.1 s once the coefficients are determined, our method provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with a substantial reduction of the computational load.
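
    Once the coefficients are fitted, the decomposition described above reduces to a few lines of bookkeeping; all numbers below are placeholders inserted purely to show the structure, not values from the paper.

```python
# HFE as <U_UV>/2 plus a morphometric term: a linear combination of
# four geometric measures of the solute. All values are placeholders.
import numpy as np

u_uv_mean = -350.0                  # <U_UV> from an MD run (placeholder)

# Four geometric measures: excluded volume, surface area, integrated
# mean curvature, integrated Gaussian curvature (placeholders).
measures = np.array([12000.0, 4500.0, 300.0, 40.0])

# Coefficients determined once with the energy-representation method.
coeffs = np.array([0.02, 0.01, -0.05, 0.3])   # placeholders

water_reorg = coeffs @ measures     # morphometric approach: linear form
hfe = u_uv_mean / 2.0 + water_reorg
print("HFE estimate:", hfe)
```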

  14. A low computation cost method for seizure prediction.

    Science.gov (United States)

    Zhang, Yanli; Zhou, Weidong; Yuan, Qi; Wu, Qi

    2014-10-01

    The dynamic changes of electroencephalograph (EEG) signals in the period prior to epileptic seizures play a major role in seizure prediction. This paper proposes a low-computation seizure prediction algorithm that combines a fractal dimension with a machine learning algorithm. The presented algorithm extracts the Higuchi fractal dimension (HFD) of EEG signals as a feature to classify the patient's preictal or interictal state, with Bayesian linear discriminant analysis (BLDA) as a classifier. The outputs of BLDA are smoothed by a Kalman filter to reduce possible sporadic and isolated false alarms, and then the final prediction results are produced using a thresholding procedure. The algorithm was evaluated on the intracranial EEG recordings of 21 patients in the Freiburg EEG database. For seizure occurrence periods of 30 min and 50 min, our algorithm obtained an average sensitivity of 86.95% and 89.33%, an average false prediction rate of 0.20/h, and an average prediction time of 24.47 min and 39.39 min, respectively. The results confirm that changes in HFD can serve as a precursor of ictal activities and be used for distinguishing between interictal and preictal epochs. Both the HFD and the BLDA classifier have low computational complexity. All of these make the proposed algorithm suitable for real-time seizure prediction.
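
    The feature at the heart of the predictor, Higuchi's fractal dimension, is short to implement; the sketch below follows the standard algorithm with an arbitrary k_max and toy signals.

```python
# Higuchi fractal dimension (HFD) of a 1-D signal.
import numpy as np

def higuchi_fd(x, k_max=8):
    n = len(x)
    mean_lengths = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # Normalized curve length for this offset and scale k.
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)
            lengths.append(dist * norm / k)
        mean_lengths.append(np.mean(lengths))
    # HFD is the slope of log L(k) against log(1/k).
    ks = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(mean_lengths), 1)
    return slope

rng = np.random.default_rng(4)
print("white noise:", higuchi_fd(rng.normal(size=1000)))             # near 2
print("sine wave:  ", higuchi_fd(np.sin(np.linspace(0, 20, 1000))))  # near 1
```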

  15. University receives $1.4 million science education award from Howard Hughes Medical Institute to support innovative undergraduate science education

    OpenAIRE

    Owczarski, Mark

    2010-01-01

    Virginia Tech was among 50 top research universities nationwide to receive a Precollege and Undergraduate Science Education Program award from the Howard Hughes Medical Institute (HHMI) that will be used to encourage university faculty to develop new ways to teach and inspire undergraduate students about science and research.

  16. Pretext, Context, Subtext: Textual Power in the Writing of Langston Hughes, Richard Wright, and Martin Luther King, Jr.

    Science.gov (United States)

    Bogumil, Mary L.; Molino, Michael R.

    1990-01-01

    Studies verbal pretexts, social subtexts, and interpretive contexts of works by Langston Hughes, Richard Wright, and Martin Luther King, Jr. Notes that cultural repression is propagated (and dispelled) in part through the power of language. Notes that these texts are relevant for teaching textual power in hopes of affecting social change. (RS)

  17. Computational method for general multicenter electronic structure calculations.

    Science.gov (United States)

    Batcho, P F

    2000-06-01

    Here a three-dimensional fully numerical (i.e., chemical basis-set free) method [P. F. Batcho, Phys. Rev. A 57, 6 (1998)], is formulated and applied to the calculation of the electronic structure of general multicenter Hamiltonian systems. The numerical method is presented and applied to the solution of Schrödinger-type operators, where a given number of nuclei point singularities is present in the potential field. The numerical method combines the rapid "exponential" convergence rates of modern spectral methods with the multiresolution flexibility of finite element methods, and can be viewed as an extension of the spectral element method. The approximation of cusps in the wave function and the formulation of multicenter nuclei singularities are efficiently dealt with by the combination of a coordinate transformation and a piecewise variational spectral approximation. The complete system can be efficiently inverted by established iterative methods for elliptical partial differential equations; an application of the method is presented for atomic, diatomic, and triatomic systems, and comparisons are made to the literature when possible. In particular, local density approximations are studied within the context of Kohn-Sham density functional theory, and are presented for selected subsets of atomic and diatomic molecules as well as the ozone molecule.

  18. Higher-Order Integral Equation Methods in Computational Electromagnetics

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Meincke, Peter

    Higher-order integral equation methods have been investigated. The study has focused on improving the accuracy and efficiency of the Method of Moments (MoM) applied to electromagnetic problems. A new set of hierarchical Legendre basis functions of arbitrary order is developed. The new basis...

  19. Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots

    National Research Council Canada - National Science Library

    Ching-Long Shih; Wen-Yo Lee; Chia-Pin Wu

    2012-01-01

    This paper aims to demonstrate a clear relationship between Lagrange equations and Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method...

  20. The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

    Science.gov (United States)

    Beltran, Adriana; Salvador, James

    1997-01-01

    In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.

  1. Theoretical analysis and control results for the FitzHugh-Nagumo equation

    Directory of Open Access Journals (Sweden)

    Marko Antonio Rojas-Medar

    2008-12-01

    Full Text Available In this paper we are concerned with some theoretical questions for the FitzHugh-Nagumo equation. First, we recall the system, briefly explain the meaning of the variables and present a simple proof of the existence and uniqueness of a strong solution. We also consider an optimal control problem for this system. In this context, the goal is to determine how we can act on the system in order to obtain good properties. We prove the existence of optimal state-control pairs and, as an application of the Dubovitskii-Milyutin formalism, we deduce the corresponding optimality system. We also connect the optimal control problem with a controllability question and construct a sequence of controls that produce solutions converging strongly to desired states. This provides a strategy for making the system behave as desired. Finally, we present some open questions related to the control of this equation.

  2. A correspondence between the models of Hodgkin-Huxley and FitzHugh-Nagumo revisited

    Science.gov (United States)

    Postnikov, Eugene B.; Titkova, Olga V.

    2016-11-01

    We discuss the possibility of scaling the classical dimensionless FitzHugh-Nagumo model of neuronal self-sustained oscillations to the range of variables corresponding to the results provided by the biophysically relevant reduced two-dimensional Hodgkin-Huxley equations (the Rinzel model). It is shown that there exists a relatively simple choice of affine transformation which results in time-dependent solutions that reproduce with high accuracy the time course of the recovery variable and the sharp onsets (intervals of fast motion on the phase trajectories) of the voltage spikes. As for the latter, the reasons for the unavoidable differences are discussed, as well as the necessity of taking the applied current values into account during such a scaling procedure.

  3. Phase Propagations in a Coupled Oscillator-Excitor System of FitzHugh-Nagumo Models

    Institute of Scientific and Technical Information of China (English)

    ZHOU Lu-Qun; OUYANG Qi

    2006-01-01

    A one-dimensional array of 2N + 1 automata with FitzHugh-Nagumo dynamics, in which one is set to be oscillatory and the others are excitable, is investigated with bi-directional interactions. We find that 1:1 rhythm propagation in the array depends on an appropriate coupling strength and the excitability of the system. On the two sides of the 1:1 rhythm area in parameter space, two different kinds of dynamical behaviour of the pacemaker are shown, i.e. phase-locking phenomena and canard-like phenomena. The latter is found in company with chaotic patterns and period-doubling bifurcations. When the coupling strength is larger than a critical value, the whole system tends to a steady state.

  4. Fitz-Hugh-Curtis syndrome lacking typical characteristics of pelvic inflammatory disease.

    Science.gov (United States)

    Mitaka, Hayato; Kitazono, Hidetaka; Deshpande, Gautam A; Hiraoka, Eiji

    2016-06-22

    A 23-year-old Japanese woman, previously a commercial sex worker, presented with a 2-day history of right upper quadrant (RUQ) abdominal pain, worse on deep inspiration. She had noticed increased vaginal discharge 2 months earlier and had developed dull, lower abdominal pain 3 weeks prior to presentation. Although pelvic examination and transvaginal ultrasonography revealed neither a tubal nor ovarian pathology, abdominal CT scan with contrast demonstrated early enhancement of the hepatic capsule, a finding pathognomonic for Fitz-Hugh-Curtis syndrome (FHCS). Cervical discharge PCR assay confirmed Chlamydia trachomatis infection. This case highlights that normal gynaecological evaluation may be insufficient to rule out FHCS, for which physicians should have a high index of suspicion when seeing any woman of reproductive age with RUQ pain. 2016 BMJ Publishing Group Ltd.

  5. The African-American grandmother in autobiographical works by Frederick Douglass, Langston Hughes, and Maya Angelou.

    Science.gov (United States)

    Hill-Lubin, M A

    1991-01-01

    Using the autobiographies of Frederick Douglass, Langston Hughes, and Maya Angelou, this article demonstrates that the portrait of the African-American grandmother is one of action, involvement, hope, and dignity. In examining the works, we observe her functioning in three areas: first, as the preserver and most tenacious survivor of the African extended family; second, as repository and distributor of the family history, wisdom, and black lore, a role which places her at the foundation of the Black oral and written, literary and creative traditions; and third, as the retainer and transmitter of values and ideals that support and enhance her humanity, her family, and her community. This last function emphasizes her spirituality. It is suggested that the grandmother, having played an important role in the growth, development, and artistic flowering of the autobiographer, can become a model and source of empowerment for future generations.

  6. A filtered convolution method for the computation of acoustic wave fields in very large spatiotemporal domains

    NARCIS (Netherlands)

    Verweij, M.D.; Huijssen, J.

    2009-01-01

    The full-wave computation of transient acoustic fields with sizes in the order of 100x100x100 wavelengths by 100 periods requires a numerical method that is extremely efficient in terms of storage and computation. Iterative integral equation methods offer a good performance on these points, provided

  7. The finite difference time domain method on a massively parallel computer

    NARCIS (Netherlands)

    Ewijk, L.J. van

    1996-01-01

    At the Physics and Electronics Laboratory TNO much research is done in the field of computational electromagnetics (CEM). One of the tools in this field is the Finite Difference Time Domain method (FDTD), a method that has been implemented in a program in order to be able to compute electromagnetic

  8. Robust fault detection of linear systems using a computationally efficient set-membership method

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba; Bak, Thomas

    2014-01-01

    In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past...
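
    A minimal sketch of the interval outer-approximation idea for a discrete-time linear system x+ = A x + B u + w with elementwise-bounded noise: propagate an axis-aligned box and flag a fault when the measured output leaves the predicted interval. The system matrices, bounds and injected fault are all invented, not the paper's example.

```python
# Set-membership fault detection with interval (box) propagation for
# x+ = A x + B u + w, |w| <= w_max elementwise. Invented example.
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])          # nonnegative row, so bounds map simply
w_max = np.array([0.01, 0.01])      # process noise bound
v_max = 0.02                        # measurement noise bound

def step_box(lo, hi, u):
    """Outer box of {A x + B u + w : x in [lo, hi], |w| <= w_max}."""
    Ap, An = np.maximum(A, 0.0), np.minimum(A, 0.0)
    return (Ap @ lo + An @ hi + B @ u - w_max,
            Ap @ hi + An @ lo + B @ u + w_max)

rng = np.random.default_rng(5)
x = np.zeros(2)
lo = hi = np.zeros(2)
for k in range(60):
    u = np.array([np.sin(0.1 * k)])
    x = A @ x + B @ u + rng.uniform(-w_max, w_max)
    if k == 30:
        x = x + np.array([0.5, 0.0])   # inject an additive fault
    lo, hi = step_box(lo, hi, u)
    y = (C @ x).item() + rng.uniform(-v_max, v_max)
    # Consistency test: flags at the fault and while its effect lasts.
    if not ((C @ lo).item() - v_max <= y <= (C @ hi).item() + v_max):
        print("fault detected at step", k)
```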

  9. New method for computer numerical control machine tool calibration: Relay method

    Institute of Scientific and Technical Information of China (English)

    LIU Huanlao; SHI Hanming; LI Bin; ZHOU Huichen

    2007-01-01

    The relay measurement method, which uses the kilogram-meter (KGM) measurement system to identify volumetric errors on the planes of computer numerical control (CNC) machine tools, is verified through experimental tests. During the process, all position errors on the entire plane table are measured by the equipment, which is limited to a small field. All errors are obtained by first measuring the error of the basic position near the original point. On the basis of that positional error, the positional errors far away from the original point are measured. Using this analogy, the error information for the positional points on the entire plane can be obtained. The process outlined above is called the relay method. Test results indicate that the accuracy and repeatability are high, and the method can be used to calibrate geometric errors on the plane of CNC machine tools after backlash errors have been well compensated.
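
    A minimal numerical sketch of the relay idea described above, chaining locally measured errors to estimate positional errors far from the origin, is given below. The function name and the simple additive chaining are illustrative assumptions, not the paper's exact procedure.

        import numpy as np

        def relay_errors(base_error, relative_errors):
            """Chain locally measured errors into absolute positional errors.

            base_error      -- error measured at the basic position near the origin
            relative_errors -- errors of successive positions, each measured
                               relative to the previously relayed position
            """
            # Each absolute error is the base error plus the accumulated
            # relative errors up to that position.
            return base_error + np.cumsum(relative_errors)

        # Example: base error 2.0 um, three relayed measurements (um)
        print(relay_errors(2.0, [1.0, -0.5, 0.8]))   # [3.  2.5  3.3]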

  10. A computational method for recording and analysis of mandibular movements

    Directory of Open Access Journals (Sweden)

    Alan Petrônio Pinheiro

    2008-10-01

    This study proposed the development of a new clinical tool capable of quantifying the movements of opening-closing, protrusion and laterotrusion of the mandible. These movements are important for the clinical evaluation of the temporomandibular function and the muscles involved in mastication. Unlike current commercial systems, the proposed system employs a low-cost video camera and a computer program that is used for reconstructing the trajectory of a reflective marker fixed on the mandible. In order to illustrate the clinical application of this tool, a clinical experiment consisting of the evaluation of the mandibular movements of 12 subjects was conducted. The results of this study were compatible with those found in the literature, with the advantage of using a low-cost, simple, non-invasive, and flexible tool customized for the needs of clinical practice.

  11. Computer capillaroscopy as a new cardiological diagnostics method

    Science.gov (United States)

    Gurfinkel, Youri I.; Korol, Oleg A.; Kufal, George E.

    1998-04-01

    The blood flow in capillary vessels plays an important role in sustaining the vital activity of the human organism. The computerized capillaroscope is used for the investigation of nailfold (eponychium) capillary blood flow. An important advantage of the instrument is the possibility of performing non-invasive investigations, i.e., without damage to skin or vessels and causing no pain or unpleasant sensations. The high-class equipment and software allow direct observation of capillary blood flow dynamics on a computer screen at a 700 - 1300 times magnification. For the first time in clinical practice, it has become possible to precisely measure the speed of capillary blood flow, as well as the frequency of aggregate formation (blood particles clumped together into clots). In addition, provision is made for automatic measurement of capillary size and wall thickness and automatic recording of blood aggregate images for further visual study, documentation, and electronic database management.

  12. Simple and fast method for step size determination in computations of signal propagation through nonlinear fibres

    DEFF Research Database (Denmark)

    Rasmussen, Christian Jørgen

    2001-01-01

    Presents a simple and fast method for determination of the step size that exactly leads to a prescribed accuracy when signal propagation through nonlinear optical fibres is computed using the split-step Fourier method.
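
    As a hedged illustration of the setting, the sketch below performs one symmetric split-step Fourier step for the scalar nonlinear Schrödinger equation and picks the step size by bounding the nonlinear phase rotation per step. The phase-rotation bound is a common heuristic, not the exact accuracy-driven step-size rule of the paper.

        import numpy as np

        def step_size(gamma, peak_power, phi_max=0.05):
            # Limit the nonlinear phase rotation per step (a common heuristic;
            # the paper instead derives the step from a prescribed accuracy).
            return phi_max / (gamma * peak_power)

        def ssfm_step(A, h, dt, beta2, gamma):
            """One symmetric split-step Fourier step for the scalar NLSE."""
            w = 2.0 * np.pi * np.fft.fftfreq(A.size, d=dt)    # angular frequency
            half_disp = np.exp(0.25j * beta2 * w**2 * h)      # dispersion, h/2
            A = np.fft.ifft(np.fft.fft(A) * half_disp)
            A = A * np.exp(1j * gamma * np.abs(A)**2 * h)     # nonlinearity, h
            A = np.fft.ifft(np.fft.fft(A) * half_disp)        # dispersion, h/2
            return A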

  13. Improved fixed point iterative method for blade element momentum computations

    DEFF Research Database (Denmark)

    Sun, Zhenye; Shen, Wen Zhong; Chen, Jin

    2017-01-01

    The blade element momentum (BEM) theory is widely used in aerodynamic performance calculations and optimization applications for wind turbines. The fixed point iterative method is the most commonly utilized technique to solve the BEM equations. However, this method sometimes does not converge to the physical solution, especially for locations near the blade tip and root, where the failure rate of the iterative method is high. The stability and accuracy of aerodynamic calculations and optimizations are greatly reduced due to this problem. The intrinsic mechanisms leading to convergence problems...
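
    To make the convergence issue concrete, here is a sketch of the standard relaxed fixed-point iteration for the induction factors of a single blade element. It is a simplified textbook form (no tip-loss or high-load correction), with cl_func and cd_func as hypothetical stand-ins for airfoil polars evaluated from the inflow angle.

        import numpy as np

        def bem_fixed_point(tsr_local, solidity, cl_func, cd_func,
                            relax=0.5, tol=1e-8, max_iter=500):
            """Relaxed fixed-point iteration for one blade element.

            tsr_local -- local speed ratio (omega * r / V0); cl_func/cd_func
            map the inflow angle to lift and drag coefficients.
            """
            a, ap = 0.3, 0.0                       # axial/tangential induction
            for _ in range(max_iter):
                phi = np.arctan2(1.0 - a, tsr_local * (1.0 + ap))
                cl, cd = cl_func(phi), cd_func(phi)
                cn = cl * np.cos(phi) + cd * np.sin(phi)
                ct = cl * np.sin(phi) - cd * np.cos(phi)
                a_new = 1.0 / (4.0 * np.sin(phi)**2 / (solidity * cn) + 1.0)
                ap_new = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (solidity * ct) - 1.0)
                da, dap = abs(a_new - a), abs(ap_new - ap)
                # Under-relaxation damps the oscillation that makes the plain
                # iteration fail near the blade tip and root.
                a = (1.0 - relax) * a + relax * a_new
                ap = (1.0 - relax) * ap + relax * ap_new
                if da < tol and dap < tol:
                    break
            return a, ap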

  14. Nonlinear system identification with global and local soft computing methods

    Energy Technology Data Exchange (ETDEWEB)

    Runkler, T.A. [Siemens AG, Muenchen (Germany). Zentralabt. Technik Information und Kommunikation

    2000-10-01

    An important step in the design of control systems is system identification. Data-driven system identification finds functional models for the system's input-output behavior. Regression methods are simple and effective, but may cause overshoots for complicated characteristics. Neural network approaches such as the multilayer perceptron yield very accurate models, but are black-box approaches, which leads to problems in system and stability analysis. In contrast to these global modeling methods, crisp and fuzzy rule bases represent local models that can be extracted from data by clustering methods. Depending on the type and number of models, different degrees of model accuracy can be achieved. (orig.)

  15. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    Science.gov (United States)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide-field imaging interferometry. The method includes, for each point in a two-dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulation at every point in the two-dimensional detector array comprising the first detector and the second detector over the field of view for the image. The method then generates a wide-field data cube based on the overlaid first and second data for each point. The method can generate an image from the wide-field data cube.

  16. Lyapunov Computational Method for Two-Dimensional Boussinesq Equation

    CERN Document Server

    Mabrouk, Anouar Ben

    2010-01-01

    A numerical method leading to Lyapunov operators is developed to approximate the solution of the two-dimensional Boussinesq equation. It consists of an order reduction method and a finite difference discretization. It is proved to be uniquely solvable, and its local truncation error is analyzed to establish consistency. Stability is checked using the Lyapunov criterion, and convergence is studied. Some numerical implementations are provided at the end of the paper to validate the theoretical results.

  17. COMPUTER COMPUTATION OF THE METHOD OF MULTIPLE SCALES-DIRICHLET PROBLEM FOR A CLASS OF SYSTEM OF NONLINEAR DIFFERENTIAL EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    谢腊兵; 江福汝

    2003-01-01

    The method of boundary layer with multiple scales and computer algebra were applied to study the asymptotic behavior of the solution of boundary value problems for a class of systems of nonlinear differential equations. The asymptotic expansions of the solution were constructed, and the remainders were estimated. An example was analysed. This provides a new prospect for the application of the method of boundary layer with multiple scales.

  18. Computer-aided diagnosis system: a Bayesian hybrid classification method.

    Science.gov (United States)

    Calle-Alonso, F; Pérez, C J; Arias-Nicolás, J P; Martín, J

    2013-10-01

    A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach which combines pairwise comparison, Bayesian regression and the k-nearest neighbor technique. It can be applied in a fully automatic way or in a relevance feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is iteratively used to improve the results until a certain accuracy level is achieved; then the learning process is finished and new classifications can be performed automatically. The method has been applied in two biomedical contexts by following the same cross-validation schemes as in the original studies. The first one refers to cancer diagnosis, leading to an accuracy of 77.35% versus the 66.37% obtained originally. The second one considers the diagnosis of pathologies of the vertebral column. The original method achieves accuracies ranging from 76.5% to 96.7%, and from 82.3% to 97.1%, in two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases. By using a supervised framework, the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified.

  19. A comparison of phylogenetic network methods using computer simulation.

    Directory of Open Access Journals (Sweden)

    Steven M Woolley

    BACKGROUND: We present a series of simulation studies that explore the relative performance of several phylogenetic network approaches (statistical parsimony, split decomposition, union of maximum parsimony trees, neighbor-net, simulated history recombination upper bound, median-joining, reduced median-joining and minimum spanning network) compared to standard tree approaches (neighbor-joining and maximum parsimony) in the presence and absence of recombination. PRINCIPAL FINDINGS: In the absence of recombination, all methods recovered the correct topology and branch lengths nearly all of the time when the substitution rate was low, except for minimum spanning networks, which did considerably worse. At a higher substitution rate, maximum parsimony and union of maximum parsimony trees were the most accurate. With recombination, the ability to infer the correct topology was halved for all methods and no method could accurately estimate branch lengths. CONCLUSIONS: Our results highlight the need for more accurate phylogenetic network methods and the importance of detecting and accounting for recombination in phylogenetic studies. Furthermore, we provide useful information for choosing a network algorithm and a framework in which to evaluate improvements to existing methods and novel algorithms developed in the future.

  20. Computational methods to compute wavefront error due to aero-optic effects

    Science.gov (United States)

    Genberg, Victor; Michels, Gregory; Doyle, Keith; Bury, Mark; Sebastian, Thomas

    2013-09-01

    Aero-optic effects can have a deleterious impact on high performance airborne optical sensors that must view through turbulent flow fields created by the aerodynamic effects of windows and domes. Evaluating aero-optic effects early in the program, during the design stages, allows mitigation strategies and optical system design trades to be performed to optimize system performance. This necessitates a computationally efficient means to evaluate the impact of aero-optic effects such that the resulting dynamic pointing errors and wavefront distortions due to the spatially and temporally varying flow field can be minimized or corrected. To this end, an aero-optic analysis capability was developed within the commercial software SigFit that couples CFD results with optical design tools. SigFit reads the CFD-generated density profile using the CGNS file format. OPD maps are then created by converting the three-dimensional density field into an index of refraction field and then integrating along specified paths to compute OPD errors across the optical field. The OPD maps may be evaluated directly against system requirements or imported into commercial optical design software including Zemax® and Code V® for a more detailed assessment of the impact on optical performance from which design trades may be performed.
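
    A minimal sketch of the core OPD computation described above, assuming straight-line integration paths along z and the Gladstone-Dale relation n = 1 + K*rho with a constant K for air; the actual SigFit pipeline integrates along arbitrary specified paths.

        import numpy as np

        def opd_map(density, dz, k_gd=2.27e-4):
            """Optical path difference map from a 3D density field (kg/m^3).

            density -- array of shape (nx, ny, nz); z is the viewing axis.
            k_gd    -- Gladstone-Dale constant for air at visible wavelengths.
            """
            n_minus_1 = k_gd * density                # index field minus unity
            opl = n_minus_1.sum(axis=2) * dz          # optical path length map
            return opl - opl.mean()                   # remove the piston term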

  1. Quasi-Hamiltonian Method for Computation of Decoherence Rates

    CERN Document Server

    Joynt, Robert; Wang, Qiang-Hua

    2009-01-01

    We present a general formalism for the dissipative dynamics of an arbitrary quantum system in the presence of a classical stochastic process. It is applicable to a wide range of physical situations, and in particular it can be used for qubit arrays in the presence of classical two-level systems (TLS). In this formalism, all decoherence rates appear as eigenvalues of an evolution matrix. Thus the method is linear, and the close analogy to Hamiltonian systems opens up a toolbox of well-developed methods such as perturbation theory and mean-field theory. We apply the method to the problem of a single qubit in the presence of TLS that give rise to pure dephasing 1/f noise and solve this problem exactly. The exact solution gives an experimentally observable improvement over the popular Gaussian approximation.

  2. Computation of nonuniform transmission lines using the FDTD method

    Energy Technology Data Exchange (ETDEWEB)

    Miranda, G.C.; Paulino, J.O.S. [Universidade Federal de Minas Gerais, Belo Horizonte, MG (Brazil). School of Engineering

    1997-12-31

    The calculation of lightning overvoltages on transmission lines is described. Lightning-induced overvoltages are of great significance under certain conditions because of the main characteristics of the phenomena. The lightning channel model is one of the most important parameters essential to obtaining the generated electromagnetic fields. In this study, nonuniform transmission line equations were solved using the finite difference method with a leap-frog scheme, i.e. the Finite Difference Time Domain (FDTD) method. The subroutine was interfaced with the Electromagnetic Transients Program (EMTP). Two models were used to represent the characteristic impedance of the nonuniform lines used to model the transmission line towers and the main lightning channel. The advantages of the FDTD method were the much smaller code and faster processing time. 35 refs., 5 figs.
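
    The core of such a solver is the leap-frog update of the telegrapher's equations on a staggered grid. The sketch below is a generic lossless 1D update with per-unit-length L(x) and C(x) arrays standing in for the nonuniform line; boundary conditions (source, termination, EMTP interface) are left out.

        import numpy as np

        def leapfrog_step(V, I, L, C, dt, dx):
            """One FDTD (leap-frog) step for a lossless nonuniform line.

            V has N+1 nodes and C has N+1 per-unit-length capacitances;
            I has N branch currents staggered half a cell, with N
            per-unit-length inductances in L.
            """
            # dV/dt = -(1/C) dI/dx at interior voltage nodes
            V[1:-1] -= dt / (C[1:-1] * dx) * (I[1:] - I[:-1])
            # dI/dt = -(1/L) dV/dx at the staggered current points
            I -= dt / (L * dx) * (V[1:] - V[:-1])
            return V, I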

  3. Fast calculation method of computer-generated cylindrical hologram using wave-front recording surface.

    Science.gov (United States)

    Zhao, Yu; Piao, Mei-lan; Li, Gang; Kim, Nam

    2015-07-01

    A fast calculation method for a computer-generated cylindrical hologram (CGCH) is proposed. The method consists of two steps: the first step is the calculation of a virtual wave-front recording surface (WRS), which is located between the 3D object and the CGCH. In the second step, in order to obtain the CGCH, we execute the diffraction calculation based on the fast Fourier transform (FFT) from the WRS to the CGCH, which are in the same concentric arrangement. The computational complexity is dramatically reduced in comparison with the direct integration method. The simulation results confirm that our proposed method is able to improve the computational speed of CGCH.
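
    For orientation, an FFT-based diffraction step is sketched below in its simplest planar angular-spectrum form, whereas the paper applies the idea between concentric cylindrical surfaces. Uniform sampling interval dx and propagation distance z are assumed.

        import numpy as np

        def angular_spectrum(u0, wavelength, z, dx):
            """Propagate a sampled complex field u0 by distance z via FFT."""
            k = 2 * np.pi / wavelength
            fy = np.fft.fftfreq(u0.shape[0], d=dx)
            fx = np.fft.fftfreq(u0.shape[1], d=dx)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
            kz = k * np.sqrt(np.maximum(arg, 0.0))
            H = np.exp(1j * kz * z) * (arg > 0)   # evanescent components dropped
            return np.fft.ifft2(np.fft.fft2(u0) * H)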

  4. Translation Method and Computer Programme for Assisting the Same

    DEFF Research Database (Denmark)

    2013-01-01

    The present invention relates to a translation method comprising the steps of: a translator speaking a translation of a written source text in a target language, an automatic speech recognition system converting the spoken translation into a set of phone and word hypotheses in the target language......, a machine translation system translating the written source text into a set of translations hypotheses in the target language, and an integration module combining the set of spoken word hypotheses and the set of machine translation hypotheses obtaining a text in the target language. Thereby obtaining...... a significantly improved and faster translation compared to what is achieved by known translation methods....

  5. Computational Biology Methods for Characterization of Pluripotent Cells.

    Science.gov (United States)

    Araúzo-Bravo, Marcos J

    2016-01-01

    Pluripotent cells are a powerful tool for regenerative medicine and drug discovery. Several techniques have been developed to induce pluripotency, or to extract pluripotent cells from different tissues and biological fluids. However, the characterization of pluripotency requires tedious, expensive, time-consuming, and not always reliable wet-lab experiments; thus, an easy, standard quality-control protocol of pluripotency assessment remains to be established. Here, high-throughput techniques come to help, in particular the employment of gene expression microarrays, which has become a complementary technique for cellular characterization. Research has shown that comparing transcriptomes with an Embryonic Stem Cell (ESC) reference is a good approach to assessing pluripotency. Under the premise that the best protocol is computer software source code, here I propose and explain, line by line, a software protocol coded in R-Bioconductor for pluripotency assessment based on the comparison of transcriptomics data of pluripotent cells with an ESC reference. I provide advice on experimental design, warn about possible pitfalls, and give guidance for interpreting the results.

  6. Modern Electrophysiological Methods for Brain-Computer Interfaces

    Directory of Open Access Journals (Sweden)

    Rolando Grave de Peralta Menendez

    2007-01-01

    Modern electrophysiological studies in animals show that the spectrum of neural oscillations encoding relevant information is broader than previously thought and that many diverse areas are engaged for very simple tasks. However, EEG-based brain-computer interfaces (BCI) still employ as control modality relatively slow brain rhythms or features derived from preselected frequencies and scalp locations. Here, we describe the strategy and the algorithms we have developed for the analysis of electrophysiological data and demonstrate their capacity to lead to faster accurate decisions based on linear classifiers. To illustrate this strategy, we analyzed two typical BCI tasks. (1) Mu-rhythm control of a cursor movement by a paraplegic patient. For these data, we show that although the patient received extensive training in mu-rhythm control, valuable information about movement imagination is present in the untrained high-frequency rhythms. This is the first demonstration of the importance of high-frequency rhythms in imagined limb movements. (2) Self-paced finger-tapping task in three healthy subjects, including the data set used in the BCI-2003 competition. We show that by selecting electrodes and frequency ranges based on their discriminative power, the classification rates can be systematically improved with respect to results published thus far.

  7. Computational method for calligraphic style representation and classification

    Science.gov (United States)

    Zhang, Xiafen; Nagy, George

    2015-09-01

    A large collection of reproductions of calligraphy on paper was scanned into images to enable web access for both the academic community and the public. Calligraphic paper digitization technology is mature, but technology for segmentation, character coding, style classification, and identification of calligraphy is lacking. Therefore, computational tools for classification and quantification of calligraphic style are proposed and demonstrated on a statistically characterized corpus. A subset of 259 historical page images is segmented into 8719 individual character images. Calligraphic style is revealed and quantified by visual attributes (i.e., appearance features) of character images sampled from historical works. A style space is defined with the features of five main classical styles as basis vectors. Cross-validated error rates of 10% to 40% are reported on conventional and conservative sampling into training/test sets and on same-work voting with a range of voter participation. Beyond its immediate applicability to education and scholarship, this research lays the foundation for style-based calligraphic forgery detection and for discovery of latent calligraphic groups induced by mentor-student relationships.

  8. NMR quantum computing: applying theoretical methods to designing enhanced systems.

    Science.gov (United States)

    Mawhinney, Robert C; Schreckenbach, Georg

    2004-10-01

    Density functional theory results for chemical shifts and spin-spin coupling constants are presented for compounds currently used in NMR quantum computing experiments. Specific design criteria were examined and numerical guidelines were assessed. Using a field strength of 7.0 T, protons require a coupling constant of 4 Hz with a chemical shift separation of 0.3 ppm, whereas carbon needs a coupling constant of 25 Hz for a chemical shift difference of 10 ppm, based on the minimal coupling approximation. Using these guidelines, it was determined that 2,3-dibromothiophene is limited to only two qubits; the three-qubit system bromotrifluoroethene could be expanded to five qubits, and the three-qubit system 2,3-dibromopropanoic acid could also be used as a six-qubit system. An examination of substituent effects showed that judiciously choosing specific groups could increase the number of available qubits by removing rotational degeneracies in addition to introducing specific conformational preferences that could increase (or decrease) the magnitude of the couplings. The introduction of one site of unsaturation can lead to a marked improvement in spectroscopic properties, even increasing the number of active nuclei.
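
    The quoted guidelines reduce to simple arithmetic: at 7.0 T protons resonate near 300 MHz, so 1 ppm corresponds to about 300 Hz of chemical-shift separation. A hedged sketch of such a feasibility check follows; the factor-of-ten threshold is an assumed rule of thumb, not the paper's minimal coupling criterion.

        def addressable(delta_ppm, j_hz, larmor_mhz, min_ratio=10.0):
            """Check that the chemical-shift separation (converted to Hz)
            sufficiently dominates the J coupling between two spins."""
            delta_hz = delta_ppm * larmor_mhz   # 1 ppm equals larmor_mhz Hz
            return delta_hz >= min_ratio * j_hz, delta_hz

        # Protons at 7.0 T (about 300 MHz): 0.3 ppm -> 90 Hz versus J = 4 Hz.
        print(addressable(0.3, 4.0, 300.0))   # (True, 90.0)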

  9. A Memory and Computation Efficient Sparse Level-Set Method

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.

    2011-01-01

    Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the le

  11. A hybrid method for the parallel computation of Green's functions

    DEFF Research Database (Denmark)

    Petersen, Dan Erik; Li, Song; Stokbro, Kurt;

    2009-01-01

    Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because...

  12. Computing Economies of Scope Using Robust Partial Frontier Nonparametric Methods

    Directory of Open Access Journals (Sweden)

    Pedro Carvalho

    2016-03-01

    This paper proposes a methodology to examine economies of scope using the recent order-α nonparametric method. It allows us to investigate economies of scope by comparing the efficient order-α frontiers of firms that produce two or more goods with the efficient order-α frontiers of firms that produce only one good. To accomplish this, and because the order-α frontiers are irregular, we suggest linearizing them with the DEA estimator. The proposed methodology uses partial frontier nonparametric methods that are more robust than the traditional full frontier methods. Using a sample of 67 Portuguese water utilities for the period 2002–2008 and, also, a simulated sample, we demonstrate the usefulness of the adopted approach and show that the full frontier methods alone would lead to different results. We found evidence of economies of scope in the simultaneous provision of water supply and wastewater services by water utilities in Portugal.

  13. Using Computers in Relation to Learning Climate in CLIL Method

    Science.gov (United States)

    Binterová, Helena; Komínková, Olga

    2013-01-01

    The main purpose of the work is to present a successful implementation of CLIL method in Mathematics lessons in elementary schools. Nowadays at all types of schools (elementary schools, high schools and universities) all over the world every school subject tends to be taught in a foreign language. In 2003, a document called Action plan for…

  15. Computer Aided Flowsheet Design using Group Contribution Methods

    DEFF Research Database (Denmark)

    Bommareddy, Susilpa; Eden, Mario R.; Gani, Rafiqul

    2011-01-01

    In this paper, a systematic group contribution based framework is presented for synthesis of process flowsheets from a given set of input and output specifications. Analogous to the group contribution methods developed for molecular design, the framework employs process groups to represent...

  16. A Novel Method to Compute Breathing Volumes via Motion Capture Systems: Design and Experimental Trials.

    Science.gov (United States)

    Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio

    2017-03-24

    Respiratory assessment can be carried out using motion capture systems. A geometrical model is required in order to compute the breathing volume as a function of time from the marker trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters using a motion capture system. The novel method, the prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject's chest. Volumes computed with this method are compared to spirometry volumes and to volumes computed by a conventional method based on the tetrahedral decomposition of the chest wall and integrated in a commercial motion capture system. Eight healthy volunteers were enrolled, and 30 s of quiet breathing were collected from each. Results show a better agreement between volumes computed by the prism-based method and spirometry (discrepancy of 2.23%, R² = 0.94) than between volumes computed by the conventional method and spirometry (discrepancy of 3.56%, R² = 0.92). The proposed method also showed better performance in the calculation of respiratory parameters. Our findings open up prospects for the further use of the new method in breathing assessment via motion capture systems.
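
    The enclosed-volume computation at the heart of such methods can be sketched generically. The snippet below computes the volume of a closed, consistently oriented triangle mesh by the divergence theorem; it is a stand-in for the paper's 82-prism decomposition, with vertices and faces as hypothetical marker-mesh arrays.

        import numpy as np

        def mesh_volume(vertices, faces):
            """Volume enclosed by a closed triangle mesh (divergence theorem).

            vertices -- (n, 3) array of marker coordinates
            faces    -- (m, 3) integer array of triangle vertex indices
            """
            v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
            # Sum of signed tetrahedron volumes spanned with the origin
            return abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0

        # Tracking this volume frame by frame yields the breathing waveform.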

  17. Computer-simulated experiments and computer games: a method of design analysis

    Directory of Open Access Journals (Sweden)

    Jerome J. Leary

    1995-12-01

    Through the new modularization of the undergraduate science degree at the University of Brighton, larger numbers of students are choosing to take some science modules which include an amount of laboratory practical work. Indeed, within energy studies, the fuels and combustion module, for which the computer simulations were written, has seen a fourfold increase in student numbers from twelve to around fifty. Fitting out additional laboratories with new equipment to accommodate this increase presented problems: the laboratory space did not exist; fitting out the laboratories with new equipment would involve a relatively large capital spend per student for equipment that would be used infrequently; and, because some of the experiments use inflammable liquids and gases, additional staff would be needed for laboratory supervision.

  18. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    Science.gov (United States)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

    Computer-generated holography (CGH) is a promising 3D display technology, but it is challenged by a heavy computation load and vast memory requirements. To solve these problems, a depth compensating CGH calculation method based on the symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved LUT method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth compensating factor is defined and used for calculating the holograms of points at different depth positions, instead of layer-based methods. The proposed method is suitable for arbitrarily sampled objects, with lower memory usage and higher computational efficiency compared to other CGH methods. The effectiveness of the proposed method is validated by numerical and optical experiments.

  19. Computational observers and visualization methods for stereoscopic medical imaging.

    Science.gov (United States)

    Zafar, Fahad; Yesha, Yaacov; Badano, Aldo

    2014-09-22

    As stereoscopic display devices become common, their image quality assessment becomes increasingly important. Most studies conducted on 3D displays are based on psychophysics experiments with humans rating their experience based on detection tasks. The physical measurements do not map to effects on signal detection performance. Additionally, human observer study results are often subjective and difficult to generalize. We designed a computational stereoscopic observer approach inspired by the mechanisms of stereopsis in human vision for task-based image assessment that makes binary decisions based on a set of image pairs. The stereo-observer is constrained to a left and a right image generated using a visualization operator to render voxel datasets. We analyze white noise and lumpy backgrounds using volume rendering techniques. Our simulation framework generalizes many different types of model observers, including existing 2D and 3D observers, as well as providing flexibility to formulate a stereo model observer approach following the principles of stereoscopic viewing. This methodology has the potential to replace human observer studies when exploring issues with stereo display devices to be used in medical imaging. We show results quantifying the changes in performance when varying stereo angle, as measured by an ideal linear stereoscopic observer. Our findings indicate that there is an increase in performance of about 13-18% for white noise and 20-46% for lumpy backgrounds, where the stereo angle is varied from 0 to 30. The applicability of this observer extends to stereoscopic displays used in medical and entertainment imaging applications.

  20. Engineering computation of structures the finite element method

    CERN Document Server

    Neto, Maria Augusta; Roseiro, Luis; Cirne, José; Leal, Rogério

    2015-01-01

    This book presents theories and the main useful techniques of the Finite Element Method (FEM), with an introduction to FEM and many case studies of its use in engineering practice. It supports engineers and students to solve primarily linear problems in mechanical engineering, with a main focus on static and dynamic structural problems. Readers of this text are encouraged to discover the proper relationship between theory and practice, within the finite element method: Practice without theory is blind, but theory without practice is sterile. Beginning with elasticity basic concepts and the classical theories of stressed materials, the work goes on to apply the relationship between forces, displacements, stresses and strains on the process of modeling, simulating and designing engineered technical systems. Chapters discuss the finite element equations for static and eigenvalue analysis, as well as transient analyses. Students and practitioners using commercial FEM software will find this book very helpful. It is...

  1. Enhancing Use Case Points Estimation Method Using Soft Computing Techniques

    OpenAIRE

    Nassif, Ali Bou; Capretz, Luiz Fernando; Ho, Danny

    2016-01-01

    Software estimation is a crucial task in software engineering. Software estimation encompasses cost, effort, schedule, and size. The importance of software estimation becomes critical in the early stages of the software life cycle when the details of software have not been revealed yet. Several commercial and non-commercial tools exist to estimate software in the early stages. Most software effort estimation methods require software size as one of the important metric inputs and consequently,...

  2. Computational intelligent methods for trusting in social networks

    OpenAIRE

    Nuñez González, José David

    2016-01-01

    This thesis covers three research lines of Social Networks. The first proposed research line is related to Trust. Different ways of feature extraction are proposed for Trust Prediction, comparing results with classic methods. The problem of badly balanced datasets is covered in this work. The second proposed research line is related to Recommendation Systems. Two experiments are proposed in this work. The first experiment is about recipe generation with a bread machine. The second ex...

  3. Methods for Factor Screening in Computer Simulation Experiments

    Science.gov (United States)

    1979-03-01

    ...of the data in x-space impacts the variable selection problem significantly. Search-type variable selection methods include the all-possible...

  4. Numerical methods for computing the temperature distribution in satellite systems

    OpenAIRE

    Gómez-Valadés Maturano, Francisco José

    2012-01-01

    The present thesis has been done at the ASTRIUM company to find new methods of obtaining temperature distributions. Current software packages such as ESATAN or ESARAD provide excellent thermal analysis solutions as well as radiative simulations in orbit scenarios, but at a high price, as they are very time consuming. Since licenses for these products are usually limited with respect to the number of engineers who can use them, it is important to provide new tools to do these calculations. In consequence, a dif...

  5. Computing multiple zeros using a class of quartically convergent methods

    Directory of Open Access Journals (Sweden)

    F. Soleymani

    2013-09-01

    For functions with finitely many real roots in an interval, relatively little literature is available, while in applications users wish to find all the real zeros at the same time. Hence, the second aim of this paper is to design a fourth-order algorithm, based on the developed methods, to find all the real solutions of a nonlinear equation in an interval using the programming package Mathematica 8.
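
    For context, the classical modified Newton iteration below handles a root of known multiplicity m; it is only second-order and is not the paper's fourth-order scheme, but it shows the basic mechanism such methods build on.

        def modified_newton(f, df, x0, m, tol=1e-12, max_iter=100):
            """Modified Newton x -= m*f/f' for a root of multiplicity m."""
            x = x0
            for _ in range(max_iter):
                fx = f(x)
                if fx == 0.0:          # landed exactly on the root
                    break
                step = m * fx / df(x)
                x -= step
                if abs(step) < tol:
                    break
            return x

        # Triple root of (x - 1)**3 at x = 1:
        print(modified_newton(lambda x: (x - 1)**3,
                              lambda x: 3 * (x - 1)**2, x0=2.0, m=3))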

  7. Computational Simulation of Hypervelocity Penetration Using Adaptive SPH Method

    Institute of Scientific and Technical Information of China (English)

    QIANG Hongfu; MENG Lijun

    2006-01-01

    The normal hypervelocity impact of a thin Al plate by an Al sphere was numerically simulated using the adaptive smoothed particle hydrodynamics (ASPH) method. In this method, the isotropic smoothing algorithm of standard SPH is replaced with anisotropic smoothing involving ellipsoidal kernels whose axes evolve automatically to follow the mean particle spacing as it varies in time, space, and direction around each particle. Using ASPH, the anisotropic volume changes under strong shock conditions are captured more accurately and clearly. The sophisticated features of the meshless and Lagrangian nature inherent in the SPH method are kept for treating large deformations, large inhomogeneities and tracing free surfaces in the extremely transient impact process. A two-dimensional ASPH program is coded in C++. The developed hydrocode is examined for example problems of hypervelocity impacts of solid materials. The results obtained from the numerical simulation are compared with available experimental ones. Good agreement is observed.

  8. Theoretical studies of potential energy surfaces and computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, R. [Argonne National Laboratory, IL (United States)

    1993-12-01

    This project involves the development, implementation, and application of theoretical methods for the calculation and characterization of potential energy surfaces involving molecular species that occur in hydrocarbon combustion. These potential energy surfaces require an accurate and balanced treatment of reactants, intermediates, and products. This difficult challenge is met with general multiconfiguration self-consistent-field (MCSCF) and multireference single- and double-excitation configuration interaction (MRSDCI) methods. In contrast to the more common single-reference electronic structure methods, this approach is capable of describing accurately molecular systems that are highly distorted away from their equilibrium geometries, including reactant, fragment, and transition-state geometries, and of describing regions of the potential surface that are associated with electronic wave functions of widely varying nature. The MCSCF reference wave functions are designed to be sufficiently flexible to describe qualitatively the changes in the electronic structure over the broad range of geometries of interest. The necessary mixing of ionic, covalent, and Rydberg contributions, along with the appropriate treatment of the different electron-spin components (e.g. closed shell, high-spin open-shell, low-spin open shell, radical, diradical, etc.) of the wave functions, are treated correctly at this level. Further treatment of electron correlation effects is included using large scale multireference CI wave functions, particularly including the single and double excitations relative to the MCSCF reference space. This leads to the most flexible and accurate large-scale MRSDCI wave functions that have been used to date in global PES studies.

  9. Research on Quantum Authentication Methods for the Secure Access Control Among Three Elements of Cloud Computing

    Science.gov (United States)

    Dong, Yumin; Xiao, Shufen; Ma, Hongyang; Chen, Libo

    2016-12-01

    Cloud computing and big data have become the developing engine of current information technology (IT) as a result of the rapid development of IT. However, security protection has become increasingly important for cloud computing and big data, and has become a problem that must be solved to develop cloud computing. The theft of identity authentication information remains a serious threat to the security of cloud computing. In this process, attackers intrude into cloud computing services through identity authentication information, thereby threatening the security of data from multiple perspectives. Therefore, this study proposes a model for cloud computing protection and management based on quantum authentication, introduces the principle of quantum authentication, and deduces the quantum authentication process. In theory, quantum authentication technology can be applied in cloud computing for security protection. This technology cannot be cloned; thus, it is more secure and reliable than classical methods.

  11. Computational methods estimating uncertainties for profile reconstruction in scatterometry

    Science.gov (United States)

    Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.

    2008-04-01

    The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry, as a non-imaging indirect optical method, is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, top and bottom widths and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation and end up minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, are controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimations gained from the approximative covariance matrix of the profile parameters close to the optimal solution and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
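
    A generic Gauss-Newton loop of the kind described, with a finite-difference Jacobian, is sketched below. The residual function (simulated minus measured efficiencies) is a hypothetical stand-in for the FEM forward solver, and the paper's exact iterative variant may differ.

        import numpy as np

        def gauss_newton(residual, p0, tol=1e-10, max_iter=50, eps=1e-6):
            """Minimize ||residual(p)||^2 by plain Gauss-Newton steps."""
            p = np.asarray(p0, dtype=float)
            for _ in range(max_iter):
                r = residual(p)
                J = np.empty((r.size, p.size))
                for j in range(p.size):          # finite-difference Jacobian
                    dp = np.zeros_like(p)
                    dp[j] = eps
                    J[:, j] = (residual(p + dp) - r) / eps
                step, *_ = np.linalg.lstsq(J, -r, rcond=None)
                p += step
                if np.linalg.norm(step) < tol:
                    break
            return p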

  12. Numerical Methods for Computing Turbulence-Induced Noise

    Science.gov (United States)

    2005-12-16

    consider the finite-dimensional subspace V_h1 ⊂ V_h. Let v_h1 = P_h1 u be the optimal representation of u in V_h1 and P_h1 : V → V_h1 be the appropriate mapping. We consider the following numerical method, which is obtained by replacing h with h1 in (2.4): find u_h1 ∈ V_h1 such that B(w_h1, u_h1) + M(w_h1, u_h1, f... the same functional form of the model that leads to the optimal solution on V_h also leads to the optimal solution on V_h1. Thus, requiring u_h1 = v_h...

  13. Conference on Boundary and Interior Layers : Computational and Asymptotic Methods

    CERN Document Server

    2015-01-01

    This volume offers contributions reflecting a selection of the lectures presented at the international conference BAIL 2014, which was held from 15th to 19th September 2014 at the Charles University in Prague, Czech Republic. These are devoted to the theoretical and/or numerical analysis of problems involving boundary and interior layers and methods for solving these problems numerically. The authors are both mathematicians (pure and applied) and engineers, and bring together a large number of interesting ideas. The wide variety of topics treated in the contributions provides an excellent overview of current research into the theory and numerical solution of problems involving boundary and interior layers.

  14. Phenomenography and grounded theory as research methods in computing education research field

    Science.gov (United States)

    Kinnunen, Päivi; Simon, Beth

    2012-06-01

    This paper discusses two qualitative research methods, phenomenography and grounded theory. We introduce both methods' data collection and analysis processes and the types of results they may yield, using examples from computing education research. We highlight some of the similarities and differences between the aims, data collection and analysis phases, and the types of resulting outcomes of these methods. We also discuss the challenges and threats both methods may pose to the researcher. We conclude that, while aimed at tackling different types of research questions, both of these methods provide computing education researchers with a useful tool in their research method toolbox.

  15. A New Method for Out-of-core Applications on Computational Grids

    Institute of Scientific and Technical Information of China (English)

    Tang Jianqi(唐剑琪); Fang Binxing; Hu Mingzeng

    2003-01-01

    More and more out-of-core problems, which involve processing large amounts of data, are being studied by scientists. The computational grid provides a wide and scalable environment for such large-scale computations. A new method supporting out-of-core computations on grids is presented in this paper. The framework and the data storage strategy are described, and based on these, an easy and efficient out-of-core programming interface is provided for programmers.

  16. Moment-based method for computing the two-dimensional discrete Hartley transform

    Science.gov (United States)

    Dong, Zhifang; Wu, Jiasong; Shu, Huazhong

    2009-10-01

    In this paper, we present a fast algorithm for computing the two-dimensional (2-D) discrete Hartley transform (DHT). By using kernel transform and Taylor expansion, the 2-D DHT is approximated by a linear sum of 2-D geometric moments. This enables us to use the fast algorithms developed for computing the 2-D moments to efficiently calculate the 2-D DHT. The proposed method achieves a simple computational structure and is suitable to deal with any sequence lengths.
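
    For reference, the direct (unaccelerated) definition of the 2D DHT is given below; the paper's contribution is to approximate this sum by a linear combination of geometric moments so that fast moment algorithms apply.

        import numpy as np

        def dht2(f):
            """Direct 2D discrete Hartley transform, kernel cas(x)=cos(x)+sin(x)."""
            M, N = f.shape
            x = np.arange(M)[:, None]
            y = np.arange(N)[None, :]
            H = np.empty((M, N))
            for u in range(M):
                for v in range(N):
                    arg = 2 * np.pi * (u * x / M + v * y / N)
                    H[u, v] = (f * (np.cos(arg) + np.sin(arg))).sum()
            return H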

  17. A COMPUTATIONAL METHOD FOR INTERVAL MIXED VARIABLE ENERGY MATRICES IN PRECISE INTEGRATION

    Institute of Scientific and Technical Information of China (English)

    高索文; 吴志刚; 王本利; 马兴瑞

    2001-01-01

    To solve the Riccati equation of the LQ control problem, the computation of interval mixed variable energy matrices is the first step. Taylor expansion can be used to compute the matrices. Based on the analogy between structural mechanics and optimal control and the mechanical implication of the matrices, a computational method using the state transition matrix of the differential equation is presented. Numerical examples are provided to show the effectiveness of the present approach.

  18. Application of the OPTEX method for computing reflector parameters

    Energy Technology Data Exchange (ETDEWEB)

    Hebert, A. [Ecole Polytechnique de Montreal, C.P. 6079 suce. Centre-Ville, Montreal QC. H3C 3A7 (Canada); Leroyer, H. [EDF - R and D, SINETICS, 1 Avenue du General de Gaulle, 92141 Clamart (France)

    2013-07-01

    We are investigating the OPTEX reflector model for obtaining few-group reflector parameters consistent with a reference power distribution in the core. In our study, the reference power distribution is obtained using a 142,872-region calculation defined over a 2D eighth-of-core pressurized water reactor and performed with the method of characteristics. The OPTEX method is based on generalized perturbation theory and uses an optimization algorithm known as parametric linear complementarity pivoting. The proposed model leads to few-group diffusion coefficients or P1-weighted macroscopic total cross sections that can be used to represent the reflector in full-core calculations. These few-group parameters can be spatially heterogeneous in order to correctly represent steel baffles present in modern pressurized water reactors. The optimal reflector parameters are compared to those obtained with a flux-volume weighting of the reflector cross sections recovered from the reference calculation. Important improvements in full-core power distribution are observed when the optimal parameters are used. (authors)

  19. Computer methods for ITER-like materials LIBS diagnostics

    Science.gov (United States)

    Łepek, Michał; Gąsior, Paweł

    2014-11-01

    The recent development of Laser-Induced Breakdown Spectroscopy (LIBS) has made this method the most promising candidate for future diagnostic applications characterizing the deposited materials in the International Thermonuclear Experimental Reactor (ITER), which is currently under construction. In this article the basics of LIBS are briefly discussed and software for analyzing spectra is presented. The main function of the software is to analyze measured spectra with respect to the presence of certain element lines. Some results of the program's operation are presented: correct results are obtained for graphite and aluminum, although identification of tungsten lines remains a problem. The reason is the low intensity of the tungsten lines, and thus the low signal-to-noise ratio of the measured signal. In the second part, artificial neural networks (ANNs) are proposed as the next step for analyzing LIBS spectra. The idea focuses on a multilayer perceptron (MLP) network with backpropagation learning. The potential of ANNs for data processing has been proved through applications in several LIBS-related domains, e.g. differentiating ancient Greek ceramics (discussed). The idea is to apply an ANN to determine the presence of W, Al and C on ITER-like plasma-facing materials.
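
    As a hedged illustration of the proposed second step, the sketch below trains a small multilayer perceptron (gradient-based backpropagation via scikit-learn) on synthetic stand-in spectra; real work would use measured LIBS spectra of W, Al and C with element-presence labels.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.random((200, 512))      # stand-in: 200 spectra, 512 channels
        y = rng.integers(0, 2, 200)     # stand-in labels: element present?

        clf = MLPClassifier(hidden_layer_sizes=(64,), activation='relu',
                            max_iter=500, random_state=0)
        clf.fit(X, y)
        print(clf.predict(X[:5]))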

  20. Efficient computation method for two-dimensional nonlinear waves

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The theory and time-domain simulation of fully nonlinear waves in a truncated two-dimensional wave tank are presented. A piston-type wave-maker is used to generate gravity waves in the tank at finite water depth. A damping zone is added in front of the wave-maker, which makes it act as an absorbing wave-maker and ensures the prescribed Neumann condition. The efficiency of the numerical tank is further enhanced by the installation of a sponge-layer beach (SLB) in front of the downtank boundary to absorb the longer weak waves that leak through the entire wave-train front. Assuming potential flow, the space-periodic irrotational surface waves can be represented by mixed Euler-Lagrange particles. Solving the integral equation at each time step for new normal velocities, the instantaneous free surface is integrated in time using a fourth-order Runge-Kutta method. The double-node technique is used to deal with the geometric discontinuity at the wave-body intersections. Several precise smoothing methods have been introduced to treat surface points with high curvature. No saw-tooth-like instability is observed during the entire simulation. The advantage of the proposed wave tank has been verified by comparison with the linear theoretical solution and other nonlinear results; excellent agreement over the whole range of frequencies of interest has been obtained.

  1. A Computer Vision Method for 3D Reconstruction of Curves-Marked Free-Form Surfaces

    Institute of Scientific and Technical Information of China (English)

    Xiong Hanwei; Zhang Xiangwei

    2001-01-01

    Visual methods are now broadly used in reverse engineering for 3D reconstruction. The traditional computer vision methods are feature-based, i.e., they require that the objects reveal features owing to geometry or textures. For textureless free-form surfaces, dense feature points are added artificially. In this paper, a new method is put forward combining computer vision with CAGD. The surface is subdivided into N-sided Gregory patches using marked curves, and a stereo algorithm is used to reconstruct the curves. Then, the cross-boundary tangent vector is computed through reflectance analysis. At last, the whole surface can be reconstructed by joining these patches with G1 continuity.

  2. Comparison of evaporation computation methods, Pretty Lake, Lagrange County, northeastern Indiana

    Science.gov (United States)

    Ficke, John F.

    1972-01-01

    Evaporation from Pretty Lake has been computed for a 2½-year period between 1963 and 1965 by the use of an energy budget, mass-transfer parameters, a water budget, a class-A pan, and a computed pan evaporation technique. The seasonal totals for the different methods are within 8 percent of their mean and within 11 percent of the rate of 79 centimeters (31 inches) per year determined from published maps that are based on evaporation-pan data. Period-by-period differences among the methods are larger than the annual differences, but there is general agreement among the evaporation hydrographs produced by the different computation methods.

  3. Simple and fast cosine approximation method for computer-generated hologram calculation.

    Science.gov (United States)

    Nishitsuji, Takashi; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ito, Tomoyoshi

    2015-12-14

    The cosine function is a computationally heavy operation in computer-generated hologram (CGH) calculation; therefore, it is usually implemented by substitution methods such as a look-up table. However, the computational load and required memory space of such methods are still large. In this study, we propose a simple and fast cosine function approximation method for CGH calculation. As a result, we succeeded in creating CGHs of sufficient quality while making the calculation up to 1.6 times faster than using a cosine look-up table in a CPU implementation.
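
    A minimal example of the trade-off involved, assuming nothing about the paper's particular approximation scheme: a range-reduced truncated Taylor polynomial replacing math.cos.

        import math

        def fast_cos(x):
            """Approximate cos(x) with a short polynomial after range reduction."""
            x = abs(x) % (2.0 * math.pi)      # cos is even and 2*pi periodic
            sign = 1.0
            if x > math.pi:                   # fold into [0, pi]
                x = 2.0 * math.pi - x
            if x > math.pi / 2.0:             # fold into [0, pi/2]
                x = math.pi - x
                sign = -1.0
            x2 = x * x
            # three-term Taylor polynomial on [0, pi/2]
            return sign * (1.0 - x2 / 2.0 + x2 * x2 / 24.0 - x2 * x2 * x2 / 720.0)

        print(fast_cos(1.0), math.cos(1.0))   # 0.54027... vs 0.54030...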

  4. USING COMPUTER-BASED TESTING AS ALTERNATIVE ASSESSMENT METHOD OF STUDENT LEARNING IN DISTANCE EDUCATION

    Directory of Open Access Journals (Sweden)

    Amalia SAPRIATI

    2010-04-01

    This paper addresses the use of computer-based testing in distance education, based on the experience of Universitas Terbuka (UT), Indonesia. Computer-based testing has been developed at UT to meet specific needs of distance students: students' inability to sit for the scheduled test, conflicting test schedules, and students' wish for the flexibility to retake examinations to improve their grades. In 2004, UT initiated a pilot project to develop a system and program for a computer-based testing method. Then, in 2005 and 2006, tryouts of the computer-based testing method were conducted in 7 Regional Offices that were considered to have sufficient supporting resources. The results of the tryouts revealed that students were enthusiastic about taking computer-based tests and expected that the test method would be provided by UT as an alternative to the traditional paper-and-pencil test method. UT then implemented the computer-based testing method in 6 and 12 Regional Offices in 2007 and 2008, respectively. The computer-based testing was administered in the city of the designated Regional Office and was supervised by Regional Office staff. The development of the computer-based testing began with tests using computers in a networked configuration. The system has been continually improved, and it currently uses devices linked to the internet or the World Wide Web. The construction of a test involves generating and selecting test items from the item bank of the UT Examination Center, so that the combination of selected items satisfies the test specification. Currently UT has offered 250 courses involving the use of computer-based testing. Students expect that more courses will be offered with computer-based testing in Regional Offices within easy access.

  5. Distribution Network Fault Diagnosis Method Based on Granular Computing-BP

    Directory of Open Access Journals (Sweden)

    CHEN Zhong-xiao

    2013-01-01

    Full Text Available To deal with the complexity and uncertainty of distribution network fault information, a fault diagnosis method based on granular computing and a BP neural network is proposed. The method combines the attribute-reduction strengths of granular computing theory with the self-learning and knowledge-acquisition ability of the BP network. Granular computing serves as a front-end processor for the BP neural network: the primitive information is first simplified by granular-computing reduction, and the concepts of relative granularity and attribute significance in binary granular computing are used to select the BP inputs, thereby reducing the problem scale. A neural network is then constructed on the minimal attribute set, and the BP network is used for modelling and parameter identification, which shortens BP training time and improves diagnosis accuracy. A distribution network example verifies the rationality and effectiveness of the proposed method.
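
    As a hedged illustration only: the sketch below pairs a crude consistency-based attribute reduction (a stand-in for the relative-granularity and attribute-significance measures the record names) with a small BP-style network from scikit-learn; the toy fault data, thresholds and network size are invented for the example.

        import numpy as np
        from collections import defaultdict
        from sklearn.neural_network import MLPClassifier

        def consistency(X, y, attrs):
            # Fraction of samples whose class is uniquely determined by `attrs`
            # (a crude stand-in for rough-set / granular dependency measures).
            groups = defaultdict(set)
            for row, label in zip(map(tuple, X[:, attrs]), y):
                groups[row].add(label)
            ok = sum(len(groups[tuple(r)]) == 1 for r in X[:, attrs])
            return ok / len(y)

        def reduce_attrs(X, y):
            # Greedily drop attributes whose removal keeps consistency intact.
            attrs = list(range(X.shape[1]))
            base = consistency(X, y, attrs)
            for a in list(attrs):
                trial = [t for t in attrs if t != a]
                if trial and consistency(X, y, trial) >= base:
                    attrs = trial
            return attrs

        rng = np.random.default_rng(0)              # toy breaker/alarm signals
        X = rng.integers(0, 2, size=(400, 6))
        y = X[:, 0] ^ X[:, 3]                       # fault label set by 2 attributes
        kept = reduce_attrs(X, y)
        net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X[:, kept], y)
        print("kept attributes:", kept, "train accuracy:", net.score(X[:, kept], y))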

  6. A parallel finite-difference method for computational aerodynamics

    Science.gov (United States)

    Swisshelm, Julie M.

    1989-01-01

    A finite-difference scheme for solving complex three-dimensional aerodynamic flows on parallel-processing supercomputers is presented. The method consists of a basic flow solver with multigrid convergence acceleration, embedded grid refinements, and a zonal equation scheme. Multitasking and vectorization have been incorporated into the algorithm. Results obtained include multiprocessed flow simulations on the Cray X-MP and Cray-2. Speedups as high as 3.3 for the two-dimensional case and 3.5 for segments of the three-dimensional case have been achieved on the Cray-2. The entire solver attained a factor of 2.7 improvement over its unitasked version on the Cray-2. The performance of the parallel algorithm on each machine is analyzed.

  7. Skin Burns Degree Determined by Computer Image Processing Method

    Science.gov (United States)

    Li, Hong-yan

    In this paper a new method for quantitatively determining the degree of skin burns is put forward. First, using Photoshop 9.0, we analyzed the statistical character of the histograms of skin-burn images; the images were then converted from RGB color space to HSV space and the transformed color histograms analyzed. Finally, the percentage of burned skin area was obtained with Photoshop 9.0. We took the mean of the image histogram, the standard deviation of the color map, and the percentage of burned area as indicators for evaluating burns, assigned each indicator a weight, and obtained a burn score by summing the products of the indicators and their weights. The degree of burn can then be evaluated from the classification of the burn scores.
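
    A minimal sketch of that scoring pipeline, assuming Pillow for the RGB-to-HSV conversion: the weights, the burned-pixel rule and the input file name are illustrative placeholders, not the paper's calibrated values or Photoshop workflow.

        import numpy as np
        from PIL import Image

        def burn_score(path, w_mean=0.3, w_std=0.3, w_area=0.4):
            # Weights and the burned-pixel rule are assumptions for illustration.
            rgb = Image.open(path).convert("RGB")
            gray = np.asarray(rgb.convert("L"), dtype=float)   # intensity histogram source
            hsv = np.asarray(rgb.convert("HSV"), dtype=float)  # PIL HSV, channels 0..255
            hist_mean = gray.mean()                            # mean of the image histogram
            hue_std = hsv[..., 0].std()                        # spread of the colour map
            # Crude stand-in for the burned-area segmentation: reddish, saturated pixels.
            burned = (hsv[..., 1] > 100) & ((hsv[..., 0] < 20) | (hsv[..., 0] > 235))
            area_pct = 100.0 * burned.mean()
            # Weighted sum of normalised indicators, as described in the record.
            return (w_mean * hist_mean / 255.0 + w_std * hue_std / 255.0
                    + w_area * area_pct / 100.0)

        # print(burn_score("burn_photo.jpg"))   # hypothetical input image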

  8. Numerical computation of sapphire crystal growth using heat exchanger method

    Science.gov (United States)

    Lu, Chung-Wei; Chen, Jyh-Chen

    2001-05-01

    The finite element software FIDAP is employed to study the temperature and velocity distributions and the interface shape during a large sapphire crystal growth process using the heat exchanger method (HEM). In the present study, the energy input to the crucible by radiation and convection inside the furnace and the energy output through the heat exchanger are modeled by convection boundary conditions. The effects of the various growth parameters are studied. It is found that the contact angle is obtuse before the solid-melt interface touches the sidewall of the crucible; therefore, hot spots always appear in this process. The maximum convexity decreases significantly when the cooling-zone radius (R_C) increases, and it also decreases significantly as the combined convection coefficient inside the furnace (h_I) decreases.

  9. Hugh E. Huxley: cambiando el paradigma de la contracción muscular, desde dentro. [Hugh E. Huxley: changing from inside the paradigm of muscle contraction].

    Directory of Open Access Journals (Sweden)

    Adolfo Araci

    2014-04-01

    Full Text Available From a physical point of view, the muscle fiber can be understood as a motor: a system capable of transforming chemical energy into mechanical energy that is used to perform work. To understand how this transformation takes place, it is therefore necessary to know the ultrastructure of the muscle fiber. This is, without doubt, the main contribution to science of the recently deceased Hugh Esmor Huxley (1924-2013). Huxley graduated in Physics from Christ's College, Cambridge, after his studies had been temporarily interrupted by his service as a radar operator in the Royal Air Force between 1943 and 1947, during the Second World War (Pollard and Goldman, 2013; Spudich, 2013). After graduating, he joined a newly created Medical Research Council unit, the Laboratory of Molecular Biology, directed by Max Perutz and John Kendrew, as its first doctoral student; Kendrew supervised his doctoral thesis, which was defended in 1952 (Spudich, 2013). The use of X-ray diffraction techniques, which allowed study at levels of resolution unattainable with the microscopy of the time, and their application to the description of muscle ultrastructure, formed the subject of that thesis, for which he designed and built his own instruments. From then on, elucidating the structural basis of muscle contraction became the central question of his scientific career.

  10. A Method of Computer Simulation for Derivative Weave and Composed Weave

    Institute of Scientific and Technical Information of China (English)

    CHEN Jun-yan; WANG Jun

    2006-01-01

    In this paper, we investigate a method of computer simulation for derivative weaves, adopting weave differentiation expressed as a mathematical function. On that basis, we explore a way of dealing with composed weaves. The aim is to build a simple and efficient method of computer simulation for weaves.
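
    The record gives no formulas, so the sketch below shows one plausible reading: a binary weave matrix generated from an interlacing function (twills and their derivatives follow from its parameters), plus a naive column-interleaving "composed weave". Both functions are illustrative assumptions, not the paper's definitions.

        import numpy as np

        def twill(repeat=8, rising=2, step=1):
            # 1 = warp over weft.  Derivative twills follow from varying
            # `rising` and `step` in this interlacing function (assumed form).
            i, j = np.indices((repeat, repeat))
            return ((j - step * i) % repeat < rising).astype(int)

        def compose(a, b):
            # Toy composed weave: interleave two equal-sized weaves column-wise.
            out = np.empty((a.shape[0], 2 * a.shape[1]), dtype=int)
            out[:, 0::2], out[:, 1::2] = a, b
            return out

        for row in compose(twill(step=1), twill(step=-1)):
            print("".join("#" if c else "." for c in row))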

  11. Computer-Managed Practice: Effects on Instructional Methods and on Teacher Adoption.

    Science.gov (United States)

    Hativa, Nira; And Others

    1990-01-01

    Israeli teachers (N=142) were surveyed in a study designed to identify the effects of computer-managed practice on teaching methods, factors which encouraged teachers' adoption of computers for instruction, and the effects of this type of instruction on student learning. (IAH)

  12. A Naturalistic Method for Assessing the Learning of Arithmetic from Computer-Aided Practice.

    Science.gov (United States)

    Hativa, Nira

    1986-01-01

    A study used the naturalistic method of inquiry in order to investigate the CAI (Computer Assisted Instruction) contribution to students' performance in arithmetic and to identify possible problems. The study was aimed at understanding the holistic environment of students' individualized drill in arithmetic with the computer. (BS)

  13. Efficient computational methods to study new and innovative signal detection techniques in SETI

    Science.gov (United States)

    Deans, Stanley R.

    1991-01-01

    The purpose of the research reported here is to provide a rapid method for computing various statistical parameters associated with overlapped Hann spectra. These results are important for the Targeted Search part of the Search for ExtraTerrestrial Intelligence (SETI) Microwave Observing Project.
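
    For context, a minimal sketch of the object the record refers to: power spectra of 50%-overlapped Hann-windowed segments (Welch-style). The paper's actual contribution, the rapid computation of statistics over such spectra, is not reproduced here; the signal, FFT length and overlap are arbitrary choices.

        import numpy as np

        def overlapped_hann_spectra(x, nfft=1024, overlap=0.5):
            # Power spectra of overlapped Hann-windowed segments.
            w = np.hanning(nfft)
            step = int(nfft * (1.0 - overlap))
            segs = [x[k:k + nfft] for k in range(0, len(x) - nfft + 1, step)]
            return np.array([np.abs(np.fft.rfft(w * s)) ** 2 for s in segs])

        # Toy usage: a weak tone in noise, then per-bin statistics across spectra.
        rng = np.random.default_rng(0)
        x = 0.1 * np.sin(2 * np.pi * 0.1 * np.arange(16384)) + rng.standard_normal(16384)
        spectra = overlapped_hann_spectra(x)
        print("bin of detected tone:", spectra.mean(axis=0).argmax())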

  14. Computing solubility products using ab initio methods; precipitation of NbC in low alloyed steel

    NARCIS (Netherlands)

    Klymko, T.; Sluiter, M.H.F.

    2012-01-01

    The solubility product of NbC in low alloyed steel is computed from electronic density functional methods including the effects of electronic, vibrational, and magnetic excitations. Although many simplifications are made in the computations, agreement with experimental data is within the scatter of

  15. Performance evaluation of moment-method codes on an Intel iPSC/860 hypercube computer

    Energy Technology Data Exchange (ETDEWEB)

    Klimkowski, K.; Ling, H. (Texas Univ., Austin (United States))

    1993-09-01

    An analytical evaluation is conducted of the performance of a moment-method code on a parallel computer, treating algorithmic complexity costs within the framework of matrix size and the 'subblock-size' matrix-partitioning parameter. A scaled-efficiencies analysis is conducted for the measured computation times of the matrix-fill operation and LU decomposition. 6 refs.

  16. A Method for Identifying Contours in Processing Digital Images from Computer Tomograph

    Science.gov (United States)

    Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela

    2011-09-01

    The first step in digital processing of two-dimensional computed tomography images is to identify the contours of the component elements. This paper presents the collective work of specialists in medicine and in applied mathematics and computer science on elaborating new algorithms and methods in medical 2D and 3D imaging.

  17. Calculation method of reflectance distributions for computer-generated holograms using the finite-difference time-domain method.

    Science.gov (United States)

    Ichikawa, Tsubasa; Sakamoto, Yuji; Subagyo, Agus; Sueoka, Kazuhisa

    2011-12-01

    The research on reflectance distributions in computer-generated holograms (CGHs) is particularly sparse, and the textures of materials are not expressed. Thus, we propose a method for calculating reflectance distributions in CGHs that uses the finite-difference time-domain method. In this method, reflected light from an uneven surface made on a computer is analyzed by finite-difference time-domain simulation, and the reflected light distribution is applied to the CGH as an object light. We report the relations between the surface roughness of the objects and the reflectance distributions, and show that the reflectance distributions are given to CGHs by imaging simulation.
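
    As a much-reduced illustration of the simulation ingredient: a one-dimensional FDTD update showing a pulse partially reflecting from a dielectric half-space. The paper analyses rough surfaces in higher dimensions and feeds the result into a CGH; the grid, source and material values here are arbitrary.

        import numpy as np

        nx, nt = 400, 760
        eps = np.ones(nx); eps[200:] = 4.0             # dielectric half-space (n = 2)
        ez, hy = np.zeros(nx), np.zeros(nx)
        probe = []
        for t in range(nt):
            hy[:-1] += 0.5 * (ez[1:] - ez[:-1])        # Courant number 0.5
            ez[1:-1] += 0.5 / eps[1:-1] * (hy[1:-1] - hy[:-2])
            ez[50] += np.exp(-((t - 60) / 20.0) ** 2)  # soft Gaussian source
            probe.append(ez[45])
        # Trough of the late-time probe is the wave reflected by the denser
        # medium; its amplitude should be about -1/3 of the incident peak.
        print("incident peak:", max(probe), "reflected trough:", min(probe[450:]))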

  18. Methods for computing weighting tables based on local power expansion for tristimulus values computations.

    Science.gov (United States)

    Li, Changjun; Oleari, Claudio; Melgosa, Manuel; Xu, Yang

    2011-11-01

    In this paper, two types of weighting tables are derived by applying the local power expansion method proposed by Oleari [Color Res. Appl. 25, 176 (2000)]. Both tables at two different levels consider the deconvolution of the spectrophotometric data for monochromator triangular transmittance. The first one, named zero-order weighting table, is similar to weighting table 5 of American Society for Testing and Materials (ASTM) used with the measured spectral reflectance factors (SRFs) corrected by the Stearns and Stearns formula. The second one, named second-order weighting table, is similar to weighting table 6 of ASTM and must be used with the undeconvoluted SRFs. It is hoped that the results of this paper will aid the International Commission on Illumination TC 1-71 on tristimulus integration in focusing on ongoing methods, testing, and recommendations.
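
    A small sketch of the workflow the record describes for the zero-order table: apply the Stearns and Stearns bandpass correction to the measured SRFs, then form tristimulus values as weighted sums over wavelength. The weight values below are placeholders, not the ASTM tables or the tables derived in the paper, and the end-point variant of the correction is the commonly used one-sided form.

        import numpy as np

        def stearns_stearns(r, alpha=0.083):
            # Bandpass correction for triangular-slit measurements;
            # end points use the usual one-sided variant.
            r = np.asarray(r, dtype=float)
            out = (1.0 + 2.0 * alpha) * r
            out[1:-1] -= alpha * (r[:-2] + r[2:])
            out[0] = (1.0 + alpha) * r[0] - alpha * r[1]
            out[-1] = (1.0 + alpha) * r[-1] - alpha * r[-2]
            return out

        # Tristimulus values as weighted sums (placeholder weights; real use
        # takes W_X, W_Y, W_Z from the appropriate weighting table).
        srf = np.array([0.20, 0.35, 0.60, 0.70, 0.65])   # toy 5-band SRF
        W = np.ones((3, srf.size)) / srf.size            # placeholder weight table
        X, Y, Z = W @ stearns_stearns(srf)
        print(X, Y, Z)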

  19. Long Term Solar Radiation Forecast Using Computational Intelligence Methods

    Directory of Open Access Journals (Sweden)

    João Paulo Coelho

    2014-01-01

    Full Text Available The quality of a point prediction is closely related to the model that explains the dynamics of the observed process. Sometimes the model can be captured by simple algebraic equations but, for the majority of physical systems, the relevant behaviour is too hard to model with simple ordinary differential or difference equations. This is the case for systems with nonlinear or nonstationary behaviour, which require more complex models. The discrete time-series problem obtained by sampling solar radiation falls into this category: the collected data exhibit multiple regimes and, owing to atmospheric disturbances such as clouds, the temporal structure between samples is complex and best described by nonlinear models. This paper reports solar radiation prediction using a hybrid model that combines the support vector regression paradigm and Markov chains. The hybrid model's performance is compared with that obtained by other methods such as autoregressive (AR) filters, Markov AR models, and artificial neural networks. The results suggest improved prediction performance for the hybrid model in terms of both prediction error and dynamic behaviour.
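
    A hedged sketch of one way to combine the two ingredients: discretise the series into regimes, fit an empirical Markov transition matrix, and train one SVR per regime. The regime count, lag length, SVR hyperparameters and toy data are illustrative assumptions, not the paper's design.

        import numpy as np
        from sklearn.svm import SVR

        def fit_hybrid(series, n_states=3, lags=4):
            # Discretise into regimes by quantile (illustrative choice).
            qs = np.quantile(series, np.linspace(0, 1, n_states + 1)[1:-1])
            states = np.digitize(series, qs)
            # Markov chain: empirical transition matrix between regimes.
            P = np.zeros((n_states, n_states))
            for a, b in zip(states[:-1], states[1:]):
                P[a, b] += 1
            P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)
            # One SVR per regime, trained on lagged windows ending in it.
            X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
            y = series[lags:]
            s = states[lags - 1:-1]       # regime of the last sample in each window
            models = [SVR(C=10.0).fit(X[s == k], y[s == k]) for k in range(n_states)]
            return qs, P, models

        rng = np.random.default_rng(0)    # toy stand-in for a radiation series
        data = np.clip(np.cumsum(rng.standard_normal(500)) + 50.0, 0.0, None)
        qs, P, models = fit_hybrid(data)
        k = int(np.digitize(data[-1], qs))   # current regime picks the SVR
        print("next-step forecast:", models[k].predict(data[-4:].reshape(1, -1))[0])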

  20. Computer Simulation Methods for Crushing Process in an Jaw Crusher

    Science.gov (United States)

    Il'ich Beloglazov, Ilia; Andreevich Ikonnikov, Dmitrii

    2016-08-01

    One of the trends at modern mining enterprises is the application of combined systems for extraction and transportation of the rock mass. Given technology involves the use the conveyor lines as a continuous link of combined technology. The application of a conveyor transport provides significant reduction of costs for energy resources, increase in labor productivity and process automation. However, the use of a conveyor transport provides for certain requirements for the quality of transported material. The maximum size of the rock mass pieces is one of the basic parameters for it. The crushing plants applies as a coarse crushing followed by crushing the material to the maximum size of piece which possible to use for conveyor transport. It is often represented by jaw crushers. Modelling of crushing process in jaw crushers allows to maximally optimize workflow and increase efficiency of the equipment at the further transportation and processing of rocks. We studied the interaction between walls of the jaw crusher and bulk material by using discrete element method (DEM) in this paper. The article examines the process of modeling by stages. It includes design of the crusher construction in solid and surface modeling system. Modelling of the crushing process based on the experimental data received via the crushing unit BOYD. The process of destruction and particle size distribution in the study was done. Analysis of research results shows a comparability of actual experiment and modeling process.

  1. Computation of a turbulent channel flow using PDF method

    Energy Technology Data Exchange (ETDEWEB)

    Minier, J.P. [Electricite de France (EDF), 78 - Chatou (France). Lab. National d'Hydraulique; Pozorski, J. [Polish Academy of Sciences, Gdansk (Poland). Inst. of Fluid-Flow Machinery

    1997-05-01

    The purpose of the present paper is to present an analysis of a PDF (probability density function) model and an illustration of the possibilities offered by such a method for a high-Reynolds-number turbulent channel flow. The first part presents the principles of the PDF approach and the introduction of stochastic processes along with a Lagrangian point of view. The model retained is the one put forward by Pope (1991) and includes evolution equations for the location, velocity and dissipation of a large number of particles. Wall boundary conditions are then developed for the particles; these conditions allow statistical results of the logarithmic region to be correctly reproduced. Simulation of non-homogeneous flows requires a pressure-gradient algorithm, which is briefly described. The developments are validated by comparing numerical predictions with the Comte-Bellot experimental data (1965) on a channel flow. This example illustrates the ability of the approach to simulate wall-bounded flows and to provide detailed information such as skewness and flatness factors. (author) 9 refs.
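
    A toy flavour of the particle side of such PDF methods, using the simplified Langevin model for a single velocity component with frozen k and epsilon (the paper uses Pope's full velocity-dissipation model with wall conditions); the constants and fields are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        n, dt, C0 = 100_000, 1e-3, 2.1
        k, eps, U = 1.0, 0.5, 0.0        # frozen TKE, dissipation, mean velocity
        u = rng.standard_normal(n) * np.sqrt(2.0 * k / 3.0)
        for _ in range(4000):
            u += (-(0.5 + 0.75 * C0) * (eps / k) * (u - U) * dt
                  + np.sqrt(C0 * eps * dt) * rng.standard_normal(n))
        # With k and eps held fixed, the stationary variance of this model is
        # C0*k / (1 + 1.5*C0); the full model instead lets k evolve so dk/dt = -eps.
        print(u.var(), C0 * k / (1.0 + 1.5 * C0))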

  2. Computation of Floquet Multipliers Using an Iterative Method for Variational Equations

    Science.gov (United States)

    Nureki, Yu; Murashige, Sunao

    This paper proposes a new method to numerically obtain Floquet multipliers, which characterize the stability of periodic orbits of ordinary differential equations. For sufficiently smooth periodic orbits, Floquet multipliers can be computed with enough accuracy by standard numerical methods; however, it has been reported that these methods may produce incorrect results under some conditions. In this work, we propose a new iterative method to compute Floquet multipliers using eigenvectors of matrix solutions of the variational equations. Numerical examples show the effectiveness of the proposed method.
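
    The record does not detail the authors' iterative scheme, so here is the standard baseline it builds on: integrate the variational equations along one period to obtain the monodromy matrix, whose eigenvalues are the Floquet multipliers. The van der Pol orbit point and period below are rough numerical values, so the trivial multiplier only comes out close to 1.

        import numpy as np
        from scipy.integrate import solve_ivp

        MU, T = 1.0, 6.6633      # van der Pol parameter and (approximate) period

        def rhs(t, z):
            # Flow and variational equations integrated together;
            # z = (x, y) followed by the rows of Phi.
            x, y = z[0], z[1]
            Phi = z[2:].reshape(2, 2)
            J = np.array([[0.0, 1.0],
                          [-1.0 - 2.0 * MU * x * y, MU * (1.0 - x * x)]])
            return np.concatenate(([y, MU * (1.0 - x * x) * y - x],
                                   (J @ Phi).ravel()))

        z0 = np.concatenate(([2.0086, 0.0], np.eye(2).ravel()))  # approx. orbit point
        sol = solve_ivp(rhs, (0.0, T), z0, rtol=1e-10, atol=1e-12)
        monodromy = sol.y[2:, -1].reshape(2, 2)
        print("Floquet multipliers:", np.linalg.eigvals(monodromy))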

  3. Recursive Parameter Method for Computing the Predicting Function of the Multivariable ARMAX Model

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A new method for computing the predicting function of the ARMAX model is proposed. The proposed method constructs a set of schemes for recursively computing the parameters of the predicting function of the ARMAX model. In contrast to the existing method, which only gives results for the special case of the ARX model, the method presented is suitable not only for SISO systems but also for MIMO systems. For the SISO system, the method presented here is even more convenient than the existing ones.

  4. Static Dependency Pair Method based on Strong Computability for Higher-Order Rewrite Systems

    CERN Document Server

    Kusakari, Keiichirou; Sakai, Masahiko; Blanqui, Frédéric

    2011-01-01

    Higher-order rewrite systems (HRSs) and simply-typed term rewriting systems (STRSs) are computational models of functional programs. We recently proposed an extremely powerful method, the static dependency pair method, based on the notion of strong computability, for proving termination of STRSs. In this paper, we extend the method to HRSs. Since HRSs include λ-abstraction but STRSs do not, we restructure the static dependency pair method to allow λ-abstraction, and show that it also works well on HRSs without new restrictions.

  5. The Long-Time Behavior of Legendre Spectral Approximation to the Fitz-Hugh-Nagumo Equation [Fitz-Hugh-Nagumo方程Legendre谱逼近的长时间性态]

    Institute of Scientific and Technical Information of China (English)

    张伟斌

    2005-01-01

    Using the Legendre spectral method, the Fitz-Hugh-Nagumo equation is semi-discretized in the spatial direction; an error estimate for the approximate solution is obtained, and the existence and upper semicontinuity of the approximate global attractor are proved, thereby providing an effective algorithm for studying the long-time behavior of the equation.
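
    To make the setting concrete, here is a method-of-lines semi-discretisation of a FitzHugh-Nagumo system; plain finite differences with periodic boundaries stand in for the paper's Legendre spectral basis, and all parameter values are illustrative.

        import numpy as np

        n, L, dt = 200, 100.0, 0.02
        a, b, gamma, eps = 0.1, 0.5, 1.0, 0.01     # illustrative FHN parameters
        dx = L / n
        x = np.arange(n) * dx
        u = np.exp(-(x - 20.0) ** 2)               # initial excitation pulse
        v = np.zeros(n)
        for _ in range(5000):
            lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2  # periodic Laplacian
            u = u + dt * (lap + u * (1 - u) * (u - a) - v)
            v = v + dt * eps * (b * u - gamma * v)
        print("pulse position:", x[u.argmax()])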

  6. Computational Methods for Stability and Control (COMSAC): The Time Has Come

    Science.gov (United States)

    Hall, Robert M.; Biedron, Robert T.; Ball, Douglas N.; Bogue, David R.; Chung, James; Green, Bradford E.; Grismer, Matthew J.; Brooks, Gregory P.; Chambers, Joseph R.

    2005-01-01

    Powerful computational fluid dynamics (CFD) tools have emerged that appear to offer significant benefits as an adjunct to the experimental methods used by the stability and control community to predict aerodynamic parameters. Decreasing costs and increasing availability of computing hours are making these applications ever more viable. This paper summarizes the efforts of four organizations to apply high-end CFD tools to the challenges of the stability and control arena. The general motivation and backdrop for these efforts are summarized, along with examples of current applications.

  7. The parameterization method for invariant manifolds from rigorous results to effective computations

    CERN Document Server

    Haro, Àlex; Figueras, Jordi-Lluis; Luque, Alejandro; Mondelo, Josep Maria

    2016-01-01

    This monograph presents some theoretical and computational aspects of the parameterization method for invariant manifolds, focusing on the following contexts: invariant manifolds associated with fixed points, invariant tori in quasi-periodically forced systems, invariant tori in Hamiltonian systems and normally hyperbolic invariant manifolds. This book provides algorithms of computation and some practical details of their implementation. The methodology is illustrated with 12 detailed examples, many of them well known in the literature of numerical computation in dynamical systems. A public version of the software used for some of the examples is available online. The book is aimed at mathematicians, scientists and engineers interested in the theory and applications of computational dynamical systems.

  8. Analytical and numerical methods for computing electron partial intensities in the case of multilayer systems

    Energy Technology Data Exchange (ETDEWEB)

    Afanas’ev, Victor P., E-mail: afanasyevvip@gmail.com [Department of General Physics and Nuclear Fusion, National Research University “Moscow Power Engineering Institute”, Krasnokazarmennaya, 14, Moscow 111250 (Russian Federation); Efremenko, Dmitry S. [Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für Methodik der Fernerkundung (IMF), 82234 Oberpfaffenhofen (Germany); Kaplya, Pavel S., E-mail: pavel@kaplya.com [Department of General Physics and Nuclear Fusion, National Research University “Moscow Power Engineering Institute”, Krasnokazarmennaya, 14, Moscow 111250 (Russian Federation)

    2016-07-15

    Highlights: • The OKG model is extended to layers of finite thickness. • An efficient matrix technique for computing partial intensities is proposed. • Good agreement is obtained between computed partial intensities and experimental data. - Abstract: We present two novel methods for computing energy spectra and angular distributions of electrons emitted from multi-layer solids. They are based on the Ambartsumian–Chandrasekhar (AC) equations obtained by using the invariant imbedding method. The first method is analytical and relies on a linearization of the AC equations and the use of the small-angle approximation. The corresponding solution is in good agreement with that computed using the Oswald–Kasper–Gaukler (OKG) model, which is extended here to layers of finite thickness. The second method is based on the discrete-ordinate formalism and relies on a transformation of the AC equations into algebraic Riccati and Lyapunov equations, which are solved by using the backward differentiation formula. Unlike the previous approach, this method can handle both linear and nonlinear equations. We analyze the applicability of the proposed methods to practical problems of computing REELS spectra. To demonstrate the efficiency of the proposed methods, several computational examples are considered. The numerical and analytical solutions obtained show good agreement with experimental data and Monte Carlo simulations. In addition, the impact of nonlinear terms in the Ambartsumian–Chandrasekhar equations is analyzed.

  9. COMPUTATIONAL METHODS FOR STUDYING THE INTERACTION BETWEEN POLYCYCLIC AROMATIC HYDROCARBONS AND BIOLOGICAL MACROMOLECULES

    Science.gov (United States)

    Computational Methods for Studying the Interaction between Polycyclic Aromatic Hydrocarbons and Biological Macromolecules .The mechanisms for the processes that result in significant biological activity of PAHs depend on the interaction of these molecules or their metabol...

  10. COMPUTER MODELING AS ONE OF CONTEMPORARY METHODS OF FORECASTING IN PHARMACEUTICAL TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    S. O. Losenkova

    2014-01-01

    Full Text Available The work presents research devoted to forecasting the compatibility of additives and drug substances by computer modeling, for use in developing the composition and technology of transdermal therapeutic systems.

  11. Method for Determining Language Objectives and Criteria. Volume II. Methodological Tools: Computer Analysis, Data Collection Instruments.

    Science.gov (United States)

    1979-05-25

    This volume presents (1) methods for computer and hand analysis of numerical language performance data (with examples) and (2) samples of the interview, observation, and survey instruments used in collecting language data. (Author)

  12. A computer-supported method to reveal and assess Personal Professional Theories in vocational education

    NARCIS (Netherlands)

    van den Bogaart, Antoine C.M.; Bilderbeek, Richardus; Schaap, Harmen; Hummel, Hans G.K.; Kirschner, Paul A.

    2016-01-01

    This article introduces a dedicated, computer-supported method to construct and formatively assess open, annotated concept maps of Personal Professional Theories (PPTs). These theories are internalised, personal bodies of formal and practical knowledge, values, norms and convictions that professiona

  13. A Critical Review of Computational Methods and Their Application in Industrial Fan Design

    OpenAIRE

    Alessandro Corsini; Giovanni Delibra; Sheard, Anthony G.

    2013-01-01

    Members of the aerospace fan community have systematically developed computational methods over the last five decades. The complexity of the developed methods and the difficulty associated with their practical application ensured that, although commercial computational codes date back to the 1980s, they were not fully exploited by industrial fan designers until the beginning of the 2000s. The application of commercial codes proved to be problematic as, unlike aerospace fans, industrial fans i...

  14. The Application of SCC-DV-Xα Computational Method of Quantum Chemistry in Cement Chemistry

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The reasons for applying quantum chemistry to the field of cement chemistry are explored, and the fundamental theory of the SCC-DV-Xα computational method of quantum chemistry is summarized. Results obtained in recent years by computational quantum chemistry on the valence-bond structures and hydration activity of some cement clinker minerals, and on the mechanical strength and stability of some hydrates, are summarized and evaluated. Finally, prospects for the future application of quantum chemistry to cement chemistry are outlined.

  15. A mixed-methods exploration of an environment for learning computer programming

    OpenAIRE

    Mather, Richard

    2015-01-01

    A mixed-methods approach is evaluated for exploring collaborative behaviour, acceptance and progress surrounding an interactive technology for learning computer programming. A review of the literature reveals a compelling case for using mixed-methods approaches when evaluating technology-enhanced learning environments. Here, ethnographic approaches used for the requirements engineering of computing systems are combined with questionnaire-based feedback and skill tests. These are applied to ...

  16. Computing the $\\sin_{p}$ function via the inverse power method

    CERN Document Server

    Biezuner, Rodney Josué; Martins, Eder Marinho

    2010-01-01

    In this paper, we discuss a new iterative method for computing $\\sin_{p}$. This function was introduced by Lindqvist in connection with the unidimensional nonlinear Dirichlet eigenvalue problem for the $p$-Laplacian. The iterative technique was inspired by the inverse power method in finite dimensional linear algebra and is competitive with other methods available in the literature.
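
    In that spirit, a hedged numerical sketch of an inverse-power-style iteration for the first Dirichlet eigenpair of the one-dimensional p-Laplacian on (0,1), whose eigenfunction coincides with sin_p up to rescaling of its argument. The double-integration inversion, quadrature and iteration count are illustrative choices, not necessarily those of the paper.

        import numpy as np

        def sinp_profile(p=3.0, n=2001, iters=80):
            # Inverse-power iteration: repeatedly solve
            #   -(|u'|^{p-2} u')' = |u_k|^{p-2} u_k,  u(0) = u(1) = 0,
            # by explicit double integration, using the symmetry u'(1/2) = 0
            # of the first eigenfunction.
            x = np.linspace(0.0, 1.0, n)
            h = x[1] - x[0]
            u = x * (1.0 - x)                      # symmetric positive start
            for _ in range(iters):
                f = np.abs(u) ** (p - 2.0) * u
                F = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * h)))
                g = F[n // 2] - F                  # g(x) = integral from x to 1/2 of f
                du = np.sign(g) * np.abs(g) ** (1.0 / (p - 1.0))
                u = np.concatenate(([0.0], np.cumsum(0.5 * (du[1:] + du[:-1]) * h)))
                m = np.abs(u).max()
                u /= m                             # renormalise each sweep
            return x, u, m ** (1.0 - p)            # eigenvalue from the last scaling

        x, u, lam = sinp_profile(p=3.0)
        pi_p = 2.0 * np.pi / (3.0 * np.sin(np.pi / 3.0))
        print(lam, 2.0 * pi_p ** 3.0)              # known lambda_1 = (p-1)*pi_p^p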

  17. Choosing Learning Methods Suitable for Teaching and Learning in Computer Science

    Science.gov (United States)

    Taylor, Estelle; Breed, Marnus; Hauman, Ilette; Homann, Armando

    2013-01-01

    Our aim is to determine which teaching methods students in Computer Science and Information Systems prefer. There are in total 5 different paradigms (behaviorism, cognitivism, constructivism, design-based and humanism) with 32 models among them. Each model is unique and prescribes different learning methods. Recommendations are made on methods that…

  18. Methods, algorithms and tools in computational proteomics: a practical point of view.

    Science.gov (United States)

    Matthiesen, Rune

    2007-08-01

    Computational MS-based proteomics is an emerging field arising from the demand of high throughput analysis in numerous large-scale experimental proteomics projects. The review provides a broad overview of a number of computational tools available for data analysis of MS-based proteomics data and gives appropriate literature references to detailed description of algorithms. The review provides, to some extent, discussion of algorithms and methods for peptide and protein identification using MS data, quantitative proteomics, and data storage. The hope is that it will stimulate discussion and further development in computational proteomics. Computational proteomics deserves more scientific attention. There are far fewer computational tools and methods available for proteomics compared to the number of microarray tools, despite the fact that data analysis in proteomics is much more complex than microarray analysis.

  19. Delcam立足中国再创辉煌--访Delcam公司总经理Hugh Humphreys [Delcam builds on China for further success: an interview with Delcam managing director Hugh Humphreys]

    Institute of Scientific and Technical Information of China (English)

    王崇民; 朱旭

    2004-01-01

    Since the start of 2004, Delcam (China) has been very active. Following a series of personnel changes, its Xi'an office was formally opened, and soon afterwards Beijing Dilekesi Technology Development Co., Ltd. (北京迪勒克斯科技发展有限公司) was brought into the group as Delcam (China)'s value-added reseller for North China. At the signing ceremony, this reporter met Mr. Hugh Humphreys, managing director of the Delcam PLC group, who had travelled from the UK especially to offer his congratulations; the open and witty Mr. Humphreys was very happy to be interviewed by this magazine.

  20. 2002 Report to Congress: Evaluating the Consensus Best Practices Developed through the Howard Hughes Medical Institute’s Collaborative Hazardous Waste Management Demonstration Project

    Science.gov (United States)

    This report discusses a collaborative project initiated by the Howard Hughes Medical Institute (HHMI) to establish and evaluate a performance-based approach to management of hazardous wastes in the laboratories of academic research institutions.