NSC KIPT Linux cluster for computing within the CMS physics program
Levchuk, L.G.; Sorokin, P.V.; Soroka, D.V.
2002-01-01
The architecture of the NSC KIPT specialized Linux cluster constructed for carrying out work on CMS physics simulations and data processing is described. The configuration of the portable batch system (PBS) on the cluster is outlined. The capabilities of the cluster, in its current configuration, to perform CMS physics simulations are pointed out.
Gohar, Y.; Zhong, Z.; Talamo, A.
2009-01-01
Argonne National Laboratory (ANL) of the USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron-accelerator-driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100-kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and an electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukrainian nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100-kW electron beam power, the total thermal power of the facility is ∼375 kW, including the fission power of ∼260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during the operation, which reduces the neutron flux level and consequently the facility performance. To preserve the neutron flux level during the operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. The recent developments of Monte Carlo computer codes, the high speed of computer processors, and parallel computation techniques have made it possible to perform detailed three-dimensional burnup simulations. A fully detailed three-dimensional geometrical model is used for the burnup simulations, with continuous-energy nuclear data libraries for the transport calculations and 63-group or one-group cross-section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the electrons and the
Plant model of KIPT neutron source facility simulator
Cao, Yan; Wei, Thomas Y.; Grelle, Austin L.; Gohar, Yousry
2016-01-01
Argonne National Laboratory (ANL) of the United States and Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons, and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat outside the facility buildings to the atmosphere. In addition, the electron accelerator has a low efficiency for generating the electron beam, so another secondary cooling loop is used to remove the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this function. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during steady-state and transient conditions using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model is written in the Python scripting language. It is consistent with the computer language of the plant control system, is easy to integrate with the simulator without an additional interface, and is able to simulate the transients of the cooling systems with system control variables changing in real time.
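To illustrate the kind of cooling-system transient such a Python plant model can represent, here is a minimal lumped-parameter sketch. It is not the facility's actual model; the coolant mass, flow rate, and inlet temperature are hypothetical placeholders.

```python
# Minimal lumped-parameter sketch of a primary cooling loop transient,
# in the spirit of the Python plant model described above. The coolant
# inventory, flow rate, and inlet temperature are hypothetical.

def step_coolant_temp(T, power_w, mdot, cp=4186.0, T_in=30.0, m=500.0, dt=0.1):
    """Advance the loop's bulk coolant temperature by one Euler step.

    Energy balance: m*cp*dT/dt = power_in - mdot*cp*(T - T_in)
    """
    dTdt = (power_w - mdot * cp * (T - T_in)) / (m * cp)
    return T + dTdt * dt

# Drive a 300-kW heat load with 2 kg/s of water flow until steady state.
T = 30.0
for _ in range(20000):          # 2000 s of simulated time
    T = step_coolant_temp(T, power_w=300e3, mdot=2.0)

# Steady state satisfies power = mdot*cp*(T - T_in),
# i.e. T ~ 30 + 300e3 / (2 * 4186) ~ 65.8 C
print(round(T, 1))  # 65.8
```

The same update rule, with valve positions and pump speeds as control variables, is the basic building block for simulating loop transients in real time.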
Accelerator shield design of KIPT neutron source facility
Zhong, Z.; Gohar, Y.
2013-01-01
Argonne National Laboratory (ANL) of the United States and Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the design development of a neutron source facility at KIPT utilizing an electron-accelerator-driven subcritical assembly. The electron beam power is 100 kW, using 100-MeV electrons. The facility is designed to perform basic and applied nuclear research, produce medical isotopes, and train young nuclear specialists. The biological shield of the accelerator building is designed to reduce the biological dose to less than 0.5 mrem/h during operation. The main source of the biological dose is the photons and the neutrons generated by interactions of electrons leaking from the electron gun and accelerator sections with the surrounding concrete and accelerator materials. The Monte Carlo code MCNPX serves as the calculation tool for the shield design, due to its capability to solve coupled electron-photon-neutron transport problems. The direct photon dose can be tallied by an MCNPX calculation, starting with the leaked electrons. However, it is difficult to accurately tally the neutron dose directly from the leaked electrons. The neutron yield per electron from the interactions with the surrounding components is less than 0.01 neutron per electron. This causes difficulties for Monte Carlo analyses and consumes tremendous computation time to tally the neutron dose outside the shield boundary with acceptable statistics. To avoid these difficulties, the SOURCE and TALLYX user subroutines of MCNPX were developed for the study. The generated neutrons are banked, together with all related parameters, for a subsequent MCNPX calculation to obtain the neutron and secondary photon doses. The weight-windows variance reduction technique is utilized for both the neutron and photon dose calculations. Two shielding materials, i.e., heavy concrete and ordinary concrete, were considered for the shield design. The main goal is to maintain the total
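For orientation, shield sizing of this kind is often first bracketed with a simple exponential-attenuation estimate before running the full Monte Carlo analysis. The sketch below uses the tenth-value-layer (TVL) form; the unshielded dose rate and the TVL value are illustrative placeholders, not results from this study.

```python
# Back-of-the-envelope attenuation sketch for sizing a concrete shield.
# The unshielded dose rate and tenth-value layer (TVL) below are
# hypothetical placeholders, not the MCNPX results of the study.

import math

def dose_behind_shield(d0_mrem_h, thickness_cm, tvl_cm):
    """Dose rate behind a shield: D = D0 * 10^(-t / TVL)."""
    return d0_mrem_h * 10.0 ** (-thickness_cm / tvl_cm)

def required_thickness(d0_mrem_h, limit_mrem_h, tvl_cm):
    """Shield thickness needed to bring D0 down to the design limit."""
    return tvl_cm * math.log10(d0_mrem_h / limit_mrem_h)

# Hypothetical: 5e4 mrem/h unshielded, 40-cm TVL, and the 0.5 mrem/h
# design limit quoted in the abstract above.
t = required_thickness(5e4, 0.5, 40.0)
print(round(t, 1))  # 200.0
```

A denser material such as heavy concrete has a smaller TVL, which is why it trades shield thickness against cost in the comparison the abstract describes.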
Zhong, Z.; Gohar, Y.; Talamo, A.
2009-01-01
Argonne National Laboratory (ANL) of the USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron-accelerator-driven subcritical facility (ADS). The facility will be utilized for basic research, medical isotope production, and training young nuclear specialists. The burnup methodology and analysis of the KIPT ADS are presented in this paper. The MCNPX and MCB Monte Carlo computer codes have been utilized. MCNPX has the capability of performing coupled electron, photon, and neutron transport problems, but it lacks the burnup capability for driven subcritical systems. MCB has the capability of performing the burnup calculation of driven subcritical systems, but it cannot transport electrons. A calculational methodology coupling MCNPX and MCB has been developed, which exploits the electron transport capability of MCNPX for neutron production and the burnup capability of MCB for driven subcritical systems. In this procedure, a neutron source file is generated using an MCNPX transport calculation, preserving the neutron yield from photonuclear reactions initiated by electrons, and this source file is utilized by MCB for the burnup analyses with the same geometrical model. In this way, the ADS depletion calculation can be performed accurately. (authors)
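As a back-of-the-envelope illustration of why the external neutron source drives everything in such a subcritical system, the sketch below estimates the source multiplication M = 1/(1 - k_s) and the resulting fission power. All numerical values (source rate, k_s) are hypothetical and are not taken from this study.

```python
# Illustrative source-multiplication estimate for an accelerator-driven
# subcritical assembly. All numbers are hypothetical placeholders, NOT
# results from the paper; the real analysis uses coupled MCNPX/MCB runs.

NU = 2.43              # average neutrons emitted per U-235 fission
E_FISSION_J = 3.2e-11  # ~200 MeV released per fission, in joules

def source_multiplication(k_s: float) -> float:
    """Total neutrons produced per source neutron: M = 1 / (1 - k_s)."""
    if not 0.0 <= k_s < 1.0:
        raise ValueError("a subcritical assembly requires 0 <= k_s < 1")
    return 1.0 / (1.0 - k_s)

def fission_power_watts(source_rate: float, k_s: float) -> float:
    """Rough fission power driven by an external neutron source.

    Each source neutron induces a chain contributing k_s/(1-k_s) fission
    neutrons; dividing by nu converts neutrons to fissions per second.
    """
    fission_rate = source_rate * k_s / ((1.0 - k_s) * NU)
    return fission_rate * E_FISSION_J

# Example: a hypothetical 1e15 n/s photoneutron source with k_s = 0.97.
print(round(source_multiplication(0.97), 1))  # 33.3
print(fission_power_watts(1e15, 0.97))        # a few hundred kW
```

This is why small reactivity losses from burnup visibly reduce the flux level: the multiplication factor 1/(1 - k_s) is very sensitive to k_s near unity.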
Nature, computation and complexity
Binder, P-M; Ellis, G F R
2016-01-01
The issue of whether the unfolding of events in the world can be considered a computation is explored in this paper. We come to different conclusions for inert and for living systems (‘no’ and ‘qualified yes’, respectively). We suggest that physical computation as we know it exists only as a tool of complex biological systems: us. (paper)
Development of Plasma Technologies at IPP NSC KIPT
Tereshin, V.I.; Chebotarev, V.V.; Garkusha, I.E.; Lapshin, V.I.; Shvets, O.M.; Taran, V.S.
2001-01-01
Plasma technologies at the Institute of Plasma Physics of the NSC KIPT have recently been developed in the following directions. Modification of material surfaces by irradiation with pulsed plasma streams of different working gases: besides the traditional analysis of improvements in tribological characteristics and structural-phase changes of the modified layers, we have recently started investigating improvements in the corrosion characteristics of materials under the influence of pulsed plasma on their surfaces. As for surface coatings in arc discharges of Bulat-type devices, new trends are related to multi-layer coatings, the use of Ti-Al-N coatings on cutting tools, and the use of high-frequency discharges or combined HF and arc discharges to increase the range of goods that can be coated. The development of ozonators is a relatively new area for the IPP NSC KIPT. On the basis of the high-frequency barrier discharge, a number of high-efficiency ozonators with ozone production up to 100 g/hour have been developed. (author)
Theories of computational complexity
Calude, C
1988-01-01
This volume presents four machine-independent theories of computational complexity, which have been chosen for their intrinsic importance and practical relevance. The book includes a wealth of results - classical, recent, and others which have not been published before. In developing the mathematics underlying the size, dynamic and structural complexity measures, various connections with mathematical logic, constructive topology, probability and programming theories are established. The facts are presented in detail. Extensive examples are provided, to help clarify notions and constructions. The lists of exercises and problems include routine exercises, interesting results, as well as some open problems.
Computability, complexity, logic
Börger, Egon
1989-01-01
The theme of this book is formed by a pair of concepts: the concept of formal language as carrier of the precise expression of meaning, facts and problems, and the concept of algorithm or calculus, i.e. a formally operating procedure for the solution of precisely described questions and problems. The book is a unified introduction to the modern theory of these concepts, to the way in which they developed first in mathematical logic and computability theory and later in automata theory, and to the theory of formal languages and complexity theory. Apart from considering the fundamental themes an
Accelerating complex for basic researches in the nuclear physics
Dovbnya, A.N.; Guk, I.S.; Kononenko, S.G.; Peev, F.A.; Tarasenko, A.S.; Botman, J.I.M.
2009-01-01
In 2003, NSC KIPT began work on the development of an accelerator project, the electron recirculator SALO, as the base facility of IHEPNP NSC KIPT. The accelerator will be located in the target hall of the LU-2000 accelerator complex. It is designed first of all as a facility for basic research in the field of
Analysis of fuel management in the KIPT neutron source facility
Zhong Zhaopeng, E-mail: zzhong@anl.gov [Nuclear Engineering Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Gohar, Yousry; Talamo, Alberto [Nuclear Engineering Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States)
2011-05-15
Research highlights: → Fuel management of the KIPT ADS was analyzed. → The core arrangement was shuffled in stages. → New fuel assemblies were added to the core periodically. → A beryllium reflector could also be utilized to increase the fuel life. - Abstract: Argonne National Laboratory (ANL) of the USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an experimental neutron source facility consisting of an electron-accelerator-driven sub-critical assembly. The neutron source driving the sub-critical assembly is generated from the interaction of a 100-kW electron beam with a natural uranium target. The sub-critical assembly surrounding the target is fueled with low-enriched WWR-M2-type hexagonal fuel assemblies. The U-235 enrichment of the fuel material is <20%. The facility will be utilized for basic and applied research, producing medical isotopes, and training young specialists. With the 100-kW electron beam power, the total thermal power of the facility is ∼360 kW, including the fission power of ∼260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the system reactivity during the operation, decrease the neutron flux level, and consequently impact the facility performance. To preserve the neutron flux level during the operation, the fuel assemblies should be added and shuffled to compensate for the lost reactivity caused by burnup. A beryllium reflector could also be utilized to increase the fuel lifetime in the sub-critical core. This paper studies the fuel cycles and shuffling schemes of the fuel assemblies of the sub-critical assembly to preserve the system reactivity and the neutron flux level during the operation.
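The reactivity-compensation logic described above can be caricatured in a few lines: reactivity falls with burnup, and a fresh assembly is shuffled in whenever the multiplication factor drops below a floor. Every coefficient below is a hypothetical placeholder, not a value from the paper.

```python
# Toy fuel-cycle sketch: k_s falls roughly linearly with burnup, and a
# fresh assembly is added when it crosses a floor. All coefficients are
# hypothetical placeholders, not the paper's computed values.

def simulate_cycles(k_start=0.975, k_floor=0.950, dk_per_day=1.0e-4,
                    dk_per_fresh_assembly=5.0e-3, days=1000):
    """Return the operating days on which fresh fuel is shuffled in."""
    k, reload_days = k_start, []
    for day in range(1, days + 1):
        k -= dk_per_day                  # burnup + fission-product buildup
        if k < k_floor:
            k += dk_per_fresh_assembly   # shuffle in a fresh assembly
            reload_days.append(day)
    return reload_days

# First reload after roughly (k_start - k_floor) / dk_per_day ~ 250 days,
# then more frequently, since each fresh assembly restores less reactivity.
print(simulate_cycles())
```

The pattern, long first cycle followed by shorter ones, is the qualitative behavior the shuffling schemes in the paper are designed to manage; a beryllium reflector stretches the cycle by reducing dk_per_day.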
Theory of computational complexity
Du, Ding-Zhu
2011-01-01
DING-ZHU DU, PhD, is a professor in the Department of Computer Science at the University of Minnesota. KER-I KO, PhD, is a professor in the Department of Computer Science at the State University of New York at Stony Brook.
Advances in computational complexity theory
Cai, Jin-Yi
1993-01-01
This collection of recent papers on computational complexity theory grew out of activities during a special year at DIMACS. With contributions by some of the leading experts in the field, this book is of lasting value in this fast-moving field, providing expositions not found elsewhere. Although aimed primarily at researchers in complexity theory and graduate students in mathematics or computer science, the book is accessible to anyone with an undergraduate education in mathematics or computer science. By touching on some of the major topics in complexity theory, this book sheds light on this burgeoning area of research.
Medical Isotope Production Analyses In KIPT Neutron Source Facility
Talamo, Alberto; Gohar, Yousry
2016-01-01
Medical isotope production analyses in the Kharkov Institute of Physics and Technology (KIPT) neutron source facility were performed to include the details of the irradiation cassette and the self-shielding effect. An updated detailed model of the facility was used for the analyses. The facility consists of an accelerator-driven system (ADS), which has a subcritical assembly using low-enriched uranium fuel elements with a beryllium-graphite reflector. The beryllium assemblies of the reflector have the same outer geometry as the fuel elements, which permits loading the subcritical assembly with a different number of fuel elements without impacting the reflector performance. The subcritical assembly is driven by an external neutron source generated from the interaction of a 100-kW electron beam with a tungsten target. The facility construction was completed at the end of 2015, and it is planned to start operation during 2016. It is the first ADS in the world that has a coolant system for removing the generated fission power. Argonne National Laboratory has developed the design concept and performed extensive design analyses for the facility, including its utilization for the production of different radioactive medical isotopes. 99Mo is the parent isotope of 99mTc, which is the most commonly used medical radioactive isotope. Detailed analyses were performed to define the optimal sample irradiation location and the generated activity, for several radioactive medical isotopes, as a function of the irradiation time.
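The "activity as a function of irradiation time" behavior mentioned above follows the standard activation curve A(t) = R(1 - e^(-λt)), which saturates at the production rate R. The sketch below uses the well-known Mo-99 half-life; the production rate is a hypothetical placeholder, since the real R comes from the facility's flux and the sample's cross sections.

```python
# Illustrative irradiation-activation sketch: the activity of a sample
# approaches saturation as A(t) = R * (1 - exp(-lambda * t)). The
# production rate R is a hypothetical placeholder; in practice it is
# computed from the neutron flux at the irradiation location.

import math

HALF_LIFE_MO99_H = 65.94                    # Mo-99 half-life, hours
LAMBDA = math.log(2) / HALF_LIFE_MO99_H     # decay constant, 1/h

def activity_bq(production_rate_per_s, t_hours):
    """Sample activity after irradiating for t_hours at a constant rate."""
    return production_rate_per_s * (1.0 - math.exp(-LAMBDA * t_hours))

# After one half-life of irradiation the sample reaches 50% of its
# saturation activity; irradiating much longer gains little.
print(round(activity_bq(1.0, HALF_LIFE_MO99_H), 2))  # 0.5
```

This diminishing return is why choosing the irradiation location (which sets R) matters more than simply extending the irradiation time.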
Implicit computational complexity and compilers
Rubiano, Thomas
Complexity theory helps us predict and control resources, usually time and space, consumed by programs. Static analysis on specific syntactic criteria allows us to categorize some programs. A common approach is to observe the behavior of a program's data. For instance, the detection of non… evolution, and a lot of research came from this theory. Until now, these implicit complexity theories were essentially applied to more or less toy languages. This thesis applies implicit computational complexity methods to "real-life" programs by manipulating intermediate representation languages…
Computational models of complex systems
Dabbaghian, Vahid
2014-01-01
Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...
Ubiquitous Computing, Complexity and Culture
The ubiquitous nature of mobile and pervasive computing has begun to reshape and complicate our notions of space, time, and identity. In this collection, over thirty internationally recognized contributors reflect on ubiquitous computing's implications for the ways in which we interact with our… environments, experience time, and develop identities individually and socially. Interviews with working media artists lend further perspectives on these cultural transformations. Drawing on cultural theory, new media art studies, human-computer interaction theory, and software studies, this cutting-edge book… critically unpacks the complex ubiquity-effects confronting us every day.
KIPT accelerator-driven system design and performance
Gohar, Y.; Bolshinsky, I.; Karnaukhov, I.
2015-01-01
Argonne National Laboratory (ANL) of the US is collaborating with the Kharkov Institute of Physics and Technology (KIPT) of Ukraine to develop and construct a neutron source facility. The facility is planned to produce medical isotopes, train young nuclear professionals, support Ukraine's nuclear industry and provide capability to perform reactor physics, material research, and basic science experiments. It consists of a subcritical assembly with low-enriched uranium fuel driven with an electron accelerator. The target design utilises tungsten or natural uranium for neutron production through photonuclear reactions from the Bremsstrahlung radiation generated by 100-MeV electrons. The accelerator electron beam power is 100 kW. The neutron source intensity, spectrum, and spatial distribution have been studied as a function of the electron beam parameters to maximise the neutron yield and satisfy different engineering requirements. Physics, thermal-hydraulics, and thermal-stress analyses were performed and iterated to maximise the neutron source strength and to minimise the maximum temperature and the thermal stress in the target materials. The subcritical assembly is designed to obtain the highest possible neutron flux intensity with an effective neutron multiplication factor of <0.98. Different fuel and reflector materials are considered for the subcritical assembly design. The mechanical design of the facility has been developed to maximise its utility and minimise the time for replacing the target, fuel, and irradiation cassettes by using simple and efficient procedures. Shielding analyses were performed to define the dose map around the facility during operation as a function of the heavy concrete shield thickness. Safety, reliability and environmental considerations are included in the facility design. The facility is configured to accommodate future design upgrades and new missions. In addition, it has unique features relative to the other international
Computational complexity of Boolean functions
Korshunov, Aleksei D [Sobolev Institute of Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk (Russian Federation)
2012-02-28
Boolean functions are among the fundamental objects of discrete mathematics, especially in those of its subdisciplines which fall under mathematical logic and mathematical cybernetics. The language of Boolean functions is convenient for describing the operation of many discrete systems such as contact networks, Boolean circuits, branching programs, and some others. An important parameter of discrete systems of this kind is their complexity. This characteristic has been actively investigated starting from Shannon's works. There is a large body of scientific literature presenting many fundamental results. The purpose of this survey is to give an account of the main results over the last sixty years related to the complexity of computation (realization) of Boolean functions by contact networks, Boolean circuits, and Boolean circuits without branching. Bibliography: 165 titles.
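To make concrete the counting that underlies Shannon-style complexity lower bounds, the sketch below tabulates the number of distinct Boolean functions of n variables, 2^(2^n). This double-exponential growth is standard textbook material, not a result specific to the survey.

```python
# Shannon-style counting sketch: there are 2^(2^n) Boolean functions of
# n variables, so circuits of bounded size (of which there are far
# fewer) cannot realize most functions cheaply.

def num_boolean_functions(n: int) -> int:
    """Number of distinct Boolean functions f: {0,1}^n -> {0,1}."""
    return 2 ** (2 ** n)

for n in range(1, 5):
    print(n, num_boolean_functions(n))
# 1 4
# 2 16
# 3 256
# 4 65536
```

Comparing this count with the (much smaller) number of circuits of a given size is the classical argument that almost all Boolean functions require circuits of size close to 2^n / n.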
Computational complexity in entanglement transformations
Chitambar, Eric A.
In physics, systems having three parts are typically much more difficult to analyze than those having just two. Even in classical mechanics, predicting the motion of three interacting celestial bodies remains an insurmountable challenge while the analogous two-body problem has an elementary solution. It is as if just by adding a third party, a fundamental change occurs in the structure of the problem that renders it unsolvable. In this thesis, we demonstrate how such an effect is likewise present in the theory of quantum entanglement. In fact, the complexity differences between two-party and three-party entanglement become quite conspicuous when comparing the difficulty in deciding what state changes are possible for these systems when no additional entanglement is consumed in the transformation process. We examine this entanglement transformation question and its variants in the language of computational complexity theory, a powerful subject that formalizes the concept of problem difficulty. Since deciding feasibility of a specified bipartite transformation is relatively easy, this task belongs to the complexity class P. On the other hand, for tripartite systems, we find the problem to be NP-Hard, meaning that its solution is at least as hard as the solution to some of the most difficult problems humans have encountered. One can then rigorously defend the assertion that a fundamental complexity difference exists between bipartite and tripartite entanglement since unlike the former, the full range of forms realizable by the latter is incalculable (assuming P≠NP). However, similar to the three-body celestial problem, when one examines a special subclass of the problem---invertible transformations on systems having at least one qubit subsystem---we prove that the problem can be solved efficiently. As a hybrid of the two questions, we find that the question of tripartite to bipartite transformations can be solved by an efficient randomized algorithm. Our results are
Computational complexity a quantitative perspective
Zimand, Marius
2004-01-01
There has been a common perception that computational complexity is a theory of "bad news" because its most typical results assert that various real-world and innocent-looking tasks are infeasible. In fact, "bad news" is a relative term, and, indeed, in some situations (e.g., in cryptography), we want an adversary to not be able to perform a certain task. However, a "bad news" result does not automatically become useful in such a scenario. For this to happen, its hardness features have to be quantitatively evaluated and shown to manifest extensively. The book undertakes a quantitative analysis of some of the major results in complexity that regard either classes of problems or individual concrete problems. The size of some important classes are studied using resource-bounded topological and measure-theoretical tools. In the case of individual problems, the book studies relevant quantitative attributes such as approximation properties or the number of hard inputs at each length. One chapter is dedicated to abs...
Complex computation in the retina
Deshmukh, Nikhil Rajiv
Elucidating the general principles of computation in neural circuits is a difficult problem requiring both a tractable model circuit as well as sophisticated measurement tools. This thesis advances our understanding of complex computation in the salamander retina and its underlying circuitry and furthers the development of advanced tools to enable detailed study of neural circuits. The retina provides an ideal model system for neural circuits in general because it is capable of producing complex representations of the visual scene, and both its inputs and outputs are accessible to the experimenter. Chapter 2 describes the biophysical mechanisms that give rise to the omitted stimulus response in retinal ganglion cells described in Schwartz et al., (2007) and Schwartz and Berry, (2008). The extra response to omitted flashes is generated at the input to bipolar cells, and is separable from the characteristic latency shift of the OSR apparent in ganglion cells, which must occur downstream in the circuit. Chapter 3 characterizes the nonlinearities at the first synapse of the ON pathway in response to high contrast flashes and develops a phenomenological model that captures the effect of synaptic activation and intracellular signaling dynamics on flash responses. This work is the first attempt to model the dynamics of the poorly characterized mGluR6 transduction cascade unique to ON bipolar cells, and explains the second lobe of the biphasic flash response. Complementary to the study of neural circuits, recent advances in wafer-scale photolithography have made possible new devices to measure the electrical and mechanical properties of neurons. Chapter 4 reports a novel piezoelectric sensor that facilitates the simultaneous measurement of electrical and mechanical signals in neural tissue. This technology could reveal the relationship between the electrical activity of neurons and their local mechanical environment, which is critical to the study of mechanoreceptors
Mikahaylov, V.; Lapshin, V.; Ek, P.; Flyghed, L.; Nilsson, A.; Ooka, N.; Shimizu, K.; Tanuma, K.
2001-01-01
Since Ukraine voluntarily accepted the status of a non-nuclear-weapons state and concluded a Safeguards Agreement with the IAEA, the Kharkov Institute of Physics and Technology (KIPT), as a nuclear facility using nuclear material of category 1, has become a Ukrainian priority object for the international community's efforts to ensure nuclear non-proliferation measures and to bring the existing protection systems to generally accepted security standards. In March 1996, at a meeting held under the auspices of the IAEA in Kiev, representatives from Japan, Sweden and the USA agreed to provide technical assistance in improving the nuclear material accountancy and control and physical protection system (MPC and A) available at KIPT. The Technical Secretariat of the Japan-Ukraine Committee for Co-operation on Reducing Nuclear Weapons and the Swedish Nuclear Power Inspectorate undertook to solve the most expensive and labour-intensive task, namely the upgrading of the perimeter protection system at KIPT. This required the complete replacement of the existing perimeter system, which extends over several kilometers. In addition, the upgrading had to be carried out while the institute remained in operation: the existing protection system could not be replaced by a new one unless KIPT remained constantly protected. This required the creation of a new protected zone in an area largely occupied by communication equipment, buildings, trees and other objects interfering with the work. All these difficulties required very comprehensive development of the project design as well as a great deal of flexibility during the implementation of the project. These problems were all successfully resolved thanks to a well-functioning project organization, composed of experts from KIPT, JAERI and ANS, involving the highly qualified Swedish technical experts who played a leading role. In the framework of implementation of the
Computability, complexity, and languages fundamentals of theoretical computer science
Davis, Martin D; Rheinboldt, Werner
1983-01-01
Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science provides an introduction to the various aspects of theoretical computer science. Theoretical computer science is the mathematical study of models of computation. This text is composed of five parts encompassing 17 chapters, and begins with an introduction to the use of proofs in mathematics and the development of computability theory in the context of an extremely simple abstract programming language. The succeeding parts demonstrate the performance of the abstract programming language using a macro expansion
Computational Complexity of Combinatorial Surfaces
Vegter, Gert; Yap, Chee K.
1990-01-01
We investigate the computational problems associated with combinatorial surfaces. Specifically, we present an algorithm (based on the Brahana-Dehn-Heegaard approach) for transforming the polygonal schema of a closed triangulated surface into its canonical form in O(n log n) time, where n is the
Metasynthetic computing and engineering of complex systems
Cao, Longbing
2015-01-01
Provides a comprehensive overview and introduction to the concepts, methodologies, analysis, design and applications of metasynthetic computing and engineering. The author: Presents an overview of complex systems, especially open complex giant systems such as the Internet, complex behavioural and social problems, and actionable knowledge discovery and delivery in the big data era. Discusses ubiquitous intelligence in complex systems, including human intelligence, domain intelligence, social intelligence, network intelligence, data intelligence and machine intelligence, and their synergy thro
Cloud Computing for Complex Performance Codes.
Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-02-01
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
Bioinspired computation in combinatorial optimization: algorithms and their computational complexity
Neumann, Frank; Witt, Carsten
2012-01-01
Bioinspired computation methods, such as evolutionary algorithms and ant colony optimization, are being applied successfully to complex engineering and combinatorial optimization problems, and it is very important that we understand the computational complexity of these algorithms. Classical single-objective optimization is examined first. The authors then investigate the computational complexity of bioinspired computation applied to multiobjective variants of the considered combinatorial optimization problems, and in particular show how multiobjective optimization can help to speed up bioinspired computation for single-objective optimization problems. The tutorial is based on a book written by the authors with the same title. Further information about the book can be found at www.bioinspiredcomputation.com.
Software Accelerates Computing Time for Complex Math
2014-01-01
Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphic processing unit (GPU) technology- traditionally used for computer video games-to develop high-computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.
Computational Modeling of Complex Protein Activity Networks
Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude
2017-01-01
Because of the numerous interacting entities, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a
Unified Computational Intelligence for Complex Systems
Seiffertt, John
2010-01-01
Computational intelligence encompasses a wide variety of techniques that allow computation to learn, to adapt, and to seek. That is, they may be designed to learn information without explicit programming regarding the nature of the content to be retained, they may be imbued with the functionality to adapt to maintain their course within a complex and unpredictably changing environment, and they may help us seek out truths about our own dynamics and lives through their inclusion in complex system modeling. These capabilities place our ability to compute in a category apart from our ability to e
Numerical studies of the flux-to-current ratio method in the KIPT neutron source facility
Cao, Y.; Gohar, Y.; Zhong, Z.
2013-01-01
The reactivity of a subcritical assembly has to be monitored continuously in order to assure its safe operation. In this paper, the flux-to-current ratio method has been studied as an approach to provide an on-line reactivity measurement of the subcritical system. Monte Carlo numerical simulations have been performed using the KIPT neutron source facility model. It is found that the reactivity obtained from the flux-to-current ratio method is sensitive to the detector position in the subcritical assembly. However, if multiple detectors are located about 12 cm above the graphite reflector and 54 cm radially, the technique is shown to be very accurate in determining the keff of this facility in the range of 0.75 to 0.975. (authors)
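The flux-to-current ratio method can be illustrated with a minimal sketch: in a source-driven subcritical assembly, the detector response per unit beam current is proportional to the source multiplication 1/(1 − keff), so keff at a new state can be inferred from a single calibrated reference state. The function name and all numbers below are hypothetical illustrations, not values from the study.

```python
# Sketch of the flux-to-current ratio method for on-line reactivity
# monitoring of a subcritical assembly.  Assumption: the detector count
# rate per unit beam current R is proportional to the source
# multiplication M = 1/(1 - k_eff), so one calibrated reference state
# (R_ref, k_ref) fixes the proportionality constant.

def keff_from_flux_to_current(R, R_ref, k_ref):
    """Infer k_eff from the flux-to-current ratio R, given a reference
    state with known ratio R_ref and known multiplication factor k_ref."""
    # R / R_ref = M / M_ref = (1 - k_ref) / (1 - k_eff)
    return 1.0 - (1.0 - k_ref) * (R_ref / R)

# Hypothetical numbers: calibration at k_ref = 0.95; fuel burnup lowers
# the observed ratio, signalling lost reactivity.
k = keff_from_flux_to_current(R=80.0, R_ref=100.0, k_ref=0.95)
print(round(k, 4))  # 0.9375
```

The sensitivity to detector position noted in the abstract enters through the proportionality assumption: spatial flux-shape changes break the simple ratio unless detectors sit where the shape is stable.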
Dose Consumptions of NSC KIPT Personnel during the Work that Includes Uranium
Kurilo, Yu.P.; Mazilov, A.V.; Razsukovanny, B.N.
2007-01-01
The analysis of mean annual and maximum annual uranium concentrations in the air of NSC KIPT working premises is carried out in this paper; the upper limit of possible external radiation of category A personnel engaged in work with uranium from 1961 to 2003 has been calculated. The numerical values of the effective doses of internal and external radiation of personnel are determined on the basis of the data acquired. It is shown that, despite violations of the non-excess principle in some years, the mean value of total (external + internal) radiation during the given period did not exceed the effective dose limit established in the Radiation Safety Norms of Ukraine, which is equal to 20 mSv per year
Computational Complexity and Human Decision-Making.
Bossaerts, Peter; Murawski, Carsten
2017-12-01
The rationality principle postulates that decision-makers always choose the best action available to them. It underlies most modern theories of decision-making. The principle does not take into account the difficulty of finding the best option. Here, we propose that computational complexity theory (CCT) provides a framework for defining and quantifying the difficulty of decisions. We review evidence showing that human decision-making is affected by computational complexity. Building on this evidence, we argue that most models of decision-making, and metacognition, are intractable from a computational perspective. To be plausible, future theories of decision-making will need to take into account both the resources required for implementing the computations implied by the theory, and the resource constraints imposed on the decision-maker by biology. Copyright © 2017 Elsevier Ltd. All rights reserved.
Computational error and complexity in science and engineering computational error and complexity
Lakshmikantham, Vangipuram; Chui, Charles K; Chui, Charles K
2005-01-01
The book "Computational Error and Complexity in Science and Engineering" pervades all the science and engineering disciplines where computation occurs. Scientific and engineering computation happens to be the interface between the mathematical model/problem and the real-world application. One needs to obtain good-quality numerical values for any real-world implementation; mathematical symbols alone are of no use to engineers/technologists. The computational complexity of the numerical method used to solve the mathematical model, computed along with the solution, tells us how much computational effort has been spent to achieve that quality of result. Anyone who wants a specified physical problem to be solved has every right to know the quality of the solution as well as the resources spent for the solution. The computed error together with the complexity provides the scientifically convincing answer to these questions. Specifically some of the disciplines in which the book w...
Electron accelerator shielding design of KIPT neutron source facility
Zhong, Zhao Peng; Gohar, Yousry [Argonne National Laboratory, Argonne (United States)
2016-06-15
The Argonne National Laboratory of the United States and the Kharkov Institute of Physics and Technology of the Ukraine have been collaborating on the design, development and construction of a neutron source facility at Kharkov Institute of Physics and Technology utilizing an electron-accelerator-driven subcritical assembly. The electron beam power is 100 kW using 100-MeV electrons. The facility was designed to perform basic and applied nuclear research, produce medical isotopes, and train nuclear specialists. The biological shield of the accelerator building was designed to reduce the biological dose to less than 5.0e-03 mSv/h during operation. The main source of the biological dose for the accelerator building is the photons and neutrons generated from different interactions of leaked electrons from the electron gun and the accelerator sections with the surrounding components and materials. The Monte Carlo N-particle extended code (MCNPX) was used for the shielding calculations because of its capability to perform electron-, photon-, and neutron-coupled transport simulations. The photon dose was tallied using the MCNPX calculation, starting with the leaked electrons. However, it is difficult to accurately tally the neutron dose directly from the leaked electrons. The neutron yield per electron from the interactions with the surrounding components is very small, ∼0.01 neutron for 100-MeV electron and even smaller for lower-energy electrons. This causes difficulties for the Monte Carlo analyses and consumes tremendous computation resources for tallying the neutron dose outside the shield boundary with an acceptable accuracy. To avoid these difficulties, the SOURCE and TALLYX user subroutines of MCNPX were utilized for this study. The generated neutrons were banked, together with all related parameters, for a subsequent MCNPX calculation to obtain the neutron dose. The weight windows variance reduction technique was also utilized for both neutron and photon dose
Computation of the Complex Probability Function
Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ledwith, Patrick John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-22
The complex probability function is important in many areas of physics, and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the n^th degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements on the Gauss-Hermite quadrature for the complex probability function.
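The Gauss-Hermite approach described above can be sketched directly: the complex probability (Faddeeva) function w(z) = (i/π) ∫ e^(−t²)/(z − t) dt for Im z > 0 is approximated by replacing the integral with an n-point Gauss-Hermite rule, giving w(z) ≈ (i/π) Σ w_k/(z − t_k). The node count and test point are illustrative; accuracy degrades near the real axis, which is the kind of shortcoming the report discusses.

```python
import math
import numpy as np

def faddeeva_gauss_hermite(z, n=64):
    """Approximate the complex probability function
    w(z) = (i/pi) * integral of exp(-t^2) / (z - t) dt   (Im z > 0)
    by an n-point Gauss-Hermite quadrature rule."""
    t, wts = np.polynomial.hermite.hermgauss(n)  # roots and weights of H_n
    return 1j / math.pi * np.sum(wts / (z - t))

# On the imaginary axis the exact value is known: w(iy) = e^(y^2) erfc(y).
approx = faddeeva_gauss_hermite(1j)
exact = math.e * math.erfc(1.0)
print(abs(approx - exact))  # small for points well away from the real axis
```

The same sum evaluated close to the real axis converges poorly, which is why practical codes switch to other representations there.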
Odeychuk, N.P.; Zelensky, V.F.; Gurin, V.A.; Konotop, Yu.F.
2000-01-01
Data on NSC KIPT developments of the absorber composites B4C-PyC and B4C-SiC are reported. Results of pre-reactor studies and reactor tests of the absorber composites developed are given. It is shown that the B4C-PyC composites have a high radiation resistance at temperatures up to 1,250 deg C. (author)
Computing complex Airy functions by numerical quadrature
A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)
2001-01-01
Integral representations are considered of solutions of the Airy differential equation w'' − zw = 0 for computing Airy functions for complex values of z. In a first method, contour integral representations of the Airy functions are written as non-oscillating
Production of medical radioactive isotopes using KIPT electron driven subcritical facility
Talamo, Alberto; Gohar, Yousry
2008-01-01
Kharkov Institute of Physics and Technology (KIPT) of Ukraine in collaboration with Argonne National Laboratory (ANL) has a plan to construct an electron accelerator driven subcritical assembly. One of the facility objectives is the production of medical radioactive isotopes. This paper presents the ANL collaborative work performed for characterizing the facility performance for producing medical radioactive isotopes. First, a preliminary assessment was performed without including the self-shielding effect of the irradiated samples. Then, more detailed investigation was carried out including the self-shielding effect, which defined the sample size and location for producing each medical isotope. In the first part, the reaction rates were calculated as the multiplication of the cross section with the unperturbed neutron flux of the facility. Over fifty isotopes have been considered and all transmutation channels are used including (n, γ), (n, 2n), (n, p), and (γ, n). In the second part, the parent isotopes with high reaction rate were explicitly modeled in the calculations. Four irradiation locations were considered in the analyses to study the medical isotope production rate. The results show the self-shielding effect not only reduces the specific activity but it also changes the irradiation location that maximizes the specific activity. The axial and radial distributions of the parent capture rates have been examined to define the irradiation sample size of each parent isotope
Complex cellular logic computation using ribocomputing devices.
Green, Alexander A; Kim, Jongmin; Ma, Duo; Silver, Pamela A; Collins, James J; Yin, Peng
2017-08-03
Synthetic biology aims to develop engineering-driven approaches to the programming of cellular functions that could yield transformative technologies. Synthetic gene circuits that combine DNA, protein, and RNA components have demonstrated a range of functions such as bistability, oscillation, feedback, and logic capabilities. However, it remains challenging to scale up these circuits owing to the limited number of designable, orthogonal, high-performance parts, the empirical and often tedious composition rules, and the requirements for substantial resources for encoding and operation. Here, we report a strategy for constructing RNA-only nanodevices to evaluate complex logic in living cells. Our 'ribocomputing' systems are composed of de-novo-designed parts and operate through predictable and designable base-pairing rules, allowing the effective in silico design of computing devices with prescribed configurations and functions in complex cellular environments. These devices operate at the post-transcriptional level and use an extended RNA transcript to co-localize all circuit sensing, computation, signal transduction, and output elements in the same self-assembled molecular complex, which reduces diffusion-mediated signal losses, lowers metabolic cost, and improves circuit reliability. We demonstrate that ribocomputing devices in Escherichia coli can evaluate two-input logic with a dynamic range up to 900-fold and scale them to four-input AND, six-input OR, and a complex 12-input expression (A1 AND A2 AND NOT A1*) OR (B1 AND B2 AND NOT B2*) OR (C1 AND C2) OR (D1 AND D2) OR (E1 AND E2). Successful operation of ribocomputing devices based on programmable RNA interactions suggests that systems employing the same design principles could be implemented in other host organisms or in extracellular settings.
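The 12-input expression quoted above is ordinary Boolean logic, so its truth table can be checked directly. A sketch (the names A1s and B2s stand in for the starred antisense species A1* and B2*):

```python
from itertools import product

def ribologic(A1, A2, A1s, B1, B2, B2s, C1, C2, D1, D2, E1, E2):
    """The 12-input expression evaluated by the ribocomputing device:
    (A1 AND A2 AND NOT A1*) OR (B1 AND B2 AND NOT B2*) OR
    (C1 AND C2) OR (D1 AND D2) OR (E1 AND E2)."""
    return bool((A1 and A2 and not A1s) or (B1 and B2 and not B2s)
                or (C1 and C2) or (D1 and D2) or (E1 and E2))

# The device switches ON when, e.g., A1 and A2 are present and A1* is absent...
print(ribologic(1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0))  # True
# ...and the starred input vetoes its own clause, as in a NOT gate.
print(ribologic(1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0))  # False
# The full 2^12 truth table is cheap to enumerate in silico.
on_states = sum(ribologic(*bits) for bits in product((0, 1), repeat=12))
```

In the actual devices each AND clause is a toehold-switch pair and each NOT input is an antisense strand; the Python model only captures the logic level, not the RNA kinetics.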
A complex network approach to cloud computing
Travieso, Gonzalo; Ruggiero, Carlos Antônio; Bruno, Odemir Martinez; Costa, Luciano da Fontoura
2016-01-01
Cloud computing has become an important means to speed up computing. One problem that heavily influences the performance of such systems is the choice of the nodes that act as servers responsible for executing the clients' tasks. In this article we report how complex networks can be used to model this problem. More specifically, we investigate the processing performance of cloud systems underlaid by Erdős–Rényi (ER) and Barabási–Albert (BA) topologies containing two servers. Cloud networks involving two communities, not necessarily of the same size, are also considered in our analysis. The performance of each configuration is quantified in terms of the cost of communication between the client and the nearest server, and the balance of the distribution of tasks between the two servers. Regarding the latter, the ER topology provides better performance than the BA for smaller average degrees, and the opposite behaviour for larger average degrees. With respect to cost, smaller values are found in the BA topology irrespective of the average degree. In addition, we also verified that it is easier to find good servers in ER than in BA networks. Surprisingly, balance and cost are not strongly affected by the presence of communities. However, for a well-defined community network, we found that it is important to assign each server to a different community so as to achieve better performance. (paper: interdisciplinary statistical mechanics)
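The two metrics described, communication cost to the nearest server and task balance between the two servers, can be sketched on a small Erdős–Rényi graph with only the standard library. Graph size, edge probability, and server placement below are arbitrary illustrations, not the parameters of the study:

```python
import random
from collections import deque

def erdos_renyi(n, p, seed=0):
    """Adjacency lists of a G(n, p) random graph."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def cost_and_balance(adj, servers):
    """Multi-source BFS: each client is served by its nearest server.
    Returns (mean hop distance to the nearest server, tasks per server)."""
    dist, owner = {}, {}
    queue = deque()
    for s in servers:
        dist[s], owner[s] = 0, s
        queue.append(s)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:          # first visit = shortest distance
                dist[v], owner[v] = dist[u] + 1, owner[u]
                queue.append(v)
    load = {s: sum(1 for v in owner if owner[v] == s) for s in servers}
    mean_cost = sum(dist.values()) / len(dist)
    return mean_cost, load

adj = erdos_renyi(n=100, p=0.08, seed=42)
cost, load = cost_and_balance(adj, servers=(0, 1))
print(cost, load)  # mean hops to the nearest server; tasks per server
```

Swapping the graph generator for a preferential-attachment one would reproduce the ER-versus-BA comparison in the abstract.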
Probabilistic data integration and computational complexity
Hansen, T. M.; Cordua, K. S.; Mosegaard, K.
2016-12-01
Inverse problems in Earth Sciences typically refer to the problem of inferring information about properties of the Earth from observations of geophysical data (the result of nature's solution to the `forward' problem). This problem can be formulated more generally as a problem of `integration of information'. A probabilistic formulation of data integration is in principle simple: if all available information (from e.g. geology, geophysics, remote sensing, chemistry…) can be quantified probabilistically, then different algorithms exist that allow solving the data integration problem either through an analytical description of the combined probability function, or by sampling the probability function. In practice, however, probabilistic data integration may not be easy to apply successfully. This may be related to the use of sampling methods, which are known to be computationally costly. But another source of computational complexity is related to how the individual types of information are quantified. In one case, a data integration problem is demonstrated where the goal is to determine the existence of buried channels in Denmark, based on multiple sources of geo-information. Due to one type of information being too informative (and hence conflicting), this leads to a difficult sampling problem with unrealistic uncertainty. Resolving this conflict prior to data integration leads to an easy data integration problem, with no biases. In another case it is demonstrated how imperfections in the description of the geophysical forward model (related to solving the wave equation) can lead to a difficult data integration problem, with severe bias in the results. If the modeling error is accounted for, the data integration problem becomes relatively easy, with no apparent biases. Both examples demonstrate that biased information can have a dramatic effect on the computational efficiency of solving a data integration problem and lead to biased results, and under
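When each source of information about a single Earth property can be summarized as a Gaussian, the "analytical description of the combined probability function" mentioned above has a closed form: the normalized product of two Gaussians is again Gaussian, with precision-weighted mean. A minimal sketch (the depth numbers are purely illustrative):

```python
def combine_gaussians(m1, s1, m2, s2):
    """Conjunction of two independent Gaussian states of information,
    N(m1, s1^2) and N(m2, s2^2): the normalized product is Gaussian with
    precision (1/s^2) equal to the sum of the individual precisions."""
    p1, p2 = 1.0 / s1**2, 1.0 / s2**2   # precisions of each source
    var = 1.0 / (p1 + p2)               # combined variance
    mean = var * (p1 * m1 + p2 * m2)    # precision-weighted mean
    return mean, var**0.5

# Illustrative: geological prior says a channel base at 40 +/- 10 m,
# a geophysical estimate says 30 +/- 5 m.
mean, std = combine_gaussians(40.0, 10.0, 30.0, 5.0)
print(round(mean, 2), round(std, 2))  # 32.0 4.47
```

If the two sources conflict strongly (means many standard deviations apart), this combined state becomes overconfident, which is the "too informative" pathology the Denmark example describes; non-Gaussian information generally forces the sampling route instead.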
On the complexity of computing two nonlinearity measures
Find, Magnus Gausdal
2014-01-01
We study the computational complexity of two Boolean nonlinearity measures: the nonlinearity and the multiplicative complexity. We show that if one-way functions exist, no algorithm can compute the multiplicative complexity in time 2^O(n) given the truth table of length 2^n; in fact, under the same ...
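Of the two measures, the nonlinearity is the computationally easy one: it follows from the Walsh-Hadamard transform of the truth table in O(n·2^n) time, as in this sketch:

```python
def nonlinearity(truth_table):
    """Nonlinearity of a Boolean function on n variables: the Hamming
    distance to the closest affine function, computed as
    2^(n-1) - max|W_f|/2 via the fast Walsh-Hadamard transform."""
    size = len(truth_table)                   # must be 2^n
    w = [1 - 2 * bit for bit in truth_table]  # sign form (-1)^f(x)
    step = 1
    while step < size:                        # in-place butterfly passes
        for i in range(0, size, 2 * step):
            for j in range(i, i + step):
                a, b = w[j], w[j + step]
                w[j], w[j + step] = a + b, a - b
        step *= 2
    return size // 2 - max(abs(c) for c in w) // 2

print(nonlinearity([0, 0, 1, 1]))  # 0: f(x1, x2) = x1 is affine
print(nonlinearity([0, 0, 0, 1]))  # 1: the two-variable AND is bent
```

No comparably fast procedure is known for the multiplicative complexity, which is precisely the asymmetry the abstract's conditional lower bound formalizes.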
Algebraic computability and enumeration models recursion theory and descriptive complexity
Nourani, Cyrus F
2016-01-01
This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas of pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Functorial model computability is presented with algebraic trees realizing intuitionistic type...
[The Psychomat computer complex for psychophysiologic studies].
Matveev, E V; Nadezhdin, D S; Shemsudov, A I; Kalinin, A V
1991-01-01
The authors analyze the principles of the design of a computerized psychophysiological system for universal use. They show the effectiveness of using computer technology as a combination of the universal computation and control capabilities of a personal computer equipped with problem-oriented specialized facilities for stimulus presentation and detection of the test subject's reactions. They define the hardware and software configuration of the microcomputer psychophysiological system "Psychomat", describe its functional possibilities and basic medico-technical characteristics, and review organizational issues of the maintenance of its full-scale production.
Computational Complexity of Bosons in Linear Networks
2017-03-01
...is between one and two orders of magnitude more efficient than current heralded multiphoton sources based on spontaneous parametric downconversion... expected to perform tasks intractable for a classical computer, yet requiring minimal non-classical resources as compared to full-scale quantum computers... implementations to date employed sources based on inefficient processes (spontaneous parametric downconversion) that only simulate heralded single
International Symposium on Complex Computing-Networks
Sevgi, L; CCN2005; Complex computing networks: Brain-like and wave-oriented electrodynamic algorithms
2006-01-01
This book uniquely combines new advances in electromagnetic theory and circuits-and-systems theory, integrating both fields with regard to computational aspects of common interest. Emphasized subjects are those methods which mimic brain-like and electrodynamic behaviour; among these are cellular neural networks, chaos and chaotic dynamics, attractor-based computation and stream ciphers. The book contains carefully selected contributions from the Symposium CCN2005. Pictures from the bestowal of Honorary Doctorate degrees to Leon O. Chua and Leopold B. Felsen are included.
Computing Hypercrossed Complex Pairings in Digital Images
Simge Öztunç
2013-01-01
We consider an additive group structure in digital images and introduce the commutator in digital images. Then we calculate the hypercrossed complex pairings, which generate a normal subgroup, in dimension 2 and in dimension 3, using 8-adjacency and 26-adjacency.
Computer simulation of complexity in plasmas
Hayashi, Takaya; Sato, Tetsuya
1998-01-01
By making a comprehensive comparative study of many self-organizing phenomena occurring in magnetohydrodynamics and kinetic plasmas, we came up with a hypothetical grand view of self-organization. This assertion is confirmed by a recent computer simulation for a broader science field, specifically, the structure formation of short polymer chains, where the nature of the interaction is completely different from that of plasmas. It is found that the formation of the global orientation order proceeds stepwise. (author)
Pentacoordinated organoaluminum complexes: A computational insight
Milione, Stefano
2012-12-24
The geometry and the electronic structure of a series of organometallic pentacoordinated aluminum complexes bearing tri- or tetradentate N,O-based ligands have been investigated with theoretical methods. The BP86, B3LYP, and M06 functionals reproduce with low accuracy the geometry of the selected complexes. The worst result was obtained for the complex bearing a Schiff base ligand with a pendant donor arm, aeimpAlMe2 (aeimp = N-2-(dimethylamino)ethyl-(3,5-di-tert-butyl)salicylaldimine). In particular, the Al-N(amine) bond distance was unacceptably overestimated. This failure suggests a reasonably flat potential energy surface with respect to Al-N elongation, indicating a weak interaction with probably a strong component of dispersion forces. MP2 and M06-2X methods led to an acceptable value for the same Al-N distance. Better results were obtained with the addition of the dispersion correction to the hybrid B3LYP functional (B3LYP-D). Natural bond orbital analysis revealed that the contribution of the d orbital to the bonding is very small, in agreement with several previous studies of hypervalent molecules. The donation of electronic charge from the ligand to the metal mainly consists of the interactions of the lone pairs on the donor atoms of the ligands with the s and p valence orbitals of the aluminum. The covalent bonding of the Al with the coordinated ligand is weak, and the interactions between Al and the coordinated ligands are largely ionic. To further explore the geometrical and electronic factors affecting the formation of these pentacoordinated aluminum complexes, we considered the tetracoordinated complex impAlMe2 (imp = N-isopropyl-(3,5-di-tert-butyl)salicylaldimine), analogous to aeimpAlMe2, and we investigated the potential energy surface around the aluminum atom corresponding to the approach of NMe3 to the metal center. At the MP2/6-31G(d) level of theory, a weak attraction was revealed only when NMe3 heads toward the metal center through the directions
Pentacoordinated organoaluminum complexes: A computational insight
Milione, Stefano; Milano, Giuseppe; Cavallo, Luigi
2012-01-01
The geometry and the electronic structure of a series of organometallic pentacoordinated aluminum complexes bearing tri- or tetradentate N,O-based ligands have been investigated with theoretical methods. The BP86, B3LYP, and M06 functionals reproduce the geometry of the selected complexes with low accuracy. The worst result was obtained for the complex bearing a Schiff base ligand with a pendant donor arm, aeimpAlMe2 (aeimp = N-2-(dimethylamino)ethyl-(3,5-di-tert-butyl)salicylaldimine). In particular, the Al-Namine bond distance was unacceptably overestimated. This failure suggests a reasonably flat potential energy surface with respect to Al-N elongation, indicating a weak interaction with probably a strong dispersion component. The MP2 and M06-2X methods led to an acceptable value for the same Al-N distance. Better results were obtained by adding a dispersion correction to the hybrid B3LYP functional (B3LYP-D). Natural bond orbital analysis revealed that the contribution of the d orbitals to the bonding is very small, in agreement with several previous studies of hypervalent molecules. The donation of electronic charge from the ligand to the metal consists mainly of interactions of the lone pairs on the donor atoms of the ligands with the s and p valence orbitals of the aluminum. The covalent bonding of the Al with the coordinated ligand is weak, and the interactions between Al and the coordinated ligands are largely ionic. To further explore the geometrical and electronic factors affecting the formation of these pentacoordinated aluminum complexes, we considered the tetracoordinated complex impAlMe2 (imp = N-isopropyl-(3,5-di-tert-butyl)salicylaldimine), analogous to aeimpAlMe2, and we investigated the potential energy surface around the aluminum atom corresponding to the approach of NMe3 to the metal center. At the MP2/6-31G(d) level of theory, a weak attraction was revealed only when NMe3 heads toward the metal center through the directions
Large-scale computing techniques for complex system simulations
Dubitzky, Werner; Schott, Bernard
2012-01-01
Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulation applications. The intention is to identify new research directions in this field and
Computations, Complexity, Experiments, and the World Outside Physics
Kadanoff, L.P
2009-01-01
Computer Models in the Sciences and Social Sciences. 1. Simulation and Prediction in Complex Systems: the Good, the Bad and the Awful. This lecture deals with the history of large-scale computer modeling, mostly in the context of the U.S. Department of Energy's sponsorship of modeling for weapons development and innovation in energy sources. 2. Complexity: Making a Splash-Breaking a Neck - The Making of Complexity in Physical Systems. For ages thinkers have been asking how complexity arises. The laws of physics are very simple. How come we are so complex? This lecture approaches this question by asking how complexity arises in physical fluids. 3. Forrester et al.: Social and Biological Model-Making. The partial collapse of the world's economy has raised the question of whether we could improve the performance of economic and social systems by a major effort on creating understanding via large-scale computer models. (author)
Computer simulations of dendrimer-polyelectrolyte complexes.
Pandav, Gunja; Ganesan, Venkat
2014-08-28
We carry out a systematic analysis of static properties of the clusters formed by complexation between charged dendrimers and linear polyelectrolyte (LPE) chains in a dilute solution under good solvent conditions. We use single chain in mean-field simulations and analyze the structure of the clusters through radial distribution functions of the dendrimer, cluster size, and charge distributions. The effects of LPE length, charge ratio between LPE and dendrimer, the influence of salt concentration, and the dendrimer generation number are examined. Systems with short LPEs showed a reduced propensity for aggregation with dendrimers, leading to formation of smaller clusters. In contrast, larger dendrimers and longer LPEs lead to larger clusters with significant bridging. Increasing salt concentration was seen to reduce aggregation between dendrimers as a result of screening of electrostatic interactions. Generally, maximum complexation was observed in systems with an equal amount of net dendrimer and LPE charges, whereas either excess LPE or dendrimer concentrations resulted in reduced clustering between dendrimers.
Introduction to the LaRC central scientific computing complex
Shoosmith, John N.
1993-01-01
The computers and associated equipment that make up the Central Scientific Computing Complex of the Langley Research Center are briefly described. The electronic networks that provide access to the various components of the complex, and a number of areas that can be used by Langley and contractor staff for special applications (scientific visualization, image processing, software engineering, and grid generation), are also described. Flight simulation facilities that use the central computers are described. Management of the complex, procedures for its use, and available services and resources are discussed. This document is intended for new users of the complex, for current users who wish to keep apprised of changes, and for visitors who need to understand the role of central scientific computers at Langley.
Applications of Computer Technology in Complex Craniofacial Reconstruction
Kristopher M. Day, MD
2018-03-01
Conclusion: Modern 3D technology allows the surgeon to better analyze complex craniofacial deformities, precisely plan surgical correction with computer simulation of results, customize osteotomies, plan distractions, and print 3DPCI, as needed. The use of advanced 3D computer technology can be applied safely and may improve aesthetic and functional outcomes after complex craniofacial reconstruction. These techniques warrant further study and may be reproducible in various centers of care.
Characterizations and computational complexity of systolic trellis automata
Ibarra, O H; Kim, S M
1984-03-01
Systolic trellis automata are simple models for VLSI. The authors characterize the computing power of these models in terms of Turing machines. The characterizations are useful in proving new results as well as in giving simpler proofs of known results. They also derive lower and upper bounds on the computational complexity of the models. 18 references.
Computer aided operation of complex systems
Goodstein, L.P.
1985-09-01
Advanced technology is making industrial systems more highly automated, so that they do not rely on human intervention for the control of normally planned and/or predicted situations. Thus the role of the operator has shifted from manual controller to systems manager and supervisory controller. At the same time, the use of advanced information technology in the control room, and its potential impact on human-machine capabilities, places additional demands on the designer. This report deals with work carried out to describe the plant-operator relationship in order to systematize the design and evaluation of suitable information systems in the control room. This design process starts with the control requirements of the plant and transforms them into corresponding sets of decision-making tasks with an appropriate allocation of responsibilities between computer and operator. To make this cooperation more effective, appropriate means of information display and access are identified. The conceptual work has been supported by experimental studies on a small-scale simulator. (author)
Computational complexity of the landscape II-Cosmological considerations
Denef, Frederik; Douglas, Michael R.; Greene, Brian; Zukowski, Claire
2018-05-01
We propose a new approach for multiverse analysis based on computational complexity, which leads to a new family of "computational" measure factors. By defining a cosmology as a space-time containing a vacuum with specified properties (for example small cosmological constant) together with rules for how time evolution will produce the vacuum, we can associate global time in a multiverse with clock time on a supercomputer which simulates it. We argue for a principle of "limited computational complexity" governing early universe dynamics as simulated by this supercomputer, which translates to a global measure for regulating the infinities of eternal inflation. The rules for time evolution can be thought of as a search algorithm, whose details should be constrained by a stronger principle of "minimal computational complexity". Unlike previously studied global measures, ours avoids standard equilibrium considerations and the well-known problems of Boltzmann Brains and the youngness paradox. We also give various definitions of the computational complexity of a cosmology, and argue that there are only a few natural complexity classes.
High-power plasma dynamic systems of quasi-stationary type in IPP NSC KIPT: results and prospects
Solyakov, D.G.
2015-01-01
This paper briefly reviews the main experimental results of investigations of high-power quasi-stationary plasma dynamic systems at the IPP NSC KIPT. The experiments showed that, to obtain accelerated plasma streams with a high energy in quasi-stationary modes, all conditions on the accelerating-channel boundary should be controlled independently. As a result of optimizing the modes of operation of all QSPA active elements, a quasi-stationary plasma flow in the channel lasting 480 μs at a discharge duration of 550 μs was obtained. The plasma stream velocity was close to the theoretical limit for the present experimental conditions. Plasma streams with a maximum velocity up to 4.2·10^7 cm/s and a total energy content in the stream of 0.4...0.6 MJ were obtained. The main properties of compression-zone formation in plasma streams generated by a magneto-plasma compressor in quasi-stationary modes were investigated. The experiments showed that the initial conditions, namely the residual pressure in the vacuum chamber, strongly influence the plasma density in the compression zone. Compressive plasma streams with a density of (2...4)·10^18 cm^-3 lasting 20...25 μs at a discharge duration of 10 μs were obtained. This value of plasma density is close to the theoretical limit for the present experimental conditions
ANS main control complex three-dimensional computer model development
Cleaves, J.E.; Fletcher, W.M.
1993-01-01
A three-dimensional (3-D) computer model of the Advanced Neutron Source (ANS) main control complex is being developed. The main control complex includes the main control room, the technical support center, the materials irradiation control room, computer equipment rooms, communications equipment rooms, cable-spreading rooms, and some support offices and breakroom facilities. The model will be used to provide facility designers and operations personnel with capabilities for fit-up/interference analysis, visual ''walk-throughs'' for optimizing maintain-ability, and human factors and operability analyses. It will be used to determine performance design characteristics, to generate construction drawings, and to integrate control room layout, equipment mounting, grounding equipment, electrical cabling, and utility services into ANS building designs. This paper describes the development of the initial phase of the 3-D computer model for the ANS main control complex and plans for its development and use
Complexity estimates based on integral transforms induced by computational units
Kůrková, Věra
2012-01-01
Vol. 33, September (2012), pp. 160-167. ISSN 0893-6080. R&D Projects: GA ČR GAP202/11/1368. Institutional research plan: CEZ:AV0Z10300504. Institutional support: RVO:67985807. Keywords: neural networks * estimates of model complexity * approximation from a dictionary * integral transforms * norms induced by computational units. Subject RIV: IN - Informatics, Computer Science. Impact factor: 1.927, year: 2012
Complexity vs energy: theory of computation and theoretical physics
Manin, Y I
2014-01-01
This paper is a survey based upon the talk at the satellite QQQ conference to ECM6, 3Quantum: Algebra Geometry Information, Tallinn, July 2012. It is dedicated to the analogy between the notions of complexity in theoretical computer science and energy in physics. This analogy is not metaphorical: I describe three precise mathematical contexts, suggested recently, in which mathematics related to (un)computability is inspired by and to a degree reproduces formalisms of statistical physics and quantum field theory.
Statistical screening of input variables in a complex computer code
Krieger, T.J.
1982-01-01
A method is presented for ''statistical screening'' of input variables in a complex computer code. The object is to determine the ''effective'' or important input variables by estimating the relative magnitudes of their associated sensitivity coefficients. This is accomplished by performing a numerical experiment consisting of a relatively small number of computer runs with the code followed by a statistical analysis of the results. A formula for estimating the sensitivity coefficients is derived. Reference is made to an earlier work in which the method was applied to a complex reactor code with good results
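The screening procedure summarized above — a small number of randomized code runs followed by a statistical estimate of the sensitivity coefficients — can be sketched as follows. This is an illustrative reconstruction, not the paper's formula: the screened "code" is a cheap stand-in function with known sensitivities, and a first-order least-squares fit plays the role of the statistical analysis.

```python
import numpy as np

# Hypothetical stand-in for the complex computer code; its true
# sensitivities (5.0, 0.1, 2.0) are known, so the screening can be checked.
def code_output(x):
    return 5.0 * x[0] + 0.1 * x[1] + 2.0 * x[2]

rng = np.random.default_rng(0)
n_runs, n_vars = 30, 3          # far fewer runs than a full sensitivity study
X = rng.uniform(-1, 1, size=(n_runs, n_vars))
y = np.array([code_output(x) for x in X])

# Least-squares fit of a first-order model y ~ a0 + sum_i b_i * x_i;
# the fitted b_i estimate the sensitivity coefficients dy/dx_i.
A = np.hstack([np.ones((n_runs, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
sensitivities = np.abs(coef[1:])
ranking = np.argsort(sensitivities)[::-1]   # most influential variable first
```

Ranking the fitted coefficients by magnitude identifies the "effective" input variables; here variable 0 dominates and variable 1 is screened out as unimportant.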
Automated System for Teaching Computational Complexity of Algorithms Course
Vadim S. Roublev
2017-01-01
This article describes the problems of designing an automated teaching system for the “Computational complexity of algorithms” course. This system should provide students with means to familiarize themselves with a complex mathematical apparatus and to improve their mathematical thinking in the respective area. The article introduces the technique of an algorithm symbol scroll table that allows estimating lower and upper bounds of computational complexity. Further, we introduce a set of theorems that facilitate the analysis in cases when the integer rounding of algorithm parameters is involved and when analyzing the complexity of a sum. At the end, the article introduces a normal system of symbol transformations that both allows one to perform any symbol transformations and simplifies the automated validation of such transformations. The article is published in the authors’ wording.
Exponential rise of dynamical complexity in quantum computing through projections.
Burgarth, Daniel Klaus; Facchi, Paolo; Giovannetti, Vittorio; Nakazato, Hiromichi; Pascazio, Saverio; Yuasa, Kazuya
2014-10-10
The ability of quantum systems to host exponentially complex dynamics has the potential to revolutionize science and technology. Therefore, much effort has been devoted to developing protocols for computation, communication and metrology which exploit this scaling, despite formidable technical difficulties. Here we show that the mere frequent observation of a small part of a quantum system can turn its dynamics from a very simple one into an exponentially complex one, capable of universal quantum computation. After discussing examples, we go on to show that this effect is generally to be expected: almost any quantum dynamics becomes universal once 'observed' as outlined above. Conversely, we show that any complex quantum dynamics can be 'purified' into a simpler one in larger dimensions. We conclude by demonstrating that even local noise can lead to an exponentially complex dynamics.
Computer tomography in complex diagnosis of laryngeal cancer
Savin, A.A.
1999-01-01
To clarify the role of computed tomography in the diagnosis of malignant tumors of the larynx, forty-two patients with suspected laryngeal tumors were examined: 38 men and 4 women aged 41-68 years. X-ray examinations included traditional immediate tomography of the larynx. The main X-ray and computed tomographic signs of laryngeal tumors of different localizations are described. It is shown that the use of computed tomography in the complex diagnosis of laryngeal cancer permits an objective assessment of the tumor, its structure and spread, and of the regional lymph nodes.
Complex system modelling and control through intelligent soft computations
Azar, Ahmad
2015-01-01
The book offers a snapshot of the theories and applications of soft computing in the area of complex systems modeling and control. It presents the most important findings discussed during the 5th International Conference on Modelling, Identification and Control, held in Cairo, from August 31-September 2, 2013. The book consists of twenty-nine selected contributions, which have been thoroughly reviewed and extended before their inclusion in the volume. The different chapters, written by active researchers in the field, report on both current theories and important applications of soft-computing. Besides providing the readers with soft-computing fundamentals, and soft-computing based inductive methodologies/algorithms, the book also discusses key industrial soft-computing applications, as well as multidisciplinary solutions developed for a variety of purposes, like windup control, waste management, security issues, biomedical applications and many others. It is a perfect reference guide for graduate students, r...
Computed tomography of von Meyenburg complex simulating micro-abscesses
Sada, P.N.; Ramakrishna, B.
1994-01-01
A case is presented of a bile duct hamartoma in a 44 year old man being evaluated for abdominal pain. The computed tomography (CT) findings suggested micro-abscesses in the liver and a CT guided tru-cut biopsy showed von Meyenburg complex. 9 refs., 3 figs
Wireless Mobile Computing and its Links to Descriptive Complexity
Wiedermann, Jiří; Pardubská, D.
2008-01-01
Vol. 19, No. 4 (2008), pp. 887-913. ISSN 0129-0541. R&D Projects: GA AV ČR 1ET100300517. Institutional research plan: CEZ:AV0Z10300504. Keywords: alternating Turing machine * simulation * simultaneous time-space complexity * wireless parallel Turing machine. Subject RIV: IN - Informatics, Computer Science. Impact factor: 0.554, year: 2008
Development of Onboard Computer Complex for Russian Segment of ISS
Branets, V.; Brand, G.; Vlasov, R.; Graf, I.; Clubb, J.; Mikrin, E.; Samitov, R.
1998-01-01
This report presents a description of the Onboard Computer Complex (CC) that was developed during 1994-1998 for the Russian Segment of the ISS. The system was developed in cooperation with NASA and ESA. ESA developed a new computation system under the RSC Energia Technical Assignment, called DMS-R. The CC also includes elements developed by Russian experts and organizations. A general architecture of the computer system and the characteristics of its primary elements are described. The system was integrated at RSC Energia with the participation of American and European specialists. The report contains information on software simulators and on the verification and debugging facilities which were developed for both stand-alone and integrated tests and verification. This CC serves as the basis for the Russian Segment Onboard Control Complex on the ISS.
Complex systems relationships between control, communications and computing
2016-01-01
This book gives a wide-ranging description of the many facets of complex dynamic networks and systems within an infrastructure provided by integrated control and supervision: envisioning, design, experimental exploration, and implementation. The theoretical contributions and the case studies presented can reach control goals beyond those of stabilization and output regulation or even of adaptive control. Reporting on work of the Control of Complex Systems (COSY) research program, Complex Systems follows from and expands upon an earlier collection: Control of Complex Systems by introducing novel theoretical techniques for hard-to-control networks and systems. The major common feature of all the superficially diverse contributions encompassed by this book is that of spotting and exploiting possible areas of mutual reinforcement between control, computing and communications. These help readers to achieve not only robust stable plant system operation but also properties such as collective adaptivity, integrity an...
Low Computational Complexity Network Coding For Mobile Networks
Heide, Janus
2012-01-01
Network Coding (NC) is a technique that can provide benefits in many types of networks, some examples from wireless networks are: In relay networks, either the physical or the data link layer, to reduce the number of transmissions. In reliable multicast, to reduce the amount of signaling and enable......-flow coding technique. One of the key challenges of this technique is its inherent computational complexity which can lead to high computational load and energy consumption in particular on the mobile platforms that are the target platform in this work. To increase the coding throughput several...
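As background to the coding technique referenced above, a minimal random linear network coding sketch over GF(2) may help: coded packets are XOR combinations of source packets, and a receiver decodes by Gaussian elimination once it has collected enough independent combinations. This is a generic textbook construction assumed for illustration, not the thesis's specific scheme; the packet contents and counts are made up.

```python
import random

def xor_bytes(a, b):
    """XOR two equal-length byte strings (addition over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets, rng):
    """One coded packet: a random nonzero GF(2) coefficient vector
    plus the XOR of the selected source packets."""
    while True:
        coeffs = [rng.randint(0, 1) for _ in packets]
        if any(coeffs):
            break
    payload = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            payload = xor_bytes(payload, p)
    return coeffs, payload

def decode(coded, n):
    """Gauss-Jordan elimination over GF(2); raises if the received
    packets do not span the n source packets."""
    rows = [(list(c), p) for c, p in coded]
    for col in range(n):
        pivot = next((i for i in range(col, len(rows)) if rows[i][0][col]), None)
        if pivot is None:
            raise ValueError("need more coded packets")
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           xor_bytes(rows[i][1], rows[col][1]))
    return [rows[i][1] for i in range(n)]

rng = random.Random(7)
source = [b"net", b"cod", b"ing"]                 # hypothetical source packets
coded = [encode(source, rng) for _ in range(20)]  # redundant transmissions
recovered = decode(coded, 3)
```

The decoding step is exactly the computational cost the abstract refers to: Gaussian elimination over the coefficient vectors, whose cost grows quickly with generation size, motivating low-complexity variants for mobile platforms.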
Computer modeling of properties of complex molecular systems
Kulkova, E.Yu. [Moscow State University of Technology “STANKIN”, Vadkovsky per., 1, Moscow 101472 (Russian Federation); Khrenova, M.G.; Polyakov, I.V. [Lomonosov Moscow State University, Chemistry Department, Leninskie Gory 1/3, Moscow 119991 (Russian Federation); Nemukhin, A.V. [Lomonosov Moscow State University, Chemistry Department, Leninskie Gory 1/3, Moscow 119991 (Russian Federation); N.M. Emanuel Institute of Biochemical Physics, Russian Academy of Sciences, Kosygina 4, Moscow 119334 (Russian Federation)
2015-03-10
Large molecular aggregates present important examples of strongly nonhomogeneous systems. We apply combined quantum mechanics / molecular mechanics approaches that treat part of the system with quantum-based methods and the rest of the system with conventional force fields. Herein we illustrate these computational approaches with two different examples: (1) large-scale molecular systems mimicking natural photosynthetic centers, and (2) components of prospective solar cells containing titanium dioxide and organic dye molecules. We demonstrate that modern computational tools are capable of predicting the structures and spectra of such complex molecular aggregates.
Current topics in pure and computational complex analysis
Dorff, Michael; Lahiri, Indrajit
2014-01-01
The book contains 13 articles, some of which are survey articles and others research papers. Written by eminent mathematicians, these articles were presented at the International Workshop on Complex Analysis and Its Applications held at Walchand College of Engineering, Sangli. All the contributing authors are actively engaged in research fields related to the topic of the book. The workshop offered a comprehensive exposition of the recent developments in geometric functions theory, planar harmonic mappings, entire and meromorphic functions and their applications, both theoretical and computational. The recent developments in complex analysis and its applications play a crucial role in research in many disciplines.
Molecular computing towards a novel computing architecture for complex problem solving
Chang, Weng-Long
2014-01-01
This textbook introduces a concise approach to the design of molecular algorithms for students or researchers who are interested in dealing with complex problems. Through numerous examples and exercises, you will understand the main difference between molecular circuits and traditional digital circuits in handling the same problem, and you will also learn how to design a molecular algorithm for solving a problem from start to finish. The book starts with an introduction to the computational aspects of digital computers and molecular computing, data representation in molecular computing, molecular operations of molecular computing and number representation in molecular computing, and provides many molecular algorithms to construct the parity generator and the parity checker of error-detection codes in digital communication, to encode integers of different formats, single precision and double precision floating-point numbers, to implement addition and subtraction of unsigned integers, and to construct logic operations...
Analyzing the Implicit Computational Complexity of object-oriented programs
Marion , Jean-Yves; Péchoux , Romain
2008-01-01
A sup-interpretation is a tool which provides upper bounds on the size of the values computed by the function symbols of a program. Sup-interpretations have shown their interest in dealing with the complexity of first-order functional programs. This paper is an attempt to adapt the framework of sup-interpretations to a fragment of object-oriented programs, including loop and while constructs and methods with side effects. We give a criterion, called the brotherly criterion, w...
Computational complexity of symbolic dynamics at the onset of chaos
Lakdawala, Porus
1996-05-01
In a variety of studies of dynamical systems, the edge of order and chaos has been singled out as a region of complexity. It was suggested by Wolfram, on the basis of qualitative behavior of cellular automata, that the computational basis for modeling this region is the universal Turing machine. In this paper, following a suggestion of Crutchfield, we try to show that the Turing machine model may often be too powerful as a computational model to describe the boundary of order and chaos. In particular we study the region of the first accumulation of period doubling in unimodal and bimodal maps of the interval, from the point of view of language theory. We show that in relation to the ``extended'' Chomsky hierarchy, the relevant computational model in the unimodal case is the nested stack automaton or the related indexed languages, while the bimodal case is modeled by the linear bounded automaton or the related context-sensitive languages.
Single photon emission computed tomography in AIDS dementia complex
Pohl, P.; Vogl, G.; Fill, H.; Roessler, H.Z.; Zangerle, R.; Gerstenbrand, F.
1988-01-01
Single photon emission computed tomography (SPECT) studies were performed in AIDS dementia complex using IMP in 12 patients (and HM-PAO in four of these same patients). In all patients, SPECT revealed either multiple or focal uptake defects, the latter corresponding with focal signs or symptoms in all but one case. Computerized tomography showed a diffuse cerebral atrophy in eight of 12 patients, magnetic resonance imaging exhibited changes like atrophy and/or leukoencephalopathy in two of five cases. Our data indicate that both disturbance of cerebral amine metabolism and alteration of local perfusion share in the pathogenesis of AIDS dementia complex. SPECT is an important aid in the diagnosis of AIDS dementia complex and contributes to the understanding of the pathophysiological mechanisms of this disorder
Modeling Cu²⁺-Aβ complexes from computational approaches
Alí-Torres, Jorge [Departamento de Química, Universidad Nacional de Colombia- Sede Bogotá, 111321 (Colombia); Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona, E-mail: Mariona.Sodupe@uab.cat [Departament de Química, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona (Spain)
2015-09-15
Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu²⁺ metal cation with Aβ has been found to interfere with amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu²⁺-Aβ complexes is thus important to gain a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu²⁺-Aβ coordination and to build plausible Cu²⁺-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.
Complex network problems in physics, computer science and biology
Cojocaru, Radu Ionut
There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no such close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that there are some problems which are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well-known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground-state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. We extend this model to the study of a specific case of a spin glass on the Bethe
Computation of 3D form factors in complex environments
Coulon, N.
1989-01-01
The calculation of radiant interchange among opaque surfaces in a complex environment poses the general problem of determining the visible and hidden parts of the environment. In many thermal engineering applications, surfaces are separated by radiatively non-participating media and may be idealized as diffuse emitters and reflectors. Consequently, the net radiant energy fluxes are intimately related to purely geometrical quantities called form factors, which take hidden parts into account: the problem is reduced to form factor evaluation. This paper presents the method developed for the computation of 3D form factors in the finite-element module of the system TRIO, which is a general computer code for thermal and fluid flow analysis. The method is derived from an algorithm devised for synthetic image generation. A comparison is performed with the standard contour integration method, also implemented and suited to convex geometries. Several illustrative examples of finite-element thermal calculations in radiating enclosures are given
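For orientation, a form factor can be estimated by Monte Carlo ray casting in a configuration with a known closed form. This sketch (not the TRIO algorithm described above) uses a differential surface element facing a coaxial parallel disk, for which the analytic form factor is r²/(r² + h²).

```python
import math
import random

def form_factor_mc(radius, height, n_rays=200_000, seed=42):
    """Monte Carlo form factor from a differential element to a coaxial
    parallel disk: cast cosine-weighted rays from the element and count
    the fraction that hit the disk. Analytic answer: r^2 / (r^2 + h^2)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rays):
        # Cosine-weighted hemisphere sampling (concentric disk mapping):
        # with this weighting, the hit fraction directly estimates F.
        u1, u2 = rng.random(), rng.random()
        r = math.sqrt(u1)
        phi = 2 * math.pi * u2
        dx, dy = r * math.cos(phi), r * math.sin(phi)
        dz = math.sqrt(max(0.0, 1.0 - u1))
        if dz <= 0.0:
            continue
        t = height / dz                      # ray-plane intersection distance
        x, y = dx * t, dy * t
        if x * x + y * y <= radius * radius:
            hits += 1
    return hits / n_rays

estimate = form_factor_mc(1.0, 1.0)          # analytic value: 0.5
```

Ray casting generalizes naturally to occluded (hidden-part) geometries, which is why image-synthesis algorithms of the kind mentioned in the abstract carry over to form factor computation.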
The complexity of computing the MCD-estimator
Bernholt, T.; Fischer, Paul
2004-01-01
In modern statistics the robust estimation of parameters is a central problem, i.e., an estimation that is not, or only slightly, affected by outliers in the data. The minimum covariance determinant (MCD) estimator (J. Amer. Statist. Assoc. 79 (1984) 871) is probably one of the most important robust estimators of location and scatter. The complexity of computing the MCD, however, was unknown and generally thought to be exponential even if the dimensionality of the data is fixed. Here we present a polynomial time algorithm for MCD for fixed dimension of the data. In contrast, we show that computing the MCD-estimator is NP-hard if the dimension varies. (C) 2004 Elsevier B.V. All rights reserved.
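The MCD objective can be illustrated in one dimension, where the determinant of the 1x1 covariance matrix is just the variance. The sketch below is an illustrative toy, not the paper's polynomial-time algorithm: it enumerates all subsets of h points and keeps the one with minimal scatter, a naive approach whose cost explodes combinatorially, consistent with the hardness result when the dimension varies.

```python
from itertools import combinations
from statistics import mean

# One-dimensional toy version of the MCD idea (an illustrative sketch, not
# the paper's algorithm): among all size-h subsets of the data, keep the one
# with minimal variance. Robustness comes from the fact that a gross outlier
# inflates the variance of any subset containing it, so it is excluded.

def mcd_subset(data, h):
    def scatter(subset):
        m = mean(subset)
        return sum((x - m) ** 2 for x in subset) / len(subset)
    return min(combinations(data, h), key=scatter)

data = [1.0, 1.1, 0.9, 1.2, 0.8, 10.0]   # five inliers and one gross outlier
best = mcd_subset(data, h=5)
```

The location and scatter estimates are then computed from `best` alone, so the outlier has no influence on them.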
A design of a computer complex including vector processors
Asai, Kiyoshi
1982-12-01
We, members of the Computing Center of the Japan Atomic Energy Research Institute (JAERI), have been engaged for the past six years in research on the adaptability of vector processing to large-scale nuclear codes. The research has been done in collaboration with researchers and engineers of JAERI and a computer manufacturer. In this research, forty large-scale nuclear codes were investigated from the viewpoint of vectorization. Among them, twenty-six codes were actually vectorized and executed. As a result of the investigation, it is now estimated that about seventy percent of nuclear codes and seventy percent of JAERI's total CPU time are highly vectorizable. Based on the data obtained by the investigation, the following are discussed: (1) currently vectorizable CPU time, (2) the necessary number of vector processors, (3) the manpower necessary for vectorization of nuclear codes, (4) the computing speed, memory size, number of parallel I/O paths, and size and speed of the I/O buffer of a vector processor suitable for our applications, and (5) the necessary software and operational policy for use of vector processors; finally, (6) a computer complex including vector processors is presented in this report. (author)
Complex Data Modeling and Computationally Intensive Statistical Methods
Mantovan, Pietro
2010-01-01
Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of this data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statisticians
Multiscale modeling of complex materials phenomenological, theoretical and computational aspects
Trovalusci, Patrizia
2014-01-01
The papers in this volume deal with materials science, theoretical mechanics and experimental and computational techniques at multiple scales, providing a sound base and a framework for many applications which are hitherto treated in a phenomenological sense. The basic principles are formulated of multiscale modeling strategies towards modern complex multiphase materials subjected to various types of mechanical, thermal loadings and environmental effects. The focus is on problems where mechanics is highly coupled with other concurrent physical phenomena. Attention is also focused on the historical origins of multiscale modeling and foundations of continuum mechanics currently adopted to model non-classical continua with substructure, for which internal length scales play a crucial role.
Stochastic equations for complex systems theoretical and computational topics
Bessaih, Hakima
2015-01-01
Mathematical analyses and computational predictions of the behavior of complex systems are needed to effectively deal with weather and climate predictions, for example, and the optimal design of technical processes. Given the random nature of such systems and the recognized relevance of randomness, the equations used to describe such systems usually need to involve stochastics. The basic goal of this book is to introduce the mathematics and application of stochastic equations used for the modeling of complex systems. A first focus is on the introduction to different topics in mathematical analysis. A second focus is on the application of mathematical tools to the analysis of stochastic equations. A third focus is on the development and application of stochastic methods to simulate turbulent flows as seen in reality. This book is primarily oriented towards mathematics and engineering PhD students, young and experienced researchers, and professionals working in the area of stochastic differential equations ...
Exact complexity: The spectral decomposition of intrinsic computation
Crutchfield, James P.; Ellison, Christopher J.; Riechers, Paul M.
2016-01-01
We give exact formulae for a wide family of complexity measures that capture the organization of hidden nonlinear processes. The spectral decomposition of operator-valued functions leads to closed-form expressions involving the full eigenvalue spectrum of the mixed-state presentation of a process's ϵ-machine causal-state dynamic. Measures include correlation functions, power spectra, past-future mutual information, transient and synchronization informations, and many others. As a result, a direct and complete analysis of intrinsic computation is now available for the temporal organization of finitary hidden Markov models and nonlinear dynamical systems with generating partitions and for the spatial organization in one-dimensional systems, including spin systems, cellular automata, and complex materials via chaotic crystallography. - Highlights: • We provide exact, closed-form expressions for a hidden stationary process' intrinsic computation. • These include information measures such as the excess entropy, transient information, and synchronization information and the entropy-rate finite-length approximations. • The method uses an epsilon-machine's mixed-state presentation. • The spectral decomposition of the mixed-state presentation relies on the recent development of meromorphic functional calculus for nondiagonalizable operators.
Coherence and computational complexity of quantifier-free dependence logic formulas
Kontinen, J.; Kontinen, J.; Väänänen, J.
2010-01-01
We study the computational complexity of model checking for quantifier-free dependence logic (D) formulas. We point out three thresholds in the computational complexity: logarithmic space, non-deterministic logarithmic space and non-deterministic polynomial time.
Computer simulations of discharges from a lignite power plant complex
Koukouliou, V.; Horyna, J.; Perez-Sanchez, D.
2008-01-01
This paper describes work carried out within the IAEA EMRAS program NORM working group to test the predictions of three computer models against measured radionuclide concentrations resulting from discharges from a lignite power plant complex. This complex consists of two power plants with a total of five discharge stacks, situated approximately 2-5 kilometres from a city of approximately 10,000 inhabitants. Monthly measurements of mean wind speed and direction, dust loading, and 238U activities in fallout samples, as well as mean annual values of 232Th activity in the nearest city sampling sites, were available for the study. The models used in the study were PC-CREAM (a detailed impact assessment model), and COMPLY and CROM (screening models). In applying the models to this scenario it was noted that the meteorological data provided was not ideal for testing, and that a number of assumptions had to be made, particularly for the simpler models. However, taking the gaps and uncertainties in the data into account, the model predictions from PC-CREAM were generally in good agreement with the measured data, and the results from different models were also generally consistent with each other. However, the COMPLY predictions were generally lower than those from PC-CREAM. This is of concern, as the aim of a screening model (COMPLY) is to provide conservative estimates of contaminant concentrations. Further investigation of this problem is required. The general implications of the results for further model development are discussed. (author)
Recent Developments in Complex Analysis and Computer Algebra
Kajiwara, Joji; Xu, Yongzhi
1999-01-01
This volume consists of papers presented in the special sessions on "Complex and Numerical Analysis", "Value Distribution Theory and Complex Domains", and "Use of Symbolic Computation in Mathematics Education" of the ISAAC'97 Congress held at the University of Delaware, during June 2-7, 1997. The ISAAC Congress coincided with a U.S.-Japan Seminar also held at the University of Delaware. The latter was supported by the National Science Foundation through Grant INT-9603029 and the Japan Society for the Promotion of Science through Grant MTCS-134. It was natural that the participants of both meetings should interact and consequently several persons attending the Congress also presented papers in the Seminar. The success of the ISAAC Congress and the U.S.-Japan Seminar has led to the ISAAC'99 Congress being held in Fukuoka, Japan during August 1999. Many of the same participants will return to this Seminar. Indeed, it appears that the spirit of the U.S.-Japan Seminar will be continued every second year as part of...
Choi, Eunsong
Computer simulations are an integral part of research in modern condensed matter physics; they serve as a direct bridge between theory and experiment by systematically applying a microscopic model to a collection of particles that effectively imitate a macroscopic system. In this thesis, we study two very different condensed systems, namely complex fluids and frustrated magnets, primarily by simulating the classical dynamics of each system. In the first part of the thesis, we focus on ionic liquids (ILs) and polymers, the two complementary classes of materials that can be combined to provide various unique properties. The properties of polymer/IL systems, such as conductivity, viscosity, and miscibility, can be fine-tuned by choosing an appropriate combination of cations, anions, and polymers. However, designing a system that meets a specific need requires a concrete understanding of the physics and chemistry that dictate the complex interplay between polymers and ionic liquids. In this regard, molecular dynamics (MD) simulation is an efficient tool that provides a molecular-level picture of such complex systems. We study the behavior of poly(ethylene oxide) (PEO) and imidazolium-based ionic liquids using MD simulations and statistical mechanics. We also discuss our efforts to develop reliable and efficient classical force fields for PEO and the ionic liquids. The second part is devoted to studies of geometrically frustrated magnets. In particular, a microscopic model which gives rise to an incommensurate spiral magnetic ordering observed in a pyrochlore antiferromagnet is investigated. The validation of the model is made via a comparison of the spin-wave spectra with the neutron scattering data. Since the standard Holstein-Primakoff method is difficult to employ in such a complex ground-state structure with a large unit cell, we carry out classical spin dynamics simulations to compute spin-wave spectra directly from the Fourier transform of spin trajectories. We
Turing’s algorithmic lens: From computability to complexity theory
Díaz, Josep
2013-12-01
The decidability question, i.e., whether any mathematical statement could be computationally proven true or false, was raised by Hilbert and remained open until Turing answered it in the negative. Then, most efforts in theoretical computer science turned to complexity theory and the need to classify decidable problems according to their difficulty. Among others, the classes P (problems solvable in polynomial time) and NP (problems solvable in non-deterministic polynomial time) were defined, and one of the most challenging scientific quests of our days arose: whether P = NP. This still open question has implications not only in computer science, mathematics and physics, but also in biology, sociology and economics, and it can be seen as a direct consequence of Turing's way of looking through the algorithmic lens at different disciplines to discover how pervasive computation is. The question of decidability, that is, whether it can be shown computationally that a mathematical statement is true or false, was posed by Hilbert and remained open until Turing answered it in the negative. Once the undecidability of mathematics was established, efforts in theoretical computer science focused on the computational complexity of decidable problems. In this article we give a brief introduction to the classes P (problems solvable in polynomial time) and NP (problems solvable non-deterministically in polynomial time), and we set out the difficulty of establishing whether P = NP and the consequences that would follow if the two classes were equal. This question has implications not only in computer science, mathematics and physics, but also in biology, sociology and economics. The seminal idea behind the study of computational complexity is a direct consequence of the way Turing approached problems in different fields through the
kipt Terminal Aerodrome Forecast
National Oceanic and Atmospheric Administration, Department of Commerce — TAF (terminal aerodrome forecast or terminal area forecast) is a format for reporting weather forecast information, particularly as it relates to aviation. TAFs are...
Krylov, V.A.; Pisarenko, V.P.
1982-01-01
Methods of modeling complex power networks with short circuits in the networks are described. The methods are implemented in integrated computation programs for short-circuit currents and equivalents in electrical networks with a large number of branch points (up to 1000) on a computer with a limited online memory capacity (M = 4030 for the computer).
Integrated modeling tool for performance engineering of complex computer systems
Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar
1989-01-01
This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.
Assessment of the Stylohyoid Complex with Cone Beam Computed Tomography
İlgüy, Dilhan; İlgüy, Mehmet; Fişekçioğlu, Erdoğan; Dölekoğlu, Semanur [Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Yeditepe University, Istanbul (Turkey)
2012-12-27
Orientation of the stylohyoid complex (SHC) may be important for evaluation of the patient with orofacial pain or dysphagia. Our purpose was to assess the length and angulations of SHC using cone beam computed tomography (CBCT). In this study, 3D images provided by CBCT of 69 patients (36 females, 33 males, age range 15-77 years) were retrospectively evaluated. All CBCT images were performed because of other indications. None of the patients had symptoms of ossified SHC. The length and the thickness of SHC ossification, the anteroposterior angle (APA) and the mediolateral angle (MLA) were measured by maxillofacial radiologists on the anteroposterior, right lateral and left lateral views of CBCT. Student's t test, Pearson's correlation and the Chi-square test were used for statistical analysis. According to the results, the mean length of SHC was 25.3 ± 11.3 mm and the mean thickness of SHC was 4.8 ± 1.8 mm in the study group. The mean APA value of SHCs was 25.6° ± 5.4° and the mean MLA value was 66.4° ± 6.7°. A positive correlation was found between age and APA (r = 0.335; P < 0.01), between thickness and APA (r = 0.448; P < 0.01), and also between length and thickness (r = 0.236). The size and morphology of the SHC can be easily assessed by 3D views provided by CBCT. In CBCT evaluation of the head and neck region, the radiologist should consider SHC according to these variations, which may have clinical importance.
GAM-HEAT -- a computer code to compute heat transfer in complex enclosures
Cooper, R.E.; Taylor, J.R.; Kielpinski, A.L.; Steimke, J.L.
1991-02-01
The GAM-HEAT code was developed for heat transfer analyses associated with postulated Double Ended Guillotine Break Loss Of Coolant Accidents (DEGB LOCA) resulting in a drained reactor vessel. In these analyses the gamma radiation resulting from fission product decay constitutes the primary source of energy as a function of time. This energy is deposited into the various reactor components and is re-radiated as thermal energy. The code accounts for all radiant heat exchanges within and leaving the reactor enclosure. The SRS reactors constitute complex radiant exchange enclosures since there are many assemblies of various types within the primary enclosure and most of the assemblies themselves constitute enclosures. GAM-HEAT accounts for this complexity by processing externally generated view factors and connectivity matrices, and also accounts for convective, conductive, and advective heat exchanges. The code is applicable for many situations involving heat exchange between surfaces within a radiatively passive medium. The GAM-HEAT code has been exercised extensively for computing transient temperatures in SRS reactors with specific charges and control components. Results from these computations have been used to establish the need for and to evaluate hardware modifications designed to mitigate results of postulated accident scenarios, and to assist in the specification of safe reactor operating power limits. The code utilizes temperature dependence on material properties. The efficiency of the code has been enhanced by the use of an iterative equation solver. Verification of the code to date consists of comparisons with parallel efforts at Los Alamos National Laboratory and with similar efforts at Westinghouse Science and Technology Center in Pittsburgh, PA, and benchmarked using problems with known analytical or iterated solutions. All comparisons and tests yield results that indicate the GAM-HEAT code performs as intended
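The radiant-exchange calculation with view factors can be sketched in miniature. The following is an illustrative assumption, not GAM-HEAT itself: a gray, diffuse three-surface enclosure where each radiosity J_i equals emitted power plus the reflected fraction of incoming radiation, J_i = E_i + (1 - eps_i) * sum_j F_ij J_j, solved by fixed-point iteration. The view factors, emissivities, and emissive powers are made-up numbers.

```python
# Minimal radiosity sketch for a gray, diffuse, closed 3-surface enclosure
# (illustrative assumption, not GAM-HEAT). Each row of the view-factor
# matrix sums to 1; the iteration converges because reflectivities < 1.

F = [[0.0, 0.5, 0.5],       # F[i][j]: view factor from surface i to surface j
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
eps = [0.9, 0.8, 0.7]       # surface emissivities
E = [1000.0, 500.0, 200.0]  # emitted power per unit area, W/m^2 (assumed)

J = E[:]                    # initial guess: radiosity = emitted power only
for _ in range(200):        # fixed-point iteration on J_i = E_i + rho_i * G_i
    J = [E[i] + (1 - eps[i]) * sum(F[i][j] * J[j] for j in range(3))
         for i in range(3)]
```

Net fluxes then follow from the converged radiosities; a production code like GAM-HEAT additionally handles hidden surfaces, connectivity, and conduction/convection terms.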
Effects of complex feedback on computer-assisted modular instruction
Gordijn, Jan; Nijhof, W.J.
2002-01-01
The aim of this study is to determine the effects of two versions of Computer-Based Feedback within a prevocational system of modularized education in The Netherlands. The implementation and integration of Computer-Based Feedback (CBF) in Installation Technology modules in all schools (n=60) in The
Complex fluids in biological systems experiment, theory, and computation
2015-01-01
This book serves as an introduction to the continuum mechanics and mathematical modeling of complex fluids in living systems. The form and function of living systems are intimately tied to the nature of surrounding fluid environments, which commonly exhibit nonlinear and history dependent responses to forces and displacements. With ever-increasing capabilities in the visualization and manipulation of biological systems, research on the fundamental phenomena, models, measurements, and analysis of complex fluids has taken a number of exciting directions. In this book, many of the world’s foremost experts explore key topics such as: Macro- and micro-rheological techniques for measuring the material properties of complex biofluids and the subtleties of data interpretation Experimental observations and rheology of complex biological materials, including mucus, cell membranes, the cytoskeleton, and blood The motility of microorganisms in complex fluids and the dynamics of active suspensions Challenges and solut...
Berland, Matthew; Wilensky, Uri
2015-01-01
Both complex systems methods (such as agent-based modeling) and computational methods (such as programming) provide powerful ways for students to understand new phenomena. To understand how to effectively teach complex systems and computational content to younger students, we conducted a study in four urban middle school classrooms comparing…
2012-08-22
... Computer Software and Complex Electronics Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear...-1209, "Software Requirement Specifications for Digital Computer Software and Complex Electronics used... Electronics Engineers (ANSI/IEEE) Standard 830-1998, "IEEE Recommended Practice for Software Requirements...
Fast algorithm for computing complex number-theoretic transforms
Reed, I. S.; Liu, K. Y.; Truong, T. K.
1977-01-01
A high-radix FFT algorithm for computing transforms over GF(q²), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
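The core idea of number-theoretic transforms can be shown with a naive sketch. This is an illustrative assumption, not the paper's high-radix algorithm: a length-n transform over GF(q) with a Mersenne prime modulus, used to compute an exact circular convolution (pointwise products in the transform domain, with no floating-point round-off).

```python
# Naive number-theoretic transform (NTT) over GF(q), q = 2**5 - 1 = 31, a
# Mersenne prime. The transform length n must divide q - 1 so that an n-th
# root of unity exists. Illustrative sketch only; the paper's high-radix
# FFT structure and GF(q^2) arithmetic are not reproduced here.

q = 31                      # Mersenne prime modulus
n = 6                       # transform length; divides q - 1 = 30

def find_root(n, q):
    """Search for an element of multiplicative order exactly n mod q."""
    for w in range(2, q):
        if pow(w, n, q) == 1 and all(pow(w, n // p, q) != 1
                                     for p in (2, 3) if n % p == 0):
            return w
    raise ValueError("no n-th root of unity mod q")

w = find_root(n, q)

def ntt(x, root):
    return [sum(xj * pow(root, j * k, q) for j, xj in enumerate(x)) % q
            for k in range(n)]

def intt(X, root):
    n_inv = pow(n, q - 2, q)            # n^{-1} via Fermat's little theorem
    y = ntt(X, pow(root, q - 2, q))     # forward transform with root^{-1}
    return [(n_inv * yk) % q for yk in y]

def circular_convolution(a, b):
    """Convolution theorem: pointwise product in the transform domain."""
    A, B = ntt(a, w), ntt(b, w)
    return intt([ai * bi % q for ai, bi in zip(A, B)], w)

a = [1, 2, 3, 0, 0, 0]
b = [4, 5, 6, 0, 0, 0]
result = circular_convolution(a, b)
```

Because all arithmetic is modular, the convolution is exact, which is the practical attraction of NTTs over floating-point FFTs.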
Computation of the stability constants for the inclusion complexes of
PrF MU
stability constants of ADA with β-CD with the use of MO as an auxiliary agent were evaluated. ... The latter statement can also be strengthened by the computed species distribution diagram for ... and 5-position of the adamantyl body.
Effects of Task Performance and Task Complexity on the Validity of Computational Models of Attention
Koning, L. de; Maanen, P.P. van; Dongen, K. van
2008-01-01
Computational models of attention can be used as a component of decision support systems. For accurate support, a computational model of attention has to be valid and robust. The effects of task performance and task complexity on the validity of three different computational models of attention were
A new entropy based method for computing software structural complexity
Roca, J L
2002-01-01
In this paper a new methodology for the evaluation of software structural complexity is described. It is based on the entropy evaluation of the random uniform response function associated with the so-called software characteristic function (SCF). The behavior of the SCF with different software structures and its relationship with the number of inherent errors is investigated. It is also investigated how the entropy concept can be used to evaluate the complexity of a software structure, considering the SCF as a canonical representation of the graph associated with the control flow diagram. The functions, parameters and algorithms that allow this evaluation to be carried out are also introduced. This analytic phase is followed by an experimental phase, verifying the consistency of the proposed metric and its boundary conditions. The conclusion is that the degree of software structural complexity can be measured as the entropy of the random uniform response function of the SCF. That entropy is in direct relation...
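The entropy underlying such a metric is ordinary Shannon entropy. The helper below is a generic sketch, an assumption for illustration; the paper's SCF and its response function are not reproduced. A uniform distribution over k outcomes attains the maximum entropy log2(k), while a skewed distribution scores lower.

```python
import math

# Generic Shannon entropy of a discrete distribution, in bits.
# Illustrative sketch only; the SCF-specific construction is not shown.

def shannon_entropy(probs):
    """H = -sum p_i log2 p_i, skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform4 = [0.25, 0.25, 0.25, 0.25]   # e.g. four equally likely branches
skewed = [0.7, 0.1, 0.1, 0.1]         # one dominant branch
```

In an entropy-based complexity metric, a structure whose response function spreads probability evenly over many outcomes is scored as more complex than one dominated by a single outcome.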
Low-complexity computer simulation of multichannel room impulse responses
Martínez Castañeda, J.A.
2013-01-01
The "telephone" model has been, for the last one hundred thirty years, the basis of modern telecommunications, with virtually no changes in its fundamental concept. The rise of smaller and more powerful computing devices has opened new possibilities. For example, to build systems able to give to
Complex adaptive systems and computational simulation in Archaeology
Salvador Pardo-Gordó
2017-07-01
Traditionally the concept of ‘complexity’ is used as a synonym for ‘complex society’, i.e., human groups with characteristics such as urbanism, inequalities, and hierarchy. The introduction of Nonlinear Systems and Complex Adaptive Systems into the discipline of archaeology has nuanced this concept. This theoretical turn has led to the rise of modelling as a method of analysis of historical processes. This work has a twofold objective: to present the theoretical current characterized by generative thinking in archaeology, and to present a concrete application of agent-based modelling to an archaeological problem: the dispersal of the first ceramic production in the western Mediterranean.
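A minimal agent-based sketch conveys the generative flavour of such models. Everything below is an illustrative assumption, not the authors' model of ceramic dispersal: a 1-D chain of coastal cells, a fixed per-step colonisation probability, and a single seeded origin. The run counts how many time steps the innovation takes to reach the far end.

```python
import random

# Toy agent-based dispersal model (illustrative assumptions throughout):
# an innovation spreads cell by cell along a 1-D chain, advancing the
# frontier with probability p_spread at each time step.

random.seed(1)
N = 40                       # coastal cells arranged in a line
occupied = [False] * N
occupied[0] = True           # the innovation is seeded at one end
p_spread = 0.5               # per-step chance of colonising the next cell

steps = 0
while not occupied[-1]:
    steps += 1
    frontier = max(i for i, o in enumerate(occupied) if o)
    if random.random() < p_spread:
        occupied[frontier + 1] = True
```

Even this toy exhibits the generative logic of the approach: macroscopic outcomes (arrival times, spread rates) emerge from repeated local rules rather than being stipulated in advance.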
General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results
Šíma, Jiří; Orponen, P.
2003-01-01
Roč. 15, č. 12 (2003), s. 2727-2778 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords: computational power * computational complexity * perceptrons * radial basis functions * spiking neurons * feedforward networks * recurrent networks * probabilistic computation * analog computation Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003
Complexity theory and genetics: The computational power of crossing over
Pudlák, Pavel
2001-01-01
Roč. 171, č. 1 (2001), s. 201-223 ISSN 0890-5401 R&D Projects: GA AV ČR IAA1019901 Institutional research plan: CEZ:AV0Z1019905 Keywords: complexity * genetics * crossing over Subject RIV: BA - General Mathematics Impact factor: 0.571, year: 2001
Computed tomography in complex fractures of the ankle joint
Friedburg, H.; Wimmer, B.; Hendrich, V.; Riede, U.N.
1983-01-01
The diagnostic value of conventional roentgen techniques and computed tomography is assessed by examination of 50 patients with sprain fractures of the ankle joint. The extent of destruction of the distal tibial joint surface is documented better by CT than by other radiological techniques. Additional information, such as multifragmentation of the distal tibia or evaluation of reposition impediment, is found more frequently by CT. Indication and planning of the trauma therapy can therefore be assessed better by the traumatologist. (orig.)
Computational design of RNAs with complex energy landscapes.
Höner zu Siederdissen, Christian; Hammer, Stefan; Abfalter, Ingrid; Hofacker, Ivo L; Flamm, Christoph; Stadler, Peter F
2013-12-01
RNA has become an integral building material in synthetic biology. Dominated by their secondary structures, which can be computed efficiently, RNA molecules are amenable not only to in vitro and in vivo selection, but also to rational, computation-based design. While the inverse folding problem of constructing an RNA sequence with a prescribed ground-state structure has received considerable attention for nearly two decades, there have been few efforts to design RNAs that can switch between distinct prescribed conformations. We introduce a user-friendly tool for designing RNA sequences that fold into multiple target structures. The underlying algorithm makes use of a combination of graph coloring and heuristic local optimization to find sequences whose energy landscapes are dominated by the prescribed conformations. A flexible interface allows the specification of a wide range of design goals. We demonstrate that bi- and tri-stable "switches" can be designed easily with moderate computational effort for the vast majority of compatible combinations of desired target structures. RNAdesign is freely available under the GPL-v3 license. Copyright © 2013 Wiley Periodicals, Inc.
Zinin, A.I.; Kolesov, V.E.; Nevinitsa, A.I.
1975-01-01
The report contains a description of a method for constructing complexes of computer programs for computational purposes on M-220 computers, using ALGOL-60 for programming. The complex is organised on the modular principle and can include a substantial number of program modules. Information exchange between separate modules is done by means of a special interpreting program, and the unit of information exchanged is a specially arranged file of data. For addressing the interpreting program within the ALGOL-60 framework, a small number of specially created procedure codes is used. The proposed method makes it possible to program separate modules of the complex independently and to expand the complex if necessary. In this case separate modules or groups of modules, depending on how the general problem solved by the complex is segmented, will be of independent interest and could be used outside the complex as traditional programs. (author)
Design of magnetic coordination complexes for quantum computing.
Aromí, Guillem; Aguilà, David; Gamez, Patrick; Luis, Fernando; Roubeau, Olivier
2012-01-21
A very exciting prospect in coordination chemistry is to manipulate spins within magnetic complexes for the realization of quantum logic operations. An introduction to the requirements for a paramagnetic molecule to act as a 2-qubit quantum gate is provided in this tutorial review. We propose synthetic methods aimed at accessing such type of functional molecules, based on ligand design and inorganic synthesis. Two strategies are presented: (i) the first consists in targeting molecules containing a pair of well-defined and weakly coupled paramagnetic metal aggregates, each acting as a carrier of one potential qubit, (ii) the second is the design of dinuclear complexes of anisotropic metal ions, exhibiting dissimilar environments and feeble magnetic coupling. The first systems obtained from this synthetic program are presented here and their properties are discussed.
A computational framework for modeling targets as complex adaptive systems
Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh
2017-05-01
Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.
A new entropy based method for computing software structural complexity
Roca, Jose L.
2002-01-01
In this paper a new methodology for the evaluation of software structural complexity is described. It is based on the entropy of the random uniform response function associated with the so-called software characteristic function (SCF). The behavior of the SCF for different software structures and its relationship with the number of inherent errors is investigated. It is also investigated how the entropy concept can be used to evaluate the complexity of a software structure, considering the SCF as a canonical representation of the graph associated with the control flow diagram. The functions, parameters, and algorithms that allow this evaluation to be carried out are also introduced. This analytic phase is followed by an experimental phase, verifying the consistency of the proposed metric and its boundary conditions. The conclusion is that the degree of software structural complexity can be measured as the entropy of the random uniform response function of the SCF. That entropy is directly related to the number of inherent software errors and implies a basic hazard failure rate, so that a minimal structure assures a certain stability and maturity of the program. This metric can be used either to evaluate the product or the process of software development, as a development tool, or for monitoring the stability and quality of the final product. (author)
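The measure at the heart of this abstract is ordinary Shannon entropy. A minimal sketch follows; the path distributions below are hypothetical stand-ins for the paper's SCF-derived random uniform response function, not its actual construction:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical example: a control-flow graph with 4 equally likely paths
# has maximal entropy; a graph where one path dominates has lower entropy.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.85, 0.05, 0.05, 0.05]

print(shannon_entropy(uniform))  # 2.0 bits
print(shannon_entropy(skewed))   # lower: the structure is more predictable
```

Under the paper's claim, the higher-entropy structure would be the more complex one, carrying the higher basic hazard failure rate.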
Fusion in computer vision: understanding complex visual content
Ionescu, Bogdan; Piatrik, Tomas
2014-01-01
This book presents a thorough overview of fusion in computer vision, from an interdisciplinary and multi-application viewpoint, describing successful approaches, evaluated in the context of international benchmarks that model realistic use cases. Features: examines late fusion approaches for concept recognition in images and videos; describes the interpretation of visual content by incorporating models of the human visual system with content understanding methods; investigates the fusion of multi-modal features of different semantic levels, as well as results of semantic concept detections, fo
Computer tomography of large dust clouds in complex plasmas
Killer, Carsten; Himpel, Michael; Melzer, André
2014-01-01
The dust density is a central parameter of a dusty plasma. Here, a tomography setup for determining the three-dimensionally resolved density distribution of spatially extended dust clouds is presented. The dust clouds consist of micron-sized particles confined in a radio-frequency argon plasma, where they fill almost the entire discharge volume. First, a line-of-sight integrated dust density is obtained from extinction measurements, where the incident light from an LED panel is scattered and absorbed by the dust. Performing these extinction measurements from many different angles allows the reconstruction of the 3D dust density distribution, analogous to computed tomography in medical applications
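The extinction step is Beer-Lambert inversion: the line-integrated density along each chord is the negative logarithm of the measured transmission. A toy sketch in assumed units, with a 2x2 grid and only two viewing angles (real setups use many angles and fine grids; this particular toy happens to be exactly recoverable as the minimum-norm solution):

```python
import numpy as np

# Toy 2x2 dust density grid (extinction per cell, arbitrary units).
density = np.array([[1.0, 2.0],
                    [3.0, 4.0]])

# Beer-Lambert: transmitted intensity I = I0 * exp(-line integral of density).
I0 = 1.0
row_sums = density.sum(axis=1)   # chords at 0 degrees:  [3, 7]
col_sums = density.sum(axis=0)   # chords at 90 degrees: [4, 6]
measured = I0 * np.exp(-np.concatenate([row_sums, col_sums]))

# Invert the extinction measurement to get line-integrated densities back.
recovered = -np.log(measured / I0)

# Solve the resulting linear system A x = b for the cell densities.
A = np.array([[1, 1, 0, 0],   # row 0
              [0, 0, 1, 1],   # row 1
              [1, 0, 1, 0],   # column 0
              [0, 1, 0, 1]],  # column 1
             dtype=float)
x, *_ = np.linalg.lstsq(A, recovered, rcond=None)
```

Two angles are not sufficient in general (the system is rank-deficient); the many-angle measurements described in the abstract are what make the full 3D reconstruction well posed.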
Building confidence and credibility amid growing model and computing complexity
Evans, K. J.; Mahajan, S.; Veneziani, C.; Kennedy, J. H.
2017-12-01
As global Earth system models are developed to answer an ever-wider range of science questions, software products that provide robust verification, validation, and evaluation must evolve in tandem. Measuring the degree to which these new models capture past behavior, predict the future, and provide the certainty of predictions is becoming ever more challenging, for reasons that are generally well known yet still difficult to address. Two specific and divergent needs for analysis of the Accelerated Climate Model for Energy (ACME) model, sharing a similar software philosophy, are presented to show how a model-developer-based focus can address analysis needs during expansive model changes to provide greater fidelity and execute on multi-petascale computing facilities. A-PRIME is a Python-script-based quick-look overview of a fully coupled global model configuration, used to determine quickly whether it captures specific behavior before significant computer time and expense is invested. EVE is an ensemble-based software framework that focuses on verification of performance-based ACME model development, such as compiler or machine settings, to determine the equivalence of relevant climate statistics. The challenges and solutions for analysis of multi-petabyte output data are highlighted from the perspective of the scientist using the software, with the aim of fostering discussion and further input from the community about improving developer confidence and community credibility.
Stochastic Computational Approach for Complex Nonlinear Ordinary Differential Equations
Khan, Junaid Ali; Raja, Muhammad Asif Zahoor; Qureshi, Ijaz Mansoor
2011-01-01
We present an evolutionary computational approach for the solution of nonlinear ordinary differential equations (NLODEs). The mathematical modeling is performed by a feed-forward artificial neural network that defines an unsupervised error. The training of these networks is achieved by a hybrid intelligent algorithm, a combination of global search by genetic algorithm and local search by the pattern search technique. The applicability of this approach ranges from single-order NLODEs to systems of coupled differential equations. We illustrate the method by solving a variety of model problems and present comparisons with solutions obtained by exact methods and classical numerical methods. Unlike other numerical techniques of comparable accuracy, the solution is provided on a continuous finite time interval. With the advent of neuroprocessors and digital signal processors, the method becomes particularly interesting due to the expected substantial gains in execution speed. (general)
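The core idea, defining an unsupervised error from the ODE residual on a collocation grid and minimizing it by a global-plus-local search, can be sketched with a deliberately tiny stand-in: a one-parameter trial function in place of the paper's neural network, random sampling in place of the genetic algorithm, and a simple pattern search for the local phase:

```python
import math
import random

# Unsupervised error for y' = -y, y(0) = 1 on [0, 1], using a one-parameter
# trial solution y(t) = exp(a*t) as a stand-in for a neural network.
ts = [i / 20 for i in range(21)]   # collocation grid

def error(a):
    # Sum of squared ODE residuals y'(t) + y(t) over the grid; the initial
    # condition y(0) = 1 holds by construction of the trial solution.
    return sum((a * math.exp(a * t) + math.exp(a * t)) ** 2 for t in ts)

random.seed(0)
# Global phase (genetic-algorithm stand-in): random sampling.
best = min((random.uniform(-3, 3) for _ in range(200)), key=error)
# Local phase: pattern search with a shrinking step.
step = 0.1
while step > 1e-6:
    trial = min((best - step, best, best + step), key=error)
    if trial == best:
        step /= 2
    else:
        best = trial

print(round(best, 4))   # converges toward the exact decay rate a = -1
```

The paper's network-based trial solution plays the same role as `a` here, but with many parameters, which is what makes the hybrid global/local search worthwhile.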
The pervasive reach of resource-bounded Kolmogorov complexity in computational complexity theory
Allender, E.; Koucký, Michal; Ronneburger, D.; Roy, S.
2011-01-01
Roč. 77, č. 1 (2011), s. 14-40 ISSN 0022-0000 R&D Projects: GA ČR GAP202/10/0854; GA MŠk(CZ) 1M0545; GA AV ČR IAA100190902 Institutional research plan: CEZ:AV0Z10190503 Keywords : Circuit complexity * Distinguishing complexity * FewEXP * Formula size * Kolmogorov complexity Subject RIV: BA - General Mathematics Impact factor: 1.157, year: 2011 http://www.sciencedirect.com/science/article/pii/S0022000010000887
On Complex Networks Representation and Computation of Hydrological Quantities
Serafin, F.; Bancheri, M.; David, O.; Rigon, R.
2017-12-01
Water is our blue gold. Although discovery-based science keeps warning public opinion about a looming worldwide water crisis, water is still treated as a resource not worth caring for. Could a different multi-scale perspective affect environmental decision-making more deeply? Could pairing it with a new graphical representation of process interactions sway decision-making, and consequently public opinion, more effectively? This abstract introduces a complex-networks-driven way to represent catchment eco-hydrology, together with flexible informatics to manage it. The representation is built upon mathematical category theory. A category is an algebraic structure that comprises "objects" linked by "arrows". It is an evolution of Petri Nets called Time Continuous Petri Nets (TCPN). It aims to display (water) budget processes and catchment interactions using an explicative and self-contained symbolism. The result improves the readability of physical processes compared to current descriptions. The IT perspective hinges on the Object Modeling System (OMS) v3, a non-invasive, flexible environmental modeling framework designed to support component-based model development. The implementation of a Directed Acyclic Graph (DAG) data structure, named Net3, has recently enhanced its flexibility. Net3 represents interacting systems as complex networks: vertices match up with any sort of time-evolving quantity; edges correspond to their data (flux) interchange. It currently hosts JGrass-NewAge components, as well as components implementing travel-time analysis of fluxes. Further bio-physical or management-oriented components can easily be added. This talk introduces both the graphical representation and the related informatics, exercising actual applications and examples.
Computational RNA secondary structure design: empirical complexity and improved methods
Condon Anne
2007-01-01
Background: We investigate the empirical complexity of the RNA secondary structure design problem, that is, the scaling of the typical difficulty of the design task for various classes of RNA structures as the size of the target structure is increased. The purpose of this work is to better understand the factors that make RNA structures hard to design for existing, high-performance algorithms. Such understanding provides the basis for improving the performance of one of the best algorithms for this problem, RNA-SSD, and for characterising its limitations. Results: To gain insights into the practical complexity of the problem, we present a scaling analysis on random and biologically motivated structures using an improved version of the RNA-SSD algorithm, and also the RNAinverse algorithm from the Vienna package. Since primary structure constraints are relevant for designing RNA structures, we also investigate the correlation between the number and location of the primary structure constraints and the performance of the RNA-SSD algorithm. The scaling analysis on random and biologically motivated structures supports the hypothesis that the running time of both algorithms scales polynomially with the size of the structure. We also found that the algorithms are in general faster when constraints are placed only on paired bases in the structure. Furthermore, we prove that, according to the standard thermodynamic model, for some structures that the RNA-SSD algorithm was unable to design, there exists no sequence whose minimum free energy structure is the target structure. Conclusion: Our analysis helps to better understand the strengths and limitations of both the RNA-SSD and RNAinverse algorithms, and suggests ways in which the performance of these algorithms can be further improved.
Computational Analyses of Complex Flows with Chemical Reactions
Bae, Kang-Sik
Three problems have been studied numerically: micro-scale heat and mass transfer of a drug in a cylindrical matrix system, the simulation of oxygen/drug diffusion in a three-dimensional capillary network, and reduced chemical kinetic modeling of gas turbine combustion for Jet Propellant-10. For the numerical analysis of the mass transfer of a drug in the cylindrical matrix system, the governing equations are derived from the Krogh cylinder model, which comprises a capillary and a surrounding cylinder of tissue along the arterial-to-venous distance. The ADI (Alternating Direction Implicit) scheme and the Thomas algorithm are applied to solve the nonlinear partial differential equations (PDEs). This study shows that the important factors affecting the drug penetration depth into the tissue are the mass diffusivity and the consumption of the relevant species during the time allowed for diffusion into the brain tissue. Also, a computational fluid dynamics (CFD) model has been developed to simulate blood flow and oxygen/drug diffusion in a three-dimensional capillary network within the physiological range of a typical capillary. A three-dimensional geometry has been constructed to replicate the one studied by Secomb et al. (2000), and the computational framework features a non-Newtonian viscosity model for blood, an oxygen transport model including oxygen-hemoglobin dissociation and wall flux due to tissue absorption, as well as the ability to study the diffusion of drugs and other materials in the capillary streams. Finally, a chemical kinetic mechanism of JP-10 has been compiled and validated for a wide range of combustion regimes, covering pressures of 1 atm to 40 atm and temperatures of 1,200 K--1,700 K; JP-10 is being studied as a possible jet propellant for the Pulse Detonation Engine (PDE) and other high-speed flight applications such as hypersonic
A general method for computing the total solar radiation force on complex spacecraft structures
Chan, F. K.
1981-01-01
The method circumvents many of the existing difficulties in computational logic presently encountered in the direct analytical or numerical evaluation of the appropriate surface integral. It may be applied to complex spacecraft structures for computing the total force arising from either specular or diffuse reflection or even from non-Lambertian reflection and re-radiation.
Zimakov, V.N.; Chernykh, V.P.
1998-01-01
The problems of creating calculational-simulating complexes (CSC) and of their application in developing the program and program-technical means for computer-aided process control systems at NPPs are considered. The above complex is based on an all-mode real-time mathematical model functioning on a special complex of computerized means
Accurate Computed Enthalpies of Spin Crossover in Iron and Cobalt Complexes
Kepp, Kasper Planeta; Cirera, J
2009-01-01
Despite their importance in many chemical processes, the relative energies of spin states of transition metal complexes have so far been haunted by large computational errors. Using six functionals, B3LYP, BP86, TPSS, TPSSh, M06, and M06L, this work studies nine complexes (seven with iron...
Study of application technology of ultra-high speed computer to the elucidation of complex phenomena
Sekiguchi, Tomotsugu
1996-01-01
The basic design of a numerical information library for a decentralized computer network is explained as the first step in developing technology for applying ultra-high-speed computers to the elucidation of complex phenomena. Establishing the system makes it possible to construct an efficient application environment for ultra-high-speed computer systems that is scalable across different computing systems. We named the system Ninf (Network Information Library for High Performance Computing). The application technology of the library is summarized as follows: library technology under the distributed environment, numeric constants, retrieval of values, a library of special functions, a computing library, the Ninf library interface, the Ninf remote library, and registration. With this system, users can employ programs that concentrate numerical-analysis expertise, with high precision, reliability, and speed. (S.Y.)
Insight into the structures and stabilities of Tc and Re DMSA complexes: A computational study
Blanco González, Alejandro; Hernández Valdés, Daniel; García Fleitas, Ariel; Rodríguez Riera, Zalua; Jáuregui Haza, Ulises
2016-01-01
Meso-2,3-dimercaptosuccinic acid (DMSA) is used in nuclear medicine as a ligand for the preparation of radiopharmaceuticals for diagnosis and therapy. DMSA has been the subject of numerous investigations during the past three decades, and new and significant information on the chemistry and pharmacology of DMSA complexes has emerged. In comparison to other ligands, the structure of some DMSA complexes remains unclear to this day. The structures and applications of DMSA complexes depend strictly on the chemical conditions of their preparation, especially the pH and the ratio of components. A computational study of M-DMSA (M = Tc, Re) complexes has been performed using density functional theory. Different isomers of the M(V) and M(III) complexes were studied. The influence of pH on the ligand structures was taken into account, and the solvent effect was evaluated using an implicit solvation model. The fully optimized syn-endo Re(V)-DMSA complex shows a geometry similar to the X-ray data and was used to validate the methodology. Moreover, new alternative structures for the renal agent 99mTc(III)-DMSA were proposed and studied computationally. For two complex structures, a greater stability than that proposed in the literature was obtained. Furthermore, Tc(V)-DMSA complexes are more stable than the proposed Tc(III)-DMSA structures. In general, Re complexes are more stable than the corresponding Tc ones. (author)
The Intelligent Safety System: could it introduce complex computing into CANDU shutdown systems
Hall, J.A.; Hinds, H.W.; Pensom, C.F.; Barker, C.J.; Jobse, A.H.
1984-07-01
The Intelligent Safety System is a computerized shutdown system being developed at the Chalk River Nuclear Laboratories (CRNL) for future CANDU nuclear reactors. It differs from current CANDU shutdown systems in both the algorithm used and the size and complexity of computers required to implement the concept. This paper provides an overview of the project, with emphasis on the computing aspects. Early in the project several needs leading to an introduction of computing complexity were identified, and a computing system that met these needs was conceived. The current work at CRNL centers on building a laboratory demonstration of the Intelligent Safety System, and evaluating the reliability and testability of the concept. Some fundamental problems must still be addressed for the Intelligent Safety System to be acceptable to a CANDU owner and to the regulatory authorities. These are also discussed along with a description of how the Intelligent Safety System might solve these problems
Atomic switch networks-nanoarchitectonic design of a complex system for natural computing.
Demis, E C; Aguilera, R; Sillin, H O; Scharnhorst, K; Sandouk, E J; Aono, M; Stieg, A Z; Gimzewski, J K
2015-05-22
Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing-a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.
Some Comparisons of Complexity in Dictionary-Based and Linear Computational Models
Gnecco, G.; Kůrková, Věra; Sanguineti, M.
2011-01-01
Roč. 24, č. 2 (2011), s. 171-182 ISSN 0893-6080 R&D Project s: GA ČR GA201/08/1744 Grant - others:CNR - AV ČR project 2010-2012(XE) Complexity of Neural-Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords : linear approximation schemes * variable-basis approximation schemes * model complexity * worst-case errors * neural networks * kernel models Subject RIV: IN - Informatics, Computer Science Impact factor: 2.182, year: 2011
ANCON: A code for the evaluation of complex fault trees in personal computers
Napoles, J.G.; Salomon, J.; Rivero, J.
1990-01-01
Performing probabilistic safety analyses has been recognized worldwide as one of the more effective ways of further enhancing the safety of nuclear power plants. The evaluation of fault trees plays a fundamental role in these analyses. Limitations in the RAM and execution speed of personal computers (PCs) have so far restricted their use in the analysis of complex fault trees. Starting from new approaches to the data structure, among other possibilities, the ANCON code can evaluate complex fault trees on a PC, allowing the user to perform a more comprehensive analysis of the system under consideration in reduced computing time
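For readers unfamiliar with fault tree evaluation, the basic computation reduces, for a coherent tree with independent basic events where no event feeds more than one gate, to propagating probabilities bottom-up through AND/OR gates. A toy sketch (not the ANCON data structures or algorithms):

```python
# Gate probabilities for independent basic-event failure probabilities.
def or_gate(ps):
    """P(at least one input fails) = 1 - product of survival probabilities."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(ps):
    """P(all inputs fail) = product of failure probabilities."""
    r = 1.0
    for p in ps:
        r *= p
    return r

# Example tree: TOP = OR(AND(A, B), C)
pA, pB, pC = 0.01, 0.02, 0.005
top = or_gate([and_gate([pA, pB]), pC])   # ~0.005199
```

Real trees with repeated events break the independence assumption between gate inputs, which is why production codes work with minimal cut sets or equivalent data structures instead of naive bottom-up propagation.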
Berland, Matthew W.
As scientists use the tools of computational and complex systems theory to broaden science perspectives (e.g., Bar-Yam, 1997; Holland, 1995; Wolfram, 2002), so can middle-school students broaden their perspectives using appropriate tools. The goals of this dissertation project are to build, study, evaluate, and compare activities designed to foster both computational and complex systems fluencies through collaborative constructionist virtual and physical robotics. In these activities, each student builds an agent (e.g., a robot-bird) that must interact with fellow students' agents to generate a complex aggregate (e.g., a flock of robot-birds) in a participatory simulation environment (Wilensky & Stroup, 1999a). In a participatory simulation, students collaborate by acting in a common space, teaching each other, and discussing content with one another. As a result, the students improve both their computational fluency and their complex systems fluency, where fluency is defined as the ability to both consume and produce relevant content (DiSessa, 2000). To date, several systems have been designed to foster computational and complex systems fluencies through computer programming and collaborative play (e.g., Hancock, 2003; Wilensky & Stroup, 1999b); this study suggests that, by supporting the relevant fluencies through collaborative play, they become mutually reinforcing. In this work, I will present both the design of the VBOT virtual/physical constructionist robotics learning environment and a comparative study of student interaction with the virtual and physical environments across four middle-school classrooms, focusing on the contrast in systems perspectives differently afforded by the two environments. In particular, I found that while performance gains were similar overall, the physical environment supported agent perspectives on aggregate behavior, and the virtual environment supported aggregate perspectives on agent behavior. The primary research questions
Feng-Gang Yan
2018-01-01
A low-complexity algorithm is presented that dramatically reduces the cost of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimation: both the eigenvalue decomposition (EVD) and the spectral search are implemented with efficient real-valued computations, leading to about 75% complexity reduction compared to standard MUSIC. Furthermore, the proposed technique does not depend on the array configuration and is hence suitable for arbitrary array geometries, a significant implementation advantage over most state-of-the-art unitary estimators, including unitary MUSIC (U-MUSIC). Numerical simulations over a wide range of scenarios demonstrate that, with a significantly reduced computational complexity, the new approach provides accuracy close to that of standard MUSIC.
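For reference, the standard complex-valued MUSIC baseline, whose EVD and spectral search the paper reimplements with real-valued computations, can be sketched as follows; the array geometry, SNR, and search grid are illustrative assumptions, and the real-valued reformulation itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)

# Half-wavelength uniform linear array, two uncorrelated far-field sources.
M, N = 8, 200                        # sensors, snapshots
true_doas_deg = [-20.0, 35.0]

def steering(theta_rad):
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta_rad))

A = np.stack([steering(np.deg2rad(d)) for d in true_doas_deg], axis=1)
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

R = X @ X.conj().T / N               # sample covariance matrix
_, V = np.linalg.eigh(R)             # EVD, eigenvalues ascending
En = V[:, : M - 2]                   # noise subspace (M minus #sources)

# Spectral search: pseudo-spectrum peaks where steering vectors are
# orthogonal to the noise subspace.
grid = np.deg2rad(np.linspace(-90, 90, 1801))   # 0.1 degree grid
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                     for t in grid])

# Pick the two highest local maxima of the pseudo-spectrum.
peaks = [i for i in range(1, len(grid) - 1)
         if spectrum[i - 1] < spectrum[i] > spectrum[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: spectrum[i])[-2:])
est = [round(float(np.rad2deg(grid[i])), 1) for i in top2]
```

The complex EVD and the complex grid search above are exactly the two stages whose arithmetic the proposed technique replaces with real-valued operations.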
Complex data modeling and computationally intensive methods for estimation and prediction
Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics
2015-01-01
The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downsize reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...
Martinez, Neo D.; Tonin, Perrine; Bauer, Barbara; Rael, Rosalyn C.; Singh, Rahul; Yoon, Sangyuk; Yoon, Ilmi; Dunne, Jennifer A.
2012-01-01
Understanding ecological complexity has stymied scientists for decades. Recent elucidation of the famously coined "devious strategies for stability in enduring natural systems" has opened up a new field of computational analyses of complex ecological networks where the nonlinear dynamics of many interacting species can be more realistically modeled and understood. Here, we describe the first extension of this field to include coupled human-natural systems. This extension elucidates new strat...
Distinguishing humans from computers in the game of go: A complex network approach
Coquidé, C.; Georgeot, B.; Giraud, O.
2017-08-01
We compare complex networks built from the game of go and obtained from databases of human-played games with those obtained from computer-played games. Our investigations show that statistical features of the human-based networks and the computer-based networks differ, and that these differences can be statistically significant on a relatively small number of games using specific estimators. We show that the deterministic or stochastic nature of the computer algorithm playing the game can also be distinguished from these quantities. This can be seen as a tool to implement a Turing-like test for go simulators.
High performance parallel computing of flows in complex geometries: II. Applications
Gourdain, N; Gicquel, L; Staffelbach, G; Vermorel, O; Duchaine, F; Boussuge, J-F; Poinsot, T
2009-01-01
Present regulations in terms of pollutant emissions, noise, and economic constraints require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and from considering the entire system rather than only isolated components. However, these aspects are still not well taken into account by numerical approaches, or well understood, whatever the design stage considered. The main challenge essentially lies in the computational requirements that such complex systems impose if they are to be simulated on supercomputers. This paper shows how these new challenges can be addressed by using parallel computing platforms for distinct elements of a more complex system, as encountered in aeronautical applications. Based on numerical simulations performed with modern aerodynamic and reactive flow solvers, this work underlines the interest of high-performance computing for solving flows in complex industrial configurations such as aircraft, combustion chambers, and turbomachines. Performance indicators related to parallel computing efficiency are presented, showing that establishing fair criteria is a difficult task for complex industrial applications. Examples of numerical simulations performed on industrial systems are also described, with particular interest in the computational time and the potential design improvements obtained with high-fidelity and multi-physics computing methods. These simulations use either unsteady Reynolds-averaged Navier-Stokes methods or large eddy simulation and deal with turbulent unsteady flows, such as coupled flow phenomena (thermo-acoustic instabilities, buffet, etc.). Some examples of the difficulties with grid generation and data analysis are also presented for these complex industrial applications.
Fast and accurate algorithm for the computation of complex linear canonical transforms.
Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus
2010-09-01
A fast and accurate algorithm is developed for the numerical computation of the family of complex linear canonical transforms (CLCTs), which represent the input-output relationship of complex quadratic-phase systems. Allowing the linear canonical transform parameters to be complex numbers makes it possible to represent paraxial optical systems that involve complex parameters. These include lossy systems such as Gaussian apertures, Gaussian ducts, or complex graded-index media, as well as lossless thin lenses and sections of free space and any arbitrary combinations of them. Complex-ordered fractional Fourier transforms (CFRTs) are a special case of CLCTs, and therefore a fast and accurate algorithm to compute CFRTs is included as a special case of the presented algorithm. The algorithm is based on decomposition of an arbitrary CLCT matrix into real and complex chirp multiplications and Fourier transforms. The samples of the output are obtained from the samples of the input in approximately N log N time, where N is the number of input samples. A space-bandwidth product tracking formalism is developed to ensure that the number of samples is information-theoretically sufficient to reconstruct the continuous transform, but not unnecessarily redundant.
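The building block named in this abstract, expressing a transform through chirp multiplications and Fourier transforms, is classically illustrated by Bluestein's algorithm for the DFT. The sketch below is that well-known special case, not the paper's CLCT algorithm:

```python
import numpy as np

def bluestein_dft(x):
    """DFT of arbitrary length n via chirp multiplications and FFTs,
    using Bluestein's identity jk = (j**2 + k**2 - (k - j)**2) / 2."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    k = np.arange(n)
    chirp = np.exp(-1j * np.pi * k ** 2 / n)
    m = 1 << (2 * n - 1).bit_length()         # FFT length >= 2n - 1
    a = np.zeros(m, dtype=complex)
    a[:n] = x * chirp                          # first chirp multiplication
    b = np.zeros(m, dtype=complex)
    b[:n] = np.conj(chirp)                     # kernel exp(+1j*pi*k^2/n)
    b[m - n + 1:] = np.conj(chirp[1:])[::-1]   # wrap negative indices
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))  # fast convolution
    return conv[:n] * chirp                    # second chirp multiplication
```

The paper generalizes this kind of decomposition: an arbitrary CLCT matrix is factored into real and complex chirp multiplications and Fourier transforms, giving the same N log N cost for the whole family of complex quadratic-phase systems.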
Cate, C.L.; Wagner, D.P.; Fussell, J.B.
1977-01-01
Common cause failure analysis, also called common mode failure analysis, is an integral part of a complete system reliability analysis. Existing methods of computer aided common cause failure analysis are extended by allowing analysis of the complex systems often encountered in practice. The methods aid in identifying potential common cause failures and also address quantitative common cause failure analysis
Marek, Michael W.; Wu, Wen-Chi Vivian
2014-01-01
This conceptual, interdisciplinary inquiry explores Complex Dynamic Systems as the concept relates to the internal and external environmental factors affecting computer assisted language learning (CALL). Based on the results obtained by de Rosnay ["World Futures: The Journal of General Evolution", 67(4/5), 304-315 (2011)], who observed…
A unified approach to computing real and complex zeros of zero-dimensional ideals
J.B. Lasserre; M. Laurent (Monique); P. Rostalski; M. Putinar; S. Sullivant
2009-01-01
In this paper we propose a unified methodology for computing the set $V_K(I)$ of complex ($K = C$) or real ($K = R$) roots of an ideal $I$ in $R[x]$, assuming $V_K(I)$ is finite. We show how moment matrices, defined in terms of a given set of generators of the ideal $I$, can be used to
Computer analysis of potentiometric data of complexes formation in the solution
Jastrzab, Renata; Kaczmarek, Małgorzata T.; Tylkowski, Bartosz; Odani, Akira
2018-02-01
The determination of equilibrium constants is an important process for many branches of chemistry. In this review we discuss computer methods that have been applied to the analysis of potentiometric experimental data generated during complex formation in solution. The review describes both the general basis of the modeling tools and examples of the use of calculated stability constants.
Reducing the Computational Complexity of Reconstruction in Compressed Sensing Nonuniform Sampling
Grigoryan, Ruben; Jensen, Tobias Lindstrøm; Arildsen, Thomas
2013-01-01
sparse signals, but requires computationally expensive reconstruction algorithms. This can be an obstacle for real-time applications. The reduction of complexity is achieved by applying a multi-coset sampling procedure. This proposed method reduces the size of the dictionary matrix, the size...
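For context on the reconstruction step whose cost the paper reduces, here is a generic orthogonal matching pursuit sketch. It is a stand-in illustration, not the paper's multi-coset method; the dictionary, sizes, and sparsity below are made up for the demo, and a smaller dictionary matrix directly shrinks the per-iteration work.

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily recover a k-sparse x
    # from measurements y = A @ x by repeatedly picking the dictionary
    # column best correlated with the residual, then least-squares
    # fitting on the selected support.
    residual, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 120))
A /= np.linalg.norm(A, axis=0)          # unit-norm dictionary columns
x_true = np.zeros(120)
x_true[[5, 37, 80]] = [2.0, -1.8, 1.5]  # 3-sparse test signal
x_hat = omp(A, A @ x_true, k=3)
```

The inner least-squares solve is the dominant cost per iteration, which is why shrinking the dictionary matrix, as the abstract proposes, pays off for real-time use.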
On a computational method for modelling complex ecosystems by superposition procedure
He Shanyu.
1986-12-01
In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximal degree.
Complexity and Intensionality in a Type-1 Framework for Computable Analysis
Lambov, Branimir Zdravkov
2005-01-01
This paper describes a type-1 framework for computable analysis designed to facilitate efficient implementations and discusses properties that have not been well studied before for type-1 approaches: the introduction of complexity measures for type-1 representations of real functions, and ways...
Atomic switch networks—nanoarchitectonic design of a complex system for natural computing
Demis, E C; Aguilera, R; Sillin, H O; Scharnhorst, K; Sandouk, E J; Gimzewski, J K; Aono, M; Stieg, A Z
2015-01-01
Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing—a burgeoning field that investigates the computational aptitude of complex biologically inspired systems. (paper)
Lee, Jay Min; Yang, Dong-Seok; Bunker, Grant B
2013-01-01
Using the FEFF kernel A(k,r), we describe the inverse computation from χ(k)-data to the g(r)-solution in terms of a singularity regularization method based on a complete Bayesian statistical process. In this work, we topologically decompose the system-matched invariant projection operators into two distinct types, (A⁺AA⁺A) and (AA⁺AA⁺), and achieve Synthesized Topological Inversion Computation (STIC) by employing a 12-operator closed-loop emulator of the symplectic transformation. This leads to a numerically self-consistent solution as the optimal near-singular regularization parameters are sought, dramatically suppressing instability problems connected with finite-precision arithmetic in ill-posed systems. By statistically correlating a pair of measured data sets, it was feasible to compute an optimal EXAFS phase-retrieval solution expressed in terms of the complex-valued χ(k), and this approach was successfully used to determine the optimal g(r) for a complex multi-component system.
Awe, O.E., E-mail: draweoe2004@yahoo.com; Oshakuade, O.M.
2016-04-15
A new method for calculating Infinite Dilute Activity Coefficients (γ^∞) of binary liquid alloys has been developed. The method computes γ^∞ from experimental thermodynamic integral free energy of mixing data using the complex formation model. The new method was first used to theoretically compute the γ^∞ values of 10 binary alloys whose γ^∞ values have been determined by experiments. The significant agreement between the computed values and the available experimental values served as impetus for applying the new method to 22 selected binary liquid alloys whose γ^∞ values are either nonexistent or incomplete. In order to verify the reliability of the computed γ^∞ values of the 22 selected alloys, we recomputed them using three other existing methods of computing or estimating γ^∞ and then used the γ^∞ values obtained from each of the four methods (the new method inclusive) to compute thermodynamic activities of the components of each of the binary systems. The computed activities were compared with available experimental activities. The results from the proposed method, in most of the selected alloys, showed better agreement with experimental activity data. Thus, the new method is an alternative and, in certain instances, more reliable approach to computing γ^∞ of binary liquid alloys.
Krishnamoorthy Arumugam
2014-04-01
Applications of redox processes range over a number of scientific fields. This review article summarizes the theory behind the calculation of redox potentials in solution for species such as organic compounds, inorganic complexes, actinides, battery materials, and mineral surface-bound species. Different computational approaches to predict and determine redox potentials of electron transitions are discussed, along with their respective pros and cons for the prediction of redox potentials. Subsequently, recommendations are made for certain necessary computational settings required for accurate calculation of redox potentials. This article reviews the importance of computational parameters, such as basis sets, density functional theory (DFT) functionals, and relativistic approaches, and the role that physicochemical processes, such as hydration or spin-orbit coupling, play in the shift of redox potentials, and will aid in finding suitable combinations of approaches for different chemical and geochemical applications. Identifying cost-effective and credible computational approaches is essential to benchmark redox potential calculations against experiments. Once a good theoretical approach is found to model the chemistry and thermodynamics of the redox and electron transfer process, this knowledge can be incorporated into models of more complex reaction mechanisms that include diffusion in the solute, surface diffusion, and dehydration, to name a few. This knowledge is important to fully understand the nature of redox processes, be it a geochemical process that dictates natural redox reactions or one that is being used for the optimization of a chemical process in industry. In addition, it will help identify materials that will be useful to design catalytic redox agents, to come up with materials to be used for batteries and photovoltaic processes, and to identify new and improved remediation strategies in environmental engineering, for example the
Sherwin, Jason
human activities. Nevertheless, since it is not constrained by computational details, the study of situational awareness provides a unique opportunity to approach complex tasks of operation from an analytical perspective. In other words, with SA, we get to see how humans observe, recognize and react to complex systems on which they exert some control. Reconciling this perspective on complexity with complex systems research, it might be possible to further our understanding of complex phenomena if we can probe the anatomical mechanisms by which we, as humans, do it naturally. At this unique intersection of two disciplines, a hybrid approach is needed. So in this work, we propose just such an approach. In particular, this research proposes a computational approach to the situational awareness (SA) of complex systems. Here we propose to implement certain aspects of situational awareness via a biologically-inspired machine-learning technique called Hierarchical Temporal Memory (HTM). In doing so, we will use either simulated or actual data to create and to test computational implementations of situational awareness. This will be tested in two example contexts, one being more complex than the other. The ultimate goal of this research is to demonstrate a possible approach to analyzing and understanding complex systems. By using HTM and carefully developing techniques to analyze the SA formed from data, it is believed that this goal can be obtained.
Krinsky, Jamin L.; Arnold, John; Bergman, Robert G.
2006-10-03
Monomeric thiosalicylaldiminate complexes of rhodium(I) and iridium(I) were prepared by ligand transfer from the homoleptic zinc(II) species. In the presence of strongly donating ligands, the iridium complexes undergo insertion of the metal into the imine carbon-hydrogen bond. Thiophenoxyketimines were prepared by non-templated reaction of o-mercaptoacetophenone with anilines, and were complexed with rhodium(I), iridium(I), nickel(II) and platinum(II). X-ray crystallographic studies showed that while the thiosalicylaldiminate complexes display planar ligand conformations, those of the thiophenoxyketiminates are strongly distorted. Results of a computational study were consistent with a steric-strain interpretation of the difference in preferred ligand geometries.
Development of an Evaluation Method for the Design Complexity of Computer-Based Displays
Kim, Hyoung Ju; Lee, Seung Woo; Kang, Hyun Gook; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2011-10-15
The importance of the design of human machine interfaces (HMIs) for human performance and the safety of process industries has long been recognized. Especially in the case of nuclear power plants (NPPs), HMIs have significant implications for safety because poor HMIs can impair the decision making ability of human operators. In order to support and increase the decision making ability of human operators, advanced HMIs based on up-to-date computer technology are provided. Human operators in an advanced main control room (MCR) acquire information through video display units (VDUs) and a large display panel (LDP), which is required for the operation of NPPs. These computer-based displays contain a huge amount of information and present it in a greater variety of formats than those of a conventional MCR. For example, these displays contain more display elements such as abbreviations, labels, icons, symbols, coding, etc. As computer-based displays contain more information, the complexity of advanced displays becomes greater due to less distinctiveness of each display element. A greater understanding is emerging about the effectiveness of designs of computer-based displays, including how distinctively display elements should be designed. This study covers the early phase in the development of an evaluation method for the design complexity of computer-based displays. To this end, a series of existing studies were reviewed to suggest an appropriate concept that is serviceable to unravel this problem
Kim, Min-Kyung; Eom, Ki Heon; Lim, Jun-Heok; Lee, Jea-Keun; Lee, Ju Dong; Won, Yong Sun [Pukyong National University, Busan (Korea, Republic of)
2015-11-15
The complexation of boric acid (B(OH)3), the primary form of aqueous boron at moderate pH, with polyols is proposed and mechanistically studied as an efficient way to improve membrane processes such as reverse osmosis (RO) for removing boron from seawater by increasing the size of aqueous boron compounds. Computational chemistry based on density functional theory (DFT) was used to map the reaction pathways of the complexation of B(OH)3 with various polyols such as glycerol, xylitol, and mannitol. The reaction energies were calculated as −80.6, −98.1, and −87.2 kcal/mol for glycerol, xylitol, and mannitol, respectively, indicating that xylitol is the most thermodynamically favorable for complexation with B(OH)3. Moreover, the 1:2 molar ratio of B(OH)3 to polyol was found to be more favorable than the 1:1 ratio for complexation. Meanwhile, recent lab-scale RO experiments successfully supported our computational prediction that 2 moles of xylitol are the most effective complexing agent for 1 mole of B(OH)3 in aqueous solution.
Bogdanov, Andrey; Kavun, Elif Bilge; Tischhauser, Elmar
2012-01-01
An accurate estimation of the success probability and data complexity of linear cryptanalysis is a fundamental question in symmetric cryptography. In this paper, we propose an efficient reconfigurable hardware architecture to compute the success probability and data complexity of Matsui's Algorithm...... block lengths ensures that any empirical observations are not due to differences in statistical behavior for artificially small block lengths. Rather surprisingly, we observed in previous experiments a significant deviation between the theory and practice for Matsui's Algorithm 2 for larger block sizes...
Geroyannis, V.S.
1988-01-01
In this paper, a numerical method is developed for determining the structural distortion of a polytropic star which rotates either uniformly or differentially. This method carries out the required numerical integrations in the complex plane. The method is implemented to compute indicative quantities, such as the critical perturbation parameter which represents an upper limit in the rotational behavior of the star. From such indicative results, it is inferred that this method achieves impressive improvement over other relevant methods; most important, it is comparable to some of the most elaborate and accurate techniques on the subject. It is also shown that the use of this method with Chandrasekhar's first-order perturbation theory yields an immediate drastic improvement of the results. Thus, there is no need, for most applications concerning rotating polytropic models, to proceed to the further use of the method with higher-order techniques, unless the maximum accuracy of the method is required.
Counting loop diagrams: computational complexity of higher-order amplitude evaluation
Eijk, E. van; Kleiss, R.; Lazopoulos, A.
2004-01-01
We discuss the computational complexity of the perturbative evaluation of scattering amplitudes, both by the Caravaglios-Moretti algorithm and by direct evaluation of the individual diagrams. For a self-interacting scalar theory, we determine the complexity as a function of the number of external legs. We describe a method for obtaining the number of topologically inequivalent Feynman graphs containing closed loops, and apply this to 1- and 2-loop amplitudes. We also compute the number of graphs weighted by their symmetry factors, thus arriving at exact and asymptotic estimates for the average symmetry factor of diagrams. We present results for the asymptotic number of diagrams up to 10 loops, and prove that the average symmetry factor approaches unity as the number of external legs becomes large.
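The "graphs weighted by their symmetry factors" count that the abstract mentions has a simple closed form in the zero-dimensional vacuum-diagram case for a φ⁴ vertex: all Wick pairings of 4n fields, divided by the n! vertex and (4!)^n leg relabelings. A sketch of that standard counting identity (not the paper's own algorithm):

```python
from math import factorial
from fractions import Fraction

def double_factorial(m):
    # (2k-1)!! = number of perfect pairings of 2k objects
    out = 1
    for i in range(m, 0, -2):
        out *= i
    return out

def weighted_vacuum_diagrams(n):
    # Sum over phi^4 vacuum diagrams with n vertices of 1/S(D),
    # where S(D) is the symmetry factor:
    #   (4n-1)!! Wick pairings / (n! vertex relabelings * (4!)^n leg relabelings)
    return Fraction(double_factorial(4 * n - 1),
                    factorial(n) * 24 ** n)
```

At n = 1 this gives 1/8, matching the single figure-eight diagram with symmetry factor 8; at n = 2 it gives 35/384 = 1/16 + 1/48 + 1/128, the three two-vertex vacuum diagrams.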
Wilke, DN
2012-07-01
problems that utilise remeshing (i.e. the mesh topology is allowed to change) between design updates. Here, changes in mesh topology result in abrupt changes in the discretization error of the computed response. These abrupt changes in turn manifest... in shape optimization but may be present whenever (partial) differential equations are approximated numerically with non-constant discretization methods, e.g. remeshing of spatial domains or automatic time stepping in temporal domains.
A computer program for external modes in complex ionic crystals (the rigid molecular-ion model)
Chaplot, S.L.
1978-01-01
A computer program DISPR has been developed to calculate the external-mode phonon dispersion relation in the harmonic approximation for complex ionic crystals using the rigid molecular ion model. A description of the program, the flow diagram and the required input information are given. A sample calculation for α-KNO3 is presented. The program can handle any type of crystal lattice with any number of atoms and molecules per unit cell with suitable changes in dimension statements.
[The P300-based brain-computer interface: presentation of the complex "flash + movement" stimuli].
Ganin, I P; Kaplan, A Ia
2014-01-01
The P300-based brain-computer interface requires the detection of the P300 wave of brain event-related potentials. Most of its users learn BCI control in several minutes, and after short classifier training they can type a text on the computer screen or assemble an image from separate fragments in simple BCI-based video games. Nevertheless, insufficient attractiveness for users and the conservative stimulus organization of this BCI may restrict its integration into the control of real information processes. At the same time, the initial movement of an object (motion-onset stimuli) may be an independent factor that induces the P300 wave. In the current work we tested the hypothesis that complex "flash + movement" stimuli, together with a drastic and compact stimulus organization on the computer screen, may be much more attractive for the user while operating the P300 BCI. In a study of 20 subjects we showed the effectiveness of our interface. Both accuracy and P300 amplitude were higher for flashing stimuli and complex "flash + movement" stimuli than for motion-onset stimuli. N200 amplitude was maximal for flashing stimuli, while for "flash + movement" stimuli and motion-onset stimuli it was only half as large. A similar BCI with complex stimuli may be embedded into compact control systems requiring a high level of user attention under the impact of negative external effects obstructing BCI control.
Investigation of anticancer properties of caffeinated complexes via computational chemistry methods
Sayin, Koray; Üngördü, Ayhan
2018-03-01
Computational investigations were performed for 1,3,7-trimethylpurine-2,6-dione, 3,7-dimethylpurine-2,6-dione, and their Ru(II) and Os(III) complexes. The B3LYP/6-311++G(d,p)(LANL2DZ) level was used in the numerical calculations. Geometric parameters and the IR, 1H, 13C, and 15N NMR spectra were examined in detail. Additionally, contour diagrams of the frontier molecular orbitals (FMOs), molecular electrostatic potential (MEP) maps, MEP contours and some quantum chemical descriptors were used to determine reactivity rankings and active sites. The electron density on the surface was similar in the studied complexes. Quantum chemical descriptors were investigated, and the anticancer activities of the complexes were greater than those of cisplatin and their ligands. Additionally, molecular docking calculations were performed in water between the complexes and a protein (ID: 3WZE). The most strongly interacting complex was found to be the Os complex, with an interaction energy of 342.9 kJ/mol.
Towards electromechanical computation: An alternative approach to realize complex logic circuits
Hafiz, Md Abdullah Al; Kosuru, Lakshmoji; Younis, Mohammad I.
2016-01-01
Electromechanical computing based on micro/nano resonators has recently attracted significant attention. However, full implementation of this technology has been hindered by the difficulty in realizing complex logic circuits. We report here an alternative approach to realize complex logic circuits based on multiple MEMS resonators. As case studies, we report the construction of a single-bit binary comparator, a single-bit 4-to-2 encoder, and parallel XOR/XNOR and AND/NOT logic gates. Toward this, several microresonators are electrically connected and their resonance frequencies are tuned through an electrothermal modulation scheme. The microresonators operating in the linear regime do not require large excitation forces, and work at room temperature and at modest air pressure. This study demonstrates that by reconfiguring the same basic building block, tunable resonator, several essential complex logic functions can be achieved.
Fraass, Benedick A.; Lash, Kathy L.; Matrone, Gwynne M.; Volkman, Susan K.; McShan, Daniel L.; Kessler, Marc L.; Lichter, Allen S.
1998-01-01
Purpose: To analyze treatment delivery errors for three-dimensional (3D) conformal therapy performed at various levels of treatment delivery automation and complexity, ranging from manual field setup to virtually complete computer-controlled treatment delivery using a computer-controlled conformal radiotherapy system (CCRS). Methods and Materials: All treatment delivery errors which occurred in our department during a 15-month period were analyzed. Approximately 34,000 treatment sessions (114,000 individual treatment segments [ports]) on four treatment machines were studied. All treatment delivery errors logged by treatment therapists or quality assurance reviews (152 in all) were analyzed. Machines 'M1' and 'M2' were operated in a standard manual setup mode, with no record and verify system (R/V). MLC machines 'M3' and 'M4' treated patients under the control of the CCRS system, which (1) downloads the treatment delivery plan from the planning system; (2) performs some (or all) of the machine set up and treatment delivery for each field; (3) monitors treatment delivery; (4) records all treatment parameters; and (5) notes exceptions to the electronically-prescribed plan. Complete external computer control is not available on M3; therefore, it uses as many CCRS features as possible, while M4 operates completely under CCRS control and performs semi-automated and automated multi-segment intensity modulated treatments. Analysis of treatment complexity was based on numbers of fields, individual segments, nonaxial and noncoplanar plans, multisegment intensity modulation, and pseudoisocentric treatments studied for a 6-month period (505 patients) concurrent with the period in which the delivery errors were obtained. Treatment delivery time was obtained from the computerized scheduling system (for manual treatments) or from CCRS system logs. Treatment therapists rotate among the machines; therefore, this analysis does not depend on fixed therapist staff on particular
Application of the Decomposition Method to the Design Complexity of Computer-based Display
Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2012-05-15
The importance of the design of human machine interfaces (HMIs) for human performance and safety has long been recognized in process industries. In the case of nuclear power plants (NPPs), HMIs have significant implications for safety, since poor implementation of HMIs can impair the operators' information searching ability, which is considered one of the important aspects of human behavior. To support and increase the efficiency of the operators' information searching behavior, advanced HMIs based on computer technology are provided. Operators in an advanced main control room (MCR) acquire information through the video display units (VDUs) and large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present it in a greater variety of formats than a conventional MCR. For example, these displays contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting than conventional ones. As computer-based displays contain more information, the complexity of the elements becomes greater due to less distinctiveness of each element. A greater understanding is emerging about the effectiveness of designs of computer-based displays, including how distinctively display elements should be designed. According to Gestalt theory, people tend to group similar elements based on attributes such as shape, color or pattern, following the principle of similarity. Therefore, it is necessary to consider not only the human operator's perception but also the number of elements constituting the computer-based display
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue. Control systems that operate in networks relate especially to this issue. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automation of the problem solving. Advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of multi-agent control for the systems in a parallel mode with various degrees of detail.
French, Samuel A.; Coates, Rosie; Lewis, Dewi W.; Catlow, C. Richard A.
2011-01-01
We demonstrate the viability of distributed computing techniques employing idle desktop computers in investigating complex structural problems in solids. Through the use of a combined Monte Carlo and energy minimisation method, we show how a large parameter space can be effectively scanned. By controlling the generation and running of different configurations through a database engine, we are able not only to analyse the data 'on the fly' but also to direct the running of jobs and the algorithms for generating further structures. As an exemplar case, we probe the distribution of Al and extra-framework cations in the structure of the zeolite Mordenite. We compare our computed unit cells with experiment and find that whilst there is excellent correlation between computed and experimentally derived unit cell volumes, cation positioning and short-range Al ordering (i.e. near neighbour environment), there remains some discrepancy in the distribution of Al throughout the framework. We also show that stability-structure correlations only become apparent once a sufficiently large sample is used.
LT^2C^2: A language of thought with Turing-computable Kolmogorov complexity
Santiago Figueira
2013-03-01
In this paper, we present a theoretical effort to connect the theory of program size to psychology by implementing a concrete language of thought with Turing-computable Kolmogorov complexity (LT^2C^2) satisfying the following requirements: (1) to be simple enough so that the complexity of any given finite binary sequence can be computed; (2) to be based on tangible operations of human reasoning (printing, repeating, …); (3) to be sufficiently powerful to generate all possible sequences but not so powerful as to identify regularities which would be invisible to humans. We first formalize LT^2C^2, giving its syntax and semantics, and defining an adequate notion of program size. Our setting leads to a Kolmogorov complexity function relative to LT^2C^2 which is computable in polynomial time, and it also induces a prediction algorithm in the spirit of Solomonoff's inductive inference theory. We then prove the efficacy of this language by investigating regularities in strings produced by participants attempting to generate random strings. Participants had a profound understanding of randomness and hence avoided typical misconceptions such as exaggerating the number of alternations. We reasoned that the remaining regularities would express the algorithmic nature of human thought, revealed in the form of specific patterns. Kolmogorov complexity relative to LT^2C^2 passed the three tests examined here: (1) human sequences were less complex than control PRNG sequences; (2) human sequences were not stationary, showing decreasing values of complexity resulting from fatigue; (3) each individual showed traces of algorithmic stability, since fitting of partial data was more effective for predicting subsequent data than average fits. This work extends previous efforts to connect Kolmogorov complexity theory and algorithmic information theory to psychology by explicitly proposing a language which may describe the patterns of human thought.
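LT^2C^2 itself is a purpose-built language, but the qualitative claim that patterned strings score lower than PRNG output can be illustrated with any computable complexity proxy. The sketch below uses compressed length, which is emphatically not the paper's measure, only a crude computable stand-in for Kolmogorov complexity.

```python
import random
import zlib

def complexity_proxy(bits):
    # Length in bytes of the zlib-compressed string: a crude,
    # computable upper-bound proxy for Kolmogorov complexity.
    return len(zlib.compress(bits.encode(), 9))

regular = "01" * 128                  # highly patterned 256-bit string
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(256))  # PRNG control
```

As with the paper's measure, the patterned string needs a far shorter description than the pseudorandom control.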
Liang, Jie; Qian, Hong
2010-01-01
Modern molecular biology has always been a great source of inspiration for computational science. Half a century ago, the challenge of understanding macromolecular dynamics led the way for computations to become part of the tool set for studying molecular biology. Twenty-five years ago, the demands of genome science inspired an entire generation of computer scientists with an interest in discrete mathematics to join the field that is now called bioinformatics. In this paper, we shall lay out a new mathematical theory for the dynamics of biochemical reaction systems in a small volume (i.e., mesoscopic), in terms of a stochastic, discrete-state, continuous-time formulation called the chemical master equation (CME). Similar to the wavefunction in quantum mechanics, the dynamically changing probability landscape associated with the state space provides a fundamental characterization of the biochemical reaction system. The stochastic trajectories of the dynamics are best known through simulations using the Gillespie algorithm. In contrast to the Metropolis algorithm, this Monte Carlo sampling technique does not follow a process with detailed balance. We shall show several examples of how CMEs are used to model cellular biochemical systems. We shall also illustrate the computational challenges involved: multiscale phenomena, the interplay between stochasticity and nonlinearity, and how macroscopic determinism arises from mesoscopic dynamics. We point out recent advances in computing solutions to the CME, including exact solution of the steady-state landscape and stochastic differential equations that offer alternatives to the Gillespie algorithm. We argue that the CME is an ideal system from which one can learn to understand "complex behavior" and complexity theory, and from which important biological insight can be gained.
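The Gillespie algorithm mentioned above can be sketched in a few lines. The example simulates a hypothetical birth-death process (constant production, linear degradation), chosen purely for illustration; it is not a model from the paper:

```python
import random

def gillespie_birth_death(k_prod, k_deg, x0, t_end, seed=0):
    """Exact stochastic simulation (Gillespie) of  0 -> X (rate k_prod)
    and X -> 0 (rate k_deg * x), one trajectory of the underlying CME."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a1, a2 = k_prod, k_deg * x      # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)        # exponential waiting time
        if rng.random() < a1 / a0:      # pick which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return times, states
```

At stationarity the mean copy number approaches k_prod/k_deg; each trajectory is an exact sample of the CME dynamics rather than a detailed-balance random walk.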
I. Fisk
2011-01-01
Introduction The CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year's; this results in longer reconstruction times and events that are harder to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load are close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference, where a large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure Operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...
Masoud, M.S.; Motaweh, H.A.; Ali, A.E.
1999-01-01
The electronic absorption spectra of the octahedral complexes containing monoethanolamine were recorded in different solvents (dioxane, chloroform, ethanol, dimethylformamide, dimethylsulfoxide and water). The data were analyzed by a multiple linear regression technique using an equation of the form y = a + … (a is the regression intercept), where the independent variables are various empirical solvent polarity parameters; the regression constants were calculated using a statistics program on a PC. The solvent spectral data of the complexes are compared with those in Nujol; the solvents shift the spectral bands to the red. In the case of the Mn(MEA)Cl complex, numerous bands appear in the presence of CHCl3, DMF and DMSO solvents, probably due to numerous oxidation states. The solvent parameters E (solvent-solute hydrogen bonding and dipolar interaction), the parameter for dipolar interaction related to the dielectric constant, M (solute permanent dipole-solvent induced dipole) and N (solute permanent dipole-solvent permanent dipole) are correlated with the structure of the complexes. In hydrogen-bonding solvents, as the dielectric constant increases a blue shift occurs due to conjugation with high stability; the data in DMF and DMSO solvents are nearly the same, probably due to their similarity
Computational model of dose response for low-LET-induced complex chromosomal aberrations
Eidelman, Y.A.; Andreev, S.G.
2015-01-01
Experiments with full-colour mFISH chromosome painting have revealed a high yield of radiation-induced complex chromosomal aberrations (CAs). The ratio of complex to simple aberrations depends on cell type and linear energy transfer. Theoretical analysis has demonstrated that a mechanism of CA formation through interaction between lesions at the surfaces of chromosome territories does not explain the high complexes-to-simples ratio in human lymphocytes. The possible origin of the high yields of γ-induced complex CAs was investigated in the present work by computer simulation. CAs were studied on the basis of chromosome structure and dynamics modelling and the hypothesis of CA formation on nuclear centres. The spatial organisation of all chromosomes in a human interphase nucleus was predicted by simulation of the mitosis-to-interphase chromosome structure transition. Two scenarios of CA formation were analysed: 'static' centres (existing in a nucleus prior to irradiation) and 'dynamic' centres (formed in response to irradiation). The modelling results reveal that under certain conditions both scenarios explain quantitatively the dose-response relationships for both simple and complex γ-induced inter-chromosomal exchanges observed by mFISH chromosome painting in the first post-irradiation mitosis in human lymphocytes. (authors)
Kim, Dong Hyun; Bang, Duck Won; Cho, Yoon Haeng; Suk, Eun Ha
2011-01-01
To delineate complex plaque morphology in patients with stable angina using coronary computed tomographic angiography (CTA). 36 patients with complex plaques proven by conventional coronary angiography (CAG), who had undergone CTA for evaluation of typical angina, were enrolled in this study. Intravascular ultrasonography (IVUS) was performed in 14 patients (16 lesions). We compared CTA with CAG for plaque features and analyzed vascular cutoff, intraluminal filling defect in a patent vessel, irregularity of plaque, and ulceration. The density of plaque was also evaluated on CTA. CAG and CTA showed complex morphology in 44 cases (100%) and 34 cases (77%), respectively, with features including abrupt vessel cutoff (27 vs. 16%, κ=0.57), intraluminal filling defect (32 vs. 30%, κ=0.77), irregularity (75 vs. 52%, κ=0.52), and ulceration (16 vs. 11%, κ=0.60). CTA indicated that the complex lesions were hypodense (mean 66 ± 21 Hounsfield units). CTA is a very accurate and useful non-invasive imaging modality for evaluating complex plaque in patients with typical angina.
Kim, Dong Hyun [Dept. of Radiology, Soonchunhyang University Hospital, Bucheon (Korea, Republic of); Bang, Duck Won; Cho, Yoon Haeng [Dept. of Internal Medicine, Soonchunhyang University Hospital, Bucheon (Korea, Republic of); Suk, Eun Ha [Dept. of Anesthyesiology and Pain Medicine, Asan Medical Center, Seoul (Korea, Republic of)
2011-04-15
To delineate complex plaque morphology in patients with stable angina using coronary computed tomographic angiography (CTA). 36 patients with complex plaques proven by conventional coronary angiography (CAG), who had undergone CTA for evaluation of typical angina, were enrolled in this study. Intravascular ultrasonography (IVUS) was performed in 14 patients (16 lesions). We compared CTA with CAG for plaque features and analyzed vascular cutoff, intraluminal filling defect in a patent vessel, irregularity of plaque, and ulceration. The density of plaque was also evaluated on CTA. CAG and CTA showed complex morphology in 44 cases (100%) and 34 cases (77%), respectively, with features including abrupt vessel cutoff (27 vs. 16%, κ=0.57), intraluminal filling defect (32 vs. 30%, κ=0.77), irregularity (75 vs. 52%, κ=0.52), and ulceration (16 vs. 11%, κ=0.60). CTA indicated that the complex lesions were hypodense (mean 66 ± 21 Hounsfield units). CTA is a very accurate and useful non-invasive imaging modality for evaluating complex plaque in patients with typical angina.
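The κ values reported in the record above are Cohen's kappa, a chance-corrected measure of agreement between the CAG and CTA readings. A minimal sketch for binary (feature present/absent) ratings, using the standard formula:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters given paired 0/1 ratings:
    kappa = (p_observed - p_chance) / (1 - p_chance)."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n                   # per-rater "present" rates
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)              # agreement expected by chance
    return (po - pe) / (1 - pe) if pe < 1 else 1.0
```

Perfect agreement gives κ = 1, while agreement no better than chance gives κ ≈ 0, which is why κ ≈ 0.5-0.8 in the study indicates moderate-to-substantial agreement.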
A computational approach to modeling cellular-scale blood flow in complex geometry
Balogh, Peter; Bagchi, Prosenjit
2017-04-01
We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.
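A core ingredient of the immersed-boundary approach described above is spreading Lagrangian forces onto the Eulerian grid with a regularized delta function. The sketch below uses Peskin's standard 4-point delta kernel in 2-D; the grid size and test force are arbitrary illustrative choices, not values from the paper:

```python
import math

def peskin_delta(r):
    """Peskin's 4-point regularized delta kernel (support |r| < 2 cells)."""
    r = abs(r)
    if r < 1.0:
        return (3 - 2*r + math.sqrt(1 + 4*r - 4*r*r)) / 8
    if r < 2.0:
        return (5 - 2*r - math.sqrt(-7 + 12*r - 4*r*r)) / 8
    return 0.0

def spread_force_2d(xl, yl, f, nx, ny, h):
    """Spread a point force f at Lagrangian position (xl, yl) onto an
    nx-by-ny grid of spacing h; returns the Eulerian force-density field."""
    grid = [[0.0] * ny for _ in range(nx)]
    for i in range(nx):
        for j in range(ny):
            dx = (i * h - xl) / h
            dy = (j * h - yl) / h
            grid[i][j] = f * peskin_delta(dx) * peskin_delta(dy) / (h * h)
    return grid
```

Because the 4-point kernel sums to one on any unit-spaced lattice, the spread force density integrates back to f, a discrete conservation property the full method relies on.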
Enkelejda Miho
2018-02-01
The adaptive immune system recognizes antigens via an immense array of antigen-binding antibodies and T-cell receptors, the immune repertoire. The interrogation of immune repertoires is of high relevance for understanding the adaptive immune response in disease and infection (e.g., autoimmunity, cancer, HIV). Adaptive immune receptor repertoire sequencing (AIRR-seq) has driven the quantitative and molecular-level profiling of immune repertoires, thereby revealing the high-dimensional complexity of the immune receptor sequence landscape. Several methods for the computational and statistical analysis of large-scale AIRR-seq data have been developed to resolve immune repertoire complexity and to understand the dynamics of adaptive immunity. Here, we review the current research on (i) diversity, (ii) clustering and network, (iii) phylogenetic, and (iv) machine learning methods applied to dissect, quantify, and compare the architecture, evolution, and specificity of immune repertoires. We summarize outstanding questions in computational immunology and propose future directions for systems immunology toward coupling AIRR-seq with the computational discovery of immunotherapeutics, vaccines, and immunodiagnostics.
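The diversity methods listed under (i) typically start from summary statistics over clone-size distributions. A minimal sketch is below; the index definitions are the standard ones, and normalizing clonality by log-richness is one common convention among several:

```python
import math

def repertoire_diversity(clone_counts):
    """Shannon entropy, Simpson's index, richness, and clonality for a list
    of clone sizes (read counts per clonotype)."""
    total = sum(clone_counts)
    ps = [c / total for c in clone_counts]
    shannon = -sum(p * math.log(p) for p in ps if p > 0)   # nats
    simpson = sum(p * p for p in ps)                       # prob. two reads share a clone
    richness = len(clone_counts)
    # clonality: 0 for a perfectly even repertoire, 1 for a monoclonal one
    clonality = 1 - shannon / math.log(richness) if richness > 1 else 1.0
    return {"shannon": shannon, "simpson": simpson,
            "richness": richness, "clonality": clonality}
```

An even repertoire of four clones gives entropy ln 4 and clonality 0; a highly expanded clone drives clonality toward 1.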
Blanco-Elorrieta, Esti; Pylkkänen, Liina
2016-01-01
What is the neurobiological basis of our ability to create complex messages with language? Results from multiple methodologies have converged on a set of brain regions as relevant for this general process, but the computational details of these areas remain to be characterized. The left anterior temporal lobe (LATL) has been a consistent node within this network, with results suggesting that although it rather systematically shows increased activation for semantically complex structured stimuli, this effect does not extend to number phrases such as 'three books.' In the present work we used magnetoencephalography to investigate whether numbers in general are an invalid input to the combinatory operations housed in the LATL or whether the lack of LATL engagement for stimuli such as 'three books' is due to the quantificational nature of such phrases. As a relevant test case, we employed complex number terms such as 'twenty-three', where one number term is not a quantifier of the other but rather, the two terms form a type of complex concept. In a number naming paradigm, participants viewed rows of numbers and depending on task instruction, named them as complex number terms ('twenty-three'), numerical quantifications ('two threes'), adjectival modifications ('blue threes') or non-combinatory lists (e.g., 'two, three'). While quantificational phrases failed to engage the LATL as compared to non-combinatory controls, both complex number terms and adjectival modifications elicited a reliable activity increase in the LATL. Our results show that while the LATL does not participate in the enumeration of tokens within a set, exemplified by the quantificational phrases, it does support conceptual combination, including the composition of complex number concepts. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Stop: a fast procedure for the exact computation of the performance of complex probabilistic systems
Corynen, G.C.
1982-01-01
A new set-theoretic method for the exact and efficient computation of the probabilistic performance of complex systems has been developed. The core of the method is a fast algorithm for disjointing a collection of product sets which is intended for systems with more than 1000 components and 100,000 cut sets. The method is based on a divide-and-conquer approach, in which a multidimensional problem is progressively decomposed into lower-dimensional subproblems along its dimensions. The method also uses a particular pointer system that eliminates the need to store the subproblems by only requiring the storage of pointers to those problems. Examples of the algorithm and the divide-and-conquer strategy are provided, and comparisons with other significant methods are made. Statistical complexity studies show that the expected time and space complexity of other methods is O(me^n), but that our method is O(nm^3 log(m)). Problems which would require days of Cray-1 computer time with present methods can now be solved in seconds. Large-scale systems that can only be approximated with other techniques can now also be evaluated exactly
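For small systems, the quantity the paper computes, the exact failure probability of a system given its cut sets, can be obtained by brute-force inclusion-exclusion, as sketched below. The cost grows exponentially in the number of cut sets, which is exactly what the paper's divide-and-conquer disjointing algorithm avoids:

```python
from itertools import combinations

def system_failure_prob(cut_sets, q):
    """Exact P(system failure) for independent components via inclusion-exclusion.
    cut_sets: list of sets of component ids (the system fails if every
    component of some cut set fails); q: dict id -> failure probability."""
    total = 0.0
    for r in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, r):
            comps = set().union(*combo)          # intersection of the events
            p = 1.0
            for c in comps:
                p *= q[c]
            total += (-1) ** (r + 1) * p
    return total
```

For two single-component cut sets {1} and {2} with q = 0.1 each (a series system), this gives 0.1 + 0.1 - 0.01 = 0.19, matching 1 - 0.9².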
Lectures on a theory of computation and complexity over the reals (or an arbitrary ring)
Blum, L.
1990-01-01
These lectures will discuss a new theory of computation and complexity which attempts to integrate key ideas from the classical theory in a setting more amenable to problems defined over continuous domains. The approach taken here is both algebraic and concrete; the underlying space is an arbitrary ring (or field) and the basic operations are polynomial (or rational) maps and tests. This approach yields results in the continuous setting analogous to the pivotal classical results of undecidability and NP-completeness over the integers, yet reflecting the special mathematical character of the underlying space. The goal of these lectures is to highlight key aspects of the new theory as well as to give exposition, in this setting, of classical ideas and results. Indeed, since this new theory is more mathematical, perhaps less dependent on logic than the classical theory, a number of key results have more straightforward and transparent proofs in this setting. One of our themes will be the comparison of results over the integers with results over the reals and complex numbers. Contrasting one theory with the other will help illuminate each, and give deeper understanding to such basic concepts as decidability, definability, computability, and complexity. 53 refs
Phase transition and computational complexity in a stochastic prime number generator
Lacasa, L; Luque, B [Departamento de Matemática Aplicada y Estadística, ETSI Aeronáuticos, Universidad Politécnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain); Miramontes, O [Departamento de Sistemas Complejos, Instituto de Física, Universidad Nacional Autónoma de México, México 01415 DF (Mexico)], E-mail: lucas@dmae.upm.es
2008-02-15
We introduce a prime number generator in the form of a stochastic algorithm. The character of this algorithm gives rise to a continuous phase transition which distinguishes a phase where the algorithm is able to reduce the whole system of numbers into primes and a phase where the system reaches a frozen state with low prime density. In this paper, we first present a broader characterization of this phase transition, both in analytical and numerical terms. Critical exponents are calculated, and data collapse is provided. Further on, we redefine the model as a search problem, placing it within the framework of computational complexity theory. We suggest that the system belongs to the class NP. The computational cost is maximal around the threshold, as is common in many algorithmic phase transitions, revealing the presence of an easy-hard-easy pattern. We finally relate the nature of the phase transition to an average-case classification of the problem.
Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.
2015-03-01
We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
Hadjidoukas, P.E.; Angelikopoulos, P. [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland); Papadimitriou, C. [Department of Mechanical Engineering, University of Thessaly, GR-38334 Volos (Greece); Koumoutsakos, P., E-mail: petros@ethz.ch [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland)
2015-03-01
We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
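The TMCMC sampler at the heart of the framework described above can be sketched in miniature. The version below uses fixed, equally spaced tempering stages and a flat prior, both simplifications for illustration; the real algorithm adapts the tempering schedule and proposal covariance:

```python
import math
import random

def tmcmc_sketch(log_likelihood, prior_sample, n=2000, stages=10, step=0.5,
                 seed=1):
    """Simplified transitional MCMC: temper the likelihood in equal beta
    steps, reweight and resample the population, then apply one random-walk
    Metropolis move per sample at the current temperature (flat prior)."""
    rng = random.Random(seed)
    xs = [prior_sample(rng) for _ in range(n)]
    db = 1.0 / stages
    for s in range(1, stages + 1):
        # importance weights for the tempering increment beta -> beta + db
        w = [math.exp(db * log_likelihood(x)) for x in xs]
        xs = rng.choices(xs, weights=w, k=n)        # multinomial resampling
        beta = s * db
        moved = []
        for x in xs:
            y = x + rng.gauss(0.0, step)
            a = beta * (log_likelihood(y) - log_likelihood(x))
            if rng.random() < math.exp(min(0.0, a)):  # Metropolis accept
                x = y
            moved.append(x)
        xs = moved
    return xs
```

With a Gaussian log-likelihood centred at 3 and a broad uniform prior, the final population approximates the unit-variance posterior around 3; each stage's model evaluations are independent, which is what makes the method embarrassingly parallel.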
Chardon, Gilles; Daudet, Laurent
2013-11-01
This paper extends the method of particular solutions (MPS) to the computation of eigenfrequencies and eigenmodes of thin plates, in the framework of the Kirchhoff-Love plate theory. Specific approximation schemes are developed, with plane waves (MPS-PW) or Fourier-Bessel functions (MPS-FB). This framework also requires a suitable formulation of the boundary conditions. Numerical tests, on two plates with various boundary conditions, demonstrate that the proposed approach provides competitive results with standard numerical schemes such as the finite element method, at reduced complexity, and with large flexibility in the implementation choices.
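The MPS idea above admits a compact illustration for the simpler Helmholtz (membrane) problem on the unit square, using the subspace-angle indicator of Betcke and Trefethen; all discretization sizes here are arbitrary illustrative choices. The indicator σ(k) dips sharply when k² is a Dirichlet eigenvalue:

```python
import numpy as np

def mps_sigma(k, n_basis=20, n_bnd=120, n_int=60, seed=0):
    """Plane-wave MPS indicator for the Dirichlet Laplacian on the unit
    square: sigma(k) = smallest singular value of the boundary rows of an
    orthonormalized plane-wave basis. Dips near eigen-wavenumbers."""
    rng = np.random.default_rng(seed)
    th = np.pi * np.arange(n_basis) / n_basis          # plane-wave directions
    # collocation points: all four sides of the boundary + random interior
    t = np.linspace(0.0, 1.0, n_bnd // 4, endpoint=False)
    bnd = np.concatenate([np.stack([t, 0*t], 1), np.stack([t, 0*t + 1], 1),
                          np.stack([0*t, t], 1), np.stack([0*t + 1, t], 1)])
    interior = rng.uniform(0.05, 0.95, size=(n_int, 2))
    pts = np.vstack([bnd, interior])
    # real plane-wave particular solutions of the Helmholtz equation
    phase = k * (pts[:, :1] * np.cos(th) + pts[:, 1:] * np.sin(th))
    A = np.hstack([np.sin(phase), np.cos(phase)])
    Q, _ = np.linalg.qr(A)                # orthonormalize over all points
    nb = bnd.shape[0]
    return np.linalg.svd(Q[:nb], compute_uv=False)[-1]
```

The first Dirichlet eigenvalue of the unit square is 2π², so σ collapses at k = √2·π ≈ 4.443; for plates, the paper applies the same collocation idea to the fourth-order Kirchhoff-Love operator with the appropriate boundary conditions.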
Improvement of computer complex and interface system for compact nuclear simulator
Lee, D. Y.; Park, W. M.; Cha, K. H.; Jung, C. H.; Park, J. C.
1999-01-01
CNS(Compact Nuclear Simulator) was developed at the end of 1980s, and have been used as training simulator for staffs of KAERI during 10 years. The operator panel interface cards and the graphic interface cards were designed with special purpose only for CNS. As these interface cards were worn out for 10 years, it was very difficult to get spare parts and to repair them. And the interface cards were damaged by over current happened by shortage of lamp in the operator panel. To solve these problem, the project 'Improvement of Compact Nuclear Simulator' was started from 1997. This paper only introduces about the improvement of computer complex and interface system
Field programmable gate array-assigned complex-valued computation and its limits
Bernard-Schwarz, Maria, E-mail: maria.bernardschwarz@ni.com [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria); Zwick, Wolfgang; Klier, Jochen [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Wenzel, Lothar [National Instruments, 11500 N MOPac Expy, Austin, Texas 78759 (United States); Gröschl, Martin [Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria)
2014-09-15
We discuss how leveraging Field Programmable Gate Array (FPGA) technology as part of a high performance computing platform reduces latency to meet the demanding real-time constraints of a quantum optics simulation. Implementations of complex-valued operations using fixed-point numerics on a Virtex-5 FPGA compare favorably to more conventional solutions on a central processing unit. Our investigation explores the performance of multiple fixed-point options along with a traditional 64-bit floating-point version. With this information, the lowest execution times can be estimated. Relative error is examined to ensure simulation accuracy is maintained.
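The trade-off the paper measures can be reproduced in miniature on a host PC by emulating fixed-point arithmetic. The sketch below implements a complex multiply in Q15 (a common FPGA numeric format; the specific formats used in the paper may differ) and compares it with double precision:

```python
def to_q15(x):
    """Quantize a float in [-1, 1) to Q15 (16-bit signed fixed point)."""
    return max(-32768, min(32767, int(round(x * 32768))))

def q15_cmul(ar, ai, br, bi):
    """Complex multiply in Q15: (ar + i*ai)*(br + i*bi); products are
    rescaled with an arithmetic right shift, as FPGA DSP slices would."""
    rr = (ar * br - ai * bi) >> 15
    ri = (ar * bi + ai * br) >> 15
    return rr, ri

# compare against a double-precision reference
a, b = 0.5 + 0.25j, -0.75 + 0.5j
ar, ai, br, bi = (to_q15(v) for v in (a.real, a.imag, b.real, b.imag))
rr, ri = q15_cmul(ar, ai, br, bi)
approx = complex(rr, ri) / 32768
err = abs(approx - a * b) / abs(a * b)
```

For values exactly representable in Q15 the result here is exact; in general the relative error stays near 2⁻¹⁵, which is the kind of accuracy/latency trade-off the paper quantifies against 64-bit floats.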
Bittracher, Andreas; Koltai, Péter; Klus, Stefan; Banisch, Ralf; Dellnitz, Michael; Schütte, Christof
2018-01-01
We consider complex dynamical systems showing metastable behavior, but no local separation of fast and slow time scales. The article raises the question of whether such systems exhibit a low-dimensional manifold supporting its effective dynamics. For answering this question, we aim at finding nonlinear coordinates, called reaction coordinates, such that the projection of the dynamics onto these coordinates preserves the dominant time scales of the dynamics. We show that, based on a specific reducibility property, the existence of good low-dimensional reaction coordinates preserving the dominant time scales is guaranteed. Based on this theoretical framework, we develop and test a novel numerical approach for computing good reaction coordinates. The proposed algorithmic approach is fully local and thus not prone to the curse of dimension with respect to the state space of the dynamics. Hence, it is a promising method for data-based model reduction of complex dynamical systems such as molecular dynamics.
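The notion of "dominant time scales" above is usually made concrete via the eigenvalues of a transition matrix estimated from trajectory data. A minimal sketch with a hypothetical three-state chain, two metastable states plus one fast internal process (the matrix entries are invented for illustration):

```python
import numpy as np

def implied_timescales(T, lag=1.0):
    """Implied timescales t_i = -lag / ln|lambda_i| from a row-stochastic
    transition matrix T; large gaps separate slow (metastable) processes
    from fast ones. The stationary eigenvalue 1 is filtered out."""
    ev = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return np.array([-lag / np.log(l) for l in ev if 1e-12 < l < 1 - 1e-9])

# state 0 is one metastable basin; states 1 and 2 form a second basin
# with fast internal exchange and rare transitions to state 0
T = np.array([[0.98, 0.01, 0.01],
              [0.01, 0.59, 0.40],
              [0.01, 0.40, 0.59]])
ts = implied_timescales(T)
```

A good reaction coordinate is one whose projected dynamics reproduces the slow timescale (here ≈ 33 lag units) while discarding the fast one (≈ 0.6).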
The Modeling and Complexity of Dynamical Systems by Means of Computation and Information Theories
Robert Logozar
2011-12-01
We present the modeling of dynamical systems and the finding of their complexity indicators by the use of concepts from computation and information theories, within the framework of J. P. Crutchfield's theory of ε-machines. A short formal outline of the ε-machines is given. In this approach, dynamical systems are analyzed directly from the time series that is received from a properly adjusted measuring instrument. The binary strings are parsed through the parse tree, within which morphologically and probabilistically unique subtrees or morphs are recognized as system states. An outline and the precise interrelation of the information-theoretic entropies and complexities emanating from the model are given. The paper also serves as a theoretical foundation for the future presentation of the DSA program that implements ε-machine modeling up to the stochastic finite automata level.
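The parse-tree step described above, collecting length-L subwords ("morphs") of the measured binary string and examining their statistics, can be illustrated directly; the full ε-machine construction additionally merges subwords with matching conditional futures, which this sketch omits:

```python
import math
from collections import Counter

def block_entropy(series: str, length: int) -> float:
    """Shannon entropy (bits) of the empirical distribution of length-L
    subwords of a binary time series."""
    blocks = Counter(series[i:i + length]
                     for i in range(len(series) - length + 1))
    total = sum(blocks.values())
    return -sum((c / total) * math.log2(c / total) for c in blocks.values())

def distinct_morphs(series: str, length: int) -> int:
    """Number of distinct length-L subwords (candidate parse-tree morphs)."""
    return len({series[i:i + length] for i in range(len(series) - length + 1)})
```

A period-2 signal has only two length-3 morphs and one bit of block entropy, while an irregular string approaches the maximal 2^L morph count, the raw material from which the ε-machine's states and complexity indicators are built.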
Computation of resonances by two methods involving the use of complex coordinates
Bylicki, M.; Nicolaides, C.A.
1993-01-01
We have studied two different systems producing resonances, a highly excited multielectron Coulombic negative ion (the He⁻ 2s2p² ⁴P state) and a hydrogen atom in a magnetic field, via the complex-coordinate rotation (CCR) and the state-specific complex-eigenvalue Schroedinger equation (CESE) approaches. For the He⁻ 2s2p² ⁴P resonance, a series of large CCR calculations, up to 353 basis functions with explicit r_ij dependence, were carried out to serve as benchmarks. For the magnetic-field problem, the CCR results were taken from the literature. Comparison shows that the state-specific CESE theory allows the physics of the problem to be incorporated systematically while keeping the overall size of the computation tractable regardless of the number of electrons
Impact of familiarity on information complexity in human-computer interfaces
Bakaev Maxim
2016-01-01
A quantitative measure of information complexity remains very much desirable in the HCI field, since it may aid in the optimization of user interfaces, especially in human-computer systems for controlling complex objects. Our paper is dedicated to exploration of the subjective (subject-dependent) aspect of complexity, conceptualized as information familiarity. Although research on familiarity in human cognition and behaviour is done in several fields, the accepted models in HCI, such as the Human Processor or the Hick-Hyman law, do not generally consider this issue. In our experimental study the subjects performed search and selection of digits and letters, whose familiarity was conceptualized as frequency of occurrence in numbers and texts. The analysis showed a significant effect of information familiarity on selection time and throughput in regression models, although the R² values were somewhat low. Still, we hope that our results might aid in the quantification of information complexity and its further application for optimizing interaction in human-machine systems.
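The Hick-Hyman law referenced above relates mean choice time to the entropy of the stimulus ensemble, T = a + b·H; familiarity effects can be folded in by using empirical occurrence frequencies as the probabilities. A sketch with a least-squares fit (the coefficient values in the test are arbitrary illustrative numbers, not the paper's estimates):

```python
import math

def hick_hyman_time(probs, a, b):
    """Mean choice reaction time under the Hick-Hyman law, T = a + b*H,
    where H is the entropy (bits) of the stimulus distribution."""
    H = -sum(p * math.log2(p) for p in probs if p > 0)
    return a + b * H

def fit_hick_hyman(entropies, times):
    """Ordinary least-squares fit of T = a + b*H from (H, T) observations."""
    n = len(entropies)
    mh = sum(entropies) / n
    mt = sum(times) / n
    b = (sum((h - mh) * (t - mt) for h, t in zip(entropies, times))
         / sum((h - mh) ** 2 for h in entropies))
    return mt - b * mh, b
```

Unequal (familiarity-weighted) probabilities lower H and hence the predicted selection time, which is one way to express the familiarity effect the study measures.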
Baichoo, Shakuntala; Ouzounis, Christos A
A multitude of algorithms for sequence comparison, short-read assembly and whole-genome alignment have been developed in the general context of molecular biology, to support technology development for high-throughput sequencing, numerous applications in genome biology and fundamental research on comparative genomics. The computational complexity of these algorithms has been previously reported in original research papers, yet this often neglected property has not been reviewed previously in a systematic manner and for a wider audience. We provide a review of space and time complexity of key sequence analysis algorithms and highlight their properties in a comprehensive manner, in order to identify potential opportunities for further research in algorithm or data structure optimization. The complexity aspect is poised to become pivotal as we will be facing challenges related to the continuous increase of genomic data on unprecedented scales and complexity in the foreseeable future, when robust biological simulation at the cell level and above becomes a reality. Copyright © 2017 Elsevier B.V. All rights reserved.
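As a concrete instance of the space and time complexity properties the review catalogues: Levenshtein alignment runs in O(nm) time, and keeping only a rolling row reduces its space from O(nm) to O(min(n, m)), a standard optimization in sequence-comparison tools:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance in O(len(a) * len(b)) time and O(min) space."""
    if len(a) < len(b):
        a, b = b, a                       # keep the rolling row short
    prev = list(range(len(b) + 1))        # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # (mis)match
        prev = cur
    return prev[-1]
```

The same dynamic-programming skeleton underlies Needleman-Wunsch and Smith-Waterman scoring, whose quadratic cost is precisely what drives the heuristic index structures the review surveys.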
High performance parallel computing of flows in complex geometries: I. Methods
Gourdain, N; Gicquel, L; Montagnac, M; Vermorel, O; Staffelbach, G; Garcia, M; Boussuge, J-F; Gazaix, M; Poinsot, T
2009-01-01
Efficient numerical tools coupled with high-performance computers have become a key element of the design process in the fields of energy supply and transportation. However, flow phenomena that occur in complex systems such as gas turbines and aircraft are still not well understood, mainly because of the models that are needed. In fact, most computational fluid dynamics (CFD) predictions as found today in industry focus on a reduced or simplified version of the real system (such as a periodic sector) and are usually solved with a steady-state assumption. This paper shows how to overcome such barriers and how such a new challenge can be addressed by developing flow solvers running on high-end computing platforms, using thousands of computing cores. Parallel strategies used by modern flow solvers are discussed with particular emphasis on mesh-partitioning, load balancing and communication. Two examples are used to illustrate these concepts: a multi-block structured code and an unstructured code. Parallel computing strategies used with both flow solvers are detailed and compared. This comparison indicates that mesh-partitioning and load balancing are more straightforward with unstructured grids than with multi-block structured meshes. However, the mesh-partitioning stage can be challenging for unstructured grids, mainly due to memory limitations of the newly developed massively parallel architectures. Finally, detailed investigations show that the impact of mesh-partitioning on the numerical CFD solutions, due to rounding errors and block splitting, may be of importance and should be accurately addressed before qualifying massively parallel CFD tools for a routine industrial use.
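The load-balancing step discussed above can be illustrated with the classic longest-processing-time greedy heuristic for assigning mesh blocks to MPI ranks; this is a generic stand-in for illustration, not the partitioner actually used by the codes in the paper:

```python
import heapq

def partition_blocks(block_sizes, n_ranks):
    """Greedy longest-processing-time assignment of mesh blocks to ranks:
    sort blocks by decreasing size, always give the next block to the
    currently least-loaded rank."""
    heap = [(0, r) for r in range(n_ranks)]        # (current load, rank)
    heapq.heapify(heap)
    assignment = {}
    for b, size in sorted(enumerate(block_sizes), key=lambda kv: -kv[1]):
        load, r = heapq.heappop(heap)
        assignment[b] = r
        heapq.heappush(heap, (load + size, r))
    loads = [0] * n_ranks
    for b, r in assignment.items():
        loads[r] += block_sizes[b]
    return assignment, loads
```

For multi-block structured meshes the block sizes are fixed by the topology, which is why the paper finds balancing harder there than for unstructured grids, where the partitioner is free to cut the mesh into equal pieces.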
Shibeko, Alexey M; Panteleev, Mikhail A
2016-05-01
Blood coagulation is a complex biochemical network that plays critical roles in haemostasis (a physiological process that stops bleeding on injury) and thrombosis (pathological vessel occlusion). Both up- and down-regulation of coagulation remain a major challenge for modern medicine, with the ultimate goal to correct haemostasis without causing thrombosis and vice versa. Mathematical/computational modelling is potentially an important tool for understanding blood coagulation disorders and their treatment. It can save a huge amount of time and resources, and provide a valuable alternative or supplement when clinical studies are limited, or not ethical, or technically impossible. This article reviews contemporary state of the art in the modelling of blood coagulation for practical purposes: to reveal the molecular basis of a disease, to understand mechanisms of drug action, to predict pharmacodynamics and drug-drug interactions, to suggest potential drug targets or to improve quality of diagnostics. Different model types and designs used for this are discussed. Functional mechanisms of procoagulant bypassing agents and investigations of coagulation inhibitors were the two particularly popular applications of computational modelling that gave non-trivial results. Yet, like any other tool, modelling has its limitations, mainly determined by insufficient knowledge of the system, uncertainty and unreliability of complex models. We show how to some extent this can be overcome and discuss what can be expected from the mathematical modelling of coagulation in not-so-far future. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
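A flavour of the model types reviewed above can be given with a deliberately tiny activation-feedback-inhibition ODE sketch. The species, rate constants and kinetics here are invented for illustration and are NOT a validated coagulation model; real models track dozens of factors:

```python
def simulate_cascade(t_end=50.0, dt=0.01, k_act=0.1, k_fb=0.5, k_inh=0.05):
    """Toy two-species sketch: zymogen Z is activated to enzyme E with
    basal rate k_act plus positive feedback k_fb * E, and E is inhibited
    at first-order rate k_inh -- a caricature of the threshold/burst
    behaviour that detailed clotting models capture. Forward Euler."""
    z, e = 1.0, 0.0
    out = []
    for n in range(int(t_end / dt)):
        act = (k_act + k_fb * e) * z     # basal + feedback-driven activation
        z += -act * dt
        e += (act - k_inh * e) * dt
        out.append((n * dt, z, e))
    return out
```

The trajectory shows the characteristic burst: the zymogen pool is consumed, the enzyme peaks, then inhibition slowly clears it, the qualitative shape (e.g., of thrombin generation curves) that the reviewed models reproduce quantitatively.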
Jay S. Coggan
2015-09-01
Despite intense research, few treatments are available for most neurological disorders. Demyelinating diseases are no exception. This is perhaps not surprising considering the multifactorial nature of these diseases, which involve complex interactions between immune system cells, glia and neurons. In the case of multiple sclerosis, for example, there is no unanimity among researchers about the cause or even which system or cell type could be ground zero. This situation precludes the development and strategic application of mechanism-based therapies. We will discuss how computational modeling applied to questions at different biological levels can help link together disparate observations and decipher complex mechanisms whose solutions are not amenable to simple reductionism. By making testable predictions and revealing critical gaps in existing knowledge, such models can help direct research and will provide a rigorous framework in which to integrate new data as they are collected. Nowadays, there is no shortage of data; the challenge is to make sense of it all. In that respect, computational modeling is an invaluable tool that could, ultimately, transform how we understand, diagnose, and treat demyelinating diseases.
COMPLEX OF NUMERICAL MODELS FOR COMPUTATION OF AIR ION CONCENTRATION IN PREMISES
M. M. Biliaiev
2016-04-01
Purpose. The article addresses the creation of a complex of numerical models for calculating ion concentration fields in premises of various purposes and in work areas. The developed complex should take into account the main physical factors influencing the formation of the ion concentration field: the aerodynamics of air jets in the room, the presence of furniture and equipment, the placement of ventilation holes, the ventilation mode, the location of ionization sources, the transfer of ions under the effect of the electric field, and other factors determining the intensity and shape of the ion concentration field. In addition, the complex of numerical models has to support express calculation of the ion concentration in premises, allowing quick sorting of possible variants and enabling an «enlarged» evaluation of the air ion concentration in premises. Methodology. A complex of numerical models for calculating the air ion regime in premises was developed. The CFD numerical model is based on the aerodynamics, electrostatics and mass transfer equations, and takes into account the effect of air flows caused by the ventilation operation, diffusion, electric field effects, as well as the interaction of ions of different polarities with each other and with dust particles. The proposed balance model for computation of the indoor air ion regime allows operative calculation of the ion concentration field considering pulsed operation of the ionizer. Findings. Calculated data were obtained, on the basis of which one can estimate the ion concentration anywhere in a premise with artificial air ionization. An example of calculating the negative ion concentration on the basis of the CFD numerical model in a premise with reengineering transformations is given. On the basis of the developed balance model, the air ion concentration in the room volume was calculated. Originality. Results of the air ion regime computation in a premise, which
Gordon, Sanford; Mcbride, Bonnie J.
1994-01-01
This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.
Simonsen, Jakob Grue
2009-01-01
We consider the computational complexity of languages of symbolic dynamical systems. In particular, we study complexity hierarchies and membership of the non-uniform class P/poly. We prove: 1. For every time-constructible, non-decreasing function t(n) = ω(n), there is a symbolic dynamical system with language decidable in deterministic time O(n^2 t(n)), but not in deterministic time o(t(n)). 2. For every space-constructible, non-decreasing function s(n) = ω(n), there is a symbolic dynamical system with language decidable in deterministic space O(s(n)), but not in deterministic space o(s(n)). 3. There are symbolic dynamical systems having hard and complete languages under ≤_m^{logspace}- and ≤_m^p-reduction for every complexity class above LOGSPACE in the backbone hierarchy (hence, P-complete, NP-complete, coNP-complete, PSPACE-complete, and EXPTIME-complete sets). 4. There are decidable languages of symbolic…
Communication complexity of distributed computing and a parallel algorithm for polynomial roots
Tiwari, P.
1986-01-01
The first part of this thesis begins with a discussion of the minimum communication requirements in some distributed networks. The main result is a general technique for determining lower bounds on the communication complexity of problems on various distributed computer networks. This general technique is derived by simulating the general network by a linear array and then using a lower bound on the communication complexity of the problem on the linear array. Applications of this technique yield nontrivial optimal or near-optimal lower bounds on the communication complexity of distinctness, ranking, uniqueness, merging, and triangle detection on a ring, a mesh, and a complete binary tree of processors. A technique similar to the one used in proving the above results yields interesting graph theoretic results concerning decomposition of a graph into complete bipartite subgraphs. The second part of the thesis is devoted to the design of a fast parallel algorithm for determining all roots of a polynomial. Given a polynomial ρ(z) of degree n with m-bit integer coefficients and an integer μ, the author considers the problem of determining all its roots with error less than 2^(-μ). It is shown that this problem is in the class NC if ρ(z) has all real roots
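As a rough illustration of the simultaneous root-finding idea (not the thesis's NC algorithm), the Durand-Kerner iteration updates all root approximations independently, which makes it naturally parallel; the sketch below is a sequential Python version under that assumption.

```python
def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    """All roots of a monic polynomial (coeffs in descending order) via
    the Weierstrass / Durand-Kerner iteration. Each root update depends
    only on the previous sweep, so the updates parallelize naturally."""
    n = len(coeffs) - 1

    def p(z):  # Horner evaluation
        acc = 0
        for c in coeffs:
            acc = acc * z + c
        return acc

    # distinct complex starting guesses on a spiral
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(max_iter):
        new = []
        for i, r in enumerate(roots):
            denom = 1
            for j, s in enumerate(roots):
                if j != i:
                    denom *= r - s
            new.append(r - p(r) / denom)
        done = max(abs(a - b) for a, b in zip(new, roots)) < tol
        roots = new
        if done:
            break
    return roots
```

For example, `durand_kerner([1, -6, 11, -6])` recovers the roots 1, 2 and 3 of z^3 - 6z^2 + 11z - 6.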
An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries
Dyson, Rodger W.; Goodrich, John W.
2000-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. And for smooth problems, this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
Wang, Lan-Ting; Lee, Kun-Chou
2014-01-01
The vision plays an important role in educational technologies because it can produce and communicate quite important functions in teaching and learning. In this paper, learners' preference for the visual complexity on small screens of mobile computers is studied by neural networks. The visual complexity in this study is divided into five…
A user's manual of Tools for Error Estimation of Complex Number Matrix Computation (Ver.1.0)
Ichihara, Kiyoshi.
1997-03-01
'Tools for Error Estimation of Complex Number Matrix Computation' is a subroutine library which aids the users in obtaining the error ranges of the solutions of complex number linear systems or of the eigenvalues of Hermitian matrices. This library contains routines for both sequential computers and parallel computers. The subroutines for linear system error estimation calculate norms of residual vectors, matrices' condition numbers, error bounds of solutions and so on. The error estimation subroutines for Hermitian matrix eigenvalues derive the error ranges of the eigenvalues according to the Korn-Kato's formula. This user's manual contains a brief mathematical background of error analysis on linear algebra and usage of the subroutines. (author)
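The kind of bound such a library computes can be sketched as follows. The helper below is hypothetical (not part of the described library); it combines the residual norm and the condition number into the classic a posteriori relative-error estimate, which works unchanged for complex matrices.

```python
import numpy as np

def solution_error_bound(A, x, b):
    """A posteriori bound: from the residual r = b - A x, the relative
    error of the computed solution x satisfies (in the 2-norm)
        ||dx|| / ||x||  <=  cond(A) * ||r|| / ||b||.
    Valid for real and complex matrices alike."""
    r = b - A @ x
    return np.linalg.cond(A) * np.linalg.norm(r) / np.linalg.norm(b)
```

For a well-conditioned complex system solved in double precision, the bound comes out near machine epsilon, confirming the solution; a large bound flags an ill-conditioned system.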
Mir, Jan Mohammad; Jain, N; Jaget, P S; Maurya, R C
2017-09-01
Photodynamic therapy (PDT) is a treatment that uses photosensitizing agents to kill cancer cells. The scientific community has been eager for decades to design an efficient PDT drug. Under such purview, the current report deals with the computational photodynamic behavior of a ruthenium(II) nitrosyl complex containing N,N'-salicyldehyde-ethylenediimine (SalenH2), the synthesis and X-ray crystallography of which are already known [Ref. 38,39]. The Gaussian 09W software package was employed to carry out the density functional theory (DFT) studies. DFT calculations with Becke-3-Lee-Yang-Parr (B3LYP)/Los Alamos National Laboratory 2 Double Z (LanL2DZ) specified for the Ru atom and the B3LYP/6-31G(d,p) combination for all other atoms were employed with the effective core potential method. Both the ground and excited states of the complex were evaluated. Some known photosensitizers were compared with the target complex; phthalocyanine and porphyrin derivatives were the compounds selected for this comparative study. It is suggested that the effective photoactivity is due to the presence of the ruthenium core in the model complex. In addition to the evaluation of theoretical aspects, in vitro anticancer studies against COLO-205 human cancer cells have also been carried out on the complex. More emphasis was laid on extrapolating DFT to depict the chemical power of the target compound to release nitric oxide. A promising visible-light-triggered nitric oxide releasing power of the compound has been inferred. In vitro antiproliferative studies of [RuCl3(PPh3)3] and [Ru(NO)(Salen)(Cl)] have revealed the model complex as an excellent anticancer agent. From IC50 values of 40.031 mg/mL for the former and 9.74 mg/mL for the latter, it is established that the latter bears more anticancer potential. From the overall study, the DFT-based structural elucidation and the efficiency of the NO, Ru and Salen co-ligands have shown promising drug delivery properties and good candidacy for both chemotherapy as well as
Computational studies of a paramagnetic planar dibenzotetraaza[14]annulene Ni(II) complex.
Rabaâ, Hassan; Khaledi, Hamid; Olmstead, Marilyn M; Sundholm, Dage
2015-05-28
A square-planar Ni(II) dibenzotetraaza[14]annulene complex substituted with two 3,3-dimethylindolenine groups in the meso positions has recently been synthesized and characterized experimentally. In the solid-state, the Ni(II) complex forms linear π-interacting stacks with Ni···Ni separations of 3.448(2) Å. Measurements of the temperature dependence of the magnetic susceptibility revealed a drastic change in the magnetic properties at a temperature of 13 K, indicating a transition from low-to-high spin states. The molecular structures of the free-base ligand, the lowest singlet, and triplet states of the monomer and the dimer of the Ni complex have been studied computationally using density functional theory (DFT) and ab initio correlation levels of theory. In calculations at the second-order Møller-Plesset (MP2) perturbation theory level, a large energy of 260 kcal mol⁻¹ was obtained for the singlet-triplet splitting, suggesting that an alternative explanation of the observed magnetic properties is needed. The large energy splitting between the singlet and triplet states suggests that the observed change in the magnetism at very low temperatures is due to spin-orbit coupling effects originating from weak interactions between the fine-structure states of the Ni cations in the complex. The lowest electronic excitation energies of the dibenzotetraaza[14]annulene Ni(II) complex calculated at the time-dependent density functional theory (TDDFT) levels are in good agreement with values deduced from the experimental UV-vis spectrum. Calculations at the second-order algebraic-diagrammatic construction (ADC(2)) level on the dimer of the meso-substituted 3,3-dimethylindolenine dibenzotetraaza[14]annulene Ni(II) complex yielded Stokes shifts of 85-100 nm for the lowest excited singlet states. Calculations of the strength of the magnetically induced ring current for the free-base 3,3-dimethylindolenine-substituted dibenzotetraaza[14]annulene show that the annulene
Abel, Martin; Frommhold, Lothar; Li, Xiaoping; Hunt, Katharine L. C.
2012-06-01
The interaction-induced absorption by collisional pairs of H2 molecules is an important opacity source in the atmospheres of various types of planets and cool stars, such as late stars, low-mass stars, brown dwarfs, cool white dwarf stars (the embers of the smaller, burnt-out main sequence stars), exoplanets, etc., and is therefore of special astronomical interest. The emission spectra of cool white dwarf stars differ significantly in the infrared from the expected blackbody spectra of their cores, which is largely due to absorption by collisional H2-H2, H2-He, and H2-H complexes in the stellar atmospheres. Using quantum-chemical methods we compute the atmospheric absorption from hundreds to thousands of kelvin. Laboratory measurements of interaction-induced absorption spectra by H2 pairs exist only at room temperature and below. We show that our results reproduce these measurements closely, so that our computational data permit reliable modeling of stellar atmosphere opacities even at the higher temperatures. First results for H2-He complexes have already been applied to astrophysical models and have shown great improvements in these models. L. Frommhold, Collision-Induced Absorption in Gases, Cambridge University Press, Cambridge, New York, 1993 and 2006; X. Li, K. L. C. Hunt, F. Wang, M. Abel, and L. Frommhold, Collision-Induced Infrared Absorption by Molecular Hydrogen Pairs at Thousands of Kelvin, Int. J. of Spect., vol. 2010, Article ID 371201, 11 pages, 2010, doi: 10.1155/2010/371201; M. Abel, L. Frommhold, X. Li, and K. L. C. Hunt, Collision-induced absorption by H2 pairs: From hundreds to thousands of Kelvin, J. Phys. Chem. A, 115, 6805-6812, 2011; L. Frommhold, M. Abel, F. Wang, M. Gustafsson, X. Li, and K. L. C. Hunt, "Infrared atmospheric emission and absorption by simple molecular complexes, from first principles", Mol. Phys. 108, 2265, 2010; M. Abel, L. Frommhold, X. Li, and K. L. C. Hunt, Infrared absorption by collisional H2-He complexes
Analysis and computer simulation for transient flow in complex system of liquid piping
Mitry, A.M.
1985-01-01
This paper is concerned with unsteady state analysis and development of a digital computer program, FLUTRAN, that performs a simulation of transient flow behavior in a complex system of liquid piping. The program calculates pressure and flow transients in the liquid-filled piping system. The analytical model is based on the method of characteristics solution to the fluid hammer continuity and momentum equations. The equations are subject to a wide variety of boundary conditions to take into account the effect of hydraulic devices. Water column separation is treated as a boundary condition with known head. Experimental tests are presented that exhibit transients induced by pump failure and valve closure in the McGuire Nuclear Station Low Level Intake Cooling Water System. Numerical simulation is conducted to compare theory with test data. Analytical and test data are shown to be in good agreement and provide validation of the model
Zerkle, Ronald D.; Prakash, Chander
1995-01-01
This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because of its relatively low computational complexity. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.
Experimental and computational fluid dynamics studies of mixing of complex oral health products
Cortada-Garcia, Marti; Migliozzi, Simona; Weheliye, Weheliye Hashi; Dore, Valentina; Mazzei, Luca; Angeli, Panagiota; ThAMes Multiphase Team
2017-11-01
Highly viscous non-Newtonian fluids are largely used in the manufacturing of specialized oral care products. Mixing often takes place in mechanically stirred vessels where the flow fields and mixing times depend on the geometric configuration and the fluid physical properties. In this research, we study the mixing performance of complex non-Newtonian fluids using Computational Fluid Dynamics models and validate them against experimental laser-based optical techniques. To this aim, we developed a scaled-down version of an industrial mixer. As test fluids, we used mixtures of glycerol and a Carbomer gel. The viscosities of the mixtures against shear rate at different temperatures and phase ratios were measured and found to be well described by the Carreau model. The numerical results were compared against experimental measurements of velocity fields from Particle Image Velocimetry (PIV) and concentration profiles from Planar Laser Induced Fluorescence (PLIF).
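The Carreau model mentioned above has a simple closed form that is easy to sketch; the parameter values in the test below are illustrative, not the paper's fitted values.

```python
def carreau_viscosity(shear_rate, eta0, eta_inf, lam, n):
    """Carreau model for shear-thinning fluids:
        eta = eta_inf + (eta0 - eta_inf) * (1 + (lam * gamma_dot)^2)^((n-1)/2)
    where eta0 is the zero-shear viscosity, eta_inf the infinite-shear
    viscosity, lam a relaxation time, and n the power-law index (< 1
    for shear thinning)."""
    return eta_inf + (eta0 - eta_inf) * (1 + (lam * shear_rate) ** 2) ** ((n - 1) / 2)
```

At zero shear rate the model returns eta0, and at high shear rates it approaches eta_inf, matching the plateau behavior typically measured for gels like the Carbomer mixtures used in the study.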
Groebner bases for finite-temperature quantum computing and their complexity
Crompton, P. R.
2011-01-01
Following the recent approach of using order domains to construct Groebner bases from general projective varieties, we examine the parity and time-reversal arguments relating to the Wightman axioms of quantum field theory and propose that the definition of associativity in these axioms should be introduced a posteriori to the cluster property in order to generalize the anyon conjecture for quantum computing to indefinite metrics. We then show that this modification, which we define via ideal quotients, does not admit a faithful representation of the Braid group, because the generalized twisted inner automorphisms that we use to reintroduce associativity are only parity invariant for the prime spectra of the exterior algebra. We then use a coordinate prescription for the quantum deformations of toric varieties to show how a faithful representation of the Braid group can be reconstructed and argue that for a degree reverse lexicographic (monomial) ordered Groebner basis, the complexity class of this problem is bounded quantum polynomial.
Experiment and computation: a combined approach to study the van der Waals complexes
Surin L.A.
2017-01-01
A review of recent results on the millimetre-wave spectroscopy of weakly bound van der Waals complexes, mostly those which contain H2 and He, is presented. In our work, we compared the experimental spectra to the theoretical bound state results, thus providing a critical test of the quality of the M–H2 and M–He potential energy surfaces (PESs), which are a key issue for reliable computations of the collisional excitation and de-excitation of molecules (M = CO, NH3, H2O) in the dense interstellar medium. The intermolecular interactions with He and H2 also play an important role for high resolution spectroscopy of helium or para-hydrogen clusters doped by a probe molecule (CO, HCN). Such experiments are directed at the detection of the superfluid response of molecular rotation in the He and p-H2 clusters.
Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)
2001-01-01
The computational complexity of algorithms for Four Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best-estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular accurate maps of wind, temperature, moisture and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables, therefore the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing efficient, in terms of wall clock time, and scalable parallel implementations of the algorithms.
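The covariance growth that drives the Kalman filter's cost can be seen in a toy update step: the covariance P is n × n, so storage scales as O(n^2) and a general update as roughly O(n^3), which is why a state with more than 10^6 variables pushes the full filter toward petascale. The function below is a generic textbook observation update, not the DAO implementation.

```python
import numpy as np

def kalman_update(x, P, H, R, y):
    """One observation update of the (linear) Kalman filter.
    x: state estimate (n,), P: state covariance (n, n),
    H: observation operator (m, n), R: observation noise covariance (m, m),
    y: observation vector (m,)."""
    S = H @ P @ H.T + R              # innovation covariance (m, m)
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain (n, m)
    x_new = x + K @ (y - H @ x)      # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new
```

Even for this toy form, the P_new line alone is an n × n matrix product, which makes the scaling argument in the abstract concrete.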
Danielle S Bassett
2010-04-01
Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI computer circuits have evolved by commercially driven technology development. Here we follow historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent's rule, which is related to the dimensionality of the circuit's interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent's rule, and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired they did not strictly minimize wiring costs. Artificial and biological information processing systems both may evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
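Rent's rule T = t·G^p relates the terminal count T of a circuit block to its gate count G, and the exponent p can be estimated from data as the slope of log T versus log G; the least-squares sketch below uses synthetic data for illustration only.

```python
import math

def rent_exponent(gates, terminals):
    """Estimate the Rent exponent p in T = t * G^p as the slope of a
    least-squares fit of log(T) against log(G)."""
    xs = [math.log(g) for g in gates]
    ys = [math.log(t) for t in terminals]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
           sum((x - xbar) ** 2 for x in xs)
```

On data generated exactly from a power law the fit recovers the exponent; on real partitioning data (VLSI netlists or brain networks, as in the abstract) the slope gives the empirical Rent exponent.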
Yu, Xinguang; Li, Lianfeng; Wang, Peng; Yin, Yiheng; Bu, Bo; Zhou, Dingbiao
2014-07-01
This study was designed to report our preliminary experience with stabilization procedures for complex craniovertebral junction malformation (CVJM) using intraoperative computed tomography (iCT) with an integrated neuronavigation system (NNS), and to evaluate the workflow, feasibility and clinical outcome of stabilization procedures using iCT image-guided navigation for complex CVJM. Stabilization procedures in CVJM are complex because of the area's intricate geometry and bony structures, its critical relationship to neurovascular structures and the intricate biomechanical issues involved. A sliding gantry 40-slice computed tomography scanner was installed in a preexisting operating room. The images were transferred directly from the scanner to the NNS using an automated registration system. On the basis of the analysis of intraoperative computed tomographic images, 23 cases (11 males, 12 females) with complicated CVJM underwent navigated stabilization procedures to allow more control over screw placement. The ages of these patients were 19-52 years (mean: 33.5 y). We performed C1-C2 transarticular screw fixation in 6 patients to produce atlantoaxial arthrodesis with better reliability. Because of a high-riding transverse foramen on at least 1 side of the C2 vertebra and an anomalous vertebral artery position, 7 patients underwent C1 lateral mass and C2 pedicle screw fixation. Ten additional patients were treated with individualized occipitocervical fixation surgery owing to hypoplasia of C1 or constraints due to C2 bone structure. In total, 108 screws were inserted into 23 patients using navigational assistance. The screws comprised 20 C1 lateral mass screws, 26 C2, 14 C3, and 4 C4 pedicle screws, 32 occipital screws, and 12 C1-C2 transarticular screws. There were no vascular or neural complications except for pedicle perforations that were detected in 2 (1.9%) patients and were corrected intraoperatively without any persistent nerve or vessel damage. The overall
I. Fisk
2010-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...
Chantal Basurto
2015-12-01
Complex Fenestration Systems (CFS) are advanced daylighting systems that are placed on the upper part of a window to improve the indoor daylight distribution within rooms. Due to their double function of daylight redirection and solar protection, they are considered as a solution to mitigate the unfavorable effects due to the admission of direct sunlight in buildings located in prevailing sunny climates (risk of glare and overheating). Accordingly, an adequate assessment of their performance should include an annual evaluation of the main aspects relevant to the use of daylight in such regions: the indoor illuminance distribution, thermal comfort, and visual comfort of the occupants. Such evaluation is possible with the use of computer simulations combined with the bi-directional scattering distribution function (BSDF) data of these systems. This study explores the use of available methods to assess the visible and thermal annual performance of five different CFS using advanced computer simulations. To achieve results, an on-site daylight monitoring was carried out in a building located in a predominantly sunny climate location, and the collected data was used to create and calibrate a virtual model used to carry out the simulations. The results can be employed to select the CFS which improves the visual and thermal interior environment for the occupants.
Pan, Yuliang; Wang, Zixiang; Zhan, Weihua; Deng, Lei
2018-05-01
Identifying RNA-binding residues, especially energetically favored hot spots, can provide valuable clues for understanding the mechanisms and functional importance of protein-RNA interactions. Yet, limited availability of experimentally recognized energy hot spots in protein-RNA crystal structures leads to the difficulties in developing empirical identification approaches. Computational prediction of RNA-binding hot spot residues is still in its infant stage. Here, we describe a computational method, PrabHot (Prediction of protein-RNA binding hot spots), that can effectively detect hot spot residues on protein-RNA binding interfaces using an ensemble of conceptually different machine learning classifiers. Residue interaction network features and new solvent exposure characteristics are combined together and selected for classification with the Boruta algorithm. In particular, two new reference datasets (benchmark and independent) have been generated containing 107 hot spots from 47 known protein-RNA complex structures. In 10-fold cross-validation on the training dataset, PrabHot achieves promising performances with an AUC score of 0.86 and a sensitivity of 0.78, which are significantly better than that of the pioneer RNA-binding hot spot prediction method HotSPRing. We also demonstrate the capability of our proposed method on the independent test dataset and gain a competitive advantage as a result. The PrabHot webserver is freely available at http://denglab.org/PrabHot/. leideng@csu.edu.cn. Supplementary data are available at Bioinformatics online.
Vasilieva, V. N.
2017-11-01
The article deals with the solution of problems in AutoCAD offered in the “Computer graphics” section of the All-Russian student Olympiads that are not typical for students of construction specialties. The students are provided with the opportunity to study the algorithm for solving original tasks of high complexity. The article shows how the unknown parameter underlying the construction can be determined using a parametric drawing with geometric constraints and dimensional dependencies. To optimize the mark-up operation, the use of the command for projecting points and lines of different types onto bodies and surfaces in different directions is shown. For the construction of a spring with a variable pitch of turns, the paper describes the creation of a block from a part of the helix and its scaling when inserted into a model with unequal coefficients along the axes. The advantage of the NURBS surface and the application of the “body-surface-surface-NURBS-body” conversion are shown to enhance the capabilities of both solid and surface modeling. The article's material introduces construction students to the method of constructing complex models in AutoCAD that are not similar to typical training assignments.
Automated a complex computer aided design concept generated using macros programming
Rizal Ramly, Mohammad; Asrokin, Azharrudin; Abd Rahman, Safura; Zulkifly, Nurul Ain Md
2013-12-01
Changing a complex computer-aided design profile such as car and aircraft surfaces has always been difficult and challenging. The capability of CAD software such as AutoCAD and CATIA shows that a simple configuration of a CAD design can be easily modified without hassle, but this is not the case with complex design configurations. Design changes help users to test and explore various configurations of the design concept before the production of a model. The purpose of this study is to look into macro programming as a parametric method for commercial aircraft design. Macro programming is a method in which the configurations of the design are produced by recording a script of commands, editing the data values and adding certain new command lines to create an element of parametric design. The steps and the procedure to create a macro program are discussed, along with some difficulties encountered during the process of creation and the advantages of its usage. Generally, the advantages of macro programming as a method of parametric design are: allowing flexibility for design exploration, increasing the usability of the design solution, allowing proper containment within the model while restricting other changes, and providing real-time feedback on changes.
Jastrzebski, Robin; Quesne, Matthew G; Weckhuysen, Bert M; de Visser, Sam P; Bruijnincx, Pieter C A
2014-01-01
Catechol intradiol dioxygenation is a unique reaction catalyzed by iron-dependent enzymes and non-heme iron(III) complexes. The mechanism by which these systems activate dioxygen in this important metabolic process remains controversial. Using a combination of kinetic measurements and computational modelling of multiple iron(III) catecholato complexes, we have elucidated the catechol cleavage mechanism and show that oxygen binds the iron center by partial dissociation of the substrate from the iron complex. The iron(III) superoxide complex that is formed subsequently attacks the carbon atom of the substrate by a rate-determining C=O bond formation step. PMID:25322920
Hoekstra, A.G.; Sloot, P.M.A.; Haan, M.J.; Hertzberger, L.O.; van Leeuwen, J.
1991-01-01
New developments in computer science, both hardware and software, offer researchers, such as physicists, unprecedented possibilities to solve their computationally intensive problems. However, full exploitation of, e.g., new massively parallel computers, parallel languages or runtime environments
Greco, Claudio; Moro, Giorgio; Bertini, Luca; Biczysko, Malgorzata; Barone, Vincenzo; Cosentino, Ugo
2014-02-11
differences between calculated and experimental results are observed for compound 4, for which difficulties in the experimental determination of the triplet state energy were encountered: our results show that the negligible photoluminescence quantum yield of this compound is due to the fact that the energy of its most stable triplet state is significantly lower than that of the resonance level of the europium ion, so that the energy transfer process is prevented. These results confirm the reliability of the adopted computational approach in calculating the energy of the lowest triplet state of these systems, a key parameter in the design of new ligands for lanthanide complexes with large photoluminescence quantum yields.
I. Fisk
2012-01-01
Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...
De Lucia, Marco; Kempka, Thomas; Jatnieks, Janis; Kühn, Michael
2017-04-01
Reactive transport simulations - where geochemical reactions are coupled with hydrodynamic transport of reactants - are extremely time consuming and suffer from significant numerical issues. Given the high uncertainties inherently associated with the geochemical models, which also constitute the major computational bottleneck, such computational demands may seem disproportionate and probably constitute the main limitation to the wide application of reactive transport modelling. A promising way to ease and speed up such coupled simulations is to employ statistical surrogates instead of "full-physics" geochemical models [1]. Data-driven surrogates are reduced models trained on a set of pre-calculated "full-physics" simulations, capturing their principal features while being extremely fast to compute. Model reduction of course comes at the price of a loss in precision; however, this appears justified in the presence of the large uncertainties affecting the parametrization of geochemical processes. This contribution illustrates the integration of surrogates into the flexible simulation framework currently being developed by the authors' research group [2]. The high-level language of choice for obtaining and handling surrogate models is R, which profits from state-of-the-art methods for the statistical analysis of large simulation ensembles. A stand-alone advective mass transport module was furthermore developed in order to add this capability to any multiphase finite-volume hydrodynamic simulator within the simulation framework. We present 2D and 3D case studies benchmarking the performance of surrogates and "full-physics" chemistry in scenarios pertaining to the assessment of geological subsurface utilization. [1] Jatnieks, J., De Lucia, M., Dransch, D., Sips, M.: "Data-driven surrogate model approach for improving the performance of reactive transport simulations.", Energy Procedia 97, 2016, p. 447-453. [2] Kempka, T., Nakaten, B., De Lucia, M., Nakaten, N., Otto, C., Pohl, M., Chabab [Tillner], E., Kühn, M
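The surrogate idea itself fits in a few lines. The following is a minimal Python illustration under assumed settings (the abstract's workflow uses R and real geochemical simulations); the `full_physics` function here is a hypothetical stand-in for an expensive speciation solver.

```python
# Minimal data-driven surrogate sketch (illustrative, not the authors' R
# implementation): replace an expensive "full physics" model with a
# regression fit on an ensemble of pre-calculated runs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def full_physics(x):
    # Hypothetical stand-in for an expensive geochemical speciation call.
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# Ensemble of pre-calculated simulations -> training data for the surrogate.
X_train = rng.uniform(-1, 1, size=(500, 2))
y_train = full_physics(X_train)

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, y_train)

# The reactive-transport loop would now call the cheap surrogate instead.
X_new = rng.uniform(-1, 1, size=(100, 2))
err = np.abs(surrogate.predict(X_new) - full_physics(X_new)).mean()
print(f"mean surrogate error: {err:.3f}")
```

The precision loss mentioned in the abstract shows up as the nonzero `err`; in practice it is weighed against the orders-of-magnitude speedup per call.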
Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris
2015-04-01
Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need to couple two different temporal scales, given that monthly simulation steps are typically adopted in hydrosystem modelling, yet a much finer resolution (e.g., hourly) is required for a faithful representation of the energy balance (i.e., energy production vs. demand). Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e., water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, the key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial
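Objective (b), the linearized per-time-step water and energy allocation, can be sketched as a small LP. The formulation below is an illustrative toy (one reservoir serving one water demand and, through a turbine, one energy demand, with assumed numbers), solved with a general-purpose LP solver in place of a specialized linear network programming code.

```python
# Toy per-time-step allocation LP (illustrative formulation, not the
# authors' model): choose release r to minimize water and energy deficits.
import numpy as np
from scipy.optimize import linprog

storage, turbine_cap = 100.0, 50.0     # available water, max release
water_demand, energy_demand = 60.0, 45.0
k = 1.0                                # energy produced per unit of release

# Decision variables: [release r, water deficit wd, energy deficit ed]
c = [0.0, 1.0, 1.0]                    # minimize total deficit
A_ub = [[-1.0, -1.0, 0.0],             # r + wd >= water_demand
        [-k,   0.0, -1.0]]             # k*r + ed >= energy_demand
b_ub = [-water_demand, -energy_demand]
bounds = [(0, min(storage, turbine_cap)), (0, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
r, wd, ed = res.x
print(f"release={r:.1f}, water deficit={wd:.1f}, energy deficit={ed:.1f}")
```

With these numbers the solver releases at turbine capacity (50 units), fully covering the energy demand and leaving a 10-unit water deficit; in the framework, one such sub-problem is solved at every fine-resolution step.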
Andrew F. Heckler
2016-06-01
In experiments including over 450 university-level students, we studied the effectiveness and time efficiency of several levels of feedback complexity in simple, computer-based training utilizing static question sequences. The learning domain was simple vector math, an essential skill in introductory physics. In a unique full factorial design, we studied the relative effects of "knowledge of correct response" feedback and "elaborated feedback" (i.e., a general explanation), both separately and together. A number of other factors were analyzed, including training time, physics course grade, prior knowledge of vector math, and student beliefs about both their proficiency in and the importance of vector math. We hypothesize a simple model predicting how the effectiveness of feedback depends on prior knowledge, and the results confirm this knowledge-by-treatment interaction. Most notably, elaborated feedback is the most effective feedback, especially for students with low prior knowledge and low course grades. In contrast, knowledge of correct response feedback was less effective for low-performing students, and including both kinds of feedback did not significantly improve performance compared to elaborated feedback alone. Further, while elaborated feedback resulted in higher scores, the learning rate was at best only marginally higher because the training time was slightly longer. Training time data revealed that students spent significantly more time on the elaborated feedback after answering a training question incorrectly. Finally, we found that training improved student self-reported proficiency and that belief in the importance of the learned domain improved the effectiveness of training. Overall, we found that computer-based training with static question sequences and immediate elaborated feedback, in the form of simple and general explanations, can be an effective way to improve student performance on an essential physics skill.
Mitchell, Christine M.
1993-01-01
This chapter examines a class of human-computer interaction applications, specifically the design of human-computer interaction for the operators of complex systems. Such systems include space systems (e.g., manned systems such as the Shuttle or space station, and unmanned systems such as NASA scientific satellites), aviation systems (e.g., the flight deck of 'glass cockpit' airplanes or air traffic control) and industrial systems (e.g., power plants, telephone networks, and sophisticated, e.g., 'lights out,' manufacturing facilities). The main body of human-computer interaction (HCI) research complements but does not directly address the primary issues involved in human-computer interaction design for operators of complex systems. Interfaces to complex systems are somewhat special. The 'user' in such systems - i.e., the human operator responsible for safe and effective system operation - is highly skilled, someone who in human-machine systems engineering is sometimes characterized as 'well trained, well motivated'. The 'job' or task context is paramount and, thus, human-computer interaction is subordinate to human job interaction. The design of human interaction with complex systems, i.e., the design of human job interaction, is sometimes called cognitive engineering.
Computer Simulation of Complex Power System Faults under various Operating Conditions
Khandelwal, Tanuj; Bowman, Mark
2015-01-01
A power system is normally treated as a balanced symmetrical three-phase network. When a fault occurs, the symmetry is normally upset, resulting in unbalanced currents and voltages appearing in the network. For the correct application of protection equipment, it is essential to know the fault current distribution throughout the system and the voltages in different parts of the system due to the fault. There may be situations where protection engineers have to analyze faults that are more complex than simple shunt faults. One type of complex fault is an open phase condition that can result from a fallen conductor or failure of a breaker pole. In the former case, the condition is often accompanied by a fault detectable with normal relaying. In the latter case, the condition may be undetected by standard line relaying. The effect on a generator depends on the location of the open phase and the load level. If an open phase occurs between the generator terminals and the high-voltage side of the generator step-up transformer (GSU) in the switchyard, and the generator is at full load, damaging negative sequence current can be generated. However, for the same operating condition, an open conductor on the incoming transmission lines located in the switchyard can result in minimal negative sequence current. In 2012, a nuclear power generating station (NPGS) suffered a series (open phase) fault due to insulator mechanical failure in the 345 kV switchyard. This resulted in both reactor units tripping offline in two separate incidents. The series fault on one of the phases resulted in a voltage imbalance that was not detected by the degraded voltage relays. These under-voltage relays did not initiate a start signal to the emergency diesel generators (EDG) because they sensed adequate voltage on the remaining phases, exposing a design vulnerability. This paper is intended to help protection engineers calculate complex circuit faults such as the open phase condition using a computer program. The impact of this type of
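The negative-sequence effect described above follows from the standard symmetrical-components transform, which can be sketched as follows (a textbook calculation, not the paper's full network model): an open phase turns a balanced current set into one with substantial negative-sequence content.

```python
# Symmetrical-components sketch: sequence currents of an open-phase condition.
import numpy as np

a = np.exp(2j * np.pi / 3)                 # 120-degree rotation operator
A_inv = np.array([[1, 1,    1],
                  [1, a,    a**2],
                  [1, a**2, a]]) / 3       # phase -> sequence transform

def sequence_components(i_abc):
    """Return zero-, positive- and negative-sequence phasors."""
    return A_inv @ i_abc

balanced = np.array([1, a**2, a])          # Ia, Ib, Ic: 1 pu, balanced
open_phase_a = np.array([0, a**2, a])      # conductor of phase A open

i0, i1, i2 = np.abs(sequence_components(open_phase_a))
print(f"|I0|={i0:.3f}  |I1|={i1:.3f}  |I2|={i2:.3f} pu")
```

For the balanced set the negative-sequence current is zero; losing phase A immediately produces |I2| = 1/3 pu, the quantity that threatens generator rotor heating.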
Complex of two-dimensional multigroup programs for neutron-physical computations of nuclear reactor
Karpov, V.A.; Protsenko, A.N.
1975-01-01
Mathematical aspects of the two-dimensional multigroup method for neutron-physics computations of nuclear reactors are briefly stated. Problems of algorithmization and BESM-6 computer realisation of multigroup diffusion approximations in hexagonal and rectangular computational lattices are analysed. Results are given for the computation of a fast critical assembly with a complicated core composition. The accuracy of the computed criticality, neutron field distributions and absorbing-rod efficiencies obtained with the developed programs is estimated. (author)
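The criticality computation at the heart of such programs can be illustrated with a minimal two-group, infinite-medium sketch (made-up cross sections, not the BESM-6 codes themselves): the multiplication factor is the dominant eigenvalue of the fission-over-removal operator, found here by power iteration.

```python
# Two-group infinite-medium criticality sketch with illustrative constants.
import numpy as np

# Two-group constants (assumed values, cm^-1).
nu_sf = np.array([0.008, 0.18])      # nu * Sigma_fission per group
sa = np.array([0.01, 0.1])           # absorption
s12 = 0.02                           # down-scattering, group 1 -> 2

M = np.array([[sa[0] + s12, 0.0],    # removal from the fast group
              [-s12, sa[1]]])        # thermal-group balance
F = np.vstack([nu_sf, np.zeros(2)])  # fission neutrons are born fast

G = np.linalg.solve(M, F)            # one-"generation" operator M^-1 F
phi = np.ones(2)
for _ in range(200):                 # power iteration for the dominant mode
    phi_new = G @ phi
    k = phi_new.sum() / phi.sum()
    phi = phi_new / phi_new.sum()

# Analytic two-group k_inf for cross-checking the iteration.
k_analytic = (nu_sf[0] + nu_sf[1] * s12 / sa[1]) / (sa[0] + s12)
print(f"k_inf = {k:.4f} (analytic {k_analytic:.4f})")
```

The production codes of the abstract solve the same eigenvalue problem, but with many groups and a full 2D spatial discretization over hexagonal or rectangular lattices.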
Buyuk, C; Gunduz, K; Avsever, H
2018-01-01
The aim of this investigation was to evaluate the length, thickness, and sagittal and transverse angulations and the morphological variations of the stylohyoid complex (SHC), to assess their probable associations with age and gender, and to investigate its prevalence in a wide range of a Turkish sub-population by using cone beam computed tomography (CBCT). The CBCT images of 1000 patients were evaluated retrospectively. The length, thickness, sagittal and transverse angulations, morphological variations and ossification degrees of the SHC were evaluated on multiplanar reconstruction (MPR) and three-dimensional volume rendering (3DVR) images. The data were analysed statistically by using nonparametric tests, Pearson's correlation coefficient, Student's t test, the χ2 test and one-way ANOVA. Statistical significance was considered at p < 0.05, and the SHC was regarded as elongated when longer than 35 mm. The mean sagittal angle value was measured to be 72.24° and the mean transverse angle value was 70.81°. Scalariform shape, elongated type and nodular calcification pattern had the highest mean age values among the morphological groups, respectively. Calcified outline was the most prevalent calcification pattern in males. There was no correlation between length and the calcification pattern groups, while scalariform shape and pseudoarticular type were the longest variations. We observed that as the anterior sagittal angle gets wider, the SHC tends to get longer. The most observed morphological variations were linear shape, elongated type and calcified outline pattern. Detailed studies on the classification will contribute to the literature. (Folia Morphol 2018; 77, 1: 79-89).
A high performance, low power computational platform for complex sensing operations in smart cities
Jiang, Jiming; Claudel, Christian
2017-01-01
This paper presents a new wireless platform designed for an integrated traffic/flash flood monitoring system. The sensor platform is built around a 32-bit ARM Cortex M4 microcontroller and a 2.4 GHz 802.15.4 ISM-compliant radio module. It can be interfaced with fixed traffic sensors, or receive data from vehicle transponders. This platform is specifically designed for solar-powered, low bandwidth, high computational performance wireless sensor network applications. A self-recovering unit is designed to increase reliability and allow periodic hard resets, an essential requirement for sensor networks. Radio monitoring circuitry is proposed to monitor incoming and outgoing transmissions, simplifying software debugging. We illustrate the performance of this wireless sensor platform on complex problems arising in smart cities, such as traffic flow monitoring, machine-learning-based flash flood monitoring or Kalman-filter based vehicle trajectory estimation. All design files have been uploaded and shared in an open science framework, and can be accessed from [1]. The hardware design is under CERN Open Hardware License v1.2.
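The Kalman-filter-based trajectory estimation mentioned above can be sketched generically. The following is a 1-D constant-velocity filter with assumed noise levels and synthetic measurements, not the platform's firmware.

```python
# Generic constant-velocity Kalman filter tracking position from noisy readings.
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity state transition
H = np.array([[1.0, 0.0]])             # only position is measured
Q = 0.01 * np.eye(2)                   # process noise covariance (assumed)
R = np.array([[4.0]])                  # measurement noise variance (assumed)

true_x = np.array([0.0, 2.0])          # true position and velocity
x, P = np.zeros(2), 10.0 * np.eye(2)   # filter state and covariance
errors = []
for _ in range(100):
    true_x = F @ true_x                          # simulate the vehicle
    z = H @ true_x + rng.normal(0.0, 2.0, size=1)  # noisy position reading
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    errors.append(abs(x[0] - true_x[0]))

print(f"mean position error, last 50 steps: {np.mean(errors[50:]):.2f}")
```

Once converged, the filtered position error sits well below the raw measurement noise, which is what makes the approach attractive on a low-power embedded node.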
Complex Odontoma: A Case Report with Micro-Computed Tomography Findings
L. A. N. Santos
2016-01-01
Odontomas are the most common benign tumors of odontogenic origin. They are normally diagnosed on routine radiographs, due to the absence of symptoms. Histopathologic evaluation confirms the diagnosis especially in cases of complex odontoma, which may be confused during radiographic examination with an osteoma or other highly calcified bone lesions. The micro-CT is a new technology that enables three-dimensional analysis with better spatial resolution compared with cone beam computed tomography. Another great advantage of this technology is that the sample does not need special preparation or destruction in the sectioned area as in histopathologic evaluation. An odontoma with CBCT and microtomography images is presented in a 26-year-old man. It was first observed on panoramic radiographs and then by CBCT. The lesion and the impacted third molar were surgically excised using a modified Neumann approach. After removal, it was evaluated by histopathology and microtomography to confirm the diagnostic hypothesis. According to the results, micro-CT enabled the assessment of the sample similar to histopathology, without destruction of the sample. With further development, micro-CT could be a powerful diagnostic tool in future research.
Fiore, Vincenzo G; Kottler, Benjamin; Gu, Xiaosi; Hirth, Frank
2017-01-01
The central complex in the insect brain is a composite of midline neuropils involved in processing sensory cues and mediating behavioral outputs to orchestrate spatial navigation. Despite recent advances, however, the neural mechanisms underlying sensory integration and motor action selections have remained largely elusive. In particular, it is not yet understood how the central complex exploits sensory inputs to realize motor functions associated with spatial navigation. Here we report an in silico interrogation of central complex-mediated spatial navigation with a special emphasis on the ellipsoid body. Based on known connectivity and function, we developed a computational model to test how the local connectome of the central complex can mediate sensorimotor integration to guide different forms of behavioral outputs. Our simulations show integration of multiple sensory sources can be effectively performed in the ellipsoid body. This processed information is used to trigger continuous sequences of action selections resulting in self-motion, obstacle avoidance and the navigation of simulated environments of varying complexity. The motor responses to perceived sensory stimuli can be stored in the neural structure of the central complex to simulate navigation relying on a collective of guidance cues, akin to sensory-driven innate or habitual behaviors. By comparing behaviors under different conditions of accessible sources of input information, we show the simulated insect computes visual inputs and body posture to estimate its position in space. Finally, we tested whether the local connectome of the central complex might also allow the flexibility required to recall an intentional behavioral sequence, among different courses of actions. Our simulations suggest that the central complex can encode combined representations of motor and spatial information to pursue a goal and thus successfully guide orientation behavior. Together, the observed computational features
Miller, E.K.
1990-01-01
There is a growing need to determine the electromagnetic performance of increasingly complex systems at ever higher frequencies. The ideal approach would be some appropriate combination of measurement, analysis, and computation so that system design and assessment can be achieved to a needed degree of accuracy at some acceptable cost. Both measurement and computation benefit from the continuing growth in computer power that, since the early 1950s, has increased by a factor of more than a million in speed and storage. For example, a CRAY2 has an effective throughput (not the clock rate) of about 10^11 floating-point operations (FLOPs) per hour compared with the approximately 10^5 provided by the UNIVAC-1. The purpose of this discussion is to illustrate the computational complexity of modeling large (in wavelengths) electromagnetic problems. In particular the author makes the point that simply relying on faster computers for increasing the size and complexity of problems that can be modeled is less effective than might be anticipated from this raw increase in computer throughput. He suggests that rather than depending on faster computers alone, various analytical and numerical alternatives need development for reducing the overall FLOP count required to acquire the information desired. One approach is to decrease the operation count of the basic model computation itself, by reducing the order of the frequency dependence of the various numerical operations or their multiplying coefficients. Another is to decrease the number of model evaluations that are needed, an example being the number of frequency samples required to define a wideband response, by using an auxiliary model of the expected behavior. 11 refs., 5 figs., 2 tabs
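The author's point about raw throughput can be made concrete with a back-of-the-envelope sketch. The scaling laws below are assumed but typical for a dense method-of-moments solve: unknowns grow roughly as f^2 for a surface mesh, and a direct factorization costs O(N^3), so the FLOP count grows as f^6 and a million-fold computer speedup extends the reachable frequency by only about a factor of ten.

```python
# Back-of-the-envelope FLOP scaling for a dense electromagnetic solve.
def mom_flops(freq_scale, n_base=1_000):
    """Estimated FLOPs for a dense solve at freq_scale x the base frequency."""
    n = n_base * freq_scale**2   # unknowns on a surface mesh grow ~ f^2
    return n**3                  # direct factorization cost ~ N^3

speedup = mom_flops(10) / mom_flops(1)
print(f"10x the frequency needs {speedup:.0e}x the FLOPs")
```

This is exactly why the author argues for reducing the frequency order of the numerical operations, or the number of model evaluations, rather than waiting for faster hardware.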
Bertolotto, D.
2011-11-01
The current doctoral research is focused on the development and validation of a coupled computational tool, to combine the advantages of computational fluid dynamics (CFD) in analyzing complex flow fields and of state-of-the-art system codes employed for nuclear power plant (NPP) simulations. Such a tool can considerably enhance the analysis of NPP transient behavior, e.g. in the case of pressurized water reactor (PWR) accident scenarios such as Main Steam Line Break (MSLB) and boron dilution, in which strong coolant flow asymmetries and multi-dimensional mixing effects strongly influence the reactivity of the reactor core, as described in Chap. 1. To start with, a literature review on code coupling is presented in Chap. 2, together with the corresponding ongoing projects in the international community. Special reference is made to the framework in which this research has been carried out, i.e. the Paul Scherrer Institute's (PSI) project STARS (Steady-state and Transient Analysis Research for the Swiss reactors). In particular, the codes chosen for the coupling, i.e. the CFD code ANSYS CFX V11.0 and the system code US-NRC TRACE V5.0, are part of the STARS codes system. Their main features are also described in Chap. 2. The development of the coupled tool, named CFX/TRACE from the names of the two constitutive codes, has proven to be a complex and broad-based task, and therefore constraints had to be put on the target requirements, while keeping in mind a certain modularity to allow future extensions to be made with minimal efforts. After careful consideration, the coupling was defined to be on-line, parallel and with non-overlapping domains connected by an interface, which was developed through the Parallel Virtual Machine (PVM) software, as described in Chap. 3. Moreover, two numerical coupling schemes were implemented and tested: a sequential explicit scheme and a sequential semi-implicit scheme. Finally, it was decided that the coupling would be single
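The difference between the two implemented coupling schemes can be illustrated on scalar stand-ins for the two codes (a toy model, not CFX/TRACE): the explicit scheme exchanges interface data once per time step, while the semi-implicit scheme repeats the exchange within each step until the interface values stop changing.

```python
# Toy sequential explicit vs. semi-implicit coupling of two subdomains.
def step_a(a, b_iface, dt):      # "CFD-like" domain, driven by b's interface
    return a + dt * (-2.0 * a + b_iface)

def step_b(b, a_iface, dt):      # "system-code-like" domain, driven by a
    return b + dt * (-b + a_iface)

def run(scheme, dt=0.1, steps=50):
    a, b = 1.0, 0.0
    for _ in range(steps):
        if scheme == "explicit":         # one exchange per time step
            a_new = step_a(a, b, dt)
            b_new = step_b(b, a, dt)
        else:                            # semi-implicit: iterate the exchange
            a_new, b_new = a, b
            for _ in range(20):
                a_new = step_a(a, b_new, dt)
                b_new = step_b(b, a_new, dt)
        a, b = a_new, b_new
    return a, b

print("explicit:     ", run("explicit"))
print("semi-implicit:", run("semi-implicit"))
```

Both schemes drive this stable toy system toward its equilibrium; the semi-implicit inner iteration buys robustness at larger time steps, at the cost of extra solver calls per step, which mirrors the trade-off studied in the thesis.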
Quantum chemical investigation of levofloxacin-boron complexes: A computational approach
Sayin, Koray; Karakaş, Duran
2018-04-01
Quantum chemical calculations are performed on boron complexes with levofloxacin. The boron complex with fluorine atoms is optimized with three different methods (HF, B3LYP and M06-2X) using the 6-31+G(d) basis set. The best level is determined to be M06-2X/6-31+G(d) by comparison of experimental and calculated results for complex (1). The other complexes are optimized using this level. Structural properties and IR and NMR spectra are examined in detail. The biological activities of the mentioned complexes are investigated by means of quantum chemical descriptors and molecular docking analyses. As a result, the biological activities of complexes (2) and (4) are close to each other and higher than those of the other complexes. Additionally, the NLO properties of the mentioned complexes are investigated via quantum chemical parameters. It is found that complex (3) is the best candidate for NLO applications.
Al-Ahmary, Khairia M.; Habeeb, Moustafa M.; Al-Obidan, Areej H.
2018-05-01
A new charge transfer complex (CTC) between the electron donor 2,3-diaminopyridine (DAP) and the electron acceptor chloranilic acid (CLA) has been synthesized and characterized experimentally and theoretically using a variety of physicochemical techniques. The experimental work included elemental analysis and UV-vis, IR and 1H NMR studies to characterize the complex. Electronic spectra have been recorded in different hydrogen-bonded solvents: methanol (MeOH), acetonitrile (AN) and a 1:1 AN-MeOH mixture. The molecular composition of the complex was identified to be 1:1 by Job's and molar-ratio methods. The stability constant was determined using the minimum-maximum absorbances method, where it recorded high values confirming the high stability of the formed complex. The solid complex was prepared and characterized by elemental analysis, which confirmed its formation in a 1:1 stoichiometric ratio. Both IR and NMR studies asserted the existence of proton and charge transfer in the formed complex. To support the experimental results, DFT computations were carried out using the B3LYP/6-31G(d,p) method to obtain the optimized structures of the reactants and the complex, their geometrical parameters, reactivity parameters, molecular electrostatic potential map and frontier molecular orbitals. The analysis of the DFT results strongly confirmed the high stability of the formed complex, based on charge transfer alongside proton-transfer hydrogen bonding, in agreement with the experimental results. The origin of the electronic spectra was analyzed using the TD-DFT method, where the observed λmax are strongly consistent with the computed ones. TD-DFT also showed the states contributing to the various electronic transitions.
Quantum control with noisy fields: computational complexity versus sensitivity to noise
Kallush, S; Khasin, M; Kosloff, R
2014-01-01
A closed quantum system is defined as completely controllable if an arbitrary unitary transformation can be executed using the available controls. In practice, control fields are a source of unavoidable noise, which has to be suppressed to retain controllability. Can one design control fields such that the effect of noise is negligible on the time-scale of the transformation? This question is intimately related to the fundamental problem of a connection between the computational complexity of the control problem and the sensitivity of the controlled system to noise. The present study considers a paradigm of control where the Lie-algebraic structure of the control Hamiltonian is fixed, while the size of the system increases with the dimension of the Hilbert space representation of the algebra. We find two types of control tasks, easy and hard. Easy tasks are characterized by a small variance of the evolving state with respect to the control operators. They are relatively immune to noise, and the control field is easy to find. Hard tasks have a large variance, are sensitive to noise, and the control field is hard to find. The influence of noise increases with the size of the system, which is measured by the scaling factor N of the largest weight of the representation. For fixed time and control field, the ability to control degrades as O(N) for easy tasks and as O(N²) for hard tasks. As a consequence, even in the most favorable estimate, for large quantum systems generic noise in the controls dominates for a typical class of target transformations, i.e. complete controllability is destroyed by noise. (paper)
Usage of super high speed computer for clarification of complex phenomena
Sekiguchi, Tomotsugu; Sato, Mitsuhisa; Nakata, Hideki; Tatebe, Osami; Takagi, Hiromitsu
1999-01-01
This study aims at constructing an efficient application environment for super-high-speed computers that supports parallel distributed systems and is easily portable to different computer systems and processor counts, through research and development on the super-high-speed computing technology required for the elucidation of complicated phenomena in the nuclear power field by computational science methods. To realize such an environment, the Electrotechnical Laboratory has developed Ninf, a network numerical information library. The Ninf system can supply a global network infrastructure for high-performance worldwide computing over wide-area distributed networks. (G.K.)
Energy conserving numerical methods for the computation of complex vortical flows
Allaneau, Yves
One of the original goals of this thesis was to develop numerical tools to help with the design of micro air vehicles. Micro Air Vehicles (MAVs) are small flying devices of only a few inches in wing span. Some people consider that as their size becomes smaller and smaller, it would be increasingly difficult to keep all the classical control surfaces such as the rudders, the ailerons and the usual propellers. Over the years, scientists took inspiration from nature. Birds, by flapping and deforming their wings, are capable of accurate attitude control and are able to generate propulsion. However, biomimicry design has its own limitations, and it is difficult to place a hummingbird in a wind tunnel to study precisely the motion of its wings. Our approach was to use numerical methods to tackle this challenging problem. In order to precisely evaluate the lift and drag generated by the wings, one needs to capture with high fidelity the extremely complex vortical flow produced in the wake. This requires a numerical method that is stable yet not too dissipative, so that the vortices do not get diffused in an unphysical way. We solved this problem by developing a new Discontinuous Galerkin scheme that, in addition to conserving mass, momentum and total energy locally, also preserves kinetic energy globally. This property greatly improves the stability of the simulations, especially in the special case p=0 when the approximation polynomials are taken to be piecewise constant (we recover a finite volume scheme). In addition to an adequate numerical scheme, a high-fidelity solution requires many degrees of freedom in the computations to represent the flow field. The size of the smallest eddies in the flow is given by the Kolmogorov scale. Capturing these eddies requires a mesh with on the order of Re³ cells, where Re is the Reynolds number of the flow. We show that under-resolving the system, to a certain extent, is acceptable. However our
Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A
2014-02-11
Mechanisms for performing a complex matrix multiplication operation are provided. A vector load operation is performed to load a first vector operand of the complex matrix multiplication operation to a first target vector register. The first vector operand comprises a real and imaginary part of a first complex vector value. A complex load and splat operation is performed to load a second complex vector value of a second vector operand and replicate the second complex vector value within a second target vector register. The second complex vector value has a real and imaginary part. A cross multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the complex matrix multiplication operation. The partial product is accumulated with other partial products and a resulting accumulated partial product is stored in a result vector register.
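The three vector operations described above — vector load, complex load-and-splat, and cross multiply-add — can be sketched in software. Below is a minimal illustration in Python/NumPy; the interleaved real/imaginary register layout, the register widths, and all function names are assumptions for illustration, not the patented hardware interface:

```python
import numpy as np

def vector_load(row):
    # interleave (re, im) pairs of a row of complex values into one "register"
    reg = np.empty(2 * len(row))
    reg[0::2], reg[1::2] = row.real, row.imag
    return reg

def complex_load_splat(z, width):
    # replicate a single complex value's (re, im) pair across a register
    reg = np.empty(2 * width)
    reg[0::2], reg[1::2] = z.real, z.imag
    return reg

def cross_multiply_add(acc, va, vb):
    # complex product per lane: re = a_re*b_re - a_im*b_im,
    #                           im = a_re*b_im + a_im*b_re, accumulated
    out = acc.copy()
    out[0::2] += va[0::2] * vb[0::2] - va[1::2] * vb[1::2]
    out[1::2] += va[0::2] * vb[1::2] + va[1::2] * vb[0::2]
    return out

def complex_matmul(A, B):
    m, n = A.shape
    _, p = B.shape
    C = np.zeros((m, 2 * p))
    for i in range(m):
        for k in range(n):
            splat = complex_load_splat(A[i, k], p)      # load and splat
            vb = vector_load(B[k, :])                   # vector load
            C[i] = cross_multiply_add(C[i], vb, splat)  # partial product
    return C[:, 0::2] + 1j * C[:, 1::2]                 # de-interleave
```

Each `cross_multiply_add` call produces and accumulates one rank-1 partial product, mirroring the accumulation of partial products into the result vector register described in the abstract.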
Computational modelling of oxygenation processes in enzymes and biomimetic model complexes
de Visser, Sam P.; Quesne, Matthew G.; Martin, Bodo; Comba, Peter; Ryde, Ulf
2014-01-01
With computational resources becoming more efficient and more powerful and at the same time cheaper, computational methods have become more and more popular for studies on biochemical and biomimetic systems. Although large efforts from the scientific community have gone into exploring the possibilities of computational methods on large biochemical systems, such studies are not without pitfalls and often cannot be routinely done but require expert execution. In this review we summarize and hig...
Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective
Shuo Gu; Jianfeng Pei
2017-01-01
With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose the arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at the network level. Finally, a computational workflow for network-based TCM study, derived from our previous successful applications, was proposed.
Geroyannis, V.S.
1990-01-01
In this paper, a numerical method called the complex-plane strategy is implemented in the computation of polytropic models distorted by strong and rapid differential rotation. The differential rotation model results from a direct generalization of the classical model in the framework of the complex-plane strategy; this generalization yields very strong differential rotation. Accordingly, the polytropic models assume extremely distorted interiors, while their boundaries are only slightly distorted. For an accurate simulation of differential rotation, a versatile method called the multiple partition technique is developed and implemented. It is shown that the method remains reliable up to rotation states where other elaborate techniques fail to give accurate results. 11 refs
Nagula, Narsimha; Kunche, Sudeepa; Jaheer, Mohmed; Mudavath, Ravi; Sivan, Sreekanth; Ch, Sarala Devi
2018-01-01
Some novel transition metal [Cu(II), Ni(II) and Co(II)] complexes of a nalidixic acid hydrazone have been prepared and characterized by spectro-analytical techniques, viz. elemental analysis, 1H NMR, mass, UV-vis, IR, TGA-DTA, SEM-EDX, ESR and spectrophotometric studies. The HyperChem 7.5 software was used for geometry optimization of the title compound in its molecular and ionic forms. Quantum mechanical parameters, contour maps of the highest occupied molecular orbitals (HOMO) and lowest unoccupied molecular orbitals (LUMO), and the corresponding binding energy values were computed using the semi-empirical single-point PM3 method. Stoichiometric equilibrium studies of the metal complexes, carried out spectrophotometrically using Job's continuous variation and mole ratio methods, inferred the formation of 1:2 (ML2) metal complexes in the respective systems. The title compound and its metal complexes were screened for antibacterial and antifungal properties, with the metal complexes exemplifying improved activity. Studies of nuclease activity for the cleavage of CT-DNA and MTT assays for in vitro cytotoxic properties showed high activity for the metal complexes. In addition, the DNA binding properties of the Cu(II), Ni(II) and Co(II) complexes, investigated by electronic absorption and fluorescence measurements, revealed their good binding ability, with commendable agreement between the Kb values obtained from the two techniques. Molecular docking studies were also performed to find the binding affinity of the synthesized compounds to DNA (PDB ID: 1N37) and the "thymidine phosphorylase from E. coli" (PDB ID: 4EAF) protein targets.
Hernández Valdés, Daniel; Rodríguez Riera, Zalua; Jáuregui Haza, Ulises; Díaz García, Alicia; Benoist, Eric
2016-01-01
The development of novel radiopharmaceuticals in nuclear medicine based on M(CO)3 (M = Tc, Re) complexes has attracted great attention [1]. The versatility of this core and the easy production of the fac-[M(CO)3(H2O)3]+ precursor could explain this interest [2,3]. The main characteristics of these tricarbonyl complexes are a high substitution stability of the three CO ligands and a corresponding lability of the coordinated water molecules, yielding, via easy exchange with a variety of mono-, bi- and tridentate ligands, complexes of very high kinetic stability. A computational study of different tricarbonyl complexes of Re(I) and Tc(I) has been performed using density functional theory. The solvent effect was simulated using the polarizable continuum model. The fully optimized complexes show geometries that compare favorably with the X-ray data. These structures were used as a starting point to investigate the relative stability of tricarbonyl complexes with various tridentate ligands. These comprise an iminodiacetic acid unit for tridentate coordination to the fac-[M(CO)3]+ moiety (M = Re, Tc), an aromatic ring system bearing a functional group (NO2-, NH2- or Cl-) as a linking-site model, and a tethering moiety (a methylene, ethylene, propylene, butylene or pentylene bridge) between the linking and coordinating sites. In general, the Re complexes are more stable than the corresponding Tc complexes. Furthermore, the NH2 functional group, a medium length of the carbon chain and meta substitution increase the stability of the complexes. The correlation of these results with the available experimental data [4] on these systems brings some understanding of the chemistry of tricarbonyl complexes. (author)
Susan A. Yoon
2016-12-01
We present a curriculum and instruction framework for computer-supported teaching and learning about complex systems in high school science classrooms. This work responds to a need in K-12 science education research and practice for the articulation of design features for classroom instruction that can address the Next Generation Science Standards (NGSS) recently launched in the USA. We outline the features of the framework, including curricular relevance, cognitively rich pedagogies, computational tools for teaching and learning, and the development of content expertise, and provide examples of how the framework is translated into practice. We follow this up with evidence from a preliminary study conducted with 10 teachers and 361 students, aimed at understanding the extent to which students learned from the activities. Results demonstrated gains in students' complex systems understanding and biology content knowledge. In interviews, students identified influences of various aspects of the curriculum and instruction framework on their learning.
Dynamics of complexation of a charged dendrimer by linear polyelectrolyte: Computer modelling
Lyulin, S.V.; Darinskii, A.A.; Lyulin, A.V.
2007-01-01
Brownian-dynamics simulations have been performed for complexes formed by a charged dendrimer and a long, oppositely charged linear polyelectrolyte, for which the overcharging phenomenon is always observed. After complex formation, the orientational mobility of the individual dendrimer bonds, the fluctuations
Software Alchemy: Turning Complex Statistical Computations into Embarrassingly-Parallel Ones
Norman Matloff
2016-07-01
The growth in the use of computationally intensive statistical procedures, especially with big data, has necessitated the use of parallel computation on diverse platforms such as multicore, GPUs, clusters and clouds. However, slowdown due to interprocess communication costs typically limits such methods to "embarrassingly parallel" (EP) algorithms, especially on non-shared-memory platforms. This paper develops a broadly applicable method for converting many non-EP algorithms into statistically equivalent EP ones. The method is shown to yield excellent levels of speedup for a variety of statistical computations. It also overcomes certain problems of memory limitations.
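The core idea — split the data into chunks, run the estimator independently on each chunk (an embarrassingly parallel map step), and average the chunk estimates — can be sketched as follows. This is a minimal illustration of the chunk-averaging strategy described in the abstract, not the paper's actual implementation; the OLS example and function names are assumptions:

```python
import numpy as np

def chunk_estimate(X, y):
    # the (possibly non-EP) base estimator: here, an ordinary least-squares fit
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def chunk_averaged_estimator(X, y, n_chunks):
    # Fit the estimator on each chunk independently; this map step is
    # embarrassingly parallel (e.g. dispatch via multiprocessing.Pool).
    chunks = zip(np.array_split(X, n_chunks), np.array_split(y, n_chunks))
    betas = [chunk_estimate(Xc, yc) for Xc, yc in chunks]
    # Averaging the chunk estimates yields an estimator that is
    # asymptotically equivalent to the full-data fit.
    return np.mean(betas, axis=0)
```

The serial list comprehension stands in for the parallel map; replacing it with a process pool parallelizes the computation without changing the statistical result.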
Calo, Victor M.; Collier, Nathan; Pardo, David; Paszyński, Maciej R.
2011-01-01
The multi-frontal direct solver is the state of the art for the direct solution of linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm on linear systems resulting from p finite elements. Specifically we provide the estimates for systems resulting from C0 polynomial spaces spanned by B-splines. The structured grid and uniform polynomial order used in isogeometric meshes simplifies the analysis.
A method to compute the inverse of a complex n-block tridiagonal quasi-hermitian matrix
Godfrin, Elena
1990-01-01
This paper presents a method to compute the inverse of a complex n-block tridiagonal quasi-hermitian matrix using adequate partitions of the complete matrix. This type of matrix is very common in quantum mechanics and, more specifically, in solid state physics (e.g., interfaces and superlattices) when the tight-binding approximation is used. The efficiency of the method is analyzed by comparing the required CPU time and work area against those of other usual techniques. (Author)
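The flavor of such partition-based methods can be sketched with the standard two-sweep Schur-complement recursion for the diagonal blocks of the inverse of a block tridiagonal matrix. This is a generic textbook recursion, not necessarily the paper's exact algorithm, and the function names are illustrative:

```python
import numpy as np

def block_tridiag_inv_diag(a, b, c):
    """Diagonal blocks of the inverse of a block tridiagonal matrix.

    a[i]: diagonal blocks; b[i]: superdiagonal blocks (row i, col i+1);
    c[i]: subdiagonal blocks (row i+1, col i). Works for complex
    (e.g. quasi-hermitian) blocks as well.
    """
    n = len(a)
    # forward sweep: left-connected Schur complements
    gL = [np.linalg.inv(a[0])]
    for i in range(1, n):
        gL.append(np.linalg.inv(a[i] - c[i-1] @ gL[i-1] @ b[i-1]))
    # backward sweep: right-connected Schur complements
    gR = [None] * n
    gR[n-1] = np.linalg.inv(a[n-1])
    for i in range(n - 2, -1, -1):
        gR[i] = np.linalg.inv(a[i] - b[i] @ gR[i+1] @ c[i])
    # combine both sweeps for each diagonal block of the inverse
    G = []
    for i in range(n):
        m = a[i].copy()
        if i > 0:
            m = m - c[i-1] @ gL[i-1] @ b[i-1]
        if i < n - 1:
            m = m - b[i] @ gR[i+1] @ c[i]
        G.append(np.linalg.inv(m))
    return G
```

Each sweep costs O(n) block inversions, so the diagonal of the inverse is obtained without ever forming or inverting the full matrix.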
Morgunov, V.G.; Kravchenko, Eh.A.
1988-01-01
A complex designed for investigations by the nuclear quadrupole resonance (NQR) method has been developed; it includes a radiospectrometer, a multichannel spectrum analyzer and an ISKRA 226.6 personal computer. An analog-to-digital converter (ADC) with a buffer storage device, an interface and a microcomputer are used to process the NQR signals. The ADC conversion time is no more than 50 ns, with 1% linearity. Programs for Fourier analysis of NQR signals and for the calculation of relaxation times have been developed.
Ganguly, Niloy; Mukherjee, Animesh
2009-01-01
This self-contained book systematically explores the statistical dynamics on and of complex networks having relevance across a large number of scientific disciplines. The theories related to complex networks are increasingly being used by researchers for their usefulness in harnessing the most difficult problems of a particular discipline. The book is a collection of surveys and cutting-edge research contributions exploring the interdisciplinary relationship of dynamics on and of complex networks. Towards this goal, the work is thematically organized into three main sections: Part I studies th
Chen, Y.; Fischer, U.
2005-01-01
Shielding calculations of advanced nuclear facilities such as accelerator-based neutron sources or fusion devices of the tokamak type are complicated due to their complex geometries and their large dimensions, including bulk shields of several meters thickness. While the complexity of the geometry in the shielding calculation can hardly be handled by the discrete ordinates method, the deep penetration of radiation through bulk shields is a severe challenge for the Monte Carlo particle transport technique. This work proposes a dedicated computational scheme for coupled Monte Carlo-discrete ordinates transport calculations to handle this kind of shielding problem. The Monte Carlo technique is used to simulate the particle generation and transport in the target region, with both complex geometry and reaction physics, and the discrete ordinates method is used to treat the deep penetration problem in the bulk shield. The coupling scheme has been implemented in a program system by loosely integrating the Monte Carlo transport code MCNP, the three-dimensional discrete ordinates code TORT and a newly developed coupling interface program for the mapping process. Test calculations were performed and compared with MCNP solutions; satisfactory agreement was obtained between the two approaches. The program system has been chosen to treat the complicated shielding problem of the accelerator-based IFMIF neutron source. The successful application demonstrates that the coupling scheme implemented in the program system is a useful computational tool for the shielding analysis of complex and large nuclear facilities. (authors)
Torralba, Marta; Jiménez, Roberto; Yagüe-Fabra, José A.
2018-01-01
The number of industrial applications of computed tomography (CT) for dimensional metrology in the 10⁰-10³ mm range has been continuously increasing, especially in the last years. Due to its specific characteristics, CT has the potential to be employed as a viable solution for measuring 3D complex micro-geometries as well (i.e., in the sub-mm dimensional range). However, there are different factors that may influence the CT process performance, one of them being the surface extraction technique used. In this paper, two different extraction techniques are applied to measure a complex miniaturized dental file by CT in order to analyze their contribution to the final measurement uncertainty in complex geometries at the mm to sub-mm scales. The first method is based on a similarity analysis (threshold determination), while the second one is based on a gradient or discontinuity analysis (the 3D…)
Fürnstahl, Philipp; Vlachopoulos, Lazaros; Schweizer, Andreas; Fucentese, Sandro F; Koch, Peter P
2015-08-01
The accurate reduction of tibial plateau malunions can be challenging without guidance. In this work, we report on a novel technique that combines 3-dimensional computer-assisted planning with patient-specific surgical guides to improve the reliability and accuracy of complex intraarticular corrective osteotomies. Preoperative planning based on 3-dimensional bone models was performed to simulate fragment mobilization and reduction in 3 cases. Surgical implementation of the preoperative plan using patient-specific cutting and reduction guides was evaluated; benefits and limitations of the approach were identified and discussed. The preliminary results are encouraging and show that complex intraarticular corrective osteotomies can be accurately performed with this technique. For selected patients with complex malunions around the tibial plateau, this method might be an attractive option, with the potential to facilitate achieving the most accurate correction possible.
Mlambo, CS
2015-01-01
In this paper, implementations of three Hough transform based fingerprint alignment algorithms are analyzed with respect to time complexity in the Java Card environment. The three algorithms are: the Local Match Based Approach (LMBA), the Discretized Rotation Based...
Moulin, Solenne; Dentel, Hé lè ne; Pagnoux-Ozherelyeva, Anastassiya; Gaillard, Sylvain; Poater, Albert; Cavallo, Luigi; Lohier, Jean Franç ois; Renaud, Jean Luc
2013-01-01
Reductive amination under hydrogen pressure is a valuable process in organic chemistry to access amine derivatives from aldehydes or ketones. Knölker's complex has been shown to be an efficient iron catalyst in this reaction. To determine the influence of the substituents on the cyclopentadienone ancillary ligand, a series of modified Knölker's complexes was synthesised and fully characterised. These complexes were also transformed into their analogous acetonitrile iron-dicarbonyl complexes. Catalytic activities of these complexes were evaluated and compared in a model reaction. The scope of this reaction is also reported. For mechanistic insights, deuterium-labelling experiments and DFT calculations were undertaken and are also presented. Festival of amination: two series of modified Knölker's complexes were synthesised and applied in the reductive amination of various carbonyl derivatives with primary or secondary amines (see scheme, TIPS = triisopropylsilyl).
Computational study of formamide-water complexes using the SAPT and AIM methods
Parreira, Renato L.T.; Valdes, Haydee; Galembeck, Sergio E.
2006-01-01
In this work, the complexes formed between formamide and water were studied by means of the SAPT and AIM methods. Complexation leads to significant alterations in the geometries and electronic structure of formamide. Intermolecular interactions in the complexes are intense, especially in the cases where the solvent interacts with the carbonyl and amide groups simultaneously. In the transition states, the interaction between the water molecule and the lone pair on the amide nitrogen is also important. In all the complexes studied herein, the electrostatic interactions between formamide and water are the main attractive force, and their contribution may be five times as large as the corresponding contribution from dispersion, and twice as large as the contribution from induction. However, an increase in the resonance of planar formamide with the successive addition of water molecules may suggest that the hydrogen bonds taking place between formamide and water have some covalent character
On the Complexity of Computing Optimal Private Park-and-Ride Plans
Olsen, Martin
2013-01-01
… or size of the parties' inputs. This is despite the fact that in many cases the size of a party's input can be confidential. The reason for this omission seems to have been the folklore belief that, as with encryption, it is impossible to carry out non-trivial secure computation while hiding the size of the parties' inputs … that the folklore belief may not be fully accurate. In this work, we initiate a theoretical study of input-size hiding secure computation, and focus on the two-party case. We present definitions for this task, and deal with the subtleties that arise in the setting where there is no a priori polynomial bound …
Bareiss, E.H.
1976-05-01
The objectives of the work are to develop mathematically and computationally founded criteria for the design of highly efficient and reliable multidimensional neutron transport codes to solve a variety of neutron migration and radiation problems, and to analyze existing and new methods for performance. As new analytical insights are gained, new numerical methods are developed and tested. Significant results obtained include implementation of the integer-preserving Gaussian elimination method (two-step method) in a CDC 6400 computer code, modes analysis for one-dimensional transport solutions, and a new method for solving the 1-T transport equation. Some of the work dealt with the interface and corner problem in diffusion theory.
Blikstein, Paulo; Worsley, Marcelo
2016-01-01
New high-frequency multimodal data collection technologies and machine learning analysis techniques could offer new insights into learning, especially when students have the opportunity to generate unique, personalized artifacts, such as computer programs, robots, and solutions to engineering challenges. To date most of the work on learning analytics…
Bareiss, E.H.
1975-01-01
The objectives of the research remain the same as outlined in the original proposal. They are in short as follows: Develop mathematically and computationally founded criteria for the design of highly efficient and reliable multi-dimensional neutron transport codes to solve a variety of neutron migration and radiation problems and analyze existing and new methods for performance. (U.S.)
High level language for measurement complex control based on the computer E-100I
Zubkov, B. V.
1980-01-01
A high-level language was designed to control the process of conducting an experiment using the "Elektronika-1001" computer. Program examples are given for controlling the measuring and actuating devices. The procedure for including these programs in the suggested high-level language is described.
A Framework for the Design of Computer-Assisted Simulation Training for Complex Police Situations
Söderström, Tor; Åström, Jan; Anderson, Greg; Bowles, Ron
2014-01-01
Purpose: The purpose of this paper is to report progress concerning the design of a computer-assisted simulation training (CAST) platform for developing decision-making skills in police students. The overarching aim is to outline a theoretical framework for the design of CAST to facilitate police students' development of search techniques in…
A new fast algorithm for computing complex number-theoretic transforms
Reed, I. S.; Liu, K. Y.; Truong, T. K.
1977-01-01
A high-radix fast Fourier transform (FFT) algorithm for computing transforms over GF(q²), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
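A complex number-theoretic transform of this kind can be sketched directly: arithmetic is done in GF(q²) for a Mersenne prime q, with elements written as a + bi (taking i² = −1 is valid because q ≡ 3 mod 4), and a circular convolution is computed by transform, pointwise product, and inverse transform. The sketch below uses a naive O(N²) transform rather than the paper's high-radix FFT, fixes q = 2¹³ − 1, and restricts to power-of-two lengths; the helper names are illustrative:

```python
Q = (1 << 13) - 1  # q = 8191, a Mersenne prime; q % 4 == 3, so i^2 = -1 works

def cmul(x, y):                      # multiplication in GF(q^2)
    a, b = x; c, d = y
    return ((a * c - b * d) % Q, (a * d + b * c) % Q)

def cpow(x, e):                      # square-and-multiply exponentiation
    r = (1, 0)
    while e:
        if e & 1:
            r = cmul(r, x)
        x = cmul(x, x)
        e >>= 1
    return r

def cinv(x):                         # inverse via the norm a^2 + b^2 in GF(q)
    a, b = x
    n = pow((a * a + b * b) % Q, Q - 2, Q)
    return (a * n % Q, (Q - b) * n % Q)

def find_root(N):                    # element of order N, N a power of two;
    for a in range(2, Q):            # q + 1 = 2^13 supplies such roots
        w = cpow((a, 1), (Q * Q - 1) // N)
        if cpow(w, N // 2) != (1, 0):
            return w
    raise ValueError("no root found")

def ntt(x, w):                       # naive transform X[k] = sum_n x[n] w^{nk}
    N = len(x)
    out = []
    for k in range(N):
        s = (0, 0)
        for n, xn in enumerate(x):
            t = cmul(xn, cpow(w, n * k))
            s = ((s[0] + t[0]) % Q, (s[1] + t[1]) % Q)
        out.append(s)
    return out

def circular_convolution(u, v):      # convolution theorem over GF(q^2)
    N = len(u)
    w = find_root(N)
    U = ntt([(a % Q, 0) for a in u], w)
    V = ntt([(a % Q, 0) for a in v], w)
    prod = [cmul(s, t) for s, t in zip(U, V)]
    res = ntt(prod, cinv(w))         # inverse transform, up to a factor N
    n_inv = pow(N, Q - 2, Q)
    return [r[0] * n_inv % Q for r in res]
```

The point of working in GF(q²) rather than GF(q) is that q² − 1 is divisible by q + 1 = 2¹³, so power-of-two transform lengths are plentiful even though q − 1 alone has almost no factors of two.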
Bareiss, E.H.
1977-08-01
The objectives of this research are to develop mathematically and computationally founded criteria for the design of highly efficient and reliable multidimensional neutron transport codes to solve a variety of neutron migration and radiation problems, and to analyze existing and new methods for performance
Gordon, S.; Mcbride, B.; Zeleznik, F. J.
1984-01-01
An addition to the computer program of NASA SP-273 is given that permits transport property calculations for the gaseous phase. Approximate mixture formulas are used to obtain viscosity and frozen thermal conductivity. Reaction thermal conductivity is obtained by the same method as in NASA TN D-7056. Transport properties for 154 gaseous species were selected for use with the program.
Liebich, R. E.; Chang, Y.-S.; Chun, K. C.
2000-03-31
The relative audibility of multiple sounds occurs in separate, independent channels (frequency bands) termed critical bands or equivalent rectangular (filter-response) bandwidths (ERBs) of frequency. The true nature of human hearing is a function of a complex combination of subjective factors, both auditory and nonauditory. Assessment of the probability of individual annoyance, community-complaint reaction levels, speech intelligibility, and the most cost-effective mitigation actions requires sensation-level data; these data are one of the most important auditory factors. However, sensation levels cannot be calculated by using single-number, A-weighted sound level values. This paper describes specific steps to compute sensation levels. A unique, newly developed procedure is used, which simplifies and improves the accuracy of such computations by the use of maximum sensation levels that occur, for each intrusive-sound spectrum, within each ERB. The newly developed program ENAUDIBL makes use of ERB sensation-level values generated with some computational subroutines developed for the formerly documented program SPECTRAN.
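The per-ERB bookkeeping described above can be sketched as follows. The bandwidth and ERB-number expressions are the standard Glasberg-Moore formulas; the function names, the input spectrum format, and the threshold values in the example are illustrative assumptions, not ENAUDIBL's actual interface:

```python
import numpy as np

def erb_width_hz(f_hz):
    # Glasberg & Moore equivalent rectangular bandwidth at center frequency f
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_number(f_hz):
    # "ERB-number" scale: how many ERBs lie below frequency f
    return 21.4 * np.log10(4.37 * np.asarray(f_hz) / 1000.0 + 1.0)

def max_sensation_level_per_erb(freqs_hz, band_levels_db, thresholds_db):
    # sensation level = band level above the hearing threshold; keep the
    # maximum sensation level occurring within each ERB, as in the text
    sl = np.asarray(band_levels_db) - np.asarray(thresholds_db)
    erb_index = np.floor(erb_number(freqs_hz)).astype(int)
    out = {}
    for i, s in zip(erb_index, sl):
        out[i] = max(out.get(i, -np.inf), s)
    return out
```

Note that single-number A-weighted levels cannot recover these per-band sensation levels, which is exactly why the spectrum must be partitioned into ERBs first.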
Benkner, Siegfried; Arbona, Antonio; Berti, Guntram; Chiarini, Alessandro; Dunlop, Robert; Engelbrecht, Gerhard; Frangi, Alejandro F; Friedrich, Christoph M; Hanser, Susanne; Hasselmeyer, Peer; Hose, Rod D; Iavindrasana, Jimison; Köhler, Martin; Iacono, Luigi Lo; Lonsdale, Guy; Meyer, Rodolphe; Moore, Bob; Rajasekaran, Hariharan; Summers, Paul E; Wöhrer, Alexander; Wood, Steven
2010-11-01
The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data presents a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.
Dumas, Helene M; Fragala-Pinkham, Maria A; Rosen, Elaine L; O'Brien, Jane E
2017-11-01
To assess construct (convergent and divergent) validity of the Pediatric Evaluation of Disability Inventory Computer Adaptive Test (PEDI-CAT) in a sample of children with complex medical conditions. Demographics, clinical information, PEDI-CAT normative score, and the Post-Acute Acuity Rating for Children (PAARC) level were collected for all post-acute hospital admissions (n = 110) from 1 April 2015 to 1 March 2016. Correlations between the PEDI-CAT Daily Activities, Mobility, and Social/Cognitive domain scores for the total sample and across three age groups (infant, preschool, and school-age) were calculated. Differences in mean PEDI-CAT scores for each domain across two groups, children with "Less Complexity" or "More Complexity" based on PAARC level, were examined. All correlations for the total sample and age subgroups were statistically significant, and trends across age groups were evident, with the stronger associations between domains found for the infant group. Significant differences were found between mean PEDI-CAT Daily Activities, Mobility, and Social/Cognitive normative scores across the two complexity groups, with children in the "Less Complex" group having higher PEDI-CAT scores for all domains. This study provides evidence indicating the PEDI-CAT can be used with confidence in capturing and differentiating children's level of function in a post-acute care setting. Implications for Rehabilitation The PEDI-CAT is a measure of function for children with a variety of conditions and can be used in any clinical setting. Convergent validity of the PEDI-CAT's Daily Activities, Mobility, and Social/Cognitive domains was significant and particularly strong for infants and young children with medical complexity. The PEDI-CAT was able to discriminate groups of children with differing levels of medical complexity admitted to a pediatric post-acute care hospital.
Torres-Pomales, Wilfredo
2014-01-01
A system is safety-critical if its failure can endanger human life or cause significant damage to property or the environment. State-of-the-art computer systems on commercial aircraft are highly complex, software-intensive, functionally integrated, and network-centric systems of systems. Ensuring that such systems are safe and comply with existing safety regulations is costly and time-consuming as the level of rigor in the development process, especially the validation and verification activities, is determined by considerations of system complexity and safety criticality. A significant degree of care and deep insight into the operational principles of these systems is required to ensure adequate coverage of all design implications relevant to system safety. Model-based development methodologies, methods, tools, and techniques facilitate collaboration and enable the use of common design artifacts among groups dealing with different aspects of the development of a system. This paper examines the application of model-based development to complex and safety-critical aircraft computer systems. Benefits and detriments are identified and an overall assessment of the approach is given.
Syntactic Complexity Metrics and the Readability of Programs in a Functional Computer Language
van den Berg, Klaas; Engel, F.L.; Bouwhuis, D.G.; Bosser, T.; d'Ydewalle, G.
This article reports on the definition and the measurement of the software complexity metrics of Halstead and McCabe for programs written in the functional programming language Miranda. An automated measurement of these metrics is described. In a case study, the correlation is established between the
A modal perspective on the computational complexity of attribute value grammar
Blackburn, P.; Spaan, E.
Many of the formalisms used in Attribute Value grammar are notational variants of languages of propositional modal logic, and testing whether two Attribute Value descriptions unify amounts to testing for modal satisfiability. In this paper we put this observation to work. We study the complexity of
Stability of polycation-DNA complexes: comparison of computer model and experimental data
Dybal, Jiří; Huml, Karel; Kabeláč, Martin; Reschel, Tomáš; Ulbrich, Karel
2004-01-01
Roč. 11, č. 1 (2004), s. 3-6 ISSN 1211-5894 R&D Projects: GA AV ČR KSK4055109 Institutional research plan: CEZ:AV0Z4050913 Keywords : polycation-DNA complexes * gene delivery * quantum mechanical calculations Subject RIV: CC - Organic Chemistry
Kristensen, Jesper Michael; Fitzek, Frank H. P.; Koch, Peter
2005-01-01
…the expected increase in complexity leading to a decrease in energy efficiency, cooperative wireless networks are introduced. Cooperative wireless networks enable the concept of resource sharing. Resource sharing is interpreted as collaborative signal processing. This interpretation leads to the concept of a distributed signal processor. OFDM and the principle of the FFT are described as an example of collaborative signal processing.
Sarkar, P.; Mondal, M. K.; Sarmah, Amrit; Maity, S.; Mukherjee, C.
2017-01-01
Roč. 56, č. 14 (2017), s. 8068-8077 ISSN 0020-1669 Institutional support: RVO:61388963 Keywords : transition metal complexes * induced electron transfer * non-innocent ligands Subject RIV: CA - Inorganic Chemistry OBOR OECD: Inorganic and nuclear chemistry Impact factor: 4.857, year: 2016
Kingston, Greer B.; Rajabali Nejad, Mohammadreza; Gouldby, Ben P.; van Gelder, Pieter H.A.J.M.
2011-01-01
With the continual rise of sea levels and deterioration of flood defence structures over time, it is no longer appropriate to define a design level of flood protection, but rather, it is necessary to estimate the reliability of flood defences under varying and uncertain conditions. For complex
Deconvoluting the Complexity of Bone Metastatic Prostate Cancer via Computational Modeling
2016-09-01
…an integrated computational modeling approach can be used to predict the temporal behavior of bone metastatic prostate cancer heterogeneous for TGFβ and
Monitoring performance of a highly distributed and complex computing infrastructure in LHCb
Mathe, Z.; Haen, C.; Stagni, F.
2017-10-01
In order to ensure an optimal performance of the LHCb Distributed Computing, based on LHCbDIRAC, it is necessary to be able to inspect the behavior over time of many components: firstly the agents and services on which the infrastructure is built, but also all the computing tasks and data transfers that are managed by this infrastructure. This consists of recording and then analyzing time series of a large number of observables, for which the usage of SQL relational databases is far from optimal. Therefore, within DIRAC we have been studying novel possibilities based on NoSQL databases (ElasticSearch, OpenTSDB and InfluxDB); as a result of this study we developed a new monitoring system based on ElasticSearch. It has been deployed on the LHCb Distributed Computing infrastructure, where it collects data from all the components (agents, services, jobs) and allows creating reports through Kibana and a web user interface, which is based on the DIRAC web framework. In this paper we describe this new implementation of the DIRAC monitoring system. We give details on the ElasticSearch implementation within the DIRAC general framework, as well as an overview of the advantages of the pipeline aggregation used for creating a dynamic bucketing of the time series. We present the advantages of using the ElasticSearch DSL high-level library for creating and running queries. Finally, we present the performance of the system.
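The "dynamic bucketing of the time series" mentioned above can be illustrated with a toy aggregator in plain Python (names and defaults hypothetical; the real system performs this with Elasticsearch pipeline aggregations, not application code):

```python
from collections import defaultdict

def dynamic_buckets(samples, start, end, target_points=100):
    """Aggregate (timestamp, value) samples into buckets whose width is
    derived from the queried range, so any range yields roughly the same
    number of plotted points regardless of the raw sampling rate."""
    width = max(1, (end - start) // target_points)
    sums, counts = defaultdict(float), defaultdict(int)
    for t, v in samples:
        if start <= t < end:
            b = start + ((t - start) // width) * width  # bucket start time
            sums[b] += v
            counts[b] += 1
    return {b: sums[b] / counts[b] for b in sums}  # per-bucket mean

# one sample per second for ~17 minutes, bucketed down to 10 points
samples = [(t, float(t % 10)) for t in range(0, 1000)]
buckets = dynamic_buckets(samples, 0, 1000, target_points=10)
print(len(buckets))
```

Zooming in (a narrower `start`/`end`) automatically shrinks the bucket width, which is the behaviour the pipeline aggregation provides server-side.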
Yang, Tzuhsiung; Berry, John F
2018-06-04
The computation of nuclear second derivatives of energy, or the nuclear Hessian, is an essential routine in quantum chemical investigations of ground and transition states, thermodynamic calculations, and molecular vibrations. Analytic nuclear Hessian computations require the resolution of costly coupled-perturbed self-consistent field (CP-SCF) equations, while numerical differentiation of analytic first derivatives has an unfavorable 6N (N = number of atoms) prefactor. Herein, we present a new method in which grid computing is used to accelerate and/or enable the evaluation of the nuclear Hessian via numerical differentiation: NUMFREQ@Grid. Nuclear Hessians were successfully evaluated by NUMFREQ@Grid at the DFT level as well as using RIJCOSX-ZORA-MP2 or RIJCOSX-ZORA-B2PLYP for a set of linear polyacenes with systematically increasing size. For the larger members of this group, NUMFREQ@Grid was found to outperform the wall clock time of analytic Hessian evaluation; at the MP2 or B2PLYP levels, these Hessians cannot even be evaluated analytically. We also evaluated a 156-atom catalytically relevant open-shell transition metal complex and found that NUMFREQ@Grid is faster (7.7 times shorter wall clock time) and less demanding (4.4 times less memory requirement) than an analytic Hessian. Capitalizing on the capabilities of parallel grid computing, NUMFREQ@Grid can outperform analytic methods in terms of wall time, memory requirements, and treatable system size. The NUMFREQ@Grid method presented herein demonstrates how grid computing can be used to facilitate embarrassingly parallel computational procedures and is a pioneer for future implementations.
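The numerical-differentiation scheme behind this approach can be sketched on a toy potential: each Hessian column comes from gradients at two displaced geometries, and every displaced-gradient evaluation is independent of the others, which is exactly what makes the procedure embarrassingly parallel. The quadratic potential below is an assumption chosen so the exact answer is known; it is not the paper's chemistry.

```python
def numerical_hessian(grad, x, h=1e-5):
    """Hessian by central differences of an analytic gradient: column i
    needs gradients at x + h*e_i and x - h*e_i, i.e. 2*len(x) independent
    jobs that could each be dispatched to a separate grid worker."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        gp, gm = grad(xp), grad(xm)
        for j in range(n):
            H[j][i] = (gp[j] - gm[j]) / (2 * h)
    return H

# toy energy E = x0^2 + 3*x0*x1 + 2*x1^2 with its analytic gradient
grad = lambda x: [2 * x[0] + 3 * x[1], 3 * x[0] + 4 * x[1]]
H = numerical_hessian(grad, [0.3, -0.7])
print([[round(v, 6) for v in row] for row in H])
```

For a quadratic potential the central difference recovers the constant Hessian [[2, 3], [3, 4]] to rounding error.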
Poongavanam, Vasanthanathan; Olsen, Lars; Jørgensen, Flemming Steen
2010-01-01
Predicting binding affinities for receptor-ligand complexes is still one of the challenging processes in computational structure-based ligand design. Many computational methods have been developed to achieve this goal, such as docking and scoring methods, the linear interaction energy (LIE) method, and methods based on statistical mechanics. In the present investigation, we started from an LIE model to predict the binding free energy of structurally diverse compounds of cytochrome P450 1A2 ligands, one of the important human metabolizing isoforms of the cytochrome P450 family. The data set includes both substrates and inhibitors. It appears that the electrostatic contribution to the binding free energy becomes negligible in this particular protein, and a simple empirical model was derived, based on a training set of eight compounds. The root mean square error for the training set was 3.7 kJ/mol. Subsequent…
Xiang Ding
2014-01-01
Project delivery planning is a key stage used by the project owner (or project investor) for organizing design, construction, and other operations in a construction project. The main task in this stage is to select an appropriate project delivery method (PDM). In order to analyze different factors affecting PDM selection, this paper establishes a multiagent model mainly to show how project complexity, governance strength, and market environment affect the project owner's decision on PDM. Experiment results show that the project owner usually chooses the Design-Build method when the project is very complex within a certain range. Besides, this paper points out that the Design-Build method will be the prior choice when the potential contractors develop quickly. This paper provides owners with methods and suggestions by showing how these factors affect PDM selection, and it may improve project performance.
A modal perspective on the computational complexity of attribute value grammar
Blackburn, P.; Spaan, E.
1992-01-01
Many of the formalisms used in Attribute Value grammar are notational variants of languages of propositional modal logic, and testing whether two Attribute Value descriptions unify amounts to testing for modal satisfiability. In this paper we put this observation to work. We study the complexity of the satisfiability problem for nine modal languages which mirror different aspects of AVS description formalisms, including the ability to express re-entrancy, the ability to express generalisations...
Brown, D L
2009-05-01
Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex
Rao, V. S. R.; Biswas, Margaret; Mukhopadhyay, Chaitali; Balaji, P. V.
1989-03-01
The CCEM method (Contact Criteria and Energy Minimisation) has been developed and applied to study protein-carbohydrate interactions. The method uses available X-ray data even on the native protein at low resolution (above 2.4 Å) to generate realistic models of a variety of proteins with various ligands. The two examples discussed in this paper are arabinose-binding protein (ABP) and pea lectin. The X-ray crystal structure data reported on ABP-β- L-arabinose complex at 2.8, 2.4 and 1.7 Å resolution differ drastically in predicting the nature of the interactions between the protein and ligand. It is shown that, using the data at 2.4 Å resolution, the CCEM method generates complexes which are as good as the higher (1.7 Å) resolution data. The CCEM method predicts some of the important hydrogen bonds between the ligand and the protein which are missing in the interpretation of the X-ray data at 2.4 Å resolution. The theoretically predicted hydrogen bonds are in good agreement with those reported at 1.7 Å resolution. Pea lectin has been solved only in the native form at 3 Å resolution. Application of the CCEM method also enables us to generate complexes of pea lectin with methyl-α- D-glucopyranoside and methyl-2,3-dimethyl-α- D-glucopyranoside which explain well the available experimental data in solution.
Computational Study on Mössbauer Isomer Shifts of Some Organic-neptunium (IV) Complexes
Masashi Kaneko
2015-12-01
Relativistic DFT calculations are applied to some organo-neptunium (IV) complexes, Cp3NpIVX (Cp = η5-C5H5; X = BH4, Cl, OtBu, Ph, nBu), in order to understand the bonding properties between Np and the ligands. We employ the scalar-relativistic ZORA Hamiltonian with an all-electron basis set (SARC). The calculated electron densities at the Np nucleus position in the complexes at the B2PLYP / SARC level of theory correlate strongly with the experimental Mössbauer isomer shifts of the 237Np system. The result of the bond overlap population analysis indicates that the bonding strength decreases in the order X = BH4, Cl, OtBu, Ph, nBu. The tendency depends on the degree of covalent interaction between the Np 5f electrons and the X ligand. It is suggested that it is important to estimate the bonding contribution of the 5f orbital to understand the electronic state of organo-actinide complexes.
Sumlin, Benjamin J.; Heinson, William R.; Chakrabarty, Rajan K.
2018-01-01
The complex refractive index m = n + ik of a particle is an intrinsic property which cannot be directly measured; it must be inferred from its extrinsic properties such as the scattering and absorption cross-sections. Bohren and Huffman called this approach "describing the dragon from its tracks", since the inversion of Lorenz-Mie theory equations is intractable without the use of computers. This article describes PyMieScatt, an open-source module for Python that contains functionality for solving the inverse problem for complex m using extensive optical and physical properties as input, and calculating regions where valid solutions may exist within the error bounds of laboratory measurements. Additionally, the module has comprehensive capabilities for studying homogeneous and coated single spheres, as well as ensembles of homogeneous spheres with user-defined size distributions, making it a complete tool for studying the optical behavior of spherical particles.
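The inverse problem the module solves, searching the (n, k) plane for refractive indices whose forward-modelled cross-sections match measurements within error bounds, can be sketched with a stand-in forward model. The toy model below is NOT Lorenz-Mie theory; it is an assumption chosen only to make the search self-contained.

```python
def invert(forward, meas_sca, meas_abs, tol, n_range, k_range, steps=50):
    """Grid-search the complex refractive index m = n + i*k: keep every
    grid point whose forward-modelled scattering and absorption both fall
    within the measurement error bounds, yielding a solution region."""
    solutions = []
    for i in range(steps + 1):
        n = n_range[0] + (n_range[1] - n_range[0]) * i / steps
        for j in range(steps + 1):
            k = k_range[0] + (k_range[1] - k_range[0]) * j / steps
            sca, absorb = forward(n, k)
            if abs(sca - meas_sca) <= tol and abs(absorb - meas_abs) <= tol:
                solutions.append((n, k))
    return solutions

# stand-in forward model (NOT Mie theory): scattering grows with n,
# absorption with k; the "true" particle here has n = 1.5, k = 0.05
toy_forward = lambda n, k: (2.0 * (n - 1.0), 5.0 * k)
sols = invert(toy_forward, meas_sca=1.0, meas_abs=0.25, tol=0.02,
              n_range=(1.3, 1.7), k_range=(0.0, 0.1))
print(len(sols) > 0)
```

Returning the whole region of valid solutions, rather than a single point, mirrors the module's emphasis on error bounds of laboratory measurements.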
PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A
2016-03-10
Over the last decade sensitivity analysis techniques have been shown to be very useful to analyse complex and high dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It examines sensitivity analysis of the models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and
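A minimal illustration of the kind of timed parameter perturbation described above (a finite-difference sketch, not PeTTSy's perturbation-theory machinery): perturb one parameter of a simple ODE over a chosen time window and measure the effect on the solution at a later time.

```python
def simulate(decay, forcing, t_end=3.0, dt=0.001, pulse=(2.0, 3.0, 0.0)):
    """Euler-integrate dx/dt = -decay*x + forcing from x(0) = 1, with the
    decay parameter perturbed by 'pulse' = (start, end, delta): the
    perturbation has a user-chosen timing, length and strength."""
    x, t, start, end, delta = 1.0, 0.0, *pulse
    while t < t_end:
        d = decay + (delta if start <= t < end else 0.0)
        x += dt * (-d * x + forcing)
        t += dt
    return x

base = simulate(1.0, 0.5, pulse=(2.0, 3.0, 0.0))   # unperturbed run
pert = simulate(1.0, 0.5, pulse=(2.0, 3.0, 0.5))   # decay bumped on [2, 3)
sensitivity = (pert - base) / 0.5                   # finite-difference slope
print(round(base, 3), round(pert, 3))
```

Varying the pulse window and strength, as PeTTSy lets the user do for each parameter, maps out how the output depends on when and how hard the system is perturbed.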
Radtke, Alexandra; Morello, Samantha; Muir, Peter; Arnoldy, Courtney; Bleedorn, Jason
2017-11-01
To report the evaluation, surgical planning, and outcome for correction of a complex limb deformity in the tibia of a donkey using computed tomographic (CT) imaging and a 3D bone model. Case report. A 1.5-year-old, 110 kg donkey colt with an angular and torsional deformity of the right pelvic limb. Findings on physical examination included a severe, complex deformity of the right pelvic limb that substantially impeded ambulation. Both hind limbs were imaged via CT, and imaging software was used to characterize the bone deformity. A custom stereolithographic bone model was printed for preoperative planning and rehearsal of the surgery. A closing wedge ostectomy with de-rotation of the tibia was stabilized with 2 precontoured 3.5-mm locking compression plates. Clinical follow-up was available for 3.5 years postoperatively. CT allowed characterization of the angular and torsional bone deformity of the right tibia. A custom bone model facilitated surgical planning and rehearsal of the procedure. Tibial corrective ostectomy was performed without complication. Postoperative management included physical rehabilitation to help restore muscular function and pelvic limb mechanics. Short-term and long-term follow-up confirmed bone healing and excellent clinical function. CT imaging and stereolithography facilitated the evaluation and surgical planning of a complex limb deformity. This combination of techniques may improve the accuracy of the surgeons' evaluation of complex bone deformities in large animals, shorten operating times, and improve outcomes. © 2017 The American College of Veterinary Surgeons.
Lim, Sin Liang; Koo, Voon Chet; Daya Sagar, B.S.
2009-01-01
Multiscale convexity analysis of certain fractal binary objects, such as the 8-segment Koch quadric, Koch triadic, and random Koch quadric and triadic islands, is performed via (i) morphologic openings with respect to a recursively changing template size, and (ii) construction of convex hulls through half-plane closings. Based on the scale vs convexity measure relationship, transition levels between the morphologic regimes are determined as crossover scales. These crossover scales are taken as the basis to segment binary fractal objects into various morphologically prominent zones. Each segmented zone is characterized through normalized morphologic complexity measures. Despite the fact that there is no notably significant relationship between the zone-wise complexity measures and the fractal dimensions computed by the conventional box counting method, fractal objects, whether generated deterministically or by introducing randomness, possess morphologically significant sub-zones with varied degrees of spatial complexity. Classification of realistic fractal sets and/or fields according to sub-zones possessing varied degrees of spatial complexity provides insight to explore links with the physical processes involved in the formation of fractal-like phenomena.
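The conventional box counting referred to above estimates a fractal dimension from the slope of log N(ε) against log(1/ε), where N(ε) is the number of boxes of side ε needed to cover the set. A minimal sketch, validated on a set whose dimension is known to be 1:

```python
import math

def box_counting_dimension(points, sizes):
    """Estimate fractal dimension as the least-squares slope of
    log N(eps) versus log(1/eps), where N(eps) counts the distinct
    grid boxes of side eps occupied by the 2D point set."""
    xs, ys = [], []
    for eps in sizes:
        boxes = {(int(px / eps), int(py / eps)) for px, py in points}
        xs.append(math.log(1.0 / eps))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# a densely sampled line segment should have dimension ~1
line = [(i / 10000.0, 0.0) for i in range(10000)]
dim = box_counting_dimension(line, [0.1, 0.05, 0.025, 0.0125])
print(round(dim, 2))
```

Applying the same routine to a digitized Koch island boundary would give the non-integer dimension the abstract contrasts with its morphologic complexity measures.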
Price computation in electricity auctions with complex rules: An analysis of investment signals
Vazquez, Carlos; Hallack, Michelle; Vazquez, Miguel
2017-01-01
This paper discusses the problem of defining marginal costs when integer variables are present, in the context of short-term power auctions. Most of the proposals for price computation existing in the literature are concerned with short-term competitive equilibrium (generators should not be willing to change the dispatch assigned to them by the auctioneer), which implies operational-cost recovery for all of the generators accepted in the auction. However, this is in general not enough to choose between the different pricing schemes. We propose to include an additional criterion in order to discriminate among different pricing schemes: prices also have to be signals for generation expansion. Using this condition, we arrive at a single solution to the problem of defining prices, where they are computed as the shadow prices of the balance equations in a linear version of the unit commitment problem. Importantly, not every linearization of the unit commitment is valid; we develop the conditions for this linear model to provide adequate investment signals. Compared to other proposals in the literature, our results provide a strong motivation for the pricing scheme and a simple method for price computation. - Highlights: • Pricing proposals in power markets often deal with just accounting-cost recovery. • Including opportunity costs is an additional property required for efficient pricing. • We develop a framework to analyze the pricing proposals found in the literature. • We propose a pricing mechanism to include the costs of short-run integer decisions. • As it includes short-run opportunity costs, it provides efficient long-term signals.
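As a baseline for the pricing discussion, short-run marginal pricing in a simple merit-order dispatch can be sketched as follows. This sketch deliberately ignores the integer commitment decisions (start-up costs, minimum output) whose treatment is the paper's actual contribution; it only fixes the reference notion of "price set by the marginal unit".

```python
def dispatch_and_price(offers, demand):
    """Merit-order dispatch: accept the cheapest energy offers first;
    the short-run marginal price is the cost of the last (marginal)
    accepted offer, i.e. the cost of serving one more MW."""
    offers = sorted(offers, key=lambda o: o[1])   # (capacity_mw, cost)
    remaining, price, schedule = demand, 0.0, []
    for cap, cost in offers:
        if remaining <= 0:
            break
        q = min(cap, remaining)
        schedule.append((q, cost))
        remaining -= q
        price = cost
    return schedule, price

offers = [(100, 10.0), (50, 30.0), (80, 20.0)]
sched, price = dispatch_and_price(offers, demand=150)
print(price)  # 20.0: the 20-cost unit is marginal
```

With integer commitment variables, this shadow-price logic breaks down unless the linearization is chosen carefully, which is precisely the condition the paper develops.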
FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.
Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora
2013-09-01
In this paper, we present a distributed computing system, called DCMARK, aimed at solving partial differential equations at the basis of many investigation fields, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network paradigm, which allows us to divide the differential equation system solving into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test the present idea, we choose to implement DCMARK on a single FPGA, designing the single processor in order to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to study the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200 cells, Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it allows us to reduce the elaboration time more than other similar systems in the literature. To ensure a high level of reconfigurability, we design a compact system on programmable chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC Host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results.
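The locally interconnected update that lets one processor own one KdV cell can be sketched as an explicit finite-difference step. This is a generic scheme written for illustration; the paper's CNN-based integration may differ in its discretization.

```python
import math

def kdv_step(u, dt, dx):
    """One explicit step of the KdV equation u_t = -6*u*u_x - u_xxx on a
    periodic ring; each cell reads only its four nearest neighbours,
    which is what makes a one-processor-per-cell mapping natural."""
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        um2, um1 = u[(i - 2) % n], u[(i - 1) % n]
        up1, up2 = u[(i + 1) % n], u[(i + 2) % n]
        ux = (up1 - um1) / (2 * dx)
        uxxx = (up2 - 2 * up1 + 2 * um1 - um2) / (2 * dx ** 3)
        new[i] = u[i] + dt * (-6.0 * u[i] * ux - uxxx)
    return new

# 200-cell ring with a single-soliton initial condition (speed c = 1)
n, dx, dt = 200, 0.25, 1e-4
u = [0.5 / math.cosh(0.5 * (i * dx - 25.0)) ** 2 for i in range(n)]
mass0 = sum(u) * dx
for _ in range(1000):
    u = kdv_step(u, dt, dx)
print(round(sum(u) * dx / mass0, 6))  # total "mass" is conserved
```

Because every cell performs the same fixed amount of neighbour-only work per step, the wall-clock time per step is independent of how many cells run in parallel, which is the constant-computing-time property the abstract claims for the FPGA array.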
The PDP-10 computer at the heart of the ERASME complex.
CERN PhotoLab
1974-01-01
ERASME was an instrument developed at CERN for the analysis of the 70 mm film from the 3.7 m bubble chamber, BEBC, but was also used at an early stage with the 50 mm film from the 2 m bubble chamber. Not a fully automated instrument, it could perform the full sequence of operations (scanning and measurement), relying heavily on operator intervention in both stages. There were at the end five ERASME units, each one with its own PDP-11 computer with 8 K words of 16-bit memory, linked to a common PDP-10 by an interface. (See CERN Courier 14 (1974) p. 48.)
Stream computing for biomedical signal processing: A QRS complex detection case-study.
Murphy, B M; O'Driscoll, C; Boylan, G B; Lightbody, G; Marnane, W P
2015-01-01
Recent developments in "Big Data" have brought significant gains in the ability to process large amounts of data on commodity server hardware. Stream computing is a relatively new paradigm in this area, addressing the need to process data in real time with very low latency. While this approach has been developed for dealing with large scale data from the world of business, security and finance, there is a natural overlap with clinical needs for physiological signal processing. In this work we present a case study of streams processing applied to a typical physiological signal processing problem: QRS detection from ECG data.
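A toy per-sample QRS detector (hypothetical, not the paper's algorithm) shows the shape of computation a streams platform would host: a small amount of state updated for every arriving ECG sample, with very low per-sample latency.

```python
def detect_qrs(samples, fs=250, refractory_s=0.25):
    """Streaming-style QRS detection: square the first difference to
    emphasise the steep QRS slope, then threshold with a refractory
    period so each beat is reported once."""
    refractory = int(refractory_s * fs)
    # crude fixed threshold from the signal's derivative energy
    # (a real streaming detector would adapt this online)
    d2 = [(samples[i] - samples[i - 1]) ** 2 for i in range(1, len(samples))]
    thresh = 0.5 * max(d2)
    peaks, last, prev = [], -refractory, samples[0]
    for i in range(1, len(samples)):          # one cheap update per sample
        energy = (samples[i] - prev) ** 2
        prev = samples[i]
        if energy > thresh and i - last >= refractory:
            peaks.append(i)
            last = i
    return peaks

# synthetic ECG-like signal: flat baseline with a sharp spike every second
fs = 250
sig = [0.0] * (5 * fs)
for beat in range(5):
    sig[beat * fs + 100] = 1.0   # a one-sample stand-in for the R wave
peaks = detect_qrs(sig, fs)
print(peaks)  # [100, 350, 600, 850, 1100]
```

The per-sample loop body is exactly the kind of operator a stream-computing engine schedules across commodity servers.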
Bounding the Computational Complexity of Flowchart Programs with Multi-dimensional Rankings
Alias , Christophe; Darte , Alain; Feautrier , Paul; Gonnord , Laure
2010-01-01
Proving the termination of a flowchart program can be done by exhibiting a ranking function, i.e., a function from the program states to a well-founded set, which strictly decreases at each program step. A standard method to automatically generate such a function is to compute invariants for each program point and to search for a ranking in a restricted class of functions that can be handled with linear programming techniques. Our first contribution is to propose an efficient algorithm to com...
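A miniature example of a multi-dimensional ranking (an illustration constructed for this summary, not taken from the paper): the pair (x, y) is bounded below and strictly decreases in lexicographic order at every step of the loop, which proves the loop terminates.

```python
def loop_trace(x, y):
    """Flowchart-style loop: while x > 0: if y > 0 then y -= 1,
    else x -= 1; y = x.  No single linear function of (x, y) decreases
    at every step, but the 2-dimensional ranking (x, y) does."""
    steps = []
    while x > 0:
        steps.append((x, y))
        if y > 0:
            y -= 1
        else:
            x -= 1
            y = x
    return steps

def is_lex_decreasing(trace):
    # Python tuple comparison is exactly lexicographic order
    return all(a > b for a, b in zip(trace, trace[1:]))

trace = loop_trace(3, 2)
print(len(trace), is_lex_decreasing(trace))
```

Because the ranking lives in a well-founded set (pairs of non-negative integers under lexicographic order), its strict decrease also bounds the number of iterations, which is the complexity-bounding use the title refers to.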
M. Kasemann
Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...
On the Reduction of Computational Complexity of Deep Convolutional Neural Networks
Partha Maji
2018-04-01
Deep convolutional neural networks (ConvNets), which are at the heart of many new emerging applications, achieve remarkable performance in audio and visual recognition tasks. Unfortunately, achieving accuracy often implies significant computational costs, limiting deployability. In modern ConvNets it is typical for the convolution layers to consume the vast majority of computational resources during inference. This has made the acceleration of these layers an important research area in academia and industry. In this paper, we examine the effects of co-optimizing the internal structures of the convolutional layers and the underlying implementation of the fundamental convolution operation. We demonstrate that a combination of these methods can have a big impact on the overall speedup of a ConvNet, achieving a ten-fold increase over baseline. We also introduce a new class of fast one-dimensional (1D) convolutions for ConvNets using the Toom–Cook algorithm. We show that our proposed scheme is mathematically well-grounded, robust, and does not require any time-consuming retraining, while still achieving speedups solely from convolutional layers with no loss in baseline accuracy.
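One well-known member of the Toom–Cook/Winograd family is F(2,3), which produces two outputs of a 3-tap 1D correlation with 4 multiplications instead of 6. The element-wise transforms are written out by hand below as a generic sketch; this is not the paper's specific scheme.

```python
def f23(d, g):
    """Winograd/Toom-Cook F(2,3): two outputs of a 3-tap correlation
    from a 4-sample input tile, using 4 multiplications (m1..m4)
    instead of the 6 a direct computation needs."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2.0
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2.0
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct(d, g):
    """Reference: direct 3-tap correlation over the same tile (6 mults)."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]

d, g = [1.0, 2.0, -1.0, 0.5], [0.25, 0.5, -0.75]
print(f23(d, g), direct(d, g))  # both [2.0, -0.375]
```

In practice the filter-side transforms of `g` are precomputed once per layer, so the per-tile saving in multiplications translates directly into inference speedup, without retraining, as the abstract notes.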
P. McBride
The Computing Project is preparing for a busy year in which the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together, and with groups from the Offline Project, in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in preparation of large samples for physics and detector studies, ramping up to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component, and work is ongoing to devise CMS-specific tests to be included in Service Availa...
M. Kasemann
Overview During the past three months, activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created to support the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, which collects user experience and feedback during analysis activities and develops tools to increase efficiency. The development plan for DMWM for 2009/2011 was drawn up at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...
Pasculescu, Adrian; Schoof, Erwin M; Creixell, Pau; Zheng, Yong; Olhovsky, Marina; Tian, Ruijun; So, Jonathan; Vanderlaan, Rachel D; Pawson, Tony; Linding, Rune; Colwill, Karen
2014-04-04
A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts into project-specific pipelines, tracks interdependencies between related tasks, and enables the generation of summary reports as well as publication-quality images. As a result, the gap between experimental and computational components of a typical large-scale biology project is reduced, decreasing the time between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion, and modeling of multiple/selected reaction monitoring (MRM/SRM) results. CoreFlow was purposely designed as an environment for programmers to rapidly perform data analysis. These analyses are assembled into project-specific workflows that are readily shared with biologists to guide the next stages of experimentation. Its simple yet powerful interface provides a structure where scripts can be written and tested virtually simultaneously to shorten the life cycle of code development for a particular task. The scripts are exposed at every step so that a user can quickly see the relationships between the data, the assumptions that have been made, and the manipulations that have been performed. Since the scripts use commonly available programming languages, they can easily be
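The pipeline idea described above, named processing scripts with tracked interdependencies executed in order, can be sketched in a few lines. This is a toy illustration, not CoreFlow's actual API; the step names and the SILAC-style correction factor are hypothetical:

```python
# Toy illustration of a script pipeline with dependency tracking:
# each named step declares its upstream dependencies, and steps are
# executed in topological order, passing results along.
from graphlib import TopologicalSorter

def load(data):
    return data

def correct_labeling(data):
    # hypothetical correction factor for incomplete isotopic labeling
    return [x * 1.05 for x in data]

def summarize(data):
    return sum(data) / len(data)

steps = {
    "load":      (load, []),
    "correct":   (correct_labeling, ["load"]),
    "summarize": (summarize, ["correct"]),
}

def run_pipeline(steps, raw):
    order = TopologicalSorter({name: set(deps) for name, (_, deps) in steps.items()})
    results = {}
    for name in order.static_order():
        func, deps = steps[name]
        inp = results[deps[0]] if deps else raw
        results[name] = func(inp)
    return results

out = run_pipeline(steps, [1.0, 2.0, 3.0])
print(out["summarize"])
```

In a CoreFlow-like system, each step would be an R, Python, or Perl script reading from and writing to the shared database rather than an in-process function, but the dependency bookkeeping is the same.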
Matsunaga, Takeshi; Hakanson, Lars
2010-09-01
The Research Group for Environmental Science of JAEA held a meeting on computational and experimental studies for modeling radionuclide migration in complex aquatic ecosystems on November 16-20, 2009. The aim was to discuss the relevance of various computational and experimental approaches to such modeling. The meeting was attended by a Swedish researcher, Prof. Dr. Lars Hakanson of Uppsala University. It included a joint talk at the Institute for Environmental Sciences, in addition to a field and facility survey of the JNFL commercial reprocessing plant located in Rokkasho, Aomori. The meeting demonstrated that it is crucial 1) to make the model structure strictly relevant to the target objectives of a study and 2) to account for the inherent fluctuations of target values in nature by means of qualitative parameterization. Moreover, it was confirmed that there can be multiple different modeling approaches (e.g. detailed or simplified) relevant to the objectives of a study. These discussions should be considered in model integration for complex aquatic ecosystems consisting of catchments, rivers, lakes and coastal oceans, which can interact with the atmosphere. This report compiles the research subjects and lectures presented at the meeting, with the associated discussions. The 10 presented papers are indexed individually. (J.P.N.)
Bandiera, Laura; Bagli, Enrico; Guidi, Vincenzo [INFN Sezione di Ferrara and Dipartimento di Fisica e Scienze della Terra, Università degli Studi di Ferrara, Via Saragat 1, 44121 Ferrara (Italy); Tikhomirov, Victor V. [Research Institute for Nuclear Problems, Belarusian State University, Minsk (Belarus)
2015-07-15
The analytical theories of coherent bremsstrahlung and channeling radiation describe the process of radiation generation in crystals well in some special cases. However, the treatment of complex situations requires a more general approach. In this report we present a C++ routine, named RADCHARM++, to compute the electromagnetic radiation emitted by electrons and positrons in crystals and complex structures. In the RADCHARM++ routine, the computation of e.m. radiation generation is based on the direct integration of the quasiclassical formula of Baier and Katkov. This approach makes it possible to take real trajectories into account, and thereby the contribution of incoherent scattering, which can be very important in many cases, for instance for electron channeling. The generality of the Baier–Katkov operator method permits one to simulate the electromagnetic radiation emitted by electrons/positrons in very different cases, e.g., in straight, bent and periodically bent crystals, and for different beam energy ranges, from sub-GeV to TeV and above. The RADCHARM++ routine has been implemented in the Monte Carlo code DYNECHARM++, which solves the classical equation of motion of charged particles traveling through a crystal under the continuum potential approximation. The code has been shown to reproduce the results of experiments performed at the MAinzer MIkrotron (MAMI) with 855 MeV electrons and has been used to predict the radiation spectrum generated by the same electron beam in a bent crystal.
Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy; Westwater, Dave
2011-01-01
The BT-Nurse system uses data-to-text technology to automatically generate a natural language nursing shift summary in a neonatal intensive care unit (NICU). The summary is based solely on data held in an electronic patient record system; no additional data entry is required. BT-Nurse was tested for two months in the Royal Infirmary of Edinburgh NICU. Nurses were asked to rate the understandability, accuracy, and helpfulness of the computer-generated summaries; they were also asked for free-text comments about the summaries. The nurses found the majority of the summaries to be understandable, accurate, and helpful (p…); however, they also identified problems with some of the generated summaries. In conclusion, natural language NICU shift summaries can be automatically generated from an electronic patient record, but our proof-of-concept software needs considerable additional development work before it can be deployed.
CoreFlow: A computational platform for integration, analysis and modeling of complex biological data
Pasculescu, Adrian; Schoof, Erwin; Creixell, Pau
2014-01-01
A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts […], reducing the time between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion […]
Optimization of stochastic discrete systems and control on complex networks: computational networks
Lozovanu, Dmitrii
2014-01-01
This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors' new findings on determining the optimal solution of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. First, the book studies the finite state space of Markov processes and reviews the existing methods and algorithms for determining the main characteristics in Markov chains, before proposing new approaches based on dynamic programming and combinatorial methods. Chapter two is dedicated to infinite horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while Chapter three develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. In closing, the book's final chapter is devoted to finite horizon stochastic con...
Lee, Hyung B.; Ghia, Urmila; Bayyuk, Sami; Oberkampf, William L.; Roy, Christopher J.; Benek, John A.; Rumsey, Christopher L.; Powers, Joseph M.; Bush, Robert H.; Mani, Mortaza
2016-01-01
Computational fluid dynamics (CFD) and other advanced modeling and simulation (M&S) methods are increasingly relied upon to predict the performance, reliability and safety of engineering systems. Analysts, designers, decision makers, and project managers who must depend on simulation need practical techniques and methods for assessing simulation credibility. The AIAA Guide for Verification and Validation of Computational Fluid Dynamics Simulations (AIAA G-077-1998 (2002)), originally published in 1998, was the first engineering standards document available to the engineering community for the verification and validation (V&V) of simulations. Much progress has been made in these areas since 1998. The AIAA Committee on Standards for CFD is currently updating this Guide to incorporate the important developments that have taken place in V&V concepts, methods, and practices, particularly with regard to the broader context of predictive capability and uncertainty quantification (UQ) methods and approaches. This paper provides an overview of the changes and extensions currently underway to update the AIAA Guide. Specifically, a framework for predictive capability will be described for incorporating a wide range of error and uncertainty sources identified during the modeling, verification, and validation processes, with the goal of estimating the total prediction uncertainty of the simulation. The Guide's goal is to provide a foundation for understanding and addressing major issues and concepts in predictive CFD. However, the Guide will not recommend specific approaches in these areas, as the field is rapidly evolving. It is hoped that the guidelines provided in this paper, and explained in more detail in the Guide, will aid in the research, development, and use of CFD in engineering decision-making.
Computer-based diagnostic monitoring to enhance the human-machine interface of complex processes
Kim, I.S.
1992-02-01
There is growing interest in introducing an automated, on-line diagnostic monitoring function into the human-machine interfaces (HMIs) or control rooms of complex process plants. The design of such a system should be properly integrated with other HMI systems in the control room, such as the alarm system or the Safety Parameter Display System (SPDS). This paper provides a conceptual foundation for the development of a Plant-wide Diagnostic Monitoring System (PDMS), along with functional requirements for the system and other advanced HMI systems. Insights into the design of an efficient and robust PDMS are presented, gained from a critical review of various methodologies developed in the nuclear power industry, the chemical process industry, and the space technology community.
Using virtual humans and computer animations to learn complex motor skills: a case study in karate
Spanlang, Bernhard
2011-12-01
Full Text Available Learning motor skills is a complex task involving many cognitive issues. One of the main issues is retrieving the relevant information from the learning environment. In a traditional learning situation, a teacher gives oral explanations and performs actions to provide the learner with visual examples. Using virtual reality (VR) as a tool for learning motor tasks is promising. However, it raises questions about the type of information this kind of environment can offer. In this paper, we propose to analyze the impact of virtual humans on the perception of learners. As a case study, we apply this research problem to karate gestures. The results of this study show no significant difference in the post-training performance of learners exposed to three different learning environments (traditional, video and VR).
Baldenebro-López, Jesús; Castorena-González, José; Flores-Holguín, Norma; Almaral-Sánchez, Jorge; Glossman-Mitnik, Daniel
2012-01-01
In this work, we studied a copper complex-based dye, named the Cu(I) biquinoline dye, which is proposed for potential photovoltaic applications. Computed electron affinities and ionization potentials were used to compare the different levels of calculation employed in this study, which are based on density functional theory (DFT) and time-dependent DFT (TD-DFT). Further, the maximum absorption wavelengths from our theoretical calculations were compared with the experimental data. It was found that the M06/LANL2DZ + DZVP level of calculation provides the best approximation. This level of calculation was used to find the optimized molecular structure and to predict the main molecular vibrations, the molecular orbital energies, the dipole moment, the isotropic polarizability and the chemical reactivity parameters that arise from Conceptual DFT. PMID:23443107
I. Fisk
2013-01-01
Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operations. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...
Positron computed tomography studies of cerebral metabolic responses to complex motor tasks
Phelps, M.E.; Mazziotta, J.C.
1984-01-01
Human motor system organization was explored in 8 right-handed male subjects using ¹⁸F-fluorodeoxyglucose and positron computed tomography to measure cerebral glucose metabolism. Five subjects had triple studies (eyes closed) including: control (hold pen in right hand without moving), normal-size writing (subject repeatedly writes name) and large (10-15 times normal) name writing. In these studies, normal and large size writing had a similar distribution of metabolic responses when compared to control studies. Activations (percent change from control) were in the range of 12-20% and occurred in the striatum bilaterally > contralateral Rolandic cortex > contralateral thalamus. No significant activations were observed in the ipsilateral thalamus, Rolandic cortex or cerebellum (the supplementary motor cortex was not examined). The magnitude of the metabolic response in the striatum was greater with large versus normal-sized writing. This differential response may be due to an increased number and topographic distribution of neurons responding with the same average activity between tasks, or an increase in the functional activity of the same neuronal population between the two tasks (the present spatial resolution is inadequate to differentiate). When subjects (N=3) performed novel sequential finger movements, the maximal metabolic response was in the contralateral Rolandic cortex > striatum. Such studies provide a means of exploring human motor system organization and motor learning, and provide a basis for examining patients with motor system disorders.
Experimental and computational fluid dynamic studies of mixing for complex oral health products
Garcia, Marti Cortada; Mazzei, Luca; Angeli, Panagiota
2015-11-01
Mixing highly viscous non-Newtonian fluids is common in the consumer health industry. This process is often empirical and involves many product-specific pilot-plant trials. The first step in studying the mixing process is to build knowledge of the rheology of the fluids involved. In this research, a systematic approach is used to validate the rheology of two liquids: glycerol and a gel formed by polyethylene glycol and carbopol. Initially, a constitutive equation is determined that relates the viscosity of the fluids to temperature, shear rate and concentration. The key variable for the validation is the power required for mixing, which can be obtained both from CFD and experimentally, using a stirred tank and impeller of well-defined geometries at different impeller speeds. Good agreement between the two values indicates a successful validation of the rheology and allows the CFD model to be used for the study of mixing in the complex vessel geometries and increased sizes encountered during scale-up.
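The power required for mixing, used above as the validation variable, is conventionally expressed through the textbook relation P = Np·ρ·N³·D⁵, with the power number Np roughly constant in the turbulent regime and proportional to 1/Re in the laminar regime. A hedged sketch follows; the constants Kp, Np and the regime threshold are illustrative assumptions, not values from the study:

```python
# Illustrative stirred-tank power-draw estimate (textbook relation,
# not the authors' specific setup or fluid model).
def reynolds(rho, n, d, mu):
    """Impeller Reynolds number: Re = rho * N * D^2 / mu."""
    return rho * n * d**2 / mu

def power_draw(rho, n, d, mu, kp=70.0, np_turb=5.0):
    """P = Np * rho * N^3 * D^5, with Np = Kp/Re at low Re (crude
    regime switch, for illustration only)."""
    re = reynolds(rho, n, d, mu)
    np_ = kp / re if re < 10 else np_turb
    return np_ * rho * n**3 * d**5

# Glycerol-like liquid and a small lab impeller (assumed values):
# rho = 1260 kg/m^3, N = 5 rev/s, D = 0.05 m, mu = 1.0 Pa·s
p = power_draw(rho=1260.0, n=5.0, d=0.05, mu=1.0)
print(p)   # power draw in watts
```

In a validation study like the one above, this kind of estimate is what the CFD-integrated torque and the measured shaft power are both compared against.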
Thirumurugan, R.; Anitha, K.
2017-10-01
A new organic proton-transfer complex, creatininium 4-nitrobenzoate (C4NB), has been synthesized, and its single crystals were grown successfully by the slow evaporation technique. The grown single crystal was subjected to various characterization techniques, such as single-crystal X-ray diffraction (SCXRD), FTIR, FT-Raman and Kurtz-Perry powder second harmonic generation (SHG). The SCXRD analysis revealed that C4NB crystallized in the orthorhombic crystal system with the noncentrosymmetric (NCS) space group P2₁2₁2₁. The creatininium cation and 4-nitrobenzoate anion are connected through a pair of N-H⋯O hydrogen bonds (N(3)-H(6)⋯O(3) (x+1, y, z) and N(2)-H(5)⋯O(2) (x-1/2, -y-1/2, -z+2)), forming an R₂²(8) ring motif. The crystal structure is stabilized by strong N-H⋯O and weak C-H⋯O intermolecular interactions, which were quantitatively analysed by Hirshfeld surface and fingerprint (FP) analysis. FTIR and FT-Raman studies unambiguously confirmed the vibrational modes of the functional groups present in the C4NB compound. The SHG efficiency of the grown crystal is 4.6 times greater than that of the standard potassium dihydrogen phosphate (KDP) material. Moreover, density functional theory (DFT) quantities such as the Mulliken charge distribution, frontier molecular orbitals (FMOs), molecular electrostatic potential (MEP) map, natural bond orbital (NBO) analysis and first-order hyperpolarizability (β₀) were calculated to explore the structure-property relationship.
SWIM: a computational tool to unveiling crucial nodes in complex biological networks.
Paci, Paola; Colombo, Teresa; Fiscon, Giulia; Gurtner, Aymone; Pavesi, Giulio; Farina, Lorenzo
2017-03-20
SWItchMiner (SWIM) is a wizard-like software implementation of a previously described procedure able to extract information contained in complex networks. Specifically, SWIM allows unearthing the existence of a new class of hubs, called "fight-club hubs", characterized by a marked negative correlation with their first nearest neighbors. Among them, a special subset of genes, called "switch genes", appears to be characterized by an unusual pattern of intra- and inter-module connections that confers on them a crucial topological role, interestingly mirrored by evidence of their clinical and biological relevance. Here, we applied SWIM to a large panel of cancer datasets from The Cancer Genome Atlas, in order to highlight switch genes that could be critically associated with the drastic changes in the physiological state of cells or tissues induced by cancer development. We discovered that switch genes are found in all the cancers we studied and that they encompass protein-coding genes and non-coding RNAs, recovering many known key cancer players but also many new potential biomarkers not yet characterized in the cancer context. Furthermore, SWIM is amenable to detecting switch genes in different organisms and cell conditions, with the potential to uncover important players in biologically relevant scenarios, including but not limited to human cancer.
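The defining property of the "fight-club hubs" described above, a negative average correlation with their first nearest neighbors, can be sketched as follows. This is a minimal illustration of the criterion, not the SWIM implementation; the node names and expression profiles are invented:

```python
import numpy as np

# Flag nodes whose profile is negatively correlated, on average,
# with the profiles of their first nearest neighbors (the APCC
# criterion behind "fight-club hubs").
def average_neighbor_correlation(expr, adjacency):
    """expr: dict node -> 1-D profile; adjacency: dict node -> neighbor list."""
    apcc = {}
    for node, neighbors in adjacency.items():
        cors = [np.corrcoef(expr[node], expr[n])[0, 1] for n in neighbors]
        apcc[node] = float(np.mean(cors))
    return apcc

# Toy three-node network: "b" is perfectly anti-correlated with "a".
expr = {"a": np.array([1.0, 2.0, 3.0, 4.0]),
        "b": np.array([4.0, 3.0, 2.0, 1.0]),
        "c": np.array([1.0, 2.0, 2.9, 4.1])}
adjacency = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}

apcc = average_neighbor_correlation(expr, adjacency)
candidates = [n for n, v in apcc.items() if v < 0]   # "fight-club"-like nodes
print(candidates)
```

SWIM additionally requires hub-level connectivity and module-spanning link patterns before calling a node a switch gene; the negative average correlation is only the first filter.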
Computational discovery of picomolar Q(o) site inhibitors of cytochrome bc1 complex.
Hao, Ge-Fei; Wang, Fu; Li, Hui; Zhu, Xiao-Lei; Yang, Wen-Chao; Huang, Li-Shar; Wu, Jia-Wei; Berry, Edward A; Yang, Guang-Fu
2012-07-11
A critical challenge to fragment-based drug discovery (FBDD) is its low-throughput nature, due to the necessity of biophysical fragment screening. Herein, a method of pharmacophore-linked fragment virtual screening (PFVS) was successfully developed. Its application yielded the first picomolar-range Q(o) site inhibitors of the cytochrome bc(1) complex, an important membrane protein for drug and fungicide discovery. Compared with the original hit compound 4 (K(i) = 881.80 nM, porcine bc(1)), the most potent compound 4f displayed a 20,507-fold improved binding affinity (K(i) = 43.00 pM). Compound 4f was shown to be a noncompetitive inhibitor with respect to the substrate cytochrome c, but a competitive inhibitor with respect to the substrate ubiquinol. Additionally, we determined the crystal structure of compound 4e (K(i) = 83.00 pM) bound to the chicken bc(1) at 2.70 Å resolution, providing a molecular basis for understanding its ultrapotency. To our knowledge, this study is the first application of the FBDD method to the discovery of picomolar inhibitors of a membrane protein. This work demonstrates that the novel PFVS approach is a high-throughput drug discovery method, independent of biophysical screening techniques.
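As a quick arithmetic check, the quoted fold-improvement follows directly from the ratio of the two inhibition constants reported above:

```python
# Ratio of the two reported inhibition constants, both expressed in pM.
ki_hit_pm = 881.80e3    # compound 4: 881.80 nM = 881,800 pM
ki_best_pm = 43.00      # compound 4f: 43.00 pM

fold_improvement = round(ki_hit_pm / ki_best_pm)
print(fold_improvement)   # 20507
```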
Torras, Juan; Zanuy, David; Bertran, Oscar; Alemán, Carlos; Puiggalí, Jordi; Turón, Pau; Revilla-López, Guillem
2018-02-01
The study of materials science has long been devoted to the disentanglement of bulk structures, which mainly entails finding the inner structure of materials. That structure accounts for a major portion of materials' properties. Yet, as our knowledge of these "backbones" grew, so did interest in the properties of materials' boundaries, that is, the properties at the frontier with the surrounding environment, called the interface. The interface is thus to be understood as the sum of the material's surface plus the surrounding environment, be it in solid, liquid or gas phase. The study of phenomena at this interface requires both experimental and theoretical techniques and, above all, a wise combination of them in order to shed light on the most intimate details at the atomic, molecular and mesostructure levels. Here, we report several cases as proof of concept of the results achieved when studying interface phenomena by combining a myriad of experimental and theoretical tools to overcome the usual limitations regarding atomic detail, size and time scales, and systems of complex composition. Real-world examples of this combined experimental-theoretical work, together with new software tools, are offered to the reader.
Computational study of the solvation of protoporphyrin IX and its Fe2+ complex
Guizado, Teobaldo Cuya; Pita, Samuel Da Rocha; Louro, Sonia R. Wanderley; Pascutti, Pedro Geraldo
Molecular dynamics (MD) simulations of a well-known hydrophobic structure, the heme (ferroprotoporphyrin IX), and of its precursor in heme synthesis, protoporphyrin IX (PPIX), are presented. The objective of the present study is to determine the stability of both structures in an aqueous medium, as well as the structure-solvent relation and hydration shells, and to discuss their implications for biological processes. Density functional theory (DFT) is used for the electronic and structural characterization of both PPIX and its Fe2+ complex. A classical approach based on the Gromacs package is used for the MD. The radial distribution function g(r) is used to examine the allocation of water molecules around different regions of the porphyrins. The calculations demonstrate the heterogeneous character of the porphyrins with respect to their affinity for water molecules, the general hydrophobic character of the porphyrin ring whether or not bonded to the iron ion, the hydrophilic character of the carboxylic oxygens, which is unchanged upon iron binding, and the low hydrophilicity of Fe2+ in the heme.
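The radial distribution function g(r) used above to examine how water arranges around the porphyrins can be sketched for a generic particle system. This is an illustrative minimal version with random positions in a periodic cubic box, not the paper's MD setup:

```python
import numpy as np

# Minimal radial distribution function g(r) for particles in a cubic
# box with periodic boundaries: pair counts per spherical shell,
# normalized by the ideal-gas (uncorrelated) expectation.
def rdf(positions, box, n_bins=50, r_max=None):
    n = len(positions)
    r_max = r_max if r_max is not None else box / 2
    dists = []
    for i in range(n):
        for j in range(i + 1, n):
            d = positions[i] - positions[j]
            d -= box * np.round(d / box)          # minimum-image convention
            dists.append(np.linalg.norm(d))
    hist, edges = np.histogram(dists, bins=n_bins, range=(0, r_max))
    rho = n / box**3
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    g = hist / (shell * rho * n / 2)              # normalize by ideal pairs
    r = 0.5 * (edges[1:] + edges[:-1])
    return r, g

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10.0, size=(200, 3))
r, g = rdf(pos, box=10.0)
# for uncorrelated (ideal-gas-like) positions, g(r) fluctuates around 1
print(g[r > 2].mean())
```

In an MD analysis, the positions would come from trajectory frames centered on a reference group (e.g. the carboxylic oxygens), so hydrophilic regions show a pronounced first-shell peak in g(r) while hydrophobic regions do not.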
Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh
2015-12-01
Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING) and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular in the seasonal cycle of biomass and export production, but these do not appear to be necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. These results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.
Estimation of staff doses in complex radiological examinations using a Monte Carlo computer code
Vanhavere, F.
2007-01-01
The protection of medical personnel in interventional radiology is an important issue in radiological protection. The irradiation of the worker is largely non-uniform, and a large part of his body is shielded by a lead apron. The estimation of effective dose (E) under these conditions is difficult, and several approaches are used to estimate the effective dose when such a protective apron is involved. This study presents a summary of an extensive series of simulations to determine the scatter-dose distribution around the patient and the staff effective dose from personal dosimeter readings. The influence of different parameters (such as beam energy and size, patient size, irradiated region, and worker position and orientation) on the staff doses has been determined. Published algorithms that combine the readings of an unshielded and a shielded dosimeter to estimate effective dose have been applied, and a new algorithm that gives more accurate dose estimates for a wide range of situations was proposed. A computational approach was used to determine the dose distribution in the worker's body. The radiation transport and energy deposition were simulated using the MCNP4B code. The human bodies of the patient and radiologist were generated with the Body Builder anthropomorphic model-generating tool. The radiologist is protected by a lead apron (0.5 mm lead equivalent in the front and 0.25 mm lead equivalent in the back and sides) and a thyroid collar (0.35 mm lead equivalent). The lower arms of the worker were folded to simulate the arm position during clinical examinations. This realistic situation of the folded arms affects the effective dose to the worker: depending on the worker position and orientation (and of course the beam energy), the difference can reach 25%. A total of 12 Hp(10) dosimeters were positioned above and under the lead apron at the neck, chest and waist levels. Extra dosimeters for the skin dose were positioned at the forehead, the forearms and the front surface of
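The double-dosimetry algorithms mentioned above combine an over-apron and an under-apron reading into a weighted estimate of the effective dose. A hedged sketch of that combination follows; the weights are illustrative placeholders, not the coefficients derived in the study:

```python
# Double-dosimetry combination: E is estimated as a weighted sum of the
# under-apron reading (dominant, since the trunk is shielded) and the
# over-apron reading (accounting for the unshielded head and neck).
# The default weights below are illustrative placeholders only.
def effective_dose_estimate(h_under_msv, h_over_msv, w_under=1.0, w_over=0.1):
    """Return an effective-dose estimate in mSv from two Hp(10) readings."""
    return w_under * h_under_msv + w_over * h_over_msv

# Hypothetical monthly readings: 0.2 mSv under the apron, 3.0 mSv over it.
e = effective_dose_estimate(0.2, 3.0)
print(e)   # approximately 0.5 mSv
```

Published algorithms of this form differ mainly in the two weights, which is why the study above fits new coefficients against Monte Carlo organ-dose calculations for a range of geometries.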
Townsend, James C.; Weston, Robert P.; Eidson, Thomas M.
1993-01-01
The Framework for Interdisciplinary Design Optimization (FIDO) is a general programming environment for automating the distribution of complex computing tasks over a networked system of heterogeneous computers. For example, instead of manually passing a complex design problem between its diverse specialty disciplines, the FIDO system provides for automatic interactions between the discipline tasks and facilitates their communications. The FIDO system networks all the computers involved into a distributed heterogeneous computing system, so that they have access to centralized data and can work on their parts of the total computation in parallel whenever possible. Thus, each computational task can be done by the most appropriate computer. Results can be viewed as they are produced, and variables can be changed manually to steer the process. The software is modular in order to ease migration to new problems: different codes can be substituted for each of the current code modules with little or no effect on the others. The potential for commercial use of FIDO rests in the capability it provides for automatically coordinating diverse computations on a networked system of workstations and computers. For example, FIDO could provide the coordination required for the design of vehicles or electronics, or for modeling complex systems.
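The coordination pattern FIDO automates, independent discipline tasks running in parallel against centralized data, can be sketched as follows. This is a minimal illustration of the pattern, not the FIDO software itself; the discipline functions and design variables are hypothetical:

```python
# Discipline tasks run in parallel, each reading shared design variables
# from a central store and contributing its results back.
from concurrent.futures import ThreadPoolExecutor

central_data = {"wing_span": 30.0}   # hypothetical shared design variable

def aerodynamics(data):
    # toy surrogate for an aerodynamics code
    return {"lift": 2.5 * data["wing_span"]}

def structures(data):
    # toy surrogate for a structures code
    return {"weight": 120.0 + 4.0 * data["wing_span"]}

with ThreadPoolExecutor() as pool:
    futures = {name: pool.submit(task, central_data)
               for name, task in [("aero", aerodynamics),
                                  ("struct", structures)]}
    results = {name: f.result() for name, f in futures.items()}

# merge discipline outputs back into the central store
for contribution in results.values():
    central_data.update(contribution)
print(central_data["lift"], central_data["weight"])
```

In the real system each "task" would be a separate code on its own machine communicating over the network, but the fan-out/fan-in structure around centralized data is the same.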
M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley
Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...
Cazas-Duran, Eymi Valery; Fischer Rubira-Bullen, Izabel Regina; Pagin, Otávio; Stuchi Centurion-Pagin, Bruna
Tonsilloliths and an abnormal stylohyoid complex may produce symptoms similar to those of conditions with a different aetiology. Individuals with cleft lip and palate describe similar symptoms because of the anatomical implications peculiar to this anomaly. The aim of this study was to determine the prevalence of abnormal stylohyoid complex and tonsilloliths on cone beam computed tomography in individuals with cleft lip and palate. According to the inclusion and exclusion criteria, 66 CT scans out of 2,794 were analysed on i-Cat® Vision software, with an intra-examiner Kappa index of 0.8. The total prevalence of incomplete ossification of the stylohyoid complex in individuals with cleft lip and palate was 66.6%; the prevalence was 75% in females and 61.9% in males. The total prevalence of tonsilloliths was 7.5%. It is important to record calcification of the stylohyoid complex and tonsilloliths in the radiological report, owing to their anatomical proximity and symptomatology similar to other orofacial impairments in individuals with cleft lip and palate, with particular attention to females with oral clefts and to patients with trans-incisive foramen and post-incisive foramen clefts, because these are more prevalent. Greater knowledge of the anatomical morphometry of individuals with cleft lip and palate contributes greatly to the selection of clinical approaches and to the quality of life of these patients, since cleft lip and palate is one of the most common anomalies. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.
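The intra-examiner Kappa of 0.8 cited above is the standard chance-corrected agreement statistic (Cohen's kappa) between two readings of the same scans. A minimal sketch, with invented counts (not this study's data):

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table [[both_yes, yes_no], [no_yes, both_no]]."""
    total = sum(sum(row) for row in table)
    p_observed = (table[0][0] + table[1][1]) / total          # raw agreement
    row = [sum(r) / total for r in table]                      # first-reading marginals
    col = [(table[0][j] + table[1][j]) / total for j in range(2)]  # second-reading marginals
    p_chance = row[0] * col[0] + row[1] * col[1]               # agreement expected by chance
    return (p_observed - p_chance) / (1 - p_chance)

# hypothetical: two readings agree on 40 positives and 20 negatives, disagree on 6
kappa = cohens_kappa([[40, 3], [3, 20]])
```

With these invented counts the raw agreement is about 91%, but correcting for chance brings kappa down to roughly 0.80, the level reported in the study.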
Leishear, Robert A.; Lee, Si Y.; Poirier, Michael R.; Steeper, Timothy J.; Ervin, Robert C.; Giddings, Billy J.; Stefanko, David B.; Harp, Keith D.; Fowley, Mark D.; Van Pelt, William B.
2012-10-07
Computational fluid dynamics (CFD) is recognized as a powerful engineering tool. CFD has advanced over the years to the point where it can give deep insight into the analysis of very complex processes. There is a danger, though, that an engineer can place too much confidence in a simulation. If a user is not careful, it is easy to believe that you plug in the numbers, the answer comes out, and you are done. This assumption can lead to significant errors. As we discovered in the course of a study on behalf of the Department of Energy's Savannah River Site in South Carolina, CFD models fail to capture some of the large variations inherent in complex processes. These variations, or scatter, in experimental data emerge from physical tests and are inadequately captured or expressed by calculated mean values for a process. This discrepancy between experiment and theory can lead to serious errors in engineering analysis and design unless a correction factor, or safety factor, is experimentally validated. For this study, blending times for the mixing of salt solutions in large storage tanks were the process under investigation. The study focused on the blending needed to mix salt solutions to ensure homogeneity within waste tanks, where homogeneity is required to control radioactivity levels during subsequent processing. Two of the requirements for this task were to determine the minimum number of submerged centrifugal pumps required to blend the salt mixtures in a full-scale tank in half a day or less, and to recommend reasonable blending times to achieve nearly homogeneous salt mixtures. A full-scale, low-flow pump with a total discharge flow rate of 500 to 800 gpm was recommended, with two opposing 2.27-inch diameter nozzles. To make this recommendation, both experimental and CFD modeling were performed. Lab researchers found that, although CFD provided good estimates of an average blending time, experimental blending times varied
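The gap between a CFD mean and the experimental scatter can be folded into an experimentally validated safety factor on the predicted blending time. A minimal sketch with invented numbers (not the study's data):

```python
cfd_mean_blend_time = 90.0                          # minutes, hypothetical CFD estimate
measured_blend_times = [78.0, 95.0, 122.0, 104.0]   # hypothetical experimental runs

# safety factor: worst observed blending time relative to the CFD mean
safety_factor = max(measured_blend_times) / cfd_mean_blend_time

# design value: CFD mean scaled up so all observed scatter is covered
design_blend_time = cfd_mean_blend_time * safety_factor
```

The design value then covers the scatter that the mean-value CFD prediction alone would miss.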
A computer program to reduce the time of diagnosis in complex systems
Arellano-Gomez, Juan; Romero-Rubio, Omar U.
2006-01-01
In Nuclear Power Plants (NPPs), the time that some systems are allowed to be down is frequently very limited. Thus, when one of these systems fails, diagnosis and repair must be performed quickly in order to return the system to an operative state. Unfortunately, in complex systems, a considerable amount of the total repair time can be spent on diagnosis. For this reason, it seems very useful to provide maintenance personnel with a systematic approach to system failure diagnosis, capable of minimizing the time required to effectively identify the causes of system malfunction. In this context, expert systems technology has been widely applied in several disciplines to successfully develop diagnostic systems. An important input to developing these expert systems is, of course, knowledge; this knowledge includes both formal knowledge and experience of what faults could occur, how they occur, what their effects are, what can be inferred from symptoms, etc. Due to their logical nature, the fault trees developed by expert analysts during risk studies can also be used as the knowledge source of diagnostic expert systems (DES); however, these fault trees must be expanded to include symptoms, because diagnosis is typically performed by inferring the causes of system malfunction from symptoms. This paper presents SANA (Symptom Analyzer), a new software package specially designed to develop diagnostic expert systems. The main feature of this software is that it exploits the knowledge stored in fault trees (in particular, expanded fault trees) to generate very efficient diagnostic strategies. These strategies guide diagnostic activities, seeking to minimize the time required to identify the components responsible for the system failure. In addition, the generated strategies 'emulate' the way experienced technicians proceed in order to diagnose the causes of system failure (i.e. by recognizing categories of
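One classical way such strategies minimize expected diagnosis time for independent single-fault candidates is to inspect components in decreasing order of failure probability per unit test time. A sketch under that assumption (the component data are invented, and SANA's actual strategy generation from expanded fault trees is more elaborate):

```python
# (component, prior failure probability, test time in minutes) - hypothetical values
candidates = [
    ("pump",   0.50, 15.0),
    ("valve",  0.30,  5.0),
    ("sensor", 0.20,  2.0),
]

# greedy rule: testing in decreasing probability-to-time ratio minimizes the
# expected time to find the faulty component in sequential inspection
order = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
names = [c[0] for c in order]
```

Note that the most likely culprit (the pump) is checked last here, because its test is so slow that two quick checks are worth doing first.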
Computing the complex: Dusty plasmas in the presence of magnetic fields and UV radiation
Land, V.
2007-12-01
About 90% of the visible universe is plasma. Interstellar clouds, stellar cores and atmospheres, the Solar wind, the Earth's ionosphere, polar lights, and lightning are all plasmas: ionized gases consisting of electrons, ions, and neutrals. Not only many industries, such as the microchip and solar cell industries, but also future fusion power stations rely heavily on the use of plasma. More and more home appliances include plasma technologies, such as compact fluorescent light sources and plasma screens. Dust particles, which can disrupt plasma processes, enter these plasmas through chemical reactions in the plasma or through interactions between the plasma and the walls. For instance, during microchip fabrication, dust particles can destroy the tiny, nanometre-sized structures on the surface of these chips. On the other hand, dust particles orbiting Young Stellar Objects coagulate and form the seeds of planets. In order to understand fundamental processes, such as planet formation, or to optimize industrial plasma processes, a thorough description of dusty plasma is necessary. Dust particles immersed in plasma collect ions and electrons from the plasma and charge up electrically. The presence of dust therefore changes the plasma, while at the same time many forces start acting on the dust. As a result, the dust and plasma become coupled, making dusty plasma a very complex medium to describe, in which many length and time scales play a role, from the Debye length to the length of the electrodes, and from the inverse plasma frequencies to the dust transport times. Using a self-consistent fluid model, we simulate these multi-scale dusty plasmas in radio-frequency discharges under micro-gravity. We show that moderately non-linear scattering of ions by the dust particles is the most important aspect in the calculation of the ion drag force. This force is also responsible for the formation of a dust-free 'void' in dusty plasma under micro-gravity, caused by ions moving from the centre of
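Among the length scales mentioned, the electron Debye length follows directly from the plasma density and electron temperature. A quick sketch with illustrative laboratory-discharge values (the numbers are typical, not taken from this thesis):

```python
import math

EPS0 = 8.854e-12    # vacuum permittivity, F/m
E_CHG = 1.602e-19   # elementary charge, C

def debye_length(n_e, t_e_ev):
    """Electron Debye length (m): sqrt(eps0 * kB*Te / (n_e * e^2)).

    n_e: electron density in m^-3; t_e_ev: electron temperature in eV
    (so kB*Te = t_e_ev * E_CHG joules).
    """
    return math.sqrt(EPS0 * t_e_ev * E_CHG / (n_e * E_CHG ** 2))

# typical RF discharge: n_e ~ 1e15 m^-3, T_e ~ 3 eV -> a few tenths of a mm,
# tiny compared with centimetre-scale electrodes, hence the multi-scale problem
lam = debye_length(1e15, 3.0)
```

The result, a few hundred micrometres against centimetre-sized electrodes, illustrates why the model must resolve scales spanning several orders of magnitude.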
P. McBride
It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...
M. Kasemann
Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on resolving open issues relevant for data taking. At the same time, operations for MC production, real-data reconstruction and re-reconstruction, and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps needed to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...
I. Fisk
2011-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...
M. Kasemann
CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenge (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier-0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier-0, processing a massive number of very large files with a high writing speed to tape. Other tests covered the links between the different tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier-1s: response time, data transfer rate and success rate for tape-to-buffer staging of files kept exclusively on tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. This success paved the way for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...
I. Fisk
2010-01-01
Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...
Faust, Oliver; Yu, Wenwei; Rajendra Acharya, U
2015-03-01
The concept of real-time is very important, as it concerns the realizability of computer-based health care systems. In this paper we review biomedical real-time systems with a meta-analysis of computational complexity (CC), delay (Δ) and speedup (Sp). During the review we found that, in the majority of papers, the term real-time is used merely to claim that a proposed system or algorithm is practical; such papers were not considered for detailed scrutiny. Our detailed analysis focused on papers which support their claim of achieving real-time with a discussion of CC or Sp. These papers were analyzed in terms of the processing system used, application area (AA), CC, Δ, Sp, implementation/algorithm (I/A) and competition. The results show that the ideas of parallel processing and algorithm delay were introduced only recently, and that journal papers focus more on algorithm (A) development than on implementation (I). Most authors compete on big-O notation (O) and processing time (PT). Based on these results, we take the position that the concept of real-time will continue to play an important role in biomedical systems design. We predict that parallel processing considerations, such as Sp and algorithm scaling, will become more important. Copyright © 2015 Elsevier Ltd. All rights reserved.
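Speedup Sp is simply the serial-to-parallel runtime ratio, and its achievable ceiling under partial parallelization is usually discussed via Amdahl's law. A minimal sketch (the 90%/10-processor figures are illustrative):

```python
def speedup(t_serial, t_parallel):
    """Sp: runtime of the serial version divided by the parallel version."""
    return t_serial / t_parallel

def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's-law upper bound on Sp when only a fraction of the work parallelizes."""
    f = parallel_fraction
    return 1.0 / ((1.0 - f) + f / n_processors)

# measured: 10 s serial, 2 s parallel -> Sp = 5
sp = speedup(10.0, 2.0)

# 90% parallelizable code on 10 processors caps out around 5.3x
cap = amdahl_speedup(0.9, 10)
```

The cap shows why algorithm scaling matters: adding processors beyond this point buys almost nothing while the serial fraction dominates.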
Gerber, Paul R.; Mark, Alan E.; van Gunsteren, Wilfred F.
1993-06-01
Derivatives of free energy differences have been calculated by molecular dynamics techniques. The systems under study were ternary complexes of trimethoprim (TMP) with the dihydrofolate reductases of E. coli and chicken liver, containing the cofactor NADPH. Derivatives are taken with respect to modifications of TMP, with emphasis on altering the 3-, 4- and 5-substituents of the phenyl ring. A linear approximation allows a whole set of modifications to be covered in a single simulation, as opposed to a full perturbation calculation, which requires a separate simulation for each modification. In the case considered here, the proposed technique requires a factor of 1000 less computing effort than a full free energy perturbation calculation. For the linear approximation to yield a significant result, one has to choose the perturbation evolution such that the initial trend mirrors the full calculation. The generation of new atoms requires careful treatment of the singular terms in the non-bonded interaction. The result can be represented by maps of the modified molecule, which indicate whether complex formation is favoured by moving partial charges or changing atomic polarizabilities. Comparison with experimental measurements of inhibition constants reveals fair agreement over the range of values covered. However, detailed comparison fails to show a significant correlation. Possible reasons for the most pronounced deviations are given.
Takeda, Tsuneo
1977-11-01
This report is prepared as a user's input manual for the computer code CODAC-No.5 and provides a general description of the code and instructions for its use. The code is a modified version of the CODAC-No.4 code. It is capable of calculating radioactive nuclide yields in any given complex decay and activation chain, independent of irradiation history. Eighteen kinds of tables and graphs can be produced as output; these are useful for selecting optimum irradiation and cooling conditions and for other purposes related to irradiation and cooling. For example, the ratio of a single nuclide yield to the total nuclide yield as a function of irradiation and cooling times can be obtained. These outputs involve several kinds of complex and intricate equations. The code uses almost the same input forms as the CODAC-No.4 code, except for the input of irradiation history data. The input method and formats used for this code are very simple for all kinds of nuclear data. A list of the FORTRAN statements, examples of input data and output results, and a list of input parameters and their definitions are given in this report. (auth.)
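Yields along a linear decay chain of the kind such codes handle are given, for constant conditions and pairwise-distinct decay constants, by the classical Bateman solution. A minimal sketch (this is the textbook formula, not CODAC's algorithm, which also handles activation and arbitrary irradiation histories):

```python
import math

def bateman_nth(n1_0, lambdas, t):
    """Atoms of the last nuclide in a linear decay chain at time t.

    n1_0: initial atoms of the chain head; lambdas: decay constants (1/s),
    one per chain member, assumed pairwise distinct (the formula is singular
    for repeated decay constants).
    """
    coeff = n1_0 * math.prod(lambdas[:-1])
    total = 0.0
    for i, lam_i in enumerate(lambdas):
        denom = 1.0
        for j, lam_j in enumerate(lambdas):
            if j != i:
                denom *= lam_j - lam_i
        total += math.exp(-lam_i * t) / denom
    return coeff * total
```

For a one-member "chain" this reduces to simple exponential decay, and for two members it reproduces the familiar parent-daughter expression.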
Phillips, Jordan J; Peralta, Juan E
2013-05-07
We present a method for calculating magnetic coupling parameters from a single spin-configuration via analytic derivatives of the electronic energy with respect to the local spin direction. This method does not introduce new approximations beyond those found in the Heisenberg-Dirac Hamiltonian and a standard Kohn-Sham Density Functional Theory calculation, and in the limit of an ideal Heisenberg system it reproduces the coupling as determined from spin-projected energy-differences. Our method employs a generalized perturbative approach to constrained density functional theory, where exact expressions for the energy to second order in the constraints are obtained by analytic derivatives from coupled-perturbed theory. When the relative angle between magnetization vectors of metal atoms enters as a constraint, this allows us to calculate all the magnetic exchange couplings of a system from derivatives with respect to local spin directions from the high-spin configuration. Because of the favorable computational scaling of our method with respect to the number of spin-centers, as compared to the broken-symmetry energy-differences approach, this opens the possibility for the blackbox exploration of magnetic properties in large polynuclear transition-metal complexes. In this work we outline the motivation, theory, and implementation of this method, and present results for several model systems and transition-metal complexes with a variety of density functional approximations and Hartree-Fock.
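The spin-projected energy-difference benchmark the authors compare against is often written in the Yamaguchi form, using the high-spin and broken-symmetry energies and their ⟨S²⟩ values. A minimal sketch (sign and denominator conventions vary between papers; the numbers below are invented):

```python
def yamaguchi_j(e_hs, e_bs, s2_hs, s2_bs):
    """Heisenberg coupling J from high-spin/broken-symmetry DFT energies.

    e_hs, e_bs: electronic energies of the high-spin and broken-symmetry
    determinants; s2_hs, s2_bs: their <S^2> expectation values.
    With this sign choice, J > 0 means the high-spin state lies lower
    (ferromagnetic coupling).
    """
    return (e_bs - e_hs) / (s2_hs - s2_bs)

# hypothetical two-S=1/2-center system: <S^2>_HS = 2, <S^2>_BS = 1
j = yamaguchi_j(-100.000, -99.999, 2.0, 1.0)
```

Note the scaling problem the paper addresses: this route needs one broken-symmetry calculation per coupling, whereas the derivative method extracts all couplings from the single high-spin configuration.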
2010-01-01
Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...
Contributions from I. Fisk
2012-01-01
Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences. Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...
M. Kasemann
Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...
Matthias Kasemann
Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier-1 and Tier-2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the role of a Computing Run Coordinator and regular computing shifts, which monitor the services and infrastructure and interface to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...
P. McBride
The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and a pass of the samples through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...
I. Fisk
2013-01-01
Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites. Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month. Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB. Figure 3: The volume of data moved between CMS sites in the last six months The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...
I. Fisk
2012-01-01
Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...
Allphin, Devin
Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative
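In its simplest form, a meta-model replaces expensive CFD evaluations with a cheap fit through a handful of DOE sample points. A toy least-squares sketch in one variable (the sample values are invented, and real surrogates use richer bases and multi-variable designs):

```python
def fit_linear_surrogate(xs, ys):
    """Least-squares fit y ~ a + b*x through DOE sample points; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# pretend these four points came from expensive CFD runs at chosen design values
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 7.9]   # roughly y = 2x with scatter

a, b = fit_linear_surrogate(xs, ys)
predicted_force = a + b * 2.5   # cheap surrogate evaluation, no new CFD run
```

Once fitted, every further evaluation of the surrogate is essentially free, which is exactly the resource reduction the meta-modeling approach targets.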
I. Fisk
2011-01-01
Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS components are now deployed at CERN, adding to the GlideInWMS factory in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...
Moritomo, Hisao; Arimitsu, Sayuri; Kubo, Nobuyuki; Masatomi, Takashi; Yukioka, Masao
2015-02-01
To classify triangular fibrocartilage complex (TFCC) foveal lesions on the basis of computed tomography (CT) arthrography using a radial plane view, and to correlate the CT arthrography results with surgical findings. We also tested the interobserver and intra-observer reliability of the radial plane view. A total of 33 patients with a suspected TFCC foveal tear who had undergone wrist CT arthrography and subsequent surgical exploration were enrolled. We classified the configurations of TFCC foveal lesions into 5 types on the basis of CT arthrography with the radial plane view, in which the image slices rotate clockwise centered on the ulnar styloid process. Sensitivity, specificity, and positive predictive values were calculated for each type of foveal lesion on CT arthrography for the detection of foveal tears. We determined interobserver and intra-observer agreement using kappa statistics. We also compared accuracies of the radial plane views with those of the coronal plane views. Among the tear types on CT arthrography, type 3, a roundish defect at the fovea, and type 4, a large defect at the overall ulnar insertion, had high specificity and positive predictive value for the detection of foveal tears. Specificity and positive predictive values were 90% and 89% for type 3 and 100% and 100% for type 4, respectively, whereas sensitivity was 35% for type 3 and 22% for type 4. Interobserver and intra-observer agreement was substantial and almost perfect, respectively. The radial plane view identified foveal lesions of the palmar and dorsal radioulnar ligaments separately, but accuracy results with the radial plane views were not statistically different from those with the coronal plane views. Computed tomography arthrography with a radial plane view exhibited enhanced specificity and positive predictive value when a type 3 or 4 lesion was identified in the detection of a TFCC foveal tear, compared with historical controls. Level of evidence: Diagnostic II. Copyright © 2015 American Society for
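The reported sensitivity, specificity and positive predictive values come straight from the 2x2 confusion counts of CT findings against the surgical reference standard. A minimal sketch with invented counts chosen only to land near the type-3 figures above:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and PPV from true/false positive/negative counts."""
    sensitivity = tp / (tp + fn)   # tears correctly flagged on CT
    specificity = tn / (tn + fp)   # intact foveae correctly cleared
    ppv = tp / (tp + fp)           # chance a positive CT finding is a real tear
    return sensitivity, specificity, ppv

# hypothetical counts: 8 true positives, 1 false positive,
# 15 false negatives, 9 true negatives
sens, spec, ppv = diagnostic_metrics(8, 1, 15, 9)
```

With these invented counts the pattern matches the study's finding for type 3 lesions: specificity and PPV are high (0.90 and about 0.89) while sensitivity stays low (about 0.35).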
Ren, Wenshan; Lukens, Wayne W.; Zi, Guofu; Maron, Laurent; Walter, Marc D.
2012-01-12
Bipyridyl thorium metallocenes [η5-1,2,4-(Me3C)3C5H2]2Th(bipy) (1) and [η5-1,3-(Me3C)2C5H3]2Th(bipy) (2) have been investigated by magnetic susceptibility and computational studies. The magnetic susceptibility data reveal that 1 and 2 are not diamagnetic; rather, they behave as temperature-independent paramagnets (TIPs). To rationalize this observation, density functional theory (DFT) and complete active space SCF (CASSCF) calculations have been undertaken, which indicate that Cp2Th(bipy) indeed has a Th(IV)(bipy2-) ground state (f0d0π*2, S = 0), but the open-shell singlet (f0d1π*1, S = 0) (almost degenerate with its triplet congener) lies only 9.2 kcal/mol higher in energy. Complexes 1 and 2 react cleanly with Ph2CS to give [η5-1,2,4-(Me3C)3C5H2]2Th[(bipy)(SCPh2)] (3) and [η5-1,3-(Me3C)2C5H3]2Th[(bipy)(SCPh2)] (4), respectively, in quantitative conversions. Since no intermediates were observed experimentally, this reaction was also studied computationally. Coordination of Ph2CS to 2 in its S = 0 ground state is not possible, but Ph2CS can coordinate to 2 in its triplet state (S = 1), upon which a single electron transfer (SET) from the (bipy2-) fragment to Ph2CS followed by C-C coupling takes place.
M. Kasemann
CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and Job Robot submission tools have been very instrumental in site commissioning, increasing the number of sites that are available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a fourfold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...
Laghari, Samreen; Niazi, Muaz A
2016-01-01
Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a Complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: Primarily CABC-based modeling approach such as using Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT. Secondly, the specific problem of managing the Carbon footprint can be solved using a multiagent system approach.
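The policy experiment described above can be illustrated with a toy agent-based sketch. This is not the authors' EABM/CABC implementation; the power draws, busy probability, and policy rule are all assumptions chosen only to show how a usage policy can be compared against a baseline in a multi-agent simulation.

```python
import random

# Toy agent-based sketch: each machine in a corporate network is an agent
# that is busy or idle each hour; a usage policy powers idle machines down.
BUSY_W, IDLE_W, OFF_W = 120.0, 60.0, 2.0  # assumed power draws in watts

def simulate(n_machines, hours, policy_on, seed=0):
    """Return total energy (kWh) consumed over the simulation."""
    rng = random.Random(seed)
    energy_wh = 0.0
    for _ in range(hours):
        for _ in range(n_machines):
            busy = rng.random() < 0.4  # assume each agent is busy 40% of the time
            if busy:
                energy_wh += BUSY_W
            elif policy_on:
                energy_wh += OFF_W   # policy: idle machines are switched off
            else:
                energy_wh += IDLE_W  # no policy: idle machines stay on
    return energy_wh / 1000.0

baseline = simulate(100, 24, policy_on=False)
with_policy = simulate(100, 24, policy_on=True)
print(f"baseline {baseline:.1f} kWh, with policy {with_policy:.1f} kWh")
```

Running both scenarios with the same seed keeps the busy/idle pattern identical, so any energy difference is attributable to the policy alone.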
Berezin, F.N.; Kisurin, V.A.; Nemets, O.F.; Ofengenden, R.G.; Pugach, V.M.; Pavlenko, Yu.N.; Patlan', Yu.V.; Savrasov, S.S.
1981-01-01
An experimental technique for the investigation of three-particle nuclear reactions in kinematically complete experiments is described. The technique provides storage of one-dimensional and two-dimensional energy spectra from several detectors. A block diagram of the measuring system using this technique is presented. The measuring system consists of analog equipment for fast-slow coincidences and of a two-processor complex based on the M-400 computer with a common bus. The use of a two-processor complex, in which each computer has direct access to the memory of the other, makes it possible to separate the functions of data collection and operational data presentation and to perform the necessary physics calculations. The software of the measuring complex, which includes programs written in ASSEMBLER for the first computer and functional programs written in BASIC for the second computer, is considered. The software of the first computer includes the DISPETCHER dialog control program, a driver package for control of external devices, an applied program package, and system modules. The technique described was tested in an experiment on the d+10B→α+α+α three-particle reaction at a deuteron energy of 13.6 MeV. The two-dimensional energy spectrum of the reaction obtained with this technique is presented.
Stepanov, V.I.; Koryagin, A.V.; Ruzankov, V.N.
1988-01-01
A computer-aided design system for a complex of problems concerning the calculation and analysis of engineering and economic indices of NPP power units is described. The system provides means for automated preparation and debugging of the database software complex that realizes the plotted algorithm in the power-unit control system. In addition, the system provides facilities for automated preparation and registration of technical documentation.
Rocchigiani, Luca; Fernandez-Cestau, Julio; Budzelaar, Peter H M; Bochmann, Manfred
2018-06-21
The factors affecting the rates of reductive C-C cross-coupling reactions in gold(III) aryls were studied by using complexes that allow easy access to a series of electronically modified aryl ligands, as well as to gold methyl and vinyl complexes, by using the pincer compounds [(C^N^C)AuR] (R = C6F5, CH=CMe2, Me and p-C6H4X, where X = OMe, F, H, tBu, Cl, CF3, or NO2) as starting materials (C^N^C = 2,6-(4'-tBuC6H3)2pyridine dianion). Protodeauration followed by addition of one equivalent of SMe2 leads to the quantitative generation of the thioether complexes [(C^N-CH)AuR(SMe2)]+. Upon addition of a second SMe2, pyridine is displaced, which triggers the reductive aryl-R elimination. The rates for these cross-couplings increase in the sequence k(vinyl) > k(aryl) ≫ k(C6F5) > k(Me). Vinyl-aryl coupling is particularly fast, 1.15×10^-3 L mol^-1 s^-1 at 221 K, whereas both C6F5 and Me couplings encounter higher barriers for the C-C bond-forming step. The use of P(p-tol)3 in place of SMe2 greatly accelerates the C-C couplings. Computational modelling shows that in the C^N-bonded compounds displacement of N by a donor L is required before the aryl ligands can adopt a conformation suitable for C-C bond formation, so that elimination takes place from a four-coordinate intermediate. The C-C bond formation is the rate-limiting step. In the non-chelating case, reductive C(sp2)-C(sp2) elimination from three-coordinate ions [(Ar1)(Ar2)AuL]+ is almost barrier-free, particularly if L = phosphine. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Orenha, Renato Pereira; Santiago, Régis Tadeu; Haiduke, Roberto Luiz Andrade; Galembeck, Sérgio Emanuel
2017-05-05
Two treatments of relativistic effects, namely effective core potentials (ECP) and all-electron scalar relativistic effects (DKH2), are used to obtain geometries and chemical reaction energies for a series of ruthenium complexes in B3LYP/def2-TZVP calculations. Specifically, the reaction energies of reduction (A-F), isomerization (G-I), and the negative trans influence of Cl- relative to NH3 (J-L) are considered. The ECP and DKH2 approaches provided geometric parameters close to experimental data and the same ordering for the energy changes of reactions A-L. From geometries optimized with ECP, the electronic energies were also determined by means of the same ECP and basis set combined with the computational methods MP2, M06, BP86 and its derivatives, as well as B2PLYP, LC-wPBE, and CCSD(T) (reference method). For reactions A-I, B2PLYP provides the best agreement with CCSD(T) results. Additionally, B3LYP gave the smallest error for the energies of reactions J-L. © 2017 Wiley Periodicals, Inc.
Martin, R.; Orgogozo, L.; Noiriel, C. N.; Guibert, R.; Golfier, F.; Debenest, G.; Quintard, M.
2013-05-01
In the context of biofilm growth in porous media, we developed high-performance computing tools to study the impact of biofilms on fluid transport through the pores of a solid matrix. Biofilms are consortia of micro-organisms embedded in extracellular polymeric substances that generally develop at fluid-solid interfaces, such as pore interfaces in a water-saturated porous medium. Biofilms in porous media find several applications, for instance in bio-remediation methods that allow the dissolution of organic pollutants. Many theoretical studies have addressed the resulting effective properties of these modified media ([1], [2], [3]), but the bio-colonized porous media under consideration have mainly been described by simplified theoretical media (stratified media, cubic networks of spheres, ...). Recent experimental advances, however, have provided tomography images of bio-colonized porous media which allow us to observe realistic biofilm micro-structures inside the porous media [4]. To solve the closure systems of equations related to upscaling procedures in realistic porous media, we solve the fluid velocity field through pores on complex geometries described by a huge number of cells (up to billions). Calculations are made on a realistic 3D sample geometry obtained by X-ray micro-tomography. Cell volumes come from a percolation experiment performed to estimate the impact of precipitation processes on fluid transport properties in porous media [5]. Average permeabilities of the sample are obtained from the velocities by using MPI-based high-performance computing on up to 1000 processors. The steady-state Stokes equations are solved using a finite-volume approach. Relaxation pre-conditioning is introduced to accelerate the code further. Good weak and strong scaling are reached, with results obtained in hours instead of weeks. Acceleration factors of 20 up to 40 can be reached. Tens of geometries can now be
Takahashi, Koji; Sasaki, Tomoaki; Nabaa, Basim; Aburano, Tamio (Department of Radiology, Asahikawa Medical University and Hospital, Asahikawa (Japan)), Email: taka1019@asahikawa-med.ac.jp; Beek, Edwin Jr. von (Clinical Research Imaging Centre, Queen's Medical Research Institute, University of Edinburgh, Edinburgh (United Kingdom)); Stanford, William (Department of Radiology, University of Iowa College of Medicine, Iowa City (United States))
2012-03-15
Background. In the primary infection of pulmonary histoplasmosis, pulmonary lesions are commonly solitary and associated with hilar and/or mediastinal nodal disease, which spontaneously resolves, resulting in calcifications in individuals with normal cellular immunity. Purpose. To assess the lymphatic drainage to the mediastinum from each pulmonary segment and lobe using computed tomographic (CT) observations of a calcified primary complex of pulmonary histoplasmosis, and to predict which patients with N2 disease would benefit from surgery. Material and Methods. We collected 585 CT studies of patients with a primary histoplasmosis complex consisting of solitary calcified pulmonary lesions and calcified hilar and/or mediastinal nodal disease. Using the N-stage criteria of non-small cell lung cancer, we assessed the distribution of the involved hilar and mediastinal nodes depending on the pulmonary segment of the lesion, with a focus on skip involvement. We also assessed the correlation between the incidence of N1 and skip N2 involvement and the mean number of involved mediastinal nodal stations in the non-skip N2 and skip N2 groups. Results. Skip involvement was common in the apical segment (9/45, 20.0%), posterior segment (7/31, 22.6%), and medial basal segment (13/20, 65.0%) in the right lung, and in the apicoposterior segment (7/55, 12.7%), lateral basal segment (6/26, 23.1%), and posterobasal segment (16/47, 34.0%) in the left lung. The incidence of skip involvement in each segment showed a significant inverse correlation with that of N1 involvement (r = -0.51, P < 0.05) in both lungs. The mean numbers of involved mediastinal nodal stations in the non-skip N2 and skip N2 groups in all segments of both lungs were 1.4 (434/301) and 1.2 (93/77), and the former was significantly greater than the latter (P < 0.01). Conclusion. Our data showed a predictable pattern of segmental and lobar lymphatic drainage to the mediastinum and suggested that skip involvement could represent
Woods, Damien; Naughton, Thomas J.
2008-01-01
We consider optical computers that encode data using images and compute by transforming such images. We give an overview of a number of such optical computing architectures, including descriptions of the type of hardware commonly used in optical computing, as well as some of the computational efficiencies of optical devices. We go on to discuss optical computing from the point of view of computational complexity theory, with the aim of putting some old, and some very recent, re...
Megaritou, A; Bartzis, J G
1987-09-01
In the present report, the microcomputer version of the code is described. More emphasis is given to the new features of the code (i.e. the input data structure). A set of instructions for running it on an IBM-AT2 computer with Microsoft FORTRAN V.4.0 is also included, together with a sample problem referring to the Greek Research Reactor.
Kevrekidis, Ioannis G. [Princeton Univ., NJ (United States)
2017-02-01
The work explored the linking of modern developing machine learning techniques (manifold learning and in particular diffusion maps) with traditional PDE modeling/discretization/scientific computation techniques via the equation-free methodology developed by the PI. The result (in addition to several PhD degrees, two of them by CSGF Fellows) was a sequence of strong developments - in part on the algorithmic side, linking data mining with scientific computing, and in part on applications, ranging from PDE discretizations to molecular dynamics and complex network dynamics.
Gordon, S.; McBride, B. J.
1976-01-01
A detailed description of the equations and computer program for computations involving chemical equilibria in complex systems is given. A free-energy minimization technique is used. The program permits calculations such as (1) chemical equilibrium for assigned thermodynamic states (T,P), (H,P), (S,P), (T,V), (U,V), or (S,V), (2) theoretical rocket performance for both equilibrium and frozen compositions during expansion, (3) incident and reflected shock properties, and (4) Chapman-Jouguet detonation properties. The program considers condensed species as well as gaseous species.
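The core technique, free-energy minimization for chemical equilibrium at an assigned (T, P), can be sketched for a single ideal-gas reaction. This is an illustrative toy in the spirit of the program described above, not the program itself; the standard Gibbs energies of formation for N2O4 and NO2 are textbook values treated here as assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Equilibrium of N2O4 <-> 2 NO2 at T = 298.15 K, P = 1 bar, by minimizing
# the total Gibbs energy subject to nitrogen conservation.
R, T = 8.314, 298.15
g0 = np.array([97.9e3, 51.3e3])  # assumed Gf for [N2O4, NO2], J/mol

def gibbs(n):
    """Total Gibbs energy of an ideal-gas mixture at 1 bar."""
    ntot = n.sum()
    return float(np.sum(n * (g0 + R * T * np.log(n / ntot))))

# Element balance: 2 N per N2O4, 1 N per NO2, starting from 1 mol N2O4.
cons = [{"type": "eq", "fun": lambda n: 2 * n[0] + n[1] - 2.0}]
res = minimize(gibbs, x0=np.array([0.5, 1.0]), method="SLSQP",
               bounds=[(1e-9, None)] * 2, constraints=cons)
extent = res.x[1] / 2.0  # moles of N2O4 dissociated at equilibrium
print(f"equilibrium extent of dissociation: {extent:.3f}")
```

The minimizer reproduces the result of the equilibrium-constant route (K = exp(-ΔG°/RT)), which for these input data gives an extent of dissociation near 0.19.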
Kaoru Sakasai
2017-08-01
Neutron devices such as neutron detectors, optical devices including supermirror devices and 3He neutron spin filters, and choppers have been successfully developed and installed at the Materials and Life Science Experimental Facility (MLF) of the Japan Proton Accelerator Research Complex (J-PARC), Tokai, Japan. Four software components of the MLF computational environment (instrument control, data acquisition, data analysis, and a database) have been developed and deployed at MLF. MLF also provides a wide variety of sample environment options, including high and low temperatures, high magnetic fields, and high pressures. This paper describes the current status of the neutron devices and the computational and sample environments at MLF.
Fukuda, Hironobu; Imataka, George; Drago, Fabrizio; Maeda, Kosaku; Yoshihara, Shigemi
2017-01-01
We report the case of a 10-year-old female patient who survived a ring-sling complex without surgery. The patient had congenital wheezing from the neonatal period and was treated under a tentative diagnosis of infantile asthma. The patient suffered from allergy and was hospitalized several times due to severe wheezing, and when she was 22 months old she was diagnosed with a ring-sling complex. Three-dimensional computed tomography (3D-CT) was used to evaluate a tracheal segment with a 4 mm internal diameter. ...
Raikov Alexander
2018-01-01
The authors analyze the problems of using artificial intelligence to manage complex techno-organizational systems in the aerospace industry on the basis of the convergence of telematic, computing, and information services. This means giving space objects a higher level of management based on the principle of self-organizing integration. Using elements of artificial intelligence makes it possible to obtain more nearly optimal and limiting parameter values for ordinary and critical situations in real time. Thus, it brings us closer to the limiting parameter values of the managed objects through improved management and observation capabilities.
Rouf, Syed Awais; Jakobsen, Vibe Boel; Mareš, Jiří
2017-01-01
Recent advances in computational methodology have allowed first-principles calculations of the nuclear shielding tensor for a series of paramagnetic nickel(II) acetylacetonate complexes, [Ni(acac)2L2] with L = H2O, D2O, NH3, ND3, and PMe2Ph, and have provided detailed insight into the origin of the par...
Dalton, G.R.; Tulenko, J.S.; Zhou, X.
1990-01-01
The University of Florida is part of a multi-university research effort, sponsored by the US Department of Energy, that is under way to develop and deploy an advanced semi-autonomous robotic system for use in nuclear power stations. This paper reports on the development of the computer tools necessary to gain convenient graphic access to the intelligence implicit in a large, complex database such as that of a nuclear reactor plant. This program is integrated as a man/machine interface within the larger context of the total computerized robotic planning and control system. The portion of the project described here addresses the connection between the three-dimensional displays on an interactive graphic workstation and a database computer running a large database server program. Programming the two computers to work together to accept graphic queries and return answers on the graphic workstation is a key part of the interactive capability developed.
Rajasekhar, Bathula; Bodavarapu, Navya; Sridevi, M.; Thamizhselvi, G.; RizhaNazar, K.; Padmanaban, R.; Swu, Toka
2018-03-01
The present study reports the synthesis and the evaluation of the nonlinear optical properties and G-quadruplex DNA stabilization of five novel copper(II) mixed-ligand complexes. They were synthesized from a copper(II) salt, 2,5- and 2,3-pyridinedicarboxylic acid, diethylenetriamine, and an amide-based ligand (AL). The crystal structures of these complexes were determined through X-ray diffraction and supported by ESI-MS, NMR, UV-Vis, and FT-IR spectroscopic methods. Their nonlinear optical properties were studied using the Gaussian09 computer program. For structural optimization and the nonlinear optical properties, the density functional theory (DFT) based B3LYP method was used with the LANL2DZ basis set for the metal ion and 6-31G* for the C, H, N, O, and Cl atoms. The present work reveals that the pre-polarized complex 2 shows a higher β value (29.59×10^-30 e.s.u.) than the neutral complex 1 (β = 0.276×10^-30 e.s.u.), which may be due to the greater advantage of polarizability. Complex 2 is expected to be a potential material for optoelectronic and photonic technologies. Docking studies using AutoDock Vina revealed that complex 2 has higher binding energies for both G-quadruplex DNA (-8.7 kcal/mol) and duplex DNA (-10.1 kcal/mol). It was also observed that structure plays an important role in binding efficiency.
Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A.
2017-01-01
The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative to micro-metrology tools based on optical and tactile techniques. However, the establishment of CT systems' traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the estimation of the micro-CT system's Maximum Permissible Error (MPE), determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such a CMS would still hold all the typical limitations of optical and tactile...
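One standard way to turn a Maximum Permissible Error into a measurement uncertainty can be sketched as follows. This is a hedged GUM-style illustration, not the paper's actual procedure: the MPE is treated as the half-width of a rectangular distribution and combined with an assumed repeatability figure; the numeric values are hypothetical.

```python
import math

# GUM-style sketch: convert an MPE statement into an expanded uncertainty.
def expanded_uncertainty(mpe_um, repeatability_um, k=2.0):
    u_mpe = mpe_um / math.sqrt(3.0)   # rectangular distribution over +/- MPE
    u_c = math.sqrt(u_mpe**2 + repeatability_um**2)  # combined standard unc.
    return k * u_c                    # expanded uncertainty, coverage k = 2

# Hypothetical numbers: MPE of 9 um, observed repeatability of 2 um.
U = expanded_uncertainty(mpe_um=9.0, repeatability_um=2.0)
print(f"U = {U:.1f} um (k=2)")
```

The design choice here is that the MPE bounds the systematic CT error without requiring a prior calibration of the measured part itself, which mirrors the advantage the abstract claims for the MPE-based route.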
Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.
1982-01-01
Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations which were then solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and data base structure for three-dimensional computer codes which will eliminate or improve on page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data in each step. As a result, in-core grid points were increased in number by 50% to 150,000, with a 10% execution time increase. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.
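The explicit finite-difference approach mentioned above can be illustrated on a deliberately tiny problem. This is a toy 1-D diffusion solver (FTCS scheme), a stand-in for the far more elaborate Navier-Stokes codes of the abstract; the grid size, diffusivity, and time-step safety factor are assumptions.

```python
import numpy as np

# Explicit FTCS scheme for u_t = alpha * u_xx on [0, 1] with fixed ends.
nx, alpha = 101, 1.0
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx / alpha        # r = alpha*dt/dx^2 = 0.4 <= 0.5: stable
u = np.zeros(nx)
u[nx // 2] = 1.0                  # initial spike, boundary values stay zero

for _ in range(2000):
    # update interior points from the current values of their neighbours
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(f"peak after diffusion: {u.max():.4f}")
```

The stability constraint r <= 1/2 is the essential property of explicit schemes that the vector processors of the abstract exploit: every interior point is updated independently, so the loop body vectorizes naturally.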
Farkas, Árpád; Balásházy, Imre
2015-04-01
A more exact determination of dose conversion factors associated with radon progeny inhalation has become possible due to the advancements in epidemiological health risk estimates in recent years. The enhancement of computational power and the development of numerical techniques allow computing dose conversion factors with increasing reliability. The objective of this study was to develop an integrated model and software, based on a self-developed airway deposition code, the authors' own bronchial dosimetry model, and the computational methods accepted by the International Commission on Radiological Protection (ICRP), to calculate dose conversion coefficients for different exposure conditions. The model was tested by application to exposure and breathing conditions characteristic of mines and homes. The dose conversion factors were 8 and 16 mSv WLM^-1 for homes and mines when applying a stochastic deposition model combined with the ICRP dosimetry model (named the PM-A model), and 9 and 17 mSv WLM^-1 when applying the same deposition model combined with the authors' bronchial dosimetry model and the ICRP bronchiolar and alveolar-interstitial dosimetry model (called the PM-B model). User-friendly software for the computation of dose conversion factors has also been developed. The software allows one to compute conversion factors for a large range of exposure and breathing parameters and to perform sensitivity analyses. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.
1995-01-01
Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Building on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling, and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies the design requirements defined in DREAMS and incorporates enabling computational technologies.
Mahjani, Behrang; Toor, Salman; Nettelblad, Carl; Holmgren, Sverker
2017-01-01
In quantitative trait locus (QTL) mapping, the significance of putative QTL is often determined using permutation testing. The computational needs to calculate the significance level are immense: 10^4 up to 10^8 or even more permutations can be needed. We have previously introduced the PruneDIRECT algorithm for multiple-QTL scans with epistatic interactions. This algorithm has specific strengths for permutation testing. Here, we present a flexible, parallel computing framework for identifying multiple interacting QTL using the PruneDIRECT algorithm, which uses the map-reduce model as implemented in Hadoop. The framework is implemented in R, a widely used software tool among geneticists. This enables users to rearrange algorithmic steps to adapt genetic models, search algorithms, and parallelization steps to their needs in a flexible way. Our work underlines the maturity of accessing distributed parallel computing for computationally demanding bioinformatics applications through building workflows within existing scientific environments. We investigate the PruneDIRECT algorithm, comparing its performance to exhaustive search and the DIRECT algorithm using our framework on a public cloud resource. We find that PruneDIRECT is vastly superior for permutation testing, and we perform 2×10^5 permutations for a 2D QTL problem in 15 hours, using 100 cloud processes. We show that our framework scales out almost linearly for a 3D QTL search.
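The permutation-testing idea at the heart of the abstract can be sketched on a toy single-QTL scan. This is illustrative only (the PruneDIRECT/Hadoop framework searches multiple interacting QTL): phenotypes are shuffled to destroy any genotype-phenotype association, the maximum test statistic across all markers is recorded per permutation, and the 95th percentile of those maxima gives a genome-wide significance threshold. All data here are simulated.

```python
import numpy as np

# Simulated data: 200 individuals, 50 markers, true QTL at marker 10.
rng = np.random.default_rng(1)
n, m = 200, 50
geno = rng.integers(0, 3, size=(n, m)).astype(float)   # genotypes 0/1/2
pheno = geno[:, 10] + rng.normal(0, 1, n)              # additive effect + noise

def max_abs_corr(y):
    """Genome-wide maximum |correlation| between phenotype and markers."""
    g = (geno - geno.mean(0)) / geno.std(0)
    yz = (y - y.mean()) / y.std()
    return np.max(np.abs(g.T @ yz) / n)

observed = max_abs_corr(pheno)
# Null distribution of the max statistic: shuffle phenotypes 200 times.
null = np.array([max_abs_corr(rng.permutation(pheno)) for _ in range(200)])
threshold = np.quantile(null, 0.95)   # genome-wide 5% significance level
print(f"observed {observed:.2f}, threshold {threshold:.2f}")
```

Taking the maximum over all markers inside each permutation is what makes the threshold genome-wide: it automatically accounts for the multiple testing across markers.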
Bellard, Breshanica
2018-01-01
Professionals responsible for the delivery of education and training using technology systems and platforms can facilitate complex learning through application of relevant strategies, principles and theories that support how learners learn and that support how curriculum should be designed in a technology based learning environment. Technological…
Klopfer, Eric; Yoon, Susan; Perry, Judy
2005-09-01
This paper reports on teachers' perceptions of the educational affordances of a handheld application called Participatory Simulations. It presents evidence from five cases representing each of the populations who work with these computational tools. Evidence across multiple data sources yield similar results to previous research evaluations of handheld activities with respect to enhancing motivation, engagement and self-directed learning. Three additional themes are discussed that provide insight into understanding curricular applicability of Participatory Simulations that suggest a new take on ubiquitous and accessible mobile computing. These themes generally point to the multiple layers of social and cognitive flexibility intrinsic to their design: ease of adaptation to subject-matter content knowledge and curricular integration; facility in attending to teacher-individualized goals; and encouraging the adoption of learner-centered strategies.
O' Kula, K. R. [Savannah River Site (SRS), Aiken, SC (United States); East, J. M. [Savannah River Site (SRS), Aiken, SC (United States); Weber, A. H. [Savannah River Site (SRS), Aiken, SC (United States); Savino, A. V. [Savannah River Site (SRS), Aiken, SC (United States); Mazzola, C. A. [Savannah River Site (SRS), Aiken, SC (United States)
2003-01-01
The evaluation of atmospheric dispersion/radiological dose analysis codes included fifteen models identified in authorization basis safety analyses at DOE facilities, or from regulatory and research agencies where past or current work warranted inclusion of a computer model. All computer codes examined were reviewed using general and specific evaluation criteria developed by the Working Group. The criteria were based on DOE Orders and other regulatory standards and guidance for performing bounding and conservative dose calculations. Three categories of criteria were included: (1) Software Quality/User Interface; (2) Technical Model Adequacy; and (3) Application/Source Term Environment. A consensus-based, limited quantitative ranking process was used to establish an order of model preference, both as an overall conclusion and under specific conditions.
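A consensus-based weighted ranking of this kind can be sketched as follows. The category weights, model names, and scores below are purely illustrative assumptions; the abstract does not give the Working Group's actual values.

```python
# Hypothetical weights for the three criteria categories (must sum to 1.0)
weights = {"software_quality": 0.3, "technical_adequacy": 0.5, "application_fit": 0.2}

# Hypothetical consensus scores (1-5) for two candidate codes
codes = {
    "Model A": {"software_quality": 4, "technical_adequacy": 5, "application_fit": 3},
    "Model B": {"software_quality": 5, "technical_adequacy": 3, "application_fit": 4},
}

def weighted_score(scores, weights):
    # Overall preference score: weighted sum across criteria categories
    return sum(weights[c] * scores[c] for c in weights)

# Order of model preference, best first
ranking = sorted(codes, key=lambda m: weighted_score(codes[m], weights), reverse=True)
```

Condition-specific rankings would simply re-run the sort with a different weight set.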
Synthesis, crystal structure and computational chemistry research of a Zinc(II) complex: [Zn(pt)(Biim)2]
Teng Fei
2012-01-01
The title metal-organic coordination complex [Zn(pt)(Biim)2] (1) (pt = phthalate, benzene-1,2-dicarboxylate; Biim = 2,2'-biimidazole) has been obtained by hydrothermal synthesis and characterized by single-crystal X-ray diffraction. The complex crystallizes in the monoclinic space group P21/n with a = 8.5466(15) Å, b = 11.760(2) Å, c = 20.829(4) Å, β = 95.56(2)°, V = 2083.5(6) Å³, Mr = 497.78, Dc = 1.587 g/cm³, μ(MoKα) = 1.226 mm⁻¹, F(000) = 1016, Z = 4; the final R = 0.0564 and wR = 0.1851 for 3656 observed reflections (I > 2σ(I)). Elemental analysis, IR, TG and theoretical calculations were also investigated.
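The published cell parameters are internally consistent, which can be checked directly: for a monoclinic cell V = a·b·c·sin(β), and the calculated density follows from Dc = Z·Mr/(N_A·V).

```python
import math

# Cell parameters reported in the abstract (monoclinic, space group P21/n)
a, b, c = 8.5466, 11.760, 20.829   # axis lengths in Å
beta = math.radians(95.56)         # monoclinic angle
Mr, Z = 497.78, 4                  # formula weight (g/mol), formula units per cell
N_A = 6.02214e23                   # Avogadro's number

# Monoclinic cell volume: V = a*b*c*sin(beta), in Å^3
V = a * b * c * math.sin(beta)

# Calculated density Dc = Z*Mr / (N_A * V), converting Å^3 to cm^3 (1 Å^3 = 1e-24 cm^3)
Dc = Z * Mr / (N_A * V * 1e-24)

print(round(V, 1), round(Dc, 3))   # → 2083.6 1.587
```

Both values reproduce the reported V = 2083.5(6) Å³ and Dc = 1.587 g/cm³ within the quoted uncertainty.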
Li, Xiang; Ko, Yeon-Jae; Wang, Haopeng; Bowen, Kit H.; Guevara-García, Alfredo; Martínez, Ana
2011-02-01
The copper-nucleoside anions, Cu-(cytidine) and Cu-(uridine), have been generated in the gas phase and studied by both experimental (anion photoelectron spectroscopy) and theoretical (density functional calculations) methods. The photoelectron spectra of both systems are dominated by single, intense, and relatively narrow peaks. These peaks are centered at 2.63 and 2.71 eV for Cu-(cytidine) and Cu-(uridine), respectively. According to our calculations, the Cu-(cytidine) and Cu-(uridine) species with these peak center [vertical detachment energy (VDE)] values correspond to structures in which copper atomic anions are bound to the sugar portions of their corresponding nucleosides largely through electrostatic interactions; the observed species are anion-molecule complexes. The combination of experiment and theory also reveals the presence of a slightly higher-energy anion-molecule complex isomer in the case of Cu-(cytidine). Furthermore, our calculations found that chemically bonded isomers of these species are much more stable than their anion-molecule complex counterparts, but since their calculated VDE values are larger than the photon energy used in these experiments, they were not observed.
Hagras, Muhammad Ahmed
Electron transfer occurs in many biological systems which are imperative to sustain life; oxidative phosphorylation in prokaryotes and eukaryotes, and photophosphorylation in photosynthetic and plant cells, are well-balanced and complementary processes. Investigating electron transfer in those natural systems provides detailed knowledge of the atomistic events that lead eventually to production of ATP, or to harvesting light energy. The ubiquinol:cytochrome c oxidoreductase complex (also known as the bc1 complex, or respiratory complex III) is a middle player in the electron-transport, proton-pumping orchestra, located in the inner mitochondrial membrane in eukaryotes or the plasma membrane in prokaryotes, which converts the free energy of redox reactions to an electrochemical proton gradient across the membrane, following the fundamental chemiosmotic principle discovered by Peter Mitchell [1]. In humans, a malfunctioning bc1 complex plays a major role in many neurodegenerative diseases, stress-induced aging, and cancer development, because it produces most of the reactive oxygen species, which are also involved in cellular signaling [2]. The mitochondrial bc1 complex has an intertwined dimeric structure comprised of 11 subunits in each monomer, but only three of them have catalytic function, and those are the only domains found in the bacterial bc1 complex. The core subunits include: the Rieske domain, which incorporates the iron-sulfur cluster [2Fe-2S]; the trans-membrane cytochrome b domain, incorporating a low-potential heme group (heme bL) and a high-potential heme group (heme bH); and the cytochrome c1 domain, containing the heme c1 group and two separate binding sites, the Qo (or QP) site where the hydrophobic electron carrier ubihydroquinol QH2 is oxidized, and the Qi (or QN) site where the ubiquinone molecule Q is reduced [3]. Electrons and protons in the bc1 complex flow according to the proton-motive Q-cycle proposed by Mitchell, which includes a unique electron flow bifurcation at the Qo site. At this site, one
Kaluzhniy, E.; Muhamedgaliev, A.; Bimagambetov, T; Usenov, F.
1996-01-01
The Laboratory-Research Base of the Scientific-Research Institute of Radio Electronics includes a complex of radio-electronic facilities with either military or civil assignments. Analysis of the technical features of the complex and of its potential showed that, given a sufficient level of modernization, the complex could be adapted to solving national economic and applied scientific problems. In the present report the modernization of the telemetric station and of the quantum-optical system 'Sazhen' is examined. Modernization of the telemetric station (originally designed to receive and register telemetric information in the meter and decimeter wave ranges) consists of extending the high-frequency tract of the station up to 8192 MHz (the frequency of the space system 'Gosuro 01') and developing a system for decoding and registering remote-sensing data. Modernization of the quantum-optical system (originally designed for highly precise measurement of spacecraft orbit elements at altitudes up to 40,000 km) consists of adding coordinate-collection functions and the functions of an astronomic transit instrument with photoelectric registration and automatic information searching. Realization of the modernization will make it possible to create the technical instruments of a system for operative space monitoring of Kazakhstan's territory and to obtain an instrument materializing the natural standard of universal time
Pinak, Miroslav [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2000-02-01
An analysis of the distribution of water around the DNA surface, focusing on the role of the distribution of water molecules in the proper recognition of the damaged site by the repair enzyme T4 Endonuclease V, was performed. The native DNA dodecamer, the dodecamer with the thymine dimer (TD), and the complex of DNA with part of the repair enzyme T4 Endonuclease V were examined throughout 500 ps of molecular dynamics simulation. During simulation the number of water molecules close to the DNA atoms and the residence time were calculated. There is an increase in the number of water molecules lying in the close vicinity of TD compared with those lying close to two native thymines (TT). The densely populated area of water molecules around TD is one of the factors detected by the enzyme during the scanning process. The residence time was found to be higher for the molecule of the complex, and six water molecules were found occupying stable positions between the TD and the catalytic center close to atoms P, C3' and N3. These molecules form a water-mediated hydrogen-bond network that contributes to the stability of the complex required for the onset of the repair process. (author)
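The two quantities computed in the simulation, hydration counts near DNA atoms and residence time near a site, can be sketched as follows. This is a minimal illustration on plain coordinate tuples; the 3.5 Å cutoff is a typical first-hydration-shell value assumed here, not the paper's stated parameter.

```python
import math

def waters_within(dna_atoms, water_oxygens, cutoff=3.5):
    """Count water molecules whose oxygen lies within `cutoff` Å of any
    DNA atom. Coordinates are (x, y, z) tuples in Å."""
    def close(w):
        return any(math.dist(w, a) <= cutoff for a in dna_atoms)
    return sum(1 for w in water_oxygens if close(w))

def residence_time(frames, site, cutoff=3.5, dt_ps=1.0):
    """Longest continuous interval (ps) that one water molecule stays
    within `cutoff` of a site, given its position in each trajectory frame
    sampled every `dt_ps` picoseconds."""
    best = run = 0
    for pos in frames:
        run = run + 1 if math.dist(pos, site) <= cutoff else 0
        best = max(best, run)
    return best * dt_ps
```

Applied per residue, the first function would reproduce the TD-versus-TT occupancy comparison; the second, the stability of the six bridging waters.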
Nazmutdinov, Renat R.; Zinkicheva, Tamara T.; Vassiliev, Sergey Yu.; Glukhov, Dmitrii V.; Tsirlina, Galina A.; Probst, Michael
2013-01-01
Highlights: ► We investigate Li, Na and K cryolite melts by Raman spectroscopy and DFT. ► A slight red shift of the main Raman peaks is observed in the row Li+, Na+, K+. ► A decrease of the peak half-widths is observed in the same row. ► Fluoroaluminates and their complexation kinetics play an important role. - Abstract: Lithium, sodium and potassium cryolite melts are probed by Raman spectroscopy over a wide range of melt compositions. The experimental data demonstrate a slight red shift of the main peaks and a decrease of their half-widths in the row Li+, Na+, K+. Quantum chemical modelling of the systems is performed at the density functional theory level. The ionic environment is found to play a crucial role in the energetics of the fluoroaluminates. Potential energy surfaces describing the formation/dissociation of certain complex species, as well as model Raman spectra, are constructed and compared with those obtained recently for sodium-containing cryolite melts (R.R. Nazmutdinov et al., Spectrochim. Acta A 75 (2010) 1244). The calculations show that the nature of the cation affects the geometry of the ionic associates as well as the equilibrium and kinetics of the complexation processes. This enables the interpretation of both the original experimental data and those reported in the literature
Kassem, M.; Soize, C.; Gagliardini, L.
2009-06-01
In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.
Johannsen, G.; Rouse, W. B.
1978-01-01
A hierarchy of human activities is derived by analyzing automobile driving in general terms. A structural description leads to a block diagram and a time-sharing computer analogy. The range of applicability of existing mathematical models is considered with respect to the hierarchy of human activities in actual complex tasks. Other mathematical tools so far not often applied to man-machine systems are also discussed. The mathematical descriptions at least briefly considered here include utility, estimation, control, queueing, and fuzzy set theory as well as artificial intelligence techniques. Some thoughts are given as to how these methods might be integrated and how further work might be pursued.
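Of the tools listed, queueing theory gives the most compact worked example: treating the operator as a single server handling randomly arriving tasks yields the standard M/M/1 measures of workload. The rates below are illustrative values, not figures from the paper.

```python
# M/M/1 sketch: the human operator as a single server in a time-shared task.
lam = 2.0   # task arrival rate (tasks per minute), assumed
mu = 3.0    # operator service rate (tasks per minute), assumed

rho = lam / mu         # operator utilization (fraction of time busy)
L = rho / (1 - rho)    # mean number of tasks queued or in service
W = 1 / (mu - lam)     # mean time a task spends in the system (minutes)
```

With these numbers the operator is busy two-thirds of the time and each task waits one minute on average; pushing lam toward mu makes both measures blow up, the queueing-theoretic picture of operator overload.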
Constantin, Magdalena; Constantin, Dragos E; Keall, Paul J; Narula, Anisha; Svatos, Michelle; Perl, Joseph
2010-01-01
Most of the treatment head components of medical linear accelerators used in radiation therapy have complex geometrical shapes. They are typically designed using computer-aided design (CAD) applications. In Monte Carlo simulations of radiotherapy beam transport through the treatment head components, the relevant beam-generating and beam-modifying devices are inserted in the simulation toolkit using geometrical approximations of these components. Depending on their complexity, such approximations may introduce errors that can be propagated throughout the simulation. This drawback can be minimized by exporting a more precise geometry of the linac components from CAD and importing it into the Monte Carlo simulation environment. We present a technique that links three-dimensional CAD drawings of the treatment head components to Geant4 Monte Carlo simulations of dose deposition. (note)
Іван Нищак
2015-12-01
The article presents the theoretical foundations of the design of the author's electronic educational-methodical complex (EEMC) "Graphics", intended to implement the engineering-graphic preparation of future teachers of technology in terms of computer-based learning. The process of designing the electronic educational-methodical complex "Graphics" includes the following successive stages: 1) identification of didactic goals and objectives; 2) designing the patterns of the EEMC; 3) selection and systematization of the educational material; 4) software implementation of the EEMC; 5) interface design; 6) expert assessment of the quality of the EEMC; 7) testing of the EEMC; 8) adjusting the software; 9) development of guidelines and instructions for the use of the EEMC.
Yoon, Susan A.; Koehler-Yom, Jessica; Anderson, Emma; Lin, Joyce; Klopfer, Eric
2015-05-01
Background: This exploratory study is part of a larger-scale research project aimed at building theoretical and practical knowledge of complex systems in students and teachers with the goal of improving high school biology learning through professional development and a classroom intervention. Purpose: We propose a model of adaptive expertise to better understand teachers' classroom practices as they attempt to navigate myriad variables in the implementation of biology units that include working with computer simulations, and learning about and teaching through complex systems ideas. Sample: Research participants were three high school biology teachers, two females and one male, ranging in teaching experience from six to 16 years. Their teaching contexts also ranged in student achievement from 14-47% advanced science proficiency. Design and methods: We used a holistic multiple case study methodology and collected data during the 2011-2012 school year. Data sources include classroom observations, teacher and student surveys, and interviews. Data analyses and trustworthiness measures were conducted through qualitative mining of data sources and triangulation of findings. Results: We illustrate the characteristics of adaptive expertise of more or less successful teaching and learning when implementing complex systems curricula. We also demonstrate differences between case study teachers in terms of particular variables associated with adaptive expertise. Conclusions: This research contributes to scholarship on practices and professional development needed to better support teachers to teach through a complex systems pedagogical and curricular approach.
1983-09-01
[Garbled listing; recoverable fragments:] COMMON blocks include PX, REFH, REFV, RHOX, RHOY, RHOZ, SA, SALP in /AMPZIJ/. NAME: PLAINT (GTD). PURPOSE: To determine if a ray traveling from a given source location is reflected from plate MP; if a ray traveling from the source image location in the reflected ray direction passes through the plate, a reflection occurs. CALLING ROUTINE: FLDDRV.
Hyder, M.L.; Farawila, Y.M.; Abdel-Khalik, S.I.; Halvorson, P.J.
1992-05-01
In the development of the Severe Accident Analysis Program for the Savannah River production reactors, it was recognized that certain accidents have the potential for causing damaging steam explosions. The massive SRS reactor buildings are likely to withstand any imaginable steam explosion. However, reactor components and building structures including hatches, ventilation ducts, etc., could be at risk if such an explosion occurred. No tools were available to estimate the effects of such explosions on actual structures. To meet this need, the Savannah River Laboratory contracted with the Georgia Institute of Technology Research Institute for development of a computer-based calculational tool for estimating the effects of steam explosions. The goal of this study was to develop a computer code that could be used parametrically to predict the effects of various steam explosions on their surroundings, i.e., to predict whether a steam explosion of a given magnitude would be likely to fail a particular structure. This requires, of course, that the magnitude of the explosion be specified through some combination of judgment and calculation. The requested code, identified as the K-FIX(GT) code, was developed and delivered by the contractor, along with extensive documentation. The several individual reports that constitute the documentation are each being issued as a separate WSRC report. Documentation includes several model calculations and representations of these in graphic form. This report gives detailed instructions for the use of the code, including identification of all input parameters required.
Morsing, Thorbjørn Juul; Sauer, Stephan P. A.; Weihe, Høgni
2013-01-01
The magnetic susceptibility of the dinuclear chromium(III) complex [(CH3CN)5CrOCr(NCCH3)5](BF4)4 · 2 CH3CN has been measured and analyzed. With a fitted value of the triplet energy J = 650 cm−1, the antiferromagnetic coupling is the strongest hitherto determined for an unsupported linear oxide-bridged complex. ... relative errors typically of less than 10%, ranging from the strongest coupled systems to systems with moderately strong couplings. A significant influence (>20%) of the chemical nature of the peripheral, non-bridging ligands on the exchange coupling was found and rationalized.
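The fitted J can be turned into a model susceptibility curve via the standard Heisenberg-dimer (HDVV) treatment for two S = 3/2 ions: with H = J·S1·S2 and the singlet lowest, the spin ladder has E(S) = (J/2)·S(S+1), and the Van Vleck equation gives χT. The sketch below assumes an isotropic g = 2 and uses only the J quoted in the abstract; it is a generic dimer model, not the authors' fitting code.

```python
import math

J = 650.0        # cm^-1, fitted singlet-triplet gap from the abstract
g = 2.0          # assumed isotropic g-factor
k_cm = 0.695039  # Boltzmann constant in cm^-1 per K

def chi_T(T):
    """Van Vleck chi*T (emu K/mol, cgs) for two coupled S = 3/2 ions.
    Spin states S = 0..3 with energies E(S) = (J/2) S(S+1)."""
    kT = k_cm * T
    E = lambda S: J * S * (S + 1) / 2.0
    num = sum(S * (S + 1) * (2 * S + 1) * math.exp(-E(S) / kT) for S in range(4))
    den = sum((2 * S + 1) * math.exp(-E(S) / kT) for S in range(4))
    # 0.12505 emu K/mol is N_A * mu_B^2 / (3 k_B) in cgs units
    return 0.12505 * g ** 2 * num / den
```

With J = 650 cm−1 the room-temperature χT is only ~0.12 emu K/mol, far below the uncoupled-dimer limit of 3.75, the signature of very strong antiferromagnetic coupling.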
Afroz, Ziya; Zulkarnain,; Ahmad, Afaq; Alam, Mohammad Jane; Faizan, Mohd; Ahmad, Shabbir
2016-01-01
DFT and TD-DFT studies of o-phenylenediamine (PDA), 3,5-dinitrosalicylic acid (DNSA) and their charge-transfer complex have been carried out at the B3LYP/6-311G(d,p) level of theory. The molecular geometry and various other molecular properties, such as natural atomic charges, ionization potential, electron affinity, band gap, natural bond orbital (NBO) and frontier molecular orbital analysis, have been presented at the same level of theory. Frontier molecular orbital and natural bond orbital analyses show charge delocalization from PDA to DNSA.
Zizin, M.N.; Savochkina, O.A.; Chukhlova, O.P.
1978-01-01
A structure of standard designations is described, and the semantics of a number of standard values used in the NF-6 program complex are given. The main source data and results of neutron-physics reactor calculations are standard values; the peculiarities of the FORTRAN and ALGOL-GDR programming languages in the DUBNA monitoring system were taken into account. The FIHAR system list, supplemented with new standard designations for integral reactor characteristics, is used as the basis of the standard values list. A list of standard values has also been developed to organize exchange with external memory during task solution and long-term storage
Caizergues, Robert; Poullot, Gilles; Teillet, J.-R.
1976-06-01
The MORET code determines effective multiplying factors. It uses the Monte Carlo technique and the multigroup theory; a collision is taken as isotropic, but anisotropy is taken into account by means of the transport correction. Complex geometries can be rapidly treated: the array to be studied is divided in simple elementary volumes (spheres, cylinders, boxes, cones, half space planes...) to which are applied operators of the theory of sets. Some constant or differential (albedos) reflection coefficients simulate neighboring reflections on the outer volume [fr
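The geometry scheme described, simple elementary volumes combined with operators from the theory of sets, is constructive solid geometry, and the membership test at the heart of Monte Carlo particle tracking can be sketched compactly. This is a generic illustration, not MORET's implementation; the shapes and operators are minimal stand-ins.

```python
import math

# Elementary volumes as point-membership predicates
def sphere(cx, cy, cz, r):
    return lambda p: math.dist(p, (cx, cy, cz)) <= r

def box(lo, hi):
    return lambda p: all(l <= x <= h for x, l, h in zip(p, lo, hi))

# Set-theory operators combining elementary volumes
def union(a, b):      return lambda p: a(p) or b(p)
def intersect(a, b):  return lambda p: a(p) and b(p)
def difference(a, b): return lambda p: a(p) and not b(p)

# Example: a box with a spherical cavity. A Monte Carlo code would query
# such predicates to decide in which region a sampled collision point lies.
shape = difference(box((0, 0, 0), (4, 4, 4)), sphere(2, 2, 2, 1))
```

Tracking through arbitrarily complex arrays then reduces to evaluating these composed predicates along each neutron flight path.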
Helbing, D.; Bishop, S.; Conte, R.; Lukowicz, P.; McCarthy, J. B.
2012-11-01
We have built particle accelerators to understand the forces that make up our physical world. Yet, we do not understand the principles underlying our strongly connected, techno-socio-economic systems. We have enabled ubiquitous Internet connectivity and instant, global information access. Yet we do not understand how it impacts our behavior and the evolution of society. To fill the knowledge gaps and keep up with the fast pace at which our world is changing, a Knowledge Accelerator must urgently be created. The financial crisis, international wars, global terror, the spreading of diseases and cyber-crime as well as demographic, technological and environmental change demonstrate that humanity is facing serious challenges. These problems cannot be solved within the traditional paradigms. Moving our attention from a component-oriented view of the world to an interaction-oriented view will allow us to understand the complex systems we have created and the emergent collective phenomena characterising them. This paradigm shift will enable new solutions to long-standing problems, very much as the shift from a geocentric to a heliocentric worldview has facilitated modern physics and the ability to launch satellites. The FuturICT flagship project will develop new science and technology to manage our future in a complex, strongly connected world. For this, it will combine the power of information and communication technology (ICT) with knowledge from the social and complexity sciences. ICT will provide the data to boost the social sciences into a new era. Complexity science will shed new light on the emergent phenomena in socially interactive systems, and the social sciences will provide a better understanding of the opportunities and risks of strongly networked systems, in particular future ICT systems. Hence, the envisaged FuturICT flagship will create new methods and instruments to tackle the challenges of the 21st century. FuturICT could indeed become one of the most
Ren, Guomin; Krawetz, Roman
2015-01-01
The data explosion in the last decade is revolutionizing diagnostics research and the healthcare industry, offering both opportunities and challenges. These high-throughput "omics" techniques have generated more scientific data in the last few years than in the entire history of mankind. Here we present a brief summary of how "big data" have influenced early diagnosis of complex diseases. We will also review some of the most commonly used "omics" techniques and their applications in diagnostics. Finally, we will discuss the issues brought by these new techniques when translating laboratory discoveries to clinical practice.
Wang, W. L.; Tsui, K. L.; Lo, S. M.; Liu, S. B.
2018-01-01
Crowded transportation hubs such as metro stations are regarded as ideal places for the development and spread of epidemics. However, owing to their complex spatial layouts and confined environments with large numbers of highly mobile individuals, it is difficult to quantify human contacts in such settings, and disease-spreading dynamics there were little explored in previous studies. Due to the heterogeneity and dynamic nature of human interactions, an increasing number of studies have demonstrated the importance of contact distance and length of contact in transmission probabilities. In this study, we show how detailed information on contact and exposure patterns can be obtained by statistical analyses of microscopic crowd simulation data. To be specific, a pedestrian simulation model, CityFlow, was employed to reproduce individuals' movements in a metro station based on site survey data, and the values and distributions of individual contact rate and exposure in different simulation cases were obtained and analyzed. Interestingly, the Weibull distribution fitted the histogram values of individual-based exposure in each case very well. Moreover, we found that both individual contact rate and exposure had a linear relationship with the average crowd density of the environment. The results obtained in this paper can provide a reference for epidemic studies in complex and confined transportation hubs and refine existing disease-spreading models.
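The reported linear relationship between contact rate and average crowd density amounts to fitting a least-squares line to per-case simulation summaries. The sketch below does this in plain Python on invented numbers; the densities and contact rates are illustrative stand-ins for the CityFlow outputs, not the paper's data.

```python
# Hypothetical per-case simulation summaries
densities = [0.5, 1.0, 1.5, 2.0, 2.5]       # average crowd density (persons/m^2)
contact_rates = [1.1, 2.0, 2.9, 4.1, 5.0]   # mean individual contact rate (contacts/min)

# Ordinary least-squares fit of contact_rate = slope * density + intercept
n = len(densities)
mx = sum(densities) / n
my = sum(contact_rates) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(densities, contact_rates))
         / sum((x - mx) ** 2 for x in densities))
intercept = my - slope * mx
```

The same summaries could feed a Weibull fit of the exposure histograms (e.g. `scipy.stats.weibull_min.fit`), mirroring the paper's distributional finding.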
Communication complexity and information complexity
Pankratov, Denis
Information complexity enables the use of information-theoretic tools in communication complexity theory. Prior to the results presented in this thesis, information complexity was mainly used for proving lower bounds and direct-sum theorems in the setting of communication complexity. We present three results that demonstrate new connections between information complexity and communication complexity. In the first contribution we thoroughly study the information complexity of the smallest nontrivial two-party function: the AND function. While computing the communication complexity of AND is trivial, computing its exact information complexity presents a major technical challenge. In overcoming this challenge, we reveal that information complexity gives rise to rich geometrical structures. Our analysis of information complexity relies on new analytic techniques and new characterizations of communication protocols. We also uncover a connection of information complexity to the theory of elliptic partial differential equations. Once we compute the exact information complexity of AND, we can compute the exact communication complexity of several related functions on n-bit inputs with some additional technical work. Previous combinatorial and algebraic techniques could only prove bounds of the form Θ(n). Interestingly, this level of precision is typical in the area of information theory, so our result demonstrates that this meta-property of precise bounds carries over to information complexity and in certain cases even to communication complexity. Our result does not only strengthen the lower bound on the communication complexity of disjointness by making it more exact, but it also shows that information complexity provides the exact upper bound on communication complexity. In fact, this result is more general and applies to a whole class of communication problems. In the second contribution, we use self-reduction methods to prove strong lower bounds on the information
S Rajasekaran
2010-01-01
Background: Cervical pedicle screw fixation is challenging due to the small osseous morphometrics and the close proximity of neurovascular elements. Computer navigation has been reported to improve the accuracy of pedicle screw placement. There are very few studies assessing its efficacy in the presence of deformity, and cervical pedicle screw insertion in children has not been described before. We evaluated the safety and accuracy of Iso-C 3D-navigated pedicle screws in the deformed cervical spine. Materials and Methods: Thirty-three patients including 15 children formed the study group. One hundred and forty-five cervical pedicle screws were inserted using Iso-C 3D-based computer navigation in patients undergoing cervical spine stabilization for craniovertebral junction anomalies, cervico-thoracic deformities and cervical instabilities due to trauma, post-surgery and degenerative disorders. The accuracy and containment of screw placement was assessed from postoperative computerized tomography scans. Results: One hundred and thirty (89.7%) screws were well contained inside the pedicles. Nine (6.1%) Type A and six (4.2%) Type B pedicle breaches were observed. In 136 levels, the screws were inserted in the classical description of pedicle screw application and in nine deformed vertebrae, the screws were inserted in a non-classical fashion, taking purchase of the best bone stock. None of them had a critical breach. No patient had any neurovascular complications. Conclusion: Iso-C navigation improves the safety and accuracy of pedicle screw insertion and is successful not only in achieving secure pedicle fixation but also in identifying the best available bone stock for three-column bone fixation in altered anatomy. The advantages conferred by cervical pedicle screws can be extended to the pediatric population also.
Kfir, A; Telishevsky-Strauss, Y; Leitner, A; Metzger, Z
2013-03-01
To investigate the use of 3D plastic models, printed from cone beam computed tomography (CBCT) data, for accurate diagnosis and conservative treatment of a complex case of dens invaginatus. A chronic apical abscess with a draining sinus tract was diagnosed during the treatment planning stage of orthodontic therapy. Radiographic examination revealed a large radiolucent area associated with an invaginated right maxillary central incisor, which was found to contain a vital pulp. The affected tooth was strategic in the dental arch. Conventional periapical radiographs provided only partial information about the invagination and its relationship with the main root canal and with the periapical tissues. A limited-volume CBCT scan of the maxilla did not show evidence of communication between the infected invagination and the pulp in the main root canal, which could explain the pulp vitality. A novel method was adopted to allow for instrumentation, disinfection and filling of the invagination, without compromising the vitality of the pulp in the complex root canal system. The CBCT data were used to produce precise 3D plastic models of the tooth. These models facilitated the treatment planning process and the trial of treatment approaches. This approach allowed the vitality of the pulp to be maintained in the complex root canal space of the main root canal whilst enabling the healing of the periapical tissues. Even when extensive periapical pathosis is associated with a tooth with type III dens invaginatus, pulp sensibility tests should be performed. CBCT is a diagnostic tool that may allow for the management of such teeth with complex anatomy. 3D printed plastic models may be a valuable aid in the process of assessing and planning effective treatment modalities and practicing them ex vivo before actually performing the clinical procedure. Unconventional technological approaches may be required for detailed treatment planning of complex cases of dens invaginatus.
De Simone, Bruna C; Mazzone, Gloria; Russo, Nino; Sicilia, Emilia; Toscano, Marirosa
2018-03-15
How the photophysical properties (absorption energies, singlet-triplet energy gaps and spin-orbit coupling contributions) of tetraphenylporphyrin (TPP) and its zinc(II) complex (ZnTPP) change with the presence of an increasing number of heavy atoms in their molecular structures has been investigated by means of density functional theory and its time-dependent formulation. Results show that increasing the atomic mass of the substituted halogen strongly enhances the spin-orbit coupling values, allowing a more efficient singlet-triplet intersystem crossing. Different deactivation channels have been considered and rationalized on the basis of the El-Sayed and Kasha rules. Most of the studied compounds possess the appropriate properties to generate cytotoxic singlet molecular oxygen (¹Δg) and, consequently, can be proposed as photosensitizers in photodynamic therapy.
Li Di; Wang Geng; Chen Yang; Li Lin; Shrivastav, Gaurav; Oak, Stimit; Tasch, Al; Banerjee, Sanjay; Obradovic, Borna
2001-01-01
A physically-based three-dimensional Monte Carlo simulator has been developed within UT-MARLOWE, which is capable of simulating ion implantation into multi-material systems and arbitrary topography. Introducing the third dimension can result in a severe CPU time penalty. In order to minimize this penalty, a three-dimensional trajectory replication algorithm has been developed, implemented and verified. More than two orders of magnitude savings in CPU time have been observed. An unbalanced octree structure was used to decompose three-dimensional structures. It effectively simplifies the structure, offers a good balance between modeling accuracy and computational efficiency, and allows arbitrary precision in mapping the octree onto the desired structure. Using the well-established and validated physical models in UT-MARLOWE 5.0, this simulator has been extensively verified by comparing integrated one-dimensional simulation results with secondary ion mass spectroscopy (SIMS). Two options have been selected for simulating ion implantation into poly-silicon with this simulator: the typical case, implantation into a random amorphous network, and the worst case, implantation under the worst-case channeling condition, into (1 1 0) oriented wafers
Eslam Kashi
2015-04-01
In some instances, it is inevitable that large amounts of potentially hazardous chemicals like chlorine gas are stored and used in facilities in densely populated areas. In such cases, all safety issues must be carefully considered. To reach this goal, it is important to have accurate information concerning chlorine gas behavior and how it is dispersed in dense urban areas. Furthermore, maintaining adequate air movement and the ability to purge the ambient air of potentially toxic and dangerous chemicals like chlorine gas could be helpful. These are among the most important actions to be taken toward the improvement of safety in a big metropolis like Tehran. This paper investigates and analyzes chlorine gas leakage scenarios, including its dispersion and the effects of natural air ventilation on how it might be geographically spread in a city, using computational fluid dynamics (CFD). Simulations of possible hazardous events and solutions for preventing or reducing their probability are presented to gain a better insight into the incidents. These investigations are done by considering hypothetical scenarios which consist of chlorine gas leakages from pipelines or storage tanks under different conditions. The CFD simulation results are used to investigate and analyze chlorine gas behavior, dispersion, distribution, accumulation, and other possible hazards by means of a simplified CAD model of an urban area near a water-treatment facility. Possible hazards as well as some prevention and post-incident solutions are also suggested.
Shute, J.; Carriere, L.; Duffy, D.; Hoy, E.; Peters, J.; Shen, Y.; Kirschbaum, D.
2017-12-01
The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center is building and maintaining an Enterprise GIS capability for its stakeholders, to include NASA scientists, industry partners, and the public. This platform is powered by three GIS subsystems operating in a highly-available, virtualized environment: 1) the Spatial Analytics Platform is the primary NCCS GIS and provides users discoverability of the vast DigitalGlobe/NGA raster assets within the NCCS environment; 2) the Disaster Mapping Platform provides mapping and analytics services to NASA's Disaster Response Group; and 3) the internal (Advanced Data Analytics Platform/ADAPT) enterprise GIS provides users with the full suite of Esri and open source GIS software applications and services. All systems benefit from NCCS's cutting edge infrastructure, to include an InfiniBand network for high speed data transfers; a mixed/heterogeneous environment featuring seamless sharing of information between Linux and Windows subsystems; and in-depth system monitoring and warning systems. Due to its co-location with the NCCS Discover High Performance Computing (HPC) environment and the Advanced Data Analytics Platform (ADAPT), the GIS platform has direct access to several large NCCS datasets including DigitalGlobe/NGA, Landsat, MERRA, and MERRA2. Additionally, the NCCS ArcGIS Desktop Windows virtual machines utilize existing NetCDF and OPeNDAP assets for visualization, modelling, and analysis - thus eliminating the need for data duplication. With the advent of this platform, Earth scientists have full access to vast data repositories and the industry-leading tools required for successful management and analysis of these multi-petabyte, global datasets. The full system architecture and integration with scientific datasets will be presented. Additionally, key applications and scientific analyses will be explained, to include the NASA Global Landslide Catalog (GLC) Reporter crowdsourcing application, the
Computing Analysis of Bearing Elements of Launch Complex Aggregates for Space Rocket "Soyuz-2.1v"
V. A. Zverev
2014-01-01
The research is devoted to the computational analysis of the bearing structures of launch system aggregates designed for the prelaunch preparation and launch security of the space rocket (SR) "Soyuz-2.1v". The bearing structures under consideration are the supporting trusses (ST), the bearing arms (BA), the upper cable girder (UCG), and the umbilical mast (UM). The SR "Soyuz-2.1v" has propulsion unit (PU) thrust characteristics different from those of the previously operated space rockets of the "Soyuz" family. The paper presents the basic principles of modeling the aggregates and their operating loads. The self-weight of each structure and the influence of the gas-dynamic jet of the "Soyuz-2.1v" propulsion unit have been considered as loads. The parameters of this influence are determined on the basis of impulse stream fields and deceleration temperatures calculated for various SR positions according to the specified path of its ascent and demolition. Physical models of the aggregates and the calculations are based on the finite element method and the super-element method, using the "SADAS" software package developed at chair SM8 of Bauman Moscow State Technical University. The calculation results comprise the fields of nodal temperature distribution in the ST, BA, UCG and UM models, as well as the stress fields in the finite elements. The results revealed the most vulnerable of the considered launch system aggregates, namely the UM, which was selected for a local strength calculation. As an example, this research considers the calculation of local strength in the truss-branch junction of the UM rotary part, for which a structural strengthening has been offered. For this node, a detailed finite element model built into the model of the UM rotary part has been created. The local strength calculation results testify that the strengthened node meets the strength conditions. SR developers used the calculation results of the launch system aggregates for the space
Suh, Young Joo; Han, Kyunghwa; Chang, Suyon; Kim, Jin Young; Im, Dong Jin; Hong, Yoo Jin; Lee, Hye-Jeong; Hur, Jin; Kim, Young Jin; Choi, Byoung Wook
2017-09-01
The SYNergy between percutaneous coronary intervention with TAXus and cardiac surgery (SYNTAX) score is an invasive coronary angiography (ICA)-based score for quantifying the complexity of coronary artery disease (CAD). Although the SYNTAX score was originally developed based on ICA, recent publications have reported that coronary computed tomography angiography (CCTA) is a feasible modality for the estimation of the SYNTAX score. The aim of our study was to investigate the prognostic value of the SYNTAX score, based on CCTA, for the prediction of major adverse cardiac and cerebrovascular events (MACCEs) in patients with complex CAD. The current study was approved by the institutional review board of our institution, and informed consent was waived for this retrospective cohort study. We included 251 patients (173 men, mean age 66.0 ± 9.29 years) who had complex CAD [3-vessel disease or left main (LM) disease] on CCTA. The SYNTAX score was obtained on the basis of CCTA. Follow-up clinical outcome data regarding composite MACCEs were also obtained. Cox proportional hazards models were developed to predict the risk of MACCEs based on clinical variables, treatment, and computed tomography (CT)-SYNTAX scores. During the median follow-up period of 1517 days, there were 48 MACCEs. Univariate Cox hazards models demonstrated that MACCEs were associated with advanced age, low body mass index (BMI), and dyslipidemia (P < .2). In patients with LM disease, MACCEs were associated with a higher SYNTAX score. In patients with CT-SYNTAX score ≥23, patients who underwent coronary artery bypass graft surgery (CABG) and percutaneous coronary intervention had significantly lower hazard ratios than patients who were treated with medication alone. In the multivariate Cox hazards model, advanced age, low BMI, and higher SYNTAX score showed an increased hazard ratio for MACCEs, while treatment with CABG showed a lower hazard ratio (P < .2). On the basis of our results, CT-SYNTAX score
Blacher, Jonathan; Van DaHuvel, Scott; Parashar, Vijay; Mitchell, John C
2016-03-01
The inferior alveolar nerve (IAN) injection is 1 of the most commonly administered and useful injections in the field of dentistry. Practitioners use intraoral anatomic landmarks, which vary greatly among patients. The objective of this study was to assist practitioners by identifying a range of normal variability within certain landmarks used in delivering IAN anesthesia. A total of 203 randomly selected retrospective cone-beam computed tomographic scans were obtained from the Midwestern University Dental Institute cone-beam computed tomographic database. InVivoDental5.0 volumetric imaging software (Anatomage, San Jose, CA) was used to measure 2 important parameters used in locating the mandibular foramen (MF)/IAN complex: (1) the angle from the contralateral premolar contact area to the MF and (2) the distance above the mandibular occlusal plane to the center of the MF. The variation of these measurements was compared with established reference values and statistically analyzed using a 1-sample t test. The angle from the contralateral premolar contact area to the MF for the right and left sides was 42.99° and 42.57°, respectively. The angulations varied significantly from the reference value of 45° (P < .001). The minimum height above the mandibular occlusal plane for the right and left sides was 9.85 mm and 9.81 mm, respectively. The heights varied significantly from the minimum reference value of 6 mm but not the maximum reference value of 10 mm (P < .001). Orienting the syringe barrel at an angulation slightly less than 45° and significantly higher than 6 mm above the mandibular occlusal plane can aid in successfully administering anesthesia to the MF/IAN complex. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Thiyagalingam, Jeyarajan
2013-06-01
In this paper, we present a novel visualization technique for assisting the observation and analysis of algorithmic complexity. In comparison with conventional line graphs, this new technique is not sensitive to the units of measurement, allowing multivariate data series of different physical quantities (e.g., time, space and energy) to be juxtaposed conveniently and consistently. It supports multivariate visualization as well as uncertainty visualization. It enables users to focus on algorithm categorization by complexity classes, while reducing the visual impact caused by constants and algorithmic components that are insignificant to complexity analysis. It provides an effective means for observing the algorithmic complexity of programs with a mixture of algorithms and black-box software through visualization. Through two case studies, we demonstrate the effectiveness of complexity plots in complexity analysis in research, education and application. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.
Hölscher, Markus; Leitner, Walter
2017-09-07
While industrial NH3 synthesis based on the Haber-Bosch process was invented more than a century ago, there is still no molecular catalyst available which reduces N2 in the reaction system N2/H2 to NH3. As the many efforts of experimental research groups to develop a molecular catalyst for NH3 synthesis from N2/H2 have led to a variety of stoichiometric reductions, it seems justified to attempt to systematize the various approaches by which the N2 molecule might be reduced to NH3 with H2 at a transition metal complex. In this contribution, therefore, a variety of intuition-based concepts are presented with the intention of showing how the problem can be approached. While no claim to completeness is made, these concepts are intended to generate a working plan for future research. Beyond this, it is suggested that these concepts be evaluated for experimental feasibility by checking the barrier heights of single reaction steps and also by computing whole catalytic cycles with density functional theory (DFT) calculations. This serves as a tool which extends the empirically driven search process and expands it with computed insights that can be used to rationalize the various challenges which must be met. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Stock, Joachim W.; Kitzmann, Daniel; Patzer, A. Beate C.; Sedlmayr, Erwin
2018-06-01
For the calculation of complex neutral/ionized gas phase chemical equilibria, we present a semi-analytical, versatile and efficient computer program called FastChem. The applied method is based on the solution of a system of coupled nonlinear (and linear) algebraic equations in many variables, namely the law of mass action and the element conservation equations including charge balance. Specifically, the system of equations is decomposed into a set of coupled nonlinear equations in one variable each, which are solved analytically whenever feasible to reduce computation time. Notably, the electron density is determined by using the method of Nelder and Mead at low temperatures. The program is written in object-oriented C++, which makes it easy to couple the code with other programs, although a stand-alone version is provided. FastChem can be used in parallel or sequentially and is available under the GNU General Public License version 3 at https://github.com/exoclime/FastChem together with several sample applications. The code has been successfully validated against previous studies and its convergence behavior has been tested even for extreme physical parameter ranges down to 100 K and up to 1000 bar. FastChem converges stably and robustly even in the most demanding chemical situations, which sometimes posed extreme challenges for previous algorithms.
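The decomposition strategy described in this abstract can be illustrated with a toy system: for a gas containing only H and H2, the law of mass action plus hydrogen conservation collapse to a single quadratic in the H number density, solvable in closed form (the "analytic whenever feasible" case). This is a hedged sketch, not FastChem's actual code; the equilibrium constant K and total density below are illustrative values.

```python
import math

def hydrogen_equilibrium(K, n_tot):
    """Equilibrium number densities of H and H2 in a hydrogen-only gas.

    Mass action:          n_H2 = K * n_H**2
    Element conservation: n_H + 2*n_H2 = n_tot
    Substituting the first into the second gives
    2*K*n_H**2 + n_H - n_tot = 0, solved here in closed form
    (positive root only).
    """
    n_H = (-1.0 + math.sqrt(1.0 + 8.0 * K * n_tot)) / (4.0 * K)
    return n_H, K * n_H ** 2

# illustrative values, not taken from the paper
n_H, n_H2 = hydrogen_equilibrium(K=1e-3, n_tot=1e4)
# element conservation holds to machine precision
assert abs(n_H + 2.0 * n_H2 - 1e4) < 1e-6
```

In FastChem itself the analogous one-variable equations are coupled across all elements and iterated to self-consistency; the closed-form step above is only the innermost building block.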
Geist, G.A.; Wood, R.F.
1985-11-01
The conceptual foundation of a computational model and a computer program based on it have been developed for treating various aspects of the complex melting and solidification behavior observed in pulsed laser-irradiated materials. A particularly important feature of the modeling is the capability of allowing melting and solidification to occur at temperatures other than the thermodynamic phase change temperatures. As a result, interfacial undercooling and overheating can be introduced and various types of nucleation events can be simulated. Calculations on silicon with the model have shown a wide variety of behavior, including the formation and propagation of multiple phase fronts. Although originally developed as a tool for studying certain problems arising in the field of laser annealing of semiconductors, the program should be useful in treating many types of systems in which phase changes and nucleation phenomena play important roles. This report describes the underlying physical and mathematical ideas and the basic relations used in LASER8. It also provides enough specific and detailed information on the program to serve as a guide for its use; a listing of one version of the program is given
Fukuda, Hironobu; Imataka, George; Drago, Fabrizio; Maeda, Kosaku; Yoshihara, Shigemi
2017-09-01
We report a case of a 10-year-old female patient who survived ring-sling complex without surgery. The patient had congenital wheezing from the neonatal period and was treated after a tentative diagnosis of infantile asthma. The patient suffered from allergy and was hospitalized several times due to severe wheezing, and when she was 22 months old, she was diagnosed with ring-sling complex. Three-dimensional computed tomography (3D-CT) showed a tracheal segment with an internal diameter of 4 mm. Bronchial asthma was considered an exacerbating factor in the infantile period and frequently required treatment with a bronchodilator. After the age of 10, the patient had recurrent breathing difficulties during physical activity and during the night, and this condition was assessed to be related to the pressure from the blood vessel on the ring. We repeated the 3D-CT evaluation later and discovered that the internal diameter of the trachea had grown to 5 mm. Eventually, the patient's breathing difficulties disappeared after the treatment of bronchial asthma and restriction of physical activities. Our patient remained in stable condition without undergoing any surgical procedures even after she passed the age of 10.
Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A; Ontiveros, Sinué; Tosello, Guido
2017-05-16
The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems' traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the estimation of the micro-CT system's Maximum Permissible Error (MPE), determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such a CMS would still have all the typical limitations of optical and tactile techniques when measuring miniaturized components with complex 3D geometries, including their inability to measure inner parts. To validate the presented method, the most accepted standard currently available for CT sensors, the Verein Deutscher Ingenieure/Verband Deutscher Elektrotechniker (VDI/VDE) guideline 2630-2.1, is applied. Considering the high number of influence factors in CT and their impact on the measuring result, two different techniques for surface extraction are also considered in order to obtain a realistic determination of the influence of data processing on uncertainty. The uncertainty assessment of a workpiece used for micro mechanical material testing is first used to confirm the method, owing to its feasible calibration by an optical CMS. Secondly, the measurement of a miniaturized dental file with 3D complex geometry is carried out. The estimated uncertainties are finally compared with the component's calibration and the micro manufacturing tolerances to demonstrate the suitability of the presented CT calibration procedure. The 2U/T ratios resulting from the
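An MPE-based uncertainty budget of the kind described above can be sketched as a standard GUM type-B evaluation: the MPE is converted to a standard uncertainty by assuming a uniform error distribution, combined with other components, and expanded with a coverage factor. The MPE and repeatability values below are purely illustrative assumptions, not numbers from the study.

```python
import math

# Hypothetical maximum permissible error of the micro-CT system, in mm
MPE = 9.0e-3
# Type-B evaluation: a uniform distribution over [-MPE, +MPE]
# has standard uncertainty MPE / sqrt(3)
u_mpe = MPE / math.sqrt(3.0)
# Assumed repeatability of the surface-extraction step, in mm
u_proc = 2.0e-3
# Combined standard uncertainty and expanded uncertainty (k = 2)
u_c = math.sqrt(u_mpe**2 + u_proc**2)
U = 2.0 * u_c
```

The 2U/T comparison mentioned at the end of the abstract would then divide this expanded uncertainty (times two) by the feature tolerance T.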
Deutsch, D.
1992-01-01
As computers become ever more complex, they inevitably become smaller. This leads to a need for components which are fabricated and operate on increasingly smaller size scales. Quantum theory is already taken into account in microelectronics design. This article explores how quantum theory will need to be incorporated into computers in future in order to give their components functionality. Computation tasks which depend on quantum effects will become possible. Physicists may have to reconsider their perspective on computation in the light of understanding developed in connection with universal quantum computers. (UK)
L. V. Savkin
2015-01-01
One of the problems in implementing multipurpose complete systems based on reconfigurable computing fields (RCF) is the optimum redistribution of logical-arithmetic resources across a growing scope of functional tasks. Irrespective of complexity, all such tasks are transformed into an oriented graph whose functional and topological structure is appropriately mapped onto the RCF, which is based, as a rule, on a field-programmable gate array (FPGA). Owing to the limited hardware configurations and functions realized by means of the switched logical blocks (SLB), the above problem becomes even more critical when, within a strictly allocated RCF fragment, an even more complex task must be realized than the one solved during the previous computing step. In such cases one can speak of graphs that are large with respect to the allocated RCF fragment. The article considers this problem through the development of algorithms for diagnostics and control of an onboard control complex of a spacecraft using an RCF. It gives examples of large graphs arising, with respect to the allocated RCF fragment, when forming the hardware levels of a diagnostic model, which in this case is any hardware-based diagnostic algorithm in the RCF. The article reviews examples of large graphs that emerge when complicated diagnostic models are formed owing to drastic differences in the formation of hardware levels on closely located RCF fragments. It also pays attention to the large graphs emerging when multichannel diagnostic models are formed. Three main ways to solve the problem of large graphs with respect to an allocated RCF fragment are given: splitting the graph into fragments; using pop-up windows with relocation and memorization of intermediate values of functions of the higher hardware levels of diagnostic models; and deep adaptive updating of the diagnostic model. It is shown that the last of the three ways is the most efficient
Computing handbook computer science and software engineering
Gonzalez, Teofilo; Tucker, Allen
2014-01-01
Overview of Computer Science: Structure and Organization of Computing (Peter J. Denning); Computational Thinking (Valerie Barr). Algorithms and Complexity: Data Structures (Mark Weiss); Basic Techniques for Design and Analysis of Algorithms (Edward Reingold); Graph and Network Algorithms (Samir Khuller and Balaji Raghavachari); Computational Geometry (Marc van Kreveld); Complexity Theory (Eric Allender, Michael Loui, and Kenneth Regan); Formal Models and Computability (Tao Jiang, Ming Li, and Bala
New computational paradigms changing conceptions of what is computable
Cooper, SB; Sorbi, Andrea
2007-01-01
This superb exposition of a complex subject examines new developments in the theory and practice of computation from a mathematical perspective. It covers topics ranging from classical computability to complexity, from biocomputing to quantum computing.
Rui Sun
2017-09-01
Entropy-based algorithms have been suggested as robust estimators of electroencephalography (EEG) predictability or regularity. This study aimed to examine possible disturbances in EEG complexity as a means to elucidate the pathophysiological mechanisms in chronic stroke, before and after a brain-computer interface (BCI)-motor observation intervention. Eleven chronic stroke subjects and nine unimpaired subjects were recruited to examine the differences in their EEG complexity. The BCI-motor observation intervention was designed to promote functional recovery of the hand in stroke subjects. Fuzzy approximate entropy (fApEn), a novel entropy-based algorithm designed to evaluate complexity in physiological systems, was applied to assess the EEG signals acquired from unimpaired subjects and stroke subjects, both before and after training. The results showed that stroke subjects had significantly lower EEG fApEn than unimpaired subjects (p < 0.05) in the motor cortex area of the brain (C3, C4, FC3, FC4, CP3, and CP4) in both hemispheres before training. After training, motor function of the paretic upper limb, assessed by the Fugl-Meyer Assessment-Upper Limb (FMA-UL), Action Research Arm Test (ARAT), and Wolf Motor Function Test (WMFT), improved significantly (p < 0.05). Furthermore, the EEG fApEn in stroke subjects increased considerably in the central area of the contralesional hemisphere after training (p < 0.05). A significant correlation was noted between the clinical scales (FMA-UL, ARAT, and WMFT) and EEG fApEn in C3/C4 in the contralesional hemisphere (p < 0.05). This finding suggests that the increase in EEG fApEn could be an estimator of the variance in upper limb motor function improvement. In summary, fApEn can be used to identify abnormal EEG complexity in chronic stroke, when used with BCI-motor observation training. Moreover, these findings based on the fApEn of EEG signals also expand the existing interpretation of training-induced functional
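The fuzzy approximate entropy statistic named in this abstract can be sketched in a few lines: embedded templates are compared with a smooth exponential similarity function instead of ApEn's hard threshold. The parameter defaults below are common choices for fApEn, not necessarily the exact settings of the cited study.

```python
import numpy as np

def fapen(x, m=2, r=0.2, n=2):
    """Fuzzy approximate entropy of a 1-D signal (sketch).

    m: embedding dimension; r: tolerance as a fraction of the
    signal's standard deviation; n: fuzzy exponent.
    """
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    N = len(x)

    def phi(dim):
        # overlapping templates of length `dim`, baseline-removed
        t = np.array([x[i:i + dim] for i in range(N - dim + 1)])
        t -= t.mean(axis=1, keepdims=True)
        # Chebyshev distance between every pair of templates
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        # fuzzy (exponential) similarity instead of a hard threshold
        return np.exp(-(d ** n) / tol).mean()

    return np.log(phi(m)) - np.log(phi(m + 1))

rng = np.random.default_rng(0)
noise = rng.standard_normal(400)                # irregular signal
sine = np.sin(np.linspace(0, 16 * np.pi, 400))  # regular signal
assert fapen(noise) > fapen(sine)  # noise is less predictable
```

A lower fApEn thus indicates a more regular (more predictable) signal, which is the direction of the stroke-versus-unimpaired difference reported above.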
Pérez-Landero, Sergio; Sandoval-Motta, Santiago; Martínez-Anaya, Claudia; Yang, Runying; Folch-Mallol, Jorge Luis; Martínez, Luz María; Ventura, Larissa; Guillén-Navarro, Karina; Aldana-González, Maximino; Nieto-Sotelo, Jorge
2015-07-27
The cAMP-dependent protein kinase regulatory network (PKA-RN) regulates metabolism, memory, learning, development, and response to stress. Previous models of this network considered the catalytic subunits (CS) as a single entity, overlooking their functional individualities. Furthermore, PKA-RN dynamics are often measured through cAMP levels in nutrient-depleted cells shortly after being fed with glucose, dismissing downstream physiological processes. Here we show that temperature stress, along with deletion of PKA-RN genes, significantly affected HSE-dependent gene expression and the dynamics of the PKA-RN in cells growing in exponential phase. Our genetic analysis revealed complex regulatory interactions between the CS that influenced the inhibition of Hsf1/Skn7 transcription factors. Accordingly, we found new roles in growth control and stress response for Hsf1/Skn7 when PKA activity was low (cdc25Δ cells). Experimental results were used to propose an interaction scheme for the PKA-RN and to build an extension of a classic synchronous discrete modeling framework. Our computational model reproduced the experimental data and predicted complex interactions between the CS and the existence of a repressor of Hsf1/Skn7 that is activated by the CS. Additional genetic analysis identified Ssa1 and Ssa2 chaperones as such repressors. Further modeling of the new data foresaw a third repressor of Hsf1/Skn7, active only in the absence of Tpk2. By averaging the network state over all its attractors, a good quantitative agreement between computational and experimental results was obtained, as the averages reflected more accurately the population measurements. The assumption of PKA being one molecular entity has hindered the study of a wide range of behaviors. Additionally, the dynamics of HSE-dependent gene expression cannot be simulated accurately by considering the activity of single PKA-RN components (i.e., cAMP, individual CS, Bcy1, etc.). We show that the differential
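The attractor-averaging step mentioned in this abstract can be illustrated with a minimal synchronous Boolean network: every initial state is driven to its attractor cycle, and node activity is averaged over the attractor states. The three-node update rules below are invented for illustration only; they are not the PKA-RN model of the paper.

```python
from itertools import product

def step(state):
    a, b, c = state
    # hypothetical rules: A is repressed by C, B is activated by A,
    # C requires both A and B (synchronous update)
    return (not c, a, a and b)

# drive every initial state to its attractor (a cycle of states)
attractor_states = set()
for init in product([False, True], repeat=3):
    seen, s = [], init
    while s not in seen:
        seen.append(s)
        s = step(s)
    attractor_states.update(seen[seen.index(s):])  # the cycle itself

# average activity of each node over all attractor states
avg = [sum(st[i] for st in attractor_states) / len(attractor_states)
       for i in range(3)]
```

Averaging over attractor states, as here, reflects population-level measurements better than any single trajectory, which is the quantitative-agreement point made in the abstract.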
Rosenthal, L E
1986-10-01
Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of type, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.
Gandhi, A; Kathuria, A; Gandhi, T
2011-06-01
To present the successful endodontic and periodontal management of a two rooted maxillary lateral incisor tooth with a complex radicular lingual groove and severe periodontal destruction using spiral computed tomography as a diagnostic aid. A 30-year-old male patient presented with a chief complaint of mobility and discharge of pus in an upper front tooth. Clinical examination revealed a sinus tract on the labial gingival surface and a 10-mm-deep periodontal pocket associated with the maxillary left lateral incisor tooth. On the lingual side, a groove emerging from the cingulum and continuing mesioapically down the lingual aspect of the tooth was found. Intraoral periapical radiographs demonstrated a lateral periodontal defect around the mesial aspect and a diffuse radiolucency at the apex of the maxillary left lateral incisor tooth. The sinus tract was traced with gutta-percha to the maxillary left lateral incisor, which showed an accessory root surrounded by a large radiolucent area. A spiral computed tomographic scan was performed for better understanding of the complicated root canal morphology of the tooth. Based on the clinical, radiographic and spiral computed tomographic findings, a diagnosis of an endo-perio lesion in tooth 22 was made. Management consisted of conventional root canal treatment, radiculoplasty, root resection of the accessory root and surgical curettage of the periodontal defect. Follow-up with radiographic examination at 3 months and 1 year was performed. At the 1-year recall, the patient was asymptomatic, there was no evidence of the sinus tract and a 3-mm nonbleeding pocket was present in relation to tooth 22. Progression of hard tissue healing was observed in the periapical radiograph taken 1 year postoperatively. The key to achieving favourable results in this particular type of developmental anomaly is accurate diagnosis and treatment planning. The health of the periapical osseous tissues appears to be the pivotal factor for tooth retention. A favourable outcome
Zhao, Lei; Gossmann, Toni I; Waxman, David
2016-03-21
The Wright-Fisher model is an important model in evolutionary biology and population genetics. It has been applied in numerous analyses of finite populations with discrete generations. It is recognised that real populations can behave, in some key aspects, as though their size is not the census size, N, but rather a smaller size, namely the effective population size, Ne. However, in the Wright-Fisher model, there is no distinction between the effective and census population sizes. Equivalently, we can say that in this model, Ne coincides with N. The Wright-Fisher model therefore lacks an important aspect of biological realism. Here, we present a method that allows Ne to be directly incorporated into the Wright-Fisher model. The modified model involves matrices whose size is determined by Ne. Thus, apart from increased biological realism, the modified model also has reduced computational complexity, particularly so when Ne ≪ N. For complex problems, it may be hard or impossible to numerically analyse the most commonly used approximation of the Wright-Fisher model that incorporates Ne, namely the diffusion approximation. An alternative approach is simulation. However, the simulations need to be sufficiently detailed that they yield an effective size that is different from the census size. Simulations may also be time consuming and have attendant statistical errors. The method presented in this work may then be the only alternative to simulations when Ne differs from N. We illustrate the straightforward application of the method to some problems involving allele fixation and the determination of the equilibrium site frequency spectrum. We then apply the method to the problem of fixation when three alleles are segregating in a population. This latter problem is significantly more complex than a two-allele problem and, since the diffusion equation cannot be numerically solved, the only other way Ne can be incorporated into the analysis is by simulation. We have
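For contrast with the modified model described in this abstract, the classic neutral Wright-Fisher transition matrix, with its dimension set by an effective size Ne rather than the census size, can be written down directly. The construction below is the textbook binomial-sampling matrix, not the paper's specific modification; the values of Ne and the starting frequency are illustrative.

```python
import numpy as np
from math import comb

def wf_matrix(Ne):
    """Neutral Wright-Fisher transition matrix for 2*Ne gene copies."""
    size = 2 * Ne + 1
    P = np.zeros((size, size))
    for i in range(size):
        p = i / (2 * Ne)
        for j in range(size):
            # binomial sampling of the next generation's allele count
            P[i, j] = comb(2 * Ne, j) * p**j * (1 - p)**(2 * Ne - j)
    return P

Ne = 20                       # illustrative effective size
P = wf_matrix(Ne)
state = np.zeros(2 * Ne + 1)
state[10] = 1.0               # start at allele frequency 10/40 = 0.25
for _ in range(5000):         # iterate generations until absorbed
    state = state @ P
# neutral theory: fixation probability equals the initial frequency
assert abs(state[-1] - 0.25) < 1e-3
```

The matrix has (2*Ne + 1)² entries, which is the source of the computational saving the abstract notes when Ne ≪ N.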
Balaev, V. V.; Lashkov, A. A., E-mail: alashkov83@gmail.com; Gabdoulkhakov, A. G.; Seregina, T. A.; Dontsova, M. V.; Mikhailov, A. M. (Russian Academy of Sciences, Shubnikov Institute of Crystallography, Russian Federation)
2015-03-15
Pseudotuberculosis and bubonic plague are acute infectious diseases caused by the bacteria Yersinia pseudotuberculosis and Yersinia pestis. These diseases are treated, in particular, with trimethoprim and its modified analogues. However, uridine phosphorylases (pyrimidine nucleoside phosphorylases) that are present in bacterial cells neutralize the action of trimethoprim and its modified analogues on the cells. In order to reveal the character of the interaction of the drug with bacterial uridine phosphorylase, the atomic structure of the unligated molecule of uridine-specific pyrimidine nucleoside phosphorylase from Yersinia pseudotuberculosis (YptUPh) was determined by X-ray diffraction at 1.7 Å resolution with high reliability (Rwork = 16.2%, Rfree = 19.4%; r.m.s.d. of bond lengths and bond angles are 0.006 Å and 1.005°, respectively; DPI = 0.107 Å). The atoms of the amino acid residues of the functionally important secondary-structure elements, the loop L9 and the helix H8, of the enzyme YptUPh were located. The three-dimensional structure of the complex of YptUPh with a modified trimethoprim, referred to as 53I, was determined by computer simulation. It was shown that 53I is a pseudosubstrate of uridine phosphorylases, and its pyrimidine-2,4-diamine group is located in the phosphate-binding site of the enzyme YptUPh.
NSC KIPT accelerator on nuclear and high energy physics
Guk, I.S.; Dovbnya, A.N.; Kononenko, S.G.; Tarasenko, A.S.; Botman, J.I.M.; Wiel, van der M.J.
2004-01-01
One of the main reasons for the outflow of experts in nuclear physics and adjacent areas of science from Ukraine is the absence of modern accelerator facilities for conducting research in the fields of current worldwide interest in this area of knowledge. A qualitatively new level of research can
NSC KIPT accelerator on nuclear and high energy physics
Dovbnya, A.N.; Guk, I.S.; Kononenko, S.G.; Wiel, van der M.J.; Botman, J.I.M.; Tarasenko, A.S.
2004-01-01
A qualitatively new level of research can be achieved by creating an accelerator that will incorporate the latest technological achievements in the field of electron beam acceleration on the basis of a superconducting TESLA accelerating structure. This structure permits the production of both quasi-continuous
Würtz, Rolf P
2008-01-01
Organic Computing is a research field emerging around the conviction that problems of organization in complex systems in computer science, telecommunications, neurobiology, molecular biology, ethology, and possibly even sociology can be tackled scientifically in a unified way. From the computer science point of view, the apparent ease in which living systems solve computationally difficult problems makes it inevitable to adopt strategies observed in nature for creating information processing machinery. In this book, the major ideas behind Organic Computing are delineated, together with a sparse sample of computational projects undertaken in this new field. Biological metaphors include evolution, neural networks, gene-regulatory networks, networks of brain modules, hormone system, insect swarms, and ant colonies. Applications are as diverse as system design, optimization, artificial growth, task allocation, clustering, routing, face recognition, and sign language understanding.
Masato Kiyoshi
The optimization of antibodies is a desirable goal towards the development of better therapeutic strategies. The antibody 11K2 was previously developed as a therapeutic tool for inflammatory diseases, and displays very high affinity (4.6 pM) for its antigen, the chemokine MCP-1 (monocyte chemoattractant protein-1). We have employed a virtual library of mutations of 11K2 to identify antibody variants of potentially higher affinity, and to establish benchmarks in the engineering of a mature therapeutic antibody. The most promising candidates identified in the virtual screening were examined by surface plasmon resonance to validate the computational predictions and to characterize their binding affinity and key thermodynamic properties in detail. Only mutations in the light chain of the antibody are effective at enhancing its affinity for the antigen in vitro, suggesting that the interaction surface of the heavy chain (dominated by the hot-spot residue Phe101) is not amenable to optimization. The single mutation with the highest affinity is L-N31R (4.6-fold higher affinity than the wild-type antibody). Importantly, all the single mutations showing increased affinity incorporate a charged residue (Arg, Asp, or Glu). The characterization of the relevant thermodynamic parameters clarifies the energetic mechanism. Essentially, the formation of new electrostatic interactions early in the binding reaction coordinate (at the transition state or earlier) benefits the durability of the antibody-antigen complex. The combination of in silico calculations and thermodynamic analysis is an effective strategy to improve the affinity of a matured therapeutic antibody.
1988-08-01
This bibliography contains citations concerning computational fluid dynamics (CFD), a new method in computational science for performing complex flow simulations in three dimensions. Applications include aerodynamic design and analysis for aircraft, rockets, missiles, and automobiles; heat-transfer studies; and combustion processes. Included are references to supercomputers, array processors, and parallel processors where needed for complete, integrated design. Also included are software packages and grid-generation techniques required to apply CFD numerical solutions. Numerical methods for fluid dynamics not requiring supercomputers are found in a separately published search. (Contains 83 citations, fully indexed, and includes a title list.)
Rasmussen, Lykke
One of the major challenges in today's post-crisis finance environment is calculating the sensitivities of complex products for hedging and risk management. Historically, these sensitivities have been determined using bump-and-revalue, but the increasing magnitude of these computations does...
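Bump-and-revalue computes a sensitivity by perturbing one input and revaluing the product, so every additional sensitivity costs additional full valuations; that is the growing computational burden referred to above. A minimal sketch for the delta of a Black-Scholes call (the thesis concerns more complex products; all names here are illustrative):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    # textbook Black-Scholes price of a European call
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

def bump_delta(price, s, h=1e-4, **kw):
    # central-difference bump-and-revalue:
    # two extra full valuations per sensitivity
    return (price(s + h, **kw) - price(s - h, **kw)) / (2 * h)
```

Central differencing needs two revaluations per risk factor, so a product exposed to hundreds of factors multiplies the valuation cost accordingly; this is exactly where adjoint (algorithmic differentiation) methods become attractive.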
(1) Standard practice for assessing developmental toxicity is the observation of apical endpoints (intrauterine death, fetal growth retardation, structural malformations) in pregnant rats/rabbits following exposure during organogenesis. EPA’s computational toxicology research pro...
Pérez, Serge; Tubiana, Thibault; Imberty, Anne; Baaden, Marc
2015-05-01
A molecular visualization program tailored to deal with the range of 3D structures of complex carbohydrates and polysaccharides, either alone or in their interactions with other biomacromolecules, has been developed using advanced technologies elaborated by the video games industry. All the specific structural features displayed by the simplest to the most complex carbohydrate molecules have been considered and can be depicted. This concerns monosaccharide identification and classification, conformations, location in single or multiple branched chains, depiction of secondary structural elements and the essential constituting elements in very complex structures. Particular attention was given to complying with the accepted nomenclature and pictorial representation used in glycoscience. This achievement provides a continuum between the most popular ways to depict the primary structures of complex carbohydrates and the visualization of their 3D structures, while giving users many options to select the most appropriate modes of representation, including new features such as the use of textures to depict some molecular properties. These developments are incorporated in a stand-alone viewer capable of displaying molecular structures, biomacromolecule surfaces and complex interactions of biomacromolecules, with powerful, artistic and illustrative rendering methods. The result is open-source software compatible with multiple platforms (Windows, MacOS and Linux operating systems, and web pages) that produces publication-quality figures. The algorithms and visualization enhancements are demonstrated using a variety of carbohydrate molecules, from glycan determinants to glycoproteins and complex protein-carbohydrate interactions, as well as very complex mega-oligosaccharides, bacterial polysaccharides and multi-stranded polysaccharide architectures.
Cardoso, Joao MP
2011-01-01
As the complexity of modern embedded systems increases, it becomes less practical to design monolithic processing platforms. As a result, reconfigurable computing is being adopted widely for more flexible design. Reconfigurable Computers offer the spatial parallelism and fine-grained customizability of application-specific circuits with the postfabrication programmability of software. To make the most of this unique combination of performance and flexibility, designers need to be aware of both hardware and software issues. FPGA users must think not only about the gates needed to perform a comp
Au, Vonika Ka-Man; Lam, Wai Han; Wong, Wing-Tak; Yam, Vivian Wing-Wah
2012-07-16
A novel class of luminescent gold(III) complexes containing various tridentate cyclometalating ligands derived from 6-phenyl-2,2'-bipyridine and alkynyl ligands, [Au(RC^N^N)(C≡C-R')]PF6, has been successfully synthesized and characterized. The structure of one of the complexes has also been determined by X-ray crystallography. Electrochemical studies show a ligand-centered reduction originating from the cyclometalating RC^N^N ligands as well as an alkynyl-centered oxidation. The electronic absorption and photoluminescence properties of the complexes have also been investigated. In acetonitrile at room temperature, the complexes show intense absorption in the higher-energy region at wavelengths shorter than 320 nm, and a moderately intense broad absorption band at 374-406 nm, assigned as the metal-perturbed intraligand π-π* transition of the cyclometalating RC^N^N ligand, with some charge transfer character from the aryl ring to the bipyridine moiety. Most of the complexes have been observed to show vibronic-structured emission bands at 469-550 nm in butyronitrile glass at 77 K, assigned to an intraligand excited state of the RC^N^N ligand, with some charge transfer character from the aryl to the bipyridine moiety. Insights into the origin of the absorption and emission have also been provided by density functional theory (DFT) and time-dependent density functional theory (TDDFT) calculations.
Tabti, Salima; Djedouani, Amel; Aggoun, Djouhra; Warad, Ismail; Rahmouni, Samra; Romdhane, Samir; Fouzi, Hosni
2018-03-01
The reaction of nickel(II), copper(II) and cobalt(II) with 4-hydroxy-3-[(2E)-3-(1H-indol-3-yl)prop-2-enoyl]-6-methyl-2H-pyran-2-one (HL) leads to a series of new complexes: Ni(L)2(NH3), Cu(L)2(DMF)2 and Co(L)2(H2O). The crystal structure of the Cu(L)2(DMF)2 complex has been determined by X-ray diffraction methods. The Cu(II) ion, lying on an inversion centre, is coordinated to six oxygen atoms forming an elongated octahedron. Additionally, the electrochemical behavior of the metal complexes was investigated by cyclic voltammetry at a glassy carbon electrode (GC) in CH3CN solutions, showing the quasi-reversible redox process ascribed to the reduction of the MII/MI couples. The X-ray single-crystal structure data of the complex matched excellently with the optimized monomer structure of the desired compound; Hirshfeld surface analysis supported the intermolecular forces in the packed crystal lattice 3D network. The HOMO/LUMO energy levels and the global reactivity descriptor quantum parameters are also calculated. The electrophilic and nucleophilic positions on the complex surface are theoretically evaluated by molecular electrostatic potential and Mulliken atomic charge analysis.
Sarma, Monima; Chatterjee, Tanmay; Bodapati, Ramakrishna; Krishnakanth, Katturi Naga; Hamad, Syed; Rao, S Venugopal; Das, Samar K
2016-04-04
This article demonstrates a series of cyclometalated Ir(III) complexes of the type [Ir(III)(C^N)2(N^N)](PF6), where C^N is 2-phenylpyridine and N^N corresponds to 4,4'-π-conjugated 2,2'-bipyridine ancillary ligands. All these compounds were synthesized through splitting of the binuclear dichloro-bridged complex precursor, [Ir(C^N)2(μ-Cl)]2, with the appropriate bipyridine ligands, followed by an anion exchange reaction. The linear and nonlinear absorption properties of the synthesized complexes were investigated. The absorption spectra of all the title complexes exhibit a broad structureless feature in the spectral region of 350-700 nm, with two bands being well resolved in most cases. The structures of all the compounds were modeled in dichloromethane using density functional theory (DFT). The nature of the electronic transitions was further comprehended on the basis of time-dependent DFT analysis, which indicates that the various bands originate primarily from intraligand charge transfer transitions along with mixed metal- and ligand-centered transitions. The synthesized compounds are found to be nonemissive at room temperature because of probable nonradiative deactivation pathways of the T1 state that compete with the radiative (phosphorescence) decay modes. However, frozen solutions of compounds Ir(MS 3) and Ir(MS 5) phosphoresce in the near-IR region, the other complexes remaining nonemissive up to the 800 nm wavelength window. Two-photon absorption studies on the synthesized complexes reveal that the absorption cross-section values are quite notable and lie in the range of 300-1000 GM in the picosecond case and 45-186 GM in the femtosecond case.
Chval, Zdeněk; Futera, Zdeněk; Burda, Jaroslav V.
2011-01-01
The hydration processes of two representative Ru(II) half-sandwich complexes, Ru(arene)(pta)Cl2 (from the RAPTA family) and [Ru(arene)(en)Cl]+ (further labeled Ru_en), were compared with the analogous reaction of cisplatin using quantum chemical methods. All the complexes were optimized at the B3LYP/6-31G(d) level using the conductor-like polarizable continuum model (CPCM), and single-point (SP) energy calculations and determination of electronic properties were performed at the B3LYP/6-311++G(2df,2pd)/CPCM level. It was found that the hydration model works fairly well for the replacement of the first chloride by water, where acceptable agreement for both Gibbs free energies and rate constants was obtained. However, in the second hydration step, worse agreement between the experimental and calculated values was achieved. In agreement with experimental values, the rate constants for the first step can be ordered as RAPTA-B > Ru_en > cisplatin. The rate constants correlate well with the binding energies (BEs) of the Pt/Ru-Cl bond in the reactant complexes. Substitution reactions on the Ru_en and cisplatin complexes proceed only via a pseudoassociative (associative interchange) mechanism. In the case of RAPTA, on the other hand, a competitive dissociation mechanism with a metastable pentacoordinated intermediate is also possible. The first hydration step is slightly endothermic for all three complexes, by 3-5 kcal/mol. The estimated BEs confirm that the benzene ligand is relatively weakly bonded, given that it occupies three coordination positions of the Ru(II) cation.
Dziembowski, Stefan
In this thesis we study the problem of doing Verifiable Secret Sharing (VSS) and Multiparty Computations in a model where private channels between the players and a broadcast channel are available. The adversary is active, adaptive and has unbounded computing power. The thesis is based on two... Up to a polynomial-time black-box reduction, the complexity of adaptively secure VSS is the same as that of ordinary secret sharing (SS), where security is only required against a passive, static adversary. Previously, such a connection was only known for linear secret sharing and VSS schemes. We then show... here and discuss other problems caused by the adaptiveness. All protocols in the thesis are formally specified and the proofs of their security are given. [1] Ronald Cramer, Ivan Damgård, Stefan Dziembowski, Martin Hirt, and Tal Rabin. Efficient multiparty computations with dishonest minority...
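For contrast with VSS, the "ordinary secret sharing (SS)" baseline that the reduction targets, secure only against a passive, static adversary, can be sketched in a few lines. This is the standard Shamir construction over a small prime field, purely illustrative and not a protocol from the thesis:

```python
import random

PRIME = 2**31 - 1  # a Mersenne prime; the field size is illustrative only

def share(secret, t, n):
    # random polynomial of degree t-1 with constant term = secret;
    # any t of the n shares determine the secret, fewer reveal nothing
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # modular inverse via Fermat's little theorem (PRIME is prime)
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total
```

The verifiable variant additionally forces the dealer and players to prove their shares are consistent, which is where the active, adaptive adversary model makes the problem substantially harder.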
Wang, Xiao-Jing; Krystal, John H.
2014-01-01
Psychiatric disorders such as autism and schizophrenia arise from abnormalities in brain systems that underlie cognitive, emotional and social functions. The brain is enormously complex, and its abundant feedback loops on multiple scales preclude intuitive explication of circuit functions. In close interplay with experiments, theory and computational modeling are essential for understanding how, precisely, neural circuits generate flexible behaviors and how their impairments give rise to psychiatric symptoms. This Perspective highlights recent progress in applying computational neuroscience to the study of mental disorders. We outline basic approaches, including identification of core deficits that cut across disease categories, biologically realistic modeling bridging cellular and synaptic mechanisms with behavior, and model-aided diagnosis. The need for new research strategies in psychiatry is urgent. Computational psychiatry potentially provides powerful tools for elucidating pathophysiology that may inform both diagnosis and treatment. To achieve this promise will require investment in cross-disciplinary training and research in this nascent field. PMID:25442941
Designing with computational intelligence
Lopes, Heitor; Mourelle, Luiza
2017-01-01
This book discusses a number of real-world applications of computational intelligence approaches. Using various examples, it demonstrates that computational intelligence has become a consolidated methodology for automatically creating new competitive solutions to complex real-world problems. It also presents a concise and efficient synthesis of different systems using computationally intelligent techniques.
Design of Computer Experiments
Dehlendorff, Christian
The main topic of this thesis is the design and analysis of computer and simulation experiments, and it is dealt with in six papers and a summary report. Simulation and computer models have in recent years received increasingly more attention due to their increasing complexity and usability. Software packages make the development of rather complicated computer models using predefined building blocks possible. This implies that the range of phenomena that are analyzed by means of a computer model has expanded significantly. As the complexity grows, so does the need for efficient experimental designs and analysis methods, since complex computer models are often expensive to use in terms of computer time. The choice of performance parameter is an important part of the analysis of computer and simulation models, and Paper A introduces a new statistic for waiting times in health care units. The statistic...
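Efficient designs for expensive computer models typically spread a small budget of runs so that every one-dimensional projection of the input space is evenly covered. A minimal Latin hypercube sampler in that spirit (illustrative only; the designs studied in the thesis are more sophisticated):

```python
import numpy as np

def latin_hypercube(n, d, seed=None):
    # n runs in d dimensions on the unit cube: each column gets exactly
    # one point in each of the n equal-width strata, in shuffled order
    rng = np.random.default_rng(seed)
    samples = np.empty((n, d))
    for k in range(d):
        perm = rng.permutation(n)                 # stratum assignment
        samples[:, k] = (perm + rng.random(n)) / n  # jitter within stratum
    return samples
```

Compared with plain Monte Carlo sampling, each input variable is guaranteed uniform marginal coverage with the same number of expensive model evaluations, which is the point of such space-filling designs.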
Advances in unconventional computing
2017-01-01
Unconventional computing is a niche for interdisciplinary science, a cross-breeding of computer science, physics, mathematics, chemistry, electronic engineering, biology, material science and nanotechnology. The aims of this book are to uncover and exploit principles and mechanisms of information processing in, and functional properties of, physical, chemical and living systems, in order to develop efficient algorithms, design optimal architectures and manufacture working prototypes of future and emergent computing devices. This first volume presents theoretical foundations of the future and emergent computing paradigms and architectures. The topics covered are computability, (non-)universality and complexity of computation; physics of computation, analog and quantum computing; reversible and asynchronous devices; cellular automata and other mathematical machines; P-systems and cellular computing; infinity and spatial computation; chemical and reservoir computing. The book is an encyclopedia, the first ever complete autho...
Wategaonkar, Sanjay; Bhattacherjee, Aditi
2018-05-03
The N-H···S hydrogen bond, even though classified as an unconventional hydrogen bond, is found to bear important structural implications on protein structure and folding. In this article, we report a gas-phase study of the N-H···S hydrogen bond between the model compounds of histidine (benzimidazole, denoted BIM) and methionine (dimethyl sulfide, diethyl sulfide, and tetrahydrothiophene, denoted Me2S, Et2S, and THT, respectively). A combination of laser spectroscopic methods such as laser-induced fluorescence (LIF), two-color resonant two-photon ionization (2cR2PI), and fluorescence depletion by infrared spectroscopy (FDIR) is used in conjunction with DFT and ab initio calculations to characterize the nature of this prevalent H-bonding interaction in simple bimolecular complexes. A single conformer was found to exist for the BIM-Me2S complex, whereas the BIM-Et2S and BIM-THT complexes showed the presence of three and two conformers, respectively. These conformers were characterized on the basis of IR spectroscopic results and electronic structure calculations. Quantum theory of atoms in molecules (QTAIM), natural bond orbital (NBO), and natural energy decomposition (NEDA) analyses were performed to investigate the nature of the N-H···S H-bond. Comparison of the results with the N-H···O type of interactions in BIM and indole revealed that the strength of the N-H···S H-bond is similar to that of N-H···O in these binary gas-phase complexes.
Zaharieva, I; Chernev, P; Risch, M; Gerencser, L; Haumann, M; Dau, H [Free University Berlin, FB Physik, Arnimallee 14, D-14195 Berlin (Germany); Berggren, G; Shevchenko, D; Anderlund, M [Dept. of Photochemistry and Molecular Science, Uppsala University, Box 523, S-751 20 Uppsala (Sweden); Weng, T C, E-mail: holger.dau@fu-berlin.d, E-mail: michael.haumann@fu-berlin.d [European Synchrotron Radiation Facility, BP 220, F-38043 Grenoble Cedex (France)
2009-11-15
Advanced X-ray spectroscopy experiments can contribute to elucidation of the mechanism of water oxidation in biological (the tetra-manganese complex of Photosystem II) and artificial systems. Although the electronic structure of the catalytic metal site is of high interest, it is experimentally not easily accessible. Therefore, we and other researchers are working towards a comprehensive approach involving a combination of methods, namely (1) quantitative analysis of X-ray absorption near-edge structure (XANES) spectra collected at the K-edge and, in the long run, at the L-edge of manganese; (2) high-resolution X-ray emission spectroscopy (XES) of Kα and Kβ lines; (3) two-dimensional resonant inelastic X-ray scattering (RIXS) spectra. Collection of these spectroscopic data sets requires state-of-the-art synchrotron radiation facilities as well as experimental strategies to minimize radiation-induced modifications of the samples. Data analysis requires the use and development of appropriate theoretical tools. Here, we present exemplary data collected for three multi-nuclear synthetic Mn complexes with the Mn ions in the oxidation states II, III, and IV, and for Mn(VII) of the permanganate ion. Emission spectra are calculated for the Mn(VII) ion using both the multiple-scattering (MS) approach and time-dependent density functional theory (TDDFT).
Rosendahl, Mads
1989-01-01
One way to analyse programs is to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of the input. We describe a system to derive such time bounds automatically using abstract...
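The idea of a time bound function can be made concrete by instrumenting a program's cost and comparing it against a hand-derived bound. A small sketch (the system in the abstract derives such bounds automatically; the cost measure and names here are illustrative):

```python
def insertion_sort_counted(xs):
    # instrumented insertion sort: counts element swaps as the cost measure
    xs = list(xs)
    steps = 0
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            steps += 1
            j -= 1
    return xs, steps

def time_bound(n):
    # derived worst-case bound: a reverse-sorted input forces every pair
    # of elements to be swapped exactly once, i.e. n(n-1)/2 swaps
    return n * (n - 1) // 2
```

A reverse-sorted input of size n attains the bound exactly, which is what makes n(n-1)/2 a tight worst-case time bound function for this cost measure.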
Altürk, Sümeyye; Avcı, Davut; Başoğlu, Adil; Tamer, Ömer; Atalay, Yusuf; Dege, Necmi
2018-02-05
The crystal structure of the synthesized copper(II) complex with 6-methylpyridine-2-carboxylic acid, [Cu(6-Mepic)2·H2O]·H2O, was determined by XRD and characterized by FT-IR and UV-Vis spectroscopic techniques. Furthermore, the geometry optimization and harmonic vibration frequency calculations for the Cu(II) complex were carried out using density functional theory at the HSEh1PBE/6-311G(d,p)/LanL2DZ level. Electronic absorption wavelengths were obtained using the TD-DFT/HSEh1PBE/6-311G(d,p)/LanL2DZ level with the CPCM model, and major contributions were determined via the SWizard/Chemissian programs. Additionally, the refractive index, linear optical (LO) and nonlinear optical (NLO) parameters of the Cu(II) complex were calculated at the HSEh1PBE/6-311G(d,p) level. The small energy gap, found both experimentally and computationally, indicates charge transfer in the Cu(II) complex. Finally, the hyperconjugative interactions and intramolecular charge transfer (ICT) were studied by performing natural bond orbital (NBO) analysis.
Bartzis, J G; Megaritou, A; Belessiotis, V
1987-09-01
THEAP-I is a computer code developed in NRCPS "DEMOCRITUS" with the aim of contributing to the safety analysis of open-pool research reactors. THEAP-I is designed for three-dimensional, transient thermal/hydraulic analysis of a thermally interacting channel bundle totally immersed in water or air, such as the reactor core. In the present report the mathematical and physical models and the methods of solution are given, as well as the code description and the input data. A sample problem is also included, referring to the Greek Research Reactor analysis under a hypothetical severe loss-of-coolant accident.
S. V. Kotov
2015-01-01
Background: The efficacy of physical exercise and movement imagination for the restoration of motor function after a stroke is considered proven. However, the use of movement imagination is complicated by the impossibility of objective and subjective control over the exercise, as well as by the absence of motor support. The brain-computer interface based on electroencephalography is a technique that enables feedback during movement imagination. Materials and methods: We assessed 10 patients (6 men and 4 women) aged from 30 to 66 years (mean age, 47 ± 7.7 years) with an ischemic (n = 9) or hemorrhagic (n = 1) stroke sustained between 2 months and 4 years earlier. Online recognition of movement imagination was done by a classifier with a brain-computer interface. An exoskeleton supported passive movements in the paretic hand, managed by the brain-computer interface. During 2 weeks the patients had 10 sessions of 45-90 minutes' duration each. For control, we used data from 5 stroke patients who, in addition to their standard treatment, underwent an imitation of the rehabilitation procedures without movement imagination and feedback. To assess the efficacy of treatment, we used a modified Ashworth scale, the Fugl-Meyer scale, the ARAT test for evaluation of hand function, and the British MRC-SS scale for assessment of muscle force. Level of everyday activity and working ability was measured with a modified Rankin scale and the Barthel index. Cognitive functions were assessed with Schulte tables. Results: Online recognition of movement imagination according to desynchronization of the μ rhythm was registered in 50-75% of patients. All patients reported a subjective improvement of motor function and working ability. Positive results for at least one parameter were observed in all patients; however, there were no significant differences between the parameters before and after the rehabilitation procedures, excluding cognitive functions (degree of warming-up, p < 0.02). Conclusion: In post-stroke patients
Song, Tao; Lam, Corey; Ng, Dominic C.; Orlova, G.; Laskin, Julia; Fang, De-Cai; Chu, Ivan K.
2009-01-01
The dissociation of [CuII(L)His]•2+ complexes [L = diethylenetriamine (dien) or 1,4,7-triazacyclononane (9-aneN3)] bears a strong resemblance to the previously reported behavior of [CuII(L)GGH]•2+ complexes. We have used low-energy collision-induced dissociation experiments and density functional theory (DFT) calculations at the B3LYP/6-31+G(d) level to study the macrocyclic effect of the auxiliary ligands on the formation of His•+ from prototypical [CuII(L)His]•2+ systems. DFT revealed that the relative energy barriers of the same electron transfer (ET) dissociation pathways of [CuII(9-aneN3)His]•2+ and [CuII(dien)His]•2+ are very similar, with the ET reactions of [CuII(9-aneN3)His]•2+ leading to the generation of two distinct His•+ species; in contrast, the proton transfer (PT) dissociation pathways of [CuII(9-aneN3)His]•2+ and [CuII(dien)His]•2+ differ considerably. The PT reactions of [CuII(9-aneN3)His]•2+ are associated with substantially higher barriers (>13 kcal/mol) than those of [CuII(dien)His]•2+. Thus, the sterically encumbered auxiliary 9-aneN3 ligand facilitates ET reactions while moderating PT reactions, allowing the formation of hitherto non-observable histidine radical cations.
Karlheinz Schwarz; Rainer Breitling; Christian Allen
2013-01-01
Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems in fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized ...
Foundations of Neuromorphic Computing
2013-05-01
paradigms: few sensors/complex computations and many sensors/simple computation. Challenges with nano-enabled neuromorphic chips: a wide variety of... [Report-form residue: FOUNDATIONS OF NEUROMORPHIC COMPUTING, final technical report, May 2013, approved for public release; reporting period 2009 – Sep 2012.]
Computed tomography for radiographers
Brooker, M.
1986-01-01
Computed tomography is regarded by many as a complicated union of sophisticated x-ray equipment and computer technology. This book overcomes these complexities. The rigid technicalities of the machinery and the clinical aspects of computed tomography are discussed including the preparation of patients, both physically and mentally, for scanning. Furthermore, the author also explains how to set up and run a computed tomography department, including advice on how the room should be designed
Nakamae, Atsuo; Adachi, Nobuo; Nakasa, Tomoyuki; Nishimori, Makoto; Ochi, Mitsuo; Deie, Masataka
2012-01-01
It is desirable to maintain the morphology of the semitendinosus muscle-tendon complex after tendon harvesting for anterior cruciate ligament (ACL) reconstruction. The purpose of this study was to evaluate the effect of knee immobilization on morphological changes in the semitendinosus muscle-tendon complex. In total, 39 patients who underwent ACL reconstruction with autologous semitendinosus tendons were included in this study. After surgery, the knee was immobilized for 3 days in 1 group of patients (group 1; 24 patients; control group) and for a longer period (10-14 days) in the other group (group 2; 15 patients). Three-dimensional computed tomography (3D CT) examination was performed at 6 and/or 12 months after the surgery for all patients. Morphological changes in the semitendinosus muscle-tendon complex (proximal shift of the semitendinosus muscle-tendon junction, width of the regenerated semitendinosus tendons, re-insertion sites of the regenerated tendons, and rate of semitendinosus tendon regeneration) were evaluated. Successful regeneration of the semitendinosus tendon was confirmed in all patients in group 2. In group 1, 3D CT showed that regeneration of the semitendinosus tendon was unsuccessful in 1 of the 24 patients. The average length of the proximal shift of the semitendinosus muscle-tendon junction was 7.3±2.5 cm in group 1 and 7.2±1.9 cm in group 2. There were no significant differences between the 2 groups with regard to the morphological changes in the semitendinosus muscle-tendon complex. This study showed that the structure of regenerated tendons could be clearly identified in 38 of 39 cases (97.4%) after ACL reconstruction. However, prolonged knee immobilization (10-14 days) could not prevent morphological changes in the semitendinosus muscle-tendon complex. (author)
Hanna, Gabriel; Geva, Eitan
2010-01-01
The signature of hydrogen-bond strength on the one- and two-dimensional infrared spectra of the hydrogen-stretch in a hydrogen-bonded complex dissolved in a polar liquid was investigated via mixed quantum-classical molecular dynamics simulations. Non-Condon effects were found to intensify with increasing hydrogen-bond strength and to shift oscillator strength from the stable configurations that correspond to the ionic and covalent tautomers into unstable configurations that correspond to the transition-state between them. The transition-state peak is observed to blue shift and increase in intensity with increasing hydrogen-bond strength, and to dominate the spectra in the case of a strong hydrogen-bond. It is argued that the application of multidimensional infrared spectroscopy in the region of the transition-state peak can provide a uniquely direct probe of the molecular events underlying breaking and forming of hydrogen-bonds in the condensed phase.
Dehghan-Shasaltaneh, Marzieh; Lanjanian, Hossein; Riazi, Gholam Hossein; Masoudi-Nejad, Ali
2018-01-01
Insulin is an important hormone of the endocrine system. It contains two polypeptide chains and plays a pivotal role in regulating carbohydrate metabolism. The insulin receptor (IR), located on the cell surface, interacts with insulin to control the uptake of glucose. Although several studies have tried to clarify the interaction between insulin and its receptor, the mechanism of this interaction remains elusive because of the receptor's structural complexity and the structural changes occurring during the interaction. In this work, we tried to dissect the interaction into stages. A sequential docking approach using HADDOCK was applied as follows: first, two PDB structures of the IR, 3LOH and 3W11, were concatenated using Modeller; second, flexible regions of the IR were predicted by HingeProt; the output files from HingeProt were then uploaded into HADDOCK. Our results predict new salt bridges in the complex and emphasize the role of salt bridges in maintaining the inverted-V structure of the IR, which leads to activation of the intracellular signaling pathway. In addition to the salt bridges that maintain this structure, the importance of the α-chain carboxyl terminal (α-CT) in the interaction with insulin was examined, and new insulin/IR contacts were predicted, particularly at site 2 (rigid parts 2 and 3). Finally, several conformational changes occurred in residues Asn711-Val715 of the α-CT; we suggest that these conformational alterations place the α-CT in a position suitable for interaction with insulin.
Nakagawa, Motoo; Hara, Masaki; Sakurai, Keita; Asano, Miki; Shibamoto, Yuta; Ohashi, Kazuya
2011-01-01
Improved time resolution using dual-source computed tomography (DSCT) enabled adaptation of electrocardiography (ECG)-gated cardiac CT for children with a high heart rate. In this study, we evaluated the ability of ECG-gated DSCT (ECG-DSCT) to depict the morphological ventricular features in patients with congenital heart disease (CHD). Between August 2006 and March 2010, a total of 66 patients with CHD (aged 1 day to 9 years, median 11 months) were analyzed using ECG-DSCT. The type of anomaly was ventricular septal defect (VSD) in 32 (malaligned type in 20, perimembranous type in 7, supracristal type in 3, muscular type in 2), single ventricle (SV) in 11, and corrected transposition of the great arteries (cTGA) in 3. All patients underwent ECG-DSCT and ultrasonography (US). We evaluated the accuracy of diagnosing the type of VSD. For the cases with SV and cTGA, we evaluated the ability to depict anatomical ventricular features. In all 32 cases of VSD, DSCT could confirm the VSD defects, and the findings were identical to those obtained by US. Anatomical configurations of the SV and cTGA were correctly diagnosed, similar to that on US. Our study suggests that ECG-DSCT can clearly depict the configuration of ventricles. (author)
Zobor, E.
1978-12-01
The approach chosen is based on hierarchical control systems theory; however, the fundamentals of other approaches, such as system simplification and system partitioning, are briefly summarized to introduce the problems associated with the control of large-scale systems. The concept of a hierarchical control system acting in a broad variety of operating conditions is developed, and some practical extensions to the hierarchical control system approach are given, e.g. subsystems measured and controlled at different rates, control of a partial state vector, and coordination for autoregressive models. Throughout the work the WWR-SM research reactor of the Institute has been taken as a guiding example, and simple methods for identifying the model parameters from a reactor start-up are discussed. Using the PROHYS digital simulation program elaborated in the course of the present research, detailed simulation studies were carried out to investigate the performance of a control system based on the concept and algorithms developed. To give evidence of a real application, a short description is finally given of the closed-loop computer control system installed, in the framework of a project supported by the Hungarian State Office for Technical Development, at the WWR-SM research reactor, where the results obtained in the present IAEA Research Contract were successfully applied and furnished the expected high performance.
Li, Wei Zhong; Zhang, Mei Chao; Li, Shao Ping; Zhang, Lei Tao; Huang, Yu
2009-06-01
With the advent of CAD/CAM and rapid prototyping (RP), a technical revolution in oral and maxillofacial trauma care has taken place, benefiting the treatment and repair of maxillofacial fractures and the reconstruction of maxillofacial defects. For a patient with zygomatico-facial collapse deformity resulting from a zygomatico-orbito-maxillary complex (ZOMC) fracture, CT scan data were processed using Mimics 10.0 for three-dimensional (3D) reconstruction. The reduction design was aided by 3D virtual imaging, and the 3D skull model was reproduced using the RP technique. In line with the design by Mimics, presurgery was performed on the 3D skull model, and a semi-coronal incision was taken for reduction of the ZOMC fracture, based on the outcome of the presurgery. Postoperative CT images revealed significant correction of the zygomatic collapse, restoration of the zygomatic arch, and well-restored facial symmetry. The CAD/CAM and RP technique is a relatively useful tool that can assist surgeons with reconstruction of the maxillofacial skeleton, especially in repairs of ZOMC fractures.
McBride, Bonnie J.; Gordon, Sanford
1996-01-01
This users manual is the second part of a two-part report describing the NASA Lewis CEA (Chemical Equilibrium with Applications) program. The program obtains chemical equilibrium compositions of complex mixtures with applications to several types of problems. The topics presented in this manual are: (1) details for preparing input data sets; (2) a description of output tables for various types of problems; (3) the overall modular organization of the program with information on how to make modifications; (4) a description of the function of each subroutine; (5) error messages and their significance; and (6) a number of examples that illustrate various types of problems handled by CEA and that cover many of the options available in both input and output. Seven appendixes give information on the thermodynamic and thermal transport data used in CEA; some information on common variables used in or generated by the equilibrium module; and output tables for 14 example problems. The CEA program was written in ANSI standard FORTRAN 77. CEA should work on any system with sufficient storage. There are about 6300 lines in the source code, which uses about 225 kilobytes of memory. The compiled program takes about 975 kilobytes.
Computer scientist looks at reliability computations
Rosenthal, A.
1975-01-01
Results from the theory of computational complexity are applied to reliability computations on fault trees and networks. A well-known class of problems that almost certainly have no fast solution algorithms is presented. It is shown that even approximately computing the reliability of many systems is difficult enough to fall in this class. In the face of this result, which indicates that for general systems the computation time will be exponential in the size of the system, decomposition techniques that can greatly reduce the effective size of a wide variety of realistic systems are explored
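The decomposition idea in the abstract above can be illustrated with a minimal sketch: while general network reliability is believed to be intractable, a system whose structure decomposes into series and parallel modules reduces to fast arithmetic on component reliabilities. The functions and the example layout below are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: reliability of a series-parallel decomposable system.
# Each component works independently with the given reliability (probability).

def series(*rs):
    # A series module works only if every component works: product of reliabilities.
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel(*rs):
    # A parallel (redundant) module fails only if every branch fails.
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# Hypothetical system: two redundant branches (each a series pair) feeding a final unit.
r_sys = series(parallel(series(0.9, 0.9), series(0.8, 0.95)), 0.99)
print(round(r_sys, 4))  # 0.9449
```

Each `series`/`parallel` call collapses a module into a single equivalent component, which is the essence of the decomposition techniques the abstract refers to: the effective system size shrinks at every step.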
Konstantinos Kateros
2018-01-01
Full Text Available Background: Tibial plateau fractures are common due to high-energy injuries. The principles of treatment include respect for the soft tissues, restoring the congruity of the articular surface, and restoration of the anatomic alignment of the lower limb to enable early movement of the knee joint. There are various surgical fixation methods that can achieve these principles of treatment. Recognition of the particular fracture pattern is important, as this guides the surgical approach required in order to adequately stabilize the fracture. This study evaluates the results of the combined treatment of external fixation and limited internal fixation, along with the advantages of using postoperative computed tomography (CT) scans after implant removal. Materials and Methods: 55 patients with a mean age of 42 years (range 17–65 years) with tibial plateau fractures were managed in our institution between October 2010 and September 2013. Twenty fractures were classified as Schatzker VI and 35 as Schatzker V. There were 8 open fractures (2 Gustilo-Anderson 3A and 6 Gustilo-Anderson 2). All fractures were treated with closed reduction and hybrid external fixation (n = 21; 38.2%) or with limited open reduction internal fixation and a hybrid system (n = 34; 61.8%). After removal of the fixators, a CT scan was scheduled for all cases for correlation with the results. At final followup, the American Knee Society Score (AKSS) was administered. Results: All patients were evaluated with a minimum of 12 months (range 12–21 months) of followup. Average time to union was 15.5 weeks (range 13–19 weeks). The postoperative joint congruity as evaluated on the postoperative CT scan was 5° in 19 cases (35%). Patients with residual joint depression 4.5 mm displayed a 100% chance of getting poor-fair scores in both the AKSS knee and AKSS function scores. The association of a postoperative mechanical axis within 5° of the contralateral limb and improved knee scores was statistically
Corrada, Dario; Soshilov, Anatoly A; Denison, Michael S; Bonati, Laura
2016-06-01
The Aryl hydrocarbon Receptor (AhR) is a transcription factor that mediates the biochemical response to xenobiotics and the toxic effects of a number of environmental contaminants, including dioxins. Recently, endogenous regulatory roles for the AhR in normal physiology and development have also been reported, thus extending the interest in understanding its molecular mechanisms of activation. Since dimerization with the AhR Nuclear Translocator (ARNT) protein, occurring through the Helix-Loop-Helix (HLH) and PER-ARNT-SIM (PAS) domains, is needed to convert the AhR into its transcriptionally active form, deciphering the AhR:ARNT dimerization mode would provide insights into the mechanisms of AhR transformation. Here we present homology models of the murine AhR:ARNT PAS domain dimer developed using recently available X-ray structures of other bHLH-PAS protein dimers. Due to the different reciprocal orientation and interaction surfaces in the different template dimers, two alternative models were developed for both the PAS-A and PAS-B dimers and they were characterized by combining a number of computational evaluations. Both well-established hot spot prediction methods and new approaches to analyze individual residue and residue-pairwise contributions to the MM-GBSA binding free energies were adopted to predict residues critical for dimer stabilization. On this basis, a mutagenesis strategy for both the murine AhR and ARNT proteins was designed and ligand-dependent DNA binding ability of the AhR:ARNT heterodimer mutants was evaluated. While functional analysis disfavored the HIF2α:ARNT heterodimer-based PAS-B model, most mutants derived from the CLOCK:BMAL1-based AhR:ARNT dimer models of both the PAS-A and the PAS-B dramatically decreased the levels of DNA binding, suggesting this latter model as the most suitable for describing AhR:ARNT dimerization. These novel results open new research directions focused at elucidating basic molecular mechanisms underlying the
Dario Corrada
2016-06-01
Full Text Available The Aryl hydrocarbon Receptor (AhR) is a transcription factor that mediates the biochemical response to xenobiotics and the toxic effects of a number of environmental contaminants, including dioxins. Recently, endogenous regulatory roles for the AhR in normal physiology and development have also been reported, thus extending the interest in understanding its molecular mechanisms of activation. Since dimerization with the AhR Nuclear Translocator (ARNT) protein, occurring through the Helix-Loop-Helix (HLH) and PER-ARNT-SIM (PAS) domains, is needed to convert the AhR into its transcriptionally active form, deciphering the AhR:ARNT dimerization mode would provide insights into the mechanisms of AhR transformation. Here we present homology models of the murine AhR:ARNT PAS domain dimer developed using recently available X-ray structures of other bHLH-PAS protein dimers. Due to the different reciprocal orientation and interaction surfaces in the different template dimers, two alternative models were developed for both the PAS-A and PAS-B dimers and they were characterized by combining a number of computational evaluations. Both well-established hot spot prediction methods and new approaches to analyze individual residue and residue-pairwise contributions to the MM-GBSA binding free energies were adopted to predict residues critical for dimer stabilization. On this basis, a mutagenesis strategy for both the murine AhR and ARNT proteins was designed and ligand-dependent DNA binding ability of the AhR:ARNT heterodimer mutants was evaluated. While functional analysis disfavored the HIF2α:ARNT heterodimer-based PAS-B model, most mutants derived from the CLOCK:BMAL1-based AhR:ARNT dimer models of both the PAS-A and the PAS-B dramatically decreased the levels of DNA binding, suggesting this latter model as the most suitable for describing AhR:ARNT dimerization. These novel results open new research directions focused at elucidating basic molecular mechanisms
Automated validation of a computer operating system
Dervage, M. M.; Milberg, B. A.
1970-01-01
Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.
Increasing complexity with quantum physics.
Anders, Janet; Wiesner, Karoline
2011-09-01
We argue that complex systems science and the rules of quantum physics are intricately related. We discuss a range of quantum phenomena, such as cryptography, computation and quantum phases, and the rules responsible for their complexity. We identify correlations as a central concept connecting quantum information and complex systems science. We present two examples for the power of correlations: using quantum resources to simulate the correlations of a stochastic process and to implement a classically impossible computational task.
A. A. Frolov
2016-01-01
Full Text Available Background: Rehabilitation of patients with post-stroke motor disorders using a brain-computer interface (BCI) + exoskeleton may raise rehabilitation to a new high-tech level and allow for effective correction of post-stroke dysfunction. Aim: To assess the efficacy of BCI+exoskeleton procedures for neurorehabilitation of patients with post-stroke motor dysfunction. Materials and methods: The study included 40 patients with a history of cerebral stroke (mean age 59±10.4 years; 26 male and 14 female). Thirty-six of them had had an ischemic stroke and 4 a hemorrhagic stroke, from 2 months to 4 years before study entry. All patients had post-stroke hemiparesis of various degrees, predominantly of the arm. The main group patients (n=20), in addition to conventional therapy, had 10 sessions (3 times daily) of BCI+exoskeleton. The BCI recognized the hand ungripping imagined by the patient and, by a feedback signal, the exoskeleton exerted the passive movement in the paretic arm. The control group patients (n=10) had 10 BCI+exoskeleton sessions without imagined movements, with the exoskeleton functioning in a random mode. The comparison group included 10 patients who received only standard treatment. Results: At the end of rehabilitation treatment (day 14), all study groups demonstrated an improvement in the function of the paretic extremity. There was an improvement in functioning and daily activities in the main group compared to the control and comparison groups: the change in the modified Rankin scale score was 0.4±0.1, 0.1±0.1 and 0±0.2 (p<0.05), and in the Barthel scale score, 5.6±0.8, 2.3±0.3 and 1±0.2 (p<0.001), respectively. In the BCI+exoskeleton group the motor function of the paretic arm, assessed by the ARAT scale, improved by 5.5±1.3 points (2.4±0.6 points in the control group and 1.9±0.7 in the comparison group, p<0.05), and as assessed by the Fugl-Meyer scale, by 10.8±1.5 points (3.8
Erdi, Peter
2008-01-01
This book explains why complex systems research is important in understanding the structure, function and dynamics of complex natural and social phenomena. Readers will learn the basic concepts and methods of complex system research.
Kim, Bong Gon; Kim, Jae Sang; Kim, Jin Eun; Lee, Boo Yeon
2006-06-01
This book introduces complex chemistry in ten chapters, covering the historical development of complex chemistry, coordination theory and Werner's coordination theory, and new developments in complex chemistry; nomenclature of complexes, with concepts and definitions; chemical formulas of coordination compounds; stereochemical notation, stereostructure and isomerism; electronic structure and bonding theory of complexes; structural characterization of complexes by methods such as NMR and XAFS; equilibria and reactions in solution; organometallic chemistry; bioinorganic chemistry; materials chemistry of complexes; and complex design and computational chemistry.
Computers in writing instruction
Schwartz, Helen J.; van der Geest, Thea; Smit-Kreuzen, Marlies
1992-01-01
For computers to be useful in writing instruction, innovations should be valuable for students and feasible for teachers to implement. Research findings yield contradictory results in measuring the effects of different uses of computers in writing, in part because of the methodological complexity of
Steane, Andrew
1998-01-01
The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarize not just quantum computing, but the whole subject of quantum information theory. Information can be identified as the most general thing which must propagate from a cause to an effect. It therefore has a fundamentally important role in the science of physics. However, the mathematical treatment of information, especially information processing, is quite recent, dating from the mid-20th century. This has meant that the full significance of information as a basic concept in physics is only now being discovered. This is especially true in quantum mechanics. The theory of quantum information and computing puts this significance on a firm footing, and has led to some profound and exciting new insights into the natural world. Among these are the use of quantum states to permit the secure transmission of classical information (quantum cryptography), the use of quantum entanglement to permit reliable transmission of quantum states (teleportation), the possibility of preserving quantum coherence in the presence of irreversible noise processes (quantum error correction), and the use of controlled quantum evolution for efficient computation (quantum computation). The common theme of all these insights is the use of quantum entanglement as a computational resource. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, this review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error correcting codes, Turing machines and computational complexity. The principles of quantum mechanics are then outlined, and the Einstein, Podolsky and Rosen (EPR) experiment described. The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from
Steane, Andrew [Department of Atomic and Laser Physics, University of Oxford, Clarendon Laboratory, Oxford (United Kingdom)
1998-02-01
The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarize not just quantum computing, but the whole subject of quantum information theory. Information can be identified as the most general thing which must propagate from a cause to an effect. It therefore has a fundamentally important role in the science of physics. However, the mathematical treatment of information, especially information processing, is quite recent, dating from the mid-20th century. This has meant that the full significance of information as a basic concept in physics is only now being discovered. This is especially true in quantum mechanics. The theory of quantum information and computing puts this significance on a firm footing, and has led to some profound and exciting new insights into the natural world. Among these are the use of quantum states to permit the secure transmission of classical information (quantum cryptography), the use of quantum entanglement to permit reliable transmission of quantum states (teleportation), the possibility of preserving quantum coherence in the presence of irreversible noise processes (quantum error correction), and the use of controlled quantum evolution for efficient computation (quantum computation). The common theme of all these insights is the use of quantum entanglement as a computational resource. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, this review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error correcting codes, Turing machines and computational complexity. The principles of quantum mechanics are then outlined, and the Einstein, Podolsky and Rosen (EPR) experiment described. The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from
Maranzana, Andrea; Giordana, Anna; Indarto, Antonius; Tonachini, Glauco; Barone, Vincenzo; Causà, Mauro; Pavone, Michele
2013-12-01
Our purpose is to identify a computational level sufficiently dependable and affordable to assess trends in the interaction of a variety of radical or closed-shell unsaturated hydrocarbons A adsorbed on soot platelet models B. These systems, of environmental interest, would unavoidably have rather large sizes, thus prompting us to explore in this paper the performances of relatively low-level computational methods and compare them with higher-level reference results. To this end, the interaction of three complexes between non-polar species, vinyl radical, ethyne, or ethene (A) with benzene (B) is studied, since these species, involved themselves in growth processes of polycyclic aromatic hydrocarbons (PAHs) and soot particles, are small enough to allow high-level reference calculations of the interaction energy ΔE_AB. Counterpoise-corrected interaction energies ΔE_AB are used at all stages. (1) Density Functional Theory (DFT) unconstrained optimizations of the A-B complexes are carried out, using the B3LYP-D, ωB97X-D, and M06-2X functionals, with six basis sets: 6-31G(d), 6-311G(2d,p), and 6-311++G(3df,3pd); aug-cc-pVDZ and aug-cc-pVTZ; N07T. (2) Then, unconstrained optimizations by Møller-Plesset second-order Perturbation Theory (MP2), with each basis set, allow subsequent single-point Coupled Cluster Singles Doubles and perturbative estimate of the Triples energy computations with the same basis sets [CCSD(T)//MP2]. (3) Based on an additivity assumption of (i) the estimated MP2 energy at the complete basis set limit [E_MP2/CBS] and (ii) the higher-order correlation energy effects in passing from MP2 to CCSD(T) at the aug-cc-pVTZ basis set, ΔE_CC-MP, a CCSD(T)/CBS estimate is obtained and taken as a computational energy reference. At DFT, variations in ΔE_AB with basis set are not large for the title molecules, and the three functionals perform rather satisfactorily even with rather small basis sets [6-31G(d) and N07T], exhibiting deviation from the computational
Maranzana, Andrea; Giordana, Anna; Indarto, Antonius; Tonachini, Glauco; Barone, Vincenzo; Causà, Mauro; Pavone, Michele
2013-01-01
Our purpose is to identify a computational level sufficiently dependable and affordable to assess trends in the interaction of a variety of radical or closed-shell unsaturated hydrocarbons A adsorbed on soot platelet models B. These systems, of environmental interest, would unavoidably have rather large sizes, thus prompting us to explore in this paper the performances of relatively low-level computational methods and compare them with higher-level reference results. To this end, the interaction of three complexes between non-polar species, vinyl radical, ethyne, or ethene (A) with benzene (B) is studied, since these species, involved themselves in growth processes of polycyclic aromatic hydrocarbons (PAHs) and soot particles, are small enough to allow high-level reference calculations of the interaction energy ΔE_AB. Counterpoise-corrected interaction energies ΔE_AB are used at all stages. (1) Density Functional Theory (DFT) unconstrained optimizations of the A-B complexes are carried out, using the B3LYP-D, ωB97X-D, and M06-2X functionals, with six basis sets: 6-31G(d), 6-311G(2d,p), and 6-311++G(3df,3pd); aug-cc-pVDZ and aug-cc-pVTZ; N07T. (2) Then, unconstrained optimizations by Møller-Plesset second-order Perturbation Theory (MP2), with each basis set, allow subsequent single-point Coupled Cluster Singles Doubles and perturbative estimate of the Triples energy computations with the same basis sets [CCSD(T)//MP2]. (3) Based on an additivity assumption of (i) the estimated MP2 energy at the complete basis set limit [E_MP2/CBS] and (ii) the higher-order correlation energy effects in passing from MP2 to CCSD(T) at the aug-cc-pVTZ basis set, ΔE_CC-MP, a CCSD(T)/CBS estimate is obtained and taken as a computational energy reference. At DFT, variations in ΔE_AB with basis set are not large for the title molecules, and the three functionals perform rather satisfactorily even with rather small basis sets [6-31G(d) and N07T], exhibiting deviation from the
Maranzana, Andrea, E-mail: andrea.maranzana@unito.it; Giordana, Anna, E-mail: anna.giordana@hotmail.com; Indarto, Antonius, E-mail: antonius.indarto@che.itb.ac.id; Tonachini, Glauco, E-mail: glauco.tonachini@unito.it [Dipartimento di Chimica, Università di Torino, Corso Massimo D’Azeglio 48, I-10125 Torino (Italy)]; Barone, Vincenzo, E-mail: vincenzo.barone@sns.it [Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)]; Causà, Mauro, E-mail: mauro.causa@unina.it [Dipartimento di Ingegneria Chimica, dei Materiali e della Produzione Industriale, Università di Napoli “Federico II,” Via Cintia, 80126 Napoli (Italy)]; Pavone, Michele, E-mail: mipavone@unina.it [Dipartimento di Scienze Chimiche, Università di Napoli “Federico II,” Complesso Universitario di Monte Sant’Angelo, Via Cintia, I-80126 Napoli (Italy)]
2013-12-28
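The composite estimate in step (3) amounts to simple energy arithmetic: the CCSD(T)/CBS interaction energy is approximated as the MP2/CBS value plus the MP2-to-CCSD(T) correction evaluated at aug-cc-pVTZ. A minimal sketch of that arithmetic follows; all energy values are made-up placeholders, not results from the paper.

```python
# Composite CCSD(T)/CBS estimate via the additivity assumption:
#   dE[CCSD(T)/CBS] ~= dE[MP2/CBS] + (dE[CCSD(T)] - dE[MP2]) at aug-cc-pVTZ
# All numbers below are illustrative placeholders, not data from the paper.

def ccsdt_cbs_estimate(e_mp2_cbs, e_ccsdt_avtz, e_mp2_avtz):
    """Return the composite CCSD(T)/CBS interaction energy (same units as inputs)."""
    delta_cc_mp = e_ccsdt_avtz - e_mp2_avtz  # higher-order correlation correction
    return e_mp2_cbs + delta_cc_mp

# Hypothetical counterpoise-corrected energies (kcal/mol) for an A-B complex:
e_mp2_cbs = -2.90    # MP2 extrapolated to the complete-basis-set limit
e_mp2_avtz = -2.70   # MP2 / aug-cc-pVTZ
e_ccsdt_avtz = -2.30  # CCSD(T) / aug-cc-pVTZ

print(round(ccsdt_cbs_estimate(e_mp2_cbs, e_ccsdt_avtz, e_mp2_avtz), 6))  # -2.5
```

Note that the correction term ΔE_CC-MP is typically small and repulsive for dispersion-bound complexes of this kind, which is what makes the additivity assumption workable.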
McKenna, Thomas M; Zornetzer, Steven F
1992-01-01
This book contains twenty-two original contributions that provide a comprehensive overview of computational approaches to understanding single-neuron structure. The focus on cellular-level processes is twofold. From a computational neuroscience perspective, a thorough understanding of the information processing performed by single neurons leads to an understanding of circuit- and systems-level activity. From the standpoint of artificial neural networks (ANNs), a single real neuron is as complex an operational unit as an entire ANN, and formalizing the complex computations performed by real neurons…
Pastrnak, J.W.
1986-01-01
This eighteen-month study has been successful in providing the designer and analyst with qualitative guidelines on the occurrence of complex modes in the dynamics of linear structures, and also in developing computer codes for determining quantitatively which vibration modes are complex and to what degree. The presence of complex modes in a test structure has been verified. Finite element analysis of a structure with non-proportional damping has been performed. A partial differential equation has been formed to eliminate possible modeling errors
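The abstract does not reproduce the study's codes. As an illustration of the underlying idea, the sketch below casts M x'' + C x' + K x = 0 into first-order state-space form, takes its eigenvectors, and scores each mode's degree of complexity by how far its displacement shape is from being rotatable to a purely real vector (0 for a classical normal mode). The matrices and the particular complexity metric are assumptions for illustration, not the study's.

```python
import numpy as np

def mode_complexity(M, C, K):
    """Return (damped frequency, complexity in [0, 1]) for each vibration mode.

    Complexity ~0 means the mode shape can be phase-rotated to a real vector
    (classical mode, as with proportional damping); larger values indicate
    genuinely complex modes, which arise with non-proportional damping.
    """
    n = M.shape[0]
    # First-order state matrix for M x'' + C x' + K x = 0
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    vals, vecs = np.linalg.eig(A)
    out = []
    for lam, v in zip(vals, vecs.T):
        if lam.imag <= 1e-9:
            continue                      # keep one of each conjugate pair
        disp = v[:n]                      # displacement partition of the mode
        k = np.argmax(np.abs(disp))
        disp = disp * np.exp(-1j * np.angle(disp[k]))   # rotate largest entry real
        mags = np.maximum(np.abs(disp), 1e-12)
        out.append((lam.imag, float(np.max(np.abs(disp.imag) / mags))))
    return out

# Illustrative 2-DOF system
M = np.eye(2)
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
C_prop = 0.05 * K                              # proportional damping: real modes
C_nonp = np.array([[0.5, 0.0], [0.0, 0.0]])    # non-proportional: complex modes

print(mode_complexity(M, C_prop, K))   # complexities near zero
print(mode_complexity(M, C_nonp, K))   # noticeably nonzero complexities
```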
Quantum Computing and the Limits of the Efficiently Computable
CERN. Geneva
2015-01-01
I'll discuss how computational complexity---the study of what can and can't be feasibly computed---has been interacting with physics in interesting and unexpected ways. I'll first give a crash course on computer science's P vs. NP problem, as well as on the capabilities and limits of quantum computers. I'll then touch on speculative models of computation that would go even beyond quantum computers, using (for example) hypothetical nonlinearities in the Schrödinger equation. Finally, I'll discuss BosonSampling---a proposal for a simple form of quantum computing, which nevertheless seems intractable to simulate using a classical computer---as well as the role of computational complexity in the black hole information puzzle.
Blackwell, Kim L
2014-01-01
Progress in Molecular Biology and Translational Science provides a forum for discussion of new discoveries, approaches, and ideas in molecular biology. It contains contributions from leaders in their fields and abundant references. This volume brings together different aspects of, and approaches to, molecular and multi-scale modeling, with applications to a diverse range of neurological diseases. Mathematical and computational modeling offers a powerful approach for examining the interaction between molecular pathways and ionic channels in producing neuron electrical activity. It is well accepted that non-linear interactions among diverse ionic channels can produce unexpected neuron behavior and hinder a deep understanding of how ion channel mutations bring about abnormal behavior and disease. Interactions with the diverse signaling pathways activated by G protein coupled receptors or calcium influx adds an additional level of complexity. Modeling is an approach to integrate myriad data sources into a cohesiv...
Shapes of interacting RNA complexes
Fu, Benjamin Mingming; Reidys, Christian
2014-01-01
Shapes of interacting RNA complexes are studied using a filtration via their topological genus. A shape of an RNA complex is obtained by (iteratively) collapsing stacks and eliminating hairpin loops. This shape-projection preserves the topological core of the RNA complex, and for fixed topological genus there are only finitely many such shapes. Our main result is a new bijection that relates the shapes of RNA complexes with shapes of RNA structures. This allows one to compute the shape polynomial of RNA complexes via the shape polynomial of RNA structures. We furthermore present a linear-time uniform sampling algorithm for shapes of RNA complexes of fixed topological genus.
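As a toy illustration of only the stack-collapsing step of the shape projection (hairpin-loop elimination and genus bookkeeping are omitted), a secondary structure given as base pairs (i, j) can be reduced by replacing each run of stacked pairs (i, j), (i+1, j−1), … with a single representative pair. The representation and function name below are assumptions, not the paper's notation.

```python
def collapse_stacks(pairs):
    """Collapse each run of stacked base pairs (i, j), (i+1, j-1), ... into
    one representative pair. A toy step toward the 'shape' of a structure;
    the full construction also eliminates hairpin loops and tracks genus.
    """
    kept, prev = [], None
    for i, j in sorted(pairs):
        if prev is not None and (i, j) == (prev[0] + 1, prev[1] - 1):
            prev = (i, j)          # still inside the current stack: absorb
        else:
            kept.append((i, j))    # outermost pair of a new stack
            prev = (i, j)
    return kept

# Two stacks: {(1,10),(2,9),(3,8)} and {(12,20),(13,19)}
print(collapse_stacks([(1, 10), (2, 9), (3, 8), (12, 20), (13, 19)]))
# → [(1, 10), (12, 20)]
```

Since each stack contributes one pair regardless of its length, iterating this projection quickly reaches the finite set of shapes the abstract refers to.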
Measuring Complexity of SAP Systems
Ilja Holub
2016-10-01
The paper discusses the reasons for the rise in complexity of the ERP system SAP R/3 and proposes a method for measuring the complexity of SAP. Based on this method, a computer program in ABAP for measuring the complexity of a particular SAP implementation is proposed as a tool for keeping ERP complexity under control. The main principle of the measurement method is counting the number of items or relations in the system. The proposed computer program is based on counting records in organization tables in SAP.
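The paper's tool is an ABAP program reading SAP organization tables; as a language-neutral sketch of the counting principle (number of organizational items plus number of relations between them), with invented table names and contents:

```python
# Illustrative sketch of the complexity metric: count organizational items
# and relations. Table names and contents are invented, not real SAP data.

org_tables = {
    "company_codes": ["1000", "2000"],
    "plants": ["P100", "P200", "P300"],
    "sales_orgs": ["S001"],
}
# Assignments of plants to company codes (a relation between org units)
relations = [("1000", "P100"), ("1000", "P200"), ("2000", "P300")]

complexity = sum(len(rows) for rows in org_tables.values()) + len(relations)
print(complexity)  # 9
```

The ABAP original would obtain the counts with SELECT COUNT over the relevant customizing tables; the metric itself is just this sum.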
Alexander V Maltsev
2017-08-01
Intracellular local Ca releases (LCRs) from the sarcoplasmic reticulum (SR) regulate cardiac pacemaker cell function by activation of the electrogenic Na/Ca exchanger (NCX) during diastole. Prior studies demonstrated the existence of powerful compensatory mechanisms of LCR regulation via a complex local cross-talk of Ca pump, release, and NCX. One major obstacle to studying these mechanisms is that LCRs exhibit complex Ca release propagation patterns (including merges and separations) that have not been characterized. Here we developed new terminology, classification, and computer algorithms for automatic detection of numerically simulated LCRs and examined LCR regulation by the SR Ca pumping rate (Pup), which provides a major contribution to the fight-or-flight response. In our simulations, faster SR Ca pumping accelerates action potential-induced Ca transient decay and quickly clears Ca under the cell membrane in diastole, preventing premature releases. The SR then generates an earlier, more synchronized, and stronger diastolic LCR signal, activating an earlier and larger inward NCX current. LCRs at higher Pup exhibit larger amplitudes and faster propagation, with more collisions with each other. The LCRs overlap with the Ca transient decay, causing an elevation of the average diastolic [Ca] nadir to ~200 nM (at Pup = 24 mM/s). Background Ca (in locations lacking LCRs) quickly decays to resting Ca levels (<100 nM) at high Pup, but remains elevated during the slower decay at low Pup. Release propagation is facilitated at higher Pup by a larger LCR amplitude, and at low Pup by higher background Ca. While at low Pup LCRs show smaller amplitudes, their longer durations and larger sizes, combined with the longer transient decay, stabilize the integrals of the diastolic Ca and NCX current signals. Thus, the local interplay of SR Ca pump and release channels regulates LCRs and Ca transient decay to ensure fail-safe pacemaker cell operation within a wide range of rates.
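The detection algorithms themselves are not given in the abstract. A minimal thresholding core for event detection in a 1-D [Ca] trace might look like the following; the published algorithm works on 2-D spatiotemporal release patterns and additionally classifies merges and separations, which this sketch omits, and the trace values are invented.

```python
def detect_releases(ca, threshold):
    """Find contiguous supra-threshold segments in a 1-D [Ca] trace.

    Returns a (start_index, length, peak) tuple per detected event.
    Only the thresholding core of event detection; real LCR detection
    operates on 2-D spatiotemporal data and tracks merges/separations.
    """
    events, start = [], None
    for i, c in enumerate(ca):
        if c > threshold and start is None:
            start = i                                  # event onset
        elif c <= threshold and start is not None:
            events.append((start, i - start, max(ca[start:i])))
            start = None                               # event ended
    if start is not None:                              # event runs to trace end
        events.append((start, len(ca) - start, max(ca[start:])))
    return events

# Invented trace in nM: baseline ~100 with two transient local releases
trace = [100, 105, 240, 310, 260, 120, 98, 101, 205, 400, 150, 95]
print(detect_releases(trace, 200))  # → [(2, 3, 310), (8, 2, 400)]
```

Measures such as the diastolic [Ca] nadir or signal integrals discussed above would then be computed from the same trace between detected events.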