WorldWideScience

Sample records for kipt computed complex

  1. Plant model of KIPT neutron source facility simulator

    International Nuclear Information System (INIS)

    Cao, Yan; Wei, Thomas Y.; Grelle, Austin L.; Gohar, Yousry

    2016-01-01

    Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat to the atmosphere outside the facility buildings. In addition, the electron accelerator has a low efficiency for generating the electron beam, so another secondary cooling loop removes the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this function. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during steady-state and transient conditions using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model is written in the Python scripting language. It is consistent with the computer language of the plant control system, is easy to integrate with the simulator without an additional interface, and is able to simulate the transients of the cooling systems with system control variables changing in real time.

  2. Plant model of KIPT neutron source facility simulator

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Yan [Argonne National Lab. (ANL), Argonne, IL (United States); Wei, Thomas Y. [Argonne National Lab. (ANL), Argonne, IL (United States); Grelle, Austin L. [Argonne National Lab. (ANL), Argonne, IL (United States); Gohar, Yousry [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-02-01

    Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat to the atmosphere outside the facility buildings. In addition, the electron accelerator has a low efficiency for generating the electron beam, so another secondary cooling loop removes the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this function. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during steady-state and transient conditions using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model is written in the Python scripting language. It is consistent with the computer language of the plant control system, is easy to integrate with the simulator without an additional interface, and is able to simulate the transients of the cooling systems with system control variables changing in real time.
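
    The report's governing equations are not reproduced in these records, but the approach lends itself to a compact illustration. Below is a minimal lumped-parameter sketch, in Python like the plant model itself, of one primary cooling loop responding to a flow transient; the energy-balance form, parameter values, and function names are illustrative assumptions, not the report's actual model.

      # Minimal lumped-parameter sketch of one cooling-loop transient.
      # Equation form and all numbers are illustrative assumptions.
      def step_loop_temperature(T, power_kw, mdot_kg_s, T_inlet, dt_s,
                                mass_kg=500.0, cp_kj_kg_k=4.18):
          """Advance coolant outlet temperature T (deg C) by one time step.

          Energy balance: m*cp*dT/dt = P - mdot*cp*(T - T_inlet).
          """
          dTdt = (power_kw - mdot_kg_s * cp_kj_kg_k * (T - T_inlet)) / (mass_kg * cp_kj_kg_k)
          return T + dTdt * dt_s

      # Example: the 300-kW subcritical-assembly loop responding to a pump
      # trip (flow halved at t = 60 s); control variables change in real time.
      T, t = 45.0, 0.0
      while t < 120.0:
          mdot = 10.0 if t < 60.0 else 5.0
          T = step_loop_temperature(T, power_kw=300.0, mdot_kg_s=mdot,
                                    T_inlet=30.0, dt_s=0.1)
          t += 0.1
      print(f"outlet temperature after transient: {T:.1f} C")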

  3. NSC KIPT Linux cluster for computing within the CMS physics program

    International Nuclear Information System (INIS)

    Levchuk, L.G.; Sorokin, P.V.; Soroka, D.V.

    2002-01-01

    The architecture of the NSC KIPT specialized Linux cluster constructed for carrying out CMS physics simulations and data processing is described. The configuration of the portable batch system (PBS) on the cluster is outlined. The capabilities of the cluster in its current configuration to perform CMS physics simulations are pointed out.

  4. Development of Plasma Technologies at IPP NSC KIPT

    International Nuclear Information System (INIS)

    Tereshin, V.I.; Chebotarev, V.V.; Garkusha, I.E.; Lapshin, V.I.; Shvets, O.M.; Taran, V.S.

    2001-01-01

    Plasma technologies at the Institute of Plasma Physics of the NSC KIPT have recently been developed in the following directions. First, modification of material surfaces by irradiation with pulsed plasma streams of different working gases: besides the traditional analysis of improved tribological characteristics and structural-phase changes of the modified layers, we have recently started investigating the improvement of the corrosion characteristics of materials under the influence of pulsed plasma on their surfaces. As for surface coatings in arc discharges of Bulat-type devices, new trends are related to multi-layer coatings, the use of Ti-Al-N coatings on cutting tools, and the use of high-frequency or combined HF and arc discharges to increase the range of goods that can be coated. The development of ozonators is a relatively new area for the IPP NSC KIPT; on the basis of the high-frequency barrier discharge, a number of high-efficiency ozonators with an ozone production of up to 100 g/hour have been developed. (author)

  5. Burnup calculations for KIPT accelerator driven subcritical facility using Monte Carlo computer codes-MCB and MCNPX

    International Nuclear Information System (INIS)

    Gohar, Y.; Zhong, Z.; Talamo, A.

    2009-01-01

    Argonne National Laboratory (ANL) of the USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron-accelerator-driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100-kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and an electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukrainian nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100-kW electron beam power, the total thermal power of the facility is ∼375 kW, including the fission power of ∼260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during the operation, which reduces the neutron flux level and consequently the facility performance. To preserve the neutron flux level during the operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. The recent developments of Monte Carlo computer codes, the high speed of computer processors, and parallel computation techniques have made it possible to perform detailed three-dimensional burnup simulations. A full, detailed three-dimensional geometrical model is used for the burnup simulations, with continuous-energy nuclear data libraries for the transport calculations and 63-group or one-group cross-section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the electrons and the
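
    The reactivity loss that drives this refueling requirement comes from the depletion of the fissile inventory. The actual analyses use MCNPX/MCB with full 3-D geometry and continuous-energy data; the toy one-group depletion step below only illustrates why the fissile fraction, and with it the reactivity, falls during operation. The cross section and flux are assumed round numbers.

      # Toy one-group depletion sketch: N(t) = N0 * exp(-sigma_a * phi * t)
      # for a single nuclide. Illustrative assumptions throughout.
      import math

      SIGMA_A_U235 = 600e-24   # one-group absorption cross section, cm^2 (assumed)
      PHI = 1.0e13             # neutron flux, n/cm^2/s (assumed)
      SECONDS_PER_DAY = 86400.0

      def deplete(n_u235, days):
          """Remaining U-235 atoms after 'days' of constant-flux burnup."""
          return n_u235 * math.exp(-SIGMA_A_U235 * PHI * days * SECONDS_PER_DAY)

      n0 = 1.0e21  # initial U-235 atoms per cm^3 (assumed)
      for d in (0, 100, 300, 600):
          print(f"day {d:4d}: U-235 fraction remaining = {deplete(n0, d) / n0:.4f}")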

  6. Ukraine-Japanese-Swedish project: Upgrading of perimeter protection system at Kharkov Institute of Physics and Technology (KIPT)

    International Nuclear Information System (INIS)

    Mikahaylov, V.; Lapshin, V.; Ek, P.; Flyghed, L.; Nilsson, A.; Ooka, N.; Shimizu, K.; Tanuma, K.

    2001-01-01

    Full text: Since Ukraine voluntarily accepted the status of a non-nuclear-weapons state and concluded a Safeguards Agreement with the IAEA, the Kharkov Institute of Physics and Technology (KIPT), as a nuclear facility using nuclear material of category 1, has become a Ukrainian priority object for the international community's efforts to ensure nuclear non-proliferation measures and to bring the existing protection systems up to generally accepted security standards. In March 1996, at a meeting held under the auspices of the IAEA in Kiev, the representatives of Japan, Sweden and the USA agreed to provide technical assistance for improving the nuclear material accountancy and control and physical protection system (MPC and A) available at KIPT. The Technical Secretariat of the Japan-Ukraine Committee for Co-operation on Reducing Nuclear Weapons and the Swedish Nuclear Power Inspectorate undertook to solve the most expensive and labour-consuming task, namely, the upgrading of the perimeter protection system at KIPT. This meant that the existing perimeter system, comprising several kilometers, had to be completely replaced. Moreover, the upgrading had to be carried out with the institute in operation: it was not allowed to replace the existing protection system with a new one unless KIPT remained constantly protected. This required the creation of a new protected zone that to a large extent was occupied by communication equipment, buildings, trees and other objects interfering with the work. All these difficulties required very comprehensive development of the project design as well as a great deal of flexibility during the implementation of the project. These problems were all successfully resolved thanks to a well-working project organization, composed of experts from KIPT, JAERI and ANS, involving the highly qualified Swedish technical experts who played a leading role. In the framework of implementation of the

  7. Accelerator shield design of KIPT neutron source facility

    International Nuclear Information System (INIS)

    Zhong, Z.; Gohar, Y.

    2013-01-01

    Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the design development of a neutron source facility at KIPT utilizing an electron-accelerator-driven subcritical assembly. The electron beam power is 100 kW, using 100-MeV electrons. The facility is designed to perform basic and applied nuclear research, produce medical isotopes, and train young nuclear specialists. The biological shield of the accelerator building is designed to reduce the biological dose to less than 0.5 mrem/hr during operation. The main source of the biological dose is the photons and neutrons generated by interactions of electrons leaked from the electron gun and accelerator sections with the surrounding concrete and accelerator materials. The Monte Carlo code MCNPX serves as the calculation tool for the shield design, due to its capability to solve coupled electron, photon, and neutron transport problems. The direct photon dose can be tallied by an MCNPX calculation starting with the leaked electrons. However, it is difficult to accurately tally the neutron dose directly from the leaked electrons: the neutron yield from the interactions with the surrounding components is less than 0.01 neutron per electron. This causes difficulties for Monte Carlo analyses and consumes tremendous computation time to tally the neutron dose outside the shield boundary with acceptable statistics. To avoid these difficulties, the SOURCE and TALLYX user subroutines of MCNPX were developed for this study. The generated neutrons are banked, together with all related parameters, for a subsequent MCNPX calculation to obtain the neutron and secondary-photon doses. The weight-windows variance reduction technique is utilized for both the neutron and photon dose calculations. Two shielding materials, i.e., heavy concrete and ordinary concrete, were considered for the shield design. The main goal is to maintain the total
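
    The two-stage source-banking idea is worth a small sketch: a first (electron) stage writes every rare neutron-production event to a bank file, so the second stage can replay the bank as its source without repeating the expensive electron transport. The file format and helper names below are hypothetical illustrations, not the MCNPX SOURCE/TALLYX interface itself.

      # Sketch of two-stage source banking (hypothetical format, not MCNPX).
      import json, random

      def stage1_bank_neutrons(n_electrons, path, p_neutron=0.01):
          """Pretend electron run: bank (position, energy, weight) of each
          rare neutron-production event (<0.01 neutrons per electron)."""
          with open(path, "w") as f:
              for _ in range(n_electrons):
                  if random.random() < p_neutron:
                      event = {"x_cm": random.uniform(-5, 5),
                               "E_mev": random.uniform(0.1, 10.0),
                               "weight": 1.0}
                      f.write(json.dumps(event) + "\n")

      def stage2_replay(path):
          """Second run: read the banked neutrons back as a fixed source."""
          with open(path) as f:
              return [json.loads(line) for line in f]

      stage1_bank_neutrons(100_000, "neutron_bank.jsonl")
      bank = stage2_replay("neutron_bank.jsonl")
      print(f"banked {len(bank)} source neutrons for the dose calculation")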

  8. MCNPX and MCB coupled methodology for the burnup calculation of the KIPT accelerator driven subcritical system

    International Nuclear Information System (INIS)

    Zhong, Z.; Gohar, Y.; Talamo, A.

    2009-01-01

    Argonne National Laboratory (ANL) of the USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron accelerator driven subcritical facility (ADS). The facility will be utilized for basic research, medical isotope production, and training young nuclear specialists. The burnup methodology and analysis of the KIPT ADS are presented in this paper. The MCNPX and MCB Monte Carlo computer codes have been utilized. MCNPX has the capability of performing coupled electron, photon and neutron transport problems, but it lacks the burnup capability for driven subcritical systems. MCB has the capability of performing the burnup calculation of driven subcritical systems, while it cannot transport electrons. A calculational methodology coupling MCNPX and MCB has been developed, which exploits the electron transport capability of MCNPX for neutron production and the burnup capability of MCB for driven subcritical systems. In this procedure, a neutron source file is generated using an MCNPX transport calculation, preserving the neutron yield from the photonuclear reactions initiated by the electrons, and this source file is utilized by MCB for the burnup analyses with the same geometrical model. In this way, the ADS depletion calculation can be performed accurately. (authors)
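
    The coupling is essentially a hand-off loop: each burn step runs the electron-photon-neutron transport once to regenerate the neutron source file, then passes that file to the depletion code on the same geometry. The wrappers below are hypothetical placeholders sketching the control flow, not real APIs of either code.

      # Workflow sketch of the MCNPX -> MCB coupling (placeholder wrappers).
      def run_mcnpx(geometry, beam_kw=100.0):
          """Transport electrons/photons; write the photonuclear neutron
          source file for this burn step (placeholder)."""
          return f"source_{geometry['step']}.dat"

      def run_mcb(geometry, source_file, days):
          """Burn the fuel with the fixed source; return updated
          material inventories (placeholder update)."""
          geometry["burnup_days"] += days
          return geometry

      geometry = {"step": 0, "burnup_days": 0}
      for step in range(4):                      # four burn intervals
          geometry["step"] = step
          src = run_mcnpx(geometry)
          geometry = run_mcb(geometry, src, days=90)
      print(f"total burnup simulated: {geometry['burnup_days']} days")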

  9. Accelerating complex for basic researches in the nuclear physics

    NARCIS (Netherlands)

    Dovbnya, A.N.; Guk, I.S.; Kononenko, S.G.; Peev, F.A.; Tarasenko, A.S.; Botman, J.I.M.

    2009-01-01

    In 2003, NSC KIPT began work on the design project of an accelerator, the base facility of IHEPNP NSC KIPT: the electron recirculator SALO. The accelerator will be located in the target hall of the LU-2000 accelerator complex. It is intended first of all as a facility for basic research in the field of

  10. Analysis of fuel management in the KIPT neutron source facility

    Energy Technology Data Exchange (ETDEWEB)

    Zhong Zhaopeng, E-mail: zzhong@anl.gov [Nuclear Engineering Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Gohar, Yousry; Talamo, Alberto [Nuclear Engineering Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States)

    2011-05-15

    Research highlights: > The fuel management of the KIPT ADS was analyzed. > The core arrangement was shuffled stage-wise. > New fuel assemblies were added to the core periodically. > A beryllium reflector could also be utilized to increase the fuel life. - Abstract: Argonne National Laboratory (ANL) of the USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an experimental neutron source facility consisting of an electron-accelerator-driven sub-critical assembly. The neutron source driving the sub-critical assembly is generated from the interaction of a 100-kW electron beam with a natural uranium target. The sub-critical assembly surrounding the target is fueled with low-enriched WWR-M2-type hexagonal fuel assemblies. The U-235 enrichment of the fuel material is <20%. The facility will be utilized for basic and applied research, producing medical isotopes, and training young specialists. With the 100-kW electron beam power, the total thermal power of the facility is ∼360 kW, including the fission power of ∼260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the system reactivity during the operation, decrease the neutron flux level, and consequently impact the facility performance. To preserve the neutron flux level during the operation, fuel assemblies should be added and shuffled to compensate for the reactivity lost to burnup. A beryllium reflector could also be utilized to increase the fuel lifetime in the sub-critical core. This paper studies the fuel cycles and shuffling schemes of the fuel assemblies of the sub-critical assembly to preserve the system reactivity and the neutron flux level during the operation.

  11. Analysis of fuel management in the KIPT neutron source facility

    International Nuclear Information System (INIS)

    Zhong Zhaopeng; Gohar, Yousry; Talamo, Alberto

    2011-01-01

    Research highlights: → The fuel management of the KIPT ADS was analyzed. → The core arrangement was shuffled stage-wise. → New fuel assemblies were added to the core periodically. → A beryllium reflector could also be utilized to increase the fuel life. - Abstract: Argonne National Laboratory (ANL) of the USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an experimental neutron source facility consisting of an electron-accelerator-driven sub-critical assembly. The neutron source driving the sub-critical assembly is generated from the interaction of a 100-kW electron beam with a natural uranium target. The sub-critical assembly surrounding the target is fueled with low-enriched WWR-M2-type hexagonal fuel assemblies. The U-235 enrichment of the fuel material is <20%. The facility will be utilized for basic and applied research, producing medical isotopes, and training young specialists. With the 100-kW electron beam power, the total thermal power of the facility is ∼360 kW, including the fission power of ∼260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the system reactivity during the operation, decrease the neutron flux level, and consequently impact the facility performance. To preserve the neutron flux level during the operation, fuel assemblies should be added and shuffled to compensate for the reactivity lost to burnup. A beryllium reflector could also be utilized to increase the fuel lifetime in the sub-critical core. This paper studies the fuel cycles and shuffling schemes of the fuel assemblies of the sub-critical assembly to preserve the system reactivity and the neutron flux level during the operation.
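
    The refueling strategy described in these two records can be caricatured as simple bookkeeping: let k_eff fall with burnup and load a fresh assembly whenever it drops below a floor, while keeping k_eff under the subcriticality limit. All the numbers below (reactivity worths, burnup rate, thresholds) are illustrative assumptions, not the paper's computed values.

      # Toy refueling-bookkeeping sketch; every number is assumed.
      K_MAX, K_FLOOR = 0.975, 0.95
      LOSS_PER_CYCLE = 0.004       # reactivity lost to burnup per cycle (assumed)
      WORTH_PER_ASSEMBLY = 0.01    # reactivity added by one fresh assembly (assumed)

      k, loaded = 0.97, 0
      for cycle in range(1, 21):
          k -= LOSS_PER_CYCLE
          if k < K_FLOOR and k + WORTH_PER_ASSEMBLY <= K_MAX:
              k += WORTH_PER_ASSEMBLY
              loaded += 1
              print(f"cycle {cycle:2d}: fresh assembly loaded, k_eff = {k:.3f}")
      print(f"assemblies added over 20 cycles: {loaded}")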

  12. NSC KIPT's experience in production of absorber materials, composites and products for control mechanisms of various nuclear reactor types

    International Nuclear Information System (INIS)

    Odeychuk, N.P.; Zelensky, V.F.; Gurin, V.A.; Konotop, Yu.F.

    2000-01-01

    Data on NSC KIPT developments of the absorber composites B4C-PyC and B4C-SiC are reported. Results of pre-reactor studies and reactor tests of the developed absorber composites are given. It is shown that the B4C-PyC composites have a high radiation resistance at temperatures up to 1,250 °C. (author)

  13. Dose Consumptions of NSC KIPT Personnel during the Work that Includes Uranium

    International Nuclear Information System (INIS)

    Kurilo, Yu.P.; Mazilov, A.V.; Razsukovanny, B.N.

    2007-01-01

    This paper analyzes the mean annual and maximum annual uranium concentrations in the air of NSC KIPT working premises; the upper limit of possible external radiation of category A personnel engaged in work with uranium from 1961 to 2003 has been calculated. The numerical values of the effective doses of internal and external radiation of the personnel are determined on the basis of the data acquired. It is shown that, despite violations of the non-exceedance principle in some years, the mean value of the total (external + internal) radiation during the given period did not exceed the effective dose limit established in the radiation safety norms of Ukraine, which is equal to 20 mSv per year.

  14. Numerical studies of the flux-to-current ratio method in the KIPT neutron source facility

    International Nuclear Information System (INIS)

    Cao, Y.; Gohar, Y.; Zhong, Z.

    2013-01-01

    The reactivity of a subcritical assembly has to be monitored continuously in order to assure its safe operation. In this paper, the flux-to-current ratio method has been studied as an approach to provide an on-line reactivity measurement of the subcritical system. Monte Carlo numerical simulations have been performed using the KIPT neutron source facility model. It is found that the reactivity obtained from the flux-to-current ratio method is sensitive to the detector position in the subcritical assembly. However, if multiple detectors are located about 12 cm above the graphite reflector and 54 cm radially, the technique is shown to be very accurate in determining the k_eff of this facility in the range of 0.75 to 0.975. (authors)
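    The core idea is simple subcritical multiplication: in a source-driven core the detector rate scales roughly like the beam current divided by (1 - k_eff), so the flux-to-current ratio maps to k_eff once a constant is calibrated at a known state. The point-kinetics form and numbers below are illustrative assumptions, not the paper's Monte Carlo model.

      # Minimal flux-to-current ratio sketch: ratio = calib / (1 - k_eff).
      # Calibrate at a reference state where k_eff is known (e.g. from a
      # rod-drop measurement or a Monte Carlo calculation), then monitor.
      k_ref = 0.950
      R_ref = 20.0                   # measured detector-rate/current ratio (assumed)
      calib = R_ref * (1.0 - k_ref)

      def k_from_ratio(ratio):
          """Invert the ratio to estimate k_eff on-line."""
          return 1.0 - calib / ratio

      print(f"k_eff estimate at R = 40.0: {k_from_ratio(40.0):.3f}")   # -> 0.975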

  15. Medical Isotope Production Analyses In KIPT Neutron Source Facility

    International Nuclear Information System (INIS)

    Talamo, Alberto; Gohar, Yousry

    2016-01-01

    Medical isotope production analyses in the Kharkov Institute of Physics and Technology (KIPT) neutron source facility were performed to include the details of the irradiation cassette and the self-shielding effect. An updated detailed model of the facility was used for the analyses. The facility consists of an accelerator-driven system (ADS), which has a subcritical assembly using low-enriched uranium fuel elements with a beryllium-graphite reflector. The beryllium assemblies of the reflector have the same outer geometry as the fuel elements, which permits loading the subcritical assembly with different numbers of fuel elements without impacting the reflector performance. The subcritical assembly is driven by an external neutron source generated from the interaction of a 100-kW electron beam with a tungsten target. The facility construction was completed at the end of 2015, and it is planned to start operation during 2016. It is the first ADS in the world that has a coolant system for removing the generated fission power. Argonne National Laboratory has developed the design concept and performed extensive design analyses for the facility, including its utilization for the production of different radioactive medical isotopes. 99Mo is the parent isotope of 99mTc, which is the most commonly used medical radioactive isotope. Detailed analyses were performed to define the optimal sample irradiation location and the generated activity, for several radioactive medical isotopes, as a function of the irradiation time.

  16. Medical Isotope Production Analyses In KIPT Neutron Source Facility

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, Alberto [Argonne National Lab. (ANL), Argonne, IL (United States); Gohar, Yousry [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    Medical isotope production analyses in the Kharkov Institute of Physics and Technology (KIPT) neutron source facility were performed to include the details of the irradiation cassette and the self-shielding effect. An updated detailed model of the facility was used for the analyses. The facility consists of an accelerator-driven system (ADS), which has a subcritical assembly using low-enriched uranium fuel elements with a beryllium-graphite reflector. The beryllium assemblies of the reflector have the same outer geometry as the fuel elements, which permits loading the subcritical assembly with different numbers of fuel elements without impacting the reflector performance. The subcritical assembly is driven by an external neutron source generated from the interaction of a 100-kW electron beam with a tungsten target. The facility construction was completed at the end of 2015, and it is planned to start operation during 2016. It is the first ADS in the world that has a coolant system for removing the generated fission power. Argonne National Laboratory has developed the design concept and performed extensive design analyses for the facility, including its utilization for the production of different radioactive medical isotopes. 99Mo is the parent isotope of 99mTc, which is the most commonly used medical radioactive isotope. Detailed analyses were performed to define the optimal sample irradiation location and the generated activity, for several radioactive medical isotopes, as a function of the irradiation time.
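
    The activity-versus-irradiation-time dependence these records mention follows the standard single-nuclide buildup law A(t) = R(1 - exp(-λt)), saturating at the production rate R. The 66-h half-life of 99Mo is standard; the production rate below is an assumed placeholder, not a value computed for this facility.

      # Activity buildup during irradiation: A(t) = R * (1 - exp(-lam*t)).
      import math

      HALF_LIFE_H = 66.0                   # Mo-99 half-life, hours
      LAM = math.log(2.0) / HALF_LIFE_H    # decay constant, 1/h

      def activity(rate_bq, t_hours):
          """Sample activity after irradiating t_hours at production rate R."""
          return rate_bq * (1.0 - math.exp(-LAM * t_hours))

      R = 1.0e9   # assumed saturation production rate, Bq (placeholder)
      for t in (24, 66, 132, 264):
          print(f"{t:4d} h irradiation: {activity(R, t) / R:6.1%} of saturation")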

  17. Production of medical radioactive isotopes using KIPT electron driven subcritical facility

    International Nuclear Information System (INIS)

    Talamo, Alberto; Gohar, Yousry

    2008-01-01

    The Kharkov Institute of Physics and Technology (KIPT) of Ukraine, in collaboration with Argonne National Laboratory (ANL), plans to construct an electron-accelerator-driven subcritical assembly. One of the facility objectives is the production of medical radioactive isotopes. This paper presents the ANL collaborative work performed to characterize the facility performance for producing medical radioactive isotopes. First, a preliminary assessment was performed without including the self-shielding effect of the irradiated samples. Then, a more detailed investigation was carried out including the self-shielding effect, which defined the sample size and location for producing each medical isotope. In the first part, the reaction rates were calculated as the product of the cross section and the unperturbed neutron flux of the facility. Over fifty isotopes have been considered, and all transmutation channels are used, including (n, γ), (n, 2n), (n, p), and (γ, n). In the second part, the parent isotopes with high reaction rates were explicitly modeled in the calculations. Four irradiation locations were considered in the analyses to study the medical isotope production rate. The results show that the self-shielding effect not only reduces the specific activity but also changes the irradiation location that maximizes the specific activity. The axial and radial distributions of the parent capture rates have been examined to define the irradiation sample size for each parent isotope.

  18. Production of medical radioactive isotopes using KIPT electron driven subcritical facility

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, Alberto [Nuclear Engineering Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States)], E-mail: alby@anl.gov; Gohar, Yousry [Nuclear Engineering Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States)

    2008-05-15

    The Kharkov Institute of Physics and Technology (KIPT) of Ukraine, in collaboration with Argonne National Laboratory (ANL), plans to construct an electron-accelerator-driven subcritical assembly. One of the facility objectives is the production of medical radioactive isotopes. This paper presents the ANL collaborative work performed to characterize the facility performance for producing medical radioactive isotopes. First, a preliminary assessment was performed without including the self-shielding effect of the irradiated samples. Then, a more detailed investigation was carried out including the self-shielding effect, which defined the sample size and location for producing each medical isotope. In the first part, the reaction rates were calculated as the product of the cross section and the unperturbed neutron flux of the facility. Over fifty isotopes have been considered, and all transmutation channels are used, including (n, γ), (n, 2n), (n, p), and (γ, n). In the second part, the parent isotopes with high reaction rates were explicitly modeled in the calculations. Four irradiation locations were considered in the analyses to study the medical isotope production rate. The results show that the self-shielding effect not only reduces the specific activity but also changes the irradiation location that maximizes the specific activity. The axial and radial distributions of the parent capture rates have been examined to define the irradiation sample size for each parent isotope.

  19. Production of medical radioactive isotopes using KIPT electron driven subcritical facility.

    Science.gov (United States)

    Talamo, Alberto; Gohar, Yousry

    2008-05-01

    The Kharkov Institute of Physics and Technology (KIPT) of Ukraine, in collaboration with Argonne National Laboratory (ANL), plans to construct an electron-accelerator-driven subcritical assembly. One of the facility objectives is the production of medical radioactive isotopes. This paper presents the ANL collaborative work performed to characterize the facility performance for producing medical radioactive isotopes. First, a preliminary assessment was performed without including the self-shielding effect of the irradiated samples. Then, a more detailed investigation was carried out including the self-shielding effect, which defined the sample size and location for producing each medical isotope. In the first part, the reaction rates were calculated as the product of the cross section and the unperturbed neutron flux of the facility. Over fifty isotopes have been considered, and all transmutation channels are used, including (n, γ), (n, 2n), (n, p), and (γ, n). In the second part, the parent isotopes with high reaction rates were explicitly modeled in the calculations. Four irradiation locations were considered in the analyses to study the medical isotope production rate. The results show that the self-shielding effect not only reduces the specific activity but also changes the irradiation location that maximizes the specific activity. The axial and radial distributions of the parent capture rates have been examined to define the irradiation sample size for each parent isotope.
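
    The "first part" calculation in these three records reduces to a product of a one-group cross section and the unperturbed flux for each transmutation channel. The sketch below illustrates that arithmetic; the cross-section values and flux are assumed round numbers for illustration, not evaluated data.

      # Unperturbed reaction rate = N * sigma * phi, per transmutation channel.
      PHI = 1.0e13    # unperturbed flux, particles/cm^2/s (assumed)
      BARN = 1.0e-24  # cm^2

      # channel -> one-group cross section in barns (illustrative numbers)
      channels = {"(n,gamma)": 10.0, "(n,2n)": 0.5, "(n,p)": 0.1, "(gamma,n)": 0.05}

      def reaction_rate(n_atoms, xs_barn, flux=PHI):
          """Reactions per second for n_atoms parent atoms in the sample."""
          return n_atoms * xs_barn * BARN * flux

      n_parent = 1.0e20
      for name, xs in channels.items():
          print(f"{name:10s}: {reaction_rate(n_parent, xs):.3e} reactions/s")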

  20. KIPT accelerator-driven system design and performance

    International Nuclear Information System (INIS)

    Gohar, Y.; Bolshinsky, I.; Karnaukhov, I.

    2015-01-01

    Argonne National Laboratory (ANL) of the US is collaborating with the Kharkov Institute of Physics and Technology (KIPT) of Ukraine to develop and construct a neutron source facility. The facility is planned to produce medical isotopes, train young nuclear professionals, support Ukraine's nuclear industry, and provide the capability to perform reactor physics, material research, and basic science experiments. It consists of a subcritical assembly with low-enriched uranium fuel driven by an electron accelerator. The target design utilises tungsten or natural uranium for neutron production through photonuclear reactions from the Bremsstrahlung radiation generated by 100-MeV electrons. The accelerator electron beam power is 100 kW. The neutron source intensity, spectrum, and spatial distribution have been studied as a function of the electron beam parameters to maximise the neutron yield and satisfy different engineering requirements. Physics, thermal-hydraulics, and thermal-stress analyses were performed and iterated to maximise the neutron source strength and to minimise the maximum temperature and the thermal stress in the target materials. The subcritical assembly is designed to obtain the highest possible neutron flux intensity with an effective neutron multiplication factor of <0.98. Different fuel and reflector materials are considered for the subcritical assembly design. The mechanical design of the facility has been developed to maximise its utility and minimise the time for replacing the target, fuel, and irradiation cassettes by using simple and efficient procedures. Shielding analyses were performed to define the dose map around the facility during operation as a function of the heavy concrete shield thickness. Safety, reliability and environmental considerations are included in the facility design. The facility is configured to accommodate future design upgrades and new missions. In addition, it has unique features relative to the other international
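
    The chain from beam power to fission power can be estimated on the back of an envelope: each source neutron induces about k/((1-k)·ν) fissions in a subcritical core. The neutron yield per electron and ν below are assumed textbook-scale numbers, not the design values from this paper.

      # Back-of-envelope: beam power -> source strength -> fission power.
      E_PER_FISSION_J = 200e6 * 1.602e-19   # ~200 MeV per fission
      NU = 2.43                              # neutrons per fission (U-235, thermal)

      def fission_power_kw(beam_kw, e_mev, n_per_electron, k_eff):
          electrons_per_s = beam_kw * 1e3 / (e_mev * 1e6 * 1.602e-19)
          source_n_per_s = electrons_per_s * n_per_electron
          fissions_per_s = source_n_per_s * k_eff / ((1.0 - k_eff) * NU)
          return fissions_per_s * E_PER_FISSION_J / 1e3

      # 100-kW, 100-MeV beam; assumed yield 0.05 n/electron; k_eff = 0.975
      print(f"{fission_power_kw(100.0, 100.0, 0.05, 0.975):.0f} kW fission power")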

  1. Computational error and complexity in science and engineering

    CERN Document Server

    Lakshmikantham, Vangipuram; Chui, Charles K; Chui, Charles K

    2005-01-01

    The book "Computational Error and Complexity in Science and Engineering" pervades all the science and engineering disciplines where computation occurs. Scientific and engineering computation is the interface between the mathematical model/problem and the real-world application. One needs to obtain good-quality numerical values for any real-world implementation; mathematical symbols alone are of no use to engineers/technologists. The computational complexity of the numerical method used to solve the mathematical model, computed along with the solution, tells us how much computational effort has been spent to achieve that quality of result. Anyone who wants a specified physical problem to be solved has every right to know the quality of the solution as well as the resources spent on it. The computed error as well as the complexity provide the scientifically convincing answers to these questions. Specifically some of the disciplines in which the book w...

  2. Advances in computational complexity theory

    CERN Document Server

    Cai, Jin-Yi

    1993-01-01

    This collection of recent papers on computational complexity theory grew out of activities during a special year at DIMACS. With contributions by some of the leading experts in the field, this book is of lasting value in this fast-moving field, providing expositions not found elsewhere. Although aimed primarily at researchers in complexity theory and graduate students in mathematics or computer science, the book is accessible to anyone with an undergraduate education in mathematics or computer science. By touching on some of the major topics in complexity theory, this book sheds light on this burgeoning area of research.

  3. Bioinspired computation in combinatorial optimization: algorithms and their computational complexity

    DEFF Research Database (Denmark)

    Neumann, Frank; Witt, Carsten

    2012-01-01

    Bioinspired computation methods, such as evolutionary algorithms and ant colony optimization, are being applied successfully to complex engineering and combinatorial optimization problems, and it is very important that we understand the computational complexity of these algorithms. This tutorial...... problems. Classical single-objective optimization is examined first. They then investigate the computational complexity of bioinspired computation applied to multiobjective variants of the considered combinatorial optimization problems, and in particular they show how multiobjective optimization can help...... to speed up bioinspired computation for single-objective optimization problems. The tutorial is based on a book written by the authors with the same title. Further information about the book can be found at www.bioinspiredcomputation.com....
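
    The flavor of such runtime analyses is captured by the standard introductory example: the (1+1) evolutionary algorithm on OneMax, whose expected optimization time is Theta(n log n). The sketch below is that textbook example, not code from the tutorial or book.

      # (1+1) EA on OneMax: flip each bit with prob. 1/n, keep if not worse.
      import random

      def one_max(bits):
          return sum(bits)

      def one_plus_one_ea(n, seed=0):
          rng = random.Random(seed)
          x = [rng.randint(0, 1) for _ in range(n)]
          steps = 0
          while one_max(x) < n:
              y = [b ^ (rng.random() < 1.0 / n) for b in x]   # standard mutation
              if one_max(y) >= one_max(x):                    # elitist selection
                  x = y
              steps += 1
          return steps

      # Expected ~ e*n*ln(n) steps, i.e. Theta(n log n) for OneMax.
      print(f"(1+1) EA solved OneMax(100) in {one_plus_one_ea(100)} steps")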

  4. Cloud Computing for Complex Performance Codes.

    Energy Technology Data Exchange (ETDEWEB)

    Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This report describes the use of cloud computing services for running complex public-domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  5. High-power plasma dynamic systems of quasi-stationary type in IPP NSC KIPT: results and prospects

    International Nuclear Information System (INIS)

    Solyakov, D.G.

    2015-01-01

    This paper is a brief review of the main experimental results of investigations of high-power quasi-stationary plasma dynamic systems at the IPP NSC KIPT. Experiments showed that, to obtain accelerated plasma streams with a high energy in quasi-stationary modes, all conditions on the accelerating-channel boundary should be controlled independently. As a result of optimizing the operating modes of all QSPA active elements, a quasi-stationary plasma flow in the channel lasting 480 μs at a discharge duration of 550 μs was obtained. The plasma stream velocity was close to the theoretical limit for the present experimental conditions. Plasma streams with a maximum velocity of up to 4.2·10^7 cm/s and a total energy content in the stream of 0.4...0.6 MJ were obtained. The main properties of compression-zone formation in the plasma streams generated by a magneto-plasma compressor in quasi-stationary modes were investigated. The experiments showed that the initial conditions, namely the residual pressure in the vacuum chamber, have a large influence on the plasma density in the compression zone. Compressive plasma streams with a density of (2...4)·10^18 cm^-3 during 20...25 μs at a discharge duration of 10 μs were obtained. This value of plasma density is close to the theoretical limit for the present experimental conditions

  6. Nature, computation and complexity

    International Nuclear Information System (INIS)

    Binder, P-M; Ellis, G F R

    2016-01-01

    The issue of whether the unfolding of events in the world can be considered a computation is explored in this paper. We come to different conclusions for inert and for living systems (‘no’ and ‘qualified yes’, respectively). We suggest that physical computation as we know it exists only as a tool of complex biological systems: us. (paper)

  7. Computability, complexity, and languages fundamentals of theoretical computer science

    CERN Document Server

    Davis, Martin D; Rheinboldt, Werner

    1983-01-01

    Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science provides an introduction to the various aspects of theoretical computer science. Theoretical computer science is the mathematical study of models of computation. This text is composed of five parts encompassing 17 chapters, and begins with an introduction to the use of proofs in mathematics and the development of computability theory in the context of an extremely simple abstract programming language. The succeeding parts demonstrate the performance of abstract programming language using a macro expa

  8. Unified Computational Intelligence for Complex Systems

    CERN Document Server

    Seiffertt, John

    2010-01-01

    Computational intelligence encompasses a wide variety of techniques that allow computation to learn, to adapt, and to seek. That is, they may be designed to learn information without explicit programming regarding the nature of the content to be retained, they may be imbued with the functionality to adapt to maintain their course within a complex and unpredictably changing environment, and they may help us seek out truths about our own dynamics and lives through their inclusion in complex system modeling. These capabilities place our ability to compute in a category apart from our ability to e

  9. On the complexity of computing two nonlinearity measures

    DEFF Research Database (Denmark)

    Find, Magnus Gausdal

    2014-01-01

    We study the computational complexity of two Boolean nonlinearity measures: the nonlinearity and the multiplicative complexity. We show that if one-way functions exist, no algorithm can compute the multiplicative complexity in time 2^O(n) given the truth table of length 2^n, in fact under the same ...
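
    By contrast, the nonlinearity itself is cheap to obtain from the truth table via the fast Walsh-Hadamard transform, using the standard identity nl(f) = 2^(n-1) - max_a |W_f(a)| / 2 in time O(n·2^n). The sketch below uses those standard definitions; it is not code from the paper.

      # Nonlinearity from the truth table via the fast Walsh-Hadamard transform.
      def walsh_hadamard(signs):
          """FWHT of the +/-1 sequence (-1)^f(x); returns the spectrum W_f."""
          a = list(signs)
          h = 1
          while h < len(a):
              for i in range(0, len(a), 2 * h):
                  for j in range(i, i + h):
                      a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
              h *= 2
          return a

      def nonlinearity(truth_table):
          n_points = len(truth_table)               # 2^n entries
          spectrum = walsh_hadamard([(-1) ** b for b in truth_table])
          return n_points // 2 - max(abs(w) for w in spectrum) // 2

      # f(x1, x2) = x1 AND x2 has nonlinearity 1 (bent for n = 2)
      print(nonlinearity([0, 0, 0, 1]))             # -> 1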

  10. Computational Complexity and Human Decision-Making.

    Science.gov (United States)

    Bossaerts, Peter; Murawski, Carsten

    2017-12-01

    The rationality principle postulates that decision-makers always choose the best action available to them. It underlies most modern theories of decision-making. The principle does not take into account the difficulty of finding the best option. Here, we propose that computational complexity theory (CCT) provides a framework for defining and quantifying the difficulty of decisions. We review evidence showing that human decision-making is affected by computational complexity. Building on this evidence, we argue that most models of decision-making, and metacognition, are intractable from a computational perspective. To be plausible, future theories of decision-making will need to take into account both the resources required for implementing the computations implied by the theory, and the resource constraints imposed on the decision-maker by biology. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Computational complexity of the landscape II-Cosmological considerations

    Science.gov (United States)

    Denef, Frederik; Douglas, Michael R.; Greene, Brian; Zukowski, Claire

    2018-05-01

    We propose a new approach for multiverse analysis based on computational complexity, which leads to a new family of "computational" measure factors. By defining a cosmology as a space-time containing a vacuum with specified properties (for example small cosmological constant) together with rules for how time evolution will produce the vacuum, we can associate global time in a multiverse with clock time on a supercomputer which simulates it. We argue for a principle of "limited computational complexity" governing early universe dynamics as simulated by this supercomputer, which translates to a global measure for regulating the infinities of eternal inflation. The rules for time evolution can be thought of as a search algorithm, whose details should be constrained by a stronger principle of "minimal computational complexity". Unlike previously studied global measures, ours avoids standard equilibrium considerations and the well-known problems of Boltzmann Brains and the youngness paradox. We also give various definitions of the computational complexity of a cosmology, and argue that there are only a few natural complexity classes.

  12. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  13. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  14. Implicit computational complexity and compilers

    DEFF Research Database (Denmark)

    Rubiano, Thomas

    Complexity theory helps us predict and control the resources, usually time and space, consumed by programs. Static analysis on specific syntactic criteria allows us to categorize some programs. A common approach is to observe the behavior of the program's data. For instance, the detection of non...... evolution, and a lot of research came from this theory. Until now, these implicit complexity theories were essentially applied to more or less toy languages. This thesis applies implicit computational complexity methods to "real life" programs by manipulating intermediate representation languages...

  15. Computations, Complexity, Experiments, and the World Outside Physics

    International Nuclear Information System (INIS)

    Kadanoff, L.P

    2009-01-01

    Computer Models in the Sciences and Social Sciences. 1. Simulation and Prediction in Complex Systems: the Good, the Bad and the Awful. This lecture deals with the history of large-scale computer modeling, mostly in the context of the U.S. Department of Energy's sponsorship of modeling for weapons development and innovation in energy sources. 2. Complexity: Making a Splash - Breaking a Neck - The Making of Complexity in Physical Systems. For ages thinkers have been asking how complexity arises. The laws of physics are very simple; how come we are so complex? This lecture tries to approach this question by asking how complexity arises in physical fluids. 3. Forrester et al.: Social and Biological Model-Making. The partial collapse of the world's economy has raised the question of whether we could improve the performance of economic and social systems by a major effort at creating understanding via large-scale computer models. (author)

  16. Use of bar-code technology in MC and A system

    International Nuclear Information System (INIS)

    Mykhaylov, V.; Odeychuk, N.; Tovkanetz, V.; Lapshin, V.; Ewing, T.

    2001-01-01

    Full text: A significant problem in handling nuclear materials is the use of reliable, rapid, integrated automated systems for nuclear material control and accounting; such systems also essentially reduce the dose loading of the attending technical personnel. One direction for solving the indicated problems is the use of bar-code technology. Such an integrated system should include protection of materials, measurement of materials, record-keeping of materials and the drawing up of inventory lists. This is especially important for enterprises at which enriched uranium and other nuclear materials under IAEA guarantees are utilized. Under the US assistance program in the field of MC and A, NSC KIPT has received the necessary equipment and software, including nondestructive analysis equipment and the automated inventory material accounting system (AIMAS), intended for modernizing the nuclear material accounting system at NSC KIPT. The purpose of this work was to evaluate the generalized procedures for both MC and A and nondestructive analysis, and to update them so that they meet the specific conditions of the enterprise and the demands of the Ukraine Regulatory Administration. At NSC KIPT, the largest nuclear and physics research center in Ukraine, measures to introduce bar-code technology for nuclear material control and accounting using the equipment and software of leading US firms (Intermec, Prodigy Max, Tharo Systems, Inc.) have been under way since 1999. During the introduction of this technology, the software for nuclear material control and accounting (the AIMAS database) was installed on NSC KIPT computers. The structure of the NSC KIPT facility has been determined according to the demands of the State and of the IAEA. The key measuring points of inventory quantity have been determined in the nuclear material balance zone, as well as the specific computers on which is kept

  17. Introduction to the LaRC central scientific computing complex

    Science.gov (United States)

    Shoosmith, John N.

    1993-01-01

    The computers and associated equipment that make up the Central Scientific Computing Complex of the Langley Research Center are briefly described. The electronic networks that provide access to the various components of the complex, and a number of areas that can be used by Langley and contractor staff for special applications (scientific visualization, image processing, software engineering, and grid generation), are also described. Flight simulation facilities that use the central computers are described. Management of the complex, procedures for its use, and available services and resources are discussed. This document is intended for new users of the complex, for current users who wish to keep apprised of changes, and for visitors who need to understand the role of central scientific computers at Langley.

  18. Theories of computational complexity

    CERN Document Server

    Calude, C

    1988-01-01

    This volume presents four machine-independent theories of computational complexity, which have been chosen for their intrinsic importance and practical relevance. The book includes a wealth of results - classical, recent, and others which have not been published before.In developing the mathematics underlying the size, dynamic and structural complexity measures, various connections with mathematical logic, constructive topology, probability and programming theories are established. The facts are presented in detail. Extensive examples are provided, to help clarify notions and constructions. The lists of exercises and problems include routine exercises, interesting results, as well as some open problems.

  19. Algebraic computability and enumeration models recursion theory and descriptive complexity

    CERN Document Server

    Nourani, Cyrus F

    2016-01-01

    This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas on pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Functorial models computability is presented with algebraic trees realizing intuitionistic type...

  20. Ubiquitous Computing, Complexity and Culture

    DEFF Research Database (Denmark)

    The ubiquitous nature of mobile and pervasive computing has begun to reshape and complicate our notions of space, time, and identity. In this collection, over thirty internationally recognized contributors reflect on ubiquitous computing's implications for the ways in which we interact with our environments, experience time, and develop identities individually and socially. Interviews with working media artists lend further perspectives on these cultural transformations. Drawing on cultural theory, new media art studies, human-computer interaction theory, and software studies, this cutting-edge book critically unpacks the complex ubiquity-effects confronting us every day.

  1. Computation of the Complex Probability Function

    Energy Technology Data Exchange (ETDEWEB)

    Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ledwith, Patrick John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-22

    The complex probability function is important in many areas of physics, and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the nth-degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements of the Gauss-Hermite quadrature for the complex probability function.
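
    A minimal sketch of the quadrature the abstract describes, assuming the standard integral representation w(z) = (i/pi) * Integral of exp(-t^2)/(z - t) dt for Im(z) > 0; numpy's hermgauss supplies the Hermite roots and weights. Production codes add corrections near the real axis, where this bare form degrades.

      # Gauss-Hermite approximation of the complex probability (Faddeeva)
      # function, valid for Im(z) > 0.
      import numpy as np

      def faddeeva_gh(z, order=40):
          t, w = np.polynomial.hermite.hermgauss(order)   # roots, weights
          return (1j / np.pi) * np.sum(w / (z - t))

      z = 2.0 + 1.0j
      print(faddeeva_gh(z))
      # cross-check, if SciPy is available:
      # from scipy.special import wofz; print(wofz(z))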

  2. Software Accelerates Computing Time for Complex Math

    Science.gov (United States)

    2014-01-01

    Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology - traditionally used for computer video games - to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.

  3. Applications of Computer Technology in Complex Craniofacial Reconstruction

    Directory of Open Access Journals (Sweden)

    Kristopher M. Day, MD

    2018-03-01

    Conclusion: Modern 3D technology allows the surgeon to better analyze complex craniofacial deformities, precisely plan surgical correction with computer simulation of results, customize osteotomies, plan distractions, and print 3DPCI as needed. The use of advanced 3D computer technology can be applied safely and can potentially improve aesthetic and functional outcomes after complex craniofacial reconstruction. These techniques warrant further study and may be reproducible in various centers of care.

  4. Computational Modeling of Complex Protein Activity Networks

    NARCIS (Netherlands)

    Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude

    2017-01-01

    Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a

  5. Metasynthetic computing and engineering of complex systems

    CERN Document Server

    Cao, Longbing

    2015-01-01

    Provides a comprehensive overview and introduction to the concepts, methodologies, analysis, design and applications of metasynthetic computing and engineering. The author: Presents an overview of complex systems, especially open complex giant systems such as the Internet, complex behavioural and social problems, and actionable knowledge discovery and delivery in the big data era. Discusses ubiquitous intelligence in complex systems, including human intelligence, domain intelligence, social intelligence, network intelligence, data intelligence and machine intelligence, and their synergy thro

  6. Computability, complexity, logic

    CERN Document Server

    Börger, Egon

    1989-01-01

    The theme of this book is formed by a pair of concepts: the concept of formal language as carrier of the precise expression of meaning, facts and problems, and the concept of algorithm or calculus, i.e. a formally operating procedure for the solution of precisely described questions and problems. The book is a unified introduction to the modern theory of these concepts, to the way in which they developed first in mathematical logic and computability theory and later in automata theory, and to the theory of formal languages and complexity theory. Apart from considering the fundamental themes an

  7. Coherence and computational complexity of quantifier-free dependence logic formulas

    NARCIS (Netherlands)

    Kontinen, J.; Kontinen, J.; Väänänen, J.

    2010-01-01

    We study the computational complexity of model checking for quantifier-free dependence logic (D) formulas. We point out three thresholds in the computational complexity: logarithmic space, non-deterministic logarithmic space, and non-deterministic polynomial time.

  8. Computational complexity in entanglement transformations

    Science.gov (United States)

    Chitambar, Eric A.

    In physics, systems having three parts are typically much more difficult to analyze than those having just two. Even in classical mechanics, predicting the motion of three interacting celestial bodies remains an insurmountable challenge while the analogous two-body problem has an elementary solution. It is as if just by adding a third party, a fundamental change occurs in the structure of the problem that renders it unsolvable. In this thesis, we demonstrate how such an effect is likewise present in the theory of quantum entanglement. In fact, the complexity differences between two-party and three-party entanglement become quite conspicuous when comparing the difficulty in deciding what state changes are possible for these systems when no additional entanglement is consumed in the transformation process. We examine this entanglement transformation question and its variants in the language of computational complexity theory, a powerful subject that formalizes the concept of problem difficulty. Since deciding feasibility of a specified bipartite transformation is relatively easy, this task belongs to the complexity class P. On the other hand, for tripartite systems, we find the problem to be NP-Hard, meaning that its solution is at least as hard as the solution to some of the most difficult problems humans have encountered. One can then rigorously defend the assertion that a fundamental complexity difference exists between bipartite and tripartite entanglement since unlike the former, the full range of forms realizable by the latter is incalculable (assuming P≠NP). However, similar to the three-body celestial problem, when one examines a special subclass of the problem---invertible transformations on systems having at least one qubit subsystem---we prove that the problem can be solved efficiently. As a hybrid of the two questions, we find that the question of tripartite to bipartite transformations can be solved by an efficient randomized algorithm. Our results are

  9. Molecular computing towards a novel computing architecture for complex problem solving

    CERN Document Server

    Chang, Weng-Long

    2014-01-01

    This textbook introduces a concise approach to the design of molecular algorithms for students or researchers who are interested in dealing with complex problems. Through numerous examples and exercises, you will understand the main differences between molecular circuits and traditional digital circuits in manipulating the same problem, and you will also learn how to design a molecular algorithm for solving a given problem from start to finish. The book starts with an introduction to the computational aspects of digital computers and molecular computing, data representation, molecular operations, and number representation in molecular computing, and provides many molecular algorithms to construct the parity generator and the parity checker of error-detection codes in digital communication, to encode integers of different formats, single precision and double precision floating-point numbers, to implement addition and subtraction of unsigned integers, to construct logic operations...

  10. ANS main control complex three-dimensional computer model development

    International Nuclear Information System (INIS)

    Cleaves, J.E.; Fletcher, W.M.

    1993-01-01

    A three-dimensional (3-D) computer model of the Advanced Neutron Source (ANS) main control complex is being developed. The main control complex includes the main control room, the technical support center, the materials irradiation control room, computer equipment rooms, communications equipment rooms, cable-spreading rooms, and some support offices and breakroom facilities. The model will be used to provide facility designers and operations personnel with capabilities for fit-up/interference analysis, visual ''walk-throughs'' for optimizing maintainability, and human factors and operability analyses. It will be used to determine performance design characteristics, to generate construction drawings, and to integrate control room layout, equipment mounting, grounding equipment, electrical cabling, and utility services into ANS building designs. This paper describes the development of the initial phase of the 3-D computer model for the ANS main control complex and plans for its development and use.

  11. Automated System for Teaching Computational Complexity of Algorithms Course

    Directory of Open Access Journals (Sweden)

    Vadim S. Roublev

    2017-01-01

    Full Text Available This article describes the problems of designing an automated teaching system for a “Computational complexity of algorithms” course. This system should provide students with means to familiarize themselves with a complex mathematical apparatus and improve their mathematical thinking in the respective area. The article introduces the technique of the algorithm symbol scroll table, which allows estimating lower and upper bounds of computational complexity. Further, we introduce a set of theorems that facilitate the analysis in cases when the integer rounding of algorithm parameters is involved and when analyzing the complexity of a sum. At the end, the article introduces a normal system of symbol transformations that both allows one to perform any symbol transformation and simplifies the automated validation of such transformations. The article is published in the authors’ wording.

  12. Characterizations and computational complexity of systolic trellis automata

    Energy Technology Data Exchange (ETDEWEB)

    Ibarra, O H; Kim, S M

    1984-03-01

    Systolic trellis automata are simple models for VLSI. The authors characterize the computing power of these models in terms of Turing machines. The characterizations are useful in proving new results as well as giving simpler proofs of known results. They also derive lower and upper bounds on the computational complexity of the models. 18 references.

  13. Computational complexity of Boolean functions

    Energy Technology Data Exchange (ETDEWEB)

    Korshunov, Aleksei D [Sobolev Institute of Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk (Russian Federation)

    2012-02-28

    Boolean functions are among the fundamental objects of discrete mathematics, especially in those of its subdisciplines which fall under mathematical logic and mathematical cybernetics. The language of Boolean functions is convenient for describing the operation of many discrete systems such as contact networks, Boolean circuits, branching programs, and some others. An important parameter of discrete systems of this kind is their complexity. This characteristic has been actively investigated starting from Shannon's works. There is a large body of scientific literature presenting many fundamental results. The purpose of this survey is to give an account of the main results over the last sixty years related to the complexity of computation (realization) of Boolean functions by contact networks, Boolean circuits, and Boolean circuits without branching. Bibliography: 165 titles.

  14. VBOT: Motivating computational and complex systems fluencies with constructionist virtual/physical robotics

    Science.gov (United States)

    Berland, Matthew W.

    As scientists use the tools of computational and complex systems theory to broaden science perspectives (e.g., Bar-Yam, 1997; Holland, 1995; Wolfram, 2002), so can middle-school students broaden their perspectives using appropriate tools. The goals of this dissertation project are to build, study, evaluate, and compare activities designed to foster both computational and complex systems fluencies through collaborative constructionist virtual and physical robotics. In these activities, each student builds an agent (e.g., a robot-bird) that must interact with fellow students' agents to generate a complex aggregate (e.g., a flock of robot-birds) in a participatory simulation environment (Wilensky & Stroup, 1999a). In a participatory simulation, students collaborate by acting in a common space, teaching each other, and discussing content with one another. As a result, the students improve both their computational fluency and their complex systems fluency, where fluency is defined as the ability to both consume and produce relevant content (DiSessa, 2000). To date, several systems have been designed to foster computational and complex systems fluencies through computer programming and collaborative play (e.g., Hancock, 2003; Wilensky & Stroup, 1999b); this study suggests that, by supporting the relevant fluencies through collaborative play, they become mutually reinforcing. In this work, I will present both the design of the VBOT virtual/physical constructionist robotics learning environment and a comparative study of student interaction with the virtual and physical environments across four middle-school classrooms, focusing on the contrast in systems perspectives differently afforded by the two environments. In particular, I found that while performance gains were similar overall, the physical environment supported agent perspectives on aggregate behavior, and the virtual environment supported aggregate perspectives on agent behavior. The primary research questions

  15. Atomic switch networks-nanoarchitectonic design of a complex system for natural computing.

    Science.gov (United States)

    Demis, E C; Aguilera, R; Sillin, H O; Scharnhorst, K; Sandouk, E J; Aono, M; Stieg, A Z; Gimzewski, J K

    2015-05-22

    Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing-a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.
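
    The reservoir computing scheme mentioned at the end of the abstract can be illustrated in software with a minimal echo state network, a conventional stand-in for a physical reservoir such as an atomic switch network. The sketch below (all parameters illustrative) exercises the defining idea: the high-dimensional nonlinear dynamics are fixed and random, and only a linear readout is trained.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 200, 1000                                  # reservoir size, samples

    W_in = rng.uniform(-0.5, 0.5, N)                  # fixed input weights
    W = rng.normal(0, 1, (N, N))                      # fixed recurrent weights
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

    u = rng.uniform(-1, 1, T)                         # input signal
    target = np.roll(u, 5)                            # task: recall input 5 steps back

    x, states = np.zeros(N), np.empty((T, N))
    for t in range(T):
        x = np.tanh(W @ x + W_in * u[t])              # fixed nonlinear dynamics
        states[t] = x

    # Train only the linear readout on the collected reservoir states.
    W_out = np.linalg.lstsq(states[100:], target[100:], rcond=None)[0]
    print("train MSE:", np.mean((states[100:] @ W_out - target[100:]) ** 2))
    ```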

  16. Computer tomography in complex diagnosis of laryngeal cancer

    International Nuclear Information System (INIS)

    Savin, A.A.

    1999-01-01

    To specify the role of computer tomography in the diagnosis of malignant tumors of the larynx, forty-two patients with suspected laryngeal tumors were examined: 38 men and 4 women aged 41-68 years. X-ray examinations included traditional tomography of the larynx. The main X-ray and computer tomographic symptoms of laryngeal tumors of different localizations are described. It is shown that the use of computer tomography in the complex diagnosis of laryngeal cancer permits an objective assessment of the tumor, its structure and dissemination, and of the regional lymph nodes

  17. Computing complex Airy functions by numerical quadrature

    NARCIS (Netherlands)

    A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)

    2001-01-01

    Integral representations are considered of solutions of the Airy differential equation w'' - zw = 0 for computing Airy functions for complex values of z. In a first method contour integral representations of the Airy functions are written as non-oscillating

  18. High performance parallel computing of flows in complex geometries: II. Applications

    International Nuclear Information System (INIS)

    Gourdain, N; Gicquel, L; Staffelbach, G; Vermorel, O; Duchaine, F; Boussuge, J-F; Poinsot, T

    2009-01-01

    Present regulations in terms of pollutant emissions, noise, and economic constraints require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and from considering the entire system rather than isolated components. However, these aspects are still not well taken into account by numerical approaches, nor well understood, whatever the design stage considered. The main challenge is essentially the computational requirements such complex systems impose if they are to be simulated on supercomputers. This paper shows how these new challenges can be addressed by using parallel computing platforms for distinct elements of a more complex system, as encountered in aeronautical applications. Based on numerical simulations performed with modern aerodynamic and reactive flow solvers, this work underlines the interest of high-performance computing for solving flows in complex industrial configurations such as aircraft, combustion chambers, and turbomachines. Performance indicators related to parallel computing efficiency are presented, showing that establishing fair criteria is a difficult task for complex industrial applications. Examples of numerical simulations performed in industrial systems are also described, with particular attention to the computational time and the potential design improvements obtained with high-fidelity and multi-physics computing methods. These simulations use either unsteady Reynolds-averaged Navier-Stokes methods or large eddy simulation and deal with turbulent unsteady flows, such as coupled flow phenomena (thermo-acoustic instabilities, buffet, etc.). Some examples of the difficulties with grid generation and data analysis are also presented when dealing with these complex industrial applications.

  19. Exponential rise of dynamical complexity in quantum computing through projections.

    Science.gov (United States)

    Burgarth, Daniel Klaus; Facchi, Paolo; Giovannetti, Vittorio; Nakazato, Hiromichi; Pascazio, Saverio; Yuasa, Kazuya

    2014-10-10

    The ability of quantum systems to host exponentially complex dynamics has the potential to revolutionize science and technology. Therefore, much effort has been devoted to developing protocols for computation, communication and metrology which exploit this scaling, despite formidable technical difficulties. Here we show that the mere frequent observation of a small part of a quantum system can turn its dynamics from a very simple one into an exponentially complex one, capable of universal quantum computation. After discussing examples, we go on to show that this effect is generally to be expected: almost any quantum dynamics becomes universal once 'observed' as outlined above. Conversely, we show that any complex quantum dynamics can be 'purified' into a simpler one in larger dimensions. We conclude by demonstrating that even local noise can lead to an exponentially complex dynamics.

  20. Computational complexity a quantitative perspective

    CERN Document Server

    Zimand, Marius

    2004-01-01

    There has been a common perception that computational complexity is a theory of "bad news" because its most typical results assert that various real-world and innocent-looking tasks are infeasible. In fact, "bad news" is a relative term, and, indeed, in some situations (e.g., in cryptography), we want an adversary to not be able to perform a certain task. However, a "bad news" result does not automatically become useful in such a scenario. For this to happen, its hardness features have to be quantitatively evaluated and shown to manifest extensively. The book undertakes a quantitative analysis of some of the major results in complexity that regard either classes of problems or individual concrete problems. The size of some important classes are studied using resource-bounded topological and measure-theoretical tools. In the case of individual problems, the book studies relevant quantitative attributes such as approximation properties or the number of hard inputs at each length. One chapter is dedicated to abs...

  1. Statistical screening of input variables in a complex computer code

    International Nuclear Information System (INIS)

    Krieger, T.J.

    1982-01-01

    A method is presented for ''statistical screening'' of input variables in a complex computer code. The object is to determine the ''effective'' or important input variables by estimating the relative magnitudes of their associated sensitivity coefficients. This is accomplished by performing a numerical experiment consisting of a relatively small number of computer runs with the code followed by a statistical analysis of the results. A formula for estimating the sensitivity coefficients is derived. Reference is made to an earlier work in which the method was applied to a complex reactor code with good results
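
    The screening idea, estimating sensitivity coefficients from a modest number of randomized code runs, can be sketched as follows. The block below is a schematic stand-in (the function `code` is hypothetical, not the reactor code of the abstract): perturb the inputs randomly, run the code, and estimate the sensitivity coefficients by least squares.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def code(x):
        """Hypothetical stand-in for the complex computer code."""
        return 3.0 * x[0] + 0.1 * x[1] ** 2 + 0.01 * x[2]

    k, runs = 3, 50                                  # input variables, computer runs
    X = rng.normal(0.0, 0.1, (runs, k))              # small random input perturbations
    y = np.array([code(x) for x in X])

    # Fit y ~ b0 + sum_i b_i x_i; |b_i| estimates the sensitivity magnitudes.
    A = np.column_stack([np.ones(runs), X])
    b = np.linalg.lstsq(A, y, rcond=None)[0][1:]
    print("estimated sensitivities:", b)
    print("importance ranking:", np.argsort(-np.abs(b)))
    ```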

  2. Atomic switch networks—nanoarchitectonic design of a complex system for natural computing

    International Nuclear Information System (INIS)

    Demis, E C; Aguilera, R; Sillin, H O; Scharnhorst, K; Sandouk, E J; Gimzewski, J K; Aono, M; Stieg, A Z

    2015-01-01

    Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing—a burgeoning field that investigates the computational aptitude of complex biologically inspired systems. (paper)

  3. Development of Onboard Computer Complex for Russian Segment of ISS

    Science.gov (United States)

    Branets, V.; Brand, G.; Vlasov, R.; Graf, I.; Clubb, J.; Mikrin, E.; Samitov, R.

    1998-01-01

    This report presents a description of the Onboard Computer Complex (CC) that was developed during the period 1994-1998 for the Russian Segment of ISS. The system was developed in cooperation with NASA and ESA. ESA developed a new computation system under the RSC Energia Technical Assignment, called DMS-R. The CC also includes elements developed by Russian experts and organizations. A general architecture of the computer system and the characteristics of the primary elements of this system are described. The system was integrated at RSC Energia with the participation of American and European specialists. The report contains information on the software simulators and the verification and debugging facilities which were developed for both stand-alone and integrated tests and verification. This CC serves as the basis for the Russian Segment Onboard Control Complex on ISS.

  4. Some Comparisons of Complexity in Dictionary-Based and Linear Computational Models

    Czech Academy of Sciences Publication Activity Database

    Gnecco, G.; Kůrková, Věra; Sanguineti, M.

    2011-01-01

    Roč. 24, č. 2 (2011), s. 171-182 ISSN 0893-6080 R&D Projects: GA ČR GA201/08/1744 Grant - others:CNR - AV ČR project 2010-2012(XE) Complexity of Neural-Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords: linear approximation schemes * variable-basis approximation schemes * model complexity * worst-case errors * neural networks * kernel models Subject RIV: IN - Informatics, Computer Science Impact factor: 2.182, year: 2011

  5. Low Computational Complexity Network Coding For Mobile Networks

    DEFF Research Database (Denmark)

    Heide, Janus

    2012-01-01

    Network Coding (NC) is a technique that can provide benefits in many types of networks. Some examples from wireless networks are: in relay networks, at either the physical or the data link layer, to reduce the number of transmissions; in reliable multicast, to reduce the amount of signaling and enable......-flow coding technique. One of the key challenges of this technique is its inherent computational complexity, which can lead to high computational load and energy consumption, in particular on the mobile platforms that are the target platform in this work. To increase the coding throughput several...

  6. Computer modeling of properties of complex molecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Kulkova, E.Yu. [Moscow State University of Technology “STANKIN”, Vadkovsky per., 1, Moscow 101472 (Russian Federation); Khrenova, M.G.; Polyakov, I.V. [Lomonosov Moscow State University, Chemistry Department, Leninskie Gory 1/3, Moscow 119991 (Russian Federation); Nemukhin, A.V. [Lomonosov Moscow State University, Chemistry Department, Leninskie Gory 1/3, Moscow 119991 (Russian Federation); N.M. Emanuel Institute of Biochemical Physics, Russian Academy of Sciences, Kosygina 4, Moscow 119334 (Russian Federation)

    2015-03-10

    Large molecular aggregates present important examples of strongly nonhomogeneous systems. We apply combined quantum mechanics / molecular mechanics approaches that assume treatment of a part of the system by quantum-based methods and of the rest of the system with conventional force fields. Herein we illustrate these computational approaches with two different examples: (1) large-scale molecular systems mimicking natural photosynthetic centers, and (2) components of prospective solar cells containing titanium dioxide and organic dye molecules. We demonstrate that modern computational tools are capable of predicting the structures and spectra of such complex molecular aggregates.

  7. Comparing Virtual and Physical Robotics Environments for Supporting Complex Systems and Computational Thinking

    Science.gov (United States)

    Berland, Matthew; Wilensky, Uri

    2015-01-01

    Both complex systems methods (such as agent-based modeling) and computational methods (such as programming) provide powerful ways for students to understand new phenomena. To understand how to effectively teach complex systems and computational content to younger students, we conducted a study in four urban middle school classrooms comparing…

  8. The complexity of computing the MCD-estimator

    DEFF Research Database (Denmark)

    Bernholt, T.; Fischer, Paul

    2004-01-01

    In modern statistics the robust estimation of parameters is a central problem, i.e., an estimation that is not or only slightly affected by outliers in the data. The minimum covariance determinant (MCD) estimator (J. Amer. Statist. Assoc. 79 (1984) 871) is probably one of the most important robust...... estimators of location and scatter. The complexity of computing the MCD, however, was unknown and generally thought to be exponential even if the dimensionality of the data is fixed. Here we present a polynomial time algorithm for MCD for fixed dimension of the data. In contrast we show that computing...... the MCD-estimator is NP-hard if the dimension varies. (C) 2004 Elsevier B.V. All rights reserved....
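
    For readers who want to experiment, a practical (heuristic) MCD fit is available in scikit-learn; note that `MinCovDet` implements the approximate FAST-MCD algorithm, not the exact computation whose complexity the paper analyzes.

    ```python
    import numpy as np
    from sklearn.covariance import MinCovDet

    rng = np.random.default_rng(2)
    X = rng.normal(0.0, 1.0, (200, 2))
    X[:20] += 8.0                               # contaminate 10% of the points

    mcd = MinCovDet(random_state=0).fit(X)      # FAST-MCD heuristic
    print("robust location:", mcd.location_)    # near the true mean (0, 0)
    print("robust covariance:\n", mcd.covariance_)
    ```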

  9. Complex cellular logic computation using ribocomputing devices.

    Science.gov (United States)

    Green, Alexander A; Kim, Jongmin; Ma, Duo; Silver, Pamela A; Collins, James J; Yin, Peng

    2017-08-03

    Synthetic biology aims to develop engineering-driven approaches to the programming of cellular functions that could yield transformative technologies. Synthetic gene circuits that combine DNA, protein, and RNA components have demonstrated a range of functions such as bistability, oscillation, feedback, and logic capabilities. However, it remains challenging to scale up these circuits owing to the limited number of designable, orthogonal, high-performance parts, the empirical and often tedious composition rules, and the requirements for substantial resources for encoding and operation. Here, we report a strategy for constructing RNA-only nanodevices to evaluate complex logic in living cells. Our 'ribocomputing' systems are composed of de-novo-designed parts and operate through predictable and designable base-pairing rules, allowing the effective in silico design of computing devices with prescribed configurations and functions in complex cellular environments. These devices operate at the post-transcriptional level and use an extended RNA transcript to co-localize all circuit sensing, computation, signal transduction, and output elements in the same self-assembled molecular complex, which reduces diffusion-mediated signal losses, lowers metabolic cost, and improves circuit reliability. We demonstrate that ribocomputing devices in Escherichia coli can evaluate two-input logic with a dynamic range up to 900-fold and scale them to four-input AND, six-input OR, and a complex 12-input expression (A1 AND A2 AND NOT A1*) OR (B1 AND B2 AND NOT B2*) OR (C1 AND C2) OR (D1 AND D2) OR (E1 AND E2). Successful operation of ribocomputing devices based on programmable RNA interactions suggests that systems employing the same design principles could be implemented in other host organisms or in extracellular settings.
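
    The 12-input expression quoted above is ordinary Boolean logic, so its behavior can be checked directly; in the sketch below the names `A1s` and `B2s` stand for the starred inputs A1* and B2* (my notation, not the paper's).

    ```python
    from itertools import product

    def circuit(A1, A2, A1s, B1, B2, B2s, C1, C2, D1, D2, E1, E2):
        """The 12-input expression quoted in the abstract."""
        return ((A1 and A2 and not A1s) or (B1 and B2 and not B2s)
                or (C1 and C2) or (D1 and D2) or (E1 and E2))

    # Enumerate the full truth table of the 12-input logic.
    on = sum(circuit(*bits) for bits in product([False, True], repeat=12))
    print(on, "of", 2 ** 12, "input patterns turn the output ON")
    ```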

  10. Modeling Cu{sup 2+}-Aβ complexes from computational approaches

    Energy Technology Data Exchange (ETDEWEB)

    Alí-Torres, Jorge [Departamento de Química, Universidad Nacional de Colombia- Sede Bogotá, 111321 (Colombia); Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona, E-mail: Mariona.Sodupe@uab.cat [Departament de Química, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona (Spain)

    2015-09-15

    Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu{sup 2+} metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu{sup 2+}-Aβ complexes is thus important for a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu{sup 2+}-Aβ coordination and to build plausible Cu{sup 2+}-Aβ models that will afterwards allow the determination of physicochemical properties of interest, such as their redox potential.

  11. Complex data modeling and computationally intensive methods for estimation and prediction

    CERN Document Server

    Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics

    2015-01-01

    The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downside reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...

  12. Complex systems relationships between control, communications and computing

    CERN Document Server

    2016-01-01

    This book gives a wide-ranging description of the many facets of complex dynamic networks and systems within an infrastructure provided by integrated control and supervision: envisioning, design, experimental exploration, and implementation. The theoretical contributions and the case studies presented can reach control goals beyond those of stabilization and output regulation or even of adaptive control. Reporting on work of the Control of Complex Systems (COSY) research program, Complex Systems follows from and expands upon an earlier collection: Control of Complex Systems by introducing novel theoretical techniques for hard-to-control networks and systems. The major common feature of all the superficially diverse contributions encompassed by this book is that of spotting and exploiting possible areas of mutual reinforcement between control, computing and communications. These help readers to achieve not only robust stable plant system operation but also properties such as collective adaptivity, integrity an...

  13. Complexity estimates based on integral transforms induced by computational units

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra

    2012-01-01

    Roč. 33, September (2012), s. 160-167 ISSN 0893-6080 R&D Projects: GA ČR GAP202/11/1368 Institutional research plan: CEZ:AV0Z10300504 Institutional support: RVO:67985807 Keywords: neural networks * estimates of model complexity * approximation from a dictionary * integral transforms * norms induced by computational units Subject RIV: IN - Informatics, Computer Science Impact factor: 1.927, year: 2012

  14. Current topics in pure and computational complex analysis

    CERN Document Server

    Dorff, Michael; Lahiri, Indrajit

    2014-01-01

    The book contains 13 articles, some of which are survey articles and others research papers. Written by eminent mathematicians, these articles were presented at the International Workshop on Complex Analysis and Its Applications held at Walchand College of Engineering, Sangli. All the contributing authors are actively engaged in research fields related to the topic of the book. The workshop offered a comprehensive exposition of the recent developments in geometric functions theory, planar harmonic mappings, entire and meromorphic functions and their applications, both theoretical and computational. The recent developments in complex analysis and its applications play a crucial role in research in many disciplines.

  15. Complex system modelling and control through intelligent soft computations

    CERN Document Server

    Azar, Ahmad

    2015-01-01

    The book offers a snapshot of the theories and applications of soft computing in the area of complex systems modeling and control. It presents the most important findings discussed during the 5th International Conference on Modelling, Identification and Control, held in Cairo, from August 31-September 2, 2013. The book consists of twenty-nine selected contributions, which have been thoroughly reviewed and extended before their inclusion in the volume. The different chapters, written by active researchers in the field, report on both current theories and important applications of soft-computing. Besides providing the readers with soft-computing fundamentals, and soft-computing based inductive methodologies/algorithms, the book also discusses key industrial soft-computing applications, as well as multidisciplinary solutions developed for a variety of purposes, like windup control, waste management, security issues, biomedical applications and many others. It is a perfect reference guide for graduate students, r...

  16. Complexity vs energy: theory of computation and theoretical physics

    International Nuclear Information System (INIS)

    Manin, Y I

    2014-01-01

    This paper is a survey based upon the talk at the satellite QQQ conference to ECM6, 3Quantum: Algebra Geometry Information, Tallinn, July 2012. It is dedicated to the analogy between the notions of complexity in theoretical computer science and energy in physics. This analogy is not metaphorical: I describe three precise mathematical contexts, suggested recently, in which mathematics related to (un)computability is inspired by and to a degree reproduces formalisms of statistical physics and quantum field theory.

  17. General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Orponen, P.

    2003-01-01

    Roč. 15, č. 12 (2003), s. 2727-2778 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords: computational power * computational complexity * perceptrons * radial basis functions * spiking neurons * feedforward networks * recurrent networks * probabilistic computation * analog computation Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003

  18. Complex network problems in physics, computer science and biology

    Science.gov (United States)

    Cojocaru, Radu Ionut

    There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. Until a few years ago, however, there was no such close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have become interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that some problems are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics; one such mapping is sketched below. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. We extend this model to the study of a specific case of spin glass on the Bethe
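
    To make one of these mappings concrete: Number Partitioning asks for spins s_i = ±1 (item i assigned to the left or right part) minimizing E = (Σ_i s_i a_i)², which is exactly the ground-state energy of an infinite-range Ising-type Hamiltonian. A brute-force sketch for a tiny instance (illustrative only; realistic instances call for the optimization methods discussed in the thesis):

    ```python
    import numpy as np
    from itertools import product

    a = np.array([4, 7, 1, 9, 3, 5, 8, 2])   # numbers to partition
    # Ground-state search: E(s) = (sum_i s_i * a_i)^2 over all spin assignments
    best = min(product([-1, 1], repeat=len(a)), key=lambda s: np.dot(s, a) ** 2)
    print("spin assignment:", best)
    print("residual imbalance squared:", int(np.dot(best, a) ** 2))
    ```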

  19. Counting loop diagrams: computational complexity of higher-order amplitude evaluation

    International Nuclear Information System (INIS)

    Eijk, E. van; Kleiss, R.; Lazopoulos, A.

    2004-01-01

    We discuss the computational complexity of the perturbative evaluation of scattering amplitudes, both by the Caravaglios-Moretti algorithm and by direct evaluation of the individual diagrams. For a self-interacting scalar theory, we determine the complexity as a function of the number of external legs. We describe a method for obtaining the number of topologically inequivalent Feynman graphs containing closed loops, and apply this to 1- and 2-loop amplitudes. We also compute the number of graphs weighted by their symmetry factors, thus arriving at exact and asymptotic estimates for the average symmetry factor of diagrams. We present results for the asymptotic number of diagrams up to 10 loops, and prove that the average symmetry factor approaches unity as the number of external legs becomes large. (orig.)

  20. Computed tomography of von Meyenburg complex simulating micro-abscesses

    International Nuclear Information System (INIS)

    Sada, P.N.; Ramakrishna, B.

    1994-01-01

    A case is presented of a bile duct hamartoma in a 44-year-old man being evaluated for abdominal pain. The computed tomography (CT) findings suggested micro-abscesses in the liver, and a CT-guided Tru-Cut biopsy showed von Meyenburg complex. 9 refs., 3 figs

  1. 77 FR 50726 - Software Requirement Specifications for Digital Computer Software and Complex Electronics Used in...

    Science.gov (United States)

    2012-08-22

    ... Computer Software and Complex Electronics Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear...-1209, ``Software Requirement Specifications for Digital Computer Software and Complex Electronics used... Electronics Engineers (ANSI/IEEE) Standard 830-1998, ``IEEE Recommended Practice for Software Requirements...

  2. Fast and accurate algorithm for the computation of complex linear canonical transforms.

    Science.gov (United States)

    Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus

    2010-09-01

    A fast and accurate algorithm is developed for the numerical computation of the family of complex linear canonical transforms (CLCTs), which represent the input-output relationship of complex quadratic-phase systems. Allowing the linear canonical transform parameters to be complex numbers makes it possible to represent paraxial optical systems that involve complex parameters. These include lossy systems such as Gaussian apertures, Gaussian ducts, or complex graded-index media, as well as lossless thin lenses and sections of free space and any arbitrary combinations of them. Complex-ordered fractional Fourier transforms (CFRTs) are a special case of CLCTs, and therefore a fast and accurate algorithm to compute CFRTs is included as a special case of the presented algorithm. The algorithm is based on decomposition of an arbitrary CLCT matrix into real and complex chirp multiplications and Fourier transforms. The samples of the output are obtained from the samples of the input in approximately N log N time, where N is the number of input samples. A space-bandwidth product tracking formalism is developed to ensure that the number of samples is information-theoretically sufficient to reconstruct the continuous transform, but not unnecessarily redundant.
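
    The structure of the decomposition can be sketched for the real-parameter case. The naive O(N²) version below (my own illustration; the paper's algorithm achieves the same map accurately in roughly N log N operations) makes the chirp-Fourier-chirp factorization explicit for an LCT matrix [[a, b], [c, d]] with ad − bc = 1 and b ≠ 0:

    ```python
    import numpy as np

    def lct(f, x, M):
        """Naive discretization of the linear canonical transform as
        input chirp * Fourier-type kernel * output chirp."""
        (a, b), (c, d) = M
        dx = x[1] - x[0]
        g = f * np.exp(1j * a * x**2 / (2 * b))        # input chirp multiplication
        K = np.exp(-1j * np.outer(x, x) / b)           # Fourier-type kernel
        F = (K @ g) * dx / np.sqrt(2j * np.pi * b)
        return F * np.exp(1j * d * x**2 / (2 * b))     # output chirp multiplication

    x = np.linspace(-8, 8, 256)
    f = np.exp(-x**2 / 2)                              # Gaussian test signal
    F = lct(f, x, [[0.0, 1.0], [-1.0, 0.0]])           # a=d=0, b=1: ordinary FT
    print(np.allclose(np.abs(F), np.abs(f), atol=1e-6))  # Gaussian is self-dual
    ```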

  3. The information exchange between modules in the system of module programming of computation complexes

    International Nuclear Information System (INIS)

    Zinin, A.I.; Kolesov, V.E.; Nevinitsa, A.I.

    1975-01-01

    The report contains a description of a method for constructing complexes of computer programs for computational purposes on M-220 computers, using ALGOL-60 for programming. The complex is organized on the modular principle and can include a substantial number of module programs. The information exchange between separate modules is handled by a special interpreting program, and the unit of information exchanged is a specially arranged file of data. For addressing the interpreting program within the ALGOL-60 framework, a small number of specially created procedure codes is used. The proposed method makes it possible to program the separate modules of the complex independently and to expand the complex if necessary. In this case separate modules, or groups of modules, depending on the method of segmentation of the general problem solved by the complex, will be of independent interest and can be used outside the complex as traditional programs. (author)

  4. Exact complexity: The spectral decomposition of intrinsic computation

    International Nuclear Information System (INIS)

    Crutchfield, James P.; Ellison, Christopher J.; Riechers, Paul M.

    2016-01-01

    We give exact formulae for a wide family of complexity measures that capture the organization of hidden nonlinear processes. The spectral decomposition of operator-valued functions leads to closed-form expressions involving the full eigenvalue spectrum of the mixed-state presentation of a process's ϵ-machine causal-state dynamic. Measures include correlation functions, power spectra, past-future mutual information, transient and synchronization informations, and many others. As a result, a direct and complete analysis of intrinsic computation is now available for the temporal organization of finitary hidden Markov models and nonlinear dynamical systems with generating partitions and for the spatial organization in one-dimensional systems, including spin systems, cellular automata, and complex materials via chaotic crystallography. - Highlights: • We provide exact, closed-form expressions for a hidden stationary process' intrinsic computation. • These include information measures such as the excess entropy, transient information, and synchronization information and the entropy-rate finite-length approximations. • The method uses an epsilon-machine's mixed-state presentation. • The spectral decomposition of the mixed-state presentation relies on the recent development of meromorphic functional calculus for nondiagonalizable operators.
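
    The spectral flavor of these formulae can be conveyed with a much smaller example than the paper's mixed-state machinery: for a plain finite Markov chain, the stationary distribution is the left eigenvector of the transition matrix at eigenvalue 1, and the entropy rate then follows in closed form. A toy sketch (not the paper's ϵ-machine formalism):

    ```python
    import numpy as np

    T = np.array([[0.9, 0.1],      # toy two-state transition matrix
                  [0.4, 0.6]])

    vals, vecs = np.linalg.eig(T.T)                    # left eigenvectors of T
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi /= pi.sum()                                     # stationary distribution

    # Entropy rate h = -sum_i pi_i sum_j T_ij log2 T_ij (bits per symbol)
    h = -np.sum(pi[:, None] * T * np.log2(T))
    print("stationary:", pi, "entropy rate:", h)
    ```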

  5. Accurate Computed Enthalpies of Spin Crossover in Iron and Cobalt Complexes

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta; Cirera, J

    2009-01-01

    Despite their importance in many chemical processes, the relative energies of spin states of transition metal complexes have so far been haunted by large computational errors. By the use of six functionals, B3LYP, BP86, TPSS, TPSSh, M06, and M06L, this work studies nine complexes (seven with iron

  6. Single photon emission computed tomography in AIDS dementia complex

    International Nuclear Information System (INIS)

    Pohl, P.; Vogl, G.; Fill, H.; Roessler, H.Z.; Zangerle, R.; Gerstenbrand, F.

    1988-01-01

    Single photon emission computed tomography (SPECT) studies were performed in AIDS dementia complex using IMP in 12 patients (and HM-PAO in four of these same patients). In all patients, SPECT revealed either multiple or focal uptake defects, the latter corresponding with focal signs or symptoms in all but one case. Computerized tomography showed diffuse cerebral atrophy in eight of 12 patients; magnetic resonance imaging exhibited changes such as atrophy and/or leukoencephalopathy in two of five cases. Our data indicate that both disturbance of cerebral amine metabolism and alteration of local perfusion play a part in the pathogenesis of AIDS dementia complex. SPECT is an important aid in the diagnosis of AIDS dementia complex and contributes to the understanding of the pathophysiological mechanisms of this disorder

  7. Several problems of algorithmization in integrated computation programs on third generation computers for short circuit currents in complex power networks

    Energy Technology Data Exchange (ETDEWEB)

    Krylov, V.A.; Pisarenko, V.P.

    1982-01-01

    Methods are described for modeling complex power networks containing short circuits. The methods are implemented in integrated programs that compute short circuit currents and equivalents in electrical networks with a large number of branch points (up to 1000) on a computer with limited online memory capacity (M = 4030 for the computer).

  8. Wireless Mobile Computing and its Links to Descriptive Complexity

    Czech Academy of Sciences Publication Activity Database

    Wiedermann, Jiří; Pardubská, D.

    2008-01-01

    Roč. 19, č. 4 (2008), s. 887-913 ISSN 0129-0541 R&D Projects: GA AV ČR 1ET100300517 Institutional research plan: CEZ:AV0Z10300504 Keywords: alternating Turing machine * simulation * simultaneous time-space complexity * wireless parallel Turing machine Subject RIV: IN - Informatics, Computer Science Impact factor: 0.554, year: 2008

  9. Complex computation in the retina

    Science.gov (United States)

    Deshmukh, Nikhil Rajiv

    Elucidating the general principles of computation in neural circuits is a difficult problem requiring both a tractable model circuit and sophisticated measurement tools. This thesis advances our understanding of complex computation in the salamander retina and its underlying circuitry and furthers the development of advanced tools to enable detailed study of neural circuits. The retina provides an ideal model system for neural circuits in general because it is capable of producing complex representations of the visual scene, and both its inputs and outputs are accessible to the experimenter. Chapter 2 describes the biophysical mechanisms that give rise to the omitted stimulus response in retinal ganglion cells described in Schwartz et al. (2007) and Schwartz and Berry (2008). The extra response to omitted flashes is generated at the input to bipolar cells, and is separable from the characteristic latency shift of the OSR apparent in ganglion cells, which must occur downstream in the circuit. Chapter 3 characterizes the nonlinearities at the first synapse of the ON pathway in response to high contrast flashes and develops a phenomenological model that captures the effect of synaptic activation and intracellular signaling dynamics on flash responses. This work is the first attempt to model the dynamics of the poorly characterized mGluR6 transduction cascade unique to ON bipolar cells, and explains the second lobe of the biphasic flash response. Complementary to the study of neural circuits, recent advances in wafer-scale photolithography have made possible new devices to measure the electrical and mechanical properties of neurons. Chapter 4 reports a novel piezoelectric sensor that facilitates the simultaneous measurement of electrical and mechanical signals in neural tissue. This technology could reveal the relationship between the electrical activity of neurons and their local mechanical environment, which is critical to the study of mechanoreceptors

  10. The Intelligent Safety System: could it introduce complex computing into CANDU shutdown systems

    International Nuclear Information System (INIS)

    Hall, J.A.; Hinds, H.W.; Pensom, C.F.; Barker, C.J.; Jobse, A.H.

    1984-07-01

    The Intelligent Safety System is a computerized shutdown system being developed at the Chalk River Nuclear Laboratories (CRNL) for future CANDU nuclear reactors. It differs from current CANDU shutdown systems in both the algorithm used and the size and complexity of computers required to implement the concept. This paper provides an overview of the project, with emphasis on the computing aspects. Early in the project several needs leading to an introduction of computing complexity were identified, and a computing system that met these needs was conceived. The current work at CRNL centers on building a laboratory demonstration of the Intelligent Safety System, and evaluating the reliability and testability of the concept. Some fundamental problems must still be addressed for the Intelligent Safety System to be acceptable to a CANDU owner and to the regulatory authorities. These are also discussed along with a description of how the Intelligent Safety System might solve these problems

  11. Cognitive engineering models: A prerequisite to the design of human-computer interaction in complex dynamic systems

    Science.gov (United States)

    Mitchell, Christine M.

    1993-01-01

    This chapter examines a class of human-computer interaction applications, specifically the design of human-computer interaction for the operators of complex systems. Such systems include space systems (e.g., manned systems such as the Shuttle or space station, and unmanned systems such as NASA scientific satellites), aviation systems (e.g., the flight deck of 'glass cockpit' airplanes or air traffic control) and industrial systems (e.g., power plants, telephone networks, and sophisticated, e.g., 'lights out,' manufacturing facilities). The main body of human-computer interaction (HCI) research complements but does not directly address the primary issues involved in human-computer interaction design for operators of complex systems. Interfaces to complex systems are somewhat special. The 'user' in such systems - i.e., the human operator responsible for safe and effective system operation - is highly skilled, someone who in human-machine systems engineering is sometimes characterized as 'well trained, well motivated'. The 'job' or task context is paramount and, thus, human-computer interaction is subordinate to human job interaction. The design of human interaction with complex systems, i.e., the design of human job interaction, is sometimes called cognitive engineering.

  12. Reduced-Complexity Direction of Arrival Estimation Using Real-Valued Computation with Arbitrary Array Configurations

    Directory of Open Access Journals (Sweden)

    Feng-Gang Yan

    2018-01-01

    Full Text Available A low-complexity algorithm is presented to dramatically reduce the complexity of the multiple signal classification (MUSIC algorithm for direction of arrival (DOA estimation, in which both tasks of eigenvalue decomposition (EVD and spectral search are implemented with efficient real-valued computations, leading to about 75% complexity reduction as compared to the standard MUSIC. Furthermore, the proposed technique has no dependence on array configurations and is hence suitable for arbitrary array geometries, which shows a significant implementation advantage over most state-of-the-art unitary estimators including unitary MUSIC (U-MUSIC. Numerical simulations over a wide range of scenarios are conducted to show the performance of the new technique, which demonstrates that with a significantly reduced computational complexity, the new approach is able to provide a close accuracy to the standard MUSIC.
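
    For orientation, the standard complex-valued MUSIC baseline that the proposed real-valued scheme accelerates looks as follows for a uniform linear array (all parameters illustrative). The eigenvalue decomposition and the spectral search in this sketch are exactly the two steps whose arithmetic the paper converts to real-valued computation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    M, d, K, snap = 8, 0.5, 2, 200      # sensors, spacing (wavelengths), sources, snapshots
    doa = np.deg2rad([-10.0, 25.0])     # true directions of arrival

    A = np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(doa)))
    S = (rng.normal(size=(K, snap)) + 1j * rng.normal(size=(K, snap))) / np.sqrt(2)
    noise = 0.1 * (rng.normal(size=(M, snap)) + 1j * rng.normal(size=(M, snap)))
    X = A @ S + noise

    R = X @ X.conj().T / snap           # sample covariance
    w, V = np.linalg.eigh(R)            # complex-valued EVD (the costly step)
    En = V[:, :M - K]                   # noise subspace

    grid = np.deg2rad(np.linspace(-90, 90, 1801))
    a = np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(grid)))
    P = 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2   # pseudo-spectrum

    peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
    top = peaks[np.argsort(P[peaks])[-K:]]
    print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[top])))
    ```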

  13. ANCON: A code for the evaluation of complex fault trees in personal computers

    International Nuclear Information System (INIS)

    Napoles, J.G.; Salomon, J.; Rivero, J.

    1990-01-01

    Performing probabilistic safety analysis has been recognized worldwide as one of the most effective ways of further enhancing the safety of Nuclear Power Plants. The evaluation of fault trees plays a fundamental role in these analyses. Existing limitations in the RAM and execution speed of personal computers (PCs) have so far restricted their use in the analysis of complex fault trees. Starting from new approaches to the data structures, among other possibilities, the ANCON code can evaluate complex fault trees on a PC, allowing the user to perform a more comprehensive analysis of the considered system in reduced computing time

  14. Computer Simulations and Theoretical Studies of Complex Systems: from complex fluids to frustrated magnets

    Science.gov (United States)

    Choi, Eunsong

    Computer simulations are an integral part of research in modern condensed matter physics; they serve as a direct bridge between theory and experiment by systematically applying a microscopic model to a collection of particles that effectively imitates a macroscopic system. In this thesis, we study two very different condensed matter systems, namely complex fluids and frustrated magnets, primarily by simulating the classical dynamics of each system. In the first part of the thesis, we focus on ionic liquids (ILs) and polymers--the two complementary classes of materials that can be combined to provide various unique properties. The properties of polymer/IL systems, such as conductivity, viscosity, and miscibility, can be fine-tuned by choosing an appropriate combination of cations, anions, and polymers. However, designing a system that meets a specific need requires a concrete understanding of the physics and chemistry that dictate a complex interplay between polymers and ionic liquids. In this regard, molecular dynamics (MD) simulation is an efficient tool that provides a molecular-level picture of such complex systems. We study the behavior of poly(ethylene oxide) (PEO) and imidazolium-based ionic liquids using MD simulations and statistical mechanics. We also discuss our efforts to develop reliable and efficient classical force fields for PEO and the ionic liquids. The second part is devoted to studies of geometrically frustrated magnets. In particular, a microscopic model which gives rise to an incommensurate spiral magnetic ordering observed in a pyrochlore antiferromagnet is investigated. The validation of the model is made via a comparison of the spin-wave spectra with neutron scattering data. Since the standard Holstein-Primakoff method is difficult to employ in such a complex ground state structure with a large unit cell, we carry out classical spin dynamics simulations to compute spin-wave spectra directly from the Fourier transform of spin trajectories. We
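
    The last step mentioned in the abstract, extracting spectra from the Fourier transform of spin trajectories, can be illustrated with a single precessing classical spin, a toy stand-in for the full lattice simulation (field strength and time step are illustrative):

    ```python
    import numpy as np

    # One classical spin in a field B along z: dS/dt = S x B, so S_x(t)
    # oscillates at the Larmor frequency |B|; the FFT should peak there.
    dt, steps, B = 0.01, 4096, 2.0
    S = np.array([1.0, 0.0, 0.1]); S /= np.linalg.norm(S)

    traj = np.empty(steps)
    for i in range(steps):
        traj[i] = S[0]
        S = S + dt * np.cross(S, [0.0, 0.0, B])   # explicit Euler step
        S /= np.linalg.norm(S)                    # keep |S| = 1

    spec = np.abs(np.fft.rfft(traj - traj.mean())) ** 2
    omega = 2 * np.pi * np.fft.rfftfreq(steps, dt)
    print("spectral peak at omega ~", omega[np.argmax(spec)], "(expected", B, ")")
    ```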

  15. On a computational method for modelling complex ecosystems by superposition procedure

    International Nuclear Information System (INIS)

    He Shanyu.

    1986-12-01

    In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximal degree. (author). 1 ref

  16. A design of a computer complex including vector processors

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1982-12-01

    We, members of the Computing Center of the Japan Atomic Energy Research Institute, have been engaged for the past six years in research on the adaptability of vector processing to large-scale nuclear codes. The research has been done in collaboration with researchers and engineers of JAERI and a computer manufacturer. In this research, forty large-scale nuclear codes were investigated from the viewpoint of vectorization. Among them, twenty-six codes were actually vectorized and executed. As a result of the investigation, it is now estimated that about seventy percent of nuclear codes and seventy percent of the total CPU time at JAERI are highly vectorizable. Based on the data obtained by the investigation, the report discusses (1) currently vectorizable CPU time, (2) the necessary number of vector processors, (3) the manpower needed for vectorization of nuclear codes, (4) the computing speed, memory size, number of parallel I/O paths, and size and speed of the I/O buffer of a vector processor suitable for our applications, and (5) the necessary software and operational policy for the use of vector processors, and finally (6) presents a computer complex including vector processors. (author)

  17. Investigation of anticancer properties of caffeinated complexes via computational chemistry methods

    Science.gov (United States)

    Sayin, Koray; Üngördü, Ayhan

    2018-03-01

    Computational investigations were performed for 1,3,7-trimethylpurine-2,6-dione, 3,7-dimethylpurine-2,6-dione, and their Ru(II) and Os(III) complexes. The B3LYP/6-311++G(d,p) (LANL2DZ) level was used in the numerical calculations. Geometric parameters and the IR, 1H, 13C and 15N NMR spectra were examined in detail. Additionally, contour diagrams of the frontier molecular orbitals (FMOs), molecular electrostatic potential (MEP) maps, MEP contours and some quantum chemical descriptors were used in the determination of reactivity rankings and active sites. The electron density on the surface was similar in all the studied complexes. Quantum chemical descriptors were investigated, and the anticancer activity of the complexes was found to be greater than that of cisplatin and of their ligands. Additionally, molecular docking calculations were performed in water between the complexes and a protein (ID: 3WZE). The complex with the strongest interaction was found to be the Os complex, with an interaction energy of 342.9 kJ/mol.
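
    The quantum chemical descriptors mentioned above are typically derived from the frontier orbital energies via Koopmans-type relations. A small helper is sketched below; the orbital energies in the example are made up for illustration, not values from the paper.

    ```python
    def descriptors(e_homo, e_lumo):
        """Global reactivity descriptors (eV) from HOMO/LUMO energies,
        using the common Koopmans-type approximations I = -E_HOMO, A = -E_LUMO."""
        I, A = -e_homo, -e_lumo
        chi = I + A and (I + A) / 2       # electronegativity
        eta = (I - A) / 2                 # chemical hardness (low -> more reactive)
        omega = chi ** 2 / (2 * eta)      # electrophilicity index
        return {"chi": chi, "eta": eta, "omega": omega}

    print(descriptors(e_homo=-5.8, e_lumo=-2.1))   # illustrative energies
    ```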

  18. Effects of Task Performance and Task Complexity on the Validity of Computational Models of Attention

    NARCIS (Netherlands)

    Koning, L. de; Maanen, P.P. van; Dongen, K. van

    2008-01-01

    Computational models of attention can be used as a component of decision support systems. For accurate support, a computational model of attention has to be valid and robust. The effects of task performance and task complexity on the validity of three different computational models of attention were

  19. Reducing the Computational Complexity of Reconstruction in Compressed Sensing Nonuniform Sampling

    DEFF Research Database (Denmark)

    Grigoryan, Ruben; Jensen, Tobias Lindstrøm; Arildsen, Thomas

    2013-01-01

    Compressed sensing allows the acquisition of sparse signals at sub-Nyquist rates, but requires computationally expensive reconstruction algorithms. This can be an obstacle for real-time applications. The reduction of complexity is achieved by applying a multi-coset sampling procedure. The proposed method reduces the size of the dictionary matrix, the size

  20. Complexity and Intensionality in a Type-1 Framework for Computable Analysis

    DEFF Research Database (Denmark)

    Lambov, Branimir Zdravkov

    2005-01-01

    This paper describes a type-1 framework for computable analysis designed to facilitate efficient implementations and discusses properties that have not been well studied before for type-1 approaches: the introduction of complexity measures for type-1 representations of real functions, and ways...

  1. Platinum Group Thiophenoxyimine Complexes: Syntheses, Crystallographic and Computational Studies of Structural Properties

    Energy Technology Data Exchange (ETDEWEB)

    Krinsky, Jamin L.; Arnold, John; Bergman, Robert G.

    2006-10-03

    Monomeric thiosalicylaldiminate complexes of rhodium(I) and iridium(I) were prepared by ligand transfer from the homoleptic zinc(II) species. In the presence of strongly donating ligands, the iridium complexes undergo insertion of the metal into the imine carbon-hydrogen bond. Thiophenoxyketimines were prepared by non-templated reaction of o-mercaptoacetophenone with anilines, and were complexed with rhodium(I), iridium(I), nickel(II) and platinum(II). X-ray crystallographic studies showed that while the thiosalicylaldiminate complexes display planar ligand conformations, those of the thiophenoxyketiminates are strongly distorted. Results of a computational study were consistent with a steric-strain interpretation of the difference in preferred ligand geometries.

  2. Impact of familiarity on information complexity in human-computer interfaces

    Directory of Open Access Journals (Sweden)

    Bakaev Maxim

    2016-01-01

    A quantitative measure of information complexity remains very much desirable in the HCI field, since it may aid in the optimization of user interfaces, especially in human-computer systems for controlling complex objects. Our paper is dedicated to the exploration of the subjective (subject-dependent) aspect of complexity, conceptualized as information familiarity. Although research on familiarity in human cognition and behaviour has been done in several fields, the accepted models in HCI, such as the Human Processor or the Hick-Hyman law, do not generally consider this issue. In our experimental study the subjects performed search and selection of digits and letters, whose familiarity was conceptualized as frequency of occurrence in numbers and texts. The analysis showed a significant effect of information familiarity on selection time and throughput in regression models, although the R2 values were somewhat low. Still, we hope that our results might aid in the quantification of information complexity and its further application to optimizing interaction in human-machine systems.
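
    The Hick-Hyman law mentioned above models choice time as T = a + b*log2(n + 1) for n equally likely alternatives. A minimal sketch of the kind of regression involved, using synthetic data (not the study's) and a hypothetical familiarity regressor:

        import numpy as np

        # Hick-Hyman law with a hypothetical familiarity term:
        #   T = a + b*log2(n + 1) + c*familiarity
        rng = np.random.default_rng(1)
        n_alternatives = rng.integers(2, 36, size=200)
        familiarity = rng.uniform(0.0, 1.0, size=200)   # e.g., occurrence frequency
        t_true = 0.2 + 0.15 * np.log2(n_alternatives + 1) - 0.1 * familiarity
        t_obs = t_true + rng.normal(scale=0.05, size=200)

        # Ordinary least squares for the coefficients [a, b, c].
        X = np.column_stack([np.ones(200), np.log2(n_alternatives + 1), familiarity])
        coef, *_ = np.linalg.lstsq(X, t_obs, rcond=None)
        print("a, b, c =", coef)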

  3. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of these data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici

  4. Computational Strategies for Dissecting the High-Dimensional Complexity of Adaptive Immune Repertoires

    Directory of Open Access Journals (Sweden)

    Enkelejda Miho

    2018-02-01

    The adaptive immune system recognizes antigens via an immense array of antigen-binding antibodies and T-cell receptors, the immune repertoire. The interrogation of immune repertoires is of high relevance for understanding the adaptive immune response in disease and infection (e.g., autoimmunity, cancer, HIV). Adaptive immune receptor repertoire sequencing (AIRR-seq) has driven the quantitative and molecular-level profiling of immune repertoires, thereby revealing the high-dimensional complexity of the immune receptor sequence landscape. Several methods for the computational and statistical analysis of large-scale AIRR-seq data have been developed to resolve immune repertoire complexity and to understand the dynamics of adaptive immunity. Here, we review the current research on (i) diversity, (ii) clustering and network, (iii) phylogenetic, and (iv) machine learning methods applied to dissect, quantify, and compare the architecture, evolution, and specificity of immune repertoires. We summarize outstanding questions in computational immunology and propose future directions for systems immunology toward coupling AIRR-seq with the computational discovery of immunotherapeutics, vaccines, and immunodiagnostics.
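
    For the diversity methods in (i), a minimal sketch of three commonly used repertoire diversity indices, computed here from hypothetical clone counts rather than real AIRR-seq data:

        import numpy as np

        # Hypothetical clone-size distribution of a sequenced repertoire.
        counts = np.array([1200, 800, 150, 40, 5, 5, 1, 1, 1, 1])
        p = counts / counts.sum()            # clonal frequencies

        shannon = -(p * np.log(p)).sum()     # Shannon entropy (diversity)
        simpson = 1.0 / (p ** 2).sum()       # inverse Simpson index
        richness = len(counts)               # number of distinct clones
        print(shannon, simpson, richness)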

  5. LT^2C^2: A language of thought with Turing-computable Kolmogorov complexity

    Directory of Open Access Journals (Sweden)

    Santiago Figueira

    2013-03-01

    In this paper, we present a theoretical effort to connect the theory of program size to psychology by implementing a concrete language of thought with Turing-computable Kolmogorov complexity (LT^2C^2) satisfying the following requirements: (1) to be simple enough that the complexity of any given finite binary sequence can be computed, (2) to be based on tangible operations of human reasoning (printing, repeating, ...), and (3) to be sufficiently powerful to generate all possible sequences but not so powerful as to identify regularities that would be invisible to humans. We first formalize LT^2C^2, giving its syntax and semantics and defining an adequate notion of program size. Our setting leads to a Kolmogorov complexity function relative to LT^2C^2 which is computable in polynomial time, and it also induces a prediction algorithm in the spirit of Solomonoff's inductive inference theory. We then prove the efficacy of this language by investigating regularities in strings produced by participants attempting to generate random strings. Participants had a profound understanding of randomness and hence avoided typical misconceptions such as exaggerating the number of alternations. We reasoned that the remaining regularities would express the algorithmic nature of human thoughts, revealed in the form of specific patterns. Kolmogorov complexity relative to LT^2C^2 passed the three expected tests examined here: (1) human sequences were less complex than control PRNG sequences, (2) human sequences were not stationary, showing decreasing values of complexity resulting from fatigue, and (3) each individual showed traces of algorithmic stability, since fits to partial data were more effective at predicting subsequent data than average fits. This work extends previous efforts to combine notions of Kolmogorov complexity theory and algorithmic information theory with psychology by explicitly proposing a language which may describe the patterns of human thoughts.
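
    The LT^2C^2 language itself is specified in the paper; as a rough, generic stand-in for a computable complexity measure, compressed length is often used. A minimal sketch of that proxy (explicitly not the authors' measure):

        import random
        import zlib

        def complexity_proxy(bits: str) -> int:
            """Length of the compressed string: a crude, computable stand-in
            for the (in general uncomputable) Kolmogorov complexity."""
            return len(zlib.compress(bits.encode("ascii"), 9))

        over_alternating = "01" * 32      # typical over-alternating 'random' guess
        rng = random.Random(0)
        prng = "".join(rng.choice("01") for _ in range(64))

        print(complexity_proxy(over_alternating))   # low: regular pattern
        print(complexity_proxy(prng))                # higher: closer to incompressible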

  6. Analyzing the Implicit Computational Complexity of object-oriented programs

    OpenAIRE

    Marion , Jean-Yves; Péchoux , Romain

    2008-01-01

    A sup-interpretation is a tool which provides upper bounds on the size of the values computed by the function symbols of a program. Sup-interpretations have shown their usefulness for dealing with the complexity of first-order functional programs. This paper is an attempt to adapt the framework of sup-interpretations to a fragment of object-oriented programs, including loop and while constructs and methods with side effects. We give a criterion, called the brotherly criterion, w...

  7. GAM-HEAT -- a computer code to compute heat transfer in complex enclosures

    International Nuclear Information System (INIS)

    Cooper, R.E.; Taylor, J.R.; Kielpinski, A.L.; Steimke, J.L.

    1991-02-01

    The GAM-HEAT code was developed for heat transfer analyses associated with postulated Double Ended Guillotine Break Loss Of Coolant Accidents (DEGB LOCA) resulting in a drained reactor vessel. In these analyses the gamma radiation resulting from fission product decay constitutes the primary source of energy as a function of time. This energy is deposited into the various reactor components and is re-radiated as thermal energy. The code accounts for all radiant heat exchanges within and leaving the reactor enclosure. The SRS reactors constitute complex radiant-exchange enclosures, since there are many assemblies of various types within the primary enclosure and most of the assemblies themselves constitute enclosures. GAM-HEAT accounts for this complexity by processing externally generated view factors and connectivity matrices, and also accounts for convective, conductive, and advective heat exchanges. The code is applicable to many situations involving heat exchange between surfaces within a radiatively passive medium. The GAM-HEAT code has been exercised extensively for computing transient temperatures in SRS reactors with specific charges and control components. Results from these computations have been used to establish the need for and to evaluate hardware modifications designed to mitigate the results of postulated accident scenarios, and to assist in the specification of safe reactor operating power limits. The code utilizes temperature-dependent material properties. The efficiency of the code has been enhanced by the use of an iterative equation solver. Verification of the code to date consists of comparisons with parallel efforts at Los Alamos National Laboratory and with similar efforts at the Westinghouse Science and Technology Center in Pittsburgh, PA, and benchmarking against problems with known analytical or iterated solutions. All comparisons and tests yield results indicating that the GAM-HEAT code performs as intended
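
    The core radiant-exchange solve in a gray, diffuse enclosure can be illustrated by the standard radiosity balance J = eps*sigma*T^4 + (1 - eps)*(F J), solved here by the kind of fixed-point iteration the abstract mentions. The three-surface enclosure and its view-factor matrix below are hypothetical, not SRS geometry:

        import numpy as np

        SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

        # Hypothetical enclosure: emissivities, temperatures (K), and a
        # view-factor matrix F (F[i, j] = fraction of radiation leaving
        # surface i that reaches surface j; rows sum to 1).
        eps = np.array([0.8, 0.6, 0.9])
        T = np.array([600.0, 400.0, 300.0])
        F = np.array([[0.0, 0.5, 0.5],
                      [0.5, 0.0, 0.5],
                      [0.5, 0.5, 0.0]])

        # Radiosity balance J = eps*sigma*T^4 + (1-eps) * F @ J,
        # solved by fixed-point iteration (converges since eps > 0).
        J = SIGMA * T**4
        for _ in range(200):
            J = eps * SIGMA * T**4 + (1.0 - eps) * (F @ J)

        # Net radiative flux leaving each surface, W/m^2 (valid for eps < 1).
        q_net = eps / (1.0 - eps) * (SIGMA * T**4 - J)
        print(J, q_net)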

  8. Theory of computational complexity

    CERN Document Server

    Du, Ding-Zhu

    2011-01-01

    DING-ZHU DU, PhD, is a professor in the Department of Computer Science at the University of Minnesota. KER-I KO, PhD, is a professor in the Department of Computer Science at the State University of New York at Stony Brook.

  9. Using the calculational simulating complexes when making the computer process control systems for NPP

    International Nuclear Information System (INIS)

    Zimakov, V.N.; Chernykh, V.P.

    1998-01-01

    The problems of creating calculational simulating complexes (CSC) and of applying them in the development of software and software-hardware systems for computer-aided process control systems at NPPs are considered. The complex is based on an all-mode real-time mathematical model running on a dedicated complex of computerized means.

  10. Computational complexity of symbolic dynamics at the onset of chaos

    Science.gov (United States)

    Lakdawala, Porus

    1996-05-01

    In a variety of studies of dynamical systems, the edge of order and chaos has been singled out as a region of complexity. It was suggested by Wolfram, on the basis of qualitative behavior of cellular automata, that the computational basis for modeling this region is the universal Turing machine. In this paper, following a suggestion of Crutchfield, we try to show that the Turing machine model may often be too powerful as a computational model to describe the boundary of order and chaos. In particular we study the region of the first accumulation of period doubling in unimodal and bimodal maps of the interval, from the point of view of language theory. We show that in relation to the "extended" Chomsky hierarchy, the relevant computational model in the unimodal case is the nested stack automaton or the related indexed languages, while the bimodal case is modeled by the linear bounded automaton or the related context-sensitive languages.
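
    The symbolic dynamics in question can be generated directly: iterate the logistic map at the period-doubling accumulation point and record on which side of the critical point each iterate falls. A minimal sketch (the value of r_inf below is a truncated approximation, not exact):

        # Symbolic itinerary of the logistic map x -> r*x*(1-x) near the
        # period-doubling accumulation (Feigenbaum) point, r_inf ~ 3.5699456.
        R_INF = 3.5699456

        def symbolic_orbit(x0: float, n: int, r: float = R_INF) -> str:
            x, symbols = x0, []
            for _ in range(n):
                x = r * x * (1.0 - x)
                # 'L'/'R' according to the side of the critical point x = 0.5.
                symbols.append("L" if x < 0.5 else "R")
            return "".join(symbols)

        print(symbolic_orbit(0.4, 64))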

  11. A general method for computing the total solar radiation force on complex spacecraft structures

    Science.gov (United States)

    Chan, F. K.

    1981-01-01

    The method circumvents many of the existing difficulties in computational logic presently encountered in the direct analytical or numerical evaluation of the appropriate surface integral. It may be applied to complex spacecraft structures for computing the total force arising from either specular or diffuse reflection or even from non-Lambertian reflection and re-radiation.

  12. A user's manual of Tools for Error Estimation of Complex Number Matrix Computation (Ver.1.0)

    International Nuclear Information System (INIS)

    Ichihara, Kiyoshi.

    1997-03-01

    'Tools for Error Estimation of Complex Number Matrix Computation' is a subroutine library which aids users in obtaining the error ranges of the solutions of complex-number linear systems or of the eigenvalues of Hermitian matrices. The library contains routines for both sequential computers and parallel computers. The subroutines for linear-system error estimation calculate norms of residual vectors, condition numbers of matrices, error bounds of solutions, and so on. The error estimation subroutines for Hermitian matrix eigenvalues derive the error ranges of the eigenvalues according to the Korn-Kato formula. This user's manual contains a brief mathematical background on error analysis in linear algebra and the usage of the subroutines. (author)
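
    The linear-system part of such error estimation typically combines a residual norm with a condition number, as in the standard bound ||x - x_hat|| / ||x|| <= cond(A) * ||r|| / ||b|| with r = b - A x_hat. A minimal dense-matrix sketch of that computation (not the library's routines):

        import numpy as np

        rng = np.random.default_rng(2)
        n = 50
        # Random complex system A x = b as a stand-in for real data.
        A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        b = rng.normal(size=n) + 1j * rng.normal(size=n)

        x_hat = np.linalg.solve(A, b)
        r = b - A @ x_hat                      # residual vector
        cond = np.linalg.cond(A)               # 2-norm condition number
        bound = cond * np.linalg.norm(r) / np.linalg.norm(b)
        print("cond(A):", cond, "relative error bound:", bound)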

  13. Physiological Dynamics in Demyelinating Diseases: Unraveling Complex Relationships through Computer Modeling

    Directory of Open Access Journals (Sweden)

    Jay S. Coggan

    2015-09-01

    Despite intense research, few treatments are available for most neurological disorders. Demyelinating diseases are no exception. This is perhaps not surprising considering the multifactorial nature of these diseases, which involve complex interactions between immune system cells, glia, and neurons. In the case of multiple sclerosis, for example, there is no unanimity among researchers about the cause or even about which system or cell type could be ground zero. This situation precludes the development and strategic application of mechanism-based therapies. We will discuss how computational modeling applied to questions at different biological levels can help link together disparate observations and decipher complex mechanisms whose solutions are not amenable to simple reductionism. By making testable predictions and revealing critical gaps in existing knowledge, such models can help direct research and will provide a rigorous framework in which to integrate new data as they are collected. Nowadays, there is no shortage of data; the challenge is to make sense of it all. In that respect, computational modeling is an invaluable tool that could, ultimately, transform how we understand, diagnose, and treat demyelinating diseases.

  14. Recent Developments in Complex Analysis and Computer Algebra

    CERN Document Server

    Kajiwara, Joji; Xu, Yongzhi

    1999-01-01

    This volume consists of papers presented in the special sessions on "Complex and Numerical Analysis", "Value Distribution Theory and Complex Domains", and "Use of Symbolic Computation in Mathematics Education" of the ISAAC'97 Congress held at the University of Delaware, during June 2-7, 1997. The ISAAC Congress coincided with a U.S.-Japan Seminar also held at the University of Delaware. The latter was supported by the National Science Foundation through Grant INT-9603029 and the Japan Society for the Promotion of Science through Grant MTCS-134. It was natural that the participants of both meetings should interact and consequently several persons attending the Congress also presented papers in the Seminar. The success of the ISAAC Congress and the U.S.-Japan Seminar has led to the ISAAC'99 Congress being held in Fukuoka, Japan during August 1999. Many of the same participants will return to this Seminar. Indeed, it appears that the spirit of the U.S.-Japan Seminar will be continued every second year as part of...

  15. EXAFS Phase Retrieval Solution Tracking for Complex Multi-Component System: Synthesized Topological Inverse Computation

    International Nuclear Information System (INIS)

    Lee, Jay Min; Yang, Dong-Seok; Bunker, Grant B

    2013-01-01

    Using the FEFF kernel A(k,r), we describe the inverse computation from χ(k) data to the g(r) solution in terms of a singularity regularization method based on a complete Bayesian statistical process. In this work, we topologically decompose the system-matched invariant projection operators into two distinct types, (A^+AA^+A) and (AA^+AA^+), and achieve Synthesized Topological Inversion Computation (STIC) by employing a 12-operator closed-loop emulator of the symplectic transformation. This leads to a numerically self-consistent solution as the optimal near-singular regularization parameters are sought, dramatically suppressing the instability problems connected with finite-precision arithmetic in ill-posed systems. By statistically correlating a pair of measured data sets, it was feasible to compute an optimal EXAFS phase retrieval solution expressed in terms of the complex-valued χ(k), and this approach was successfully used to determine the optimal g(r) for a complex multi-component system.

  16. Insight into the structures and stabilities of Tc and Re DMSA complexes: A computational study

    International Nuclear Information System (INIS)

    Blanco González, Alejandro; Hernández Valdés, Daniel; García Fleitas, Ariel; Rodríguez Riera, Zalua; Jáuregui Haza, Ulises

    2016-01-01

    Meso-2,3-dimercaptosuccinic acid (DMSA) is used in nuclear medicine as a ligand for the preparation of radiopharmaceuticals for diagnosis and therapy. DMSA has been the subject of numerous investigations during the past three decades, and new and significant information on the chemistry and pharmacology of DMSA complexes has emerged. In comparison to other ligands, the structure of some DMSA complexes remains unclear to this day. The structures and applications of DMSA complexes are strictly dependent on the chemical conditions of their preparation, especially the pH and the ratio of components. A computational study of M-DMSA (M = Tc, Re) complexes has been performed using density functional theory. Different isomers of the M(V) and M(III) complexes were studied. The influence of pH on the ligand structures was taken into account, and the solvent effect was evaluated using an implicit solvation model. The fully optimized syn-endo Re(V)-DMSA complex shows a geometry similar to the X-ray data and was used to validate the methodology. Moreover, new alternative structures for the renal agent 99mTc(III)-DMSA were proposed and studied computationally. For two complex structures, a greater stability than that of the structure proposed in the literature was obtained. Furthermore, Tc(V)-DMSA complexes are more stable than the proposed Tc(III)-DMSA structures. In general, Re complexes are more stable than the corresponding Tc ones. (author)

  17. Distinguishing humans from computers in the game of go: A complex network approach

    Science.gov (United States)

    Coquidé, C.; Georgeot, B.; Giraud, O.

    2017-08-01

    We compare complex networks built from the game of go and obtained from databases of human-played games with those obtained from computer-played games. Our investigations show that statistical features of the human-based networks and the computer-based networks differ, and that these differences can be statistically significant on a relatively small number of games using specific estimators. We show that the deterministic or stochastic nature of the computer algorithm playing the game can also be distinguished from these quantities. This can be seen as a tool to implement a Turing-like test for go simulators.

  18. Development of an Evaluation Method for the Design Complexity of Computer-Based Displays

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyoung Ju; Lee, Seung Woo; Kang, Hyun Gook; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    The importance of the design of human-machine interfaces (HMIs) for human performance and for the safety of process industries has been recognized for many decades. Especially in the case of nuclear power plants (NPPs), HMIs have significant implications for safety because poor HMIs can impair the decision-making ability of human operators. In order to support and increase the decision-making ability of human operators, advanced HMIs based on up-to-date computer technology are provided. Human operators in an advanced main control room (MCR) acquire the information required for the operation of NPPs through video display units (VDUs) and a large display panel (LDP). These computer-based displays contain a huge amount of information and present it in a variety of formats compared to those of a conventional MCR. For example, these displays contain more display elements such as abbreviations, labels, icons, symbols, coding, etc. As computer-based displays contain more information, the complexity of advanced displays becomes greater due to the reduced distinctiveness of each display element. A greater understanding is emerging about the effectiveness of designs of computer-based displays, including how distinctively display elements should be designed. This study covers the early phase in the development of an evaluation method for the design complexity of computer-based displays. To this end, a series of existing studies were reviewed to suggest an appropriate concept that is serviceable to unravel this problem

  19. Computation of 3D form factors in complex environments

    International Nuclear Information System (INIS)

    Coulon, N.

    1989-01-01

    The calculation of radiant interchange among opaque surfaces in a complex environment poses the general problem of determining the visible and hidden parts of the environment. In many thermal engineering applications, surfaces are separated by radiatively non-participating media and may be idealized as diffuse emitters and reflectors. Consequently, the net radiant energy fluxes are intimately related to purely geometrical quantities called form factors, which take hidden parts into account: the problem is reduced to form factor evaluation. This paper presents the method developed for the computation of 3D form factors in the finite-element module of the system TRIO, which is a general computer code for thermal and fluid flow analysis. The method is derived from an algorithm devised for synthetic image generation. A comparison is performed with the standard contour integration method, also implemented and suited to convex geometries. Several illustrative examples of finite-element thermal calculations in radiating enclosures are given
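
    The quantity being computed is the double-area integral F_12 = (1/A_1) ∫∫ cosθ_1 cosθ_2 / (π r²) dA_2 dA_1. A minimal Monte Carlo sketch for two parallel, directly facing unit squares (an unoccluded textbook case, not the TRIO algorithm):

        import numpy as np

        def form_factor_mc(d: float = 1.0, n: int = 200_000) -> float:
            """Monte Carlo form factor between two parallel unit squares
            separated by distance d, with no occlusion."""
            rng = np.random.default_rng(3)
            p1 = rng.uniform(0, 1, size=(n, 2))   # sample points on surface 1
            p2 = rng.uniform(0, 1, size=(n, 2))   # sample points on surface 2
            dx, dy = p2[:, 0] - p1[:, 0], p2[:, 1] - p1[:, 1]
            r2 = dx**2 + dy**2 + d**2             # squared point-to-point distance
            # For parallel facing surfaces, cos(theta1) = cos(theta2) = d / r.
            integrand = (d * d / r2) / (np.pi * r2)
            return integrand.mean()                # times A2 (= 1), averaged over A1

        print(form_factor_mc())   # analytic value for d = 1 is about 0.1998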

  20. Towards electromechanical computation: An alternative approach to realize complex logic circuits

    KAUST Repository

    Hafiz, Md Abdullah Al; Kosuru, Lakshmoji; Younis, Mohammad I.

    2016-01-01

    Electromechanical computing based on micro/nano resonators has recently attracted significant attention. However, full implementation of this technology has been hindered by the difficulty in realizing complex logic circuits. We report here an alternative approach to realize complex logic circuits based on multiple MEMS resonators. As case studies, we report the construction of a single-bit binary comparator, a single-bit 4-to-2 encoder, and parallel XOR/XNOR and AND/NOT logic gates. Toward this, several microresonators are electrically connected and their resonance frequencies are tuned through an electrothermal modulation scheme. The microresonators operating in the linear regime do not require large excitation forces, and work at room temperature and at modest air pressure. This study demonstrates that by reconfiguring the same basic building block, tunable resonator, several essential complex logic functions can be achieved.

  2. Computational Redox Potential Predictions: Applications to Inorganic and Organic Aqueous Complexes, and Complexes Adsorbed to Mineral Surfaces

    Directory of Open Access Journals (Sweden)

    Krishnamoorthy Arumugam

    2014-04-01

    Applications of redox processes range over a number of scientific fields. This review article summarizes the theory behind the calculation of redox potentials in solution for species such as organic compounds, inorganic complexes, actinides, battery materials, and mineral surface-bound species. Different computational approaches to predict and determine the redox potentials of electron transitions are discussed, along with their respective pros and cons. Subsequently, recommendations are made for certain computational settings required for the accurate calculation of redox potentials. This article reviews the importance of computational parameters, such as basis sets, density functional theory (DFT) functionals, and relativistic approaches, and the role that physicochemical processes, such as hydration or spin-orbit coupling, play in the shift of redox potentials, and it will aid in finding suitable combinations of approaches for different chemical and geochemical applications. Identifying cost-effective and credible computational approaches is essential to benchmark redox potential calculations against experiments. Once a good theoretical approach is found to model the chemistry and thermodynamics of the redox and electron transfer process, this knowledge can be incorporated into models of more complex reaction mechanisms that include diffusion in the solute, surface diffusion, and dehydration, to name a few. This knowledge is important to fully understand the nature of redox processes, be it a geochemical process that dictates natural redox reactions or one that is being used for the optimization of a chemical process in industry. In addition, it will help identify materials that will be useful to design catalytic redox agents, to come up with materials to be used for batteries and photovoltaic processes, and to identify new and improved remediation strategies in environmental engineering, for example the
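
    The basic conversion underlying such predictions is E = -ΔG/(nF), referenced to an electrode scale. A minimal sketch, where the absolute SHE potential is one commonly quoted value (its exact magnitude is debated) and the input free energy is hypothetical:

        # Redox potential from a computed reduction free energy (sketch only).
        F_CONST = 96485.33    # Faraday constant, C/mol
        SHE_ABS = 4.28        # V; one commonly used absolute SHE potential (assumption)

        def redox_potential(dg_joules_per_mol: float, n_electrons: int) -> float:
            """Half-reaction potential vs SHE from the free energy of reduction."""
            e_abs = -dg_joules_per_mol / (n_electrons * F_CONST)
            return e_abs - SHE_ABS

        # Hypothetical one-electron reduction with dG = -450 kJ/mol.
        print(redox_potential(-450e3, 1))   # ~ 0.38 V vs SHE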

  3. Computer analysis of potentiometric data of complexes formation in the solution

    Science.gov (United States)

    Jastrzab, Renata; Kaczmarek, Małgorzata T.; Tylkowski, Bartosz; Odani, Akira

    2018-02-01

    The determination of equilibrium constants is an important process for many branches of chemistry. In this review we provide the reader with a discussion of computer methods that have been applied to the processing of potentiometric experimental data generated during complex formation in solution. The review describes both the general basis of the modeling tools and examples of the use of calculated stability constants.

  4. The impact of treatment complexity and computer-control delivery technology on treatment delivery errors

    International Nuclear Information System (INIS)

    Fraass, Benedick A.; Lash, Kathy L.; Matrone, Gwynne M.; Volkman, Susan K.; McShan, Daniel L.; Kessler, Marc L.; Lichter, Allen S.

    1998-01-01

    Purpose: To analyze treatment delivery errors for three-dimensional (3D) conformal therapy performed at various levels of treatment delivery automation and complexity, ranging from manual field setup to virtually complete computer-controlled treatment delivery using a computer-controlled conformal radiotherapy system (CCRS). Methods and Materials: All treatment delivery errors which occurred in our department during a 15-month period were analyzed. Approximately 34,000 treatment sessions (114,000 individual treatment segments [ports]) on four treatment machines were studied. All treatment delivery errors logged by treatment therapists or quality assurance reviews (152 in all) were analyzed. Machines 'M1' and 'M2' were operated in a standard manual setup mode, with no record and verify system (R/V). MLC machines 'M3' and 'M4' treated patients under the control of the CCRS system, which (1) downloads the treatment delivery plan from the planning system; (2) performs some (or all) of the machine set up and treatment delivery for each field; (3) monitors treatment delivery; (4) records all treatment parameters; and (5) notes exceptions to the electronically-prescribed plan. Complete external computer control is not available on M3; therefore, it uses as many CCRS features as possible, while M4 operates completely under CCRS control and performs semi-automated and automated multi-segment intensity modulated treatments. Analysis of treatment complexity was based on numbers of fields, individual segments, nonaxial and noncoplanar plans, multisegment intensity modulation, and pseudoisocentric treatments studied for a 6-month period (505 patients) concurrent with the period in which the delivery errors were obtained. Treatment delivery time was obtained from the computerized scheduling system (for manual treatments) or from CCRS system logs. Treatment therapists rotate among the machines; therefore, this analysis does not depend on fixed therapist staff on particular

  5. Stochastic equations for complex systems theoretical and computational topics

    CERN Document Server

    Bessaih, Hakima

    2015-01-01

    Mathematical analyses and computational predictions of the behavior of complex systems are needed to effectively deal with weather and climate predictions, for example, and the optimal design of technical processes. Given the random nature of such systems and the recognized relevance of randomness, the equations used to describe such systems usually need to involve stochastics.  The basic goal of this book is to introduce the mathematics and application of stochastic equations used for the modeling of complex systems. A first focus is on the introduction to different topics in mathematical analysis. A second focus is on the application of mathematical tools to the analysis of stochastic equations. A third focus is on the development and application of stochastic methods to simulate turbulent flows as seen in reality.  This book is primarily oriented towards mathematics and engineering PhD students, young and experienced researchers, and professionals working in the area of stochastic differential equations ...

  6. Probabilistic data integration and computational complexity

    Science.gov (United States)

    Hansen, T. M.; Cordua, K. S.; Mosegaard, K.

    2016-12-01

    Inverse problems in Earth Sciences typically refer to the problem of inferring information about properties of the Earth from observations of geophysical data (the result of nature's solution to the 'forward' problem). This problem can be formulated more generally as a problem of 'integration of information'. A probabilistic formulation of data integration is in principle simple: if all available information (from e.g. geology, geophysics, remote sensing, chemistry...) can be quantified probabilistically, then different algorithms exist that allow solving the data integration problem either through an analytical description of the combined probability function, or by sampling the probability function. In practice, however, probabilistic data integration may not be easy to apply successfully. This may be related to the use of sampling methods, which are known to be computationally costly. But another source of computational complexity is related to how the individual types of information are quantified. In one case, a data integration problem is demonstrated where the goal is to determine the existence of buried channels in Denmark based on multiple sources of geo-information. One type of information being too informative (and hence conflicting) leads to a difficult sampling problem with unrealistic uncertainty. Resolving this conflict prior to data integration leads to an easy data integration problem with no biases. In another case it is demonstrated how imperfections in the description of the geophysical forward model (related to solving the wave equation) can lead to a difficult data integration problem with severe bias in the results. If the modeling error is accounted for, the data integration problem becomes relatively easy, with no apparent biases. Both examples demonstrate that biased information can have a dramatic effect on the computational efficiency of solving a data integration problem and lead to biased results, and under

  7. Development and Testing of a High Capacity Plasma Chemical Reactor in the Ukraine

    Energy Technology Data Exchange (ETDEWEB)

    Reilly, Raymond W.

    2012-07-30

    This project, Development and Testing of a High Capacity Plasma Chemical Reactor in the Ukraine, was established at the Kharkiv Institute of Physics and Technology (KIPT). The associated CRADA was established with Campbell Applied Physics (CAP), located in El Dorado Hills, California. This project extends an earlier project involving both CAP and KIPT conducted under a separate CRADA. The initial project developed the basic Plasma Chemical Reactor (PCR) for the generation of ozone gas. This project built upon the technology developed in the first project, greatly enhancing the output of the PCR while also improving reliability and system control.

  8. Simple boron removal from seawater by using polyols as complexing agents: A computational mechanistic study

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min-Kyung; Eom, Ki Heon; Lim, Jun-Heok; Lee, Jea-Keun; Lee, Ju Dong; Won, Yong Sun [Pukyong National University, Busan (Korea, Republic of)

    2015-11-15

    The complexation of boric acid (B(OH)3), the primary form of aqueous boron at moderate pH, with polyols is proposed and mechanistically studied as an efficient way to improve membrane processes such as reverse osmosis (RO) for removing boron from seawater by increasing the size of aqueous boron compounds. Computational chemistry based on density functional theory (DFT) was used to elucidate the reaction pathways of the complexation of B(OH)3 with various polyols such as glycerol, xylitol, and mannitol. The reaction energies were calculated as −80.6, −98.1, and −87.2 kcal/mol for glycerol, xylitol, and mannitol, respectively, indicating that xylitol is the most thermodynamically favorable for complexation with B(OH)3. Moreover, a 1:2 molar ratio of B(OH)3 to polyol was found to be more favorable for the complexation than a 1:1 ratio. Meanwhile, recent lab-scale RO experiments successfully supported our computational prediction that 2 moles of xylitol are the most effective complexing agent for 1 mole of B(OH)3 in aqueous solution.

  9. [The P300-based brain-computer interface: presentation of the complex "flash + movement" stimuli].

    Science.gov (United States)

    Ganin, I P; Kaplan, A Ia

    2014-01-01

    The P300-based brain-computer interface (BCI) requires the detection of the P300 wave of brain event-related potentials. Most of its users learn BCI control in several minutes, and after a short classifier training they can type text on the computer screen or assemble an image from separate fragments in simple BCI-based video games. Nevertheless, insufficient attractiveness for users and the conservative stimulus organization of this BCI may restrict its integration into real information control processes. At the same time, the initial movement of an object (motion-onset stimulus) may be an independent factor that induces the P300 wave. In the current work we tested the hypothesis that complex "flash + movement" stimuli, together with a drastic and compact stimulus organization on the computer screen, may be much more attractive for the user operating the P300 BCI. In a study of 20 subjects we showed the effectiveness of our interface. Both accuracy and P300 amplitude were higher for the flashing stimuli and the complex "flash + movement" stimuli than for the motion-onset stimuli. The N200 amplitude was maximal for the flashing stimuli, while for the "flash + movement" and motion-onset stimuli it was only half as large. Similar BCIs with complex stimuli may be embedded into compact control systems that require a high level of user attention under negative external effects obstructing BCI control.

  10. A complex network approach to cloud computing

    International Nuclear Information System (INIS)

    Travieso, Gonzalo; Ruggiero, Carlos Antônio; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2016-01-01

    Cloud computing has become an important means of speeding up computing. One problem heavily influencing the performance of such systems is the choice of nodes as servers responsible for executing the clients' tasks. In this article we report how complex networks can be used to model such a problem. More specifically, we investigate the performance of processing in cloud systems underlaid by Erdős-Rényi (ER) and Barabási-Albert (BA) topologies containing two servers. Cloud networks involving two communities, not necessarily of the same size, are also considered in our analysis. The performance of each configuration is quantified in terms of the cost of communication between the client and the nearest server, and the balance of the distribution of tasks between the two servers. Regarding the latter, the ER topology provides better performance than the BA for smaller average degrees, and the opposite behaviour for larger average degrees. With respect to cost, smaller values are found in the BA topology irrespective of the average degree. In addition, we also verified that it is easier to find good servers in ER than in BA networks. Surprisingly, balance and cost are not much affected by the presence of communities. However, for a well-defined community network, we found that it is important to assign each server to a different community so as to achieve better performance.
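
    A minimal sketch of the kind of experiment described, comparing the mean client-to-nearest-server cost on ER and BA graphs with two servers; server placement and parameters here are naive, hypothetical choices:

        import networkx as nx

        def mean_cost_to_nearest_server(g, servers):
            # Shortest-path distance from every node to its nearest server.
            nearest = {}
            for s in servers:
                for node, d in nx.single_source_shortest_path_length(g, s).items():
                    nearest[node] = min(d, nearest.get(node, d))
            # Clients unreachable from both servers (possible in sparse ER
            # graphs) are simply ignored in this sketch.
            costs = [d for node, d in nearest.items() if node not in servers]
            return sum(costs) / len(costs)

        n, avg_deg = 500, 6
        er = nx.gnm_random_graph(n, n * avg_deg // 2, seed=4)
        ba = nx.barabasi_albert_graph(n, avg_deg // 2, seed=4)
        servers = [0, 1]   # two servers, placed naively at the first two nodes
        print("ER mean cost:", mean_cost_to_nearest_server(er, servers))
        print("BA mean cost:", mean_cost_to_nearest_server(ba, servers))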

  11. Assessing the impact of large-scale computing on the size and complexity of first-principles electromagnetic models

    International Nuclear Information System (INIS)

    Miller, E.K.

    1990-01-01

    There is a growing need to determine the electromagnetic performance of increasingly complex systems at ever higher frequencies. The ideal approach would be some appropriate combination of measurement, analysis, and computation so that system design and assessment can be achieved to a needed degree of accuracy at an acceptable cost. Both measurement and computation benefit from the continuing growth in computer power which, since the early 1950s, has increased by a factor of more than a million in speed and storage. For example, a CRAY2 has an effective throughput (not the clock rate) of about 10^11 floating-point operations (FLOPs) per hour, compared with the approximately 10^5 provided by the UNIVAC-1. The purpose of this discussion is to illustrate the computational complexity of modeling large (in wavelengths) electromagnetic problems. In particular, the author makes the point that simply relying on faster computers to increase the size and complexity of problems that can be modeled is less effective than might be anticipated from this raw increase in computer throughput. He suggests that rather than depending on faster computers alone, various analytical and numerical alternatives need development for reducing the overall FLOP count required to acquire the information desired. One approach is to decrease the operation count of the basic model computation itself, by reducing the order of the frequency dependence of the various numerical operations or their multiplying coefficients. Another is to decrease the number of model evaluations that are needed, an example being the number of frequency samples required to define a wideband response, by using an auxiliary model of the expected behavior. 11 refs., 5 figs., 2 tabs

  12. Sustaining Economic Exploitation of Complex Ecosystems in Computational Models of Coupled Human-Natural Networks

    OpenAIRE

    Martinez, Neo D.; Tonin, Perrine; Bauer, Barbara; Rael, Rosalyn C.; Singh, Rahul; Yoon, Sangyuk; Yoon, Ilmi; Dunne, Jennifer A.

    2012-01-01

    Understanding ecological complexity has stymied scientists for decades. Recent elucidation of the famously coined "devious strategies for stability in enduring natural systems" has opened up a new field of computational analyses of complex ecological networks where the nonlinear dynamics of many interacting species can be more realistically modeled and understood. Here, we describe the first extension of this field to include coupled human-natural systems. This extension elucidates new strat...

  13. The current state of development works for manufacturing and methods of controlling the nuclear fuel for NPPs of Ukraine

    International Nuclear Information System (INIS)

    Odeychuk, N.P.; Levenets, V.V.; Krasnorutsky, V.S.

    2000-01-01

    The paper presents the results of NSC KIPT research on manufacturing fuel microspheres and pellets based on uranium dioxide. Data on fuel characteristics at different manufacturing stages are given. The problems of improving fuel quality by changing the structural characteristics of the pellets are considered. The hardware for pellet inspection is demonstrated, and new ways of developing methods for nuclear fuel control are presented: X-ray fluorescence analysis; a complex of nuclear-physical methods based on accelerators; and a laser-excitation energy-mass-spectrometer. (author)

  15. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    Science.gov (United States)

    Laghari, Samreen; Niazi, Muaz A

    2016-01-01

    Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a Complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: Primarily CABC-based modeling approach such as using Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT. Secondly, the specific problem of managing the Carbon footprint can be solved using a multiagent system approach.

  16. A computational approach to achieve situational awareness from limited observations of a complex system

    Science.gov (United States)

    Sherwin, Jason

    human activities. Nevertheless, since it is not constrained by computational details, the study of situational awareness provides a unique opportunity to approach complex tasks of operation from an analytical perspective. In other words, with SA, we get to see how humans observe, recognize and react to complex systems on which they exert some control. Reconciling this perspective on complexity with complex systems research, it might be possible to further our understanding of complex phenomena if we can probe the anatomical mechanisms by which we, as humans, do it naturally. At this unique intersection of two disciplines, a hybrid approach is needed. So in this work, we propose just such an approach. In particular, this research proposes a computational approach to the situational awareness (SA) of complex systems. Here we propose to implement certain aspects of situational awareness via a biologically-inspired machine-learning technique called Hierarchical Temporal Memory (HTM). In doing so, we will use either simulated or actual data to create and to test computational implementations of situational awareness. This will be tested in two example contexts, one being more complex than the other. The ultimate goal of this research is to demonstrate a possible approach to analyzing and understanding complex systems. By using HTM and carefully developing techniques to analyze the SA formed from data, it is believed that this goal can be obtained.

  17. Control system by the technological electron Linac KUT-20

    CERN Document Server

    Akchurin, Y I; Gurin, V A; Demidov, N V

    2001-01-01

    The high-power technological electron linac KUT-20 was developed at the Science Research Complex 'Accelerator' of NSC KIPT. The linac consists of two 1.2-m-long accelerating structures with a variable geometry and an injector. The latter comprises a diode electron gun, a klystron-type buncher, and an accelerating cavity. With an RF supply power of 11 MW at the accelerating structure entries and a current of 1 A at the accelerator exit, the beam energy will be up to 20 MeV. An average beam power of 20 kW is planned. All systems of the accelerator are controlled by a computerized control system. The software and hardware complex consists of a PC-based control console equipped with fast ADCs, a synchronization unit, and microprocessor-operated complexes.

  19. In silico Interrogation of Insect Central Complex Suggests Computational Roles for the Ellipsoid Body in Spatial Navigation.

    Science.gov (United States)

    Fiore, Vincenzo G; Kottler, Benjamin; Gu, Xiaosi; Hirth, Frank

    2017-01-01

    The central complex in the insect brain is a composite of midline neuropils involved in processing sensory cues and mediating behavioral outputs to orchestrate spatial navigation. Despite recent advances, however, the neural mechanisms underlying sensory integration and motor action selections have remained largely elusive. In particular, it is not yet understood how the central complex exploits sensory inputs to realize motor functions associated with spatial navigation. Here we report an in silico interrogation of central complex-mediated spatial navigation with a special emphasis on the ellipsoid body. Based on known connectivity and function, we developed a computational model to test how the local connectome of the central complex can mediate sensorimotor integration to guide different forms of behavioral outputs. Our simulations show integration of multiple sensory sources can be effectively performed in the ellipsoid body. This processed information is used to trigger continuous sequences of action selections resulting in self-motion, obstacle avoidance and the navigation of simulated environments of varying complexity. The motor responses to perceived sensory stimuli can be stored in the neural structure of the central complex to simulate navigation relying on a collective of guidance cues, akin to sensory-driven innate or habitual behaviors. By comparing behaviors under different conditions of accessible sources of input information, we show the simulated insect computes visual inputs and body posture to estimate its position in space. Finally, we tested whether the local connectome of the central complex might also allow the flexibility required to recall an intentional behavioral sequence, among different courses of actions. Our simulations suggest that the central complex can encode combined representations of motor and spatial information to pursue a goal and thus successfully guide orientation behavior. Together, the observed computational features

  20. COMPLEX OF NUMERICAL MODELS FOR COMPUTATION OF AIR ION CONCENTRATION IN PREMISES

    Directory of Open Access Journals (Sweden)

    M. M. Biliaiev

    2016-04-01

    Purpose. The article addresses the creation of a complex of numerical models for calculating ion concentration fields in premises of various purposes and in work areas. The developed complex should take into account the main physical factors influencing the formation of the ion concentration field: the aerodynamics of the air jets in the room, the presence of furniture and equipment, the placement of ventilation openings, the ventilation mode, the location of ionization sources, the transfer of ions under the effect of the electric field, and other factors determining the intensity and shape of the ion concentration field. In addition, the complex of numerical models has to support express calculation of the ion concentration in premises, allowing quick sorting of possible variants and enabling an 'enlarged' evaluation of the air ion concentration. Methodology. A complex of numerical models for calculating the air ion regime in premises was developed. The CFD numerical model is based on the equations of aerodynamics, electrostatics, and mass transfer, and takes into account the air flows caused by the ventilation, diffusion, electric field effects, and the interaction of ions of different polarities with each other and with dust particles. The proposed balance model for computing the air ion regime indoors allows rapid calculation of the ion concentration field, taking the pulsed operation of the ionizer into account. Findings. Calculated data are presented from which one can estimate the ion concentration anywhere in a room with artificial air ionization. An example of calculating the negative ion concentration on the basis of the CFD numerical model in premises undergoing reengineering transformations is given. On the basis of the developed balance model, the air ion concentration in the room volume was calculated. Originality. Results of the air ion regime computation in premise, which
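
    A balance model of this type reduces, in its simplest form, to a rate equation such as dn/dt = q - alpha*n^2 - (lambda_vent + beta*Z)*n for a well-mixed ion concentration n. A minimal sketch with hypothetical coefficients (not the paper's model or constants):

        # Well-mixed indoor ion balance, integrated with explicit Euler steps:
        #   dn/dt = q - alpha*n**2 - (lam_vent + beta*Z)*n
        # q: ionizer source rate (ions/cm^3/s), alpha: ion-ion recombination,
        # beta*Z: attachment to dust particles, lam_vent: ventilation removal.
        q, alpha = 1.0e7, 1.6e-6          # hypothetical values
        beta, Z = 2.0e-6, 1.0e4           # hypothetical values
        lam_vent = 0.5 / 3600             # 0.5 air changes per hour

        n, dt = 0.0, 0.1                  # ions/cm^3, time step in seconds
        for _ in range(6000):             # ~10 minutes of simulated time
            dn = q - alpha * n * n - (lam_vent + beta * Z) * n
            n += dt * dn
        print("steady-state ions/cm^3 ~", n)   # ~2.5e6 for these numbers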

  1. Charge transfer complex between 2,3-diaminopyridine with chloranilic acid. Synthesis, characterization and DFT, TD-DFT computational studies

    Science.gov (United States)

    Al-Ahmary, Khairia M.; Habeeb, Moustafa M.; Al-Obidan, Areej H.

    2018-05-01

    A new charge transfer complex (CTC) between the electron donor 2,3-diaminopyridine (DAP) and the electron acceptor chloranilic acid (CLA) has been synthesized and characterized experimentally and theoretically using a variety of physicochemical techniques. The experimental work included elemental analysis and UV-vis, IR, and 1H NMR studies to characterize the complex. Electronic spectra were recorded in different hydrogen-bonded solvents: methanol (MeOH), acetonitrile (AN), and a 1:1 AN-MeOH mixture. The molecular composition of the complex was identified as 1:1 by Job's method and the molar ratio method. The stability constant was determined using the minimum-maximum absorbances method, where it showed high values confirming the high stability of the formed complex. The solid complex was prepared and characterized by elemental analysis, which confirmed its formation in a 1:1 stoichiometric ratio. Both the IR and NMR studies confirmed the existence of proton and charge transfer in the formed complex. To support the experimental results, DFT computations were carried out using the B3LYP/6-31G(d,p) method to obtain the optimized structures of the reactants and the complex, their geometrical parameters, reactivity parameters, molecular electrostatic potential maps, and frontier molecular orbitals. The analysis of the DFT results strongly confirmed the high stability of the formed complex, based on charge transfer alongside proton-transfer hydrogen bonding, in agreement with the experimental results. The origin of the electronic spectra was analyzed using the TD-DFT method, where the observed λmax values are strongly consistent with the computed ones. TD-DFT also showed the states contributing to the various electronic transitions.
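
    Job's method of continuous variation, used above to establish the 1:1 composition, locates the stoichiometry from where the complex absorbance peaks as the donor mole fraction is varied at fixed total concentration. A minimal sketch with hypothetical binding constant and molar absorptivity:

        import numpy as np

        # Job plot for a 1:1 complex D + A <-> DA: absorbance peaks at x = 0.5.
        x = np.linspace(0.05, 0.95, 19)    # donor mole fraction, C_D + C_A fixed
        c_total = 1e-3                      # mol/L (hypothetical)
        K, eps_complex = 5e4, 1.2e3        # hypothetical constants

        def complex_conc(cd, ca, K):
            # Solve K = c / ((cd - c)(ca - c)) for the complex concentration c.
            b = cd + ca + 1.0 / K
            return (b - np.sqrt(b * b - 4 * cd * ca)) / 2

        absorbance = eps_complex * complex_conc(x * c_total, (1 - x) * c_total, K)
        print("peak at mole fraction:", x[np.argmax(absorbance)])   # -> 0.5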

  2. Study of application technology of ultra-high speed computer to the elucidation of complex phenomena

    International Nuclear Information System (INIS)

    Sekiguchi, Tomotsugu

    1996-01-01

    The basic design of a numerical information library for a decentralized computer network is explained as the first step in constructing application technology that applies ultra-high-speed computers to the elucidation of complex phenomena. Establishing the system makes it possible to construct an efficient application environment for ultra-high-speed computer systems that is scalable across different computing systems. We named the system Ninf (Network Information Library for High Performance Computing). The library's application technology comprises the following: library use in a distributed environment, numeric constants, retrieval of values, a library of special functions, a computing library, the Ninf library interface, the Ninf remote library, and registration. With this system, users can run programs that concentrate the library's numerical-analysis expertise, with high precision, reliability and speed. (S.Y.)

  3. Computing handbook computer science and software engineering

    CERN Document Server

    Gonzalez, Teofilo; Tucker, Allen

    2014-01-01

    Overview of Computer Science: Structure and Organization of Computing (Peter J. Denning); Computational Thinking (Valerie Barr). Algorithms and Complexity: Data Structures (Mark Weiss); Basic Techniques for Design and Analysis of Algorithms (Edward Reingold); Graph and Network Algorithms (Samir Khuller and Balaji Raghavachari); Computational Geometry (Marc van Kreveld); Complexity Theory (Eric Allender, Michael Loui, and Kenneth Regan); Formal Models and Computability (Tao Jiang, Ming Li, and Bala

  4. Computational complexity of algorithms for sequence comparison, short-read assembly and genome alignment.

    Science.gov (United States)

    Baichoo, Shakuntala; Ouzounis, Christos A

    A multitude of algorithms for sequence comparison, short-read assembly and whole-genome alignment have been developed in the general context of molecular biology, to support technology development for high-throughput sequencing, numerous applications in genome biology and fundamental research on comparative genomics. The computational complexity of these algorithms has been reported in the original research papers, yet this often-neglected property has not previously been reviewed in a systematic manner for a wider audience. We provide a review of the space and time complexity of key sequence analysis algorithms and highlight their properties in a comprehensive manner, in order to identify potential opportunities for further research in algorithm or data structure optimization. The complexity aspect is poised to become pivotal as we face challenges from the continuous increase of genomic data on unprecedented scales and complexity in the foreseeable future, when robust biological simulation at the cell level and above becomes a reality. Copyright © 2017 Elsevier B.V. All rights reserved.
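
    The space/time trade-offs surveyed here can be made concrete with the most classical sequence-comparison kernel: Levenshtein edit distance in O(nm) time but only linear space. A minimal illustrative sketch, not taken from the paper:

```python
# Edit distance with two DP rows instead of the full matrix:
# O(len(a)*len(b)) time, O(min(len(a), len(b))) space.
def edit_distance(a: str, b: str) -> int:
    if len(a) < len(b):
        a, b = b, a                       # make b the shorter string (row width)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

assert edit_distance("GATTACA", "GCATGCU") == 4
```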

  5. Computational Cellular Dynamics Based on the Chemical Master Equation: A Challenge for Understanding Complexity.

    Science.gov (United States)

    Liang, Jie; Qian, Hong

    2010-01-01

    Modern molecular biology has always been a great source of inspiration for computational science. Half a century ago, the challenge of understanding macromolecular dynamics led the way for computations to become part of the tool set for studying molecular biology. Twenty-five years ago, the demand from genome science inspired an entire generation of computer scientists with an interest in discrete mathematics to join the field that is now called bioinformatics. In this paper, we shall lay out a new mathematical theory for the dynamics of biochemical reaction systems in a small volume (i.e., mesoscopic) in terms of a stochastic, discrete-state, continuous-time formulation called the chemical master equation (CME). Similar to the wavefunction in quantum mechanics, the dynamically changing probability landscape associated with the state space provides a fundamental characterization of the biochemical reaction system. The stochastic trajectories of the dynamics are best known through simulations using the Gillespie algorithm. In contrast to the Metropolis algorithm, this Monte Carlo sampling technique does not follow a process with detailed balance. We shall show several examples of how CMEs are used to model cellular biochemical systems. We shall also illustrate the computational challenges involved: multiscale phenomena, the interplay between stochasticity and nonlinearity, and how macroscopic determinism arises from mesoscopic dynamics. We point out recent advances in computing solutions to the CME, including exact solution of the steady-state landscape and stochastic differential equations that offer alternatives to the Gillespie algorithm. We argue that the CME is an ideal system from which one can learn to understand "complex behavior" and complexity theory, and from which important biological insight can be gained.
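
    A minimal sketch of the Gillespie algorithm mentioned above, for an illustrative birth-death system (production at rate k1, degradation at rate k2·n) whose CME has a Poisson steady state; the parameter values are arbitrary:

```python
# Gillespie stochastic simulation of a birth-death process.
# The copy number fluctuates around the CME steady-state mean k1/k2.
import random

def gillespie(k1=10.0, k2=1.0, n0=0, t_end=50.0):
    t, n, path = 0.0, n0, [(0.0, n0)]
    while t < t_end:
        a1, a2 = k1, k2 * n                 # reaction propensities
        a0 = a1 + a2
        t += random.expovariate(a0)         # exponential waiting time
        n += 1 if random.random() < a1 / a0 else -1   # which reaction fired
        path.append((t, n))
    return path

path = gillespie()
print("final copy number:", path[-1][1])    # fluctuates around k1/k2 = 10
```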

  6. Computational model of dose response for low-LET-induced complex chromosomal aberrations

    International Nuclear Information System (INIS)

    Eidelman, Y.A.; Andreev, S.G.

    2015-01-01

    Experiments with full-colour mFISH chromosome painting have revealed high yields of radiation-induced complex chromosomal aberrations (CAs). The ratio of complex to simple aberrations depends on cell type and linear energy transfer. Theoretical analysis has demonstrated that the mechanism of CA formation as a result of interaction between lesions at the surface of chromosome territories does not explain the high complexes-to-simples ratio in human lymphocytes. The possible origin of the high yields of γ-induced complex CAs was investigated in the present work by computer simulation. CAs were studied on the basis of chromosome structure and dynamics modelling and the hypothesis of CA formation on nuclear centres. The spatial organisation of all chromosomes in a human interphase nucleus was predicted by simulation of the mitosis-to-interphase chromosome structure transition. Two scenarios of CA formation were analysed: 'static' centres (existing in a nucleus prior to irradiation) and 'dynamic' centres (formed in response to irradiation). The modelling results reveal that under certain conditions, both scenarios explain quantitatively the dose-response relationships for both simple and complex γ-induced inter-chromosomal exchanges observed by mFISH chromosome painting in the first post-irradiation mitosis in human lymphocytes. (authors)

  7. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    Science.gov (United States)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with methods of very high order in space and time on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (order >15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.

  8. High performance parallel computing of flows in complex geometries: I. Methods

    International Nuclear Information System (INIS)

    Gourdain, N; Gicquel, L; Montagnac, M; Vermorel, O; Staffelbach, G; Garcia, M; Boussuge, J-F; Gazaix, M; Poinsot, T

    2009-01-01

    Efficient numerical tools coupled with high-performance computers have become a key element of the design process in the fields of energy supply and transportation. However, flow phenomena that occur in complex systems such as gas turbines and aircraft are still not well understood, mainly because of the models that are needed. In fact, most computational fluid dynamics (CFD) predictions as found today in industry focus on a reduced or simplified version of the real system (such as a periodic sector) and are usually solved with a steady-state assumption. This paper shows how to overcome such barriers and how such a new challenge can be addressed by developing flow solvers running on high-end computing platforms, using thousands of computing cores. Parallel strategies used by modern flow solvers are discussed with particular emphasis on mesh-partitioning, load balancing and communication. Two examples are used to illustrate these concepts: a multi-block structured code and an unstructured code. Parallel computing strategies used with both flow solvers are detailed and compared. This comparison indicates that mesh-partitioning and load balancing are more straightforward with unstructured grids than with multi-block structured meshes. However, the mesh-partitioning stage can be challenging for unstructured grids, mainly due to memory limitations of the newly developed massively parallel architectures. Finally, detailed investigations show that the impact of mesh-partitioning on the numerical CFD solutions, due to rounding errors and block splitting, may be of importance and should be accurately addressed before qualifying massively parallel CFD tools for routine industrial use.
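
    The mesh-partitioning trade-off described above can be illustrated with a toy bisection of an unstructured mesh's cell-adjacency graph; production solvers typically rely on dedicated libraries such as METIS, so this greedy BFS sketch is only illustrative. The cut edges correspond to the faces that would require inter-process communication:

```python
# Toy graph bisection: grow one part by breadth-first search until half
# the cells are claimed; the remaining cells form the other part.
from collections import deque

def bfs_bisect(adjacency):
    """adjacency: dict cell -> list of neighbor cells. Returns (part0, part1)."""
    target = len(adjacency) // 2
    start = next(iter(adjacency))
    part0, queue = {start}, deque([start])
    while queue and len(part0) < target:
        for nb in adjacency[queue.popleft()]:
            if nb not in part0 and len(part0) < target:
                part0.add(nb)
                queue.append(nb)
    return part0, set(adjacency) - part0

# A 4x2 structured grid flattened into an unstructured adjacency list.
adj = {0: [1, 4], 1: [0, 2, 5], 2: [1, 3, 6], 3: [2, 7],
       4: [0, 5], 5: [4, 1, 6], 6: [5, 2, 7], 7: [6, 3]}
p0, p1 = bfs_bisect(adj)
cut = sum(1 for u in p0 for v in adj[u] if v in p1)   # communication faces
print(p0, p1, "cut edges:", cut)
```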

  9. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    Science.gov (United States)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue, and control systems that operate in networks are especially affected by it. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating problem solving. Advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automated multi-agent control of the systems in parallel mode at various degrees of detail.

  10. Environmental Factors Affecting Computer Assisted Language Learning Success: A Complex Dynamic Systems Conceptual Model

    Science.gov (United States)

    Marek, Michael W.; Wu, Wen-Chi Vivian

    2014-01-01

    This conceptual, interdisciplinary inquiry explores Complex Dynamic Systems as the concept relates to the internal and external environmental factors affecting computer assisted language learning (CALL). Based on the results obtained by de Rosnay ["World Futures: The Journal of General Evolution", 67(4/5), 304-315 (2011)], who observed…

  11. Cooperative efforts to improve nuclear materials accounting, control and physical protection at the National Science Center, Kharkov Institute of Physics and Technology

    International Nuclear Information System (INIS)

    Zelensky, V.F.; Mikhailov, V.A.

    1996-01-01

    The US Department of Energy (DOE) and the Ukrainian Government are engaged in a program of cooperation to enhance the nonproliferation of nuclear weapons by developing a strong national system of nuclear material protection, control, and accounting (MPC and A). This paper describes the capabilities and work of the Kharkov Institute of Physics and Technology (KIPT) and cooperative efforts to improve MPC and A at this facility. It describes how these cooperative efforts grew out of Ukraine's decision to become a non-nuclear weapon state and the shortcomings in MPC and A that developed at KIPT after the disintegration of the former Soviet Union. It also envisions expanded future cooperation in other areas of nuclear materials management

  12. New computational paradigms changing conceptions of what is computable

    CERN Document Server

    Cooper, SB; Sorbi, Andrea

    2007-01-01

    This superb exposition of a complex subject examines new developments in the theory and practice of computation from a mathematical perspective. It covers topics ranging from classical computability to complexity, from biocomputing to quantum computing.

  13. A unified approach to computing real and complex zeros of zero-dimensional ideals

    NARCIS (Netherlands)

    J.B. Lasserre; M. Laurent (Monique); P. Rostalski; M. Putinar; S. Sullivant

    2009-01-01

    In this paper we propose a unified methodology for computing the set $V_K(I)$ of complex ($K = C$) or real ($K = R$) roots of an ideal $I$ in $R[x]$, assuming $V_K(I)$ is finite. We show how moment matrices, defined in terms of a given set of generators of the ideal I, can be used to
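
    Although the abstract is truncated here, the eigenvalue connection it builds on can be shown in the simplest univariate special case: for I = (p) with K = C, the points of V_C(I) are exactly the eigenvalues of the companion matrix of p. A minimal sketch of that special case (the paper's moment-matrix machinery generalizes this to multivariate ideals):

```python
# Complex roots of a monic univariate polynomial via its companion matrix.
import numpy as np

def roots_via_companion(coeffs):
    """Roots of p(x) = x^n + c[0] x^(n-1) + ... + c[n-1]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)                  # ones on the subdiagonal
    C[:, -1] = -np.asarray(coeffs)[::-1]        # last column from coefficients
    return np.linalg.eigvals(C)

print(roots_via_companion([0.0, -1.0]))         # x^2 - 1 -> roots +1, -1
```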

  14. Multiscale modeling of complex materials phenomenological, theoretical and computational aspects

    CERN Document Server

    Trovalusci, Patrizia

    2014-01-01

    The papers in this volume deal with materials science, theoretical mechanics, and experimental and computational techniques at multiple scales, providing a sound base and a framework for many applications which are hitherto treated in a phenomenological sense. The basic principles of multiscale modeling strategies are formulated for modern complex multiphase materials subjected to various types of mechanical and thermal loading and environmental effects. The focus is on problems where mechanics is highly coupled with other concurrent physical phenomena. Attention is also given to the historical origins of multiscale modeling and to the foundations of continuum mechanics currently adopted to model non-classical continua with substructure, for which internal length scales play a crucial role.

  15. Stop: a fast procedure for the exact computation of the performance of complex probabilistic systems

    International Nuclear Information System (INIS)

    Corynen, G.C.

    1982-01-01

    A new set-theoretic method for the exact and efficient computation of the probabilistic performance of complex systems has been developed. The core of the method is a fast algorithm for disjointing a collection of product sets which is intended for systems with more than 1000 components and 100,000 cut sets. The method is based on a divide-and-conquer approach, in which a multidimensional problem is progressively decomposed into lower-dimensional subproblems along its dimensions. The method also uses a particular pointer system that eliminates the need to store the subproblems by only requiring the storage of pointers to those problems. Examples of the algorithm and the divide-and-conquer strategy are provided, and comparisons with other significant methods are made. Statistical complexity studies show that the expected time and space complexity of other methods is O(m e^n), but that our method is O(n m^3 log(m)). Problems which would require days of Cray-1 computer time with present methods can now be solved in seconds. Large-scale systems that can only be approximated with other techniques can now also be evaluated exactly
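
    For contrast, the naive exact alternative that such disjointing algorithms are designed to displace is inclusion-exclusion over the cut sets, which costs O(2^m) for m cut sets. A minimal sketch with hypothetical component data:

```python
# Exact top-event probability from minimal cut sets by inclusion-exclusion.
# Exponential in the number of cut sets, so only practical for tiny systems;
# this is precisely the blow-up the disjointing method above avoids.
from itertools import combinations

def top_event_prob(cut_sets, p):
    """cut_sets: list of sets of component ids; p: dict id -> failure prob."""
    total = 0.0
    for r in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, r):
            union = set().union(*combo)
            term = 1.0
            for c in union:
                term *= p[c]
            total += (-1) ** (r + 1) * term
    return total

# Two-out-of-three example: cut sets {1,2}, {1,3}, {2,3}, p = 0.1 each.
p = {1: 0.1, 2: 0.1, 3: 0.1}
print(top_event_prob([{1, 2}, {1, 3}, {2, 3}], p))   # 0.028 exactly
```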

  16. Lectures on a theory of computation and complexity over the reals (or an arbitrary ring)

    International Nuclear Information System (INIS)

    Blum, L.

    1990-01-01

    These lectures will discuss a new theory of computation and complexity which attempts to integrate key ideas from the classical theory in a setting more amenable to problems defined over continuous domains. The approach taken here is both algebraic and concrete; the underlying space is an arbitrary ring (or field) and the basic operations are polynomial (or rational) maps and tests. This approach yields results in the continuous setting analogous to the pivotal classical results of undecidability and NP-completeness over the integers, yet reflecting the special mathematical character of the underlying space. The goal of these lectures is to highlight key aspects of the new theory as well as to give exposition, in this setting, of classical ideas and results. Indeed, since this new theory is more mathematical, perhaps less dependent on logic than the classical theory, a number of key results have more straightforward and transparent proofs in this setting. One of our themes will be the comparison of results over the integers with results over the reals and complex numbers. Contrasting one theory with the other will help illuminate each, and give deeper understanding to such basic concepts as decidability, definability, computability, and complexity. 53 refs

  17. Phase transition and computational complexity in a stochastic prime number generator

    Energy Technology Data Exchange (ETDEWEB)

    Lacasa, L; Luque, B [Departamento de Matematica Aplicada y EstadIstica, ETSI Aeronauticos, Universidad Politecnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain); Miramontes, O [Departamento de Sistemas Complejos, Instituto de FIsica, Universidad Nacional Autonoma de Mexico, Mexico 01415 DF (Mexico)], E-mail: lucas@dmae.upm.es

    2008-02-15

    We introduce a prime number generator in the form of a stochastic algorithm. The character of this algorithm gives rise to a continuous phase transition which distinguishes a phase where the algorithm is able to reduce the whole system of numbers into primes and a phase where the system reaches a frozen state with low prime density. In this paper, we first present a broader characterization of this phase transition, both in analytical and numerical terms. Critical exponents are calculated, and data collapse is provided. Further on, we redefine the model as a search problem, placing it within the framework of computational complexity theory. We suggest that the system belongs to the class NP. The computational cost is maximal around the threshold, as is common in many algorithmic phase transitions, revealing the presence of an easy-hard-easy pattern. We finally relate the nature of the phase transition to an average-case classification of the problem.
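
    A toy realization of a division-driven stochastic reduction of this general flavor (the paper's exact update rules may differ; the rule and parameters below are assumptions for illustration only):

```python
# Toy stochastic reduction: repeatedly pick a pair from the pool and, when
# one entry properly divides the other, replace the larger by the quotient.
# The pool drifts toward primes; how far it gets depends on size and range.
import random

def stochastic_reduce(pool, steps=100_000):
    pool = list(pool)
    for _ in range(steps):
        a, b = random.sample(range(len(pool)), 2)   # two distinct slots
        x, y = pool[a], pool[b]
        if y % x == 0 and 1 < x < y:
            pool[b] = y // x                        # divide out the factor
    return pool

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

pool = stochastic_reduce(random.choices(range(2, 1000), k=200))
print("prime density:", sum(map(is_prime, pool)) / len(pool))
```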

  18. Computation of resonances by two methods involving the use of complex coordinates

    International Nuclear Information System (INIS)

    Bylicki, M.; Nicolaides, C.A.

    1993-01-01

    We have studied two different systems producing resonances, a highly excited multielectron Coulombic negative ion (the He⁻ 2s2p² ⁴P state) and a hydrogen atom in a magnetic field, via the complex-coordinate rotation (CCR) and the state-specific complex-eigenvalue Schroedinger equation (CESE) approaches. For the He⁻ 2s2p² ⁴P resonance, a series of large CCR calculations, up to 353 basis functions with explicit r_ij dependence, were carried out to serve as benchmarks. For the magnetic-field problem, the CCR results were taken from the literature. Comparison shows that the state-specific CESE theory allows the physics of the problem to be incorporated systematically while keeping the overall size of the computation tractable regardless of the number of electrons

  19. Experimental and Computational Evidence for the Mechanism of Intradiol Catechol Dioxygenation by Non-Heme Iron(III) Complexes

    Science.gov (United States)

    Jastrzebski, Robin; Quesne, Matthew G; Weckhuysen, Bert M; de Visser, Sam P; Bruijnincx, Pieter C A

    2014-01-01

    Catechol intradiol dioxygenation is a unique reaction catalyzed by iron-dependent enzymes and non-heme iron(III) complexes. The mechanism by which these systems activate dioxygen in this important metabolic process remains controversial. Using a combination of kinetic measurements and computational modelling of multiple iron(III) catecholato complexes, we have elucidated the catechol cleavage mechanism and show that oxygen binds the iron center by partial dissociation of the substrate from the iron complex. The iron(III) superoxide complex that is formed subsequently attacks the carbon atom of the substrate by a rate-determining C=O bond formation step. PMID:25322920

  20. Analysis of Complex Coronary Plaque in Multidetector Computed Tomography: Comparison with Conventional Coronary Angiography

    International Nuclear Information System (INIS)

    Kim, Dong Hyun; Bang, Duck Won; Cho, Yoon Haeng; Suk, Eun Ha

    2011-01-01

    To delineate complex plaque morphology in patients with stable angina using coronary computed tomographic angiography (CTA). 36 patients with complex plaques proven by conventional coronary angiography (CAG), who had undergone CTA for evaluation of typical angina, were enrolled in this study. Intravascular ultrasonography (IVUS) was performed in 14 patients (16 lesions). We compared CTA with CAG for plaque features and analyzed vascular cutoff, intraluminal filling defect in a patent vessel, irregularity of plaque, and ulceration. The density of plaque was also evaluated on CTA. CAG and CTA showed complex morphology in 44 cases (100%) and 34 cases (77%), respectively, with features including abrupt vessel cutoff (27 vs. 16%, κ=0.57), intraluminal filling defect (32 vs. 30%, κ=0.77), irregularity (75 vs. 52%, κ=0.52), and ulceration (16 vs. 11%, κ=0.60). CTA indicated that the complex lesions were hypodense (mean 66 ± 21 Hounsfield Units). CTA is a very accurate and useful non-invasive imaging modality for evaluating complex plaque in patients with typical angina.

  1. Analysis of Complex Coronary Plaque in Multidetector Computed Tomography: Comparison with Conventional Coronary Angiography

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Hyun [Dept. of Radiology, Soonchunhyang University Hospital, Bucheon (Korea, Republic of); Bang, Duck Won; Cho, Yoon Haeng [Dept. of Internal Medicine, Soonchunhyang University Hospital, Bucheon (Korea, Republic of); Suk, Eun Ha [Dept. of Anesthyesiology and Pain Medicine, Asan Medical Center, Seoul (Korea, Republic of)

    2011-04-15

    To delineate complex plaque morphology in patients with stable angina using coronary computed tomographic angiography (CTA). 36 patients with complex plaques proven by conventional coronary angiography (CAG), who had undergone CTA for evaluation of typical angina, were enrolled in this study. Intravascular ultrasonography (IVUS) was performed in 14 patients (16 lesions). We compared CTA with CAG for plaque features and analyzed vascular cutoff, intraluminal filling defect in a patent vessel, irregularity of plaque, and ulceration. The density of plaque was also evaluated on CTA. CAG and CTA showed complex morphology in 44 cases (100%) and 34 cases (77%), respectively, with features including abrupt vessel cutoff (27 vs. 16%, κ=0.57), intraluminal filling defect (32 vs. 30%, κ=0.77), irregularity (75 vs. 52%, κ=0.52), and ulceration (16 vs. 11%, κ=0.60). CTA indicated that the complex lesions were hypodense (mean 66 ± 21 Hounsfield Units). CTA is a very accurate and useful non-invasive imaging modality for evaluating complex plaque in patients with typical angina.

  2. kipt Terminal Aerodrome Forecast

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — TAF (terminal aerodrome forecast or terminal area forecast) is a format for reporting weather forecast information, particularly as it relates to aviation. TAFs are...

  3. Efficient reconfigurable hardware architecture for accurately computing success probability and data complexity of linear attacks

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Kavun, Elif Bilge; Tischhauser, Elmar

    2012-01-01

    An accurate estimation of the success probability and data complexity of linear cryptanalysis is a fundamental question in symmetric cryptography. In this paper, we propose an efficient reconfigurable hardware architecture to compute the success probability and data complexity of Matsui's Algorithm … block lengths ensures that any empirical observations are not due to differences in statistical behavior for artificially small block lengths. Rather surprisingly, we observed in previous experiments a significant deviation between the theory and practice for Matsui's Algorithm 2 for larger block sizes
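
    As a point of reference for the quantities in question, one widely used closed-form model is Selçuk's normal approximation for Matsui's Algorithm 2: with N known plaintexts, linear-approximation bias ε and an a-bit advantage, P_S = Φ(2√N·ε − Φ⁻¹(1 − 2^(−a−1))). Whether the paper uses this exact model is an assumption; a sketch:

```python
# Success probability and data complexity under Selcuk's normal model
# (an assumption here; the paper may use a refined variant).
from scipy.stats import norm

def success_probability(N: float, eps: float, a: int) -> float:
    return norm.cdf(2 * (N ** 0.5) * abs(eps) - norm.ppf(1 - 2.0 ** (-a - 1)))

def data_complexity(p_target: float, eps: float, a: int) -> float:
    """Smallest N (in the same model) achieving success probability p_target."""
    n_sqrt = (norm.ppf(p_target) + norm.ppf(1 - 2.0 ** (-a - 1))) / (2 * abs(eps))
    return n_sqrt ** 2

# Illustrative numbers only: bias 2^-21.37, 8-bit advantage, 2^43 texts.
print(success_probability(2 ** 43, 2 ** -21.37, 8))
print(data_complexity(0.95, 2 ** -21.37, 8))
```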

  4. Computer aided approach to qualitative and quantitative common cause failure analysis for complex systems

    International Nuclear Information System (INIS)

    Cate, C.L.; Wagner, D.P.; Fussell, J.B.

    1977-01-01

    Common cause failure analysis, also called common mode failure analysis, is an integral part of a complete system reliability analysis. Existing methods of computer aided common cause failure analysis are extended by allowing analysis of the complex systems often encountered in practice. The methods aid in identifying potential common cause failures and also address quantitative common cause failure analysis

  5. Field programmable gate array-assigned complex-valued computation and its limits

    Energy Technology Data Exchange (ETDEWEB)

    Bernard-Schwarz, Maria, E-mail: maria.bernardschwarz@ni.com [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria); Zwick, Wolfgang; Klier, Jochen [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Wenzel, Lothar [National Instruments, 11500 N MOPac Expy, Austin, Texas 78759 (United States); Gröschl, Martin [Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria)

    2014-09-15

    We discuss how leveraging Field Programmable Gate Array (FPGA) technology as part of a high-performance computing platform reduces latency to meet the demanding real-time constraints of a quantum optics simulation. Implementations of complex-valued operations using fixed-point numerics on a Virtex-5 FPGA compare favorably to more conventional solutions on a central processing unit. Our investigation explores the performance of multiple fixed-point options along with a traditional 64-bit floating-point version. With this information, the lowest execution times can be estimated. Relative error is examined to ensure that simulation accuracy is maintained.
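
    The core trade-off can be reproduced in a few lines: a complex multiply in Q15 fixed point (16-bit signed, 15 fractional bits), the kind of operation the paper maps to FPGA fabric, compared against floating point. An illustrative sketch, not the paper's implementation:

```python
# Q15 fixed-point complex multiplication versus floating point.
Q = 15
SCALE = 1 << Q

def to_q15(x: float) -> int:
    return max(-SCALE, min(SCALE - 1, round(x * SCALE)))

def q15_cmul(ar, ai, br, bi):
    """(ar + j*ai) * (br + j*bi), operands and results in Q15."""
    re = (ar * br - ai * bi) >> Q   # products are Q30; shift back to Q15
    im = (ar * bi + ai * br) >> Q
    return re, im

a, b = 0.5 + 0.25j, -0.75 + 0.125j
re, im = q15_cmul(to_q15(a.real), to_q15(a.imag), to_q15(b.real), to_q15(b.imag))
fixed = complex(re / SCALE, im / SCALE)
# Error is at the Q15 quantization level (zero here, since these inputs
# happen to be exactly representable).
print(fixed, a * b, abs(fixed - a * b))
```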

  6. The Study of Learners' Preference for Visual Complexity on Small Screens of Mobile Computers Using Neural Networks

    Science.gov (United States)

    Wang, Lan-Ting; Lee, Kun-Chou

    2014-01-01

    Vision plays an important role in educational technologies because it can produce and communicate quite important functions in teaching and learning. In this paper, learners' preference for visual complexity on the small screens of mobile computers is studied by neural networks. The visual complexity in this study is divided into five…

  7. Computation of infinite dilute activity coefficients of binary liquid alloys using complex formation model

    Energy Technology Data Exchange (ETDEWEB)

    Awe, O.E., E-mail: draweoe2004@yahoo.com; Oshakuade, O.M.

    2016-04-15

    A new method for calculating Infinite Dilute Activity Coefficients (γ^∞) of binary liquid alloys has been developed. This method basically computes γ^∞ from experimental thermodynamic integral free energy of mixing data using the Complex formation model. The new method was first used to theoretically compute the γ^∞ of 10 binary alloys whose γ^∞ had been determined by experiment. The significant agreement between the computed values and the available experimental values served as impetus for applying the new method to 22 selected binary liquid alloys whose γ^∞ are either nonexistent or incomplete. In order to verify the reliability of the computed γ^∞ of the 22 selected alloys, we recomputed the γ^∞ using three other existing methods of computing or estimating γ^∞ and then used the γ^∞ obtained from each of the four methods (the new method inclusive) to compute thermodynamic activities of the components of each binary system. The computed activities were compared with available experimental activities. It is observed that the results from the proposed method, in most of the selected alloys, show better agreement with experimental activity data. Thus, the new method is an alternative and, in certain instances, more reliable approach for computing the γ^∞ of binary liquid alloys.

  8. A computational approach to modeling cellular-scale blood flow in complex geometry

    Science.gov (United States)

    Balogh, Peter; Bagchi, Prosenjit

    2017-04-01

    We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.

  9. Communication complexity and information complexity

    Science.gov (United States)

    Pankratov, Denis

    Information complexity enables the use of information-theoretic tools in communication complexity theory. Prior to the results presented in this thesis, information complexity was mainly used for proving lower bounds and direct-sum theorems in the setting of communication complexity. We present three results that demonstrate new connections between information complexity and communication complexity. In the first contribution we thoroughly study the information complexity of the smallest nontrivial two-party function: the AND function. While computing the communication complexity of AND is trivial, computing its exact information complexity presents a major technical challenge. In overcoming this challenge, we reveal that information complexity gives rise to rich geometrical structures. Our analysis of information complexity relies on new analytic techniques and new characterizations of communication protocols. We also uncover a connection of information complexity to the theory of elliptic partial differential equations. Once we compute the exact information complexity of AND, we can compute the exact communication complexity of several related functions on n-bit inputs with some additional technical work. Previous combinatorial and algebraic techniques could only prove bounds of the form Θ(n). Interestingly, this level of precision is typical in the area of information theory, so our result demonstrates that this meta-property of precise bounds carries over to information complexity and in certain cases even to communication complexity. Our result not only strengthens the lower bound on the communication complexity of disjointness by making it more exact, but it also shows that information complexity provides the exact upper bound on communication complexity. In fact, this result is more general and applies to a whole class of communication problems. In the second contribution, we use self-reduction methods to prove strong lower bounds on the information

  10. [The Psychomat computer complex for psychophysiologic studies].

    Science.gov (United States)

    Matveev, E V; Nadezhdin, D S; Shemsudov, A I; Kalinin, A V

    1991-01-01

    The authors analyze the design principles of a computer-based psychophysiological system for universal use. They show the effectiveness of computer technology as a combination of the universal computation and control capabilities of a personal computer equipped with problem-oriented, specialized facilities for stimulus presentation and for detecting the test subject's reactions. They define the hardware and software configuration of the microcomputer psychophysiological system "Psychomat", describe its functional capabilities and basic medico-technical characteristics, and review organizational issues in the maintenance of its full-scale production.

  11. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Science.gov (United States)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework, for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
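
    The kernel task such a framework repeatedly schedules, sampling a posterior over model parameters, can be illustrated with the simplest possible sampler. TMCMC, as used in Π4U, improves on this random-walk Metropolis sketch for multimodal and expensive posteriors; the data here are synthetic and illustrative only:

```python
# Random-walk Metropolis for a one-parameter linear model y = theta*x + noise.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2.5 * x + rng.normal(0, 0.1, x.size)        # synthetic data, true theta = 2.5

def log_post(theta, sigma=0.1):
    # Gaussian likelihood with a flat prior.
    return -0.5 * np.sum((y - theta * x) ** 2) / sigma**2

samples, theta = [], 0.0
lp = log_post(theta)
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.2)           # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

print("posterior mean:", np.mean(samples[5000:]))   # close to 2.5
```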

  12. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    International Nuclear Information System (INIS)

    Hadjidoukas, P.E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-01-01

    We present Π4U, an extensible framework, for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow

  13. Is Model-Based Development a Favorable Approach for Complex and Safety-Critical Computer Systems on Commercial Aircraft?

    Science.gov (United States)

    Torres-Pomales, Wilfredo

    2014-01-01

    A system is safety-critical if its failure can endanger human life or cause significant damage to property or the environment. State-of-the-art computer systems on commercial aircraft are highly complex, software-intensive, functionally integrated, and network-centric systems of systems. Ensuring that such systems are safe and comply with existing safety regulations is costly and time-consuming as the level of rigor in the development process, especially the validation and verification activities, is determined by considerations of system complexity and safety criticality. A significant degree of care and deep insight into the operational principles of these systems is required to ensure adequate coverage of all design implications relevant to system safety. Model-based development methodologies, methods, tools, and techniques facilitate collaboration and enable the use of common design artifacts among groups dealing with different aspects of the development of a system. This paper examines the application of model-based development to complex and safety-critical aircraft computer systems. Benefits and detriments are identified and an overall assessment of the approach is given.

  14. Quantum Computing and the Limits of the Efficiently Computable

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I'll discuss how computational complexity---the study of what can and can't be feasibly computed---has been interacting with physics in interesting and unexpected ways. I'll first give a crash course about computer science's P vs. NP problem, as well as about the capabilities and limits of quantum computers. I'll then touch on speculative models of computation that would go even beyond quantum computers, using (for example) hypothetical nonlinearities in the Schrodinger equation. Finally, I'll discuss BosonSampling ---a proposal for a simple form of quantum computing, which nevertheless seems intractable to simulate using a classical computer---as well as the role of computational complexity in the black hole information puzzle.

  15. A programming environment for distributed complex computing. An overview of the Framework for Interdisciplinary Design Optimization (FIDO) project. NASA Langley TOPS exhibit H120b

    Science.gov (United States)

    Townsend, James C.; Weston, Robert P.; Eidson, Thomas M.

    1993-01-01

    The Framework for Interdisciplinary Design Optimization (FIDO) is a general programming environment for automating the distribution of complex computing tasks over a networked system of heterogeneous computers. For example, instead of manually passing a complex design problem between its diverse specialty disciplines, the FIDO system provides for automatic interactions between the discipline tasks and facilitates their communications. The FIDO system networks all the computers involved into a distributed heterogeneous computing system, so they have access to centralized data and can work on their parts of the total computation simultaneously in parallel whenever possible. Thus, each computational task can be done by the most appropriate computer. Results can be viewed as they are produced and variables changed manually for steering the process. The software is modular in order to ease migration to new problems: different codes can be substituted for each of the current code modules with little or no effect on the others. The potential for commercial use of FIDO rests in the capability it provides for automatically coordinating diverse computations on a networked system of workstations and computers. For example, FIDO could provide the coordination required for the design of vehicles or electronics or for modeling complex systems.

  16. Application of the Decomposition Method to the Design Complexity of Computer-based Display

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    The importance of the design of human machine interfaces (HMIs) for human performance and safety has long been recognized in process industries. In the case of nuclear power plants (NPPs), HMIs have significant implications for safety, since poor implementation of HMIs can impair the operators' information-searching ability, which is considered one of the important aspects of human behavior. To support and increase the efficiency of the operators' information-searching behavior, advanced HMIs based on computer technology are provided. Operators in an advanced main control room (MCR) acquire information through the video display units (VDUs) and large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present it in a greater variety of formats than a conventional MCR. For example, these displays contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting than conventional ones. As computer-based displays contain more information, the complexity of the elements becomes greater due to the reduced distinctiveness of each element. A greater understanding is emerging about the effectiveness of designs of computer-based displays, including how distinctively display elements should be designed. According to Gestalt theory, people tend to group similar elements based on attributes such as shape, color or pattern, following the principle of similarity. Therefore, it is necessary to consider not only the human operator's perception but also the number of elements composing a computer-based display

  17. Application of the Decomposition Method to the Design Complexity of Computer-based Display

    International Nuclear Information System (INIS)

    Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun; Park, Jin Kyun

    2012-01-01

    The importance of the design of human machine interfaces (HMIs) for human performance and safety has long been recognized in process industries. In the case of nuclear power plants (NPPs), HMIs have significant implications for safety, since poor implementation of HMIs can impair the operators' information-searching ability, which is considered one of the important aspects of human behavior. To support and increase the efficiency of the operators' information-searching behavior, advanced HMIs based on computer technology are provided. Operators in an advanced main control room (MCR) acquire information through the video display units (VDUs) and large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present it in a greater variety of formats than a conventional MCR. For example, these displays contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting than conventional ones. As computer-based displays contain more information, the complexity of the elements becomes greater due to the reduced distinctiveness of each element. A greater understanding is emerging about the effectiveness of designs of computer-based displays, including how distinctively display elements should be designed. According to Gestalt theory, people tend to group similar elements based on attributes such as shape, color or pattern, following the principle of similarity. Therefore, it is necessary to consider not only the human operator's perception but also the number of elements composing a computer-based display

  18. Complex Osteotomies of Tibial Plateau Malunions Using Computer-Assisted Planning and Patient-Specific Surgical Guides.

    Science.gov (United States)

    Fürnstahl, Philipp; Vlachopoulos, Lazaros; Schweizer, Andreas; Fucentese, Sandro F; Koch, Peter P

    2015-08-01

    The accurate reduction of tibial plateau malunions can be challenging without guidance. In this work, we report on a novel technique that combines 3-dimensional computer-assisted planning with patient-specific surgical guides for improving reliability and accuracy of complex intraarticular corrective osteotomies. Preoperative planning based on 3-dimensional bone models was performed to simulate fragment mobilization and reduction in 3 cases. Surgical implementation of the preoperative plan using patient-specific cutting and reduction guides was evaluated; benefits and limitations of the approach were identified and discussed. The preliminary results are encouraging and show that complex, intraarticular corrective osteotomies can be accurately performed with this technique. For selective patients with complex malunions around the tibia plateau, this method might be an attractive option, with the potential to facilitate achieving the most accurate correction possible.

  19. Computation: A New Open Access Journal of Computational Chemistry, Computational Biology and Computational Engineering

    OpenAIRE

    Karlheinz Schwarz; Rainer Breitling; Christian Allen

    2013-01-01

    Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems in fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized ...

  20. Development of new trends in applied nuclear physics with the use of high-energy braking radiation

    International Nuclear Information System (INIS)

    Dikij, N.P.; Dovbnya, A.N.; Uvarov, V.L.

    2003-01-01

    A review is given of investigations in nuclear medicine, atomic energetics (including the Chernobyl problem), geology, etc., carried out at NSC KIPT, mainly during the last decade, on the basis of home-made electron linacs

  1. Communication complexity of distributed computing and a parallel algorithm for polynomial roots

    International Nuclear Information System (INIS)

    Tiwari, P.

    1986-01-01

    The first part of this thesis begins with a discussion of the minimum communication requirements in some distributed networks. The main result is a general technique for determining lower bounds on the communication complexity of problems on various distributed computer networks. This general technique is derived by simulating the general network by a linear array and then using a lower bound on the communication complexity of the problem on the linear array. Applications of this technique yield nontrivial optimal or near-optimal lower bounds on the communication complexity of distinctness, ranking, uniqueness, merging, and triangle detection on a ring, a mesh, and a complete binary tree of processors. A technique similar to the one used in proving the above results yields interesting graph-theoretic results concerning the decomposition of a graph into complete bipartite subgraphs. The second part of this thesis is devoted to the design of a fast parallel algorithm for determining all roots of a polynomial. Given a polynomial ρ(z) of degree n with m-bit integer coefficients and an integer μ, the author considers the problem of determining all its roots with error less than 2^(-μ). It is shown that this problem is in the class NC if ρ(z) has all real roots
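
    One classic scheme that illustrates why all-roots computation parallelizes well is the Durand-Kerner (Weierstrass) simultaneous iteration, in which each root estimate is updated independently from the previous iterate of the others; this is not necessarily the thesis's algorithm, only a representative sketch:

```python
# Durand-Kerner iteration: all complex roots of a monic polynomial at once.
import numpy as np

def durand_kerner(coeffs, iters=100):
    """Roots of p(z) = z^n + c[n-1] z^(n-1) + ... + c[0],
    with coeffs = [c0, c1, ..., c_{n-1}]."""
    n = len(coeffs)
    p = lambda z: z**n + sum(c * z**k for k, c in enumerate(coeffs))
    z = (0.4 + 0.9j) ** np.arange(n)     # standard distinct starting points
    for _ in range(iters):
        z_new = z.copy()
        for i in range(n):               # each update independent -> parallel
            others = np.prod(z[i] - np.delete(z, i))
            z_new[i] = z[i] - p(z[i]) / others
        z = z_new
    return z

print(np.sort_complex(durand_kerner([-1.0, 0.0])))   # z^2 - 1 -> -1, +1
```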

  2. Improvement of computer complex and interface system for compact nuclear simulator

    International Nuclear Information System (INIS)

    Lee, D. Y.; Park, W. M.; Cha, K. H.; Jung, C. H.; Park, J. C.

    1999-01-01

    The CNS (Compact Nuclear Simulator) was developed at the end of the 1980s and has been used as a training simulator for KAERI staff for 10 years. The operator panel interface cards and the graphic interface cards were designed specifically for the CNS. As these interface cards wore out over 10 years of use, it became very difficult to obtain spare parts and to repair them, and the interface cards were damaged by over-current caused by lamp failures in the operator panel. To solve these problems, the project 'Improvement of Compact Nuclear Simulator' was started in 1997. This paper describes the improvement of the computer complex and interface system

  3. Experimental and Computational Evidence for the Mechanism of Intradiol Catechol Dioxygenation by Non-Heme Iron(III) Complexes

    NARCIS (Netherlands)

    Jastrzebski, Robin; Quesne, Matthew G.; Weckhuysen, Bert M.; de Visser, Sam P.; Bruijnincx, Pieter C. A.

    2014-01-01

    Catechol intradiol dioxygenation is a unique reaction catalyzed by iron-dependent enzymes and nonheme iron(III) complexes. The mechanism by which these systems activate dioxygen in this important metabolic process remains controversial. Using a combination of kinetic measurements and computational

  4. @neurIST: infrastructure for advanced disease management through integration of heterogeneous data, computing, and complex processing services.

    Science.gov (United States)

    Benkner, Siegfried; Arbona, Antonio; Berti, Guntram; Chiarini, Alessandro; Dunlop, Robert; Engelbrecht, Gerhard; Frangi, Alejandro F; Friedrich, Christoph M; Hanser, Susanne; Hasselmeyer, Peer; Hose, Rod D; Iavindrasana, Jimison; Köhler, Martin; Iacono, Luigi Lo; Lonsdale, Guy; Meyer, Rodolphe; Moore, Bob; Rajasekaran, Hariharan; Summers, Paul E; Wöhrer, Alexander; Wood, Steven

    2010-11-01

    The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data presents a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.

  5. Computation of Collision-Induced Absorption by Simple Molecular Complexes, for Astrophysical Applications

    Science.gov (United States)

    Abel, Martin; Frommhold, Lothar; Li, Xiaoping; Hunt, Katharine L. C.

    2012-06-01

    The interaction-induced absorption by collisional pairs of H₂ molecules is an important opacity source in the atmospheres of various types of planets and cool stars, such as late stars, low-mass stars, brown dwarfs, cool white dwarf stars (the embers of the smaller, burnt-out main-sequence stars), exoplanets, etc., and is therefore of special astronomical interest. The emission spectra of cool white dwarf stars differ significantly in the infrared from the expected blackbody spectra of their cores, which is largely due to absorption by collisional H₂-H₂, H₂-He, and H₂-H complexes in the stellar atmospheres. Using quantum-chemical methods we compute the atmospheric absorption from hundreds to thousands of kelvin. Laboratory measurements of interaction-induced absorption spectra by H₂ pairs exist only at room temperature and below. We show that our results reproduce these measurements closely, so that our computational data permit reliable modeling of stellar atmosphere opacities even at higher temperatures. First results for H₂-He complexes have already been applied to astrophysical models and have brought great improvements to these models. L. Frommhold, Collision-Induced Absorption in Gases, Cambridge University Press, Cambridge, New York, 1993 and 2006; X. Li, K. L. C. Hunt, F. Wang, M. Abel, and L. Frommhold, Collision-Induced Infrared Absorption by Molecular Hydrogen Pairs at Thousands of Kelvin, Int. J. of Spect., vol. 2010, Article ID 371201, 11 pages, 2010, doi: 10.1155/2010/371201; M. Abel, L. Frommhold, X. Li, and K. L. C. Hunt, Collision-induced absorption by H₂ pairs: From hundreds to thousands of Kelvin, J. Phys. Chem. A, 115, 6805-6812, 2011; L. Frommhold, M. Abel, F. Wang, M. Gustafsson, X. Li, and K. L. C. Hunt, "Infrared atmospheric emission and absorption by simple molecular complexes, from first principles", Mol. Phys. 108, 2265, 2010; M. Abel, L. Frommhold, X. Li, and K. L. C. Hunt, Infrared absorption by collisional H₂-He complexes

  6. International Symposium on Complex Computing-Networks

    CERN Document Server

    Sevgi, L; CCN2005; Complex computing networks: Brain-like and wave-oriented electrodynamic algorithms

    2006-01-01

    This book uniquely combines new advances in electromagnetic theory and circuits-and-systems theory. It integrates both fields with regard to computational aspects of common interest. Emphasized subjects are those methods which mimic brain-like and electrodynamic behaviour; among these are cellular neural networks, chaos and chaotic dynamics, attractor-based computation and stream ciphers. The book contains carefully selected contributions from the Symposium CCN2005. Pictures from the bestowal of Honorary Doctorate degrees to Leon O. Chua and Leopold B. Felsen are included.

  7. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Energy Technology Data Exchange (ETDEWEB)

    Hadjidoukas, P.E.; Angelikopoulos, P. [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland); Papadimitriou, C. [Department of Mechanical Engineering, University of Thessaly, GR-38334 Volos (Greece); Koumoutsakos, P., E-mail: petros@ethz.ch [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland)

    2015-03-01

    We present Π4U, an extensible framework, for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  8. Automated a complex computer aided design concept generated using macros programming

    Science.gov (United States)

    Rizal Ramly, Mohammad; Asrokin, Azharrudin; Abd Rahman, Safura; Zulkifly, Nurul Ain Md

    2013-12-01

    Changing a complex computer-aided design profile such as car and aircraft surfaces has always been difficult and challenging. The capability of CAD software such as AutoCAD and CATIA shows that a simple configuration of a CAD design can be easily modified without hassle, but that is not the case with a complex design configuration. Design changes help users to test and explore various configurations of the design concept before the production of a model. The purpose of this study is to look into macros programming as a parametric method for commercial aircraft design. Macros programming is a method where the configuration of the design is changed by recording a script of commands, editing the data values and adding new command lines to create an element of parametric design. The steps and the procedure to create a macro program are discussed, besides looking into some difficulties during the process of creation and the advantages of its usage. Generally, the advantages of macros programming as a method of parametric design are: allowing flexibility for design exploration; increasing the usability of the design solution; allowing proper constraints within the model while restricting others; and real-time feedback on changes.
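
    The record-edit-replay idea the abstract describes can be sketched independently of any particular CAD package. Real CATIA macros are recorded in VBScript/CATScript; the Python below only mimics the principle, and every command name and parameter is hypothetical.

```python
# A toy "recorded macro" as a list of (command, parameters) pairs. Real CAD
# macros look different; this only sketches the record -> edit data values ->
# replay loop the paper describes. All commands here are hypothetical.
recorded = [
    ("new_sketch", {"plane": "xy"}),
    ("draw_airfoil", {"chord_mm": 450.0, "thickness_ratio": 0.12}),
    ("extrude", {"span_mm": 1200.0}),
]

def replay(macro, overrides):
    """Replay a recorded macro, substituting edited parameter values."""
    for command, params in macro:
        params = {**params, **overrides.get(command, {})}
        print(f"{command}({', '.join(f'{k}={v}' for k, v in params.items())})")

# Explore a design variant by editing one recorded value instead of redrawing:
replay(recorded, {"draw_airfoil": {"chord_mm": 500.0}})
```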

  9. Automated a complex computer aided design concept generated using macros programming

    International Nuclear Information System (INIS)

    Ramly, Mohammad Rizal; Asrokin, Azharrudin; Rahman, Safura Abd; Zulkifly, Nurul Ain Md

    2013-01-01

    Changing a complex computer-aided design profile such as car and aircraft surfaces has always been difficult and challenging. The capability of CAD software such as AutoCAD and CATIA shows that a simple configuration of a CAD design can be easily modified without hassle, but that is not the case with a complex design configuration. Design changes help users to test and explore various configurations of the design concept before the production of a model. The purpose of this study is to look into macros programming as a parametric method for commercial aircraft design. Macros programming is a method where the configuration of the design is changed by recording a script of commands, editing the data values and adding new command lines to create an element of parametric design. The steps and the procedure to create a macro program are discussed, besides looking into some difficulties during the process of creation and the advantages of its usage. Generally, the advantages of macros programming as a method of parametric design are: allowing flexibility for design exploration; increasing the usability of the design solution; allowing proper constraints within the model while restricting others; and real-time feedback on changes.

  10. Computational Complexity of Combinatorial Surfaces

    NARCIS (Netherlands)

    Vegter, Gert; Yap, Chee K.

    1990-01-01

    We investigate the computational problems associated with combinatorial surfaces. Specifically, we present an algorithm (based on the Brahana-Dehn-Heegaard approach) for transforming the polygonal schema of a closed triangulated surface into its canonical form in O(n log n) time, where n is the ...
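
    As a concrete companion to the algorithmic result: the classification underlying canonical forms rests on the Euler characteristic, which is trivial to compute from a combinatorial surface's element counts. A minimal sketch of these standard facts (not the paper's algorithm):

```python
def classify_closed_surface(vertices, edges, faces, orientable=True):
    """Classify a closed combinatorial surface from its counts via the Euler
    characteristic chi = V - E + F (the invariant behind canonical forms)."""
    chi = vertices - edges + faces
    if orientable:
        genus = (2 - chi) // 2            # chi = 2 - 2g for orientable surfaces
        return f"orientable, genus {genus}"
    return f"non-orientable, {2 - chi} cross-caps"   # chi = 2 - k

# A tetrahedron triangulates the sphere: V=4, E=6, F=4 -> chi = 2, genus 0.
print(classify_closed_surface(4, 6, 4))
```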

  11. The Computational Complexity, Parallel Scalability, and Performance of Atmospheric Data Assimilation Algorithms

    Science.gov (United States)

    Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)

    2001-01-01

    The computational complexity of algorithms for Four Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular, accurate maps of wind, temperature, moisture and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables; therefore the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing efficient (in terms of wall-clock time) and scalable parallel implementations of the algorithms.
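
    The scaling claims are easy to reproduce with a back-of-envelope calculation. A sketch, assuming a dense covariance matrix, 8-byte words and the usual cubic matrix-product cost:

```python
def kalman_covariance_cost(n_state, word_bytes=8):
    """Back-of-envelope storage and flop counts that make a full Kalman
    filter infeasible for atmospheric assimilation (n ~ 1e6 variables)."""
    words = n_state ** 2                 # dense forecast-error covariance P
    flops = 4 * n_state ** 3             # two dense n x n products for M P M^T
    return words, words * word_bytes / 1e12, flops

words, terabytes, flops = kalman_covariance_cost(10**6)
print(f"P holds {words:.0e} words (~{terabytes:.0f} TB); "
      f"one covariance propagation costs ~{flops:.0e} flops")
```

    For n = 10^6 this gives 10^12 words (in excess of a teraword, as the abstract states) and ~4e18 flops per propagation step, i.e. petaflop/s-scale hardware for timely cycling.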

  12. Probing the structure of complex solids using a distributed computing approach-Applications in zeolite science

    International Nuclear Information System (INIS)

    French, Samuel A.; Coates, Rosie; Lewis, Dewi W.; Catlow, C. Richard A.

    2011-01-01

    We demonstrate the viability of distributed computing techniques employing idle desktop computers in investigating complex structural problems in solids. Through the use of a combined Monte Carlo and energy minimisation method, we show how a large parameter space can be effectively scanned. By controlling the generation and running of different configurations through a database engine, we are able not only to analyse the data 'on the fly' but also to direct the running of jobs and the algorithms for generating further structures. As an exemplar case, we probe the distribution of Al and extra-framework cations in the structure of the zeolite Mordenite. We compare our computed unit cells with experiment and find that whilst there is excellent correlation between computed and experimentally derived unit cell volumes, cation positioning and short-range Al ordering (i.e. near-neighbour environment), there remains some discrepancy in the distribution of Al throughout the framework. We also show that stability-structure correlations only become apparent once a sufficiently large sample is used. - Graphical Abstract: Aluminium distributions in zeolites are determined using e-science methods. Highlights: → Use of e-science methods to search configuration space. → Automated control of space searching. → Identification of key structural features conveying stability. → Improved correlation of computed structures with experimental data.

  13. Computational studies of a paramagnetic planar dibenzotetraaza[14]annulene Ni(II) complex.

    Science.gov (United States)

    Rabaâ, Hassan; Khaledi, Hamid; Olmstead, Marilyn M; Sundholm, Dage

    2015-05-28

    A square-planar Ni(II) dibenzotetraaza[14]annulene complex substituted with two 3,3-dimethylindolenine groups in the meso positions has recently been synthesized and characterized experimentally. In the solid state, the Ni(II) complex forms linear π-interacting stacks with Ni···Ni separations of 3.448(2) Å. Measurements of the temperature dependence of the magnetic susceptibility revealed a drastic change in the magnetic properties at a temperature of 13 K, indicating a transition from low-to-high spin states. The molecular structures of the free-base ligand, the lowest singlet, and triplet states of the monomer and the dimer of the Ni complex have been studied computationally using density functional theory (DFT) and ab initio correlation levels of theory. In calculations at the second-order Møller-Plesset (MP2) perturbation theory level, a large energy of 260 kcal mol^-1 was obtained for the singlet-triplet splitting, suggesting that an alternative explanation of the observed magnetic properties is needed. The large energy splitting between the singlet and triplet states suggests that the observed change in the magnetism at very low temperatures is due to spin-orbit coupling effects originating from weak interactions between the fine-structure states of the Ni cations in the complex. The lowest electronic excitation energies of the dibenzotetraaza[14]annulene Ni(II) complex calculated at the time-dependent density functional theory (TDDFT) levels are in good agreement with values deduced from the experimental UV-vis spectrum. Calculations at the second-order algebraic-diagrammatic construction (ADC(2)) level on the dimer of the meso-substituted 3,3-dimethylindolenine dibenzotetraaza[14]annulene Ni(II) complex yielded Stokes shifts of 85-100 nm for the lowest excited singlet states. Calculations of the strength of the magnetically induced ring current for the free-base 3,3-dimethylindolenine-substituted dibenzotetraaza[14]annulene show that the annulene

  14. Numerical sensitivity computation for discontinuous gradient-only optimization problems using the complex-step method

    CSIR Research Space (South Africa)

    Wilke, DN

    2012-07-01

    Full Text Available ... problems that utilise remeshing (i.e. the mesh topology is allowed to change) between design updates. Here, changes in mesh topology result in abrupt changes in the discretization error of the computed response. These abrupt changes in turn manifest as discontinuities ... in shape optimization, but may be present whenever (partial) differential equations are approximated numerically with non-constant discretization methods, e.g. remeshing of spatial domains or automatic time stepping in temporal domains. Keywords: Complex...
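
    The complex-step method itself is a one-liner, and it shows why the technique is attractive for sensitivities: unlike finite differences there is no subtractive cancellation, so the step size can be made absurdly small. A standard illustration (the test function is the classic Squire-Trapp example, not taken from this paper):

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """First derivative via the complex-step method: f'(x) ~ Im f(x + ih) / h.
    No subtraction of nearly equal numbers, so h can be tiny without
    destroying accuracy (f must be analytic and accept complex input)."""
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)  # classic test case
print(complex_step_derivative(f, 1.5))   # ~4.0534278938986, accurate to machine precision
```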

  15. Turing’s algorithmic lens: From computability to complexity theory

    Directory of Open Access Journals (Sweden)

    Díaz, Josep

    2013-12-01

    Full Text Available The decidability question, i.e., whether any mathematical statement could be computationally proven true or false, was raised by Hilbert and remained open until Turing answered it in the negative. Then, most efforts in theoretical computer science turned to complexity theory and the need to classify decidable problems according to their difficulty. Among others, the classes P (problems solvable in polynomial time) and NP (problems solvable in non-deterministic polynomial time) were defined, and one of the most challenging scientific quests of our days arose: whether P = NP. This still open question has implications not only in computer science, mathematics and physics, but also in biology, sociology and economics, and it can be seen as a direct consequence of Turing's way of looking through the algorithmic lens at different disciplines to discover how pervasive computation is. [Spanish abstract, translated:] The decidability question, that is, whether it is possible to prove computationally that a mathematical expression is true or false, was posed by Hilbert and remained open until Turing answered it in the negative. With the undecidability of mathematics established, efforts in theoretical computer science centred on the study of the computational complexity of decidable problems. In this article we present a brief introduction to the classes P (problems solvable in polynomial time) and NP (problems solvable non-deterministically in polynomial time), and we discuss the difficulty of establishing whether P = NP, together with the consequences that would follow if the two classes were equal. This question has implications not only in the fields of computer science, mathematics and physics, but also for biology, sociology and economics. The seminal idea of the study of computational complexity is a direct consequence of the way Turing approached problems in different domains through the
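
    The asymmetry at the heart of P vs NP, that verifying a certificate is cheap while finding one appears to require exhaustive search, can be made concrete with a toy 3-SAT instance (illustrative only):

```python
from itertools import product

# A tiny 3-SAT instance: each clause lists (variable_index, negated?) literals.
clauses = [[(0, False), (1, True), (2, False)],
           [(0, True), (1, False), (2, True)],
           [(1, False), (2, True), (0, False)]]

def satisfies(assignment, clauses):
    """Verification is polynomial: check every clause against a certificate."""
    return all(any(assignment[v] != neg for v, neg in c) for c in clauses)

# Search, by contrast, enumerates 2^n assignments in the worst case:
n = 3
solutions = [a for a in product([False, True], repeat=n) if satisfies(a, clauses)]
print(f"{len(solutions)} of {2**n} assignments satisfy the formula")
```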

  16. Designing Computer-Supported Complex Systems Curricula for the Next Generation Science Standards in High School Science Classrooms

    Directory of Open Access Journals (Sweden)

    Susan A. Yoon

    2016-12-01

    Full Text Available We present a curriculum and instruction framework for computer-supported teaching and learning about complex systems in high school science classrooms. This work responds to a need in K-12 science education research and practice for the articulation of design features for classroom instruction that can address the Next Generation Science Standards (NGSS recently launched in the USA. We outline the features of the framework, including curricular relevance, cognitively rich pedagogies, computational tools for teaching and learning, and the development of content expertise, and provide examples of how the framework is translated into practice. We follow this up with evidence from a preliminary study conducted with 10 teachers and 361 students, aimed at understanding the extent to which students learned from the activities. Results demonstrated gains in students’ complex systems understanding and biology content knowledge. In interviews, students identified influences of various aspects of the curriculum and instruction framework on their learning.

  17. Computational Complexity of Bosons in Linear Networks

    Science.gov (United States)

    2017-03-01

    ... is between one and two orders of magnitude more efficient than current heralded multiphoton sources based on spontaneous parametric downconversion ... expected to perform tasks intractable for a classical computer, yet requiring minimal non-classical resources as compared to full-scale quantum computers ... implementations to date employed sources based on inefficient processes (spontaneous parametric downconversion) that only simulate heralded single...
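
    The classical hardness behind boson sampling is that the output amplitudes of a linear network are matrix permanents, a #P-hard quantity; even the best exact classical method, Ryser's formula, costs O(2^n n). A sketch of that standard result (not from the report):

```python
import numpy as np
from itertools import combinations

def permanent_ryser(A):
    """Matrix permanent via Ryser's inclusion-exclusion formula, O(2^n * n).
    Boson-sampling output probabilities are |permanent|^2 of submatrices of
    the interferometer's unitary, which is why classical simulation is hard."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** k * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

# Permanent of the all-ones 3x3 matrix is 3! = 6.
print(permanent_ryser(np.ones((3, 3))))
```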

  18. On the Computational Complexity of the Languages of General Symbolic Dynamical Systems and Beta-Shifts

    DEFF Research Database (Denmark)

    Simonsen, Jakob Grue

    2009-01-01

    We consider the computational complexity of languages of symbolic dynamical systems. In particular, we study complexity hierarchies and membership of the non-uniform class P/poly. We prove: 1. For every time-constructible, non-decreasing function t(n) = ω(n), there is a symbolic dynamical system with language decidable in deterministic time O(n^2 t(n)), but not in deterministic time o(t(n)). 2. For every space-constructible, non-decreasing function s(n) = ω(n), there is a symbolic dynamical system with language decidable in deterministic space O(s(n)), but not in deterministic space o(s(n)). 3. There are symbolic dynamical systems having hard and complete languages under ≤_m^logs- and ≤_m^p-reduction for every complexity class above LOGSPACE in the backbone hierarchy (hence, P-complete, NP-complete, coNP-complete, PSPACE-complete, and EXPTIME-complete sets). 4. There are decidable languages of symbolic...

  19. Computer simulations of discharges from a lignite power plant complex

    International Nuclear Information System (INIS)

    Koukouliou, V.; Horyna, J.; Perez-Sanchez, D.

    2008-01-01

    This paper describes work carried out within the IAEA EMRAS program NORM working group to test the predictions of three computer models against measured radionuclide concentrations resulting from discharges from a lignite power plant complex. This complex consists of two power plants with a total of five discharge stacks, situated approximately 2-5 kilometres from a city of approximately 10,000 inhabitants. Monthly measurements of mean wind speed and direction, dust loading, and 238U activities in fallout samples, as well as mean annual values of 232Th activity at the nearest city sampling sites, were available for the study. The models used in the study were PC-CREAM (a detailed impact assessment model), and COMPLY and CROM (screening models). In applying the models to this scenario it was noted that the meteorological data provided were not ideal for testing, and that a number of assumptions had to be made, particularly for the simpler models. However, taking the gaps and uncertainties in the data into account, the model predictions from PC-CREAM were generally in good agreement with the measured data, and the results from the different models were also generally consistent with each other. However, the COMPLY predictions were generally lower than those from PC-CREAM. This is of concern, as the aim of a screening model (COMPLY) is to provide conservative estimates of contaminant concentrations. Further investigation of this problem is required. The general implications of the results for further model development are discussed. (author)

  20. Complexity-aware high efficiency video coding

    CERN Document Server

    Correa, Guilherme; Agostini, Luciano; Cruz, Luis A da Silva

    2016-01-01

    This book discusses the computational complexity of High Efficiency Video Coding (HEVC) encoders, with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the compression efficiency and computational complexity of the HEVC encoding tools. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that emplo...

  1. The pervasive reach of resource-bounded Kolmogorov complexity in computational complexity theory

    Czech Academy of Sciences Publication Activity Database

    Allender, E.; Koucký, Michal; Ronneburger, D.; Roy, S.

    2011-01-01

    Vol. 77, No. 1 (2011), pp. 14-40. ISSN 0022-0000. R&D Projects: GA ČR GAP202/10/0854; GA MŠk(CZ) 1M0545; GA AV ČR IAA100190902. Institutional research plan: CEZ:AV0Z10190503. Keywords: Circuit complexity * Distinguishing complexity * FewEXP * Formula size * Kolmogorov complexity. Subject RIV: BA - General Mathematics. Impact factor: 1.157, year: 2011. http://www.sciencedirect.com/science/article/pii/S0022000010000887

  2. The Modeling and Complexity of Dynamical Systems by Means of Computation and Information Theories

    Directory of Open Access Journals (Sweden)

    Robert Logozar

    2011-12-01

    Full Text Available We present the modeling of dynamical systems and the determination of their complexity indicators by means of concepts from computation and information theories, within the framework of J. P. Crutchfield's theory of ε-machines. A short formal outline of the ε-machines is given. In this approach, dynamical systems are analyzed directly from the time series that is received from a properly adjusted measuring instrument. The binary strings are parsed through a parse tree, within which morphologically and probabilistically unique subtrees, or morphs, are recognized as system states. The precise interrelation of the information-theoretic entropies and complexities emanating from the model is given. The paper also serves as a theoretical foundation for the future presentation of the DSA program that implements ε-machine modeling up to the stochastic finite automaton level.
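
    The parse-tree step can be caricatured in a few lines: collect the empirical next-symbol distribution of every length-D history and group histories whose distributions agree. This is only a crude stand-in for ε-machine reconstruction; the depth, tolerance and bucketing scheme are assumptions.

```python
from collections import Counter, defaultdict

def morphs(series, depth=3, tol=0.05):
    """Group length-`depth` histories of a binary series by their empirical
    next-symbol distribution, a crude stand-in for parse-tree morphs."""
    counts = defaultdict(Counter)
    for i in range(len(series) - depth):
        counts[series[i:i + depth]][series[i + depth]] += 1
    states = {}
    for hist, c in counts.items():
        p1 = round(c["1"] / sum(c.values()) / tol) * tol   # bucket P(next=1)
        states.setdefault(p1, []).append(hist)
    return states

# For a period-2 signal only two depth-3 histories ever occur, and each is
# its own (deterministic) causal state.
print(morphs("01" * 200))
```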

  3. Design of Computer Experiments

    DEFF Research Database (Denmark)

    Dehlendorff, Christian

    The main topic of this thesis is the design and analysis of computer and simulation experiments, dealt with in six papers and a summary report. Simulation and computer models have in recent years received increasing attention due to their growing complexity and usability. Software packages make the development of rather complicated computer models from predefined building blocks possible. This implies that the range of phenomena that are analyzed by means of a computer model has expanded significantly. As the complexity grows, so does the need for efficient experimental designs and analysis methods, since complex computer models are often expensive to use in terms of computer time. The choice of performance parameter is an important part of the analysis of computer and simulation models, and Paper A introduces a new statistic for waiting times in health care units. The statistic...

  4. Pentacoordinated organoaluminum complexes: A computational insight

    KAUST Repository

    Milione, Stefano

    2012-12-24

    The geometry and the electronic structure of a series of organometallic pentacoordinated aluminum complexes bearing tri- or tetradentate N,O-based ligands have been investigated with theoretical methods. The BP86, B3LYP, and M06 functionals reproduce with low accuracy the geometry of the selected complexes. The worst result was obtained for the complex bearing a Schiff base ligand with a pendant donor arm, aeimpAlMe2 (aeimp = N-2-(dimethylamino)ethyl-(3,5-di-tert-butyl)salicylaldimine). In particular, the Al-Namine bond distance was unacceptably overestimated. This failure suggests a reasonably flat potential energy surface with respect to Al-N elongation, indicating a weak interaction with probably a strong component of dispersion forces. MP2 and M06-2X methods led to an acceptable value for the same Al-N distance. Better results were obtained with the addition of the dispersion correction to the hybrid B3LYP functional (B3LYP-D). Natural bond orbital analysis revealed that the contribution of the d orbital to the bonding is very small, in agreement with several previous studies of hypervalent molecules. The donation of electronic charge from the ligand to the metal mainly consists in the interactions of the lone pairs on the donor atoms of the ligands with the s and p valence orbitals of the aluminum. The covalent bonding of the Al with the coordinated ligand is weak, and the interactions between Al and the coordinated ligands are largely ionic. To further explore the geometrical and electronic factors affecting the formation of these pentacoordinated aluminum complexes, we considered the tetracoordinated complex impAlMe2 (imp = N-isopropyl-(3,5-di-tert-butyl)salicylaldimine), analogous to aeimpAlMe2, and we investigated the potential energy surface around the aluminum atom corresponding to the approach of NMe3 to the metal center. At the MP2/6-31G(d) level of theory, a weak attraction was revealed only when NMe3 heads toward the metal center through the directions

  5. Pentacoordinated organoaluminum complexes: A computational insight

    KAUST Repository

    Milione, Stefano; Milano, Giuseppe; Cavallo, Luigi

    2012-01-01

    The geometry and the electronic structure of a series of organometallic pentacoordinated aluminum complexes bearing tri- or tetradentate N,O-based ligands have been investigated with theoretical methods. The BP86, B3LYP, and M06 functionals reproduce with low accuracy the geometry of the selected complexes. The worst result was obtained for the complex bearing a Schiff base ligand with a pendant donor arm, aeimpAlMe2 (aeimp = N-2-(dimethylamino)ethyl-(3,5-di-tert-butyl)salicylaldimine). In particular, the Al-Namine bond distance was unacceptably overestimated. This failure suggests a reasonably flat potential energy surface with respect to Al-N elongation, indicating a weak interaction with probably a strong component of dispersion forces. MP2 and M06-2X methods led to an acceptable value for the same Al-N distance. Better results were obtained with the addition of the dispersion correction to the hybrid B3LYP functional (B3LYP-D). Natural bond orbital analysis revealed that the contribution of the d orbital to the bonding is very small, in agreement with several previous studies of hypervalent molecules. The donation of electronic charge from the ligand to the metal mainly consists in the interactions of the lone pairs on the donor atoms of the ligands with the s and p valence orbitals of the aluminum. The covalent bonding of the Al with the coordinated ligand is weak, and the interactions between Al and the coordinated ligands are largely ionic. To further explore the geometrical and electronic factors affecting the formation of these pentacoordinated aluminum complexes, we considered the tetracoordinated complex impAlMe2 (imp = N-isopropyl-(3,5-di-tert-butyl)salicylaldimine), analogous to aeimpAlMe2, and we investigated the potential energy surface around the aluminum atom corresponding to the approach of NMe3 to the metal center. At the MP2/6-31G(d) level of theory, a weak attraction was revealed only when NMe3 heads toward the metal center through the directions

  6. Groebner bases for finite-temperature quantum computing and their complexity

    International Nuclear Information System (INIS)

    Crompton, P. R.

    2011-01-01

    Following the recent approach of using order domains to construct Groebner bases from general projective varieties, we examine the parity and time-reversal arguments relating to the Wightman axioms of quantum field theory and propose that the definition of associativity in these axioms should be introduced a posteriori to the cluster property in order to generalize the anyon conjecture for quantum computing to indefinite metrics. We then show that this modification, which we define via ideal quotients, does not admit a faithful representation of the Braid group, because the generalized twisted inner automorphisms that we use to reintroduce associativity are only parity invariant for the prime spectra of the exterior algebra. We then use a coordinate prescription for the quantum deformations of toric varieties to show how a faithful representation of the Braid group can be reconstructed and argue that for a degree reverse lexicographic (monomial) ordered Groebner basis, the complexity class of this problem is bounded quantum polynomial.

  7. Development of computer program ENAUDIBL for computation of the sensation levels of multiple, complex, intrusive sounds in the presence of residual environmental masking noise

    Energy Technology Data Exchange (ETDEWEB)

    Liebich, R. E.; Chang, Y.-S.; Chun, K. C.

    2000-03-31

    The relative audibility of multiple sounds occurs in separate, independent channels (frequency bands) termed critical bands or equivalent rectangular (filter-response) bandwidths (ERBs) of frequency. The true nature of human hearing is a function of a complex combination of subjective factors, both auditory and nonauditory. Assessment of the probability of individual annoyance, community-complaint reaction levels, speech intelligibility, and the most cost-effective mitigation actions requires sensation-level data; these data are one of the most important auditory factors. However, sensation levels cannot be calculated by using single-number, A-weighted sound level values. This paper describes specific steps to compute sensation levels. A unique, newly developed procedure is used, which simplifies and improves the accuracy of such computations by the use of maximum sensation levels that occur, for each intrusive-sound spectrum, within each ERB. The newly developed program ENAUDIBL makes use of ERB sensation-level values generated with some computational subroutines developed for the formerly documented program SPECTRAN.
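
    A minimal sketch of the two ingredients: an ERB estimate and a per-band sensation level. The bandwidth formula is the standard Glasberg-Moore expression, which may or may not be the exact auditory model ENAUDIBL uses; the level arithmetic is schematic.

```python
def erb_hz(f_hz):
    """Equivalent rectangular bandwidth at centre frequency f, using the
    standard Glasberg & Moore (1990) expression (a stand-in here)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def sensation_levels(signal_band_db, masked_threshold_db):
    """Per-band sensation level: intrusive-sound level in each ERB minus the
    masked threshold set by residual environmental noise in that ERB."""
    return [s - m for s, m in zip(signal_band_db, masked_threshold_db)]

print(f"ERB at 1 kHz: {erb_hz(1000):.1f} Hz")          # ~132.6 Hz
print(sensation_levels([52.0, 47.0], [40.0, 49.0]))    # positive => audible
```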

  8. Composition of complex numbers: Delineating the computational role of the left anterior temporal lobe.

    Science.gov (United States)

    Blanco-Elorrieta, Esti; Pylkkänen, Liina

    2016-01-01

    What is the neurobiological basis of our ability to create complex messages with language? Results from multiple methodologies have converged on a set of brain regions as relevant for this general process, but the computational details of these areas remain to be characterized. The left anterior temporal lobe (LATL) has been a consistent node within this network, with results suggesting that although it rather systematically shows increased activation for semantically complex structured stimuli, this effect does not extend to number phrases such as 'three books.' In the present work we used magnetoencephalography to investigate whether numbers in general are an invalid input to the combinatory operations housed in the LATL or whether the lack of LATL engagement for stimuli such as 'three books' is due to the quantificational nature of such phrases. As a relevant test case, we employed complex number terms such as 'twenty-three', where one number term is not a quantifier of the other but rather, the two terms form a type of complex concept. In a number naming paradigm, participants viewed rows of numbers and depending on task instruction, named them as complex number terms ('twenty-three'), numerical quantifications ('two threes'), adjectival modifications ('blue threes') or non-combinatory lists (e.g., 'two, three'). While quantificational phrases failed to engage the LATL as compared to non-combinatory controls, both complex number terms and adjectival modifications elicited a reliable activity increase in the LATL. Our results show that while the LATL does not participate in the enumeration of tokens within a set, exemplified by the quantificational phrases, it does support conceptual combination, including the composition of complex number concepts. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Statistical and computer analysis for the solvent effect on the elctronis adsorption spectra of monoethanolamine complexes

    International Nuclear Information System (INIS)

    Masoud, M.S.; Motaweh, H.A.; Ali, A.E.

    1999-01-01

    Full text: The electronic absorption spectra of the octahedral complexes containing monoethanolamine were recorded in different solvents (dioxane, chloroform, ethanol, dimethylformamide, dimethylsulfoxide and water). The data were analyzed with a multiple linear regression technique using the equation y = a + Σ_i b_i x_i, where a is the regression intercept and the x_i are various empirical solvent polarity parameters; the constants were calculated using a micro-statistics program on a PC computer. Relative to Nujol, the solvents shift the spectral bands of the complexes to the red. In the case of the Mn(MEA)Cl complex, numerous bands appear in the presence of CHCl3, DMF and DMSO solvents, probably due to the numerous oxidation states. The solvent parameters E (solvent-solute hydrogen bond and dipolar interaction), a second parameter (dipolar interaction related to the dielectric constant), M (solute permanent dipole-solvent induced dipole) and N (solute permanent dipole-solvent permanent dipole) are correlated with the structure of the complexes. In hydrogen-bonding solvents, as the dielectric constant increases, a blue shift occurs due to conjugation with high stability; the data in DMF and DMSO solvents are nearly the same, probably due to their similarity.
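
    The regression itself is ordinary least squares with an intercept. A sketch with hypothetical numbers (the actual band positions and the empirical solvent parameters are in the paper, not here; only two parameters are shown for brevity):

```python
import numpy as np

# Hypothetical data: band maxima (kK) of one complex in five solvents versus
# two empirical solvent-polarity parameters.
X = np.array([[0.22, 0.40], [0.31, 0.55], [0.46, 0.62], [0.54, 0.71], [0.65, 0.80]])
y = np.array([18.2, 18.6, 19.1, 19.4, 19.9])

A = np.column_stack([np.ones(len(y)), X])          # intercept a plus slopes b_i
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b = coef[0], coef[1:]
print(f"nu = {a:.2f} + {b[0]:.2f}*x1 + {b[1]:.2f}*x2")   # fitted regression equation
```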

  10. A method to compute the inverse of a complex n-block tridiagonal quasi-hermitian matrix

    International Nuclear Information System (INIS)

    Godfrin, Elena

    1990-01-01

    This paper presents a method to compute the inverse of a complex n-block tridiagonal quasi-Hermitian matrix using adequate partitions of the complete matrix. This type of matrix is very common in quantum mechanics and, more specifically, in solid state physics (e.g., interfaces and superlattices) when the tight-binding approximation is used. The efficiency of the method is analyzed by comparing the required CPU time and work area for different common techniques. (Author)
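
    The partition idea can be sketched with the standard forward/backward Schur-complement sweeps that yield the diagonal blocks of the inverse; the details of the paper's exact scheme may differ. The check against a dense inverse uses random blocks:

```python
import numpy as np

def block_tridiag_inverse_diag(D, U, L):
    """Diagonal blocks of the inverse of a block tridiagonal matrix via
    forward/backward Schur-complement sweeps over partitions of the matrix.
    D[i] = A[i,i], U[i] = A[i,i+1], L[i] = A[i+1,i]."""
    n = len(D)
    lam = [D[0]]                                  # forward Schur complements
    for i in range(1, n):
        lam.append(D[i] - L[i - 1] @ np.linalg.inv(lam[i - 1]) @ U[i - 1])
    sig = [None] * n                              # backward Schur complements
    sig[n - 1] = D[n - 1]
    for i in range(n - 2, -1, -1):
        sig[i] = D[i] - U[i] @ np.linalg.inv(sig[i + 1]) @ L[i]
    return [np.linalg.inv(lam[i] + sig[i] - D[i]) for i in range(n)]

# Verify the middle diagonal block against a dense inverse (random example).
rng = np.random.default_rng(1)
D = [m @ m.T + 4 * np.eye(2) for m in rng.standard_normal((3, 2, 2))]
U = [0.3 * (a + 1j * b) for a, b in zip(rng.standard_normal((2, 2, 2)),
                                        rng.standard_normal((2, 2, 2)))]
L = [u.conj().T for u in U]
A = np.block([[D[0], U[0], np.zeros((2, 2))],
              [L[0], D[1], U[1]],
              [np.zeros((2, 2)), L[1], D[2]]])
print(np.allclose(np.linalg.inv(A)[2:4, 2:4],
                  block_tridiag_inverse_diag(D, U, L)[1]))   # True
```

    Each sweep inverts only block-sized matrices, so the cost grows linearly in the number of blocks instead of cubically in the full matrix dimension.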

  11. Experimental technique for study on three-particle reactions in kinematically total experiments with usage of the two-processor complex on the M-400 computer basis

    International Nuclear Information System (INIS)

    Berezin, F.N.; Kisurin, V.A.; Nemets, O.F.; Ofengenden, R.G.; Pugach, V.M.; Pavlenko, Yu.N.; Patlan', Yu.V.; Savrasov, S.S.

    1981-01-01

    An experimental technique for the investigation of three-particle nuclear reactions in kinematically complete experiments is described. The technique provides storage of one-dimensional and two-dimensional energy spectra from several detectors. A block diagram of the measuring system using this technique is presented. The measuring system consists of analog equipment for fast-slow coincidences and of a two-processor complex based on the M-400 computer with a common bus. The use of a two-processor complex, in which each computer has direct access to the memory of the other, permits separating the functions of data collection and operational data presentation, and performing the necessary physical calculations. The software of the measuring complex, which includes programs written in ASSEMBLER for the first computer and functional programs written in BASIC for the second computer, is considered. The software of the first computer includes the DISPETCHER dialog control program, a driver package for the control of external devices, an applied program package and system modules. The technique described was tested in an experiment investigating the d + 10B → α + α + α three-particle reaction at a deuteron energy of 13.6 MeV. The two-dimensional energy spectrum of the reaction obtained with the help of the described technique is presented [ru]

  12. Three-dimensional coupled Monte Carlo-discrete ordinates computational scheme for shielding calculations of large and complex nuclear facilities

    International Nuclear Information System (INIS)

    Chen, Y.; Fischer, U.

    2005-01-01

    Shielding calculations of advanced nuclear facilities such as accelerator-based neutron sources or fusion devices of the tokamak type are complicated due to their complex geometries and their large dimensions, including bulk shields of several meters thickness. While the complexity of the geometry in the shielding calculation can hardly be handled by the discrete ordinates method, the deep penetration of radiation through bulk shields is a severe challenge for the Monte Carlo particle transport technique. This work proposes a dedicated computational scheme for coupled Monte Carlo-Discrete Ordinates transport calculations to handle this kind of shielding problem. The Monte Carlo technique is used to simulate the particle generation and transport in the target region, with both complex geometry and reaction physics, and the discrete ordinates method is used to treat the deep penetration problem in the bulk shield. The coupling scheme has been implemented in a program system by loosely integrating the Monte Carlo transport code MCNP, the three-dimensional discrete ordinates code TORT and a newly developed coupling interface program for the mapping process. Test calculations were performed and compared with MCNP solutions. Satisfactory agreement was obtained between the two approaches. The program system has been chosen to treat the complicated shielding problem of the accelerator-based IFMIF neutron source. The successful application demonstrates that the coupling scheme, as implemented in the program system, is a useful computational tool for the shielding analysis of complex and large nuclear facilities. (authors)

  13. Computer science approach to quantum control

    International Nuclear Information System (INIS)

    Janzing, D.

    2006-01-01

    Whereas it is obvious that every computation process is a physical process, it has hardly been recognized that many complex physical processes bear similarities to computation processes. This is in particular true for the control of physical systems on the nanoscopic level: usually the system can only be accessed via a rather limited set of elementary control operations, and for many purposes only a concatenation of a large number of these basic operations will implement the desired process. This concatenation is in many cases quite similar to building complex programs from elementary steps, and principles for designing algorithms may thus be a paradigm for designing control processes. For instance, one can decrease the temperature of one part of a molecule by transferring its heat to the remaining part, where it is then dissipated to the environment. But the implementation of such a process involves a complex sequence of electromagnetic pulses. This work considers several hypothetical control processes on the nanoscopic level and shows their analogy to computation processes. We show that measuring certain types of quantum observables is such a complex task that every instrument able to perform it would necessarily be an extremely powerful computer. Likewise, the implementation of a heat engine on the nanoscale requires processing the heat in a way that is similar to information processing, and it can be shown that heat engines with maximal efficiency would be powerful computers, too. In the same way as problems in computer science can be classified by complexity classes, we can also classify control problems according to their complexity. Moreover, we directly relate these complexity classes for control problems to the classes in computer science. Unifying notions of complexity in computer science and physics has therefore two aspects: on the one hand, computer science methods help to analyze the complexity of physical processes. On the other hand, reasonable

  14. Optical Computing

    OpenAIRE

    Woods, Damien; Naughton, Thomas J.

    2008-01-01

    We consider optical computers that encode data using images and compute by transforming such images. We give an overview of a number of such optical computing architectures, including descriptions of the type of hardware commonly used in optical computing, as well as some of the computational efficiencies of optical devices. We go on to discuss optical computing from the point of view of computational complexity theory, with the aim of putting some old, and some very recent, re...

  15. Electron accelerator shielding design of KIPT neutron source facility

    Energy Technology Data Exchange (ETDEWEB)

    Zhong, Zhao Peng; Gohar, Yousry [Argonne National Laboratory, Argonne (United States)

    2016-06-15

    The Argonne National Laboratory of the United States and the Kharkov Institute of Physics and Technology of the Ukraine have been collaborating on the design, development and construction of a neutron source facility at Kharkov Institute of Physics and Technology utilizing an electron-accelerator-driven subcritical assembly. The electron beam power is 100 kW using 100-MeV electrons. The facility was designed to perform basic and applied nuclear research, produce medical isotopes, and train nuclear specialists. The biological shield of the accelerator building was designed to reduce the biological dose to less than 5.0e-03 mSv/h during operation. The main source of the biological dose for the accelerator building is the photons and neutrons generated from different interactions of leaked electrons from the electron gun and the accelerator sections with the surrounding components and materials. The Monte Carlo N-particle extended code (MCNPX) was used for the shielding calculations because of its capability to perform electron-, photon-, and neutron-coupled transport simulations. The photon dose was tallied using the MCNPX calculation, starting with the leaked electrons. However, it is difficult to accurately tally the neutron dose directly from the leaked electrons. The neutron yield per electron from the interactions with the surrounding components is very small, ∼0.01 neutrons per 100-MeV electron and even smaller for lower-energy electrons. This causes difficulties for the Monte Carlo analyses and consumes tremendous computational resources for tallying the neutron dose outside the shield boundary with acceptable accuracy. To avoid these difficulties, the SOURCE and TALLYX user subroutines of MCNPX were utilized for this study. The generated neutrons were banked, together with all related parameters, for a subsequent MCNPX calculation to obtain the neutron dose. The weight-window variance reduction technique was also utilized for both neutron and photon dose

  16. Assignment of solid-state 13C and 1H NMR spectra of paramagnetic Ni(II) acetylacetonate complexes aided by first-principles computations

    DEFF Research Database (Denmark)

    Rouf, Syed Awais; Jakobsen, Vibe Boel; Mareš, Jiří

    2017-01-01

    Recent advances in computational methodology have allowed first-principles calculations of the nuclear shielding tensor for a series of paramagnetic nickel(II) acetylacetonate complexes, [Ni(acac)2L2] with L = H2O, D2O, NH3, ND3, and PMe2Ph. These calculations have provided detailed insight into the origin of the par...

  17. A complex-plane strategy for computing rotating polytropic models - Numerical results for strong and rapid differential rotation

    International Nuclear Information System (INIS)

    Geroyannis, V.S.

    1990-01-01

    In this paper, a numerical method, called complex-plane strategy, is implemented in the computation of polytropic models distorted by strong and rapid differential rotation. The differential rotation model results from a direct generalization of the classical model, in the framework of the complex-plane strategy; this generalization yields very strong differential rotation. Accordingly, the polytropic models assume extremely distorted interiors, while their boundaries are slightly distorted. For an accurate simulation of differential rotation, a versatile method, called multiple partition technique is developed and implemented. It is shown that the method remains reliable up to rotation states where other elaborate techniques fail to give accurate results. 11 refs

  18. A computer program for external modes in complex ionic crystals (the rigid molecular-ion model)

    International Nuclear Information System (INIS)

    Chaplot, S.L.

    1978-01-01

    A computer program, DISPR, has been developed to calculate the external-mode phonon dispersion relation in the harmonic approximation for complex ionic crystals using the rigid molecular-ion model. A description of the program, the flow diagram and the required input information are given. A sample calculation for α-KNO3 is presented. The program can handle any type of crystal lattice with any number of atoms and molecules per unit cell, with suitable changes in dimension statements. (M.G.B.)

  19. Transition Manifolds of Complex Metastable Systems: Theory and Data-Driven Computation of Effective Dynamics.

    Science.gov (United States)

    Bittracher, Andreas; Koltai, Péter; Klus, Stefan; Banisch, Ralf; Dellnitz, Michael; Schütte, Christof

    2018-01-01

    We consider complex dynamical systems showing metastable behavior, but no local separation of fast and slow time scales. The article raises the question of whether such systems exhibit a low-dimensional manifold supporting their effective dynamics. For answering this question, we aim at finding nonlinear coordinates, called reaction coordinates, such that the projection of the dynamics onto these coordinates preserves the dominant time scales of the dynamics. We show that, based on a specific reducibility property, the existence of good low-dimensional reaction coordinates preserving the dominant time scales is guaranteed. Based on this theoretical framework, we develop and test a novel numerical approach for computing good reaction coordinates. The proposed algorithmic approach is fully local and thus not prone to the curse of dimension with respect to the state space of the dynamics. Hence, it is a promising method for data-based model reduction of complex dynamical systems such as molecular dynamics.

  20. Intraoperative computed tomography with an integrated navigation system in stabilization surgery for complex craniovertebral junction malformation.

    Science.gov (United States)

    Yu, Xinguang; Li, Lianfeng; Wang, Peng; Yin, Yiheng; Bu, Bo; Zhou, Dingbiao

    2014-07-01

    This study was designed to report our preliminary experience with stabilization procedures for complex craniovertebral junction malformation (CVJM) using intraoperative computed tomography (iCT) with an integrated neuronavigation system (NNS), and to evaluate the workflow, feasibility and clinical outcome of stabilization procedures using iCT image-guided navigation for complex CVJM. The stabilization procedures in CVJM are complex because of the area's intricate geometry and bony structures, its critical relationship to neurovascular structures and the intricate biomechanical issues involved. A sliding-gantry 40-slice computed tomography scanner was installed in a preexisting operating room. The images were transferred directly from the scanner to the NNS using an automated registration system. On the basis of the analysis of intraoperative computed tomographic images, 23 patients (11 males, 12 females) with complicated CVJM underwent navigated stabilization procedures to allow more control over screw placement. The ages of these patients were 19-52 years (mean: 33.5 y). We performed C1-C2 transarticular screw fixation in 6 patients to produce atlantoaxial arthrodesis with better reliability. Because of a high-riding transverse foramen on at least 1 side of the C2 vertebra and an anomalous vertebral artery position, 7 patients underwent C1 lateral mass and C2 pedicle screw fixation. Ten additional patients were treated with individualized occipitocervical fixation surgery because of hypoplasia of C1 or constraints due to the C2 bone structure. In total, 108 screws were inserted into 23 patients using navigational assistance. The screws comprised 20 C1 lateral mass screws, 26 C2, 14 C3, and 4 C4 pedicle screws, 32 occipital screws, and 12 C1-C2 transarticular screws. There were no vascular or neural complications except for pedicle perforations, which were detected in 2 screws (1.9%) and corrected intraoperatively without any persistent nerve or vessel damage. The overall

  1. Computer scientist looks at reliability computations

    International Nuclear Information System (INIS)

    Rosenthal, A.

    1975-01-01

    Results from the theory of computational complexity are applied to reliability computations on fault trees and networks. A well-known class of problems, which almost certainly have no fast solution algorithms, is presented. It is shown that even approximately computing the reliability of many systems is difficult enough to be in this class. In the face of this result, which indicates that for general systems the computation time will be exponential in the size of the system, decomposition techniques which can greatly reduce the effective size of a wide variety of realistic systems are explored.
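
    The exponential blow-up is easy to see in code: exact evaluation of a fault tree by state enumeration touches all 2^n component states. A toy sketch (illustrative, not from the paper):

```python
from itertools import product

def top_event_probability(n_components, p_fail, top_event):
    """Exact top-event probability by enumerating all 2^n component states,
    illustrating why generic reliability computation blows up exponentially."""
    total = 0.0
    for state in product([False, True], repeat=n_components):  # True = failed
        if top_event(state):
            pr = 1.0
            for failed, p in zip(state, p_fail):
                pr *= p if failed else (1.0 - p)
            total += pr
    return total

# Toy fault tree: system fails if component 0 fails, or both 1 and 2 fail.
p = [0.01, 0.05, 0.05]
print(top_event_probability(3, p, lambda s: s[0] or (s[1] and s[2])))  # 0.012475
```

    Decomposition techniques of the kind the paper explores work by collapsing independent subtrees into single pseudo-components, shrinking the n that enters the 2^n enumeration.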

  2. Single neuron computation

    CERN Document Server

    McKenna, Thomas M; Zornetzer, Steven F

    1992-01-01

    This book contains twenty-two original contributions that provide a comprehensive overview of computational approaches to understanding a single neuron structure. The focus on cellular-level processes is twofold. From a computational neuroscience perspective, a thorough understanding of the information processing performed by single neurons leads to an understanding of circuit- and systems-level activity. From the standpoint of artificial neural networks (ANNs), a single real neuron is as complex an operational unit as an entire ANN, and formalizing the complex computations performed by real n

  3. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that there cannot be a physical computer that, for any physical system external to it, takes the specification of that external system's state as input and then correctly predicts its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task.

  4. Patterns of students' computer use and relations to their computer and information literacy

    DEFF Research Database (Denmark)

    Bundsgaard, Jeppe; Gerick, Julia

    2017-01-01

    Background: Previous studies have shown that there is a complex relationship between students' computer and information literacy (CIL) and their use of information and communication technologies (ICT) for both recreational and school use. Methods: This study seeks to dig deeper into these complex relations by identifying different patterns of students' school-related and recreational computer use in the 21 countries participating in the International Computer and Information Literacy Study (ICILS 2013). Results: Latent class analysis (LCA) of the student questionnaire and performance data from ..., raising important questions about differences in contexts. Keywords: ICILS, Computer use, Latent class analysis (LCA), Computer and information literacy.

  5. Measuring Complexity of SAP Systems

    Directory of Open Access Journals (Sweden)

    Ilja Holub

    2016-10-01

    Full Text Available The paper discusses the reasons for the rise of complexity in the ERP system SAP R/3 and proposes a method for measuring the complexity of SAP. Based on this method, a computer program in ABAP for measuring the complexity of a particular SAP implementation is proposed as a tool for keeping ERP complexity under control. The main principle of the measurement method is counting the number of items or relations in the system. The proposed computer program is based on counting records in organizational tables in SAP.
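
    The counting principle is straightforward to sketch outside ABAP. Below, SQLite tables stand in for SAP organizational tables; the table names and contents are illustrative only, not the paper's actual measurement set.

```python
import sqlite3

# Stand-ins for SAP organizational tables; names and rows are made up.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE company_codes (id TEXT);
    INSERT INTO company_codes VALUES ('1000'), ('2000');
    CREATE TABLE plants (id TEXT);
    INSERT INTO plants VALUES ('P100'), ('P200'), ('P300');
""")

def complexity_score(conn, tables):
    """Complexity as the total number of configured items across org tables."""
    return sum(conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
               for t in tables)

print(complexity_score(db, ["company_codes", "plants"]))   # 5
```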

  6. Computation of complexity measures of morphologically significant zones decomposed from binary fractal sets via multiscale convexity analysis

    International Nuclear Information System (INIS)

    Lim, Sin Liang; Koo, Voon Chet; Daya Sagar, B.S.

    2009-01-01

    Multiscale convexity analysis of certain fractal binary objects-like 8-segment Koch quadric, Koch triadic, and random Koch quadric and triadic islands-is performed via (i) morphologic openings with respect to recursively changing the size of a template, and (ii) construction of convex hulls through half-plane closings. Based on scale vs convexity measure relationship, transition levels between the morphologic regimes are determined as crossover scales. These crossover scales are taken as the basis to segment binary fractal objects into various morphologically prominent zones. Each segmented zone is characterized through normalized morphologic complexity measures. Despite the fact that there is no notably significant relationship between the zone-wise complexity measures and fractal dimensions computed by conventional box counting method, fractal objects-whether they are generated deterministically or by introducing randomness-possess morphologically significant sub-zones with varied degrees of spatial complexities. Classification of realistic fractal sets and/or fields according to sub-zones possessing varied degrees of spatial complexities provides insight to explore links with the physical processes involved in the formation of fractal-like phenomena.
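
    The scale-versus-convexity curve can be sketched with standard morphological tools: open the binary set with a growing disc and track the retained area. This mirrors the spirit of the method; the structuring element, the radii and the normalisation are assumptions, not the paper's exact measure.

```python
import numpy as np
from scipy import ndimage

def convexity_profile(image, max_radius=8):
    """Fraction of the set's area retained by a morphological opening with a
    disc of growing radius. Crossover scales in this curve are the kind of
    transition levels used to segment the object into morphologic zones."""
    area0 = image.sum()
    profile = []
    for r in range(1, max_radius + 1):
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        disc = x**2 + y**2 <= r**2                 # disc structuring element
        opened = ndimage.binary_opening(image, structure=disc)
        profile.append(opened.sum() / area0)
    return profile

# A solid square keeps nearly all of its area under opening at these scales.
img = np.zeros((64, 64), bool)
img[16:48, 16:48] = True
print([f"{v:.2f}" for v in convexity_profile(img, 5)])
```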

  7. Comparison of surface extraction techniques performance in computed tomography for 3D complex micro-geometry dimensional measurements

    DEFF Research Database (Denmark)

    Torralba, Marta; Jiménez, Roberto; Yagüe-Fabra, José A.

    2018-01-01

    The number of industrial applications of computed tomography (CT) for dimensional metrology in the 10^0-10^3 mm range has been continuously increasing, especially in the last years. Due to its specific characteristics, CT has the potential to be employed as a viable solution for measuring 3D complex micro-geometries as well (i.e., in the sub-mm dimensional range). However, there are different factors that may influence the CT process performance, one of them being the surface extraction technique used. In this paper, two different extraction techniques are applied to measure a complex miniaturized dental file by CT in order to analyze their contribution to the final measurement uncertainty in complex geometries at the mm to sub-mm scales. The first method is based on a similarity analysis: the threshold determination; while the second one is based on a gradient or discontinuity analysis: the 3D...

  8. Influence of the chelator structures on the stability of Re and Tc Tricarbonyl complexes: a computational study

    International Nuclear Information System (INIS)

    Hernández Valdés, Daniel; Rodríguez Riera, Zalua; Jáuregui Haza, Ulises; Díaz García, Alicia; Benoist, Eric

    2016-01-01

    The development of novel radiopharmaceuticals in nuclear medicine based on the M(CO)3 (M = Tc, Re) complexes has attracted great attention [1]. The versatility of this core and the easy production of the fac-[M(CO)3(H2O)3]+ precursor could explain this interest [2,3]. The main characteristics of these tricarbonyl complexes are a high substitution stability of the three CO ligands and a corresponding lability of the coordinated water molecules, yielding, via easy exchange of a variety of mono-, bi-, and tridentate ligands, complexes of very high kinetic stability. A computational study of different tricarbonyl complexes of Re(I) and Tc(I) has been performed using density functional theory. The solvent effect was simulated using the polarizable continuum model. The fully optimized complexes show geometries that compare favorably with the X-ray data. These structures were used as a starting point to investigate the relative stability of tricarbonyl complexes with various tridentate ligands. They comprise an iminodiacetic acid unit for tridentate coordination to the fac-[M(CO)3]+ moiety (M = Re, Tc), an aromatic ring system bearing a functional group (NO2-, NH2- and Cl-) as a linking-site model, and a tethering moiety (methylene, ethylene, propylene, butylene or pentylene bridge) between the linking and coordinating sites. In general, Re complexes are more stable than the corresponding Tc complexes. Furthermore, the NH2 functional group, a medium length of the carbon chain and meta substitution increase the stability of the complexes. The correlation of these results with the available experimental data [4] on these systems allows bringing some understanding of the chemistry of tricarbonyl complexes. (author)

  9. Computer program for calculation of complex chemical equilibrium compositions and applications. Part 1: Analysis

    Science.gov (United States)

    Gordon, Sanford; Mcbride, Bonnie J.

    1994-01-01

    This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.
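
    For orientation, the minimization-of-free-energy idea at the heart of such programs can be shown on a toy ideal-gas mixture. The sketch below is not the NASA code; the species set, dimensionless chemical potentials and elemental abundances are invented for illustration. It minimizes G/RT subject to elemental balance:

```python
# Toy illustration of the minimization-of-free-energy approach for an
# ideal-gas mixture at fixed T and P (dimensionless G/RT), subject to
# elemental abundance constraints A @ n = b.  The mu0 values and
# abundances are assumed, illustrative numbers, not CEA data.
import numpy as np
from scipy.optimize import minimize

species = ["H2", "O2", "H2O"]
mu0 = np.array([-15.0, -20.0, -45.0])    # mu_i^0 / RT, assumed values
A = np.array([[2, 0, 2],                 # H atoms per species
              [0, 2, 1]])                # O atoms per species
b = np.array([4.0, 2.0])                 # total H and O (mol)

def gibbs(n):
    n = np.maximum(n, 1e-12)             # keep the logarithms finite
    return np.sum(n * (mu0 + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.array([1.0, 1.0, 1.0]),
               constraints=[{"type": "eq", "fun": lambda n: A @ n - b}],
               bounds=[(0, None)] * 3, method="SLSQP")
print(dict(zip(species, np.round(res.x, 4))))   # drives toward ~2 mol H2O
```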

  10. Computational complexity and memory usage for multi-frontal direct solvers used in p finite element analysis

    KAUST Repository

    Calo, Victor M.; Collier, Nathan; Pardo, David; Paszyński, Maciej R.

    2011-01-01

    The multi-frontal direct solver is the state of the art for the direct solution of linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm on linear systems resulting from p finite elements. Specifically we provide the estimates for systems resulting from C0 polynomial spaces spanned by B-splines. The structured grid and uniform polynomial order used in isogeometric meshes simplifies the analysis.

  12. Low-complexity computation of plate eigenmodes with Vekua approximations and the method of particular solutions

    Science.gov (United States)

    Chardon, Gilles; Daudet, Laurent

    2013-11-01

    This paper extends the method of particular solutions (MPS) to the computation of eigenfrequencies and eigenmodes of thin plates, in the framework of the Kirchhoff-Love plate theory. Specific approximation schemes are developed, with plane waves (MPS-PW) or Fourier-Bessel functions (MPS-FB). This framework also requires a suitable formulation of the boundary conditions. Numerical tests, on two plates with various boundary conditions, demonstrate that the proposed approach provides competitive results with standard numerical schemes such as the finite element method, at reduced complexity, and with large flexibility in the implementation choices.
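
    As a rough illustration of the MPS machinery, the sketch below scans a wavenumber range and looks for dips in the smallest singular value of a boundary-sampled plane-wave matrix. It uses a simplified Helmholtz membrane problem rather than the Kirchhoff-Love plate operator treated in the paper, so all details here are assumptions for illustration only:

```python
# Sketch of the method of particular solutions (MPS) with a plane-wave
# basis on a Dirichlet unit square (a membrane, not the plate problem of
# the paper).  A dip in the smallest singular value of the boundary matrix
# flags a candidate eigen-wavenumber.  Note: a robust MPS also samples
# interior points (Betcke & Trefethen) to reject spurious minima.
import numpy as np

def boundary_residual(k, n_waves=24, n_pts=200):
    t = np.linspace(0, 1, n_pts, endpoint=False)
    pts = np.concatenate([np.c_[t, 0*t], np.c_[1 + 0*t, t],
                          np.c_[1 - t, 1 + 0*t], np.c_[0*t, 1 - t]])
    th = np.linspace(0, 2*np.pi, n_waves, endpoint=False)
    dirs = np.c_[np.cos(th), np.sin(th)]
    M = np.exp(1j * k * pts @ dirs.T)       # plane waves sampled on the boundary
    return np.linalg.svd(M, compute_uv=False)[-1]

ks = np.linspace(3.0, 6.0, 300)
res = [boundary_residual(k) for k in ks]
print("candidate eigen-wavenumber ~", ks[int(np.argmin(res))])  # near pi*sqrt(2) ~ 4.44
```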

  13. Spectro Analytical, Computational and In Vitro Biological Studies of Novel Substituted Quinolone Hydrazone and its Metal Complexes.

    Science.gov (United States)

    Nagula, Narsimha; Kunche, Sudeepa; Jaheer, Mohmed; Mudavath, Ravi; Sivan, Sreekanth; Ch, Sarala Devi

    2018-01-01

    Some novel transition metal [Cu(II), Ni(II) and Co(II)] complexes of a nalidixic acid hydrazone have been prepared and characterized by employing spectro-analytical techniques, viz. elemental analysis, 1H-NMR, mass, UV-Vis, IR, TGA-DTA, SEM-EDX, ESR and spectrophotometric studies. The HyperChem 7.5 software was used for geometry optimization of the title compound in its molecular and ionic forms. Quantum mechanical parameters, contour maps of the highest occupied molecular orbitals (HOMO) and lowest unoccupied molecular orbitals (LUMO), and the corresponding binding energy values were computed using the semi-empirical single-point PM3 method. Stoichiometric equilibrium studies of the metal complexes, carried out spectrophotometrically using Job's continuous variation and mole ratio methods, inferred the formation of 1:2 (ML2) metal complexes in the respective systems. The title compound and its metal complexes were screened for antibacterial and antifungal properties; the metal complexes exemplified improved activity. Studies of nuclease activity for the cleavage of CT-DNA and MTT assays of in vitro cytotoxic properties showed high activity for the metal complexes. In addition, the DNA binding properties of the Cu(II), Ni(II) and Co(II) complexes investigated by electronic absorption and fluorescence measurements revealed their good binding ability, with good agreement of the Kb values obtained from both techniques. Molecular docking studies were also performed to find the binding affinity of the synthesized compounds with DNA (PDB ID: 1N37) and "Thymidine phosphorylase from E. coli" (PDB ID: 4EAF) protein targets.

  14. Efficient physical embedding of topologically complex information processing networks in brains and computer circuits.

    Directory of Open Access Journals (Sweden)

    Danielle S Bassett

    2010-04-01

    Full Text Available Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI computer circuits have evolved by commercially driven technology development. Here we follow historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent's rule, which is related to the dimensionality of the circuit's interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent's rule, and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired they did not strictly minimize wiring costs. Artificial and biological information processing systems both may evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
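
    For readers unfamiliar with Rent's rule E = kN^p, the hedged sketch below estimates a topological Rent exponent by sampling connected blocks of a network and fitting boundary edges against block size on log-log axes. Published analyses use proper recursive partitioning; BFS balls and the demo graph are simplifying assumptions:

```python
# Rough sketch of a topological Rent exponent: sample connected "blocks"
# by BFS, count block nodes N and boundary edges E, then fit E = k * N^p.
import random
import numpy as np
import networkx as nx

def rent_exponent(G, sizes=(4, 8, 16, 32), samples=200, seed=0):
    rng = random.Random(seed)
    pts = []
    nodes = list(G.nodes)
    for n_target in sizes:
        for _ in range(samples):
            start = rng.choice(nodes)
            block = list(nx.bfs_tree(G, start).nodes)[:n_target]  # BFS ball
            e = len(list(nx.edge_boundary(G, block)))
            if e > 0:
                pts.append((len(block), e))
    logN = np.log([p[0] for p in pts])
    logE = np.log([p[1] for p in pts])
    p, _ = np.polyfit(logN, logE, 1)    # slope of the log-log fit is p
    return p

G = nx.connected_watts_strogatz_graph(500, 6, 0.1, seed=1)
print("estimated Rent exponent p ~", round(rent_exponent(G), 2))
```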

  15. Computer Simulation of Complex Power System Faults under various Operating Conditions

    International Nuclear Information System (INIS)

    Khandelwal, Tanuj; Bowman, Mark

    2015-01-01

    A power system is normally treated as a balanced symmetrical three-phase network. When a fault occurs, the symmetry is normally upset, resulting in unbalanced currents and voltages appearing in the network. For the correct application of protection equipment, it is essential to know the fault current distribution throughout the system and the voltages in different parts of the system due to the fault. There may be situations where protection engineers have to analyze faults that are more complex than simple shunt faults. One type of complex fault is an open phase condition that can result from a fallen conductor or failure of a breaker pole. In the former case, the condition is often accompanied by a fault detectable with normal relaying. In the latter case, the condition may be undetected by standard line relaying. The effect on a generator is dependent on the location of the open phase and the load level. If an open phase occurs between the generator terminals and the high-voltage side of the GSU in the switchyard, and the generator is at full load, damaging negative sequence current can be generated. However, for the same operating condition, an open conductor at the incoming transmission lines located in the switchyard can result in minimal negative sequence current. In 2012, a nuclear power generating station (NPGS) suffered a series (open phase) fault due to insulator mechanical failure in the 345 kV switchyard. This resulted in both reactor units tripping offline in two separate incidents. A series fault on one of the phases resulted in a voltage imbalance that was not detected by the degraded voltage relays. These under-voltage relays did not initiate a start signal to the emergency diesel generators (EDG) because they sensed adequate voltage on the remaining phases, exposing a design vulnerability. This paper is intended to help protection engineers calculate complex circuit faults like the open phase condition using a computer program. The impact of this type of
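
    The open-phase scenario described above is where symmetrical components earn their keep. A minimal sketch of the Fortescue decomposition (with illustrative 1 pu currents, not data from the cited event) shows how losing phase A produces a substantial negative-sequence component:

```python
# Fortescue (symmetrical components) check for an open-phase condition:
# with phase A open (Ia = 0) and the other phases near nominal, the
# decomposition yields a large negative-sequence current |I2|.
import numpy as np

a = np.exp(2j * np.pi / 3)          # 120-degree rotation operator
A = np.array([[1, 1, 1],            # synthesis matrix: [Ia, Ib, Ic] = A @ [I0, I1, I2]
              [1, a**2, a],
              [1, a, a**2]])

def sequence_components(Ia, Ib, Ic):
    """Return (zero, positive, negative) sequence phasors."""
    return np.linalg.solve(A, np.array([Ia, Ib, Ic]))

for label, Ia in (("balanced    ", 1.0 + 0j), ("phase A open", 0.0 + 0j)):
    I0, I1, I2 = sequence_components(Ia, a**2, a)   # Ib, Ic held at 1 pu
    print(f"{label}: |I1| = {abs(I1):.3f} pu, |I2| = {abs(I2):.3f} pu")
```

    For the balanced set |I2| = 0, while the open phase gives |I2| = 1/3 pu, the kind of sustained negative-sequence current that heats generator rotors.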

  16. Complex-plane strategy for computing rotating polytropic models - efficiency and accuracy of the complex first-order perturbation theory

    International Nuclear Information System (INIS)

    Geroyannis, V.S.

    1988-01-01

    In this paper, a numerical method is developed for determining the structure distortion of a polytropic star which rotates either uniformly or differentially. This method carries out the required numerical integrations in the complex plane. The method is implemented to compute indicative quantities, such as the critical perturbation parameter which represents an upper limit in the rotational behavior of the star. From such indicative results, it is inferred that this method achieves impressive improvement over other relevant methods; most importantly, it is comparable to some of the most elaborate and accurate techniques on the subject. It is also shown that the use of this method with Chandrasekhar's first-order perturbation theory yields an immediate drastic improvement of the results. Thus, there is no need - for most applications concerning rotating polytropic models - to proceed to the further use of the method with higher-order techniques, unless the maximum accuracy of the method is required. 31 references
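
    The core trick, integrating the stellar-structure equations along a path in the complex plane, can be caricatured on the non-rotating Lane-Emden equation. The sketch below is only a cartoon of the complex-plane strategy, not the paper's perturbation scheme; the off-axis shift eps, the integration span and the polytropic index are illustrative assumptions:

```python
# Illustrative complex-plane integration of the Lane-Emden equation
# theta'' + (2/xi) theta' + theta^n = 0 along a path xi = t + i*eps,
# shifted slightly off the real axis so theta**n stays well defined
# past the first zero of theta.
import numpy as np
from scipy.integrate import solve_ivp

n, eps = 1.5, 1e-3

def rhs(t, y):
    xi = t + 1j * eps
    theta, dtheta = y
    return [dtheta, -theta**n - 2.0 / xi * dtheta]

# Series start near the regular center: theta ~ 1 - t^2/6
t0 = 1e-3
y0 = [1 - t0**2 / 6 + 0j, -t0 / 3 + 0j]
sol = solve_ivp(rhs, (t0, 5.0), y0, rtol=1e-8, atol=1e-10, dense_output=True)

# First zero of Re(theta) approximates the stellar surface xi_1
ts = np.linspace(t0, 5.0, 5000)
re = np.real(sol.sol(ts)[0])
print("xi_1 ~", ts[np.argmax(re < 0)])   # ~3.654 for n = 1.5
```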

  17. Complex on the base of the ISKRA 226.6 personal computer for nuclear quadrupole resonance signal processing

    International Nuclear Information System (INIS)

    Morgunov, V.G.; Kravchenko, Eh.A.

    1988-01-01

    A complex designed for investigations by the nuclear quadrupole resonance (NQR) method has been developed; it includes a radiospectrometer, a multichannel spectrum analyzer and an ISKRA 226.6 personal computer. An analog-to-digital converter (ADC) with a buffer storage device, an interface and a microcomputer are used to process the NQR signals. The ADC conversion time is no more than 50 ns, with a linearity of 1%. Programs for Fourier analysis of NQR signals and calculation of relaxation times have been developed
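
    The processing chain described, digitization followed by Fourier analysis and relaxation-time fitting, can be sketched in a few lines of Python. The FID parameters below are invented for illustration; the actual complex used dedicated hardware and its own programs:

```python
# Minimal sketch: FFT of a digitized free-induction decay to locate the
# NQR line, plus a log-linear fit of the early envelope for the
# relaxation time.  Signal parameters are assumed, illustrative values.
import numpy as np
from scipy.signal import hilbert

fs, f0, T2 = 1.0e6, 1.2e5, 2.0e-4        # sample rate, line offset, decay time
t = np.arange(4096) / fs
fid = np.exp(-t / T2) * np.cos(2 * np.pi * f0 * t) \
      + 0.02 * np.random.default_rng(1).standard_normal(t.size)

spec = np.abs(np.fft.rfft(fid))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print("peak at %.1f kHz" % (freqs[np.argmax(spec)] / 1e3))   # ~120 kHz

env = np.abs(hilbert(fid))[:256]          # analytic-signal envelope, early samples
T2_est = -1.0 / np.polyfit(t[:256], np.log(env), 1)[0]
print("T2 estimate ~ %.1e s" % T2_est)
```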

  18. Increasing complexity with quantum physics.

    Science.gov (United States)

    Anders, Janet; Wiesner, Karoline

    2011-09-01

    We argue that complex systems science and the rules of quantum physics are intricately related. We discuss a range of quantum phenomena, such as cryptography, computation and quantum phases, and the rules responsible for their complexity. We identify correlations as a central concept connecting quantum information and complex systems science. We present two examples for the power of correlations: using quantum resources to simulate the correlations of a stochastic process and to implement a classically impossible computational task.

  19. Physics analyses of an accelerator-driven sub-critical assembly

    Science.gov (United States)

    Naberezhnev, Dmitry G.; Gohar, Yousry; Bailey, James; Belch, Henry

    2006-06-01

    Physics analyses have been performed for an accelerator-driven sub-critical assembly as a part of the Argonne National Laboratory activity in preparation for a joint conceptual design with the Kharkov Institute of Physics and Technology (KIPT) of Ukraine. KIPT has a plan to construct an accelerator-driven sub-critical assembly targeted towards medical isotope production and the support of the Ukraine nuclear industry. The external neutron source is produced either through photonuclear reactions in tungsten or uranium targets, or deuteron reactions in a beryllium target. KIPT intends to use high-enriched uranium (HEU) for the fuel of the sub-critical assembly. The main objective of this paper is to study the possibility of utilizing low-enriched uranium (LEU) fuel instead of HEU fuel without penalizing the sub-critical assembly performance, in particular the neutron flux level. In the course of this activity, several studies have been carried out to investigate the main choices for the system's parameters. The external neutron source has been characterized and a pre-conceptual target design has been developed. Several sub-critical configurations with different fuel enrichments and densities have been considered. Based on our analysis, it was shown that the performance of the LEU fuel is comparable with that of the HEU fuel. The LEU fuel sub-critical assembly with 200-MeV electron energy and 100-kW electron beam power has an average total flux of ~2.50×10^13 n/(cm^2·s) in the irradiation channels. The corresponding total facility power is ~204 kW, divided into 91 and 113 kW deposited in the target and sub-critical assemblies, respectively.

  20. Computer-aided design system for a complex of problems on calculation and analysis of engineering and economical indexes of NPP power units

    International Nuclear Information System (INIS)

    Stepanov, V.I.; Koryagin, A.V.; Ruzankov, V.N.

    1988-01-01

    A computer-aided design system for a complex of problems concerning the calculation and analysis of engineering and economical indices of NPP power units is described. The system provides means for the automated preparation and debugging of the data base of the software complex, which realizes the plotted algorithm in the power unit control system. In addition, the system contains devices for the automated preparation and registration of technical documentation

  1. Complex Odontoma: A Case Report with Micro-Computed Tomography Findings

    Directory of Open Access Journals (Sweden)

    L. A. N. Santos

    2016-01-01

    Full Text Available Odontomas are the most common benign tumors of odontogenic origin. They are normally diagnosed on routine radiographs, due to the absence of symptoms. Histopathologic evaluation confirms the diagnosis, especially in cases of complex odontoma, which may be confused during radiographic examination with an osteoma or other highly calcified bone lesions. Micro-CT is a new technology that enables three-dimensional analysis with better spatial resolution compared with cone beam computed tomography. Another great advantage of this technology is that the sample does not need special preparation or destruction of the sectioned area as in histopathologic evaluation. An odontoma in a 26-year-old man is presented with CBCT and microtomography images. It was first observed on panoramic radiographs and then by CBCT. The lesion and the impacted third molar were surgically excised using a modified Neumann approach. After removal, the specimen was evaluated by histopathology and microtomography to confirm the diagnostic hypothesis. According to the results, micro-CT enabled an assessment of the sample similar to histopathology, without destruction of the sample. With further development, micro-CT could be a powerful diagnostic tool in future research.

  2. Computer simulations of dendrimer-polyelectrolyte complexes.

    Science.gov (United States)

    Pandav, Gunja; Ganesan, Venkat

    2014-08-28

    We carry out a systematic analysis of static properties of the clusters formed by complexation between charged dendrimers and linear polyelectrolyte (LPE) chains in a dilute solution under good solvent conditions. We use single chain in mean-field simulations and analyze the structure of the clusters through radial distribution functions of the dendrimer, cluster size, and charge distributions. The effects of LPE length, charge ratio between LPE and dendrimer, the influence of salt concentration, and the dendrimer generation number are examined. Systems with short LPEs showed a reduced propensity for aggregation with dendrimers, leading to formation of smaller clusters. In contrast, larger dendrimers and longer LPEs lead to larger clusters with significant bridging. Increasing salt concentration was seen to reduce aggregation between dendrimers as a result of screening of electrostatic interactions. Generally, maximum complexation was observed in systems with an equal amount of net dendrimer and LPE charges, whereas either excess LPE or dendrimer concentrations resulted in reduced clustering between dendrimers.

  3. Untangling the complexity of blood coagulation network: use of computational modelling in pharmacology and diagnostics.

    Science.gov (United States)

    Shibeko, Alexey M; Panteleev, Mikhail A

    2016-05-01

    Blood coagulation is a complex biochemical network that plays critical roles in haemostasis (a physiological process that stops bleeding on injury) and thrombosis (pathological vessel occlusion). Both up- and down-regulation of coagulation remain a major challenge for modern medicine, with the ultimate goal to correct haemostasis without causing thrombosis and vice versa. Mathematical/computational modelling is potentially an important tool for understanding blood coagulation disorders and their treatment. It can save a huge amount of time and resources, and provide a valuable alternative or supplement when clinical studies are limited, not ethical, or technically impossible. This article reviews the contemporary state of the art in the modelling of blood coagulation for practical purposes: to reveal the molecular basis of a disease, to understand mechanisms of drug action, to predict pharmacodynamics and drug-drug interactions, to suggest potential drug targets or to improve the quality of diagnostics. The different model types and designs used for this are discussed. Functional mechanisms of procoagulant bypassing agents and investigations of coagulation inhibitors were the two particularly popular applications of computational modelling that gave non-trivial results. Yet, like any other tool, modelling has its limitations, mainly determined by insufficient knowledge of the system and the uncertainty and unreliability of complex models. We show how this can be overcome to some extent and discuss what can be expected from the mathematical modelling of coagulation in the not-so-far future.

  4. On the interconnection of stable protein complexes: inter-complex hubs and their conservation in Saccharomyces cerevisiae and Homo sapiens networks.

    Science.gov (United States)

    Guerra, Concettina

    2015-01-01

    Protein complexes are key molecular entities that perform a variety of essential cellular functions. The connectivity of proteins within a complex has been widely investigated with both experimental and computational techniques. We developed a computational approach to identify and characterise proteins that play a role in interconnecting complexes. We computed a measure of inter-complex centrality, the crossroad index, based on disjoint paths connecting proteins in distinct complexes and identified inter-complex hubs as proteins with a high value of the crossroad index. We applied the approach to a set of stable complexes in Saccharomyces cerevisiae and in Homo sapiens. Just as done for hubs, we evaluated the topological and biological properties of inter-complex hubs addressing the following questions. Do inter-complex hubs tend to be evolutionary conserved? What is the relation between crossroad index and essentiality? We found a good correlation between inter-complex hubs and both evolutionary conservation and essentiality.
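
    As a rough stand-in for the crossroad index, one can score each protein by how often it lies on shortest paths between proteins of distinct complexes. The sketch below uses networkx; restricted betweenness replaces the paper's disjoint-path construction, and the toy "complexes" on the karate-club graph are arbitrary, so treat every detail as an assumption:

```python
# Simplified proxy for an inter-complex centrality: credit each node for
# appearing on shortest paths between members of *distinct* complexes.
import itertools
import networkx as nx

def inter_complex_score(G, complexes):
    score = dict.fromkeys(G, 0.0)
    for A, B in itertools.combinations(complexes, 2):
        for s in A:
            for t in B:
                if not nx.has_path(G, s, t):
                    continue
                for path in nx.all_shortest_paths(G, s, t):
                    for v in path[1:-1]:           # interior nodes only
                        score[v] += 1.0
    return score

G = nx.karate_club_graph()
complexes = [{0, 1, 2, 3}, {29, 30, 32, 33}]       # toy "complexes"
top = sorted(inter_complex_score(G, complexes).items(),
             key=lambda kv: -kv[1])[:3]
print("candidate inter-complex hubs:", top)
```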

  5. Quantum computation

    International Nuclear Information System (INIS)

    Deutsch, D.

    1992-01-01

    As computers become ever more complex, they inevitably become smaller. This leads to a need for components which are fabricated and operate on increasingly smaller size scales. Quantum theory is already taken into account in microelectronics design. This article explores how quantum theory will need to be incorporated into computers in the future in order for their components to function. Computation tasks which depend on quantum effects will become possible. Physicists may have to reconsider their perspective on computation in the light of understanding developed in connection with universal quantum computers. (UK)

  6. Numerical Nuclear Second Derivatives on a Computing Grid: Enabling and Accelerating Frequency Calculations on Complex Molecular Systems.

    Science.gov (United States)

    Yang, Tzuhsiung; Berry, John F

    2018-06-04

    The computation of nuclear second derivatives of energy, or the nuclear Hessian, is an essential routine in quantum chemical investigations of ground and transition states, thermodynamic calculations, and molecular vibrations. Analytic nuclear Hessian computations require the resolution of costly coupled-perturbed self-consistent field (CP-SCF) equations, while numerical differentiation of analytic first derivatives has an unfavorable 6N (N = number of atoms) prefactor. Herein, we present a new method in which grid computing is used to accelerate and/or enable the evaluation of the nuclear Hessian via numerical differentiation: NUMFREQ@Grid. Nuclear Hessians were successfully evaluated by NUMFREQ@Grid at the DFT level as well as using RIJCOSX-ZORA-MP2 or RIJCOSX-ZORA-B2PLYP for a set of linear polyacenes with systematically increasing size. For the larger members of this group, NUMFREQ@Grid was found to outperform the wall clock time of analytic Hessian evaluation; at the MP2 or B2PLYP levels, these Hessians cannot even be evaluated analytically. We also evaluated a 156-atom catalytically relevant open-shell transition metal complex and found that NUMFREQ@Grid is faster (7.7 times shorter wall clock time) and less demanding (4.4 times less memory requirement) than an analytic Hessian. Capitalizing on the capabilities of parallel grid computing, NUMFREQ@Grid can outperform analytic methods in terms of wall time, memory requirements, and treatable system size. The NUMFREQ@Grid method presented herein demonstrates how grid computing can be used to facilitate embarrassingly parallel computational procedures and is a pioneer for future implementations.
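
    The numerical-differentiation scheme that NUMFREQ@Grid parallelizes is easy to sketch: central differences of gradients over 2 x 3N displaced geometries, each an independent job. In the hedged toy below, a cheap analytic function stands in for the QM gradient code, and local multiprocessing stands in for the grid:

```python
# Nuclear Hessian from central differences of analytic gradients; each
# displaced-geometry gradient is an independent, embarrassingly parallel job.
import numpy as np
from multiprocessing import Pool

def gradient(x):
    """Stand-in for an expensive analytic QM gradient (toy PES
    E = x^2 + y^4 + sin z, so the exact Hessian is diag(2, 12y^2, -sin z))."""
    return np.array([2 * x[0], 4 * x[1]**3, np.cos(x[2])])

def hessian(x0, h=1e-4, workers=4):
    n = len(x0)
    # 2n displaced geometries -> one gradient job each
    disp = [x0 + s * h * e for e in np.eye(n) for s in (+1.0, -1.0)]
    with Pool(workers) as pool:
        grads = pool.map(gradient, disp)
    H = np.empty((n, n))
    for i in range(n):
        H[:, i] = (grads[2 * i] - grads[2 * i + 1]) / (2 * h)
    return 0.5 * (H + H.T)          # symmetrize away residual numerical noise

if __name__ == "__main__":
    print(hessian(np.array([1.0, 0.5, 0.0])))
```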

  7. Quantum chemical investigation of levofloxacin-boron complexes: A computational approach

    Science.gov (United States)

    Sayin, Koray; Karakaş, Duran

    2018-04-01

    Quantum chemical calculations are performed on some boron complexes with levofloxacin. The boron complex with fluorine atoms is optimized with three different methods (HF, B3LYP and M062X) and the 6-31+G(d) basis set. The best level is determined to be M062X/6-31+G(d) by comparison of the experimental and calculated results for complex (1). The other complexes are optimized using this level. Structural properties and IR and NMR spectra are examined in detail. The biological activities of the mentioned complexes are investigated via some quantum chemical descriptors and molecular docking analyses. As a result, the biological activities of complexes (2) and (4) are close to each other and higher than those of the other complexes. Additionally, the NLO properties of the mentioned complexes are investigated via some quantum chemical parameters. It is found that complex (3) is the best candidate for NLO applications.

  8. Alpha complexes in protein structure prediction

    DEFF Research Database (Denmark)

    Winter, Pawel; Fonseca, Rasmus

    2015-01-01

    Reducing the computational effort and increasing the accuracy of potential energy functions is of utmost importance in modeling biological systems, for instance in protein structure prediction, docking or design. Evaluating interactions between nonbonded atoms is the bottleneck of such computations... α-complexes from scratch for every configuration encountered during the search for the native structure would make this approach hopelessly slow. However, it is argued that kinetic α-complexes can be used to reduce the computational effort of determining the potential energy when "moving" from one configuration... to a neighboring one. As a consequence, the relatively expensive (initial) construction of an α-complex is expected to be compensated by subsequent fast kinetic updates during the search process. Computational results presented in this paper are limited. However, they suggest that the applicability of α...

  9. Equation-free and variable free modeling for complex/multiscale systems. Coarse-grained computation in science and engineering using fine-grained models

    Energy Technology Data Exchange (ETDEWEB)

    Kevrekidis, Ioannis G. [Princeton Univ., NJ (United States)

    2017-02-01

    The work explored the linking of modern developing machine learning techniques (manifold learning and in particular diffusion maps) with traditional PDE modeling/discretization/scientific computation techniques via the equation-free methodology developed by the PI. The result (in addition to several PhD degrees, two of them by CSGF Fellows) was a sequence of strong developments - in part on the algorithmic side, linking data mining with scientific computing, and in part on applications, ranging from PDE discretizations to molecular dynamics and complex network dynamics.

  10. Measurements of LET Spectra of the JINR Phasotron Radiotherapy Proton Beam

    Czech Academy of Sciences Publication Activity Database

    Kubančák, Ján; Molokanov, A. G.

    2013-01-01

    Roč. 2013, č. 6 (2013), s. 90-92 ISSN 1562-6016 R&D Projects: GA MŠk LA08002 Institutional support: RVO:61389005 Keywords: LET spectra * proton beam Subject RIV: BO - Biophysics Impact factor: 0.102, year: 2013 http://vant.kipt.kharkov.ua/ARTICLE/VANT_2013_6/article_2013_6_90.pdf

  11. Effects of complex feedback on computer-assisted modular instruction

    NARCIS (Netherlands)

    Gordijn, Jan; Nijhof, W.J.

    2002-01-01

    The aim of this study is to determine the effects of two versions of Computer-Based Feedback within a prevocational system of modularized education in The Netherlands. The implementation and integration of Computer-Based Feedback (CBF) in Installation Technology modules in all schools (n=60) in The

  12. Complex fluids in biological systems experiment, theory, and computation

    CERN Document Server

    2015-01-01

    This book serves as an introduction to the continuum mechanics and mathematical modeling of complex fluids in living systems. The form and function of living systems are intimately tied to the nature of surrounding fluid environments, which commonly exhibit nonlinear and history dependent responses to forces and displacements. With ever-increasing capabilities in the visualization and manipulation of biological systems, research on the fundamental phenomena, models, measurements, and analysis of complex fluids has taken a number of exciting directions. In this book, many of the world’s foremost experts explore key topics such as: Macro- and micro-rheological techniques for measuring the material properties of complex biofluids and the subtleties of data interpretation Experimental observations and rheology of complex biological materials, including mucus, cell membranes, the cytoskeleton, and blood The motility of microorganisms in complex fluids and the dynamics of active suspensions Challenges and solut...

  13. Efforts to transform computers reach milestone

    CERN Multimedia

    Johnson, G

    2001-01-01

    Scientists in San Jose, California, have performed the most complex calculation ever using a quantum computer - factoring the number 15. In contrast to the switches in conventional computers, which although tiny consist of billions of atoms, quantum computations are carried out by manipulating single atoms. The laws of quantum mechanics which govern these actions in fact mean that multiple computations could be done in parallel; this would drastically cut down the time needed to carry out very complex calculations.

  14. Computer simulation of complexity in plasmas

    International Nuclear Information System (INIS)

    Hayashi, Takaya; Sato, Tetsuya

    1998-01-01

    By making a comprehensive comparative study of many self-organizing phenomena occurring in magnetohydrodynamics and kinetic plasmas, we came up with a hypothetical grand view of self-organization. This assertion is confirmed by a recent computer simulation for a broader science field, specifically, the structure formation of short polymer chains, where the nature of the interaction is completely different from that of plasmas. It is found that the formation of the global orientation order proceeds stepwise. (author)

  15. 3-Dimensional computed tomography imaging of the ring-sling complex with non-operative survival case in a 10-year-old female

    OpenAIRE

    Fukuda, Hironobu; Imataka, George; Drago, Fabrizio; Maeda, Kosaku; Yoshihara, Shigemi

    2017-01-01

    We report a case of a 10-year-old female patient who survived ring-sling complex without surgery. The patient had congenital wheezing from the neonatal period and was treated after a tentative diagnosis of infantile asthma. The patient suffered from allergy and was hospitalized several times due to severe wheezing, and when she was 22 months old, she was diagnosed with ring-sling complex. We used a segmental 4 mm internal diameter of the trachea for 3-dimensional computed tomography (3D-CT). ...

  16. On the Computational Capabilities of Physical Systems. Part 1; The Impossibility of Infallible Computation

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In this first of two papers, strong limits on the accuracy of physical computation are established. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out any computational task in the subset of such tasks that can be posed to C. This result holds whether the computational tasks concern a system that is physically isolated from C, or instead concern a system that is coupled to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly 'processing information faster than the universe does'. The results also mean that there cannot exist an infallible, general-purpose observation apparatus, and that there cannot be an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - a definition of 'physical computation' - is needed to address the issues considered in these papers. While this definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. The second in this pair of papers presents a preliminary exploration of some of this mathematical structure, including in particular that of prediction complexity, which is a 'physical computation

  17. Automatic Complexity Analysis

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    1989-01-01

    One way to analyse programs is to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of input. We describe a system to derive such time bounds automatically using abstract...

  18. Computation: A New Open Access Journal of Computational Chemistry, Computational Biology and Computational Engineering

    Directory of Open Access Journals (Sweden)

    Karlheinz Schwarz

    2013-09-01

    Full Text Available Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems in fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized below. In each section a further focusing will be provided by occasionally organizing special issues on topics of high interest, collecting papers on fundamental work in the field. More applied papers should be submitted to their corresponding specialist journals. To help us achieve our goal with this journal, we have an excellent editorial board to advise us on the exciting current and future trends in computation from methodology to application. We very much look forward to hearing all about the research going on across the world. [...

  19. Computational design of RNAs with complex energy landscapes.

    Science.gov (United States)

    Höner zu Siederdissen, Christian; Hammer, Stefan; Abfalter, Ingrid; Hofacker, Ivo L; Flamm, Christoph; Stadler, Peter F

    2013-12-01

    RNA has become an integral building material in synthetic biology. Dominated by their secondary structures, which can be computed efficiently, RNA molecules are amenable not only to in vitro and in vivo selection, but also to rational, computation-based design. While the inverse folding problem of constructing an RNA sequence with a prescribed ground-state structure has received considerable attention for nearly two decades, there have been few efforts to design RNAs that can switch between distinct prescribed conformations. We introduce a user-friendly tool for designing RNA sequences that fold into multiple target structures. The underlying algorithm makes use of a combination of graph coloring and heuristic local optimization to find sequences whose energy landscapes are dominated by the prescribed conformations. A flexible interface allows the specification of a wide range of design goals. We demonstrate that bi- and tri-stable "switches" can be designed easily with moderate computational effort for the vast majority of compatible combinations of desired target structures. RNAdesign is freely available under the GPL-v3 license.

  20. Epidemic modeling in complex realities.

    Science.gov (United States)

    Colizza, Vittoria; Barthélemy, Marc; Barrat, Alain; Vespignani, Alessandro

    2007-04-01

    In our global world, the increasing complexity of social relations and transport infrastructures are key factors in the spread of epidemics. In recent years, the increasing availability of computer power has made it possible both to obtain reliable data quantifying the complexity of the networks on which epidemics may propagate and to envision computational tools able to tackle the analysis of such propagation phenomena. These advances have put in evidence the limits of homogeneous assumptions and simple spatial diffusion approaches, and stimulated the inclusion of complex features and heterogeneities relevant in the description of epidemic diffusion. In this paper, we review recent progress that integrates complex systems and networks analysis with epidemic modelling, and focus on the impact of the various complex features of real systems on the dynamics of epidemic spreading.
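
    A minimal network SIR simulation makes the point about heterogeneous topology concrete. Everything below (networks, rates, horizon) is an illustrative assumption, not a model from the reviewed literature:

```python
# Discrete-time SIR on a contact network: compare the final attack rate on
# a heavy-tailed (scale-free) graph against a homogeneous ring lattice.
import random
import networkx as nx

def sir(G, beta=0.05, gamma=0.1, seeds=1, steps=100, seed=42):
    rng = random.Random(seed)
    state = {v: "S" for v in G}
    for v in rng.sample(list(G), seeds):
        state[v] = "I"
    for _ in range(steps):
        new = dict(state)
        for v in G:
            if state[v] == "I":
                for u in G.neighbors(v):            # infect susceptible neighbors
                    if state[u] == "S" and rng.random() < beta:
                        new[u] = "I"
                if rng.random() < gamma:            # recover
                    new[v] = "R"
        state = new
    return sum(s != "S" for s in state.values()) / len(G)

hub_net = nx.barabasi_albert_graph(2000, 3, seed=1)      # heavy-tailed degrees
lattice = nx.watts_strogatz_graph(2000, 6, 0.0, seed=1)  # homogeneous ring
print("attack rate, scale-free:", sir(hub_net))
print("attack rate, lattice:   ", sir(lattice))
```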

  1. Proceedings of the meeting on computational and experimental studies for modeling of radionuclide migration in complex aquatic ecosystems

    International Nuclear Information System (INIS)

    Matsunaga, Takeshi; Hakanson, Lars

    2010-09-01

    The Research Group for Environmental Science of JAEA held a meeting on computational and experimental studies for modeling of radionuclide migration in complex aquatic ecosystems during November 16-20 of 2009. The aim was to discuss the relevance of various computational and experimental approaches to that modeling. The meeting was attended by a Swedish researcher, Prof. Dr. Lars Hakanson of Uppsala University. It included a joint talk at the Institute for Environmental Sciences, in addition to a field and facility survey of the JNFL commercial reprocessing plant located in Rokkasho, Aomori. The meeting demonstrated that it is crucial 1) to make the model structure strictly relevant to the target objectives of a study and 2) to account for inherent fluctuations of target values in nature by means of qualitative parameterization. Moreover, it was confirmed that there can be multiple different approaches to modeling (e.g. detailed or simplified) with relevance to the objectives of a study. These considerations should inform model integration for complex aquatic ecosystems consisting of catchments, rivers, lakes and coastal oceans, which can interact with the atmosphere. This report compiles the research subjects and lectures presented at the meeting, with the associated discussions. The 10 presented papers are indexed individually. (J.P.N.)

  2. Annual Performance Assessment of Complex Fenestration Systems in Sunny Climates Using Advanced Computer Simulations

    Directory of Open Access Journals (Sweden)

    Chantal Basurto

    2015-12-01

    Full Text Available Complex Fenestration Systems (CFS) are advanced daylighting systems that are placed on the upper part of a window to improve the indoor daylight distribution within rooms. Due to their double function of daylight redirection and solar protection, they are considered a solution to mitigate the unfavorable effects of the admission of direct sunlight in buildings located in predominantly sunny climates (risk of glare and overheating). Accordingly, an adequate assessment of their performance should include an annual evaluation of the main aspects relevant to the use of daylight in such regions: the indoor illuminance distribution, thermal comfort, and visual comfort of the occupants. Such an evaluation is possible with the use of computer simulations combined with bi-directional scattering distribution function (BSDF) data of these systems. This study explores the use of available methods to assess the visible and thermal annual performance of five different CFS using advanced computer simulations. An on-site daylight monitoring campaign was carried out in a building located in a predominantly sunny climate, and the collected data were used to create and calibrate a virtual model used to carry out the simulations. The results can be employed to select the CFS that best improves the visual and thermal interior environment for the occupants.

  3. Automated validation of a computer operating system

    Science.gov (United States)

    Dervage, M. M.; Milberg, B. A.

    1970-01-01

    Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.

  4. Computational prediction of binding affinity for CYP1A2-ligand complexes using empirical free energy calculations

    DEFF Research Database (Denmark)

    Poongavanam, Vasanthanathan; Olsen, Lars; Jørgensen, Flemming Steen

    2010-01-01

    Predicting binding affinities for receptor-ligand complexes is still one of the challenging processes in computational structure-based ligand design. Many computational methods have been developed to achieve this goal, such as docking and scoring methods, the linear interaction energy (LIE) method, and methods based on statistical mechanics. In the present investigation, we started from an LIE model to predict the binding free energy of structurally diverse compounds of cytochrome P450 1A2 ligands, one of the important human metabolizing isoforms of the cytochrome P450 family. The data set includes both substrates and inhibitors. It appears that the electrostatic contribution to the binding free energy becomes negligible in this particular protein and a simple empirical model was derived, based on a training set of eight compounds. The root mean square error for the training set was 3.7 kJ/mol. Subsequent...
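
    For context, the LIE ansatz referred to above has the standard Aqvist-type form shown below; the collapsed one-term model mirrors the abstract's observation that electrostatics are negligible for CYP1A2. The coefficient symbols are generic, not the paper's fitted values:

```latex
% Standard linear interaction energy (LIE) ansatz:
\Delta G_{\mathrm{bind}}
  = \alpha\,\bigl(\langle V^{\mathrm{vdW}}\rangle_{\mathrm{bound}}
                - \langle V^{\mathrm{vdW}}\rangle_{\mathrm{free}}\bigr)
  + \beta\,\bigl(\langle V^{\mathrm{el}}\rangle_{\mathrm{bound}}
               - \langle V^{\mathrm{el}}\rangle_{\mathrm{free}}\bigr)
  + \gamma
% With the electrostatic term negligible (as the abstract notes for this
% protein), the model collapses to the simple empirical form
\Delta G_{\mathrm{bind}}
  \approx \alpha\,\Delta\langle V^{\mathrm{vdW}}\rangle + \gamma .
```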

  5. Complex Functions with GeoGebra

    Science.gov (United States)

    Breda, Ana Maria D'azevedo; Dos Santos, José Manuel Dos Santos

    2016-01-01

    Complex functions, generally feature some interesting peculiarities, seen as extensions of real functions. The visualization of complex functions properties usually requires the simultaneous visualization of two-dimensional spaces. The multiple Windows of GeoGebra, combined with its ability of algebraic computation with complex numbers, allow the…

  6. Foundations of Neuromorphic Computing

    Science.gov (United States)

    2013-05-01

    paradigms: few sensors/complex computations and many sensors/simple computation; challenges with nano-enabled neuromorphic chips. (Foundations of Neuromorphic Computing, final technical report, May 2013, covering 2009 - Sep 2012; approved for public release.)

  7. Dynamics of the edge transport barrier at plasma biasing on the CASTOR tokamak

    Czech Academy of Sciences Publication Activity Database

    Stöckel, Jan; Spolaore, M.; Peleman, P.; Brotánková, Jana; Horáček, Jan; Dejarnac, Renaud; Devynck, P.; Ďuran, Ivan; Gunn, J. P.; Hron, Martin; Kocan, M.; Martines, E.; Pánek, Radomír; Sharma, A.; Van Oost, G.

    2006-01-01

    Roč. 12, č. 6 (2006), s. 19-23 ISSN 1562-6016. [11th International Conference on Plasma Physics and Technology, Alushta, 11.9.2006-16.9.2006] Institutional research plan: CEZ:AV0Z20430508 Keywords: tokamak * plasma * transport barrier * relaxations Subject RIV: BL - Plasma and Gas Discharge Physics http://vant.kipt.kharkov.ua/TABFRAME.html

  8. Role of turbulence and electric fields in the establishment of improved confinement in tokamak plasmas

    Czech Academy of Sciences Publication Activity Database

    Van Oost, G.; Bulanin, V.V.; Donné, A.J.H.; Gusakov, E.Z.; Krämer-Flecken, A.; Krupnik, L.I.; Melnikov, A.; Peleman, P.; Razumova, K.; Stöckel, Jan; Vershkov, V.; Altukov, A.B.; Andreev, V.F.; Askinazi, L.G.; Bondarenko, I.S.; Dnestrovskij, A.Yu.; Eliseev, L.G.; Esipov, L.A.; Grashin, S.A.; Gurchenko, A.D.; Hogeweij, G.M.D.; Jachmin, S.; Khrebtov, S.M.; Kouprienko, D.V.; Lysenko, S.E.; Perfilov, S.V.; Petrov, A.V.; Popov, A.Yu.; Reiser, D.; Soldatov, S.; Stepanov, A.Yu.; Telesca, G.; Urazbaev, A.O.; Verdoolaege, G.; Zimmermann, O.

    2006-01-01

    Roč. 12, č. 6 (2006), s. 14-19 ISSN 1562-6016. [11th International Conference on Plasma Physics and Technology, Alushta, 11.9.2006-16.9.2006] Institutional research plan: CEZ:AV0Z20430508 Keywords: tokamak * plasma * improved confinement * turbulence Subject RIV: BL - Plasma and Gas Discharge Physics http://vant.kipt.kharkov.ua/TABFRAME.html

  9. Analysis and computer simulation for transient flow in complex system of liquid piping

    International Nuclear Information System (INIS)

    Mitry, A.M.

    1985-01-01

    This paper is concerned with unsteady state analysis and development of a digital computer program, FLUTRAN, that performs a simulation of transient flow behavior in a complex system of liquid piping. The program calculates pressure and flow transients in the liquid filled piping system. The analytical model is based on the method of characteristics solution to the fluid hammer continuity and momentum equations. The equations are subject to wide variety of boundary conditions to take into account the effect of hydraulic devices. Water column separation is treated as a boundary condition with known head. Experimental tests are presented that exhibit transients induced by pump failure and valve closure in the McGuire Nuclear Station Low Level Intake Cooling Water System. Numerical simulation is conducted to compare theory with test data. Analytical and test data are shown to be in good agreement and provide validation of the model
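
    The method-of-characteristics update at the heart of such codes is compact. The sketch below gives the interior-node step; the pipe data, friction handling and boundary conditions are simplified assumptions for illustration, not FLUTRAN's actual implementation:

```python
# Method-of-characteristics (MOC) interior-node step for liquid transients.
# Along C+ / C-:  H_P = C_P - B*Q_P  and  H_P = C_M + B*Q_P,  with
# B = a/(g*A) and a Darcy-Weisbach friction term R*Q|Q| (pipe data assumed).
import numpy as np

a, g, D = 1200.0, 9.81, 0.5                 # wave speed (m/s), gravity, diameter
A = np.pi * (D / 2) ** 2                    # flow area
f, dx = 0.02, 10.0                          # friction factor, reach length
B = a / (g * A)
R = f * dx / (2 * g * D * A**2)

def interior_step(H, Q):
    """Advance all interior nodes by one MOC time step (dt = dx/a)."""
    Hn, Qn = H.copy(), Q.copy()
    CP = H[:-2] + B * Q[:-2] - R * Q[:-2] * np.abs(Q[:-2])   # from node i-1
    CM = H[2:]  - B * Q[2:]  + R * Q[2:]  * np.abs(Q[2:])    # from node i+1
    Hn[1:-1] = 0.5 * (CP + CM)
    Qn[1:-1] = (CP - CM) / (2 * B)
    return Hn, Qn

# Steady initial state; a real transient is driven by boundary updates
# (pump trip, valve closure) applied to H[0], Q[0], H[-1], Q[-1].
H = np.linspace(100.0, 95.0, 11)
Q = np.full(11, 0.05)
H, Q = interior_step(H, Q)
```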

  10. Experimental and computational fluid dynamics studies of mixing of complex oral health products

    Science.gov (United States)

    Cortada-Garcia, Marti; Migliozzi, Simona; Weheliye, Weheliye Hashi; Dore, Valentina; Mazzei, Luca; Angeli, Panagiota; ThAMes Multiphase Team

    2017-11-01

    Highly viscous non-Newtonian fluids are largely used in the manufacturing of specialized oral care products. Mixing often takes place in mechanically stirred vessels where the flow fields and mixing times depend on the geometric configuration and the fluid physical properties. In this research, we study the mixing performance of complex non-Newtonian fluids using Computational Fluid Dynamics models and validate them against experimental laser-based optical techniques. To this aim, we developed a scaled-down version of an industrial mixer. As test fluids, we used mixtures of glycerol and a Carbomer gel. The viscosities of the mixtures against shear rate at different temperatures and phase ratios were measured and found to be well described by the Carreau model. The numerical results were compared against experimental measurements of velocity fields from Particle Image Velocimetry (PIV) and concentration profiles from Planar Laser Induced Fluorescence (PLIF).
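
    The Carreau fit mentioned above is a standard nonlinear regression. A hedged sketch with synthetic "rheometer" data follows; the true parameter values are invented for illustration:

```python
# Fit viscosity-vs-shear-rate data to the Carreau model:
# eta(g) = eta_inf + (eta0 - eta_inf) * (1 + (lam*g)^2)^((n-1)/2)
import numpy as np
from scipy.optimize import curve_fit

def carreau(gdot, eta0, eta_inf, lam, n):
    return eta_inf + (eta0 - eta_inf) * (1 + (lam * gdot) ** 2) ** ((n - 1) / 2)

gdot = np.logspace(-2, 3, 30)                      # shear rates, 1/s
truth = carreau(gdot, 50.0, 0.1, 2.0, 0.4)         # assumed "true" parameters
data = truth * (1 + 0.03 * np.random.default_rng(0).standard_normal(30))

popt, _ = curve_fit(carreau, gdot, data, p0=[10.0, 0.01, 1.0, 0.5])
print("eta0=%.2f  eta_inf=%.3f  lambda=%.2f  n=%.2f" % tuple(popt))
```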

  11. Experiment and computation: a combined approach to study the van der Waals complexes

    Directory of Open Access Journals (Sweden)

    Surin L.A.

    2017-01-01

    Full Text Available A review of recent results on the millimetre-wave spectroscopy of weakly bound van der Waals complexes, mostly those which contain H2 and He, is presented. In our work, we compared the experimental spectra to the theoretical bound state results, thus providing a critical test of the quality of the M–H2 and M–He potential energy surfaces (PESs which are a key issue for reliable computations of the collisional excitation and de-excitation of molecules (M = CO, NH3, H2O in the dense interstellar medium. The intermolecular interactions with He and H2 play also an important role for high resolution spectroscopy of helium or para-hydrogen clusters doped by a probe molecule (CO, HCN. Such experiments are directed on the detection of superfluid response of molecular rotation in the He and p-H2 clusters.

  12. Retrieving the aerosol complex refractive index using PyMieScatt: A Mie computational package with visualization capabilities

    Science.gov (United States)

    Sumlin, Benjamin J.; Heinson, William R.; Chakrabarty, Rajan K.

    2018-01-01

    The complex refractive index m = n + ik of a particle is an intrinsic property which cannot be directly measured; it must be inferred from its extrinsic properties such as the scattering and absorption cross-sections. Bohren and Huffman called this approach "describing the dragon from its tracks", since the inversion of Lorenz-Mie theory equations is intractable without the use of computers. This article describes PyMieScatt, an open-source module for Python that contains functionality for solving the inverse problem for complex m using extensive optical and physical properties as input, and calculating regions where valid solutions may exist within the error bounds of laboratory measurements. Additionally, the module has comprehensive capabilities for studying homogeneous and coated single spheres, as well as ensembles of homogeneous spheres with user-defined size distributions, making it a complete tool for studying the optical behavior of spherical particles.
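
    A hedged usage sketch follows. The MieQ call and its returned tuple follow the PyMieScatt documentation as recalled here and should be verified against the installed version; the grid search is a crude stand-in for the package's dedicated inverse routines, which a real analysis should prefer:

```python
# Forward Mie calculation with PyMieScatt plus a brute-force inversion over
# a grid of candidate refractive indices m = n + ik.
import numpy as np
import PyMieScatt as ps

wavelength, diameter = 532.0, 300.0        # nm, nm (illustrative values)
m_true = 1.55 + 0.02j
# MieQ is documented to return (Qext, Qsca, Qabs, g, Qpr, Qback, Qratio)
Qext_t, Qsca_t, Qabs_t = ps.MieQ(m_true, wavelength, diameter)[:3]

# Scan (n, k) and keep the candidate whose (Qsca, Qabs) best match the "measurement"
best, best_err = None, np.inf
for n in np.arange(1.30, 1.80, 0.005):
    for k in np.arange(0.0, 0.10, 0.002):
        Q = ps.MieQ(n + 1j * k, wavelength, diameter)
        err = abs(Q[1] - Qsca_t) + abs(Q[2] - Qabs_t)
        if err < best_err:
            best, best_err = n + 1j * k, err
print("recovered m ~", best)               # should land near 1.55+0.02j
```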

  13. Quantum Cybernetics and Complex Quantum Systems Science - A Quantum Connectionist Exploration

    OpenAIRE

    Gonçalves, Carlos Pedro

    2014-01-01

    Quantum cybernetics and its connections to complex quantum systems science is addressed from the perspective of complex quantum computing systems. In this way, the notion of an autonomous quantum computing system is introduced in regards to quantum artificial intelligence, and applied to quantum artificial neural networks, considered as autonomous quantum computing systems, which leads to a quantum connectionist framework within quantum cybernetics for complex quantum computing systems. Sever...

  14. Adiabatic quantum computation

    Science.gov (United States)

    Albash, Tameem; Lidar, Daniel A.

    2018-01-01

    Adiabatic quantum computing (AQC) started as an approach to solving optimization problems and has evolved into an important universal alternative to the standard circuit model of quantum computing, with deep connections to both classical and quantum complexity theory and condensed matter physics. This review gives an account of the major theoretical developments in the field, while focusing on the closed-system setting. The review is organized around a series of topics that are essential to an understanding of the underlying principles of AQC, its algorithmic accomplishments and limitations, and its scope in the more general setting of computational complexity theory. Several variants are presented of the adiabatic theorem, the cornerstone of AQC, and examples are given of explicit AQC algorithms that exhibit a quantum speedup. An overview of several proofs of the universality of AQC and related Hamiltonian quantum complexity theory is given. Considerable space is devoted to stoquastic AQC, the setting of most AQC work to date, where obstructions to success and their possible resolutions are discussed.
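
    For reference, the interpolating Hamiltonian and a commonly quoted (sufficient, not tight) form of the adiabatic condition underlying the runtime-gap tradeoff read as follows:

```latex
% Interpolating Hamiltonian used throughout AQC, with s = t/T:
H(s) = (1 - s)\,H_{B} + s\,H_{P}, \qquad s \in [0, 1]
% Common sufficient form of the adiabatic condition on the total time T:
T \;\gg\; \max_{s \in [0,1]}
   \frac{\bigl|\langle 1(s)|\,\partial_s H(s)\,|0(s)\rangle\bigr|}{\Delta(s)^{2}},
% where Delta(s) is the gap between the instantaneous ground and first
% excited states; the 1/Delta^2 scaling drives the complexity results.
```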

  15. Density functionalized [RuII(NO)(Salen)(Cl)] complex: Computational photodynamics and in vitro anticancer facets.

    Science.gov (United States)

    Mir, Jan Mohammad; Jain, N; Jaget, P S; Maurya, R C

    2017-09-01

    Photodynamic therapy (PDT) is a treatment that uses photosensitizing agents to kill cancer cells. The scientific community has been eager for decades to design an efficient PDT drug. In this context, the current report deals with the computational photodynamic behavior of a ruthenium(II) nitrosyl complex containing N,N'-salicylaldehyde-ethylenediimine (SalenH2), the synthesis and X-ray crystallography of which are already known [Ref. 38,39]. The Gaussian 09W software package was employed to carry out the density functional theory (DFT) studies. DFT calculations with Becke-3-Lee-Yang-Parr (B3LYP)/Los Alamos National Laboratory 2 Double Z (LanL2DZ) specified for the Ru atom and B3LYP/6-31G(d,p) for all other atoms were used with the effective core potential method. Both the ground and excited states of the complex were evolved. Some known photosensitizers were compared with the target complex; phthalocyanine and porphyrin derivatives were the compounds selected for this comparative study. It is suggested that the effective photoactivity is due to the presence of the ruthenium core in the model complex. In addition to the evaluation of theoretical aspects, in vitro anticancer studies against COLO-205 human cancer cells have also been carried out for the complex. Particular emphasis was laid on extrapolating DFT to depict the chemical power of the target compound to release nitric oxide. A promising visible-light-triggered nitric oxide releasing power of the compound has been inferred. In vitro antiproliferative studies of [RuCl3(PPh3)3] and [Ru(NO)(Salen)(Cl)] have revealed the model complex to be an excellent anticancer agent. From IC50 values of 40.031 mg/mL for the former and 9.74 mg/mL for the latter, it is established that the latter bears greater anticancer potential. Overall, the DFT-based structural elucidation and the efficiency of the NO, Ru and Salen co-ligands have shown promising drug delivery properties and a good candidacy for both chemotherapy as well as

  16. The BioIntelligence Framework: a new computational platform for biomedical knowledge computing.

    Science.gov (United States)

    Farley, Toni; Kiefer, Jeff; Lee, Preston; Von Hoff, Daniel; Trent, Jeffrey M; Colbourn, Charles; Mousses, Spyro

    2013-01-01

    Breakthroughs in molecular profiling technologies are enabling a new data-intensive approach to biomedical research, with the potential to revolutionize how we study, manage, and treat complex diseases. The next great challenge for clinical applications of these innovations will be to create scalable computational solutions for intelligently linking complex biomedical patient data to clinically actionable knowledge. Traditional database management systems (DBMS) are not well suited to representing complex syntactic and semantic relationships in unstructured biomedical information, introducing barriers to realizing such solutions. We propose a scalable computational framework for addressing this need, which leverages a hypergraph-based data model and query language that may be better suited for representing complex multi-lateral, multi-scalar, and multi-dimensional relationships. We also discuss how this framework can be used to create rapid learning knowledge base systems to intelligently capture and relate complex patient data to biomedical knowledge in order to automate the recovery of clinically actionable information.
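
    A toy version of the hypergraph idea makes the contrast with row-oriented storage concrete: one hyperedge can bind an arbitrary set of entities in a single relationship. All entity and edge names below are hypothetical:

```python
# Toy hypergraph data model: each hyperedge links an arbitrary set of
# entities (patient, variant, drug, phenotype) in one relationship, which
# is awkward to express as binary edges or flat relational rows.
from collections import defaultdict

class Hypergraph:
    def __init__(self):
        self.edges = {}                       # edge id -> (vertex set, attributes)
        self.incidence = defaultdict(set)     # vertex  -> edge ids

    def add_edge(self, eid, vertices, **attrs):
        self.edges[eid] = (frozenset(vertices), attrs)
        for v in vertices:
            self.incidence[v].add(eid)

    def neighbors(self, v):
        """All entities co-occurring with v in any hyperedge."""
        out = set()
        for eid in self.incidence[v]:
            out |= self.edges[eid][0]
        return out - {v}

hg = Hypergraph()
hg.add_edge("obs1", {"patient:42", "variant:BRAF-V600E", "phenotype:melanoma"},
            source="sequencing")
hg.add_edge("obs2", {"patient:42", "drug:vemurafenib", "phenotype:response"},
            source="clinical-note")
print(hg.neighbors("patient:42"))
```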

  17. Designing with computational intelligence

    CERN Document Server

    Lopes, Heitor; Mourelle, Luiza

    2017-01-01

    This book discusses a number of real-world applications of computational intelligence approaches. Using various examples, it demonstrates that computational intelligence has become a consolidated methodology for automatically creating new competitive solutions to complex real-world problems. It also presents a concise and efficient synthesis of different systems using computationally intelligent techniques.

  18. Usage of super high speed computer for clarification of complex phenomena

    International Nuclear Information System (INIS)

    Sekiguchi, Tomotsugu; Sato, Mitsuhisa; Nakata, Hideki; Tatebe, Osami; Takagi, Hiromitsu

    1999-01-01

    This study aims at constructing an efficient application environment for super-high-speed computers, one that supports parallel distributed systems and is easily portable across different computer systems and scales, by conducting research and development on the super-high-speed-computer application technology required for the elucidation of complicated phenomena in the nuclear power field by computational science methods. In order to realize such an environment, the Electrotechnical Laboratory has conducted development of Ninf, a network numerical information library. The Ninf system can supply a global network infrastructure for worldwide high-performance computing over widely distributed networks (G.K.)

  19. Utility Computing: Reality and Beyond

    Science.gov (United States)

    Ivanov, Ivan I.

    Utility Computing is not a new concept. It involves organizing and providing a wide range of computing-related services as public utilities. Much like water, gas, electricity and telecommunications, the concept of computing as a public utility was announced in 1955. Utility Computing remained a concept for nearly 50 years. Now some models and forms of Utility Computing are emerging, such as storage and server virtualization, grid computing, and automated provisioning. Recent trends in Utility Computing as a complex technology involve business procedures that could profoundly transform the nature of companies' IT services, organizational IT strategies and technology infrastructure, and business models. In the ultimate Utility Computing models, organizations will be able to acquire as many IT services as they need, whenever and wherever they need them. Based on networked businesses and new secure online applications, Utility Computing would facilitate "agility-integration" of IT resources and services within and between virtual companies. With the application of Utility Computing there could be concealment of the complexity of IT, reduction of operational expenses, and conversion of IT costs to variable 'on-demand' services. How far should technology, business and society go to adopt Utility Computing forms, modes and models?

  20. Computing Hypercrossed Complex Pairings in Digital Images

    Directory of Open Access Journals (Sweden)

    Simge Öztunç

    2013-01-01

    We consider an additive group structure in digital images and introduce the commutator in digital images. Then we calculate the hypercrossed complex pairings which generate a normal subgroup in dimension 2 and in dimension 3, using 8-adjacency and 26-adjacency.

  1. THE DESIGNING OF ELECTRONIC TEACHING-METHODS COMPLEX «GRAPHICS» FOR REALIZATION OF COMPUTER-BASED LEARNING OF ENGINEERING-GRAPHIC DISCIPLINES

    Directory of Open Access Journals (Sweden)

    Іван Нищак

    2015-12-01

    The article presents the theoretical foundations of designing the author's electronic educational-methodical complex (EEMC) «Graphics», intended to implement the engineering-graphic preparation of future teachers of technology under computer-based learning. The process of designing the electronic educational-methodical complex «Graphics» includes the following successive stages: 1) identification of didactic goals and objectives; 2) design of the EEMC pattern; 3) selection of contents and systematization of educational material; 4) program-technical implementation of the EEMC; 5) interface design; 6) expert assessment of EEMC quality; 7) testing of the EEMC; 8) adjustment of the software; 9) development of guidelines and instructions for the use of the EEMC.

  2. Communication complexity reduction from globally uncorrelated states

    International Nuclear Information System (INIS)

    Wieśniak, Marcin

    2015-01-01

    Bell inequality violating entangled states are the workhorse for many potential quantum information processing applications, including secret sharing, cryptographic key distribution and communication complexity reduction in distributed computing. Here we explicitly demonstrate the power of certain multi-qubit states to improve the efficiency of partners in the joint computation of some multi-qubit function, despite the fact that there may be no correlations between all distributed particles. It is important to stress that the class of functions that can be computed more efficiently is widened compared with the standard Bell inequalities. - Highlights: • We expand the set of functions which can be computed more efficiently with quantum states. • We describe communication complexity reduction protocols based not only on full correlations. • We explicitly show an instance where a globally uncorrelated state reduces communication complexity.

  3. Communication complexity reduction from globally uncorrelated states

    Energy Technology Data Exchange (ETDEWEB)

    Wieśniak, Marcin, E-mail: marcin.wiesniak@univie.ac.at

    2015-04-03

    Bell inequality violating entangled states are the workhorse for many potential quantum information processing applications, including secret sharing, cryptographic key distribution and communication complexity reduction in distributed computing. Here we explicitly demonstrate the power of certain multi-qubit states to improve the efficiency of partners in the joint computation of some multi-qubit function, despite the fact that there may be no correlations between all distributed particles. It is important to stress that the class of functions that can be computed more efficiently is widened compared with the standard Bell inequalities. - Highlights: • We expand the set of functions which can be computed more efficiently with quantum states. • We describe communication complexity reduction protocols based not only on full correlations. • We explicitly show an instance where a globally uncorrelated state reduces communication complexity.

  4. [Animal experimentation, computer simulation and surgical research].

    Science.gov (United States)

    Carpentier, Alain

    2009-11-01

    We live in a digital world. In medicine, computers are providing new tools for data collection, imaging, and treatment. During research and development of complex technologies and devices such as artificial hearts, computer simulation can provide more reliable information than experimentation on large animals. In these specific settings, animal experimentation should serve more to validate computer models of complex devices than to demonstrate their reliability.

  5. Organic Computing

    CERN Document Server

    Würtz, Rolf P

    2008-01-01

    Organic Computing is a research field emerging around the conviction that problems of organization in complex systems in computer science, telecommunications, neurobiology, molecular biology, ethology, and possibly even sociology can be tackled scientifically in a unified way. From the computer science point of view, the apparent ease in which living systems solve computationally difficult problems makes it inevitable to adopt strategies observed in nature for creating information processing machinery. In this book, the major ideas behind Organic Computing are delineated, together with a sparse sample of computational projects undertaken in this new field. Biological metaphors include evolution, neural networks, gene-regulatory networks, networks of brain modules, hormone system, insect swarms, and ant colonies. Applications are as diverse as system design, optimization, artificial growth, task allocation, clustering, routing, face recognition, and sign language understanding.

  6. Complexity Plots

    KAUST Repository

    Thiyagalingam, Jeyarajan

    2013-06-01

    In this paper, we present a novel visualization technique for assisting the observation and analysis of algorithmic complexity. In comparison with conventional line graphs, this new technique is not sensitive to the units of measurement, allowing multivariate data series of different physical qualities (e.g., time, space and energy) to be juxtaposed conveniently and consistently. It supports multivariate visualization as well as uncertainty visualization. It enables users to focus on algorithm categorization by complexity classes, while reducing the visual impact caused by constants and algorithmic components that are insignificant to complexity analysis. It provides an effective means for observing the algorithmic complexity of programs with a mixture of algorithms and black-box software through visualization. Through two case studies, we demonstrate the effectiveness of complexity plots in complexity analysis in research, education and application.

  7. Complexity hints for economic policy

    CERN Document Server

    Salzano, Massimo

    2007-01-01

    This volume extends the complexity approach to economics. The complexity approach is not a completely new way of doing economics, nor a replacement for existing economics, but rather the integration of some new analytic and computational techniques into economists' bag of tools. It provides some alternative pattern generators, which can supplement existing approaches by providing a different way of finding patterns than the traditional scientific approach. From these, new kinds of policy hints can be obtained. The reason why the complexity approach is taking hold now in economics is that computing technology has advanced. This advance allows consideration of analytical systems that could not previously be considered by economists. Consideration of these systems suggested that the results of the "control-based" models might not extend easily to more complicated systems, and that we now have a method (piggybacking computer-assisted analysis onto analytic methods) to start gen...

  8. Fast algorithm for computing complex number-theoretic transforms

    Science.gov (United States)

    Reed, I. S.; Liu, K. Y.; Truong, T. K.

    1977-01-01

    A high-radix FFT algorithm for computing transforms over GF(q²), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
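
    As a toy illustration, the sketch below computes a circular convolution exactly with a naive O(N²) number-theoretic transform modulo a small Mersenne prime (exposition only: it works over GF(q) rather than the GF(q²) of the paper, and lacks the fast high-radix structure that is the paper's contribution):

```python
Q = 2**5 - 1   # Mersenne prime q = 31 (toy size)
N, W = 5, 2    # 2 has multiplicative order 5 mod 31, so it serves as an N-th root of unity

def ntt(x, w):
    # direct O(N^2) transform: X_k = sum_n x_n * w^(n*k) mod Q
    return [sum(xn * pow(w, n * k, Q) for n, xn in enumerate(x)) % Q for k in range(N)]

def intt(X):
    inv_n, inv_w = pow(N, -1, Q), pow(W, -1, Q)   # modular inverses (Python 3.8+)
    return [(inv_n * v) % Q for v in ntt(X, inv_w)]

def circular_convolution(x, y):
    return [sum(x[m] * y[(n - m) % N] for m in range(N)) % Q for n in range(N)]

x, y = [1, 2, 3, 0, 0], [4, 5, 0, 0, 0]
via_ntt = intt([a * b % Q for a, b in zip(ntt(x, W), ntt(y, W))])
assert via_ntt == circular_convolution(x, y)  # exact: no floating-point roundoff
print(via_ntt)                                # [4, 13, 22, 15, 0]
```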

  9. A Computational Fluid Dynamics Algorithm on a Massively Parallel Computer

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. We investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. We find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as today's most powerful conventional supercomputers.

  10. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D L

    2009-05-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems

  11. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    International Nuclear Information System (INIS)

    Brown, D.L.

    2009-01-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems

  12. Computer Program for Calculation of Complex Chemical Equilibrium Compositions, Rocket Performance, Incident and Reflected Shocks, and Chapman-Jouguet Detonations. Interim Revision, March 1976

    Science.gov (United States)

    Gordon, S.; Mcbride, B. J.

    1976-01-01

    A detailed description of the equations and computer program for computations involving chemical equilibria in complex systems is given. A free-energy minimization technique is used. The program permits calculations such as (1) chemical equilibrium for assigned thermodynamic states (T,P), (H,P), (S,P), (T,V), (U,V), or (S,V), (2) theoretical rocket performance for both equilibrium and frozen compositions during expansion, (3) incident and reflected shock properties, and (4) Chapman-Jouguet detonation properties. The program considers condensed species as well as gaseous species.
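
    A minimal sketch of the free-energy minimization technique itself, assuming a toy two-species system (N2O4 <-> 2 NO2) with placeholder dimensionless chemical potentials rather than the program's thermodynamic data or its actual algorithm:

```python
import numpy as np
from scipy.optimize import minimize

mu0 = np.array([0.0, 1.0])  # mu0_i/RT for [N2O4, NO2]; illustrative values only

def gibbs(n):
    """Dimensionless Gibbs energy of an ideal-gas mixture at P = 1 bar."""
    n = np.maximum(n, 1e-12)            # keep the logarithm well defined
    return float(np.dot(n, mu0 + np.log(n / n.sum())))

# Element balance: a feed of 1 mol N2O4 carries 2 mol of N atoms in total.
atoms_N = np.array([2.0, 1.0])
cons = {"type": "eq", "fun": lambda n: atoms_N @ n - 2.0}

res = minimize(gibbs, x0=[0.5, 1.0], method="SLSQP",
               bounds=[(0.0, None)] * 2, constraints=[cons])
print(res.x)  # equilibrium mole numbers of [N2O4, NO2]
```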

  13. Safety Metrics for Human-Computer Controlled Systems

    Science.gov (United States)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  14. Software Alchemy: Turning Complex Statistical Computations into Embarrassingly-Parallel Ones

    Directory of Open Access Journals (Sweden)

    Norman Matloff

    2016-07-01

    The growth in the use of computationally intensive statistical procedures, especially with big data, has necessitated the use of parallel computation on diverse platforms such as multicore, GPUs, clusters and clouds. However, slowdown due to interprocess communication costs typically limits such methods to "embarrassingly parallel" (EP) algorithms, especially on non-shared-memory platforms. This paper develops a broadly applicable method for converting many non-EP algorithms into statistically equivalent EP ones. The method is shown to yield excellent levels of speedup for a variety of statistical computations. It also overcomes certain problems of memory limitations.
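
    The core of the method can be sketched in a few lines: partition the data into chunks, run the (non-EP) statistic independently on each chunk, and average the per-chunk estimates; the chunks never communicate. An illustrative Python sketch under those assumptions (not the author's implementation):

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def estimator(chunk):
    """Any asymptotically normal statistic; the median is a classic non-EP example."""
    return float(np.median(chunk))

def chunk_averaged(data, n_chunks=4):
    """Run the estimator on each chunk in parallel, then average the results."""
    chunks = np.array_split(data, n_chunks)
    with ProcessPoolExecutor(max_workers=n_chunks) as pool:
        parts = list(pool.map(estimator, chunks))
    return float(np.mean(parts))

if __name__ == "__main__":
    data = np.random.default_rng(0).exponential(size=1_000_000)
    print(chunk_averaged(data), float(np.median(data)))  # close for large samples
```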

  15. Materials and Life Science Experimental Facility at the Japan Proton Accelerator Research Complex III: Neutron Devices and Computational and Sample Environments

    Directory of Open Access Journals (Sweden)

    Kaoru Sakasai

    2017-08-01

    Neutron devices such as neutron detectors, optical devices including supermirror devices and 3He neutron spin filters, and choppers have been successfully developed and installed at the Materials and Life Science Experimental Facility (MLF) of the Japan Proton Accelerator Research Complex (J-PARC), Tokai, Japan. Four software components of the MLF computational environment (instrument control, data acquisition, data analysis, and a database) have been developed and deployed at MLF. MLF also provides a wide variety of sample environment options, including high and low temperatures, high magnetic fields, and high pressures. This paper describes the current status of the neutron devices and the computational and sample environments at MLF.

  16. Complex adaptative systems and computational simulation in Archaeology

    Directory of Open Access Journals (Sweden)

    Salvador Pardo-Gordó

    2017-07-01

    Traditionally the concept of ‘complexity’ is used as a synonym for ‘complex society’, i.e., human groups with characteristics such as urbanism, inequalities, and hierarchy. The introduction of Nonlinear Systems and Complex Adaptive Systems to the discipline of archaeology has nuanced this concept. This theoretical turn has led to the rise of modelling as a method of analysis of historical processes. This work has a twofold objective: to present the theoretical current characterized by generative thinking in archaeology and to present a concrete application of agent-based modelling to an archaeological problem: the dispersal of the first ceramic production in the western Mediterranean.

  17. Integrated modeling tool for performance engineering of complex computer systems

    Science.gov (United States)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  18. Computed tomography for radiographers

    International Nuclear Information System (INIS)

    Brooker, M.

    1986-01-01

    Computed tomography is regarded by many as a complicated union of sophisticated x-ray equipment and computer technology. This book overcomes these complexities. The rigid technicalities of the machinery and the clinical aspects of computed tomography are discussed, including the preparation of patients, both physically and mentally, for scanning. Furthermore, the author also explains how to set up and run a computed tomography department, including advice on how the room should be designed.

  19. Readout electronic for multichannel detectors

    CERN Document Server

    Kulibaba, V I; Naumov, S V

    2001-01-01

    Readout electronics based on the 128-channel chip 'Viking' (IDE AS inc., Norway) is considered. The chip 'Viking' integrates 128 low noise charge-sensitive preamplifiers with tunable CR-(RC)² shapers, analog memory and multiplexed readout to one output. All modules of readout electronics were designed and produced in KIPT taking into account the published recommendations of IDE AS inc.

  20. Readout electronic for multichannel detectors

    International Nuclear Information System (INIS)

    Kulibaba, V.I.; Maslov, N.I.; Naumov, S.V.

    2001-01-01

    Readout electronics based on the 128-channel chip 'Viking' (IDE AS inc., Norway) is considered. The chip 'Viking' integrates 128 low noise charge-sensitive preamplifiers with tunable CR-(RC)² shapers, analog memory and multiplexed readout to one output. All modules of readout electronics were designed and produced in KIPT taking into account the published recommendations of IDE AS inc.

  1. Real-life applications with membrane computing

    CERN Document Server

    Zhang, Gexiang; Gheorghe, Marian

    2017-01-01

    This book thoroughly investigates the underlying theoretical basis of membrane computing models, and reveals their latest applications. To date there have been no illustrative case studies or complex real-life applications that capitalize on the full potential of the sophisticated membrane systems computational apparatus; this book remedies that gap. By studying various complex applications – including engineering optimization, power systems fault diagnosis, mobile robot controller design, and complex biological systems involving data modeling and process interactions – the book also extends the capabilities of membrane systems models with features such as formal verification techniques, evolutionary approaches, and fuzzy reasoning methods. As such, the book offers a comprehensive and up-to-date guide for all researchers, PhDs and undergraduate students in the fields of computer science, engineering and the bio-sciences who are interested in the applications of natural computing models.

  2. Fourth International Conference on Complex Systems

    CERN Document Server

    Minai, Ali A; Unifying Themes in Complex Systems IV

    2008-01-01

    In June of 2002, over 500 professors, students and researchers met in Boston, Massachusetts for the Fourth International Conference on Complex Systems. The attendees represented a remarkably diverse collection of fields: biology, ecology, physics, engineering, computer science, economics, psychology and sociology. The goal of the conference was to encourage cross-fertilization between the many disciplines represented and to deepen understanding of the properties common to all complex systems. This volume contains 43 papers selected from the more than 200 presented at the conference. Topics include: cellular automata, neurology, evolution, computer science, network dynamics, and urban planning. About NECSI: For over 10 years, The New England Complex Systems Institute (NECSI) has been instrumental in the development of complex systems science and its applications. NECSI conducts research, education, knowledge dissemination, and community development around the world for the promotion of the study of complex sys...

  3. Third International Conference on Complex Systems

    CERN Document Server

    Minai, Ali A; Unifying Themes in Complex Systems

    2006-01-01

    In recent years, scientists have applied the principles of complex systems science to increasingly diverse fields. The results have been nothing short of remarkable: their novel approaches have provided answers to long-standing questions in biology, ecology, physics, engineering, computer science, economics, psychology and sociology. The Third International Conference on Complex Systems attracted over 400 researchers from around the world. The conference aimed to encourage cross-fertilization between the many disciplines represented and to deepen our understanding of the properties common to all complex systems. This volume contains over 35 papers selected from those presented at the conference on topics including: self-organization in biology, ecological systems, language, economic modeling, ecological systems, artificial life, robotics, and complexity and art. ALI MINAI is an Affiliate of the New England Complex Systems Institute and an Associate Professor in the Department of Electrical and Computer Engine...

  4. Complexity and Dynamical Depth

    Directory of Open Access Journals (Sweden)

    Terrence Deacon

    2014-07-01

    We argue that a critical difference distinguishing machines from organisms, and computers from brains, is not complexity in a structural sense, but a difference in dynamical organization that is not well accounted for by current complexity measures. We propose a measure of the complexity of a system that is largely orthogonal to computational, information-theoretic, or thermodynamic conceptions of structural complexity. What we call a system’s dynamical depth is a separate dimension of system complexity that measures the degree to which it exhibits discrete levels of nonlinear dynamical organization in which successive levels are distinguished by local entropy reduction and constraint generation. A system with greater dynamical depth than another consists of a greater number of such nested dynamical levels. Thus, a mechanical or linear thermodynamic system has less dynamical depth than an inorganic self-organized system, which has less dynamical depth than a living system. Including an assessment of dynamical depth can provide a more precise and systematic account of the fundamental difference between inorganic systems (low dynamical depth) and living systems (high dynamical depth), irrespective of the number of their parts and the causal relations between them.

  5. ZIVIS: A City Computing Platform Based on Volunteer Computing

    International Nuclear Information System (INIS)

    Antoli, B.; Castejon, F.; Giner, A.; Losilla, G.; Reynolds, J. M.; Rivero, A.; Sangiao, S.; Serrano, F.; Tarancon, A.; Valles, R.; Velasco, J. L.

    2007-01-01

    Volunteer computing has emerged as a new form of distributed computing. Unlike other computing paradigms such as Grids, which tend to be based on complex architectures, volunteer computing has demonstrated a great ability to integrate dispersed, heterogeneous computing resources with ease. This article presents ZIVIS, a project which aims to deploy a city-wide computing platform in Zaragoza (Spain). ZIVIS is based on BOINC (Berkeley Open Infrastructure for Network Computing), a popular open source framework to deploy volunteer and desktop grid computing systems. A scientific code which simulates the trajectories of particles moving inside a stellarator fusion device has been chosen as the pilot application of the project. In this paper we describe the approach followed to port the code to the BOINC framework, as well as some novel techniques, based on standard Grid protocols, that we have used to access the output data present in the BOINC server from a remote visualizer. (Author)

  6. Software For Computing Selected Functions

    Science.gov (United States)

    Grant, David C.

    1992-01-01

    Technical memorandum presents collection of software packages in Ada implementing mathematical functions used in science and engineering. Provides programmer with function support in Pascal and FORTRAN, plus support for extended-precision arithmetic and complex arithmetic. Valuable for testing new computers, writing computer code, or developing new computer integrated circuits.

  7. 8th Conference on Complex Networks

    CERN Document Server

    Menezes, Ronaldo; Sinatra, Roberta; Zlatic, Vinko

    2017-01-01

    This book collects the works presented at the 8th International Conference on Complex Networks (CompleNet) 2017 in Dubrovnik, Croatia, on March 21-24, 2017. CompleNet aims at bringing together researchers and practitioners working in areas related to complex networks. The past two decades have witnessed an exponential increase in the number of publications within this field. From biological systems to computer science, from economic to social systems, complex networks are becoming pervasive in many fields of science. It is this interdisciplinary nature of complex networks that CompleNet aims at addressing. The last decades have seen the emergence of complex networks as the language with which a wide range of complex phenomena in fields as diverse as physics, computer science, and medicine (to name a few) can be properly described and understood. This book provides a view of the state-of-the-art in this dynamic field and covers topics such as network controllability, social structure, online behavior, recommend...

  8. On synchronous parallel computations with independent probabilistic choice

    International Nuclear Information System (INIS)

    Reif, J.H.

    1984-01-01

    This paper introduces probabilistic choice to synchronous parallel machine models, in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. The authors characterize the computational complexity of time-, space-, and processor-bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. They show that parallelism uniformly speeds up time-bounded probabilistic sequential RAM computations by nearly a quadratic factor. They also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity.

  9. 1989 lectures in complex systems

    International Nuclear Information System (INIS)

    Jen, E.

    1990-01-01

    This report contains papers on the following topics: Lectures on a Theory of Computation and Complexity over the Reals; Algorithmic Information Content, Church-Turing Thesis, Physical Entropy, and Maxwell's Demon; Physical Measures of Complexity; An Introduction to Chaos and Prediction; Hamiltonian Chaos in Nonlinear Polarized Optical Beam; Chemical Oscillators and Nonlinear Chemical Dynamics; Isotropic Navier-Stokes Turbulence. I. Qualitative Features and Basic Equations; Isotropic Navier-Stokes Turbulence. II. Statistical Approximation Methods; Lattice Gases; Data-Parallel Computation and the Connection Machine; Preimages and Forecasting for Cellular Automata; Lattice-Gas Models for Multiphase Flows and Magnetohydrodynamics; Probabilistic Cellular Automata: Some Statistical Mechanical Considerations; Complexity Due to Disorder and Frustration; Self-Organization by Simulated Evolution; Theoretical Immunology; Morphogenesis by Cell Intercalation; and Theoretical Physics Meets Experimental Neurobiology

  10. Application of Computer Technologies in Building Design by Example of Original Objects of Increased Complexity

    Science.gov (United States)

    Vasilieva, V. N.

    2017-11-01

    The article deals with the solution of problems in AutoCAD offered in the “Computer graphics” section of the All-Russian student Olympiads, which are not typical for students of construction specialties. The students are given the opportunity to study an algorithm for solving original tasks of high complexity. The article shows how an unknown parameter underlying a construction can be determined using a parametric drawing with geometric constraints and dimensional dependencies. To optimize the mark-up operation, the use of the command for projecting points and lines of different types onto bodies and surfaces in different directions is shown. For the construction of a spring with a varying pitch of turns, the paper describes the creation of a block from a part of the helix and its scaling, when inserted into a model, with unequal coefficients along the axes. The advantage of the NURBS surface and the application of the “body-surface-surface-NURBS-body” conversion are shown to extend the capabilities of both solid and surface modeling. The article introduces construction students to a method of constructing complex models in AutoCAD that are not similar to typical training assignments.

  11. Distributed simulation of large computer systems

    International Nuclear Information System (INIS)

    Marzolla, M.

    2001-01-01

    Sequential simulation of large complex physical systems is often regarded as a computationally expensive task. In order to speed up complex discrete-event simulations, the paradigm of Parallel and Distributed Discrete Event Simulation (PDES) has been studied since the late 1970s. The authors analyze the applicability of PDES to the modeling and analysis of large computer systems; such systems are increasingly common in the area of High Energy and Nuclear Physics, because many modern experiments make use of large 'compute farms'. Some feasibility tests have been performed on a prototype distributed simulator.

  12. Solving computationally expensive engineering problems

    CERN Document Server

    Leifsson, Leifur; Yang, Xin-She

    2014-01-01

    Computational complexity is a serious bottleneck for the design process in virtually any engineering area. While migration from prototyping and experimental-based design validation to verification using computer simulation models is inevitable and has a number of advantages, high computational costs of accurate, high-fidelity simulations can be a major issue that slows down the development of computer-aided design methodologies, particularly those exploiting automated design improvement procedures, e.g., numerical optimization. The continuous increase of available computational resources does not always translate into shortening of the design cycle because of the growing demand for higher accuracy and necessity to simulate larger and more complex systems. Accurate simulation of a single design of a given system may be as long as several hours, days or even weeks, which often makes design automation using conventional methods impractical or even prohibitive. Additional problems include numerical noise often pr...

  13. Computers in writing instruction

    NARCIS (Netherlands)

    Schwartz, Helen J.; van der Geest, Thea; Smit-Kreuzen, Marlies

    1992-01-01

    For computers to be useful in writing instruction, innovations should be valuable for students and feasible for teachers to implement. Research findings yield contradictory results in measuring the effects of different uses of computers in writing, in part because of the methodological complexity of

  14. Computer software.

    Science.gov (United States)

    Rosenthal, L E

    1986-10-01

    Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of type, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.

  15. Cleft lip and palate subjects prevalence of abnormal stylohyoid complex and tonsilloliths on cone beam computed tomography.

    Science.gov (United States)

    Cazas-Duran, Eymi Valery; Fischer Rubira-Bullen, Izabel Regina; Pagin, Otávio; Stuchi Centurion-Pagin, Bruna

    Tonsilloliths and an abnormal stylohyoid complex may produce symptoms similar to those of conditions of different aetiology. Individuals with cleft lip and palate describe similar symptoms because of the anatomical implications that are peculiar to this anomaly. The aim of this study was to determine the prevalence of abnormal stylohyoid complex and tonsilloliths on cone beam computed tomography in individuals with cleft lip and palate. According to the inclusion and exclusion criteria, 66 CT scans out of 2,794 were analysed on i-Cat® Vision software, with an intra-examiner Kappa index of 0.8. The total prevalence of incomplete ossification of the stylohyoid complex in individuals with cleft lip and palate was 66.6%; the prevalence was 75% in females and 61.9% in males. The total prevalence of tonsilloliths was 7.5%. It is important to note calcification of the stylohyoid complex and tonsilloliths in the radiological report, owing to their anatomical proximity and symptomatology similar to other orofacial impairments in individuals with cleft lip and palate, with a focus on females with oral clefts and on patients with incisive trans-foramen and incisive post-foramen clefts, because these are more prevalent. Greater knowledge of the anatomical morphometry of individuals with cleft lip and palate greatly contributes towards the selection of clinical approaches and the quality of life of these patients, since cleft lip and palate is one of the most common anomalies.

  16. Computing with Mathematica

    CERN Document Server

    Hoft, Margret H

    2002-01-01

    Computing with Mathematica, 2nd edition is engaging and interactive. It is designed to teach readers how to use Mathematica efficiently for solving problems arising in fields such as mathematics, computer science, physics, and engineering. The text moves from simple to complex, often following a specific example on a number of different levels. This gradual increase in complexity allows readers to steadily build their competence without being overwhelmed. The 2nd edition of this acclaimed book features: * An enclosed CD for Mac and Windows that contains the entire text as a collection of Mathematica notebooks * Substantive real world examples * Challenging exercises, moving from simple to complex * A collection of interactive projects from a variety of applications "I really think this is an almost perfect text." -Stephen Brick, University of South Alabama

  17. A Proposal for Cardiac Arrhythmia Classification using Complexity Measures

    Directory of Open Access Journals (Sweden)

    AROTARITEI, D.

    2017-08-01

    Cardiovascular diseases are one of the major problems of humanity, and therefore one of their components, arrhythmia detection and classification, has drawn increased attention worldwide. The presence of randomness in discrete time series, like those arising in electrophysiology, is firmly connected with computational complexity measures. This connection can be used, for instance, in the analysis of RR-intervals of the electrocardiographic (ECG) signal, coded as a binary string, to detect and classify arrhythmia. Our approach uses three algorithms (Lempel-Ziv, Sample Entropy and T-Code) to compute the information complexity, and a classification tree to detect 13 types of arrhythmia, with encouraging results. To reduce the computational effort required for the complexity calculation, a cloud computing solution with executable code deployment is also proposed.
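
    Of the three complexity measures, Lempel-Ziv is the simplest to sketch: parse the symbol string into successive phrases, each extended until it is no longer a substring of what precedes it, and count the phrases. An illustrative version with made-up RR intervals (not the authors' code):

```python
def lempel_ziv_complexity(s):
    """LZ76 phrase count of a symbol string; higher means less regular."""
    i, count, n = 0, 0, len(s)
    while i < n:
        length = 1
        # grow the phrase while it still occurs in everything seen before its end
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count

rr = [800, 810, 790, 805, 800, 820, 815, 790]  # RR intervals in ms (made-up)
bits = "".join("1" if b > a else "0" for a, b in zip(rr, rr[1:]))
print(bits, lempel_ziv_complexity(bits))
```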

  18. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparison with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≅ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting linear complexity; therefore, the linear complexity is generally given as an estimate. In contrast, because the linearization method calculates from the algorithm of the PRNG itself, it can determine the lower bound of the linear complexity.
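
    For reference, the sequence-based O(N²) method the text contrasts with: a compact Berlekamp-Massey routine over GF(2) in its standard textbook form (not code from the paper), returning the linear complexity of an observed bit sequence:

```python
def berlekamp_massey(bits):
    """Linear complexity L of a binary sequence over GF(2), in O(N^2) time."""
    n = len(bits)
    c, b = [0] * n, [0] * n    # connection polynomials C(x), B(x)
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between the sequence and the current LFSR's prediction
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            for j in range(n - (i - m)):   # C(x) += x^(i-m) * B(x)
                c[j + i - m] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# m-sequence from x^3 + x + 1 (period 7) has linear complexity 3
print(berlekamp_massey([1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1]))  # -> 3
```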

  19. Computational modelling of oxygenation processes in enzymes and biomimetic model complexes

    OpenAIRE

    de Visser, Sam P.; Quesne, Matthew G.; Martin, Bodo; Comba, Peter; Ryde, Ulf

    2014-01-01

    With computational resources becoming more efficient and more powerful and at the same time cheaper, computational methods have become more and more popular for studies on biochemical and biomimetic systems. Although large efforts from the scientific community have gone into exploring the possibilities of computational methods on large biochemical systems, such studies are not without pitfalls and often cannot be routinely done but require expert execution. In this review we summarize and hig...

  20. Zonal methods and computational fluid dynamics

    International Nuclear Information System (INIS)

    Atta, E.H.

    1985-01-01

    Recent advances in developing numerical algorithms for solving fluid flow problems, and the continuing improvement in the speed and storage of large-scale computers, have made it feasible to compute the flow field about complex and realistic configurations. Current solution methods involve the use of a hierarchy of mathematical models ranging from the linearized potential equation to the Navier-Stokes equations. Because of the increasing complexity of both the geometries and flowfields encountered in practical fluid flow simulation, there is a growing emphasis in computational fluid dynamics on the use of zonal methods. A zonal method is one that subdivides the total flow region into interconnected smaller regions or zones. The flow solutions in these zones are then patched together to establish the global flow field solution. Zonal methods are primarily used either to limit the complexity of the governing flow equations to a localized region or to alleviate the grid generation problems about geometrically complex and multicomponent configurations. This paper surveys the application of zonal methods for solving the flow field about two- and three-dimensional configurations. Various factors affecting their accuracy and ease of implementation are also discussed. From the presented review it is concluded that zonal methods promise to be very effective for computing complex flowfields and configurations. Currently there are increasing efforts to improve their efficiency, versatility, and accuracy.
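
    The zone-patching idea can be shown with a deliberately tiny example: a 1D Laplace problem split into two overlapping zones whose solutions are recomputed alternately from each other's interface values until they agree (a toy alternating-Schwarz sketch, not a flow solver):

```python
import numpy as np

def solve_zone(u_left, u_right, n):
    """Exact 1D Laplace (u'' = 0) solve on one zone, given its boundary values."""
    return np.linspace(u_left, u_right, n)

x = np.linspace(0.0, 1.0, 101)   # global grid on [0, 1], with u(0) = 0, u(1) = 1
u = np.zeros_like(x)
u[-1] = 1.0
a, b = 60, 40                    # zone 1 = [0, x[a]], zone 2 = [x[b], 1]; they overlap
for _ in range(20):              # patch until the interface values settle
    u[:a + 1] = solve_zone(0.0, u[a], a + 1)
    u[b:] = solve_zone(u[b], 1.0, len(x) - b)
print(np.max(np.abs(u - x)))     # converges to the global solution u(x) = x
```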

  1. Sierra toolkit computational mesh conceptual model

    International Nuclear Information System (INIS)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-01-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  2. Emergent computation a festschrift for Selim G. Akl

    CERN Document Server

    2017-01-01

    This book is dedicated to Professor Selim G. Akl to honour his groundbreaking research achievements in computer science over four decades. The book is an intellectually stimulating excursion into emergent computing paradigms, architectures and implementations. World top experts in computer science, engineering and mathematics overview exciting and intriguing topics of musical rhythms generation algorithms, analyse the computational power of random walks, dispelling a myth of computational universality, computability and complexity at the microscopic level of synchronous computation, descriptional complexity of error detection, quantum cryptography, context-free parallel communicating grammar systems, fault tolerance of hypercubes, finite automata theory of bulk-synchronous parallel computing, dealing with silent data corruptions in high-performance computing, parallel sorting on graphics processing units, mining for functional dependencies in relational databases, cellular automata optimisation of wireless se...

  3. Advances in unconventional computing

    CERN Document Server

    2017-01-01

    The unconventional computing is a niche for interdisciplinary science, cross-bred of computer science, physics, mathematics, chemistry, electronic engineering, biology, material science and nanotechnology. The aims of this book are to uncover and exploit principles and mechanisms of information processing in and functional properties of physical, chemical and living systems to develop efficient algorithms, design optimal architectures and manufacture working prototypes of future and emergent computing devices. This first volume presents theoretical foundations of the future and emergent computing paradigms and architectures. The topics covered are computability, (non-)universality and complexity of computation; physics of computation, analog and quantum computing; reversible and asynchronous devices; cellular automata and other mathematical machines; P-systems and cellular computing; infinity and spatial computation; chemical and reservoir computing. The book is the encyclopedia, the first ever complete autho...

  4. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and so requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product. It also introduces the successful application of soft computing techniques to solve the many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  5. Design of magnetic coordination complexes for quantum computing.

    Science.gov (United States)

    Aromí, Guillem; Aguilà, David; Gamez, Patrick; Luis, Fernando; Roubeau, Olivier

    2012-01-21

    A very exciting prospect in coordination chemistry is to manipulate spins within magnetic complexes for the realization of quantum logic operations. An introduction to the requirements for a paramagnetic molecule to act as a 2-qubit quantum gate is provided in this tutorial review. We propose synthetic methods aimed at accessing such type of functional molecules, based on ligand design and inorganic synthesis. Two strategies are presented: (i) the first consists in targeting molecules containing a pair of well-defined and weakly coupled paramagnetic metal aggregates, each acting as a carrier of one potential qubit, (ii) the second is the design of dinuclear complexes of anisotropic metal ions, exhibiting dissimilar environments and feeble magnetic coupling. The first systems obtained from this synthetic program are presented here and their properties are discussed.

  6. Computation as an Unbounded Process

    Czech Academy of Sciences Publication Activity Database

    van Leeuwen, J.; Wiedermann, Jiří

    2012-01-01

    Roč. 429, 20 April (2012), s. 202-212 ISSN 0304-3975 R&D Projects: GA ČR GAP202/10/1333 Institutional research plan: CEZ:AV0Z10300504 Keywords : arithmetical hierarchy * hypercomputation * mind change complexity * nondeterminism * relativistic computation * unbounded computation Subject RIV: IN - Informatics, Computer Science Impact factor: 0.489, year: 2012

  7. A new decision sciences for complex systems

    OpenAIRE

    Lempert, Robert J.

    2002-01-01

    Models of complex systems can capture much useful information but can be difficult to apply to real-world decision-making because the type of information they contain is often inconsistent with that required for traditional decision analysis. New approaches, which use inductive reasoning over large ensembles of computational experiments, now make possible systematic comparison of alternative policy options using models of complex systems. This article describes Computer-Assisted Reasoning, an...

  8. Thermodynamic properties of actinide complexes

    International Nuclear Information System (INIS)

    Di Bernardo, P.; Tomat, G.; Bismondo, A.

    1980-01-01

    The present paper reports a continuation of investigations on the complexing ability of substituted polycarboxylate ligands toward the uranyl(VI) ion. The changes in free energy were computed from the stability constants determined by potentiometric measurements; the enthalpy changes were measured by direct calorimetric titrations. The acid formation constants and the complex formation constants were calculated with the aid of a CDC CYBER 76 computer using the programs LETAGROP VRID and MINIQUAD 75. The enthalpy changes for the proton-ligand and metal-ligand complex formation were calculated by the least-squares program LETAGROP KALLE. The data obtained for a relatively wide range of concentrations of the metal and hydrogen ions may be interpreted in terms of the formation of simple mononuclear complexes, ML, and acid complexes, MpHqLr, where p = 1; q = 1, 2; r = 1, 2. The values of the free energy, enthalpy, and entropy changes for the systems investigated are reported together with the logarithms of the corresponding stability constants. (author)
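
    The conversion from stability constants to free-energy changes is the standard relation ΔG° = -RT ln β; with a base-10 stability constant the arithmetic looks like this (illustrative log β, not a value from the paper):

```python
import math

R = 8.314          # gas constant, J mol^-1 K^-1
T = 298.15         # K, assuming measurements near room temperature
log10_beta = 7.5   # illustrative log10 stability constant

dG = -R * T * math.log(10) * log10_beta   # Delta G = -RT ln(beta), in J/mol
print(f"{dG / 1000:.1f} kJ/mol")          # about -42.8 kJ/mol
```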

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction: The CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year's; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load are close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme: The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference, where a large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations: Facility and infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  10. Energy conserving numerical methods for the computation of complex vortical flows

    Science.gov (United States)

    Allaneau, Yves

    One of the original goals of this thesis was to develop numerical tools to help with the design of micro air vehicles. Micro Air Vehicles (MAVs) are small flying devices of only a few inches in wing span. Some people consider that as their size becomes smaller and smaller, it will become increasingly difficult to keep all the classical control surfaces such as the rudders, the ailerons and the usual propellers. Over the years, scientists have taken inspiration from nature. Birds, by flapping and deforming their wings, are capable of accurate attitude control and are able to generate propulsion. However, the biomimicry design has its own limitations, and it is difficult to place a hummingbird in a wind tunnel to study precisely the motion of its wings. Our approach was to use numerical methods to tackle this challenging problem. In order to precisely evaluate the lift and drag generated by the wings, one needs to be able to capture with high fidelity the extremely complex vortical flow produced in the wake. This requires a numerical method that is stable yet not too dissipative, so that the vortices do not get diffused in an unphysical way. We solved this problem by developing a new Discontinuous Galerkin scheme that, in addition to conserving mass, momentum and total energy locally, also preserves kinetic energy globally. This property greatly improves the stability of the simulations, especially in the special case p=0 when the approximation polynomials are taken to be piecewise constant (we recover a finite volume scheme). In addition to needing an adequate numerical scheme, a high-fidelity solution requires many degrees of freedom in the computations to represent the flow field. The size of the smallest eddies in the flow is given by the Kolmogorov scale. Capturing these eddies requires a mesh counting on the order of Re³ cells, where Re is the Reynolds number of the flow. We show that under-resolving the system, to a certain extent, is acceptable. However our

  11. Application of computed tomography and stereolithography to correct a complex angular and torsional limb deformity in a donkey.

    Science.gov (United States)

    Radtke, Alexandra; Morello, Samantha; Muir, Peter; Arnoldy, Courtney; Bleedorn, Jason

    2017-11-01

    To report the evaluation, surgical planning, and outcome for correction of a complex limb deformity in the tibia of a donkey using computed tomographic (CT) imaging and a 3D bone model. Case report. A 1.5-year-old, 110 kg donkey colt with an angular and torsional deformity of the right pelvic limb. Findings on physical examination included a severe, complex deformity of the right pelvic limb that substantially impeded ambulation. Both hind limbs were imaged via CT, and imaging software was used to characterize the bone deformity. A custom stereolithographic bone model was printed for preoperative planning and rehearsal of the surgery. A closing wedge ostectomy with de-rotation of the tibia was stabilized with 2 precontoured 3.5-mm locking compression plates. Clinical follow-up was available for 3.5 years postoperatively. CT allowed characterization of the angular and torsional bone deformity of the right tibia. A custom bone model facilitated surgical planning and rehearsal of the procedure. Tibial corrective ostectomy was performed without complication. Postoperative management included physical rehabilitation to help restore muscular function and pelvic limb mechanics. Short-term and long-term follow-up confirmed bone healing and excellent clinical function. CT imaging and stereolithography facilitated the evaluation and surgical planning of a complex limb deformity. This combination of techniques may improve the accuracy of the surgeons' evaluation of complex bone deformities in large animals, shorten operating times, and improve outcomes. © 2017 The American College of Veterinary Surgeons.

  12. Construct validity of the pediatric evaluation of disability inventory computer adaptive test (PEDI-CAT) in children with medical complexity.

    Science.gov (United States)

    Dumas, Helene M; Fragala-Pinkham, Maria A; Rosen, Elaine L; O'Brien, Jane E

    2017-11-01

    To assess construct (convergent and divergent) validity of the Pediatric Evaluation of Disability Inventory Computer Adaptive Test (PEDI-CAT) in a sample of children with complex medical conditions. Demographics, clinical information, PEDI-CAT normative score, and the Post-Acute Acuity Rating for Children (PAARC) level were collected for all post-acute hospital admissions (n = 110) from 1 April 2015 to 1 March 2016. Correlations between the PEDI-CAT Daily Activities, Mobility, and Social/Cognitive domain scores for the total sample and across three age groups (infant, preschool, and school-age) were calculated. Differences in mean PEDI-CAT scores for each domain across two groups, children with "Less Complexity" or "More Complexity" based on PAARC level, were examined. All correlations for the total sample and age subgroups were statistically significant, and trends across age groups were evident, with stronger associations between domains for the infant group. Significant differences were found between mean PEDI-CAT Daily Activities, Mobility, and Social/Cognitive normative scores across the two complexity groups, with children in the "Less Complex" group having higher PEDI-CAT scores for all domains. This study provides evidence indicating the PEDI-CAT can be used with confidence in capturing and differentiating children's level of function in a post-acute care setting. Implications for Rehabilitation The PEDI-CAT is a measure of function for children with a variety of conditions and can be used in any clinical setting. Convergent validity of the PEDI-CAT's Daily Activities, Mobility, and Social/Cognitive domains was significant and particularly strong for infants and young children with medical complexity. The PEDI-CAT was able to discriminate groups of children with differing levels of medical complexity admitted to a pediatric post-acute care hospital.
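
    The convergent-validity statistics described here are straightforward to reproduce in outline. A minimal sketch with synthetic stand-in scores (the variable names and numbers below are placeholders, not the study's data):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Synthetic stand-ins for PEDI-CAT normative domain scores (n = 110)
        daily = rng.normal(50, 10, 110)
        mobility = 0.7 * daily + rng.normal(0, 7, 110)        # correlated domain
        more_complex = rng.integers(0, 2, 110).astype(bool)   # PAARC-style grouping

        r, p = stats.pearsonr(daily, mobility)
        print(f"convergent validity: r = {r:.2f}, p = {p:.3g}")

        t, p = stats.ttest_ind(mobility[~more_complex], mobility[more_complex])
        print(f"less vs. more complex: t = {t:.2f}, p = {p:.3g}")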

  13. Quantum computing based on semiconductor nanowires

    NARCIS (Netherlands)

    Frolov, S.M.; Plissard, S.R.; Nadj-Perge, S.; Kouwenhoven, L.P.; Bakkers, E.P.A.M.

    2013-01-01

    A quantum computer will have computational power beyond that of conventional computers, which can be exploited for solving important and complex problems, such as predicting the conformations of large biological molecules. Materials play a major role in this emerging technology, as they can enable

  14. Study of complex modes

    International Nuclear Information System (INIS)

    Pastrnak, J.W.

    1986-01-01

    This eighteen-month study has been successful in providing the designer and analyst with qualitative guidelines on the occurrence of complex modes in the dynamics of linear structures, and also in developing computer codes for determining quantitatively which vibration modes are complex and to what degree. The presence of complex modes in a test structure has been verified. Finite element analysis of a structure with non-proportional damping has been performed. A partial differential equation has been formed to eliminate possible modeling errors

  15. The Convergence of the telematic, computing and information services as a basis for using artificial intelligence to manage complex techno-organizational systems

    Directory of Open Access Journals (Sweden)

    Raikov Alexander

    2018-01-01

    The authors analyse the problems of using artificial intelligence to manage complex techno-organizational systems in the aerospace industry, on the basis of the convergence of telematic, computing and information services. This convergence gives space objects a higher level of management based on the principle of self-organizing integration. Using elements of artificial intelligence makes it possible to obtain near-optimal and limiting parameter values for ordinary and critical situations in real time. It thus helps to operate closer to the limit values of the managed objects' parameters, owing to extended management and observation capabilities.

  16. Computational identification of binding energy hot spots in protein-RNA complexes using an ensemble approach.

    Science.gov (United States)

    Pan, Yuliang; Wang, Zixiang; Zhan, Weihua; Deng, Lei

    2018-05-01

    Identifying RNA-binding residues, especially energetically favored hot spots, can provide valuable clues for understanding the mechanisms and functional importance of protein-RNA interactions. Yet, the limited availability of experimentally recognized energy hot spots in protein-RNA crystal structures makes it difficult to develop empirical identification approaches. Computational prediction of RNA-binding hot spot residues is still in its infancy. Here, we describe a computational method, PrabHot (Prediction of protein-RNA binding hot spots), that can effectively detect hot spot residues on protein-RNA binding interfaces using an ensemble of conceptually different machine learning classifiers. Residue interaction network features and new solvent exposure characteristics are combined together and selected for classification with the Boruta algorithm. In particular, two new reference datasets (benchmark and independent) have been generated containing 107 hot spots from 47 known protein-RNA complex structures. In 10-fold cross-validation on the training dataset, PrabHot achieves promising performance with an AUC score of 0.86 and a sensitivity of 0.78, significantly better than that of the pioneering RNA-binding hot spot prediction method HotSPRing. We also demonstrate the capability of our proposed method on the independent test dataset, where it remains competitive. The PrabHot webserver is freely available at http://denglab.org/PrabHot/. leideng@csu.edu.cn. Supplementary data are available at Bioinformatics online.
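
    The ensemble-of-heterogeneous-classifiers idea can be sketched with scikit-learn. Everything below is a placeholder stand-in: synthetic features instead of the interaction-network and solvent-exposure descriptors, and a generic soft-voting ensemble rather than PrabHot itself.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Placeholder residue features; labels mark hot-spot residues
        X, y = make_classification(n_samples=300, n_features=20,
                                   weights=[0.7], random_state=0)

        ensemble = VotingClassifier(
            estimators=[
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
                ("lr", make_pipeline(StandardScaler(),
                                     LogisticRegression(max_iter=1000))),
            ],
            voting="soft",   # average predicted probabilities across models
        )
        auc = cross_val_score(ensemble, X, y, cv=10, scoring="roc_auc").mean()
        print(f"10-fold CV AUC: {auc:.2f}")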

  17. Parallel computers and three-dimensional computational electromagnetics

    International Nuclear Information System (INIS)

    Madsen, N.K.

    1994-01-01

    The authors have continued to enhance their ability to use new massively parallel processing computers to solve time-domain electromagnetic problems. New vectorization techniques have improved the performance of their code DSI3D by factors of 5 to 15, depending on the computer used. New radiation boundary conditions and far-field transformations now allow the computation of radar cross-section values for complex objects. A new parallel-data extraction code has been developed that allows the extraction of data subsets from large problems, which have been run on parallel computers, for subsequent post-processing on workstations with enhanced graphics capabilities. A new charged-particle-pushing version of DSI3D is under development. Finally, DSI3D has become a focal point for several new Cooperative Research and Development Agreement activities with industrial companies such as Lockheed Advanced Development Company, Varian, Hughes Electron Dynamics Division, General Atomic, and Cray

  18. Linking computer-aided design (CAD) to Geant4-based Monte Carlo simulations for precise implementation of complex treatment head geometries

    International Nuclear Information System (INIS)

    Constantin, Magdalena; Constantin, Dragos E; Keall, Paul J; Narula, Anisha; Svatos, Michelle; Perl, Joseph

    2010-01-01

    Most of the treatment head components of medical linear accelerators used in radiation therapy have complex geometrical shapes. They are typically designed using computer-aided design (CAD) applications. In Monte Carlo simulations of radiotherapy beam transport through the treatment head components, the relevant beam-generating and beam-modifying devices are inserted in the simulation toolkit using geometrical approximations of these components. Depending on their complexity, such approximations may introduce errors that can be propagated throughout the simulation. This drawback can be minimized by exporting a more precise geometry of the linac components from CAD and importing it into the Monte Carlo simulation environment. We present a technique that links three-dimensional CAD drawings of the treatment head components to Geant4 Monte Carlo simulations of dose deposition. (note)

  19. Computational Analysis of SAXS Data Acquisition.

    Science.gov (United States)

    Dong, Hui; Kim, Jin Seob; Chirikjian, Gregory S

    2015-09-01

    Small-angle x-ray scattering (SAXS) is an experimental biophysical method used for gaining insight into the structure of large biomolecular complexes. Under appropriate chemical conditions, the information obtained from a SAXS experiment can be equated to the pair distribution function, which is the distribution of distances between every pair of points in the complex. Here we develop a mathematical model to calculate the pair distribution function for a structure of known density, and analyze the computational complexity of these calculations. Efficient recursive computation of this forward model is an important step in solving the inverse problem of recovering the three-dimensional density of biomolecular structures from their pair distribution functions. In particular, we show that integrals of products of three spherical-Bessel functions arise naturally in this context. We then develop an algorithm for the efficient recursive computation of these integrals.
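
    The quantity at the heart of the forward model, the pair distribution function, can be approximated directly for a structure represented as weighted points. A minimal brute-force O(N²) sketch (a plain histogram of pairwise distances; the paper's contribution is the efficient recursive spherical-Bessel evaluation, which this does not attempt):

        import numpy as np

        def pair_distribution(points, weights, n_bins=100):
            """Brute-force p(r): histogram of all unique pairwise distances,
            weighted by products of the point weights."""
            diff = points[:, None, :] - points[None, :, :]
            d = np.sqrt((diff ** 2).sum(axis=-1))
            w = weights[:, None] * weights[None, :]
            iu = np.triu_indices(len(points), k=1)   # count each pair once
            hist, edges = np.histogram(d[iu], bins=n_bins, weights=w[iu])
            return hist, 0.5 * (edges[:-1] + edges[1:])

        rng = np.random.default_rng(1)
        pts = rng.normal(size=(500, 3))              # toy "molecule"
        pr, r = pair_distribution(pts, np.ones(500))
        print("most probable pair distance:", r[pr.argmax()])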

  20. RADCHARM++: A C++ routine to compute the electromagnetic radiation generated by relativistic charged particles in crystals and complex structures

    Energy Technology Data Exchange (ETDEWEB)

    Bandiera, Laura; Bagli, Enrico; Guidi, Vincenzo [INFN Sezione di Ferrara and Dipartimento di Fisica e Scienze della Terra, Università degli Studi di Ferrara, Via Saragat 1, 44121 Ferrara (Italy); Tikhomirov, Victor V. [Research Institute for Nuclear Problems, Belarusian State University, Minsk (Belarus)

    2015-07-15

    The analytical theories of coherent bremsstrahlung and channeling radiation well describe the process of radiation generation in crystals under some special cases. However, the treatment of complex situations requires the usage of a more general approach. In this report we present a C++ routine, named RADCHARM++, to compute the electromagnetic radiation emitted by electrons and positrons in crystals and complex structures. In the RADCHARM++ routine, the model for the computation of e.m. radiation generation is based on the direct integration of the quasiclassical formula of Baier and Katkov. This approach allows one to take into account real trajectories, and thereby the contribution of incoherent scattering. Such contribution can be very important in many cases, for instance for electron channeling. The generality of the Baier–Katkov operator method permits one to simulate the electromagnetic radiation emitted by electrons/positrons in very different cases, e.g., in straight, bent and periodically bent crystals, and for different beam energy ranges, from sub-GeV to TeV and above. The RADCHARM++ routine has been implemented in the Monte Carlo code DYNECHARM++, which solves the classical equation of motion of charged particles traveling through a crystal under the continuum potential approximation. The code has proved to reproduce the results of experiments performed at the MAinzer MIkrotron (MAMI) with 855 MeV electrons and has been used to predict the radiation spectrum generated by the same electron beam in a bent crystal.

  1. Computational Investigation on the Spectroscopic Properties of Thiophene Based Europium β-Diketonate Complexes.

    Science.gov (United States)

    Greco, Claudio; Moro, Giorgio; Bertini, Luca; Biczysko, Malgorzata; Barone, Vincenzo; Cosentino, Ugo

    2014-02-11

    differences between calculated and experimental results are observed for compound 4, for which difficulties in the experimental determination of the triplet state energy were encountered: our results show that the negligible photoluminescence quantum yield of this compound is due to the fact that the energy of the most stable triplet state is significantly lower than that of the resonance level of the Europium ion, and thus the energy transfer process is prevented. These results confirm the reliability of the adopted computational approach in calculating the energy of the lowest triplet state of these systems, a key parameter in the design of new ligands for lanthanide complexes presenting large photoluminescence quantum yields.

  2. Quantum vertex model for reversible classical computing.

    Science.gov (United States)

    Chamon, C; Mucciolo, E R; Ruckenstein, A E; Yang, Z-C

    2017-05-12

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without 'learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.
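
    The relaxation picture, encoding the answer in a ground state and annealing toward it, can be illustrated generically. A minimal sketch, assuming a toy random-coupling Ising ring in place of the planar vertex model (this is ordinary simulated annealing, not the authors' construction or the D-Wave mapping):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 64
        J = rng.choice([-1.0, 1.0], size=n)   # random couplings on a ring
        s = rng.choice([-1, 1], size=n)       # initial spin configuration

        def energy(s):
            return -np.sum(J * s * np.roll(s, -1))

        T = 2.0
        for sweep in range(2000):             # geometric annealing schedule
            for i in rng.integers(0, n, size=n):
                dE = 2 * s[i] * (J[i] * s[(i + 1) % n] + J[i - 1] * s[i - 1])
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    s[i] = -s[i]              # Metropolis single-spin flip
            T *= 0.995

        print("final energy:", energy(s), " lower bound:", -np.abs(J).sum())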

  3. Computing quantum discord is NP-complete

    International Nuclear Information System (INIS)

    Huang, Yichen

    2014-01-01

    We study the computational complexity of quantum discord (a measure of quantum correlation beyond entanglement), and prove that computing quantum discord is NP-complete. Therefore, quantum discord is computationally intractable: the running time of any algorithm for computing quantum discord is believed to grow exponentially with the dimension of the Hilbert space so that computing quantum discord in a quantum system of moderate size is not possible in practice. As by-products, some entanglement measures (namely entanglement cost, entanglement of formation, relative entropy of entanglement, squashed entanglement, classical squashed entanglement, conditional entanglement of mutual information, and broadcast regularization of mutual information) and constrained Holevo capacity are NP-hard/NP-complete to compute. These complexity-theoretic results are directly applicable in common randomness distillation, quantum state merging, entanglement distillation, superdense coding, and quantum teleportation; they may offer significant insights into quantum information processing. Moreover, we prove the NP-completeness of two typical problems: linear optimization over classical states and detecting classical states in a convex set, providing evidence that working with classical states is generically computationally intractable. (paper)

  4. Shapes of interacting RNA complexes

    DEFF Research Database (Denmark)

    Fu, Benjamin Mingming; Reidys, Christian

    2014-01-01

    Shapes of interacting RNA complexes are studied using a filtration via their topological genus. A shape of an RNA complex is obtained by (iteratively) collapsing stacks and eliminating hairpin loops. This shape-projection preserves the topological core of the RNA complex, and for fixed topological genus there are only finitely many such shapes. Our main result is a new bijection that relates the shapes of RNA complexes with shapes of RNA structures. This allows one to compute the shape polynomial of RNA complexes via the shape polynomial of RNA structures. We furthermore present a linear-time uniform sampling algorithm for shapes of RNA complexes of fixed topological genus.

  5. Assessment of the Stylohyoid Complex with Cone Beam Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    İlgüy, Dilhan; İlgüy, Mehmet; Fişekçioğlu, Erdoğan; Dölekoğlu, Semanur [Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Yeditepe University, Istanbul (Turkey)

    2012-12-27

    Orientation of the stylohyoid complex (SHC) may be important for evaluation of the patient with orofacial pain or dysphagia. Our purpose was to assess the length and angulations of the SHC using cone beam computed tomography (CBCT). In this study, 3D images provided by CBCT of 69 patients (36 females, 33 males, age range 15-77 years) were retrospectively evaluated. All CBCT images were performed because of other indications. None of the patients had symptoms of ossified SHC. The length and the thickness of SHC ossification, the anteroposterior angle (APA) and the mediolateral angle (MLA) were measured by maxillofacial radiologists on the anteroposterior, right lateral and left lateral views of CBCT. Student’s t test, Pearson's correlation and the Chi-square test were used for statistical analysis. According to the results, the mean length of the SHC was 25.3 ± 11.3 mm and the mean thickness was 4.8 ± 1.8 mm in the study group. The mean APA value of the SHCs was 25.6° ± 5.4° and the mean MLA value was 66.4° ± 6.7°. Positive correlations were found between age and APA (r = 0.335; P < 0.01), between thickness and APA (r = 0.448; P < 0.01), and between length and thickness (r = 0.236). The size and morphology of the SHC can be easily assessed by 3D views provided by CBCT. In CBCT evaluation of the head and neck region, the radiologist should consider the SHC according to these variations, which may have clinical importance.

  6. Assessment of the Stylohyoid Complex with Cone Beam Computed Tomography

    International Nuclear Information System (INIS)

    İlgüy, Dilhan; İlgüy, Mehmet; Fişekçioğlu, Erdoğan; Dölekoğlu, Semanur

    2012-01-01

    Orientation of the stylohyoid complex (SHC) may be important for evaluation of the patient with orofacial pain or dysphagia. Our purpose was to assess the length and angulations of the SHC using cone beam computed tomography (CBCT). In this study, 3D images provided by CBCT of 69 patients (36 females, 33 males, age range 15-77 years) were retrospectively evaluated. All CBCT images were performed because of other indications. None of the patients had symptoms of ossified SHC. The length and the thickness of SHC ossification, the anteroposterior angle (APA) and the mediolateral angle (MLA) were measured by maxillofacial radiologists on the anteroposterior, right lateral and left lateral views of CBCT. Student’s t test, Pearson's correlation and the Chi-square test were used for statistical analysis. According to the results, the mean length of the SHC was 25.3 ± 11.3 mm and the mean thickness was 4.8 ± 1.8 mm in the study group. The mean APA value of the SHCs was 25.6° ± 5.4° and the mean MLA value was 66.4° ± 6.7°. Positive correlations were found between age and APA (r = 0.335; P < 0.01), between thickness and APA (r = 0.448; P < 0.01), and between length and thickness (r = 0.236). The size and morphology of the SHC can be easily assessed by 3D views provided by CBCT. In CBCT evaluation of the head and neck region, the radiologist should consider the SHC according to these variations, which may have clinical importance

  7. Interactive Evolution of Complex Behaviours Through Skill Encapsulation

    DEFF Research Database (Denmark)

    González de Prado Salas, Pablo; Risi, Sebastian

    2017-01-01

    Human-based computation (HBC) is an emerging research area in which humans and machines collaborate to solve tasks that neither one can solve in isolation. In evolutionary computation, HBC is often realized through interactive evolutionary computation (IEC), in which a user guides evolution by iteratively selecting the parents for the next generation. IEC has shown promise in a variety of different domains, but evolving more complex or hierarchically composed behaviours remains challenging with the traditional IEC approach. To overcome this challenge, this paper combines the recently introduced ESP approach with IEC; as the results in this paper show, IEC-ESP is able to solve complex control problems that are challenging for a traditional fitness-based approach.

  8. Semiotics of constructed complex systems

    Energy Technology Data Exchange (ETDEWEB)

    Landauer, C.; Bellman, K.L.

    1996-12-31

    The scope of this paper is limited to software and other constructed complex systems mediated or integrated by software. Our research program studies foundational issues that we believe will help us develop a theoretically sound approach to constructing complex systems. There have really been only two theoretical approaches that have helped us understand and develop computational systems: mathematics and linguistics. We show how semiotics can also play a role, whether we think of it as part of these other theories or as subsuming one or both of them. We describe our notion of "computational semiotics", which we define to be the study of computational methods of dealing with symbols, show how such a theory might be formed, and describe what we might get from it in terms of more interesting use of symbols by computing systems. This research was supported in part by the Federal Highway Administration's Office of Advanced Research and by the Advanced Research Projects Agency's Software and Intelligent Systems Technology Office.

  9. Collectives and the design of complex systems

    CERN Document Server

    Wolpert, David

    2004-01-01

    Increasingly powerful computers are making possible distributed systems comprised of many adaptive and self-motivated computational agents. Such systems, when distinguished by system-level performance criteria, are known as "collectives." Collectives and the Design of Complex Systems lays the foundation for a science of collectives and describes how to design them for optimal performance. An introductory survey chapter is followed by descriptions of information-processing problems that can only be solved by the joint actions of large communities of computers, each running its own complex, decentralized machine-learning algorithm. Subsequent chapters analyze the dynamics and structures of collectives, as well as address economic, model-free, and control-theory approaches to designing complex systems. The work assumes a modest understanding of basic statistics and calculus. Topics and Features: Introduces the burgeoning science of collectives and its practical applications in a single useful volume Combines ap...

  10. Computational Intelligence in Intelligent Data Analysis

    CERN Document Server

    Nürnberger, Andreas

    2013-01-01

    Complex systems and their phenomena are ubiquitous as they can be found in biology, finance, the humanities, management sciences, medicine, physics and similar fields. For many problems in these fields, there are no conventional ways to mathematically or analytically solve them completely at low cost. On the other hand, nature already solved many optimization problems efficiently. Computational intelligence attempts to mimic nature-inspired problem-solving strategies and methods. These strategies can be used to study, model and analyze complex systems such that it becomes feasible to handle them. Key areas of computational intelligence are artificial neural networks, evolutionary computation and fuzzy systems. As one of only a few researchers in the field, Rudolf Kruse has contributed in many important ways to the understanding, modeling and application of computational intelligence methods. On occasion of his 60th birthday, a collection of original papers of leading researchers in the field of computational intell...

  11. Computational Finance

    DEFF Research Database (Denmark)

    Rasmussen, Lykke

    One of the major challenges in today's post-crisis finance environment is calculating the sensitivities of complex products for hedging and risk management. Historically, these sensitivities have been determined using bump-and-revalue, but due to the increasing magnitude of these computations does...
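
    Bump-and-revalue itself can be shown in a few lines. A minimal sketch, assuming a Monte Carlo Black-Scholes call price and a central finite difference for the delta; common random numbers are reused across the bumped valuations to keep the estimate stable (all parameters are illustrative):

        import numpy as np

        rng = np.random.default_rng(7)
        z = rng.standard_normal(200_000)   # common random numbers

        def mc_call_price(s0, k=100.0, r=0.01, sigma=0.2, t=1.0):
            """Monte Carlo price of a European call under Black-Scholes."""
            st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
            return np.exp(-r * t) * np.maximum(st - k, 0.0).mean()

        h = 0.5                            # bump size
        delta = (mc_call_price(100.0 + h) - mc_call_price(100.0 - h)) / (2 * h)
        print(f"bump-and-revalue delta ~ {delta:.3f}")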

  12. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective

    Directory of Open Access Journals (Sweden)

    Shuo Gu

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, was proposed.

  13. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective.

    Science.gov (United States)

    Gu, Shuo; Pei, Jianfeng

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, was proposed.

  14. Development of Soft Computing and Applications in Agricultural and Biological Engineering

    Science.gov (United States)

    Soft computing is a set of “inexact” computing techniques, which are able to model and analyze very complex problems. For these complex problems, more conventional methods have not been able to produce cost-effective, analytical, or complete solutions. Soft computing has been extensively studied and...

  15. Granular computing: perspectives and challenges.

    Science.gov (United States)

    Yao, JingTao; Vasilakos, Athanasios V; Pedrycz, Witold

    2013-12-01

    Granular computing, as a new and rapidly growing paradigm of information processing, has attracted many researchers and practitioners. Granular computing is an umbrella term to cover any theories, methodologies, techniques, and tools that make use of information granules in complex problem solving. The aim of this paper is to review foundations and schools of research and to elaborate on current developments in granular computing research. We first review some basic notions of granular computing. Classification and descriptions of various schools of research in granular computing are given. We also present and identify some research directions in granular computing.

  16. Complexity analysis of accelerated MCMC methods for Bayesian inversion

    International Nuclear Information System (INIS)

    Hoang, Viet Ha; Schwab, Christoph; Stuart, Andrew M

    2013-01-01

    The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the PDE forward solution map and the sampling of the probability space under the posterior distribution are essential for the design of efficient computational Bayesian methods for PDE inverse problems. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We provide complexity analyses of several Markov chain Monte Carlo (MCMC) methods for the efficient numerical evaluation of expectations under the Bayesian posterior distribution, given data δ. Particular attention is given to bounds on the overall work required to achieve a prescribed error level ε. Specifically, we first bound the computational complexity of ‘plain’ MCMC, based on combining MCMC sampling with linear complexity multi-level solvers for elliptic PDE. Our (new) work versus accuracy bounds show that the complexity of this approach can be quite prohibitive. Two strategies for reducing the computational complexity are then proposed and analyzed: first, a sparse, parametric and deterministic generalized polynomial chaos (gpc) ‘surrogate’ representation of the forward response map of the PDE over the entire parameter space, and, second, a novel multi-level Markov chain Monte Carlo strategy which utilizes sampling from a multi-level discretization of the posterior and the forward PDE. For both of these strategies, we derive asymptotic bounds on work versus accuracy, and hence asymptotic bounds on the computational complexity of the algorithms. In particular, we provide sufficient conditions on the regularity of the unknown coefficients of the PDE and on the
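
    The 'plain' MCMC baseline analyzed here can be illustrated at toy scale. A minimal sketch, assuming a one-parameter inverse problem with a cheap stand-in forward map in place of the elliptic PDE solve (random-walk Metropolis, not the paper's multi-level or gpc-accelerated samplers):

        import numpy as np

        rng = np.random.default_rng(3)

        def forward(u):
            return np.sin(u)    # cheap stand-in for the PDE forward map G(u)

        u_true, sigma = 0.8, 0.05
        data = forward(u_true) + sigma * rng.normal()

        def log_post(u):        # Gaussian prior times Gaussian likelihood
            return -0.5 * u ** 2 - 0.5 * ((data - forward(u)) / sigma) ** 2

        u, chain = 0.0, []
        for _ in range(20000):  # random-walk Metropolis
            prop = u + 0.3 * rng.normal()
            if np.log(rng.random()) < log_post(prop) - log_post(u):
                u = prop
            chain.append(u)

        post = np.array(chain[5000:])          # discard burn-in
        print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f}")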

  17. Complex of two-dimensional multigroup programs for neutron-physical computations of nuclear reactor

    International Nuclear Information System (INIS)

    Karpov, V.A.; Protsenko, A.N.

    1975-01-01

    The mathematical aspects of the two-dimensional multigroup method for neutron-physics computations of nuclear reactors are briefly stated. Problems of algorithmization and BESM-6 computer implementation of multigroup diffusion approximations in hexagonal and rectangular computational lattices are analysed. The results of computations for a fast critical assembly with a complicated core composition are given. The accuracy of the computed criticality, neutron field distributions and absorbing-rod efficiencies obtained with the developed computer programs is estimated. (author)

  18. Reconfigurable Computing

    CERN Document Server

    Cardoso, Joao MP

    2011-01-01

    As the complexity of modern embedded systems increases, it becomes less practical to design monolithic processing platforms. As a result, reconfigurable computing is being adopted widely for more flexible design. Reconfigurable Computers offer the spatial parallelism and fine-grained customizability of application-specific circuits with the postfabrication programmability of software. To make the most of this unique combination of performance and flexibility, designers need to be aware of both hardware and software issues. FPGA users must think not only about the gates needed to perform a comp

  19. Computed Tomography evaluation of maxillofacial injuries

    Directory of Open Access Journals (Sweden)

    V Natraj Prasad

    2017-01-01

    Background & Objectives: The maxillofacial region, a complex anatomical structure, can be evaluated by conventional radiography (plain films), tomography, Multidetector Computed Tomography, Three-Dimensional Computed Tomography, Orthopantomogram and Magnetic Resonance Imaging. The study was conducted with the objective of describing various forms of maxillofacial injuries, imaging features of different types of maxillofacial fractures and the advantage of using Three-Dimensional Computed Tomography reconstructed images. Materials & Methods: A hospital based cross-sectional study was conducted among 50 patients during April 2014 to September 2016 using a Toshiba Aquilion Prime 160 slice Multi Detector Computed Tomography scanner. Results: Maxillofacial fractures were significantly more frequent in the male population (88%) than in the female population (12%). Road traffic accidents were the most common cause of injury, others being physical assault and fall from height. Fractures were most common in the 31-40 years (26%) and 21-30 years (24%) age groups. The maxillary sinus was the commonest fracture site (36%), followed by the nasal and zygomatic bones (30%), and the mandible and orbital bones (28%). Soft tissue swelling was the commonest associated finding. Three-dimensional (3D) images, compared to the axial scans, missed some fractures. However, the extension of complex fracture lines and the degree of displacement were more accurately assessed. Complex fractures found were Le Fort (6%) and naso-orbito-ethmoid (4%) fractures. Conclusion: The proper evaluation of the complex anatomy of the facial bones requires Multidetector Computed Tomography, which offers excellent spatial resolution enabling multiplanar reformations and three-dimensional reconstructions for enhanced diagnostic accuracy and surgical planning.

  20. Cyberinfrastructure for Computational Relativistic Astrophysics

    OpenAIRE

    Ott, Christian

    2012-01-01

    Poster presented at the NSF Office of Cyberinfrastructure CyberBridges CAREER PI workshop. This poster discusses the computational challenges involved in the modeling of complex relativistic astrophysical systems. The Einstein Toolkit is introduced. It is an open-source community infrastructure for numerical relativity and computational astrophysics.

  1. Experimental approach for the uncertainty assessment of 3D complex geometry dimensional measurements using computed tomography at the mm and sub-mm scales

    DEFF Research Database (Denmark)

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A.

    2017-01-01

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems' traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such a CMS would still hold all the typical limitations of optical and tactile techniques.

  2. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that the quantum capacity based on the semi-computability concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  3. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective

    OpenAIRE

    Shuo Gu; Jianfeng Pei

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regula...

  4. Design Anthropology, Emerging Technologies and Alternative Computational Futures

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte

    Emerging technologies are providing a new field for design anthropological inquiry that unites experiences, imaginaries and materialities in complex ways and demands new approaches to developing sustainable computational futures.

  5. WE-D-303-00: Computational Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, John [Duke University Medical Center, Durham, NC (United States); Brigham and Women’s Hospital and Dana-Farber Cancer Institute, Boston, MA (United States)

    2015-06-15

    Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient’s anatomy and physiology. Imaging data can be generated from it as if it were a live patient, using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need and requirements of computational phantoms in medical physics research; discuss the developments and applications of computational phantoms; know the promises and limitations of computational phantoms in solving complex problems.

  6. WE-D-303-00: Computational Phantoms

    International Nuclear Information System (INIS)

    Lewis, John

    2015-01-01

    Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient’s anatomy and physiology. Imaging data can be generated from it as if it were a live patient, using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need and requirements of computational phantoms in medical physics research; discuss the developments and applications of computational phantoms; know the promises and limitations of computational phantoms in solving complex problems

  7. Engineering and Computing Portal to Solve Environmental Problems

    Science.gov (United States)

    Gudov, A. M.; Zavozkin, S. Y.; Sotnikov, I. Y.

    2018-01-01

    This paper describes the architecture and services of the Engineering and Computing Portal, an integrated solution that provides access to high-performance computing resources and makes it possible to carry out computational experiments, teach parallel technologies and solve computing tasks, including technogenic-safety ones.

  8. Ordinal optimization and its application to complex deterministic problems

    Science.gov (United States)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective to approach a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve high levels of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
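
    The ordinal-optimization selection step can be sketched generically: rank candidate designs with a cheap, noisy evaluation and keep a softened "good enough" subset rather than hunting the exact optimum. The cost model below is purely illustrative, not the thesis's manufacturing-process model.

        import numpy as np

        rng = np.random.default_rng(4)

        def true_cost(x):     # expensive "true" performance (unknown in practice)
            return (x - 0.3) ** 2

        def noisy_cost(x):    # crude model: truth plus large evaluation noise
            return true_cost(x) + 0.1 * rng.normal(size=np.shape(x))

        designs = rng.random(1000)                       # sample the design space
        picked = np.argsort(noisy_cost(designs))[:50]    # soft goal: top 5% by rank

        # Ordinal comparison is noise-robust: the picked set almost surely
        # overlaps the truly best designs ("alignment").
        truly_good = np.argsort(true_cost(designs))[:50]
        overlap = np.intersect1d(picked, truly_good).size
        print(f"{overlap} of the 50 picked designs are in the true top 50")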

  9. Complexity in Dynamical Systems

    Science.gov (United States)

    Moore, Cristopher David

    The study of chaos has shown us that deterministic systems can have a kind of unpredictability, based on a limited knowledge of their initial conditions; after a finite time, the motion appears essentially random. This observation has inspired a general interest in the subject of unpredictability, and more generally, complexity: how can we characterize how "complex" a dynamical system is? In this thesis, we attempt to answer this question with a paradigm of complexity that comes from computer science. We extract sets of symbol sequences, or languages, from a dynamical system using standard methods of symbolic dynamics; we then ask what kinds of grammars or automata are needed to generate these languages. This places them in the Chomsky hierarchy, which in turn tells us something about how subtle and complex the dynamical system's behavior is. This gives us insight into the question of unpredictability, since these automata can also be thought of as computers attempting to predict the system. In the culmination of the thesis, we find a class of smooth, two-dimensional maps which are equivalent to the highest class in the Chomsky hierarchy, the Turing machine; they are capable of universal computation. Therefore, these systems possess a kind of unpredictability qualitatively different from the usual "chaos": even if the initial conditions are known exactly, questions about the system's long-term dynamics are undecidable. No algorithm exists to answer them. Although this kind of unpredictability has been discussed in the context of distributed, many-degree-of-freedom systems (for instance, cellular automata) we believe this is the first example of such phenomena in a smooth, finite-degree-of-freedom system.

  10. Constructivity and computability in historical and philosophical perspective

    CERN Document Server

    Dubucs, Jacques

    2014-01-01

    Ranging from Alan Turing's seminal 1936 paper to the latest work on Kolmogorov complexity and linear logic, this comprehensive new work clarifies the relationship between computability on the one hand and constructivity on the other. The authors argue that even though constructivists have largely shed Brouwer's solipsistic attitude to logic, there remain points of disagreement to this day.Focusing on the growing pains computability experienced as it was forced to address the demands of rapidly expanding applications, the content maps the developments following Turing's ground-breaking linkage of computation and the machine, the resulting birth of complexity theory, the innovations of Kolmogorov complexity and resolving the dissonances between proof theoretical semantics and canonical proof feasibility. Finally, it explores one of the most fundamental questions concerning the interface between constructivity and computability: whether the theory of recursive functions is needed for a rigorous development of co...

  11. The Use of Hebbian Cell Assemblies for Nonlinear Computation

    DEFF Research Database (Denmark)

    Tetzlaff, Christian; Dasgupta, Sakyasingha; Kulvicius, Tomas

    2015-01-01

    When learning a complex task our nervous system self-organizes large groups of neurons into coherent dynamic activity patterns. During this, a network with multiple, simultaneously active, and computationally powerful cell assemblies is created. How such ordered structures are formed while preserving ... computing complex non-linear transforms and, for execution, must cooperate with each other without interference. This mechanism, thus, permits the self-organization of computationally powerful sub-structures in dynamic networks for behavior control.
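
    The Hebbian ingredient behind such assembly formation can be sketched in a few lines. A minimal sketch, assuming a rate-based outer-product Hebbian rule with a simple saturation bound, repeatedly presenting a small set of activity patterns (a generic textbook construction, not the paper's network model):

        import numpy as np

        rng = np.random.default_rng(2)
        n, n_patterns = 30, 3
        # A few recurring sparse activity patterns; neurons co-active in a
        # pattern should wire into a strongly coupled group (an assembly).
        patterns = (rng.random((n_patterns, n)) < 0.2).astype(float)

        w = np.zeros((n, n))
        eta, w_max = 0.05, 1.0
        for _ in range(300):
            x = patterns[rng.integers(n_patterns)]  # present a random pattern
            w += eta * np.outer(x, x)               # Hebbian: pre times post
            np.fill_diagonal(w, 0.0)
            w = np.minimum(w, w_max)                # simple saturation bound

        mask = np.outer(patterns[0], patterns[0]) > 0
        np.fill_diagonal(mask, False)
        print(f"within-assembly mean weight: {w[mask].mean():.2f}, "
              f"overall mean: {w.mean():.2f}")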

  12. A high performance, low power computational platform for complex sensing operations in smart cities

    KAUST Repository

    Jiang, Jiming; Claudel, Christian

    2017-01-01

    This paper presents a new wireless platform designed for an integrated traffic/flash flood monitoring system. The sensor platform is built around a 32-bit ARM Cortex M4 microcontroller and a 2.4 GHz 802.15.4 ISM compliant radio module. It can be interfaced with fixed traffic sensors, or receive data from vehicle transponders. This platform is specifically designed for solar-powered, low bandwidth, high computational performance wireless sensor network applications. A self-recovering unit is designed to increase reliability and allow periodic hard resets, an essential requirement for sensor networks. Radio monitoring circuitry is proposed to monitor incoming and outgoing transmissions, simplifying software debugging. We illustrate the performance of this wireless sensor platform on complex problems arising in smart cities, such as traffic flow monitoring, machine-learning-based flash flood monitoring or Kalman-filter based vehicle trajectory estimation. All design files have been uploaded and shared in an open science framework, and can be accessed from [1]. The hardware design is under CERN Open Hardware License v1.2.

  13. A high performance, low power computational platform for complex sensing operations in smart cities

    KAUST Repository

    Jiang, Jiming

    2017-02-02

    This paper presents a new wireless platform designed for an integrated traffic/flash flood monitoring system. The sensor platform is built around a 32-bit ARM Cortex M4 microcontroller and a 2.4 GHz 802.15.4 ISM compliant radio module. It can be interfaced with fixed traffic sensors, or receive data from vehicle transponders. This platform is specifically designed for solar-powered, low bandwidth, high computational performance wireless sensor network applications. A self-recovering unit is designed to increase reliability and allow periodic hard resets, an essential requirement for sensor networks. Radio monitoring circuitry is proposed to monitor incoming and outgoing transmissions, simplifying software debugging. We illustrate the performance of this wireless sensor platform on complex problems arising in smart cities, such as traffic flow monitoring, machine-learning-based flash flood monitoring or Kalman-filter based vehicle trajectory estimation. All design files have been uploaded and shared in an open science framework, and can be accessed from [1]. The hardware design is under CERN Open Hardware License v1.2.

  14. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  15. Computation at the edge of chaos: Phase transition and emergent computation

    International Nuclear Information System (INIS)

    Langton, C.

    1990-01-01

    In order for computation to emerge spontaneously and become an important factor in the dynamics of a system, the material substrate must support the primitive functions required for computation: the transmission, storage, and modification of information. Under what conditions might we expect physical systems to support such computational primitives? This paper presents research on Cellular Automata which suggests that the optimal conditions for the support of information transmission, storage, and modification, are achieved in the vicinity of a phase transition. We observe surprising similarities between the behaviors of computations and systems near phase-transitions, finding analogs of computational complexity classes and the Halting problem within the phenomenology of phase-transitions. We conclude that there is a fundamental connection between computation and phase-transitions, and discuss some of the implications for our understanding of nature if such a connection is borne out. 31 refs., 16 figs
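
    The lambda-parameter experiments behind this observation are easy to reproduce in miniature. A minimal sketch, assuming random rule tables for a 1D, two-state, radius-1 cellular automaton in which the fraction lambda of non-quiescent outputs is swept, with transient length (time until the configuration first repeats) as a crude stand-in for the paper's statistics:

        import numpy as np

        rng = np.random.default_rng(5)

        def random_rule(lam):
            # Random rule table: roughly a fraction lam of the 8
            # neighbourhood patterns map to the non-quiescent state 1.
            table = (rng.random(8) < lam).astype(np.uint8)
            table[0] = 0                    # keep the quiescent state quiescent
            return table

        def step(cells, table):
            idx = 4 * np.roll(cells, 1) + 2 * cells + np.roll(cells, -1)
            return table[idx]

        def transient_length(table, n=200, t_max=500):
            cells = rng.integers(0, 2, n, dtype=np.uint8)
            seen = set()
            for t in range(t_max):
                key = cells.tobytes()
                if key in seen:
                    return t                # fell onto a cycle or fixed point
                seen.add(key)
                cells = step(cells, table)
            return t_max                    # no repeat seen: long transient

        for lam in (0.1, 0.3, 0.5):
            runs = [transient_length(random_rule(lam)) for _ in range(20)]
            print(f"lambda = {lam}: mean transient ~ {np.mean(runs):.0f} steps")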

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  17. Computational Methods and Function Theory

    CERN Document Server

    Saff, Edward; Salinas, Luis; Varga, Richard

    1990-01-01

    The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.

  18. Computer aided operation of complex systems

    International Nuclear Information System (INIS)

    Goodstein, L.P.

    1985-09-01

    Advanced technology is having the effect that industrial systems are becoming more highly automated and do not rely on human intervention for the control of normally planned and/or predicted situations. Thus the importance of the operator has shifted from being a manual controller to becoming more of a systems manager and supervisory controller. At the same time, the use of advanced information technology in the control room and its potential impact on human-machine capabilities places additional demands on the designer. This report deals with work carried out to describe the plant-operator relationship in order to systematize the design and evaluation of suitable information systems in the control room. This design process starts with the control requirements of the plant and transforms them into corresponding sets of decision-making tasks with appropriate allocation of responsibilities between computer and operator. To make this cooperation more effective, appropriate forms of information display and access are identified. The conceptual work has been supported by experimental studies on a small-scale simulator. (author)

  19. Digital computers in action

    CERN Document Server

    Booth, A D

    1965-01-01

    Digital Computers in Action is an introduction to the basics of digital computers as well as their programming and various applications in fields such as mathematics, science, engineering, economics, medicine, and law. Other topics include engineering automation, process control, special purpose games-playing devices, machine translation and mechanized linguistics, and information retrieval. This book consists of 14 chapters and begins by discussing the history of computers, from the idea of performing complex arithmetical calculations to the emergence of a modern view of the structure of a ge

  20. Time complexity analysis for distributed memory computers: implementation of parallel conjugate gradient method

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Haan, M.J.; Hertzberger, L.O.; van Leeuwen, J.

    1991-01-01

    New developments in Computer Science, both hardware and software, offer researchers, such as physicists, unprecedented possibilities to solve their computationally intensive problems. However, full exploitation of e.g. new massively parallel computers, parallel languages or runtime environments

  1. Interactive design computation : A case study on quantum design paradigm

    NARCIS (Netherlands)

    Feng, H.

    2013-01-01

    The ever-increasing complexity of design processes fosters novel design computation models to be employed in architectural research and design in order to facilitate accurate data processing and refined decision making. These computation models have enabled designers to work with complex geometry

  2. Technical Development and Application of Soft Computing in Agricultural and Biological Engineering

    Science.gov (United States)

    Soft computing is a set of “inexact” computing techniques, which are able to model and analyze very complex problems. For these complex problems, more conventional methods have not been able to produce cost-effective, analytical, or complete solutions. Soft computing has been extensively studied and...

  3. Low-complexity computer simulation of multichannel room impulse responses

    NARCIS (Netherlands)

    Martínez Castañeda, J.A.

    2013-01-01

    The "telephone'' model has been, for the last one hundred thirty years, the base of modern telecommunications with virtually no changes in its fundamental concept. The arise of smaller and more powerful computing devices have opened new possibilities. For example, to build systems able to give to

  4. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at LHC has developed a baseline Computing Model addressing the needs of a computing system capable of operating in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on achieving maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community

  5. The role of real-time in biomedical science: a meta-analysis on computational complexity, delay and speedup.

    Science.gov (United States)

    Faust, Oliver; Yu, Wenwei; Rajendra Acharya, U

    2015-03-01

    The concept of real-time is very important, as it deals with the realizability of computer based health care systems. In this paper we review biomedical real-time systems with a meta-analysis on computational complexity (CC), delay (Δ) and speedup (Sp). During the review we found that, in the majority of papers, the term real-time is part of the thesis indicating that a proposed system or algorithm is practical. However, these papers were not considered for detailed scrutiny. Our detailed analysis focused on papers which support their claim of achieving real-time, with a discussion on CC or Sp. These papers were analyzed in terms of processing system used, application area (AA), CC, Δ, Sp, implementation/algorithm (I/A) and competition. The results show that the ideas of parallel processing and algorithm delay were only recently introduced and journal papers focus more on Algorithm (A) development than on implementation (I). Most authors compete on big O notation (O) and processing time (PT). Based on these results, we adopt the position that the concept of real-time will continue to play an important role in biomedical systems design. We predict that parallel processing considerations, such as Sp and algorithm scaling, will become more important. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Fast Computing for Distance Covariance

    OpenAIRE

    Huo, Xiaoming; Szekely, Gabor J.

    2014-01-01

    Distance covariance and distance correlation have been widely adopted in measuring dependence of a pair of random variables or random vectors. If the computation of distance covariance and distance correlation is implemented directly according to its definition, then its computational complexity is O($n^2$), which is a disadvantage compared to other faster methods. In this paper we show that the computation of distance covariance and distance correlation of real valued random variables can be...
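
    For context, the direct definition-based computation the abstract refers to is straightforward to write down. Below is a minimal numpy sketch (my own illustration of the O($n^2$) sample distance covariance for real-valued samples, not the paper's fast algorithm; the function name and test data are invented):

```python
import numpy as np

def dcov_direct(x, y):
    """Sample distance covariance of two real-valued samples, computed
    directly from the definition: O(n^2) time and memory."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])  # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()  # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return np.sqrt(np.mean(A * B))  # V-statistic, always non-negative

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
print(dcov_direct(x, x ** 2))                  # dependent: clearly positive
print(dcov_direct(x, rng.normal(size=1000)))   # independent: near zero
```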

  7. The Many Faces of Computational Artifacts

    DEFF Research Database (Denmark)

    Christensen, Lars Rune; Harper, Richard

    2016-01-01

    Building on data from fieldwork at a medical department, this paper focuses on the varied nature of computational artifacts in practice. It shows that medical practice relies on multiple heterogeneous computational artifacts that form complex constellations. In the hospital studied the computatio...

  8. Computational synthetic geometry

    CERN Document Server

    Bokowski, Jürgen

    1989-01-01

    Computational synthetic geometry deals with methods for realizing abstract geometric objects in concrete vector spaces. This research monograph considers a large class of problems from convexity and discrete geometry including constructing convex polytopes from simplicial complexes, vector geometries from incidence structures and hyperplane arrangements from oriented matroids. It turns out that algorithms for these constructions exist if and only if arbitrary polynomial equations are decidable with respect to the underlying field. Besides such complexity theorems a variety of symbolic algorithms are discussed, and the methods are applied to obtain new mathematical results on convex polytopes, projective configurations and the combinatorics of Grassmann varieties. Finally algebraic varieties characterizing matroids and oriented matroids are introduced providing a new basis for applying computer algebra methods in this field. The necessary background knowledge is reviewed briefly. The text is accessible to stud...

  9. Minimized state complexity of quantum-encoded cryptic processes

    Science.gov (United States)

    Riechers, Paul M.; Mahoney, John R.; Aghamohammadi, Cina; Crutchfield, James P.

    2016-05-01

    The predictive information required for proper trajectory sampling of a stochastic process can be more efficiently transmitted via a quantum channel than a classical one. This recent discovery allows quantum information processing to drastically reduce the memory necessary to simulate complex classical stochastic processes. It also points to a new perspective on the intrinsic complexity that nature must employ in generating the processes we observe. The quantum advantage increases with codeword length: the length of process sequences used in constructing the quantum communication scheme. In analogy with the classical complexity measure, statistical complexity, we use this reduced communication cost as an entropic measure of state complexity in the quantum representation. Previously difficult to compute, the quantum advantage is expressed here in closed form using spectral decomposition. This allows for efficient numerical computation of the quantum-reduced state complexity at all encoding lengths, including infinite. Additionally, it makes clear how finite-codeword reduction in state complexity is controlled by the classical process's cryptic order, and it allows asymptotic analysis of infinite-cryptic-order processes.

  10. A new decision sciences for complex systems.

    Science.gov (United States)

    Lempert, Robert J

    2002-05-14

    Models of complex systems can capture much useful information but can be difficult to apply to real-world decision-making because the type of information they contain is often inconsistent with that required for traditional decision analysis. New approaches, which use inductive reasoning over large ensembles of computational experiments, now make possible systematic comparison of alternative policy options using models of complex systems. This article describes Computer-Assisted Reasoning, an approach to decision-making under conditions of deep uncertainty that is ideally suited to applying complex systems to policy analysis. The article demonstrates the approach on the policy problem of global climate change, with a particular focus on the role of technology policies in a robust, adaptive strategy for greenhouse gas abatement.

  11. Analytical Computation of Information Rate for MIMO Channels

    Directory of Open Access Journals (Sweden)

    Jinbao Zhang

    2017-01-01

    Full Text Available Information rate for discrete signaling constellations is a significant performance measure. However, the computational complexity makes information rate rather difficult to analyze for arbitrary fading multiple-input multiple-output (MIMO) channels. An analytical method is proposed to compute information rate, which is characterized by considerable accuracy, reasonable complexity, and concise representation. These features improve the accuracy of performance analysis based on the information-rate criterion.
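
    To see why brute-force evaluation is costly, note that the information rate of a uniformly used discrete constellation has no closed form and is usually estimated by Monte Carlo averaging. Here is a hedged sketch of that standard estimator for the single-antenna AWGN special case (my own illustration, not the paper's analytical method; all names and parameters are invented):

```python
import numpy as np

def mi_awgn_mc(constellation, snr_db, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the information rate I(X;Y) in bits for a
    uniformly used discrete constellation on a complex AWGN channel."""
    rng = np.random.default_rng(seed)
    s = np.asarray(constellation, complex)
    s = s / np.sqrt(np.mean(np.abs(s) ** 2))   # normalize to unit energy
    M = len(s)
    sigma2 = 10.0 ** (-snr_db / 10.0)          # noise variance at this SNR
    x = rng.choice(s, size=n_samples)
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_samples)
                                   + 1j * rng.standard_normal(n_samples))
    y = x + noise
    # I = log2(M) - E[ log2 sum_j exp(-(|y - s_j|^2 - |y - x|^2) / sigma2) ]
    d = np.abs(y[:, None] - s[None, :]) ** 2
    d0 = np.abs(y - x) ** 2
    log_sum = np.log2(np.sum(np.exp(-(d - d0[:, None]) / sigma2), axis=1))
    return np.log2(M) - np.mean(log_sum)

qpsk = np.exp(1j * np.pi * (2 * np.arange(4) + 1) / 4)
print(mi_awgn_mc(qpsk, snr_db=5.0))   # approaches 2 bits as SNR grows
```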

  12. Correlation/Communication complexity of generating bipartite states

    OpenAIRE

    Jain, Rahul; Shi, Yaoyun; Wei, Zhaohui; Zhang, Shengyu

    2012-01-01

    We study the correlation complexity (or equivalently, the communication complexity) of generating a bipartite quantum state $\rho$. When $\rho$ is a pure state, we completely characterize the complexity for approximately generating $\rho$ by a corresponding approximate rank, closing a gap left in Ambainis, Schulman, Ta-Shma, Vazirani and Wigderson (SIAM Journal on Computing, 32(6):1570-1585, 2003). When $\rho$ is a classical distribution $P(x,y)$, we tightly characterize the complexity of gen...

  13. Fusion in computer vision understanding complex visual content

    CERN Document Server

    Ionescu, Bogdan; Piatrik, Tomas

    2014-01-01

    This book presents a thorough overview of fusion in computer vision, from an interdisciplinary and multi-application viewpoint, describing successful approaches, evaluated in the context of international benchmarks that model realistic use cases. Features: examines late fusion approaches for concept recognition in images and videos; describes the interpretation of visual content by incorporating models of the human visual system with content understanding methods; investigates the fusion of multi-modal features of different semantic levels, as well as results of semantic concept detections, fo
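
    To make the notion of late fusion concrete: each modality-specific model scores the same candidate concepts, and the scores are combined afterwards. A minimal hedged sketch (invented scores and weight; real systems learn or validate the fusion weights):

```python
import numpy as np

# Scores assigned to the same three candidate concepts by two detectors
# (values and the fusion weight are invented for illustration).
visual_scores = np.array([0.9, 0.2, 0.4])
text_scores = np.array([0.6, 0.1, 0.8])
w = 0.7                                   # trust the visual model more
fused = w * visual_scores + (1 - w) * text_scores
print(fused, "-> top concept:", int(fused.argmax()))
```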

  14. A new entropy based method for computing software structural complexity

    CERN Document Server

    Roca, J L

    2002-01-01

    In this paper a new methodology for the evaluation of software structural complexity is described. It is based on the entropy evaluation of the random uniform response function associated with the so-called software characteristic function SCF. The behavior of the SCF with the different software structures and their relationship with the number of inherent errors is investigated. It is also investigated how the entropy concept can be used to evaluate the complexity of a software structure, considering the SCF as a canonical representation of the graph associated with the control flow diagram. The functions, parameters and algorithms that allow this evaluation to be carried out are also introduced. The analytic phase is followed by an experimental phase verifying the consistency of the proposed metric and its boundary conditions. The conclusion is that the degree of software structural complexity can be measured as the entropy of the random uniform response function of the SCF. That entropy is in direct relation...

  15. Complex 3D Vortex Lattice Formation by Phase-Engineered Multiple Beam Interference

    Directory of Open Access Journals (Sweden)

    Jolly Xavier

    2012-01-01

    Full Text Available We present computational results on the formation of diverse complex 3D vortex lattices by a designed superposition of multiple plane waves. Special combinations of multiples of three noncoplanar plane waves with a designed relative phase shift between one another are perturbed by a nonsingular beam to generate various complex 3D vortex lattice structures. The formation of complex gyrating lattice structures carrying designed vortices by means of relatively phase-engineered plane waves is also computationally investigated. The generated structures are configured with both periodic and transversely quasicrystallographic bases, while these whirling complex lattices possess a long-range order of designed symmetry in a given plane. Various computational analytical tools are used to verify the presence of the engineered geometry of vortices in these complex 3D vortex lattices.
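
    The basic mechanism, vortices appearing in the interference of a few phase-shifted plane waves, is easy to reproduce numerically. A minimal 2D sketch (my own toy version with three waves at 120 degrees, not the paper's 3D configuration; all parameters are arbitrary) that also counts the phase singularities by their winding number:

```python
import numpy as np

# Superpose three unit plane waves with wavevectors at 120 degrees and a
# designed relative phase shift; the interference carries a lattice of
# optical vortices (phase singularities).
N, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(x, x)
field = np.zeros((N, N), complex)
for m, phase in enumerate((0.0, 2 * np.pi / 3, 4 * np.pi / 3)):
    kx, ky = np.cos(2 * np.pi * m / 3), np.sin(2 * np.pi * m / 3)
    field += np.exp(1j * (kx * X + ky * Y + phase))

def wrap(d):
    """Wrap phase differences into (-pi, pi]."""
    return (d + np.pi) % (2 * np.pi) - np.pi

# Topological charge per grid plaquette: the wrapped phase winds by a
# multiple of 2*pi around each phase singularity.
ph = np.angle(field)
winding = (wrap(ph[1:, :-1] - ph[:-1, :-1]) + wrap(ph[1:, 1:] - ph[1:, :-1])
           + wrap(ph[:-1, 1:] - ph[1:, 1:]) + wrap(ph[:-1, :-1] - ph[:-1, 1:]))
charges = np.round(winding / (2 * np.pi)).astype(int)
print("vortices on the grid:", np.count_nonzero(charges))
```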

  16. Increasing efficiency of job execution with resource co-allocation in distributed computer systems

    OpenAIRE

    Cankar, Matija

    2014-01-01

    The field of distributed computer systems, while not new in computer science, still attracts considerable interest in both industry and academia. More powerful computers, faster and more ubiquitous networks, and complex distributed applications are accelerating the growth of distributed computing. Large numbers of computers interconnected in a single network provide additional computing power to users whenever required. Such systems are, however, expensive and complex to manage, which ca...

  17. Computational models of neuromodulation.

    Science.gov (United States)

    Fellous, J M; Linster, C

    1998-05-15

    Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulations on their computational and functional roles rather than on anatomical or chemical criteria. We review the main framework in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.

  18. Low complexity iterative MLSE equalization in highly spread underwater acoustic channels

    CSIR Research Space (South Africa)

    Myburgh, HC

    2009-05-01

    Full Text Available methods. The superior computational complexity of the proposed equalizer is due to the high parallelism and high level of neuron interconnection of its foundational neural network structure. I. INTRODUCTION In recent years, much attention has been... are practically infeasible, as their computational complexities are exponentially related to the number of interfering symbols, rendering them computationally infeasible for UAC equalization. Attention has therefore been drawn to developing computationally...

  19. Distributed Information and Control system reliability enhancement by fog-computing concept application

    Science.gov (United States)

    Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya

    2018-03-01

    The paper focuses on the information and control system reliability issue. The authors propose a new complex approach to information and control system reliability enhancement through the application of fog-computing concept elements. The proposed approach consists of a complex of optimization problems to be solved: estimating the computational load that can be shifted to the edge of the network and the fog layer, distributing computations among the data-processing elements, and distributing computations among the sensors. The problem formulations, together with some simulation results and discussion, are presented within this paper.

  20. Recent developments in complex scaling

    International Nuclear Information System (INIS)

    Rescigno, T.N.

    1980-01-01

    Some recent developments in the use of complex basis function techniques to study resonances, as well as certain types of non-resonant scattering phenomena, are discussed. Complex scaling techniques and other closely related methods have continued to attract the attention of computational physicists and chemists and have now reached a point of development where meaningful calculations on many-electron atoms and molecules are beginning to appear feasible
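
    The core trick, rotating the coordinate as x -> x exp(i*theta) so that resonance eigenvalues are exposed in the lower half of the complex energy plane, can be demonstrated on a grid in a few lines. A hedged toy sketch (a minimal sketch assuming a 1D model potential and finite-difference kinetic energy; the potential, grid, and cutoffs are my own arbitrary choices, not from the paper):

```python
import numpy as np

# Uniform complex scaling x -> x * exp(i*theta): the kinetic term acquires
# a factor exp(-2i*theta) and the (analytic) potential is evaluated at the
# rotated coordinate.  Resonances show up as complex eigenvalues in the
# lower half-plane that stay (nearly) fixed as theta is varied.
N, L = 400, 30.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]

def V(z):
    # arbitrary toy barrier potential supporting quasi-bound states
    return (0.5 * z ** 2 - 0.8) * np.exp(-0.1 * z ** 2)

# 3-point finite-difference second derivative
T = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h ** 2

for theta in (0.2, 0.3):
    H = -0.5 * np.exp(-2j * theta) * T + np.diag(V(x * np.exp(1j * theta)))
    E = np.linalg.eigvals(H)
    low = np.sort_complex(E[np.abs(E) < 2.0])   # low-lying spectrum only
    print(f"theta={theta}:", low[:4])
```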

  1. Computational science: Emerging opportunities and challenges

    International Nuclear Information System (INIS)

    Hendrickson, Bruce

    2009-01-01

    In the past two decades, computational methods have emerged as an essential component of the scientific and engineering enterprise. A diverse assortment of scientific applications has been simulated and explored via advanced computational techniques. Computer vendors have built enormous parallel machines to support these activities, and the research community has developed new algorithms and codes, and agreed on standards to facilitate ever more ambitious computations. However, this track record of success will be increasingly hard to sustain in coming years. Power limitations constrain processor clock speeds, so further performance improvements will need to come from ever more parallelism. This higher degree of parallelism will require new thinking about algorithms, programming models, and architectural resilience. Simultaneously, cutting edge science increasingly requires more complex simulations with unstructured and adaptive grids, and multi-scale and multi-physics phenomena. These new codes will push existing parallelization strategies to their limits and beyond. Emerging data-rich scientific applications are also in need of high performance computing, but their complex spatial and temporal data access patterns do not perform well on existing machines. These interacting forces will reshape high performance computing in the coming years.

  2. Computational Approaches to Nucleic Acid Origami.

    Science.gov (United States)

    Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo

    2015-10-12

    Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. With all these advantages, the development of RNA origami lags behind DNA origami by a large gap. Furthermore, although computational methods have proven to be effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms.

  3. Integrating surrogate models into subsurface simulation framework allows computation of complex reactive transport scenarios

    Science.gov (United States)

    De Lucia, Marco; Kempka, Thomas; Jatnieks, Janis; Kühn, Michael

    2017-04-01

    Reactive transport simulations - where geochemical reactions are coupled with hydrodynamic transport of reactants - are extremely time consuming and suffer from significant numerical issues. Given the high uncertainties inherently associated with the geochemical models, which also constitute the major computational bottleneck, such requirements may seem inappropriate and probably constitute the main limitation for their wide application. A promising way to ease and speed up such coupled simulations is to employ statistical surrogates instead of "full-physics" geochemical models [1]. Data-driven surrogates are reduced models obtained from a set of pre-calculated "full physics" simulations, capturing their principal features while being extremely fast to compute. Model reduction of course comes at the price of a precision loss; however, this appears justified in the presence of large uncertainties regarding the parametrization of geochemical processes. This contribution illustrates the integration of surrogates into the flexible simulation framework currently being developed by the authors' research group [2]. The high-level language of choice for obtaining and dealing with surrogate models is R, which profits from state-of-the-art methods for statistical analysis of large simulation ensembles. A stand-alone advective mass transport module was furthermore developed in order to add such capability to any multiphase finite volume hydrodynamic simulator within the simulation framework. We present 2D and 3D case studies benchmarking the performance of surrogates and "full physics" chemistry in scenarios pertaining to the assessment of geological subsurface utilization. [1] Jatnieks, J., De Lucia, M., Dransch, D., Sips, M.: "Data-driven surrogate model approach for improving the performance of reactive transport simulations.", Energy Procedia 97, 2016, p. 447-453. [2] Kempka, T., Nakaten, B., De Lucia, M., Nakaten, N., Otto, C., Pohl, M., Chabab [Tillner], E., Kühn, M

  4. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    Science.gov (United States)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need to couple two different temporal scales, given that in hydrosystem modelling monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, the key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms (see the sketch below); and (c) the substantial
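
    Item (b), linearizing each time step's combined water-energy allocation and handing it to a fast linear solver, can be made concrete with a small example. A hedged sketch (my own toy formulation with invented coefficients, using scipy's general-purpose linprog rather than the specialized network algorithms the abstract mentions):

```python
from scipy.optimize import linprog

# One linearized time step: choose releases r1, r2 from two reservoirs to
# meet a water demand and an energy demand at minimum penalty cost.
# e1, e2 are energy yields per unit release (the linearized hydropower term).
e1, e2 = 0.8, 1.1
c = [1.0, 1.5]                 # penalty cost per unit of release
A_ub = [[-1.0, -1.0],          # -(r1 + r2)       <= -water_demand
        [-e1, -e2]]            # -(e1 r1 + e2 r2) <= -energy_demand
b_ub = [-50.0, -60.0]
bounds = [(0, 80), (0, 40)]    # storage-dependent release limits
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)
```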

  5. Computational Ocean Acoustics

    CERN Document Server

    Jensen, Finn B; Porter, Michael B; Schmidt, Henrik

    2011-01-01

    Since the mid-1970s, the computer has played an increasingly pivotal role in the field of ocean acoustics. Faster and less expensive than actual ocean experiments, and capable of accommodating the full complexity of the acoustic problem, numerical models are now standard research tools in ocean laboratories. The progress made in computational ocean acoustics over the last thirty years is summed up in this authoritative and innovatively illustrated new text. Written by some of the field's pioneers, all Fellows of the Acoustical Society of America, Computational Ocean Acoustics presents the latest numerical techniques for solving the wave equation in heterogeneous fluid–solid media. The authors discuss various computational schemes in detail, emphasizing the importance of theoretical foundations that lead directly to numerical implementations for real ocean environments. To further clarify the presentation, the fundamental propagation features of the techniques are illustrated in color. Computational Ocean A...

  6. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity in recent years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
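
    The simplest member of this class, the ABC rejection sampler, makes the likelihood bypass explicit: draw parameters from the prior, simulate data, and keep draws whose summary statistics land close to the observed ones. A minimal sketch (my own illustration with an invented Gaussian-mean toy problem; function names and tolerances are arbitrary):

```python
import numpy as np

def abc_rejection(data, prior_sampler, simulator, summary, eps, n_draws, rng):
    """Basic ABC rejection sampler: keep parameter draws whose simulated
    summary statistic lands within eps of the observed one."""
    s_obs = summary(data)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)
        s_sim = summary(simulator(theta, len(data), rng))
        if abs(s_sim - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

rng = np.random.default_rng(2)
data = rng.normal(3.0, 1.0, size=100)   # "observed" data with true mean 3
posterior = abc_rejection(
    data,
    prior_sampler=lambda r: r.uniform(-10.0, 10.0),   # flat prior on the mean
    simulator=lambda mu, n, r: r.normal(mu, 1.0, size=n),
    summary=lambda d: d.mean(),
    eps=0.1, n_draws=20_000, rng=rng)
print(len(posterior), posterior.mean())
```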

  7. On several computer-oriented studies

    International Nuclear Information System (INIS)

    Takahashi, Ryoichi

    1982-01-01

    To utilize fully digital techniques for solving various difficult problems, nuclear engineers have recourse to computer-oriented approaches. The current trends in such fields as optimization theory, control system theory and computational fluid dynamics reflect the ability to use computers to obtain numerical solutions to complex problems. Special-purpose computers will be used as an integral part of the solving system to process large amounts of data, to implement a control law and even to support decision-making. Many problem-solving systems designed in the future will incorporate special-purpose computers as system components. The optimum use of computer systems is discussed: why energy models, energy databases and big computers are used; why economic process computers will be allocated to nuclear plants in the future; and why the supercomputer should be demonstrated at once. (Mori, K.)

  8. Models of optical quantum computing

    Directory of Open Access Journals (Sweden)

    Krovi Hari

    2017-03-01

    Full Text Available I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  9. Efficient computation of argumentation semantics

    CERN Document Server

    Liao, Beishui

    2013-01-01

    Efficient Computation of Argumentation Semantics addresses argumentation semantics and systems, introducing readers to cutting-edge decomposition methods that drive increasingly efficient logic computation in AI and intelligent systems. Such complex and distributed systems are increasingly used in the automation and transportation systems field, and particularly autonomous systems, as well as more generic intelligent computation research. The Series in Intelligent Systems publishes titles that cover state-of-the-art knowledge and the latest advances in research and development in intelligen

  10. ASIC For Complex Fixed-Point Arithmetic

    Science.gov (United States)

    Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.

    1995-01-01

    Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.

  11. Complexity of formation in holography

    International Nuclear Information System (INIS)

    Chapman, Shira; Marrochio, Hugo; Myers, Robert C.

    2017-01-01

    It was recently conjectured that the quantum complexity of a holographic boundary state can be computed by evaluating the gravitational action on a bulk region known as the Wheeler-DeWitt patch. We apply this complexity=action duality to evaluate the ‘complexity of formation’ (DOI: 10.1103/PhysRevLett.116.191301; 10.1103/PhysRevD.93.086006), i.e. the additional complexity arising in preparing the entangled thermofield double state with two copies of the boundary CFT compared to preparing the individual vacuum states of the two copies. We find that for boundary dimensions d>2, the difference in the complexities grows linearly with the thermal entropy at high temperatures. For the special case d=2, the complexity of formation is a fixed constant, independent of the temperature. We compare these results to those found using the complexity=volume duality.

  12. Complexity of formation in holography

    Energy Technology Data Exchange (ETDEWEB)

    Chapman, Shira [Perimeter Institute for Theoretical Physics,Waterloo, ON N2L 2Y5 (Canada); Marrochio, Hugo [Perimeter Institute for Theoretical Physics,Waterloo, ON N2L 2Y5 (Canada); Department of Physics & Astronomy and Guelph-Waterloo Physics Institute,University of Waterloo, Waterloo, ON N2L 3G1 (Canada); Myers, Robert C. [Perimeter Institute for Theoretical Physics,Waterloo, ON N2L 2Y5 (Canada)

    2017-01-16

    It was recently conjectured that the quantum complexity of a holographic boundary state can be computed by evaluating the gravitational action on a bulk region known as the Wheeler-DeWitt patch. We apply this complexity=action duality to evaluate the ‘complexity of formation’ (DOI: 10.1103/PhysRevLett.116.191301; 10.1103/PhysRevD.93.086006), i.e. the additional complexity arising in preparing the entangled thermofield double state with two copies of the boundary CFT compared to preparing the individual vacuum states of the two copies. We find that for boundary dimensions d>2, the difference in the complexities grows linearly with the thermal entropy at high temperatures. For the special case d=2, the complexity of formation is a fixed constant, independent of the temperature. We compare these results to those found using the complexity=volume duality.

  13. SUPERCOMPUTER SIMULATION OF CRITICAL PHENOMENA IN COMPLEX SOCIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Petrus M.A. Sloot

    2014-09-01

    Full Text Available The paper describes the problem of computer simulation of critical phenomena in complex social systems on petascale computing systems within the framework of the complex networks approach. A three-layer system of nested models of complex networks is proposed, including an aggregated analytical model to identify critical phenomena, a detailed model of individualized network dynamics, and a model to adjust the topological structure of a complex network. A scalable parallel algorithm covering all layers of complex network simulation is proposed. Performance of the algorithm is studied on different supercomputing systems. The issues of software and information infrastructure for complex network simulation are discussed, including the organization of distributed calculations, crawling the data in social networks, and results visualization. Applications of the developed methods and technologies are considered, including simulation of criminal network disruption, fast rumor spreading in social networks, the evolution of financial networks, and epidemic spreading.

  14. Energy-density field approach for low- and medium-frequency vibroacoustic analysis of complex structures using a statistical computational model

    Science.gov (United States)

    Kassem, M.; Soize, C.; Gagliardini, L.

    2009-06-01

    In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.

  15. Complex Networks IX

    CERN Document Server

    Coronges, Kate; Gonçalves, Bruno; Sinatra, Roberta; Vespignani, Alessandro (Proceedings of the 9th Conference on Complex Networks, CompleNet 2018)

    2018-01-01

    This book aims to bring together researchers and practitioners working across domains and research disciplines to measure, model, and visualize complex networks. It collects the works presented at the 9th International Conference on Complex Networks (CompleNet) 2018 in Boston, MA in March, 2018. With roots in the physical, information and social sciences, the study of complex networks provides a formal set of mathematical methods, computational tools and theories to describe, prescribe and predict dynamics and behaviors of complex systems. Despite their diversity, whether the systems are made up of physical, technological, informational, or social networks, they share many common organizing principles and thus can be studied with similar approaches. This book provides a view of the state of the art in this dynamic field and covers topics such as group decision-making, brain and cellular connectivity, network controllability and resiliency, online activism, recommendation systems, and cyber security.

  16. Combustion & Laser Diagnostics Research Complex (CLDRC)

    Data.gov (United States)

    Federal Laboratory Consortium — Description: The Combustion and Laser Diagnostics Research Complex (CLDRC) supports the experimental and computational study of fundamental combustion phenomena to...

  17. Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.

    Science.gov (United States)

    Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F

    2011-03-01

    This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. On the other hand, they can only be applied to the computation of geometric moments of homogeneous objects. This advantage and restriction is shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N with respect to previously proposed exact algorithms, from N^9 to N^6. The approximate series algorithm appears as a power series in the ratio between triangle size and object size, which can be truncated at any desired degree. The higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N^3. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with a computational complexity of N^4, while the previously proposed algorithm is of order N^6. The error introduced by the proposed approximate algorithms is evaluated on different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
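
    That surface integrals suffice for moments of homogeneous objects can already be seen at the lowest orders. A hedged sketch (my own minimal illustration via signed tetrahedra, not the paper's exact algorithm; all names are invented) computing the zeroth- and first-order geometric moments, i.e. volume and centroid, of a closed triangle mesh:

```python
import numpy as np

def volume_and_centroid(vertices, triangles):
    """Zeroth- and first-order geometric moments (volume, centroid) of a
    closed, consistently outward-oriented triangle mesh, accumulated from
    signed tetrahedra spanned by each triangle and the origin."""
    v = np.asarray(vertices, float)
    t = np.asarray(triangles, int)
    a, b, c = v[t[:, 0]], v[t[:, 1]], v[t[:, 2]]
    signed_vol = np.einsum('ij,ij->i', a, np.cross(b, c)) / 6.0
    volume = signed_vol.sum()
    # tetrahedron centroid is (0 + a + b + c) / 4
    first_moment = ((a + b + c) / 4.0 * signed_vol[:, None]).sum(axis=0)
    return volume, first_moment / volume

# Unit tetrahedron: volume 1/6, centroid (1/4, 1/4, 1/4).
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
print(volume_and_centroid(verts, faces))
```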

  18. Computational disease modeling – fact or fiction?

    Directory of Open Access Journals (Sweden)

    Stephan Klaas

    2009-06-01

    Full Text Available Abstract Background Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. The Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results The workshop, "ESF Exploratory Workshop on Computational Disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.

  19. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
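
    To make the mapping problem concrete, consider assigning a chain of m pipelined modules to n processors as contiguous blocks so that the bottleneck (maximum) processor load is minimized. The sketch below is my own naive O(m^2 n) dynamic program with invented weights, far slower than the improved algorithms the abstract describes, but it shows the problem those algorithms solve:

```python
from functools import lru_cache

def map_chain(weights, n_proc):
    """Map a chain of pipelined modules onto n_proc processors as contiguous
    blocks, minimizing the maximum (bottleneck) processor load."""
    m = len(weights)
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)

    @lru_cache(maxsize=None)
    def best(i, p):
        # best achievable bottleneck for modules i..m-1 using p processors
        if p == 1:
            return prefix[m] - prefix[i]
        if i == m:
            return 0
        return min(max(prefix[j] - prefix[i], best(j, p - 1))
                   for j in range(i + 1, m + 1))

    return best(0, n_proc)

print(map_chain([4, 1, 3, 2, 6, 2], 3))  # -> 8, e.g. [4,1] | [3,2] | [6,2]
```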

  20. A summary of computational experience at GE Aircraft Engines for complex turbulent flows in gas turbines

    Science.gov (United States)

    Zerkle, Ronald D.; Prakash, Chander

    1995-01-01

    This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because of its modest computational cost. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.

  1. An Algorithm for Fast Computation of 3D Zernike Moments for Volumetric Images

    Directory of Open Access Journals (Sweden)

    Khalid M. Hosny

    2012-01-01

    Full Text Available An algorithm was proposed for very fast and low-complexity computation of three-dimensional Zernike moments. The 3D Zernike moments were expressed in terms of exact 3D geometric moments, where the latter are computed exactly through the mathematical integration of the monomial terms over the digital image/object voxels. A new symmetry-based method was proposed to compute 3D Zernike moments with an 87% reduction in the computational complexity. A fast 1D cascade algorithm was also employed to add more complexity reduction. A comparison with existing methods was performed, where the numerical experiments and the complexity analysis demonstrated the efficiency of the proposed method, especially with images and objects of large size.

  2. Communication Complexity A treasure house of lower bounds

    Indian Academy of Sciences (India)

    Prahladh Harsha TIFR

    Applications: data structures, VLSI design, time-space tradeoffs, circuit complexity, streaming, auctions, combinatorial optimization, and more. The randomized communication complexity of INTER is Ω(n); as a consequence, there is no parallelizable monotone circuit that computes a matching in a given graph ...

  3. Wind power systems. Applications of computational intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Lingfeng [Toledo Univ., OH (United States). Dept. of Electrical Engineering and Computer Science; Singh, Chanan [Texas A and M Univ., College Station, TX (United States). Electrical and Computer Engineering Dept.; Kusiak, Andrew (eds.) [Iowa Univ., Iowa City, IA (United States). Mechanical and Industrial Engineering Dept.

    2010-07-01

    Renewable energy sources such as wind power have attracted much attention because they are environmentally friendly, do not produce carbon dioxide and other emissions, and can enhance a nation's energy security. For example, increasingly significant amounts of wind power are being integrated into conventional power grids. Therefore, it is necessary to address various important and challenging issues related to wind power systems, which are significantly different from traditional generation systems. This book is a resource for engineers, practitioners, and decision-makers interested in studying or using the power of computational-intelligence-based algorithms in handling various important problems in wind power systems at the levels of power generation, transmission, and distribution. Researchers have been developing biologically inspired algorithms in a wide variety of complex large-scale engineering domains. Distinguished from the traditional analytical methods, the new methods usually accomplish the task through their computationally efficient mechanisms. Computational intelligence methods such as evolutionary computation, neural networks, and fuzzy systems have attracted much attention in electric power systems. Meanwhile, modern electric power systems are becoming more and more complex in order to meet the demands of the growing electricity market. In particular, the grid complexity is continuously increased by the integration of intermittent wind power as well as the current restructuring efforts in the electricity industry. Quite often, the traditional analytical methods become less efficient or even unable to handle this increased complexity. As a result, it is natural to apply computational intelligence as a powerful tool to deal with various important and pressing problems in current wind power systems. This book presents the state-of-the-art development in the field of computational intelligence applied to wind power systems by reviewing the most up

  4. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results provide new foundational elements for the understanding of the complexity of real brain circuits. PMID:24727866
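
    The attractor dynamics underlying such a classification can be explored directly for small networks. A hedged sketch (a minimal, made-up 3-node Boolean network under synchronous update, not the basal ganglia-thalamocortical model; exhaustive state-space search only works for small networks):

```python
import itertools

def update(state):
    # a made-up 3-node synchronous Boolean network (not the paper's model)
    a, b, c = state
    return (b and not c, a or c, not a)

attractors = set()
for s0 in itertools.product([False, True], repeat=3):
    seen, s = {}, s0
    while s not in seen:          # iterate until a state repeats
        seen[s] = len(seen)
        s = update(s)
    cycle_states = sorted(seen.items(), key=lambda kv: kv[1])[seen[s]:]
    cycle = [st for st, _ in cycle_states]
    k = cycle.index(min(cycle))   # canonical rotation: count each cycle once
    attractors.add(tuple(cycle[k:] + cycle[:k]))

for cyc in sorted(attractors, key=len):
    print(f"attractor of length {len(cyc)}: {cyc}")
```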

  5. Computational physics of the mind

    Science.gov (United States)

    Duch, Włodzisław

    1996-08-01

    In the XIX century and earlier, physicists such as Newton, Mayer, Hooke, Helmholtz and Mach were actively engaged in research on psychophysics, trying to relate psychological sensations to the intensities of physical stimuli. Computational physics allows the simulation of complex neural processes, giving a chance to answer not only the original psychophysical questions but also to create models of the mind. In this paper several approaches relevant to modeling of the mind are outlined. Since direct modeling of brain functions is rather limited due to the complexity of such models, a number of approximations are introduced. The path from the brain, or computational neurosciences, to the mind, or cognitive sciences, is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental problems in understanding of the mind seem to arise. From a computational point of view, realistic models require massively parallel architectures.

  6. Synthetic biology: engineering molecular computers

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Complicated systems cannot survive the rigors of a chaotic environment without balancing mechanisms that sense, decide upon and counteract the exerted disturbances. Especially so with living organisms, forced by competition to incredible complexities, which escalate their need for self-control. Therefore, they compute. Can we harness biological mechanisms to create artificial computing systems? Biology offers several levels of design abstraction: molecular machines, cells, organisms... ranging from the more easily defined to the more inherently complex. At the bottom of this stack we find the nucleic acids, RNA and DNA, with their digital structure and relatively precise interactions. They are central enablers of designing artificial biological systems, at the confluence of engineering and biology, in what we call synthetic biology. In the first part, let us follow their trail towards an overview of building computing machines with molecules -- and in the second part, take the case study of iGEM Greece 201...

  7. Computer System Analysis for Decommissioning Management of Nuclear Reactor

    International Nuclear Information System (INIS)

    Nurokhim; Sumarbagiono

    2008-01-01

    Nuclear reactor decommissioning is a complex activity that should be planned and implemented carefully. A computer-based system needs to be developed to support nuclear reactor decommissioning. Some computer systems have been studied for the management of nuclear power reactors. The software systems COSMARD and DEXUS, developed in Japan, and IDMT, developed in Italy, were used as models for analysis and discussion. It can be concluded that a computer system for nuclear reactor decommissioning management is quite complex, involving computer codes for radioactive inventory database calculation, calculation modules for the stages of the decommissioning phases, and spatial data system development for virtual reality. (author)

  8. A Low Complexity Discrete Radiosity Method

    OpenAIRE

    Chatelier , Pierre Yves; Malgouyres , Rémy

    2006-01-01

    Rather than using Monte Carlo sampling techniques or patch projections to compute radiosity, it is possible to use a discretization of a scene into voxels and perform some discrete geometry calculus to quickly compute visibility information. In such a framework, the radiosity method may be as precise as a patch-based radiosity using hemicube computation for form-factors, but it lowers the overall theoretical complexity to an O(N log N) + O(N), where the O(N) is largel...

  9. Computer code MLCOSP for multiple-correlation and spectrum analysis with a hybrid computer

    International Nuclear Information System (INIS)

    Oguma, Ritsuo; Fujii, Yoshio; Usui, Hozumi; Watanabe, Koichi

    1975-10-01

    Usage of the computer code MLCOSP (Multiple Correlation and Spectrum), developed for a hybrid computer installed in JAERI, is described. Functions of the hybrid computer and its terminal devices are utilized ingeniously in the code to reduce the complexity of the data handling that occurs in the analysis of multivariable experimental data, and to keep the analysis in perspective. Features of the code are as follows: experimental data can be fed to the digital computer through the analog part of the hybrid computer by connecting a data recorder; the computed results are displayed in figures, and hardcopies are taken when necessary; messages from the code are shown on the terminal, so man-machine communication is possible; and, further, data can be put in through a keyboard, so case studies based on the results of the analysis are possible. (auth.)

  10. Quantum control with noisy fields: computational complexity versus sensitivity to noise

    International Nuclear Information System (INIS)

    Kallush, S; Khasin, M; Kosloff, R

    2014-01-01

    A closed quantum system is defined as completely controllable if an arbitrary unitary transformation can be executed using the available controls. In practice, control fields are a source of unavoidable noise, which has to be suppressed to retain controllability. Can one design control fields such that the effect of noise is negligible on the time-scale of the transformation? This question is intimately related to the fundamental problem of a connection between the computational complexity of the control problem and the sensitivity of the controlled system to noise. The present study considers a paradigm of control, where the Lie-algebraic structure of the control Hamiltonian is fixed, while the size of the system increases with the dimension of the Hilbert space representation of the algebra. We find two types of control tasks, easy and hard. Easy tasks are characterized by a small variance of the evolving state with respect to the control operators. They are relatively immune to noise and the control field is easy to find. Hard tasks have a large variance, are sensitive to noise and the control field is hard to find. The influence of noise increases with the size of the system, which is measured by the scaling factor N of the largest weight of the representation. For fixed time and control field the ability to control degrades as O(N) for easy tasks and as O(N^2) for hard tasks. As a consequence, even in the most favorable estimate, for large quantum systems, generic noise in the controls dominates for a typical class of target transformations, i.e. complete controllability is destroyed by noise. (paper)

  11. Computational Modeling in Tissue Engineering

    CERN Document Server

    2013-01-01

    One of the major challenges in tissue engineering is the translation of biological knowledge on complex cell and tissue behavior into a predictive and robust engineering process. Mastering this complexity is an essential step towards clinical applications of tissue engineering. This volume discusses computational modeling tools that allow studying the biological complexity in a more quantitative way. More specifically, computational tools can help in:  (i) quantifying and optimizing the tissue engineering product, e.g. by adapting scaffold design to optimize micro-environmental signals or by adapting selection criteria to improve homogeneity of the selected cell population; (ii) quantifying and optimizing the tissue engineering process, e.g. by adapting bioreactor design to improve quality and quantity of the final product; and (iii) assessing the influence of the in vivo environment on the behavior of the tissue engineering product, e.g. by investigating vascular ingrowth. The book presents examples of each...

  12. A computational framework for modeling targets as complex adaptive systems

    Science.gov (United States)

    Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh

    2017-05-01

    Modeling large military targets is a challenge, as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and, more importantly, the various emergent behaviors displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real-world information. To overcome these challenges, we present a probabilistic reasoning network based framework called the complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well-defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, the framework provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we describe how our framework can be used for modeling targets, with a focus on methodologies for quantifying NCO performance metrics.

  13. Fast generation of computer-generated holograms using wavelet shrinkage.

    Science.gov (United States)

    Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2017-01-09

    Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values to express approximated complex amplitudes using only a few representative wavelet coefficients.
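
    As a minimal numerical sketch of this idea (assuming NumPy and the PyWavelets package; the point-source field model, db4 wavelet and 3% retention rate are illustrative choices, not the authors' parameters), the complex field is decomposed, only the largest wavelet coefficients are kept, and the field is reconstructed:

```python
# Wavelet shrinkage of a complex wavefield: keep only the largest few
# percent of coefficients and reconstruct an approximate field.
import numpy as np
import pywt

def point_field(n, points, wavelength=0.5e-6, pitch=8e-6, z=0.1):
    """Complex amplitude on an n x n plane from a few object points
    (Fresnel approximation)."""
    y, x = np.indices((n, n)) * pitch
    k = 2 * np.pi / wavelength
    field = np.zeros((n, n), dtype=complex)
    for px, py in points:
        field += np.exp(1j * k * ((x - px) ** 2 + (y - py) ** 2) / (2 * z))
    return field

def shrink(arr, keep=0.03, wavelet='db4', level=3):
    """Hard-threshold all but the largest `keep` fraction of coefficients."""
    coeffs = pywt.wavedec2(arr, wavelet, level=level)
    mags = np.concatenate([np.abs(coeffs[0]).ravel()] +
                          [np.abs(d).ravel() for t in coeffs[1:] for d in t])
    cut = np.quantile(mags, 1 - keep)
    new = [pywt.threshold(coeffs[0], cut, mode='hard')]
    new += [tuple(pywt.threshold(d, cut, mode='hard') for d in t)
            for t in coeffs[1:]]
    return pywt.waverec2(new, wavelet)

field = point_field(256, [(1.0e-3, 1.0e-3), (1.2e-3, 0.8e-3)])
approx = shrink(field.real) + 1j * shrink(field.imag)  # parts shrunk separately
print('relative L2 error:', np.linalg.norm(approx - field) / np.linalg.norm(field))
```

    For a smooth field, a few percent of the coefficients typically reproduce the pattern to within a modest relative error, which is where the speed-up comes from.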

  14. Ch. 33 Modeling: Computational Thermodynamics

    International Nuclear Information System (INIS)

    Besmann, Theodore M.

    2012-01-01

    This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.

  15. Mathematical structures for computer graphics

    CERN Document Server

    Janke, Steven J

    2014-01-01

    A comprehensive exploration of the mathematics behind the modeling and rendering of computer graphics scenes Mathematical Structures for Computer Graphics presents an accessible and intuitive approach to the mathematical ideas and techniques necessary for two- and three-dimensional computer graphics. Focusing on the significant mathematical results, the book establishes key algorithms used to build complex graphics scenes. Written for readers with various levels of mathematical background, the book develops a solid foundation for graphics techniques and fills in relevant grap

  16. On Perturbative Cubic Nonlinear Schrodinger Equations under Complex Nonhomogeneities and Complex Initial Conditions

    Directory of Open Access Journals (Sweden)

    Magdy A. El-Tawil

    2009-01-01

    Full Text Available A perturbing nonlinear Schrodinger equation is studied under general complex nonhomogeneities and complex initial conditions for zero boundary conditions. The perturbation method, together with the eigenfunction expansion and variational parameters methods, is used to introduce an approximate solution for the perturbative nonlinear case, for which a power series solution is proved to exist. Using Mathematica, the symbolic solution algorithm is tested by computing the possible approximations under truncation procedures. The method of solution is illustrated through case studies and figures.

  17. Development of integrated platform for computational material design

    Energy Technology Data Exchange (ETDEWEB)

    Kiyoshi, Matsubara; Kumi, Itai; Nobutaka, Nishikawa; Akifumi, Kato [Center for Computational Science and Engineering, Fuji Research Institute Corporation (Japan); Hideaki, Koike [Advance Soft Corporation (Japan)

    2003-07-01

    The goal of our project is to design and develop a problem-solving environment (PSE) that will help computational scientists and engineers develop large complicated application software and simulate complex phenomena by using networking and parallel computing. The integrated platform, which is designed for the PSE in the Japanese national project Frontier Simulation Software for Industrial Science, supports the entire range of problem-solving activity, from program formulation and data setup to numerical simulation, data management, and visualization. A special feature of our integrated platform is a new architecture called TASK FLOW. It integrates the computational resources such as hardware and software on the network and supports complex and large-scale simulation. This concept is applied to computational material design and the project 'comprehensive research for modeling, analysis, control, and design of large-scale complex system considering properties of human being'. Moreover, this system will provide the best solution for developing large and complicated software and simulating complex and large-scale phenomena in computational science and engineering. A prototype has already been developed, and the validation and verification of the integrated platform are scheduled to be performed using the prototype in 2003. In the validation and verification, a fluid-structure coupling analysis system for designing an industrial machine will be developed on the integrated platform. As other examples of validation and verification, integrated platforms for quantum chemistry and bio-mechanical systems are planned.

  18. Development of integrated platform for computational material design

    International Nuclear Information System (INIS)

    Kiyoshi, Matsubara; Kumi, Itai; Nobutaka, Nishikawa; Akifumi, Kato; Hideaki, Koike

    2003-01-01

    The goal of our project is to design and develop a problem-solving environment (PSE) that will help computational scientists and engineers develop large complicated application software and simulate complex phenomena by using networking and parallel computing. The integrated platform, which is designed for the PSE in the Japanese national project Frontier Simulation Software for Industrial Science, supports the entire range of problem-solving activity, from program formulation and data setup to numerical simulation, data management, and visualization. A special feature of our integrated platform is a new architecture called TASK FLOW. It integrates the computational resources such as hardware and software on the network and supports complex and large-scale simulation. This concept is applied to computational material design and the project 'comprehensive research for modeling, analysis, control, and design of large-scale complex system considering properties of human being'. Moreover, this system will provide the best solution for developing large and complicated software and simulating complex and large-scale phenomena in computational science and engineering. A prototype has already been developed, and the validation and verification of the integrated platform are scheduled to be performed using the prototype in 2003. In the validation and verification, a fluid-structure coupling analysis system for designing an industrial machine will be developed on the integrated platform. As other examples of validation and verification, integrated platforms for quantum chemistry and bio-mechanical systems are planned.

  19. Computational fluid dynamic applications

    Energy Technology Data Exchange (ETDEWEB)

    Chang, S.-L.; Lottes, S. A.; Zhou, C. Q.

    2000-04-03

    The rapid advancement of computational capability, including speed and memory size, has prompted the wide use of computational fluid dynamics (CFD) codes to simulate complex flow systems. CFD simulations are used to study the operating problems encountered in a system, to evaluate the impacts of operation/design parameters on the performance of a system, and to investigate novel design concepts. CFD codes are generally developed based on the conservation laws of mass, momentum, and energy that govern the characteristics of a flow. The governing equations are simplified and discretized for a selected computational grid system. Numerical methods are selected to calculate approximate flow properties. For turbulent, reacting, and multiphase flow systems, the complex processes relating to these aspects of the flow, i.e., turbulent diffusion, combustion kinetics, interfacial drag and heat and mass transfer, etc., are described by mathematical models, based on a combination of fundamental physics and empirical data, that are incorporated into the code. CFD simulation has been applied to a large variety of practical and industrial-scale flow systems.
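
    As a toy illustration of this discretization process, the sketch below advances a 1-D advection-diffusion equation with an explicit upwind/central finite-difference scheme on a periodic grid; the grid size, velocity and diffusivity are arbitrary demonstration values, far simpler than the turbulent, reacting, multiphase models referred to above:

```python
# 1-D advection-diffusion: d(phi)/dt + u d(phi)/dx = D d2(phi)/dx2,
# discretized with first-order upwind advection and central diffusion.
import numpy as np

nx, L = 200, 1.0
dx = L / nx
u, D = 1.0, 1e-3                          # advection velocity, diffusivity
dt = 0.4 * min(dx / u, dx**2 / (2 * D))   # respect CFL and diffusion limits

x = np.linspace(0.0, L, nx, endpoint=False)
phi = np.exp(-((x - 0.2) / 0.05) ** 2)    # initial Gaussian pulse

mass0 = phi.sum() * dx
for _ in range(500):
    adv = -u * (phi - np.roll(phi, 1)) / dx                           # upwind
    dif = D * (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2  # central
    phi += dt * (adv + dif)               # periodic boundaries via np.roll

print('mass change:', phi.sum() * dx - mass0)  # conservative scheme: ~0
```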

  20. Computing Mass Properties From AutoCAD

    Science.gov (United States)

    Jones, A.

    1990-01-01

    Mass properties of structures computed from data in drawings. AutoCAD to Mass Properties (ACTOMP) computer program developed to facilitate quick calculations of mass properties of structures containing many simple elements in such complex configurations as trusses or sheet-metal containers. Structures mathematically modeled in AutoCAD or a compatible computer-aided design (CAD) system in minutes by use of three-dimensional elements. Written in Microsoft QuickBASIC (Version 2.0).

  1. Low Complexity V-BLAST MIMO-OFDM Detector by Successive Iterations Reduction

    Directory of Open Access Journals (Sweden)

    AHMED, K.

    2015-02-01

    Full Text Available The V-BLAST detection method suffers from large computational complexity due to its successive detection of symbols. In this paper, we propose a modified V-BLAST algorithm that decreases the computational complexity by reducing the number of detection iterations required in MIMO communication systems. We begin by showing the existence of a maximum number of iterations beyond which no significant improvement is obtained. We establish a criterion for the maximum number of effective iterations. We propose a modified algorithm that uses the measured SNR to dynamically set the number of iterations needed to achieve an acceptable bit-error rate. Then, we replace the feedback algorithm with an approximate linear function to reduce the complexity. Simulations show that a significant reduction in computational complexity is achieved compared to ordinary V-BLAST, while maintaining good BER performance.
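
    A minimal sketch of the mechanism, assuming a zero-forcing successive-interference-cancellation (ZF-SIC) V-BLAST detector with QPSK symbols; the `max_iters` mapping from measured SNR to an iteration cap is a hypothetical stand-in for the criterion established in the paper:

```python
import numpy as np

def max_iters(snr_db, n_tx):
    # Hypothetical mapping: at high SNR fewer cancellation stages are needed
    # before the residual streams can be detected with one linear pass.
    return n_tx if snr_db < 15 else max(1, n_tx // 2)

def vblast_zf_sic(H, y, snr_db):
    n_tx = H.shape[1]
    s_hat = np.zeros(n_tx, dtype=complex)
    Hr, active = H.copy(), list(range(n_tx))
    for _ in range(max_iters(snr_db, n_tx)):
        G = np.linalg.pinv(Hr)                               # zero-forcing equalizer
        k = int(np.argmin(np.sum(np.abs(G) ** 2, axis=1)))   # best post-ZF SNR first
        z = G[k] @ y
        sym = (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)  # QPSK slicer
        s_hat[active[k]] = sym
        y = y - Hr[:, k] * sym                               # cancel detected stream
        Hr = np.delete(Hr, k, axis=1)
        active.pop(k)
        if not active:
            break
    if active:                                               # remainder: one ZF pass
        z = np.linalg.pinv(Hr) @ y
        s_hat[active] = (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)
    return s_hat

rng = np.random.default_rng(0)
n_rx = n_tx = 4
snr_db = 20
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
s = (rng.choice([-1.0, 1.0], n_tx) + 1j * rng.choice([-1.0, 1.0], n_tx)) / np.sqrt(2)
n = 10 ** (-snr_db / 20) * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx)) / np.sqrt(2)
print(np.round(vblast_zf_sic(H, H @ s + n, snr_db), 2))
```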

  2. Robust inverse scattering full waveform seismic tomography for imaging complex structure

    International Nuclear Information System (INIS)

    Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Wibowo, Satryo; Deny, Agus; Kurniadi, Rizal; Widowati, Sri; Mubarok, Syahrul; Susilowati; Kaswandhi

    2012-01-01

    Seismic tomography has recently become an important tool for imaging complex subsurface structures. It is well known that imaging complex, fault-rich zones is difficult. This paper shows the application of time-domain inverse scattering wave tomography to imaging such complex fault zones, in particular an efficient time-domain inverse scattering tomography and its implementation on a parallel computing cluster, which have been developed by the authors. The algorithm is based purely on scattering theory, solving the Lippmann-Schwinger integral using the Born approximation. The robustness of the algorithm is shown, especially in avoiding inversion trapped in local minima and reaching the global minimum. Large data volumes are handled by windowing and blocking of memory as well as computation, with the windowing parameter based on the shot gather's aperture; this windowing technique reduces memory and computation significantly. The parallel algorithm runs on a cluster of 120 processors across 20 nodes of AMD Phenom II machines. The algorithm is benchmarked on the Marmousi model, which is representative of complex, fault-rich areas. The proposed method clearly images the fault-rich, complex zone in the Marmousi model even though the initial model is quite far from the true model. Therefore, this method can serve as one solution for imaging very complex models.

  3. Robust inverse scattering full waveform seismic tomography for imaging complex structure

    Energy Technology Data Exchange (ETDEWEB)

    Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Wibowo, Satryo; Deny, Agus; Kurniadi, Rizal; Widowati, Sri; Mubarok, Syahrul; Susilowati; Kaswandhi [Wave Inversion and Subsurface Fluid Imaging Research (WISFIR) Lab., Complex System Research Division, Physics Department, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung. and Rock Fluid Imaging Lab., Rock Physics and Cluster C (Indonesia); Rock Fluid Imaging Lab., Rock Physics and Cluster Computing Center, Bandung (Indonesia); Physics Department of Institut Teknologi Bandung (Indonesia); Rock Fluid Imaging Lab., Rock Physics and Cluster Computing Center, Bandung, Indonesia and Institut Teknologi Telkom, Bandung (Indonesia); Rock Fluid Imaging Lab., Rock Physics and Cluster Computing Center, Bandung (Indonesia)

    2012-06-20

    Seismic tomography has recently become an important tool for imaging complex subsurface structures. It is well known that imaging complex, fault-rich zones is difficult. This paper shows the application of time-domain inverse scattering wave tomography to imaging such complex fault zones, in particular an efficient time-domain inverse scattering tomography and its implementation on a parallel computing cluster, which have been developed by the authors. The algorithm is based purely on scattering theory, solving the Lippmann-Schwinger integral using the Born approximation. The robustness of the algorithm is shown, especially in avoiding inversion trapped in local minima and reaching the global minimum. Large data volumes are handled by windowing and blocking of memory as well as computation, with the windowing parameter based on the shot gather's aperture; this windowing technique reduces memory and computation significantly. The parallel algorithm runs on a cluster of 120 processors across 20 nodes of AMD Phenom II machines. The algorithm is benchmarked on the Marmousi model, which is representative of complex, fault-rich areas. The proposed method clearly images the fault-rich, complex zone in the Marmousi model even though the initial model is quite far from the true model. Therefore, this method can serve as one solution for imaging very complex models.

  4. Lecture 4: Cloud Computing in Large Computer Centers

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    This lecture will introduce Cloud Computing concepts, identifying and analyzing its characteristics, models, and applications. Also, you will learn how CERN built its Cloud infrastructure and which tools are being used to deploy and manage it. About the speaker: Belmiro Moreira is an enthusiastic software engineer passionate about the challenges and complexities of architecting and deploying Cloud Infrastructures in ve...

  5. BRAND program complex

    International Nuclear Information System (INIS)

    Androsenko, A.A.; Androsenko, P.A.

    1983-01-01

    A description is given of the structure, input procedure and recording rules of initial data for the BRAND program complex, intended for the Monte Carlo simulation of neutron-physics experiments. The BRAND complex ideology is based on non-analogue simulation of the neutron and photon transport process (statistical weights are used; absorption and escape of particles from the considered region are taken into account; shifted readouts from the coordinate part of the transition kernel density are applied; local estimates are used; etc.). The preparation of initial data is described in detail for three sections: general information for the Monte Carlo calculation, source definition, and data describing the geometry of the system. The complex runs on a BESM-6 computer; the basic programming language is FORTRAN, and the size is more than 8000 statements.

  6. Plasmonic computing of spatial differentiation

    Science.gov (United States)

    Zhu, Tengfeng; Zhou, Yihan; Lou, Yijie; Ye, Hui; Qiu, Min; Ruan, Zhichao; Fan, Shanhui

    2017-05-01

    Optical analog computing offers high-throughput low-power-consumption operation for specialized computational tasks. Traditionally, optical analog computing in the spatial domain uses a bulky system of lenses and filters. Recent developments in metamaterials enable the miniaturization of such computing elements down to a subwavelength scale. However, the required metamaterial consists of a complex array of meta-atoms, and direct demonstration of image processing is challenging. Here, we show that the interference effects associated with surface plasmon excitations at a single metal-dielectric interface can perform spatial differentiation. And we experimentally demonstrate edge detection of an image without any Fourier lens. This work points to a simple yet powerful mechanism for optical analog computing at the nanoscale.
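
    Mathematically, the edge map is the result of a first-order spatial derivative of the incident field profile; the following numerical analogue applies the corresponding transfer function H(kx) = i*kx in the spatial-frequency domain (an illustration of the operation only, not a model of the plasmonic experiment):

```python
import numpy as np

def differentiate_x(img):
    """Apply the transfer function H(kx) = i*kx along the x axis."""
    kx = 2j * np.pi * np.fft.fftfreq(img.shape[1])
    return np.real(np.fft.ifft(kx * np.fft.fft(img, axis=1), axis=1))

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                    # bright square on a dark background
edges = np.abs(differentiate_x(img))
# The strongest responses sit at the square's vertical edges
# (around columns 15/16 and 47/48), with some ringing from the sharp step.
print(np.argsort(edges[32])[-4:])
```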

  7. Plasmonic computing of spatial differentiation.

    Science.gov (United States)

    Zhu, Tengfeng; Zhou, Yihan; Lou, Yijie; Ye, Hui; Qiu, Min; Ruan, Zhichao; Fan, Shanhui

    2017-05-19

    Optical analog computing offers high-throughput low-power-consumption operation for specialized computational tasks. Traditionally, optical analog computing in the spatial domain uses a bulky system of lenses and filters. Recent developments in metamaterials enable the miniaturization of such computing elements down to a subwavelength scale. However, the required metamaterial consists of a complex array of meta-atoms, and direct demonstration of image processing is challenging. Here, we show that the interference effects associated with surface plasmon excitations at a single metal-dielectric interface can perform spatial differentiation. And we experimentally demonstrate edge detection of an image without any Fourier lens. This work points to a simple yet powerful mechanism for optical analog computing at the nanoscale.

  8. QUANTUM DISCORD AND QUANTUM COMPUTING - AN APPRAISAL

    OpenAIRE

    Datta, Animesh; Shaji, Anil

    2011-01-01

    We discuss models of computing that are beyond classical. The primary motivation is to unearth the cause of nonclassical advantages in computation. Completeness results from computational complexity theory lead to the identification of very disparate problems, and offer a kaleidoscopic view into the realm of quantum enhancements in computation. Emphasis is placed on the 'power of one qubit' model, and the boundary between quantum and classical correlations as delineated by quantum discord. A ...

  9. Spectral simplicity of apparent complexity. II. Exact complexities and complexity spectra

    Science.gov (United States)

    Riechers, Paul M.; Crutchfield, James P.

    2018-03-01

    The meromorphic functional calculus developed in Part I overcomes the nondiagonalizability of linear operators that arises often in the temporal evolution of complex systems and is generic to the metadynamics of predicting their behavior. Using the resulting spectral decomposition, we derive closed-form expressions for correlation functions, finite-length Shannon entropy-rate approximates, asymptotic entropy rate, excess entropy, transient information, transient and asymptotic state uncertainties, and synchronization information of stochastic processes generated by finite-state hidden Markov models. This introduces analytical tractability to investigating information processing in discrete-event stochastic processes, symbolic dynamics, and chaotic dynamical systems. Comparisons reveal mathematical similarities between complexity measures originally thought to capture distinct informational and computational properties. We also introduce a new kind of spectral analysis via coronal spectrograms and the frequency-dependent spectra of past-future mutual information. We analyze a number of examples to illustrate the methods, emphasizing processes with multivariate dependencies beyond pairwise correlation. This includes spectral decomposition calculations for one representative example in full detail.

  10. Multiparty Computations

    DEFF Research Database (Denmark)

    Dziembowski, Stefan

    In this thesis we study a problem of doing Verifiable Secret Sharing (VSS) and Multiparty Computations in a model where private channels between the players and a broadcast channel are available. The adversary is active, adaptive and has unbounded computing power. The thesis is based on two... We show that, up to a polynomial-time black-box reduction, the complexity of adaptively secure VSS is the same as that of ordinary secret sharing (SS), where security is only required against a passive, static adversary. Previously, such a connection was only known for linear secret sharing and VSS schemes. We then show... here and discuss other problems caused by the adaptiveness. All protocols in the thesis are formally specified and the proofs of their security are given. [1] Ronald Cramer, Ivan Damgård, Stefan Dziembowski, Martin Hirt, and Tal Rabin. Efficient multiparty computations with dishonest minority...
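
    For orientation, the ordinary secret sharing (SS) primitive that VSS strengthens can be sketched in a few lines. The following is standard Shamir sharing over a prime field (the modulus and parameters are illustrative; this is background, not a construction from the thesis):

```python
import random

P = 2**61 - 1   # a Mersenne prime, used as the field modulus

def share(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P            # Horner evaluation
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, n=5, t=3)
print(reconstruct(shares[:3]))   # any 3 shares recover 123456789
```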

  11. A study of the advantages & disadvantages of mobile cloud computing versus native environment

    OpenAIRE

    Almrot, Emil; Andersson, Sebastian

    2013-01-01

    The advent of cloud computing has enabled the possibility of moving complex calculations and device operations to the “cloud” in an effective way. With cloud computing being applied to mobile devices some of the primary constraints of mobile computing, such as battery life and hardware with less computational power, could be resolved by moving complex operations and computations to the cloud. This thesis aims to identify advantages and disadvantages associated with running cloud based applica...

  12. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  13. Quantum simulations with noisy quantum computers

    Science.gov (United States)

    Gambetta, Jay

    Quantum computing is a new computational paradigm that is expected to lie beyond the standard model of computation. This implies a quantum computer can solve problems that can't be solved by a conventional computer with tractable overhead. To fully harness this power we need a universal fault-tolerant quantum computer. However the overhead in building such a machine is high and a full solution appears to be many years away. Nevertheless, we believe that we can build machines in the near term that cannot be emulated by a conventional computer. It is then interesting to ask what these can be used for. In this talk we will present our advances in simulating complex quantum systems with noisy quantum computers. We will show experimental implementations of this on some small quantum computers.

  14. Computational aspects of feedback in neural circuits.

    Directory of Open Access Journals (Sweden)

    Wolfgang Maass

    2007-01-01

    Full Text Available It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a nonfading memory. We demonstrate these computational implications of feedback both theoretically, and through computer simulations of detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike trains over longer periods of time, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular we show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints. Although this article examines primarily the computational role of feedback in circuits of neurons, the mathematical principles on which its analysis is based apply to a variety of dynamical systems. Hence they may also
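
    An abstract, non-biological analogue of this feedback mechanism is an echo-state reservoir whose linear readout, trained here by ridge regression (one of the 'simple learning procedures' mentioned above), is fed back as input, so that the circuit sustains a pattern autonomously instead of exhibiting only fading memory. Network size, spectral radius and the sine target are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 300, 3000
W = 0.9 * rng.standard_normal((N, N)) / np.sqrt(N)   # spectral radius ~ 0.9
w_fb = rng.uniform(-1.0, 1.0, N)                     # output-feedback weights
target = np.sin(0.05 * np.arange(T))                 # pattern to be generated

# Teacher forcing: drive the reservoir with the target as if it were the
# fed-back output, and record the states.
x, states = np.zeros(N), []
for t in range(T):
    x = np.tanh(W @ x + w_fb * target[t - 1])        # wraps at t = 0, harmless here
    states.append(x.copy())
X, Y = np.array(states[500:]), target[500:]          # discard the transient

w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)  # ridge readout

# Autonomous generation: feed the trained readout back into the circuit.
x, y, outs = X[-1], Y[-1], []
for _ in range(200):
    x = np.tanh(W @ x + w_fb * y)
    y = w_out @ x
    outs.append(y)
print('autonomous output (should continue the sine):', np.round(outs[:5], 3))
```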

  15. Science Prospects And Benefits with Exascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Kothe, Douglas B [ORNL

    2007-12-01

    Scientific computation has come into its own as a mature technology in all fields of science. Never before have we been able to accurately anticipate, analyze, and plan for complex events that have not yet occurred, from the operation of a reactor running at 100 million degrees centigrade to the changing climate a century down the road. Combined with the more traditional approaches of theory and experiment, scientific computation provides a profound tool for insight and solution as we look at complex systems containing billions of components. Nevertheless, it cannot yet do all we would like. Much of scientific computation's potential remains untapped in areas such as materials science, Earth science, energy assurance, fundamental science, biology and medicine, engineering design, and national security, because the scientific challenges are far too enormous and complex for the computational resources at hand. Many of these challenges are of immediate global importance. These challenges can be overcome by a revolution in computing that promises real advancement at a greatly accelerated pace. Planned petascale systems (capable of a petaflop, or 10^15 floating point operations per second) in the next 3 years and exascale systems (capable of an exaflop, or 10^18 floating point operations per second) in the next decade will provide an unprecedented opportunity to attack these global challenges through modeling and simulation. Exascale computers, with a processing capability similar to that of the human brain, will enable the unraveling of longstanding scientific mysteries and present new opportunities. Table ES.1 summarizes these scientific opportunities, their key application areas, and the goals and associated benefits that would result from solutions afforded by exascale computing.

  16. Nonlinear optical and G-Quadruplex DNA stabilization properties of novel mixed ligand copper(II) complexes and coordination polymers: Synthesis, structural characterization and computational studies

    Science.gov (United States)

    Rajasekhar, Bathula; Bodavarapu, Navya; Sridevi, M.; Thamizhselvi, G.; RizhaNazar, K.; Padmanaban, R.; Swu, Toka

    2018-03-01

    The present study reports the synthesis and evaluation of the nonlinear optical properties and G-quadruplex DNA stabilization of five novel copper(II) mixed-ligand complexes. They were synthesized from a copper(II) salt, 2,5- and 2,3-pyridinedicarboxylic acid, diethylenetriamine and an amide-based ligand (AL). The crystal structures of these complexes were determined through X-ray diffraction and supported by ESI-MS, NMR, UV-Vis and FT-IR spectroscopic methods. Their nonlinear optical property was studied using the Gaussian09 computer program. For structural optimization and the nonlinear optical property, the density functional theory (DFT) based B3LYP method was used with the LANL2DZ basis set for the metal ion and 6-31G* for C, H, N, O and Cl atoms. The present work reveals that the pre-polarized complex-2 showed a higher β value (29.59 × 10^-30 e.s.u.) than the neutral complex-1 (β = 0.276 × 10^-30 e.s.u.), which may be due to the greater advantage of polarizability. Complex-2 is expected to be a potential material for optoelectronic and photonic technologies. Docking studies using AutoDock Vina revealed that complex-2 has higher binding energy for both G-quadruplex DNA (-8.7 kcal/mol) and duplex DNA (-10.1 kcal/mol). It was also observed that structure plays an important role in binding efficiency.

  17. Multi-agent and complex systems

    CERN Document Server

    Ren, Fenghui; Fujita, Katsuhide; Zhang, Minjie; Ito, Takayuki

    2017-01-01

    This book provides a description of advanced multi-agent and artificial intelligence technologies for the modeling and simulation of complex systems, as well as an overview of the latest scientific efforts in this field. A complex system features a large number of interacting components, whose aggregate activities are nonlinear and self-organized. A multi-agent system is a group or society of agents which interact with others cooperatively and/or competitively in order to reach their individual or common goals. Multi-agent systems are suitable for modeling and simulation of complex systems, which is difficult to accomplish using traditional computational approaches.

  18. Computing Tropical Varieties

    DEFF Research Database (Denmark)

    Speyer, D.; Jensen, Anders Nedergaard; Bogart, T.

    2005-01-01

    The tropical variety of a d-dimensional prime ideal in a polynomial ring with complex coefficients is a pure d-dimensional polyhedral fan. This fan is shown to be connected in codimension one. We present algorithmic tools for computing the tropical variety, and we discuss our implementation...

  19. Thinking in complexity the complex dynamics of matter, mind, and mankind

    CERN Document Server

    Mainzer, Klaus

    1994-01-01

    The theory of nonlinear complex systems has become a successful and widely used problem-solving approach in the natural sciences, from laser physics, quantum chaos and meteorology to molecular modeling in chemistry and computer simulations of cell growth in biology. In recent times it has been recognized that many of the social, ecological and political problems of mankind are also of a global, complex and nonlinear nature. And one of the most exciting topics of present scientific and public interest is the idea that even the human mind is governed largely by the nonlinear dynamics of complex systems. In this wide-ranging but concise treatment Prof. Mainzer discusses, in nontechnical language, the common framework behind these endeavours. Special emphasis is given to the evolution of new structures in natural and cultural systems, and it is seen clearly how the new integrative approach of complexity theory can give new insights that were not available using traditional reductionistic methods.

  20. Multi-agent evolutionary systems for the generation of complex virtual worlds

    Directory of Open Access Journals (Sweden)

    J. Kruse

    2016-01-01

    Full Text Available Modern films, games and virtual reality applications are dependent on convincing computer graphics. Highly complex models are a requirement for the successful delivery of many scenes and environments. While workflows such as rendering, compositing and animation have been streamlined to accommodate increasing demands, modelling complex models is still a laborious task. This paper introduces the computational benefits of an Interactive Genetic Algorithm (IGA) to computer graphics modelling while compensating for the effects of user fatigue, a common issue with Interactive Evolutionary Computation. An intelligent agent is used in conjunction with an IGA, offering the potential to reduce the effects of user fatigue by learning from the choices made by the human designer and directing the search accordingly. This workflow accelerates the layout and distribution of basic elements to form complex models. It captures the designer's intent through interaction, and encourages playful discovery.
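
    A skeleton of the interactive-genetic-algorithm loop described here might look as follows; the `designer_picks` callback stands in for the human choice that the intelligent agent would gradually learn to imitate, and the genome is a placeholder for scene-layout parameters:

```python
import random

GENOME_LEN, POP, GENERATIONS = 8, 12, 5

def random_genome():
    return [random.uniform(0.0, 1.0) for _ in range(GENOME_LEN)]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(g, rate=0.1):
    return [random.uniform(0.0, 1.0) if random.random() < rate else v for v in g]

def designer_picks(candidates, k=4):
    # Stand-in for the human in the loop: prefers genomes close to a hidden
    # "taste" vector. An intelligent agent would learn such a preference
    # from the designer's earlier picks to reduce fatigue.
    taste = [0.7] * GENOME_LEN
    return sorted(candidates,
                  key=lambda g: sum((v - t) ** 2 for v, t in zip(g, taste)))[:k]

population = [random_genome() for _ in range(POP)]
for _ in range(GENERATIONS):
    parents = designer_picks(population)
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP)]
print('example evolved genome:', [round(v, 2) for v in population[0]])
```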

  1. Reactor protection system design using micro-computers

    International Nuclear Information System (INIS)

    Fairbrother, D.B.

    1977-01-01

    Reactor Protection Systems for Nuclear Power Plants have traditionally been built using analog hardware. This hardware works quite well for single parameter trip functions; however, optimum protection against DNBR and KW/ft limits requires more complex trip functions than can easily be handled with analog hardware. For this reason, Babcock and Wilcox has introduced a Reactor Protection System, called the RPS-II, that utilizes a micro-computer to handle the more complex trip functions. This paper describes the design of the RPS-II and the operation of the micro-computer within the Reactor Protection System

  2. Reactor protection system design using micro-computers

    International Nuclear Information System (INIS)

    Fairbrother, D.B.

    1976-01-01

    Reactor protection systems for nuclear power plants have traditionally been built using analog hardware. This hardware works quite well for single parameter trip functions; however, optimum protection against DNBR and KW/ft limits requires more complex trip functions than can easily be handled with analog hardware. For this reason, Babcock and Wilcox has introduced a Reactor Protection System, called the RPS-II, that utilizes a micro-computer to handle the more complex trip functions. The paper describes the design of the RPS-II and the operation of the micro-computer within the Reactor Protection System

  3. computation of the stability constants for the inclusion complexes of

    African Journals Online (AJOL)

    PrF MU

    The stability constants of ADA with β-CD, with the use of MO as an auxiliary agent, were evaluated. ... The latter statement can also be strengthened by the computed species distribution diagram for ... and the 5-position of the adamantyl body.

  4. Use of computer codes for system reliability analysis

    International Nuclear Information System (INIS)

    Sabek, M.; Gaafar, M.; Poucet, A.

    1988-01-01

    This paper gives a collective summary of the studies performed at the JRC, Ispra, on the use of computer codes for complex systems analysis. The computer codes dealt with are: the CAFTS-SALP software package, FRANTIC, FTAP, the RALLY computer code package, and the BOUNDS codes. Two reference study cases were executed with each code. The results obtained from the logic/probabilistic analysis, as well as the computation times, are compared.

  5. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab (Massachusetts Institute of Technology, Cambridge, MA); Armstrong, Robert C.; Vanderveen, Keith

    2008-09-01

    The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.

  6. Computational Science: Ensuring America's Competitiveness

    National Research Council Canada - National Science Library

    Reed, Daniel A; Bajcsy, Ruzena; Fernandez, Manuel A; Griffiths, Jose-Marie; Mott, Randall D; Dongarra, J. J; Johnson, Chris R; Inouye, Alan S; Miner, William; Matzke, Martha K; Ponick, Terry L

    2005-01-01

    Computational science is now indispensable to the solution of complex problems in every sector, from traditional science and engineering domains to such key areas as national security, public health...

  7. Computational chemistry and metal-based radiopharmaceuticals

    International Nuclear Information System (INIS)

    Neves, M.; Fausto, R.

    1998-01-01

    Computer-assisted techniques have found extensive use in the design of organic pharmaceuticals but have not been widely applied to metal complexes, particularly radiopharmaceuticals. Some examples of computer-generated structures of complexes of In, Ga and Tc with N, S, O and P donor ligands are presented. Besides parameters directly related to molecular geometries, molecular properties of the predicted structures, such as ionic charges or dipole moments, are considered to be related to biodistribution. The structures of a series of neutral oxo Tc-biguanide complexes are predicted by molecular mechanics calculations, and their interactions with water molecules or peptide chains are correlated with experimental data on partition coefficients and percentage of human protein binding. The results stress the interest of using molecular modelling to predict molecular properties of metal-based radiopharmaceuticals, which can be successfully correlated with results of in vitro studies. (author)

  8. Planning Complex Projects Automatically

    Science.gov (United States)

    Henke, Andrea L.; Stottler, Richard H.; Maher, Timothy P.

    1995-01-01

    Automated Manifest Planner (AMP) computer program applies combination of artificial-intelligence techniques to assist both expert and novice planners, reducing planning time by orders of magnitude. Gives planners flexibility to modify plans and constraints easily, without need for programming expertise. Developed specifically for planning space shuttle missions 5 to 10 years ahead, with modifications, applicable in general to planning other complex projects requiring scheduling of activities depending on other activities and/or timely allocation of resources. Adaptable to variety of complex scheduling problems in manufacturing, transportation, business, architecture, and construction.

  9. East-West paths to unconventional computing.

    Science.gov (United States)

    Adamatzky, Andrew; Akl, Selim; Burgin, Mark; Calude, Cristian S; Costa, José Félix; Dehshibi, Mohammad Mahdi; Gunji, Yukio-Pegio; Konkoli, Zoran; MacLennan, Bruce; Marchal, Bruno; Margenstern, Maurice; Martínez, Genaro J; Mayne, Richard; Morita, Kenichi; Schumann, Andrew; Sergeyev, Yaroslav D; Sirakoulis, Georgios Ch; Stepney, Susan; Svozil, Karl; Zenil, Hector

    2017-12-01

    Unconventional computing is about breaking boundaries in thinking, acting and computing. Typical topics of this non-typical field include, but are not limited to, physics of computation, non-classical logics, new complexity measures, novel hardware, and mechanical, chemical and quantum computing. Unconventional computing encourages a new style of thinking, while practical applications are obtained from uncovering and exploiting principles and mechanisms of information processing in, and functional properties of, physical, chemical and living systems; in particular, efficient algorithms are developed, (almost) optimal architectures are designed and working prototypes of future computing devices are manufactured. This article includes idiosyncratic accounts of 'unconventional computing' scientists reflecting on their personal experiences, what attracted them to the field, their inspirations and discoveries. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Is the bipyridyl thorium metallocene a low-valent thorium complex? A combined experimental and computational study

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Wenshan; Lukens, Wayne W.; Zi, Guofu; Maron, Laurent; Walter, Marc D.

    2012-01-12

    Bipyridyl thorium metallocenes [η5-1,2,4-(Me3C)3C5H2]2Th(bipy) (1) and [η5-1,3-(Me3C)2C5H3]2Th(bipy) (2) have been investigated by magnetic susceptibility and computational studies. The magnetic susceptibility data reveal that 1 and 2 are not diamagnetic, but they behave as temperature-independent paramagnets (TIPs). To rationalize this observation, density functional theory (DFT) and complete active space SCF (CASSCF) calculations have been undertaken, which indicated that Cp2Th(bipy) has indeed a Th(IV)(bipy2-) ground state (f0d0 pi*2, S = 0), but the open-shell singlet (f0d1 pi*1, S = 0) (almost degenerate with its triplet congener) lies only 9.2 kcal/mol higher in energy. Complexes 1 and 2 react cleanly with Ph2CS to give [η5-1,2,4-(Me3C)3C5H2]2Th[(bipy)(SCPh2)] (3) and [η5-1,3-(Me3C)2C5H3]2Th[(bipy)(SCPh2)] (4), respectively, in quantitative conversions. Since no intermediates were observed experimentally, this reaction was also studied computationally. Coordination of Ph2CS to 2 in its S = 0 ground state is not possible, but Ph2CS can coordinate to 2 in its triplet state (S = 1), upon which a single electron transfer (SET) from the (bipy2-) fragment to Ph2CS followed by C-C coupling takes place.

  11. Computer systems and nuclear industry

    International Nuclear Information System (INIS)

    Nkaoua, Th.; Poizat, F.; Augueres, M.J.

    1999-01-01

    This article deals with computer systems in the nuclear industry. In most nuclear facilities it is necessary to handle a great deal of data and actions in order to help the plant operator run the plant, control physical processes and assure safety. The design of reactors requires reliable computer codes able to simulate neutronic, mechanical or thermo-hydraulic behaviour. Calculations and simulations play an important role in safety analysis. In each of these domains, computer systems have progressively appeared as efficient tools to challenge and master complexity. (A.C.)

  12. Limits on efficient computation in the physical world

    Science.gov (United States)

    Aaronson, Scott Joel

    More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^(1/5)) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^(n/4)/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure

  13. A computer program to calculate nuclide yields in complex decay chain for selection of optimum irradiation and cooling condition

    International Nuclear Information System (INIS)

    Takeda, Tsuneo

    1977-11-01

    This report is prepared as a user's input manual for the computer code CODAC-No.5 and provides a general description of the code and instructions for its use. The code is a modified version of the CODAC-No.4 code. It is capable of calculating radioactive nuclide yields in any given complex decay and activation chain, independent of irradiation history. Eighteen kinds of tables and graphs can be prepared for output. They are useful for selecting optimum irradiation and cooling conditions and for other purposes related to irradiation and cooling. For example, the ratio of a nuclide yield to the total nuclide yield, as a function of irradiation and cooling times, is obtained. Several kinds of complex equations are included in these outputs. This code has almost the same input forms as the CODAC-No.4 code, except for the input of irradiation-history data. The input method and formats used for this code are very simple for all kinds of nuclear data. A list of FORTRAN statements, examples of input data and output results, and a list of input parameters and their definitions are given in this report. (auth.)
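
    Independently of the CODAC internals, nuclide yields in a decay chain obey the linear Bateman equations dN/dt = A N, so yields at any cooling time follow from a matrix exponential. A minimal sketch with a hypothetical three-member chain (made-up decay constants; NumPy and SciPy assumed):

```python
# dN/dt = A N for a linear chain A -> B -> C (C also decays);
# N(t) = expm(A t) N0 gives the yields after any cooling time.
import numpy as np
from scipy.linalg import expm

lam = np.array([1e-2, 5e-3, 1e-4])        # decay constants (1/s), made up
A = np.array([[-lam[0],     0.0,     0.0],
              [ lam[0], -lam[1],     0.0],
              [    0.0,  lam[1], -lam[2]]])

N0 = np.array([1e20, 0.0, 0.0])           # atom inventory at end of irradiation
for t in (0.0, 60.0, 3600.0):             # cooling times (s)
    print(f't = {t:6.0f} s  N =', expm(A * t) @ N0)
```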

  14. Knowledge-based computer security advisor

    International Nuclear Information System (INIS)

    Hunteman, W.J.; Squire, M.B.

    1991-01-01

    The rapid expansion of computer security information and technology has included little support to help the security officer identify the safeguards needed to comply with a policy and to secure a computing system. This paper reports that Los Alamos is developing a knowledge-based computer security system to provide expert knowledge to the security officer. This system includes a model for expressing the complex requirements in computer security policy statements. The model is part of an expert system that allows a security officer to describe a computer system and then determine compliance with the policy. The model contains a generic representation that contains network relationships among the policy concepts to support inferencing based on information represented in the generic policy description

  15. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  16. Low-complexity R-peak detection for ambulatory fetal monitoring

    International Nuclear Information System (INIS)

    Rooijakkers, Michael J; Rabotti, Chiara; Mischi, Massimo; Oei, S Guid

    2012-01-01

    Non-invasive fetal health monitoring during pregnancy is becoming increasingly important because of the increasing number of high-risk pregnancies. Despite recent advances in signal-processing technology, which have enabled fetal monitoring during pregnancy using abdominal electrocardiogram (ECG) recordings, ubiquitous fetal health monitoring is still unfeasible due to the computational complexity of noise-robust solutions. In this paper, an ECG R-peak detection algorithm for ambulatory R-peak detection is proposed, as part of a fetal ECG detection algorithm. The proposed algorithm is optimized to reduce computational complexity, without reducing the R-peak detection performance compared to the existing R-peak detection schemes. Validation of the algorithm is performed on three manually annotated datasets. With a detection error rate of 0.23%, 1.32% and 9.42% on the MIT/BIH Arrhythmia and in-house maternal and fetal databases, respectively, the detection rate of the proposed algorithm is comparable to the best state-of-the-art algorithms, at a reduced computational complexity. (paper)
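
    A generic detector in the same low-complexity spirit (moving-average band-pass, squaring, and a simple global threshold) can be sketched as follows; it illustrates the class of algorithm, not the authors' method, and a synthetic impulse train stands in for real QRS complexes:

```python
import numpy as np

def moving_avg(x, w):
    return np.convolve(x, np.ones(w) / w, mode='same')

def detect_r_peaks(ecg, fs):
    # Band-pass by difference of moving averages, then an energy envelope.
    band = moving_avg(ecg, int(0.025 * fs)) - moving_avg(ecg, int(0.60 * fs))
    feat = moving_avg(band ** 2, int(0.10 * fs))
    thr = 0.3 * feat.max()        # simple global threshold; real detectors adapt it
    peaks, start = [], None
    for i, above in enumerate(feat > thr):
        if above and start is None:
            start = i
        elif not above and start is not None:
            peaks.append(start + int(np.argmax(feat[start:i])))  # one peak per region
            start = None
    return peaks

fs = 250                              # sampling rate (Hz)
ecg = np.zeros(10 * fs)
ecg[np.arange(1, 10) * fs] = 1.0      # impulse train standing in for QRS complexes
print(detect_r_peaks(ecg, fs))        # indices near multiples of fs
```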

  17. Low-complexity R-peak detection for ambulatory fetal monitoring.

    Science.gov (United States)

    Rooijakkers, Michael J; Rabotti, Chiara; Oei, S Guid; Mischi, Massimo

    2012-07-01

    Non-invasive fetal health monitoring during pregnancy is becoming increasingly important because of the increasing number of high-risk pregnancies. Despite recent advances in signal-processing technology, which have enabled fetal monitoring during pregnancy using abdominal electrocardiogram (ECG) recordings, ubiquitous fetal health monitoring is still unfeasible due to the computational complexity of noise-robust solutions. In this paper, an ECG R-peak detection algorithm for ambulatory R-peak detection is proposed, as part of a fetal ECG detection algorithm. The proposed algorithm is optimized to reduce computational complexity, without reducing the R-peak detection performance compared to the existing R-peak detection schemes. Validation of the algorithm is performed on three manually annotated datasets. With a detection error rate of 0.23%, 1.32% and 9.42% on the MIT/BIH Arrhythmia and in-house maternal and fetal databases, respectively, the detection rate of the proposed algorithm is comparable to the best state-of-the-art algorithms, at a reduced computational complexity.

  18. Sandpile model for relaxation in complex systems

    International Nuclear Information System (INIS)

    Vazquez, A.; Sotolongo-Costa, O.; Brouers, F.

    1997-10-01

    The relaxation in complex systems is, in general, nonexponential. After an initial rapid decay the system relaxes slowly, following a long-time tail. In the present paper a sandpile model of the relaxation in complex systems is analysed. Complexity is introduced by a process of avalanches on the Bethe lattice and a feedback mechanism which leads to slower decay with increasing time. In this way, some features of relaxation in complex systems, namely long-time-tail relaxation, aging, and a fractal distribution of characteristic times, are obtained by simple computer simulations. (author)
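
    The avalanche mechanism invoked here is easy to reproduce in the standard Bak-Tang-Wiesenfeld sandpile on a square grid; note that the paper itself works on the Bethe lattice with an added feedback mechanism, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(0)
L, CRIT, GRAINS = 20, 4, 5000
z = np.zeros((L, L), dtype=int)
sizes = []

for _ in range(GRAINS):
    i, j = rng.integers(L, size=2)
    z[i, j] += 1                                  # drop one grain
    size = 0
    while (unstable := np.argwhere(z >= CRIT)).size:
        for a, b in unstable:
            z[a, b] -= 4                          # topple the unstable site
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if 0 <= na < L and 0 <= nb < L:
                    z[na, nb] += 1                # grains leaving the edge are lost
    sizes.append(size)

print('largest avalanche:', max(sizes), 'topplings;',
      'mean:', sum(sizes) / len(sizes))
```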

  19. Dynamics of complexation of a charged dendrimer by linear polyelectrolyte: Computer modelling

    NARCIS (Netherlands)

    Lyulin, S.V.; Darinskii, A.A.; Lyulin, A.V.

    2007-01-01

    Brownian-dynamics simulations have been performed for complexes formed by a charged dendrimer and a long, oppositely charged linear polyelectrolyte, for which the overcharging phenomenon is always observed. After complex formation, the orientational mobility of the individual dendrimer bonds, the fluctuations

  20. Department of Energy research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  1. Beyond CMOS computing with spin and polarization

    Science.gov (United States)

    Manipatruni, Sasikanth; Nikonov, Dmitri E.; Young, Ian A.

    2018-04-01

    Spintronic and multiferroic systems are leading candidates for achieving attojoule-class logic gates for computing, thereby enabling the continuation of Moore's law for transistor scaling. However, shifting the materials focus of computing towards oxides and topological materials requires a holistic approach addressing energy, stochasticity and complexity.

  2. The Nash Equilibrium Revisited: Chaos and Complexity Hidden in Simplicity

    Science.gov (United States)

    Fellman, Philip V.

    The Nash Equilibrium is a much discussed, deceptively complex, method for the analysis of non-cooperative games (McLennan and Berg, 2005). If one reads many of the commonly available definitions, the description of the Nash Equilibrium appears deceptively simple. Modern research has discovered a number of new and important complex properties of the Nash Equilibrium, some of which remain contemporary conundrums of extraordinary difficulty and complexity (Quint and Shubik, 1997). Among the recently discovered features which the Nash Equilibrium exhibits under various conditions are heteroclinic Hamiltonian dynamics, a very complex asymptotic structure in the context of two-player bi-matrix games, and a number of computationally complex or computationally intractable features in other settings (Sato, Akiyama and Farmer, 2002). This paper reviews those findings and then suggests how they may inform various market prediction strategies.
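
    The simplest concrete instance behind this discussion is the fully mixed equilibrium of a 2x2 bimatrix game, obtained by making each player indifferent between the opponent's pure strategies. The sketch below solves matching pennies and assumes an interior equilibrium exists (nonzero denominators):

```python
def mixed_2x2(A, B):
    """Row player's p and column player's q for a fully mixed equilibrium
    of a 2x2 bimatrix game; assumes the interior equilibrium exists."""
    # p makes the column player indifferent; q makes the row player indifferent.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[1][0] - A[0][1] + A[1][1])
    return (p, 1 - p), (q, 1 - q)

A = [[1, -1], [-1, 1]]     # row player's payoffs (matching pennies)
B = [[-1, 1], [1, -1]]     # column player's payoffs
print(mixed_2x2(A, B))     # ((0.5, 0.5), (0.5, 0.5))
```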

  3. European Conference on Complex Systems 2012

    CERN Document Server

    Kirkilionis, Markus; Nicolis, Gregoire

    2013-01-01

    The European Conference on Complex Systems, held under the patronage of the Complex Systems Society, is an annual event that has become the leading European conference devoted to complexity science. ECCS'12, its ninth edition, took place in Brussels during the first week of September 2012. It gathered about 650 scholars representing a wide range of topics relating to complex systems research, with emphasis on interdisciplinary approaches. More specifically, the following tracks were covered:

    1. Foundations of Complex Systems
    2. Complexity, Information and Computation
    3. Prediction, Policy and Planning, Environment
    4. Biological Complexity
    5. Interacting Populations, Collective Behavior
    6. Social Systems, Economics and Finance

    This book contains a selection of the contributions presented at the conference and its satellite meetings. Its contents reflect the extent, diversity and richness of research areas in the field, both fundamental and applied.

  4. On the average complexity of sphere decoding in lattice space-time coded multiple-input multiple-output channel

    KAUST Repository

    Abediseid, Walid

    2012-01-01

    complexity of sphere decoding for the quasi-static, lattice space-time (LAST) coded MIMO channel. Specifically, we derive an upper bound on the tail distribution of the decoder's computational complexity. We show that when the computational complexity exceeds
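
    A toy illustration of why this complexity is a random variable (a naive depth-first sphere decoder with radius pruning; the BPSK alphabet and dimensions are illustrative, not the paper's lattice setup): the number of nodes the search visits depends on the channel and noise realization, and it is the tail of this count that the paper bounds.

        import numpy as np

        def sphere_decode(H, y, symbols=(-1.0, 1.0)):
            """Depth-first sphere decoding of y = Hx + n over symbols^m.
            Returns the ML estimate and the number of visited nodes."""
            Q, R = np.linalg.qr(H)
            z, m = Q.T @ y, H.shape[1]
            best = {"x": None, "r2": np.inf}
            nodes = 0

            def search(level, x, dist2):
                nonlocal nodes
                for s in symbols:
                    nodes += 1
                    resid = z[level] - R[level, level] * s - R[level, level + 1:] @ x[level + 1:]
                    d2 = dist2 + resid**2
                    if d2 >= best["r2"]:
                        continue              # prune: point lies outside the sphere
                    x[level] = s
                    if level == 0:
                        best["x"], best["r2"] = x.copy(), d2   # shrink the sphere
                    else:
                        search(level - 1, x, d2)

            search(m - 1, np.zeros(m), 0.0)
            return best["x"], nodes

        rng = np.random.default_rng(0)
        H = rng.standard_normal((4, 4))
        x = rng.choice([-1.0, 1.0], 4)
        y = H @ x + 0.1 * rng.standard_normal(4)
        print(sphere_decode(H, y))            # node count varies with the noise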

  5. Factors affecting learning of vector math from computer-based practice: Feedback complexity and prior knowledge

    Directory of Open Access Journals (Sweden)

    Andrew F. Heckler

    2016-06-01

    In experiments including over 450 university-level students, we studied the effectiveness and time efficiency of several levels of feedback complexity in simple, computer-based training utilizing static question sequences. The learning domain was simple vector math, an essential skill in introductory physics. In a unique full factorial design, we studied the relative effects of “knowledge of correct response” feedback and “elaborated feedback” (i.e., a general explanation), both separately and together. A number of other factors were analyzed, including training time, physics course grade, prior knowledge of vector math, and student beliefs about both their proficiency in and the importance of vector math. We hypothesize a simple model predicting how the effectiveness of feedback depends on prior knowledge, and the results confirm this knowledge-by-treatment interaction. Most notably, elaborated feedback is the most effective feedback, especially for students with low prior knowledge and low course grade. In contrast, knowledge of correct response feedback was less effective for low-performing students, and including both kinds of feedback did not significantly improve performance compared to elaborated feedback alone. Further, while elaborated feedback resulted in higher scores, the learning rate was at best only marginally higher because the training time was slightly longer. Training time data revealed that students spent significantly more time on the elaborated feedback after answering a training question incorrectly. Finally, we found that training improved student self-reported proficiency and that belief in the importance of the learned domain improved the effectiveness of training. Overall, we found that computer-based training with static question sequences and immediate elaborated feedback in the form of simple and general explanations can be an effective way to improve student performance on an essential physics skill.

  6. Digitized adiabatic quantum computing with a superconducting circuit.

    Science.gov (United States)

    Barends, R; Shabani, A; Lamata, L; Kelly, J; Mezzacapo, A; Las Heras, U; Babbush, R; Fowler, A G; Campbell, B; Chen, Yu; Chen, Z; Chiaro, B; Dunsworth, A; Jeffrey, E; Lucero, E; Megrant, A; Mutus, J Y; Neeley, M; Neill, C; O'Malley, P J J; Quintana, C; Roushan, P; Sank, D; Vainsencher, A; Wenner, J; White, T C; Solano, E; Neven, H; Martinis, John M

    2016-06-09

    Quantum mechanics can help to solve complex problems in physics and chemistry, provided they can be programmed in a physical device. In adiabatic quantum computing, a system is slowly evolved from the ground state of a simple initial Hamiltonian to a final Hamiltonian that encodes a computational problem. The appeal of this approach lies in the combination of simplicity and generality; in principle, any problem can be encoded. In practice, applications are restricted by limited connectivity, available interactions and noise. A complementary approach is digital quantum computing, which enables the construction of arbitrary interactions and is compatible with error correction, but uses quantum circuit algorithms that are problem-specific. Here we combine the advantages of both approaches by implementing digitized adiabatic quantum computing in a superconducting system. We tomographically probe the system during the digitized evolution and explore the scaling of errors with system size. We then let the full system find the solution to random instances of the one-dimensional Ising problem as well as problem Hamiltonians that involve more complex interactions. This digital quantum simulation of the adiabatic algorithm consists of up to nine qubits and up to 1,000 quantum logic gates. The demonstration of digitized adiabatic quantum computing in the solid state opens a path to synthesizing long-range correlations and solving complex computational problems. When combined with fault-tolerance, our approach becomes a general-purpose algorithm that is scalable.
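
    A small numerical sketch of the digitized-adiabatic idea (a linear schedule, dense matrices and exact matrix exponentials are illustrative simplifications of the gate-based experiment): Trotterize the interpolation H(s) = (1-s)H0 + sH1 for a short transverse-field Ising chain and compare the final energy with the exact ground energy.

        import numpy as np
        from scipy.linalg import expm

        I2, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])

        def op(single, site, n):
            """Embed a single-qubit operator at `site` in an n-qubit register."""
            out = np.array([[1.0]])
            for k in range(n):
                out = np.kron(out, single if k == site else I2)
            return out

        n = 4
        H0 = -sum(op(X, i, n) for i in range(n))                        # driver
        H1 = -sum(op(Z, i, n) @ op(Z, i + 1, n) for i in range(n - 1))  # 1D Ising problem

        steps, T = 100, 20.0
        dt = T / steps
        psi = np.linalg.eigh(H0)[1][:, 0].astype(complex)   # ground state of H0
        for k in range(1, steps + 1):
            s = k / steps
            # one digitized (Trotter) step of the adiabatic schedule
            psi = expm(-1j * dt * s * H1) @ (expm(-1j * dt * (1 - s) * H0) @ psi)

        E = (psi.conj() @ H1 @ psi).real
        print("final energy:", E, " exact ground energy:", np.linalg.eigvalsh(H1)[0])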

  7. A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems

    Science.gov (United States)

    Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  8. Computational study of formamide-water complexes using the SAPT and AIM methods

    International Nuclear Information System (INIS)

    Parreira, Renato L.T.; Valdes, Haydee; Galembeck, Sergio E.

    2006-01-01

    In this work, the complexes formed between formamide and water were studied by means of the SAPT and AIM methods. Complexation leads to significant alterations in the geometries and electronic structure of formamide. Intermolecular interactions in the complexes are intense, especially in the cases where the solvent interacts with the carbonyl and amide groups simultaneously. In the transition states, the interaction between the water molecule and the lone pair on the amide nitrogen is also important. In all the complexes studied herein, the electrostatic interactions between formamide and water are the main attractive force, and their contribution may be five times as large as the corresponding contribution from dispersion, and twice as large as the contribution from induction. However, an increase in the resonance of planar formamide with the successive addition of water molecules may suggest that the hydrogen bonds taking place between formamide and water have some covalent character.

  9. Chaos, complexity, and random matrices

    Science.gov (United States)

    Cotler, Jordan; Hunter-Jones, Nicholas; Liu, Junyu; Yoshida, Beni

    2017-11-01

    Chaos and complexity entail an entropic and computational obstruction to describing a system, and thus are intrinsically difficult to characterize. In this paper, we consider time evolution by Gaussian Unitary Ensemble (GUE) Hamiltonians and analytically compute out-of-time-ordered correlation functions (OTOCs) and frame potentials to quantify scrambling, Haar-randomness, and circuit complexity. While our random matrix analysis gives a qualitatively correct prediction of the late-time behavior of chaotic systems, we find unphysical behavior at early times including an O(1) scrambling time and the apparent breakdown of spatial and temporal locality. The salient feature of GUE Hamiltonians which gives us computational traction is the Haar-invariance of the ensemble, meaning that the ensemble-averaged dynamics look the same in any basis. Motivated by this property of the GUE, we introduce k-invariance as a precise definition of what it means for the dynamics of a quantum system to be described by random matrix theory. We envision that the dynamical onset of approximate k-invariance will be a useful tool for capturing the transition from early-time chaos, as seen by OTOCs, to late-time chaos, as seen by random matrix theory.
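
    A minimal sketch of one diagnostic in this family (matrix size and sample count are illustrative): sample GUE Hamiltonians and average the 2-point spectral form factor |Tr exp(-iHt)|^2, whose dip-ramp-plateau shape is the random-matrix fingerprint the paper relates to OTOCs and frame potentials.

        import numpy as np

        def gue(n, rng):
            """Sample an n x n GUE matrix (Hermitian, complex Gaussian entries)."""
            A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
            return (A + A.conj().T) / 2

        rng = np.random.default_rng(1)
        n, samples = 64, 200
        eigs = np.array([np.linalg.eigvalsh(gue(n, rng)) for _ in range(samples)])

        ts = np.logspace(-1, 2, 120)
        # ensemble-averaged 2-point spectral form factor, normalized to 1 at t = 0
        sff = [(np.abs(np.exp(-1j * t * eigs).sum(axis=1))**2).mean() / n**2 for t in ts]
        print("dip:", min(sff), " late-time plateau (~1/n):", sff[-1])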

  10. On the Complexity of Computing Optimal Private Park-and-Ride Plans

    DEFF Research Database (Denmark)

    Olsen, Martin

    2013-01-01

    or size of the parties' inputs. This is despite the fact that in many cases the size of a party's input can be confidential. The reason for this omission seems to have been the folklore belief that, as with encryption, it is impossible to carry out non-trivial secure computation while hiding the size...... that the folklore belief may not be fully accurate. In this work, we initiate a theoretical study of input-size hiding secure computation, and focus on the two-party case. We present definitions for this task, and deal with the subtleties that arise in the setting where there is no a priori polynomial bound...

  11. A study of complex scaling transformation using the Wigner representation of wavefunctions.

    Science.gov (United States)

    Kaprálová-Ždánská, Petra Ruth

    2011-05-28

    The complex scaling operator exp(−θx̂p̂/ℏ), being a foundation of the complex scaling method for resonances, is studied in the Wigner phase-space representation. It is shown that the complex scaling operator behaves similarly to the squeezing operator, rotating and amplifying Wigner quasi-probability distributions of the respective wavefunctions. It is disclosed that the distorting effect of the complex scaling transformation is correlated with increased numerical errors of computed resonance energies and widths. The behavior of the numerical error is demonstrated for a computation of CO(2+) vibronic resonances. © 2011 American Institute of Physics.

  12. Extraction of quantifiable information from complex systems

    CERN Document Server

    Dahmen, Wolfgang; Griebel, Michael; Hackbusch, Wolfgang; Ritter, Klaus; Schneider, Reinhold; Schwab, Christoph; Yserentant, Harry

    2014-01-01

    In April 2007, the  Deutsche Forschungsgemeinschaft (DFG) approved the  Priority Program 1324 “Mathematical Methods for Extracting Quantifiable Information from Complex Systems.” This volume presents a comprehensive overview of the most important results obtained over the course of the program.   Mathematical models of complex systems provide the foundation for further technological developments in science, engineering and computational finance.  Motivated by the trend toward steadily increasing computer power, ever more realistic models have been developed in recent years. These models have also become increasingly complex, and their numerical treatment poses serious challenges.   Recent developments in mathematics suggest that, in the long run, much more powerful numerical solution strategies could be derived if the interconnections between the different fields of research were systematically exploited at a conceptual level. Accordingly, a deeper understanding of the mathematical foundations as w...

  13. Hybrid RANS/LES applied to complex terrain

    DEFF Research Database (Denmark)

    Bechmann, Andreas; Sørensen, Niels N.

    2011-01-01

    Large Eddy Simulation (LES) of the wind in complex terrain is limited by computational cost: the number of computational grid points required to resolve the near-ground turbulent structures (eddies) is very high. The traditional solution to the problem has been to apply a wall function that accounts for the whole near-wall region. Recently, a hybrid method was proposed in which the eddies close to the ground were modelled in a Reynolds-averaged sense (RANS) and the eddies above this region were simulated using LES. The advantage of the approach is the ability to use shallow cells of high aspect ratio in the RANS layer and thereby resolve the mean near-wall velocity profile. The method is applicable to complex terrain and the benefits of traditional LES are kept intact. Using the hybrid method, simulations of the wind over a natural complex terrain near Wellington in New Zealand...

  14. Complexity and Control: Towards a Rigorous Behavioral Theory of Complex Dynamical Systems

    Science.gov (United States)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    We introduce our motive for writing this book on complexity and control with a popular "complexity myth," which seems to be quite widespread among chaos and complexity theory fashionistas: "Low-dimensional systems usually exhibit complex behaviours (which we know from May's studies of the Logistic map), while high-dimensional systems usually exhibit simple behaviours (which we know from synchronisation studies of the Kuramoto model)." We admit that this naive view on complex (e.g., human) systems versus simple (e.g., physical) systems might seem compelling to various technocratic managers and politicians; indeed, the idea makes for appealing sound-bites. However, it is enough to see, both in the equations and in computer simulations of pendula of various degrees: (i) a single pendulum, (ii) a double pendulum, and (iii) a triple pendulum, that this popular myth is plain nonsense. The only thing that we can learn from it is what every tyrant already knows: by using force as a strong means of control, it is possible to effectively synchronise even hundreds of millions of people, at least for a while.
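
    The contrast the authors draw can be reproduced with the very examples the passage cites; a sketch (all parameters illustrative): the one-dimensional logistic map is chaotic, while a large, strongly coupled Kuramoto population settles into simple synchronized behavior.

        import numpy as np

        # Low-dimensional chaos: logistic map at r = 4; two nearby initial
        # conditions separate exponentially fast.
        x, y = 0.2, 0.2 + 1e-10
        for _ in range(50):
            x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        print("logistic-map separation after 50 steps:", abs(x - y))

        # High-dimensional simplicity: mean-field Kuramoto model above the
        # critical coupling; the order parameter |r| grows toward 1.
        rng = np.random.default_rng(0)
        N, K, dt = 1000, 4.0, 0.01
        omega = rng.standard_normal(N)            # natural frequencies
        theta = rng.uniform(0, 2 * np.pi, N)
        for _ in range(2000):
            r = np.exp(1j * theta).mean()
            theta += dt * (omega + K * np.abs(r) * np.sin(np.angle(r) - theta))
        print("Kuramoto order parameter |r|:", abs(np.exp(1j * theta).mean()))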

  15. From Computational Thinking to Computational Empowerment: A 21st Century PD Agenda

    DEFF Research Database (Denmark)

    Iversen, Ole Sejer; Smith, Rachel Charlotte; Dindler, Christian

    2018-01-01

    We propose computational empowerment as an approach, and a Participatory Design response, to challenges related to digitalization of society and the emerging need for digital literacy in K12 education. Our approach extends the current focus on computational thinking to include contextual, human-centred and societal challenges and impacts involved in students’ creative and critical engagement with digital technology. Our research is based on the FabLab@School project, in which a PD approach to computational empowerment provided opportunities as well as further challenges for the complex agenda of digital technology in education. We argue that PD has the potential to drive a computational empowerment agenda in education, by connecting political PD with contemporary visions for addressing a future digitalized labor market and society.

  16. Complex matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    Science.gov (United States)

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2014-02-11

    Mechanisms for performing a complex matrix multiplication operation are provided. A vector load operation is performed to load a first vector operand of the complex matrix multiplication operation to a first target vector register. The first vector operand comprises a real and imaginary part of a first complex vector value. A complex load and splat operation is performed to load a second complex vector value of a second vector operand and replicate the second complex vector value within a second target vector register. The second complex vector value has a real and imaginary part. A cross multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the complex matrix multiplication operation. The partial product is accumulated with other partial products and a resulting accumulated partial product is stored in a result vector register.
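
    A plain-Python emulation of the dataflow described above (lane layout and function names are illustrative, not the patented instruction set): an interleaved real/imaginary vector register, a "complex load and splat," and a cross multiply-add that accumulates the complex partial product.

        import numpy as np

        def splat_complex(c, nlanes):
            """'Complex load and splat': replicate one complex value across an
            interleaved [re, im, re, im, ...] vector register."""
            return np.tile([c.real, c.imag], nlanes)

        def cross_madd(u, v, acc):
            """Cross multiply-add on interleaved registers: accumulate the
            elementwise complex product u * v into acc."""
            a, b = u[0::2], u[1::2]        # real / imaginary lanes of u
            c, d = v[0::2], v[1::2]
            acc[0::2] += a * c - b * d     # real part of the partial product
            acc[1::2] += a * d + b * c     # imaginary part
            return acc

        rng = np.random.default_rng(0)
        row = rng.standard_normal(4) + 1j * rng.standard_normal(4)
        scalar = 0.5 - 1.5j

        u = np.empty(8)
        u[0::2], u[1::2] = row.real, row.imag            # interleaved operand
        acc = cross_madd(u, splat_complex(scalar, 4), np.zeros(8))
        assert np.allclose(acc[0::2] + 1j * acc[1::2], row * scalar)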

  17. Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis

    Science.gov (United States)

    Freund, Roland W.

    1991-01-01

    We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric and shifted Hermitian linear systems, respectively. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
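
    The "obvious approach" mentioned at the end is easy to make concrete; a sketch (the shifted Hermitian test matrix is illustrative): solve the equivalent real system of twice the dimension for the real and imaginary parts of x. The doubled dimension, and the different spectral properties of the real form, are what motivate Krylov methods that exploit the complex structure directly.

        import numpy as np

        def solve_via_real_form(A, b):
            """Solve the complex system A x = b through the equivalent real system
            [[Re A, -Im A], [Im A, Re A]] [Re x; Im x] = [Re b; Im b]."""
            n = A.shape[0]
            K = np.block([[A.real, -A.imag],
                          [A.imag,  A.real]])
            z = np.linalg.solve(K, np.concatenate([b.real, b.imag]))
            return z[:n] + 1j * z[n:]

        rng = np.random.default_rng(0)
        n = 50
        B = rng.standard_normal((n, n))
        A = (B + B.T) + 0.3j * np.eye(n)      # shifted Hermitian: H + i*sigma*I
        b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        assert np.allclose(A @ solve_via_real_form(A, b), b)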

  18. Real-Time Thevenin Impedance Computation

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Jóhannsson, Hjörtur

    2013-01-01

    operating state, and strict time constraints are difficult to adhere to as the complexity of the grid increases. Several suggested approaches for real-time stability assessment require Thevenin impedances to be determined for the observed system conditions. By combining matrix factorization, graph reduction, and parallelization, we develop an algorithm for computing Thevenin impedances an order of magnitude faster than previous approaches. We test the factor-and-solve algorithm with data from several power grids of varying complexity, and we show how the algorithm allows real-time stability assessment of complex power...
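
    A miniature of the factor-and-solve idea (the 3-bus network and its admittances are illustrative; real assessments also need generator and load models): factorize the bus admittance matrix once, then obtain each Thevenin impedance as a diagonal entry of its inverse with one cheap solve per bus.

        import numpy as np
        from scipy.sparse import csc_matrix
        from scipy.sparse.linalg import splu

        def thevenin_impedances(Ybus, buses):
            """Thevenin impedance at bus k is (Ybus^-1)[k, k]: factorize once,
            then reuse the factorization for every bus of interest."""
            lu = splu(csc_matrix(Ybus))              # one sparse LU factorization
            n = Ybus.shape[0]
            zth = {}
            for k in buses:
                e = np.zeros(n, dtype=complex)
                e[k] = 1.0
                zth[k] = lu.solve(e)[k]              # k-th column of Ybus^-1
            return zth

        # 3-bus example: lines 0-1 and 1-2, shunts at buses 0 and 2
        y01, y12, ysh = 5 - 15j, 4 - 12j, 0.1 + 1j
        Ybus = np.array([[y01 + ysh, -y01,       0],
                         [-y01,       y01 + y12, -y12],
                         [0,         -y12,       y12 + ysh]])
        print(thevenin_impedances(Ybus, [0, 1, 2]))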

  19. Numerical Analysis of Multiscale Computations

    CERN Document Server

    Engquist, Björn; Tsai, Yen-Hsi R

    2012-01-01

    This book is a snapshot of current research in multiscale modeling, computations and applications. It covers fundamental mathematical theory, numerical algorithms, and practical computational advice for analysing single- and multiphysics models containing a variety of scales in time and space. Complex fluids, porous media flow and oscillatory dynamical systems are treated in some extra depth, as are tools such as analytical and numerical homogenization and the fast multipole method.

  20. Scattering methods in complex fluids

    CERN Document Server

    Chen, Sow-Hsin

    2015-01-01

    Summarising recent research on the physics of complex liquids, this in-depth analysis examines the topic of complex liquids from a modern perspective, addressing experimental, computational and theoretical aspects of the field. Selecting only the most interesting contemporary developments in this rich field of research, the authors present multiple examples including aggregation, gel formation and glass transition, in systems undergoing percolation, at criticality, or in supercooled states. Connecting experiments and simulation with key theoretical principles, and covering numerous systems including micelles, micro-emulsions, biological systems, and cement pastes, this unique text is an invaluable resource for graduate students and researchers looking to explore and understand the expanding field of complex fluids.