WorldWideScience

Sample records for large-scale complex titanium

  1. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  2. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    12 pages, 2 figures. This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  3. Low-Complexity Transmit Antenna Selection and Beamforming for Large-Scale MIMO Communications

    Directory of Open Access Journals (Sweden)

    Kun Qian

    2014-01-01

    Transmit antenna selection plays an important role in large-scale multiple-input multiple-output (MIMO) communications, but optimal large-scale MIMO antenna selection is a technical challenge. Exhaustive search is often employed in antenna selection, but it cannot be efficiently implemented in large-scale MIMO communication systems due to its prohibitively high computational complexity. This paper proposes a low-complexity interactive multiple-parameter optimization method for joint transmit antenna selection and beamforming in large-scale MIMO communication systems. The objective is to jointly maximize the channel outage capacity and signal-to-noise ratio (SNR) performance and minimize the mean square error in transmit antenna selection and minimum variance distortionless response (MVDR) beamforming, without exhaustive search. The effectiveness of all the proposed methods is verified by extensive simulation results. It is shown that the antenna selection processing time required by the proposed method does not increase with the number of selected antennas, whereas the computational complexity of the conventional exhaustive search method increases significantly when large-scale antenna arrays are employed in the system. This is particularly useful in antenna selection for large-scale MIMO communication systems.
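
    For reference, the MVDR beamforming step mentioned in this abstract follows the standard minimum variance distortionless response solution; the expression below (in LaTeX notation) is the textbook form, not a result quoted from the paper. For steering vector a(θ) and interference-plus-noise covariance R, the weight vector

        w_{\mathrm{MVDR}} = \frac{R^{-1} a(\theta)}{a^{H}(\theta)\, R^{-1}\, a(\theta)}

    minimizes w^H R w subject to the distortionless constraint w^H a(θ) = 1, which is the criterion the joint selection/beamforming optimization builds on.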

  4. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulation applications. The intention is to identify new research directions in this field and

  5. Large Scale Emerging Properties from Non Hamiltonian Complex Systems

    Directory of Open Access Journals (Sweden)

    Marco Bianucci

    2017-06-01

    The concept of “large scale” obviously depends on the phenomenon we are interested in. For example, in the field of the foundations of thermodynamics from microscopic dynamics, the spatial and temporal large scales are of the order of fractions of a millimetre and of microseconds, respectively, or less, and are defined in relation to the spatial and temporal scales of the microscopic systems. In large-scale oceanography or global climate dynamics problems, the scales of interest are of the order of thousands of kilometres in space and many years in time, and are compared to the local and daily/monthly scales of atmosphere and ocean dynamics. In all cases a Zwanzig projection approach is, at least in principle, an effective tool to obtain classes of universal smooth “large scale” dynamics for the few degrees of freedom of interest, starting from the complex dynamics of the whole (usually many-degrees-of-freedom) system. The projection approach leads to a very complex calculus with differential operators, which is drastically simplified when the basic dynamics of the system of interest is Hamiltonian, as happens in foundations-of-thermodynamics problems. However, in geophysical fluid dynamics, biology, and most physical problems, the fundamental building-block equations of motion have a non-Hamiltonian structure. Thus, to continue to apply the useful projection approach in these cases as well, we exploit the generalization of the Hamiltonian formalism given by the Lie algebra of dissipative differential operators. In this way, we are able to deal analytically with the series of differential operators stemming from the projection approach applied to these general cases. We then apply this formalism to obtain some relevant results concerning the statistical properties of the El Niño Southern Oscillation (ENSO).
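
    As background for the projection method this abstract relies on (standard material, not taken from the paper itself), the Zwanzig approach starts from the exact projected evolution equation for the relevant part of the state. With projector P onto the degrees of freedom of interest, complement Q = 1 - P, and Liouville-type generator L,

        \partial_t\, P\rho(t) = P L P \rho(t) + \int_0^{t} P L\, e^{Q L s}\, Q L\, P\rho(t-s)\, \mathrm{d}s + P L\, e^{Q L t}\, Q\rho(0),

    and the series of differential operators referred to above arises when the memory kernel e^{QLs} is expanded; the Lie-algebraic machinery is what keeps that expansion tractable when L is not Hamiltonian.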

  6. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
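
    The "reduce then sample" strategy can be illustrated with a minimal Metropolis-Hastings sketch in which a cheap surrogate stands in for the full forward model. This is only a schematic: the surrogate, prior, noise level, and data below are hypothetical placeholders and are not part of the SAGUARO codes.

        import numpy as np

        def reduced_forward(theta):
            # hypothetical cheap surrogate replacing the expensive full-order forward model
            return np.array([theta[0] + 0.5 * theta[1], theta[0] * theta[1]])

        def log_posterior(theta, data, noise_std=0.1, prior_std=1.0):
            # Gaussian likelihood evaluated with the *reduced* model, plus a Gaussian prior
            misfit = data - reduced_forward(theta)
            return (-0.5 * np.sum(misfit ** 2) / noise_std ** 2
                    - 0.5 * np.sum(theta ** 2) / prior_std ** 2)

        def metropolis(data, n_steps=5000, step=0.1, seed=0):
            rng = np.random.default_rng(seed)
            theta = np.zeros(2)
            logp = log_posterior(theta, data)
            samples = []
            for _ in range(n_steps):
                proposal = theta + step * rng.standard_normal(theta.shape)
                logp_prop = log_posterior(proposal, data)
                if np.log(rng.random()) < logp_prop - logp:  # accept/reject step
                    theta, logp = proposal, logp_prop
                samples.append(theta.copy())
            return np.array(samples)

        samples = metropolis(data=np.array([0.8, 0.15]))
        print("posterior mean estimate:", samples.mean(axis=0))

    In the "sample then reduce" alternative, the full model is kept and adjoint-based gradient and Hessian information is used to guide the proposals instead.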

  7. Parameter and State Estimation of Large-Scale Complex Systems Using Python Tools

    Directory of Open Access Journals (Sweden)

    M. Anushka S. Perera

    2015-07-01

    This paper discusses topics related to automating parameter, disturbance and state estimation analysis of large-scale complex nonlinear dynamic systems using free programming tools. For large-scale complex systems, before implementing any state estimator, the system should be analyzed for structural observability, and the structural observability analysis can be automated using Modelica and Python. As a result of the structural observability analysis, the system may be decomposed into subsystems, some of which may be observable (with respect to parameters, disturbances, and states) while others may not. The state estimation process is carried out for the observable subsystems, and the optimal number of additional measurements is prescribed for the unobservable subsystems to make them observable. In this paper, an industrial case study is considered: the copper production process at Glencore Nikkelverk, Kristiansand, Norway. The copper production process is a large-scale complex system. It is shown how to implement various state estimators, in Python, to estimate parameters and disturbances, in addition to states, based on available measurements.
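
    As a toy illustration of the structural screening step described above (a simplified sketch, not the authors' Modelica/Python workflow), one necessary condition for structural observability is that every state or disturbance can reach at least one measured output in the system's influence graph:

        import networkx as nx

        # hypothetical influence graph: an edge u -> v means "u appears in the equation for v"
        G = nx.DiGraph([("x1", "x2"), ("x2", "x3"), ("d1", "x2"), ("x3", "y1")])
        G.add_node("x4")  # a state that influences nothing measured

        states = {"x1", "x2", "x3", "x4", "d1"}
        outputs = {"y1"}

        # output-reachability check: anything that cannot reach an output is structurally unobservable
        needs_measurement = [s for s in sorted(states)
                             if not any(nx.has_path(G, s, y) for y in outputs)]
        print("candidates for additional measurements:", needs_measurement)

    A full structural analysis would also check the rank (matching) condition, which is where the decomposition into observable and unobservable subsystems comes from.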

  8. On the principles of microstructure scale development for titanium alloys

    International Nuclear Information System (INIS)

    Kolachev, B.A.; Mal'kov, A.V.; Gus'kova, L.N.

    1982-01-01

    An analysis of the existing standard scale of microstructures for two-phase (α+β)-titanium alloy semiproducts is given. The basic principles for developing control microstructure scales for titanium alloys are presented, based on investigations and a generalization of literature data on the connection between the microstructure of titanium intermediate products made from (α+β)-alloys and their mechanical properties and service life characteristics. The possibility of changing mechanical and operating properties by obtaining a qualitatively and quantitatively regulated microstructure in the alloy is illustrated using the example of the (α+β)-titanium alloy

  9. Software quality assurance: in large scale and complex software-intensive systems

    NARCIS (Netherlands)

    Mistrik, I.; Soley, R.; Ali, N.; Grundy, J.; Tekinerdogan, B.

    2015-01-01

    Software Quality Assurance in Large Scale and Complex Software-intensive Systems presents novel and high-quality research on approaches that relate the quality of software architecture to system requirements, system architecture and enterprise architecture, or software testing. Modern software

  10. Evaluation model of project complexity for large-scale construction projects in Iran - A Fuzzy ANP approach

    Directory of Open Access Journals (Sweden)

    Aliyeh Kazemi

    2016-09-01

    Construction projects have always been complex. With the growing trend of this complexity, the implementation of large-scale construction projects becomes harder. Hence, evaluating and understanding these complexities is critical. A correct evaluation of a project's complexity can provide executives and managers with a good resource to use. Fuzzy analytic network process (ANP) is a logical and systematic approach to definition, evaluation, and grading. This method allows for analyzing complex systems and determining their complexity. In this study, taking advantage of fuzzy ANP, effective indexes of complexity in large-scale construction projects in Iran have been determined and prioritized. The results show that the socio-political, project system interdependencies, and technological complexity indexes ranked as the top three. Furthermore, in a comparison of three major projects (commercial-administrative, hospital, and skyscraper), the hospital project was evaluated as the most complex. This model is beneficial for professionals managing large-scale projects.
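
    To make the fuzzy-ANP flavour concrete, the sketch below defuzzifies hypothetical triangular judgments by their centroid and derives priority weights with the geometric-mean method. This is a common simplification for illustration only; it is not the evaluation model or the data used in the paper, and a full fuzzy ANP additionally handles interdependencies through a supermatrix.

        import numpy as np

        # hypothetical triangular fuzzy pairwise judgments (l, m, u) for three complexity indexes
        fuzzy = {
            (0, 1): (2, 3, 4),  # socio-political vs. project system interdependencies
            (0, 2): (3, 4, 5),  # socio-political vs. technological
            (1, 2): (1, 2, 3),  # interdependencies vs. technological
        }

        n = 3
        A = np.ones((n, n))
        for (i, j), (l, m, u) in fuzzy.items():
            crisp = (l + m + u) / 3.0          # centroid defuzzification
            A[i, j], A[j, i] = crisp, 1.0 / crisp

        weights = np.prod(A, axis=1) ** (1.0 / n)  # geometric-mean prioritization
        weights /= weights.sum()
        labels = ["socio-political", "interdependencies", "technological"]
        print(dict(zip(labels, weights.round(3))))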

  11. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [The University of Texas at Austin

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  12. Titanium disilicide formation by sputtering of titanium on heated silicon substrate

    Science.gov (United States)

    Tanielian, M.; Blackstone, S.

    1984-09-01

    We have sputter deposited titanium on bare silicon substrates at elevated temperatures. We find that at a substrate temperature of about 515 °C titanium silicide is formed due to the reaction of the titanium with the Si. The resistivity of the silicide is about 15 μΩ cm and it is not etchable in a selective titanium etch. This process can have applications in low-temperature, metal-oxide-semiconductor self-aligned silicide formation for very large scale integrated

  13. Infrared spectroscopy and density functional calculations on titanium-dinitrogen complexes

    Science.gov (United States)

    Yoo, Hae-Wook; Choi, Changhyeok; Cho, Soo Gyeong; Jung, Yousung; Choi, Myong Yong

    2018-04-01

    Titanium-nitrogen complexes were generated in this study by reacting laser-ablated titanium (Ti) atoms with N2 gas molecules. These complexes were isolated on a pre-deposited solid Ar matrix on a pre-cooled KBr window (T ∼ 5.4 K), allowing infrared spectra to be measured. Laser ablation experiments with the 15N2 isotope provided distinct isotopic shifts in the infrared spectra that strongly implicated the formation of titanium-nitrogen complexes, Ti(NN)x. Density functional theory (DFT) calculations were employed to investigate the molecular structures, electronic ground states, relative energies, and IR frequencies of the anticipated Ti(NN)x complexes. Based on the laser ablation experiments and DFT calculations, we were able to assign multiple Ti(NN)x (x = 1-6) species. In particular, Ti(NN)5 and Ti(NN)6, which have high nitrogen content, may serve as good precursors for preparing polynitrogens.

  14. [Complex skull defects reconstruction with CAD/CAM titanium and polyetheretherketone (PEEK) implants].

    Science.gov (United States)

    Eolchiyan, S A

    2014-01-01

    Predictable and stable functional and aesthetic results are the priority for the neurosurgeon dealing with the reconstruction of large cranial bone defects and complex-formed skull defects involving the cranio-orbital region. The paper presents experience with CAD/CAM titanium and polyetheretherketone (PEEK) implants for the reconstruction of complex-formed and large skull bone defects. Between 2005 and 2013, nine patients (5 females and 4 males) underwent cranioplasty and cranio-facial reconstruction with insertion of customized CAD/CAM titanium and PEEK implants. Computer-assisted preoperative planning was undertaken by the surgeon and the engineer together in 3 cases to provide accurate implant design. Eight patients had complex-formed and large posttraumatic defects of the fronto-orbital (7 cases) and parietal (one case) regions. In two of these cases, one-step reconstruction surgery for posttraumatic fronto-orbital defects combined with adjacent orbital roof (one case) and orbito-zygomatic (one case) deformities was performed. One patient underwent one-step primary cranioplasty after resection of a cranio-orbital fibrous dysplasia focus. Titanium implants were used in 4 cases and PEEK implants in 5. The follow-up period ranged from 6 months to 8.5 years (median 4.4 years). The accuracy of the intraoperative implant fit was perfect in all cases. Postoperative wounds healed primarily and there were no complications in the series presented. Postoperative clinical assessment and CT data testified to high implant precision and good functional and aesthetic outcomes in all patients. The application of CAD/CAM titanium and PEEK implants should allow for optimal reconstruction in challenging patients with complex-formed and large skull bone defects, providing predictably good functional and aesthetic results together with reduced surgical morbidity and duration. Computer-assisted preoperative planning should be undertaken for CAD/CAM implant creation in

  15. Studies on combined model based on functional objectives of large scale complex engineering

    Science.gov (United States)

    Yuting, Wang; Jingchun, Feng; Jiabao, Sun

    2018-03-01

    As various functions are included in large-scale complex engineering, and each function is realized through the completion of one or more projects, the combined projects affecting these functions should be identified. Based on the types of project portfolio, the relationship between projects and their functional objectives was analyzed. On that premise, portfolio project techniques based on functional objectives were introduced, and we then studied and proposed the principles of portfolio project techniques based on the functional objectives of projects. In addition, the processes of combining projects were also constructed. With the help of portfolio project techniques based on the functional objectives of projects, our research findings lay a good foundation for the portfolio management of large-scale complex engineering.

  16. Complex compound polyvinyl alcohol-titanic acid/titanium dioxide

    Science.gov (United States)

    Prosanov, I. Yu.

    2013-02-01

    A complex compound polyvinyl alcohol-titanic acid has been produced and investigated by means of IR and Raman spectroscopy, X-ray diffraction, and synchronous thermal analysis. It is claimed that it represents an interpolymeric complex of polyvinyl alcohol and hydrated titanium oxide.

  17. Synthesis of Cyclododecatriene from 1,3-Butadiene by Trimerization over Amine-Titanium Complex Catalyst

    International Nuclear Information System (INIS)

    Park, Da Min; Kim, Gye Ryung; Lee, Ju Hyun; Kim, Geon-Joong; Cho, Deuk Hee

    2013-01-01

    New complex catalysts were synthesized in this work by the reaction of titanium compounds (titanium chloride or titanium butoxide) with diamines, and they showed very high catalytic activities for the synthesis of cyclododecatriene (CDT) from 1,3-butadiene through trimerization. The CDT synthesis reaction was performed in an autoclave reactor, and the effects of reaction temperature, type of catalyst, amount of catalyst added to the system, Al/Ti mole ratio, and immobilization method on the yield of CDT were investigated. The titanium complex catalyst combined with diamine in a 1:1 ratio showed a selectivity to CDT of more than 90%. The ratio of TTT-CDT/TTC-CDT isomers in the product varied depending on the type of diamine combined with titanium and the Ti/diamine ratio. These homogeneous complexes could be used as heterogenized catalysts after anchoring on supports, and the immobilized titanium catalyst retained its catalytic activity over several recycled reactions without leaching. The carbon support containing titanium exhibited activity superior to that of the silica support. In particular, when the titanium complex was anchored on a support fabricated by the hydrolysis of tripropylaminosilane itself, the resulting titanium catalyst showed the highest BD conversion and CDT selectivity

  18. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    Science.gov (United States)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) Hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis, (2) Statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) Noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method developed and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  19. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960's and early 1970's, an interest in testing and operating RF cavities at 1.8 K motivated the development and construction of four large (300 Watt) 1.8 K refrigeration systems. In the past decade, development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved Niobium-Titanium superconductors has once again created interest in large-scale 1.8 K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 Watt 1.8 K system which incorporates new technology, cold compressors, to obtain the low vapor pressure needed for low temperature cooling. CEBAF proposes to use cold compressors to obtain 5 kW at 2.0 K. Magnetic refrigerators of 10 Watt capacity or higher at 1.8 K are now being developed. The state of the art of large-scale refrigeration in the range under 4 K will be reviewed. 28 refs., 4 figs., 7 tabs

  20. Arylimido zirconium and titanium complexes. Characteristic structures and application in ethylene polymerization

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Shifang; Zhang, Jing [Shanxi Univ., Taiyuan (China). Inst. of Applied Chemistry; Wang, Lijing [Shanxi Univ., Taiyuan (China). School of Chemistry and Chemical Engineering; Hua, Yupeng [Shanxi Univ., Taiyuan (China). School of Chemistry and Chemical Engineering; Inner Mongolia Univ., Ordos (China). College of Ordos; Sun, Wen-Hua [Chinese Academy of Sciences, Beijing (China). Key Laboratory of Engineering Plastics

    2016-07-01

    Dimeric anilidolithium (ArHNLi·Et2O)2 (Ar = 2,6-iPr2C6H3) reacted with zirconium tetrachloride in THF to give the heterometallic zirconium-lithium complex [(Et2O)2Li(μ-Cl)2(ArHN)(ArN=)Zr(μ-Cl)]2 (C1) and with titanium tetrachloride in toluene to give the titanium complex [(ArN=)TiCl2·(Et2O)2] (C2), each in good isolated yield. Their molecular structures in the solid state were confirmed by X-ray diffraction analysis. Upon activation with methylaluminoxane, both the arylimido zirconium and titanium complexes exhibited good catalytic activities toward ethylene polymerization.

  1. Geophysical mapping of complex glaciogenic large-scale structures

    DEFF Research Database (Denmark)

    Høyer, Anne-Sophie

    2013-01-01

    This thesis presents the main results of a four year PhD study concerning the use of geophysical data in geological mapping. The study is related to the Geocenter project, “KOMPLEKS”, which focuses on the mapping of complex, large-scale geological structures. The study area is approximately 100 km2...... data types and co-interpret them in order to improve our geological understanding. However, in order to perform this successfully, methodological considerations are necessary. For instance, a structure indicated by a reflection in the seismic data is not always apparent in the resistivity data...... information) can be collected. The geophysical data are used together with geological analyses from boreholes and pits to interpret the geological history of the hill-island. The geophysical data reveal that the glaciotectonic structures truncate at the surface. The directions of the structures were mapped...

  2. Complex Formation Control of Large-Scale Intelligent Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Ming Lei

    2012-01-01

    A new formation framework for large-scale intelligent autonomous vehicles is developed, which can realize complex formations while reducing data exchange. Using the proposed hierarchical formation method and the automatic dividing algorithm, vehicles are automatically divided into leaders and followers by exchanging information via a wireless network at the initial time. Leaders then form the formation's geometric shape using global formation information, and followers track their own virtual leaders to form line formations using local information. The formation control laws of leaders and followers are designed based on consensus algorithms. Moreover, collision-avoidance problems are considered and solved using artificial potential functions. Finally, a simulation example consisting of 25 vehicles shows the effectiveness of the theory.
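
    For reference, a textbook first-order consensus-based formation law (written here in LaTeX notation; it is not necessarily the exact law derived in the paper) drives each vehicle's position p_i toward its prescribed offset δ_i within the formation:

        \dot{p}_i = -\sum_{j \in \mathcal{N}_i} a_{ij}\,\big[(p_i - \delta_i) - (p_j - \delta_j)\big],

    where a_{ij} > 0 for communicating neighbours; collision avoidance of the kind mentioned above is typically added as the negative gradient of an artificial potential function between nearby vehicles.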

  3. Sensemaking in a Value Based Context for Large Scale Complex Engineered Systems

    Science.gov (United States)

    Sikkandar Basha, Nazareen

    The design and development of Large-Scale Complex Engineered Systems (LSCES) requires the involvement of multiple teams and numerous levels of the organization, and interactions with large numbers of people and interdisciplinary departments. Traditionally, requirements-driven Systems Engineering (SE) is used in the design and development of these LSCES. The requirements are used to capture the preferences of the stakeholder for the LSCES. Due to the complexity of the system, multiple levels of interaction are required to elicit the requirements of the system within the organization. Since LSCES involve people and interactions between teams and interdisciplinary departments, they are socio-technical in nature. The elicitation of requirements for most large-scale system projects is subject to creep in time and cost due to the uncertainty and ambiguity of requirements during design and development. In an organizational structure, cost and time overruns can occur at any level and iterate back and forth, thus increasing cost and time. To avoid such creep, past research has shown that rigorous approaches such as value-based design can be used to control it. But before such rigorous approaches can be used, the decision maker should have a proper understanding of requirements creep and of the state of the system when the creep occurs. Sensemaking is used to understand the state of the system when the creep occurs and to provide guidance to the decision maker. This research proposes the use of the Cynefin framework, a sensemaking framework, in the design and development of LSCES. It can aid in understanding the system and in decision making to minimize the value gap due to requirements creep by eliminating the ambiguity which occurs during design and development. A sample hierarchical organization is used to demonstrate the state of the system at the occurrence of requirements creep in terms of cost and time using the Cynefin framework. These

  4. A new large-scale manufacturing platform for complex biopharmaceuticals.

    Science.gov (United States)

    Vogel, Jens H; Nguyen, Huong; Giovannini, Roberto; Ignowski, Jolene; Garger, Steve; Salgotra, Anil; Tom, Jennifer

    2012-12-01

    Complex biopharmaceuticals, such as recombinant blood coagulation factors, are addressing critical medical needs and represent a growing multibillion-dollar market. For commercial manufacturing of such, sometimes inherently unstable, molecules it is important to minimize product residence time in non-ideal milieu in order to obtain acceptable yields and consistently high product quality. Continuous perfusion cell culture allows minimization of residence time in the bioreactor, but also brings unique challenges in product recovery, which requires innovative solutions. In order to maximize yield, process efficiency, facility and equipment utilization, we have developed, scaled-up and successfully implemented a new integrated manufacturing platform in commercial scale. This platform consists of a (semi-)continuous cell separation process based on a disposable flow path and integrated with the upstream perfusion operation, followed by membrane chromatography on large-scale adsorber capsules in rapid cycling mode. Implementation of the platform at commercial scale for a new product candidate led to a yield improvement of 40% compared to the conventional process technology, while product quality has been shown to be more consistently high. Over 1,000,000 L of cell culture harvest have been processed with 100% success rate to date, demonstrating the robustness of the new platform process in GMP manufacturing. While membrane chromatography is well established for polishing in flow-through mode, this is its first commercial-scale application for bind/elute chromatography in the biopharmaceutical industry and demonstrates its potential in particular for manufacturing of potent, low-dose biopharmaceuticals. Copyright © 2012 Wiley Periodicals, Inc.

  5. ct-DNA Binding and Antibacterial Activity of Octahedral Titanium (IV) Heteroleptic (Benzoylacetone and Hydroxamic Acids) Complexes

    Directory of Open Access Journals (Sweden)

    Raj Kaushal

    2016-01-01

    Five structurally related titanium(IV) heteroleptic complexes, [TiCl2(bzac)(L1–4)] and [TiCl3(bzac)(HL5)]; bzac = benzoylacetonate; L1–5 = benzohydroximate (L1), salicylhydroximate (L2), acetohydroximate (L3), hydroxyurea (L4), and N-benzoyl-N-phenyl hydroxylamine (L5), were used for the assessment of their antibacterial activities against ten pathogenic bacterial strains. The titanium(IV) complexes (1–5) demonstrated a significant level of antibacterial activity as measured using the agar well diffusion method. UV-Vis absorption spectroscopy was applied to get better insight into the nature of the binding between the titanium(IV) complexes and calf thymus DNA (ct-DNA). On the basis of the UV-Vis absorption results, the interaction between ct-DNA and the titanium(IV) complexes is likely to occur through the same mode. The results indicated that the titanium(IV) complexes can bind to calf thymus DNA (ct-DNA) via an intercalative mode. The intrinsic binding constant (Kb) was calculated from the absorption spectra using the Benesi-Hildebrand equation. Further, the Gibbs free energy was also calculated for all the complexes.
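
    The Benesi-Hildebrand treatment mentioned here is commonly written, for an absorption titration of a complex with ct-DNA, in the following standard form (quoted for context, not from the paper):

        \frac{[\mathrm{DNA}]}{\varepsilon_a - \varepsilon_f} \;=\; \frac{[\mathrm{DNA}]}{\varepsilon_b - \varepsilon_f} \;+\; \frac{1}{K_b\,(\varepsilon_b - \varepsilon_f)},

    where ε_a, ε_f, and ε_b are the apparent, free, and fully bound extinction coefficients; K_b is then obtained as the ratio of the slope to the intercept of the [DNA]/(ε_a - ε_f) versus [DNA] plot.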

  6. Peroxy-Titanium Complex-based inks for low temperature compliant anatase thin films.

    Science.gov (United States)

    Shabanov, N S; Asvarov, A Sh; Chiolerio, A; Rabadanov, K Sh; Isaev, A B; Orudzhev, F F; Makhmudov, S Sh

    2017-07-15

    Stable highly crystalline titanium dioxide colloids are of paramount importance for the establishment of a solution-processable library of materials that could help in bringing the advantages of digital printing to the world of photocatalysis and solar energy conversion. Nano-sized titanium dioxide in the anatase phase was synthesized by means of hydrothermal methods and treated with hydrogen peroxide to form Peroxy-Titanium Complexes (PTCs). The influence of hydrogen peroxide on the structural, optical and rheological properties of titanium dioxide and its colloidal solutions was assessed, and a practical demonstration of a low temperature compliant digitally printed anatase thin film is given. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Methods Dealing with Complexity in Selecting Joint Venture Contractors for Large-Scale Infrastructure Projects

    Directory of Open Access Journals (Sweden)

    Ru Liang

    2018-01-01

    The magnitude of business dynamics has increased rapidly due to the increased complexity, uncertainty, and risk of large-scale infrastructure projects. This has made it increasingly difficult for a single contractor to “go it alone.” As a consequence, joint venture contractors with diverse strengths and weaknesses cooperatively bid for projects. Understanding project complexity and making decisions on the optimal joint venture contractor is challenging. This paper studies how to select joint venture contractors for undertaking large-scale infrastructure projects based on a multiattribute mathematical model. Two different methods are developed to solve the problem. One is based on ideal points and the other is based on balanced ideal advantages. Both methods consider individual differences in expert judgment and contractor attributes. A case study of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project in China is used to demonstrate how to apply these two methods and their advantages.

  8. Reorganizing Complex Network to Improve Large-Scale Multiagent Teamwork

    Directory of Open Access Journals (Sweden)

    Yang Xu

    2014-01-01

    Large-scale multiagent teamwork has become popular in various domains. As in human society's infrastructure, agents coordinate with only some of the others, in a peer-to-peer complex network structure. Their organization has been proven to be a key factor influencing their performance. To improve team performance, we have identified three key factors. First, complex network effects may be able to promote team performance. Second, coordination interactions originating from their sources must be routed to capable agents. Although they can be transferred across the network via different paths, their sources and sinks depend on the intrinsic nature of the team, which is independent of the network connections. In addition, agents involved in the same plan often form a subteam and communicate with each other more frequently. Therefore, if the interactions between agents can be statistically recorded, we are able to set up an integrated network adjustment algorithm by combining the three key factors. Based on our abstracted teamwork simulations and the coordination statistics, we implemented the adaptive reorganization algorithm. The experimental results support our design: the reorganized network is more capable of coordinating heterogeneous agents.

  9. A large scale analysis of information-theoretic network complexity measures using chemical structures.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    This paper aims to investigate information-theoretic network complexity measures which have already been intensely used in mathematical and medicinal chemistry, including drug design. Numerous such measures have been developed so far, but many of them lack a meaningful interpretation, e.g., we want to examine which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large-scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based) complexity measure called the topological information content of a graph and some others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, a high uniqueness is an important and desirable property when designing novel topological descriptors having the potential to be applied to large chemical databases.
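
    For concreteness, the classical partition-based measure named here, the topological information content of a graph G with N vertices, is defined (standard definition, not reproduced from the paper) from the orbits V_1, ..., V_k of the automorphism group of G:

        I(G) = -\sum_{i=1}^{k} \frac{|V_i|}{N}\, \log_2 \frac{|V_i|}{N}.

    Partition-independent measures, by contrast, assign a probability to each vertex directly through an information functional rather than through such a partition.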

  10. The use of titanium dioxide micro-columns to selectively isolate phosphopeptides from proteolytic digests

    DEFF Research Database (Denmark)

    Thingholm, Tine E; Larsen, Martin R

    2009-01-01

    Titanium dioxide has very high affinity for phosphopeptides and it has become an efficient alternative to already existing methods for phosphopeptide enrichment from complex samples. Peptide loading in a highly acidic environment in the presence of 2,5-dihydroxybenzoic acid (DHB), phthalic acid......, or glycolic acid has been shown to improve selectivity significantly by reducing unspecific binding from nonphosphorylated peptides. The enriched phosphopeptides bound to the titanium dioxide are subsequently eluted from the micro-column using an alkaline buffer. Titanium dioxide chromatography is extremely...... tolerant towards most buffers used in biological experiments. It is highly robust and as such it has become one of the methods of choice in large-scale phospho-proteomics. Here we describe the protocol for phosphopeptide enrichment using titanium dioxide chromatography followed by desalting...

  11. Titanium application to power plant condensers

    International Nuclear Information System (INIS)

    Itoh, H.

    1987-01-01

    Recently, the growth in operating performance and construction plans of titanium-tubed condensers in thermal and nuclear power plants has been very impressive. High-quality, thinner welded titanium tubes used for cooling tubes, matching the design specifications of condensers, have been stably supplied through mass production. It can now be said that the various technical problems of titanium-tubed condensers have been solved, but data on operating performance in large-scale commercial plants are still scarce, and site-by-site information needs to be exchanged more frequently and on a larger scale. Projects to replace existing condenser cooling tubes with corrosion-resistant titanium tubes have been actively pursued, with the only remaining barrier to full adoption being cost effectiveness. It is hoped that condenser and tube manufacturers will conduct more joint value analyses

  12. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  13. A sourcebook of titanium alloy superconductivity

    CERN Document Server

    Collings, E W

    1983-01-01

    In less than two decades the concept of superconductivity has been transformed from a laboratory curiosity to usable large-scale applications. In the late 1960's the concept of filamentary stabilization released the usefulness of zero resistance into the marketplace, and the economic forces that drive technology soon focused on niobium-titanium alloys. They are ductile and thus fabricable into practical superconducting wires that have the critical currents and fields necessary for large-scale devices. More than ... In every field of science there are one or two individuals whose dedication, combined with an innate understanding, permits them to be able to grasp, condense, and explain to the rest of us what that field is all about. For the field of titanium alloy superconductivity, such an individual is Ted Collings. His background as a metallurgist has perhaps given him a distinct advantage in understanding superconductivity in titanium alloys because the optimization of superconducting parameters in ...

  14. Max-Min SINR in Large-Scale Single-Cell MU-MIMO: Asymptotic Analysis and Low Complexity Transceivers

    KAUST Repository

    Sifaou, Houssem; Kammoun, Abla; Sanguinetti, Luca; Debbah, Merouane; Alouini, Mohamed-Slim

    2016-01-01

    This work focuses on the downlink and uplink of large-scale single-cell MU-MIMO systems in which the base station (BS) endowed with M antennas communicates with K single-antenna user equipments (UEs). Particularly, we aim at reducing the complexity

  15. A Proactive Complex Event Processing Method for Large-Scale Transportation Internet of Things

    OpenAIRE

    Wang, Yongheng; Cao, Kening

    2014-01-01

    The Internet of Things (IoT) provides a new way to improve the transportation system. The key issue is how to process the numerous events generated by IoT. In this paper, a proactive complex event processing method is proposed for large-scale transportation IoT. Based on a multilayered adaptive dynamic Bayesian model, a Bayesian network structure learning algorithm using search-and-score is proposed to support accurate predictive analytics. A parallel Markov decision processes model is design...

  16. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost, in particular, limits its use. As computer models have grown in size (such as in the number of degrees of freedom), the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and the underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  17. Large-scale regions of antimatter

    International Nuclear Information System (INIS)

    Grobov, A. V.; Rubin, S. G.

    2015-01-01

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era

  18. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru [National Research Nuclear University MEPhI (Russian Federation)

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  19. Alkyne Hydroamination and Trimerization with Titanium Bis(phenolate)pyridine Complexes: Evidence for Low-Valent Titanium Intermediates and Synthesis of an Ethylene Adduct of Titanium(II)

    KAUST Repository

    Tonks, Ian A.

    2013-06-24

    A class of titanium precatalysts of the type (ONO)TiX2 (ONO = pyridine-2,6-bis(4,6-di-tert-butylphenolate); X = Bn, NMe2) has been synthesized and crystallographically characterized. The (ONO)TiX2 (X = Bn, NMe2, X2 = NPh) complexes are highly active precatalysts for the hydroamination of internal alkynes with primary arylamines and some alkylamines. A class of titanium imido/ligand adducts, (ONO)Ti(L)(NR) (L = HNMe2, py; R = Ph, tBu), have also been synthesized and characterized and provide structural analogues to intermediates on the purported catalytic cycle. Furthermore, these complexes exhibit unusual redox behavior. (ONO)TiBn2 (1) promotes the cyclotrimerization of electron-rich alkynes, likely via a catalytically active TiII species that is generated in situ from 1. Depending on reaction conditions, these TiII species are proposed to be generated through Ti benzylidene or imido intermediates. A formally TiII complex, (ONO)Ti II(η2-C2H4)(HNMe2) (7), has been prepared and structurally characterized. © 2013 American Chemical Society.

  20. Alkyne Hydroamination and Trimerization with Titanium Bis(phenolate)pyridine Complexes: Evidence for Low-Valent Titanium Intermediates and Synthesis of an Ethylene Adduct of Titanium(II)

    KAUST Repository

    Tonks, Ian A.; Meier, Josef C.; Bercaw, John E.

    2013-01-01

    A class of titanium precatalysts of the type (ONO)TiX2 (ONO = pyridine-2,6-bis(4,6-di-tert-butylphenolate); X = Bn, NMe2) has been synthesized and crystallographically characterized. The (ONO)TiX2 (X = Bn, NMe2, X2 = NPh) complexes are highly active precatalysts for the hydroamination of internal alkynes with primary arylamines and some alkylamines. A class of titanium imido/ligand adducts, (ONO)Ti(L)(NR) (L = HNMe2, py; R = Ph, tBu), have also been synthesized and characterized and provide structural analogues to intermediates on the purported catalytic cycle. Furthermore, these complexes exhibit unusual redox behavior. (ONO)TiBn2 (1) promotes the cyclotrimerization of electron-rich alkynes, likely via a catalytically active TiII species that is generated in situ from 1. Depending on reaction conditions, these TiII species are proposed to be generated through Ti benzylidene or imido intermediates. A formally TiII complex, (ONO)Ti II(η2-C2H4)(HNMe2) (7), has been prepared and structurally characterized. © 2013 American Chemical Society.

  1. Synthesis and crystal structures of dinuclear trichloro(tetramethylcyclopentadienyl) titanium complexes

    Czech Academy of Sciences Publication Activity Database

    Lukešová, Lenka; Gyepes, R.; Varga, V.; Pinkas, Jiří; Horáček, Michal; Kubišta, Jiří; Mach, Karel

    2006-01-01

    Roč. 71, č. 2 (2006), s. 164-178 ISSN 0010-0765 R&D Projects: GA AV ČR KJB400400602 Institutional research plan: CEZ:AV0Z40400503 Keywords : titanium * dinuclear complexes * half-sandwich titanocene * silicon bridge Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 0.881, year: 2006

  2. New insights into the complex photoluminescence behaviour of titanium white pigments

    NARCIS (Netherlands)

    van Driel, B.A.; Artesani, A.; van den Berg, Klaas Jan; Dik, J.; Mosca, S.; Rossenaar, B.; Hoekstra, J.; Davies, A.; Nevin, A.; Valentini, G.; Comelli, D.

    2018-01-01

    This work reports the analysis of the time-resolved photoluminescence behaviour on the nanosecond and microsecond time scale of fourteen historical and contemporary titanium white pigments. The pigments were produced with different production methods and post-production treatments, giving rise to

  3. Controlled synthesis of titania using water-soluble titanium complexes: A review

    Science.gov (United States)

    Truong, Quang Duc; Dien, Luong Xuan; Vo, Dai-Viet N.; Le, Thanh Son

    2017-07-01

    The development of human society has led to an increase in energy and resource consumption, as well as to growing problems of environmental damage and toxicity to human health. The development of novel synthesis methods that avoid the utilization of toxic solvents and chemicals would fulfill society's demand for safer, softer, and environmentally friendly technologies. Over the past decades, remarkable progress has been attained in the development of new water-soluble titanium complexes (WSTC) and their use for the synthesis of nanocrystalline titanium dioxide materials by aqueous solution-based approaches. The progress in the synthesis of nanocrystalline titanium dioxide using such WSTCs is reviewed in this work. The key structural features responsible for successfully controlled synthesis of TiO2 are discussed to provide guidelines for morphology-controlled synthesis. Finally, this review ends with a summary and some perspectives on the challenges as well as new directions in this fascinating research.

  4. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large-scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h⁻¹ Mpc in the distribution of the visible matter of the universe is provided. The possibility of generating a periodic distribution with the characteristic scale of 120 h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low-temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  5. Max-Min SINR in Large-Scale Single-Cell MU-MIMO: Asymptotic Analysis and Low Complexity Transceivers

    KAUST Repository

    Sifaou, Houssem

    2016-12-28

    This work focuses on the downlink and uplink of large-scale single-cell MU-MIMO systems in which the base station (BS) endowed with M antennas communicates with K single-antenna user equipments (UEs). Particularly, we aim at reducing the complexity of the linear precoder and receiver that maximize the minimum signal-to-interference-plus-noise ratio subject to a given power constraint. To this end, we consider the asymptotic regime in which M and K grow large with a given ratio. Tools from random matrix theory (RMT) are then used to compute, in closed form, accurate approximations for the parameters of the optimal precoder and receiver, when imperfect channel state information (modeled by the generic Gauss-Markov formulation) is available at the BS. The asymptotic analysis allows us to derive the asymptotically optimal linear precoder and receiver that are characterized by a lower complexity (due to the dependence on the large scale components of the channel) and, possibly, by a better resilience to imperfect channel state information. However, the implementation of both is still challenging as it requires fast inversions of large matrices in every coherence period. To overcome this issue, we apply the truncated polynomial expansion (TPE) technique to the precoding and receiving vector of each UE and make use of RMT to determine the optimal weighting coefficients on a per-UE basis that asymptotically solve the max-min SINR problem. Numerical results are used to validate the asymptotic analysis in the finite system regime and to show that the proposed TPE transceivers efficiently mimic the optimal ones, while requiring much lower computational complexity.
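
    As a sketch of the truncated polynomial expansion step (generic form, not the exact per-UE weighting derived in the paper), the TPE precoder for UE k replaces the matrix inverse of regularized zero-forcing with a J-term matrix polynomial in the estimated Gram matrix:

        g_k^{\mathrm{TPE}} \;=\; \sum_{l=0}^{J-1} w_l \left(\frac{\hat{H}\hat{H}^{H}}{K}\right)^{\! l} \hat{h}_k,

    where \hat{h}_k is the estimated channel of UE k, \hat{H} = [\hat{h}_1, ..., \hat{h}_K], and the weights w_l are the coefficients to be optimized (here, via random matrix theory on a per-UE basis). Each term needs only matrix-vector products, so the large matrix inversion in every coherence period is avoided.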

  6. Workshop Report on Additive Manufacturing for Large-Scale Metal Components - Development and Deployment of Metal Big-Area-Additive-Manufacturing (Large-Scale Metals AM) System

    Energy Technology Data Exchange (ETDEWEB)

    Babu, Sudarsanam Suresh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Manufacturing Demonstration Facility; Love, Lonnie J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Manufacturing Demonstration Facility; Peter, William H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Manufacturing Demonstration Facility; Dehoff, Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Manufacturing Demonstration Facility

    2016-05-01

    Additive manufacturing (AM) is considered an emerging technology that is expected to transform the way industry can make low-volume, high-value complex structures. This disruptive technology promises to replace legacy manufacturing methods for the fabrication of existing components in addition to bringing new innovation for new components with increased functional and mechanical properties. This report outlines the outcome of a workshop on large-scale metal additive manufacturing held at Oak Ridge National Laboratory (ORNL) on March 11, 2016. The charter for the workshop was outlined by the Department of Energy (DOE) Advanced Manufacturing Office program manager. The status and impact of the Big Area Additive Manufacturing (BAAM) system for polymer matrix composites was presented as the background motivation for the workshop. Following this, the extension of the underlying technology to low-cost metals was proposed with the following goals: (i) high deposition rates (approaching 100 lbs/h); (ii) low cost (<$10/lb) for steel, iron, aluminum, and nickel, as well as higher-cost titanium; (iii) large components (major axis greater than 6 ft); and (iv) compliance with property requirements. The above concept was discussed in depth by representatives from different industrial sectors including welding, metal fabrication machinery, energy, construction, aerospace and heavy manufacturing. In addition, DOE’s newly launched High Performance Computing for Manufacturing (HPC4MFG) program was reviewed. This program will apply thermo-mechanical models to elucidate deeper understanding of the interactions between design, process, and materials during additive manufacturing. Following these presentations, all the attendees took part in a brainstorming session where everyone identified the top 10 challenges in large-scale metal AM from their own perspective. The feedback was analyzed and grouped in different categories including (i) CAD to PART software, (ii) selection of energy source, (iii

  7. Titanium condenser tubes. Problems and their solution for wider application to large surface condensers. [PWR]

    Energy Technology Data Exchange (ETDEWEB)

    Sato, S; Sugiyama, S; Nagata, K; Nanba, K; Shimono, M [Sumitomo Light Metal Industries Ltd., Tokyo (Japan)

    1977-06-01

    The corrosion resistance of titanium in sea water is extremely good, but titanium tubes are expensive, and copper alloy tubes resistant to polluted sea water had been developed, so titanium tubes were not used in practice. In 1970, ammonia attack was found on the copper alloy tubes in the air-cooled portion of condensers, and titanium tubes have been used as the countermeasure. As a result of this use, galvanic attack on copper alloy tube plates (with the titanium tubes acting as cathode) and hydrogen absorption at titanium tube ends owing to excess electrolytic protection were observed, but the corrosion resistance of the titanium tubes themselves was perfect. These problems can be controlled by the application of proper electrolytic protection. All-titanium-tube condensers recently adopted in the USA are intended to realize perfectly leak-free condensers as a countermeasure to corrosion in the steam generators of PWR plants. For today's large condensers, three problems are pointed out, namely the vibration of condenser tubes, the method of joining tubes and tube plates, and tubes with no coolant leakage. These three problems in the case of titanium tubes were studied, and the problem of tube fouling was also examined. The intervals of supporting plates for titanium tubes should be narrowed. The joining of titanium tubes and titanium tube plates by welding is feasible and promising. Cleaning with sponge balls is effective in controlling fouling.

  8. Organizational Influences on Interdisciplinary Interactions during Research and Design of Large-Scale Complex Engineered Systems

    Science.gov (United States)

    McGowan, Anna-Maria R.; Seifert, Colleen M.; Papalambros, Panos Y.

    2012-01-01

    The design of large-scale complex engineered systems (LaCES) such as an aircraft is inherently interdisciplinary. Multiple engineering disciplines, drawing from a team of hundreds to thousands of engineers and scientists, are woven together throughout the research, development, and systems engineering processes to realize one system. Though research and development (R&D) is typically focused in single disciplines, the interdependencies involved in LaCES require interdisciplinary R&D efforts. This study investigates the interdisciplinary interactions that take place during the R&D and early conceptual design phases in the design of LaCES. Our theoretical framework is informed by both engineering practices and social science research on complex organizations. This paper provides a preliminary perspective on some of the organizational influences on interdisciplinary interactions based on organization theory (specifically sensemaking), data from a survey of LaCES experts, and the authors' experience in research and design. The analysis reveals couplings between the engineered system and the organization that creates it. Survey respondents noted the importance of interdisciplinary interactions and their significant benefit to the engineered system, such as innovation and problem mitigation. Substantial obstacles to interdisciplinarity beyond engineering are uncovered, including communication and organizational challenges. Addressing these challenges may ultimately foster greater efficiencies in the design and development of LaCES and improved system performance by assisting with the collective integration of interdependent knowledge bases early in the R&D effort. This research suggests that organizational and human dynamics heavily influence and even constrain the engineering effort for large-scale complex systems.

  9. Control protocol: large scale implementation at the CERN PS complex - a first assessment

    International Nuclear Information System (INIS)

    Abie, H.; Benincasa, G.; Coudert, G.; Davydenko, Y.; Dehavay, C.; Gavaggio, R.; Gelato, G.; Heinze, W.; Legras, M.; Lustig, H.; Merard, L.; Pearson, T.; Strubin, P.; Tedesco, J.

    1994-01-01

    The Control Protocol is a model-based, uniform access procedure from a control system to accelerator equipment. It was proposed at CERN about 5 years ago and prototypes were developed in the following years. More recently, this procedure has been finalized and implemented at a large scale in the PS Complex. More than 300 pieces of equipment are now using this protocol in normal operation and another 300 are under implementation. These include power converters, vacuum systems, beam instrumentation devices, RF equipment, etc. This paper describes how the single general procedure is applied to the different kinds of equipment. The advantages obtained are also discussed. ((orig.))

  10. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
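
    The contrast the record draws between Fickian and ecological diffusion, and the kind of large-scale equation that homogenization produces, can be summarized in a short sketch. The notation below (u for population density, mu(x) for habitat-dependent motility, c for the large-scale quantity) is editorial and illustrates only the general form of such results; it is not a transcription of the paper's derivation.

        % Fickian diffusion: flux follows gradients of density
        \partial_t u = \nabla \cdot \left( \mu(x)\, \nabla u \right)

        % Ecological diffusion: motility sits inside the Laplacian
        \partial_t u = \nabla^2 \left( \mu(x)\, u \right)

        % Homogenized large-scale limit with a harmonic-mean motility;
        % the small-scale density is recovered locally as u ~ c / mu(x)
        \partial_t c = \bar{\mu}\, \nabla^2 c, \qquad
        \bar{\mu} = \left( \frac{1}{|\Omega|} \int_{\Omega} \frac{\mathrm{d}x}{\mu(x)} \right)^{-1}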

  11. Chromatographic retention of molybdenum, titanium and uranium complexes for removal of some interferences in inductively-coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Jiang, S.-J.; Palmieri, M.D.; Fritz, J.S.; Houk, R.S.; Iowa State Univ. of Science and Technology, Ames

    1987-01-01

    Complexes of molybdenum(VI) or titanium(IV) with N-methylfurohydroxamic acid (N-MFHA) are retained on a column packed with polystyrene/divinylbenzene. At the pH values chosen, copper, zinc and cadmium are washed rapidly through the column and are detected by inductively-coupled plasma mass spectrometry without interference from metal oxide ions of titanium or molybdenum. Detection limits are 1 to 2 μg l⁻¹, and analyte recoveries are essentially 100%. The resin capacity for the titanium and molybdenum complexes is sufficient for several hundred injections, and the complexes can be readily washed from the column. Uranium(VI) also forms a stable complex with N-MFHA, and ionization interference caused by excess of uranium can be avoided by chromatographic removal of the uranium complex. Various other potentially interfering elements with aqueous oxidation states of +4 or higher (e.g. Sn, W, Hf or Zr) could also be separated by this technique. 33 refs.; 4 figs.; 3 tabs

  12. Complex dewetting scenarios of ultrathin silicon films for large-scale nanoarchitectures.

    Science.gov (United States)

    Naffouti, Meher; Backofen, Rainer; Salvalaglio, Marco; Bottein, Thomas; Lodari, Mario; Voigt, Axel; David, Thomas; Benkouider, Abdelmalek; Fraj, Ibtissem; Favre, Luc; Ronda, Antoine; Berbezier, Isabelle; Grosso, David; Abbarchi, Marco; Bollani, Monica

    2017-11-01

    Dewetting is a ubiquitous phenomenon in nature; many different thin films of organic and inorganic substances (such as liquids, polymers, metals, and semiconductors) share this shape instability driven by surface tension and mass transport. Via templated solid-state dewetting, we frame complex nanoarchitectures of monocrystalline silicon on insulator with unprecedented precision and reproducibility over large scales. Phase-field simulations reveal the dominant role of surface diffusion as a driving force for dewetting and provide a predictive tool to further engineer this hybrid top-down/bottom-up self-assembly method. Our results demonstrate that patches of thin monocrystalline films of metals and semiconductors share the same dewetting dynamics. We also prove the potential of our method by fabricating nanotransfer molding of metal oxide xerogels on silicon and glass substrates. This method allows the novel possibility of transferring these Si-based patterns on different materials, which do not usually undergo dewetting, offering great potential also for microfluidic or sensing applications.

  13. Mean-square displacement of atomic complex in titanium carbonitrides TiCxNy

    International Nuclear Information System (INIS)

    Khidirov, I.; Sultanova, S.Kh.; Mukhtarova, N.N.; Tokhtashev, B.

    2004-01-01

    Full text: The atomic mean-square displacement (MSD) is one of the important characteristics of solids, and it can be used for the determination of a number of other characteristics of substances. In this work the MSD of the atomic complex was determined for a number of compositions of the cubic titanium carbonitrides TiCxNy using neutron powder diffraction data. The error of the MSD determination was less than 3 %. When determining the intensity of the diffraction maxima, a correction for thermal diffusion dispersion (TDD) was included in the neutron diffraction patterns. The contribution of TDD to the intensity of the diffraction maxima was found to be less than the experimental error (no more than 1.5 %). Such a small value of the TDD correction is explained by the refractory nature of the materials. The values of the MSD in titanium carbonitrides, determined by neutron powder diffraction measurements, are given for a number of compositions. It is shown that the dependence of the MSD on the concentration (C+N)/Ti has a complex character. With decreasing total content of metalloids the MSD decreases at first, reaching a minimum at a concentration (C+N)/Ti≅0.80, and then increases. The MSD consists of dynamic and static distortions, where the static distortions in compounds with variable composition increase with increasing deviation from stoichiometry. The above anomaly in the dependence of the MSD on the total concentration of metalloids apparently points to the prevalence of dynamic distortions over static ones and to the complex character of the concentration dependence of interatomic interactions in the titanium carbonitrides. This work was supported by the Supporting Fund for Fundamental Researches of the Uzbekistan Academy of Sciences (Grant No. 6-04)

  14. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    The subject of large scale networks is approached from the perspective of the network planner. An analysis of the long term planning problems is presented, with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems are then analysed in several steps. The fundamental technological resources in network technologies are analysed for scalability, and several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society, with the DDN project as its...

  15. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4 % of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so the installation of large scale hydrogen production plants will be needed. In this context, the development of low cost, large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, and a state of the art review of the electrolysis modules currently available was made. A review of the large scale electrolysis plants that have been installed around the world was also carried out, and the main projects related to large scale electrolysis were listed. The economics of large scale electrolysers is also discussed, and the influence of energy prices on the cost of hydrogen production by large scale electrolysis was evaluated. (authors)

  16. Repair of large frontal temporal parietal skull defect with digitally reconstructed titanium mesh: a report of 20 cases

    Directory of Open Access Journals (Sweden)

    Gang-ge CHENG

    2013-09-01

    Objective: To explore the clinical effect and surgical technique of the repair of large defects involving the frontal, temporal, and parietal regions using digitally reconstructed titanium mesh. Methods: Twenty patients with large frontal, temporal, and parietal skull defects hospitalized in the Air Force General Hospital from November 2006 to May 2012 were involved in this study. Among these 20 patients there were 13 males and 7 females, aged 18-58 years (mean 39 years), and the defect size ranged from 7.0cm×9.0cm to 11.5cm×14.0cm (mean 8.5cm×12.0cm). Spiral CT head scans and digital three-dimensional reconstruction of the skull were performed in all patients. The shape and geometric size of the skull defect were traced based on the symmetry principle, the data were transferred to a digital precision lathe to reconstruct a titanium mesh slightly larger (by 1.0-1.5cm) than the skull defect, and finally the prosthesis was perfected by trimming its border. Cranioplasty was performed 6-12 months after craniotomy using the digitally reconstructed titanium mesh. Results: The digitally reconstructed titanium mesh was used in 20 patients with large frontal, temporal, parietal skull defects. The surgical technique was relatively simple, and the operating time was shorter than before. The titanium mesh fitted the skull defect accurately, with a satisfactory molding effect, good appearance and symmetrical shape. No related complications were found in any of the patients. Conclusion: Repair of large frontal, temporal, parietal skull defects with digitally reconstructed titanium mesh is more advantageous than traditional manual reconstruction, and it can improve the quality of life of patients.

  17. Benefits of transactive memory systems in large-scale development

    OpenAIRE

    Aivars, Sablis

    2016-01-01

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise...

  18. Retention of habitat complexity minimizes disassembly of reef fish communities following disturbance: a large-scale natural experiment.

    Directory of Open Access Journals (Sweden)

    Michael J Emslie

    High biodiversity ecosystems are commonly associated with complex habitats. Coral reefs are highly diverse ecosystems, but are under increasing pressure from numerous stressors, many of which reduce live coral cover and habitat complexity with concomitant effects on other organisms such as reef fishes. While previous studies have highlighted the importance of habitat complexity in structuring reef fish communities, they employed gradient or meta-analyses which lacked a controlled experimental design over broad spatial scales to explicitly separate the influence of live coral cover from overall habitat complexity. Here a natural experiment using a long term (20 year), spatially extensive (~115,000 km²) dataset from the Great Barrier Reef revealed the fundamental importance of overall habitat complexity for reef fishes. Reductions of both live coral cover and habitat complexity had substantial impacts on fish communities, compared to relatively minor impacts after major reductions in coral cover but not habitat complexity. Where habitat complexity was substantially reduced, species abundances broadly declined and a far greater number of fish species were locally extirpated, including economically important fishes. This resulted in decreased species richness and a loss of diversity within functional groups. Our results suggest that the retention of habitat complexity following disturbances can ameliorate the impacts of coral declines on reef fishes, so preserving their capacity to perform important functional roles essential to reef resilience. These results add to a growing body of evidence about the importance of habitat complexity for reef fishes, and represent the first large-scale examination of this question on the Great Barrier Reef.

  19. Titanium

    Science.gov (United States)

    Woodruff, Laurel G.; Bedinger, George M.; Piatak, Nadine M.; Schulz, Klaus J.; DeYoung,, John H.; Seal, Robert R.; Bradley, Dwight C.

    2017-12-19

    Titanium is a mineral commodity that is essential to the smooth functioning of modern industrial economies. Most of the titanium produced is refined into titanium dioxide, which has a high refractive index and is thus able to impart a durable white color to paint, paper, plastic, rubber, and wallboard. Because of their high strength-to-weight ratio and corrosion resistance, titanium metal and titanium metal alloys are used in the aerospace industry as well as for welding rod coatings, biological implants, and consumer goods. Ilmenite and rutile are currently the principal titanium-bearing ore minerals, although other minerals, including anatase, perovskite, and titanomagnetite, could have economic importance in the future. Ilmenite is currently being mined from two large magmatic deposits hosted in rocks of Proterozoic-age anorthosite plutonic suites. Most rutile and nearly one-half of the ilmenite produced are from heavy-mineral alluvial, fluvial, and eolian deposits. Titanium-bearing minerals occur in diverse geologic settings, but many of the known deposits are currently subeconomic for titanium because of complications related to the mineralogy or because of the presence of trace contaminants that can compromise the pigment production process. Global production of titanium minerals is currently dominated by Australia, Canada, Norway, and South Africa; additional amounts are produced in Brazil, India, Madagascar, Mozambique, Sierra Leone, and Sri Lanka. The United States accounts for about 4 percent of the total world production of titanium minerals and is heavily dependent on imports of titanium mineral concentrates to meet its domestic needs. Titanium occurs only in silicate or oxide minerals and never in sulfide minerals. Environmental considerations for titanium mining are related to waste rock disposal and the impact of trace constituents on water quality. Because titanium is generally inert in the environment, human health risks from titanium and titanium

  20. Formation and Thermal Stability of Large Precipitates and Oxides in Titanium and Niobium Microalloyed Steel

    Institute of Scientific and Technical Information of China (English)

    ZHUO Xiao-jun; WOO Dae-hee; WANG Xin-hua; LEE Hae-geon

    2008-01-01

    As-cast continuous casting (CC) slabs of microalloyed steels are prone to surface and sub-surface cracking. Precipitation phenomena initiated during solidification reduce ductility at high temperature. A unidirectional solidification unit was employed to simulate the solidification process during continuous casting, and the precipitation behavior and thermal stability were systematically investigated. Samples of steels with titanium and niobium additions were examined using field emission scanning electron microscopy (FE-SEM), electron probe X-ray microanalysis (EPMA), and transmission electron microscopy (TEM). It was found that the addition of titanium and niobium to high-strength low-alloyed (HSLA) steel resulted in undesirable large precipitation in the steels, i.e., precipitation of large precipitates with various morphologies. The composition of the large precipitates was determined, and the effect of cooling rate on (Ti,Nb)(C,N) precipitate formation was investigated. With increasing cooling rate, titanium-rich (Ti,Nb)(C,N) precipitates are transformed to niobium-rich (Ti,Nb)(C,N) precipitates. The thermal stability of these large precipitates and oxides was assessed by carrying out various heat treatments, such as holding and quenching from temperatures of 800 and 1 200 ℃. It was found that titanium-rich (Ti,Nb)(C,N) precipitates are stable at about 1 200 ℃ and niobium-rich (Ti,Nb)(C,N) precipitates are stable at about 800 ℃. After reheating at 1 200 ℃ for 1 h, (Ca,Mn)S and TiN were precipitated from the Ca-Al oxide, whereas during reheating at 800 ℃ for 1 h the Ca-Al-Ti oxide in the specimens was stable. Thermodynamic calculations simulating the thermal process were employed, and the calculation results are in good agreement with the experimental results.

  1. The coordination and atom transfer chemistry of titanium porphyrin complexes

    Energy Technology Data Exchange (ETDEWEB)

    Hays, James Allen [Iowa State Univ., Ames, IA (United States)

    1993-11-05

    Preparation, characterization, and reactivity of (η2-alkyne)(meso-tetratolylporphyrinato)titanium(II) complexes are described, along with intermetal oxygen atom transfer reactions involving Ti(IV) and Ti(III) porphyrin complexes. The η2-alkyne complexes are prepared by reaction of (TTP)TiCl2 with LiAlH4 in the presence of an alkyne. The structure of (OEP)Ti(η2-Ph-C≡C-Ph) (OEP = octaethylporphyrin) was determined by XRD. The compounds undergo simple substitution to displace the alkyne and produce doubly substituted complexes; the structure of (TTP)Ti(4-picoline)2 was also determined by XRD. Reaction of (TTP)Ti=O with (OEP)Ti-Cl yields intermetal O/Cl exchange, which is a one-electron redox process mediated by O atom transfer. A zero-electron redox process mediated by atom transfer is also observed when (TTP)TiCl2 is reacted with (OEP)Ti=O.

  2. Chitosan patterning on titanium alloys

    OpenAIRE

    Gilabert Chirivella, Eduardo; Pérez Feito, Ricardo; Ribeiro, Clarisse; Ribeiro, Sylvie; Correia, Daniela; González Martin, María Luisa; Manero Planella, José María; Lanceros Méndez, Senentxu; Gallego Ferrer, Gloria; Gómez Ribelles, José Luis

    2017-01-01

    Titanium and its alloys are widely used in medical implants because of their excellent properties. However, bacterial infection is a frequent cause of titanium-based implant failure and also compromises its osseointegration. In this study, we report a new simple method of providing titanium surfaces with antibacterial properties by alternating antibacterial chitosan domains with titanium domains in the micrometric scale. Surface microgrooves were etched on pure titanium disks at i...

  3. Managing Risk and Uncertainty in Large-Scale University Research Projects

    Science.gov (United States)

    Moore, Sharlissa; Shangraw, R. F., Jr.

    2011-01-01

    Both publicly and privately funded research projects managed by universities are growing in size and scope. Complex, large-scale projects (over $50 million) pose new management challenges and risks for universities. This paper explores the relationship between project success and a variety of factors in large-scale university projects. First, we…

  4. In search of low cost titanium: the fray farthing chen (FFC) cambridge process

    CSIR Research Space (South Africa)

    Oosthuizen, SJ

    2011-03-01

    ... delivering a sponge product, aimed at replacing Kroll sponge alone, does not have potential for a large reduction in overall titanium cost. Significant cost savings can be achieved only by also reducing the large number of process steps required to process the sponge to mill product, including sponge purification, comminution, electrode forming, vacuum arc re-melting, and hot and cold rolling. Presently, economy of scale in the production of titanium dictates that it is most cost...

  5. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which, unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  6. Cold Spraying of Armstrong Process Titanium Powder for Additive Manufacturing

    Science.gov (United States)

    MacDonald, D.; Fernández, R.; Delloro, F.; Jodoin, B.

    2017-04-01

    Titanium parts are ideally suited for aerospace applications due to their unique combination of high specific strength and excellent corrosion resistance. However, titanium as bulk material is expensive and challenging/costly to machine. Production of complex titanium parts through additive manufacturing looks promising, but there are still many barriers to overcome before reaching mainstream commercialization. The cold gas dynamic spraying process offers the potential for additive manufacturing of large titanium parts due to its reduced reactive environment, its simplicity to operate, and the high deposition rates it offers. A few challenges are to be addressed before the additive manufacturing potential of titanium by cold gas dynamic spraying can be reached. In particular, it is known that titanium is easy to deposit by cold gas dynamic spraying, but the deposits produced are usually porous when nitrogen is used as the carrier gas. In this work, a method to manufacture low-porosity titanium components at high deposition efficiencies is revealed. The components are produced by combining low-pressure cold spray using nitrogen as the carrier gas with low-cost titanium powder produced using the Armstrong process. The microstructure and mechanical properties of additive manufactured titanium components are investigated.

  7. Magnetic storm generation by large-scale complex structure Sheath/ICME

    Science.gov (United States)

    Grigorenko, E. E.; Yermolaev, Y. I.; Lodkina, I. G.; Yermolaev, M. Y.; Riazantseva, M.; Borodkova, N. L.

    2017-12-01

    We study the temporal profiles of interplanetary plasma and magnetic field parameters as well as magnetospheric indices. We use our catalog of large-scale solar wind phenomena for the 1976-2000 interval (see the catalog for 1976-2016 on the web site ftp://ftp.iki.rssi.ru/pub/omni/ prepared on the basis of the OMNI database (Yermolaev et al., 2009)) and the double superposed epoch analysis method (Yermolaev et al., 2010). Our analysis showed (Yermolaev et al., 2015) that the average profiles of the Dst and Dst* indices decrease in the Sheath interval (magnetic storm activity increases) and increase in the ICME interval. This profile coincides with the inverted distribution of storm numbers in both intervals (Yermolaev et al., 2017). This behavior is explained by the following reasons: (1) the IMF magnitude in the Sheath is higher than in the Ejecta and close to the value in the MC; (2) the Sheath has a 1.5 times higher efficiency of storm generation than the ICME (Nikolaeva et al., 2015). Most so-called CME-induced storms are really Sheath-induced storms, and this fact should be taken into account during Space Weather prediction. The work was in part supported by the Russian Science Foundation, grant 16-12-10062. References: 1. Nikolaeva N.S., Y. I. Yermolaev and I. G. Lodkina (2015), Modeling of the corrected Dst* index temporal profile on the main phase of the magnetic storms generated by different types of solar wind, Cosmic Res., 53(2), 119-127. 2. Yermolaev Yu. I., N. S. Nikolaeva, I. G. Lodkina and M. Yu. Yermolaev (2009), Catalog of Large-Scale Solar Wind Phenomena during 1976-2000, Cosmic Res., 47(2), 81-94. 3. Yermolaev, Y. I., N. S. Nikolaeva, I. G. Lodkina, and M. Y. Yermolaev (2010), Specific interplanetary conditions for CIR-induced, Sheath-induced, and ICME-induced geomagnetic storms obtained by double superposed epoch analysis, Ann. Geophys., 28, 2177-2186. 4. Yermolaev Yu. I., I. G. Lodkina, N. S. Nikolaeva and M. Yu. Yermolaev (2015), Dynamics of large-scale solar wind streams obtained by the double superposed epoch
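
    The double superposed epoch analysis used above rescales each event's Sheath and ICME intervals onto fixed-length epoch grids before averaging across events. A minimal sketch of that bookkeeping, assuming per-event time and parameter arrays and using illustrative names (this is not the authors' code), is:

        import numpy as np

        def rescale(times, values, n_points):
            """Linearly map one interval onto n_points equally spaced epochs."""
            t = np.asarray(times, dtype=float)
            t_norm = (t - t[0]) / (t[-1] - t[0])          # normalize the interval to 0..1
            grid = np.linspace(0.0, 1.0, n_points)
            return np.interp(grid, t_norm, np.asarray(values, dtype=float))

        def double_superposed_epoch(events, n_sheath=30, n_icme=60):
            """events: iterable of dicts with 't_sheath', 'v_sheath', 't_icme', 'v_icme'."""
            profiles = []
            for ev in events:
                sheath = rescale(ev["t_sheath"], ev["v_sheath"], n_sheath)
                icme = rescale(ev["t_icme"], ev["v_icme"], n_icme)
                profiles.append(np.concatenate([sheath, icme]))
            return np.mean(profiles, axis=0)              # average profile over all events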

  8. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  9. Fluid-structure interaction simulation of floating structures interacting with complex, large-scale ocean waves and atmospheric turbulence with application to floating offshore wind turbines

    Science.gov (United States)

    Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis

    2018-02-01

    We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled with the near-field FSI solver with a one-way coupling approach by feeding into the latter waves via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.

  10. Cell Attachment Following Instrumentation with Titanium and Plastic Instruments, Diode Laser, and Titanium Brush on Titanium, Titanium-Zirconium, and Zirconia Surfaces.

    Science.gov (United States)

    Lang, Melissa S; Cerutis, D Roselyn; Miyamoto, Takanari; Nunn, Martha E

    2016-01-01

    The aim of this study was to evaluate the surface characteristics and gingival fibroblast adhesion of disks composed of implant and abutment materials following brief and repeated instrumentation with instruments commonly used in implant maintenance procedures, stage-two implant surgery, and peri-implantitis treatment. One hundred twenty disks (40 titanium, 40 titanium-zirconium, 40 zirconia) were grouped into treatment categories of instrumentation by plastic curette, titanium curette, diode microlaser, rotary titanium brush, and no treatment. Twenty strokes were applied to half of the disks in the plastic and titanium curette treatment categories, while the other half of the disks received 100 strokes each to simulate implant maintenance occurring on a repetitive basis. Following analysis of the disks by optical laser profilometry, the disks were cultured with human gingival fibroblasts. Cell counts were conducted from scanning electron microscopy (SEM) images. Differences in surface roughness across all instruments tested were negligible for zirconia disks, while both titanium disks and titanium-zirconium disks showed large differences in surface roughness across the spectrum of instruments tested. The rotary titanium brush and the titanium curette yielded the greatest overall mean surface roughness, while the plastic curette yielded the lowest mean surface roughness. The greatest mean cell counts for each disk type were as follows: titanium disks with plastic curettes, titanium-zirconium disks with titanium curettes, and zirconia disks with the diode microlaser. Repeated instrumentation did not result in cumulative changes in the surface roughness of implant materials made of titanium, titanium-zirconium, or zirconia. Instrumentation with plastic implant curettes on titanium and zirconia surfaces appeared to be more favorable than titanium implant curettes in terms of gingival fibroblast attachment on these surfaces.

  11. In search of low cost titanium: the Fray Farthing Chen (FFC) Cambridge process

    CSIR Research Space (South Africa)

    Oosthuizen, SJ

    2010-10-01

    ... delivering a sponge product, aimed at replacing Kroll sponge alone, does not have potential for a large reduction in overall titanium cost. Significant cost savings can only be achieved by also reducing the large number of process steps required to process the sponge to mill product, including sponge purification, comminution, electrode forming, vacuum arc re-melting, and hot and cold rolling. Presently, economy of scale in the production of titanium dictates that it is most cost effective to cast the largest...

  12. Complex modular structure of large-scale brain networks

    Science.gov (United States)

    Valencia, M.; Pastor, M. A.; Fernández-Seara, M. A.; Artieda, J.; Martinerie, J.; Chavez, M.

    2009-06-01

    Modular structure is ubiquitous among real-world networks from related proteins to social groups. Here we analyze the modular organization of brain networks at a large scale (voxel level) extracted from functional magnetic resonance imaging signals. By using a random-walk-based method, we unveil the modularity of brain webs and show modules with a spatial distribution that matches anatomical structures with functional significance. The functional role of each node in the network is studied by analyzing its patterns of inter- and intramodular connections. Results suggest that the modular architecture constitutes the structural basis for the coexistence of functional integration of distant and specialized brain areas during normal brain activities at rest.

  13. The Phoenix series large scale LNG pool fire experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns have increased about the potential risk to the public and property from accidental, and even more importantly intentional, spills. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.

  14. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of the real-time water situation, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
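
    Godunov-type finite volume flood models of this kind typically solve the two-dimensional shallow water equations in conservative form. The generic statement below is a standard textbook form of that system, not equations quoted from the paper; h is water depth, (u, v) are depth-averaged velocities, z_b is bed elevation, g is gravity, and tau is a friction term.

        \partial_t U + \partial_x F(U) + \partial_y G(U) = S(U), \qquad
        U = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix}, \quad
        F = \begin{pmatrix} hu \\ hu^2 + \tfrac{1}{2} g h^2 \\ huv \end{pmatrix}, \quad
        G = \begin{pmatrix} hv \\ huv \\ hv^2 + \tfrac{1}{2} g h^2 \end{pmatrix}, \quad
        S = \begin{pmatrix} 0 \\ -gh\,\partial_x z_b - \tau_x \\ -gh\,\partial_y z_b - \tau_y \end{pmatrix}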

  15. Radius scaling of titanium wire arrays on the Z accelerator

    International Nuclear Information System (INIS)

    Coverdale, C.A.; Denney, C.; Spielman, R.B.

    1999-01-01

    The 20 MA Z accelerator has made possible the generation of substantial radiation (> 100 kJ) at higher photon energies (4.8 keV) through the use of titanium wire arrays. In this paper, the results of experiments designed to study the effects of initial load radius variations of nickel-clad titanium wire arrays will be presented. The load radius was varied from 17.5 mm to 25 mm and titanium K-shell (4.8 keV) yields of greater than 100 kJ were measured. The inclusion of the nickel cladding on the titanium wires allows for higher wire number loads and increases the spectral broadness of the source; kilovolt emissions (nickel plus titanium L-shell) of 400 kJ were measured in these experiments. Comparisons of the data to calculations will be made to estimate pinched plasma parameters such as temperature and participating mass fraction. These results will also be compared with previous pure titanium wire array results

  16. On the use of titanium hydride for powder injection moulding of titanium-based alloys

    International Nuclear Information System (INIS)

    Carreño-Morelli, E.; Bidaux, J.-E.

    2009-01-01

    Full text: Titanium and titanium-based alloys are excellent materials for a number of engineering applications because of their high strength, light weight, good corrosion resistance, non-magnetic character and biocompatibility. The current processing routes are usually costly, and there is a growing demand for net-shape solutions for manufacturing parts of increasing complexity. Powder injection moulding is becoming a competitive alternative, thanks to advances in the production of good quality base powders, binders and sintering facilities. Titanium hydride powders have the attraction of being less reactive than fine titanium powders, easier to handle, and cheaper. This paper summarizes recent advances in PIM of titanium and titanium alloys from TiH2 powders, including shape-memory NiTi alloys. (author)

  17. Direct large-scale synthesis of perovskite barium strontium titanate nano-particles from solutions

    International Nuclear Information System (INIS)

    Qi Jianquan; Wang Yu; Wan Pingchen; Long Tuli; Chan, Helen Lai Wah

    2005-01-01

    This paper reports a wet chemical synthesis technique for large-scale fabrication of perovskite barium strontium titanate nano-particles near room temperature and under ambient pressure. The process employs titanium alkoxide and alkali earth hydroxides as starting materials and involves very simple operation steps. Particle size and crystallinity of the particles are controllable by changing the processing parameters. Observations by X-ray diffraction, scanning electron microscopy and transmission electron microscopy (TEM) indicate that the particles are well-crystallized, chemically stoichiometric and ~50 nm in diameter. The nanoparticles can be sintered into ceramics at 1150 deg. C and show typical ferroelectric hysteresis loops

  18. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  19. Spatiotemporal property and predictability of large-scale human mobility

    Science.gov (United States)

    Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin

    2018-04-01

    Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied due to their application potential in human behavior prediction and recommendation, and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and a high possibility for prediction. Furthermore, a scale-free featured mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
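
    A minimal sketch of the exploration-and-preferential-return mechanism described in the abstract, with the exploration-tendency parameter drawn from a Gaussian across individuals, is given below. The exponent, parameter values and function names are editorial assumptions for illustration, not the paper's calibration.

        import numpy as np

        def simulate_individual(n_steps, rho, gamma=0.2, rng=None):
            """One walker: explore a new location with probability rho * S**(-gamma),
            otherwise return to a known location in proportion to past visits."""
            rng = np.random.default_rng() if rng is None else rng
            visits = {0: 1}                               # location id -> visit count
            next_id, trajectory = 1, [0]
            for _ in range(n_steps - 1):
                s = len(visits)                           # distinct locations visited so far
                p_new = min(1.0, rho * s ** (-gamma))     # exploration probability
                if rng.random() < p_new:
                    loc, next_id = next_id, next_id + 1   # explore a brand-new location
                else:                                     # preferential return
                    locs = list(visits)
                    freqs = np.array([visits[l] for l in locs], dtype=float)
                    loc = int(rng.choice(locs, p=freqs / freqs.sum()))
                visits[loc] = visits.get(loc, 0) + 1
                trajectory.append(loc)
            return trajectory

        # Population with a Gaussian-distributed exploration tendency (clipped positive).
        rng = np.random.default_rng(1)
        rhos = np.clip(rng.normal(0.6, 0.1, size=1000), 0.05, None)
        trajectories = [simulate_individual(500, rho, rng=rng) for rho in rhos]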

  20. Titanium. Properties, raw datum surface, physicochemical basis and fabrication technique

    International Nuclear Information System (INIS)

    Garmata, V.A.; Petrun'ko, A.N.; Galitskij, N.V.; Olesov, Yu.G.; Sandler, R.A.

    1983-01-01

    Based on current achievements in science and technology, the complex of titanium metallurgy problems, comprising the raw material base, the physicochemical basis and fabrication techniques, and the properties and fields of application of titanium, is considered for the first time. Particular attention is given to the raw material base, the manufacture of titanium concentrates and titanium tetrachloride, metallothermal reduction, and improvement of metal quality. Data on titanium properties are given, and the processes of titanium powder metallurgy, scrap and waste processing, and problems of economics and complex use of raw materials are considered

  1. Uranium fluorides analysis. Titanium spectrophotometric determination

    International Nuclear Information System (INIS)

    Anon.

    Titanium is determined in uranium hexafluoride in the range 0.7 to 100 micrograms after transformation of the uranium fluoride into sulfate. Titanium is separated by extraction with N-benzoylphenylhydroxylamine and re-extracted with hydrochloric-hydrofluoric acid. The titanium-N-benzoylphenylhydroxylamine complex is extracted into chloroform, and the spectrophotometric determination is carried out at 400 nm [fr]

  2. Titanium Powder Sintering in a Graphite Furnace and Mechanical Properties of Sintered Parts

    Directory of Open Access Journals (Sweden)

    Changzhou Yu

    2017-02-01

    Recent accreditation of titanium powder products for commercial aircraft applications marks a milestone in titanium powder metallurgy. Currently, powder metallurgical titanium production relies primarily on vacuum sintering. This work reports on the feasibility of powder sintering in a non-vacuum furnace and on the tensile properties of the as-sintered Ti. Specifically, we investigated atmospheric sintering of commercially pure (C.P.) titanium in a graphite furnace backfilled with argon and studied the effects of common contaminants (C, O, N) on the sintering densification of titanium. It is found that a severely contaminated porous scale, identified as titanium oxycarbonitride, formed on the surface of the as-sintered titanium. Despite the porous surface, the sintered density in the sample interiors increased with increasing sintering temperature and holding time. Tensile specimens cut from different positions within a large sintered cylinder reveal different tensile properties, strongly dependent on the impurity level, mainly carbon and oxygen. Depending on where the specimen is taken from the sintered compact, the ultimate tensile strength varied from 300 to 580 MPa. An average tensile elongation of 5% to 7% was observed. Largely depending on the interstitial contents, the fracture modes ranged from typical brittle intergranular fracture to typical ductile fracture.

  3. Room temperature synthesis of protonated layered titanate sheets using peroxo titanium carbonate complex solution.

    Science.gov (United States)

    Sutradhar, Narottam; Sinhamahapatra, Apurba; Pahari, Sandip Kumar; Bajaj, Hari C; Panda, Asit Baran

    2011-07-21

    We report the synthesis of peroxo titanium carbonate complex solution as a novel water-soluble precursor for the direct synthesis of layered protonated titanate at room temperature. The synthesized titanates showed excellent removal capacity for Pb(2+) and methylene blue. Based on experimental observations, a probable mechanism for the formation of protonated layered dititanate sheets is also discussed.

  4. Evolution of scaling emergence in large-scale spatial epidemic spreading.

    Science.gov (United States)

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Zipf's law and Heaps' law are two representatives of the scaling concepts which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has still hardly been clarified. In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times, before a stable state is reached in which Heaps' law still holds while strict Zipf's law disappears. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results of pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing the United States domestic air transportation and demographic data to construct a metapopulation model for simulating pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analyses of large-scale spatial epidemic spreading help to understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of the epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies at the early time of a pandemic disease.
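
    For reference, the two scalings discussed in this record are conventionally written as follows, where f(r) is the frequency of the r-th most common item and N(t) is the number of distinct items after t observations; the approximate relation between the exponents holds only under common simplifying assumptions and is stated here as context, not as a result of the paper.

        % Zipf's law (rank-frequency)
        f(r) \propto r^{-\alpha}

        % Heaps' law (growth of the number of distinct items)
        N(t) \propto t^{\lambda}

        % commonly quoted link between the exponents (for \alpha > 1)
        \lambda \approx 1/\alpha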

  5. Comparative Analysis of Different Protocols to Manage Large Scale Networks

    OpenAIRE

    Anil Rao Pimplapure; Dr Jayant Dubey; Prashant Sen

    2013-01-01

    In recent years the number, complexity and size of large scale networks have increased. The best example of a large scale network is the Internet; more recent ones are data centers in cloud environments. Managing such networks involves several tasks, such as traffic monitoring, security and performance optimization, which is a big job for the network administrator. This research report studies different protocols, i.e. conventional protocols like the Simple Network Management Protocol and newer Gossip-bas...

  6. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Quinby, Ted [National Renewable Energy Lab. (NREL), Golden, CO (United States); Caulfield, Emmet [Stanford Univ., CA (United States); Gerritsen, Margot [Stanford Univ., CA (United States); Diffendorfer, Jay [U.S. Geological Survey, Boulder, CO (United States); Haines, Seth [U.S. Geological Survey, Boulder, CO (United States)

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  7. Near-Net Shape Fabrication Using Low-Cost Titanium Alloy Powders

    Energy Technology Data Exchange (ETDEWEB)

    Dr. David M. Bowden; Dr. William H. Peter

    2012-03-31

    The use of titanium in commercial aircraft production has risen steadily over the last half century. The aerospace industry currently accounts for 58% of the domestic titanium market. The Kroll process, which has been used for over 50 years to produce titanium metal from its mineral form, consumes large quantities of energy, and the methods used to convert the titanium sponge output of the Kroll process into useful mill products also require significant energy resources. These traditional approaches result in product forms that are very expensive, have long lead times of up to a year or more, and require costly operations to fabricate finished parts. Given the increasing role of titanium in commercial aircraft, new titanium technologies are needed to create a more sustainable manufacturing strategy that consumes less energy, requires less material, and significantly reduces material and fabrication costs. A number of emerging processes are under development which could lead to a breakthrough in extraction technology. Several of these processes produce titanium alloy powder as a product. The availability of low-cost titanium powders may in turn enable a more efficient approach to the manufacture of titanium components using powder metallurgical processing. The objective of this project was to define energy-efficient strategies for manufacturing large-scale titanium structures using these low-cost powders as the starting material. Strategies include approaches to powder consolidation to achieve fully dense mill products, and joining technologies such as friction and laser welding to combine those mill products into near net shape (NNS) preforms for machining. The near net shape approach reduces material and machining requirements, providing for improved affordability of titanium structures. Energy and cost modeling was used to define those approaches that offer the largest energy savings together with the economic benefits needed to drive implementation. Technical

  8. A large-scale RF-based Indoor Localization System Using Low-complexity Gaussian filter and improved Bayesian inference

    Directory of Open Access Journals (Sweden)

    L. Xiao

    2013-04-01

    The growing convergence of mobile computing devices and smart sensors boosts the development of ubiquitous computing and smart spaces, in which localization is an essential part of realizing the overall vision. General localization methods based on GPS and cellular techniques are not suitable for tracking numerous small-size, limited-power objects indoors. In this paper, we propose and demonstrate a new localization method: an easy-to-set-up and cost-effective indoor localization system based on off-the-shelf active RFID technology. Our system is not only compatible with future smart spaces and ubiquitous computing systems, but also suitable for large-scale indoor localization. The use of a low-complexity Gaussian Filter (GF), a Wheel Graph Model (WGM) and a Probabilistic Localization Algorithm (PLA) makes the proposed algorithm robust to uncertainty, self-adjusting to varying indoor environments, and suitable for large-scale indoor positioning. Using MATLAB simulation, we study the system performance, especially its dependence on a number of system and environment parameters, and their statistical properties. The simulation results show that our proposed system is an accurate and cost-effective candidate for indoor localization.
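
    A hedged illustration of the two ingredients named in the abstract, a low-complexity Gaussian filter on raw RSSI samples followed by a simple Bayesian update over a grid of candidate positions, is sketched below. The log-distance path-loss model, parameter values and names are assumptions for illustration only, not the system described in the record.

        import numpy as np

        def gaussian_filter(rssi_samples, k=1.0):
            """Keep samples within k standard deviations of the mean; return their mean."""
            x = np.asarray(rssi_samples, dtype=float)
            mu, sigma = x.mean(), x.std()
            kept = x[np.abs(x - mu) <= k * sigma] if sigma > 0 else x
            return (kept if kept.size else x).mean()

        def expected_rssi(d, p0=-40.0, n=2.5):
            """Log-distance path-loss model: expected RSSI at distance d (metres)."""
            return p0 - 10.0 * n * np.log10(np.maximum(d, 0.1))

        def bayesian_localize(readers, filtered_rssi, grid, sigma=4.0):
            """Posterior over grid cells given one filtered RSSI value per reader;
            readers is a list of (x, y) positions, grid an (N, 2) array of candidates."""
            post = np.ones(len(grid))                     # uniform prior over candidates
            for (rx, ry), rssi in zip(readers, filtered_rssi):
                d = np.hypot(grid[:, 0] - rx, grid[:, 1] - ry)
                post *= np.exp(-0.5 * ((rssi - expected_rssi(d)) / sigma) ** 2)
            post /= post.sum()
            return grid[np.argmax(post)], post            # MAP estimate and full posterior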

  9. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ("the Task") and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  10. Synthesis and studies of novel high metal content organic aerogels obtained from a polymerizable titanium complex

    International Nuclear Information System (INIS)

    Cadra, S.

    2010-01-01

    Inertial Confinement Fusion (ICF) is a technique widely studied by the French atomic energy commission (CEA). Experiments will be performed within the Laser Megajoule (LMJ) facility. They require innovative materials, such as organic aerogels, that constitute laser targets. Such a polymeric material must provide both a high porosity and a significant titanium percentage (1 atom %). Moreover, the monomers developed must be compatible with the synthesis procedure already in use. According to these specifications, a new polymerizable titanium complex was synthesized and fully characterized. This air- and moisture-stable monomer provides a high metal percentage. Its free-radical cross-linked copolymerization affords several titanium-containing polymers. These gels were dried under supercritical conditions and organic aerogels were obtained. The chemical compositions of these materials were investigated by NMR, IR and elemental analysis, while their structure was characterized by MEB-EDS, MET, N2 adsorption/desorption isotherm measurements and SAXS. The data collected fit the specification requirements. Moreover, the mechanisms responsible for the foam nanostructure formation are discussed. (author) [fr

  11. Vanadium and titanium determination by resorcinalhydrazide of salicylic acid

    Energy Technology Data Exchange (ETDEWEB)

    Karpova, O I; Pilipenko, A T; Lukachina, V V [AN Ukrainskoj SSR, Kiev. Inst. Kolloidnoj Khimii i Khimii Vody

    1979-02-01

    The complexation of titanium and vanadium with the resorcinalhydrazide of salicylic acid (RHSA) in water-organic media is studied. Titanium(IV) forms a complex at pH 0.8-1.8; vanadium forms complexes at pH 2.5-5.6 and at pH 7.6-9.8. The complexes are well extracted by polar and nonpolar solvents from acid solutions. Techniques are developed for the determination of titanium and vanadium with the RHSA reagent in nickel alloys.

  12. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Xiangyun Xiao

    Full Text Available The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.
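
    To make the splitting idea concrete, here is a toy Python sketch (our construction, not the authors' code): each modular subnetwork is fitted in a separate worker process, with a crude linear least-squares step standing in for the ODE-based optimization, and the asynchronous exchange of shared parameters between modules is omitted for brevity.

        from multiprocessing import Pool
        import numpy as np

        def fit_subnetwork(args):
            """Estimate interaction weights W for one modular subnetwork by
            least-squares fitting of dx/dt ~ x W on that module's genes."""
            expr, dexpr = args                      # time series and its derivative
            W, *_ = np.linalg.lstsq(expr, dexpr, rcond=None)
            return W

        def infer_grn(expression, modules, processes=4):
            """Split the genes into modules and fit each module in parallel."""
            dexpr = np.gradient(expression, axis=0)  # crude derivative estimate
            tasks = [(expression[:, g], dexpr[:, g]) for g in modules]
            with Pool(processes) as pool:
                return pool.map(fit_subnetwork, tasks)

        if __name__ == "__main__":
            data = np.random.default_rng(0).normal(size=(50, 12))   # 50 time points, 12 genes
            modules = [list(range(0, 6)), list(range(6, 12))]
            print([w.shape for w in infer_grn(data, modules, processes=2)])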

  13. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Science.gov (United States)

    Xiao, Xiangyun; Zhang, Wei; Zou, Xiufen

    2015-01-01

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.

  14. Differential cytotoxicity induced by the Titanium(IV)Salan complex Tc52 in G2-phase independent of DNA damage

    International Nuclear Information System (INIS)

    Pesch, Theresa; Schuhwerk, Harald; Wyrsch, Philippe; Immel, Timo; Dirks, Wilhelm; Bürkle, Alexander; Huhn, Thomas; Beneke, Sascha

    2016-01-01

    Chemotherapy is one of the major treatment modalities for cancer. Metal-based compounds such as derivatives of cisplatin are in the front line of therapy against a subset of cancers, but their use is restricted by severe side-effects and the induction of resistance in treated tumors. Subsequent research focused on the development of cytotoxic metal complexes without cross-resistance to cisplatin and with reduced side-effects. This led to the discovery of first-generation titanium(IV)salan complexes, which reached clinical trials but lacked efficacy. New-generation titanium(IV)salan complexes show promising anti-tumor activity in mice, but their molecular mechanism of cytotoxicity is completely unknown. Four different human cell lines were analyzed in their responses to a toxic (Tc52) and a structurally highly related but non-toxic (Tc53) titanium(IV)salan complex. Viability assays were used to reveal a suitable treatment range, and flow-cytometry analysis was performed to monitor the impact of dosage and treatment time on cell-cycle distribution and cell death. Potential DNA strand break induction and crosslinking was investigated by immunostaining of damage markers as well as automated fluorometric analysis of DNA unwinding. Changes in nuclear morphology were analyzed by DAPI staining. Acidic beta-galactosidase activity together with morphological changes was monitored to detect cellular senescence. Western blotting was used to analyze the induction of pro-apoptotic markers such as activated caspase-7 and cleavage of PARP1, and the general stress kinase p38. Here we show that the titanium(IV)salan Tc52 is effective in inducing cell death in the lower micromolar range. Surprisingly, Tc52 does not target DNA, contrary to expectations deduced from the reported activity of other titanium complexes. Instead, Tc52 application interferes with progression from G2-phase into mitosis and induces apoptotic cell death in the tested tumor cells. Contrarily, human fibroblasts undergo senescence in a

  15. Manufacture of a four-sheet complex component from different titanium alloys by superplastic forming

    Science.gov (United States)

    Allazadeh, M. R.; Zuelli, N.

    2017-10-01

    A superplastic forming (SPF) process was deployed to form a complex eight-pocket component from a four-sheet sandwich panel sheetstock. Six sheetstock packs were composed of two core sheets made of Ti-6Al-4V or Ti-5Al-4Cr-4Mo-2Sn-2Zr titanium alloy and two skin sheets made of Ti-6Al-4V or Ti-6Al-2Sn-4Zr-2Mo titanium alloy, in three different combinations. The sheets were welded with two successive welding patterns over the core and skin sheets to meet the required component details. The welding methods applied were intermittent and continuous resistance seam welding, for bonding the core sheets to each other and the skin sheets over the core panel, respectively. The final component configuration was predicted from the die drawings and finite element method (FEM) simulations of the sandwich panels. An SPF set-up with two inlet gas feed pipes allowed the trials to deliver two simultaneously acting pressure-time load cycles, extracted from FEM analysis for a specific forming temperature and strain rate. The SPF pressure-time cycles were optimized via GOM scanning and visual inspection of sections of the packs in order to assess the degree of core panel formation during inflation of the sheetstock. Two sets of GOM scan results were compared in the GOM software to inspect the surface and internal features of the inflated multisheet packs. The results highlighted the capability of the tested SPF process to form complex components from a flat multisheet pack made of different titanium alloys.

  16. Large-Eddy Simulations of Flows in Complex Terrain

    Science.gov (United States)

    Kosovic, B.; Lundquist, K. A.

    2011-12-01

    Large-eddy simulation (LES) as a methodology for numerical simulation of turbulent flows was first developed to study turbulent flows in the atmosphere by Lilly (1967). The first LES were carried out by Deardorff (1970), who used these simulations to study atmospheric boundary layers. Ever since, LES has been used extensively to study canonical atmospheric boundary layers, in most cases flat-plate boundary layers under the assumption of horizontal homogeneity. Carefully designed LES of canonical convective, neutrally stratified and, more recently, stably stratified atmospheric boundary layers have contributed significantly to a better understanding of these flows and their parameterizations in large-scale models. These simulations were often carried out using codes specifically designed and developed for large-eddy simulation of horizontally homogeneous flows with periodic lateral boundary conditions. Recent developments in multi-scale numerical simulation of atmospheric flows enable numerical weather prediction (NWP) codes such as ARPS (Chow and Street, 2009), COAMPS (Golaz et al., 2009) and the Weather Research and Forecasting (WRF) model to be used nearly seamlessly across a wide range of atmospheric scales, from synoptic down to turbulent scales in atmospheric boundary layers. Before multi-scale simulations of atmospheric flows can be carried out with confidence, NWP codes must be validated for accurate performance in simulating flows over complex or inhomogeneous terrain. We therefore carry out validation of WRF-LES for simulations of flows over complex terrain using data from the Askervein Hill (Taylor and Teunissen, 1985, 1987) and METCRAX (Whiteman et al., 2008) field experiments. WRF's nesting capability is employed with a one-way nested inner domain that includes complex terrain representation, while the coarser outer nest is used to spin up fully developed atmospheric boundary layer turbulence and thus accurately represent the inflow to the inner domain. LES of a

  17. Corrosion resistant properties and weldabilities of ASTM Grade 12 titanium alloy

    International Nuclear Information System (INIS)

    Tsumori, Yoshikatsu; Itoh, Hideo

    1988-01-01

    Plates, sheets, bars, wires and thin seam-welded tubing were manufactured from a large-scale ingot of ASTM Grade 12 alloy (Ti-0.8Ni-0.3Mo). The processability of the G-12 alloy proved broadly similar to that of conventional commercially pure titanium grades. The G-12 alloy showed several advantageous features: its crevice corrosion resistance in chlorides was nearly equal to that of G-7 and Pd0/TiO2-coated titanium, and its maximum allowable stress can be designed higher than that of commercially pure titanium. The alloy has found applications in environments such as seawater, brines and moist chlorine, in various oil refinery and chemical industries, and elsewhere. (author)

  18. Identification of low order models for large scale processes

    NARCIS (Netherlands)

    Wattamwar, S.K.

    2010-01-01

    Many industrial chemical processes are complex, multi-phase and large scale in nature. These processes are characterized by various nonlinear physiochemical effects and fluid flows. Such processes often show coexistence of fast and slow dynamics during their time evolutions. The increasing demand

  19. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shuai Li

    2008-03-01

    Full Text Available A navigation method for a lunar rover based on large-scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and the time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large-scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected to neighbor nodes by virtual springs. The virtual springs force the particles to move from the randomly set positions to the original positions, which correspond to the node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the forces exerted by the neighbor nodes. The computational and communication complexity is O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase in network scale. The time consumption is also shown to remain almost constant, since the calculation steps are almost unrelated to the network scale.
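
    A toy Python sketch of the spring-model relaxation (our 2-D illustration with made-up constants, not the authors' LASM code): each neighbor exerts a virtual spring force whose rest length is the measured distance, and iterating the update drives the blind node's estimate toward a position consistent with all measurements.

        import math

        def lasm_step(pos, anchors, measured, k=0.1):
            """One relaxation step: sum the virtual spring forces from all
            neighbors and move the blind-node estimate accordingly."""
            fx = fy = 0.0
            for nid, (ax, ay) in anchors.items():
                dx, dy = pos[0] - ax, pos[1] - ay
                d = math.hypot(dx, dy) or 1e-9
                stretch = d - measured[nid]          # > 0: spring is too long
                fx -= k * stretch * dx / d
                fy -= k * stretch * dy / d
            return (pos[0] + fx, pos[1] + fy)

        # Example: three neighbor nodes, iterate until the estimate settles
        anchors = {1: (0.0, 0.0), 2: (10.0, 0.0), 3: (0.0, 10.0)}
        measured = {1: 7.07, 2: 7.07, 3: 7.07}
        est = (1.0, 1.0)
        for _ in range(200):
            est = lasm_step(est, anchors, measured)
        print(est)   # converges near (5, 5)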

  20. Fuel pin integrity assessment under large scale transients

    International Nuclear Information System (INIS)

    Dutta, B.K.

    2006-01-01

    The integrity of fuel rods under normal, abnormal and accident conditions is an important consideration during fuel design for advanced nuclear reactors. The fuel matrix and the sheath form the first barrier preventing the release of radioactive materials into the primary coolant. An understanding of fuel and clad behaviour under different reactor conditions, particularly under beyond-design-basis accident scenarios leading to large-scale transients, is always desirable to assess the inherent safety margins in fuel pin design and to plan for mitigating the consequences of accidents, if any. Severe accident conditions are typically characterized by energy deposition rates far exceeding the heat removal capability of the reactor coolant system. This may lead to clad failure due to fission gas pressure at high temperature, large-scale pellet-clad interaction and clad melting. Fuel rod performance is affected by many interdependent, complex phenomena involving extremely complex material behaviour. The versatile experimental database available in this area has led to the development of powerful analytical tools to characterize fuel under extreme scenarios

  1. Large-Scale medical image analytics: Recent methodologies, applications and Future directions.

    Science.gov (United States)

    Zhang, Shaoting; Metaxas, Dimitris

    2016-10-01

    Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that the scale of image retrieval systems should be increased significantly, to a point at which interactive systems can be effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.

  2. Stability and Control of Large-Scale Dynamical Systems A Vector Dissipative Systems Approach

    CERN Document Server

    Haddad, Wassim M

    2011-01-01

    Modern complex large-scale dynamical systems exist in virtually every aspect of science and engineering, and are associated with a wide variety of physical, technological, environmental, and social phenomena, including aerospace, power, communications, and network systems, to name just a few. This book develops a general stability analysis and control design framework for nonlinear large-scale interconnected dynamical systems, and presents the most complete treatment on vector Lyapunov function methods, vector dissipativity theory, and decentralized control architectures. Large-scale dynami

  3. Production of titanium tetrachloride

    International Nuclear Information System (INIS)

    Perillo, P.M.; Botbol, O.

    1990-01-01

    This report presents a summary of results from the operation of a laboratory-scale setup for the production, in batches of approximately 100 g, of titanium tetrachloride by chlorination with chloroform and carbon tetrachloride between 340 deg C and 540 deg C. The chlorinating agent vapors were passed through a quartz column, reacting with titanium oxide powder agglomerated into small spheres. The titanium tetrachloride obtained was condensed, collected in a balloon flask and then purified by fractional distillation. The optimum temperature for chloroform was 400 deg C with a 74% yield, and for carbon tetrachloride 500 deg C with a 69% yield. (Author) [es

  4. Titanium fasteners. [for aircraft industry

    Science.gov (United States)

    Phillips, J. L.

    1972-01-01

    Titanium fasteners are used in large quantities throughout the aircraft industry. Most of this usage is in aluminum structure; where titanium structure exists, titanium fasteners are logically used as well. Titanium fasteners offer potential weight savings to the designer at a cost of approximately $30 per pound of weight saved. Proper and least cost usage must take into consideration type of fastener per application, galvanic couples and installation characteristics of protective coatings, cosmetic appearance, paint adhesion, installation forces and methods available and fatigue performance required.

  5. Utilization of titanium sponge in H. T. G. R

    Energy Technology Data Exchange (ETDEWEB)

    Tone, H [Japan Atomic Energy Research Inst., Oarai, Ibaraki. Oarai Research Establishment

    1977-10-01

    The high-temperature gas-cooled reactor (H.T.G.R.) uses helium as the coolant and graphite as both the moderator and the fuel tube material. At first sight, there should not be any compatibility problem between these materials in the H.T.G.R. core region, where the temperature exceeds 700 °C; however, it is possible that the graphite core and other structural materials are oxidized by traces of impurities in the coolant. In a large-power H.T.G.R., water in-leakage from heat exchangers and coolant circulation pumps will probably be the major source of impurity, reacting with the graphite to produce H2, CO and CO2. In the near future, the nuclear heat of the H.T.G.R. will be used as a major heat source for steel production and the chemical industry. For these purposes, it will be necessary to construct a reactor with a helium coolant temperature greater than 1000 °C. Therefore, not only the development of refractory metals as structural materials but also an effective helium coolant purification system are keys to H.T.G.R. construction. Recently, in the helium coolant purification systems of H.T.G. reactors developed in several nations with advanced reactor programmes, titanium sponge is used very frequently to remove hydrogen gas as an impurity in the helium coolant. Titanium sponge can absorb very large quantities of hydrogen, and its absorption capacity can be controlled very easily by controlling the temperature of the titanium sponge, since titanium hydride is formed by an endothermic reaction. The titanium sponge trap is also used in OGL-1 (Oarai Gas Loop-1), the helium coolant purification system for a large-scale irradiation apparatus used for H.T.G.R. nuclear fuels. This apparatus has been installed in the Japan Materials Testing Reactor. In this report, the coolant purification systems of the H.T.G.R. and OGL-1 and the experimental results for the titanium sponge trap are explained briefly.

  6. Energy transfers in large-scale and small-scale dynamos

    Science.gov (United States)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) regimes, using numerical simulations of MHD turbulence for Pm = 20 (SSD) and Pm = 0.2 (LSD) on a 1024³ grid. For the SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic field at large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For the LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to the SSD.

  7. Stepwise Ti-Cl, Ti-CH3, and Ti-C6H5 bond dissociation enthalpies in bis(pentamethylcyclopentadienyl)titanium complexes

    NARCIS (Netherlands)

    Dias, Alberto R.; Salema, Margarida S.; Martinho Simões, Jose A.; Pattiasina, Johannes W.; Teuben, Jan H.

    1988-01-01

    Reaction-solution calorimetric studies involving the complexes Ti[η5-C5(CH3)5]2(CH3)2, Ti[η5-C5(CH3)5]2(CH3), Ti[η5-C5(CH3)5]2(C6H5), Ti[η5-C5(CH3)5]2Cl2, and Ti[η5-C5(CH3)5]2Cl, have enabled derivation of titanium-carbon and titanium-chlorine stepwise bond dissociation enthalpies in these species.

  8. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    Science.gov (United States)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  9. Lagrangian space consistency relation for large scale structure

    International Nuclear Information System (INIS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-01-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space
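
    For orientation, and in our own schematic notation rather than the paper's, the Eulerian-space relation takes the squeezed-limit form

        \lim_{\mathbf q \to 0}
        \langle \delta_{\mathbf q}(\eta)\,\delta_{\mathbf k_1}(\eta_1)\cdots\delta_{\mathbf k_N}(\eta_N)\rangle'
        \;\simeq\;
        -\,P_\delta(q,\eta)\sum_{i=1}^{N}\frac{D(\eta_i)}{D(\eta)}\,
        \frac{\mathbf k_i\cdot\mathbf q}{q^2}\,
        \langle \delta_{\mathbf k_1}(\eta_1)\cdots\delta_{\mathbf k_N}(\eta_N)\rangle',

    where D is the linear growth factor and the primes denote correlators with the momentum-conserving delta function stripped; the Lagrangian-space statement of the paper is that the analogously normalized squeezed correlator simply vanishes, even at unequal times.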

  10. Determination of quercetin using a photo-electrochemical sensor modified with titanium dioxide and a platinum(II)-porphyrin complex

    International Nuclear Information System (INIS)

    Tian, Li; Wang, Binbin; Chen, Ruizhan; Gao, Ye; Chen, Yanling; Li, Tianjiao

    2015-01-01

    A glassy carbon electrode (GCE) was modified with a film containing titanium dioxide and a Pt(II)-porphyrin complex, and its response to quercetin was investigated employing cyclic voltammetry and chronoamperometry. The oxidation current caused by quercetin is largely enhanced under UV illumination. The effects of pH value, mass of TiO2 in the film, UV illumination time and applied potential were studied. Under optimized conditions, the peak current at a typical applied voltage of +0.4 V depends linearly on the concentration of quercetin in the 0.002 to 50 mg L⁻¹ range. The detection limit (at an SNR of 3) is 0.8 μg L⁻¹. The method was successfully applied to the determination of quercetin in (spiked) samples of tea and apple juice. (author)

  11. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  12. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that approaches to large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  13. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. Whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly on the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here, modeling tools, including RANS- and LES-based computational fluid dynamics codes, can be effectively exploited in order to investigate the fluid flow phenomena. The latter are well suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively parallel computational resources are likely to be necessary in order to adequately address the complex coupled phenomena to the level of detail that is necessary.

  14. Assessment of climate change impacts on rainfall using large scale ...

    Indian Academy of Sciences (India)

    Many of the applied techniques in water resources management can be directly or indirectly influenced by ... is based on large scale climate signals data around the world. In order ... predictand relationships are often very complex. .... constraints to solve the optimization problem. ..... social, and environmental sustainability.

  15. Thermal convection of liquid metal in the titanium reduction reactor

    Science.gov (United States)

    Teimurazov, A.; Frick, P.; Stefani, F.

    2017-06-01

    The structure of the convective flow of molten magnesium in a metallothermic titanium reduction reactor has been studied numerically in a three-dimensional, non-stationary formulation with conjugate heat transfer between the liquid magnesium and the solids (the steel walls of the cavity and the titanium block). A nonuniform computational mesh with a total of 3.7 million grid points was used. The Large Eddy Simulation technique was applied to take into account the turbulence in the liquid phase. The instantaneous and average characteristics of the process and the velocity and temperature pulsation fields are analyzed. The simulations were performed for three specific heating regimes: with the furnace heaters operating at full power, with only the furnace heaters at the bottom of the vessel switched on, and with the furnace heaters switched off. It is shown that the localization of the cooling zone can completely reorganize the structure of the large-scale flow. Therefore, by changing heating regimes, it is possible to influence the flow structure so as to create the most favorable conditions for the reaction. It is also shown that the presence of the titanium block strongly affects the flow structure.

  16. MacroBac: New Technologies for Robust and Efficient Large-Scale Production of Recombinant Multiprotein Complexes.

    Science.gov (United States)

    Gradia, Scott D; Ishida, Justin P; Tsai, Miaw-Sheue; Jeans, Chris; Tainer, John A; Fuss, Jill O

    2017-01-01

    Recombinant expression of large, multiprotein complexes is essential and often rate limiting for determining structural, biophysical, and biochemical properties of DNA repair, replication, transcription, and other key cellular processes. Baculovirus-infected insect cell expression systems are especially well suited for producing large, human proteins recombinantly, and multigene baculovirus systems have facilitated studies of multiprotein complexes. In this chapter, we describe a multigene baculovirus system called MacroBac that uses a Biobricks-type assembly method based on restriction and ligation (Series 11) or ligation-independent cloning (Series 438). MacroBac cloning and assembly is efficient and equally well suited for either single subcloning reactions or high-throughput cloning using 96-well plates and liquid handling robotics. MacroBac vectors are polypromoter with each gene flanked by a strong polyhedrin promoter and an SV40 poly(A) termination signal that minimize gene order expression level effects seen in many polycistronic assemblies. Large assemblies are robustly achievable, and we have successfully assembled as many as 10 genes into a single MacroBac vector. Importantly, we have observed significant increases in expression levels and quality of large, multiprotein complexes using a single, multigene, polypromoter virus rather than coinfection with multiple, single-gene viruses. Given the importance of characterizing functional complexes, we believe that MacroBac provides a critical enabling technology that may change the way that structural, biophysical, and biochemical research is done. © 2017 Elsevier Inc. All rights reserved.

  17. Reliability of large and complex systems

    CERN Document Server

    Kolowrocki, Krzysztof

    2014-01-01

    Reliability of Large and Complex Systems, previously titled Reliability of Large Systems, is an innovative guide to the current state and reliability of large and complex systems. In addition to revised and updated content on the complexity and safety of large and complex mechanisms, this new edition looks at the reliability of nanosystems, a key research topic in nanotechnology science. The author discusses the importance of safety investigation of critical infrastructures that have aged or have been exposed to varying operational conditions. This reference provides an asympt

  18. Titanium Insertion into CO Bonds in Anionic Ti-CO2 Complexes.

    Science.gov (United States)

    Dodson, Leah G; Thompson, Michael C; Weber, J Mathias

    2018-03-22

    We explore the structures of [Ti(CO2)y]− cluster anions using infrared photodissociation spectroscopy and quantum chemistry calculations. The existence of spectral signatures of metal carbonyl CO stretching modes shows that insertion of titanium atoms into C-O bonds represents an important reaction during the formation of these clusters. In addition to carbonyl groups, the infrared spectra show that the titanium center is coordinated to oxalato, carbonato, and oxo ligands, which form along with the metal carbonyls. The presence of a metal oxalato ligand promotes C-O bond insertion in these systems. These results highlight the affinity of titanium for C-O bond insertion processes.

  19. The use of titanium dioxide for selective enrichment of phosphorylated peptides

    DEFF Research Database (Denmark)

    Thingholm, Tine E.; Larsen, Martin R.

    2016-01-01

    Titanium dioxide (TiO2) has very high affinity for phosphopeptides and in recent years it has become one of the most popular methods for phosphopeptide enrichment from complex biological samples. Peptide loading onto TiO2 resin in a highly acidic environment in the presence of 2,5-dihydroxybenzoic acid (DHB), phthalic acid, lactic acid, or glycolic acid has been shown to improve selectivity significantly by reducing unspecific binding of non-phosphorylated peptides. The phosphopeptides bound to the TiO2 are subsequently eluted from the chromatographic material using an alkaline buffer. TiO2 chromatography is extremely tolerant towards most buffers used in biological experiments, highly robust and as such it has become the method of choice in large-scale phosphoproteomics. Here we describe a batch mode protocol for phosphopeptide enrichment using TiO2 chromatographic material followed by desalting

  20. An efficient and novel computation method for simulating diffraction patterns from large-scale coded apertures on large-scale focal plane arrays

    Science.gov (United States)

    Shrekenhamer, Abraham; Gottesman, Stephen R.

    2012-10-01

    A novel and memory-efficient method for computing diffraction patterns produced on large-scale focal planes by large-scale Coded Apertures at wavelengths where diffraction effects are significant has been developed and tested. The scheme, readily implementable on portable computers, overcomes the memory limitations of present state-of-the-art simulation codes such as Zemax. The method consists of first calculating a set of reference complex field (amplitude and phase) patterns on the focal plane produced by a single (reference) central hole, extending to twice the focal plane array size, with one such pattern for each line-of-sight (LOS) direction and wavelength in the scene, and with the pattern amplitude corresponding to the square root of the spectral irradiance from each such LOS direction in the scene at selected wavelengths. Next, the set of reference patterns is transformed to generate pattern sets for the other holes. The transformation consists of a translational pattern shift corresponding to each hole's position offset and an electrical phase shift corresponding to each hole's position offset and the incoming radiance's direction and wavelength. The set of complex patterns for each direction and wavelength is then summed coherently and squared for each detector to yield a set of power patterns unique to each direction and wavelength. Finally, the set of power patterns is summed to produce the full-waveband diffraction pattern of the scene. With this tool researchers can now efficiently simulate diffraction patterns produced from scenes by large-scale Coded Apertures onto large-scale focal plane arrays to support the development and optimization of coded aperture masks and image reconstruction algorithms.
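
    A compact numpy illustration of the shift-and-sum idea (a sketch under assumed, tiny inputs; the hole offsets, directions and reference field are hypothetical and the real code operates at far larger scale):

        import numpy as np

        def scene_pattern(ref_fields, holes, directions, wavelengths, irradiance):
            """Coherent shift-and-sum over holes, then incoherent sum over
            LOS directions and wavelengths, as described in the abstract."""
            shape = next(iter(ref_fields.values())).shape
            power = np.zeros(shape)
            for d in directions:
                for w in wavelengths:
                    field = np.zeros(shape, dtype=complex)
                    for (dy, dx) in holes:
                        # translational shift for the hole's position offset ...
                        shifted = np.roll(ref_fields[(d, w)], (dy, dx), axis=(0, 1))
                        # ... plus an electrical phase shift for offset, direction and wavelength
                        phase = 2 * np.pi * (d[0] * dx + d[1] * dy) / w
                        field += np.sqrt(irradiance[(d, w)]) * shifted * np.exp(1j * phase)
                    power += np.abs(field) ** 2      # detector power for this (d, w)
            return power                             # full-waveband pattern

        if __name__ == "__main__":
            N = 32
            y, x = np.mgrid[0:2 * N, 0:2 * N]
            ref = np.exp(1j * 0.1 * (x + y))               # dummy reference field
            dirs, wls = [(0.0, 0.0), (0.01, 0.0)], [1.0]
            fields = {(d, w): ref for d in dirs for w in wls}
            irr = {(d, w): 1.0 for d in dirs for w in wls}
            print(scene_pattern(fields, [(0, 0), (0, 5), (7, 0)], dirs, wls, irr).shape)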

  1. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for common 3D display software, such as MeshLab, to achieve real-time display of and interaction with such large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance and then use an out-of-core, view-dependent, multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming the reconstructed scene, and the 3D camera poses can also be displayed. Furthermore, memory consumption is significantly decreased via an internal/external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
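
    A small Python sketch of the view-dependent LOD selection at the heart of such a renderer (the node structure and thresholds are our assumptions, not the paper's data structures): a node is drawn when its geometric error, projected onto the screen, falls below a pixel tolerance; otherwise its children are visited.

        import math
        from dataclasses import dataclass, field
        from typing import List, Tuple

        @dataclass
        class LODNode:
            center: Tuple[float, float, float]
            error: float                       # geometric error of this level
            children: List["LODNode"] = field(default_factory=list)

        def select_lod(root, camera, fov_px, tol_px=1.0):
            """Collect the coarsest nodes whose projected error is acceptable."""
            draw, stack = [], [root]
            while stack:
                node = stack.pop()
                dist = max(math.dist(camera, node.center), 1e-6)
                if node.error * fov_px / dist <= tol_px or not node.children:
                    draw.append(node)            # coarse enough, or a leaf: draw it
                else:
                    stack.extend(node.children)  # too coarse on screen: refine
            return draw

        # Example: a two-level hierarchy; a distant camera keeps only the coarse node
        fine = [LODNode((i, 0.0, 0.0), 0.01) for i in range(4)]
        coarse = LODNode((0.0, 0.0, 0.0), 0.5, fine)
        print(len(select_lod(coarse, camera=(0, 0, 1000), fov_px=1000)))   # 1
        print(len(select_lod(coarse, camera=(0, 0, 2), fov_px=1000)))      # 4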

  2. Qualification testing and full-scale demonstration of titanium-treated zeolite for sludge wash processing

    Energy Technology Data Exchange (ETDEWEB)

    Dalton, W.J.

    1997-06-30

    Titanium-treated zeolite is a new ion-exchange material that is a variation of UOP (formerly Union Carbide) IONSIV IE-96 zeolite (IE-96) that has been treated with an aqueous titanium solution in a proprietary process. IE-96 zeolite, without the titanium treatment, has been used since 1988 in the West Valley Demonstration Project's (WVDP) Supernatant Treatment System (STS) ion-exchange columns to remove Cs-137 from the liquid supernatant solution. The titanium-treated zeolite (TIE-96) was developed by Battelle-Pacific Northwest Laboratory (PNL). Following successful lab-scale testing of the PNL-prepared TIE-96, UOP was selected as a commercial supplier of the TIE-96 zeolite. Extensive laboratory tests conducted by both the WVDP and PNL indicate that the TIE-96 will successfully remove comparable quantities of Cs-137 from Tank 8D-2 high-level radioactive liquid as was done previously with IE-96. In addition to removing Cs-137, TIE-96 also removes trace quantities of Pu, as well as Sr-90, from the liquid being processed over a wide range of operating conditions: temperature, pH, and dilution. The exact mechanism responsible for the Pu removal is not fully understood. However, the Pu that is removed by the TIE-96 remains on the ion-exchange column under anticipated sludge wash processing conditions. From May 1988 to November 1990, the WVDP processed 560,000 gallons of liquid high-level radioactive supernatant waste stored in Tank 8D-2. Supernatant is an aqueous salt solution comprised primarily of soluble sodium salts. The second stage of the high-level waste treatment process began November 1991 with the initiation of sludge washing. Sludge washing involves the mixing of Tank 8D-2 contents, both sludge and liquid, to dissolve the sulfate salts present in the sludge. Two sludge washes were required to remove sulfates from the sludge.

  3. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Science.gov (United States)

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.
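
    A generic Python sketch of a search-based confidence index (our construction, not the paper's C3I definition; the query categories, weights and comparison series are invented): category search volumes are z-scored and combined into a composite that can then be correlated against a survey-based confidence series.

        import numpy as np

        def behavioral_index(search_series, weights=None):
            """Z-score each query-category series and return their weighted mean."""
            names = sorted(search_series)
            z = np.vstack([
                (np.asarray(search_series[n], float) - np.mean(search_series[n]))
                / (np.std(search_series[n]) or 1.0)
                for n in names
            ])
            w = np.ones(len(names)) if weights is None else np.array([weights[n] for n in names])
            return w @ z / np.abs(w).sum()

        # Example: two invented query categories tracked against an assumed survey series
        rng = np.random.default_rng(1)
        survey = np.cumsum(rng.normal(size=104))             # weekly survey confidence
        queries = {"jobs": survey + rng.normal(size=104),
                   "loans": -survey + rng.normal(size=104)}
        idx = behavioral_index(queries, weights={"jobs": 1.0, "loans": -1.0})
        print(np.corrcoef(idx, survey)[0, 1])                # strong positive correlation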

  4. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Directory of Open Access Journals (Sweden)

    Xianlei Dong

    Full Text Available Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  5. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    with a complex conversion route. Computational fluid dynamics is used to model transport phenomena in large reactors capturing tank profiles, and delays due to plug flows. This work publishes for the first time demonstration scale real data for validation showing that the model library is suitable...

  6. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  7. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

    Recent research on Modeling and Control of a Large Nuclear Reactor. Presents a three-time-scale approach. Written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property, with emphasis on three-time-scale systems.

  8. Parallel clustering algorithm for large-scale biological data sets.

    Science.gov (United States)

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtimes are required for cluster identification. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction and the affinity propagation algorithm. A shared-memory architecture is used to construct the similarity matrix, and a distributed system is used for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate data partition and reduction scheme is designed in our method to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies.
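
    As a toy illustration of the data-partition idea (our sketch, not the paper's implementation; the paper's shared-memory and distributed machinery is replaced here by a simple process pool), the fragment below builds the negative squared-Euclidean similarity matrix commonly used by affinity propagation in row blocks across worker processes.

        import numpy as np
        from multiprocessing import Pool

        def similarity_block(args):
            """Compute one row block of S[i, j] = -||x_i - x_j||^2."""
            rows, data = args
            diff = data[rows, None, :] - data[None, :, :]
            return rows, -np.sum(diff ** 2, axis=-1)

        def parallel_similarity(data, n_workers=4):
            """Assemble the full similarity matrix from per-worker row blocks."""
            n = len(data)
            blocks = np.array_split(np.arange(n), n_workers)
            S = np.empty((n, n))
            with Pool(n_workers) as pool:
                for rows, block in pool.map(similarity_block, [(b, data) for b in blocks]):
                    S[rows] = block
            return S

        if __name__ == "__main__":
            pts = np.random.default_rng(2).normal(size=(200, 16))
            print(parallel_similarity(pts, n_workers=2).shape)   # (200, 200)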

  9. HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.

    Science.gov (United States)

    Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J

    2016-06-03

    Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu .

  10. Laser induced single spot oxidation of titanium

    Energy Technology Data Exchange (ETDEWEB)

    Jwad, Tahseen, E-mail: taj355@bham.ac.uk; Deng, Sunan; Butt, Haider; Dimov, S.

    2016-11-30

    Highlights: • A new high-resolution laser-induced oxidation (colouring) method is proposed (single spot oxidation). • The method is applied to control oxide film thicknesses, and hence colours, on titanium substrates at the micro-scale. • The method enables imprinting a high-resolution coloured image on a Ti substrate. • Optical and morphological periodic surface structures are also produced by an array of oxide spots using the proposed method. • Colour coding of two colours into one field is presented. - Abstract: Titanium oxides have a wide range of applications in industry, and they can be formed on pure titanium using different methods. Laser-induced oxidation is one of the most reliable methods due to its controllability and selectivity. Colour marking is one of the main applications of the oxidation process. However, the colourizing process based on laser scanning strategies is limited by the relatively large processing area in comparison to the beam size. Single spot oxidation of titanium substrates is proposed in this research in order to increase the resolution of the processed area and also to address the requirements of potential new applications. The method is applied to produce oxide films with different thicknesses, and hence colours, on titanium substrates. A high-resolution colour image is imprinted on a sheet of pure titanium by converting its pixels' colours into laser parameter settings. Optical and morphological periodic surface structures are also produced by an array of oxide spots and then analysed. Two colours have been coded into one field, and the dependencies of the reflected colours on the incident and azimuthal angles of the light are discussed. The findings are of interest to a range of application areas, as they can be used to imprint optical devices such as diffusers and Fresnel lenses on metallic surfaces, as well as for colour marking.

  11. Laser induced single spot oxidation of titanium

    International Nuclear Information System (INIS)

    Jwad, Tahseen; Deng, Sunan; Butt, Haider; Dimov, S.

    2016-01-01

    Highlights: • A new high-resolution laser-induced oxidation (colouring) method is proposed (single spot oxidation). • The method is applied to control oxide film thicknesses, and hence colours, on titanium substrates at the micro-scale. • The method enables imprinting a high-resolution coloured image on a Ti substrate. • Optical and morphological periodic surface structures are also produced by an array of oxide spots using the proposed method. • Colour coding of two colours into one field is presented. - Abstract: Titanium oxides have a wide range of applications in industry, and they can be formed on pure titanium using different methods. Laser-induced oxidation is one of the most reliable methods due to its controllability and selectivity. Colour marking is one of the main applications of the oxidation process. However, the colourizing process based on laser scanning strategies is limited by the relatively large processing area in comparison to the beam size. Single spot oxidation of titanium substrates is proposed in this research in order to increase the resolution of the processed area and also to address the requirements of potential new applications. The method is applied to produce oxide films with different thicknesses, and hence colours, on titanium substrates. A high-resolution colour image is imprinted on a sheet of pure titanium by converting its pixels' colours into laser parameter settings. Optical and morphological periodic surface structures are also produced by an array of oxide spots and then analysed. Two colours have been coded into one field, and the dependencies of the reflected colours on the incident and azimuthal angles of the light are discussed. The findings are of interest to a range of application areas, as they can be used to imprint optical devices such as diffusers and Fresnel lenses on metallic surfaces, as well as for colour marking.

  12. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai; Sang, Huiyan; Huang, Jianhua Z.

    2014-01-01

    of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov

  13. Effect of nanometer scale surface roughness of titanium for osteoblast function

    Directory of Open Access Journals (Sweden)

    Satoshi Migita

    2017-02-01

    Full Text Available Surface roughness is an important property for metallic materials used in medical implants or other devices. The present study investigated the effects of surface roughness on cellular function, namely cell attachment, proliferation, and differentiation potential. Titanium (Ti) discs with hundred-nanometer- or nanometer-scale surface roughness (rough and smooth Ti surfaces, respectively) were prepared by polishing with silicon carbide paper. MC3T3-E1 mouse osteoblast-like cells were cultured on the discs, and their attachment, spreading area, proliferation, and calcification were analyzed. Cells cultured on rough Ti discs showed reduced attachment, proliferation, and calcification ability, suggesting that this surface inhibited osteoblast function. The findings can provide a basis for improving the biocompatibility of medical devices.

  14. Decoupling local mechanics from large-scale structure in modular metamaterials

    Science.gov (United States)

    Yang, Nan; Silverberg, Jesse L.

    2017-04-01

    A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such “inverse design” is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module’s design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.

  15. Titanium coordination compounds: from discrete metal complexes to metal–organic frameworks

    KAUST Repository

    Assi, Hala

    2017-05-24

    Owing to their promise in photocatalysis and optoelectronics, titanium-based metal–organic frameworks (MOFs) are one of the most appealing classes of MOFs reported to date. Nevertheless, Ti-MOFs are still very scarce because of their challenging synthesis, associated with a poor degree of control of their chemistry and crystallization. This review aims to give an overview of recent progress in this field, focusing on the most relevant existing titanium coordination compounds as well as their promising photoredox properties. Not only Ti-MOFs but also Ti-oxo clusters will be discussed, and particular attention will be given to the successful synthetic strategies that overcome the still “unpredictable” reactivity of titanium ions, particularly to afford crystalline porous coordination polymers.

  16. Large circular dichroism and optical rotation in titanium doped chiral silver nanorods

    Energy Technology Data Exchange (ETDEWEB)

    Titus, Jitto; Perera, A.G. Unil [Department of Physics and Astronomy, Optoelectronics Laboratory, GSU, Atlanta, GA (United States); Larsen, George; Zhao, Yiping [Department of Physics and Astronomy, Nanolab, UGA, Athens, GA (United States)

    2016-10-15

    The circular dichroism of titanium-doped silver chiral nanorod arrays grown using the glancing angle deposition (GLAD) method is investigated in the visible and near infrared ranges using transmission ellipsometry and spectroscopy. These films are found to have significant circular polarization effects across broad ranges of the visible to NIR spectrum, including large values for optical rotation. The characteristics of these circular polarization effects are strongly influenced by the morphology of the deposited arrays. Thus, the morphological control of the optical activity in these nanostructures demonstrates significant optimization capability of the GLAD technique for fabricating chiral plasmonic materials. (copyright 2016 by WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  17. Large scale hydrogeological modelling of a low-lying complex coastal aquifer system

    DEFF Research Database (Denmark)

    Meyer, Rena

    2018-01-01

    intrusion. In this thesis a new methodological approach was developed to combine 3D numerical groundwater modelling with a detailed geological description and hydrological, geochemical and geophysical data. It was applied to a regional scale saltwater intrusion in order to analyse and quantify...... the groundwater flow dynamics, identify the driving mechanisms that formed the saltwater intrusion to its present extent and to predict its progression in the future. The study area is located in the transboundary region between Southern Denmark and Northern Germany, adjacent to the Wadden Sea. Here, a large-scale...... parametrization schemes that accommodate hydrogeological heterogeneities. Subsequently, density-dependent flow and transport modelling of multiple salt sources was successfully applied to simulate the formation of the saltwater intrusion during the last 4200 years, accounting for historic changes in the hydraulic...

  18. III. FROM SMALL TO BIG: METHODS FOR INCORPORATING LARGE SCALE DATA INTO DEVELOPMENTAL SCIENCE.

    Science.gov (United States)

    Davis-Kean, Pamela E; Jager, Justin

    2017-06-01

    For decades, developmental science has been based primarily on relatively small-scale data collections with children and families. Part of the reason for the dominance of this type of data collection is the complexity of collecting cognitive and social data on infants and small children. These small data sets are limited both in the power to detect differences and in the demographic diversity needed to generalize clearly and broadly. Thus, in this chapter we will discuss the value of using existing large-scale data sets to test the complex questions of child development, and how to develop future large-scale data sets that are both representative and able to answer the important questions of developmental scientists. © 2017 The Society for Research in Child Development, Inc.

  19. Self-* and Adaptive Mechanisms for Large Scale Distributed Systems

    Science.gov (United States)

    Fragopoulou, P.; Mastroianni, C.; Montero, R.; Andrjezak, A.; Kondo, D.

    Large-scale distributed computing systems and infrastructure, such as Grids, P2P systems and desktop Grid platforms, are decentralized, pervasive, and composed of a large number of autonomous entities. The complexity of these systems is such that human administration is nearly impossible and centralized or hierarchical control is highly inefficient. These systems need to run on highly dynamic environments, where content, network topologies and workloads are continuously changing. Moreover, they are characterized by the high degree of volatility of their components and the need to provide efficient service management and to handle efficiently large amounts of data. This paper describes some of the areas for which adaptation emerges as a key feature, namely, the management of computational Grids, the self-management of desktop Grid platforms and the monitoring and healing of complex applications. It also elaborates on the use of bio-inspired algorithms to achieve self-management. Related future trends and challenges are described.

  20. The characteristics of corrosion, radiation degradation and dissolution of titanium alloys

    International Nuclear Information System (INIS)

    Sung, K. W.; Na, J. W.; Choi, B. S.; Lee, D. J.; Chang, M. H.

    2001-12-01

    In order to establish the technical bases of water chemistry design requirements related to titanium alloys, we investigated the characteristics of corrosion, activation, radiation degradation and radiation hydrogen embrittlement of titanium alloys, and the dissolution of titanium dioxide. Titanium alloys generally have high corrosion resistance. Corrosion product release from PT-7M and PT-3V titanium alloy surfaces over 18 months of operation is negligible, the corrosion penetration over about 30 years is about 1 μm, and the corrosion rate is not higher than one third of that of austenitic steel. After neutron irradiation titanium converts only into Sc-46, with an 85-day half-life, and its radioactivity is not higher than one thousandth of that produced from nickel. Therefore, under conditions without any neutron irradiation, radiation damage of titanium alloys would pose no problem. Titanium dioxide, which protects the metal from corrosion, has retrograde solubility in neutral solutions. It does not form any complexes with ligands such as ammonia, but Ti(IV) is made more stable by complexing with water molecules. In conclusion, it is estimated that titanium alloys such as PT-7M would be applicable to steam generator materials

  1. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting have an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy and political spheres. In this very position, large-scale research and policy consulting lack the institutional guarantees and rational background that are characteristic of their sociological environment. Large-scale research can neither deal with the production of innovative goods under considerations of profitability, nor can it hope for full recognition by the basic-research-oriented scientific community. Policy consulting has neither the political system's assigned competence to make decisions, nor can it judge successfully by the critical standards of the established social sciences, at least as far as the present situation is concerned. This intermediary position of large-scale research and policy consulting supports, in three respects, the thesis that this is a new form of institutionalization of science: 1) external control, 2) the form of organization, 3) the theoretical conception of large-scale research and policy consulting. (orig.) [de

  2. Synthesis and characterization of nanostructured titanium carbide for fuel cell applications

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Paviter; Singh, Harwinder; Singh, Bikramjeet; Kaur, Manpreet; Kaur, Gurpreet; Kumar, Akshay, E-mail: akshaykumar.tiet@gmail.com [Advanced Functional Material Laboratory, Department of Nanotechnology,, Sri Guru Granth Sahib World University, Fatehgarh Sahib-140 406 Punjab (India); Kumar, Manjeet [Department of Materials Engineering, Defense Institute of Advanced Technology (DU), Pune-411 025 (India); Bala, Rajni [Department of Mathematics Punjabi University Patiala-147 002 Punjab (India)

    2016-04-13

    Titanium carbide (TiC) nanoparticles have been successfully synthesized by carbo-thermic reaction of titanium and acetone at 800 °C. This method is a relatively low-temperature synthesis route and can be used for large-scale production of TiC. The synthesized nanoparticles have been characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM) and differential thermal analysis (DTA) techniques. XRD analysis confirmed the formation of single-phase TiC; the particles are spherical in shape, with an average particle size of 13 nm. DTA analysis shows that the phase is stable up to 900 °C and that the material can be used for high-temperature applications.

  3. The genetic etiology of Tourette Syndrome: Large-scale collaborative efforts on the precipice of discovery

    Directory of Open Access Journals (Sweden)

    Marianthi Georgitsi

    2016-08-01

    Full Text Available Gilles de la Tourette Syndrome (TS) is a childhood-onset neurodevelopmental disorder that is characterized by multiple motor and phonic tics. It has a complex etiology with multiple genes likely interacting with environmental factors to lead to the onset of symptoms. The genetic basis of the disorder remains elusive; however, multiple resources and large-scale projects are coming together, launching a new era in the field and bringing us to the verge of discovery. The large-scale efforts outlined in this report are complementary and represent a range of different approaches to the study of disorders with complex inheritance. The Tourette Syndrome Association International Consortium for Genetics (TSAICG) has focused on large families, parent-proband trios and cases for large case-control designs such as genomewide association studies (GWAS), copy number variation (CNV) scans and exome/genome sequencing. TIC Genetics targets rare, large effect size mutations in simplex trios and multigenerational families. The European Multicentre Tics in Children Study (EMTICS) seeks to elucidate gene-environment interactions including the involvement of infection and immune mechanisms in TS etiology. Finally, TS-EUROTRAIN, a Marie Curie Initial Training Network, aims to act as a platform to unify large-scale projects in the field and to educate the next generation of experts. Importantly, these complementary large-scale efforts are joining forces to uncover the full range of genetic variation and environmental risk factors for TS, holding great promise for identifying definitive TS susceptibility genes and shedding light on the complex pathophysiology of this disorder.

  4. The Genetic Etiology of Tourette Syndrome: Large-Scale Collaborative Efforts on the Precipice of Discovery

    Science.gov (United States)

    Georgitsi, Marianthi; Willsey, A. Jeremy; Mathews, Carol A.; State, Matthew; Scharf, Jeremiah M.; Paschou, Peristera

    2016-01-01

    Gilles de la Tourette Syndrome (TS) is a childhood-onset neurodevelopmental disorder that is characterized by multiple motor and phonic tics. It has a complex etiology with multiple genes likely interacting with environmental factors to lead to the onset of symptoms. The genetic basis of the disorder remains elusive. However, multiple resources and large-scale projects are coming together, launching a new era in the field and bringing us to the verge of discovery. The large-scale efforts outlined in this report are complementary and represent a range of different approaches to the study of disorders with complex inheritance. The Tourette Syndrome Association International Consortium for Genetics (TSAICG) has focused on large families, parent-proband trios and cases for large case-control designs such as genomewide association studies (GWAS), copy number variation (CNV) scans, and exome/genome sequencing. TIC Genetics targets rare, large effect size mutations in simplex trios and multigenerational families. The European Multicentre Tics in Children Study (EMTICS) seeks to elucidate gene-environment interactions including the involvement of infection and immune mechanisms in TS etiology. Finally, TS-EUROTRAIN, a Marie Curie Initial Training Network, aims to act as a platform to unify large-scale projects in the field and to educate the next generation of experts. Importantly, these complementary large-scale efforts are joining forces to uncover the full range of genetic variation and environmental risk factors for TS, holding great promise for identifying definitive TS susceptibility genes and shedding light on the complex pathophysiology of this disorder. PMID:27536211

  5. Foundational perspectives on causality in large-scale brain networks

    Science.gov (United States)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  6. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  7. Synthesis and structure of the first fullerene complex of titanium Cp{sub 2}Ti({eta}{sup 2}-C{sub 60})

    Energy Technology Data Exchange (ETDEWEB)

    Burlakov, V.V.; Usatov, A.V.; Lyssenko, K.A.; Antipin, M.Yu.; Novikov, Yu.N.; Shur, V.B. [Russian Academy of Sciences, Moscow (Russian Federation). A.N. Nesmeyanov Inst. of Organoelement Compounds

    1999-11-01

    The first fullerene complex of titanium Cp{sub 2}Ti({eta}{sup 2}-C{sub 60}) has been synthesized by reaction of the bis(trimethylsilyl)-acetylene complex of titanocene Cp{sub 2}Ti({eta}{sup 2}-Me{sub 3}SiC{sub 2}SiMe{sub 3}) with an equimolar amount of fullerene-60 in toluene at room temperature under argon. An X-ray diffraction study of the complex has shown that it has the structure of a titanacyclopropane derivative. (orig.)

  8. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...

  9. Radiolysis of titanium potassium oxalate in aqueous solution [gamma rays]

    Energy Technology Data Exchange (ETDEWEB)

    Bundo, Y; Ono, I [Industrial Research Inst. of Kanagawa Prefecture, Yokohama (Japan); Ogawa, T

    1975-01-01

    The dissolution state of titanium potassium oxalate in aqueous solution differs according to the pH. The yellowish-brown titanium complex produced by the reaction of titanium potassium oxalate and hydrogen peroxide also seems to differ in structure according to the pH. Considering these points, gamma-ray irradiation was carried out on samples prepared by dissolving titanium potassium oxalate in purified water under oxygen-saturated and nitrogen-saturated conditions, and the relation between irradiation dose and the production of the titanium complex was determined. On the basis of the experimental results, the mechanism of hydrogen peroxide formation was inferred. The radiation source used was 2,000 Ci of 60Co. For photometric analysis, a Hitachi type 139 photoelectric spectrophotometer was used. The experimental results show that in neutral water titanium potassium oxalate exists in a state in which two oxalic acid ions are coordinated to the titanyl ion, whereas when the pH is lowered by the addition of sulfuric acid it can exist in a state in which one oxalic acid ion is coordinated to the titanyl ion. The yield of hydrogen peroxide produced by irradiating an aqueous solution of titanium potassium oxalate with gamma rays is the sum of the molecular product from water and the radiolysis product from titanium potassium oxalate.
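
    The final sentence of the abstract expresses the hydrogen peroxide yield as a sum of two contributions. In standard radiation-chemical (G-value) notation this can be written as below; the notation is a generic restatement of that sentence, and no numerical G-values are implied.

    % Generic restatement of the abstract's sum relation in G-value notation.
    \[
      G_{\text{total}}(\mathrm{H_2O_2})
        = G_{\text{water, molecular}}(\mathrm{H_2O_2})
        + G_{\text{oxalate radiolysis}}(\mathrm{H_2O_2})
    \]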

  10. Nano-scale analysis of titanium dioxide fingerprint-development powders

    International Nuclear Information System (INIS)

    Reynolds, A J; Jones, B J; Sears, V; Bowman, V

    2008-01-01

    Titanium dioxide based powders are regularly used in the development of latent fingerprints on dark surfaces. For analysis of prints on adhesive tapes, the titanium dioxide is suspended in a surfactant and used in the form of a small particle reagent (SPR). Analysis of commercially available products shows varying levels of effectiveness of print development, with some powders adhering to the background as well as the print. Scanning electron microscopy (SEM) images of prints developed with different powders show a range of levels of aggregation of particles. Analytical transmission electron microscopy (TEM) of the fingerprint powder shows TiO2 particles with a surrounding coating, tens of nanometres thick, consisting of Al and Si rich material. X-ray photoelectron spectroscopy (XPS) is used to determine the composition and chemical state of the surface of the powders; with a penetration depth of approximately 10 nm, this technique demonstrates differing Ti:Al:Si ratios and oxidation states between the surfaces of different powders. Levels of titanium detected with this technique demonstrate variation in the integrity of the surface coating. The thickness, integrity and composition of the Al/Si-based coating is related to the level of aggregation of TiO2 particles and efficacy of print development.

  11. Nano-scale analysis of titanium dioxide fingerprint-development powders

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, A J; Jones, B J [Experimental Techniques Centre, Brunel University, Kingston Lane, Uxbridge, Middlesex, UB8 3PH (United Kingdom); Sears, V; Bowman, V [Fingerprint and Footwear Forensics, Home Office Scientific Development Branch, Sandridge, St Albans, Hertfordshire, AL4 9HQ (United Kingdom)], E-mail: b.j.jones@physics.org

    2008-08-15

    Titanium dioxide based powders are regularly used in the development of latent fingerprints on dark surfaces. For analysis of prints on adhesive tapes, the titanium dioxide is suspended in a surfactant and used in the form of a small particle reagent (SPR). Analysis of commercially available products shows varying levels of effectiveness of print development, with some powders adhering to the background as well as the print. Scanning electron microscopy (SEM) images of prints developed with different powders show a range of levels of aggregation of particles. Analytical transmission electron microscopy (TEM) of the fingerprint powder shows TiO{sub 2} particles with a surrounding coating, tens of nanometres thick, consisting of Al and Si rich material. X-ray photoelectron spectroscopy (XPS) is used to determine the composition and chemical state of the surface of the powders; with a penetration depth of approximately 10 nm, this technique demonstrates differing Ti:Al:Si ratios and oxidation states between the surfaces of different powders. Levels of titanium detected with this technique demonstrate variation in the integrity of the surface coating. The thickness, integrity and composition of the Al/Si-based coating is related to the level of aggregation of TiO{sub 2} particles and efficacy of print development.

  12. Some ecological guidelines for large-scale biomass plantations

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, W.; Cook, J.H.; Beyea, J. [National Audubon Society, Tavernier, FL (United States)

    1993-12-31

    The National Audubon Society sees biomass as an appropriate and necessary source of energy to help replace fossil fuels in the near future, but is concerned that large-scale biomass plantations could displace significant natural vegetation and wildlife habitat, and reduce national and global biodiversity. We support the development of an industry large enough to provide significant portions of our energy budget, but we see a critical need to ensure that plantations are designed and sited in ways that minimize ecological disruption, or even provide environmental benefits. We have been studying the habitat value of intensively managed short-rotation tree plantations. Our results show that these plantations support large populations of some birds, but not all of the species using the surrounding landscape, and indicate that their value as habitat can be increased greatly by including small areas of mature trees within them. We believe short-rotation plantations can benefit regional biodiversity if they can be deployed as buffers for natural forests, or as corridors connecting forest tracts. To realize these benefits, and to avoid habitat degradation, regional biomass plantation complexes (e.g., the plantations supplying all the fuel for a powerplant) need to be planned, sited, and developed as large-scale units in the context of the regional landscape mosaic.

  13. Optimal number of coarse-grained sites in different components of large biomolecular complexes.

    Science.gov (United States)

    Sinitskiy, Anton V; Saunders, Marissa G; Voth, Gregory A

    2012-07-26

    The computational study of large biomolecular complexes (molecular machines, cytoskeletal filaments, etc.) is a formidable challenge facing computational biophysics and biology. To achieve biologically relevant length and time scales, coarse-grained (CG) models of such complexes usually must be built and employed. One of the important early stages in this approach is to determine an optimal number of CG sites in different constituents of a complex. This work presents a systematic approach to this problem. First, a universal scaling law is derived and numerically corroborated for the intensity of the intrasite (intradomain) thermal fluctuations as a function of the number of CG sites. Second, this result is used for derivation of the criterion for the optimal number of CG sites in different parts of a large multibiomolecule complex. In the zeroth-order approximation, this approach validates the empirical rule of taking one CG site per fixed number of atoms or residues in each biomolecule, previously widely used for smaller systems (e.g., individual biomolecules). The first-order corrections to this rule are derived and numerically checked by the case studies of the Escherichia coli ribosome and Arp2/3 actin filament junction. In different ribosomal proteins, the optimal number of amino acids per CG site is shown to differ by a factor of 3.5, and an even wider spread may exist in other large biomolecular complexes. Therefore, the method proposed in this paper is valuable for the optimal construction of CG models of such complexes.
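
    The abstract above validates, to zeroth order, the empirical rule of one CG site per fixed number of residues, with first-order corrections that shift the optimum between biomolecules. The sketch below implements only that zeroth-order rule for allocating sites across components; the resolution value and component sizes are assumptions, and the paper's corrections are not reproduced.

    # Hedged sketch: zeroth-order allocation of coarse-grained (CG) sites,
    # i.e. one CG site per fixed number of residues in each component.
    # The first-order corrections discussed in the record (which can shift the
    # optimum by a factor of ~3.5 between proteins) are NOT included here.
    import math

    def cg_sites_zeroth_order(n_residues: int, residues_per_site: int = 10) -> int:
        """CG sites for one component under the simple empirical rule.
        'residues_per_site' is an assumed, user-chosen resolution."""
        return max(1, math.ceil(n_residues / residues_per_site))

    # Example with hypothetical component sizes (not from the paper):
    components = {"protein_A": 250, "protein_B": 90, "rRNA_fragment": 1200}
    allocation = {name: cg_sites_zeroth_order(size) for name, size in components.items()}
    # -> {'protein_A': 25, 'protein_B': 9, 'rRNA_fragment': 120}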

  14. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  15. Mirror dark matter and large scale structure

    International Nuclear Information System (INIS)

    Ignatiev, A.Yu.; Volkas, R.R.

    2003-01-01

    Mirror matter is a dark matter candidate. In this paper, we reexamine the linear regime of density perturbation growth in a universe containing mirror dark matter. Taking adiabatic scale-invariant perturbations as the input, we confirm that the resulting processed power spectrum is richer than for the more familiar cases of cold, warm and hot dark matter. The new features include a maximum at a certain scale λ_max, collisional damping below a smaller characteristic scale λ_S', with oscillatory perturbations between the two. These scales are functions of the fundamental parameters of the theory. In particular, they decrease for decreasing x, the ratio of the mirror plasma temperature to that of the ordinary. For x∼0.2, the scale λ_max becomes galactic. Mirror dark matter therefore leads to bottom-up large scale structure formation, similar to conventional cold dark matter, for x ≲ 0.2. Indeed, the smaller the value of x, the closer mirror dark matter resembles standard cold dark matter during the linear regime. The differences pertain to scales smaller than λ_S' in the linear regime, and generally in the nonlinear regime because mirror dark matter is chemically complex and to some extent dissipative. Lyman-α forest data and the early reionization epoch established by WMAP may hold the key to distinguishing mirror dark matter from WIMP-style cold dark matter

  16. Contribution of large scale coherence to wind turbine power: A large eddy simulation study in periodic wind farms

    Science.gov (United States)

    Chatterjee, Tanmoy; Peet, Yulia T.

    2018-03-01

    Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to make a substantial contribution to wind power. Varying dynamics in the intermediate scales (D–10D) are also observed from a parametric study involving interturbine distances and hub height of the turbines. Further insight into the eddies responsible for the power generation has been provided from the scaling analysis of two-dimensional premultiplied spectra of MKE flux. The LES code is developed in a high Reynolds number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines have been modelled using a state-of-the-art actuator line model. The LES of infinite wind farms has been validated against the statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in the domain of large wind farms and identify the length scales that contribute to the power. This information can be useful for the design of wind farm layouts and turbine placement that take advantage of the large-scale structures contributing to wind turbine power.
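
    The analysis above identifies energetic length scales from premultiplied spectra of the MKE flux. As a generic illustration of that diagnostic (not the paper's LES post-processing), the sketch below computes a one-dimensional premultiplied spectrum k·E(k) for a sampled signal and locates its peak wavenumber; the synthetic signal and grid spacing are placeholders.

    # Generic sketch of a premultiplied 1D spectrum, k*E(k), of a spatial signal.
    # Illustration of the diagnostic only; the signal and spacing are synthetic.
    import numpy as np

    def premultiplied_spectrum(signal: np.ndarray, dx: float):
        """Return wavenumbers k and premultiplied spectrum k*E(k) for a 1D signal."""
        n = signal.size
        fluct = signal - signal.mean()               # remove the mean
        fhat = np.fft.rfft(fluct)
        energy = np.abs(fhat) ** 2 / n**2            # simple periodogram estimate
        k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)   # angular wavenumbers
        return k[1:], k[1:] * energy[1:]             # drop the k = 0 mode

    # Synthetic example: energy concentrated at a 'large' 500 m scale plus noise.
    x = np.linspace(0.0, 4096.0, 4096, endpoint=False)             # coordinate [m]
    u = np.sin(2 * np.pi * x / 500.0) + 0.3 * np.random.randn(x.size)
    k, kEk = premultiplied_spectrum(u, dx=x[1] - x[0])
    k_peak = k[np.argmax(kEk)]   # wavenumber where the premultiplied spectrum peaks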

  17. Method of complex scaling

    International Nuclear Information System (INIS)

    Braendas, E.

    1986-01-01

    The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented

  18. Nearly incompressible fluids: Hydrodynamics and large scale inhomogeneity

    International Nuclear Information System (INIS)

    Hunana, P.; Zank, G. P.; Shaikh, D.

    2006-01-01

    A system of hydrodynamic equations in the presence of large-scale inhomogeneities for a high plasma beta solar wind is derived. The theory is derived under the assumption of low turbulent Mach number and is developed for flows where the usual incompressible description is not satisfactory and a full compressible treatment is too complex for any analytical studies. When the effects of compressibility are incorporated only weakly, a new description, referred to as 'nearly incompressible hydrodynamics', is obtained. The nearly incompressible theory was originally applied to homogeneous flows. However, large-scale gradients in density, pressure, temperature, etc., are typical in the solar wind and it was unclear how inhomogeneities would affect the usual incompressible and nearly incompressible descriptions. In the homogeneous case, the lowest order expansion of the fully compressible equations leads to the usual incompressible equations, followed at higher orders by the nearly incompressible equations, as introduced by Zank and Matthaeus. With this work we show that the inclusion of large-scale inhomogeneities (in this case a time-independent and radially symmetric background solar wind) modifies the leading-order incompressible description of solar wind flow. We find, for example, that the divergence of velocity fluctuations is nonsolenoidal and that density fluctuations can be described to leading order as a passive scalar. Locally (for small length scales), this system of equations converges to the usual incompressible equations and we therefore use the term 'locally incompressible' to describe the equations. This term should be distinguished from the term 'nearly incompressible', which is reserved for higher-order corrections. Furthermore, we find that density fluctuations scale with Mach number linearly, in contrast to the original homogeneous nearly incompressible theory, in which density fluctuations scale with the square of Mach number. Inhomogeneous nearly
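
    The contrast in Mach-number scaling stated at the end of the abstract can be summarised in a single expression; the notation below (δρ for density fluctuations, ρ0 for the background density, M_t for the turbulent Mach number) is generic and chosen here for illustration rather than copied from the paper.

    % Scaling of density fluctuations with turbulent Mach number, as stated above.
    \[
      \frac{\delta\rho}{\rho_{0}} \sim
      \begin{cases}
        \mathcal{O}\!\left(M_{t}^{2}\right), & \text{homogeneous nearly incompressible theory},\\[4pt]
        \mathcal{O}\!\left(M_{t}\right), & \text{with large-scale inhomogeneity (this record)}.
      \end{cases}
    \]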

  19. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes an evaluation method for the faultless functioning of large-scale integration (LSI) and very-large-scale integration (VLSI) circuits. The article presents a comparative analysis of the factors which determine the faultlessness of integrated circuits, an analysis of already existing methods, and a model for evaluating the faultless functioning of LSI and VLSI. The main part describes a proposed algorithm and program for the analysis of fault rates in LSI and VLSI circuits.

  20. Evidence of antibacterial activity on titanium surfaces through nanotextures

    Energy Technology Data Exchange (ETDEWEB)

    Seddiki, O.; Harnagea, C. [INRS – Centre Énergie, Matériaux et Télécommunications, Boulevard Lionel-Boulet, Varennes, Québec J3X 1S2 (Canada); Levesque, L.; Mantovani, D. [Laboratory for Biomaterials and Bioengineering (CRC-I), Dept Min-Met-Materials Engineering and Research Center CHU-Quebec, Laval University, Quebec City (Canada); Rosei, F., E-mail: rosei@emt.inrs.ca [INRS – Centre Énergie, Matériaux et Télécommunications, Boulevard Lionel-Boulet, Varennes, Québec J3X 1S2 (Canada); Center for Self-Assembled Chemical Structures, McGill University, H3A 2K6 Montreal, Quebec (Canada)

    2014-07-01

    Nosocomial infections (NIs) are a major concern for public health. As more and more of the pathogens responsible for these infections are antibiotic resistant, finding new ways to overcome them is a major challenge for biomedical research. We present a method to reduce NI spreading by hindering bacterial adhesion in its very early stage. This is achieved by reducing the contact interface area between the bacterium and the surface by nanoengineering the surface topography. In particular, we studied Escherichia coli adhesion on titanium surfaces exhibiting different morphologies, which were obtained by a combination of mechanical polishing and chemical etching. Scanning Electron Microscopy (SEM) and Atomic Force Microscopy (AFM) characterization revealed that the titanium surface is modified at both micro- and nano-scale. X-ray Photoelectron Spectroscopy (XPS) revealed that the surfaces have the same composition before and after piranha treatment, consisting mainly of TiO{sub 2}. Adhesion tests showed a significant reduction in bacterial accumulation on nanostructured surfaces that had the lowest roughness over large areas. SEM images acquired after bacterial culture on different titanium substrates confirmed that the polished titanium surface treated for one hour in a piranha solution at a temperature of 25 °C has the lowest bacterial accumulation among all the surfaces tested. This suggests that the difference observed in bacterial adhesion between the different surfaces is due primarily to surface topography.

  1. Evidence of antibacterial activity on titanium surfaces through nanotextures

    International Nuclear Information System (INIS)

    Seddiki, O.; Harnagea, C.; Levesque, L.; Mantovani, D.; Rosei, F.

    2014-01-01

    Nosocomial infections (NIs) are a major concern for public health. As more and more of the pathogens responsible for these infections are antibiotic resistant, finding new ways to overcome them is a major challenge for biomedical research. We present a method to reduce NI spreading by hindering bacterial adhesion in its very early stage. This is achieved by reducing the contact interface area between the bacterium and the surface by nanoengineering the surface topography. In particular, we studied Escherichia coli adhesion on titanium surfaces exhibiting different morphologies, which were obtained by a combination of mechanical polishing and chemical etching. Scanning Electron Microscopy (SEM) and Atomic Force Microscopy (AFM) characterization revealed that the titanium surface is modified at both micro- and nano-scale. X-ray Photoelectron Spectroscopy (XPS) revealed that the surfaces have the same composition before and after piranha treatment, consisting mainly of TiO2. Adhesion tests showed a significant reduction in bacterial accumulation on nanostructured surfaces that had the lowest roughness over large areas. SEM images acquired after bacterial culture on different titanium substrates confirmed that the polished titanium surface treated for one hour in a piranha solution at a temperature of 25 °C has the lowest bacterial accumulation among all the surfaces tested. This suggests that the difference observed in bacterial adhesion between the different surfaces is due primarily to surface topography.

  2. Opportunities in the electrowinning of molten titanium from titanium dioxide

    CSIR Research Space (South Africa)

    Van Vuuren, DS

    2005-10-01

    Full Text Available used, the following forms of titanium are produced: titanium sponge, sintered electrode sponge, powder, molten titanium, electroplated titanium, hydride powder, and vapor-phase deposited titanium. Comparing the economics of alternative...-up for producing titanium via the Kroll process is approximately as follows: ilmenite ($0.27/kg titanium sponge); titanium slag ($0.75/kg titanium sponge); TiCl4 ($3.09/kg titanium sponge); titanium sponge raw materials costs ($5.50/kg titanium sponge); total...

  3. Trial manufacturing of titanium-carbon steel composite overpack

    International Nuclear Information System (INIS)

    Honma, Nobuyuki; Chiba, Takahiko; Tanai, Kenji

    1999-11-01

    This paper reports the results of design analysis and trial manufacturing of full-scale titanium-carbon steel composite overpacks. The overpack is one of the key components of the engineered barrier system, hence it is necessary to confirm the applicability of current techniques in their manufacture. The required thickness was calculated according to mechanical resistance analysis, based on models used in current nuclear facilities. The adequacy of the calculated dimensions was confirmed by finite-element methods. To investigate the necessity of a radiation shielding function of the overpack, the irradiation from vitrified waste has been calculated. As a result, it was shown that shielding on handling and transport equipment is a more reasonable and practical approach than increasing the thickness of the overpack to attain a self-shielding capability. After the above investigation, trial manufacturing of a full-scale model of the titanium-carbon steel composite overpack has been carried out. For the corrosion-resistant material, ASTM Grade-2 titanium was selected. The titanium layer was bonded individually to a cylindrical shell and flat cover plates (top and bottom) made of carbon steel. For the cylindrical shell portion, a cylindrically formed titanium layer was fitted to the inner carbon steel vessel by shrinkage. For the flat cover plates (top and bottom), titanium plate material was coated by explosive bonding. Electron beam welding and gas metal arc welding were combined to weld the cover plates to the body. No significant failure was evident from inspections of the fabrication process, and the applicability of current technology for manufacturing titanium-carbon steel composite overpacks was confirmed. Future research and development items regarding titanium-carbon steel composite overpacks are also discussed. (author)

  4. Environment and host as large-scale controls of ectomycorrhizal fungi.

    Science.gov (United States)

    van der Linde, Sietse; Suz, Laura M; Orme, C David L; Cox, Filipa; Andreae, Henning; Asi, Endla; Atkinson, Bonnie; Benham, Sue; Carroll, Christopher; Cools, Nathalie; De Vos, Bruno; Dietrich, Hans-Peter; Eichhorn, Johannes; Gehrmann, Joachim; Grebenc, Tine; Gweon, Hyun S; Hansen, Karin; Jacob, Frank; Kristöfel, Ferdinand; Lech, Paweł; Manninger, Miklós; Martin, Jan; Meesenburg, Henning; Merilä, Päivi; Nicolas, Manuel; Pavlenda, Pavel; Rautio, Pasi; Schaub, Marcus; Schröck, Hans-Werner; Seidling, Walter; Šrámek, Vít; Thimonier, Anne; Thomsen, Iben Margrete; Titeux, Hugues; Vanguelova, Elena; Verstraeten, Arne; Vesterdal, Lars; Waldner, Peter; Wijk, Sture; Zhang, Yuxin; Žlindra, Daniel; Bidartondo, Martin I

    2018-06-06

    Explaining the large-scale diversity of soil organisms that drive biogeochemical processes, and their responses to environmental change, is critical. However, identifying consistent drivers of belowground diversity and abundance for some soil organisms at large spatial scales remains problematic. Here we investigate a major guild, the ectomycorrhizal fungi, across European forests at a spatial scale and resolution that is, to our knowledge, unprecedented, to explore key biotic and abiotic predictors of ectomycorrhizal diversity and to identify dominant responses and thresholds for change across complex environmental gradients. We show the effect of 38 host, environment, climate and geographical variables on ectomycorrhizal diversity, and define thresholds of community change for key variables. We quantify host specificity and reveal plasticity in functional traits involved in soil foraging across gradients. We conclude that environmental and host factors explain most of the variation in ectomycorrhizal diversity, that the environmental thresholds used as major ecosystem assessment tools need adjustment and that the importance of belowground specificity and plasticity has previously been underappreciated.

  5. Phylogenetic distribution of large-scale genome patchiness

    Directory of Open Access Journals (Sweden)

    Hackenberg Michael

    2008-04-01

    Full Text Available Abstract Background The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness) has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to directly analyze patchiness on the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results The local variations in the scaling exponent of the Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments of selected genome regions, computationally-identified, isochore-like regions were identified as the biological source for the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies from eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris), birds (Gallus gallus), fishes (Danio rerio), invertebrates (Drosophila melanogaster and Caenorhabditis elegans), plants (Arabidopsis thaliana) and yeasts (Saccharomyces cerevisiae). We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
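
    The Results section above relies on local variations of the Detrended Fluctuation Analysis (DFA) scaling exponent to detect compositional patchiness. The sketch below is a textbook DFA-1 implementation applied to a synthetic binary G+C track, included only to illustrate how such a scaling exponent is obtained; it is not the authors' pipeline, and the window sizes and sequence are assumptions.

    # Generic sketch of Detrended Fluctuation Analysis (DFA-1) for a numeric track,
    # e.g. a 0/1 G+C indicator along a chromosome. Not the authors' implementation.
    import numpy as np

    def dfa_exponent(series, scales=None) -> float:
        """Return the DFA-1 scaling exponent (alpha) of a 1D series."""
        x = np.asarray(series, dtype=float)
        profile = np.cumsum(x - x.mean())            # integrated, mean-removed profile
        n = profile.size
        if scales is None:
            scales = np.unique(np.logspace(np.log10(16), np.log10(n // 4), 12).astype(int))
        flucts = []
        for s in scales:
            n_win = n // s
            segs = profile[: n_win * s].reshape(n_win, s)
            t = np.arange(s)
            msq = []
            for seg in segs:                         # linear detrend in each window
                coeffs = np.polyfit(t, seg, 1)
                msq.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
            flucts.append(np.sqrt(np.mean(msq)))
        alpha = np.polyfit(np.log(scales), np.log(flucts), 1)[0]
        return alpha

    # Example on a synthetic, uncorrelated G+C track (expected alpha close to 0.5):
    rng = np.random.default_rng(0)
    gc_track = (rng.random(20000) < 0.41).astype(float)
    alpha = dfa_exponent(gc_track)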

  6. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases

  7. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with a large number of high-capacity nodes and transmission links, shared by a large number of users...

  8. Evolutionary Hierarchical Multi-Criteria Metaheuristics for Scheduling in Large-Scale Grid Systems

    CERN Document Server

    Kołodziej, Joanna

    2012-01-01

    One of the most challenging issues in modelling today's large-scale computational systems is to effectively manage highly parametrised distributed environments such as computational grids, clouds, ad hoc networks and P2P networks. Next-generation computational grids must provide a wide range of services and high performance computing infrastructures. Various types of information and data processed in the large-scale dynamic grid environment may be incomplete, imprecise, and fragmented, which complicates the specification of proper evaluation criteria and which affects both the availability of resources and the final collective decisions of users. The complexity of grid architectures and grid management may also contribute towards higher energy consumption. All of these issues necessitate the development of intelligent resource management techniques, which are capable of capturing all of this complexity and optimising meaningful metrics for a wide range of grid applications.   This book covers hot topics in t...

  9. The method of measurement and synchronization control for large-scale complex loading system

    International Nuclear Information System (INIS)

    Liao Min; Li Pengyuan; Hou Binglin; Chi Chengfang; Zhang Bo

    2012-01-01

    With the development of modern industrial technology, measurement and control systems have come to be widely used in high-precision, complex industrial control equipment and large-tonnage loading devices. A measurement and control system is often used to analyze the distribution of stress and displacement under complex bearing loads or in the complex mechanical structure itself. In the ITER GS mock-up with 5 flexible plates, for each load combination it is necessary to detect and measure potential slippage between the central flexible plate and the neighboring spacers, as well as the potential slippage between each pre-stressing bar and its neighboring plate. The measurement and control system consists of seven sets of EDC controllers and boards, a computer system, a 16-channel quasi-dynamic strain gauge, 25 sets of displacement sensors, and 7 sets of load and displacement sensors in the cylinders. This paper demonstrates the principles and methods by which the EDC220 digital controller achieves synchronization control, and the R and D process of the multi-channel loading control software and measurement software. (authors)

  10. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage. The manager carries a heavy workload, and much time must be spent on the management and maintenance of the system. The nodes of a large-scale cluster easily fall into disorder: thousands of nodes are placed in large rooms, and managers can easily confuse the machines. How can accurate management of a large-scale cluster system be carried out effectively? This article introduces ELFms for the large-scale cluster system and, furthermore, proposes to realize automatic management of the large-scale cluster system. (authors)

  11. Assembly and control of large microtubule complexes

    Science.gov (United States)

    Korolev, Kirill; Ishihara, Keisuke; Mitchison, Timothy

    Motility, division, and other cellular processes require rapid assembly and disassembly of microtubule structures. We report a new mechanism for the formation of asters, radial microtubule complexes found in very large cells. The standard model of aster growth assumes elongation of a fixed number of microtubules originating from the centrosomes. However, aster morphology in this model does not scale with cell size, and we found evidence for microtubule nucleation away from centrosomes. By combining polymerization dynamics and auto-catalytic nucleation of microtubules, we developed a new biophysical model of aster growth. The model predicts an explosive transition from an aster with a steady-state radius to one that expands as a travelling wave. At the transition, microtubule density increases continuously, but aster growth rate discontinuously jumps to a nonzero value. We tested our model with biochemical perturbations in egg extract and confirmed main theoretical predictions including the jump in the growth rate. Our results show that asters can grow even though individual microtubules are short and unstable. The dynamic balance between microtubule collapse and nucleation could be a general framework for the assembly and control of large microtubule complexes. NIH GM39565; Simons Foundation 409704; Honjo International 486 Scholarship Foundation.

  12. Etoile Project : Social Intelligent ICT-System for very large scale education in complex systems

    Science.gov (United States)

    Bourgine, P.; Johnson, J.

    2009-04-01

    The project will devise new theory and implement new ICT-based methods of delivering high-quality low-cost postgraduate education to many thousands of people in a scalable way, with the cost of each extra student being negligible (a Socially Intelligent Resource Mining system to gather large volumes of high quality educational resources from the internet; new methods to deconstruct these to produce a semantically tagged Learning Object Database; a Living Course Ecology to support the creation and maintenance of evolving course materials; systems to deliver courses; and a ‘socially intelligent assessment system'). The system will be tested on one to ten thousand postgraduate students in Europe working towards the Complex System Society's title of European PhD in Complex Systems. Étoile will have a very high impact both scientifically and socially by (i) the provision of new scalable ICT-based methods for providing very low cost scientific education, (ii) the creation of new mathematical and statistical theory for the multiscale dynamics of complex systems, (iii) the provision of a working example of adaptation and emergence in complex socio-technical systems, and (iv) making a major educational contribution to European complex systems science and its applications.

  13. Sensitivity of local air quality to the interplay between small- and large-scale circulations: a large-eddy simulation study

    Science.gov (United States)

    Wolf-Grosse, Tobias; Esau, Igor; Reuder, Joachim

    2017-06-01

    Street-level urban air pollution is a challenging concern for modern urban societies. Pollution dispersion models assume that the concentrations decrease monotonically with increasing wind speed. This convenient assumption breaks down when applied to flows with local recirculations such as those found in topographically complex coastal areas. This study looks at a practically important and sufficiently common case of air pollution in a coastal valley city. Here, the observed concentrations are determined by the interaction between large-scale topographically forced and local-scale breeze-like recirculations. Analysis of a long observational dataset in Bergen, Norway, revealed that the most extreme cases of recurring wintertime air pollution episodes were accompanied by increased large-scale wind speeds above the valley. Contrary to the theoretical assumption and intuitive expectations, the maximum NO2 concentrations were not found for the lowest 10 m ERA-Interim wind speeds but in situations with wind speeds of 3 m s-1. To explain this phenomenon, we investigated empirical relationships between the large-scale forcing and the local wind and air quality parameters. We conducted 16 large-eddy simulation (LES) experiments with the Parallelised Large-Eddy Simulation Model (PALM) for atmospheric and oceanic flows. The LES accounted for the realistic relief and coastal configuration as well as for the large-scale forcing and local surface condition heterogeneity in Bergen. They revealed that emerging local breeze-like circulations strongly enhance the urban ventilation and dispersion of the air pollutants in situations with weak large-scale winds. Slightly stronger large-scale winds, however, can counteract these local recirculations, leading to enhanced surface air stagnation. Furthermore, this study looks at the concrete impact of the relative configuration of warmer water bodies in the city and the major transport corridor. We found that a relatively small local water

  14. Sensitivity of local air quality to the interplay between small- and large-scale circulations: a large-eddy simulation study

    Directory of Open Access Journals (Sweden)

    T. Wolf-Grosse

    2017-06-01

    Full Text Available Street-level urban air pollution is a challenging concern for modern urban societies. Pollution dispersion models assume that the concentrations decrease monotonically with increasing wind speed. This convenient assumption breaks down when applied to flows with local recirculations such as those found in topographically complex coastal areas. This study looks at a practically important and sufficiently common case of air pollution in a coastal valley city. Here, the observed concentrations are determined by the interaction between large-scale topographically forced and local-scale breeze-like recirculations. Analysis of a long observational dataset in Bergen, Norway, revealed that the most extreme cases of recurring wintertime air pollution episodes were accompanied by increased large-scale wind speeds above the valley. Contrary to the theoretical assumption and intuitive expectations, the maximum NO2 concentrations were not found for the lowest 10 m ERA-Interim wind speeds but in situations with wind speeds of 3 m s−1. To explain this phenomenon, we investigated empirical relationships between the large-scale forcing and the local wind and air quality parameters. We conducted 16 large-eddy simulation (LES) experiments with the Parallelised Large-Eddy Simulation Model (PALM) for atmospheric and oceanic flows. The LES accounted for the realistic relief and coastal configuration as well as for the large-scale forcing and local surface condition heterogeneity in Bergen. They revealed that emerging local breeze-like circulations strongly enhance the urban ventilation and dispersion of the air pollutants in situations with weak large-scale winds. Slightly stronger large-scale winds, however, can counteract these local recirculations, leading to enhanced surface air stagnation. Furthermore, this study looks at the concrete impact of the relative configuration of warmer water bodies in the city and the major transport corridor. We found that a

  15. Electrochemical deposition of carbon films on titanium in molten LiCl–KCl–K2CO3

    International Nuclear Information System (INIS)

    Song, Qiushi; Xu, Qian; Wang, Yang; Shang, Xujing; Li, Zaiyuan

    2012-01-01

    Electrodeposition of carbon films on oxide-scale-coated titanium has been performed in a LiCl–KCl–K2CO3 melt; the films are characterized by scanning electron microscopy, Raman spectroscopy and X-ray diffraction analysis. The electrochemical process of carbon deposition is investigated by cyclic voltammetry on the graphite, titanium and oxide-scale-coated titanium electrodes. The particle-size-gradient carbon films over the oxide-scale-coated titanium can be achieved by electrodeposition under controlled potentials to avoid codeposition of lithium carbide. The deposited carbon films are composed of micron-sized ‘quasi-spherical’ carbon particles with graphitized and amorphous phases. The cyclic voltammetry behavior on the graphite, titanium and oxide-scale-coated titanium electrodes shows that CO32− ions are reduced most favorably on graphite among the three electrodes. Lithium ions can discharge at a less negative potential on the electrode containing carbon compared with the titanium electrode because of the formation of lithium carbide from the reaction between lithium and carbon. - Highlights: ► Carbon films are prepared on oxide-scale-coated titanium in a LiCl–KCl–K2CO3 melt. ► The films comprise micron-size ‘quasi-spherical’ carbon particles. ► The films present a particle-size gradient. ► The particles contain graphitized and amorphous phases. ► The prepared carbon films are more electrochemically active than graphite.

  16. Building Participation in Large-scale Conservation: Lessons from Belize and Panama

    Directory of Open Access Journals (Sweden)

    Jesse Guite Hastings

    2015-01-01

    Full Text Available Motivated by biogeography and a desire for alignment with the funding priorities of donors, the twenty-first century has seen big international NGOs shifting towards a large-scale conservation approach. This shift has meant that even before stakeholders at the national and local scale are involved, conservation programmes often have their objectives defined and funding allocated. This paper uses the experiences of Conservation International's Marine Management Area Science (MMAS) programme in Belize and Panama to explore how to build participation at the national and local scale while working within the bounds of the current conservation paradigm. Qualitative data about MMAS was gathered through a multi-sited ethnographic research process, utilising document review, direct observation, and semi-structured interviews with 82 informants in Belize, Panama, and the United States of America. Results indicate that while a large-scale approach to conservation disadvantages early national and local stakeholder participation, this effect can be mediated through focusing engagement efforts, paying attention to context, building horizontal and vertical partnerships, and using deliberative processes that promote learning. While explicit consideration of geopolitics and local complexity alongside biogeography in the planning phase of a large-scale conservation programme is ideal, actions taken by programme managers during implementation can still have a substantial impact on conservation outcomes.

  17. Titanium by design: TRIP titanium alloy

    Science.gov (United States)

    Tran, Jamie

    Motivated by the prospect of lower cost Ti production processes, new directions in Ti alloy design were explored for naval and automotive applications. Building on the experience of the Steel Research Group at Northwestern University, an analogous design process was taken with titanium. As a new project, essential kinetic databases and models were developed for the design process and used to create a prototype design. Diffusion kinetic models were developed to predict the change in phase compositions and microstructure during heat treatment. Combining a mobility database created in this research with a licensed thermodynamic database, ThermoCalc and DICTRA software was used to model kinetic compositional changes in titanium alloys. Experimental diffusion couples were created and compared to DICTRA simulations to refine mobility parameters in the titanium mobility database. The software and database were able to predict homogenization times and the beta→alpha plate thickening kinetics during cooling in the near-alpha Ti5111 alloy. The results of these models were compared to LEAP microanalysis and found to be in reasonable agreement. Powder metallurgy was explored using SPS at GM R&D to reduce the cost of titanium alloys. Fully dense Ti5111 alloys were produced and achieved similar microstructures to wrought Ti5111. High levels of oxygen in these alloys increased the strength while reducing the ductility. Preliminary Ti5111+Y alloys were created, where yttrium additions successfully gettered excess oxygen to create oxides. However, undesirable large oxides formed, indicating more research is needed into the homogeneous distribution of the yttrium powder to create finer oxides. Principles established in steels were used to optimize the beta phase transformation stability for martensite transformation toughening in titanium alloys. The Olson-Cohen kinetic model is calibrated to shear strains in titanium. A frictional work database is established for common alloying
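
    The diffusion-couple comparison described above can be illustrated, in greatly simplified form, by a one-dimensional finite-difference solution of Fick's second law with a composition-dependent interdiffusion coefficient; profiles computed this way can be compared with measured couples to adjust mobility parameters. This is not the Thermo-Calc/DICTRA workflow used in the work, and all numbers below are illustrative assumptions.

```python
import numpy as np

# 1-D binary diffusion couple: alloy A (high solute) joined to alloy B (low solute).
# Fick's second law, explicit finite differences, composition-dependent D(c).
nx, L = 200, 200e-6                 # grid points, couple length (m)
dx = L / nx
c = np.where(np.arange(nx) < nx // 2, 0.10, 0.02)   # initial step profile (mole fraction)

def D(c):
    # hypothetical composition-dependent interdiffusion coefficient (m^2/s)
    return 1e-14 * np.exp(5.0 * c)

t_end = 3600.0 * 10                 # 10 h anneal
t = 0.0
while t < t_end:
    Dmid = 0.5 * (D(c[1:]) + D(c[:-1]))             # D at cell interfaces
    dt = 0.4 * dx ** 2 / Dmid.max()                 # explicit stability limit
    flux = -Dmid * np.diff(c) / dx                  # J = -D dc/dx
    c[1:-1] -= dt / dx * np.diff(flux)              # dc/dt = -dJ/dx; end compositions held fixed
    t += dt

print(c[nx // 2 - 3: nx // 2 + 3])                  # profile around the original interface
```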

  18. Multidimensional scaling for large genomic data sets

    Directory of Open Access Journals (Sweden)

    Lu Henry

    2008-04-01

    Full Text Available Abstract Background Multi-dimensional scaling (MDS) aims to represent high dimensional data in a low dimensional space with preservation of the similarities between data points. This reduction in dimensionality is crucial for analyzing and revealing the genuine structure hidden in the data. For noisy data, dimension reduction can effectively reduce the effect of noise on the embedded structure. For large data sets, dimension reduction can effectively reduce information retrieval complexity. Thus, MDS techniques are used in many applications of data mining and gene network research. However, although there have been a number of studies that applied MDS techniques to genomics research, the number of analyzed data points was restricted by the high computational complexity of MDS. In general, a non-metric MDS method is faster than a metric MDS, but it does not preserve the true relationships. The computational complexity of most metric MDS methods is over O(N²), so that it is difficult to process a data set of a large number of genes N, such as in the case of whole genome microarray data. Results We developed a new rapid metric MDS method with a low computational complexity, making metric MDS applicable for large data sets. Computer simulation showed that the new method of split-and-combine MDS (SC-MDS) is fast, accurate and efficient. Our empirical studies using microarray data on the yeast cell cycle showed that the performance of K-means in the reduced dimensional space is similar to or slightly better than that of K-means in the original space, but about three times faster to obtain the clustering results. Our clustering results using SC-MDS are more stable than those in the original space. Hence, the proposed SC-MDS is useful for analyzing whole genome data. Conclusion Our new method reduces the computational complexity from O(N³) to O(N) when the dimension of the feature space is far less than the number of genes N, and it successfully
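
    As an illustration of the split-and-combine idea described above, the sketch below embeds overlapping chunks with classical (Torgerson) MDS and stitches them together with an affine fit on the shared points. The chunking strategy, function names and data are assumptions for illustration, not the authors' SC-MDS implementation.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS from a pairwise-distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]              # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def align(anchor_ref, anchor_new, Y_new):
    """Affine map fitted on shared anchor points (least squares), applied to Y_new."""
    A = np.hstack([anchor_new, np.ones((anchor_new.shape[0], 1))])
    T, *_ = np.linalg.lstsq(A, anchor_ref, rcond=None)
    return np.hstack([Y_new, np.ones((Y_new.shape[0], 1))]) @ T

def sc_mds(X, dim=2, chunk=500, overlap=50):
    """Split-and-combine MDS: embed overlapping chunks, then stitch them together."""
    n = X.shape[0]
    Y = np.zeros((n, dim))
    start, prev_idx = 0, None
    while start < n:
        idx = np.arange(max(0, start - overlap), min(n, start + chunk))
        D = np.linalg.norm(X[idx, None, :] - X[None, idx, :], axis=-1)
        Y_chunk = classical_mds(D, dim)
        if prev_idx is None:
            Y[idx] = Y_chunk
        else:
            shared = np.intersect1d(prev_idx, idx)           # anchor points in both chunks
            ref = Y[shared]
            new = Y_chunk[np.searchsorted(idx, shared)]
            Y[idx] = align(ref, new, Y_chunk)
        prev_idx = idx
        start += chunk
    return Y

# toy usage: 2000 points in 10 dimensions, embedded into 2 dimensions
X = np.random.default_rng(0).normal(size=(2000, 10))
print(sc_mds(X, dim=2).shape)
```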

  19. Multi-level discriminative dictionary learning with application to large scale image classification.

    Science.gov (United States)

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of task (such as discrimination for classification task) into dictionary learning is effective for improving the accuracy. However, the traditional supervised dictionary learning methods suffer from high computation complexity when dealing with large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.
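
    A minimal, decoupled sketch of the hierarchical idea above: one dictionary and one classifier per node of a toy two-level label hierarchy, with child nodes inheriting the parent's codes as a multi-scale description. The paper's joint tree-loss optimization is not reproduced; the hierarchy, data and scikit-learn-based plain sparse coding are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# toy data: 4 fine classes grouped into 2 superclasses (hypothetical hierarchy)
X = rng.normal(size=(400, 64))
y_fine = np.repeat(np.arange(4), 100)
y_super = y_fine // 2                      # superclass = parent node in the hierarchy

def learn_dict(data, n_atoms=16):
    # one discriminative-dictionary stand-in per node (plain sparse coding here)
    return DictionaryLearning(n_components=n_atoms, transform_algorithm='omp',
                              transform_n_nonzero_coefs=5, max_iter=20,
                              random_state=0).fit(data)

root_dict = learn_dict(X)                                       # root node: all samples
child_dicts = {s: learn_dict(X[y_super == s]) for s in (0, 1)}  # one per superclass

# root-level classifier decides the superclass from root-dictionary codes
root_codes = root_dict.transform(X)
root_clf = LogisticRegression(max_iter=1000).fit(root_codes, y_super)

# each child node inherits the parent's codes and appends its own,
# then classifies the fine label among its children
child_clfs = {}
for s in (0, 1):
    mask = y_super == s
    codes = np.hstack([root_codes[mask], child_dicts[s].transform(X[mask])])
    child_clfs[s] = LogisticRegression(max_iter=1000).fit(codes, y_fine[mask])

def predict(x):
    x = x.reshape(1, -1)
    rc = root_dict.transform(x)
    s = int(root_clf.predict(rc)[0])                            # walk down the hierarchy
    codes = np.hstack([rc, child_dicts[s].transform(x)])
    return child_clfs[s].predict(codes)[0]

print(predict(X[0]), y_fine[0])
```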

  20. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  1. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    \\catcode`\\@=11 \\ialign{m @th#1hfil ##hfil \\crcr#2\\crcr\\sim\\crcr}}} \\catcode`\\@=12 Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (>~1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in >~{{1} /{4}} of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  2. Networks and landscapes: a framework for setting goals and evaluating performance at the large landscape scale

    Science.gov (United States)

    R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove

    2016-01-01

    The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...

  3. Lubrication for hot working of titanium alloys

    International Nuclear Information System (INIS)

    Gotlib, B.M.

    1980-01-01

    An isothermal lubricant of the following composition is suggested, wt.%: aluminium powder 4-6, iron scale 15-25, vitreous enamel up to 100. The lubricant improves forming and decreases the danger of metal fracture when working titanium alloys. It is advisable to use the suggested lubricant when stamping thin-walled products of titanium alloys at blank temperatures from 700 to 1000 deg C [ru

  4. An Insoluble Titanium-Lead Anode for Sulfate Electrolytes

    Energy Technology Data Exchange (ETDEWEB)

    Ferdman, Alla

    2005-05-11

    The project is devoted to the development of novel insoluble anodes for copper electrowinning and electrolytic manganese dioxide (EMD) production. The anodes are made of titanium-lead composite material produced by techniques of powder metallurgy, compaction of titanium powder, sintering and subsequent lead infiltration. The titanium-lead anode combines beneficial electrochemical behavior of a lead anode with high mechanical properties and corrosion resistance of a titanium anode. In the titanium-lead anode, the titanium stabilizes the lead, preventing it from spalling, and the lead sheathes the titanium, protecting it from passivation. Interconnections between manufacturing process, structure, composition and properties of the titanium-lead composite material were investigated. The material containing 20-30 vol.% of lead had optimal combination of mechanical and electrochemical properties. Optimal process parameters to manufacture the anodes were identified. Prototypes having optimized composition and structure were produced for testing in operating conditions of copper electrowinning and EMD production. Bench-scale, mini-pilot scale and pilot scale tests were performed. The test anodes were of both a plate design and a flow-through cylindrical design. The cylindrical anodes were composed of cylinders containing titanium inner rods and fitting over titanium-lead bushings. The cylindrical design allows the electrolyte to flow through the anode, which enhances diffusion of the electrolyte reactants. The cylindrical anodes demonstrate higher mass transport capabilities and increased electrical efficiency compared to the plate anodes. Copper electrowinning represents the primary target market for the titanium-lead anode. A full-size cylindrical anode performance in copper electrowinning conditions was monitored over a year. The test anode to cathode voltage was stable in the 1.8 to 2.0 volt range. Copper cathode morphology was very smooth and uniform. There was no

  5. The Proposal of Scaling the Roles in Scrum of Scrums for Distributed Large Projects

    OpenAIRE

    Abeer M. AlMutairi; M. Rizwan Jameel Qureshi

    2015-01-01

    Scrum of Scrums is an approach used to scale the traditional Scrum methodology to fit the development of complex and large projects. However, scaling the roles of Scrum members brings new challenges, especially in distributed and large software projects. This paper describes in detail the roles of each Scrum member in Scrum of Scrums and proposes a solution that uses a dedicated product owner for each team and includes a sub-backlog. The main goal of the proposed solution i...

  6. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele

    2015-08-23

    The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales already observed in other works depends on the crosswise location. Large-scale positive fluctuations correlate with a stronger activity of the small scales on the low-speed side of the mixing layer, and a reduced activity on the high-speed side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has been additionally investigated.

  7. Collisionless magnetic reconnection in large-scale electron-positron plasmas

    International Nuclear Information System (INIS)

    Daughton, William; Karimabadi, Homa

    2007-01-01

    One of the most fundamental questions in reconnection physics is how the dynamical evolution will scale to macroscopic systems of physical relevance. This issue is examined for electron-positron plasmas using two-dimensional fully kinetic simulations with both open and periodic boundary conditions. The resulting evolution is complex and highly dynamic throughout the entire duration. The initial phase is distinguished by the coalescence of tearing islands to larger scale while the later phase is marked by the expansion of diffusion regions into elongated current layers that are intrinsically unstable to plasmoid generation. It appears that the repeated formation and ejection of plasmoids plays a key role in controlling the average structure of a diffusion region and preventing the further elongation of the layer. The reconnection rate is modulated in time as the current layers expand and new plasmoids are formed. Although the specific details of this evolution are affected by the boundary and initial conditions, the time averaged reconnection rate remains fast and is remarkably insensitive to the system size for sufficiently large systems. This dynamic scenario offers an alternative explanation for fast reconnection in large-scale systems

  8. LARGE-SCALE CO MAPS OF THE LUPUS MOLECULAR CLOUD COMPLEX

    International Nuclear Information System (INIS)

    Tothill, N. F. H.; Loehr, A.; Stark, A. A.; Lane, A. P.; Harnett, J. I.; Bourke, T. L.; Myers, P. C.; Parshley, S. C.; Wright, G. A.; Walker, C. K.

    2009-01-01

    Fully sampled degree-scale maps of the 13CO 2-1 and CO 4-3 transitions toward three members of the Lupus Molecular Cloud Complex (Lupus I, III, and IV) trace the column density and temperature of the molecular gas. Comparison with IR extinction maps from the c2d project requires most of the gas to have a temperature of 8-10 K. Estimates of the cloud mass from 13CO emission are roughly consistent with most previous estimates, while the line widths are higher, around 2 km s−1. CO 4-3 emission is found throughout Lupus I, indicating widespread dense gas, and toward Lupus III and IV. Enhanced line widths at the NW end and along the edge of the B 228 ridge in Lupus I, and a coherent velocity gradient across the ridge, are consistent with interaction between the molecular cloud and an expanding H I shell from the Upper-Scorpius subgroup of the Sco-Cen OB Association. Lupus III is dominated by the effects of two HAe/Be stars, and shows no sign of external influence. Slightly warmer gas around the core of Lupus IV and a low line width suggest heating by the Upper-Centaurus-Lupus subgroup of Sco-Cen, without the effects of an H I shell.

  9. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom

    2017-11-22

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.

  10. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom; Femiani, John; Wonka, Peter; Mitra, Niloy J.

    2017-01-01

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.
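
    The abstract above describes a binary integer program that globally balances error terms across data sources. The toy sketch below is not the paper's formulation; it only illustrates the mechanics of such a program (choose one candidate reconstruction per building footprint at minimum total error) using SciPy's MILP solver (SciPy >= 1.9) with made-up cost values.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# error[i, j]: disagreement between footprint i and candidate mesh/facade j (made-up numbers)
error = np.array([[2.0, 1.0, 3.0],
                  [1.5, 2.5, 0.5],
                  [0.7, 1.2, 1.1]])
n_fp, n_cand = error.shape
c = error.ravel()                       # objective: total error of the chosen assignment

# each footprint must select exactly one candidate
A = np.zeros((n_fp, n_fp * n_cand))
for i in range(n_fp):
    A[i, i * n_cand:(i + 1) * n_cand] = 1.0
constraints = LinearConstraint(A, lb=1, ub=1)

res = milp(c=c, constraints=constraints,
           integrality=np.ones_like(c),   # all variables binary
           bounds=Bounds(0, 1))
assignment = res.x.reshape(n_fp, n_cand).round().astype(int)
print(assignment)                         # one selected candidate per row
```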

  11. Titanium condenser tubes--problems and their solutions for wider application to large surface condensers

    Energy Technology Data Exchange (ETDEWEB)

    Sato, S; Sugiyama, Y; Nagata, K; Namba, K; Shimono, M

    1978-01-01

    To meet the demand for high reliability condensers for thermal and nuclear power plants, especially for PWR plants, condensers fitted entirely with titanium tubes have been investigated and used. Titanium tubes present some difficulties not encountered with conventional copper alloy tubes. Further investigations are necessary on three items: (1) tube vibration; (2) joining tubes to the tube plate; (3) fouling (bio-fouling) control. A literature survey on tube vibration suggests that the probability of tube vibration due to the decreased stiffness of titanium tubes in comparison with conventional copper alloy tubes can be reduced by designing the proper span length between supports. Experiments on seal welding of tubes to a tube plate have successfully proved that pulsed TIG arc welding is applicable to obtain reliable and strong joints, even on site, by suitable countermeasures. Experiments on the fouling (bio-fouling) of titanium tubes in marine applications reveal that the increased fouling of titanium tubes can be controlled by proper application of sponge ball cleaning.

  12. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is the observed phenomenon that galaxies located in the same region have similar properties such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos ("one-halo conformity"). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, further revealed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or "large-scale conformity"). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal fields induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  13. Direct Metal Laser Sintering Titanium Dental Implants: A Review of the Current Literature

    Science.gov (United States)

    Mangano, F.; Chambrone, L.; van Noort, R.; Miller, C.; Hatton, P.; Mangano, C.

    2014-01-01

    Statement of Problem. Direct metal laser sintering (DMLS) is a technology that allows fabrication of complex-shaped objects from powder-based materials, according to a three-dimensional (3D) computer model. With DMLS, it is possible to fabricate titanium dental implants with an inherently porous surface, a key property required of implantation devices. Objective. The aim of this review was to evaluate the evidence for the reliability of DMLS titanium dental implants and their clinical and histologic/histomorphometric outcomes, as well as their mechanical properties. Materials and Methods. Electronic database searches were performed. Inclusion criteria were clinical and radiographic studies, histologic/histomorphometric studies in humans and animals, mechanical evaluations, and in vitro cell culture studies on DMLS titanium implants. Meta-analysis could be performed only for randomized controlled trials (RCTs); to evaluate the methodological quality of observational human studies, the Newcastle-Ottawa scale (NOS) was used. Results. Twenty-seven studies were included in this review. No RCTs were found, and meta-analysis could not be performed. The outcomes of observational human studies were assessed using the NOS: these studies showed medium methodological quality. Conclusions. Several studies have demonstrated the potential for the use of DMLS titanium implants. However, further studies that demonstrate the benefits of DMLS implants over conventional implants are needed. PMID:25525434

  14. Direct metal laser sintering titanium dental implants: a review of the current literature.

    Science.gov (United States)

    Mangano, F; Chambrone, L; van Noort, R; Miller, C; Hatton, P; Mangano, C

    2014-01-01

    Statement of Problem. Direct metal laser sintering (DMLS) is a technology that allows fabrication of complex-shaped objects from powder-based materials, according to a three-dimensional (3D) computer model. With DMLS, it is possible to fabricate titanium dental implants with an inherently porous surface, a key property required of implantation devices. Objective. The aim of this review was to evaluate the evidence for the reliability of DMLS titanium dental implants and their clinical and histologic/histomorphometric outcomes, as well as their mechanical properties. Materials and Methods. Electronic database searches were performed. Inclusion criteria were clinical and radiographic studies, histologic/histomorphometric studies in humans and animals, mechanical evaluations, and in vitro cell culture studies on DMLS titanium implants. Meta-analysis could be performed only for randomized controlled trials (RCTs); to evaluate the methodological quality of observational human studies, the Newcastle-Ottawa scale (NOS) was used. Results. Twenty-seven studies were included in this review. No RCTs were found, and meta-analysis could not be performed. The outcomes of observational human studies were assessed using the NOS: these studies showed medium methodological quality. Conclusions. Several studies have demonstrated the potential for the use of DMLS titanium implants. However, further studies that demonstrate the benefits of DMLS implants over conventional implants are needed.

  15. Direct Metal Laser Sintering Titanium Dental Implants: A Review of the Current Literature

    Directory of Open Access Journals (Sweden)

    F. Mangano

    2014-01-01

    Full Text Available Statement of Problem. Direct metal laser sintering (DMLS) is a technology that allows fabrication of complex-shaped objects from powder-based materials, according to a three-dimensional (3D) computer model. With DMLS, it is possible to fabricate titanium dental implants with an inherently porous surface, a key property required of implantation devices. Objective. The aim of this review was to evaluate the evidence for the reliability of DMLS titanium dental implants and their clinical and histologic/histomorphometric outcomes, as well as their mechanical properties. Materials and Methods. Electronic database searches were performed. Inclusion criteria were clinical and radiographic studies, histologic/histomorphometric studies in humans and animals, mechanical evaluations, and in vitro cell culture studies on DMLS titanium implants. Meta-analysis could be performed only for randomized controlled trials (RCTs); to evaluate the methodological quality of observational human studies, the Newcastle-Ottawa scale (NOS) was used. Results. Twenty-seven studies were included in this review. No RCTs were found, and meta-analysis could not be performed. The outcomes of observational human studies were assessed using the NOS: these studies showed medium methodological quality. Conclusions. Several studies have demonstrated the potential for the use of DMLS titanium implants. However, further studies that demonstrate the benefits of DMLS implants over conventional implants are needed.

  16. Tribological investigations of perfluoroalkylsilanes monolayers deposited on titanium surface

    International Nuclear Information System (INIS)

    Cichomski, Michał

    2012-01-01

    Therefore, the present work reports a systematic study of titanium modification by fluoroalkylsilanes and surface characterization from the tribological point of view. The vapor phase deposition method was used to modify titanium surfaces by fluoroalkylsilanes, and the influence of the used modifier on the tribological properties is presented. The modification procedure efficiency, surface structure and morphology were characterized by secondary ion mass spectrometry, infrared spectroscopy and atomic force microscopy. The effectiveness of modification of the titanium surface was monitored by the measurement of the wetting contact angle and the surface free energy. An increase of surface hydrophobicity was observed upon the modification, as shown by the increased wetting contact angle and reduced surface free energy. The tribological performance of various perfluoroalkylsilane films on the titanium surface was investigated in milli- and nano-newton load ranges. A dependence of the adhesive force and coefficient of friction values obtained at the nano- and micro-scale on the fluoroalkyl chain length was observed. Nano- and micro-tribological measurements show that titanium modified by fluoroalkylsilanes has lower adhesion and a lower coefficient of friction than the unmodified one. The investigation also indicates a decrease of the friction coefficient with increasing fluoroalkyl chain length. It was found that titanium modified by fluoroalkylsilanes with longer alkyl chains is a prime candidate for practical use as a lubricant in biomedical and electronic applications. -- Highlights: ► Titanium surface modification by perfluoroalkylsilanes was investigated. ► The effectiveness of modification was monitored by the surface free energy. ► The modification procedure correctness was characterized by ToF-SIMS, AFM, FT-IR measurements. ► The tribological performance of modified titanium at different scales was studied.

  17. Current assisted superplastic forming of titanium alloy

    Directory of Open Access Journals (Sweden)

    Wang Guofeng

    2015-01-01

    Full Text Available Current assisted superplastic forming combines electric heating technology and superplastic forming technology, and can overcome some shortcomings of traditional superplastic forming effectively, such as slow heating rate, large energy loss, low production efficiency, etc. Since formability of titanium alloy at room temperature is poor, current assisted superplastic forming is suitable for titanium alloy. This paper mainly introduces the application of current assisted superplastic forming in the field of titanium alloy, including forming technology of double-hemisphere structure and bellows.

  18. A dynamic globalization model for large eddy simulation of complex turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hae Cheon; Park, No Ma; Kim, Jin Seok [Seoul National Univ., Seoul (Korea, Republic of)

    2005-07-01

    A dynamic subgrid-scale model is proposed for large eddy simulation of turbulent flows in complex geometry. The eddy viscosity model by Vreman [Phys. Fluids, 16, 3670 (2004)] is considered as a base model. A priori tests with the original Vreman model show that it predicts the correct profile of subgrid-scale dissipation in turbulent channel flow but the optimal model coefficient is far from universal. Dynamic procedures of determining the model coefficient are proposed based on the 'global equilibrium' between the subgrid-scale dissipation and viscous dissipation. An important feature of the proposed procedures is that the model coefficient determined is globally constant in space but varies only in time. Large eddy simulations with the present dynamic model are conducted for forced isotropic turbulence, turbulent channel flow and flow over a sphere, showing excellent agreements with previous results.
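
    A schematic sketch of a globally constant, time-varying model coefficient: the Vreman eddy-viscosity factor is computed everywhere, and a single coefficient is chosen so that the volume-averaged modelled subgrid-scale dissipation matches a prescribed target. The target dissipation here is merely a placeholder argument standing in for the paper's global-equilibrium relation; the grid, data and numbers are assumptions, not the published procedure.

```python
import numpy as np

def vreman_factor(grad_u, dx):
    """Space-varying part g(x) of the Vreman eddy viscosity, nu_t = C * g(x)."""
    a = grad_u                                            # a[..., i, j] = du_j/dx_i
    beta = (dx ** 2) * np.einsum('...ki,...kj->...ij', a, a)
    B = (beta[..., 0, 0] * beta[..., 1, 1] - beta[..., 0, 1] ** 2
         + beta[..., 0, 0] * beta[..., 2, 2] - beta[..., 0, 2] ** 2
         + beta[..., 1, 1] * beta[..., 2, 2] - beta[..., 1, 2] ** 2)
    aa = np.einsum('...ij,...ij->...', a, a)
    return np.sqrt(np.maximum(B, 0.0) / np.maximum(aa, 1e-30))

def global_coefficient(grad_u, dx, eps_target):
    """Single, space-uniform coefficient C(t) chosen so that the volume-averaged
    modelled SGS dissipation <2 C g S_ij S_ij> matches eps_target (a stand-in
    for the global-equilibrium right-hand side of the paper)."""
    S = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))      # resolved strain-rate tensor
    g = vreman_factor(grad_u, dx)
    denom = np.mean(2.0 * g * np.einsum('...ij,...ij->...', S, S))
    return eps_target / max(denom, 1e-30)

# toy usage on a random "resolved" gradient field (32^3 grid, hypothetical values)
rng = np.random.default_rng(1)
grad_u = rng.normal(size=(32, 32, 32, 3, 3))
C_t = global_coefficient(grad_u, dx=2 * np.pi / 32, eps_target=0.1)
nu_t = C_t * vreman_factor(grad_u, dx=2 * np.pi / 32)     # constant C in space, varying in time
print(C_t, nu_t.mean())
```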

  19. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    International Nuclear Information System (INIS)

    Sig Drellack, Lance Prothro

    2007-01-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  20. Non-gut baryogenesis and large scale structure of the universe

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    1995-07-01

    We discuss a mechanism for generating baryon density perturbations and study the evolution of the baryon charge density distribution in the framework of the low temperature baryogenesis scenario. This mechanism may be important for the large scale structure formation of the Universe and particularly, may be essential for understanding the existence of a characteristic scale of 130 h−1 Mpc in the distribution of the visible matter. The detailed analysis showed that both the observed very large scale of the visible matter distribution in the Universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate, formed at the inflationary stage. Moreover, according to our model, at present the visible part of the Universe may consist of baryonic and antibaryonic shells, sufficiently separated, so that annihilation radiation is not observed. This is an interesting possibility, since the observational data on antiparticles in cosmic rays do not rule out the possibility of antimatter superclusters in the Universe. (author). 16 refs, 3 figs

  1. Titanium-Phosphonate-Based Metal-Organic Frameworks with Hierarchical Porosity for Enhanced Photocatalytic Hydrogen Evolution

    KAUST Repository

    Li, Hui

    2018-02-01

    Photocatalytic hydrogen production is crucial for the solar-to-chemical conversion process, wherein high-efficiency photocatalysts lie at the heart of this area. Herein a new photocatalyst of hierarchically mesoporous titanium-phosphonate-based metal-organic frameworks, featuring well-structured spheres, a periodic mesostructure and large secondary mesoporosity, is rationally designed with the complex of polyelectrolyte and cathodic surfactant serving as the template. The well-structured hierarchical porosity and homogeneously incorporated phosphonate groups can favor mass transfer and strong optical absorption during the photocatalytic reactions. Correspondingly, the titanium phosphonates exhibit a significantly improved photocatalytic hydrogen evolution rate along with impressive stability. This work can provide more insights into designing advanced photocatalysts for energy conversion and render a tunable platform in the photoelectrochemical field.

  2. Titanium-Phosphonate-Based Metal-Organic Frameworks with Hierarchical Porosity for Enhanced Photocatalytic Hydrogen Evolution

    KAUST Repository

    Li, Hui; Sun, Ying; Yuan, Zhong-Yong; Zhu, Yun-Pei; Ma, Tianyi

    2018-01-01

    Photocatalytic hydrogen production is crucial for the solar-to-chemical conversion process, wherein high-efficiency photocatalysts lie at the heart of this area. Herein a new photocatalyst of hierarchically mesoporous titanium-phosphonate-based metal-organic frameworks, featuring well-structured spheres, a periodic mesostructure and large secondary mesoporosity, is rationally designed with the complex of polyelectrolyte and cathodic surfactant serving as the template. The well-structured hierarchical porosity and homogeneously incorporated phosphonate groups can favor mass transfer and strong optical absorption during the photocatalytic reactions. Correspondingly, the titanium phosphonates exhibit a significantly improved photocatalytic hydrogen evolution rate along with impressive stability. This work can provide more insights into designing advanced photocatalysts for energy conversion and render a tunable platform in the photoelectrochemical field.

  3. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  4. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.

    2016-02-15

    © 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small scales) and from low-pass filtered (large scales) velocity fields tends to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.
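
    The analysis steps described above can be sketched as follows: low-pass filter the velocity field to separate scales, take the local r.m.s. of the small-scale vorticity as the small-scale activity, and correlate it both with the large-scale velocity fluctuation and with the large-scale gradient. The snapshot below is random data, the correlation is computed globally rather than per crosswise position as in the paper, and the filter widths are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy 2D velocity field (hypothetical snapshot); in the paper this comes from a mixing-layer DNS.
rng = np.random.default_rng(0)
nx, ny, dx = 256, 256, 1.0
u = gaussian_filter(rng.normal(size=(nx, ny)), 2)    # streamwise velocity
v = gaussian_filter(rng.normal(size=(nx, ny)), 2)    # crosswise velocity

# scale separation by low-pass (Gaussian) filtering
sigma = 8.0                                          # filter width in grid points (assumption)
uL, vL = gaussian_filter(u, sigma), gaussian_filter(v, sigma)
uS, vS = u - uL, v - vL                              # small-scale residual

# small-scale activity: local r.m.s. of the small-scale vorticity
omega_S = np.gradient(vS, dx, axis=0) - np.gradient(uS, dx, axis=1)
omega_rms = np.sqrt(gaussian_filter(omega_S ** 2, sigma))

# large-scale quantities: velocity fluctuation and its crosswise gradient
uL_fluct = uL - uL.mean(axis=0, keepdims=True)       # subtract the mean profile
duL_dy = np.gradient(uL, dx, axis=1)

def corr(a, b):
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

print('corr(omega_rms, uL fluctuation):', corr(omega_rms, uL_fluct))
print('corr(omega_rms, duL/dy):        ', corr(omega_rms, duL_dy))
```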

  5. Extraction of bivalent vanadium as its pyridine thiocyanate complex and separation from uranium, titanium, chromium and aluminium

    International Nuclear Information System (INIS)

    Yatirajam, V.; Arya, S.P.

    1975-01-01

    A simple method is described for the extraction of V(II) as its pyridine thiocyanate complex. Vanadate is reduced to V(II) in 1 to 2 N sulphuric acid by zinc amalgam. Thiocyanate and pyridine are added, the solution is adjusted to pH 5.2 to 5.5 and the complex extracted with chloroform. The vanadium is back-extracted with peroxide solution. Zinc from the reductant accompanies the vanadium but alkali and alkaline earth metal ions, titanium, uranium, chromium and aluminium are separated, besides those ions reduced to the elements by zinc amalgam. The method takes about 20 min and is applicable to microgram as well as milligram amounts of vanadium. (author)

  6. Nanotubular topography enhances the bioactivity of titanium implants.

    Science.gov (United States)

    Huang, Jingyan; Zhang, Xinchun; Yan, Wangxiang; Chen, Zhipei; Shuai, Xintao; Wang, Anxun; Wang, Yan

    2017-08-01

    Surface modification on titanium implants plays an important role in promoting the mesenchymal stem cell (MSC) response to enhance osseointegration persistently. In this study, nano-scale TiO2 nanotube topography (TNT), micro-scale sand blasted-acid etched topography (SLA), and hybrid sand blasted-acid etched/nanotube topography (SLA/TNT) were fabricated on the surfaces of titanium implants. Although the initial cell adherence at 60 min among TNT, SLA and SLA/TNT was not different, SLA and SLA/TNT were rougher and suppressed the proliferation of MSCs. TNT showed a hydrophilic surface and a balanced promotion of cellular functions. After being implanted in rabbit femur models, TNT displayed the best osteogenesis-inducing ability as well as strong bonding strength to the substrate. These results indicate that nano-scale TNT provides a favorable surface topography for improving the clinical performance of endosseous implants compared with micro and hybrid micro/nano surfaces, suggesting a promising and reliable surface modification strategy of titanium implants for clinical application. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Enabling Large-Scale Biomedical Analysis in the Cloud

    Directory of Open Access Journals (Sweden)

    Ying-Chih Lin

    2013-01-01

    Full Text Available Recent progress in high-throughput instrumentation has led to an astonishing growth in both volume and complexity of biomedical data collected from various sources. Such planet-scale data bring serious challenges to storage and computing technologies. Cloud computing is an alternative to crack the nut because it jointly addresses storage and high-performance computing on large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications would help biomedical research make the vast amount of diverse data meaningful and usable.

  8. Development of Large-Scale Spacecraft Fire Safety Experiments

    DEFF Research Database (Denmark)

    Ruff, Gary A.; Urban, David L.; Fernandez-Pello, A. Carlos

    2013-01-01

    exploration missions outside of low-earth orbit and accordingly, more complex in terms of operations, logistics, and safety. This will increase the challenge of ensuring a fire-safe environment for the crew throughout the mission. Based on our fundamental uncertainty of the behavior of fires in low...... of the spacecraft fire safety risk. The activity of this project is supported by an international topical team of fire experts from other space agencies who conduct research that is integrated into the overall experiment design. The large-scale space flight experiment will be conducted in an Orbital Sciences...

  9. Large-scale matrix-handling subroutines 'ATLAS'

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Fujita, Keiichi; Matsuura, Toshihiko; Tahara, Nobuo

    1978-03-01

    Subroutine package "ATLAS" has been developed for handling large-scale matrices. The package is composed of four kinds of subroutines, i.e., basic arithmetic routines, routines for solving linear simultaneous equations, routines for solving general eigenvalue problems, and utility routines. The subroutines are useful in large scale plasma-fluid simulations. (auth.)

  10. Super titanium blades for advanced steam turbines

    International Nuclear Information System (INIS)

    Coulon, P.A.

    1990-01-01

    In 1986, the Alsthom Steam Turbines Department launched the manufacture of large titanium alloy blades: airfoil length of 1360 mm and overall length of 1520 mm. These blades are designed for the last-stage low pressure blading of advanced steam turbines operating at full speed (3000 rpm) and rated between 300 and 800 MW. Using titanium alloys for steam turbine exhaust stages as substitutes for chrome steels, due to their high strength/density ratio and their almost complete resistance to corrosion, makes it possible to increase the length of blades significantly and correspondingly the steam passage section (by up to 50%) while keeping a conservative stress level in the rotor. Alsthom relies on 8 years of experience in the field of titanium, since as early as 1979 large titanium blades (airfoil length of 1240 mm, overall length of 1430 mm) were erected for experimental purposes on the last stage of a 900 MW unit of the Dampierre-sur-Loire power plant; these blades now total 45,000 operating hours without problems. The paper summarizes the main properties (chemical, mechanical and structural) recorded on very large blades and is based in particular on numerous fatigue corrosion test results to justify the use of the Ti-6Al-4V alloy in a specific context of micrographic structure

  11. Unraveling The Connectome: Visualizing and Abstracting Large-Scale Connectomics Data

    KAUST Repository

    Al-Awami, Ali K.

    2017-04-30

    We explore visualization and abstraction approaches to represent neuronal data. Neuroscientists acquire electron microscopy volumes to reconstruct a complete wiring diagram of the neurons in the brain, called the connectome. This will be crucial to understanding brains and their development. However, the resulting data is complex and large, posing a big challenge to existing visualization techniques in terms of clarity and scalability. We describe solutions to tackle the problems of scalability and cluttered presentation. We first show how a query-guided interactive approach to visual exploration can reduce the clutter and help neuroscientists explore their data dynamically. We use a knowledge-based query algebra that facilitates the interactive creation of queries. This allows neuroscientists to pose domain-specific questions related to their research. Simple queries can be combined to form complex queries to answer more sophisticated questions. We then show how visual abstractions from 3D to 2D can significantly reduce the visual clutter and add clarity to the visualization so that scientists can focus more on the analysis. We abstract the topology of 3D neurons into a multi-scale, relative distance-preserving subway map visualization that allows scientists to interactively explore the morphological and connectivity features of neuronal cells. We then focus on the process of acquisition, where neuroscientists segment electron microscopy images to reconstruct neurons. The segmentation process of such data is tedious, time-intensive, and usually performed using a diverse set of tools. We present a novel web-based visualization system for tracking the state, progress, and evolution of segmentation data in neuroscience. Our multi-user system seamlessly integrates a diverse set of tools. Our system provides support for the management, provenance, accountability, and auditing of large-scale segmentations. Finally, we present a novel architecture to render very large

  12. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. The objectives were to improve the performance and reduce the costs of a large-scale solar heating system. As a result of the project, the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the design stage. (orig.)

  13. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  14. Tenant Placement Strategies within Multi-Level Large-Scale Shopping Centers

    OpenAIRE

    Tony Shun-Te Yuo; Colin Lizieri

    2013-01-01

    This paper argues that tenant placement strategies for large-scale multi-unit shopping centers differ depending on the number of floor levels. Two core strategies are identified: dispersion and departmentalization. There exists a trade-off between three income effects: basic footfall effects, spillover effects, and an effective floor area effect, which varies by the number of floor levels. Departmentalization is favored for centers with more than four floors. Greater spatial complexity also p...

  15. Side effects of problem-solving strategies in large-scale nutrition science: towards a diversification of health.

    Science.gov (United States)

    Penders, Bart; Vos, Rein; Horstman, Klasien

    2009-11-01

    Solving complex problems in large-scale research programmes requires cooperation and division of labour. Simultaneously, large-scale problem solving also gives rise to unintended side effects. Based upon 5 years of researching two large-scale nutrigenomic research programmes, we argue that problems are fragmented in order to be solved. These sub-problems are given priority for practical reasons and in the process of solving them, various changes are introduced in each sub-problem. Combined with additional diversity as a result of interdisciplinarity, this makes reassembling the original and overall goal of the research programme less likely. In the case of nutrigenomics and health, this produces a diversification of health. As a result, the public health goal of contemporary nutrition science is not reached in the large-scale research programmes we studied. Large-scale research programmes are very successful in producing scientific publications and new knowledge; however, in reaching their political goals they often are less successful.

  16. Prediction of monthly rainfall on homogeneous monsoon regions of India based on large scale circulation patterns using Genetic Programming

    Science.gov (United States)

    Kashid, Satishkumar S.; Maity, Rajib

    2012-08-01

    Summary: Prediction of Indian Summer Monsoon Rainfall (ISMR) is of vital importance for the Indian economy, and it has remained a great challenge for hydro-meteorologists due to inherent complexities in the climatic systems. Large-scale atmospheric circulation patterns from the tropical Pacific Ocean (ENSO) and the tropical Indian Ocean (EQUINOO) are established to influence the Indian Summer Monsoon Rainfall. The information from these two large-scale atmospheric circulation patterns, in terms of their indices, is used to model the complex relationship between the Indian Summer Monsoon Rainfall and the ENSO and EQUINOO indices. However, extracting the signal from such large-scale indices for modeling such complex systems is significantly difficult. Rainfall predictions have been done for 'All India' as one unit, as well as for five 'homogeneous monsoon regions of India', defined by the Indian Institute of Tropical Meteorology. The recent 'Artificial Intelligence' tool 'Genetic Programming' (GP) has been employed for modeling this problem. The Genetic Programming approach is found to capture the complex relationship between the monthly Indian Summer Monsoon Rainfall and the large-scale atmospheric circulation pattern indices - ENSO and EQUINOO. Research findings of this study indicate that GP-derived monthly rainfall forecasting models that use large-scale atmospheric circulation information are successful in predicting All India Summer Monsoon Rainfall, with a correlation coefficient as good as 0.866, which may appear attractive for such a complex system. A separate analysis is carried out for All India Summer Monsoon Rainfall for India as one unit, and for the five homogeneous monsoon regions, based on the ENSO and EQUINOO indices of March, April and May only, performed at the end of May. In this case, All India Summer Monsoon Rainfall could be predicted with a correlation coefficient of 0.70, with somewhat lower Correlation Coefficient (C.C.) values for different
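
    To make the Genetic Programming approach concrete, the sketch below evolves random expression trees over hypothetical monthly ENSO/EQUINOO predictor columns and scores them by the correlation coefficient with synthetic rainfall values. It is a from-scratch toy with truncation selection kept deliberately small, not the study's GP configuration, data, or skill.

```python
import numpy as np

rng = np.random.default_rng(42)
# hypothetical predictors: ENSO and EQUINOO indices for three months; synthetic "rainfall" target
X = rng.normal(size=(60, 6))
y = 1.5 * X[:, 0] - 0.8 * X[:, 3] + 0.3 * rng.normal(size=60)

TERMS = [('x', i) for i in range(X.shape[1])] + [('c', None)]
OPS = {'+': np.add, '-': np.subtract, '*': np.multiply,
       '/': lambda a, b: a / np.where(np.abs(b) < 1e-6, 1.0, b)}   # protected division

def random_tree(depth=3):
    if depth == 0 or rng.random() < 0.3:
        kind, arg = TERMS[rng.integers(len(TERMS))]
        return ('c', rng.normal()) if kind == 'c' else ('x', arg)
    op = list(OPS)[rng.integers(len(OPS))]
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, X):
    if tree[0] == 'x':
        return X[:, tree[1]]
    if tree[0] == 'c':
        return np.full(X.shape[0], tree[1])
    return OPS[tree[0]](evaluate(tree[1], X), evaluate(tree[2], X))

def fitness(tree):
    pred = evaluate(tree, X)
    if np.std(pred) < 1e-12:
        return -1.0
    return np.corrcoef(pred, y)[0, 1]          # correlation coefficient as the skill measure

def mutate(tree, p=0.2):
    if rng.random() < p:
        return random_tree(2)
    if tree[0] in OPS:
        return (tree[0], mutate(tree[1], p), mutate(tree[2], p))
    return tree

def crossover(a, b):
    # crude subtree swap: replace a random branch of `a` with a branch of `b`
    if a[0] in OPS and rng.random() < 0.7:
        branch = 1 + rng.integers(2)
        new = list(a)
        new[branch] = b[1 + rng.integers(2)] if b[0] in OPS else b
        return tuple(new)
    return b

pop = [random_tree() for _ in range(200)]
for gen in range(30):
    scored = sorted(pop, key=fitness, reverse=True)
    best = scored[0]
    parents = scored[:50]                       # truncation selection, for brevity
    pop = [best] + [mutate(crossover(parents[rng.integers(50)],
                                     parents[rng.integers(50)]))
                    for _ in range(199)]
print('best C.C.:', round(fitness(best), 3))
```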

  17. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish a large-scale network management, a concept essentially characterized by (1) broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and out sourcing. The article is part of a planned series.

  18. An efficient method based on the uniformity principle for synthesis of large-scale heat exchanger networks

    International Nuclear Information System (INIS)

    Zhang, Chunwei; Cui, Guomin; Chen, Shang

    2016-01-01

    Highlights: • Two dimensionless uniformity factors are presented for the heat exchanger network. • The grouping of process streams reduces the computational complexity of large-scale HENS problems. • The optimal sub-network can be obtained by the Powell particle swarm optimization algorithm. • The method is illustrated by a case study involving 39 process streams, with a better solution. - Abstract: The optimal design of large-scale heat exchanger networks is a difficult task due to the inherent non-linear characteristics and the combinatorial nature of heat exchangers. To solve large-scale heat exchanger network synthesis (HENS) problems, two dimensionless uniformity factors to describe the heat exchanger network (HEN) uniformity in terms of the temperature difference and the accuracy of process stream grouping are deduced. Additionally, a novel algorithm that combines deterministic and stochastic optimizations to obtain an optimal sub-network with a suitable heat load for a given group of streams is proposed, and is named the Powell particle swarm optimization (PPSO). As a result, the synthesis of large-scale heat exchanger networks is divided into two corresponding sub-parts, namely, the grouping of process streams and the optimization of sub-networks. This approach reduces the computational complexity and increases the efficiency of the proposed method. The robustness and effectiveness of the proposed method are demonstrated by solving a large-scale HENS problem involving 39 process streams, and the results obtained are better than those previously published in the literature.
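
    One plausible reading of the "Powell particle swarm optimization (PPSO)" hybrid is sketched below: a standard particle swarm search whose best particle is periodically polished by SciPy's derivative-free Powell method. The objective is a generic multimodal stand-in, not the paper's sub-network cost model, and all parameters are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def cost(x):
    """Stand-in for a sub-network cost (the paper builds its objective from stream data);
    a generic multimodal test function is used here only to exercise the solver."""
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def ppso(f, dim=6, n_particles=30, iters=100, bounds=(-5.12, 5.12), seed=0):
    """Particle swarm search with periodic Powell refinement of the swarm best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    for it in range(iters):
        g = pbest[np.argmin(pbest_f)]                          # current global best
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        if it % 10 == 0:                                       # Powell polish of the best particle
            i = np.argmin(pbest_f)
            res = minimize(f, pbest[i], method='Powell')
            if res.fun < pbest_f[i]:
                pbest[i], pbest_f[i] = res.x, res.fun
    i = np.argmin(pbest_f)
    return pbest[i], pbest_f[i]

best_x, best_f = ppso(cost)
print(round(best_f, 4))
```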

  19. Sustainability Risk Evaluation for Large-Scale Hydropower Projects with Hybrid Uncertainty

    Directory of Open Access Journals (Sweden)

    Weiyao Tang

    2018-01-01

    Full Text Available As large-scale hydropower projects are influenced by many factors, risk evaluations are complex. This paper considers a hydropower project as a complex system from the perspective of sustainability risk, and divides it into three subsystems: the natural environment subsystem, the eco-environment subsystem and the socioeconomic subsystem. Risk-related factors and quantitative dimensions of each subsystem are comprehensively analyzed, with the uncertainty of some quantitative dimensions handled by hybrid uncertainty methods, including fuzzy uncertainty (e.g., the national health degree, the national happiness degree, the protection of cultural heritage), random uncertainty (e.g., underground water levels, river width), and fuzzy random uncertainty (e.g., runoff volumes, precipitation). By calculating the sustainability risk-related degree for each of the risk-related factors, a sustainability risk-evaluation model is built. Based on the calculation results, the critical sustainability risk-related factors are identified and targeted to reduce the losses caused by sustainability risk factors of the hydropower project. A case study at the under-construction Baihetan hydropower station is presented to demonstrate the viability of the risk-evaluation model and to provide a reference for the sustainable risk evaluation of other large-scale hydropower projects.

  20. Large-scale structure of the Taurus molecular complex. II. Analysis of velocity fluctuations and turbulence. III. Methods for turbulence

    International Nuclear Information System (INIS)

    Kleiner, S.C.; Dickman, R.L.

    1985-01-01

    The velocity autocorrelation function (ACF) of observed spectral line centroid fluctuations is noted to effectively reproduce the actual ACF of turbulent gas motions within an interstellar cloud, thereby furnishing a framework for the study of the large scale velocity structure of the Taurus dark cloud complex traced by the present 13CO J = 1-0 observations of this region. The results obtained are discussed in the context of recent suggestions that widely observed correlations between molecular cloud line widths and cloud sizes indicate the presence of a continuum of turbulent motions within the dense interstellar medium. Attention is then given to a method for the quantitative study of these turbulent motions, involving the mapping of a source in an optically thin spectral line and studying the spatial correlation properties of the resulting velocity centroid map. 61 references
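    The analysis idea described above can be illustrated with a short sketch: grid the line-centroid velocities and examine their spatial autocorrelation. The Gaussian random field below is only a stand-in for an observed 13CO velocity-centroid map; the correlation length and grid size are arbitrary.

```python
# Hedged sketch: spatial autocorrelation of a velocity-centroid map. The filtered
# white-noise field stands in for real centroid data.
import numpy as np

rng = np.random.default_rng(1)
n = 128
white = rng.normal(size=(n, n))

# Impose large-scale correlations by low-pass filtering white noise in Fourier space.
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
lowpass = np.exp(-((kx**2 + ky**2) / (2 * 0.05**2)))
centroid_map = np.real(np.fft.ifft2(np.fft.fft2(white) * lowpass))

# Normalized autocorrelation function via the Wiener-Khinchin theorem.
fluct = centroid_map - centroid_map.mean()
power = np.abs(np.fft.fft2(fluct)) ** 2
acf = np.real(np.fft.ifft2(power))
acf /= acf[0, 0]

# Lags along one axis give a 1D view of the correlation length.
print("ACF at lags 0, 1, 5, 20 pixels:", [round(acf[0, lag], 3) for lag in (0, 1, 5, 20)])
```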

  1. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  2. Lunar-derived titanium alloys for hydrogen storage

    Science.gov (United States)

    Love, S.; Hertzberg, A.; Woodcock, G.

    1992-01-01

    Hydrogen gas, which plays an important role in many projected lunar power systems and industrial processes, can be stored in metallic titanium and in certain titanium alloys as an interstitial hydride compound. Storing and retrieving hydrogen with titanium-iron alloy requires substantially less energy investment than storage by liquefaction. Metal hydride storage systems can be designed to operate at a wide range of temperatures and pressures. A few such systems have been developed for terrestrial applications. A drawback of metal hydride storage for lunar applications is the system's large mass per mole of hydrogen stored, which rules out transporting it from earth. The transportation problem can be solved by using native lunar materials, which are rich in titanium and iron.

  3. Polynomial expansion of the precoder for power minimization in large-scale MIMO systems

    KAUST Repository

    Sifaou, Houssem

    2016-07-26

    This work focuses on the downlink of a single-cell large-scale MIMO system in which the base station equipped with M antennas serves K single-antenna users. In particular, we are interested in reducing the implementation complexity of the optimal linear precoder (OLP) that minimizes the total power consumption while ensuring target user rates. As with most precoding schemes, a major difficulty towards the implementation of OLP is that it requires fast inversions of large matrices at every new channel realization. To overcome this issue, we aim at designing a linear precoding scheme providing the same performance as OLP but with lower complexity. This is achieved by applying the truncated polynomial expansion (TPE) concept on a per-user basis. To get a further leap in complexity reduction and allow for closed-form expressions of the per-user weighting coefficients, we resort to the asymptotic regime in which M and K grow large with a bounded ratio. Numerical results are used to show that the proposed TPE precoding scheme achieves the same performance as OLP with a significantly lower implementation complexity. © 2016 IEEE.
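    The core idea of TPE precoding, replacing the matrix inverse needed by a regularized zero-forcing style precoder with a short matrix polynomial, can be sketched as below. This is a simplified Neumann-series variant with a placeholder regularization value; it is not the paper's per-user optimized coefficients.

```python
# Hedged sketch of the truncated polynomial expansion (TPE) idea: approximate the
# matrix inverse in an RZF-style precoder by a short matrix polynomial, avoiding an
# explicit inversion at every channel realization.
import numpy as np

rng = np.random.default_rng(2)
M, K = 64, 8                      # base-station antennas, single-antenna users
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)
reg = K / 10.0                    # regularization (placeholder value)

A = H.conj().T @ H + reg * np.eye(M)          # matrix whose inverse RZF needs
exact_precoder = np.linalg.solve(A, H.conj().T)

# Truncated polynomial expansion of A^{-1}: A^{-1} ~ alpha * sum_j (I - alpha A)^j.
alpha = 1.0 / np.linalg.norm(A, 2)            # A is positive definite, so the series converges
J = 8                                         # truncation order
approx_inv = np.zeros_like(A)
term = np.eye(M, dtype=complex)
for _ in range(J):
    approx_inv += term
    term = term @ (np.eye(M) - alpha * A)
approx_inv *= alpha
tpe_precoder = approx_inv @ H.conj().T

rel_err = np.linalg.norm(tpe_precoder - exact_precoder) / np.linalg.norm(exact_precoder)
print(f"relative precoder error with {J} polynomial terms: {rel_err:.3e}")
```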

  4. Analysis of x-ray diffraction pattern and complex plane impedance plot of polypyrrole/titanium dioxide nanocomposite: A simulation study

    Science.gov (United States)

    Ravikiran, Y. T.; Vijaya Kumari, S. C.

    2013-06-01

    To further explore the properties of the polypyrrole/titanium dioxide (PPy/TiO2) nanocomposite, it has been synthesized by a chemical polymerization technique. The nanostructure and monoclinic phase of the prepared composite have been confirmed by simulating the X-ray diffraction (XRD) pattern. Also, the complex plane impedance plot of the composite has been simulated to find the equivalent resistance-capacitance (RC) circuit, and numerical values of R and C have been predicted.

  5. PERSEUS-HUB: Interactive and Collective Exploration of Large-Scale Graphs

    Directory of Open Access Journals (Sweden)

    Di Jin

    2017-07-01

    Full Text Available Graphs emerge naturally in many domains, such as social science, neuroscience, transportation engineering, and more. In many cases, such graphs have millions or billions of nodes and edges, and their sizes increase daily at a fast pace. How can researchers from various domains explore large graphs interactively and efficiently to find out what is ‘important’? How can multiple researchers explore a new graph dataset collectively and “help” each other with their findings? In this article, we present Perseus-Hub, a large-scale graph mining tool that computes a set of graph properties in a distributed manner, performs ensemble, multi-view anomaly detection to highlight regions that are worth investigating, and provides users with uncluttered visualization and easy interaction with complex graph statistics. Perseus-Hub uses a Spark cluster to calculate various statistics of large-scale graphs efficiently, and aggregates the results in a summary on the master node to support interactive user exploration. In Perseus-Hub, the visualized distributions of graph statistics provide preliminary analysis to understand a graph. To perform a deeper analysis, users with little prior knowledge can leverage patterns (e.g., spikes in the power-law degree distribution) marked by other users or experts. Moreover, Perseus-Hub guides users to regions of interest by highlighting anomalous nodes and helps users establish a more comprehensive understanding about the graph at hand. We demonstrate our system through a case study on real, large-scale networks.

  6. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results are described from testing the material resistance to fracture (non-ductile). The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. The results are also presented of nozzles of 850 mm inner diameter in a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  7. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    Science.gov (United States)

    Hero, Alfred O.; Rajaratnam, Bala

    2015-01-01

    When can reliable inference be drawn in the “Big Data” context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for “Big Data”. Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exascale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks. PMID:27087700

  8. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.

    Science.gov (United States)

    Hero, Alfred O; Rajaratnam, Bala

    2016-01-01

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exascale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
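    The "sample-starved" regime discussed in these two records can be made concrete with a small sketch: when p is much larger than n, even mutually independent variables produce spuriously large sample correlations, which is why correlation-mining thresholds need a principled sample-complexity analysis. The values of n, p and the threshold below are arbitrary.

```python
# Hedged sketch of correlation mining in the p >> n regime: count variable pairs whose
# sample correlation exceeds a threshold even though all variables are independent.
import numpy as np

rng = np.random.default_rng(3)
n, p = 30, 2000                       # few samples, many variables
X = rng.normal(size=(n, p))           # all variables truly independent

# Sample correlation matrix (p x p), computed from standardized columns.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
R = (Z.T @ Z) / (n - 1)

# "Correlation mining": report variable pairs whose |correlation| exceeds a threshold.
thresh = 0.6
iu = np.triu_indices(p, k=1)
hits = np.count_nonzero(np.abs(R[iu]) > thresh)
print(f"pairs with |r| > {thresh} among independent variables: {hits}")
print("largest spurious correlation:", np.abs(R[iu]).max().round(3))
```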

  9. Why small-scale cannabis growers stay small: five mechanisms that prevent small-scale growers from going large scale.

    Science.gov (United States)

    Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy

    2012-11-01

    Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large scale and 35 on a small scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high workload and a high risk of detection, and thus demand a higher level of organizational skills than small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production would imply having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed. Copyright

  10. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large scale dimensional metrology Enables practitioners to study distributed large scale dimensional metrology independently Includes specific examples of the development of new system prototypes

  11. Formation and characterization of titanium nitride and titanium carbide films prepared by reactive sputtering

    International Nuclear Information System (INIS)

    Sundgren, J.-E.

    1982-01-01

    Titanium has been reactively r.f. sputtered in mixed Ar-N2 and Ar-CH4 discharges on to substrates held at 775 K. The films obtained have been characterized by scanning electron microscopy, X-ray diffraction and by measurements of hardness and electrical resistivity. The compositions of the films have been determined using Auger electron spectroscopy. The processes occurring both on substrates and target surfaces have been studied and it is shown that the latter is of great importance for the composition and structure of deposited films. Titanium nitride films of full density and with electrical resistivity and hardness values close to those of bulk TiN were only obtained in a narrow range close to the stoichiometric composition. Titanium carbide films grown on non-biased substrates were found to have an open structure and thus a low density. A bias applied to the substrate, however, improved the quality of the films. It is also shown that the heat of formation of the compounds plays an important role in the formation of carbides and nitrides. A large value promotes the development of large grains and dense structures. (Auth.)

  12. Large scale 2D spectral compressed sensing in continuous domain

    KAUST Repository

    Cai, Jian-Feng

    2017-06-20

    We consider the problem of spectral compressed sensing in continuous domain, which aims to recover a 2-dimensional spectrally sparse signal from partially observed time samples. The signal is assumed to be a superposition of s complex sinusoids. We propose a semidefinite program for the 2D signal recovery problem. Our model is able to handle large scale 2D signals of size 500 × 500, whereas traditional approaches only handle signals of size around 20 × 20.

  13. Large scale 2D spectral compressed sensing in continuous domain

    KAUST Repository

    Cai, Jian-Feng; Xu, Weiyu; Yang, Yang

    2017-01-01

    We consider the problem of spectral compressed sensing in continuous domain, which aims to recover a 2-dimensional spectrally sparse signal from partially observed time samples. The signal is assumed to be a superposition of s complex sinusoids. We propose a semidefinite program for the 2D signal recovery problem. Our model is able to handle large scale 2D signals of size 500 × 500, whereas traditional approaches only handle signals of size around 20 × 20.

  14. Visual attention mitigates information loss in small- and large-scale neural codes

    Science.gov (United States)

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-01-01

    Summary The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. PMID:25769502

  15. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele; Attili, Antonio; Bisetti, Fabrizio; Elsinga, Gerrit E.

    2015-01-01

    from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the large-scale gradients modulation of the small scales has been additionally investigated.

  16. A computational approach to modeling cellular-scale blood flow in complex geometry

    Science.gov (United States)

    Balogh, Peter; Bagchi, Prosenjit

    2017-04-01

    We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.

  17. Characterization and Sintering of Armstrong Process Titanium Powder

    Science.gov (United States)

    Xu, Xiaoyan; Nash, Philip; Mangabhai, Damien

    2017-04-01

    Titanium and titanium alloys have a high strength-to-weight ratio and good corrosion resistance, but require longer machining times and incur higher machining costs. Powder metallurgy offers a viable approach to produce near net-shape complex components with little or no machining. The Armstrong titanium powders are produced by direct reduction of TiCl4 vapor with liquid sodium, a process which has a relatively low cost. This paper presents systematic research on the powder characterization, mechanical properties, and sintering behavior of Armstrong process titanium powder metallurgy, and also discusses the sodium issue and the advantages and disadvantages of Armstrong process powders.

  18. Abnormal binding and disruption in large scale networks involved in human partial seizures

    Directory of Open Access Journals (Sweden)

    Bartolomei Fabrice

    2013-12-01

    Full Text Available There is a marked increase in the amount of electrophysiological and neuroimaging work dealing with the study of large scale brain connectivity in the epileptic brain. Our view of the epileptogenic process in the brain has largely evolved over the last twenty years from the historical concept of “epileptic focus” to a more complex description of “epileptogenic networks” involved in the genesis and “propagation” of epileptic activities. In particular, a large number of studies have been dedicated to the analysis of intracerebral EEG signals to characterize the dynamics of interactions between brain areas during temporal lobe seizures. These studies have reported that large scale functional connectivity is dramatically altered during seizures, particularly during temporal lobe seizure genesis and development. Dramatic changes in neural synchrony provoked by epileptic rhythms are also responsible for the production of ictal symptoms or changes in the patient’s behaviour, such as automatisms, emotional changes or alteration of consciousness. Besides these studies dedicated to seizures, large-scale network connectivity during the interictal state has also been investigated, not only to define biomarkers of epileptogenicity but also to better understand the cognitive impairments observed between seizures.

  19. Scale-dependent intrinsic entropies of complex time series.

    Science.gov (United States)

    Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E

    2016-04-13

    Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
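    The baseline multi-scale entropy procedure that the method above extends can be sketched briefly: coarse-grain the series at several scales and compute the sample entropy at each scale. The sketch below implements this standard coarse-graining variant on white noise; it does not reproduce the authors' empirical-mode-decomposition detrending, and the parameter choices (m, r, scales) are conventional defaults.

```python
# Hedged sketch of standard multi-scale entropy (MSE): sample entropy of
# coarse-grained versions of a time series.
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy with template length m and tolerance r (default 0.2 * std)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        total = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            total += np.count_nonzero(dist <= r)
        return total

    b, a = count_matches(m), count_matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(4)
white = rng.normal(size=2000)
for scale in (1, 2, 4, 8):
    print(f"scale {scale}: SampEn = {sample_entropy(coarse_grain(white, scale)):.3f}")
```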

  20. Titanium ; dream new material

    International Nuclear Information System (INIS)

    Lee, Yong Tae; Kim, Seung Eon; Heoon, Yong Taek; Jung, Hui Won

    2001-11-01

    This book covers the history of titanium, the present situation of the titanium industry, the properties and types of titanium alloys, the development of new titanium alloys, the smelting, casting and heat treatment of titanium, titanium alloys for aircraft, car parts, biological health care, sport, leisure and daily life, future prospects, and the development of the titanium industry in China.

  1. Model of large scale man-machine systems with an application to vessel traffic control

    NARCIS (Netherlands)

    Wewerinke, P.H.; van der Ent, W.I.; ten Hove, D.

    1989-01-01

    Mathematical models are discussed to deal with complex large-scale man-machine systems such as vessel (air, road) traffic and process control systems. Only interrelationships between subsystems are assumed. Each subsystem is controlled by a corresponding human operator (HO). Because of the

  2. Large-Scale Astrophysical Visualization on Smartphones

    Science.gov (United States)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  3. Ultrahighly Dispersed Titanium Oxide on Silica : Effect of Precursors on the Structure and Photocatalysis

    OpenAIRE

    Yoshida , S.; Takenaka , S.; Tanaka , T.; Funabiki , T.

    1997-01-01

    The effect of precursor on the dispersion and catalytic performance of titanium oxide supported on silica has been investigated. The catalysts were prepared by a simple impregnation method with three kinds of titanium complexes of different ligands (bis(isopropyato)-bis(pivaroylmethanato) : DPM, acetylacetonato : ACAC, tetrakis(isopropylato) : IPRO) with the aim of preparing ultrahighly dispersed titanium oxide on silica. The XAFS study revealed that titanium species in the catalyst prepared f...

  4. Large-Scale Spray Releases: Additional Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  5. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  6. Recent developments in complex scaling

    International Nuclear Information System (INIS)

    Rescigno, T.N.

    1980-01-01

    Some recent developments in the use of complex basis function techniques to study resonance as well as certain types of non-resonant, scattering phenomena are discussed. Complex scaling techniques and other closely related methods have continued to attract the attention of computational physicists and chemists and have now reached a point of development where meaningful calculations on many-electron atoms and molecules are beginning to appear feasible

  7. Titanium Metal Powder Production by the Plasma Quench Process

    Energy Technology Data Exchange (ETDEWEB)

    R. A. Cordes; A. Donaldson

    2000-09-01

    The goals of this project included the scale-up of the titanium hydride production process to a production rate of 50 kg/hr at a purity level of 99+%. This goal was to be achieved by incrementally increasing the production capability of a series of reactor systems. This methodical approach was designed to allow Idaho Titanium Technologies to systematically address the engineering issues associated with plasma system performance, and powder collection system design and performance. With quality powder available, actual fabrication with the titanium hydride was to be pursued. Finally, with a successful titanium production system in place, the production of titanium aluminide was to be pursued by the simultaneous injection of titanium and aluminum precursors into the reactor system. Some significant accomplishments of the project are: A unique and revolutionary torch/reactor capable of withstanding temperatures up to 5000 C with high thermal efficiency has been operated. The dissociation of titanium tetrachloride into titanium powder and HCl has been demonstrated, and a one-megawatt reactor potentially capable of producing 100 pounds per hour has been built, but not yet operated at the powder level. The removal of residual subchlorides and adsorbed HCl and the sintering of powder to form solid bodies have been demonstrated. The production system has been operated at production rates up to 40 pounds per hour. Subsequent to the end of the project, Idaho Titanium Technologies demonstrated that titanium hydride powder can indeed be sintered into solid titanium metal at 1500 C without sintering aids.

  8. Large-Scale 3D Printing: The Way Forward

    Science.gov (United States)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, where numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  9. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need for addressing energy issues in relation with science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  10. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    Science.gov (United States)

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all of this information to improve the ranking performance has become a new and challenging problem. Previous methods utilize only part of such information and attempt to rank graph nodes according to link-based methods, whose ranking performance is severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of the graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), and then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on real-world large-scale graphs demonstrate that our method significantly outperforms algorithms that consider such graph information only partially.
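    The link-based baseline that graph-ranking methods such as the one above build upon is PageRank. The sketch below runs a plain personalized-PageRank power iteration on a toy adjacency matrix; it does not reproduce the paper's semi-supervised parameterization, and the damping factor and teleport vector are conventional defaults.

```python
# Hedged sketch of PageRank by power iteration on a small directed graph.
import numpy as np

# Toy directed graph as an adjacency matrix (row i -> column j means an edge i -> j).
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

n = A.shape[0]
out_deg = A.sum(axis=1, keepdims=True)
# Row-stochastic transition matrix; dangling nodes (zero out-degree) spread uniformly.
P = np.divide(A, out_deg, out=np.full_like(A, 1.0 / n), where=out_deg > 0)

damping = 0.85
teleport = np.full(n, 1.0 / n)        # uniform restart; a preference vector would personalize it
rank = np.full(n, 1.0 / n)
for _ in range(100):
    new_rank = damping * (P.T @ rank) + (1 - damping) * teleport
    if np.abs(new_rank - rank).sum() < 1e-10:
        rank = new_rank
        break
    rank = new_rank

print("PageRank scores:", np.round(rank, 4))
```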

  11. Structural-performance testing of titanium-shell lead-matrix container MM2

    Energy Technology Data Exchange (ETDEWEB)

    Hosaluk, L. J.; Barrie, J. N.

    1992-05-15

    This report describes the hydrostatic structural-performance testing of a half-scale, titanium-shell, lead-matrix container (MM2) with a large, simulated volumetric casting defect. Mechanical behaviour of the container is assessed from extensive surface-strain measurements and post-test non-destructive and destructive examinations. Measured strain data are compared briefly with analytical results from a finite-element model of a previous test prototype, MM1, and with data generated by a finite-difference computer code. Finally, procedures are recommended for more detailed analytical modelling. (auth)

  12. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test, aimed at ensuring the safety of light water reactors, was started in fiscal 1976 under the special account act for power source development promotion measures, entrusted by the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents through joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April 1980, and the large-scale reflood test is now included in this program. It consists of two tests: one using a cylindrical core testing apparatus for examining the overall system effect, and one using a plate core testing apparatus for testing individual effects. Each apparatus is composed of mock-ups of the pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  13. Electrodeposition of niobium and titanium in molten salts

    International Nuclear Information System (INIS)

    Sartori, A.F.; Chagas, H.C.

    1988-01-01

    The electrodeposition of niobium and titanium in molten fluorides by the addition of potassium fluoroniobates and fluorotitanates is described at laboratory and pilot scale. The influence of temperature, current density and deposition time on the current efficiency, the structure of the deposits and their purity is studied. The conditions for coating copper and carbon steel with niobium and for coating carbon steel with titanium are also presented. (C.G.C.) [pt

  14. Determination of local constitutive properties of titanium alloy matrix in boron-modified titanium alloys using spherical indentation

    International Nuclear Information System (INIS)

    Sreeranganathan, A.; Gokhale, A.; Tamirisakandala, S.

    2008-01-01

    The constitutive properties of the titanium alloy matrix in boron-modified titanium alloys are different from those of the corresponding unreinforced alloy due to the microstructural changes resulting from the addition of boron. Experimental and finite-element analyses of spherical indentation with a large penetration depth to indenter radius ratio are used to compute the local constitutive properties of the matrix alloy. The results are compared with those of the corresponding alloy without boron, processed in the same manner.

  15. Visual attention mitigates information loss in small- and large-scale neural codes.

    Science.gov (United States)

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-04-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires that sensory signals are processed in a manner that protects information about relevant stimuli from degradation. Such selective processing--or selective attention--is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, thereby providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Software Toolchain for Large-Scale RE-NFA Construction on FPGA

    Directory of Open Access Journals (Sweden)

    Yi-Hua E. Yang

    2009-01-01

    and O(n×m) memory by our software. A large number of RE-NFAs are placed onto a two-dimensional staged pipeline, allowing scalability to thousands of RE-NFAs with linear area increase and little clock rate penalty due to scaling. On a PC with a 2 GHz Athlon64 processor and 2 GB memory, our prototype software constructs hundreds of RE-NFAs used by Snort in less than 10 seconds. We also designed a benchmark generator which can produce RE-NFAs with configurable pattern complexity parameters, including state count, state fan-in, loop-back and feed-forward distances. Several regular expressions with various complexities are used to test the performance of our RE-NFA construction software.

  17. Large Scale Cosmological Anomalies and Inhomogeneous Dark Energy

    Directory of Open Access Journals (Sweden)

    Leandros Perivolaropoulos

    2014-01-01

    Full Text Available A wide range of large scale observations hint towards possible modifications of the standard cosmological model, which is based on a homogeneous and isotropic universe with a small cosmological constant and matter. These observations, also known as “cosmic anomalies”, include unexpected Cosmic Microwave Background perturbations on large angular scales, large dipolar peculiar velocity flows of galaxies (“bulk flows”), the measurement of inhomogeneous values of the fine structure constant on cosmological scales (“alpha dipole”) and other effects. The presence of the observational anomalies could either be a large statistical fluctuation in the context of ΛCDM or it could indicate a non-trivial departure from the cosmological principle on Hubble scales. Such a departure is very much constrained by cosmological observations for matter. For dark energy, however, there are no significant observational constraints for Hubble scale inhomogeneities. In this brief review I discuss some of the theoretical models that can naturally lead to inhomogeneous dark energy, their observational constraints and their potential to explain the large scale cosmic anomalies.

  18. Large-scale patterns in Rayleigh-Benard convection

    International Nuclear Information System (INIS)

    Hardenberg, J. von; Parodi, A.; Passoni, G.; Provenzale, A.; Spiegel, E.A.

    2008-01-01

    Rayleigh-Benard convection at large Rayleigh number is characterized by the presence of intense, vertically moving plumes. Both laboratory and numerical experiments reveal that the rising and descending plumes aggregate into separate clusters so as to produce large-scale updrafts and downdrafts. The horizontal scales of the aggregates reported so far have been comparable to the horizontal extent of the containers, but it has not been clear whether that represents a limitation imposed by domain size. In this work, we present numerical simulations of convection at sufficiently large aspect ratio to ascertain whether there is an intrinsic saturation scale for the clustering process when that ratio is large enough. From a series of simulations of Rayleigh-Benard convection with Rayleigh numbers between 10^5 and 10^8 and with aspect ratios up to 12π, we conclude that the clustering process has a finite horizontal saturation scale with at most a weak dependence on Rayleigh number in the range studied.

  19. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...

  20. Titanium nanostructures for biomedical applications

    Science.gov (United States)

    Kulkarni, M.; Mazare, A.; Gongadze, E.; Perutkova, Š.; Kralj-Iglič, V.; Milošev, I.; Schmuki, P.; Iglič, A.; Mozetič, M.

    2015-02-01

    Titanium and titanium alloys exhibit a unique combination of strength and biocompatibility, which enables their use in medical applications and accounts for their extensive use as implant materials in the last 50 years. Currently, a large amount of research is being carried out in order to determine the optimal surface topography for use in bioapplications, and thus the emphasis is on nanotechnology for biomedical applications. It was recently shown that titanium implants with rough surface topography and free energy increase osteoblast adhesion, maturation and subsequent bone formation. Furthermore, the adhesion of different cell lines to the surface of titanium implants is influenced by the surface characteristics of titanium; namely topography, charge distribution and chemistry. The present review article focuses on the specific nanotopography of titanium, i.e. titanium dioxide (TiO2) nanotubes, using a simple electrochemical anodisation method of the metallic substrate and other processes such as the hydrothermal or sol-gel template. One key advantage of using TiO2 nanotubes in cell interactions is based on the fact that TiO2 nanotube morphology is correlated with cell adhesion, spreading, growth and differentiation of mesenchymal stem cells, which were shown to be maximally induced on smaller diameter nanotubes (15 nm), but hindered on larger diameter (100 nm) tubes, leading to cell death and apoptosis. Research has supported the significance of nanotopography (TiO2 nanotube diameter) in cell adhesion and cell growth, and suggests that the mechanics of focal adhesion formation are similar among different cell types. As such, the present review will focus on perhaps the most spectacular and surprising one-dimensional structures and their unique biomedical applications for increased osseointegration, protein interaction and antibacterial properties.

  1. Titanium nanostructures for biomedical applications

    International Nuclear Information System (INIS)

    Kulkarni, M; Gongadze, E; Perutkova, Š; Iglič, A; Mazare, A; Schmuki, P; Kralj-Iglič, V; Milošev, I; Mozetič, M

    2015-01-01

    Titanium and titanium alloys exhibit a unique combination of strength and biocompatibility, which enables their use in medical applications and accounts for their extensive use as implant materials in the last 50 years. Currently, a large amount of research is being carried out in order to determine the optimal surface topography for use in bioapplications, and thus the emphasis is on nanotechnology for biomedical applications. It was recently shown that titanium implants with rough surface topography and free energy increase osteoblast adhesion, maturation and subsequent bone formation. Furthermore, the adhesion of different cell lines to the surface of titanium implants is influenced by the surface characteristics of titanium; namely topography, charge distribution and chemistry. The present review article focuses on the specific nanotopography of titanium, i.e. titanium dioxide (TiO2) nanotubes, using a simple electrochemical anodisation method of the metallic substrate and other processes such as the hydrothermal or sol-gel template. One key advantage of using TiO2 nanotubes in cell interactions is based on the fact that TiO2 nanotube morphology is correlated with cell adhesion, spreading, growth and differentiation of mesenchymal stem cells, which were shown to be maximally induced on smaller diameter nanotubes (15 nm), but hindered on larger diameter (100 nm) tubes, leading to cell death and apoptosis. Research has supported the significance of nanotopography (TiO2 nanotube diameter) in cell adhesion and cell growth, and suggests that the mechanics of focal adhesion formation are similar among different cell types. As such, the present review will focus on perhaps the most spectacular and surprising one-dimensional structures and their unique biomedical applications for increased osseointegration, protein interaction and antibacterial properties. (topical review)

  2. Manufacturing test of large scale hollow capsule and long length cladding in the large scale oxide dispersion strengthened (ODS) martensitic steel

    International Nuclear Information System (INIS)

    Narita, Takeshi; Ukai, Shigeharu; Kaito, Takeji; Ohtsuka, Satoshi; Fujiwara, Masayuki

    2004-04-01

    Mass production capability of oxide dispersion strengthened (ODS) martensitic steel cladding (9Cr) has been evaluated in Phase II of the Feasibility Studies on Commercialized Fast Reactor Cycle System. The cost of manufacturing the mother tube (raw material powder production, mechanical alloying (MA) by ball mill, canning, hot extrusion, and machining) is a dominant factor in the total cost of manufacturing ODS ferritic steel cladding. In this study, a large-scale 9Cr-ODS martensitic steel mother tube, made with a large-scale hollow capsule, and long-length claddings were manufactured, and the applicability of these processes was evaluated. The following results were obtained. (1) Manufacturing of a large-scale mother tube with dimensions of 32 mm OD, 21 mm ID, and 2 m length was successfully carried out using a large-scale hollow capsule. This mother tube has a high degree of dimensional accuracy. (2) The chemical composition and microstructure of the manufactured mother tube are similar to those of the existing mother tubes manufactured with small-scale cans, and no remarkable difference between the bottom and top ends of the manufactured mother tube was observed. (3) Long-length cladding was successfully manufactured from the large-scale mother tube made using a large-scale hollow capsule. (4) For reducing the manufacturing cost of ODS steel claddings, a manufacturing process for mother tubes using large-scale hollow capsules is promising. (author)

  3. Complex scaling in the cluster model

    International Nuclear Information System (INIS)

    Kruppa, A.T.; Lovas, R.G.; Gyarmati, B.

    1987-01-01

    To find the positions and widths of resonances, a complex scaling of the intercluster relative coordinate is introduced into the resonating-group model. In the generator-coordinate technique used to solve the resonating-group equation the complex scaling requires minor changes in the formulae and code. The finding of the resonances does not need any preliminary guess or explicit reference to any asymptotic prescription. The procedure is applied to the resonances in the relative motion of two ground-state α clusters in 8Be, but is appropriate for any systems consisting of two clusters. (author) 23 refs.; 5 figs.

  4. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of large-scale magnetic field. For this purpose, we perform nonhelical magnetohydrodynamic (MHD) simulation, and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at scale 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to the energy transfers from the velocity field at the forcing scales.

  5. Evaluation of Penalized and Nonpenalized Methods for Disease Prediction with Large-Scale Genetic Data

    Directory of Open Access Journals (Sweden)

    Sungho Won

    2015-01-01

    Full Text Available Owing to recent improvements in genotyping technology, large-scale genetic data can be utilized to identify disease susceptibility loci, and these successful findings have substantially improved our understanding of complex diseases. However, in spite of these successes, most of the genetic effects for many complex diseases were found to be very small, which has been a big hurdle in building disease prediction models. Recently, many statistical methods based on penalized regressions have been proposed to tackle the so-called “large P and small N” problem. Penalized regressions, including the least absolute shrinkage and selection operator (LASSO) and ridge regression, limit the space of parameters, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and, in this report, we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than the existing methods, at least for the diseases under consideration.
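    The penalized regressions compared above can be sketched on simulated genotype-like data in the "large P, small N" setting, as below. The simulated SNP matrix, effect sizes and regularization strengths are illustrative only and do not reproduce the report's evaluation protocol.

```python
# Hedged sketch: LASSO and ridge regression on simulated SNP data with p >> n.
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n, p, n_causal = 300, 5000, 20
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)       # SNP minor-allele counts
beta = np.zeros(p)
beta[:n_causal] = rng.normal(0.5, 0.1, n_causal)           # a few small true effects
y = X @ beta + rng.normal(0, 1.0, n)                       # quantitative phenotype

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("LASSO", Lasso(alpha=0.05, max_iter=10000)), ("ridge", Ridge(alpha=100.0))]:
    model.fit(X_tr, y_tr)
    r2 = model.score(X_te, y_te)
    n_nonzero = np.count_nonzero(model.coef_)
    print(f"{name:6s}  test R^2 = {r2:.3f}   nonzero coefficients = {n_nonzero}")
```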

  6. Fatigue-crack propagation in gamma-based titanium aluminide alloys at large and small crack sizes

    International Nuclear Information System (INIS)

    Kruzic, J.J.; Campbell, J.P.; Ritchie, R.O.

    1999-01-01

    Most evaluations of the fracture and fatigue-crack propagation properties of γ+α2 titanium aluminide alloys to date have been performed using standard large-crack samples, e.g., compact-tension specimens containing crack sizes which are on the order of tens of millimeters, i.e., large compared to microstructural dimensions. However, these alloys have been targeted for applications, such as blades in gas-turbine engines, where relevant crack sizes are much smaller. This paper compares the behavior of large (> 5 mm) and small (c ≅ 25-300 microm) cracks in a γ-TiAl based alloy, of composition Ti-47Al-2Nb-2Cr-0.2B (at.%), specifically for duplex (average grain size approximately 17 microm) and refined lamellar (average colony size ≅150 microm) microstructures. It is found that, whereas the lamellar microstructure displays far superior fracture toughness and fatigue-crack growth resistance in the presence of large cracks, in small-crack testing the duplex microstructure exhibits a better combination of properties. The reasons for such contrasting behavior are examined in terms of the intrinsic and extrinsic (i.e., crack bridging) contributions to cyclic crack advance.

  7. Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.

    Science.gov (United States)

    Chen, Shizhi; Yang, Xiaodong; Tian, Yingli

    2015-09-01

    A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. The learning-based classifiers achieve state-of-the-art accuracies, but have been criticized for computational complexity that grows linearly with the number of classes. The nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, i.e., the discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree only grows sublinearly with the number of categories, which is much better than the recent hierarchical support vector machine-based methods. The memory requirement is an order of magnitude less than that of the recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves state-of-the-art accuracies, while incurring significantly lower computation cost and memory requirements.
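
    As a rough illustration of the hierarchical K-means tree underlying such a scheme (the discriminative training described in the record is omitted; all names and parameters here are hypothetical):

```python
# Sketch of a hierarchical K-means tree over image descriptors. Illustrative only:
# it builds the tree structure such schemes rely on, without the discriminative
# re-weighting described in the paper.
import numpy as np
from sklearn.cluster import KMeans

class HKTreeNode:
    def __init__(self, indices):
        self.indices = indices          # training samples reaching this node
        self.kmeans = None
        self.children = {}              # cluster label -> child node

def build_hktree(features, indices, branching=8, min_leaf=20, depth=0, max_depth=5):
    """Recursively split features[indices] into `branching` clusters."""
    node = HKTreeNode(indices)
    if len(indices) <= min_leaf or depth >= max_depth:
        return node                                    # leaf node
    node.kmeans = KMeans(n_clusters=branching, n_init=4).fit(features[indices])
    for c in range(branching):
        child_idx = indices[node.kmeans.labels_ == c]
        if len(child_idx) > 0:
            node.children[c] = build_hktree(features, child_idx,
                                            branching, min_leaf, depth + 1, max_depth)
    return node

def descend(node, x):
    """Route a query descriptor to a leaf; classification would use the leaf's samples."""
    while node.children:
        c = int(node.kmeans.predict(x.reshape(1, -1))[0])
        node = node.children.get(c, next(iter(node.children.values())))
    return node.indices

feats = np.random.rand(2000, 128).astype(np.float32)   # toy descriptors
root = build_hktree(feats, np.arange(2000))
print(len(descend(root, feats[0])), "training samples in the leaf reached by the query")
```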

  8. Results of Large-Scale Spacecraft Flammability Tests

    Science.gov (United States)

    Ferkul, Paul; Olson, Sandra; Urban, David L.; Ruff, Gary A.; Easton, John; T'ien, James S.; Liao, Ta-Ting T.; Fernandez-Pello, A. Carlos; Torero, Jose L.; Eigenbrand, Christian

    2017-01-01

    For the first time, a large-scale fire was intentionally set inside a spacecraft while in orbit. Testing in low gravity aboard spacecraft had been limited to samples of modest size: for thin fuels the longest samples burned were around 15 cm in length, and thick fuel samples have been even smaller. This is despite the fact that fire is a catastrophic hazard for spaceflight, and the spread and growth of a fire, combined with its interactions with the vehicle, cannot be expected to scale linearly. While every type of occupied structure on earth has been the subject of full-scale fire testing, this had never been attempted in space owing to the complexity, cost, risk and absence of a safe location. Thus, there is a gap in knowledge of fire behavior in spacecraft. The recent utilization of large, unmanned resupply craft has provided the needed capability: a habitable but unoccupied spacecraft in low earth orbit. One such vehicle was used to study the flame spread over a 94 x 40.6 cm thin charring solid (fiberglass-cotton fabric). The sample was an order of magnitude larger than anything studied to date in microgravity and was of sufficient scale that it consumed 1.5% of the available oxygen. The experiment, called Saffire, consisted of two tests, forward or concurrent flame spread (with the direction of flow) and opposed flame spread (against the direction of flow). The average forced air speed was 20 cm/s. For the concurrent flame spread test, the flame size remained constrained after the ignition transient, which is not the case in 1-g. These results were qualitatively different from those on earth, where an upward-spreading flame on a sample of this size accelerates and grows. In addition, a curious effect of the chamber size is noted. Compared to previous microgravity work in smaller tunnels, the flame in the larger tunnel spread more slowly, even for a wider sample. This is attributed to the effect of flow acceleration in the smaller tunnels as a result of hot

  9. Hydrometeorological variability on a large french catchment and its relation to large-scale circulation across temporal scales

    Science.gov (United States)

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, or provide future scenarios of water resources. With the aim of a better understanding of hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) defining those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictand: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. This approach
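
    A minimal sketch of the kind of scale-by-scale statistical downscaling described above (illustrative only: pywt's multilevel decomposition stands in for the authors' multiresolution analysis, and both monthly series are synthetic):

```python
# Sketch: scale-by-scale statistical downscaling with a discrete wavelet
# multiresolution decomposition. Synthetic monthly series stand in for the
# large-scale predictor (e.g. an SLP index) and the local predictand (streamflow).
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_months = 512
predictor = np.cumsum(rng.normal(size=n_months))          # large-scale index
predictand = 0.6 * predictor + rng.normal(scale=2.0, size=n_months)

def mra(series, wavelet="db4", level=4):
    """Additive multiresolution components: one reconstructed series per scale."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet)[: len(series)])
    return comps                                           # components sum to the series

x_comps = mra(predictor)
y_comps = mra(predictand)

# Fit one regression per temporal scale, then sum the scale-wise predictions.
reconstruction = np.zeros(n_months)
for xc, yc in zip(x_comps, y_comps):
    model = LinearRegression().fit(xc.reshape(-1, 1), yc)
    reconstruction += model.predict(xc.reshape(-1, 1))

corr = np.corrcoef(reconstruction, predictand)[0, 1]
print(f"correlation between reconstructed and observed predictand: {corr:.2f}")
```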

  10. Superconducting materials for large scale applications

    International Nuclear Information System (INIS)

    Dew-Hughes, D.

    1975-01-01

    Applications of superconductors capable of carrying large current densities in large-scale electrical devices are examined. Discussions are included on critical current density, superconducting materials available, and future prospects for improved superconducting materials. (JRD)

  11. Large-scale influences in near-wall turbulence.

    Science.gov (United States)

    Hutchins, Nicholas; Marusic, Ivan

    2007-03-15

    Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.
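
    A schematic version of the amplitude-modulation diagnostic described above, applied to a synthetic signal (the cutoff frequency, sampling rate and signal are invented for illustration, not taken from the record):

```python
# Sketch of the large-scale/small-scale amplitude-modulation diagnostic on a
# synthetic velocity signal. Illustrative only; cutoffs and signal are made up.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(2)
fs, n = 10_000, 200_000                       # sampling rate [Hz], samples
t = np.arange(n) / fs
large = np.sin(2 * np.pi * 5 * t)             # slow, "large-scale" motion
small = (1.0 + 0.5 * large) * rng.normal(scale=0.2, size=n)  # modulated small scales
u = large + small                             # synthetic fluctuating velocity

# Separate scales with a low-pass / high-pass pair at a cutoff frequency.
b_lo, a_lo = butter(4, 50 / (fs / 2), btype="low")
b_hi, a_hi = butter(4, 50 / (fs / 2), btype="high")
u_L = filtfilt(b_lo, a_lo, u)                 # large-scale component
u_S = filtfilt(b_hi, a_hi, u)                 # small-scale component

# Envelope of the small scales, low-pass filtered, correlated with the large scales.
envelope = np.abs(hilbert(u_S))
envelope_L = filtfilt(b_lo, a_lo, envelope - envelope.mean())
R = np.corrcoef(u_L, envelope_L)[0, 1]
print(f"amplitude-modulation correlation coefficient R = {R:.2f}")
```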

  12. Ssecrett and neuroTrace: Interactive visualization and analysis tools for large-scale neuroscience data sets

    KAUST Repository

    Jeong, Wonki; Beyer, Johanna; Hadwiger, Markus; Blue, Rusty; Law, Charles; Vázquez Reina, Amelio; Reid, Rollie Clay; Lichtman, Jeff W M D; Pfister, Hanspeter

    2010-01-01

    Recent advances in optical and electron microscopy let scientists acquire extremely high-resolution images for neuroscience research. Data sets imaged with modern electron microscopes can range between tens of terabytes to about one petabyte. These large data sizes and the high complexity of the underlying neural structures make it very challenging to handle the data at reasonably interactive rates. To provide neuroscientists flexible, interactive tools, the authors introduce Ssecrett and NeuroTrace, two tools they designed for interactive exploration and analysis of large-scale optical- and electron-microscopy images to reconstruct complex neural circuits of the mammalian nervous system. © 2010 IEEE.

  13. Ssecrett and neuroTrace: Interactive visualization and analysis tools for large-scale neuroscience data sets

    KAUST Repository

    Jeong, Wonki

    2010-05-01

    Recent advances in optical and electron microscopy let scientists acquire extremely high-resolution images for neuroscience research. Data sets imaged with modern electron microscopes can range between tens of terabytes to about one petabyte. These large data sizes and the high complexity of the underlying neural structures make it very challenging to handle the data at reasonably interactive rates. To provide neuroscientists flexible, interactive tools, the authors introduce Ssecrett and NeuroTrace, two tools they designed for interactive exploration and analysis of large-scale optical- and electron-microscopy images to reconstruct complex neural circuits of the mammalian nervous system. © 2010 IEEE.

  14. Using lanthanoid complexes to phase large macromolecular assemblies

    International Nuclear Information System (INIS)

    Talon, Romain; Kahn, Richard; Durá, M. Asunción; Maury, Olivier; Vellieux, Frédéric M. D.; Franzetti, Bruno; Girard, Eric

    2011-01-01

    A lanthanoid complex, [Eu(DPA) 3 ] 3− , was used to obtain experimental phases at 4.0 Å resolution of PhTET1-12s, a large self-compartmentalized homo-dodecameric protease complex of 444 kDa. Lanthanoid ions exhibit extremely large anomalous X-ray scattering at their L III absorption edge. They are thus well suited for anomalous diffraction experiments. A novel class of lanthanoid complexes has been developed that combines the physical properties of lanthanoid atoms with functional chemical groups that allow non-covalent binding to proteins. Two structures of large multimeric proteins have already been determined by using such complexes. Here the use of the luminescent europium tris-dipicolinate complex [Eu(DPA) 3 ] 3− to solve the low-resolution structure of a 444 kDa homo-dodecameric aminopeptidase, called PhTET1-12s, from the archaeon Pyrococcus horikoshii is reported. Surprisingly, considering the low resolution of the data, the experimental electron density map is very well defined. Experimental phases obtained by using the lanthanoid complex lead to maps displaying particular structural features usually observed in higher-resolution maps. Such complexes open a new way for solving the structure of large molecular assemblies, even with low-resolution data.

  15. PKI security in large-scale healthcare networks.

    Science.gov (United States)

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years many public key infrastructures (PKIs) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI infrastructure addresses the trust issues that arise in a large-scale healthcare network, including multi-domain PKI infrastructures.

  16. Emerging large-scale solar heating applications

    International Nuclear Information System (INIS)

    Wong, W.P.; McClung, J.L.

    2009-01-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  17. Emerging large-scale solar heating applications

    Energy Technology Data Exchange (ETDEWEB)

    Wong, W.P.; McClung, J.L. [Science Applications International Corporation (SAIC Canada), Ottawa, Ontario (Canada)

    2009-07-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  18. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference, however, poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome these computational limitations and apply Bayesian stochastic block models for unsupervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic blockmodelling with MCMC sampling on large complex networks.

  19. Oxidation behaviour of titanium in high temperature steam

    International Nuclear Information System (INIS)

    Moroishi, Taishi; Shida, Yoshiaki

    1978-01-01

    The oxidation of pure titanium was studied in superheated steam at 400–550 °C. The effects of prior cold working and several heat treatment conditions on the oxidation were examined, and the effects of the addition of small amounts of iron and oxygen were also investigated. The oxidation mechanism of pure titanium is discussed in relation to the scale structure and the oxidation kinetics. The hydrogen absorption rate was also measured. As a result, the following conclusions were drawn: (1) The oxidation of pure titanium in steam was faster than in air, and breakaway oxidation was observed above 500 °C after the specimen had gained a certain weight. Prior cold working and heat treatment conditions scarcely affected the oxidation rate, whereas specimens containing small amounts of iron and oxygen showed slightly more rapid oxidation. (2) At 500 and 550 °C a dark grey inner scale and a yellow-brown outer scale were formed. The outer scale was apt to exfoliate after the occurrence of breakaway oxidation. At 400 and 450 °C only a dark grey scale was observed. All of these oxides were identified as the rutile type, TiO2. Furthermore, the presence of a thin and uniform oxygen-rich layer beneath the external scale was confirmed at all test temperatures. (3) The measured weight gain approximately followed a cubic rate law; this would be expected for the following reason: one component of the weight gain is due to the dissolved oxygen, the amount of which remains constant after the early stages of oxidation. The second component is due to the parabolic growth of the external TiO2 scale. When these contributions are added, a pseudo-cubic weight-gain curve results. (4) It was shown that 50 percent of the hydrogen generated during the oxidation was absorbed into the metal. (auth.)
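
    To make the "pseudo-cubic" argument in conclusion (3) concrete, the short numerical sketch below (with made-up rate constants, not values from the record) adds a constant dissolved-oxygen contribution to a parabolic scale-growth term and examines the apparent rate exponent:

```python
# Illustration of the pseudo-cubic weight-gain argument: a constant dissolved-oxygen
# term plus parabolic oxide growth can mimic cubic kinetics over a finite time window.
# All constants are arbitrary illustrative values, not data from the record.
import numpy as np

t = np.geomspace(10.0, 3000.0, 300)    # exposure time (arbitrary units)
w_dissolved = 0.30                     # constant after the early oxidation stage
k_p = 0.06                             # parabolic rate constant for scale growth
w = w_dissolved + k_p * np.sqrt(t)     # total weight gain

# Fit w = k * t^n on a log-log scale; a pure parabolic law gives n = 0.5.
n, log_k = np.polyfit(np.log(t), np.log(w), 1)
print(f"apparent rate exponent n = {n:.2f} "
      "(0.50 = parabolic; the constant term pulls n down toward the cubic value 1/3)")
```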

  20. Large-scale grid-enabled lattice-Boltzmann simulations of complex fluid flow in porous media and under shear

    NARCIS (Netherlands)

    Harting, J.D.R.; Venturoli, M.; Coveney, P.V.

    2004-01-01

    Well-designed lattice Boltzmann codes exploit the essentially embarrassingly parallel features of the algorithm and so can be run with considerable efficiency on modern supercomputers. Such scalable codes permit us to simulate the behaviour of increasingly large quantities of complex condensed

  1. Anodization: a promising nano-modification technique of titanium implants for orthopedic applications.

    Science.gov (United States)

    Yao, Chang; Webster, Thomas J

    2006-01-01

    Anodization is a well-established surface modification technique that produces protective oxide layers on valve metals such as titanium. Many studies have used anodization to produce micro-porous titanium oxide films on implant surfaces for orthopedic applications. An additional hydrothermal treatment has also been used in conjunction with anodization to deposit hydroxyapatite on titanium surfaces; this is in contrast to using traditional plasma spray deposition techniques. Recently, the ability to create nanometer surface structures (e.g., nano-tubular) via anodization of titanium implants in fluorine solutions has intrigued investigators to fabricate nano-scale surface features that mimic the natural bone environment. This paper will present an overview of anodization techniques used to produce micro-porous titanium oxide structures and nano-tubular oxide structures, the subsequent properties of these anodized titanium surfaces, and ultimately their in vitro as well as in vivo biological responses pertinent to orthopedic applications. Lastly, this review will emphasize why anodized titanium structures that have nanometer surface features enhance bone-forming cell functions.

  2. [Corrosion resistant properties of different anodized microtopographies on titanium surfaces].

    Science.gov (United States)

    Fangjun, Huo; Li, Xie; Xingye, Tong; Yueting, Wang; Weihua, Guo; Weidong, Tian

    2015-12-01

    To investigate the corrosion resistant properties of titanium samples prepared by anodic oxidation with different surface morphologies. Pure titanium substrates were treated by anodic oxidation to obtain porous titanium films in micron, submicron, and micron-submicron scales. The surface morphologies, coating cross-sectional morphologies, crystalline structures, and surface roughness of these samples were characterized. Electrochemical technique was used to measure the corrosion potential (Ecorr), current density of corrosion (Icorr), and polarization resistance (Rp) of these samples in a simulated body fluid. Pure titanium could be modified to exhibit different surface morphologies by the anodic oxidation technique. The Tafel curve results showed that the technique can improve the corrosion resistance of pure titanium. Furthermore, the corrosion resistance varied with different surface morphologies. The submicron porous surface sample demonstrated the best corrosion resistance, with maximal Ecorr and Rp and minimal Icorr. Anodic oxidation technology can improve the corrosion resistance of pure titanium in a simulated body fluid. The submicron porous surface sample exhibited the best corrosion resistance because of its small surface area and thick barrier layer.
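
    For context on how the reported quantities relate, the polarization resistance and corrosion current density are commonly connected through the Stern–Geary relation (standard electrochemistry background, not a formula given in the record):

```latex
% Stern–Geary relation (standard electrochemistry background, not from the record):
% the corrosion current density is inversely proportional to the polarization
% resistance, with B built from the anodic and cathodic Tafel slopes.
\[
  I_{\mathrm{corr}} \;=\; \frac{B}{R_p},
  \qquad
  B \;=\; \frac{\beta_a\,\beta_c}{2.303\,(\beta_a + \beta_c)} ,
\]
% so a larger R_p and a smaller I_corr both indicate better corrosion resistance,
% consistent with the ranking of the anodized surfaces reported above.
```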

  3. The spectra of type IIB flux compactifications at large complex structure

    International Nuclear Information System (INIS)

    Brodie, Callum; Marsh, M.C. David

    2016-01-01

    We compute the spectra of the Hessian matrix, H, and the matrix M that governs the critical point equation of the low-energy effective supergravity, as a function of the complex structure and axio-dilaton moduli space in type IIB flux compactifications at large complex structure. We find both spectra analytically in an (h^{1,2}+3) real-dimensional subspace of the moduli space, and show that they exhibit a universal structure with highly degenerate eigenvalues, independently of the choice of flux, the details of the compactification geometry, and the number of complex structure moduli. In this subspace, the spectrum of the Hessian matrix contains no tachyons, but there are also no critical points. We show numerically that the spectra of H and M remain highly peaked over a large fraction of the sampled moduli space of explicit Calabi-Yau compactifications with 2 to 5 complex structure moduli. In these models, the scale of the supersymmetric contribution to the scalar masses is strongly linearly correlated with the value of the superpotential over almost the entire moduli space, with particularly strong correlations arising for g_s < 1. We contrast these results with the expectations from the much-used continuous flux approximation, and comment on the applicability of Random Matrix Theory to the statistical modelling of the string theory landscape.

  4. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo…

  5. The Expanded Large Scale Gap Test

    Science.gov (United States)

    1987-03-01

    NSWC TR 86-32: The Expanded Large Scale Gap Test, by T. P. Liddiard and D. Price, Research and Technology Department, March 1987; approved for public release. Recoverable abstract fragments: "… to reduce the spread in the LSGT 50% gap value. The worst charges, such as those with the highest or lowest densities, the largest re-pressed …"

  6. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  7. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  8. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  9. Sealing glasses for titanium and titanium alloys

    Science.gov (United States)

    Brow, Richard K.; McCollister, Howard L.; Phifer, Carol C.; Day, Delbert E.

    1997-01-01

    Barium lanthanoborate sealing-glass compositions are provided comprising various combinations (in terms of mole-%) of boron oxide (B2O3), barium oxide (BaO), lanthanum oxide (La2O3), and at least one other oxide selected from the group consisting of aluminum oxide (Al2O3), calcium oxide (CaO), lithium oxide (Li2O), sodium oxide (Na2O), silicon dioxide (SiO2), or titanium dioxide (TiO2). These sealing-glass compositions are useful for forming hermetic glass-to-metal seals with titanium and titanium alloys having an improved aqueous durability and favorable sealing characteristics. Examples of the sealing-glass compositions are provided having coefficients of thermal expansion about that of titanium or titanium alloys, and with sealing temperatures less than about 900 °C, and generally about 700–800 °C. The barium lanthanoborate sealing-glass compositions are useful for components and devices requiring prolonged exposure to moisture or water, and for implanted biomedical devices (e.g. batteries, pacemakers, defibrillators, pumps).

  10. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near term projects within High Energy Physics and other computing communities will deploy clusters of scale 1000s of processors and be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interest within HENP and the larger clustering community

  11. Study of multi-functional precision optical measuring system for large scale equipment

    Science.gov (United States)

    Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi

    2017-10-01

    The effective application of high performance measurement technology can greatly improve large-scale equipment manufacturing capability. Therefore, the measurement of geometric parameters such as size, attitude and position requires a measurement system with high precision, multiple functions, portability and other characteristics. However, existing measuring instruments, such as the laser tracker, total station and photogrammetry system, mostly have a single function, require station moves and have other shortcomings. The laser tracker needs to work with a cooperative target, and it can hardly meet the requirements of measurement in extreme environments. The total station is mainly used for outdoor surveying and mapping; it can hardly achieve the accuracy demanded in industrial measurement. The photogrammetry system can achieve wide-range multi-point measurement, but the measuring range is limited and the station needs to be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can work either by scanning the measurement path or by tracking and measuring a cooperative target. The system is based on several key technologies, such as absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of a complex mechanical system, and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures measurement with high accuracy, and the two-dimensional angle measurement module provides precision angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure the quality and performance of large-scale equipment throughout the manufacturing process and improve the manufacturing capability of large-scale and high-end equipment.

  12. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    , the impact of large-scale agriculture and outgrower schemes on productivity, household welfare and wages in developing countries is highly contentious. Chapter 1 of this thesis provides an introduction to the study, while also reviewing the key debate in the contemporary land ‘grabbing’ and historical large … sugarcane outgrower scheme on household income and asset stocks. Chapter 5 examines the wages and working conditions in ‘formal’ large-scale and ‘informal’ small-scale irrigated agriculture. The results in Chapter 2 show that moisture stress, the use of untested planting materials, and conflict over land … commands a higher wage than ‘formal’ large-scale agriculture, while rather different wage determination mechanisms exist in the two sectors. Human capital characteristics (education and experience) partly explain the differences in wages within the formal sector, but play no significant role

  13. Large scale chromatographic separations using continuous displacement chromatography (CDC)

    International Nuclear Information System (INIS)

    Taniguchi, V.T.; Doty, A.W.; Byers, C.H.

    1988-01-01

    A process for large scale chromatographic separations using a continuous chromatography technique is described. The process combines the advantages of large scale batch fixed column displacement chromatography with conventional analytical or elution continuous annular chromatography (CAC) to enable large scale displacement chromatography to be performed on a continuous basis (CDC). Such large scale, continuous displacement chromatography separations have not been reported in the literature. The process is demonstrated with the ion exchange separation of a binary lanthanide (Nd/Pr) mixture. The process is, however, applicable to any displacement chromatography separation that can be performed using conventional batch, fixed column chromatography

  14. Surface modification of titanium and titanium alloys by ion implantation.

    Science.gov (United States)

    Rautray, Tapash R; Narayanan, R; Kwon, Tae-Yub; Kim, Kyo-Han

    2010-05-01

    Titanium and titanium alloys are widely used in biomedical devices and components, especially as hard tissue replacements as well as in cardiac and cardiovascular applications, because of their desirable properties, such as relatively low modulus, good fatigue strength, formability, machinability, corrosion resistance, and biocompatibility. However, titanium and its alloys cannot meet all of the clinical requirements. Therefore, to improve the biological, chemical, and mechanical properties, surface modification is often performed. In view of this, the current review casts new light on surface modification of titanium and titanium alloys by ion beam implantation. (c) 2010 Wiley Periodicals, Inc.

  15. Challenges in Managing Trustworthy Large-scale Digital Science

    Science.gov (United States)

    Evans, B. J. K.

    2017-12-01

    The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories - model outputs including coupled models and ensembles, data products that have been processed to a level of usability, and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by the broad communities, far in excess of the raw instrument outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on the infrastructure that supports how the information is managed reliably across distributed resources. Users necessarily rely on these underlying "black boxes" so that they can be productive and produce new scientific outcomes. The software for these systems depends on computational infrastructure, software interconnected systems, and information capture systems. This ranges from the fundamentals of the reliability of the compute hardware, system software stacks and libraries, to the model software. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of the approach and robustness of the methods over full reproducibility. Furthermore, with large-volume data management, it is increasingly difficult to store the historical versions of all model and derived data. Instead, the emphasis is on the ability to access the updated products and the reliability with which previous outcomes remain relevant and can be updated with the new information. We will discuss these challenges and some of the approaches underway that are being used to address these issues.

  16. Gentamicin-Eluting Titanium Dioxide Nanotubes Grown on the Ultrafine-Grained Titanium.

    Science.gov (United States)

    Nemati, Sima Hashemi; Hadjizadeh, Afra

    2017-08-01

    Titanium (Ti)-based materials are among the most appropriate choices for application as orthopedic and dental implants. In this regard, ultrafine-grained (UFG) titanium, with enhanced mechanical properties and surface energy, has attracted increasing attention. Titanium dioxide (TiO2) nanotubes grown on titanium can enhance bone bonding and cellular response, and are good reservoirs for loading drugs and antibacterial agents. This article investigates gentamicin loading into and release from TiO2 nanotubes grown on UFG compared to coarse-grained (CG) titanium substrate surfaces. Equal Channel Angular Pressing (ECAP) was employed to produce the UFG-structure titanium. TiO2 nanotubes were grown by the anodizing technique on both UFG and CG titanium substrate surfaces. Scanning electron microscopy (SEM) imaging confirmed TiO2 nanotube growth on the surface. The UV-vis spectroscopy results show that the amount of gentamicin loaded into and released from the anodized UFG titanium sample is higher than that of the CG one, which can be explained in terms of the thicker TiO2 nanotube array layer formed on the UFG sample. Moreover, the anodized UFG titanium samples released the drug over a longer time than the CG ones (1 day for the UFG titanium vs. 3 h for the CG one). Regarding wettability, the anodized UFG titanium sample showed more enhanced hydrophilicity than its CG counterpart. Therefore, the significantly smaller grain size of pure titanium provided by the ECAP technique, coupled with an appropriate subsequent anodization treatment, not only offers a good combination of biocompatibility and adequate mechanical properties but also provides a delayed-release condition for gentamicin.

  17. Modified projective synchronization with complex scaling factors of uncertain real chaos and complex chaos

    International Nuclear Information System (INIS)

    Zhang Fang-Fang; Liu Shu-Tang; Yu Wei-Yong

    2013-01-01

    To increase the variety and security of communication, we present the definitions of modified projective synchronization with complex scaling factors (CMPS) of real chaotic systems and complex chaotic systems, where complex scaling factors establish a link between real chaos and complex chaos. Considering all situations of unknown parameters and pseudo-gradient condition, we design adaptive CMPS schemes based on the speed-gradient method for the real drive chaotic system and complex response chaotic system and for the complex drive chaotic system and the real response chaotic system, respectively. The convergence factors and dynamical control strength are added to regulate the convergence speed and increase robustness. Numerical simulations verify the feasibility and effectiveness of the presented schemes. (general)
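
    As background on the synchronization notion defined here, a generic sketch of the CMPS error in common textbook-style notation (not the record's exact equations) is:

```latex
% Generic form of the CMPS error (background sketch, not the record's notation):
% the response state y(t) should track the drive state x(t) scaled component-wise
% by complex factors d_k = d_k^r + i d_k^i.
\[
  e(t) \;=\; y(t) \;-\; D\,x(t),
  \qquad
  D \;=\; \mathrm{diag}\bigl(d_1, d_2, \ldots, d_n\bigr),
  \quad d_k \in \mathbb{C},
\]
% CMPS is achieved when the error vanishes asymptotically; real scaling factors
% recover ordinary modified projective synchronization.
\[
  \lim_{t \to \infty} \lVert e(t) \rVert \;=\; 0 .
\]
```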

  18. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in the last years as a new tool to improve the traditional, stationary based approach in flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies and the role of large scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space as obtained by machine learning techniques, particularly supervised kernel principal component analysis. In such reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activities. We investigate for individual sites the exceedance probability in which large scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).

  19. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    Science.gov (United States)

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of the traditional algorithms, which are only applicable to isotropic networks, and therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating the hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
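
    A minimal sketch of a ridge-regularized extreme learning machine that maps hop counts to physical distances, in the spirit of the modeling stage described above (the network, anchors and hyper-parameters are synthetic illustrations, not the authors' setup):

```python
# Sketch: regularized extreme learning machine (ELM) that learns a mapping from
# hop-count vectors (to a set of anchor nodes) to physical distances.
# Purely illustrative: the data and hyper-parameters are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_train, n_anchors, n_hidden, lam = 300, 8, 100, 1e-2

# Synthetic training data: hop counts roughly proportional to distance plus noise.
distances = rng.uniform(10.0, 200.0, size=(n_train, n_anchors))
hops = np.rint(distances / 25.0 + rng.normal(scale=0.5, size=distances.shape))

# Random hidden layer (fixed, as in ELM), sigmoid activation.
W = rng.normal(size=(n_anchors, n_hidden))
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(hops @ W + b)))

# Ridge-regularized output weights (the "regularized" part of regularized ELM).
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ distances)

# Predict physical distances for new nodes from their hop counts.
new_hops = hops[:5]
pred = 1.0 / (1.0 + np.exp(-(new_hops @ W + b))) @ beta
print("predicted distances to anchors (first node):", np.round(pred[0], 1))
```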

  20. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  1. Environmental effects in titanium aluminide alloys

    International Nuclear Information System (INIS)

    Thompson, A.W.

    1991-01-01

    Environmental effects on titanium aluminide alloys are potentially of great importance for engineering applications of these materials, although little has been published to date on such effects. The primary emphasis in this paper is on hydrogen effects, with a brief reference to oxygen effects. Hydrogen is readily absorbed at elevated temperature into all the titanium aluminide compositions studied to date, in amounts as large as 10 at.%, and on cooling virtually all this hydrogen is precipitated as a hydride phase or phases. The presence of these precipitated hydride plates affects mechanical properties in ways similar to what is observed in other hydride forming materials, although effects per unit volume of hydride are not particularly severe in the titanium aluminides. Microstructure, and thus thermal and mechanical history, plays a major role in controlling the severity of hydrogen effects

  2. Spectroscopy and titanium gettering in SPHEX

    International Nuclear Information System (INIS)

    Cunningham, G.; Giroud, C.; Summers, H.; Commission of the European Communities, Abingdon

    1994-01-01

    SPHEX is a spheromak wherein the toroidal and poloidal currents are generated and sustained by direct current injection from a Marshall gun, and organised by the effect of magnetic relaxation. In the past it has not achieved high temperature (Langmuir probes indicate a flat profile of about 20 eV), and this was thought to be due to radiation from impurities originating in the Marshall gun. For this paper, titanium has been applied to the plasma-facing surface of the flux conserver in an attempt to reduce impurity levels and plasma density. Calibrated spectrometers were used to measure plasma properties and impurity levels, both before and after application of titanium. The titanium is also found to have a surprisingly large effect on the magnetic properties, which gives some evidence regarding the relaxation mechanism. 7 refs., 2 figs., 1 tab

  3. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylemethacrylate; (2) gas dynamic and heat transport through a large scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamic and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10μ) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably due to the fire module of the code being a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  4. Reconstructing Information in Large-Scale Structure via Logarithmic Mapping

    Science.gov (United States)

    Szapudi, Istvan

    We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse error bar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear Dark Matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out
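
    A toy sketch of the logarithmic mapping at the heart of this proposal (illustrative only: the lognormal field, grid size and statistics below are stand-ins, not the proposal's actual pipeline):

```python
# Toy illustration of the logarithmic density mapping A = ln(1 + delta):
# for a lognormal-like overdensity field the transform restores near-Gaussianity,
# which is the property exploited to recover Fisher information.
# Everything here (field, grid, statistics) is synthetic and illustrative.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(4)
n = 128

# Build a correlated Gaussian field g on a 2-D grid, then exponentiate it to get
# a lognormal overdensity field delta with zero mean.
white = rng.normal(size=(n, n))
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.sqrt(kx**2 + ky**2)
filt = np.where(k > 0, k**-1.2, 0.0)                 # red power spectrum
g = np.fft.ifft2(np.fft.fft2(white) * filt).real
g = (g - g.mean()) / g.std()
delta = np.exp(g - 0.5 * g.var()) - 1.0              # lognormal, zero mean

A = np.log1p(delta)                                  # the logarithmic mapping

print(f"skewness of delta      : {skew(delta.ravel()):+.2f}")
print(f"skewness of ln(1+delta): {skew(A.ravel()):+.2f}  (closer to Gaussian)")
```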

  5. Network Partitioning Domain Knowledge Multiobjective Application Mapping for Large-Scale Network-on-Chip

    Directory of Open Access Journals (Sweden)

    Yin Zhen Tei

    2014-01-01

    Full Text Available This paper proposes a multiobjective application mapping technique targeted at large-scale network-on-chip (NoC. As the number of intellectual property (IP cores in multiprocessor system-on-chip (MPSoC increases, NoC application mapping to find the optimum core-to-topology mapping becomes more challenging. Besides, the conflicting cost and performance trade-off makes multiobjective application mapping techniques even more complex. This paper proposes an application mapping technique that incorporates domain knowledge into a genetic algorithm (GA. The initial population of the GA is initialized with network partitioning (NP, while the crossover operator is guided with knowledge of communication demands. NP reduces the complexity of large-scale application mapping and provides the GA with a promising mapping search space. The proposed genetic operator is compared with state-of-the-art genetic operators in terms of solution quality. In this work, multiobjective optimization of energy and thermal balance is considered. Through simulation, knowledge-based initial mapping shows significant improvement in the Pareto front compared to the widely used random initial mapping. The proposed knowledge-based crossover also shows a better Pareto front compared to state-of-the-art knowledge-based crossover.

  6. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai

    2014-01-01

    Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity of model fitting and prediction grows cubically with the size of the dataset, so application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce the computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and application to an ozone measurement dataset.

  7. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed; Elsawy, Hesham; Gharbieh, Mohammad; Alouini, Mohamed-Slim; Adinoyi, Abdulkareem; Alshaalan, Furaih

    2017-01-01

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end

  8. A large-scale forest fragmentation experiment: the Stability of Altered Forest Ecosystems Project

    Science.gov (United States)

    Ewers, Robert M.; Didham, Raphael K.; Fahrig, Lenore; Ferraz, Gonçalo; Hector, Andy; Holt, Robert D.; Kapos, Valerie; Reynolds, Glen; Sinun, Waidi; Snaddon, Jake L.; Turner, Edgar C.

    2011-01-01

    Opportunities to conduct large-scale field experiments are rare, but provide a unique opportunity to reveal the complex processes that operate within natural ecosystems. Here, we review the design of existing, large-scale forest fragmentation experiments. Based on this review, we develop a design for the Stability of Altered Forest Ecosystems (SAFE) Project, a new forest fragmentation experiment to be located in the lowland tropical forests of Borneo (Sabah, Malaysia). The SAFE Project represents an advance on existing experiments in that it: (i) allows discrimination of the effects of landscape-level forest cover from patch-level processes; (ii) is designed to facilitate the unification of a wide range of data types on ecological patterns and processes that operate over a wide range of spatial scales; (iii) has greater replication than existing experiments; (iv) incorporates an experimental manipulation of riparian corridors; and (v) embeds the experimentally fragmented landscape within a wider gradient of land-use intensity than do existing projects. The SAFE Project represents an opportunity for ecologists across disciplines to participate in a large initiative designed to generate a broad understanding of the ecological impacts of tropical forest modification. PMID:22006969

  9. Incipient multiple fault diagnosis in real time with applications to large-scale systems

    International Nuclear Information System (INIS)

    Chung, H.Y.; Bien, Z.; Park, J.H.; Seon, P.H.

    1994-01-01

    By using a modified signed directed graph (SDG) together with distributed artificial neural networks and a knowledge-based system, a method of incipient multi-fault diagnosis is presented for large-scale physical systems with complex pipes and instrumentation such as valves, actuators, sensors, and controllers. The proposed method is designed so as to (1) make real-time incipient fault diagnosis possible for large-scale systems, (2) perform the fault diagnosis not only in the steady-state case but also in the transient case by using a concept of fault propagation time, which is newly adopted in the SDG model, (3) provide highly reliable diagnosis results and an explanation capability for the diagnosed faults, as in an expert system, and (4) diagnose pipe damage such as leaking, breaks, or throttling. This method is applied to the diagnosis of a pressurizer in the Kori Nuclear Power Plant (NPP) unit 2 in Korea under a transient condition, and its result is reported to show satisfactory performance of the method for incipient multi-fault diagnosis of such a large-scale system in a real-time manner

  10. A large-scale forest fragmentation experiment: the Stability of Altered Forest Ecosystems Project.

    Science.gov (United States)

    Ewers, Robert M; Didham, Raphael K; Fahrig, Lenore; Ferraz, Gonçalo; Hector, Andy; Holt, Robert D; Kapos, Valerie; Reynolds, Glen; Sinun, Waidi; Snaddon, Jake L; Turner, Edgar C

    2011-11-27

    Opportunities to conduct large-scale field experiments are rare, but provide a unique opportunity to reveal the complex processes that operate within natural ecosystems. Here, we review the design of existing, large-scale forest fragmentation experiments. Based on this review, we develop a design for the Stability of Altered Forest Ecosystems (SAFE) Project, a new forest fragmentation experiment to be located in the lowland tropical forests of Borneo (Sabah, Malaysia). The SAFE Project represents an advance on existing experiments in that it: (i) allows discrimination of the effects of landscape-level forest cover from patch-level processes; (ii) is designed to facilitate the unification of a wide range of data types on ecological patterns and processes that operate over a wide range of spatial scales; (iii) has greater replication than existing experiments; (iv) incorporates an experimental manipulation of riparian corridors; and (v) embeds the experimentally fragmented landscape within a wider gradient of land-use intensity than do existing projects. The SAFE Project represents an opportunity for ecologists across disciplines to participate in a large initiative designed to generate a broad understanding of the ecological impacts of tropical forest modification.

  11. Electrodeposition of amine-terminated poly(ethylene glycol) to titanium surface

    International Nuclear Information System (INIS)

    Tanaka, Yuta; Doi, Hisashi; Iwasaki, Yasuhiko; Hiromoto, Sachiko; Yoneyama, Takayuki; Asami, Katsuhiko; Imai, Hachiro; Hanawa, Takao

    2007-01-01

    The immobilization of poly(ethylene glycol), PEG, to a solid surface is useful for functionalizing the surface, e.g., to prevent the adsorption of proteins. No successful one-stage technique for the immobilization of PEG to base metals has ever been developed. In this study, PEG in which both terminals or one terminal had been modified with amine bases was immobilized onto a titanium surface using electrodeposition. PEG was dissolved in a NaCl solution, and electrodeposition was carried out at 310 K and -5 V for 300 min. The thickness of the deposited PEG layer was evaluated using ellipsometry, and the bonding manner of PEG to the titanium surface was characterized using X-ray photoelectron spectroscopy after electrodeposition. The results indicated that a certain amount of PEG was adsorbed on titanium through both electrodeposition and immersion when PEG was amine-terminated. However, the terminal amines existed at the surface of titanium and were combined with titanium oxide as N-HO by electrodeposition, while amines randomly located in the molecule showed an ionic bond with titanium oxide by immersion. The electrodeposition of PEG was effective for the inhibition of albumin adsorption. This process is useful for materials that have electroconductivity and a complex morphology

  12. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated by a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.

  13. The PREP Pipeline: Standardized preprocessing for large-scale EEG analysis

    Directory of Open Access Journals (Sweden)

    Nima Bigdely-Shamlo

    2015-06-01

    Full Text Available The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode/.

  14. The PREP pipeline: standardized preprocessing for large-scale EEG analysis.

    Science.gov (United States)

    Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A

    2015-01-01

    The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode.
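
    The core idea of the robust referencing stage can be sketched as follows: estimate the average reference only from channels that currently look clean, re-detect noisy channels relative to that reference, and iterate. The snippet below is a simplified illustration in Python, not the PREP MATLAB code; the deviation criterion and thresholds are assumptions made only for this sketch.

    ```python
    # Simplified illustration of a robust average reference (not the PREP code).
    import numpy as np

    def robust_average_reference(eeg, n_iter=4, z_thresh=5.0):
        """eeg: (n_channels, n_samples) array, processed in double precision."""
        data = np.asarray(eeg, dtype=np.float64)
        good = np.ones(data.shape[0], dtype=bool)
        for _ in range(n_iter):
            ref = data[good].mean(axis=0)          # reference from channels judged clean
            spread = (data - ref).std(axis=1)      # per-channel amplitude spread
            med = np.median(spread)
            mad = np.median(np.abs(spread - med)) + 1e-12
            good = np.abs(spread - med) / (1.4826 * mad) < z_thresh
        return data - data[good].mean(axis=0), good

    # Synthetic example: channel 3 is made very noisy and should be flagged.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 1000))
    x[3] *= 50.0
    referenced, good_mask = robust_average_reference(x)
    print(good_mask)
    ```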

  15. Extending SME to Handle Large-Scale Cognitive Modeling.

    Science.gov (United States)

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into the SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time, O(n^2 log(n)); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.
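
    As an illustration of the first technique, greedy merging amounts to accepting local match hypotheses in order of decreasing score whenever they remain structurally consistent with the mapping built so far. The sketch below is only a schematic rendering of that idea; SME's actual kernels, score computation and structural-consistency constraints are considerably richer.

    ```python
    # Schematic greedy merging of scored match hypotheses (illustrative only).
    def greedy_merge(hypotheses, consistent):
        """hypotheses: list of (score, base_item, target_item) tuples.
        consistent: predicate deciding whether a pair may join the mapping."""
        mapping, used_base, used_target = [], set(), set()
        for score, b, t in sorted(hypotheses, reverse=True):      # best first
            if b in used_base or t in used_target:
                continue                                          # keep mapping one-to-one
            if consistent(mapping, (b, t)):
                mapping.append((b, t))
                used_base.add(b)
                used_target.add(t)
        return mapping

    # Toy usage with structural consistency reduced to "always allowed".
    hyps = [(0.9, "sun", "nucleus"), (0.8, "planet", "electron"), (0.3, "sun", "electron")]
    print(greedy_merge(hyps, lambda mapping, pair: True))
    ```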

  16. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In future power systems with additional wind power capacity, there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising w...

  17. An efficient hybrid, nanostructured, epoxidation catalyst: titanium silsesquioxane-polystyrene copolymer supported on SBA-15

    NARCIS (Netherlands)

    Santen, van R.A.; Zhang, Lei; Abbenhuis, H.C.L.; Gerritsen, G.; Ní Bhriain, N.M.; Magusin, P.C.M.M.; Mezari, B.; Han, W.; Yang, Q.; Li, Can

    2007-01-01

    A novel interfacial hybrid epoxidation catalyst was designed with a new immobilization method for homogeneous catalysts by coating an inorganic support with an organic polymer film containing active sites. The titanium silsesquioxane (TiPOSS) complex, which contains a single-site titanium active

  18. Structural Quality of Service in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup

    Digitalization has created the base for co-existence and convergence in communications, leading to an increasing use of multi service networks. This is for example seen in the Fiber To The Home implementations, where a single fiber is used for virtually all means of communication, including TV, telephony and data. To meet the requirements of the different applications, and to handle the increased vulnerability to failures, the ability to design robust networks providing good Quality of Service is crucial. However, most planning of large-scale networks today is ad-hoc based, leading to highly complex networks lacking predictability and global structural properties. The thesis applies the concept of Structural Quality of Service to formulate desirable global properties, and it shows how regular graph structures can be used to obtain such properties.

  19. Large-Scale Structure and Hyperuniformity of Amorphous Ices

    Science.gov (United States)

    Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto

    2017-09-01

    We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.

  20. Large-scale modeling of epileptic seizures: scaling properties of two parallel neuronal network simulation algorithms.

    Science.gov (United States)

    Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  1. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    Directory of Open Access Journals (Sweden)

    Lorenzo L. Pesce

    2013-01-01

    Full Text Available Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  2. Solving Large-Scale TSP Using a Fast Wedging Insertion Partitioning Approach

    Directory of Open Access Journals (Sweden)

    Zuoyong Xiang

    2015-01-01

    Full Text Available A new partitioning method, called Wedging Insertion, is proposed for solving the large-scale symmetric Traveling Salesman Problem (TSP). The idea of our proposed algorithm is to cut a TSP tour into four segments by nodes' coordinates (not by rectangles, as in Strip, FRP, and Karp). Each node, apart from four particular nodes, is located in one of these segments, and the segments do not twist around each other. After the partitioning process, the algorithm utilizes a traditional construction method, the insertion method, within each segment to improve the quality of the tour, and then connects the starting node and the ending node of each segment to obtain the complete tour. In order to test the performance of our proposed algorithm, we conduct experiments on various TSPLIB instances. The experimental results show that our proposed algorithm is more efficient for solving large-scale TSPs. Specifically, our approach considerably reduces the running time of the algorithm while losing only about 10% in solution quality.
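
    The per-segment construction step mentioned above is a classical insertion heuristic. The sketch below shows a plain cheapest-insertion tour for one set of nodes; the coordinate-based wedging partition into four segments and the final reconnection of segment endpoints are omitted, and the coordinates are invented.

    ```python
    # Cheapest-insertion construction for one segment of nodes (illustrative only).
    import math

    def cheapest_insertion(points):
        """points: list of (x, y) tuples. Returns a cyclic tour ordering."""
        d = math.dist
        tour = list(points[:2])                 # seed the tour with the first two nodes
        rest = list(points[2:])
        while rest:
            best = None
            for p in rest:
                for i in range(1, len(tour) + 1):
                    prev, nxt = tour[i - 1], tour[i % len(tour)]
                    cost = d(prev, p) + d(p, nxt) - d(prev, nxt)   # detour cost
                    if best is None or cost < best[0]:
                        best = (cost, p, i)
            _, p, i = best
            tour.insert(i, p)                   # insert at the cheapest position
            rest.remove(p)
        return tour

    print(cheapest_insertion([(0, 0), (3, 0), (1, 1), (2, 2), (0, 3)]))
    ```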

  3. Factors influencing catalytic behavior of titanium complexes bearing bisphenolate ligands toward ring-opening polymerization of L-lactide and ε-caprolactone

    Directory of Open Access Journals (Sweden)

    M-T. Jiang

    2018-02-01

    Full Text Available A series of titanium complexes bearing substituted diphenolate ligands (RCH(phenolate)2, where R = H, CH3, o-OTs-phenyl, o-F-phenyl, o-OMe-phenyl, 2,4-OMe-phenyl) was synthesized and studied as catalysts for the ring-opening polymerization of L-lactide and ε-caprolactone. The ligands were designed to probe the roles of the chelate effect and the steric effect in the catalytic performance. From the structure of the Ti complex of a triphenolate ligand (with one more coordination site than the diphenolate), TriOTiOiPr2, we found no additional chelation influencing the catalytic activity of the Ti complexes. It was found that bulky aryl groups on the diphenolate ligands decreased the rate of polymerization the most. We conclude that the steric effect is the controlling factor in these polymerization reactions with Ti diphenolate catalysts, since the bulky substituents on the catalyst exclude the space needed by the incoming monomer.

  4. In Vitro Phototoxicity and Hazard Identification of Nano-scale Titanium Dioxide

    Science.gov (United States)

    Nano-titanium dioxide (nano-TiO2) catalyzes many reactions under UV radiation and is hypothesized to cause phototoxicity. A human-derived line of retinal pigment epithelial cells (ARPE-19) was treated with six different samples of nano-TiO2 and exposed to UVA radiation. The TiO2 ...

  5. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    Science.gov (United States)

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994

  6. Implicit solvers for large-scale nonlinear problems

    International Nuclear Information System (INIS)

    Keyes, David E; Reynolds, Daniel R; Woodward, Carol S

    2006-01-01

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications
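
    As a small, generic illustration of the approach (not of the SciDAC software surveyed in the paper), the sketch below performs one implicit backward-Euler step for a simple nonlinear system by handing the discrete nonlinear residual to SciPy's Newton-Krylov solver; the model equation and step size are invented for demonstration.

    ```python
    # One implicit (backward-Euler) step for du/dt = -u**3 + 1, solved with a
    # Jacobian-free Newton-Krylov method from SciPy.
    import numpy as np
    from scipy.optimize import newton_krylov

    dt = 0.1
    u_old = np.zeros(50)

    def residual(u_new):
        # F(u_new) = 0 defines the implicit time step.
        return u_new - u_old - dt * (-u_new**3 + 1.0)

    u_new = newton_krylov(residual, u_old, f_tol=1e-10)
    print(u_new[:3])
    ```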

  7. Double inflation: A possible resolution of the large-scale structure problem

    International Nuclear Information System (INIS)

    Turner, M.S.; Villumsen, J.V.; Vittorio, N.; Silk, J.; Juszkiewicz, R.

    1986-11-01

    A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Ω = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of ∼100 Mpc, while the small-scale structure over ≤ 10 Mpc resembles that in a low density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations. 38 refs., 6 figs

  8. Large-scale fracture mechanics testing -- requirements and possibilities

    International Nuclear Information System (INIS)

    Brumovsky, M.

    1993-01-01

    Application of fracture mechanics to very important and/or complicated structures, like reactor pressure vessels, also raises questions about the reliability and precision of such calculations. These problems become more pronounced under elastic-plastic loading conditions and/or in parts with non-homogeneous materials (base metal and austenitic cladding, property gradients through the material thickness) or with non-homogeneous stress fields (nozzles, bolt threads, residual stresses, etc.). For such special cases some verification by large-scale testing is necessary and valuable. This paper discusses problems connected with the planning of such experiments with respect to their limitations and the requirements for a good transfer of the results to an actual vessel. At the same time, an analysis of the possibilities of small-scale model experiments is also given, mostly in connection with transferring results between standard, small-scale and large-scale experiments. Experience from 30 years of large-scale testing at SKODA is used as an example to support this analysis. 1 fig

  9. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    The paper discusses which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  10. Innovation Processes in Large-Scale Public Foodservice-Case Findings from the Implementation of Organic Foods in a Danish County

    DEFF Research Database (Denmark)

    Mikkelsen, Bent Egberg; Nielsen, Thorkild; Kristensen, Niels Heine

    2005-01-01

    One idea is that large-scale foodservice, such as hospital food service, should adopt a buy-organic policy due to its large buying volume. But whereas the implementation of organic foods has developed quite unproblematically in smaller institutions such as kindergartens and nurseries, the introduction of organic foods into large-scale foodservice, such as that taking place in hospitals and larger homes for the elderly, has proven to be quite difficult. The very complex planning, procurement and processing procedures used in such facilities are among the reasons for this. Against this background an evaluation...

  11. Discriminant WSRC for Large-Scale Plant Species Recognition

    Directory of Open Access Journals (Sweden)

    Shanwen Zhang

    2017-01-01

    Full Text Available In sparse representation based classification (SRC) and weighted SRC (WSRC), it is time-consuming to solve the global sparse representation problem. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, including two stages. Firstly, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen by the maximum similarity between the test sample and the typical sample of each similar class. Secondly, the weighted sparse representation of the test image is calculated with respect to the chosen subdictionary, and then the leaf category is assigned through the minimum reconstruction error. Different from the traditional SRC and its improved approaches, we sparsely represent the test sample on a subdictionary whose base elements are the training samples of the selected similar class, instead of using the generic overcomplete dictionary on the entire training samples. Thus, the complexity of solving the sparse representation problem is reduced. Moreover, DWSRC is adapted to newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity and high recognition rate and can be clearly interpreted.
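
    A schematic rendering of the two-stage procedure is sketched below: first pick the sub-dictionary (similar-class group) whose typical sample is most similar to the test sample, then represent the test sample on that sub-dictionary only and classify by minimum reconstruction error. For simplicity the weighted sparse coding step is replaced here by a distance-weighted ridge solve, and all data, group names and class labels are synthetic; this is not the authors' implementation.

    ```python
    # Two-stage sub-dictionary classification, heavily simplified.
    import numpy as np

    def classify(test, subdicts, lam=0.1):
        """subdicts: {group: {class_label: (n_features, n_samples) array}}."""
        def similarity(group):
            typical = np.hstack(list(subdicts[group].values())).mean(axis=1)
            return test @ typical / (np.linalg.norm(test) * np.linalg.norm(typical) + 1e-12)
        group = max(subdicts, key=similarity)          # stage 1: choose sub-dictionary

        best_label, best_err = None, np.inf
        for label, D in subdicts[group].items():       # stage 2: weighted representation
            dist = np.linalg.norm(D - test[:, None], axis=0)   # distance to each atom
            coef = np.linalg.solve(D.T @ D + lam * np.diag(dist + 1e-6), D.T @ test)
            err = np.linalg.norm(test - D @ coef)
            if err < best_err:
                best_label, best_err = label, err
        return best_label

    rng = np.random.default_rng(1)
    subdicts = {"group_a": {"rose": rng.random((20, 5)), "tulip": rng.random((20, 5))},
                "group_b": {"oak": -rng.random((20, 5)), "pine": -rng.random((20, 5))}}
    test = subdicts["group_a"]["rose"][:, 0] + 0.01 * rng.standard_normal(20)
    print(classify(test, subdicts))  # expected to pick 'rose'
    ```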

  12. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark, and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m as the water sank into the voids between the stones on the crest. For low overtopping scale effects...

  13. Stationarity of resonant pole trajectories in complex scaling

    International Nuclear Information System (INIS)

    Canuto, S.; Goscinski, O.

    1978-01-01

    A reciprocity theorem relating the real parameters η and α that define the complex scaling transformation r → ηr e^{iα} in the theory of complex scaling for resonant states is demonstrated. The virial theorem is used in connection with the stationarity of the pole trajectory. The Stark broadening in the hydrogen atom, using a basis set generated by Rayleigh-Schroedinger perturbation theory, is treated as an example. 18 references
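
    For orientation, the transformation and the stationarity property referred to here can be written in their standard textbook form (a generic restatement, not a reproduction of the paper's equations):

    ```latex
    % Complex scaling: dilation of the radial coordinate by a complex factor.
    % Resonances appear as complex-energy poles whose positions should be
    % stationary with respect to the (unphysical) scaling parameters.
    \[
      r \;\longrightarrow\; \eta\, r\, e^{i\alpha}, \qquad
      E_{\mathrm{res}} = E_{r} - \tfrac{i}{2}\,\Gamma , \qquad
      \frac{\partial E_{\mathrm{res}}}{\partial \eta}
      = \frac{\partial E_{\mathrm{res}}}{\partial \alpha} = 0 .
    \]
    ```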

  14. Low-temperature atmospheric oxidation of mixtures of titanium and carbon black or boron

    International Nuclear Information System (INIS)

    Elizarova, V.A.; Babaitsev, I.V.; Barzykin, V.V.; Gerusova, V.P.; Rozenband, V.I.

    1984-01-01

    This article reports on the thermogravimetric investigation of mixtures of titanium no. 2 and carbon black with various mass carbon contents. Adding carbon black (as opposed to boron) to titanium leads to an increase in the rate of heat release of the oxidation reaction. An attempt is made to clarify the low-temperature oxidation mechanism of titanium mixtures in air. An x-ray phase and chemical (for bound carbon) analysis of specimens of a stoichiometric Ti + C mixture after heating in air to a temperature of 650 °C at a rate of 10 °C/min was conducted. The results indicate that the oxidation of the titanium-carbon mixture probably proceeds according to a more complex mechanism associated with the transport of the gaseous carbon oxidation products and their participation in the titanium oxidation

  15. Oxidation behaviour of titanium in high temperature steam

    Energy Technology Data Exchange (ETDEWEB)

    Moroishi, T; Shida, Y [Sumitomo Metal Industries Ltd., Amagasaki, Hyogo (Japan). Central Research Labs.

    1978-03-01

    The oxidation of pure titanium was studied in superheated steam at 400-550 °C. The effects of prior cold working and several heat treatment conditions on the oxidation were examined and also the effects of the addition of small amounts of iron and oxygen were investigated. The oxidation mechanism of pure titanium is discussed in relation to the scale structure and the oxidation kinetics. Hydrogen absorption rate was also measured. As a result, the following conclusions were drawn: (1) The oxidation of pure titanium in steam was faster than in air and breakaway oxidation was observed above 500 °C after the specimen had gained a certain weight. Prior cold working and heat treatment conditions scarcely affected the oxidation rate, whereas the specimen containing small amounts of iron and oxygen showed a little more rapid oxidation. (2) At 500 and 550 °C a dark grey inner scale and a yellow-brown outer scale were formed. The outer scale was apt to exfoliate after the occurrence of breakaway oxidation. At 400 and 450 °C only a dark grey scale was observed. All of these oxides were identified as the rutile type, TiO2. Furthermore, the presence of a thin and uniform oxygen rich layer beneath the external scale was confirmed at all test temperatures. (3) The measured weight gain approximately followed the cubic rate law; this would be expected for the following reason: one component of the weight gain is due to the dissolved oxygen, the amount of which remains constant after the early stages of oxidation. The second component is due to the parabolic growth of the external TiO2 scale. When these contributions are added, a pseudo-cubic weight gain curve results. (4) It was shown that 50 percent of the hydrogen generated during the oxidation was absorbed into the metal.
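
    The decomposition described in conclusion (3) can be restated in symbols as follows (a hedged paraphrase; the rate constants are not given in the abstract):

    ```latex
    % Constant dissolved-oxygen term plus a parabolically growing TiO2 scale
    % give a pseudo-cubic total weight gain.
    \[
      \Delta w(t) \;\approx\; \Delta w_{\mathrm{O}} \;+\; \sqrt{k_{p}\, t}
      \qquad\text{so that, over the measured range,}\qquad
      \Delta w(t)^{3} \;\approx\; k_{c}\, t ,
    \]
    where $\Delta w_{\mathrm{O}}$ is the (roughly constant) contribution of dissolved
    oxygen after the early stages, and $k_{p}$, $k_{c}$ are apparent parabolic and
    cubic rate constants.
    ```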

  16. Improving Large-scale Storage System Performance via Topology-aware and Balanced Data Placement

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Vazhkudai, Sudharshan S [ORNL

    2014-01-01

    With the advent of big data, the I/O subsystems of large-scale compute clusters are becoming a center of focus, with more applications putting greater demands on end-to-end I/O performance. These subsystems are often complex in design. They comprise multiple hardware and software layers to cope with the increasing capacity, capability and scalability requirements of data intensive applications. The shared nature of storage resources and the intrinsic interactions across these layers make it a great challenge to realize user-level, end-to-end performance gains. We propose a topology-aware resource load balancing strategy to improve per-application I/O performance. We demonstrate the effectiveness of our algorithm on an extreme-scale compute cluster, Titan, at the Oak Ridge Leadership Computing Facility (OLCF). Our experiments with both synthetic benchmarks and a real-world application show that, even under congestion, our proposed algorithm can improve large-scale application I/O performance significantly, resulting in both the reduction of application run times and higher resolution simulation runs.

  17. Present status of titanium removable dentures--a review of the literature.

    Science.gov (United States)

    Ohkubo, C; Hanatani, S; Hosoi, T

    2008-09-01

    Although porcelain and zirconium oxide might be used for fixed partial dental prostheses instead of conventional dental metals in the near future, removable partial denture (RPD) frameworks will probably continue to be cast with biocompatible metals. Commercially pure (CP) titanium has appropriate mechanical properties, it is lightweight (low density) compared with conventional dental alloys, and has outstanding biocompatibility that prevents metal allergic reactions. This literature review describes the laboratory conditions needed for fabricating titanium frameworks and the present status of titanium removable prostheses. The use of titanium for the production of cast RPD frameworks has gradually increased. There are no reports about metallic allergy apparently caused by CP titanium dentures. The laboratory drawbacks still remain, such as the lengthy burn-out, inferior castability and machinability, reaction layer formed on the cast surface, difficulty of polishing, and high initial costs. However, the clinical problems, such as discoloration of the titanium surfaces, unpleasant metal taste, decrease of clasp retention, tendency for plaque to adhere to the surface, detachment of the denture base resin, and severe wear of titanium teeth, have gradually been resolved. Titanium RPD frameworks have never been reported to fail catastrophically. Thus, titanium is recommended as protection against metal allergy, particularly for large-sized prostheses such as RPDs or complete dentures.

  18. Mechanical compatibility of sol-gel annealing with titanium for orthopaedic prostheses.

    Science.gov (United States)

    Greer, Andrew I M; Lim, Teoh S; Brydone, Alistair S; Gadegaard, Nikolaj

    2016-01-01

    Sol-gel processing is an attractive method for large-scale surface coating due to its facile and inexpensive preparation, even with the inclusion of precision nanotopographies. These are desirable traits for metal orthopaedic prostheses where ceramic coatings are known to be osteoinductive and the effects may be amplified through nanotexturing. However there are a few concerns associated with the application of sol-gel technology to orthopaedics. Primarily, the annealing stage required to transform the sol-gel into a ceramic may compromise the physical integrity of the underlying metal. Secondly, loose particles on medical implants can be carcinogenic and cause inflammation so the coating needs to be strongly bonded to the implant. These concerns are addressed in this paper. Titanium, the dominant material for orthopaedics at present, is examined before and after sol-gel processing for changes in hardness and flexural modulus. Wear resistance, bending and pull tests are also performed to evaluate the ceramic coating. The findings suggest that sol-gel coatings will be compatible with titanium implants for an optimum temperature of 500 °C.

  19. Analysis Methods for Extracting Knowledge from Large-Scale WiFi Monitoring to Inform Building Facility Planning

    DEFF Research Database (Denmark)

    Ruiz-Ruiz, Antonio; Blunck, Henrik; Prentow, Thor Siiger

    2014-01-01

    The optimization of logistics in large building complexes with many resources, such as hospitals, requires realistic facility management and planning. Current planning practices rely foremost on manual observations or coarse unverified assumptions and therefore do not properly scale or provide realistic data to inform facility planning. In this paper, we propose analysis methods to extract knowledge from large sets of network collected WiFi traces to better inform facility management and planning in large building complexes. The analysis methods, which build on a rich set of temporal and spatial... Spatio-temporal visualization tools built on top of these methods enable planners to inspect and explore extracted information to inform facility-planning activities. To evaluate the methods, we present results for a large hospital complex covering more than 10 hectares. The evaluation is based on Wi...

  20. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  1. Large-scale structure of the Universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1978-01-01

    The problems discussed at the "Large-scale Structure of the Universe" symposium are considered at a popular level. Described are the cell structure of galaxy distribution in the Universe and the principles of mathematical modelling of the galaxy distribution. Images of cell structures obtained after computer processing are given. Three hypotheses (vortical, entropic and adiabatic), suggesting various processes for the origin of galaxies and galaxy clusters, are discussed, and a considerable advantage of the adiabatic hypothesis is recognized. The relict radiation is considered as a method of directly studying the processes taking place in the Universe. The large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the perturbation properties at the pre-galaxy stage. The discussion of problems pertaining to the study of the hot gas contained in galaxy clusters and of the interactions within galaxy clusters and with the inter-galaxy medium is recognized as a notable contribution to the development of theoretical and observational cosmology

  2. Iteratively-coupled propagating exterior complex scaling method for electron-hydrogen collisions

    International Nuclear Information System (INIS)

    Bartlett, Philip L; Stelbovics, Andris T; Bray, Igor

    2004-01-01

    A newly-derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schroedinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources. (letter to the editor)

  3. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling and blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effect of large-scale blasts increases. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-digital converter recording to a laptop. The registration results of surface seismic vibrations during the production of more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The safety evaluation of the seismic effect was carried out according to the permissible value of vibration velocity. For cases exceeding the permissible values, recommendations were developed to reduce the level of seismic impact.

  4. Matrix Sampling of Items in Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Ruth A. Childs

    2003-07-01

    Full Text Available Matrix sampling of items, that is, division of a set of items into different versions of a test form, is used by several large-scale testing programs. Like other test designs, matrixed designs have both advantages and disadvantages. For example, testing time per student is less than if each student received all the items, but the comparability of student scores may decrease. Also, curriculum coverage is maintained, but reporting of scores becomes more complex. In this paper, matrixed designs are compared with more traditional designs in nine categories of costs: development costs, materials costs, administration costs, educational costs, scoring costs, reliability costs, comparability costs, validity costs, and reporting costs. In choosing among test designs, a testing program should examine the costs in light of its mandate(s), the content of the tests, and the financial resources available, among other considerations.

  5. Development of technology for the large-scale preparation of 60Co polymer film source

    International Nuclear Information System (INIS)

    Udhayakumar, J.; Pardeshi, G.S.; Gandhi, Shymala S.; Chakravarty, Rubel; Kumar, Manoj; Dash, Ashutosh; Venkatesh, Meera

    2008-01-01

    60Co sources (∼37 kBq) in the form of a thin film are widely used in position identification of perforation in offshore oil-well explorations. This paper describes the large-scale preparation of such sources using a radioactive polymer containing 60Co. 60Co was extracted into chloroform containing 8-hydroxyquinoline. The chloroform layer was mixed with polymethyl methacrylate (PMMA) polymer. A large film was prepared using the polymer solution containing the complex. The polymer film was then cut into circular sources, mounted on a source holder and supplied to various users

  6. Laser colouring on titanium alloys: characterisation and potential applications

    OpenAIRE

    Franceschini, Federica; Demir, Ali Gökhan; Dowding, Colin; Previtali, Barbara; Griffiths, Jonathan David

    2014-01-01

    Oxides of titanium exhibit vivid colours that can be generated naturally or manipulated through controlled oxidation processes. The application of a laser beam for colouring titanium permits flexible manipulation of the oxidized geometry with high spatial resolution. The laser-based procedure can be applied in an ambient atmosphere to generate long-lasting coloured marks. Today, these properties are largely exploited in artistic applications such as jewellery, eyewear frames, watch components...

  7. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight to large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathlines data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathlines segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.

  8. Adaptive Scaling of Cluster Boundaries for Large-Scale Social Media Data Clustering.

    Science.gov (United States)

    Meng, Lei; Tan, Ah-Hwee; Wunsch, Donald C

    2016-12-01

    The large scale and complex nature of social media data raises the need to scale clustering techniques to big data and make them capable of automatically identifying data clusters with few empirical settings. In this paper, we present our investigation and three algorithms based on the fuzzy adaptive resonance theory (Fuzzy ART) that have linear computational complexity, use a single parameter, i.e., the vigilance parameter to identify data clusters, and are robust to modest parameter settings. The contribution of this paper lies in two aspects. First, we theoretically demonstrate how complement coding, commonly known as a normalization method, changes the clustering mechanism of Fuzzy ART, and discover the vigilance region (VR) that essentially determines how a cluster in the Fuzzy ART system recognizes similar patterns in the feature space. The VR gives an intrinsic interpretation of the clustering mechanism and limitations of Fuzzy ART. Second, we introduce the idea of allowing different clusters in the Fuzzy ART system to have different vigilance levels in order to meet the diverse nature of the pattern distribution of social media data. To this end, we propose three vigilance adaptation methods, namely, the activation maximization (AM) rule, the confliction minimization (CM) rule, and the hybrid integration (HI) rule. With an initial vigilance value, the resulting clustering algorithms, namely, the AM-ART, CM-ART, and HI-ART, can automatically adapt the vigilance values of all clusters during the learning epochs in order to produce better cluster boundaries. Experiments on four social media data sets show that AM-ART, CM-ART, and HI-ART are more robust than Fuzzy ART to the initial vigilance value, and they usually achieve better or comparable performance and much faster speed than the state-of-the-art clustering algorithms that also do not require a predefined number of clusters.
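
    To make the mechanism concrete, the sketch below implements plain Fuzzy ART with complement coding and a vigilance test, plus a crude per-cluster vigilance adjustment in the spirit of the adaptation idea; the exact AM, CM and HI update rules from the paper are not reproduced, and all numbers are illustrative.

    ```python
    # Fuzzy ART with complement coding and per-cluster vigilance (illustrative).
    import numpy as np

    def fuzzy_art(samples, rho0=0.6, alpha=0.01, beta=1.0, rho_step=0.02):
        weights, rhos, labels = [], [], []
        for x in samples:
            x = np.concatenate([x, 1.0 - x])                  # complement coding
            # Rank existing categories by the choice function T_j.
            order = sorted(range(len(weights)),
                           key=lambda j: -np.minimum(x, weights[j]).sum()
                                         / (alpha + weights[j].sum()))
            for j in order:
                match = np.minimum(x, weights[j]).sum() / x.sum()
                if match >= rhos[j]:                          # vigilance test passes
                    weights[j] = beta * np.minimum(x, weights[j]) + (1 - beta) * weights[j]
                    rhos[j] = min(1.0, rhos[j] + rho_step)    # crude per-cluster adaptation
                    labels.append(j)
                    break
            else:                                             # no category matched: new one
                weights.append(x.copy())
                rhos.append(rho0)
                labels.append(len(weights) - 1)
        return labels, weights

    data = np.array([[0.10, 0.10], [0.15, 0.12], [0.90, 0.85], [0.88, 0.90]])
    print(fuzzy_art(data)[0])  # e.g. [0, 0, 1, 1]
    ```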

  9. Expectation propagation for large scale Bayesian inference of non-linear molecular networks from perturbation data.

    Science.gov (United States)

    Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger

    2017-01-01

    Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.

  10. LARGE SCALE DISTRIBUTED PARAMETER MODEL OF MAIN MAGNET SYSTEM AND FREQUENCY DECOMPOSITION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    ZHANG,W.; MARNERIS, I.; SANDBERG, J.

    2007-06-25

    A large accelerator main magnet system consists of hundreds, or even thousands, of dipole magnets. They are linked together in selected configurations to provide highly uniform dipole fields when powered. Distributed capacitance, insulation resistance, coil resistance, magnet inductance, and the coupling inductance of the upper and lower pancakes make each magnet a complex network. When all dipole magnets are chained together in a circle, they become a coupled pair of very high order complex ladder networks. In this study, a network of more than a thousand inductive, capacitive or resistive elements is used to model an actual system. The circuit is a large-scale network, and its equivalent polynomial form is of degree several hundred. Analysis of this high order circuit and simulation of the response of any or all components is often computationally infeasible. We present methods that use a frequency decomposition approach to effectively simulate and analyze magnet configurations and power supply topologies.

  11. Titanium metal: extraction to application

    Energy Technology Data Exchange (ETDEWEB)

    Gambogi, Joseph (USGS, Reston, VA); Gerdemann, Stephen J.

    2002-09-01

    In 1998, approximately 57,000 tons of titanium metal was consumed in the form of mill products (1). Only about 5% of the 4 million tons of titanium minerals consumed each year is used to produce titanium metal, with the remainder primarily used to produce titanium dioxide pigment. Titanium metal production is primarily based on the direct chlorination of rutile to produce titanium tetrachloride, which is then reduced to metal using the Kroll magnesium reduction process. The use of titanium is tied to its high strength-to-weight ratio and corrosion resistance. Aerospace is the largest application for titanium. In this paper, we discuss all aspects of the titanium industry from ore deposits through extraction to present and future applications. The methods of both primary (mining of ore, extraction, and purification) and secondary (forming and machining) operations will be analyzed. The chemical and physical properties of titanium metal will be briefly examined. Present and future applications for titanium will be discussed. Finally, the economics of titanium metal production also are analyzed as well as the advantages and disadvantages of various alternative extraction methods.

  12. Large-scale ground motion simulation using GPGPU

    Science.gov (United States)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumptions of the source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced the use of GPGPU (general purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation traditionally conducted by the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the functions for GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in two horizontal directions and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong scaling test using a model with about 22 million grids and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs, respectively. Next, we examined a weak scaling test where the model sizes (number of grids) are increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grids using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number

  13. The role of large-scale, extratropical dynamics in climate change

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, T.G. [ed.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  14. The role of large-scale, extratropical dynamics in climate change

    International Nuclear Information System (INIS)

    Shepherd, T.G.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database

  15. Upscaling of Large-Scale Transport in Spatially Heterogeneous Porous Media Using Wavelet Transformation

    Science.gov (United States)

    Moslehi, M.; de Barros, F.; Ebrahimi, F.; Sahimi, M.

    2015-12-01

    Modeling flow and solute transport in large-scale heterogeneous porous media involves substantial computational burdens. A common approach to alleviate this complexity is to utilize upscaling methods. These processes generate upscaled models with less complexity while attempting to preserve the hydrogeological properties comparable to the original fine-scale model. We use Wavelet Transformations (WT) of the spatial distribution of aquifer's property to upscale the hydrogeological models and consequently transport processes. In particular, we apply the technique to a porous formation with broadly distributed and correlated transmissivity to verify the performance of the WT. First, transmissivity fields are coarsened using WT in such a way that the high transmissivity zones, in which more important information is embedded, mostly remain the same, while the low transmissivity zones are averaged out since they contain less information about the hydrogeological formation. Next, flow and non-reactive transport are simulated in both fine-scale and upscaled models to predict both the concentration breakthrough curves at a control location and the large-scale spreading of the plume around its centroid. The results reveal that the WT of the fields generates non-uniform grids with an average of 2.1% of the number of grid blocks in the original fine-scale models, which eventually leads to a significant reduction in the computational costs. We show that the upscaled model obtained through the WT reconstructs the concentration breakthrough curves and the spreading of the plume at different times accurately. Furthermore, the impacts of the Hurst coefficient, size of the flow domain and the orders of magnitude difference in transmissivity values on the results have been investigated. It is observed that as the heterogeneity and the size of the domain increase, better agreement between the results of fine-scale and upscaled models can be achieved. Having this framework at hand aids
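
    As a rough illustration of the coarsening idea (not the authors' scheme), the sketch below applies a 2-D discrete wavelet transform to a synthetic log-transmissivity field with PyWavelets and thresholds the detail coefficients, so that low-information zones are averaged out while high-contrast features are preserved; the wavelet choice, decomposition level and threshold are arbitrary assumptions.

    ```python
    # Wavelet-based coarsening of a synthetic transmissivity field (illustrative).
    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    logT = rng.standard_normal((64, 64))          # stand-in for log-transmissivity

    coeffs = pywt.wavedec2(logT, "haar", level=3)
    thr = 1.5
    # Zero out small detail coefficients: low-contrast zones get averaged out,
    # high-contrast (high-information) zones are preserved.
    new_coeffs = [coeffs[0]] + [
        tuple(np.where(np.abs(d) > thr, d, 0.0) for d in level) for level in coeffs[1:]
    ]
    upscaled = pywt.waverec2(new_coeffs, "haar")
    kept = sum(int(np.count_nonzero(np.abs(d) > thr)) for level in coeffs[1:] for d in level)
    print("reconstructed field shape:", upscaled.shape, "| detail coefficients kept:", kept)
    ```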

  16. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Full Text Available Abstract Background Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massive parallel computing power of GPUs.

  17. Large-scale weakly supervised object localization via latent category learning.

    Science.gov (United States)

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to cluttered image conditions, objects are usually highly ambiguous with their backgrounds. In addition, there is a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy by evaluating each category's discrimination. Finally, we propose the online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Class (VOC) 2007 and the large-scale ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision that outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
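
    As a rough illustration of the latent-category idea (not the authors' exact pipeline), latent semantic analysis can be run over a matrix of image-region descriptors with a truncated SVD; scikit-learn is assumed, and the feature matrix and category count below are placeholders.

        import numpy as np
        from sklearn.decomposition import TruncatedSVD

        # Rows are candidate image regions, columns are visual-word counts.
        rng = np.random.default_rng(0)
        region_features = rng.poisson(1.0, size=(500, 1000)).astype(float)

        # Learn latent categories (objects, object parts, or backgrounds) via LSA.
        n_latent_categories = 10  # placeholder value
        lsa = TruncatedSVD(n_components=n_latent_categories, random_state=0)
        region_topics = lsa.fit_transform(region_features)  # regions x categories

        # Assign each region to its dominant latent category; a separate selection
        # step (e.g., discrimination scoring) would then pick the object category.
        assignments = np.argmax(np.abs(region_topics), axis=1)
        print(np.bincount(assignments, minlength=n_latent_categories))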

  18. Unusually large erupted complex odontoma: A rare case report

    Energy Technology Data Exchange (ETDEWEB)

    Bagewadi, Shivanand B.; Kukreja, Rahul; Suma, Gundareddy N.; Yadav, Bhawn; Sharma, Havi [Dept. of Oral Medicine and Radiology, ITS Centre for Dental Studies and Research, Murad Nagar (India)

    2015-03-15

    Odontomas are nonaggressive, hamartomatous developmental malformations composed of mature tooth substances and may be compound or complex depending on the extent of morphodifferentiation or on their resemblance to normal teeth. Among them, complex odontomas are relatively rare tumors. They are usually asymptomatic in nature. Occasionally, these tumors become large, causing bone expansion followed by facial asymmetry. Odontoma eruptions are uncommon, and thus far, very few cases of erupted complex odontomas have been reported in the literature. Here, we report the case of an unusually large, painless, complex odontoma located in the right posterior mandible.

  19. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  20. Research and assessment of competitiveness of large engineering complexes

    Directory of Open Access Journals (Sweden)

    Krivorotov V.V.

    2017-01-01

    Full Text Available The urgency of the problem of ensuring the competitiveness of manufacturing and high-tech sectors is shown. The decisive role of large industrial complexes in shaping the results of the national economy is substantiated, and the author's interpretation of the concept of “industrial complex” with regard to current economic systems is given. Current approaches to assessing the competitiveness of enterprises and industrial complexes are analyzed, and their main advantages and disadvantages are shown. A scientific-methodological approach to the study and management of the competitiveness of a large industrial complex is provided, and its main units are described. As the central element of this approach, a methodology for assessing the competitiveness of a large industrial complex based on the Pattern method is proposed; a modular system of competitiveness indicators is developed and adapted to large engineering complexes. Using the developed methodology, the competitiveness of one of the largest engineering complexes, the group of companies Uralelectrotyazhmash, a leading enterprise in the electrotechnical industry of Russia, is assessed. The evaluation identified the main problems and bottlenecks in the development of these enterprises, and a comparison with leading competitors is provided. Based on the results of the study, the main conclusions and recommendations are formed.

  1. Large-scale simulation of ductile fracture process of microstructured materials

    International Nuclear Information System (INIS)

    Tian Rong; Wang Chaowei

    2011-01-01

    The promise of computational science in the extreme-scale computing era is to reduce and decompose macroscopic complexities into microscopic simplicities at the expense of high spatial and temporal resolution of computing. In materials science and engineering, the direct combination of 3D microstructure data sets and 3D large-scale simulations provides a unique opportunity for the development of a comprehensive understanding of nano/microstructure-property relationships in order to systematically design materials with specific desired properties. In this paper, we present a framework for simulating the ductile fracture process zone in microstructural detail. The experimentally reconstructed microstructural data set is directly embedded into a FE mesh model to improve the simulation fidelity of microstructure effects on fracture toughness. To the best of our knowledge, this is the first time that fracture toughness has been linked directly to multiscale microstructures in a realistic 3D numerical model. (author)

  2. A Novel Architecture of Large-scale Communication in IOT

    Science.gov (United States)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have presented a detailed view of the large-scale communication architecture of the IOT. In fact, the lack of uniform technology between IPv6 and access points has led to an absence of broad principles for large-scale communication architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communications in the IOT.

  3. How uncertainty in socio-economic variables affects large-scale transport model forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time to assess transport projects. However, by modelling complex systems transport models have an inherent uncertainty which increases over time. As a consequence, the longer the period forecasted the less reliable is the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature only few studies analyze uncertainty propagation patterns over...

  4. Study of a large scale neutron measurement channel

    International Nuclear Information System (INIS)

    Amarouayache, Anissa; Ben Hadid, Hayet.

    1982-12-01

    A large scale measurement channel allows the processing of the signal coming from a single neutron sensor during three different running modes: pulse, fluctuation and current. The study described in this note includes three parts: - A theoretical study of the large scale channel and a brief description of it are given. The results obtained so far in that domain are presented. - The fluctuation mode is thoroughly studied and the improvements to be made are defined. The study of a linear fluctuation channel with automatic switching of scales is described and the results of the tests are given. In this large scale channel, the data processing method is analog. - To become independent of the problems generated by the use of analog processing of the fluctuation signal, a digital method of data processing is tested. The validity of that method is confirmed. The results obtained on a test system realized according to this method are given and a preliminary plan for further research is defined [fr

  5. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes WCNS and HDCS that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To our best knowledge, those are the largest-scale CPU–GPU collaborative simulations
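
    The CPU-GPU load-balancing idea can be illustrated with a simple static split: grid blocks are assigned in proportion to measured device throughputs so both sides finish a time step together. This plain-Python sketch with made-up throughput numbers is only a schematic stand-in for HOSTA's actual scheme.

        def split_blocks(n_blocks, cpu_rate, gpu_rate):
            """Assign grid blocks to CPU and GPU in proportion to their throughput.

            cpu_rate and gpu_rate are measured cells-per-second for each device;
            the split aims to equalize the per-step compute time on both sides.
            """
            total = cpu_rate + gpu_rate
            gpu_blocks = round(n_blocks * gpu_rate / total)
            cpu_blocks = n_blocks - gpu_blocks
            return cpu_blocks, gpu_blocks

        # Hypothetical throughputs: one GPU about 1.3x two CPU sockets (see above).
        cpu_blocks, gpu_blocks = split_blocks(n_blocks=64, cpu_rate=1.0, gpu_rate=1.3)
        print(cpu_blocks, gpu_blocks)  # 28 blocks on the CPU, 36 on the GPU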

  6. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    International Nuclear Information System (INIS)

    Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua

    2014-01-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes WCNS and HDCS that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To our best knowledge, those are the largest-scale CPU–GPU collaborative simulations

  7. Investment casting of beta titanium alloys for aerospace applications

    International Nuclear Information System (INIS)

    Wheeler, D.A.; Cianci, M.S.; Vogt, R.G.

    1993-01-01

    The process of investment casting offers the ability to produce complex titanium components with minimal finish machining, thereby reducing their overall manufacturing cost. While aerospace applications for cast titanium have focused primarily on alpha+beta alloys, recent interest in higher strength beta alloys has prompted an examination of their suitability for investment casting. In this paper, the processing characteristics and mechanical properties of Ti-15V-3Cr-3Al-3Sn, Ti-3Al-8V-6Cr-4Mo-4Zr, and Ti-15Mo-3Nb-3Al-0.2Si (wt.%) will be discussed. It will be shown that all three alloy compositions are readily processed using only slight modifications from current Ti-6Al-4V (wt.%) production operations. In addition, the mechanical properties of the cast product form can be manipulated through heat treatment and compare quite favorably with typical properties obtained in wrought beta titanium products. Finally, several demonstration castings are reviewed which illustrate the shape-making capabilities of the investment casting approach for beta titanium alloys

  8. Assembling large, complex environmental metagenomes

    Energy Technology Data Exchange (ETDEWEB)

    Howe, A. C. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Plant Soil and Microbial Sciences; Jansson, J. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division; Malfatti, S. A. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Tringe, S. G. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Tiedje, J. M. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Plant Soil and Microbial Sciences; Brown, C. T. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Computer Science and Engineering

    2012-12-28

    The large volumes of sequencing data required to sample complex environments deeply pose new challenges to sequence analysis approaches. De novo metagenomic assembly effectively reduces the total amount of data to be analyzed but requires significant computational resources. We apply two pre-assembly filtering approaches, digital normalization and partitioning, to make large metagenome assemblies more computationally tractable. Using a human gut mock community dataset, we demonstrate that these methods result in assemblies nearly identical to assemblies from unprocessed data. We then assemble two large soil metagenomes from matched Iowa corn and native prairie soils. The predicted functional content and phylogenetic origin of the assembled contigs indicate significant taxonomic differences despite similar function. The assembly strategies presented are generic and can be extended to any metagenome; full source code is freely available under a BSD license.
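
    Digital normalization, one of the two pre-assembly filters mentioned, can be sketched in a few lines: a read is kept only if the median abundance of its k-mers is still below a cutoff, so highly redundant coverage is discarded before assembly. This is a simplified illustration of the idea (the real implementation uses probabilistic counting structures), with k and the cutoff chosen arbitrarily.

        from collections import defaultdict
        from statistics import median

        def digital_normalization(reads, k=20, cutoff=20):
            """Keep a read only if its median k-mer count (so far) is below the cutoff."""
            counts = defaultdict(int)
            kept = []
            for read in reads:
                kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
                if not kmers:
                    continue
                if median(counts[km] for km in kmers) < cutoff:
                    kept.append(read)
                    for km in kmers:  # only count k-mers of retained reads
                        counts[km] += 1
            return kept

        # Toy example: highly redundant reads collapse to a small retained set.
        reads = ["ACGTACGTACGTACGTACGTACGT"] * 100 + ["TTGCAATTGCAATTGCAATTGCAA"]
        print(len(digital_normalization(reads, k=8, cutoff=5)))  # far fewer than 101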

  9. Leaching of Titanium and Silicon from Low-Grade Titanium Slag Using Hydrochloric Acid Leaching

    Science.gov (United States)

    Zhao, Longsheng; Wang, Lina; Qi, Tao; Chen, Desheng; Zhao, Hongxin; Liu, Yahui; Wang, Weijing

    2018-05-01

    Acid-leaching behaviors of the titanium slag obtained by selective reduction of vanadium-bearing titanomagnetite concentrates were investigated. It was found that the optimal leaching of titanium and silicon was 0.7% and 1.5%, respectively. The titanium and silicon in the titanium slag were first dissolved in the acidic solution to form TiO2+ and silica sol, and then rapidly reprecipitated, forming the hydrochloric acid (HCl) leach residue. Most of the silicon was present in the HCl leach residue as floccule-like silica gel, while most of the titanium was distributed in nano-sized rod-like clusters with crystallite refinement and intracrystalline defects; as such, 94.3% of the silicon was leached from the HCl leach residue by alkaline desilication, and 96.5% of the titanium in the titanium-rich material with some rutile structure was then digested by concentrated sulfuric acid. This provides an alternative route for the comprehensive utilization of titanium and silicon in titanium slag.

  10. The Sustainable Improvement of Manufacturing for Nano-Titanium

    Directory of Open Access Journals (Sweden)

    Chia-Nan Wang

    2016-04-01

    Full Text Available Scientists have found that nanomaterials possess many outstanding features in their tiny grain structure compared to other common materials. Titanium at the nano-grain scale shows many novel characteristics which demonstrate suitability for use in surgical implants. In general, equal channel angular pressing (ECAP) is the most popular and simple process to produce nano-titanium. However, ECAP is time-consuming, power-wasting, and does not adequately produce the ultrafine grain structure. Therefore, the objective of this research is to propose a new method to improve ECAP's performance in reaching the ultrafine grain structure, and also to save production costs, based on the innovation theory of Teoriya Resheniya Izobreatatelskih Zadatch (TRIZ). Research results show that the process time is reduced by 80%, and 94% of the energy is saved. Moreover, the grain size (diameter) of the nano-titanium can be reduced from 160 nanometers (nm) to 80 nm. The results are a 50% reduction of diameter and a 75% improvement of volume. At the same time, the method creates a refined grain size and good mechanical properties in the nano-titanium. The proposed method can be applied to produce any nanomaterial as well as biomaterials.

  11. Capabilities of the Large-Scale Sediment Transport Facility

    Science.gov (United States)

    2016-04-01

    This technical note (ERDC/CHL CHETN-I-88, April 2016; approved for public release, distribution unlimited) describes the Large-Scale Sediment Transport Facility (LSTF) and recent upgrades to the measurement systems, including pump flow meters, sediment trap weigh tanks, and beach profiling lidar. A detailed discussion of the original LSTF features and capabilities can be... The purpose of these upgrades was to increase

  12. Spectroscopic studies on Titanium ion binding to the apo lactoferrin

    International Nuclear Information System (INIS)

    Moshtaghie, A.A.; Ani, M.; Arabi, M.H.

    2006-01-01

    Titanium is a relatively abundant element that has found growing applications in medical science, and recently some titanium compounds have been introduced as anticancer drugs. Although very limited data exist on titanium metabolism, some proteins might be involved in its mechanism of action. To our knowledge, there is no report in the literature concerning the binding of titanium to apo lactoferrin. Binding of apo lactoferrin with Ti(IV)-citrate was studied by spectrofluorimetry and spectrophotometry techniques under physiological conditions. The spectrofluorimetric studies revealed a significant fluorescence quenching, which indicated binding of apo lactoferrin with Ti(IV). The same reaction was monitored by the spectrophotometry technique; this showed a characteristic UV difference band at 267 nm, which is different from that of lactoferrin-Fe(III). Titration studies show that lactoferrin specifically binds two moles of Ti(IV), as a complex with citrate, per mole of protein. Spectrofluorimetry and spectrophotometry indicated that Ti(IV) ions cause a reduction (13%-14%) in the binding of Fe(III) to lactoferrin. Overall, we conclude that this element might be involved in iron metabolism.

  13. Titanium and titanium alloys: fundamentals and applications

    National Research Council Canada - National Science Library

    Leyens, C; Peters, M

    2003-01-01

    ... number of titanium alloys have paved the way for light metals to vastly expand into many industrial applications. Titanium and its alloys stand out primarily due to their high specific strength and excellent corrosion resistance, at just half the weight of steels and Ni-based superalloys. This explains their early success in the aerospace and the...

  14. Problems of large-scale vertically-integrated aquaculture

    Energy Technology Data Exchange (ETDEWEB)

    Webber, H H; Riordan, P F

    1976-01-01

    The problems of vertically-integrated aquaculture are outlined; they are concerned with: species limitations (in the market, biological and technological); site selection, feed, manpower needs, and legal, institutional and financial requirements. The gaps in understanding of, and the constraints limiting, large-scale aquaculture are listed. Future action is recommended with respect to: types and diversity of species to be cultivated, marketing, biotechnology (seed supply, disease control, water quality and concerted effort), siting, feed, manpower, legal and institutional aids (granting of water rights, grants, tax breaks, duty-free imports, etc.), and adequate financing. The lack of hard data based on experience suggests that large-scale vertically-integrated aquaculture is a high risk enterprise, and with the high capital investment required, banks and funding institutions are wary of supporting it. Investment in pilot projects is suggested to demonstrate that large-scale aquaculture can be a fully functional and successful business. Construction and operation of such pilot farms is judged to be in the interests of both the public and private sector.

  15. GRAPHICS-IMAGE MIXED METHOD FOR LARGE-SCALE BUILDINGS RENDERING

    Directory of Open Access Journals (Sweden)

    Y. Zhou

    2018-05-01

    Full Text Available Urban 3D model data are huge and unstructured; LOD and out-of-core algorithms are usually used to reduce the amount of data drawn in each frame to improve rendering efficiency. When the scene is large enough, however, even complex optimization algorithms struggle to achieve better results. Building on these traditional studies, a novel idea was developed: we propose a graphics and image mixed method for large-scale building rendering. Firstly, the view field is divided into several regions; the graphics-image mixed method is used to render the scene on both the screen and an FBO, and the FBO is then blended with the screen. The algorithm is tested on the huge CityGML model data of the urban areas of New York, which contain 188195 public building models, and compared with the Cesium platform. The experimental results show that the system runs smoothly and confirm that the algorithm can achieve roaming of more massive building scenes under the same hardware conditions and can render the scene without visual loss.

  16. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso: a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, materials science, and to its usage in large-scale parallel computing.

  17. Inhibitors for the corrosion of reactive metals: titanium and zirconium and their alloys in acid media

    International Nuclear Information System (INIS)

    Petit, J.A.; Chatainier, G.; Dabosi, F.

    1981-01-01

    The search for effective corrosion inhibitors for titanium and zirconium in acid media is growing because of the considerable increase in the use of these materials in chemical process equipment. It still remains limited, as appears from this review, because of the exceptionally high corrosion resistance of the metals. Titanium has received the greater attention. Its corrosion rate can be lowered by introduction in the medium of multivalent ions, inorganic and organic oxidants. Care should be taken to hold the concentration at a level exceeding some critical value, otherwise the corrosion rate increases. Complexing organic agents do not show such hazardous behaviour. The very rapid corrosion of titanium and zirconium in fluoride media may be lessened by complexing the fluoride ions. Though rarely encountered, localized corrosion may be avoided by using inhibitors. In some cases good corrosion inhibitors for titanium are dissolution accelerators for zirconium. (author)

  18. Biopolitics problems of large-scale hydraulic engineering construction

    International Nuclear Information System (INIS)

    Romanenko, V.D.

    1997-01-01

    The 20th century, which will enter history as a century of large-scale hydraulic engineering construction, is coming to a close. On the European continent alone, 517 large reservoirs (detaining more than 1000 million km3 of water) were constructed in the period from 1901 till 1985. In the Danube basin a large number of reservoirs for power stations, navigation works, navigation locks and other hydraulic engineering structures have been constructed. Among them, more than 40 especially large objects are located along the main bed of the river. A number of hydro-complexes, such as Dnieper-Danube and Gabcikovo, Danube-Oder-Labe (project), Danube-Tissa, Danube-Adriatic Sea (project), Danube-Aegean Sea and Danube-Black Sea, have entered into operation or are at the design stage. Hydraulic engineering construction was especially heavily pursued in Ukraine. On its territory several large reservoirs on the Dnieper and Yuzhny Bug were constructed, which have heavily changed the hydrological regime of the rivers. Summarizing the results of river-system regulation in Ukraine, it can be noted that more than 27 thousand ponds (3 km3 per year), 1098 reservoirs with a total volume of 55 km3, and 11 large channels with a total length of more than 2000 km and a productivity of 1000 m2/s have been created in Ukraine. Hydraulic engineering construction played an important role in the development of industry and agriculture, the water supply of cities and settlements, environmental effects, and the maintenance of safe navigation on the Danube, Dnieper and other rivers. In the next part of the paper, the environmental changes after construction of the Karakum Channel in Central Asia, and their impact on the Aral Sea, are discussed.

  19. Large branched self-assembled DNA complexes

    International Nuclear Information System (INIS)

    Tosch, Paul; Waelti, Christoph; Middelberg, Anton P J; Davies, A Giles

    2007-01-01

    Many biological molecules have been demonstrated to self-assemble into complex structures and networks by using their very efficient and selective molecular recognition processes. The use of biological molecules as scaffolds for the construction of functional devices by self-assembling nanoscale complexes onto the scaffolds has recently attracted significant attention and many different applications in this field have emerged. In particular DNA, owing to its inherent sophisticated self-organization and molecular recognition properties, has served widely as a scaffold for various nanotechnological self-assembly applications, with metallic and semiconducting nanoparticles, proteins, macromolecular complexes, inter alia, being assembled onto designed DNA scaffolds. Such scaffolds may typically contain multiple branch-points and comprise a number of DNA molecules self-assembled into the desired configuration. Previously, several studies have used synthetic methods to produce the constituent DNA of the scaffolds, but this typically constrains the size of the complexes. For applications that require larger self-assembling DNA complexes, several tens of nanometers or more, other techniques need to be employed. In this article, we discuss a generic technique to generate large branched DNA macromolecular complexes

  20. Synthesis of Titanium Oxycarbide from Titanium Slag by Methane-Containing Gas

    Science.gov (United States)

    Dang, Jie; Fatollahi-Fard, Farzin; Pistorius, Petrus Christiaan; Chou, Kuo-Chih

    2018-02-01

    In this study, reaction steps of a process for synthesis of titanium oxycarbide from titanium slag were demonstrated. This process involves the reduction of titanium slag by a methane-hydrogen-argon mixture at 1473 K (1200 °C) and the leaching of the reduced products by hydrofluoric acid near room temperature to remove the main impurity (Fe3Si). Some iron was formed by disproportionation of the main M3O5 phase before gaseous reduction started. Upon reduction, more iron formed first, followed by reduction of titanium dioxide to suboxides and eventually oxycarbide.

  1. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    Science.gov (United States)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we

  2. Selective laser melting-produced porous titanium scaffolds regenerate bone in critical size cortical bone defects.

    Science.gov (United States)

    Van der Stok, Johan; Van der Jagt, Olav P; Amin Yavari, Saber; De Haas, Mirthe F P; Waarsing, Jan H; Jahr, Holger; Van Lieshout, Esther M M; Patka, Peter; Verhaar, Jan A N; Zadpoor, Amir A; Weinans, Harrie

    2013-05-01

    Porous titanium scaffolds have good mechanical properties that make them an interesting bone substitute material for large bone defects. These scaffolds can be produced with selective laser melting, which has the advantage of tailoring the structure's architecture. Reducing the strut size reduces the stiffness of the structure and may have a positive effect on bone formation. Two scaffolds with struts of 120-µm (titanium-120) or 230-µm (titanium-230) were studied in a load-bearing critical femoral bone defect in rats. The defect was stabilized with an internal plate and treated with titanium-120, titanium-230, or left empty. In vivo micro-CT scans at 4, 8, and 12 weeks showed more bone in the defects treated with scaffolds. Finally, 18.4 ± 7.1 mm3 (titanium-120, p = 0.015) and 18.7 ± 8.0 mm3 (titanium-230, p = 0.012) of bone was formed in those defects, significantly more than in the empty defects (5.8 ± 5.1 mm3). Bending tests on the excised femurs after 12 weeks showed that the fusion strength reached 62% (titanium-120) and 45% (titanium-230) of the intact contralateral femurs, but there was no significant difference between the two scaffolds. This study showed that in addition to adequate mechanical support, porous titanium scaffolds facilitate bone formation, which results in high mechanical integrity of the treated large bone defects. Copyright © 2012 Orthopaedic Research Society.

  3. Large-Scale Spray Releases: Initial Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and

  4. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Full Text Available Background Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory inter-related steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results We have benchmarked VESPA and our results show that the method is consistent, performs well on both large scale and smaller scale datasets, and produces results in line with previously published datasets. Discussion Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  5. Titanium K-Shell X-Ray Production from High Velocity Wire Arrays Implosions on the 20-MA Z Accelerator

    International Nuclear Information System (INIS)

    Apruzese, J.P.; Beg, F.N.; Clark, R.C.; Coverdale, C.A.; Davis, J.; Deeney, C.; Douglas, M.R.; Nash, T.J.; Ruiz-Comacho, J.; Spielman, R.B.; Struve, K.W.; Thornhill, J.W.; Whitney, K.G.

    1999-01-01

    The advent of the 20-MA Z accelerator [R.B. Spielman, C. Deeney, G.A. Chandler, et al., Phys. Plasmas 5, 2105 (1997)] has enabled implosions of large diameter, high-wire-number arrays of titanium to begin testing Z-pinch K-shell scaling theories. The 2-cm long titanium arrays, which were mounted on a 40-mm diameter, produced between 75±15 and 125±20 kJ of K-shell x-rays. Mass scans indicate that, as predicted, higher velocity implosions in the series produced higher x-ray yields. Spectroscopic analyses indicate that these high velocity implosions achieved peak electron temperatures from 2.7±0.1 to 3.2±0.2 keV and obtained a K-shell emission mass participation of up to 12%

  6. An innovative large scale integration of silicon nanowire-based field effect transistors

    Science.gov (United States)

    Legallais, M.; Nguyen, T. T. T.; Mouis, M.; Salem, B.; Robin, E.; Chenevier, P.; Ternon, C.

    2018-05-01

    Since the early 2000s, silicon nanowire field effect transistors have been emerging as ultrasensitive biosensors while offering label-free, portable and rapid detection. Nevertheless, their large scale production remains an ongoing challenge due to time-consuming, complex and costly technology. In order to bypass these issues, we report here on the first integration of silicon nanowire networks, called nanonets, into long channel field effect transistors using a standard microelectronic process. Special attention is paid to the silicidation of the contacts, which involves a large number of SiNWs. The electrical characteristics of these FETs, constituted by randomly oriented silicon nanowires, are also studied. Compatible integration on the back-end of CMOS readout and promising electrical performance open new opportunities for sensing applications.

  7. UV photofunctionalization promotes nano-biomimetic apatite deposition on titanium

    Directory of Open Access Journals (Sweden)

    Saita M

    2016-01-01

    Full Text Available Makiko Saita,1 Takayuki Ikeda,1,2 Masahiro Yamada,1,3 Katsuhiko Kimoto,4 Masaichi Chang-Il Lee,5 Takahiro Ogawa1 1Division of Advanced Prosthodontics, Weintraub Center for Reconstructive Biotechnology, UCLA School of Dentistry, Los Angeles, CA, USA; 2Department of Complete Denture Prosthodontics, Nihon University School of Dentistry, Yokosuka, Japan; 3Division of Molecular and Regenerative Prosthodontics, Tohoku University Graduate School of Dentistry, Sendai, Miyagi, Japan; 4Department of Prosthodontics and Oral Rehabilitation, 5Yokosuka-Shonan Disaster Health Emergency Research Center and ESR Laboratories, Kanagawa Dental University Graduate School of Dentistry, Yokosuka, Japan Background: Although biomimetic apatite coating is a promising way to provide titanium with osteoconductivity, the efficiency and quality of deposition is often poor. Most titanium implants have microscale surface morphology, and an addition of nanoscale features while preserving the micromorphology may provide further biological benefit. Here, we examined the effect of ultraviolet (UV) light treatment of titanium, or photofunctionalization, on the efficacy of biomimetic apatite deposition on titanium and its biological capability. Methods and results: Micro-roughened titanium disks were prepared by acid-etching with sulfuric acid. Micro-roughened disks with or without photofunctionalization (20-minute exposure to UV light) were immersed in simulated body fluid (SBF) for 1 or 5 days. Photofunctionalized titanium disks were superhydrophilic and did not form surface air bubbles when immersed in SBF, whereas non-photofunctionalized disks were hydrophobic and largely covered with air bubbles during immersion. An apatite-related signal was observed by X-ray diffraction on photofunctionalized titanium after 1 day of SBF immersion, which was equivalent to the one observed after 5 days of immersion of control titanium. Scanning electron microscopy revealed nodular apatite deposition

  8. Pulse-radiolytic investigation of the reduction of titanium(III) ions in aqueous solutions

    International Nuclear Information System (INIS)

    Micic, O.I.; Nenadovic, M.T.

    1979-01-01

    The absorption spectrum and decay kinetics of intermediates formed by the reaction of titanium(III) ions with H atoms, hydrated electrons, and carboxyl radicals have been studied in aqueous solution using the pulse-radiolysis technique. The product of the reaction with H atoms in acid solution is a Ti3+-H hydride intermediate which decomposes by a first-order process with a half-life of ca. 3 s. Titanium(II) is formed by reaction with hydrated electrons and CO2H radicals. The absorption spectrum of titanium(II) and the kinetics of its reactions are reported and discussed. The formation of molecular hydrogen by reaction of Ti2+ with water is suppressed by the other solutes in the solutions. Titanium(III) reacts with CO2H, CH2CO2H, and CH(CO2H)2 radicals to give titanium-radical complexes. (author)

  9. Carbonate effects on hexavalent uranium removal from water by nanocrystalline titanium dioxide

    International Nuclear Information System (INIS)

    Wazne, Mahmoud; Meng, Xiaoguang; Korfiatis, George P.; Christodoulatos, Christos

    2006-01-01

    A novel nanocrystalline titanium dioxide was used to treat depleted uranium (DU)-contaminated water under neutral and alkaline conditions. The novel material had a total surface area of 329 m2/g, a total surface site density of 11.0 sites/nm2, a total pore volume of 0.415 cm3/g and a crystallite size of 6.0 nm. It was used in batch tests to remove U(VI) from synthetic solutions and contaminated water. However, the capacity of the nanocrystalline titanium dioxide to remove U(VI) from water decreased in the presence of inorganic carbonate at pH > 6.0. Adsorption isotherms, Fourier transform infrared (FTIR) spectroscopy, and surface charge measurements were used to investigate the causes of the reduced capacity. The surface charge and the FTIR measurements suggested that the adsorbed U(VI) species was not complexed with carbonate at neutral pH values. The decreased capacity of titanium dioxide to remove U(VI) from water in the presence of carbonate at neutral to alkaline pH values was attributed to the aqueous complexation of U(VI) by inorganic carbonate. The nanocrystalline titanium dioxide had four times the capacity of commercially available titanium dioxide (Degussa P-25) to adsorb U(VI) from water at pH 6 and a total inorganic carbonate concentration of 0.01 M. Consequently, the novel material was used to treat DU-contaminated water at a Department of Defense (DOD) site
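
    As a generic illustration of how batch adsorption data of this kind are reduced to an isotherm (not the study's own analysis), a Langmuir model can be fitted with SciPy; the data points below are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(c_eq, q_max, k_l):
            """Langmuir isotherm: adsorbed amount as a function of equilibrium concentration."""
            return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

        # Invented batch-test data: equilibrium U(VI) (mg/L) and uptake (mg/g TiO2).
        c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
        q_obs = np.array([21.0, 35.0, 52.0, 74.0, 88.0, 95.0])

        (q_max, k_l), _ = curve_fit(langmuir, c_eq, q_obs, p0=(100.0, 0.5))
        print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.2f} L/mg")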

  10. Understanding the mechanical response of built-up welded beams made from commercially pure titanium and a titanium alloy

    Energy Technology Data Exchange (ETDEWEB)

    Patnaik, Anil K., E-mail: Patnaik@uakron.edu [Department of Civil Engineering, The University of Akron, Akron, OH 44325 (United States); Poondla, Narendra [Department of Civil Engineering, The University of Akron, Akron, OH 44325 (United States); Department of Mechanical Engineering, The University of Akron, Akron, OH 44325 (United States); Menzemer, Craig C. [Department of Civil Engineering, The University of Akron, Akron, OH 44325 (United States); Srivatsan, T.S. [Department of Mechanical Engineering, The University of Akron, Akron, OH 44325 (United States)

    2014-01-10

    During the last two decades, titanium has gradually grown in stature, strength and significance to take on the recognition of being a modern and high performance metal that is noticeably stronger and concurrently lighter than the most widely chosen and used steels in a spectrum of industrial applications. Technological innovations have necessitated reduction of part weight, cost and lead time, including concurrent enhancement of performance of structural parts and components made using titanium and its alloys. This has provided the impetus to develop economically viable structural design methodologies and specifications, while at the same time bringing forth innovative and economically affordable manufacturing and fabricating techniques with the primary purpose of both producing and promoting the use of cost-effective titanium structures. The experimental results of a recent study on built-up welded beams are presented in this paper with the primary objective of enabling design, facilitating fabrication, and implementation of large structural members for potential applications in the structural and defense-industry.

  11. Surface-Induced Hybridization between Graphene and Titanium

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, Allen L. [MIT (Massachusetts Inst. of Technology), Cambridge, MA (United States).; Koch, Roland J. [Technische Universitat, Chemnitz (Germany); Ong, Mitchell T. [Stanford Univ., CA (United States); Fang, Wenjing [MIT (Massachusetts Inst. of Technology), Cambridge, MA (United States); Hofmann, Mario [MIT (Massachusetts Inst. of Technology), Cambridge, MA (United States); Kim, Ki Kang [MIT (Massachusetts Inst. of Technology), Cambridge, MA (United States).; Seyller, Thomas [Technische Universitat, Chemnitz (Germany); Dresselhaus, Mildred S. [MIT (Massachusetts Inst. of Technology), Cambridge, MA (United States); Reed, Evan J. [Stanford Univ., CA (United States); Kong, Jing [MIT (Massachusetts Inst. of Technology), Cambridge, MA (United States); Palacios, Tomás [MIT (Massachusetts Inst. of Technology), Cambridge, MA (United States)

    2014-08-26

    Carbon-based materials such as graphene sheets and carbon nanotubes have inspired a broad range of applications ranging from high-speed flexible electronics all the way to ultrastrong membranes. However, many of these applications are limited by the complex interactions between carbon-based materials and metals. In this work, we experimentally investigate the structural interactions between graphene and transition metals such as palladium (Pd) and titanium (Ti), which have been confirmed by density functional simulations. We find that the adsorption of titanium on graphene is more energetically favorable than in the case of most metals, and density functional theory shows that a surface induced p-d hybridization occurs between atomic carbon and titanium orbitals. This strong affinity between the two materials results in a short-range ordered crystalline deposition on top of graphene as well as chemical modifications to graphene as seen by Raman and X-ray photoemission spectroscopy (XPS). This induced hybridization is interface-specific and has major consequences for contacting graphene nanoelectronic devices as well as applications toward metal-induced chemical functionalization of graphene.

  12. Modeling and experiments of biomass combustion in a large-scale grate boiler

    DEFF Research Database (Denmark)

    Yin, Chungen; Rosendahl, Lasse; Kær, Søren Knudsen

    2007-01-01

    is inherently more difficult due to the complexity of the solid biomass fuel bed on the grate, the turbulent reacting flow in the combustion chamber and the intensive interaction between them. This paper presents the CFD validation efforts for a modern large-scale biomass-fired grate boiler. Modeling and experiments are both done for the grate boiler. The comparison between them shows an overall acceptable agreement in tendency. However, at some measuring ports, large discrepancies between the modeling and the experiments are observed, mainly because the modeling-based boundary conditions (BCs) could differ...

  13. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    Full Text Available One of the best ways for agriculture to become independent of shortages of precipitation is irrigation. In the seventies and eighties of the last century a number of large-scale sprinkler systems were built in Wielkopolska. At the end of the 1970s, 67 sprinkler systems with a total area of 6400 ha were installed in the Poznan province. The average size of a sprinkler system reached 95 ha. In 1989 there were 98 systems, and the area equipped with them was more than 10 130 ha. The study was conducted in 1986÷1998 on 7 large sprinkler systems with areas ranging from 230 to 520 hectares. After the introduction of the market economy in the early 90's and the ownership changes in agriculture, the large-scale sprinkler systems have undergone significant or total devastation. Land of the State Farms of the State Agricultural Property Agency was leased or sold, and the new owners used the existing sprinklers to a very small extent. This involved a change in crop structure and demand structure and an increase in operating costs. There has also been a threefold increase in electricity prices. In practice, the operation of large-scale irrigation encountered all kinds of barriers: limitations of system solutions, supply difficulties, and high levels of equipment failure, none of which encouraged rational use of the available sprinklers. A site inspection of the local area showed the current status of the remaining irrigation infrastructure. The adopted scheme for the restructuring of Polish agriculture was not the best solution, causing massive destruction of the assets previously invested in the sprinkler systems.

  14. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Administrator

    structure and chemical purity of 99.1% (by inductively coupled plasma optical emission spectroscopy) on a large scale. Keywords: sol–gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method.

  15. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    Science.gov (United States)

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.
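
    In the notation commonly used for such schemes (a summary of the components listed above, not an equation quoted from the paper), the intermolecular interaction energy is partitioned as

        \Delta E_{\mathrm{int}} = \Delta E_{\mathrm{elec}} + \Delta E_{\mathrm{exch}} + \Delta E_{\mathrm{corr}} + \Delta E_{\mathrm{Pauli}} + \Delta E_{\mathrm{pol}} + \Delta E_{\mathrm{CT}},

    where the terms are the electrostatic, exchange, correlation, Pauli-repulsion, polarization, and charge-transfer contributions, respectively.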

  16. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. Large-scale simulations here are those that involve such a variety of scales and such physical complexity that corresponding experiments and/or theories do not exist. In the nuclear field, it is indispensable to estimate real phenomena from simulations in order to improve the safety and security of nuclear power plants. Here, the analysis of the uncertainty included in simulations is needed to reveal the sensitivity of uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge, and to establish a degree of certainty through verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)' as a set of computing processes with deductive and inductive approaches, by reference to human reasoning processes. Our idea is to execute deductive and inductive simulations corresponding to deductive and inductive reasoning approaches. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective in reducing the uncertainty and in obtaining a degree of certainty. (author)

  17. Titanium Analysis of Ilmenite Bangka by Spectrophotometric Method Using Perhidrol

    International Nuclear Information System (INIS)

    Rusydi-S

    2004-01-01

    Determination of titanium by a spectrophotometric method using perhydrol has been carried out. The purpose of the experiment was to establish the conditions for titanium analysis of ilmenite by spectrophotometry. The experimental parameters, i.e. the analysis conditions, included the Ti-perhydrol complex spectrum, the acidity of the complex, the perhydrol concentration, linearity, the influence of anions, the detection limit, interfering elements, application of the method to the 40 SRM standard, and analysis of ilmenite samples. The results are: the Ti-perhydrol complex absorbs at 410 nm, the optimum acidity is 2 M H2SO4, the perhydrol concentration is 0.24%, the calibration curve is linear over 1-80 ppm, the anions PO4, NO3, Cl and CO3 do not interfere up to 2000 ppm, the detection limit is 0.30 ppm, and Mo and V interfere. Applied to the 40 SRM standard, the method gave a Ti content of 1580 ppm against a certified value of 1600 ppm; the Ti content of a high-grade ilmenite sample was 31.46% and that of a low-grade sample 13.55%. (author)
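
    The linear calibration step described above can be illustrated with a short sketch. The absorbance values below are purely hypothetical (the record only states the 410 nm wavelength and the 1-80 ppm linear range); the point is just to show how a Beer-Lambert calibration line is fitted and then inverted to read back an unknown concentration:

        import numpy as np

        # Hypothetical calibration standards (ppm Ti) and measured absorbances at 410 nm
        conc = np.array([1, 5, 10, 20, 40, 60, 80], dtype=float)
        absorbance = np.array([0.012, 0.061, 0.118, 0.239, 0.481, 0.719, 0.962])

        # Least-squares fit of the linear (Beer-Lambert) region: A = slope*c + intercept
        slope, intercept = np.polyfit(conc, absorbance, 1)

        def ppm_from_absorbance(a):
            # Invert the calibration line to obtain the Ti concentration
            return (a - intercept) / slope

        print("slope=%.4f  intercept=%.4f" % (slope, intercept))
        print("unknown sample: %.1f ppm Ti" % ppm_from_absorbance(0.355))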

  18. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    Science.gov (United States)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.
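
    As a rough illustration of the kind of block-level runoff estimate that becomes possible once imperviousness is known from GIS layers, the sketch below applies the simple rational-method relation Q = C·i·A to a few hypothetical city blocks. The areas, impervious fractions and design storm are invented, and the automatically generated flow paths and sewer routing described in the abstract are not reproduced:

        # Hypothetical city blocks: area (m^2) and impervious fraction from GIS layers
        blocks = [
            {"id": "A1", "area_m2": 12000.0, "impervious": 0.85},
            {"id": "A2", "area_m2": 9500.0, "impervious": 0.40},
            {"id": "A3", "area_m2": 15000.0, "impervious": 0.60},
        ]

        RAIN_MM_PER_HR = 25.0  # assumed design storm intensity

        def peak_runoff_m3_per_s(block, intensity_mm_hr):
            # Rational method Q = C * i * A, with the runoff coefficient C
            # approximated from the impervious fraction of the block
            c = 0.1 + 0.85 * block["impervious"]
            i = intensity_mm_hr / 1000.0 / 3600.0  # mm/hr -> m/s
            return c * i * block["area_m2"]

        for b in blocks:
            print(b["id"], round(peak_runoff_m3_per_s(b, RAIN_MM_PER_HR), 4), "m^3/s")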

  19. Large-scale Agricultural Land Acquisitions in West Africa | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    This project will examine large-scale agricultural land acquisitions in nine West African countries -Burkina Faso, Guinea-Bissau, Guinea, Benin, Mali, Togo, Senegal, Niger, and Côte d'Ivoire. ... They will use the results to increase public awareness and knowledge about the consequences of large-scale land acquisitions.

  20. Corrosion of titanium and titanium alloys in spent fuel repository conditions - literature review

    International Nuclear Information System (INIS)

    Aho-Mantila, I.; Haenninen, H.; Aaltonen, P.; Taehtinen, S.

    1985-03-01

    Spent nuclear fuel is planned to be disposed of in the Finnish bedrock. The canister of spent fuel in the waste repository is one barrier to the release of radionuclides. One possibility is to choose a canister material with a known, measurable corrosion rate and to make it thick enough to allow for corrosion. The other possibility is to use a material which is nearly immune to general corrosion. In this second category are titanium and titanium alloys, which exhibit a very high degree of resistance to general corrosion. In this literature study, the corrosion properties of unalloyed titanium, titanium alloyed with palladium, and titanium alloyed with molybdenum and nickel are reviewed. In addition to their excellent general corrosion properties, the two titanium alloys have outstanding resistance against localized corrosion such as pitting or crevice corrosion. Stress corrosion cracking and corrosion fatigue of titanium do not appear to be a problem under repository conditions, but the possibility of delayed cracking caused by hydrogen should be carefully assessed. (author)

  1. Synthesis of titanium oxide nanoparticles using DNA-complex as template for solution-processable hybrid dielectric composites

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, J.C. [Center for Sustainable Materials Chemistry, 153 Gilbert Hall, Oregon State University, Corvallis, OR (United States); Mejia, I.; Murphy, J.; Quevedo, M. [Department of Materials Science and Engineering, University of Texas at Dallas, Dallas, TX (United States); Garcia, P.; Martinez, C.A. [Engineering and Technology Institute, Autonomous University of Ciudad Juarez, Ciudad Juarez, Chihuahua (Mexico)

    2015-09-15

    Highlights: • We developed a synthesis method to produce TiO2 nanoparticles using a DNA complex. • The nanoparticles were anatase phase (~6 nm diameter) and stable in alcohols. • Composites showed a k of 13.4, 4.6 times larger than the k of polycarbonate. • The maximum processing temperature was 90 °C. • The low temperature enables their use in low-voltage, low-cost, flexible electronics. - Abstract: We report the synthesis of TiO2 nanoparticles prepared by the hydrolysis of titanium isopropoxide (TTIP) in the presence of a DNA complex for solution-processable dielectric composites. The nanoparticles were incorporated as fillers in polycarbonate at low concentrations (1.5, 5 and 7 wt%) to produce hybrid dielectric films with a dielectric constant higher than that of thermally grown silicon oxide. It was found that the DNA complex plays an important role as a capping agent in the formation and suspension stability of nanocrystalline anatase-phase TiO2 at room temperature with uniform size (∼6 nm) and narrow distribution. The effective dielectric constant of spin-cast polycarbonate thin films increased from 2.84 to 13.43 with the incorporation of TiO2 nanoparticles into the polymer host. These composites can be solution processed with a maximum temperature of 90 °C and are potential candidates for application in low-cost macro-electronics.

  2. Large-scale motions in the universe: a review

    International Nuclear Information System (INIS)

    Burstein, D.

    1990-01-01

    The expansion of the universe can be retarded in localised regions within the universe both by the presence of gravity and by non-gravitational motions generated in the post-recombination universe. The motions of galaxies thus generated are called 'peculiar motions', and the amplitudes, size scales and coherence of these peculiar motions are among the most direct records of the structure of the universe. As such, measurements of these properties of the present-day universe provide some of the severest tests of cosmological theories. This is a review of the current evidence for large-scale motions of galaxies out to a distance of ∼5000 km s^-1 (in an expanding universe, distance is proportional to radial velocity). 'Large-scale' in this context refers to motions that are correlated over size scales larger than the typical sizes of groups of galaxies, up to and including the size of the volume surveyed. To orient the reader into this relatively new field of study, a short modern history is given together with an explanation of the terminology. Careful consideration is given to the data used to measure the distances, and hence the peculiar motions, of galaxies. The evidence for large-scale motions is presented in a graphical fashion, using only the most reliable data for galaxies spanning a wide range in optical properties and over the complete range of galactic environments. The kinds of systematic errors that can affect this analysis are discussed, and the reliability of these motions is assessed. The predictions of two models of large-scale motion are compared to the observations, and special emphasis is placed on those motions in which our own Galaxy directly partakes. (author)

  3. State of the Art in Large-Scale Soil Moisture Monitoring

    Science.gov (United States)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.; hide

    2013-01-01

    Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  4. A route to explosive large-scale magnetic reconnection in a super-ion-scale current sheet

    Directory of Open Access Journals (Sweden)

    K. G. Tanaka

    2009-01-01

    Full Text Available How to trigger magnetic reconnection is one of the most interesting and important problems in space plasma physics. Recently, electron temperature anisotropy (αeo = Te⊥/Te||) at the center of a current sheet and the non-local effect of the lower-hybrid drift instability (LHDI) that develops at the current sheet edges have attracted attention in this context. In addition to these effects, here we also study the effects of ion temperature anisotropy (αio = Ti⊥/Ti||). Electron anisotropy effects are known to be ineffective in a current sheet whose thickness is of ion scale. In this range of current sheet thickness, the LHDI effects are shown to weaken substantially with a small increase in thickness, and the obtained saturation level is too low for large-scale reconnection to be achieved. We then investigate whether introducing electron and ion temperature anisotropies in the initial stage would couple with the LHDI effects to revive quick triggering of large-scale reconnection in a super-ion-scale current sheet. The results are as follows. (1) The initial electron temperature anisotropy is consumed very quickly when a number of minuscule magnetic islands (each with a lateral length 1.5~3 times the ion inertial length) form. These minuscule islands do not coalesce into a large-scale island that would enable large-scale reconnection. (2) The subsequent LHDI effects disturb the current sheet filled with the small islands. This substantially accelerates the triggering time scale but does not enhance the saturation level of the reconnected flux. (3) When the ion temperature anisotropy is added, it survives through the small-island formation stage and leads to even quicker triggering when the LHDI effects set in. Furthermore, the saturation level is elevated by a factor of ~2, and large-scale reconnection is achieved only in this case. Comparison with two-dimensional simulations that exclude the LHDI effects confirms that the saturation level

  5. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    Science.gov (United States)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as the ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.

  6. Niobium Titanium and Copper wire samples

    CERN Multimedia

    2009-01-01

    Two wire samples, both for carrying 13'000 amperes. One sample is copper; the other is the niobium-titanium wiring used in the LHC magnets. The high magnetic fields needed for guiding particles around the Large Hadron Collider (LHC) ring are created by passing 12'500 amps of current through coils of superconducting wiring. At very low temperatures, superconductors have no electrical resistance and therefore no power loss. The LHC is the largest superconducting installation ever built. The magnetic field must also be extremely uniform. This means the current flowing in the coils has to be very precisely controlled. Indeed, nowhere before has such precision been achieved at such high currents. Magnet coils are made of copper-clad niobium–titanium cables — each wire in the cable consists of 9'000 niobium–titanium filaments ten times finer than a hair. The cables carry up to 12'500 amps and must withstand enormous electromagnetic forces. At full field, the force on one metre of magnet is comparable ...

  7. Global-Scale Hydrology: Simple Characterization of Complex Simulation

    Science.gov (United States)

    Koster, Randal D.

    1999-01-01

    Atmospheric general circulation models (AGCMs) are unique and valuable tools for the analysis of large-scale hydrology. AGCM simulations of climate provide tremendous amounts of hydrological data with a spatial and temporal coverage unmatched by observation systems. To the extent that the AGCM behaves realistically, these data can shed light on the nature of the real world's hydrological cycle. In the first part of the seminar, I will describe the hydrological cycle in a typical AGCM, with some emphasis on the validation of simulated precipitation against observations. The second part of the seminar will focus on a key goal in large-scale hydrology studies, namely the identification of simple, overarching controls on hydrological behavior hidden amidst the tremendous amounts of data produced by the highly complex AGCM parameterizations. In particular, I will show that a simple 50-year-old climatological relation (and a recent extension we made to it) successfully predicts, to first order, both the annual mean and the interannual variability of simulated evaporation and runoff fluxes. The seminar will conclude with an example of a practical application of global hydrology studies. The accurate prediction of weather statistics several months in advance would have tremendous societal benefits, and conventional wisdom today points at the use of coupled ocean-atmosphere-land models for such seasonal-to-interannual prediction. Understanding the hydrological cycle in AGCMs is critical to establishing the potential for such prediction. Our own studies show, among other things, that soil moisture retention can lead to significant precipitation predictability in many midlatitude and tropical regions.

  8. A study of complex scaling transformation using the Wigner representation of wavefunctions.

    Science.gov (United States)

    Kaprálová-Ždánská, Petra Ruth

    2011-05-28

    The complex scaling operator exp(−θx̂p̂/ℏ), being a foundation of the complex scaling method for resonances, is studied in the Wigner phase-space representation. It is shown that the complex scaling operator behaves similarly to the squeezing operator, rotating and amplifying Wigner quasi-probability distributions of the respective wavefunctions. It is disclosed that the distorting effect of the complex scaling transformation is correlated with increased numerical errors of computed resonance energies and widths. The behavior of the numerical error is demonstrated for a computation of CO(2+) vibronic resonances. © 2011 American Institute of Physics

  9. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Full Text Available Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
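
    A serial, single-node sketch of the spectral core that such frameworks parallelise may help fix ideas. The snippet below builds a dense Gaussian affinity graph over a small synthetic dataset and takes the leading non-trivial Laplacian eigenvectors as a 2-D embedding; the distributed graph construction and parallel eigensolver described in the paper are not reproduced:

        import numpy as np
        from scipy.linalg import eigh
        from scipy.sparse.csgraph import laplacian

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 50))  # toy high-dimensional points

        # Dense pairwise affinities with a Gaussian kernel (small data only)
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / d2.mean())
        np.fill_diagonal(W, 0.0)

        # Normalized graph Laplacian; its smallest non-trivial eigenvectors
        # give a low-dimensional spectral embedding (Laplacian-eigenmap style)
        L = laplacian(W, normed=True)
        vals, vecs = eigh(L)
        embedding = vecs[:, 1:3]
        print(embedding.shape)  # (200, 2)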

  10. Large-scale structure observables in general relativity

    International Nuclear Information System (INIS)

    Jeong, Donghui; Schmidt, Fabian

    2015-01-01

    We review recent studies that rigorously define several key observables of the large-scale structure of the Universe in a general relativistic context. Specifically, we consider (i) redshift perturbation of cosmic clock events; (ii) distortion of cosmic rulers, including weak lensing shear and magnification; and (iii) observed number density of tracers of the large-scale structure. We provide covariant and gauge-invariant expressions of these observables. Our expressions are given for a linearly perturbed flat Friedmann–Robertson–Walker metric including scalar, vector, and tensor metric perturbations. While we restrict ourselves to linear order in perturbation theory, the approach can be straightforwardly generalized to higher order. (paper)

  11. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available The paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. A finite element model of the top flange connection system is established with the finite element analysis software MSC.Marc/Mentat and its fatigue strain is analyzed; the flange fatigue load cases are simulated with the Bladed software, the flange fatigue load spectrum is obtained with the rain-flow counting method, and finally the fatigue analysis of the top flange is carried out with the fatigue analysis software MSC.Fatigue and the Palmgren-Miner linear cumulative damage theory. The results provide a new approach to the flange fatigue analysis of large-scale wind turbine generators and have practical engineering value.
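
    The final step of such an analysis, turning a rain-flow-counted load spectrum into a Palmgren-Miner damage sum, can be sketched in a few lines. The stress ranges, cycle counts and S-N curve parameters below are hypothetical and are not taken from the flange analysis in the paper:

        # Hypothetical rain-flow result: (stress range in MPa, counted cycles)
        spectrum = [(120.0, 2.0e4), (80.0, 1.5e5), (50.0, 8.0e5), (30.0, 3.0e6)]

        # Assumed S-N curve of Basquin form: N = C / S^m
        C, m = 2.0e12, 3.0

        def cycles_to_failure(stress_range):
            return C / stress_range ** m

        # Palmgren-Miner linear cumulative damage: D = sum(n_i / N_i); failure at D >= 1
        damage = sum(n / cycles_to_failure(s) for s, n in spectrum)
        print("cumulative damage D = %.3f" % damage)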

  12. Automatic Installation and Configuration for Large Scale Farms

    CERN Document Server

    Novák, J

    2005-01-01

    Since the early appearance of commodity hardware, the use of computers has risen rapidly, and they have become essential in all areas of life. It was soon realized that nodes are able to work cooperatively in order to solve new, more complex tasks. This concept materialized in coherent aggregations of computers called farms and clusters. The collective use of nodes, being efficient and economical, was adopted in education, research and industry before long. But maintenance, especially at large scale, emerged as a problem to be resolved. New challenges needed new methods and tools. Development work was started to build farm management applications and frameworks. In the first part of the thesis, these systems are introduced. After a general description of the matter, a comparative analysis of different approaches and tools illustrates the practical aspects of the theoretical discussion. CERN, the European Organization for Nuclear Research, is the largest particle physics laboratory in the world....

  13. Improved decomposition–coordination and discrete differential dynamic programming for optimization of large-scale hydropower system

    International Nuclear Information System (INIS)

    Li, Chunlong; Zhou, Jianzhong; Ouyang, Shuo; Ding, Xiaoling; Chen, Lu

    2014-01-01

    Highlights: • Optimization of a large-scale hydropower system in the Yangtze River basin. • Improved decomposition–coordination and discrete differential dynamic programming. • Generating the initial solution randomly to reduce generation time. • Proposing a relative coefficient for more power generation. • Proposing an adaptive bias corridor technology to enhance convergence speed. - Abstract: With the construction of major hydro plants, more and more large-scale hydropower systems are gradually taking shape, which raises the challenge of optimizing these systems. Optimization of a large-scale hydropower system (OLHS), which is to determine the water discharges or water levels of all hydro plants so as to maximize total power generation subject to many constraints, is a high-dimensional, nonlinear, coupled and complex problem. In order to solve the OLHS problem effectively, an improved decomposition–coordination and discrete differential dynamic programming (IDC–DDDP) method is proposed in this paper. A strategy of generating the initial solution randomly is adopted to reduce generation time. Meanwhile, a relative coefficient based on maximum output capacity is proposed for more power generation. Moreover, an adaptive bias corridor technology is proposed to enhance convergence speed. The proposed method is applied to long-term optimal dispatch of the large-scale hydropower system (LHS) in the Yangtze River basin. Compared to other methods, IDC–DDDP has competitive performance in not only total power generation but also convergence speed, which provides a new method to solve the OLHS problem.
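
    The dynamic-programming core that DDDP refines can be illustrated, in heavily simplified form, for a single reservoir on a discretized storage grid. All numbers below (inflows, storage levels, the toy generation function) are hypothetical, and the multi-reservoir decomposition-coordination and corridor logic of IDC-DDDP are not reproduced:

        import numpy as np

        inflow = [40.0, 55.0, 30.0, 20.0]        # hypothetical inflows per period
        levels = np.linspace(100.0, 300.0, 21)   # discretized storage states
        s_start = s_end = 200.0

        def power(release, storage):
            # Toy generation function: proportional to release and to storage (head)
            return max(release, 0.0) * storage * 1e-3

        # Backward dynamic programming over (period, storage level)
        best = {(len(inflow), s_end): 0.0}
        for t in reversed(range(len(inflow))):
            for s in (levels if t > 0 else [s_start]):
                options = []
                for s_next in (levels if t + 1 < len(inflow) else [s_end]):
                    release = s + inflow[t] - s_next
                    if release < 0.0:            # cannot release a negative volume
                        continue
                    options.append(power(release, s) + best[(t + 1, s_next)])
                best[(t, s)] = max(options) if options else float("-inf")

        print("max total generation (toy units):", best[(0, s_start)])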

  14. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    The recent trend toward large-scale simulations of fusion and processing plasmas is briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to analyses of processing plasmas. (author)

  15. Performance Health Monitoring of Large-Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rajamony, Ram [IBM Research, Austin, TX (United States)

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main components. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  16. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Biros, George [Univ. of Texas, Austin, TX (United States)

    2018-01-12

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set to address the most difficult class of UQ problems: those for which both the underlying PDE model as well as the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a

  17. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    Science.gov (United States)

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
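
    The visual-tree step, grouping visually similar classes into shared learning tasks, can be approximated by clustering class-level feature statistics. In the sketch below, random vectors stand in for class-mean deep CNN features, so the grouping is purely illustrative of the mechanism rather than of the paper's learned tree:

        import numpy as np
        from sklearn.cluster import AgglomerativeClustering

        rng = np.random.default_rng(1)
        n_classes, feat_dim = 100, 256
        class_means = rng.normal(size=(n_classes, feat_dim))  # stand-ins for deep features

        # Group classes with similar mean features into task groups (one level of a "visual tree")
        grouping = AgglomerativeClustering(n_clusters=10).fit(class_means)
        for g in range(10):
            members = np.where(grouping.labels_ == g)[0]
            print("group", g, "->", len(members), "classes")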

  18. Fast and accurate detection of spread source in large complex networks.

    Science.gov (United States)

    Paluch, Robert; Lu, Xiaoyan; Suchecki, Krzysztof; Szymański, Bolesław K; Hołyst, Janusz A

    2018-02-06

    Spread over complex networks is a ubiquitous process with increasingly wide applications. Locating spread sources is often important, e.g. finding the first patient in an epidemic, or the source of a rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the important case of this problem in which a limited set of nodes act as observers and report the times at which the spread reached them. PTVA uses all observers to find a solution. Here we propose a new approach in which observers with low-quality information (i.e. with large spread encounter times) are ignored and potential sources are selected based on the likelihood gradient from high-quality observers. The original complexity of PTVA is O(N^α), where α ∈ (3,4) depends on the network topology and the number of observers (N denotes the number of nodes in the network). Our Gradient Maximum Likelihood Algorithm (GMLA) reduces this complexity to O(N^2 log N). Extensive numerical tests performed on synthetic networks and a real Gnutella network, with the limitation that the identities of spreaders are unknown to observers, demonstrate that for scale-free networks with such a limitation GMLA yields higher-quality localization results than PTVA does.
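
    A much-simplified sketch of the observer-based idea (not the PTVA or GMLA algorithms themselves): keep only the earliest-reporting, high-quality observers, and score each candidate source by how consistent the reported arrival times are with shortest-path distances from that candidate, assuming a roughly constant spreading speed. The graph and the observer reports below are synthetic:

        import networkx as nx
        import numpy as np

        G = nx.barabasi_albert_graph(500, 3, seed=0)

        # Hypothetical observer reports: node -> time at which the spread arrived
        reports = {10: 4.0, 42: 6.0, 99: 5.0, 150: 9.0, 300: 11.0}

        # Keep only the high-quality observers (smallest arrival times)
        observers = dict(sorted(reports.items(), key=lambda kv: kv[1])[:3])

        def score(candidate):
            # With a roughly constant spreading speed, (arrival time - hop distance)
            # should be nearly identical across observers when 'candidate' is the source
            offsets = [observers[o] - nx.shortest_path_length(G, candidate, o)
                       for o in observers]
            return float(np.var(offsets))

        best = min(G.nodes, key=score)
        print("most likely source:", best)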

  19. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  20. Phenomenology of two-dimensional stably stratified turbulence under large-scale forcing

    KAUST Repository

    Kumar, Abhishek; Verma, Mahendra K.; Sukhatme, Jai

    2017-01-01

    In this paper, we characterise the scaling of energy spectra, and the interscale transfer of energy and enstrophy, for strongly, moderately and weakly stably stratified two-dimensional (2D) turbulence, restricted in a vertical plane, under large-scale random forcing. In the strongly stratified case, a large-scale vertically sheared horizontal flow (VSHF) coexists with small-scale turbulence. The VSHF consists of internal gravity waves, and the turbulent flow has a kinetic energy (KE) spectrum that follows an approximate k^-3 scaling with zero KE flux and a robust positive enstrophy flux. The spectrum of the turbulent potential energy (PE) also approximately follows a k^-3 power-law, and its flux is directed to small scales. For moderate stratification, there is no VSHF and the KE of the turbulent flow exhibits Bolgiano–Obukhov scaling that transitions from a shallow k^-11/5 form at large scales to a steeper approximate k^-3 scaling at small scales. The entire range of scales shows a strong forward enstrophy flux, and interestingly, large (small) scales show an inverse (forward) KE flux. The PE flux in this regime is directed to small scales, and the PE spectrum is characterised by an approximate k^-1.64 scaling. Finally, for weak stratification, KE is transferred upscale and its spectrum closely follows a k^-2.5 scaling, while PE exhibits a forward transfer and its spectrum shows an approximate k^-1.6 power-law. For all stratification strengths, the total energy always flows from large to small scales and almost all the spectral indices are well explained by accounting for the scale-dependent nature of the corresponding flux.

  1. Phenomenology of two-dimensional stably stratified turbulence under large-scale forcing

    KAUST Repository

    Kumar, Abhishek

    2017-01-11

    In this paper, we characterise the scaling of energy spectra, and the interscale transfer of energy and enstrophy, for strongly, moderately and weakly stably stratified two-dimensional (2D) turbulence, restricted in a vertical plane, under large-scale random forcing. In the strongly stratified case, a large-scale vertically sheared horizontal flow (VSHF) coexists with small-scale turbulence. The VSHF consists of internal gravity waves, and the turbulent flow has a kinetic energy (KE) spectrum that follows an approximate k^-3 scaling with zero KE flux and a robust positive enstrophy flux. The spectrum of the turbulent potential energy (PE) also approximately follows a k^-3 power-law, and its flux is directed to small scales. For moderate stratification, there is no VSHF and the KE of the turbulent flow exhibits Bolgiano–Obukhov scaling that transitions from a shallow k^-11/5 form at large scales to a steeper approximate k^-3 scaling at small scales. The entire range of scales shows a strong forward enstrophy flux, and interestingly, large (small) scales show an inverse (forward) KE flux. The PE flux in this regime is directed to small scales, and the PE spectrum is characterised by an approximate k^-1.64 scaling. Finally, for weak stratification, KE is transferred upscale and its spectrum closely follows a k^-2.5 scaling, while PE exhibits a forward transfer and its spectrum shows an approximate k^-1.6 power-law. For all stratification strengths, the total energy always flows from large to small scales and almost all the spectral indices are well explained by accounting for the scale-dependent nature of the corresponding flux.

  2. Three-dimensional nanometer scale analyses of precipitate structures and local compositions in titanium aluminide engineering alloys

    Science.gov (United States)

    Gerstl, Stephan S. A.

    Titanium aluminide (TiAl) alloys are among the fastest-developing classes of materials for use in high-temperature structural applications. Their low density and high strength make them excellent candidates for both engine and airframe applications. Creep properties of TiAl alloys, however, have been a limiting factor in applying the material to a larger commercial market. In this research, nanometer-scale compositional and structural analyses of several TiAl alloys, ranging from model Ti-Al-C ternary alloys to putative commercial alloys with 10 components, are investigated using three-dimensional atom probe (3DAP) and transmission electron microscopies. Nanometer-sized boride, silicide, and carbide precipitates are involved in strengthening TiAl alloys; however, chemical partitioning measurements reveal oxygen concentrations up to 14 at.% within the precipitate phases, resulting in the realization that oxycarbide formation contributes to the precipitation strengthening of TiAl alloys. The local compositions of lamellar microstructures and a variety of precipitates in the TiAl system, including borides, silicides, binary carbides, and intermetallic carbides, are investigated. Chemical partitioning of the microalloying elements between the α2/γ lamellar phases and the precipitate/γ-matrix phases is determined. Both W and Hf have been shown to exhibit a near-interfacial excess of 0.26 and 0.35 atoms nm^-2, respectively, within ca. 7 nm of lamellar interfaces in a complex TiAl alloy. In the case of needle-shaped perovskite Ti3AlC carbide precipitates, periodic domain boundaries are observed 5.3 ± 0.8 nm apart along their growth axis parallel to the TiAl[001] crystallographic direction, with concomitant composition variations after 24 h at 800 °C.

  3. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    Science.gov (United States)

    Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D

    2016-01-01

    The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.

  4. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    Directory of Open Access Journals (Sweden)

    Sundeep Teki

    Full Text Available The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.

  5. Level density in the complex scaling method

    International Nuclear Information System (INIS)

    Suzuki, Ryusuke; Kato, Kiyoshi; Myo, Takayuki

    2005-01-01

    It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex-scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM. (author)
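
    For reference, the continuum level density discussed above is conventionally defined through the difference between the full and free Green's functions and is tied to the scattering phase shift; a standard textbook form (stated here for orientation, not quoted from this particular paper) is

        \Delta(E) \;=\; -\frac{1}{\pi}\,\mathrm{Im}\,\mathrm{Tr}\!\left[\frac{1}{E^{+}-H}-\frac{1}{E^{+}-H_{0}}\right]
                  \;=\; \frac{1}{\pi}\,\frac{\mathrm{d}\delta(E)}{\mathrm{d}E},

    where H is the full Hamiltonian, H_0 its asymptotic (free) part, E^+ = E + i\varepsilon, and \delta(E) the scattering phase shift.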

  6. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    Science.gov (United States)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = -0.0909 to Rot = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, “over-damped” LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.

  7. How do the multiple large-scale climate oscillations trigger extreme precipitation?

    Science.gov (United States)

    Shi, Pengfei; Yang, Tao; Xu, Chong-Yu; Yong, Bin; Shao, Quanxi; Li, Zhenya; Wang, Xiaoyan; Zhou, Xudong; Li, Shu

    2017-10-01

    Identifying the links between variations in large-scale climate patterns and precipitation is of tremendous assistance in characterizing surpluses or deficits of precipitation, which is especially important for the evaluation of local water resources and ecosystems in semi-humid and semi-arid regions. Restricted by the current limited knowledge of the underlying mechanisms, statistical correlation methods are often used rather than physically based models to characterize these connections. Nevertheless, available correlation methods are generally unable to reveal the interactions among a wide range of climate oscillations and their associated effects on precipitation, especially on extreme precipitation. In this work, a probabilistic analysis approach by means of a state-of-the-art Copula-based joint probability distribution is developed to characterize the aggregated behaviors of large-scale climate patterns and their connections to precipitation. This method is employed to identify the complex connections between climate patterns (Atlantic Multidecadal Oscillation (AMO), El Niño-Southern Oscillation (ENSO) and Pacific Decadal Oscillation (PDO)) and seasonal precipitation over a typical semi-humid and semi-arid region, the Haihe River Basin in China. Results show that the interactions among multiple climate oscillations are non-uniform in most seasons and phases. Certain joint extreme phases can significantly trigger extreme precipitation (flood and drought) owing to the amplification effect among climate oscillations.
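
    The Gaussian-copula building block behind such a joint probability analysis can be sketched compactly. The series below are synthetic stand-ins (not the Haihe River data, and a single index rather than AMO, ENSO and PDO jointly): each margin is rank-transformed to normal scores, the dependence is summarized by their correlation, and a joint exceedance probability is then evaluated from the fitted copula:

        import numpy as np
        from scipy.stats import norm, multivariate_normal, rankdata

        rng = np.random.default_rng(2)
        n = 60                                  # e.g. 60 seasons of synthetic data
        index = rng.normal(size=n)              # stand-in climate index
        precip = 0.6 * index + rng.normal(scale=0.8, size=n)  # correlated rainfall anomaly

        def normal_scores(x):
            # Empirical margin -> uniform scores -> standard normal scores
            u = rankdata(x) / (len(x) + 1.0)
            return norm.ppf(u)

        z = np.column_stack([normal_scores(index), normal_scores(precip)])
        rho = np.corrcoef(z.T)[0, 1]

        # Probability that both variables fall below their 20th percentiles (a joint dry extreme)
        p = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf(
            [norm.ppf(0.2), norm.ppf(0.2)])
        print("rho = %.2f, joint P(both below 20th percentile) = %.3f" % (rho, p))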

  8. Large scale three-dimensional topology optimisation of heat sinks cooled by natural convection

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Sigmund, Ole; Aage, Niels

    2016-01-01

    the Boussinesq approximation. The fully coupled non-linear multiphysics system is solved using stabilised trilinear equal-order finite elements in a parallel framework allowing for the optimisation of large scale problems with order of 20-330 million state degrees of freedom. The flow is assumed to be laminar...... topologies verify prior conclusions regarding fin length/thickness ratios and Biot numbers, but also indicate that carefully tailored and complex geometries may improve cooling behaviour considerably compared to simple heat fin geometries. (C) 2016 Elsevier Ltd. All rights reserved....

  9. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    Science.gov (United States)

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework.

  10. Classification of titanium dioxide

    International Nuclear Information System (INIS)

    Macias B, L.R.; Garcia C, R.M.; Maya M, M.E.; Ita T, A. De; Palacios G, J.

    2002-01-01

    In this work, X-ray diffraction (XRD), scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy techniques are used to achieve a complete identification of the phases and phase mixtures of a crystalline material such as titanium dioxide. The problem to be solved is distinguishing a sample of titanium dioxide from a titanium dioxide pigment. A standard sample of titanium dioxide with a NIST certificate is used, which indicates a purity of 99.74% for the TiO2. The recommended procedure is as follows: a) Perform an X-ray diffraction analysis on the titanium dioxide pigment sample and on the titanium dioxide standard, where no differences are expected. b) Perform a chemical analysis by energy-dispersive X-ray spectroscopy in a microscope, taking advantage of the high vacuum since it is oxygen that is analysed; if aluminium oxide is found in a proportion greater than 1%, the sample is established to be a titanium dioxide pigment, but if it is lower, it is only titanium dioxide. This type of analysis is an application of nuclear techniques useful for the tariff classification of merchandise considered difficult to recognize. (Author)

  11. Large-scale preparation of hollow graphitic carbon nanospheres

    International Nuclear Information System (INIS)

    Feng, Jun; Li, Fu; Bai, Yu-Jun; Han, Fu-Dong; Qi, Yong-Xin; Lun, Ning; Lu, Xi-Feng

    2013-01-01

    Hollow graphitic carbon nanospheres (HGCNSs) were synthesized on a large scale by a simple reaction between glucose and Mg at 550 °C in an autoclave. Characterization by X-ray diffraction, Raman spectroscopy and transmission electron microscopy demonstrates the formation of HGCNSs with an average diameter of about 10 nm and a wall thickness of a few graphene layers. The HGCNSs exhibit a reversible capacity of 391 mAh g^-1 after 60 cycles when used as anode materials for Li-ion batteries. -- Graphical abstract: Hollow graphitic carbon nanospheres could be prepared on a large scale by the simple reaction between glucose and Mg at 550 °C; they exhibit superior electrochemical performance to graphite. Highlights: ► Hollow graphitic carbon nanospheres (HGCNSs) were prepared on a large scale at 550 °C. ► The preparation is simple, effective and eco-friendly. ► The in situ yielded MgO nanocrystals promote graphitization. ► The HGCNSs exhibit superior electrochemical performance to graphite.

  12. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    Full Text Available A new package for accelerating large-scale phase-field simulations was developed using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using a specific algorithm in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interaction were each solved with the GPU-based algorithm to test the performance of the package. A comparison of the results between the solver executed on a single CPU and the one on the GPU showed that the GPU version is about 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
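
    The semi-implicit Fourier update at the heart of such packages is compact enough to sketch on a CPU with NumPy. The 2-D Allen-Cahn example below uses illustrative grid and coefficient values (not those of the paper) and treats the gradient term implicitly in Fourier space while the bulk nonlinearity is handled explicitly:

        import numpy as np

        N, dx, dt = 128, 1.0, 0.1
        L_mob, kappa = 1.0, 2.0                  # mobility and gradient-energy coefficient

        k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
        k2 = k[:, None] ** 2 + k[None, :] ** 2   # squared wavenumbers on the grid

        rng = np.random.default_rng(3)
        phi = 0.05 * rng.standard_normal((N, N)) # small random initial order parameter

        for step in range(200):
            # Bulk driving force df/dphi of f(phi) = (phi^2 - 1)^2 / 4, i.e. phi^3 - phi
            g_hat = np.fft.fft2(phi ** 3 - phi)
            phi_hat = np.fft.fft2(phi)
            # Semi-implicit update: the linear gradient term is treated implicitly
            phi_hat = (phi_hat - dt * L_mob * g_hat) / (1.0 + dt * L_mob * kappa * k2)
            phi = np.fft.ifft2(phi_hat).real

        print("phi range:", float(phi.min()), float(phi.max()))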

  13. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed

    2017-03-16

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end, cellular networks are indeed a strong first mile candidate to accommodate the data tsunami to be generated by the IoT. However, IoT devices are required in the cellular paradigm to undergo random access procedures as a precursor to resource allocation. Such procedures impose a major bottleneck that hinders cellular networks' ability to support large-scale IoT. In this article, we shed light on the random access dilemma and present a case study based on experimental data as well as system-level simulations. Accordingly, a case is built for the latent need to revisit random access procedures. A call for action is motivated by listing a few potential remedies and recommendations.

  14. A dynamic global-coefficient mixed subgrid-scale model for large-eddy simulation of turbulent flows

    International Nuclear Information System (INIS)

    Singh, Satbir; You, Donghyun

    2013-01-01

    Highlights: ► A new SGS model is developed for LES of turbulent flows in complex geometries. ► A dynamic global-coefficient SGS model is coupled with a scale-similarity model. ► Overcomes some of the difficulties associated with eddy-viscosity closures. ► Does not require averaging or clipping of the model coefficient for stabilization. ► The predictive capability is demonstrated in a number of turbulent flow simulations. -- Abstract: A dynamic global-coefficient mixed subgrid-scale eddy-viscosity model for large-eddy simulation of turbulent flows in complex geometries is developed. In the present model, the subgrid-scale stress is decomposed into the modified Leonard stress, cross stress, and subgrid-scale Reynolds stress. The modified Leonard stress is explicitly computed assuming a scale similarity, while the cross stress and the subgrid-scale Reynolds stress are modeled using the global-coefficient eddy-viscosity model. The model coefficient is determined by a dynamic procedure based on the global equilibrium between the subgrid-scale dissipation and the viscous dissipation. The new model relieves some of the difficulties associated with an eddy-viscosity closure, such as the nonalignment of the principal axes of the subgrid-scale stress tensor and the strain rate tensor and the anisotropy of turbulent flow fields, while, like other dynamic global-coefficient models, it does not require averaging or clipping of the model coefficient for numerical stabilization. The combination of the global-coefficient eddy-viscosity model and a scale-similarity model is demonstrated to produce improved predictions in a number of turbulent flow simulations
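
    The eddy-viscosity building block shared by the standard Smagorinsky model and its dynamic variants is simple to write down; the sketch below evaluates it on a synthetic 2-D velocity field with a fixed model constant (the dynamic global-coefficient procedure and the scale-similarity term of the paper are not reproduced):

        import numpy as np

        N, dx = 64, 1.0
        Cs = 0.1                                  # fixed Smagorinsky constant (static model)
        rng = np.random.default_rng(4)
        u, v = rng.standard_normal((2, N, N))     # toy resolved 2-D velocity field

        # Resolved strain-rate tensor components via finite differences
        du_d0, du_d1 = np.gradient(u, dx)         # derivatives along the two grid axes
        dv_d0, dv_d1 = np.gradient(v, dx)
        S11, S22 = du_d0, dv_d1
        S12 = 0.5 * (du_d1 + dv_d0)
        S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))

        # Smagorinsky subgrid-scale eddy viscosity: nu_t = (Cs * Delta)^2 * |S|
        nu_t = (Cs * dx) ** 2 * S_mag
        print("mean eddy viscosity:", float(nu_t.mean()))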

  15. Scaling up complex interventions: insights from a realist synthesis.

    Science.gov (United States)

    Willis, Cameron D; Riley, Barbara L; Stockton, Lisa; Abramowicz, Aneta; Zummach, Dana; Wong, Geoff; Robinson, Kerry L; Best, Allan

    2016-12-19

    Preventing chronic diseases, such as cancer, cardiovascular disease and diabetes, requires complex interventions, involving multi-component and multi-level efforts that are tailored to the contexts in which they are delivered. Despite an increasing number of complex interventions in public health, many fail to be 'scaled up'. This study aimed to increase understanding of how and under what conditions complex public health interventions may be scaled up to benefit more people and populations. A realist synthesis was conducted and discussed at an in-person workshop involving practitioners responsible for scaling up activities. Realist approaches view causality through the linkages between changes in contexts (C) that activate mechanisms (M), leading to specific outcomes (O) (CMO configurations). To focus this review, three cases of complex interventions that had been successfully scaled up were included: Vibrant Communities, Youth Build USA and Pathways to Education. A search strategy of published and grey literature related to each case was developed, involving searches of relevant databases and nominations from experts. Data extracted from included documents were classified according to CMO configurations within strategic themes. Findings were compared and contrasted with guidance from diffusion theory, and interpreted with knowledge users to identify practical implications and potential directions for future research. Four core mechanisms were identified, namely awareness, commitment, confidence and trust. These mechanisms were activated within two broad scaling up strategies, those of renewing and regenerating, and documenting success. Within each strategy, specific actions to change contexts included building partnerships, conducting evaluations, engaging political support and adapting funding models. These modified contexts triggered the identified mechanisms, leading to a range of scaling up outcomes, such as commitment of new communities, changes in relevant

  16. The Convergence of High Performance Computing and Large Scale Data Analytics

    Science.gov (United States)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.
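
    A rough sketch of the spatiotemporal-index idea mentioned above: a small relational table maps time windows and bounding boxes to the HDFS locations of data chunks, so a query is first translated into file paths before any heavy processing starts. The schema, table name and paths below are hypothetical illustrations, not the NCCS/ADAPT implementation.

        import sqlite3

        # Minimal sketch: a relational index mapping (time window, bounding box) -> data location.
        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE chunk_index (
                            path TEXT, t_start REAL, t_end REAL,
                            lat_min REAL, lat_max REAL, lon_min REAL, lon_max REAL)""")
        conn.execute("INSERT INTO chunk_index VALUES ('hdfs:///merra/1980/01.nc', 0, 744, -90, 90, -180, 180)")
        conn.execute("INSERT INTO chunk_index VALUES ('hdfs:///merra/1980/02.nc', 744, 1416, -90, 90, -180, 180)")

        def locate(t0, t1, lat0, lat1, lon0, lon1):
            """Return paths of data chunks overlapping the query window (no data is read yet)."""
            q = """SELECT path FROM chunk_index
                   WHERE t_start < ? AND t_end > ?
                     AND lat_min < ? AND lat_max > ?
                     AND lon_min < ? AND lon_max > ?"""
            return [row[0] for row in conn.execute(q, (t1, t0, lat1, lat0, lon1, lon0))]

        print(locate(0, 100, 30, 60, -10, 40))   # -> ['hdfs:///merra/1980/01.nc']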

  17. Renormalization Scale-Fixing for Complex Scattering Amplitudes

    Energy Technology Data Exchange (ETDEWEB)

    Brodsky, Stanley J.; /SLAC; Llanes-Estrada, Felipe J.; /Madrid U.

    2005-12-21

    We show how to fix the renormalization scale for hard-scattering exclusive processes such as deeply virtual meson electroproduction by applying the BLM prescription to the imaginary part of the scattering amplitude and employing a fixed-t dispersion relation to obtain the scale-fixed real part. In this way we resolve the ambiguity in BLM renormalization scale-setting for complex scattering amplitudes. We illustrate this by computing the H generalized parton distribution at leading twist in an analytic quark-diquark model for the parton-proton scattering amplitude which can incorporate Regge exchange contributions characteristic of the deep inelastic structure functions.

  18. Corrosion behaviour and galvanic coupling of titanium and welded titanium in LiBr solutions

    International Nuclear Information System (INIS)

    Blasco-Tamarit, E.; Igual-Munoz, A.; Garcia Anton, J.; Garcia-Garcia, D.

    2007-01-01

    Corrosion resistance and galvanic coupling of Grade 2 commercially pure titanium in its welded and non-welded conditions were systematically analyzed in LiBr solutions. Galvanic corrosion was evaluated through two different methods: anodic polarization (according to the Mixed Potential Theory) and electrochemical noise (using a zero-resistance ammeter). Samples were etched to study the microstructure. The action of lithium chromate as a corrosion inhibitor has been evaluated. Titanium and welded titanium showed extremely low corrosion current densities and elevated pitting potential values (higher than 1 V). The results of both methods, anodic polarization and electrochemical noise, showed that the welded titanium was always the anodic element of the titanium-welded titanium pair, so that its corrosion resistance decreases due to the galvanic effect.

  19. Thermal power generation projects "Large Scale Solar Heating"; EU-Thermie-Projekte "Large Scale Solar Heating"

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the "Large-Scale Solar Heating" programme for a Europe-wide development of the technology. The demonstration programme derived from it was judged favourably by the experts but was not immediately (1996) accepted for financial subsidies. In November 1997 the EU Commission provided 1.5 million ECU, which allowed the realisation of an updated project proposal. By mid-1997 a smaller project had already been approved; it had been requested under the lead of Chalmers Industriteknik (CIT) in Sweden and mainly serves the transfer of technology. (orig.)

  20. Cell-laden hydrogel/titanium microhybrids: Site-specific cell delivery to metallic implants for improved integration.

    Science.gov (United States)

    Koenig, Geraldine; Ozcelik, Hayriye; Haesler, Lisa; Cihova, Martina; Ciftci, Sait; Dupret-Bories, Agnes; Debry, Christian; Stelzle, Martin; Lavalle, Philippe; Vrana, Nihal Engin

    2016-03-01

    Porous titanium implants are widely used in dental, orthopaedic and otorhinolaryngology fields to improve implant integration to host tissue. A possible step further to improve the integration with the host is the incorporation of autologous cells in porous titanium structures via cell-laden hydrogels. Fast gelling hydrogels have advantageous properties for in situ applications such as localisation of specific cells and growth factors at a target area without dispersion. The ability to control the cell types in different regions of an implant is important in applications where the target tissue (i) has structural heterogeneity (multiple cell types with a defined spatial configuration with respect to each other); (ii) has physical property gradients essential for its function (such as in the case of osteochondral tissue transition). Due to their near immediate gelation, such gels can also be used for site-specific modification of porous titanium structures, particularly for implants which would face different tissues at different locations. Herein, we describe a step-by-step design of a model system: the model cell-laden gel-containing porous titanium implants in the form of titanium microbead/hydrogel (maleimide-dextran or maleimide-PVA based) microhybrids. These systems enable the determination of the effect of titanium presence on gel properties and encapsulated cell behaviour as a miniaturized version of full-scale implants, providing a system compatible with conventional analysis methods. We used a fibroblast/vascular endothelial cell co-culture as our model system, and by utilising single microbeads we have quantified the effect of gel microenvironment (degradability, presence of RGD peptides within gel formulation) on cell behaviour and the effect of the titanium presence on cell behaviour and gel formation. Titanium presence slightly changed gel properties without hindering gel formation or affecting cell viability. Cells showed a preference to move towards

  1. Large scale electromechanical transistor with application in mass sensing

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Leisheng; Li, Lijie, E-mail: L.Li@swansea.ac.uk [Multidisciplinary Nanotechnology Centre, College of Engineering, Swansea University, Swansea SA2 8PP (United Kingdom)

    2014-12-07

    The nanomechanical transistor (NMT) has evolved from the single electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The unfavoured aspects of the NMT are the complexity of the fabrication process and its signal processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of a large-scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration—an external force has to be used. In this paper, an LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, with one of them being embedded in the transistor. The sensor is capable of detecting added mass using the eigenstate shift method by reading the change of electrical current from the transistor, which has much higher sensitivity than the conventional eigenfrequency shift approach used in classical cantilever-based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.
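
    A toy numerical illustration of the eigenstate-shift idea (a generic two-degree-of-freedom model of weakly coupled cantilevers with made-up stiffness, coupling and mass values; this is not the device model from the paper): for a small added mass, the mode shapes change far more, in relative terms, than the eigenfrequencies do.

        import numpy as np

        def modes(m1, m2, k=1.0, kc=0.01):
            """Eigenfrequencies and normalized mode shapes of two cantilevers coupled by a weak spring kc."""
            M = np.diag([m1, m2])
            K = np.array([[k + kc, -kc], [-kc, k + kc]])
            w2, V = np.linalg.eig(np.linalg.solve(M, K))
            order = np.argsort(w2)
            V = V[:, order]
            return np.sqrt(w2[order]), V / np.linalg.norm(V, axis=0)

        w0, V0 = modes(1.0, 1.0)              # identical cantilevers
        w1, V1 = modes(1.0, 1.0 + 1e-3)       # tiny mass added to cantilever 2
        print("relative eigenfrequency shifts:", np.abs(w1 - w0) / w0)
        print("mode-shape (eigenstate) change:", np.linalg.norm(np.abs(V1) - np.abs(V0)))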

  2. Large-scale retrieval for medical image analytics: A comprehensive review.

    Science.gov (United States)

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics was greatly facilitated by the explosion of digital imaging techniques, where huge amounts of medical images were produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges/opportunities of medical image analytics on a large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
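
    A minimal sketch of the retrieval stage of such a pipeline (feature representation, then indexing and search), using random vectors as stand-ins for learned image descriptors; real large-scale systems replace the brute-force step with approximate indexing such as hashing or quantization.

        import numpy as np

        rng = np.random.default_rng(0)
        db_features = rng.normal(size=(10_000, 128))      # stand-ins for learned image descriptors
        db_features /= np.linalg.norm(db_features, axis=1, keepdims=True)

        def search(query, k=5):
            """Brute-force cosine-similarity search over the feature index."""
            q = query / np.linalg.norm(query)
            scores = db_features @ q
            top = np.argpartition(-scores, k)[:k]         # k best candidates, unordered
            top = top[np.argsort(-scores[top])]           # order them by similarity
            return top, scores[top]

        ids, sims = search(rng.normal(size=128))
        print(ids, sims)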

  3. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
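
    The prototype idea is closely related to Nyström-style low-rank kernel approximation; the generic sketch below (random data, randomly chosen prototypes, an RBF kernel) illustrates that flavour of approximation and is not the authors' exact PVM formulation.

        import numpy as np

        def rbf(A, B, gamma=0.5):
            """Gaussian (RBF) kernel matrix between two point sets."""
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(2000, 2))
        prototypes = X[rng.choice(len(X), size=100, replace=False)]   # random here; k-means centres work better

        K_np = rbf(X, prototypes)                                     # n x m block
        K_pp = rbf(prototypes, prototypes)                            # m x m block
        K_approx = K_np @ np.linalg.pinv(K_pp) @ K_np.T               # low-rank surrogate of the full n x n kernel

        err = np.abs(K_approx[:200, :200] - rbf(X[:200], X[:200])).max()
        print("max abs error on a 200 x 200 block:", err)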

  4. Electrowinning molten titanium from titanium dioxide

    CSIR Research Space (South Africa)

    Van Vuuren, DS

    2005-10-01

    Full Text Available Titanium cost build-up (figures extracted from the presentation slides): ilmenite $0.27/kg Ti sponge; titanium slag $0.75/kg Ti sponge; TiCl4 and TiO2 $3.10/kg Ti sponge; Ti sponge raw materials cost $5.50/kg Ti sponge; total Ti sponge cost $9-$11/kg Ti sponge; Ti ingot $15-17/kg Ti; aluminium $1.7/kg Al.

  5. Trojan-Like Internalization of Anatase Titanium Dioxide Nanoparticles by Human Osteoblast Cells.

    Science.gov (United States)

    Ribeiro, A R; Gemini-Piperni, S; Travassos, R; Lemgruber, L; Silva, R C; Rossi, A L; Farina, M; Anselme, K; Shokuhfar, T; Shahbazian-Yassar, R; Borojevic, R; Rocha, L A; Werckmann, J; Granjeiro, J M

    2016-03-29

    Dentistry and orthopedics are undergoing a revolution in order to provide more reliable, comfortable and long-lasting implants to patients. Titanium (Ti) and titanium alloys have been used in dental implants and total hip arthroplasty due to their excellent biocompatibility. However, Ti-based implants in the human body suffer surface degradation (corrosion and wear) resulting in the release of metallic ions and solid wear debris (mainly titanium dioxide) leading to peri-implant inflammatory reactions. Unfortunately, our current understanding of the biological interactions with titanium dioxide nanoparticles is still very limited. Taking this into consideration, this study focuses on the internalization of titanium dioxide nanoparticles in primary bone cells, exploring the events occurring at the nano-bio interface. For the first time, we report the selective binding of calcium (Ca), phosphorous (P) and proteins from cell culture medium to anatase nanoparticles, which is extremely important for nanoparticle internalization and bone cell survival. In the intricate biological environment, anatase nanoparticles form bio-complexes (a mixture of proteins and ions) which act as a kind of 'Trojan horse' for internalization by cells. Furthermore, anatase nanoparticle-induced modifications of cell behavior (viability and internalization) could be understood in detail. The results presented in this report can inspire new strategies for the use of titanium dioxide nanoparticles in several regeneration therapies.

  6. Ductile and brittle transition behavior of titanium alloys in ultra-precision machining.

    Science.gov (United States)

    Yip, W S; To, S

    2018-03-02

    Titanium alloys are extensively applied in biomedical industries due to their excellent material properties. However, they are recognized as difficult-to-cut materials due to their low thermal conductivity, which adds complexity to their deformation mechanisms and restricts precise production. This paper presents a new observation about the removal regime of titanium alloys. The experimental results, including the chip formation, thrust force signal and surface profile, showed that there was a critical cutting distance for achieving better surface integrity of the machined surface. The machined areas with better surface roughness were located before a clear transition point, defined as the ductile to brittle transition. The machined area in the brittle region displayed fracture deformation, showing cracks on the surface edge. The relationship between depth of cut and the ductile to brittle transition behavior of titanium alloys in ultra-precision machining (UPM) was also revealed in this study; it showed that the ductile to brittle transition of titanium alloys occurred mainly at relatively small depths of cut. The study first defines the ductile to brittle transition behavior of titanium alloys in UPM, contributing information on ductile machining as an optimal machining condition for precise production of titanium alloys.

  7. Characterization of the titanium Kβ spectral profile

    International Nuclear Information System (INIS)

    Chantler, C T; Smale, L F; Kinnane, M N; Illig, A J; Kimpton, J A; Crosby, D N

    2013-01-01

    Transition metals have Kα and Kβ characteristic radiation possessing complex asymmetric spectral profiles. Instrumental broadening normally encountered in x-ray experiments shifts features of profiles used for calibration, such as peak energy, by many times the quoted accuracies. We measure and characterize the titanium Kβ spectral profile. The peak energy of the titanium Kβ spectral profile is found to be 4931.966 ± 0.022 eV prior to instrumental broadening. This 4.5 ppm result decreases the uncertainty over the past literature by a factor of 2.6 and is 2.4 standard deviations from the previous standard. The spectrum is analysed and the resolution-free lineshape is extracted and listed for use in other experiments. We also incorporate improvement in analysis applied to earlier results for V Kβ. (paper)

  8. Crustal-Scale Fault Interaction at Rifted Margins and the Formation of Domain-Bounding Breakaway Complexes: Insights From Offshore Norway

    Science.gov (United States)

    Osmundsen, P. T.; Péron-Pinvidic, G.

    2018-03-01

    The large-magnitude faults that control crustal thinning and excision at rifted margins combine into laterally persistent structural boundaries that separate margin domains of contrasting morphology and structure. We term them breakaway complexes. At the Mid-Norwegian margin, we identify five principal breakaway complexes that separate the proximal, necking, distal, and outer margin domains. Downdip and lateral interactions between the faults that constitute breakaway complexes became fundamental to the evolution of the 3-D margin architecture. Different types of fault interaction are observed along and between these faults, but simple models for fault growth will not fully describe their evolution. These structures operate on the crustal scale, cut large thicknesses of heterogeneously layered lithosphere, and facilitate fundamental margin processes such as deformation coupling and exhumation. Variations in large-magnitude fault geometry, erosional footwall incision, and subsequent differential subsidence along the main breakaway complexes likely record the variable efficiency of these processes.

  9. Surface Functionalization of Orthopedic Titanium Implants with Bone Sialoprotein.

    Directory of Open Access Journals (Sweden)

    Andreas Baranowski

    Full Text Available Orthopedic implant failure due to aseptic loosening and mechanical instability remains a major problem in total joint replacement. Improving osseointegration at the bone-implant interface may reduce micromotion and loosening. Bone sialoprotein (BSP has been shown to enhance bone formation when coated onto titanium femoral implants and in rat calvarial defect models. However, the most appropriate method of BSP coating, the necessary level of BSP coating, and the effect of BSP coating on cell behavior remain largely unknown. In this study, BSP was covalently coupled to titanium surfaces via an aminosilane linker (APTES, and its properties were compared to BSP applied to titanium via physisorption and untreated titanium. Cell functions were examined using primary human osteoblasts (hOBs and L929 mouse fibroblasts. Gene expression of specific bone turnover markers at the RNA level was detected at different intervals. Cell adhesion to titanium surfaces treated with BSP via physisorption was not significantly different from that of untreated titanium at any time point, whereas BSP application via covalent coupling caused reduced cell adhesion during the first few hours in culture. Cell migration was increased on titanium disks that were treated with higher concentrations of BSP solution, independent of the coating method. During the early phases of hOB proliferation, a suppressive effect of BSP was observed independent of its concentration, particularly when BSP was applied to the titanium surface via physisorption. Although alkaline phosphatase activity was reduced in the BSP-coated titanium groups after 4 days in culture, increased calcium deposition was observed after 21 days. In particular, the gene expression level of RUNX2 was upregulated by BSP. The increase in calcium deposition and the stimulation of cell differentiation induced by BSP highlight its potential as a surface modifier that could enhance the osseointegration of orthopedic implants

  10. Electron-helium scattering in the S-wave model using exterior complex scaling

    International Nuclear Information System (INIS)

    Horner, Daniel A.; McCurdy, C. William; Rescigno, Thomas N.

    2004-01-01

    Electron-impact excitation and ionization of helium is studied in the S-wave model. The problem is treated in full dimensionality using a time-dependent formulation of the exterior complex scaling method that does not involve the solution of large linear systems of equations. We discuss the steps that must be taken to compute stable ionization amplitudes. We present total excitation, total ionization and single differential cross sections from the ground and n=2 excited states and compare our results with those obtained by others using a frozen-core model

  11. Towards a Database System for Large-scale Analytics on Strings

    KAUST Repository

    Sahli, Majed A.

    2015-07-23

    Recent technological advances are causing an explosion in the production of sequential data. Biological sequences, web logs and time series are represented as strings. Currently, strings are stored, managed and queried in an ad-hoc fashion because they lack a standardized data model and query language. String queries are computationally demanding, especially when strings are long and numerous. Existing approaches cannot handle the growing number of strings produced by environmental, healthcare, bioinformatic, and space applications. There is a trade-off between performing analytics efficiently and scaling to thousands of cores to finish in reasonable times. In this thesis, we introduce a data model that unifies the input and output representations of core string operations. We define a declarative query language for strings where operators can be pipelined to form complex queries. A rich set of core string operators is described to support string analytics. We then demonstrate a database system for string analytics based on our model and query language. In particular, we propose the use of a novel data structure augmented by efficient parallel computation to strike a balance between preprocessing overheads and query execution times. Next, we delve into repeated motifs extraction as a core string operation for large-scale string analytics. Motifs are frequent patterns used, for example, to identify biological functionality, periodic trends, or malicious activities. Statistical approaches are fast but inexact while combinatorial methods are sound but slow. We introduce ACME, a combinatorial repeated motifs extractor. We study the spatial and temporal locality of motif extraction and devise a cache-aware search space traversal technique. ACME is the only method that scales to gigabyte-long strings, handles large alphabets, and supports interesting motif types with minimal overhead. While ACME is cache-efficient, it is limited by being serial. We devise a lightweight
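
    As a toy illustration of the core operation behind ACME (finding repeated motifs in a string), a naive exact-match counter is enough to show the input/output shape; ACME itself supports far richer motif types and uses a cache-aware combinatorial traversal to reach gigabyte-long strings.

        from collections import Counter

        def repeated_motifs(s, length, min_count=2):
            """Exact-match motifs of a fixed length occurring at least min_count times."""
            counts = Counter(s[i:i + length] for i in range(len(s) - length + 1))
            return {motif: c for motif, c in counts.items() if c >= min_count}

        print(repeated_motifs("ACGTACGTTACGT", 4))   # {'ACGT': 3, 'TACG': 2}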

  12. Size matters: the ethical, legal, and social issues surrounding large-scale genetic biobank initiatives

    Directory of Open Access Journals (Sweden)

    Klaus Lindgaard Hoeyer

    2012-04-01

    Full Text Available During the past ten years the complex ethical, legal and social issues (ELSI) typically surrounding large-scale genetic biobank research initiatives have been intensely debated in academic circles. In many ways genetic epidemiology has undergone a set of changes resembling what in physics has been called a transition into Big Science. This article outlines consequences of this transition and suggests that the change in scale implies challenges to the roles of scientists and public alike. An overview of key issues is presented, and it is argued that biobanks represent not just scientific endeavors with purely epistemic objectives, but also political projects with social implications. As such, they demand clever maneuvering among social interests to succeed.

  13. Complex Quantum Network Manifolds in Dimension d > 2 are Scale-Free

    Science.gov (United States)

    Bianconi, Ginestra; Rahmede, Christoph

    2015-09-01

    In quantum gravity, several approaches have been proposed until now for the quantum description of discrete geometries. These theoretical frameworks include loop quantum gravity, causal dynamical triangulations, causal sets, quantum graphity, and energetic spin networks. Most of these approaches describe discrete spaces as homogeneous network manifolds. Here we define Complex Quantum Network Manifolds (CQNM) describing the evolution of quantum network states, constructed from growing simplicial complexes of dimension d. We show that in d = 2 CQNM are homogeneous networks while for d > 2 they are scale-free, i.e. they are characterized by large inhomogeneities of degrees like most complex networks. From the self-organized evolution of CQNM quantum statistics emerge spontaneously. Here we define the generalized degrees associated with the δ-faces of the d-dimensional CQNMs, and we show that the statistics of these generalized degrees can either follow Fermi-Dirac, Boltzmann or Bose-Einstein distributions depending on the dimension of the δ-faces.

  14. OSTEOCALCIN DYNAMICS IN DYSTROPHIC BONE CYSTS AFTER IMPLANTATION OF POROUS TITANIUM NICKELIDE MATERIALS IN CHILDREN

    OpenAIRE

    I. I. Kuzhelivsky; M. A. Akselrov; L. A. Sitko

    2015-01-01

    The article presents results of bone cyst treatment with porous granular titanium nickelide materials and the dynamics of osteocalcin. A comparative examination against a group treated with standard technology demonstrated the high efficiency of the proposed method. Porous granular titanium nickelide materials possess mechanical strength, optimize regeneration through osteocalcin-mediated osteoinductivity, and make it possible to effectively fill cavities with a complex anatomical structure.

  15. Nanocomposites based on thermoplastic elastomers with functional basis of nano titanium dioxide

    Energy Technology Data Exchange (ETDEWEB)

    Yulovskaya, V. D.; Kuz’micheva, G. M., E-mail: galina-kuzmicheva@list.ru [Federal State Budget Educational Institution of Higher Education “Moscow Technological University” (Russian Federation); Klechkovskaya, V. V. [Russian Academy of Sciences, Shubnikov Institute of Crystallography (Russian Federation); Orekhov, A. S.; Zubavichus, Ya. V. [National Research Centre “Kurchatov Institute” (Russian Federation); Domoroshchina, E. N.; Shegay, A. V. [Federal State Budget Educational Institution of Higher Education “Moscow Technological University” (Russian Federation)

    2016-03-15

    Nanocomposites based on a thermoplastic elastomer (TPE) (low-density polyethylene (LDPE) and 1,2-polybutadiene in a ratio of 60/40) with functional titanium dioxide nanoparticles of different nature, TiO{sub 2}/TPE, have been prepared and investigated by a complex of methods (X-ray diffraction analysis using X-ray and synchrotron radiation beams, scanning electron microscopy, transmission electron microscopy, and X-ray energy-dispersive spectroscopy). The morphology of the composites is found to be somewhat different, depending on the TiO{sub 2} characteristics. It is revealed that nanocomposites with cellular or porous structures containing nano-TiO{sub 2} aggregates with a large specific surface and large sizes of crystallites and nanoparticles exhibit the best deformation‒strength and fatigue properties and stability to the effect of active media under conditions of ozone and vapor‒air aging.

  16. Complexity Analysis of Carbon Market Using the Modified Multi-Scale Entropy

    Directory of Open Access Journals (Sweden)

    Jiuli Yin

    2018-06-01

    Full Text Available Carbon markets provide a market-based way to reduce climate pollution. Subject to general market regulations, the major existing emission trading markets present complex characteristics. This paper analyzes the complexity of the carbon market by using multi-scale entropy. Pilot carbon markets in China are taken as an example. Moving average is adopted to extract the scales due to the short length of the data set. Results show a low level of complexity, suggesting that China’s pilot carbon markets are quite immature and lack market efficiency. However, the complexity varies across time scales. China’s carbon markets (except for the Chongqing pilot) are more complex in the short period than in the long term. Furthermore, the complexity level in most pilot markets increases as the markets develop, showing an improvement in market efficiency. All these results demonstrate that an effective carbon market is required for the full function of emission trading.
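
    A compact sketch of the procedure described above, i.e. moving-average coarse graining followed by an entropy estimate at each scale (sample entropy is used here); the embedding dimension, tolerance and scale range are illustrative defaults, not necessarily the parameters of the study.

        import numpy as np

        def sample_entropy(x, m=2, r_frac=0.2):
            """Sample entropy SampEn(m, r) of a 1-D series (r set as a fraction of the std)."""
            x = np.asarray(x, float)
            r = r_frac * x.std()
            def matches(mm):
                templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
                d = np.abs(templates[:, None, :] - templates[None, :, :]).max(-1)
                return ((d <= r).sum() - len(templates)) / 2      # pairs, excluding self-matches
            B, A = matches(m), matches(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else np.inf

        def modified_mse(x, scales=range(1, 6)):
            """Multi-scale entropy with moving-average coarse graining at each scale."""
            return [sample_entropy(np.convolve(x, np.ones(s) / s, mode="valid")) for s in scales]

        rng = np.random.default_rng(0)
        print(modified_mse(rng.normal(size=500)))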

  17. Oxochloroalkoxide of the Cerium (IV) and Titanium (IV) as oxides precursor

    Directory of Open Access Journals (Sweden)

    Machado Luiz Carlos

    2002-01-01

    Full Text Available The Cerium (IV) and Titanium (IV) oxides mixture (CeO2-3TiO2) was prepared by thermal treatment of the oxochloroisopropoxide of Cerium (IV) and Titanium (IV). The chemical route utilizing the Cerium (III) chloride alcoholic complex and Titanium (IV) isopropoxide is presented. The compound Ce5Ti15Cl16O30(iOPr)4(OH-Et)15 was characterized by elemental analysis, FTIR and TG/DTG. The X-ray diffraction patterns of the oxides resulting from the thermal decomposition of the precursor at 1000 °C for 36 h indicated the formation of cubic cerianite (a = 5.417 Å) and tetragonal rutile (a = 4.592 Å and c = 2.962 Å), with apparent crystallite sizes around 38 and 55 nm, respectively.

  18. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    Science.gov (United States)

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses are composed of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provides a method to replicate complex and diverse synaptic transmission within neuron network simulations.
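
    A minimal discrete-time sketch of a truncated Volterra input-output expansion (only up to second order, with made-up kernels; the paper's synapse model identifies kernels up to third order from a mechanistic glutamatergic model).

        import numpy as np

        def volterra_response(u, h0, h1, h2):
            """y[n] = h0 + sum_i h1[i]*u[n-i] + sum_{i,j} h2[i,j]*u[n-i]*u[n-j]."""
            M = len(h1)
            y = np.full(len(u), h0, dtype=float)
            for n in range(len(u)):
                past = u[max(0, n - M + 1):n + 1][::-1]            # u[n], u[n-1], ..., zero-padded
                past = np.pad(past, (0, M - len(past)))
                y[n] += h1 @ past + past @ h2 @ past
            return y

        rng = np.random.default_rng(0)
        M = 10
        h1 = np.exp(-np.arange(M) / 3.0)                           # toy first-order (linear) kernel
        h2 = 0.05 * np.outer(h1, h1)                               # toy second-order kernel
        spikes = (rng.random(200) < 0.1).astype(float)             # sparse spike-train-like input
        print(volterra_response(spikes, 0.0, h1, h2)[:10])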

  19. Research on development and application of titanium and zirconium alloys

    International Nuclear Information System (INIS)

    Suzuki, Toshiyuki; Sasano, Hisaoki; Uehara, Shigeaki; Nakano, Osamu; Shibata, Michio

    1983-01-01

    It can be said that titanium and zirconium are new metals from the viewpoint of the history of metals, but both have grown into materials supporting modern industries: titanium alloys in aerospace and ocean development, and zirconium alloys in nuclear power applications. However, the properties of both alloys have not yet been clarified. In this study, the synthesis of TiNi and its properties, precipitation hardening type titanium alloys, and the effect of oxygen on the mechanical properties of both alloys were examined. TiNi is a typical intermetallic compound which shows peculiar properties. The method of its synthesis by diffusion was examined, and it was clarified that it is useful as a structural material and also as a functional material. Precipitation hardening type alloys have not been developed among titanium alloys, but in this study, the feasibility of several alloy systems was found. Both titanium and zirconium have a large affinity for oxygen, and the oxygen absorbed in the manufacturing process cannot be reduced. The tensile properties of both alloys were examined over a wide temperature range, and the effect of oxygen was clarified. (Kako, I.)

  20. WAMS Based Intelligent Operation and Control of Modern Power System with large Scale Renewable Energy Penetration

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain

    security limits. Under such a scenario, progressive displacement of conventional generation by wind generation is expected to eventually lead to a complex power system with minimal presence of central power plants. Consequently, the support from conventional power plants is expected to reach its all-time low...... system voltage control responsibility from conventional power plants to wind turbines. With increased wind penetration and displaced conventional central power plants, dynamic voltage security has been identified as one of the challenging issues for large-scale wind integration. To address the dynamic...... security issue, a WAMS based systematic voltage control scheme for large-scale wind integrated power systems has been proposed. Along with the optimal reactive power compensation, the proposed scheme considers voltage support from wind farms (equipped with voltage support functionality) and refurbished

  1. Accelerating Relevance Vector Machine for Large-Scale Data on Spark

    Directory of Open Access Journals (Sweden)

    Liu Fang

    2017-01-01

    Full Text Available Relevance vector machine (RVM) is a machine learning algorithm based on a sparse Bayesian framework, which performs well when running classification and regression tasks on small-scale datasets. However, RVM also has certain drawbacks which restrict its practical applications, such as (1) a slow training process and (2) poor performance when training on large-scale datasets. In order to solve these problems, we first propose Discrete AdaBoost RVM (DAB-RVM), which incorporates ensemble learning in RVM. This method performs well with large-scale low-dimensional datasets. However, as the number of features increases, the training time of DAB-RVM increases as well. To avoid this phenomenon, we utilize the sufficient training samples of large-scale datasets and propose all-features boosting RVM (AFB-RVM), which modifies the way of obtaining weak classifiers. In our experiments we study the differences between various boosting techniques with RVM, demonstrating the performance of the proposed approaches on Spark. As a result of this paper, two proposed approaches on Spark for different types of large-scale datasets are available.

  2. Investigation of photocatalytic activity of titanium dioxide coating deposited on aluminium alloy substrate by plasma technique

    DEFF Research Database (Denmark)

    Daviðsdóttir, Svava; Soyama, Juliano; Dirscherl, Kai

    2011-01-01

    Literature consists of a large number of publications on titanium dioxide coatings for self-cleaning applications, with glass as the main substrate. Only little work is available on TiO2 coating of metallic alloys used for engineering applications. Engineering materials, such as light-weight aluminium and steel...... have widespread technological applications, where a combination with self-cleaning properties has a huge business potential. The results presented in this paper demonstrate superior photocatalytic properties of TiO2-coated aluminium compared to a nano-scale TiO2 coating on a glass substrate. The thickness

  3. Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Kishimoto, Yasuaki; Sugahara, Akihiro; Li, J.Q.

    2008-01-01

    Large-scale simulation using a super-computer, which generally requires long CPU time and produces a large amount of data, has been extensively studied as a third pillar in various advanced science fields, in parallel to theory and experiment. Such a simulation is expected to lead to new scientific discoveries through the elucidation of various complex phenomena, which are hardly identified by conventional theoretical and experimental approaches alone. In order to assist such large simulation studies, in which many collaborators working at geographically different places participate and contribute, we have developed a unique remote collaboration system, referred to as SIMON (simulation monitoring system), which is based on client-server system control introducing an idea of update processing, contrary to that of widely used post-processing. As a key ingredient, we have developed a trigger method, which transmits various requests for update processing from the simulation (client) running on a super-computer to a workstation (server). Namely, the simulation running on a super-computer actively controls the timing of update processing. The server that has received the requests from the ongoing simulation, such as data transfer, data analyses, and visualizations, starts operations according to the requests during the simulation. The server makes the latest results available to web browsers, so that the collaborators can monitor the results at any place and time in the world. By applying the system to a specific simulation project of laser-matter interaction, we have confirmed that the system works well and plays an important role as a collaboration platform on which many collaborators work with one another.

  4. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
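
    For orientation, the generic sketch below shows the simplest way of stabilizing a large covariance estimate, a fixed-weight shrinkage of the sample covariance toward a diagonal target; the paper's Bayesian hierarchical model instead introduces dependency between covariance parameters, so this is only an illustration of the regularization idea it improves upon.

        import numpy as np

        def shrunk_covariance(X, alpha=0.3):
            """Convex combination of the sample covariance and its diagonal (a structured target)."""
            S = np.cov(X, rowvar=False)
            return (1 - alpha) * S + alpha * np.diag(np.diag(S))

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 200))                 # few samples, many variables (p >> n)
        S, C = np.cov(X, rowvar=False), shrunk_covariance(X)
        print("condition number:", np.linalg.cond(S), "->", np.linalg.cond(C))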

  5. Development of a database system for operational use in the selection of titanium alloys

    Science.gov (United States)

    Han, Yuan-Fei; Zeng, Wei-Dong; Sun, Yu; Zhao, Yong-Qing

    2011-08-01

    The selection of titanium alloys has become a complex decision-making task due to the growing number of titanium alloys being created and utilized, each having its own characteristics, advantages, and limitations. In choosing the most appropriate titanium alloys, it is essential to offer a reasonable and intelligent service to technical engineers. One possible solution to this problem is to develop a database system (DS) to help retrieve rational proposals from different databases and information sources and analyze them to provide useful and explicit information. For this purpose, a design strategy based on fuzzy set theory is proposed, and a distributed database system is developed. Through ranking of the candidate titanium alloys, the most suitable material is determined. It is found that the selection results are in good agreement with the practical situation.
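
    A toy sketch of fuzzy-set-based ranking for alloy selection; the membership function, criteria, weights and property values below are invented placeholders for illustration and are not data from the described database system.

        def membership(value, lo, hi):
            """Linear fuzzy membership: 0 below lo, 1 above hi, linear in between."""
            return max(0.0, min(1.0, (value - lo) / (hi - lo)))

        # Hypothetical requirements: (property, lower bound, upper bound, weight).
        criteria = [("strength_MPa", 800, 1200, 0.5),
                    ("corrosion_resistance", 0.5, 1.0, 0.3),
                    ("cost_score", 0.0, 1.0, 0.2)]

        # Invented property values purely for illustration.
        alloys = {"Ti-6Al-4V":  {"strength_MPa": 950,  "corrosion_resistance": 0.80, "cost_score": 0.6},
                  "CP Ti Gr.2": {"strength_MPa": 485,  "corrosion_resistance": 0.95, "cost_score": 0.9},
                  "Ti-5553":    {"strength_MPa": 1200, "corrosion_resistance": 0.70, "cost_score": 0.3}}

        def score(props):
            return sum(w * membership(props[name], lo, hi) for name, lo, hi, w in criteria)

        for name, s in sorted(((a, score(p)) for a, p in alloys.items()), key=lambda t: -t[1]):
            print(f"{name}: {s:.2f}")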

  6. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  7. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  8. Decentralised stabilising controllers for a class of large-scale linear ...

    Indian Academy of Sciences (India)

    subsystems resulting from a new aggregation-decomposition technique. The method has been illustrated through a numerical example of a large-scale linear system consisting of three subsystems each of the fourth order. Keywords. Decentralised stabilisation; large-scale linear systems; optimal feedback control; algebraic ...

  9. Plasmonic nanoparticle lithography: Fast resist-free laser technique for large-scale sub-50 nm hole array fabrication

    Science.gov (United States)

    Pan, Zhenying; Yu, Ye Feng; Valuckas, Vytautas; Yap, Sherry L. K.; Vienne, Guillaume G.; Kuznetsov, Arseniy I.

    2018-05-01

    Cheap large-scale fabrication of ordered nanostructures is important for multiple applications in photonics and biomedicine including optical filters, solar cells, plasmonic biosensors, and DNA sequencing. Existing methods are either expensive or have strict limitations on the feature size and fabrication complexity. Here, we present a laser-based technique, plasmonic nanoparticle lithography, which is capable of rapid fabrication of large-scale arrays of sub-50 nm holes on various substrates. It is based on near-field enhancement and melting induced under ordered arrays of plasmonic nanoparticles, which are brought into contact or in close proximity to a desired material and acting as optical near-field lenses. The nanoparticles are arranged in ordered patterns on a flexible substrate and can be attached and removed from the patterned sample surface. At optimized laser fluence, the nanohole patterning process does not create any observable changes to the nanoparticles and they have been applied multiple times as reusable near-field masks. This resist-free nanolithography technique provides a simple and cheap solution for large-scale nanofabrication.

  10. Plasmonic Titanium Nitride Nanostructures via Nitridation of Nanopatterned Titanium Dioxide

    DEFF Research Database (Denmark)

    Guler, Urcan; Zemlyanov, Dmitry; Kim, Jongbum

    2017-01-01

    Plasmonic titanium nitride nanostructures are obtained via nitridation of titanium dioxide. Nanoparticles acquired a cubic shape with sharper edges following the rock-salt crystalline structure of TiN. Lattice constant of the resulting TiN nanoparticles matched well with the tabulated data. Energy...

  11. Ligand-tailored single-site silica supported titanium catalysts: Synthesis, characterization and towards cyanosilylation reaction

    International Nuclear Information System (INIS)

    Xu, Wei; Li, Yani; Yu, Bo; Yang, Jindou; Zhang, Ying; Chen, Xi; Zhang, Guofang; Gao, Ziwei

    2015-01-01

    A successive anchoring of Ti(NMe2)4, cyclopentadiene and an O-donor ligand, 1-hydroxyethylbenzene (PEA), 1,1′-bi-2-naphthol (Binol) or 2,3-dihydroxybutanedioic acid diethyl ester (Tartrate), on silica was conducted by SOMC strategy under moderate conditions. The silica, monitored by in-situ Fourier transform infrared spectroscopy (in-situ FT-IR), was pretreated at different temperatures (200, 500 and 800 °C). The ligand-tailored silica-supported titanium complexes were characterized in detail by in-situ FT-IR, 13C CP MAS-NMR, X-ray photoelectron spectroscopy (XPS), X-ray absorption near edge structure (XANES) and elemental analysis, verifying that the surface titanium species are single sited. The catalytic activity of the ligand-tailored single-site silica-supported titanium complexes was evaluated by a cyanosilylation of benzaldehyde. The results showed that the catalytic activity depends strongly on the dehydroxylation temperatures of silica and the configuration of the ligands. - Graphical abstract: The ligand-tailored silica-supported “single site” titanium complexes were synthesized by SOMC strategy and fully characterized. Their catalytic activity was evaluated by benzaldehyde silylcyanation. - Highlights: • Single-site silica supported Ti active species was prepared by SOMC technique. • O-donor ligand tailored Ti surface species was synthesized. • The surface species was characterized by XPS, 13C CP-MAS NMR, XANES etc. • Catalytic activity of the Ti active species in silylcyanation reaction was evaluated.

  12. Ligand-tailored single-site silica supported titanium catalysts: Synthesis, characterization and towards cyanosilylation reaction

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Wei; Li, Yani; Yu, Bo; Yang, Jindou; Zhang, Ying; Chen, Xi; Zhang, Guofang, E-mail: gfzhang@snnu.edu.cn; Gao, Ziwei, E-mail: zwgao@snnu.edu.cn

    2015-01-15

    A successive anchoring of Ti(NMe2)4, cyclopentadiene and an O-donor ligand, 1-hydroxyethylbenzene (PEA), 1,1′-bi-2-naphthol (Binol) or 2,3-dihydroxybutanedioic acid diethyl ester (Tartrate), on silica was conducted by SOMC strategy under moderate conditions. The silica, monitored by in-situ Fourier transform infrared spectroscopy (in-situ FT-IR), was pretreated at different temperatures (200, 500 and 800 °C). The ligand-tailored silica-supported titanium complexes were characterized in detail by in-situ FT-IR, 13C CP MAS-NMR, X-ray photoelectron spectroscopy (XPS), X-ray absorption near edge structure (XANES) and elemental analysis, verifying that the surface titanium species are single sited. The catalytic activity of the ligand-tailored single-site silica-supported titanium complexes was evaluated by a cyanosilylation of benzaldehyde. The results showed that the catalytic activity depends strongly on the dehydroxylation temperatures of silica and the configuration of the ligands. - Graphical abstract: The ligand-tailored silica-supported “single site” titanium complexes were synthesized by SOMC strategy and fully characterized. Their catalytic activity was evaluated by benzaldehyde silylcyanation. - Highlights: • Single-site silica supported Ti active species was prepared by SOMC technique. • O-donor ligand tailored Ti surface species was synthesized. • The surface species was characterized by XPS, 13C CP-MAS NMR, XANES etc. • Catalytic activity of the Ti active species in silylcyanation reaction was evaluated.

  13. Large scale analysis of signal reachability.

    Science.gov (United States)

    Todor, Andrei; Gabr, Haitham; Dobra, Alin; Kahveci, Tamer

    2014-06-15

    Major disorders, such as leukemia, have been shown to alter the transcription of genes. Understanding how gene regulation is affected by such aberrations is of utmost importance. One promising strategy toward this objective is to compute whether signals can reach the transcription factors through the transcription regulatory network (TRN). Due to the uncertainty of the regulatory interactions, this is a #P-complete problem and thus solving it for very large TRNs remains a challenge. We develop a novel and scalable method to compute the probability that a signal originating at any given set of source genes can arrive at any given set of target genes (i.e., transcription factors) when the topology of the underlying signaling network is uncertain. Our method tackles this problem for large networks while providing a provably accurate result. Our method follows a divide-and-conquer strategy. We break down the given network into a sequence of non-overlapping subnetworks such that reachability can be computed autonomously and sequentially on each subnetwork. We represent each interaction using a small polynomial. The product of these polynomials expresses the different scenarios in which a signal can or cannot reach the target genes from the source genes. We introduce polynomial collapsing operators for each subnetwork. These operators reduce the size of the resulting polynomial and thus the computational complexity dramatically. We show that our method scales to entire human regulatory networks in only seconds, while the existing methods fail beyond a few tens of genes and interactions. We demonstrate that our method can successfully characterize key reachability characteristics of the entire transcription regulatory networks of patients affected by eight different subtypes of leukemia, as well as those from healthy control samples. All the datasets and code used in this article are available at bioinformatics.cise.ufl.edu/PReach/scalable.htm. © The Author 2014
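
    The quantity being computed can be illustrated with a brute-force enumeration over edge subsets (each interaction present independently with its own probability); this is exponential in the number of interactions, which is exactly why the paper's polynomial representation and collapsing operators are needed. The gene names and probabilities below are made up.

        from itertools import product

        def reach_probability(edges, sources, targets):
            """edges: {(u, v): probability}. P(some target is reachable from some source)."""
            edge_list = list(edges)
            total = 0.0
            for present in product([False, True], repeat=len(edge_list)):
                p, adj = 1.0, {}
                for e, keep in zip(edge_list, present):
                    p *= edges[e] if keep else 1 - edges[e]
                    if keep:
                        adj.setdefault(e[0], []).append(e[1])
                stack, seen = list(sources), set(sources)     # DFS over the sampled sub-network
                while stack:
                    u = stack.pop()
                    for v in adj.get(u, []):
                        if v not in seen:
                            seen.add(v)
                            stack.append(v)
                if seen & set(targets):
                    total += p
            return total

        edges = {("s", "a"): 0.9, ("a", "tf"): 0.5, ("s", "b"): 0.4, ("b", "tf"): 0.7}
        print(reach_probability(edges, {"s"}, {"tf"}))        # 1 - (1 - 0.45)*(1 - 0.28) = 0.604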

  14. Comparative study about hydrogen sorption in sponge and powder titanium

    International Nuclear Information System (INIS)

    Vasut, Felicia; Preda, Anisoara; Zamfirache, Marius; Ducu, Catalin; Malinovschi, Viorel

    2005-01-01

    Currently, hydrogen may be stored as a compressed gas or a cryogenic liquid. Neither method appears to be practical for many applications in which hydrogen use would otherwise be attractive. For example, gaseous storage of stationary fuel is not feasible because of the large volume or weight of the storage vessels. Liquid hydrogen could be used extensively, but the liquefaction process is relatively expensive. Hydrogen can be stored for a long term with a high separation factor as a solid metal hydride. Using hydride-forming metals and intermetallic compounds for, for example, recovery, purification and storage of heavy isotopes in tritium-containing systems can solve many problems arising in the nuclear-fuel cycle. The paper presents a comparative study of hydrogen sorption on two titanium structures: powder and sponge. The characterization of the two structures by X-ray diffraction, before and after the sorption process, is also presented. From our results, one can conclude that the sorption method is efficient for both samples. Kinetic curves indicate that the sorption rate for titanium powder is lower than for sponge titanium. This is the effect of the reaction surface, which is larger for powder titanium. The sorption capacity for hydrogen is lower in powder titanium under identical experimental conditions. The difference between storage capacities could be explained by the activation temperature, which was lower for titanium powder than for sponge. (authors)

  15. Large Scale Survey Data in Career Development Research

    Science.gov (United States)

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  16. SIMON: Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Sugawara, Akihiro; Kishimoto, Yasuaki

    2003-01-01

    The development of the SIMON (SImulation MONitoring) system is described. SIMON aims to investigate many physical phenomena of tokamak-type nuclear fusion plasma by simulation, and to exchange information and carry out joint research with scientists around the world using the internet. The characteristics of SIMON are as follows: 1) reduced simulation load through a trigger sending method, 2) visualization of simulation results and a hierarchical structure of analysis, 3) a reduced number of licenses by using the command line when software is used, 4) improved support for network use of simulation data output by use of HTML (Hyper Text Markup Language), 5) avoidance of complex built-in work in the client part, and 6) small-sized and portable software. The visualization method for large-scale simulation, the remote collaboration system based on HTML, the trigger sending method, the hierarchical analytical method, the introduction into a three-dimensional electromagnetic transport code, and the technologies of the SIMON system are explained. (S.Y.)

  17. Large-Scale Quantum Many-Body Perturbation on Spin and Charge Separation in the Excited States of the Synthesized Donor-Acceptor Hybrid PBI-Macrocycle Complex.

    Science.gov (United States)

    Ziaei, Vafa; Bredow, Thomas

    2017-03-17

    The reliable calculation of the excited states of charge-transfer (CT) compounds poses a major challenge to the ab initio community because the frequently employed method, time-dependent density functional theory (TD-DFT), massively relies on the underlying density functional, resulting in heavily Hartree-Fock (HF) exchange-dependent excited-state energies. By applying the highly sophisticated many-body perturbation approach, we address the encountered unreliabilities and inconsistencies of not optimally tuned (standard) TD-DFT regarding photo-excited CT phenomena, and present results concerning accurate vertical transition energies and the correct energetic ordering of the CT and the first visible singlet state of a recently synthesized thermodynamically stable large hybrid perylene bisimide-macrocycle complex. This is a large-scale application of the quantum many-body perturbation approach to a chemically relevant CT system, demonstrating the system-size independence of the quality of the many-body-based excitation energies. Furthermore, an optimal tuning of the ωB97X hybrid functional can well reproduce the many-body results, making TD-DFT a suitable choice but at the expense of introducing a range-separation parameter, which needs to be optimally tuned. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Similitude and scaling of large structural elements: Case study

    Directory of Open Access Journals (Sweden)

    M. Shehadeh

    2015-06-01

    Full Text Available Scaled-down models are widely used for experimental investigations of large structures due to the limited capacities of testing facilities along with the expense of experimentation. The modeling accuracy depends upon the model material properties, fabrication accuracy and loading techniques. In the present work the Buckingham π theorem is used to develop the relations (i.e. geometry, loading and properties) between the model and a large structural element such as those present in huge existing petroleum oil drilling rigs. The model is to be designed, loaded and treated according to a set of similitude requirements that relate the model to the large structural element. Three independent scale factors, which represent the three fundamental dimensions, namely mass, length and time, need to be selected for designing the scaled-down model. Numerical prediction of the stress distribution within the model and its elastic deformation under steady loading is to be made. The results are compared with those obtained from numerical computations on the full-scale structure. The effect of scaled-down model size and material on the accuracy of the modeling technique is thoroughly examined.
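
    A small sketch of the similitude bookkeeping implied above: once independent scale factors for the three fundamental dimensions (mass, length, time) are chosen, the scale factor of any derived quantity follows from its dimensional exponents. The numerical factors below are made-up examples, not values from the study.

        # Hypothetical model/prototype scale factors for the three fundamental dimensions.
        S_M, S_L, S_T = 0.001, 0.1, 0.316      # mass, length, time

        def derived_scale(mass_exp, length_exp, time_exp):
            """Scale factor of a quantity with dimensions M^a L^b T^c."""
            return (S_M ** mass_exp) * (S_L ** length_exp) * (S_T ** time_exp)

        for name, exps in {"velocity (L T^-1)":      (0, 1, -1),
                           "force (M L T^-2)":       (1, 1, -2),
                           "stress (M L^-1 T^-2)":   (1, -1, -2),
                           "density (M L^-3)":       (1, -3, 0)}.items():
            print(f"{name}: model/prototype = {derived_scale(*exps):.4g}")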

  19. Chromatic Titanium Photoanode for Dye-Sensitized Solar Cells under Rear Illumination.

    Science.gov (United States)

    Huang, Chih-Hsiang; Chen, Yu-Wen; Chen, Chih-Ming

    2018-01-24

    Titanium (Ti) has high potential in many practical applications such as biomedicine, architecture, aviation, and energy. In this study, we demonstrate an innovative application of dye-sensitized solar cells (DSSCs) based on Ti photoanodes that can be integrated into the roof engineering of large-scale architecture. A chromatic Ti foil produced by anodic oxidation (coloring) technology is an attractive roof material for large-scale architecture, showing a colorful appearance due to the formation of a reflective TiO2 thin layer on both surfaces of the Ti. The DSSC is fabricated on the backside of the chromatic Ti foil using the Ti foil as the working electrode, and this roof-DSSC hybrid configuration can be designed as an energy harvesting device for indoor artificial lighting. Our results show that the facet-textured TiO2 layer on the chromatic Ti foil not only improves the optical reflectance for better light utilization but also effectively suppresses charge recombination for better electron collection. The power conversion efficiency of the roof-DSSC hybrid system is improved by 30-40%, with the main contribution from an improvement in short-circuit current density, under standard 1 sun and dim-light (600-1000 lx) illumination.

  20. Large-scale preparation of hollow graphitic carbon nanospheres

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Jun; Li, Fu [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); Bai, Yu-Jun, E-mail: byj97@126.com [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); State Key laboratory of Crystal Materials, Shandong University, Jinan 250100 (China); Han, Fu-Dong; Qi, Yong-Xin; Lun, Ning [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); Lu, Xi-Feng [Lunan Institute of Coal Chemical Engineering, Jining 272000 (China)

    2013-01-15

    Hollow graphitic carbon nanospheres (HGCNSs) were synthesized on large scale by a simple reaction between glucose and Mg at 550 °C in an autoclave. Characterization by X-ray diffraction, Raman spectroscopy and transmission electron microscopy demonstrates the formation of HGCNSs with an average diameter of 10 nm or so and a wall thickness of a few graphenes. The HGCNSs exhibit a reversible capacity of 391 mAh g⁻¹ after 60 cycles when used as anode materials for Li-ion batteries. -- Graphical abstract: Hollow graphitic carbon nanospheres could be prepared on large scale by the simple reaction between glucose and Mg at 550 °C, which exhibit superior electrochemical performance to graphite. Highlights: • Hollow graphitic carbon nanospheres (HGCNSs) were prepared on large scale at 550 °C. • The preparation is simple, effective and eco-friendly. • The in situ yielded MgO nanocrystals promote the graphitization. • The HGCNSs exhibit superior electrochemical performance to graphite.

  1. Large-scale impact cratering on the terrestrial planets

    International Nuclear Information System (INIS)

    Grieve, R.A.F.

    1982-01-01

    The crater densities on the earth and moon form the basis for a standard flux-time curve that can be used in dating unsampled planetary surfaces and constraining the temporal history of endogenic geologic processes. Abundant evidence is seen not only that impact cratering was an important surface process in planetary history but also that large impact events produced effects that were crucial in scale. By way of example, it is noted that the formation of multiring basins on the early moon was as important in defining the planetary tectonic framework as plate tectonics is on the earth. Evidence from several planets suggests that the effects of very-large-scale impacts go beyond the simple formation of an impact structure and serve to localize increased endogenic activity over an extended period of geologic time. Even though such events no longer occur with the frequency and magnitude of early solar system history, large-scale impact events continue to affect the local geology of the planets. 92 references

  2. Optical interconnect for large-scale systems

    Science.gov (United States)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.
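
    As a rough illustration of such scaling assessments (the topologies below are generic examples and are not necessarily those reviewed in the paper), the diameter and link count of a few standard interconnect topologies can be tabulated as functions of the node count N:

    # Rough scaling of network diameter and link count with node count N.
    # The topologies here are generic illustrations, not those of the paper.
    import math

    def ring(n):
        return {"diameter": n // 2, "links": n}

    def torus2d(n):
        s = int(round(math.sqrt(n)))          # side of an s-by-s torus
        return {"diameter": s, "links": 2 * n}

    def hypercube(n):
        d = int(math.log2(n))                 # dimension of the hypercube
        return {"diameter": d, "links": n * d // 2}

    if __name__ == "__main__":
        for n in (64, 1024, 16384):
            print(n, "ring:", ring(n), "torus:", torus2d(n), "hypercube:", hypercube(n))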

  3. Improved multifilamentary Nb3Sn conductors produced by the titanium-bronze process

    International Nuclear Information System (INIS)

    Tachikawa, K.; Itoh, K.; Kamata, K.; Moriai, H.; Tada, N.

    1985-01-01

    The effects of a titanium addition to the bronze matrix of superconducting Nb3Sn wires have been investigated. The titanium addition to the matrix remarkably increases the Nb3Sn growth rate and the high-field critical current density of the wire. An overall critical current density of 3.8 × 10⁴ A/cm² at 15 T has been obtained for the multifilamentary Nb/Cu-7.5 at.% Sn-0.4 at.% Ti wire with 4.7 μm-diameter 31 × 331 cores. The anisotropy in the critical current with respect to the field direction becomes larger with increasing aspect ratio of the rectangular-shaped multifilamentary wires. A 9.5 mm wide and 1.8 mm thick Nb/Cu-7.5Sn-0.4Ti conductor with 5 μm-diameter 349 × 361 = 125,989 cores has been successfully fabricated on an industrial scale. This conductor carries a superconducting current of over 1300 A at 16.5 T. The newly developed Ti-bronze Nb3Sn conductor makes it feasible to generate a field of about 15 T in a large-diameter bore. (orig.)

  4. Mechanical verification of soft-tissue attachment on bioactive glasses and titanium implants.

    Science.gov (United States)

    Zhao, Desheng; Moritz, Niko; Vedel, Erik; Hupa, Leena; Aro, Hannu T

    2008-07-01

    Soft-tissue attachment is a desired feature of many clinical biomaterials. The aim of the current study was to design a suitable experimental method for tensile testing of implant incorporation with soft-tissues. Conical implants were made of three compositions of bioactive glass (SiO2-P2O5-B2O3-Na2O-K2O-CaO-MgO) or titanium fiber mesh (porosity 84.7%). The implants were surgically inserted into the dorsal subcutaneous soft-tissue or back muscles in the rat. Soft-tissue attachment was evaluated by pull-out testing using a custom-made jig 8 weeks after implantation. Titanium fiber mesh implants had developed a relatively high pull-out force in subcutaneous tissue (12.33 ± 5.29 N, mean ± SD) and also measurable attachment with muscle tissue (2.46 ± 1.33 N). The bioactive glass implants failed to show mechanically relevant soft-tissue bonding. The experimental set-up of mechanical testing seems to be feasible for verification studies of soft-tissue attachment. The inexpensive small animal model is beneficial for large-scale in vivo screening of new biomaterials.

  5. On the characteristics and application of thin wall welded titanium tubes for heat transfer

    International Nuclear Information System (INIS)

    Nishimura, Takashi; Miyamoto, Yoshiyuki

    1985-01-01

    Because of their excellent corrosion resistance, thin-wall welded titanium tubes have come into wide use, in place of conventional copper alloy tubes, as the heat transfer tubes of condensers and seawater desalination plants using seawater. In nuclear power plants in particular, all-titanium condensers using thin-wall welded titanium tubes and titanium tube plates have been adopted in almost all plants under construction or planned. In this report, the various characteristics of thin-wall welded titanium tubes required for use as heat transfer tubes, such as corrosion resistance, heat transfer characteristics, fatigue strength and expanding characteristics, are outlined, and their state of use is described. Initially, relatively thick seamless titanium tubes were used in the chemical industry, but advances in mass production techniques have since made high-quality welded titanium tubes of less than 0.7 mm wall thickness available at low cost. In 1969, titanium tubes were used for the first time in Japan for the air cooler in the condenser of Akita Power Station, Tohoku Electric Power Co., Inc. Titanium is characterized by low specific gravity, a small linear expansion coefficient and a low Young's modulus. (Kako, I.)

  6. Ligand-tailored single-site silica supported titanium catalysts: Synthesis, characterization and towards cyanosilylation reaction

    Science.gov (United States)

    Xu, Wei; Li, Yani; Yu, Bo; Yang, Jindou; Zhang, Ying; Chen, Xi; Zhang, Guofang; Gao, Ziwei

    2015-01-01

    A successive anchoring of Ti(NMe2)4, cyclopentadiene and an O-donor ligand, 1-hydroxyethylbenzene (PEA), 1,1′-bi-2-naphthol (Binol) or 2,3-dihydroxybutanedioic acid diethyl ester (Tartrate), onto silica was carried out by a surface organometallic chemistry (SOMC) strategy under mild conditions. The silica, monitored by in-situ Fourier transform infrared spectroscopy (in-situ FT-IR), was pretreated at different temperatures (200, 500 and 800 °C). The ligand-tailored silica-supported titanium complexes were characterized in detail by in-situ FT-IR, 13C CP MAS-NMR, X-ray photoelectron spectroscopy (XPS), X-ray absorption near edge structure (XANES) and elemental analysis, verifying that the surface titanium species are single-sited. The catalytic activity of the ligand-tailored single-site silica-supported titanium complexes was evaluated in the cyanosilylation of benzaldehyde. The results showed that the catalytic activity depends strongly on the dehydroxylation temperature of the silica and the configuration of the ligands.

  7. Artefacts in multimodal imaging of titanium, zirconium and binary titanium-zirconium alloy dental implants: an in vitro study.

    Science.gov (United States)

    Smeets, Ralf; Schöllchen, Maximilian; Gauer, Tobias; Aarabi, Ghazal; Assaf, Alexandre T; Rendenbach, Carsten; Beck-Broichsitter, Benedicta; Semmusch, Jan; Sedlacik, Jan; Heiland, Max; Fiehler, Jens; Siemonsen, Susanne

    2017-02-01

    To analyze and evaluate imaging artefacts induced by zirconium, titanium and titanium-zirconium alloy dental implants. Zirconium, titanium and titanium-zirconium alloy implants were embedded in gelatin and MRI, CT and CBCT were performed. Standard protocols were used for each modality. For MRI, line-distance profiles were plotted to quantify the accuracy of size determination. For CT and CBCT, six shells surrounding the implant were defined every 0.5 cm from the implant surface and histogram parameters were determined for each shell. While titanium and titanium-zirconium alloy induced extensive signal voids in MRI owing to strong susceptibility, zirconium implants were clearly definable with only minor distortion artefacts. For titanium and titanium-zirconium alloy, the MR signal was attenuated up to 14.1 mm from the implant. In CT, titanium and titanium-zirconium alloy resulted in less streak artefacts in comparison with zirconium. In CBCT, titanium-zirconium alloy induced more severe artefacts than zirconium and titanium. MRI allows for an excellent image contrast and limited artefacts in patients with zirconium implants. CT and CBCT examinations are less affected by artefacts from titanium and titanium-zirconium alloy implants compared with MRI. The knowledge about differences of artefacts through different implant materials and image modalities might help support clinical decisions for the choice of implant material or imaging device in the clinical setting.

  8. Large Rotor Test Apparatus

    Data.gov (United States)

    Federal Laboratory Consortium — This test apparatus, when combined with the National Full-Scale Aerodynamics Complex, produces a thorough, full-scale test capability. The Large Rotor Test Apparatus...

  9. [A large-scale accident in Alpine terrain].

    Science.gov (United States)

    Wildner, M; Paal, P

    2015-02-01

    Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication to the control room, which is required to dispatch the necessary additional support. Only with a clear "concept", to which all have to adhere, can the subsequent chaos phase be limited. In this respect, time pressure, compounded by adverse weather conditions or darkness, is enormous. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, treatment and procedure algorithms have proven successful. For evacuation of casualties, helicopter transport should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute the patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to optimization of rescue measures, regular training and exercises are advisable, as is the analysis of previous large-scale Alpine accidents.

  10. Alkaline corrosion properties of laser-clad aluminum/titanium coatings

    DEFF Research Database (Denmark)

    Aggerbeck, Martin; Herbreteau, Alexis; Rombouts, Marleen

    2015-01-01

    Purpose - The purpose of this paper is to study the use of titanium as a protecting element for aluminum in alkaline conditions. Design/methodology/approach - Aluminum coatings containing up to 20 weight per cent Ti6Al4V were produced using laser cladding and were investigated using light optical microscopy, scanning electron microscopy with energy-dispersive X-ray spectroscopy and X-ray diffraction, together with alkaline exposure tests and potentiodynamic measurements at pH 13.5. Findings - Cladding resulted in a heterogeneous solidification microstructure containing an aluminum matrix with supersaturated titanium (about 1 weight per cent), Al3Ti intermetallics and large, partially undissolved Ti6Al4V particles. Heat treatment lowered the titanium concentration in the aluminum matrix, changed the shape of the Al3Ti precipitates and increased the degree of dissolution of the Ti6Al4V particles. Corrosion...

  11. Titanium biomaterials with complex surfaces induced aberrant peripheral circadian rhythms in bone marrow mesenchymal stromal cells.

    Science.gov (United States)

    Hassan, Nathaniel; McCarville, Kirstin; Morinaga, Kenzo; Mengatto, Cristiane M; Langfelder, Peter; Hokugo, Akishige; Tahara, Yu; Colwell, Christopher S; Nishimura, Ichiro

    2017-01-01

    Circadian rhythms maintain a high level of homeostasis through internal feed-forward and -backward regulation by core molecules. In this study, we report the highly unusual peripheral circadian rhythm of bone marrow mesenchymal stromal cells (BMSCs) induced by titanium-based biomaterials with complex surface modifications (Ti biomaterial) commonly used for dental and orthopedic implants. When cultured on Ti biomaterials, human BMSCs suppressed circadian PER1 expression patterns, while NPAS2 was uniquely upregulated. The Ti biomaterials, which reduced Per1 expression and upregulated Npas2, were further examined with BMSCs harvested from Per1::luc transgenic rats. Next, we addressed the regulatory relationship between Per1 and Npas2 using BMSCs from Npas2 knockout mice. The Npas2 knockout mutation did not rescue the Ti biomaterial-induced Per1 suppression and did not affect Per2, Per3, Bmal1 and Clock expression, suggesting that the Ti biomaterial-induced Npas2 overexpression was likely an independent phenomenon. Previously, vitamin D deficiency was reported to interfere with Ti biomaterial osseointegration. The present study demonstrated that vitamin D supplementation significantly increased Per1::luc expression in BMSCs, though the presence of Ti biomaterials only moderately affected the suppressed Per1::luc expression. Available in vivo microarray data from femurs exposed to Ti biomaterials in vitamin D-deficient rats were evaluated by weighted gene co-expression network analysis. A large co-expression network containing Npas2, Bmal1, and Vdr was observed to form with the Ti biomaterials, which was disintegrated by vitamin D deficiency. Thus, the aberrant BMSC peripheral circadian rhythm may be essential for the integration of Ti biomaterials into bone.

  12. Continuum Level Density in Complex Scaling Method

    International Nuclear Information System (INIS)

    Suzuki, R.; Myo, T.; Kato, K.

    2005-01-01

    A new calculational method of continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique.
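
    For context, a generic textbook definition of the continuum level density (not quoted from the abstract) expresses it through the full and free Green's functions, which for a single channel reduces to the energy derivative of the scattering phase shift,

    \[ \Delta(E) \;=\; -\frac{1}{\pi}\,\mathrm{Im}\,\mathrm{Tr}\big[ G(E) - G_{0}(E) \big] \;=\; \frac{1}{\pi}\,\frac{d\delta(E)}{dE}; \]

    in the CSM this trace is evaluated with the discretized complex-scaled continuum states, which is what removes the need for a smoothing technique.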

  13. Particle physics and polyhedra proximity calculation for hazard simulations in large-scale industrial plants

    Science.gov (United States)

    Plebe, Alice; Grasso, Giorgio

    2016-12-01

    This paper describes a system developed for the simulation of flames inside an open-source 3D computer graphics package, Blender, with the aim of analyzing hazard scenarios in large-scale industrial plants in virtual reality. The advantages of Blender are its ability to render the very complex structure of large industrial plants at high resolution and its embedded physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using the polyhedron separation distance, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results on a real oil and gas refinery are presented.
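
    As a naive baseline for this kind of proximity query (a generic sketch, not the Voronoi-based feature strategy of the paper), the separation distance between two convex polyhedra given by their vertex sets can be posed as a small quadratic problem over convex-combination weights:

    # Baseline sketch: minimum distance between the convex hulls of two vertex
    # sets, solved as a constrained optimization over convex-combination weights.
    # This is a generic illustration, not the paper's Voronoi-feature algorithm.
    import numpy as np
    from scipy.optimize import minimize

    def separation_distance(verts_a, verts_b):
        """Minimum distance between the convex hulls of two vertex sets."""
        A = np.asarray(verts_a, dtype=float)   # shape (na, 3)
        B = np.asarray(verts_b, dtype=float)   # shape (nb, 3)
        na, nb = len(A), len(B)

        def objective(w):
            lam, mu = w[:na], w[na:]
            diff = lam @ A - mu @ B            # point in hull(A) minus point in hull(B)
            return float(diff @ diff)

        constraints = [
            {"type": "eq", "fun": lambda w: np.sum(w[:na]) - 1.0},  # weights on A sum to 1
            {"type": "eq", "fun": lambda w: np.sum(w[na:]) - 1.0},  # weights on B sum to 1
        ]
        bounds = [(0.0, 1.0)] * (na + nb)
        w0 = np.concatenate([np.full(na, 1.0 / na), np.full(nb, 1.0 / nb)])
        res = minimize(objective, w0, bounds=bounds, constraints=constraints,
                       method="SLSQP")
        return float(np.sqrt(res.fun))

    if __name__ == "__main__":
        cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
        shifted = [(x + 3, y, z) for (x, y, z) in cube]
        print(separation_distance(cube, shifted))   # expected: about 2.0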

  14. Hierarchical Cantor set in the large scale structure with torus geometry

    Energy Technology Data Exchange (ETDEWEB)

    Murdzek, R. [Physics Department, ' Al. I. Cuza' University, Blvd. Carol I, Nr. 11, Iassy 700506 (Romania)], E-mail: rmurdzek@yahoo.com

    2008-12-15

    The formation of large-scale structures is considered within a model with a string on a toroidal space-time. First, the space-time geometry is presented; in this geometry, the Universe is represented by a string describing a torus surface. The large-scale structure of the Universe is then derived from the string oscillations. The results are in agreement with the cellular structure of the large-scale distribution and with the theory of a Cantorian space-time.

  15. OSTEOCALCIN DYNAMICS IN DYSTROPHIC BONE CYSTS TREATED BY IMPLANTATION OF POROUS TITANIUM NICKELIDE MATERIALS IN CHILDREN

    Directory of Open Access Journals (Sweden)

    I. I. Kuzhelivsky

    2015-01-01

    Full Text Available The article presents the results of bone cyst treatment with porous granular titanium nickelide materials and the dynamics of osteocalcin. A comparison with a group treated by the standard technology demonstrated the high efficiency of the proposed method. Porous granular titanium nickelide materials possess mechanical strength, optimize regeneration through osteocalcin-mediated osteoinductivity, and allow effective filling of cavities with a complex anatomical structure.

  16. Large-scale Motion of Solar Filaments

    Indian Academy of Sciences (India)

    tribpo

    Large-scale Motion of Solar Filaments. Pavel Ambrož, Astronomical Institute of the Acad. Sci. of the Czech Republic, CZ-25165 Ondrejov, The Czech Republic. e-mail: pambroz@asu.cas.cz. Alfred Schroll, Kanzelhöhe Solar Observatory of the University of Graz, A-9521 Treffen, Austria. e-mail: schroll@solobskh.ac.at.

  17. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
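
    As a generic illustration of how such sensitivity derivatives arise in linear static analysis (a sketch of the standard direct-differentiation approach, not the specific procedure of the paper), for K(p)u = f the displacement sensitivity follows from du/dp = K⁻¹(df/dp − (dK/dp)u):

    # Generic illustration of sensitivity derivatives for linear static analysis
    # (direct differentiation); not the specific procedure developed in the paper.
    import numpy as np

    def assemble_K(k1, k2):
        """Stiffness matrix of two springs in series, fixed at one end (2 DOFs)."""
        return np.array([[k1 + k2, -k2],
                         [-k2,      k2]], dtype=float)

    def displacement_and_sensitivity(k1, k2, f):
        K = assemble_K(k1, k2)
        u = np.linalg.solve(K, f)
        dK_dk1 = np.array([[1.0, 0.0], [0.0, 0.0]])   # derivative of K w.r.t. k1
        du_dk1 = np.linalg.solve(K, -dK_dk1 @ u)       # df/dk1 = 0 in this example
        return u, du_dk1

    if __name__ == "__main__":
        k1, k2, f = 100.0, 50.0, np.array([0.0, 1.0])
        u, du = displacement_and_sensitivity(k1, k2, f)
        # Finite-difference check of the analytical sensitivity.
        h = 1e-6
        u_plus, _ = displacement_and_sensitivity(k1 + h, k2, f)
        print("analytical du/dk1:", du)
        print("finite difference:", (u_plus - u) / h)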

  18. Topology Optimization of Large Scale Stokes Flow Problems

    DEFF Research Database (Denmark)

    Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan

    2008-01-01

    This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1,125,000 elements in 2D and 128,000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.

  19. The Cosmology Large Angular Scale Surveyor

    Science.gov (United States)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John; Bennett, Charles; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; hide

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four-telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic-variance-limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  20. Prehospital Acute Stroke Severity Scale to Predict Large Artery Occlusion: Design and Comparison With Other Scales.

    Science.gov (United States)

    Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe

    2016-07-01

    We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale with other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, 2/3 of the test cohort was used, yielding an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on the remaining 1/3 of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6), as opposed to a median NIHSS score of 6 (interquartile range=5) for PASS <2. The PASS scale performed on par with other scales predicting ELVO while being simpler. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
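
    A minimal sketch of a PASS-style tally, based only on what the abstract states (three items, each counted as normal or abnormal, with ELVO suspected at ≥2); the per-item abnormality criteria used here are assumptions, not taken from the publication:

    # Minimal PASS-style tally: three items, each 0 (normal) or 1 (abnormal),
    # with suspected ELVO at a total of >= 2. The per-item criteria are
    # assumptions for illustration, not taken from the publication.
    def pass_score(loc_month_age_abnormal: bool,
                   gaze_palsy_or_deviation: bool,
                   arm_weakness: bool) -> int:
        """Return the number of abnormal items (0-3)."""
        return sum([loc_month_age_abnormal, gaze_palsy_or_deviation, arm_weakness])

    def suspect_elvo(score: int, cut_point: int = 2) -> bool:
        """Flag suspected emergent large vessel occlusion at the published cut point."""
        return score >= cut_point

    if __name__ == "__main__":
        s = pass_score(loc_month_age_abnormal=True,
                       gaze_palsy_or_deviation=True,
                       arm_weakness=False)
        print(s, suspect_elvo(s))   # 2 True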