WorldWideScience

Sample records for processing large amounts

  1. Another way of managing large amounts of data

    CERN Multimedia

    2009-01-01

    Jeff Hammerbacher is Vice President of Products and Chief Scientist at Cloudera, a US software company that provides solutions for managing and analysing very large data sets. His invited talk on 21 August was a good opportunity to exchange views with the CERN experts who face similar problems. Although still relatively young, Jeff has considerable experience in developing tools for storing and processing large amounts of data. Before Cloudera, Jeff conceived, built and led the Data team at Facebook. He has also worked as a quantitative analyst on Wall Street. Jeff holds a Bachelor’s Degree in mathematics from Harvard University. At CERN, handling large amounts of data is the job of the Grid; Hadoop, the software Cloudera is developing, is intended for the same purpose but has different technical features and implementations. "The Grid software products are designed for many organisations to collaborate on large-scale data analysis across many data centres. In contrast, Had...

  2. A simple biosynthetic pathway for large product generation from small substrate amounts

    Science.gov (United States)

    Djordjevic, Marko; Djordjevic, Magdalena

    2012-10-01

    A recently emerging discipline, synthetic biology, has the aim of constructing new biosynthetic pathways with useful biological functions. A major application of such pathways is generating a large amount of a desired product. However, toxicity due to the possible presence of toxic precursors is one of the main problems for such production. We consider here the problem of generating a large amount of product from a potentially toxic substrate. To address this, we propose a simple biosynthetic pathway, which can be induced to produce a large number of product molecules while keeping the substrate amount at low levels. Surprisingly, we show that the large product generation crucially depends on fast non-specific degradation of the substrate molecules. We derive an optimal induction strategy, which allows as much as a three-orders-of-magnitude increase in the product amount for biologically realistic parameter values. We point to a recently discovered bacterial immune system (CRISPR/Cas in E. coli) as a putative example of the pathway analysed here. We also argue that the scheme proposed here can be used not only as a stand-alone pathway, but also as a strategy to produce a large amount of desired molecules with small perturbations of endogenous biosynthetic pathways.
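
    As a rough illustration of the qualitative idea described in this abstract (not the authors' actual model), the Python sketch below integrates a toy two-species scheme in which fast non-specific degradation keeps the substrate pool small while a stable product accumulates after induction; all rate constants are invented placeholders.

        # Toy induced-pathway kinetics (illustrative only): substrate S is supplied at
        # rate k_in, removed by fast non-specific degradation d_s, and converted to a
        # stable product P at an inducible rate k_conv; the product decays slowly (d_p).
        def simulate(k_in=1.0, d_s=10.0, d_p=0.01, k_conv_induced=5.0,
                     t_induce=50.0, t_end=500.0, dt=0.01):
            s, p, t = 0.0, 0.0, 0.0
            trace = []
            while t < t_end:
                k_conv = k_conv_induced if t >= t_induce else 0.0
                ds = k_in - (d_s + k_conv) * s   # fast degradation keeps S small
                dp = k_conv * s - d_p * p        # stable P accumulates to high levels
                s, p, t = s + ds * dt, p + dp * dt, t + dt
                trace.append((t, s, p))
            return trace

        trace = simulate()
        print("final substrate %.3f, final product %.1f" % (trace[-1][1], trace[-1][2]))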

  3. A simple biosynthetic pathway for large product generation from small substrate amounts

    Energy Technology Data Exchange (ETDEWEB)

    Djordjevic, Marko [Institute of Physiology and Biochemistry, Faculty of Biology, University of Belgrade (Serbia)]; Djordjevic, Magdalena [Institute of Physics Belgrade, University of Belgrade (Serbia)]

    2012-10-01

    A recently emerging discipline, synthetic biology, has the aim of constructing new biosynthetic pathways with useful biological functions. A major application of such pathways is generating a large amount of a desired product. However, toxicity due to the possible presence of toxic precursors is one of the main problems for such production. We consider here the problem of generating a large amount of product from a potentially toxic substrate. To address this, we propose a simple biosynthetic pathway, which can be induced to produce a large number of product molecules while keeping the substrate amount at low levels. Surprisingly, we show that the large product generation crucially depends on fast non-specific degradation of the substrate molecules. We derive an optimal induction strategy, which allows as much as a three-orders-of-magnitude increase in the product amount for biologically realistic parameter values. We point to a recently discovered bacterial immune system (CRISPR/Cas in E. coli) as a putative example of the pathway analysed here. We also argue that the scheme proposed here can be used not only as a stand-alone pathway, but also as a strategy to produce a large amount of desired molecules with small perturbations of endogenous biosynthetic pathways. (paper)

  4. A simple biosynthetic pathway for large product generation from small substrate amounts

    International Nuclear Information System (INIS)

    Djordjevic, Marko; Djordjevic, Magdalena

    2012-01-01

    A recently emerging discipline, synthetic biology, has the aim of constructing new biosynthetic pathways with useful biological functions. A major application of such pathways is generating a large amount of a desired product. However, toxicity due to the possible presence of toxic precursors is one of the main problems for such production. We consider here the problem of generating a large amount of product from a potentially toxic substrate. To address this, we propose a simple biosynthetic pathway, which can be induced to produce a large number of product molecules while keeping the substrate amount at low levels. Surprisingly, we show that the large product generation crucially depends on fast non-specific degradation of the substrate molecules. We derive an optimal induction strategy, which allows as much as a three-orders-of-magnitude increase in the product amount for biologically realistic parameter values. We point to a recently discovered bacterial immune system (CRISPR/Cas in E. coli) as a putative example of the pathway analysed here. We also argue that the scheme proposed here can be used not only as a stand-alone pathway, but also as a strategy to produce a large amount of desired molecules with small perturbations of endogenous biosynthetic pathways. (paper)

  5. Analytics to Better Interpret and Use Large Amounts of Heterogeneous Data

    Science.gov (United States)

    Mathews, T. J.; Baskin, W. E.; Rinsland, P. L.

    2014-12-01

    Data scientists at NASA's Atmospheric Science Data Center (ASDC) are seasoned software application developers who have worked with the creation, archival, and distribution of large datasets (multiple terabytes and larger). In order for ASDC data scientists to effectively implement the most efficient processes for cataloging and organizing data access applications, they must be intimately familiar with the data contained in the datasets with which they are working. Key technologies that are critical components of the background of ASDC data scientists include: large RDBMSs (relational database management systems) and NoSQL databases; web services; service-oriented architectures; structured and unstructured data access; as well as processing algorithms. However, as prices of data storage and processing decrease, sources of data increase, and technologies advance - granting more people access to data in real or near-real time - data scientists are being pressured to accelerate their ability to identify and analyze vast amounts of data. With existing tools this is becoming increasingly challenging to accomplish. For example, NASA's Earth Science Data and Information System (ESDIS) alone grew from just over 4 PB of data in 2009 to nearly 6 PB in 2011, and then to roughly 10 PB in 2013. With data from at least ten new missions to be added to the ESDIS holdings by 2017, the current volume will continue to grow exponentially and drive the need to analyze more data even faster. Though there are many highly efficient, off-the-shelf analytics tools available, these tools mainly cater to business data, which is predominantly unstructured. Consequently, there are very few known analytics tools that interface well with archived Earth science data, which is predominantly heterogeneous and structured. This presentation will identify use cases for data analytics from an Earth science perspective in order to begin to identify

  6. Discovering Related Clinical Concepts Using Large Amounts of Clinical Notes.

    Science.gov (United States)

    Ganesan, Kavita; Lloyd, Shane; Sarkar, Vikren

    2016-01-01

    The ability to find highly related clinical concepts is essential for many applications, such as hypothesis generation, query expansion for medical literature search, search-results filtering, and ICD-10 code filtering. While manually constructed medical terminologies such as SNOMED CT can surface certain related concepts, these terminologies are inadequate, as they depend on the expertise of several subject matter experts, making the terminology curation process open to geographic and language bias. In addition, these terminologies provide no quantifiable evidence of how related the concepts are. In this work, we explore an unsupervised graphical approach to mine related concepts by leveraging the volume of large amounts of clinical notes. Our evaluation shows that we are able to use a data-driven approach to discover highly related concepts for various search terms, including medications, symptoms and diseases.
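
    For orientation, the sketch below shows one simple, unsupervised co-occurrence approach of the general kind described here; the toy notes, concepts and weighting are invented placeholders and do not reproduce the authors' algorithm.

        from collections import defaultdict
        from itertools import combinations

        # Toy clinical notes, already reduced to the concepts extracted from each note.
        notes = [
            {"metformin", "type 2 diabetes", "hyperglycemia"},
            {"metformin", "type 2 diabetes", "neuropathy"},
            {"lisinopril", "hypertension"},
            {"type 2 diabetes", "hyperglycemia", "polyuria"},
        ]

        # Concept co-occurrence graph: edge weight = number of notes containing both concepts.
        edges = defaultdict(int)
        for concepts in notes:
            for a, b in combinations(sorted(concepts), 2):
                edges[(a, b)] += 1

        def related(term, top_k=3):
            """Rank concepts by co-occurrence weight with the query term."""
            scores = defaultdict(int)
            for (a, b), w in edges.items():
                if a == term:
                    scores[b] += w
                elif b == term:
                    scores[a] += w
            return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

        print(related("type 2 diabetes"))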

  7. Discovering Related Clinical Concepts Using Large Amounts of Clinical Notes

    Directory of Open Access Journals (Sweden)

    Kavita Ganesan

    2016-01-01

    The ability to find highly related clinical concepts is essential for many applications, such as hypothesis generation, query expansion for medical literature search, search-results filtering, and ICD-10 code filtering. While manually constructed medical terminologies such as SNOMED CT can surface certain related concepts, these terminologies are inadequate, as they depend on the expertise of several subject matter experts, making the terminology curation process open to geographic and language bias. In addition, these terminologies provide no quantifiable evidence of how related the concepts are. In this work, we explore an unsupervised graphical approach to mine related concepts by leveraging the volume of large amounts of clinical notes. Our evaluation shows that we are able to use a data-driven approach to discover highly related concepts for various search terms, including medications, symptoms and diseases.

  8. Highly Sensitive GMO Detection Using Real-Time PCR with a Large Amount of DNA Template: Single-Laboratory Validation.

    Science.gov (United States)

    Mano, Junichi; Hatano, Shuko; Nagatomi, Yasuaki; Futo, Satoshi; Takabatake, Reona; Kitta, Kazumi

    2018-03-01

    Current genetically modified organism (GMO) detection methods allow for sensitive detection. However, a further increase in sensitivity will enable more efficient testing for large grain samples and reliable testing for processed foods. In this study, we investigated real-time PCR-based GMO detection methods using a large amount of DNA template. We selected target sequences that are commonly introduced into many kinds of GM crops, i.e., 35S promoter and nopaline synthase (NOS) terminator. This makes the newly developed method applicable to a wide range of GMOs, including some unauthorized ones. The estimated LOD of the new method was 0.005% of GM maize events; to the best of our knowledge, this method is the most sensitive among the GM maize detection methods for which the LOD was evaluated in terms of GMO content. A 10-fold increase in the DNA amount as compared with the amount used under common testing conditions gave an approximately 10-fold reduction in the LOD without PCR inhibition. Our method is applicable to various analytical samples, including processed foods. The use of other primers and fluorescence probes would permit highly sensitive detection of various recombinant DNA sequences besides the 35S promoter and NOS terminator.

  9. Detection of tiny amounts of fissile materials in large-sized containers with radioactive waste

    Science.gov (United States)

    Batyaev, V. F.; Skliarov, S. V.

    2018-01-01

    The paper is devoted to non-destructive control of tiny amounts of fissile materials in large-sized containers filled with radioactive waste (RAW). The aim of this work is to model an active neutron interrogation facility for the detection of fissile materials inside NZK-type containers with RAW and to determine the minimal detectable mass of U-235 as a function of various parameters: matrix type, nonuniformity of container filling, neutron generator parameters (flux, pulse frequency, pulse duration) and measurement time. As a result, the dependence of the minimal detectable mass on the location of fissile materials inside the container is shown. Nonuniformity of the thermal neutron flux inside a container is the main reason for the spatial heterogeneity of the minimal detectable mass inside a large-sized container. Our experiments with tiny amounts of uranium-235 (<1 g) confirm the detection of fissile materials in NZK containers using the active neutron interrogation technique.

  10. Detection of tiny amounts of fissile materials in large-sized containers with radioactive waste

    Directory of Open Access Journals (Sweden)

    Batyaev V.F.

    2018-01-01

    The paper is devoted to non-destructive control of tiny amounts of fissile materials in large-sized containers filled with radioactive waste (RAW). The aim of this work is to model an active neutron interrogation facility for the detection of fissile materials inside NZK-type containers with RAW and to determine the minimal detectable mass of U-235 as a function of various parameters: matrix type, nonuniformity of container filling, neutron generator parameters (flux, pulse frequency, pulse duration) and measurement time. As a result, the dependence of the minimal detectable mass on the location of fissile materials inside the container is shown. Nonuniformity of the thermal neutron flux inside a container is the main reason for the spatial heterogeneity of the minimal detectable mass inside a large-sized container. Our experiments with tiny amounts of uranium-235 (<1 g) confirm the detection of fissile materials in NZK containers using the active neutron interrogation technique.

  11. Large amounts of antiproton production by heavy ion collision

    International Nuclear Information System (INIS)

    Takahashi, Hiroshi; Powell, J.

    1987-01-01

    To produce large amounts of antiprotons, on the order of several grams per year, the use of machines producing nuclear collisions is studied. These collisions can be proton-proton, proton-nucleus or nucleus-nucleus in nature. To achieve high-luminosity colliding beams, on the order of 10⁴¹ m/cm², a self-colliding machine is required rather than a conventional circular colliding type. The self-colliding machine can produce additional antiprotons through successive collisions of secondary particles, such as spectator nucleons. A key problem is how to collect the produced antiprotons without capture by beam nuclei in the collision zone. Production costs for antimatter are projected for various energy-source options and technology levels. Dedicated facilities using heavy-ion collisions could produce antiprotons at substantially less than 1 million $/milligram. With co-production of other valuable products, e.g. nuclear fuel for power reactors, antiproton costs could be reduced to even lower values

  12. Large amounts of antiproton production by heavy ion collision

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Hiroshi; Powell, J.

    1987-01-01

    To produce large amounts of antiprotons, on the order of several grams per year, the use of machines producing nuclear collisions is studied. These collisions can be proton-proton, proton-nucleus or nucleus-nucleus in nature. To achieve high-luminosity colliding beams, on the order of 10⁴¹ m/cm², a self-colliding machine is required rather than a conventional circular colliding type. The self-colliding machine can produce additional antiprotons through successive collisions of secondary particles, such as spectator nucleons. A key problem is how to collect the produced antiprotons without capture by beam nuclei in the collision zone. Production costs for antimatter are projected for various energy-source options and technology levels. Dedicated facilities using heavy-ion collisions could produce antiprotons at substantially less than 1 million $/milligram. With co-production of other valuable products, e.g. nuclear fuel for power reactors, antiproton costs could be reduced to even lower values.

  13. Integration of large amounts of wind power. Markets for trading imbalances

    Energy Technology Data Exchange (ETDEWEB)

    Neimane, Viktoria; Axelsson, Urban [Vattenfall Research and Development AB, Stockholm (Sweden)]; Gustafsson, Johan; Gustafsson, Kristian [Vattenfall Nordic Generation Management, Stockholm (Sweden)]; Murray, Robin [Vattenfall Vindkraft AB, Stockholm (Sweden)]

    2008-07-01

    The well-known concerns about wind power relate to its intermittent nature and the difficulty of making exact forecasts. The expected increase in balancing and reserve requirements due to wind power has been investigated in several studies. This paper takes the next step in studying the integration of large amounts of wind power in Sweden. The perspective of several wind power producers and their corresponding balance providers is taken and their imbalance costs are modeled. Larger producers with wind power spread over larger geographical areas will have lower relative costs than producers with their units concentrated within a limited geographical area. Possibilities for wind power producers to reduce the imbalance costs by acting on the after-sales market are presented and compared. (orig.)

  14. A mesh density study for application to large deformation rolling process evaluation

    International Nuclear Information System (INIS)

    Martin, J.A.

    1997-12-01

    When addressing large deformation through an elastic-plastic analysis, the mesh density is paramount in determining the accuracy of the solution. However, given the nonlinear nature of the problem, a highly refined mesh will generally require a prohibitive amount of computer resources. This paper addresses finite element mesh optimization studies, considering accuracy of results and computer resource needs, as applied to large-deformation rolling processes. In particular, the simulation of the thread rolling manufacturing process is considered using the MARC software package and a Cray C90 supercomputer. The effects of both mesh density and adaptive meshing on final results are evaluated for both indentation of a rigid body to a specified depth and contact rolling along a predetermined length

  15. Alternative containers for low-level wastes containing large amounts of tritium

    International Nuclear Information System (INIS)

    Gause, E.P.; Lee, B.S.; MacKenzie, D.R.; Wiswall, R. Jr.

    1984-11-01

    High-activity tritiated waste generated in the United States is mainly composed of tritium gas and tritium-contaminated organic solvents sorbed onto Speedi-Dri, which are packaged in small glass bulbs. Low-activity waste consists of solidified and adsorbed liquids. In this report, current packages for high-activity gaseous and low-activity adsorbed liquid wastes are emphasized with regard to containment potential. Containers for low-level radioactive waste containing large amounts of tritium need to be developed. Container integrity may be threatened by: physical degradation due to soil corrosion, gas pressure build-up (due to radiolysis and/or biodegradation), rapid permeation of tritium through the container, and corrosion from the container contents. Literature available on these points is summarized in this report. 136 references, 20 figures, 40 tables

  16. Sample preparation method for ICP-MS measurement of 99Tc in a large amount of environmental samples

    International Nuclear Information System (INIS)

    Kondo, M.; Seki, R.

    2002-01-01

    Sample preparation for the measurement of 99Tc in large amounts of soil and water samples by ICP-MS has been developed using 95mTc as a yield tracer. The method is based on the conventional method for small soil samples using incineration, acid digestion, extraction chromatography (TEVA resin) and ICP-MS measurement. Preliminary concentration of Tc by co-precipitation with ferric oxide has been introduced. Matrix materials in large samples were removed more thoroughly than with the previous method, while keeping a high recovery of Tc. The recovery of Tc was 70-80% for 100 g soil samples and 60-70% for 500 g of soil and 500 L of water samples. The detection limit of this method was evaluated as 0.054 mBq/kg in 500 g soil and 0.032 μBq/L in 500 L water. The determined value of 99Tc in IAEA-375 (a soil sample collected near the Chernobyl Nuclear Reactor) was 0.25 ± 0.02 Bq/kg. (author)

  17. CRISPR transcript processing: a mechanism for generating a large number of small interfering RNAs

    Directory of Open Access Journals (Sweden)

    Djordjevic Marko

    2012-07-01

    Background: CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated sequences) is a recently discovered prokaryotic defense system against foreign DNA, including viruses and plasmids. A CRISPR cassette is transcribed as a continuous transcript (pre-crRNA), which is processed by Cas proteins into small RNA molecules (crRNAs) that are responsible for defense against invading viruses. Experiments in E. coli report that overexpression of cas genes generates a large number of crRNAs from only a few pre-crRNAs. Results: We here develop a minimal model of CRISPR processing, which we parameterize based on available experimental data. From the model, we show that the system can generate a large amount of crRNAs based on only a small decrease in the amount of pre-crRNAs. The relationship between the decrease of pre-crRNAs and the increase of crRNAs corresponds to strong linear amplification. Interestingly, this strong amplification crucially depends on fast non-specific degradation of pre-crRNA by an unidentified nuclease. We show that overexpression of cas genes above a certain level does not result in a further increase of crRNA, but that this saturation can be relieved if the rate of CRISPR transcription is increased. We furthermore show that a small increase of the CRISPR transcription rate can substantially decrease the extent of cas gene activation necessary to achieve a desired amount of crRNA. Conclusions: The simple mathematical model developed here is able to explain existing experimental observations on CRISPR transcript processing in Escherichia coli. The model shows that a competition between specific pre-crRNA processing and non-specific degradation determines the steady-state levels of crRNA and is responsible for strong linear amplification of crRNAs when cas genes are overexpressed. The model further shows how disappearance of only a few pre-crRNA molecules normally present in the cell can lead to a large (two
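
    As a hedged orientation aid (not the paper's exact equations), a minimal two-species scheme consistent with this description can be written in LaTeX as

        \frac{d[\mathrm{pre\text{-}crRNA}]}{dt} = T - (k + \lambda_p)\,[\mathrm{pre\text{-}crRNA}],
        \qquad
        \frac{d[\mathrm{crRNA}]}{dt} = k\,[\mathrm{pre\text{-}crRNA}] - \lambda_c\,[\mathrm{crRNA}],

    where T is the CRISPR transcription rate, k the Cas-dependent processing rate, \lambda_p the fast non-specific pre-crRNA degradation rate and \lambda_c the slow crRNA decay rate. At steady state [crRNA]* = kT/((k+\lambda_p)\lambda_c), so when \lambda_c is much smaller than \lambda_p a small reduction of the pre-crRNA pool (through increased processing k) can support a much larger crRNA pool, in line with the strong linear amplification described above.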

  18. Proposal of the concept of selection of accidents that release large amounts of radioactive substances in the high temperature engineering test reactor

    International Nuclear Information System (INIS)

    Ono, Masato; Honda, Yuki; Takada, Shoji; Sawa, Kazuhiro

    2015-01-01

    Article 53 of the rules on the standards for the position, construction and equipment of testing and research reactors (prevention of the expansion of accidents that release a large amount of radioactive material) requires that, when an event occurs whose frequency is lower than that of design basis accidents and which is likely to release a large amount of radioactive material or radiation from the facility, the necessary measures be taken to prevent the spread of the accident. The provision thus covers accidents that have a lower frequency than design basis accidents and that may release a large amount of radioactive material or radiation. (author)

  19. Improved recovery of trace amounts of gold (III), palladium (II) and platinum (IV) from large amounts of associated base metals using anion-exchange resins

    Energy Technology Data Exchange (ETDEWEB)

    Matsubara, I. [Lab. of Chemistry, Tokyo Women's Medical Univ. (Japan)]; Takeda, Y.; Ishida, K. [Lab. of Chemistry, Nippon Medical School, Kawasaki-shi, Kanagawa-ken (Japan)]

    2000-02-01

    The adsorption and desorption behaviors of gold (III), palladium (II) and platinum (IV) were surveyed in column chromatographic systems consisting of one of the conventional anion-exchange resins of large ion-exchange capacity and dilute thiourea solutions. The noble metals were strongly adsorbed on the anion-exchange resins from dilute hydrochloric acid, while most base metals did not show any marked adsorbability. These facts made it possible to separate the noble metals from a large quantity of base metals such as Ag (I), Al (III), Co (II), Cu (II), Fe (III), Mn (II), Ni (II), Pb (II), and Zn (II). Although it used to be very difficult to desorb the noble metals from the resins used, the difficulty was easily overcome by use of dilute thiourea solutions as an eluant. In the present study, as little as 1.00 µg of the respective noble metals was quantitatively separated and recovered from as much as ca. 10 mg of a number of metals on a small column by elution with a small amount of dilute thiourea solution. The present systems should be applicable to the separation, concentration and recovery of traces of the noble metals from a number of base metals coexisting in a more extended range of amounts and ratios. (orig.)

  20. Imbalance costs in the Swedish system with large amounts of wind power

    Energy Technology Data Exchange (ETDEWEB)

    Carlsson, Fredrik; Neimane, Viktoria [Vattenfall Research and Development AB, Stockholm (Sweden)]

    2009-07-01

    The well-known concerns about wind power relate to its intermittent nature and the difficulty of making exact forecasts. The expected increase in balancing and reserve requirements due to wind power has been investigated in several studies. This paper takes the next step in studying the integration of large amounts of wind power in Sweden. The perspective of several wind power producers and their corresponding balance providers is taken and their imbalance costs are modeled. Larger producers with wind power spread over larger geographical areas will have lower relative costs than producers with their units concentrated within a limited geographical area. Possibilities for wind power producers to reduce the imbalance costs by acting on the after-sales market are presented and compared. (orig.)

  1. Knowledge discovery: Extracting usable information from large amounts of data

    International Nuclear Information System (INIS)

    Whiteson, R.

    1998-01-01

    The threat of nuclear weapons proliferation is a problem of worldwide concern. Safeguards are the key to nuclear nonproliferation, and data is the key to safeguards. The safeguards community has access to a huge and steadily growing volume of data. The advantages of this data-rich environment are obvious: there is a great deal of information which can be utilized. The challenge is to effectively apply proven and developing technologies to find and extract usable information from that data. That information must then be assessed and evaluated to produce the knowledge needed for crucial decision making. Efficient and effective analysis of safeguards data will depend on utilizing technologies to interpret the large, heterogeneous data sets that are available from diverse sources. With an order-of-magnitude increase in the amount of data from a wide variety of technical, textual, and historical sources, there is a vital need to apply advanced computer technologies to support all-source analysis. There are techniques of data warehousing, data mining, and data analysis that can provide analysts with tools to expedite the extraction of usable information from the huge amounts of data to which they have access. Computerized tools can aid analysts by integrating heterogeneous data, evaluating diverse data streams, automating retrieval of database information, prioritizing inputs, reconciling conflicting data, doing preliminary interpretations, discovering patterns or trends in data, and automating some of the simpler prescreening tasks that are time consuming and tedious. Thus knowledge discovery technologies can provide a foundation of support for the analyst. Rather than spending time sifting through often irrelevant information, analysts could use their specialized skills in a focused, productive fashion. This would allow them to make their analytical judgments with more confidence and spend more of their time doing what they do best

  2. Processing large sensor data sets for safeguards : the knowledge generation system.

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Maikel A.; Smartt, Heidi Anne; Matthews, Robert F.

    2012-04-01

    Modern nuclear facilities, such as reprocessing plants, present inspectors with significant challenges due in part to the sheer amount of equipment that must be safeguarded. The Sandia-developed and patented Knowledge Generation system was designed to automatically analyze large amounts of safeguards data to identify anomalous events of interest by comparing sensor readings with those expected from a process of interest and operator declarations. This paper describes a demonstration of the Knowledge Generation system using simulated accountability tank sensor data to represent part of a reprocessing plant. The demonstration indicated that Knowledge Generation has the potential to address several problems critical to the future of safeguards. It could be extended to facilitate remote inspections and trigger random inspections. Knowledge Generation could analyze data to establish trust hierarchies, to facilitate safeguards use of operator-owned sensors.
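
    The core comparison such a system performs can be pictured with the small sketch below; the tank levels, declared transfer rate and tolerance are invented for illustration and are not taken from the Knowledge Generation implementation.

        # Compare measured accountability-tank levels against the level expected from an
        # operator declaration, and flag readings that fall outside a tolerance band.
        def expected_level(start_level, declared_rate, t):
            """Level (litres) predicted from the declared transfer rate (litres/hour)."""
            return start_level + declared_rate * t

        def flag_anomalies(readings, start_level, declared_rate, tolerance=50.0):
            anomalies = []
            for t, measured in readings:
                residual = measured - expected_level(start_level, declared_rate, t)
                if abs(residual) > tolerance:
                    anomalies.append((t, measured, residual))
            return anomalies

        # Simulated sensor readings: (hours since declaration, litres in tank)
        readings = [(0, 1000.0), (1, 1098.0), (2, 1205.0), (3, 1150.0), (4, 1402.0)]
        print(flag_anomalies(readings, start_level=1000.0, declared_rate=100.0))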

  3. Leveraging human oversight and intervention in large-scale parallel processing of open-source data

    Science.gov (United States)

    Casini, Enrico; Suri, Niranjan; Bradshaw, Jeffrey M.

    2015-05-01

    The popularity of cloud computing, along with the increased availability of cheap storage, has led to the need to elaborate and transform large volumes of open-source data in parallel. One way to handle such extensive volumes of information properly is to take advantage of distributed computing frameworks like Map-Reduce. Unfortunately, an entirely automated approach that excludes human intervention is often unpredictable and error-prone. Highly accurate data processing and decision-making can be achieved by supporting an automatic process through human collaboration, in a variety of environments such as warfare, cyber security and threat monitoring. Although this mutual participation seems easily exploitable, human-machine collaboration in the field of data analysis presents several challenges. First, due to the asynchronous nature of human intervention, it is necessary to verify that once a correction is made, all the necessary reprocessing is done along the chain. Second, it is often necessary to minimize the amount of reprocessing in order to optimize the usage of resources, given their limited availability. To address these strict requirements, this paper introduces improvements to an innovative approach for human-machine collaboration in the processing of large amounts of open-source data in parallel.

  4. A Case of Special Complication following a Large Amount of Polyacrylamide Hydrogel Injected into the Epicranial Aponeurosis: Leukocytopenia

    Directory of Open Access Journals (Sweden)

    Li Rong

    2015-01-01

    Polyacrylamide hydrogel (PAAG) has been used as an injectable filler for soft tissue augmentation of different body parts, such as the face, breasts, and penis. However, this is the first report of leukocytopenia after injection of a large amount of PAAG in the epicranial aponeurosis. After receiving a PAAG injection for craniofacial contouring, the female patient described herein experienced recurrent swelling, temporal pain (particularly with changes in ambient temperature and facial expression), and ultimately leukocytopenia due to widespread migration of the injected PAAG. We removed most of the PAAG from the affected tissues, and the leukocytopenia disappeared 1 year after the operation. Based on this case, we hypothesize that injection of a large amount of PAAG into tissues that have an ample blood supply, such as the epicranial aponeurosis, may induce leukocytopenia.

  5. Determination of small amounts of nitric acid in the presence of large amounts of uranium (VI) and extraction of nitric acid into TBP solutions highly loaded with uranyl nitrate

    International Nuclear Information System (INIS)

    Kolarik, Z.; Schuler, R.

    1982-10-01

    A new method for the determination of small amounts of nitric acid in the presence of large amounts of uranium(VI) was developed. The method is based on the precipitation of uranium(VI) as iodate and subsequent alkalimetric titration of the acid in the supernatant. The extraction of nitric acid and uranium(VI) with 30% TBP in dodecane was studied at high loading of the organic phase with uranyl nitrate and at 25, 40 and 60 °C. The results are compared with available published data on the extraction of nitric acid under similar conditions. (orig.)

  6. Impacts of large amounts of wind power on design and operation of power systems, results of IEA collaboration

    DEFF Research Database (Denmark)

    Holttinen, Hannele; Meibom, Peter; Orths, Antje

    2011-01-01

    There are dozens of wind integration studies, completed and ongoing. However, the results are not easy to compare. IEA WIND R&D Task 25 on Design and Operation of Power Systems with Large Amounts of Wind Power collects and shares information on wind generation impacts on power systems, with analyses and guidelines on methodologies. In the state-of-the-art report (October 2007) and the final report of the 3-year period (July 2009), the most relevant wind power grid integration studies have been analysed, especially regarding methodologies and input data. Several issues that impact on the amount of wind power that can be integrated have been identified. Large balancing areas and aggregation benefits of wide areas help in reducing the variability and forecast errors of wind power, as well as in pooling more cost-effective balancing resources. System operation and functioning...

  7. A large amount synthesis of nanopowder using modulated induction thermal plasmas synchronized with intermittent feeding of raw materials

    International Nuclear Information System (INIS)

    Tanaka, Y; Tsuke, T; Guo, W; Uesugi, Y; Ishijima, T; Watanabe, S; Nakamura, K

    2012-01-01

    A method for large-amount synthesis of titanium dioxide (TiO₂) nanopowder is proposed by direct evaporation of titanium powders using an Ar-O₂ pulse-modulated induction thermal plasma (PMITP). To realize large-amount synthesis of nanopowder, the PMITP method was combined with intermittent, heavy-load feeding of the raw material powder, as well as quenching gas injection. The intermittent powder feeding was synchronized with the modulation of the coil current sustaining the PMITP for complete evaporation of the injected powder. Particles synthesized by the developed method were analyzed by FE-SEM and XRD. The results indicated that the particles synthesized by the 20 kW PMITP at a heavy loading rate of 12.3 g min⁻¹ had a particle size distribution similar to that obtained at a light loading of 4.2 g min⁻¹, with a mean diameter of about 40 nm.

  8. Very large amounts of radiation are needed to change cancer frequency

    International Nuclear Information System (INIS)

    Brooks, A.; Couch, L.

    2006-01-01

    A marked radiophobia, or excessive fear of radiation exposure, is shared by the general public. A major factor in this fear is the perception that each and every radiation-induced ionization increases the risk for cancer, so that even the smallest radiation exposure needs to be avoided. It is important to realize that this is not the case. It requires very large amounts of radiation delivered to large populations to produce an increase in cancer frequency. This has been demonstrated in many experimental systems, animal studies and human populations. If either the population size or the dose is reduced, it is not possible to detect an increase in cancer frequency. This paper deals with real radiation-induced increases in cancer frequency that are statistically significant, rather than with extrapolated or calculated small increases in radiation-induced risks using linear models. Further, it demonstrates that there are barriers below which increases in cancer cannot be detected. Finally, the manuscript helps explain that there are transitions in the mechanisms of biological action as a function of radiation dose, with very different mechanisms being triggered at high and at low doses. These transitions suggest the need for paradigm shifts. Concepts such as hit theory, independence of individual cellular responses and single mutations being responsible for cancer need to be re-evaluated. New paradigms such as "bystander effects", showing that the size of the responding target is much larger than the hit target, adaptive response, demonstrating that cell-cell communication modifies individual cellular responses, and genomic instability that is not dependent on radiation-induced mutations in individual cells

  9. Transient Stability Assessment of Power System with Large Amount of Wind Power Penetration

    DEFF Research Database (Denmark)

    Liu, Leo; Chen, Zhe; Bak, Claus Leth

    2012-01-01

    Recently, the security and stability of power systems with large amounts of wind power have become issues of concern, especially transient stability. In Denmark, onshore and offshore wind farms are connected to the distribution system and the transmission system, respectively. The control and protection methodologies of onshore and offshore wind farms clearly affect the transient stability of the power system. In this paper, the onshore and offshore wind farms are modeled in detail in order to assess the transient stability of the western Danish power system. Further, the computation of critical clearing time (CCT) ... plants, load consumption level and high voltage direct current (HVDC) transmission links are taken into account. The results presented in this paper are able to provide an early awareness of the security condition of the western Danish power system.

  10. Large transverse momentum hadronic processes

    International Nuclear Information System (INIS)

    Darriulat, P.

    1977-01-01

    The possible relations between deep inelastic leptoproduction and large transverse momentum (p_t) processes in hadronic collisions are usually considered in the framework of the quark-parton picture. Experiments observing the structure of the final state in proton-proton collisions producing at least one large transverse momentum particle have led to the following conclusions: a large fraction of the produced particles are unaffected by the large-p_t process; the other products are correlated to the large-p_t particle. Depending upon the sign of the scalar product, they can be separated into two groups of "towards-movers" and "away-movers". The experimental evidence favouring such a picture is reviewed and the properties of each of the three groups (underlying normal event, towards-movers and away-movers) are discussed. Some phenomenological interpretations are presented. The exact nature of away- and towards-movers must be further investigated. Their apparent jet structure has to be confirmed. Angular correlations between leading away- and towards-movers are very informative. Quantum number flow, both within the set of away- and towards-movers and between it and the underlying normal event, is predicted to behave very differently in different models

  11. Large Scale Gaussian Processes for Atmospheric Parameter Retrieval and Cloud Screening

    Science.gov (United States)

    Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.

    2017-12-01

    Current Earth-observation (EO) applications for image classification have to deal with an unprecedentedly large amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the super-spectral Copernicus Sentinels, EnMAP and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like Worldview-3 also pose big challenges to data processing. The challenge is not only attached to optical sensors but also to infrared sounders and radar images, which have increased in spectral, spatial and temporal resolution. Besides, we should not forget the availability of the extremely large remote sensing data archives already collected by several past missions, such as ENVISAT, Cosmo-SkyMED, Landsat, SPOT, or Seviri/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large-scale kernel methods for both atmospheric parameter retrieval and cloud detection using infrared sounding IASI data and optical Seviri/MSG imagery. We propose novel Gaussian Processes (GPs) to train problems with millions of instances and a high number of input features. The algorithms can cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and Seviri/MSG, and engineered randomized kernel functions and emulation for temperature, moisture and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. An excellent compromise between accuracy and scalability is obtained in all applications.
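
    The random Fourier feature idea mentioned above can be sketched in a few lines; the toy data, dimensions and hyperparameters below are placeholders and do not correspond to the IASI or Seviri/MSG experiments.

        import numpy as np

        rng = np.random.default_rng(0)

        def rff_features(X, n_features=500, lengthscale=1.0):
            """Random Fourier feature map approximating an RBF (Gaussian) kernel."""
            d = X.shape[1]
            W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
            b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
            return np.sqrt(2.0 / n_features) * np.cos(X @ W + b), (W, b)

        # Toy regression problem standing in for an atmospheric-parameter retrieval.
        X = rng.normal(size=(5000, 8))                 # e.g. radiance features
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=5000)

        Z, (W, b) = rff_features(X)
        lam = 1e-2                                     # ridge regularisation
        w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)

        X_test = rng.normal(size=(5, 8))
        Z_test = np.sqrt(2.0 / Z.shape[1]) * np.cos(X_test @ W + b)
        print(Z_test @ w)                              # approximate kernel-ridge/GP-mean predictions

    With the explicit feature map, training scales linearly in the number of samples rather than cubically, which is the point of the approximation.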

  12. Efficient querying of large process model repositories

    NARCIS (Netherlands)

    Jin, Tao; Wang, Jianmin; La Rosa, M.; Hofstede, ter A.H.M.; Wen, Lijie

    2013-01-01

    Recent years have seen an increased uptake of business process management technology in industries. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business

  13. The Application Law of Large Numbers That Predicts The Amount of Actual Loss in Insurance of Life

    Science.gov (United States)

    Tinungki, Georgina Maria

    2018-03-01

    The law of large numbers is a statistical concept used to calculate the average number of events or risks in a sample or population in order to predict something. The larger the population considered, the more accurate the predictions. In the field of insurance, the law of large numbers is used to predict the risk of loss or claims of participants so that the premium can be calculated appropriately. For example, if on average one of every 100 insurance participants files an accident claim, then the premiums of 100 participants should be able to provide the sum assured for at least one accident claim. The more insurance participants are included in the calculation, the more precise the prediction and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear the same risk in large numbers. Here, what is called the law of large numbers applies. The law of large numbers states that as the amount of exposure to losses increases, the predicted loss will be closer to the actual loss. The use of the law of large numbers allows the number of losses to be predicted better.
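
    The claim-frequency example in this abstract can be checked directly with a short simulation; the sum assured and trial counts below are arbitrary illustrative choices.

        import numpy as np

        rng = np.random.default_rng(1)
        CLAIM_PROB = 0.01        # one claim per 100 participants, as in the example above
        SUM_ASSURED = 100_000    # arbitrary illustrative benefit

        def loss_per_participant(n_participants, n_trials=5000):
            """Mean and spread of the per-participant loss over simulated portfolios."""
            claims = rng.binomial(n_participants, CLAIM_PROB, size=n_trials)
            losses = claims * SUM_ASSURED / n_participants
            return losses.mean(), losses.std()

        for n in (100, 1_000, 10_000):
            mean, spread = loss_per_participant(n)
            print(f"pool={n:>6}  mean loss per head={mean:8.1f}  spread={spread:7.1f}")

        # The mean stays near CLAIM_PROB * SUM_ASSURED = 1000 (the pure premium), while
        # the spread shrinks as the pool grows -- the law of large numbers at work.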

  14. Really big data: Processing and analysis of large datasets

    Science.gov (United States)

    Modern animal breeding datasets are large and getting larger, due in part to the recent availability of DNA data for many animals. Computational methods for efficiently storing and analyzing those data are under development. The amount of storage space required for such datasets is increasing rapidl...

  15. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  16. Environmental contamination due to release of a large amount of tritium

    International Nuclear Information System (INIS)

    Kawai, Hiroshi

    1988-01-01

    Tritium release incidents have occurred many times at the Savannah River Plant in the U.S. A tritium release incident also took place at the Lawrence Livermore Laboratory. The present article outlines the reports by the plant and the laboratory on these incidents and makes some comments on the environmental contamination that may result from the release of a large amount of tritium from nuclear fusion facilities. Tritium is normally released in the form of a combination of chemical compounds such as HT, DT and T₂ and oxides such as HTO, DTO and T₂O. The percentage of the oxides is given in the reports by the plant. Oxides, which can be absorbed through the skin, are considered to be nearly a thousand times more toxic than the other type of tritium compounds. The HT-type compounds (HT, DT and T₂) can be oxidized by microorganisms in soil into oxides (HTO, DTO and T₂O), and therefore great care should also be given to this type of compound. After each accidental tritium release, the health physics group of the plant collected various environmental samples, including ground surface water, milk, leaves of plants, soil and human urine, in leeward areas. Results on the contamination of surface water, fish and underground water are outlined and discussed. (Nogami, K.)

  17. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    In this paper, we consider a size-dependent renewal risk model with a stopping-time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method substantially relies on a martingale for the structure of our models.
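
    For orientation, a generic precise large-deviation statement of the kind referred to here (stated loosely; the paper's own result holds under its size-dependence and stopping-time assumptions, which are not reproduced) reads, in LaTeX,

        P\bigl(S(t) - \mu\,\lambda(t) > x\bigr) \;\sim\; \lambda(t)\,\overline{F}(x), \qquad t \to \infty,

    uniformly for x \ge \gamma\,\lambda(t) for each fixed \gamma > 0, where S(t) is the aggregate claim amount, \lambda(t) = E[N(t)] the mean number of claims, \mu the mean claim size and \overline{F} = 1 - F the claim-size tail.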

  18. Consistency check of iron and sodium cross sections with integral benchmark experiments using a large amount of experimental information

    International Nuclear Information System (INIS)

    Baechle, R.-D.; Hehn, G.; Pfister, G.; Perlini, G.; Matthes, W.

    1984-01-01

    Single-material benchmark experiments are designed to check neutron and gamma cross-sections of importance for deep penetration problems. At various penetration depths, a large number of activation detectors and spectrometers are placed to measure the radiation field as completely as possible. The large amount of measured data in benchmark experiments can best be evaluated by the global detector concept applied to nuclear data adjustment. A new iteration procedure is presented for the adjustment of a large number of multigroup cross sections, which has now been implemented in the modular adjustment code ADJUST-EUR. A theoretical test problem has been devised to check the total program system with high precision. The method and code are going to be applied to validate the new European Data Files (JEF and EFF) in progress. (Auth.)

  19. Large quantity production of carbon and boron nitride nanotubes by mechano-thermal process

    International Nuclear Information System (INIS)

    Chen, Y.; Fitzgerald, J.D.; Chadderton, L.; Williams, J.S.; Campbell, S.J.

    2002-01-01

    Nanotube materials, including carbon and boron nitride, have excellent properties compared with bulk materials. The seamless graphene cylinders with a high length-to-diameter ratio make them superstrong fibers. A high amount of hydrogen can be stored in nanotubes as a future clean fuel source. These applications require large quantities of nanotube materials. However, nanotube production in large quantity, with fully controlled quality and low cost, remains a challenge for the most popular synthesis methods such as arc discharge, laser heating and catalytic chemical decomposition. Discovery of new synthesis methods is still crucial for future industrial application. The new low-temperature mechano-thermal process discovered by the current author provides an opportunity to develop a commercial method for bulk production. This mechano-thermal process consists of a mechanical ball milling and a thermal annealing process. Using this method, both carbon and boron nitride nanotubes were produced. I will present the mechano-thermal method as the new bulk production technique at the conference. The lecture will summarise the main results obtained. In the case of carbon nanotubes, different nanosized structures including multi-walled nanotubes, nanocells, and nanoparticles have been produced in a graphite sample using a mechano-thermal process consisting of mechanical milling at room temperature for up to 150 hours and subsequent thermal annealing at 1400 deg C. Metal particles have played an important catalytic role in the formation of different tubular structures, while the defect structure of the milled graphite appears to be responsible for the formation of small tubes. It is found that the mechanical treatment of graphite powder produces a disordered and microporous structure, which provides nucleation sites for nanotubes as well as free carbon atoms. Multiwalled carbon nanotubes appear to grow via growth of the (002) layers during thermal annealing. In the case of BN

  20. Generation Expansion Planning With Large Amounts of Wind Power via Decision-Dependent Stochastic Programming

    Energy Technology Data Exchange (ETDEWEB)

    Zhan, Yiduo; Zheng, Qipeng P.; Wang, Jianhui; Pinson, Pierre

    2017-07-01

    Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined future uncertainties (i.e., fixed random outcomes with given probabilities). In several recent studies of generation assets' planning (e.g., thermal versus renewable), new findings show that the investment decisions could affect the future uncertainties as well. To this end, this paper proposes a multistage decision-dependent stochastic optimization model for long-term large-scale generation expansion planning, where large amounts of wind power are involved. In the decision-dependent model, the future uncertainties are not only affecting but also affected by the current decisions. In particular, the probability distribution function is determined by not only input parameters but also decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model to a mixed-integer linear programming model. The wind penetration, investment decisions, and the optimality of the decision-dependent model are evaluated in a series of multistage case studies. The results show that the proposed decision-dependent model provides effective optimization solutions for long-term generation expansion planning.

  1. Quantitative analysis of large amounts of journalistic texts using topic modelling

    NARCIS (Netherlands)

    Jacobi, C.; van Atteveldt, W.H.; Welbers, K.

    2016-01-01

    The huge collections of news content which have become available through digital technologies both enable and warrant scientific inquiry, challenging journalism scholars to analyse unprecedented amounts of texts. We propose Latent Dirichlet Allocation (LDA) topic modelling as a tool to face this
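
    A minimal LDA sketch on a toy news-like corpus, using scikit-learn; the documents, topic count and preprocessing are placeholders rather than the study's actual setup.

        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.feature_extraction.text import CountVectorizer

        # Toy stand-in for a large collection of news articles.
        docs = [
            "parliament passes new budget amid opposition protest",
            "central bank raises interest rates to curb inflation",
            "champions league final ends in dramatic penalty shootout",
            "striker signs record transfer deal with rival club",
            "government announces tax reform and spending cuts",
            "coach praises team defence after cup victory",
        ]

        vectorizer = CountVectorizer(stop_words="english")
        X = vectorizer.fit_transform(docs)

        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        lda.fit(X)

        terms = vectorizer.get_feature_names_out()
        for k, weights in enumerate(lda.components_):
            top = [terms[i] for i in weights.argsort()[-5:][::-1]]
            print(f"topic {k}: {top}")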

  2. Expert system shell to reason on large amounts of data

    Science.gov (United States)

    Giuffrida, Gionanni

    1994-01-01

    Current database management systems (DBMSs) do not provide a sophisticated environment for developing rule-based expert system applications. Some of the new DBMSs come with some sort of rule mechanism; these are active and deductive database systems. However, neither of these is featured enough to support a full implementation based on rules. On the other hand, current expert system shells do not provide any link with external databases. That is, all the data are kept in the system working memory, and such working memory is maintained in main memory. For some applications, the reduced size of the available working memory can be a constraint on development. Typically these are applications which require reasoning on huge amounts of data, and all these data do not fit into the computer's main memory. Moreover, in some cases these data may already be available in some database systems and continuously updated while the expert system is running. This paper proposes an architecture which employs knowledge discovery techniques to reduce the amount of data to be stored in main memory; in this architecture a standard DBMS is coupled with a rule-based language. The data are stored in the DBMS. An interface between the two systems is responsible for inducing knowledge from the set of relations. Such induced knowledge is then transferred to the rule-based language's working memory.

  3. Large scale processing of dielectric electroactive polymers

    DEFF Research Database (Denmark)

    Vudayagiri, Sindhu

    Efficient processing techniques are vital to the success of any manufacturing industry. The processing techniques determine the quality of the products and thus to a large extent the performance and reliability of the products that are manufactured. The dielectric electroactive polymer (DEAP...

  4. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]; Adhinarayanan, Vignesh [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States)]; Turton, Terece [Univ. of Texas, Austin, TX (United States)]; Feng, Wu [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States)]; Rogers, David Honegger [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results; furthermore, it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results produced through sampling.
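
    A simple illustration of regular (grid-based) down-sampling of unstructured point data, in the spirit of the pipeline described above; the resolution, field and point cloud are placeholders, not the paper's implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        # Unstructured sample: scattered points in the unit cube with a scalar field attached.
        points = rng.uniform(0.0, 1.0, size=(100_000, 3))
        values = np.sin(points[:, 0] * 10.0) + points[:, 1]

        def regular_sample(points, values, resolution=32):
            """Average the scalar field onto a regular grid (one value per cell)."""
            idx = np.minimum((points * resolution).astype(int), resolution - 1)
            flat = idx[:, 0] * resolution**2 + idx[:, 1] * resolution + idx[:, 2]
            sums = np.bincount(flat, weights=values, minlength=resolution**3)
            counts = np.bincount(flat, minlength=resolution**3)
            grid = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
            return grid.reshape(resolution, resolution, resolution)

        grid = regular_sample(points, values)
        print(grid.shape, f"compression: {points.shape[0]} points -> {grid.size} cells")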

  5. Biological soil crusts emit large amounts of NO and HONO affecting the nitrogen cycle in drylands

    Science.gov (United States)

    Tamm, Alexandra; Wu, Dianming; Ruckteschler, Nina; Rodríguez-Caballero, Emilio; Steinkamp, Jörg; Meusel, Hannah; Elbert, Wolfgang; Behrendt, Thomas; Sörgel, Matthias; Cheng, Yafang; Crutzen, Paul J.; Su, Hang; Pöschl, Ulrich; Weber, Bettina

    2016-04-01

    Dryland systems currently cover ~40% of the world's land surface and are still expanding as a consequence of human impact and global change. Despite this, information on their role in global biochemical processes is limited, probably owing to the presumption that their sparse vegetation cover plays a negligible role in global balances. However, the spaces between the sparse shrubs are not bare: the soils are mostly covered by biological soil crusts (biocrusts). These biocrust communities belong to the oldest life forms and result from an assembly of soil particles with cyanobacteria, lichens, bryophytes, and algae plus heterotrophic organisms in varying proportions. Depending on the dominant organism group, cyanobacteria-, lichen-, and bryophyte-dominated biocrusts are distinguished. Besides restricting soil erosion, they fix atmospheric carbon and nitrogen and thereby serve as a nutrient source in strongly depleted dryland ecosystems. In this study we show that a fraction of the nitrogen fixed by biocrusts is metabolized and subsequently returned to the atmosphere in the form of nitric oxide (NO) and nitrous acid (HONO). These gases affect radical formation and the oxidizing capacity of the troposphere and are therefore of particular interest to atmospheric chemistry. Laboratory measurements using dynamic chamber systems showed that dark cyanobacteria-dominated crusts emitted the largest amounts of NO and HONO, with trace gas fluxes ~20 times higher than those of nearby bare soil. We showed that these nitrogen emissions have a biogenic origin, as emissions of formerly strongly emitting samples almost completely ceased after sterilization. By combining laboratory, field, and satellite measurement data we made a best estimate of global annual emissions amounting to ~1.1 Tg of NO-N and ~0.6 Tg of HONO-N from biocrusts. This sum of 1.7 Tg of reactive nitrogen emissions equals ~20% of the soil release under natural vegetation according

  6. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies and explore the role of large-scale climate processes in generating extreme floods in such regions by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced-dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convective activity. For individual sites we investigate the exceedance probability with which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs. large-scale).
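    The following Python sketch illustrates the dimension-reduction-plus-clustering idea with scikit-learn; it is not the study's code, uses random stand-in data, and substitutes unsupervised KernelPCA for the supervised variant mentioned in the abstract.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Stand-in data: one row per flood event, columns are gridded moisture-flux
# values in the days before the event (random placeholders here).
X = rng.normal(size=(200, 500))

# Low-dimensional embedding of the circulation fields.
embedding = KernelPCA(n_components=3, kernel="rbf", gamma=1e-3).fit_transform(X)

# Cluster events in the reduced space to separate, e.g., locally driven
# from remotely (teleconnection) driven floods.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
print(np.bincount(labels))
```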

  7. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif

    2017-01-07

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.
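    Giraph itself is a Java/Hadoop system; purely to illustrate the vertex-centric, superstep-based programming model it implements, here is a hedged single-machine Python sketch of single-source shortest paths, the textbook example for this model.

```python
import math

# Tiny weighted graph: vertex -> list of (neighbour, edge weight).
graph = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
value = {v: math.inf for v in graph}   # tentative shortest distance per vertex
messages = {0: [0.0]}                  # kick off the source vertex

while messages:                        # one loop iteration == one superstep
    next_messages = {}
    for v, inbox in messages.items():  # "compute" runs per active vertex
        best = min(inbox)
        if best < value[v]:            # improved distance: update and notify
            value[v] = best
            for target, weight in graph[v]:
                next_messages.setdefault(target, []).append(best + weight)
    messages = next_messages           # barrier between supersteps

print(value)  # shortest-path distances from vertex 0
```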

  8. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif; Orakzai, Faisal Moeen; Abdelaziz, Ibrahim; Khayyat, Zuhair

    2017-01-01

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  9. Large forging manufacturing process

    Science.gov (United States)

    Thamboo, Samuel V.; Yang, Ling

    2002-01-01

    A process for forging large components of Alloy 718 material so that the components do not exhibit abnormal grain growth includes the steps of: a) providing a billet with an average grain size between ASTM 0 and ASTM 3; b) heating the billet to a temperature of between 1750 °F and 1800 °F; c) upsetting the billet to obtain a component part with a minimum strain of 0.125 in at least selected areas of the part; d) reheating the component part to a temperature between 1750 °F and 1800 °F; e) upsetting the component part to a final configuration such that said selected areas receive no strains between 0.01 and 0.125; f) solution treating the component part at a temperature of between 1725 °F and 1750 °F; and g) aging the component part over predetermined times at different temperatures. A modified process achieves abnormal grain growth in selected areas of a component where desirable.

  10. On the use of Cloud Computing and Machine Learning for Large-Scale SAR Science Data Processing and Quality Assessment Analysis

    Science.gov (United States)

    Hua, H.

    2016-12-01

    Geodetic imaging is revolutionizing geophysics, but the scope of discovery has been limited by the labor-intensive technological implementation of the analyses. The Advanced Rapid Imaging and Analysis (ARIA) project has proven capability to automate SAR data processing and analysis. Existing and upcoming SAR missions such as Sentinel-1A/B and NISAR are also expected to generate massive amounts of SAR data. This has brought to the forefront the need for analytical tools for SAR quality assessment (QA) on large volumes of SAR data, a critical step before higher-level time series and velocity products can be reliably generated. Initially leveraging an advanced hybrid-cloud computing science data system for large-scale processing, we augmented machine learning approaches for automated analysis of various quality metrics. Machine learning-based user training of features, cross-validation, and prediction models were integrated into our cloud-based science data processing flow to enable large-scale, high-throughput QA analytics for improving the production quality of geodetic data products.
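    As a generic illustration of the QA-analytics idea (not ARIA's actual code; the quality metrics and labels below are hypothetical), a cross-validated classifier can be trained on per-product quality metrics and then used to screen new products before time-series processing.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Hypothetical QA metrics per SAR product (e.g. coherence, residue count,
# unwrapping error fraction, ...). Labels: 1 = usable, 0 = reject.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)      # cross-validation step
print("mean CV accuracy:", scores.mean())

# Fit on all labeled data, then screen new products automatically.
model.fit(X, y)
new_products = rng.normal(size=(3, 4))
print(model.predict(new_products))
```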

  11. Large deviations for Gaussian processes in Hölder norm

    International Nuclear Information System (INIS)

    Fatalov, V R

    2003-01-01

    Some results are proved on the exact asymptotic representation of large deviation probabilities for Gaussian processes in the Hölder norm. The following classes of processes are considered: the Wiener process, the Brownian bridge, fractional Brownian motion, and stationary Gaussian processes with power-law covariance function. The investigation uses the method of double sums for Gaussian fields.
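    For orientation only (this is the standard logarithmic tail asymptotic for Gaussian norms, not the paper's specific statements), exact-asymptotic results of this kind sharpen the crude estimate

```latex
\[
\lim_{u \to \infty} \frac{1}{u^{2}} \log \mathbb{P}\bigl(\lVert X \rVert > u\bigr)
  = -\frac{1}{2\sigma^{2}},
\qquad
\sigma^{2} = \sup_{\substack{f \in B^{*} \\ \lVert f \rVert \le 1}} \mathbb{E}\, f(X)^{2},
\]
% where X is a centred Gaussian element of a separable Banach space (B, ||.||),
% such as a Hölder space, by replacing the logarithmic rate with the precise
% asymptotic form of the probability itself.
```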

  12. CVSgrab : Mining the History of Large Software Projects

    NARCIS (Netherlands)

    Voinea, S.L.; Telea, A.

    2006-01-01

    Many software projects use Software Configuration Management systems to support their development process. Such systems accumulate over time large amounts of information useful for process accounting and auditing. We study how software developers can get insight into this information in order to

  13. The Very Large Array Data Processing Pipeline

    Science.gov (United States)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/submillimeter Array (ALMA) for both interferometric and single-dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface for verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven-year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500-hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science-ready data products. The pipeline is developed as part of the CASA software package by an

  14. Large-Deviation Results for Discriminant Statistics of Gaussian Locally Stationary Processes

    Directory of Open Access Journals (Sweden)

    Junichi Hirukawa

    2012-01-01

    This paper discusses the large-deviation principle of discriminant statistics for Gaussian locally stationary processes. First, large-deviation theorems for quadratic forms and the log-likelihood ratio for a Gaussian locally stationary process with a mean function are proved. Their asymptotics are described by the large deviation rate functions. Second, we consider situations where the processes are misspecified as stationary. In these misspecified cases, we formally construct the log-likelihood ratio discriminant statistics and derive large deviation theorems for them. Since these are complicated, they are evaluated and illustrated by numerical examples. We find that misspecifying the process as stationary seriously affects our discrimination.

  15. A KPI-based process monitoring and fault detection framework for large-scale processes.

    Science.gov (United States)

    Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Yang, Xu; Ding, Steven X; Peng, Kaixiang

    2017-05-01

    Large-scale processes, consisting of multiple interconnected subprocesses, are commonly encountered in industrial systems, whose performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches are not developed with a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes, which considers the static and dynamic relationships between process and KPI variables. For the static case, a least squares-based approach is developed that provides an explicit link with least-squares regression, which gives better performance than partial least squares. For the dynamic case, using the kernel representation of each subprocess, an instrument variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
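    A hedged numerical sketch of the static case (synthetic data; not the paper's benchmark models): fit the KPI to the process variables by ordinary least squares on fault-free data, then raise an alarm when the online residual exceeds a threshold.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fault-free training data: process variables X and the KPI y they determine.
X = rng.normal(size=(1000, 6))
true_w = np.array([1.0, -0.5, 0.2, 0.0, 0.3, -0.1])
y = X @ true_w + rng.normal(scale=0.05, size=1000)

# Static KPI model fitted by ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Detection threshold from fault-free residuals (a simple 3-sigma rule).
threshold = 3 * (y - X @ w).std()

# Online monitoring: a fault makes the measured KPI deviate from what the
# process variables predict, inflating the residual.
x_new = rng.normal(size=6)
y_new = x_new @ true_w + 1.0     # simulated process fault degrading the KPI
print("fault detected:", abs(y_new - x_new @ w) > threshold)
```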

  16. Drell–Yan process at Large Hadron Collider

    Indian Academy of Sciences (India)

    the Drell–Yan process [1] first studied with muon final states. In Standard .... Two large-statistics sets of signal events, based on the value of the dimuon invariant mass, .... quality control criteria are applied to this globally reconstructed muon.

  17. Identification of low order models for large scale processes

    NARCIS (Netherlands)

    Wattamwar, S.K.

    2010-01-01

    Many industrial chemical processes are complex, multi-phase and large scale in nature. These processes are characterized by various nonlinear physiochemical effects and fluid flows. Such processes often show coexistence of fast and slow dynamics during their time evolutions. The increasing demand

  18. Impacts of Large Amounts of Wind Power on Design and Operation of Power Systems; Results of IEA Collaboration

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, B. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ela, E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Holttinen, H. [VTT (Finland); Meibom, P. [DTU Riso (Denmark); Orths, A. [Energinet.dk (Denmark); O' Malley, M. [Univ. College Dubline (Ireland); Ummels, B. C. [Delft Univ. of Technology (Netherlands); Tande, J. [SINTEF (Norway); Estanqueiro, A. [INETI (Portugal); Gomez, E. [Univ. Castilla la Mancha (Spain); Smith, J. C. [Utility Wind Integration Group (UWIG), Reston, VA (United States)

    2008-06-01

    There are a multitude of studies completed and ongoing related to the cost of wind integration. However, the results are not easy to compare. An international forum for exchange of knowledge of power system impacts of wind power has been formed under the IEA Implementing Agreement on Wind Energy. IEA WIND R&D Task 25 on “Design and Operation of Power Systems with Large Amounts of Wind Power” produced a state-of-the-art report in October 2007, where the most relevant wind-power grid integration studies were analyzed, especially regarding methodologies and input data. This paper summarizes the results from 18 case studies, with discussion on differences in methodology as well as issues that have been identified to impact the cost of wind integration.

  19. Empirical relationships between tree fall and landscape-level amounts of logging and fire.

    Science.gov (United States)

    Lindenmayer, David B; Blanchard, Wade; Blair, David; McBurney, Lachlan; Stein, John; Banks, Sam C

    2018-01-01

    Large old trees are critically important keystone structures in forest ecosystems globally. Populations of these trees are also in rapid decline in many forest ecosystems, making it important to quantify the factors that influence their dynamics at different spatial scales. Large old trees often occur in forest landscapes also subject to fire and logging. However, the effects on the risk of collapse of large old trees of the amount of logging and fire in the surrounding landscape are not well understood. Using an 18-year study in the Mountain Ash (Eucalyptus regnans) forests of the Central Highlands of Victoria, we quantify relationships between the probability of collapse of large old hollow-bearing trees at a site and the amount of logging and the amount of fire in the surrounding landscape. We found the probability of collapse increased with an increasing amount of logged forest in the surrounding landscape. It also increased with a greater amount of burned area in the surrounding landscape, particularly for trees in highly advanced stages of decay. The most likely explanation for elevated tree fall with an increasing amount of logged or burned areas in the surrounding landscape is change in wind movement patterns associated with cutblocks or burned areas. Previous studies show that large old hollow-bearing trees are already at high risk of collapse in our study area. New analyses presented here indicate that additional logging operations in the surrounding landscape will further elevate that risk. Current logging prescriptions require the protection of large old hollow-bearing trees on cutblocks. We suggest that efforts to reduce the probability of collapse of large old hollow-bearing trees on unlogged sites will demand careful landscape planning to limit the amount of timber harvesting in the surrounding landscape.

  20. Formation of Large-scale Coronal Loops Interconnecting Two Active Regions through Gradual Magnetic Reconnection and an Associated Heating Process

    Science.gov (United States)

    Du, Guohui; Chen, Yao; Zhu, Chunming; Liu, Chang; Ge, Lili; Wang, Bing; Li, Chuanyang; Wang, Haimin

    2018-06-01

    Coronal loops interconnecting two active regions (ARs), called interconnecting loops (ILs), are prominent large-scale structures in the solar atmosphere. They carry a significant amount of magnetic flux and therefore are considered to be an important element of the solar dynamo process. Earlier observations showed that eruptions of ILs are an important source of CMEs. It is generally believed that ILs are formed through magnetic reconnection in the high corona (>150″–200″), and several scenarios have been proposed to explain their brightening in soft X-rays (SXRs). However, the detailed IL formation process has not been fully explored, and the associated energy release in the corona still remains unresolved. Here, we report the complete formation process of a set of ILs connecting two nearby ARs, with successive observations by STEREO-A on the far side of the Sun and by SDO and Hinode on the Earth side. We conclude that ILs are formed by gradual reconnection high in the corona, in line with earlier postulations. In addition, we show evidence that ILs brighten in SXRs and EUVs through heating at or close to the reconnection site in the corona (i.e., through the direct heating process of reconnection), a process that has been largely overlooked in earlier studies of ILs.

  1. Influence of Ca amount on the synthesis of Nd₂Fe₁₄B particles in reduction–diffusion process

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chun-Qiang [Powder and Ceramics Division, Korea Institute of Materials Science, Changwon, Gyeongnam 642-831 (Korea, Republic of); Department of Advanced Materials, University of Science and Technology, Daejeon 305-350 (Korea, Republic of); Kim, Dongsoo, E-mail: dskim@kims.re.kr [Powder and Ceramics Division, Korea Institute of Materials Science, Changwon, Gyeongnam 642-831 (Korea, Republic of); Choi, Chuljin [Powder and Ceramics Division, Korea Institute of Materials Science, Changwon, Gyeongnam 642-831 (Korea, Republic of)

    2014-04-15

    Nd₂Fe₁₄B alloy particles with a high coercivity of more than 10 kOe were successfully synthesized by adjusting the amount of calcium (Ca) in the reduction–diffusion (R–D) process. Calcium oxide (CaO) and unreacted Ca remained after the R–D process in particles prepared by heat treatment in a hydrogen (H₂) atmosphere in the previous step. At a Ca-to-powder ratio (Ca/powders, wt%) of 0.4, residual Ca was not detected in the X-ray diffraction pattern. On the other hand, Ca appeared above a ratio of 1.0, while below a ratio of 0.2 the amount of Ca was not enough to reduce the Nd oxide. Moreover, excess Ca affected the magnetic properties of the final products obtained after washing, because residual Ca gave rise to the evolution of H₂ gas during disintegration with water, which led to the formation of Nd₂Fe₁₄BHₓ (x=1–5). Finally, Nd₂Fe₁₄B magnetic particles with a mean size of 2 μm were synthesized after washing in de-ionized water, and their maximum energy product reached 15.5 MGOe. - Highlights: • We employ spray drying for the preparation of precursor powders. • The critical amount of Ca for the reduction of Nd oxides is revealed quantitatively. • The formation of Nd₂Fe₁₄BHₓ (x=1–5) occurs at Ca-to-powder ratios above 2.0. • Final products after washing and drying show more than 10 MGOe of (BH)max. • We propose the optimum amount of Ca for the reduction–diffusion process.

  2. Informational support of the investment process in a large city economy

    Directory of Open Access Journals (Sweden)

    Tamara Zurabovna Chargazia

    2016-12-01

    Large cities possess sufficient potential to participate in investment processes both at the national and international levels. A potential investor's awareness of the possibilities and prospects of a city's development is of great importance for making a decision. Thus, by providing a potential investor with relevant, concise and reliable information, the local authorities increase the intensity of the investment process in the city economy, and vice versa. The hypothesis is that a large city administration can significantly stimulate the investment processes in the economy of the corresponding territorial entity using information-provision tools. The purpose of this article is to develop measures for improving the investment portal of a large city as an important instrument of information provision, which will make it possible to invigorate investment processes at the level under analysis. The reasons for the unsatisfactory information provision on the investment process in a large city economy are analyzed in depth; national and international experience in this sphere is studied; advantages and disadvantages of the information provision for the investment process in the economy of the city of Makeyevka are considered; and the investment portals of different cities are compared. Technical approaches for improving the investment portal of a large city are suggested. The research results can be used to improve the investment policy of large cities.

  3. [Dual process in large number estimation under uncertainty].

    Science.gov (United States)

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  4. Risk of a large amount of high-radioactive contaminated water leaking into the reactor building basement of Fukushima Daiichi nuclear power station

    International Nuclear Information System (INIS)

    Ebisawa, Toru; Sawai, Masako

    2013-01-01

    In November 2012, about one and a half years after the accident at units 1, 2 and 3 of the Fukushima Daiichi nuclear power station, some 405 m³/day of cooling water was being injected into the melt-damaged cores and leaked, as highly radioactive contaminated water, from the damaged lower part of the containment into the basement of the turbine hall. To treat the large amount of contaminated water in the basement, a waste processing plant to remove cesium was installed in June 2011 together with a desalination plant, which produced clean water for the circulating coolant system of the damaged nuclear fuel while the rest went to storage. The radioactivity of the contaminated water accumulated in the basement during roughly the first 80 days of the accident was evaluated at about 20% of the Cs-137 core inventory of units 1, 2 and 3 and 2.3% of the Sr-90 core inventory of units 2 and 3. Sr-90 from unit 1 was not released into the basement and mostly remained in the suppression chamber. By November 2012, Cs-137 released into the basement was evaluated at about 40% of the core inventory and the stored contaminated water amounted to about 360 kilotons, while Cs-137 released into the atmosphere was estimated at about 3.6% of the core inventory, with one third of that contributing to land contamination. Sr-90 released into the basement was estimated as 6.3% or 4.4% of the core inventory, based on the Sr-90 activity measured in treated water in December or September 2011, with 300 kilotons of stored contaminated water. Cs-137- and Sr-90-contaminated water kept being released into the basement as long as the melt-damaged cores existed and cooling water washed out the Cs-137 and Sr-90 attached to the containment walls. Safe storage of the released radioactivity was highly important, and publication of the acquired data was recommended for check and review. (T. Tanaka)

  5. Makespan estimation and order acceptance in batch process industries when processing times are uncertain

    NARCIS (Netherlands)

    Ivanescu, V.C.; Fransoo, J.C.; Bertrand, J.W.M.

    2002-01-01

    Batch process industries are characterized by complex precedence relationships between operations, which makes the estimation of an acceptable workload very difficult. A detailed schedule-based model can be used for this purpose, but for large problems this may require a prohibitively large amount

  6. Automated analysis for large amount gaseous fission product gamma-scanning spectra from nuclear power plant and its data mining

    International Nuclear Information System (INIS)

    Weihua Zhang; Kurt Ungar; Ian Hoffman; Ryan Lawrie; Jarmo Ala-Heikkila

    2010-01-01

    Based on the Linssi database and UniSampo/Shaman software, an automated analysis platform has been set up for the analysis of large amounts of gamma-spectra from the primary coolant monitoring systems of a CANDU reactor. Thus, a database inventory of gaseous and volatile fission products in the primary coolant of a CANDU reactor has been established. This database comprises 15,000 spectra of radioisotope analysis records. Records from the database inventory were retrieved by a specifically designed data-mining module and subjected to further analysis. Results from the analysis were subsequently used to identify the reactor-coolant half-lives of 135Xe and 133Xe, as well as the correlation of 135Xe and 88Kr activities. (author)
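    The data-mining module itself is not reproduced in the record; as a generic illustration of one derived quantity, an effective half-life can be recovered from an activity time series by an exponential fit, sketched below in Python with synthetic data (the 9.14 h value is the nominal physical half-life of 135Xe, used here only to generate the example).

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

# Synthetic activity record for one nuclide, as could be mined from many
# analysed spectra: times in hours, activities with counting noise.
t = np.linspace(0, 40, 60)
true_half_life = 9.14      # hours, nominal physical half-life of 135Xe
a = 1e4 * np.exp(-np.log(2) * t / true_half_life)
a *= 1 + 0.03 * rng.normal(size=t.size)

def decay(t, a0, half_life):
    """Simple exponential decay parameterised by the half-life."""
    return a0 * np.exp(-np.log(2) * t / half_life)

popt, _ = curve_fit(decay, t, a, p0=(1e4, 10.0))
print(f"fitted half-life: {popt[1]:.2f} h")
```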

  7. Large, but not small, antigens require time- and temperature-dependent processing in accessory cells before they can be recognized by T cells

    DEFF Research Database (Denmark)

    Buus, S; Werdelin, O

    1986-01-01

    We have studied if antigens of different size and structure all require processing in antigen-presenting cells of guinea-pigs before they can be recognized by T cells. The method of mild paraformaldehyde fixation was used to stop antigen-processing in the antigen-presenting cells. As a measure...... of antigen presentation we used the proliferative response of appropriately primed T cells during a co-culture with the paraformaldehyde-fixed and antigen-exposed presenting cells. We demonstrate that the large synthetic polypeptide antigen, dinitrophenyl-poly-L-lysine, requires processing. After an initial......-dependent and consequently energy-requiring. Processing is strongly inhibited by the lysosomotrophic drug, chloroquine, suggesting a lysosomal involvement in antigen processing. The existence of a minor, non-lysosomal pathway is suggested, since small amounts of antigen were processed even at 10 degrees C, at which...

  8. Storage process of large solid radioactive wastes

    International Nuclear Information System (INIS)

    Morin, Bruno; Thiery, Daniel.

    1976-01-01

    A process for the storage of large-size solid radioactive waste consisting of contaminated objects such as cartridge filters, metal swarf, tools, etc., whereby such waste is incorporated in a thermohardening resin at room temperature after prior addition of at least one inert filler to the resin. Cross-linking of the resin is then brought about. [fr]

  9. An Estimation of Construction and Demolition Debris in Seoul, Korea: Waste Amount, Type, and Estimating Model.

    Science.gov (United States)

    Seo, Seongwon; Hwang, Yongwoo

    1999-08-01

    Construction and demolition (C&D) debris is generated at the site of various construction activities. However, the amount of the debris is usually so large that it is necessary to estimate the amount of C&D debris as accurately as possible for effective waste management and control in urban areas. In this paper, an effective estimation method using a statistical model was proposed. The estimation process was composed of five steps: estimation of the life span of buildings; estimation of the floor area of buildings to be constructed and demolished; calculation of individual intensity units of C&D debris; and estimation of the future C&D debris production. This method was also applied in the city of Seoul as an actual case, and the estimated amount of C&D debris in Seoul in 2021 was approximately 24 million tons. Of this total amount, 98% was generated by demolition, and the main components of debris were concrete and brick.
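    The paper's actual intensity units are not given in the record; the core arithmetic of such an estimating model (debris equals floor area to be demolished times a per-area intensity unit, summed by material) can be sketched as follows, with all figures hypothetical.

```python
# Hypothetical inputs: floor area expected to be demolished (m^2) per building
# type, and debris intensity units (tonnes per m^2 of floor area) by material.
demolished_floor_area = {"residential": 1_200_000, "commercial": 800_000}

intensity = {
    "residential": {"concrete": 0.50, "brick": 0.20, "other": 0.05},
    "commercial":  {"concrete": 0.70, "brick": 0.05, "other": 0.10},
}

# Sum debris per material across building types.
totals = {}
for building_type, area in demolished_floor_area.items():
    for material, rate in intensity[building_type].items():
        totals[material] = totals.get(material, 0.0) + area * rate

print({material: f"{tonnes / 1e6:.2f} Mt" for material, tonnes in totals.items()})
```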

  10. Model reduction for the dynamics and control of large structural systems via neural network processing direct numerical optimization

    Science.gov (United States)

    Becus, Georges A.; Chan, Alistair K.

    1993-01-01

    Three neural network processing approaches in a direct numerical optimization model reduction scheme are proposed and investigated. Large structural systems, such as large space structures, offer new challenges to both structural dynamicists and control engineers. One such challenge is that of dimensionality. Indeed these distributed parameter systems can be modeled either by infinite dimensional mathematical models (typically partial differential equations) or by high dimensional discrete models (typically finite element models) often exhibiting thousands of vibrational modes usually closely spaced and with little, if any, damping. Clearly, some form of model reduction is in order, especially for the control engineer who can actively control but a few of the modes using system identification based on a limited number of sensors. Inasmuch as the amount of 'control spillover' (in which the control inputs excite the neglected dynamics) and/or 'observation spillover' (where neglected dynamics affect system identification) is to a large extent determined by the choice of particular reduced model (RM), the way in which this model reduction is carried out is often critical.

  11. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous
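    As a small, hedged illustration of the display-aware principle described above (not code from the course), the sketch below picks the coarsest multi-resolution level whose resolution still matches the pixels an object covers on screen, so the working set scales with the viewport rather than with the full data size.

```python
def choose_level(full_resolution, projected_screen_pixels):
    """Pick the coarsest mip/multi-resolution level whose per-axis resolution
    still matches the number of pixels the object covers on screen."""
    level = 0
    resolution = full_resolution
    while resolution > projected_screen_pixels and resolution > 1:
        resolution //= 2      # each level halves the resolution per axis
        level += 1
    return level

# One axis of a terapixel (2**20 x 2**20) image viewed in a 1024-pixel-wide
# window only needs level 10: the working set shrinks by ~2**10 per axis.
print(choose_level(full_resolution=2**20, projected_screen_pixels=1024))
```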

  12. How restrained eaters perceive the amount they eat.

    Science.gov (United States)

    Jansen, A

    1996-09-01

    The cognitive model of binge eating states that it is the awareness of a broken diet that disinhibits the restrained eater. It is, according to that model, the perception of having overeaten that triggers disinhibited eating. However, although the perception of the amount eaten plays a central role in cognitive restraint theory, it has never directly been tested how restrained subjects perceive the amount of food they eat. In the present studies, participants were given ad libitum access to large amounts of palatable food and both their perception of the amount eaten and their estimated caloric intake were compared with the amount they actually ate. The restrained participants in these studies ate more than the unrestrained participants. In the first and second studies, the restrained participants consumed 571 and 372 'forbidden' calories respectively, without having the feeling that they had eaten very much, let alone too much. Moreover in both studies, the restrained eaters underestimated their caloric intake, whereas unrestrained eaters estimated their caloric intake quite well. The potential implications of the present findings for the cognitive restraint model are discussed.

  13. Synthesis of biodiesel from waste vegetable oil with large amounts of free fatty acids using a carbon-based solid acid catalyst

    Energy Technology Data Exchange (ETDEWEB)

    Shu, Qing; Gao, Jixian; Nawaz, Zeeshan; Liao, Yuhui; Wang, Dezheng; Wang, Jinfu [Beijing Key Laboratory of Green Chemical Reaction Engineering and Technology, Department of Chemical Engineering, Tsinghua University, Beijing 100084 (China)

    2010-08-15

    A carbon-based solid acid catalyst was prepared by the sulfonation of carbonized vegetable oil asphalt. This catalyst was employed to simultaneously catalyze esterification and transesterification to synthesize biodiesel when a waste vegetable oil with large amounts of free fatty acids (FFAs) was used as feedstock. The physical and chemical properties of this catalyst were characterized by a variety of techniques. The maximum conversions of triglyceride and FFA reached 80.5 wt.% and 94.8 wt.% after 4.5 h at 220 °C, using a 16.8:1 molar ratio of methanol to oil and 0.2 wt.% of catalyst to oil. The high catalytic activity and stability of this catalyst were related to its high acid site density (-OH, Brønsted acid sites), hydrophobicity that prevented the hydration of -OH species, hydrophilic functional groups (-SO₃H) that gave improved accessibility of methanol to the triglyceride and FFAs, and large pores that provided more acid sites for the reactants. (author)

  14. Comparison Analysis among Large Amount of SNS Sites

    Science.gov (United States)

    Toriumi, Fujio; Yamamoto, Hitoshi; Suwa, Hirohiko; Okada, Isamu; Izumi, Kiyoshi; Hashimoto, Yasuhiro

    In recent years, Social Networking Services (SNS) and blogs have been growing as new communication tools on the Internet. Several large-scale SNS sites are prospering, while many relatively small-scale sites also offer services. Such small-scale SNSs support small-group, isolated communication, which neither mixi nor MySpace can provide. However, most SNS studies concern particular large-scale SNSs and cannot determine whether their results reflect general features or characteristics specific to those SNSs. From the viewpoint of comparative analysis, comparing just a few of them cannot reach a statistically significant level. We therefore analyze many SNS sites with the aim of classifying them. Our paper classifies 50,000 small-scale SNS sites and characterizes them in terms of network structure, communication patterns, and growth rate. The network-structure analysis shows that many SNS sites have the small-world property, with short path lengths and high clustering coefficients. The degree distributions of the SNS sites are close to a power law, indicating that small-scale SNS sites have a higher percentage of users with many friends than mixi. The assortativity analysis shows that these SNS sites have negative assortativity coefficients, meaning that users with high degree tend to connect to users with low degree. Next, we analyze user communication patterns. The friend network of an SNS is explicit, while users' communication behaviors define an implicit network. What relationship do these networks have? To address this question, we characterize the communication structure and activation patterns of users on the SNS sites. Using new indices, the friend aggregation rate and the friend coverage rate, we show that SNS sites with a high friend coverage rate activate diary postings
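    The statistics compared in the study (clustering coefficient and path length for small-worldness, degree distribution, degree assortativity) can be computed for a single toy network with networkx as sketched below; the graph model and sizes are placeholders, not the 50,000 crawled sites.

```python
import networkx as nx

# Toy stand-in for one SNS friend network (heavy-tailed degree distribution).
G = nx.barabasi_albert_graph(n=2000, m=3, seed=0)

print("average clustering:", nx.average_clustering(G))
print("average shortest path:", nx.average_shortest_path_length(G))
print("degree assortativity:", nx.degree_assortativity_coefficient(G))

# Degree distribution summary (checked against a power law in the paper).
degrees = [d for _, d in G.degree()]
print("max degree:", max(degrees), "mean degree:", sum(degrees) / len(degrees))
```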

  15. Investigations on an environment friendly chemical reaction process (eco-chemistry). 2; Kankyo ni yasashii kagaku hanno process (eko chemistry) ni kansuru chosa. 2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    In order to establish a chemical reaction process that does not discharge large amounts of waste by-products or harmful chemical substances, a so-called environmentally friendly process, investigations and discussions were carried out based on the results of the previous fiscal year. To reduce the environmental load, a proposal was made to develop oxidation and dehydrogenation catalysts that can selectively produce ethylene, propylene and isobutylene in an oxidation process. For liquid-phase oxidation, redox-based oxidation and solid-catalyzed autoxidation reactions were enumerated. For acid-base catalytic reactions, the development of ultra-strong solid acids was described as a way to establish a process that discharges no pollution. In the fine chemical and pharmaceutical fields, the optically active substance method and position-selective aromatic substitution reactions were evaluated to reduce the environmental load. A questionnaire survey of major chemical corporations inside and outside the country revealed the following as processes that can cause hidden environmental problems: processes discharging large amounts of waste, processes handling dangerous materials, and processes consuming large amounts of energy. The development of catalysts that realize high yield, high selectivity and reaction under mild conditions is important for future environmentally harmonized chemical processes. 117 refs., 23 figs., 22 tabs.

  16. 75 FR 58407 - Medicare Program; Medicare Appeals; Adjustment to the Amount in Controversy Threshold Amounts for...

    Science.gov (United States)

    2010-09-24

    ... Administrative Law Judge (ALJ) hearings and judicial review under the Medicare appeals process. The adjustment to the AIC threshold amounts will be effective for requests for ALJ hearings and judicial review filed on... judicial review. DATES: Effective Date: This notice is effective on January 1, 2011. FOR FURTHER...

  17. 76 FR 59138 - Medicare Program; Medicare Appeals; Adjustment to the Amount in Controversy Threshold Amounts for...

    Science.gov (United States)

    2011-09-23

    ... Administrative Law Judge (ALJ) hearings and judicial review under the Medicare appeals process. The adjustment to the AIC threshold amounts will be effective for requests for ALJ hearings and judicial review filed on... $1,350 for judicial review. DATES: Effective Date: This notice is effective on January 1, 2012. FOR...

  18. 78 FR 59702 - Medicare Program; Medicare Appeals: Adjustment to the Amount in Controversy Threshold Amounts for...

    Science.gov (United States)

    2013-09-27

    ... Administrative Law Judge (ALJ) hearings and judicial review under the Medicare appeals process. The adjustment to the AIC threshold amounts will be effective for requests for ALJ hearings and judicial review filed on... ALJ hearings and $1,430 for judicial review. DATES: This notice is effective on January 1, 2014. FOR...

  19. 77 FR 59618 - Medicare Program; Medicare Appeals; Adjustment to the Amount in Controversy Threshold Amounts for...

    Science.gov (United States)

    2012-09-28

    ... Administrative Law Judge (ALJ) hearings and judicial review under the Medicare appeals process. The adjustment to the AIC threshold amounts will be effective for requests for ALJ hearings and judicial review filed on... $1,400 for judicial review. Effective Date: This notice is effective on January 1, 2013. FOR FURTHER...

  20. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    Science.gov (United States)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is made on what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology of forecasting large earthquakes, the entire process of one cycle for a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II), and the phase of rupture nucleation at the critical stage where an adequate amount of the elastic strain energy has been stored (phase III). Phase II plays a critical role in physical modeling of intermediate-term forecasting, and phase III in physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling, in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step for establishing the methodology for forecasting large earthquakes.

  1. Effects of the amount and schedule of varied practice after constant practice on the adaptive process of motor learning

    Directory of Open Access Journals (Sweden)

    Umberto Cesar Corrêa

    2014-12-01

    This study investigated the effects of different amounts and schedules of varied practice, after constant practice, on the adaptive process of motor learning. Participants were one hundred and seven children with a mean age of 11.1 ± 0.9 years. Three experiments were carried out using a complex anticipatory timing task, manipulating the following components in the varied practice: visual stimulus speed (experiment 1); sequential response pattern (experiment 2); and visual stimulus speed plus sequential response pattern (experiment 3). In all experiments the design involved three amounts (18, 36, and 63 trials) and two schedules (random and blocked) of varied practice. The experiments also involved two learning phases: stabilization and adaptation. The dependent variables were the absolute, variable, and constant errors related to the task goal, and the relative timing of the sequential response. Results showed that all groups worsened their performance in the adaptation phase, and no difference was observed between them. Altogether, the results of the three experiments allow the conclusion that the amounts of trials manipulated in the random and blocked practice did not promote diversification of the skill, since no adaptation was observed.

  2. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein we describe recent innovations using containerization techniques with XNAT/DAX to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software

  3. Effects of cooking process on the changes of concentration and total amount of radioactive caesium in beef, wild plants and fruits

    International Nuclear Information System (INIS)

    Nabeshi, Hiromi; Tsutsumi, Tomoaki; Uekusa, Yoshinori; Matsuda, Rieko; Akiyama, Hiroshi; Teshima, Reiko; Hachisuka, Akiko

    2016-01-01

    In order to obtain information about effects of the cooking process on the changes of concentration and amount of radioactive materials in foods, we determined the concentration of radioactive caesium in several foods such as beef, edible wild plants, blueberries and mushrooms, before and after cooking. Our results showed that drying after soaking in liquid seasoning and the removal of astringent taste were effective in removing radioactive caesium from foods. More than 80% of radioactive caesium could be removed by these cooking methods. These results suggest that cooking processes such as boiling and soaking in liquid seasoning or water are effective to remove radioactive caesium from foods. Moreover, appropriate food additives such as baking soda were useful to promote the removal of radioactive caesium from foods. On the other hand, simple drying, jam making, grilling and tempura cooking could not remove radioactive caesium from foods. In addition, we showed that the concentration of radioactive caesium in foods was raised after simple drying, although the amount of radioactive caesium was unchanged. It would be necessary to monitor radioactive caesium concentration in processed foods because they might have undergone dehydration by cooking, which could result in concentrations exceeding regulatory levels. (author)

  4. Nutrient digestibility of veal calves fed large amounts of different solid feeds during the first 80 days of fattening

    Directory of Open Access Journals (Sweden)

    Marta Brscic

    2014-10-01

    The study aimed at evaluating apparent nutrient digestibility in veal calves fed 3 feeding plans based on milk-replacer plus large amounts of solid feeds differing in composition during the first 80 days of fattening. Twelve Polish Friesian male calves (70.6±1.9 kg) were randomly assigned to one of the following feeding treatments: (i) milk-replacer plus corn grain (CG); (ii) milk-replacer plus an 80:20 mixture (as-fed basis) of corn grain and wheat straw (CGS); and (iii) milk-replacer plus a 72:20:8 mixture of corn grain, wheat straw and extruded soybean (CGSES). Calves received the same milk-replacer, but the daily amount was restricted (96%) for CGSES calves to balance dietary protein. Total dry matter intake from milk-replacer and solid feeds was similar among treatments, but CGSES calves showed better growth performance than CG ones. Calves were introduced into a metabolism stall (1/pen) during week 9 of fattening for a 3-day adaptation period and a 4-day digestibility trial. Calves fed CG showed the greatest DM, NFC, and ash digestibility, while CGSES calves showed the lowest CP digestibility. Haemoglobin concentrations measured at days 5, 31 and 80 were similar among feeding treatments and significantly decreased over time. In the CGSES treatment, combining milk-replacer with a solid feed closer to a complete diet for ruminants led to better calf growth performance. However, the reduced protein digestibility with CGSES indicates that protein quality becomes a key factor when formulating diets for veal calves using alternatives to dairy sources.

  5. RESOURCE SAVING TECHNOLOGICAL PROCESS OF LARGE-SIZE DIE THERMAL TREATMENT

    Directory of Open Access Journals (Sweden)

    L. A. Glazkov

    2009-01-01

    The paper presents the development of a technological process for hardening large-size parts made of die steel. The proposed process uses a water-air mixture instead of the conventional hardening medium, industrial oil. While developing this new technological process it was necessary to solve the following problems: reducing the duration of thermal treatment, reducing the consumption of power resources (natural gas and mineral oil), eliminating fire hazards and increasing the ecological efficiency of the process.

  6. Valid knowledge for the professional design of large and complex design processes

    NARCIS (Netherlands)

    Aken, van J.E.

    2004-01-01

    The organization and planning of design processes, which we may regard as design process design, is an important issue. Especially for large and complex design processes, traditional approaches to process design may no longer suffice. The design literature offers quite a few design process models. As

  7. Visual analysis of inter-process communication for large-scale parallel computing.

    Science.gov (United States)

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimizing key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also the communication between the different processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.
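    One common scalable alternative to per-process Gantt rows, sketched here with hypothetical trace records rather than the authors' tool, is to aggregate communication events into a fixed-size process-pair matrix whose size is independent of trace length and run time.

```python
import numpy as np

n_procs = 16
rng = np.random.default_rng(5)

# Hypothetical trace records: (sender rank, receiver rank, bytes transferred).
events = [(int(rng.integers(n_procs)), int(rng.integers(n_procs)),
           int(rng.integers(1, 1 << 20))) for _ in range(10_000)]

# Aggregate into a communication matrix instead of one Gantt row per process:
# the matrix stays n_procs x n_procs no matter how long the trace is.
volume = np.zeros((n_procs, n_procs))
for src, dst, nbytes in events:
    volume[src, dst] += nbytes

print("heaviest pair:", np.unravel_index(volume.argmax(), volume.shape))
print("total MB:", volume.sum() / 2**20)
```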

  8. Software for event oriented processing on multiprocessor systems

    International Nuclear Information System (INIS)

    Fischler, M.; Areti, H.; Biel, J.; Bracker, S.; Case, G.; Gaines, I.; Husby, D.; Nash, T.

    1984-08-01

    Computing intensive problems that require the processing of numerous essentially independent events are natural customers for large scale multi-microprocessor systems. This paper describes the software required to support users with such problems in a multiprocessor environment. It is based on experience with and development work aimed at processing very large amounts of high energy physics data

  9. Nonterrestrial material processing and manufacturing of large space systems

    Science.gov (United States)

    Von Tiesenhausen, G.

    1979-01-01

    Nonterrestrial processing of materials and manufacturing of large space system components from preprocessed lunar materials at a manufacturing site in space is described. Lunar materials mined and preprocessed at the lunar resource complex will be flown to the space manufacturing facility (SMF), where together with supplementary terrestrial materials, they will be final processed and fabricated into space communication systems, solar cell blankets, radio frequency generators, and electrical equipment. Satellite Power System (SPS) material requirements and lunar material availability and utilization are detailed, and the SMF processing, refining, fabricating facilities, material flow and manpower requirements are described.

  10. Development of Best Practices for Large-scale Data Management Infrastructure

    NARCIS (Netherlands)

    S. Stadtmüller; H.F. Mühleisen (Hannes); C. Bizer; M.L. Kersten (Martin); J.A. de Rijke (Arjen); F.E. Groffen (Fabian); Y. Zhang (Ying); G. Ladwig; A. Harth; M Trampus

    2012-01-01

    The amount of available data for processing is constantly increasing and becomes more diverse. We collect our experiences on deploying large-scale data management tools on local-area clusters or cloud infrastructures and provide guidance to use these computing and storage

  11. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    Science.gov (United States)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention on networked control, system decomposition and distributed models show significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, the online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
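
    The two decomposition steps described above can be sketched with off-the-shelf tools; the snippet below (a sketch assuming scikit-learn, with random placeholder data and an arbitrary top-3 input rule, not the paper's settings) clusters the controlled variables with affinity propagation and then ranks candidate inputs per cluster by canonical correlation.

    ```python
    # Sketch: cluster controlled variables, then pick inputs per cluster via CCA.
    import numpy as np
    from sklearn.cluster import AffinityPropagation
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    Y = rng.normal(size=(500, 8))      # controlled (output) variables, 500 samples
    U = rng.normal(size=(500, 20))     # candidate input (manipulated) variables

    # 1) Partition controlled variables into subsystems by clustering their
    #    correlation pattern (each column of Y is one controlled variable).
    ap = AffinityPropagation(random_state=0).fit(np.corrcoef(Y.T))
    clusters = {k: np.where(ap.labels_ == k)[0] for k in set(ap.labels_)}

    # 2) For each subsystem, rank candidate inputs by canonical correlation
    #    with that subsystem's outputs and keep the strongest ones.
    for k, out_idx in clusters.items():
        scores = []
        for j in range(U.shape[1]):
            cca = CCA(n_components=1)
            a, b = cca.fit_transform(U[:, [j]], Y[:, out_idx])
            scores.append(abs(np.corrcoef(a.ravel(), b.ravel())[0, 1]))
        chosen = np.argsort(scores)[-3:]          # keep top-3 inputs (placeholder rule)
        print(f"subsystem {k}: outputs {out_idx.tolist()}, inputs {chosen.tolist()}")
    ```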

  12. MARVIN: Distributed reasoning over large-scale Semantic Web data

    NARCIS (Netherlands)

    Oren, E.; Kotoulas, S.; Anadiotis, G.; Siebes, R.M.; ten Teije, A.C.M.; van Harmelen, F.A.H.

    2009-01-01

    Many Semantic Web problems are difficult to solve through common divide-and-conquer strategies, since they are hard to partition. We present Marvin, a parallel and distributed platform for processing large amounts of RDF data, on a network of loosely coupled peers. We present our divide-conquer-swap

  13. QCD phenomenology of the large P/sub T/ processes

    International Nuclear Information System (INIS)

    Stroynowski, R.

    1979-11-01

    Quantum Chromodynamics (QCD) provides a framework for the possible high-accuracy calculations of the large-p/sub T/ processes. The description of the large-transverse-momentum phenomena is introduced in terms of the parton model, and the modifications expected from QCD are described by using as an example single-particle distributions. The present status of available data (π, K, p, p-bar, eta, particle ratios, beam ratios, direct photons, nuclear target dependence), the evidence for jets, and the future prospects are reviewed. 80 references, 33 figures, 3 tables

  14. Forest amount affects soybean productivity in Brazilian agricultural frontier

    Science.gov (United States)

    Rattis, L.; Brando, P. M.; Marques, E. Q.; Queiroz, N.; Silverio, D. V.; Macedo, M.; Coe, M. T.

    2017-12-01

    Over the past three decades, large tracts of tropical forests have been converted to crop and pasturelands across southern Amazonia, largely to meet the increasing worldwide demand for protein. As the world's population continues to grow and consumes more protein per capita, forest conversion to grow more crops could be a potential solution to meet such demand. However, widespread deforestation is expected to negatively affect crop productivity via multiple pathways (e.g., thermal regulation, rainfall, local moisture, pest control, among others). To quantify how deforestation affects crop productivity, we modeled the relationship between forest amount and enhanced vegetation index (EVI—a proxy for crop productivity) during the soybean planting season across southern Amazonia. Our hypothesis that forest amount causes increased crop productivity received strong support. We found that the maximum MODIS-based EVI in soybean fields increased as a function of forest amount across multiple spatial scales (0.5 km, 1 km, 2 km, 5 km, 10 km, 15 km and 20 km). However, the strength of this relationship varied across years and with precipitation, but only at the local scale (e.g., 500 m and 1 km radius). Our results highlight the importance of considering forests when designing sustainable landscapes.

  15. Combined process automation for large-scale EEG analysis.

    Science.gov (United States)

    Sfondouris, John L; Quebedeaux, Tabitha M; Holdgraf, Chris; Musto, Alberto E

    2012-01-01

    Epileptogenesis is a dynamic process producing increased seizure susceptibility. Electroencephalography (EEG) data provides information critical in understanding the evolution of epileptiform changes throughout epileptic foci. We designed an algorithm to facilitate efficient large-scale EEG analysis via linked automation of multiple data processing steps. Using EEG recordings obtained from electrical stimulation studies, the following steps of EEG analysis were automated: (1) alignment and isolation of pre- and post-stimulation intervals, (2) generation of user-defined band frequency waveforms, (3) spike-sorting, (4) quantification of spike and burst data and (5) power spectral density analysis. This algorithm allows for quicker, more efficient EEG analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
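
    Two of the automated steps, band-limited waveform generation and simple threshold spike detection, can be sketched as follows (a minimal illustration with SciPy on a synthetic trace; the sampling rate, bands and threshold are assumptions, not the authors' settings).

    ```python
    # Sketch: band-pass a raw EEG trace and flag threshold-crossing spikes.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 1000.0                                  # sampling rate (Hz), assumed
    t = np.arange(0, 10, 1 / fs)
    eeg = np.random.default_rng(1).normal(scale=20e-6, size=t.size)  # synthetic trace

    def bandpass(x, lo, hi, fs, order=4):
        """Zero-phase Butterworth band-pass for a user-defined frequency band."""
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    theta = bandpass(eeg, 4.0, 8.0, fs)           # example band: theta
    gamma = bandpass(eeg, 30.0, 80.0, fs)         # example band: gamma

    # Crude spike detection: samples exceeding k standard deviations.
    k = 4.0
    spikes = np.where(np.abs(gamma) > k * gamma.std())[0]
    print(f"{spikes.size} putative spikes in the gamma band")
    ```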

  16. Distributed processing and network of data acquisition and diagnostics control for Large Helical Device (LHD)

    International Nuclear Information System (INIS)

    Nakanishi, H.; Kojima, M.; Hidekuma, S.

    1997-11-01

    The LHD (Large Helical Device) data processing system has been designed to deal with the huge amount of diagnostics data, 600-900 MB per 10-second short-pulse experiment. It is being prepared for the first plasma experiment in March 1998. The recent increase in data volume forced the adoption of a fully distributed system structure which uses multiple data transfer paths in parallel and separates all of the computer functions into clients and servers. The fundamental element installed for every diagnostic device consists of two kinds of server computers: the data acquisition PC/Windows NT and the real-time diagnostics control VME/VxWorks. To cope with the diversified kinds of both device control channels and diagnostics data, object-oriented methods are utilized throughout the development of this system. This not only reduces the development burden, but also widens the software portability and flexibility. 100 Mbps FDDI-based fast networks will re-integrate the distributed server computers so that they can behave as one virtual macro-machine for users. The network methods applied to the LHD data processing system are completely based on TCP/IP internet technology, which provides remote collaborators with the same access as local participants. (author)

  17. Process variations in surface nano geometries manufacture on large area substrates

    DEFF Research Database (Denmark)

    Calaon, Matteo; Hansen, Hans Nørgaard; Tosello, Guido

    2014-01-01

    The need to transport, treat and measure increasingly smaller biomedical samples has pushed the integration of a far-reaching number of nanofeatures over large substrate sizes, compared with the working-area windows of conventional processes. Dimensional stability of nano fabrication processe...

  18. Plasmonic Titania Photo catalysts Active under UV and Visible-Light Irradiation: Influence of Gold Amount, Size, and Shape

    International Nuclear Information System (INIS)

    Kowalska, E.; Rau, S.; Kowalska, E.; Kowalska, E.; Ohtani, B.

    2012-01-01

    Plasmonic titania photocatalysts were prepared by titania modification with gold by photodeposition. It was found that, for smaller amounts of deposited gold (≤ 0.1 wt%), anatase presence and large surface area were beneficial for efficient hydrogen evolution during methanol dehydrogenation. After testing twelve amounts of deposited gold on large rutile titania, three optima were found at 0.5, 2 and >6 wt% of gold during acetic acid degradation. Under visible light irradiation, in the case of small gold NPs deposited on fine anatase titania, the dependence of photoactivity on gold amount was parabolic, and a large gold amount (2 wt%), observable as an intensively coloured powder, caused a photoactivity decrease. For large gold NPs deposited on large rutile titania, the dependence showed a cascade increase, due to changes in the size and shape of the deposited gold as its amount increased. It is thought that a spherical/hemispherical shape of gold NPs, in comparison with rod-like ones, is beneficial for a higher level of photoactivity under visible light irradiation. For all tested systems and regardless of the deposited amount of gold, each rutile Au/TiO2 photocatalyst of large gold and titania NPs exhibited much higher photoactivity than anatase Au/TiO2 of small gold and titania NPs

  19. GPU applications for data processing

    Energy Technology Data Exchange (ETDEWEB)

    Vladymyrov, Mykhailo, E-mail: mykhailo.vladymyrov@cern.ch [LPI - Lebedev Physical Institute of the Russian Academy of Sciences, RUS-119991 Moscow (Russian Federation); Aleksandrov, Andrey [LPI - Lebedev Physical Institute of the Russian Academy of Sciences, RUS-119991 Moscow (Russian Federation); INFN sezione di Napoli, I-80125 Napoli (Italy); Tioukov, Valeri [INFN sezione di Napoli, I-80125 Napoli (Italy)

    2015-12-31

    Modern experiments that use nuclear photoemulsion require fast and efficient data acquisition from the emulsion. New approaches to developing scanning systems require real-time processing of large amounts of data. Methods that use Graphical Processing Unit (GPU) computing power for emulsion data processing are presented here. It is shown how the GPU-accelerated emulsion processing helped us to raise the scanning speed by a factor of nine.

  20. Broadband Reflective Coating Process for Large FUVOIR Mirrors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — ZeCoat Corporation will develop and demonstrate a set of revolutionary coating processes for making broadband reflective coatings suitable for very large mirrors (4+...

  1. Ultra trace analysis of PAHs by designing simple injection of large amounts of analytes through the sample reconcentration on SPME fiber after magnetic solid phase extraction.

    Science.gov (United States)

    Khodaee, Nader; Mehdinia, Ali; Esfandiarnejad, Reyhaneh; Jabbari, Ali

    2016-01-15

    A simple solventless injection method was introduced based on the use of a solid-phase microextraction (SPME) fiber for injection of large amounts of the analytes extracted by a magnetic solid phase extraction (MSPE) procedure. The extract resulting from the MSPE procedure was loaded on a G-coated SPME fiber, and the fiber was then injected into the gas chromatography (GC) injection port. This method combines the advantages of the exhaustive extraction property of MSPE and the solventless injection of SPME to improve the sensitivity of the analysis. In addition, the analytes were re-concentrated prior to injection into the gas chromatography (GC) inlet because the organic solvent was removed from the remaining extract of the MSPE technique. Injection of large amounts of analytes was made possible by using the introduced procedure. Fourteen polycyclic aromatic hydrocarbons (PAHs) with different volatility were used as model compounds to investigate the method performance for volatile and semi-volatile compounds. The introduced method resulted in higher enhancement factors (5097-59376), lower detection limits (0.29-3.3 pg mL(-1)), and higher sensitivity for the semi-volatile compounds compared with the conventional direct injection method. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. 21 CFR 101.12 - Reference amounts customarily consumed per eating occasion.

    Science.gov (United States)

    2010-04-01

    ...,3,4 Product category Reference amount Label statement 5 Cereals, dry instant 15 g _ cup (_ g...) (_ g); _ piece(s) (_ g) for large pieces (e.g., large shells or lasagna noodles) or 2 oz (56 g/visual... chow mein noodles 25 g _ cup(s) (_ g) Starches, e.g., cornstarch, potato starch, tapioca, etc. 10 g...

  3. A practical process for light-water detritiation at large scales

    Energy Technology Data Exchange (ETDEWEB)

    Boniface, H.A. [Atomic Energy of Canada Limited, Chalk River, ON (Canada); Robinson, J., E-mail: jr@tyne-engineering.com [Tyne Engineering, Burlington, ON (Canada); Gnanapragasam, N.V.; Castillo, I.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    AECL and Tyne Engineering have recently completed a preliminary engineering design for a modest-scale tritium removal plant for light water, intended for installation at AECL's Chalk River Laboratories (CRL). This plant design was based on the Combined Electrolysis and Catalytic Exchange (CECE) technology developed at CRL over many years and demonstrated there and elsewhere. The general features and capabilities of this design have been reported as well as the versatility of the design for separating any pair of the three hydrogen isotopes. The same CECE technology could be applied directly to very large-scale wastewater detritiation, such as the case at Fukushima Daiichi Nuclear Power Station. However, since the CECE process scales linearly with throughput, the required capital and operating costs are substantial for such large-scale applications. This paper discusses some options for reducing the costs of very large-scale detritiation. Options include: Reducing tritium removal effectiveness; Energy recovery; Improving the tolerance of impurities; Use of less expensive or more efficient equipment. A brief comparison with alternative processes is also presented. (author)

  4. Querying Large Biological Network Datasets

    Science.gov (United States)

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in an increasing amount of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of available data requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…

  5. Processing and properties of large grain (RE)BCO

    International Nuclear Information System (INIS)

    Cardwell, D.A.

    1998-01-01

    The potential of high temperature superconductors to generate large magnetic fields and to carry current with low power dissipation at 77 K is particularly attractive for a variety of permanent magnet applications. As a result large grain bulk (RE)-Ba-Cu-O ((RE)BCO) materials have been developed by melt process techniques in an attempt to fabricate practical materials for use in high field devices. This review outlines the current state of the art in this field of processing, including seeding requirements for the controlled fabrication of these materials, the origin of striking growth features such as the formation of a facet plane around the seed, platelet boundaries and (RE)2BaCuO5 (RE-211) inclusions in the seeded melt grown microstructure. An observed variation in critical current density in large grain (RE)BCO samples is accounted for by Sm contamination of the material in the vicinity of the seed and with the development of a non-uniform growth morphology at ∼4 mm from the seed position. (RE)Ba2Cu3O7-δ (RE-123) dendrites are observed to form and broaden preferentially within the a/b plane of the lattice in this growth regime. Finally, trapped fields in excess of 3 T have been reported in irradiated U-doped YBCO and (RE)1+xBa2-xCu3Oy (RE=Sm, Nd) materials have been observed to carry transport current in fields of up to 10 T at 77 K. This underlines the potential of bulk (RE)BCO materials for practical permanent magnet type applications. (orig.)

  6. Data-Intensive Text Processing with MapReduce

    CERN Document Server

    Lin, Jimmy

    2010-01-01

    Our world is being revolutionized by data-driven methods: access to large amounts of data has generated new insights and opened exciting new opportunities in commerce, science, and computing applications. Processing the enormous quantities of data necessary for these advances requires large clusters, making distributed computing paradigms more crucial than ever. MapReduce is a programming model for expressing distributed computations on massive datasets and an execution framework for large-scale data processing on clusters of commodity servers. The programming model provides an easy-to-underst
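
    The programming model is easiest to see in the canonical word-count example. The sketch below emulates the map, shuffle and reduce phases in plain Python on a single machine, so it only illustrates the shape of a MapReduce computation, not Hadoop's execution framework.

    ```python
    # Word count expressed in MapReduce style: map emits (word, 1) pairs,
    # the framework groups by key, reduce sums the counts per word.
    from collections import defaultdict

    def map_phase(doc_id, text):
        for word in text.lower().split():
            yield word, 1

    def reduce_phase(word, counts):
        yield word, sum(counts)

    documents = {"d1": "to be or not to be", "d2": "to see or not to see"}

    # Shuffle/group step that a real framework (e.g. Hadoop) performs for us.
    grouped = defaultdict(list)
    for doc_id, text in documents.items():
        for word, count in map_phase(doc_id, text):
            grouped[word].append(count)

    results = dict(kv for word, counts in grouped.items()
                   for kv in reduce_phase(word, counts))
    print(results)   # {'to': 4, 'be': 2, 'or': 2, 'not': 2, 'see': 2}
    ```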

  7. How Reservoirs Alter DOM Amount and Composition: Sources, Sinks, and Transformations

    Science.gov (United States)

    Kraus, T. E.; Bergamaschi, B. A.; Hernes, P. J.; Doctor, D. H.; Kendall, C.; Losee, R. F.; Downing, B. D.

    2011-12-01

    Reservoirs are critical components of many water supply systems as they allow the storage of water when supply exceeds demand. However, during water storage biogeochemical processes can alter both the amount and composition of dissolved organic matter (DOM), which can in turn affect water quality. While the balance between production and loss determines whether a reservoir is a net sink or source of DOM, changes in chemical composition are also relevant as they affect DOM reactivity (e.g. persistence in the environment, removability during coagulation treatment, and potential to form toxic compounds during drinking water treatment). The composition of the DOM pool also provides information about the DOM sources and processing, which can inform reservoir management. We examined the concentration and composition of DOM in San Luis Reservoir (SLR), a large off-stream impoundment of the California State Water Project. We used an array of DOM chemical tracers including dissolved organic carbon (DOC) concentration, optical properties, isotopic composition, lignin phenol content, and structural groupings determined by 13C NMR. There were periods when the reservoir was i) a net source of DOM due to the predominance of algal production (summer), ii) a net sink due to the predominance of degradation (fall/winter), and iii) balanced between production and consumption (spring). Despite only moderate variation in bulk DOC concentration (3.0-3.6 mg C/L), substantial changes in DOM composition indicated that terrestrial-derived material entering the reservoir was being degraded and replaced by aquatic-derived DOM produced within the reservoir. Results suggest reservoirs have the potential to reduce DOM amount and reactivity via degradative processes, however, these benefits can be decreased or even negated by the production of algal-derived DOM.

  8. 29 CFR 4219.14 - Amount of liability for 20-year-limitation amounts.

    Science.gov (United States)

    2010-07-01

    ... amount equal to the present value of all initial withdrawal liability payments for which the employer was not liable pursuant to section 4219(c)(1)(B) of ERISA. The present value of such payments shall be...

  9. Constructing large scale SCI-based processing systems by switch elements

    International Nuclear Information System (INIS)

    Wu, B.; Kristiansen, E.; Skaali, B.; Bogaerts, A.; Divia, R.; Mueller, H.

    1993-05-01

    The goal of this paper is to study some of the design criteria for the switch elements to form the interconnection of large scale SCI-based processing systems. The approved IEEE standard 1596 makes it possible to couple up to 64K nodes together. In order to connect thousands of nodes to construct large scale SCI-based processing systems, one has to interconnect these nodes by switch elements to form different topologies. A summary of the requirements and key points of interconnection networks and switches is presented. Two models of the SCI switch elements are proposed. The authors investigate several examples of systems constructed for 4-switches with simulations and the results are analyzed. Some issues and enhancements are discussed to provide the ideas behind the switch design that can improve performance and reduce latency. 29 refs., 11 figs., 3 tabs

  10. Feasibility of large volume casting cementation process for intermediate level radioactive waste

    International Nuclear Information System (INIS)

    Chen Zhuying; Chen Baisong; Zeng Jishu; Yu Chengze

    1988-01-01

    The recent tendency of radioactive waste treatment and disposal both in China and abroad is reviewed. The feasibility of the large volume casting cementation process for treating and disposing of the intermediate level radioactive waste from a spent fuel reprocessing plant in shallow land is assessed on the basis of analyses of the experimental results (such as formulation study, solidified radioactive waste properties measurement, etc.). It can be concluded that the large volume casting cementation process is a promising, safe and economical process. It is feasible to dispose of the intermediate level radioactive waste from the reprocessing plant if the disposal site chosen has reasonable geological and geographical conditions and some additional effective protection means are taken

  11. Large-scale membrane transfer process: its application to single-crystal-silicon continuous membrane deformable mirror

    International Nuclear Information System (INIS)

    Wu, Tong; Sasaki, Takashi; Hane, Kazuhiro; Akiyama, Masayuki

    2013-01-01

    This paper describes a large-scale membrane transfer process developed for the construction of large-scale membrane devices via the transfer of continuous single-crystal-silicon membranes from one substrate to another. This technique is applied for fabricating a large stroke deformable mirror. A bimorph spring array is used to generate a large air gap between the mirror membrane and the electrode. A 1.9 mm × 1.9 mm × 2 µm single-crystal-silicon membrane is successfully transferred to the electrode substrate by Au–Si eutectic bonding and the subsequent all-dry release process. This process provides an effective approach for transferring a free-standing large continuous single-crystal-silicon to a flexible suspension spring array with a large air gap. (paper)

  12. Recycling process of Mn-Al doped large grain UO2 pellets

    International Nuclear Information System (INIS)

    Nam, Ik Hui; Yang, Jae Ho; Rhee, Young Woo; Kim, Dong Joo; Kim, Jong Hun; Kim, Keon Sik; Song, Kun Woo

    2010-01-01

    To reduce the fuel cycle costs and the total mass of spent light water reactor (LWR) fuels, it is necessary to extend the fuel discharge burn-up. Research on fuel pellets focuses on increasing the pellet density and grain size to increase the uranium content and the high burn-up safety margins for LWRs. KAERI is developing large grain UO2 pellets for the same purpose. Doping with small amounts of additives is used to increase the grain size and the high temperature deformation of UO2 pellets. Various promising additive candidates have been developed during the last 3 years, and the MnO-Al2O3 doped UO2 fuel pellet is one of the most promising candidates. In a commercial UO2 fuel pellet manufacturing process, defective UO2 pellets or scraps are produced and should be reused. A common recycling method for defective UO2 pellets or scraps is to oxidize them in air at about 450 °C to make U3O8 powder, which is then added to UO2 powder. In the oxidation of a UO2 pellet, the oxygen propagates along the grain boundaries. The U3O8 formation on the grain boundaries causes a spallation of the grains. So, the size and shape of the U3O8 powder strongly depend on the initial grain size of the UO2 pellets. In the case of Mn-Al doped large grain pellets, the average grain size is about 45 μm, about 5 times larger than that of a typical un-doped UO2 pellet, which has a grain size of about 8-10 μm. That big difference in grain size is expected to cause a big difference in the recycled U3O8 powder morphology. Addition of U3O8 to UO2 leads to a drop in the pellet density, impeding grain growth and forming graph-like pore segregates. Such degradation of the UO2 pellet properties by adding the recycled U3O8 powder depends on the U3O8 powder properties. So, it is necessary to understand the properties of the recycled U3O8 and their effect on the pellet. This paper shows a preliminary result for the recycled U3O8 powder which was obtained by

  13. Modelling hydrologic and hydrodynamic processes in basins with large semi-arid wetlands

    Science.gov (United States)

    Fleischmann, Ayan; Siqueira, Vinícius; Paris, Adrien; Collischonn, Walter; Paiva, Rodrigo; Pontes, Paulo; Crétaux, Jean-François; Bergé-Nguyen, Muriel; Biancamaria, Sylvain; Gosset, Marielle; Calmant, Stephane; Tanimoun, Bachir

    2018-06-01

    Hydrological and hydrodynamic models are core tools for simulation of large basins and complex river systems associated with wetlands. Recent studies have pointed towards the importance of online coupling strategies, representing feedbacks between floodplain inundation and vertical hydrology. Especially across semi-arid regions, soil-floodplain interactions can be strong. In this study, we included a two-way coupling scheme in a large scale hydrological-hydrodynamic model (MGB) and tested different model structures, in order to assess which processes are important to simulate in large semi-arid wetlands and how these processes interact with water budget components. To demonstrate the benefits of this coupling over a validation case, the model was applied to the Upper Niger River basin encompassing the Niger Inner Delta, a vast semi-arid wetland in the Sahel. Simulation was carried out from 1999 to 2014 with daily TMPA 3B42 precipitation as forcing, using both in-situ and remotely sensed data for calibration and validation. Model outputs were in good agreement with discharge and water levels at stations both upstream and downstream of the Inner Delta (Nash-Sutcliffe Efficiency (NSE) >0.6 for most gauges), as well as for flooded areas within the Delta region (NSE = 0.6; r = 0.85). Model estimates of annual water losses across the Delta varied between 20.1 and 30.6 km3/yr, while annual evapotranspiration ranged between 760 mm/yr and 1130 mm/yr. Evaluation of model structure indicated that representation of both floodplain channel hydrodynamics (storage, bifurcations, lateral connections) and vertical hydrological processes (floodplain water infiltration into the soil column; evapotranspiration from soil and vegetation and evaporation of open water) is necessary to correctly simulate flood wave attenuation and evapotranspiration along the basin. Two-way coupled models are necessary to better understand processes in large semi-arid wetlands. Finally, such coupled
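
    The Nash-Sutcliffe Efficiency quoted above compares the model's squared errors against the variance of the observations; a minimal sketch of the computation (with placeholder series, not the study's data) is:

    ```python
    # Nash-Sutcliffe Efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    # NSE = 1 is a perfect fit; NSE <= 0 means the model is no better than the mean.
    import numpy as np

    def nse(observed, simulated):
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
            (observed - observed.mean()) ** 2)

    obs = np.array([120., 150., 300., 480., 410., 260., 180.])   # placeholder discharge
    sim = np.array([110., 160., 280., 500., 400., 250., 200.])
    print(round(nse(obs, sim), 3))
    ```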

  14. Process γ*γ → σ at large virtuality of γ*

    International Nuclear Information System (INIS)

    Volkov, M.K.; Radzhabov, A.E.; Yudichev, V.L.

    2004-01-01

    The process γ*γ → σ is investigated in the framework of the SU(2) x SU(2) chiral NJL model, where γ* and γ are photons with large and small virtuality, respectively, and σ is a scalar meson. The form factor of the process is derived for arbitrary virtuality of γ* in the Euclidean kinematic domain. The asymptotic behavior of this form factor resembles the asymptotic behavior of the γ*γ → π form factor

  15. The testing of thermal-mechanical-hydrological-chemical processes using a large block

    International Nuclear Information System (INIS)

    Lin, W.; Wilder, D.G.; Blink, J.A.; Blair, S.C.; Buscheck, T.A.; Chesnut, D.A.; Glassley, W.E.; Lee, K.; Roberts, J.J.

    1994-01-01

    The radioactive decay heat from nuclear waste packages may, depending on the thermal load, create coupled thermal-mechanical-hydrological-chemical (TMHC) processes in the near-field environment of a repository. A group of tests on a large block (LBT) are planned to provide a timely opportunity to test and calibrate some of the TMHC model concepts. The LBT is advantageous for testing and verifying model concepts because the boundary conditions are controlled, and the block can be characterized before and after the experiment. A block of Topopah Spring tuff of about 3 x 3 x 4.5 m will be sawed and isolated at Fran Ridge, Nevada Test Site. Small blocks of the rock adjacent to the large block will be collected for laboratory testing of some individual thermal-mechanical, hydrological, and chemical processes. A constant load of about 4 MPa will be applied to the top and sides of the large block. The sides will be sealed with moisture and thermal barriers. The large block will be heated with one heater in each borehole and guard heaters on the sides so that a dry-out zone and a condensate zone will exist simultaneously. Temperature, moisture content, pore pressure, chemical composition, stress and displacement will be measured throughout the block during the heating and cool-down phases. The results from the experiments on small blocks and the tests on the large block will provide a better understanding of some concepts of the coupled TMHC processes

  16. Unusual amount of (-)-mesquitol from the heartwood of Prosopis juliflora.

    Science.gov (United States)

    Sirmah, Peter; Dumarçay, Stéphane; Masson, Eric; Gérardin, Philippe

    2009-01-01

    A large amount of flavonoid has been extracted and isolated from the heartwood of Prosopis juliflora, an exogenous wood species in Kenya. Structural and physicochemical elucidation based on FTIR, (1)H and (13)C NMR, GC-MS and HPLC analysis clearly demonstrated the presence of (-)-mesquitol as the sole compound without any noticeable impurities. The product was able to slow down the oxidation of methyl linoleate induced by AIBN. The large amount and high purity of (-)-mesquitol present in the acetonic extract of P. juliflora could therefore be of valuable interest as a potential source of antioxidants from a renewable origin.

  17. CT and angiographic analysis of posterior communicating artery aneurysms: What factors influence the amount of subarachnoid blood?

    International Nuclear Information System (INIS)

    Kim, Young Min; Jung, Kun Sik; Rho, Myung Ho; Choi, Pil Youb; Sung, Young Soon; Kwon, Jae Soo; Lee, Sang Wook

    1998-01-01

    To determine how clinical and angiographic factors relate to the amount of subarachnoid blood detected by computerized tomography in patients with a ruptured aneurysm. Between January 1996 and December 1997, 22 patients with a posterior communicating artery aneurysm were retrospectively evaluated. Oval (three of four cases), funnel (both cases), and daughter-sac (four of five cases) types of aneurysmal sac were found among the 13 patients with a large amount of subarachnoid blood; eight of these had a past history of hypertension or diabetes. Seven of eleven cases of cylindrical-type aneurysmal sac were found among the 9 patients with a small amount of subarachnoid blood; eight of these had no past history of hypertension or diabetes. The average S/N ratio (ratio of maximum sac length to neck diameter) of patients with a small amount of blood was higher than that of patients with a large amount of blood (2.72 vs 2.07). Although many factors influence the amount of subarachnoid blood in an aneurysmal rupture, we found that a large amount of blood was frequently present in the oval, funnel and daughter-sac types of aneurysm, when the S/N ratio was low, and when an underlying disease such as hypertension or diabetes was present. Conversely, a small amount of blood was present in the cylindrical type, when the S/N ratio was high, and when there was no underlying disease.

  18. Research on Fault Prediction of Distribution Network Based on Large Data

    Directory of Open Access Journals (Sweden)

    Jinglong Zhou

    2017-01-01

    Full Text Available With the continuous development of information technology and the improvement of distribution automation levels, the amount of on-line monitoring and statistical data is constantly increasing, and big data techniques are being applied to the distribution system. This paper describes the technology used to collect, analyse and process the data of the distribution system. An artificial neural network mining algorithm combined with big data is investigated for fault diagnosis and prediction in the distribution network.

  19. Semantic orchestration of image processing services for environmental analysis

    Science.gov (United States)

    Ranisavljević, Élisabeth; Devin, Florent; Laffly, Dominique; Le Nir, Yannick

    2013-09-01

    In order to analyze environmental dynamics, a major process is the classification of the different phenomena of the site (e.g. ice and snow for a glacier). When using in situ pictures, this classification requires data pre-processing. Not all the pictures need the same sequence of processes, depending on the disturbances. Until now, these sequences have been done manually, which restricts the processing of large amounts of data. In this paper, we present how to realize a semantic orchestration to automate the sequencing for the analysis. It combines two advantages: solving the problem of the amount of processing, and diversifying the possibilities in the data processing. We define a BPEL description to express the sequences. This BPEL uses web services to run the data processing. Each web service is semantically annotated using an ontology of image processing. The dynamic modification of the BPEL is done using SPARQL queries on these annotated web services. The results obtained by a prototype implementing this method validate the construction of the different workflows that can be applied to a large number of pictures.

  20. Processing and properties of large-sized ceramic slabs

    Energy Technology Data Exchange (ETDEWEB)

    Raimondo, M.; Dondi, M.; Zanelli, C.; Guarini, G.; Gozzi, A.; Marani, F.; Fossa, L.

    2010-07-01

    Large-sized ceramic slabs with dimensions up to 360x120 cm{sup 2} and thickness down to 2 mm are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability in a rather wide window of firing schedules. The phase composition and compact microstructure of fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performances as well as mechanical and tribological properties conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestow on the slab a certain degree of flexibility, which is emphasized in ceramic-fiberglass composites. These outstanding performances make the large-sized slabs suitable to be used in novel applications: building and construction (new floorings without dismantling the previous paving, ventilated facades, tunnel coverings, insulating panelling), indoor furniture (table tops, doors), support for photovoltaic ceramic panels. (Author) 24 refs.

  1. Processing and properties of large-sized ceramic slabs

    International Nuclear Information System (INIS)

    Raimondo, M.; Dondi, M.; Zanelli, C.; Guarini, G.; Gozzi, A.; Marani, F.; Fossa, L.

    2010-01-01

    Large-sized ceramic slabs with dimensions up to 360x120 cm 2 and thickness down to 2 mm are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability in a rather wide window of firing schedules. The phase composition and compact microstructure of fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performances as well as mechanical and tribological properties conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestow on the slab a certain degree of flexibility, which is emphasized in ceramic-fiberglass composites. These outstanding performances make the large-sized slabs suitable to be used in novel applications: building and construction (new floorings without dismantling the previous paving, ventilated facades, tunnel coverings, insulating panelling), indoor furniture (table tops, doors), support for photovoltaic ceramic panels. (Author) 24 refs.

  2. Large-scale retrieval for medical image analytics: A comprehensive review.

    Science.gov (United States)

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, where huge amounts of medical images were produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at a large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
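
    The core of such a retrieval pipeline (feature representation, indexing, searching) can be sketched as a brute-force nearest-neighbour search over feature vectors; production systems replace this with approximate indexes (hashing, trees, quantization). The features below are random placeholders.

    ```python
    # Sketch of content-based retrieval: given a query feature vector, return the
    # k most similar database images by cosine similarity (brute force).
    import numpy as np

    rng = np.random.default_rng(0)
    db_features = rng.normal(size=(10_000, 256))      # stand-in for image descriptors
    db_features /= np.linalg.norm(db_features, axis=1, keepdims=True)

    def search(query, k=5):
        q = query / np.linalg.norm(query)
        scores = db_features @ q                      # cosine similarity to all images
        top = np.argsort(scores)[::-1][:k]
        return list(zip(top.tolist(), scores[top].round(3).tolist()))

    print(search(rng.normal(size=256)))               # [(image_id, score), ...]
    ```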

  3. Estimation of the Required Amount of Superconductors for High-field Accelerator Dipole Magnets

    CERN Document Server

    Schwerg, N

    2007-01-01

    The coil size and the corresponding amount of superconducting material used during the design of a magnet cross-section have a direct impact on the overall magnet cost. It is therefore of interest to estimate the minimum amount of conductor needed to reach the specified field strength before a detailed design process starts. Equally, it is useful to evaluate the efficiency of a given design by calculating, with a simple rule, the amount of superconducting cable used to reach the envisaged main field. To this purpose, the minimum amount of conductor for the construction of a dipole of given main field strength and aperture size is estimated, taking the actual critical current density of the strands used into account. Characteristic curves applicable to the NED Nb$_{3}$Sn strand specification are given and some of the recently studied dipole configurations are compared. Based on these results, it is shown how the required amount of conductor changes due to the iron yoke contributio...
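
    The flavour of such a rule can be illustrated with the textbook 60° sector-coil approximation B1 ≈ (2 μ0/π) J w sin 60°, where J is the overall current density and w the radial coil width; this is a generic approximation used here for illustration only, not the estimate derived in the paper, and the numbers are placeholders.

    ```python
    # Rough estimate of coil width and conductor cross-section for a cos-theta
    # dipole, using the 60-degree sector-coil approximation:
    #   B1 = (2 * mu0 / pi) * J * w * sin(60 deg)
    # J is the overall current density in the coil, w the radial coil width.
    import math

    mu0 = 4e-7 * math.pi            # T*m/A

    def coil_width(B1, J):
        """Coil width (m) needed for bore field B1 (T) at current density J (A/m^2)."""
        return B1 * math.pi / (2.0 * mu0 * J * math.sin(math.radians(60)))

    def coil_cross_section(B1, J, aperture_radius):
        """Total conductor cross-section (m^2) of the four 60-degree sector blocks."""
        w = coil_width(B1, J)
        outer, inner = aperture_radius + w, aperture_radius
        return 4.0 * (math.radians(60) / 2.0) * (outer**2 - inner**2)

    B1, J, r = 13.0, 400e6, 0.044    # 13 T, 400 A/mm^2 overall, 44 mm bore radius
    print(f"coil width ~ {coil_width(B1, J)*1e3:.1f} mm")
    print(f"coil cross-section ~ {coil_cross_section(B1, J, r)*1e4:.1f} cm^2")
    ```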

  4. Control system for technological processes in tritium processing plants with process analysis

    International Nuclear Information System (INIS)

    Retevoi, Carmen Maria; Stefan, Iuliana; Balteanu, Ovidiu; Stefan, Liviu; Bucur, Ciprian

    2005-01-01

    Integration of a large variety of installations and equipment into a unitary system for controlling the technological process in tritium processing nuclear facilities appears to be a rather complex approach, particularly when experimental or new technologies are developed. Ensuring a high degree of versatility that allows easy modification of configurations and process parameters is a major requirement imposed on experimental installations. The large amount of data that must be processed, stored and easily accessed for subsequent analyses imposes the development of a large information network based on a highly integrated system containing the acquisition, control and technological process analysis data, as well as a database system. On such a basis, integrated computation and control systems able to conduct the technological process can be developed, as well as protection systems for cases of failure or breakdown. The integrated system responds to the control and security requirements in case of emergency and to those of the technological processes specific to an industry that processes radioactive or toxic substances with severe consequences in case of technological failure, as in the case of a tritium processing nuclear plant. In order to lower the risk of technological failure of these processes, an integrated software, database and process analysis system is developed which, based on an identification algorithm for the parameters important to the protection and security systems, will display the process evolution trend. The system was checked on an existing plant that includes a tritium removal unit, finally used in a nuclear power plant, by simulating failure events as well as the process. The system will also include a complete database monitoring all the parameters and process analysis software for the main modules of the tritium processing plant, namely isotope separation, catalytic purification and cryogenic distillation

  5. Measuring the Amount of Mechanical Vibration During Lathe Processing

    Directory of Open Access Journals (Sweden)

    Štefánia SALOKYOVÁ

    2015-06-01

    Full Text Available The article provides basic information regarding the measurement and evaluation of mechanical vibration during the processing of material on a lathe. Lathe processing can be characterized as the removal of material by precisely defined tools. The results of the experimental part are values of the vibration acceleration amplitude measured by a piezoelectric sensor on the bearing housing of the lathe. A set of new findings and conclusions is formulated based on the analysis of the resulting graphical dependencies.
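
    Vibration acceleration amplitudes of this kind are commonly inspected in the frequency domain; a minimal sketch (synthetic signal and assumed sampling rate, not the article's measurement chain) is:

    ```python
    # Amplitude spectrum of a vibration acceleration signal via FFT.
    import numpy as np

    fs = 10_000.0                       # sampling rate (Hz), assumed
    t = np.arange(0, 1.0, 1 / fs)
    # Synthetic signal: a dominant 180 Hz component (e.g. a spindle or chatter
    # harmonic) plus broadband noise.
    accel = (2.0 * np.sin(2 * np.pi * 180 * t)
             + 0.5 * np.random.default_rng(2).normal(size=t.size))

    spectrum = np.abs(np.fft.rfft(accel)) * 2 / t.size     # single-sided amplitude
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)
    peak = freqs[np.argmax(spectrum[1:]) + 1]               # skip the DC bin
    print(f"dominant vibration component at {peak:.0f} Hz")
    ```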

  6. Elephant’s breast milk contains large amounts of glucosamine

    Science.gov (United States)

    TAKATSU, Zenta; TSUDA, Muneya; YAMADA, Akio; MATSUMOTO, Hiroshi; TAKAI, Akira; TAKEDA, Yasuhiro; TAKASE, Mitsunori

    2016-01-01

    Hand-reared elephant calves that are nursed with milk substitutes sometimes suffer bone fractures, probably due to problems associated with nutrition, exercise, sunshine levels and/or genetic factors. As we were expecting the birth of an Asian elephant (Elephas maximus), we analyzed elephant’s breast milk to improve the milk substitutes for elephant calves. Although there were few nutritional differences between conventional substitutes and elephant’s breast milk, we found a large unknown peak in the breast milk during high-performance liquid chromatography-based amino acid analysis and determined that it was glucosamine (GlcN) using liquid chromatography/mass spectrometry. We detected the following GlcN concentrations [mean ± SD] (mg/100 g) in milk hydrolysates produced by treating samples with 6M HCl for 24 hr at 110°C: four elephant’s breast milk samples: 516 ± 42, three cow’s milk mixtures: 4.0 ± 2.2, three mare’s milk samples: 12 ± 1.2 and two human milk samples: 38. The GlcN content of the elephant’s milk was 128, 43 and 14 times greater than those of the cow’s, mare’s and human milk, respectively. Then, we examined the degradation of GlcN during 0–24 hr hydrolyzation with HCl. We estimated that elephant’s milk contains >880 mg/100 g GlcN, which is similar to the levels of major amino acids in elephant’s milk. We concluded that a novel GlcN-containing milk substitute should be developed for elephant calves. The efficacy of GlcN supplements is disputed, and free GlcN is rare in bodily fluids; thus, the optimal molecular form of GlcN requires a further study. PMID:28049867

  7. A survey of formal business process verification : From soundness to variability

    NARCIS (Netherlands)

    Groefsema, Heerko; Bucur, Doina

    2013-01-01

    Formal verification of business process models is of interest to a number of application areas, including checking for basic process correctness, business compliance, and process variability. A large amount of work on these topics exists, while a comprehensive overview of the field and its directions

  8. Study of the influence of the amount of PBI-H{sub 3}PO{sub 4} in the catalytic layer of a high temperature PEMFC

    Energy Technology Data Exchange (ETDEWEB)

    Lobato, Justo; Canizares, Pablo; Rodrigo, Manuel A.; Linares, Jose J.; Pinar, F. Javier [Chemical Engineering Department, Enrique Costa Building, University of Castilla-La Mancha, Av. Camilo Jose Cela, n 12, 13071, Ciudad Real (Spain)

    2010-02-15

    The influence of the amount of polybenzimidazole (PBI)-H{sub 3}PO{sub 4} (normalized with respect to the PBI loading and expressed as the C/PBI weight ratio) in both the anode and the cathode has been studied for a PBI-based high temperature proton exchange membrane (PEM) fuel cell. The electrodes prepared with different amounts of PBI have been characterized physically by measuring the pore size distribution and visualizing the surface microstructure. Afterwards, the electrochemical behaviour of the electrodes has been evaluated. The catalytic electrochemical activity has been measured by voltamperometry for each electrode prepared with a different PBI content, and the cell performance results have been studied, supported by the impedance spectra, in order to determine the influence of the PBI loading in each electrode. The best results have been achieved with a C/PBI weight ratio of 20, for both the anode and the cathode. A lower C/PBI weight ratio (a larger amount of PBI in the catalytic layer) reduced the electrocatalytic activity and impaired the mass transport processes, due to the large amount of polymer covering the catalyst particles, lowering the cell performance. A higher C/PBI weight ratio (a lower amount of PBI in the catalytic layer) reduced the electrocatalytic activity and slightly increased the ohmic resistance. The low amount of the polymeric ionic carrier PBI-H{sub 3}PO{sub 4} limited the proton mobility, despite the presence of large amounts of ''free'' H{sub 3}PO{sub 4} in the catalytic layer. (author)

  9. Oxidative stability during storage of fish oil from filleting by-products of rainbow trout (Oncorhynchus mykiss) is largely independent of the processing and production temperature

    DEFF Research Database (Denmark)

    Honold, Philipp; Nouard, Marie-Louise; Jacobsen, Charlotte

    2016-01-01

    Rainbow trout (Oncorhynchus mykiss) is the main fish species produced in Danish fresh water farming. Large amounts of filleting by-products such as heads, bones, tails (HBT), and intestines are produced when rainbow trout is processed into smoked rainbow trout fillets. The filleting by-products can be used to produce high quality fish oil. In this study, the oxidative stability of fish oil produced from filleting by-products was evaluated. The oil was produced from conventional or organic fish (low and high omega-3 fatty acid content) at different temperatures (70 and 90°C). The oxidative stability...

  10. Elastin in large artery stiffness and hypertension

    Science.gov (United States)

    Wagenseil, Jessica E.; Mecham, Robert P.

    2012-01-01

    Large artery stiffness, as measured by pulse wave velocity (PWV), is correlated with high blood pressure and may be a causative factor in essential hypertension. The extracellular matrix components, specifically the mix of elastin and collagen in the vessel wall, determine the passive mechanical properties of the large arteries. Elastin is organized into elastic fibers in the wall during arterial development in a complex process that requires spatial and temporal coordination of numerous proteins. The elastic fibers last the lifetime of the organism, but are subject to proteolytic degradation and chemical alterations that change their mechanical properties. This review discusses how alterations in the amount, assembly, organization or chemical properties of the elastic fibers affect arterial stiffness and blood pressure. Strategies for encouraging or reversing alterations to the elastic fibers are addressed. Methods for determining the efficacy of these strategies, by measuring elastin amounts and arterial stiffness, are summarized. Therapies that have a direct effect on arterial stiffness through alterations to the elastic fibers in the wall may be an effective treatment for essential hypertension. PMID:22290157

  11. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    Science.gov (United States)

    Grossman, M.W.; George, W.A.

    1987-07-07

    A process is described for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method for doing this involves dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution comprised of glacial acetic acid and H2O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg2Cl2. The method for doing this involves dissolving a precise amount of Hg2Cl2 in an electrolyte solution comprised of concentrated HCl and H2O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required, pre-determined quantity of Hg. 1 fig.
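
    The 'precise amount of HgO corresponding to a pre-determined amount of Hg' is a simple stoichiometric conversion; a sketch of the arithmetic (molar masses rounded; not taken from the patent) is:

    ```python
    # Mass of starting compound needed to yield a target mass of Hg.
    # HgO yields 1 Hg per formula unit; Hg2Cl2 yields 2 Hg per formula unit.
    M_HG, M_O, M_CL = 200.59, 16.00, 35.45          # g/mol, rounded

    def mass_of_precursor(target_hg_g, precursor="HgO"):
        if precursor == "HgO":
            return target_hg_g * (M_HG + M_O) / M_HG
        if precursor == "Hg2Cl2":
            return target_hg_g * (2 * M_HG + 2 * M_CL) / (2 * M_HG)
        raise ValueError(precursor)

    print(f"{mass_of_precursor(1.000, 'HgO'):.3f} g HgO per 1.000 g Hg")        # ~1.080
    print(f"{mass_of_precursor(1.000, 'Hg2Cl2'):.3f} g Hg2Cl2 per 1.000 g Hg")  # ~1.177
    ```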

  12. Processing graded feedback: electrophysiological correlates of learning from small and large errors.

    Science.gov (United States)

    Luft, Caroline Di Bernardi; Takase, Emilio; Bhattacharya, Joydeep

    2014-05-01

    Feedback processing is important for learning and therefore may affect the consolidation of skills. Considerable research demonstrates electrophysiological differences between correct and incorrect feedback, but how we learn from small versus large errors is usually overlooked. This study investigated electrophysiological differences when processing small or large error feedback during a time estimation task. Data from high-learners and low-learners were analyzed separately. In both high- and low-learners, large error feedback was associated with higher feedback-related negativity (FRN) and small error feedback was associated with a larger P300 and increased amplitude over the motor related areas of the left hemisphere. In addition, small error feedback induced larger desynchronization in the alpha and beta bands with distinctly different topographies between the two learning groups: The high-learners showed a more localized decrease in beta power over the left frontocentral areas, and the low-learners showed a widespread reduction in the alpha power following small error feedback. Furthermore, only the high-learners showed an increase in phase synchronization between the midfrontal and left central areas. Importantly, this synchronization was correlated to how well the participants consolidated the estimation of the time interval. Thus, although large errors were associated with higher FRN, small errors were associated with larger oscillatory responses, which was more evident in the high-learners. Altogether, our results suggest an important role of the motor areas in the processing of error feedback for skill consolidation.

  13. Automated sampling and data processing derived from biomimetic membranes

    International Nuclear Information System (INIS)

    Perry, M; Vissing, T; Hansen, J S; Nielsen, C H; Boesen, T P; Emneus, J

    2009-01-01

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long-time series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet(TM)) for efficient data management. The combined solution provides a cost efficient and fast way to acquire, process and administrate large amounts of voltage clamp data that may be too laborious and time consuming to handle manually. (communication)

  14. Automated sampling and data processing derived from biomimetic membranes

    Energy Technology Data Exchange (ETDEWEB)

    Perry, M; Vissing, T; Hansen, J S; Nielsen, C H [Aquaporin A/S, Diplomvej 377, DK-2800 Kgs. Lyngby (Denmark); Boesen, T P [Xefion ApS, Kildegaardsvej 8C, DK-2900 Hellerup (Denmark); Emneus, J, E-mail: Claus.Nielsen@fysik.dtu.d [DTU Nanotech, Technical University of Denmark, DK-2800 Kgs. Lyngby (Denmark)

    2009-12-15

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long-time series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet(TM)) for efficient data management. The combined solution provides a cost efficient and fast way to acquire, process and administrate large amounts of voltage clamp data that may be too laborious and time consuming to handle manually. (communication)

  15. Intensification of mass transfer in wet textile processes by power ultrasound

    NARCIS (Netherlands)

    Moholkar, V.S.; Nierstrasz, Vincent; Warmoeskerken, Marinus

    2003-01-01

    In industrial textile pre-treatment and finishing processes, mass transfer and mass transport are often rate-limiting. As a result, these processes require a relatively long residence time, large amounts of water and chemicals, and are also energy-consuming. In most of these processes, diffusion and

  16. Process automation system for integration and operation of Large Volume Plasma Device

    International Nuclear Information System (INIS)

    Sugandhi, R.; Srivastava, P.K.; Sanyasi, A.K.; Srivastav, Prabhakar; Awasthi, L.M.; Mattoo, S.K.

    2016-01-01

    Highlights: • Analysis and design of process automation system for Large Volume Plasma Device (LVPD). • Data flow modeling for process model development. • Modbus based data communication and interfacing. • Interface software development for subsystem control in LabVIEW. - Abstract: The Large Volume Plasma Device (LVPD) has been successfully contributing towards understanding of the plasma turbulence driven by the Electron Temperature Gradient (ETG), considered a major contributor to plasma loss in fusion devices. The large size of the device imposes certain difficulties in operation, such as access to the diagnostics, manual control of subsystems, monitoring of a large number of signals, etc. To achieve integrated operation of the machine, automation is essential for enhanced performance and operational efficiency. Recently, the machine has been undergoing a major upgrade for new physics experiments. The new operation and control system consists of the following: (1) a PXIe based fast data acquisition system for the equipped diagnostics; (2) a Modbus based Process Automation System (PAS) for the subsystem controls and (3) a Data Utilization System (DUS) for efficient storage, processing and retrieval of the acquired data. In the ongoing development, a data flow model of the machine's operation has been developed. As a proof of concept, the following two subsystems have been successfully integrated: (1) the Filament Power Supply (FPS) for the heating of the W-filament based plasma source and (2) the Probe Positioning System (PPS) for control of 12 linear probe drives for a travel length of 100 cm. The process model of the vacuum production system has been prepared and validated against acquired pressure data. In the next upgrade, all the subsystems of the machine will be integrated in a systematic manner. The automation backbone is based on a 4-wire multi-drop serial interface (RS485) using the Modbus communication protocol. Software is developed on the LabVIEW platform using

  17. Process automation system for integration and operation of Large Volume Plasma Device

    Energy Technology Data Exchange (ETDEWEB)

    Sugandhi, R., E-mail: ritesh@ipr.res.in; Srivastava, P.K.; Sanyasi, A.K.; Srivastav, Prabhakar; Awasthi, L.M.; Mattoo, S.K.

    2016-11-15

    Highlights: • Analysis and design of a process automation system for the Large Volume Plasma Device (LVPD). • Data flow modeling for process model development. • Modbus-based data communication and interfacing. • Interface software development for subsystem control in LabVIEW. - Abstract: The Large Volume Plasma Device (LVPD) has been contributing successfully towards the understanding of plasma turbulence driven by the Electron Temperature Gradient (ETG), considered a major contributor to plasma loss in fusion devices. The large size of the device imposes certain difficulties in operation, such as access for diagnostics, manual control of subsystems and monitoring of a large number of signals. To achieve integrated operation of the machine, automation is essential for enhanced performance and operational efficiency. The machine is currently undergoing a major upgrade for new physics experiments. The new operation and control system consists of the following: (1) a PXIe-based fast data acquisition system for the equipped diagnostics; (2) a Modbus-based Process Automation System (PAS) for the subsystem controls and (3) a Data Utilization System (DUS) for efficient storage, processing and retrieval of the acquired data. In the ongoing development, a data flow model of the machine's operation has been developed. As a proof of concept, the following two subsystems have been successfully integrated: (1) the Filament Power Supply (FPS) for heating of the W-filament-based plasma source and (2) the Probe Positioning System (PPS) for control of 12 linear probe drives over a travel length of 100 cm. The process model of the vacuum production system has been prepared and validated against acquired pressure data. In the next upgrade, all subsystems of the machine will be integrated in a systematic manner. The automation backbone is based on a 4-wire multi-drop serial interface (RS485) using the Modbus communication protocol. Software is developed on the LabVIEW platform using

  18. On Building and Processing of Large Digitalized Map Archive

    Directory of Open Access Journals (Sweden)

    Milan Simunek

    2011-07-01

    Full Text Available A long list of problems needs to be solved during long-term work on a virtual model of Prague, the aim of which is to show the historical development of the city in virtual reality. This paper presents an integrated solution to digitalizing, cataloguing and processing a large number of maps from different periods and from a variety of sources. A specialized GIS software application was developed to allow for fast georeferencing (using an evolutionary algorithm), for cataloguing in an internal database, and subsequently for an easy lookup of relevant maps, so that the maps can be processed further to serve as a main input for modeling the changing face of the city through time.
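    The evolutionary georeferencing mentioned above can be sketched as fitting an affine pixel-to-world transform to a handful of control points by mutation and selection. The sketch below is a simple evolution strategy under assumptions made here (invented control points, decaying mutation width); it is not the application's actual algorithm.

```python
import random

# (pixel_xy, world_xy) control points; coordinates invented for illustration
controls = [((120.0,  80.0), (14.418, 50.087)),
            ((400.0,  90.0), (14.426, 50.088)),
            ((380.0, 300.0), (14.425, 50.081)),
            ((100.0, 310.0), (14.417, 50.080))]

def error(p):
    a, b, c, d, e, f = p          # affine map: X = a*x + b*y + c, Y = d*x + e*y + f
    return sum((a*x + b*y + c - X) ** 2 + (d*x + e*y + f - Y) ** 2
               for (x, y), (X, Y) in controls)

def evolve(pop_size=60, generations=3000, sigma=1e-3):
    # start from zero linear terms and offsets near the centre of the mapped area
    pop = [[0.0, 0.0, 14.42, 0.0, 0.0, 50.08] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        parents = pop[: pop_size // 4]                     # keep the fittest quarter (elitism)
        children = [[g + random.gauss(0.0, sigma) for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
        sigma *= 0.998                                     # slowly narrow the search
    return min(pop, key=error)

best = evolve()
print("affine parameters:", [round(v, 6) for v in best])
print("residual squared error:", error(best))
```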

  19. Large-scale methanol plants. [Based on Japanese-developed process

    Energy Technology Data Exchange (ETDEWEB)

    Tado, Y

    1978-02-01

    A study was made of how to produce methanol economically; methanol is expected to be a growth product for use as a material for pollution-free energy or for chemical use. The study centered on the following subjects: (1) improvement of thermal economy, (2) improvement of the process, and (3) hardware problems attending the expansion of scale. The results of this study have already been adopted in actual plants with good results, and large-scale methanol plants are expected to be realized.

  20. Gastrointestinal absorption of large amounts of plutonium. Effect of valency state on transfer

    International Nuclear Information System (INIS)

    Lataillade, G.; Duserre, C.; Metivier, H.; Madic, C.; CEA Centre d'Etudes Nucleaires de Fontenay-aux-Roses, 92

    1989-01-01

    The gastrointestinal absorption of Pu, ingested in valency state III, IV, V or VI, was studied in baboons. For each state, the absorption of Pu from masses ranging from 1 to 7 mg per kg of body weight was compared with that from masses ranging from 5 to 45 μg per kg of body weight. The mass ingested did not affect the gastrointestinal absorption of Pu(IV) or Pu(VI), but for Pu(V), and to a lesser extent Pu(III), absorption clearly increased, by about 150-fold and 7-fold respectively, when large masses of Pu were ingested. When small masses were ingested, the valency state did not affect absorption. The increased Pu absorption observed after ingestion of large masses of Pu(V) or Pu(III) might be due to the weak hydrolysis of these valency states. (author)

  1. Integrated process development-a robust, rapid method for inclusion body harvesting and processing at the microscale level.

    Science.gov (United States)

    Walther, Cornelia; Kellner, Martin; Berkemeyer, Matthias; Brocard, Cécile; Dürauer, Astrid

    2017-10-21

    Escherichia coli stores large amounts of highly pure product within inclusion bodies (IBs). To take advantage of this beneficial feature, after cell disintegration, the first step to optimal product recovery is efficient IB preparation. This step is also important in evaluating upstream optimization and process development, due to the potential impact of bioprocessing conditions on product quality and on the nanoscale properties of IBs. Proper IB preparation is often neglected, because laboratory-scale methods require large amounts of material and labor. Miniaturization and parallelization can accelerate analyses of individual processing steps and provide a deeper understanding of up- and downstream processing interdependencies. Consequently, reproducible, predictive microscale methods are in demand. In the present study, we complemented a recently established high-throughput cell disruption method with a microscale method for preparing purified IBs. This preparation provided results comparable to laboratory-scale IB processing regarding impurity depletion and product loss. Furthermore, with this method, we performed a "design of experiments" study to demonstrate the influence of fermentation conditions on the performance of subsequent downstream steps and on product quality. We showed that this approach provided a 300-fold reduction in material consumption for each fermentation condition and a 24-fold reduction in processing time for 24 samples.

  2. Beowulf Distributed Processing and the United States Geological Survey

    Science.gov (United States)

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing
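    The distributed-processing pattern described here is often realised today with MPI. The sketch below assumes mpi4py and an MPI runtime are available on the cluster nodes; the tile list and the process_tile work function are placeholders invented for illustration, not the NMD application that was ported.

```python
# Minimal sketch of "embarrassingly parallel" work on a Beowulf-style cluster,
# assuming mpi4py is installed (run with: mpirun -n 8 python work.py).
from mpi4py import MPI

def process_tile(tile_id):
    # stand-in for a processor-intensive calculation on one tile of input data
    return tile_id, sum(i * i for i in range(100_000))

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# root splits the work list into one chunk per rank, then each rank gets its chunk
chunks = [list(range(64))[r::size] for r in range(size)] if rank == 0 else None
my_tiles = comm.scatter(chunks, root=0)

results = [process_tile(t) for t in my_tiles]
gathered = comm.gather(results, root=0)

if rank == 0:
    flat = [item for chunk in gathered for item in chunk]
    print(f"processed {len(flat)} tiles across {size} ranks")
```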

  3. Large Data at Small Universities: Astronomical processing using a computer classroom

    Science.gov (United States)

    Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen

    2016-06-01

    The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these required resources is sometimes difficult for research groups working at smaller universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the university, the resource impact on the investigator is generally low. By using open source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an "embarrassingly parallel" manner, gains in speed are accomplished without requiring the investigator to learn how to write routines using highly specialized methodology. We demonstrate this concept here applied to 1. photometry of large-format images and 2. statistical significance tests for X-ray lightcurve analysis. In these scenarios, we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that the usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
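    On a single multi-core lab machine the same "embarrassingly parallel" idea can be sketched with the standard library. The file layout, the .npy image format and the crude peak-aperture sum below are assumptions for illustration; published photometry would use dedicated packages rather than this toy measurement.

```python
# Each worker measures one image independently, so no inter-process
# communication is needed; this is the essence of the embarrassingly
# parallel pattern described in the record above.
import glob
from multiprocessing import Pool
import numpy as np

def measure(path):
    img = np.load(path)                        # assume frames pre-converted to .npy
    y, x = np.unravel_index(np.argmax(img), img.shape)
    aperture = img[max(y - 5, 0):y + 6, max(x - 5, 0):x + 6]
    return path, float(aperture.sum())         # crude aperture flux around the brightest pixel

if __name__ == "__main__":
    files = sorted(glob.glob("frames/*.npy"))  # hypothetical directory of frames
    with Pool(processes=8) as pool:
        for path, flux in pool.map(measure, files):
            print(path, flux)
```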

  4. Data-based method for creating electricity use load profiles using large amount of customer-specific hourly measured electricity use data

    International Nuclear Information System (INIS)

    Raesaenen, Teemu; Voukantsis, Dimitrios; Niska, Harri; Karatzas, Kostas; Kolehmainen, Mikko

    2010-01-01

    Recent technological developments in monitoring the electricity use of small customers provide a whole new basis for developing electricity distribution systems and customer-specific services and for increasing energy efficiency. The analysis of customer load profiles and load estimation is an important and popular area of electricity distribution technology and management. In this paper, we present an efficient methodology, based on self-organizing maps (SOM) and clustering methods (K-means and hierarchical clustering), capable of handling large amounts of time-series data in the context of electricity load management research. The proposed methodology was applied to a dataset consisting of hourly measured electricity use data for 3989 small customers located in Northern Savo, Finland. Information on the hourly electricity use of large numbers of small customers has become available only recently, and this paper therefore presents the first results of making use of these data. The individual customers were classified into user groups based on their electricity use profiles. On this basis, new, data-based load curves were calculated for each of these user groups. The new user groups as well as the newly estimated load curves were compared with the existing ones, which were calculated by the electricity company on the basis of a customer classification scheme and the customers' annual demand for electricity. Index-of-agreement statistics were used to quantify the agreement between the estimated and observed electricity use. The results indicate that there is a clear improvement when using data-based estimations, and the newly estimated load curves can be utilized directly by existing electric power systems for more accurate load estimates.
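    A minimal sketch of the clustering step is given below, using k-means from scikit-learn on synthetic 24-hour load shapes. The array sizes, number of groups and normalisation are assumptions made here; the paper itself combines SOM with k-means and hierarchical clustering rather than k-means alone.

```python
# Group customers by hourly load shape with k-means (scikit-learn assumed installed).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# hypothetical data: 3989 customers x 24-hour average load profiles (kWh)
profiles = rng.gamma(shape=2.0, scale=0.5, size=(3989, 24))
profiles /= profiles.sum(axis=1, keepdims=True)      # normalise to load *shape*

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(profiles)
load_curves = km.cluster_centers_                     # one data-based load curve per user group
print(np.bincount(km.labels_))                        # number of customers in each group
```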

  5. Data-based method for creating electricity use load profiles using large amount of customer-specific hourly measured electricity use data

    Energy Technology Data Exchange (ETDEWEB)

    Raesaenen, Teemu; Niska, Harri; Kolehmainen, Mikko [Department of Environmental Sciences, University of Eastern Finland P.O. Box 1627, FIN-70211 Kuopio (Finland); Voukantsis, Dimitrios; Karatzas, Kostas [Department of Mechanical Engineering, Aristotle University of Thessaloniki, GR-54124 Thessaloniki (Greece)

    2010-11-15

    Recent technological developments in monitoring the electricity use of small customers provide a whole new basis for developing electricity distribution systems and customer-specific services and for increasing energy efficiency. The analysis of customer load profiles and load estimation is an important and popular area of electricity distribution technology and management. In this paper, we present an efficient methodology, based on self-organizing maps (SOM) and clustering methods (K-means and hierarchical clustering), capable of handling large amounts of time-series data in the context of electricity load management research. The proposed methodology was applied to a dataset consisting of hourly measured electricity use data for 3989 small customers located in Northern Savo, Finland. Information on the hourly electricity use of large numbers of small customers has become available only recently, and this paper therefore presents the first results of making use of these data. The individual customers were classified into user groups based on their electricity use profiles. On this basis, new, data-based load curves were calculated for each of these user groups. The new user groups as well as the newly estimated load curves were compared with the existing ones, which were calculated by the electricity company on the basis of a customer classification scheme and the customers' annual demand for electricity. Index-of-agreement statistics were used to quantify the agreement between the estimated and observed electricity use. The results indicate that there is a clear improvement when using data-based estimations, and the newly estimated load curves can be utilized directly by existing electric power systems for more accurate load estimates. (author)

  6. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing trend of computational requirements at JAERI, introduction of a high-speed computer with a vector processing capability (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes for the pipelined-vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed with respect to large-scale nuclear codes by examining the following items: 1) The present character of the computational load at JAERI is analyzed by compiling computer utilization statistics. 2) Vector processing efficiency is estimated for the ten heavily used nuclear codes by analyzing their dynamic behavior when run on a scalar machine. 3) Vector processing efficiency is measured for the other five nuclear codes by using the current vector processors, FACOM 230-75 APU and CRAY-1. 4) The effectiveness of applying a high-speed vector processor to nuclear codes is evaluated by taking account of the characteristics of JAERI jobs. Problems of vector processors are also discussed from the viewpoints of code performance and ease of use. (author)

  7. Dehydrogenation in large ingot casting process

    International Nuclear Information System (INIS)

    Ubukata, Takashi; Suzuki, Tadashi; Ueda, Sou; Shibata, Takashi

    2009-01-01

    Forging components for nuclear power plants have become larger and larger as weld lines are reduced from a safety point of view. Consequently, they are manufactured from ingots weighing 200 tons or more. Dehydrogenation is one of the key issues in the large ingot manufacturing process. In the case of ingots of 200 tons or heavier, mold stream degassing (MSD) has been applied for dehydrogenation. Although JSW had developed mold stream degassing by argon (MSD-Ar) as a more effective dehydrogenating practice, MSD-Ar was not applied to these ingots, because the conventional refractory materials of the stopper rod for the Ar blowing hole had low durability. In this study, we have developed a new type of stopper rod through modification of both the refractory materials and the stopper rod construction, and have successfully expanded the application range of MSD-Ar up to ingots weighing 330 tons. Compared with conventional MSD, the hydrogen content in ingots after MSD-Ar decreased by 24 percent, because the dehydrogenation rate of MSD-Ar increased by 34 percent. (author)

  8. Estimation of erosion amount by geochemical characteristic in the Horonobe area, northern Hokkaido

    International Nuclear Information System (INIS)

    Takahashi, Kazuharu; Niizato, Tadafumi; Yasue, Ken-ichi; Ishii, Eiichi

    2005-08-01

    This article presents estimates of the amounts of erosion and uplift based on the mineralogy and organic geochemical characteristics of the Neogene siliceous rocks (Wakkanai and Koetoi Formations) in Horonobe. From the transformational change of silica minerals, the erosion rate was estimated to be about 0.66 m ky⁻¹ or more at the large-uplift site, and about 0.21 m ky⁻¹ or more at the small-uplift site, in the Hokushin region of the Horonobe area. Given its correlation with the palaeo-geothermal temperature, the sterane/sterene ratio is an effective measure for estimating burial depth and erosion amount. We conclude that organic geochemical characteristics make high-resolution estimation of the amounts of erosion and uplift possible. (author)

  9. Analyzing Vessel Behavior Using Process Mining

    NARCIS (Netherlands)

    Maggi, F.M.; Mooij, A.J.; Aalst, W.M.P. van der

    2013-01-01

    In the maritime domain, electronic sensors such as AIS receivers and radars collect large amounts of data about the vessels in a certain geographical area. We investigate the use of process mining techniques for analyzing the behavior of the vessels based on these data. In the context of maritime

  10. Laboratory for Large Data Research

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: The Laboratory for Large Data Research (LDR) addresses a critical need to rapidly prototype shared, unified access to large amounts of data across both the...

  11. Controlled elaboration of large-area plasmonic substrates by plasma process

    International Nuclear Information System (INIS)

    Pugliara, A; Despax, B; Makasheva, K; Bonafos, C; Carles, R

    2015-01-01

    Controlled elaboration of large-area, efficient plasmonic substrates is achieved by combining the sputtering of silver nanoparticles (AgNPs) with plasma polymerization of the embedding dielectric matrix in an axially asymmetric, capacitively coupled RF discharge maintained at low gas pressure. The plasma parameters and deposition conditions were optimized according to the optical response of these substrates. Structural and optical characterizations of the samples confirm the efficiency of the process. The obtained results indicate that, to deposit a single layer of large and closely spaced AgNPs, a high injected power and short sputtering times should be favoured. The plasma-elaborated plasmonic substrates appear to be very sensitive to any stimuli that affect their plasmonic response. (paper)

  12. High-Throughput Tabular Data Processor - Platform independent graphical tool for processing large data sets.

    Science.gov (United States)

    Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz

    2018-01-01

    High-throughput technologies generate considerable amounts of data which often require bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform-independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting the data produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expressions, or command line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease-predisposing variants in next-generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merging, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility in terms of input file handling provides long-term potential functionality in high-throughput analysis pipelines, as the program is not limited by the currently existing applications and data formats. HTDP is available as open source software (https://github.com/pmadanecki/htdp).
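    For comparison, the same class of filter-and-merge job can be scripted; the sketch below uses pandas on two tab-delimited files. The file names, column names and thresholds are invented for illustration and are not a schema imposed by HTDP, which is a GUI tool.

```python
# Merge two character-delimited tables and filter rows by external criteria.
import pandas as pd

variants = pd.read_csv("sample1.vcf.tsv", sep="\t")   # hypothetical variant table
coverage = pd.read_csv("coverage.tsv", sep="\t")      # hypothetical per-position coverage

merged = variants.merge(coverage, on=["chrom", "pos"], how="inner")
filtered = merged[(merged["depth"] >= 20) & (merged["qual"] >= 30)]
filtered.to_csv("filtered_variants.tsv", sep="\t", index=False)
```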

  13. Data processing system for NBT experiments

    International Nuclear Information System (INIS)

    Takahashi, C.; Hosokawa, M.; Shoji, T.; Fujiwara, M.

    1981-07-01

    A data processing system for the Nagoya Bumpy Torus (NBT) has been developed. Since plasmas are produced and heated in steady state by use of high-power microwaves, data sampling and processing take place on a long time scale, on the order of one minute. The system, which consists of a NOVA 3/12 minicomputer and many data acquisition devices, is designed to sample and process a large amount of data before the next discharge starts. Several features of such a long-time-scale data processing system are described in detail. (author)

  14. Extraterrestrial processing and manufacturing of large space systems. Volume 3: Executive summary

    Science.gov (United States)

    Miller, R. H.; Smith, D. B. S.

    1979-01-01

    Facilities and equipment are defined for refining to commercial grade the lunar material that is delivered to a 'space manufacturing facility' in beneficiated, primary-processed quality. The manufacturing facilities and the equipment for producing elements of large space systems from these materials are also defined, and programmatic assessments of the concepts are provided. In-space production processes for solar cells (by vapor deposition) and arrays, structures and joints, conduits, waveguides, RF equipment, radiators, wire cables, converters, and others are described.

  15. Sentinel-1 data massive processing for large scale DInSAR analyses within Cloud Computing environments through the P-SBAS approach

    Science.gov (United States)

    Lanari, Riccardo; Bonano, Manuela; Buonanno, Sabatino; Casu, Francesco; De Luca, Claudio; Fusco, Adele; Manunta, Michele; Manzo, Mariarosaria; Pepe, Antonio; Zinno, Ivana

    2017-04-01

    -core programming techniques. Currently, Cloud Computing environments make available large collections of computing resources and storage that can be effectively exploited through the presented S1 P-SBAS processing chain to carry out interferometric analyses at a very large scale, in reduced time. This also allows us to deal with the problems connected to the use of the S1 P-SBAS chain in operational contexts related to hazard monitoring and risk prevention and mitigation, where handling large amounts of data represents a challenging task. As a significant experimental result we performed a large-spatial-scale SBAS analysis of Central and Southern Italy by exploiting the Amazon Web Services Cloud Computing platform. In particular, we processed in parallel 300 S1 acquisitions covering the Italian peninsula from Lazio to Sicily through the presented S1 P-SBAS processing chain, generating 710 interferograms and thus finally obtaining the displacement time series of the whole processed area. This work has been partially supported by the CNR-DPC agreement, the H2020 EPOS-IP project (GA 676564) and the ESA GEP project.

  16. The new MAW scrap processing facility

    International Nuclear Information System (INIS)

    Kueppers, L.

    1994-01-01

    The shielded bunker for heat-generating waste attached to the MAW scrap processing cell will be modified and extended to comprise several MAW scrap processing cells of enhanced throughput capacity, and a new building to serve as an airlock and port for the acceptance of large shipping casks (shipping cask airlock, TBS). The new facility is to process scrap from decommissioned nuclear installations and, in addition, radwaste accrued at utilities' operating plants. This will allow efficient and steady use of the new MAW scrap processing facility. The planning activities for modification and extension are based on close coordination between KfK and GNS mbH, in order to put structural dimensioning and capacity planning on a realistic basis in line with the expected amounts of radwaste from utilities' operating nuclear installations. The paper indicates the currently available waste amount assessments covering solid radwaste (MAW) from the decommissioning of the WAK, MZFR, and KNK II, and existing waste amounts consisting of core internals of German nuclear power plants. The figures show that the MAW scrap processing facility will have to process an overall bulk of about 1100 Mg of solid waste over the next ten years. (orig./HP) [de

  17. A methodology for fault diagnosis in large chemical processes and an application to a multistage flash desalination process: Part II

    International Nuclear Information System (INIS)

    Tarifa, Enrique E.; Scenna, Nicolas J.

    1998-01-01

    In Part I, an efficient method for identifying faults in large processes was presented. The whole plant is divided into sectors by using structural, functional, or causal decomposition. A signed directed graph (SDG) is the model used for each sector. The SDG represents interactions among process variables. This qualitative model is used to carry out qualitative simulation for all possible faults. The output of this step is information about the process behaviour. This information is used to build rules. When a symptom is detected in one sector, its rules are evaluated using on-line data and fuzzy logic to yield the diagnosis. In this paper the proposed methodology is applied to a multiple stage flash (MSF) desalination process. This process is composed of sequential flash chambers. It was designed for a pilot plant that produces drinkable water for a community in Argentina; that is, it is a real case. Due to the large number of variables, recycles, phase changes, etc., this process is a good challenge for the proposed diagnosis method
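    The qualitative simulation step on the signed directed graph can be sketched as propagating a deviation sign along signed edges. The three-variable graph below is invented for illustration and is not the MSF plant model of the paper; it only shows the propagation mechanics that underlie the generated rules.

```python
# Qualitative fault propagation on a signed directed graph (SDG):
# a deviation sign (+1/-1) at the fault variable is pushed along signed edges
# to predict which measurements should read high or low.
from collections import deque

# edges: cause -> list of (effect, sign); +1 means "moves the same way"
SDG = {
    "steam_flow": [("brine_temp", +1)],
    "brine_temp": [("flash_rate", +1), ("brine_level", -1)],
    "flash_rate": [("distillate_flow", +1)],
}

def propagate(fault_var, fault_sign):
    """Breadth-first propagation of a qualitative deviation through the SDG."""
    predicted = {fault_var: fault_sign}
    queue = deque([fault_var])
    while queue:
        node = queue.popleft()
        for effect, sign in SDG.get(node, []):
            if effect not in predicted:              # keep the first explanation found
                predicted[effect] = predicted[node] * sign
                queue.append(effect)
    return predicted

# e.g. low steam flow -> low distillate flow and high brine level
print(propagate("steam_flow", -1))
```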

  18. Microarray Data Processing Techniques for Genome-Scale Network Inference from Large Public Repositories.

    Science.gov (United States)

    Chockalingam, Sriram; Aluru, Maneesha; Aluru, Srinivas

    2016-09-19

    Pre-processing of microarray data is a well-studied problem. Furthermore, all popular platforms come with their own recommended best practices for differential analysis of genes. However, for genome-scale network inference using microarray data collected from large public repositories, these methods filter out a considerable number of genes. This is primarily due to the effects of aggregating a diverse array of experiments with different technical and biological scenarios. Here we introduce a pre-processing pipeline suitable for inferring genome-scale gene networks from large microarray datasets. We show that partitioning of the available microarray datasets according to biological relevance into tissue- and process-specific categories significantly extends the limits of downstream network construction. We demonstrate the effectiveness of our pre-processing pipeline by inferring genome-scale networks for the model plant Arabidopsis thaliana using two different construction methods and a collection of 11,760 Affymetrix ATH1 microarray chips. Our pre-processing pipeline and the datasets used in this paper are made available at http://alurulab.cc.gatech.edu/microarray-pp.

  19. IAEA Conference on Large Radiation Sources in Industry (Warsaw 1959): Which technologies of radiation processing survived and why?

    International Nuclear Information System (INIS)

    Zagorski, Z.P.

    1999-01-01

    The IAEA organized an International Conference on Large Radiation Sources in Industry in Warsaw from 8 to 12 September 1959. The proceedings of the Conference were published in two volumes totalling 925 pages. This report analyses which technologies presented at the Conference have survived and why. The analysis is interesting because already in the fifties practically the full range of possibilities of radiation processing had been explored, and partially implemented; not many new technologies were presented at the subsequent IAEA conferences on the same theme. Already at the time of the Warsaw Conference the important role of the economics of a technology had been recognized. The present report sorts the achievements of the Conference into two groups: the first concerns technologies which were not implemented in the following decades, and the second comprises those which became the basis of highly profitable, unsubsidized commercial production. The criterion for a technology belonging to the second group is the value of the quotient of the cost of the ready, saleable product, diminished by the cost of the raw material before processing, to the expense of radiation processing, the latter being the sum of the irradiation cost and of operations such as transportation of the object to and from the irradiation facility. A low value of this quotient, compared with the successful technologies, bodes badly for the future of a commercial proposal. A special position among objects of radiation processing is occupied by technologies directed towards protecting or improving the environment; the market economy does not apply here and the implementation has to be subsidized. (author)
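    The profitability criterion paraphrased above can be written compactly; the notation below is added here for clarity and does not appear in the original report.

```latex
% Notation assumed here (not from the original report):
%   C_prod - value of the ready, saleable product
%   C_raw  - cost of the raw material before processing
%   C_irr  - irradiation cost;  C_tr - transport to and from the facility
\[
  Q \;=\; \frac{C_{\mathrm{prod}} - C_{\mathrm{raw}}}{C_{\mathrm{irr}} + C_{\mathrm{tr}}},
  \qquad Q \gg 1 \ \text{for the commercially successful technologies.}
\]
```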

  20. The Development and Microstructure Analysis of High Strength Steel Plate NVE36 for Large Heat Input Welding

    Science.gov (United States)

    Peng, Zhang; Liangfa, Xie; Ming, Wei; Jianli, Li

    In the shipbuilding industry, the welding efficiency of the ship plate not only has a great effect on the construction cost of the ship, but also affects the construction speed and determines the delivery cycle. Steel plate for large heat input welding has therefore been extensively developed. In this paper, the composition of the steel was designed along a micro-alloyed route, with small amounts of Nb and Ti and a large amount of Mn; the C content and the carbon equivalent were also kept at a low level. Oxide metallurgy was used during the smelting of the steel. The TMCP rolling was controlled at a low rolling temperature and ultra-fast cooling was applied in order to control the transformation of the microstructure, which was adjusted to a mixed microstructure of low-carbon bainite and ferrite. A large amount of oxide particles dispersed in the microstructure of the steel, which had a positive effect on the mechanical properties and welding performance of the steel. The mechanical properties of the steel plate were excellent, with a longitudinal Akv at -60 °C of more than 200 J. The toughness of the WM and HAZ remained excellent after the steel plate was welded with a large heat input of 100-250 kJ/cm. The steel plate processed as described above can meet the requirements of large heat input welding.

  1. Medical students perceive better group learning processes when large classes are made to seem small.

    Science.gov (United States)

    Hommes, Juliette; Arah, Onyebuchi A; de Grave, Willem; Schuwirth, Lambert W T; Scherpbier, Albert J J A; Bos, Gerard M J

    2014-01-01

    Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups due to students being unfamiliar with fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n=50) as the intervention groups; a control group (n=102) was mixed with the remaining students (the non-randomised group n∼100) to create one large subset. The setting was the undergraduate curriculum of the Maastricht Medical School, which applies Problem-Based Learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6-10 weeks. The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset who hardly enrolled with the same students in formal activities. Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context and perceptions of the intervention. Formal group learning processes were perceived more positively in the intervention groups from the second study year on, with a mean increase of β=0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention from the first week involved in the medical curriculum (E-I indexes>-0.69). Interviews tapped mainly positive effects and negligible negative side effects of the intervention. Better group learning processes can be achieved in large medical schools by making large classes seem small.

  2. Resin infusion of large composite structures modeling and manufacturing process

    Energy Technology Data Exchange (ETDEWEB)

    Loos, A.C. [Michigan State Univ., Dept. of Mechanical Engineering, East Lansing, MI (United States)

    2006-07-01

    The resin infusion processes resin transfer molding (RTM), resin film infusion (RFI) and vacuum-assisted resin transfer molding (VARTM) are cost-effective techniques for the fabrication of complex-shaped composite structures. The dry fibrous preform is placed in the mold, consolidated, resin impregnated and cured in a single-step process. The fibrous preforms are often constructed near net shape using highly automated textile processes such as knitting, weaving and braiding. In this paper, the infusion processes RTM, RFI and VARTM are discussed along with the advantages of each technique compared with traditional composite fabrication methods such as prepreg tape lay-up and autoclave cure. The large number of processing variables and the complex material behavior during infiltration and cure make experimental optimization of the infusion processes costly and inefficient. Numerical models have been developed which can be used to simulate the resin infusion processes. The model formulation and solution procedures for the VARTM process are presented. A VARTM process simulation of a carbon fiber preform is presented to demonstrate the type of information that can be generated by the model and to compare the model predictions with experimental measurements. Overall, the predicted flow front positions, resin pressures and preform thicknesses agree well with the measured values. The results of the simulation show the potential cost and performance benefits that can be realized by using a simulation model as part of the development process. (au)

  3. PLANNING QUALITY ASSURANCE PROCESSES IN A LARGE SCALE GEOGRAPHICALLY SPREAD HYBRID SOFTWARE DEVELOPMENT PROJECT

    Directory of Open Access Journals (Sweden)

    Святослав Аркадійович МУРАВЕЦЬКИЙ

    2016-02-01

    Full Text Available Key points of operational activities in large-scale, geographically spread software development projects are discussed, and a look is taken at the QA process structure required in such a project. Up-to-date methods of integrating quality assurance processes into software development processes are presented. Existing groups of software development methodologies are reviewed, namely sequential, agile and PRINCE2-based, with a condensed overview of the quality assurance processes in each group. Common challenges that sequential and agile models face in a large, geographically spread hybrid software development project are reviewed, and recommendations are given to tackle those challenges. Conclusions are drawn about the choice of the best methodology and its application to the particular project.

  4. Analysis of the Growth Process of Neural Cells in Culture Environment Using Image Processing Techniques

    Science.gov (United States)

    Mirsafianf, Atefeh S.; Isfahani, Shirin N.; Kasaei, Shohreh; Mobasheri, Hamid

    Here we present an approach for processing neural cell images to analyze their growth process in a culture environment. We have applied several image processing techniques for: 1) environmental noise reduction, 2) neural cell segmentation, 3) neural cell classification based on the growth conditions of their dendrites, and 4) extraction and measurement of neuron features (e.g., cell body area, number of dendrites, axon length, and so on). Due to the large amount of noise in the images, we have used feed-forward artificial neural networks to detect edges more precisely.

  5. Large break frequency for the SRS (Savannah River Site) production reactor process water system

    International Nuclear Information System (INIS)

    Daugherty, W.L.; Awadalla, N.G.; Sindelar, R.L.; Bush, S.H.

    1989-01-01

    The objective of this paper is to present the results and conclusions of an evaluation of the large break frequency for the process water system (primary coolant system), including the piping, reactor tank, heat exchangers, expansion joints and other process water system components. This evaluation was performed to support the ongoing PRA effort and to complement deterministic analyses addressing the credibility of a double-ended guillotine break. This evaluation encompasses three specific areas: the failure probability of large process water piping directly from imposed loads, the indirect failure probability of piping caused by the seismic-induced failure of surrounding structures, and the failure of all other process water components. The first two of these areas are discussed in detail in other papers. This paper primarily addresses the failure frequency of components other than piping, and includes the other two areas as contributions to the overall process water system break frequency

  6. Amounts of NPK removed from soil in harvested coffee berries as ...

    African Journals Online (AJOL)

    Monthly samples of ripened improved robusta coffee berries from compact and large growth forms from three locations, which are representative of the main ecological zones where coffee is grown in Ghana, were taken for 3 years. The pulp and parchment and beans were analysed for N, P and K contents. The amounts of ...

  7. Modelling rainfall amounts using mixed-gamma model for Kuantan district

    Science.gov (United States)

    Zakaria, Roslinazairimah; Moslim, Nor Hafizah

    2017-05-01

    Efficient design of flood mitigation measures and construction of crop growth models depend upon a good understanding of the rainfall process and its characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. The formulae for the mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The derived formulae for the mean and variance of the sum of two and three independent mixed-gamma variables are tested using monthly rainfall amounts from rainfall stations within the Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness-of-fit test, the results demonstrate that the descriptive statistics of the observed sums of rainfall amounts are not significantly different at the 5% significance level from the generated sums of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
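    For reference, the first and second moments of a single mixed-gamma variable, and hence of sums of independent ones, can be written as below. The notation (p for the probability of nonzero rainfall, α for shape, β for scale) is assumed here and may differ from that of the paper.

```latex
% Moments of one mixed-gamma variable X (zero with probability 1-p, Gamma(alpha, beta)
% with probability p, beta a scale parameter); sums of independent X_i follow by additivity.
\[
  \mathbb{E}[X] = p\,\alpha\beta, \qquad
  \operatorname{Var}(X) = p\,\alpha(\alpha+1)\beta^{2} - p^{2}\alpha^{2}\beta^{2},
\]
\[
  \mathbb{E}\!\left[\textstyle\sum_{i} X_i\right] = \sum_{i} p_i\,\alpha_i\beta_i, \qquad
  \operatorname{Var}\!\left(\textstyle\sum_{i} X_i\right) = \sum_{i} \operatorname{Var}(X_i)
  \quad (X_i\ \text{independent}).
\]
```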

  8. Thermal effects in radiation processing

    International Nuclear Information System (INIS)

    Zagorski, Z.P.

    1985-01-01

    The balance of ionizing radiation energy incident on an object being processed is discussed in terms of energy losses, influencing the amount really absorbed. To obtain the amount of heat produced, the absorbed energy is corrected for the change in internal energy of the system and for the heat effect of secondary reactions developing after the initiation. The temperature of a processed object results from the heat evolved and from the specific heat of the material comprising the object. The specific heat (c_p) of most materials is usually much lower than that of aqueous systems and therefore temperatures after irradiation are higher. The role of low specific heat in radiation processing at cryogenic conditions is stressed. Adiabatic conditions of accelerator irradiation are contrasted with the steady state thermal conditions prevailing in large gamma sources. Among specific questions discussed in the last part of the paper are: intermediate and final temperature of composite materials, measurement of real thermal effects in situ, neutralization of undesired warming experienced during radiation processing, processing at temperatures other than ambient and administration of very high doses of radiation. (author)
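    For the adiabatic (accelerator) case mentioned above, the temperature rise follows directly from the absorbed dose and the specific heat; the worked numbers below are illustrative values, not figures from the paper.

```latex
% Adiabatic temperature rise for absorbed dose D (Gy = J/kg) and specific heat c_p,
% neglecting the internal-energy and secondary-reaction corrections discussed above.
% Example values are illustrative only.
\[
  \Delta T \approx \frac{D}{c_p}, \qquad
  D = 25\ \mathrm{kGy}:\quad
  \text{water } (c_p \approx 4.2\ \mathrm{kJ\,kg^{-1}K^{-1}}) \Rightarrow \Delta T \approx 6\ \mathrm{K},\quad
  \text{polyethylene } (c_p \approx 1.9\ \mathrm{kJ\,kg^{-1}K^{-1}}) \Rightarrow \Delta T \approx 13\ \mathrm{K}.
\]
```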

  9. Thermal effects in radiation processing

    International Nuclear Information System (INIS)

    Zagorski, Z.P.

    1984-01-01

    The balance of ionizing radiation energy incident on an object being processed is discussed in terms of energy losses, influencing the amount really absorbed. To obtain the amount of heat produced, the absorbed energy is corrected for the change in internal energy of the system and for the heat effect of secondary reactions developing after the initiation. The temperature of a processed object results from the heat evolved and from the specific heat of the material comprising the object. The specific heat of most materials is usually much lower than that of aqueous systems and therefore temperatures after irradiation are higher. The role of low specific heat in radiation processing at cryogenic conditions is stressed. Adiabatic conditions of accelerator irradiation are contrasted with the steady state thermal conditions prevailing in large gamma sources. Among specific questions discussed in the last part of the paper are: intermediate and final temperature of composite materials, measurement of real thermal effects in situ, neutralization of undesired warming experienced during radiation processing, processing at temperatures other than ambient and administration of very high doses of radiation

  10. Perspectives of intellectual processing of large volumes of astronomical data using neural networks

    Science.gov (United States)

    Gorbunov, A. A.; Isaev, E. A.; Samodurov, V. A.

    2018-01-01

    In the process of astronomical observations, vast amounts of data are collected. The BSA (Big Scanning Antenna) of LPI, used in the study of impulse phenomena, logs 87.5 GB of data daily (32 TB per year). These data are important for both short- and long-term monitoring of various classes of radio sources (including radio transients of different natures), monitoring of the Earth's ionosphere and of the interplanetary and interstellar plasma, and the search for and monitoring of different classes of radio sources. In the framework of these studies, 83096 individual pulse events were discovered (in the study interval July 2012 - October 2013), which may correspond to pulsars, scintillating sources, and rapid radio transients. The detected impulse events are intended to be used to filter subsequent observations. The study suggests an approach based on a multilayer artificial neural network, which processes the raw input data; after processing by the hidden layers, the output layer produces the class of the impulsive phenomenon.

  11. Visualization of the Flux Rope Generation Process Using Large Quantities of MHD Simulation Data

    Directory of Open Access Journals (Sweden)

    Y Kubota

    2013-03-01

    Full Text Available We present a new concept of analysis using visualization of large quantities of simulation data. The time development of 3D objects with high temporal resolution provides the opportunity for scientific discovery. We visualize large quantities of simulation data using the visualization application 'Virtual Aurora', based on AVS (Advanced Visual Systems), and the parallel distributed processing of the "Space Weather Cloud" at NICT, based on Gfarm technology. We introduce two results of high-temporal-resolution visualization: the magnetic flux rope generation process and dayside reconnection using a system of magnetic field line tracing.

  12. Large sample hydrology in NZ: Spatial organisation in process diagnostics

    Science.gov (United States)

    McMillan, H. K.; Woods, R. A.; Clark, M. P.

    2013-12-01

    A key question in hydrology is how to predict the dominant runoff generation processes in any given catchment. This knowledge is vital for a range of applications in forecasting hydrological response and related processes such as nutrient and sediment transport. A step towards this goal is to map dominant processes in locations where data is available. In this presentation, we use data from 900 flow gauging stations and 680 rain gauges in New Zealand, to assess hydrological processes. These catchments range in character from rolling pasture, to alluvial plains, to temperate rainforest, to volcanic areas. By taking advantage of so many flow regimes, we harness the benefits of large-sample and comparative hydrology to study patterns and spatial organisation in runoff processes, and their relationship to physical catchment characteristics. The approach we use to assess hydrological processes is based on the concept of diagnostic signatures. Diagnostic signatures in hydrology are targeted analyses of measured data which allow us to investigate specific aspects of catchment response. We apply signatures which target the water balance, the flood response and the recession behaviour. We explore the organisation, similarity and diversity in hydrological processes across the New Zealand landscape, and how these patterns change with scale. We discuss our findings in the context of the strong hydro-climatic gradients in New Zealand, and consider the implications for hydrological model building on a national scale.
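    Two of the signature families named above (water balance and flood response) can be sketched numerically as below; the synthetic rainfall-runoff series and the particular quantile ratio are assumptions made here, not the study's actual signature set.

```python
# Compute two simple diagnostic signatures from a daily rainfall/flow series.
import numpy as np

rng = np.random.default_rng(1)
rain = rng.gamma(0.7, 8.0, size=365)                       # daily rainfall in mm, synthetic
kernel = np.exp(-np.arange(30) / 7.0)
kernel /= kernel.sum()                                     # unit-volume recession kernel
flow = 0.4 * np.convolve(rain, kernel, mode="full")[:365]  # crude rainfall-runoff response

runoff_ratio = flow.sum() / rain.sum()                     # water-balance signature (about 0.4 here)
high_flow_ratio = np.quantile(flow, 0.95) / np.quantile(flow, 0.50)  # flood-response signature

print(f"runoff ratio = {runoff_ratio:.2f}, Q95/Q50 = {high_flow_ratio:.1f}")
```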

  13. Simulation research on the process of large scale ship plane segmentation intelligent workshop

    Science.gov (United States)

    Xu, Peng; Liao, Liangchuang; Zhou, Chao; Xue, Rui; Fu, Wei

    2017-04-01

    The large-scale ship plane segmentation intelligent workshop is new, and there has been no research work in related fields either domestically or abroad. The mode of production should be transformed from the existing Industry 2.0, or partly Industry 3.0, level: from "human brain analysis and judgment + machine manufacturing" to "machine analysis and judgment + machine manufacturing". In this transformation, a great number of tasks need to be settled in terms of management and technology, such as workshop structure evolution, development of intelligent equipment and changes in the business model, along with the reorganization of the whole workshop. Process simulation in this project verifies the general layout and process flow of the large-scale ship plane segmentation intelligent workshop and analyzes the workshop's working efficiency, which is significant for the next step of the transformation of the plane segmentation intelligent workshop.

  14. Medical Students Perceive Better Group Learning Processes when Large Classes Are Made to Seem Small

    Science.gov (United States)

    Hommes, Juliette; Arah, Onyebuchi A.; de Grave, Willem; Schuwirth, Lambert W. T.; Scherpbier, Albert J. J. A.; Bos, Gerard M. J.

    2014-01-01

    Objective Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups due to students being unfamiliar to fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. Design A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n = 50) as the intervention groups; a control group (n = 102) was mixed with the remaining students (the non-randomised group n∼100) to create one large subset. Setting The undergraduate curriculum of the Maastricht Medical School, applying the Problem-Based Learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6–10 weeks. Intervention The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset who hardly enrolled with the same students in formal activities. Main Outcome Measures Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context and perceptions of the intervention. Results Formal group learning processes were perceived more positive in the intervention groups from the second study year on, with a mean increase of β = 0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention from the first week involved in the medical curriculum (E-I indexes>−0.69). Interviews tapped mainly positive effects and negligible negative side effects of the intervention. Conclusion Better group learning processes can be

  15. ESB application for effective synchronization of large volume measurements data

    CERN Document Server

    Wyszkowski, Przemysław Michał

    2011-01-01

    The TOTEM experiment at CERN aims at the measurement of the total cross section, elastic scattering and diffractive processes of colliding protons in the Large Hadron Collider. For this research to be possible, it is necessary to process huge amounts of data coming from a variety of sources: TOTEM detectors, CMS detectors, measurement devices around the Large Hadron Collider tunnel and many other external systems. Preparing the final results also involves calculating many intermediate figures, which likewise need to be stored. For the work of the scientists to be effective and convenient, it is crucial to provide a central point for data storage, where all raw and intermediate figures are kept. This thesis aims at presenting the usage of the Enterprise Service Bus concept in building a software infrastructure for transferring large volumes of measurement data. Topics discussed here include technologies and mechanisms realizing the concept of an integration bus, and a model of a data transfer system based on ...

  16. Large-scale functional networks connect differently for processing words and symbol strings.

    Science.gov (United States)

    Liljeström, Mia; Vartiainen, Johanna; Kujala, Jan; Salmelin, Riitta

    2018-01-01

    Reconfigurations of synchronized large-scale networks are thought to be central neural mechanisms that support cognition and behavior in the human brain. Magnetoencephalography (MEG) recordings together with recent advances in network analysis now allow for sub-second snapshots of such networks. In the present study, we compared frequency-resolved functional connectivity patterns underlying reading of single words and visual recognition of symbol strings. Word reading emphasized coherence in a left-lateralized network with nodes in classical perisylvian language regions, whereas symbol processing recruited a bilateral network, including connections between frontal and parietal regions previously associated with spatial attention and visual working memory. Our results illustrate the flexible nature of functional networks, whereby processing of different form categories, written words vs. symbol strings, leads to the formation of large-scale functional networks that operate at distinct oscillatory frequencies and incorporate task-relevant regions. These results suggest that category-specific processing should be viewed not so much as a local process but as a distributed neural process implemented in signature networks. For words, increased coherence was detected particularly in the alpha (8-13 Hz) and high gamma (60-90 Hz) frequency bands, whereas increased coherence for symbol strings was observed in the high beta (21-29 Hz) and low gamma (30-45 Hz) frequency range. These findings attest to the role of coherence in specific frequency bands as a general mechanism for integrating stimulus-dependent information across brain regions.

  17. A Proactive Complex Event Processing Method for Large-Scale Transportation Internet of Things

    OpenAIRE

    Wang, Yongheng; Cao, Kening

    2014-01-01

    The Internet of Things (IoT) provides a new way to improve the transportation system. The key issue is how to process the numerous events generated by IoT. In this paper, a proactive complex event processing method is proposed for large-scale transportation IoT. Based on a multilayered adaptive dynamic Bayesian model, a Bayesian network structure learning algorithm using search-and-score is proposed to support accurate predictive analytics. A parallel Markov decision processes model is design...

  18. Processes with large Psub(T) in the quantum chromodynamics

    International Nuclear Information System (INIS)

    Slepchenko, L.A.

    1981-01-01

    The necessary data on deep inelastic processes and hard hadron collision processes, and their interpretation in QCD, are presented. The power-law falloff of exclusive and inclusive cross sections at large transverse momenta, and the electromagnetic and inelastic (structure-function) form factors of hadrons, are discussed. In searching for a method of taking QCD effects into account, scaling violation was considered. It is shown that at large transverse momenta deep inelastic lepton-hadron (l-h) scattering can be represented as scattering from a compound system (the hadron) in the impulse approximation. Under the assumption of a parton model, a hadron cross section calculated through renormalized parton structure functions was obtained. A proof of factorization in the leading logarithmic approximation of QCD has been obtained by means of a quark-gluon diagram technique. The cross section of the hadron reaction has been calculated in factorized form, analogous to l-h scattering. It is shown that a) summing the diagrams with gluon emission generates scaling violation in the renormalized structure functions (SF) of quarks and gluons, and a running coupling constant arises simultaneously; b) the character of the violation of Bjorken scaling of the SF is the same as in deep inelastic lepton scattering. QCD problems which cannot be solved within the framework of perturbation theory are discussed. The evolution of the SF describing the bound state of a hadron, and the hadron light cone, have been studied. Radiative corrections arising in two-loop and higher approximations have been evaluated. QCD corrections to the point-like power-law asymptotics of processes at high energies and momentum transfers have been studied using the example of the inclusive production of quark and gluon jets. Quark counting rules for the anomalous dimensions of QCD have been obtained. It is concluded that the considered limit of the inclusive cross sections is close to

  19. Influence of the reagent concentration of the colorimetric copper determination with sodium diethyl dithiocarbamate (abbreviated: D.D.C.) and its importance for the determination of copper in the presence of large amounts of iron

    NARCIS (Netherlands)

    Karsten, P.; Rademaker, S.C.; Walraven, J.J.

    1950-01-01

    From research into the influence of the reagent concentration on the copper determination with sodium diethyldithiocarbamate in the presence of large amounts of iron, some insight was gained into factors that had never been examined before and that were found to have a great influence on the

  20. Hierarchical optimal control of large-scale nonlinear chemical processes.

    Science.gov (United States)

    Ramezani, Mohammad Hossein; Sadati, Nasser

    2009-01-01

    In this paper, a new approach is presented for optimal control of large-scale chemical processes. In this approach, the chemical process is decomposed into smaller sub-systems at the first level, with a coordinator at the second level, and a two-level hierarchical control strategy is designed. Each sub-system in the first level can be solved separately, using any conventional optimization algorithm. In the second level, the solutions obtained from the first level are coordinated using a new gradient-type strategy, which is updated by the error of the coordination vector. The proposed algorithm is used to solve the optimal control problem of a complex nonlinear continuous stirred tank reactor (CSTR), and its solution is compared with the one obtained using the centralized approach. The simulation results show the efficiency and the capability of the proposed hierarchical approach in finding the optimal solution, compared with the centralized method.
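
    As a rough illustration of the two-level idea summarized above (not the authors' algorithm), the sketch below uses price-type coordination on a toy separable problem: each sub-system solves its own quadratic problem for a given coordination variable, and the coordinator applies a gradient-type update driven by the coordination error. All parameter values are hypothetical.

        # Two-level coordination sketch: first-level sub-problems solved locally,
        # second-level coordinator updated by a gradient-type step on the error of
        # the shared-resource constraint sum_i u_i = U_TOTAL.
        U_TOTAL = 10.0
        PARAMS = [(1.0, 4.0), (2.0, 6.0)]   # hypothetical (a_i, b_i) for cost 0.5*a*u^2 - b*u

        def solve_subsystem(a, b, lam):
            # Local optimum of 0.5*a*u^2 - b*u + lam*u  ->  u = (b - lam) / a
            return (b - lam) / a

        lam = 0.0
        for _ in range(200):
            u = [solve_subsystem(a, b, lam) for a, b in PARAMS]   # first level
            error = sum(u) - U_TOTAL                              # coordination error
            lam += 0.1 * error                                    # second-level gradient-type update
        print("allocations:", [round(v, 3) for v in u], "price:", round(lam, 3))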

  1. Possible implications of large scale radiation processing of food

    International Nuclear Information System (INIS)

    Zagorski, Z.P.

    1990-01-01

    Large scale irradiation has been discussed in terms of the participation of processing cost in the final value of the improved product. Another factor has been taken into account and that is the saturation of the market with the new product. In the case of successful projects the participation of irradiation cost is low, and the demand for the better product is covered. A limited availability of sources makes the modest saturation of the market difficult with all food subjected to correct radiation treatment. The implementation of the preservation of food needs a decided selection of these kinds of food which comply to all conditions i.e. of acceptance by regulatory bodies, real improvement of quality and economy. The last condition prefers the possibility of use of electron beams of low energy. The best fulfilment of conditions for successful processing is observed in the group of dry food, in expensive spices in particular. (author)

  2. Possible implications of large scale radiation processing of food

    Science.gov (United States)

    Zagórski, Z. P.

    Large scale irradiation has been discussed in terms of the participation of processing cost in the final value of the improved product. Another factor has been taken into account and that is the saturation of the market with the new product. In the case of successful projects the participation of irradiation cost is low, and the demand for the better product is covered. A limited availability of sources makes the modest saturation of the market difficult with all food subjected to correct radiation treatment. The implementation of the preservation of food needs a decided selection of these kinds of food which comply to all conditions i.e. of acceptance by regulatory bodies, real improvement of quality and economy. The last condition prefers the possibility of use of electron beams of low energy. The best fulfilment of conditions for successful processing is observed in the group of dry food, in expensive spices in particular.

  3. Large Eddy Simulation of Cryogenic Injection Processes at Supercritical Pressure

    Science.gov (United States)

    Oefelein, Joseph C.

    2002-01-01

    This paper highlights results from the first of a series of hierarchical simulations aimed at assessing the modeling requirements for application of the large eddy simulation technique to cryogenic injection and combustion processes in liquid rocket engines. The focus is on liquid-oxygen-hydrogen coaxial injectors at a condition where the liquid-oxygen is injected at a subcritical temperature into a supercritical environment. For this situation a diffusion dominated mode of combustion occurs in the presence of exceedingly large thermophysical property gradients. Though continuous, these gradients approach the behavior of a contact discontinuity. Significant real gas effects and transport anomalies coexist locally in colder regions of the flow, with ideal gas and transport characteristics occurring within the flame zone. The current focal point is on the interfacial region between the liquid-oxygen core and the coaxial hydrogen jet where the flame anchors itself.

  4. The key network communication technology in large radiation image cooperative process system

    International Nuclear Information System (INIS)

    Li Zheng; Kang Kejun; Gao Wenhuan; Wang Jingjin

    1998-01-01

    A large container inspection system (LCIS) based on radiation imaging technology is a powerful tool that allows customs to check the contents of a large container without opening it. A distributed image network system composed of an operation manager station, image acquisition station, environment control station, inspection processing station, check-in station, check-out station and database station is built using advanced network technology. Mass data, such as container image data, general container information, manifest scanning data, commands and status, must be transferred on-line between the different stations. The advanced network communication technology used is presented

  5. Stream computing for biomedical signal processing: A QRS complex detection case-study.

    Science.gov (United States)

    Murphy, B M; O'Driscoll, C; Boylan, G B; Lightbody, G; Marnane, W P

    2015-01-01

    Recent developments in "Big Data" have brought significant gains in the ability to process large amounts of data on commodity server hardware. Stream computing is a relatively new paradigm in this area, addressing the need to process data in real time with very low latency. While this approach has been developed for dealing with large scale data from the world of business, security and finance, there is a natural overlap with clinical needs for physiological signal processing. In this work we present a case study of streams processing applied to a typical physiological signal processing problem: QRS detection from ECG data.
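
    The abstract does not spell out the detection algorithm itself; as a hedged illustration of how a QRS detector can be written as a streaming operator (one sample in, one decision out), the sketch below uses a simplified Pan-Tompkins-style chain (derivative, squaring, moving-window integration, adaptive threshold). All constants and class names are illustrative, not those of the case study.

        # Minimal streaming QRS-detector sketch: one ECG sample per call, as a
        # stream-computing operator would consume it.
        from collections import deque

        class StreamingQRSDetector:
            def __init__(self, fs=250, window_ms=150, refractory_ms=200):
                self.win = deque(maxlen=max(1, int(fs * window_ms / 1000)))
                self.refractory = int(fs * refractory_ms / 1000)
                self.prev = 0.0
                self.threshold = 0.0
                self.since_last = 10 ** 9
                self.n = 0

            def push(self, x):
                """Feed one sample; return True if a QRS complex is flagged here."""
                deriv = x - self.prev                     # crude derivative
                self.prev = x
                self.win.append(deriv * deriv)            # squaring
                feature = sum(self.win) / len(self.win)   # moving-window integration
                self.threshold = 0.995 * self.threshold + 0.005 * feature  # adaptive baseline
                self.since_last += 1
                self.n += 1
                detected = (self.n > 3 * self.win.maxlen and
                            feature > 4 * self.threshold and
                            self.since_last > self.refractory)
                if detected:
                    self.since_last = 0
                return detected

        # usage: det = StreamingQRSDetector(); for sample in ecg_stream: if det.push(sample): handle_beat()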

  6. Response of deep and shallow tropical maritime cumuli to large-scale processes

    Science.gov (United States)

    Yanai, M.; Chu, J.-H.; Stark, T. E.; Nitta, T.

    1976-01-01

    The bulk diagnostic method of Yanai et al. (1973) and a simplified version of the spectral diagnostic method of Nitta (1975) are used for a more quantitative evaluation of the response of various types of cumuliform clouds to large-scale processes, using the same data set in the Marshall Islands area for a 100-day period in 1956. The dependence of the cloud mass flux distribution on radiative cooling, large-scale vertical motion, and evaporation from the sea is examined. It is shown that typical radiative cooling rates in the tropics tend to produce a bimodal distribution of mass spectrum exhibiting deep and shallow clouds. The bimodal distribution is further enhanced when the large-scale vertical motion is upward, and a nearly unimodal distribution of shallow clouds prevails when the relative cooling is compensated by the heating due to the large-scale subsidence. Both deep and shallow clouds are modulated by large-scale disturbances. The primary role of surface evaporation is to maintain the moisture flux at the cloud base.

  7. Large scale production and downstream processing of a recombinant porcine parvovirus vaccine

    NARCIS (Netherlands)

    Maranga, L.; Rueda, P.; Antonis, A.F.G.; Vela, C.; Langeveld, J.P.M.; Casal, J.I.; Carrondo, M.J.T.

    2002-01-01

    Porcine parvovirus (PPV) virus-like particles (VLPs) constitute a potential vaccine for prevention of parvovirus-induced reproductive failure in gilts. Here we report the development of a large scale (25 l) production process for PPV-VLPs with baculovirus-infected insect cells. A low multiplicity of

  8. Dynamic Thermodynamics with Internal Energy, Volume, and Amount of Moles as States : Application to Liquefied Gas Tank

    NARCIS (Netherlands)

    Arendsen, A. R. J.; Versteeg, G. F.

    2009-01-01

    Dynamic models for process design, optimization, and control usually solve a set of heat and/or mass balances as a function of time and/or position in the process. To obtain more robust dynamic models and to minimize the amount of assumptions, internal energy, volume, and amount of moles are chosen

  9. Saturation transfer EPR (ST-EPR) for dating biocarbonates containing large amount of Mn2+: separation of SO3- and CO2- lines and geochronology of Brazilian fish fossil

    International Nuclear Information System (INIS)

    Sastry, M.D.; Andrade, M.B.; Watanabe, Shigueo

    2003-01-01

    A method using saturation transfer EPR (ST-EPR) is shown to be feasible for detecting the EPR signal of radiation-induced defects in biocarbonates containing large amounts of Mn2+. The ST-EPR measurements, conducted at room temperature on a fish fossil of Brazilian origin, enabled the identification of CO2- and SO3- radical ions by partially suppressing the intense signal from Mn2+ when the signals are detected 90 deg. out of phase with the magnetic-field modulation signal and at high microwave power (50 mW). Using these signals the age of the fish fossil is estimated to be (36±5) Ma

  10. Front-end data processing the SLD data acquisition system

    International Nuclear Information System (INIS)

    Nielsen, B.S.

    1986-07-01

    The data acquisition system for the SLD detector will make extensive use of parallel processing at the front-end level. Fastbus acquisition modules are being built with powerful processing capabilities for calibration, data reduction and further pre-processing of the large amount of analog data handled by each module. This paper describes the read-out electronics chain and data pre-processing system adapted for most of the detector channels, exemplified by the central drift chamber waveform digitization and processing system

  11. Big Data Analysis of Manufacturing Processes

    Science.gov (United States)

    Windmann, Stefan; Maier, Alexander; Niggemann, Oliver; Frey, Christian; Bernardi, Ansgar; Gu, Ying; Pfrommer, Holger; Steckel, Thilo; Krüger, Michael; Kraus, Robert

    2015-11-01

    The high complexity of manufacturing processes and the continuously growing amount of data lead to excessive demands on the users with respect to process monitoring, data analysis and fault detection. For these reasons, problems and faults are often detected too late, maintenance intervals are chosen too short and optimization potential for higher output and increased energy efficiency is not sufficiently used. A possibility to cope with these challenges is the development of self-learning assistance systems, which identify relevant relationships by observation of complex manufacturing processes so that failures, anomalies and need for optimization are automatically detected. The assistance system developed in the present work accomplishes data acquisition, process monitoring and anomaly detection in industrial and agricultural processes. The assistance system is evaluated in three application cases: Large distillation columns, agricultural harvesting processes and large-scale sorting plants. In this paper, the developed infrastructures for data acquisition in these application cases are described as well as the developed algorithms and initial evaluation results.

  12. Big Data Analysis of Manufacturing Processes

    International Nuclear Information System (INIS)

    Windmann, Stefan; Maier, Alexander; Niggemann, Oliver; Frey, Christian; Bernardi, Ansgar; Gu, Ying; Pfrommer, Holger; Steckel, Thilo; Krüger, Michael; Kraus, Robert

    2015-01-01

    The high complexity of manufacturing processes and the continuously growing amount of data lead to excessive demands on the users with respect to process monitoring, data analysis and fault detection. For these reasons, problems and faults are often detected too late, maintenance intervals are chosen too short and optimization potential for higher output and increased energy efficiency is not sufficiently used. A possibility to cope with these challenges is the development of self-learning assistance systems, which identify relevant relationships by observation of complex manufacturing processes so that failures, anomalies and need for optimization are automatically detected. The assistance system developed in the present work accomplishes data acquisition, process monitoring and anomaly detection in industrial and agricultural processes. The assistance system is evaluated in three application cases: Large distillation columns, agricultural harvesting processes and large-scale sorting plants. In this paper, the developed infrastructures for data acquisition in these application cases are described as well as the developed algorithms and initial evaluation results. (paper)

  13. Increasing a large petrochemical company efficiency by improvement of decision making process

    OpenAIRE

    Kirin Snežana D.; Nešić Lela G.

    2010-01-01

    The paper presents the results of research conducted in a large petrochemical company, in a state under transition, with the aim of "shedding light" on the decision-making process from the aspect of the personal characteristics of the employees, in order to use the results to improve the decision-making process and increase company efficiency. The research was conducted by a survey, i.e. by filling out a questionnaire specially designed for this purpose, in real conditions, during working hours. The sample of...

  14. Design Methodology of Process Layout considering Various Equipment Types for Large scale Pyro processing Facility

    International Nuclear Information System (INIS)

    Yu, Seung Nam; Lee, Jong Kwang; Lee, Hyo Jik

    2016-01-01

    At present, each item of process equipment required for integrated processing is being examined, based on experience acquired during the Pyroprocess Integrated Inactive Demonstration Facility (PRIDE) project, and considering the requirements and desired performance enhancement of KAPF as a new facility beyond PRIDE. Essentially, KAPF will be required to handle hazardous materials such as spent nuclear fuel, which must be processed in an isolated and shielded area separate from the operator location. Moreover, an inert-gas atmosphere must be maintained, because of the radiation and deliquescence of the materials. KAPF must also achieve the goal of significantly increased yearly production beyond that of the previous facility; therefore, several parts of the production line must be automated. This article presents the method considered for the conceptual design of both the production line and the overall layout of the KAPF process equipment. This study has proposed a design methodology that can be utilized as a preliminary step for the design of a hot-cell-type, large-scale facility, in which the various types of processing equipment operated by the remote handling system are integrated. The proposed methodology applies to part of the overall design procedure and contains various weaknesses. However, if the designer is required to maximize the efficiency of the installed material-handling system while considering operation restrictions and maintenance conditions, this kind of design process can accommodate the essential components that must be employed simultaneously in a general hot-cell system

  15. Quality Improvement Process in a Large Intensive Care Unit: Structure and Outcomes.

    Science.gov (United States)

    Reddy, Anita J; Guzman, Jorge A

    2016-11-01

    Quality improvement in the health care setting is a complex process, and even more so in the critical care environment. The development of intensive care unit process measures and quality improvement strategies are associated with improved outcomes, but should be individualized to each medical center as structure and culture can differ from institution to institution. The purpose of this report is to describe the structure of quality improvement processes within a large medical intensive care unit while using examples of the study institution's successes and challenges in the areas of stat antibiotic administration, reduction in blood product waste, central line-associated bloodstream infections, and medication errors. © The Author(s) 2015.

  16. APD arrays and large-area APDs via a new planar process

    CERN Document Server

    Farrell, R; Vanderpuye, K; Grazioso, R; Myers, R; Entine, G

    2000-01-01

    A fabrication process has been developed which allows the beveled-edge type of avalanche photodiode (APD) to be made without the need for the artful bevel formation steps. This new process, applicable to both APD arrays and to discrete detectors, greatly simplifies manufacture and should lead to significant cost reduction for such photodetectors. This is achieved through a simple innovation that allows isolation around the device or array pixel to be brought into the plane of the surface of the silicon wafer, hence a planar process. A description of the new process is presented along with performance data for a variety of APD device and array configurations. APD array pixel gains in excess of 10 000 have been measured. Array pixel coincidence timing resolution of less than 5 ns has been demonstrated. An energy resolution of 6% for 662 keV gamma-rays using a CsI(Tl) scintillator on a planar processed large-area APD has been recorded. Discrete APDs with active areas up to 13 cm2 have been operated.

  17. Analysis of reforming process of large distorted ring in final enlarging forging

    International Nuclear Information System (INIS)

    Miyazawa, Takeshi; Murai, Etsuo

    2002-01-01

    In the construction of reactors and pressure vessels for petrochemical plants and nuclear power stations, monoblock open-die forged rings are often used. Generally, a large forged ring is manufactured by enlarging forging with reductions of the wall thickness. During the enlarging process the circular ring is often distorted and becomes elliptical in shape. Shape control of the ring is, however, a complicated task, and the problem becomes worse when forging larger rings. In order to achieve precision forging of large rings, we have developed a forging method using a V-shaped anvil. The V-shaped anvil is geometrically adjusted to fit the distorted ring to the final circle and automatically reforms the shape of the ring during enlarging forging. This paper analyzes the reforming process of the distorted ring with a computer program based on the finite element method (FEM) and examines the effect on the precision of ring forging. (author)

  18. Hydrothermal processes above the Yellowstone magma chamber: Large hydrothermal systems and large hydrothermal explosions

    Science.gov (United States)

    Morgan, L.A.; Shanks, W.C. Pat; Pierce, K.L.

    2009-01-01

    Hydrothermal explosions are violent and dramatic events resulting in the rapid ejection of boiling water, steam, mud, and rock fragments from source craters that range from a few meters up to more than 2 km in diameter; associated breccia can be emplaced as much as 3 to 4 km from the largest craters. Hydrothermal explosions occur where shallow interconnected reservoirs of steam- and liquid-saturated fluids with temperatures at or near the boiling curve underlie thermal fields. Sudden reduction in confining pressure causes fluids to flash to steam, resulting in significant expansion, rock fragmentation, and debris ejection. In Yellowstone, hydrothermal explosions are a potentially significant hazard for visitors and facilities and can damage or even destroy thermal features. The breccia deposits and associated craters formed from hydrothermal explosions are mapped as mostly Holocene (the Mary Bay deposit is older) units throughout Yellowstone National Park (YNP) and are spatially located within the 0.64-Ma Yellowstone caldera and along the active Norris-Mammoth tectonic corridor. In Yellowstone, at least 20 large (>100 m in diameter) hydrothermal explosion craters have been identified; the scale of the individual associated events dwarfs similar features in geothermal areas elsewhere in the world. Large hydrothermal explosions in Yellowstone have occurred over the past 16 ka, averaging ~1 every 700 yr; similar events are likely in the future. Our studies of large hydrothermal explosion events indicate: (1) none are directly associated with eruptive volcanic or shallow intrusive events; (2) several historical explosions have been triggered by seismic events; (3) lithic clasts and comingled matrix material that form hydrothermal explosion deposits are extensively altered, indicating that explosions occur in areas subjected to intense hydrothermal processes; (4) many lithic clasts contained in explosion breccia deposits preserve evidence of repeated fracturing

  19. Methods to isolate a large amount of generative cells, sperm cells and vegetative nuclei from tomato pollen for "omics" analysis.

    Science.gov (United States)

    Lu, Yunlong; Wei, Liqin; Wang, Tai

    2015-01-01

    The development of sperm cells (SCs) from microspores involves a set of finely regulated molecular and cellular events and the coordination of these events. The mechanisms underlying these events and their interconnections remain a major challenge. Systems analysis of genome-wide molecular networks and functional modules with high-throughput "omics" approaches is crucial for understanding the mechanisms; however, this study is hindered because of the difficulty in isolating a large amount of cells of different types, especially generative cells (GCs), from the pollen. Here, we optimized the conditions of tomato pollen germination and pollen tube growth to allow for long-term growth of pollen tubes in vitro with SCs generated in the tube. Using this culture system, we developed methods for isolating GCs, SCs and vegetative cell nuclei (VN) from just-germinated tomato pollen grains and growing pollen tubes and their purification by Percoll density gradient centrifugation. The purity and viability of isolated GCs and SCs were confirmed by microscopy examination and fluorescein diacetate staining, respectively, and the integrity of VN was confirmed by propidium iodide staining. We could obtain about 1.5 million GCs and 2.0 million SCs each from 180 mg initiated pollen grains, and 10 million VN from 270 mg initiated pollen grains germinated in vitro in each experiment. These methods provide the necessary preconditions for systematic biology studies of SC development and differentiation in higher plants.

  20. 46 CFR 308.403 - Insured amounts.

    Science.gov (United States)

    2010-10-01

    ... total amount of war risk insurance obtainable from companies authorized to do an insurance business in a... MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EMERGENCY OPERATIONS WAR RISK INSURANCE War Risk Builder's Risk Insurance § 308.403 Insured amounts. (a) Prelaunching period. The amount insured during...

  1. Visual attention mitigates information loss in small- and large-scale neural codes

    Science.gov (United States)

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-01-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. PMID:25769502

  2. A divide-and-conquer algorithm for large-scale de novo transcriptome assembly through combining small assemblies from existing algorithms.

    Science.gov (United States)

    Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M

    2017-12-06

    While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing amount of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high quality transcripts to form a large transcriptome. When compared to existing algorithms that return a single assembly directly, this strategy achieves comparable or increased accuracy as memory-efficient algorithms that can be used to process a large amount of RNA-Seq data, and comparable or decreased accuracy as memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.
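
    A hedged sketch of the divide-and-conquer idea described above: the read set is split into small libraries, each library is assembled by an existing (placeholder) assembler, and the per-library assemblies are merged by keeping long transcripts that are not contained in transcripts already accepted. The merge criterion here is a deliberately crude stand-in for the paper's quality-based selection, and `run_assembler` is a hypothetical command, not a real tool.

        # Divide-and-conquer assembly sketch: assemble each small library
        # independently, then merge the resulting transcript sets.
        import subprocess

        def read_fasta(path):
            seqs, name, buf = {}, None, []
            with open(path) as fh:
                for line in fh:
                    if line.startswith(">"):
                        if name is not None:
                            seqs[name] = "".join(buf)
                        name, buf = line[1:].split()[0], []
                    else:
                        buf.append(line.strip())
            if name is not None:
                seqs[name] = "".join(buf)
            return seqs

        def assemble_library(reads_fastq, outdir):
            # Placeholder for any existing assembler invoked on one small library.
            subprocess.run(["run_assembler", "--reads", reads_fastq, "--out", outdir], check=True)
            return read_fasta(outdir + "/transcripts.fasta")

        def merge_assemblies(per_library_assemblies):
            # Keep transcripts longest-first, skipping ones already contained in an
            # accepted transcript (a crude proxy for redundancy removal).
            merged = []
            candidates = [s for asm in per_library_assemblies for s in asm.values()]
            for seq in sorted(candidates, key=len, reverse=True):
                if not any(seq in kept for kept in merged):
                    merged.append(seq)
            return merged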

  3. Data acquisition system issues for large experiments

    International Nuclear Information System (INIS)

    Siskind, E.J.

    2007-01-01

    This talk consists of personal observations on two classes of data acquisition ('DAQ') systems for Silicon trackers in large experiments with which the author has been concerned over the last three or more years. The first half is a classic 'lessons learned' recital based on experience with the high-level debug and configuration of the DAQ system for the GLAST LAT detector. The second half is concerned with a discussion of the promises and pitfalls of using modern (and future) generations of 'system-on-a-chip' ('SOC') or 'platform' field-programmable gate arrays ('FPGAs') in future large DAQ systems. The DAQ system pipeline for the 864k channels of Si tracker in the GLAST LAT consists of five tiers of hardware buffers which ultimately feed into the main memory of the (two-active-node) level-3 trigger processor farm. The data formats and buffer volumes of these tiers are briefly described, as well as the flow control employed between successive tiers. Lessons learned regarding data formats, buffer volumes, and flow control/data discard policy are discussed. The continued development of platform FPGAs containing large amounts of configurable logic fabric, embedded PowerPC hard processor cores, digital signal processing components, large volumes of on-chip buffer memory, and multi-gigabit serial I/O capability permits DAQ system designers to vastly increase the amount of data preprocessing that can be performed in parallel within the DAQ pipeline for detector systems in large experiments. The capabilities of some currently available FPGA families are reviewed, along with the prospects for next-generation families of announced, but not yet available, platform FPGAs. Some experience with an actual implementation is presented, and reconciliation between advertised and achievable specifications is attempted. The prospects for applying these components to space-borne Si tracker detectors are briefly discussed
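
    The buffer tiers and discard policy discussed above live in hardware and firmware; purely as a software illustration of the flow-control trade-off (back-pressure versus discard when a downstream buffer is full), here is a small Python sketch with bounded queues. Buffer depths, tier behavior and the chosen policies are invented for the example and do not describe the GLAST LAT pipeline.

        # Toy multi-tier pipeline: bounded buffers between tiers, with an explicit
        # policy when the downstream buffer is full (block = back-pressure, or
        # drop the event = discard policy).
        import queue
        import threading

        def tier(inbox, outbox, process, discard_when_full=False):
            while True:
                event = inbox.get()
                if event is None:                 # shutdown marker
                    if outbox is not None:
                        outbox.put(None)
                    return
                event = process(event)
                if outbox is None:
                    continue
                if discard_when_full:
                    try:
                        outbox.put_nowait(event)  # discard policy: drop on overflow
                    except queue.Full:
                        pass
                else:
                    outbox.put(event)             # back-pressure: block the upstream tier

        q_front, q_l3 = queue.Queue(maxsize=64), queue.Queue(maxsize=8)
        threading.Thread(target=tier, args=(q_front, q_l3, lambda e: e),
                         kwargs={"discard_when_full": True}, daemon=True).start()
        threading.Thread(target=tier, args=(q_l3, None, lambda e: e), daemon=True).start()
        for i in range(100):
            q_front.put({"event": i})
        q_front.put(None)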

  4. Development of the Fischer-Tropsch Process: From the Reaction Concept to the Process Book

    Directory of Open Access Journals (Sweden)

    Boyer C.

    2016-05-01

    The process development by IFP Energies nouvelles (IFPEN)/ENI/Axens of a Fischer-Tropsch process is described. This development is based on upstream process studies to choose the process scheme, reactor technology and operating conditions, and downstream to summarize all development work in a process guide. A large amount of work was devoted to the catalyst performance on the one hand and to the scale-up of the slurry bubble reactor with dedicated complementary tools on the other. Finally, an original approach was implemented to validate both the process and catalyst on an industrial scale by combining a 20 bpd unit in ENI’s Sannazzaro refinery with cold mock-ups equivalent to 20 and 1 000 bpd at IFPEN and a special “Large Validation Tool” (LVT), which reproduces the combined effect of chemical reaction condition stress and mechanical stress equivalent to a 15 000 bpd industrial unit. Dedicated analytical techniques and a dedicated model were developed to simulate the whole process (reactor and separation train), integrating a high level of complexity and phenomena coupling, in order to scale up the process on a robust and reliable basis at industrial scale.

  5. Ubiquitous UAVs: a cloud based framework for storing, accessing and processing huge amount of video footage in an efficient way

    Science.gov (United States)

    Efstathiou, Nectarios; Skitsas, Michael; Psaroudakis, Chrysostomos; Koutras, Nikolaos

    2017-09-01

    Nowadays, video surveillance cameras are used for the protection and monitoring of a huge number of facilities worldwide. An important element in such surveillance systems is the use of aerial video streams originating from onboard sensors located on Unmanned Aerial Vehicles (UAVs). Video surveillance using UAVs represents a vast amount of video to be transmitted, stored, analyzed and visualized in real time. As a result, the introduction and development of systems able to handle huge amounts of data becomes a necessity. In this paper, a new approach for the collection, transmission and storage of aerial videos and metadata is introduced. The objective of this work is twofold. First, the integration of the appropriate equipment in order to capture and transmit real-time video, including metadata (i.e. position coordinates, target), from the UAV to the ground and, second, the utilization of the ADITESS Versatile Media Content Management System (VMCMS-GE) for storing the video stream and the appropriate metadata. Beyond storage, VMCMS-GE provides other efficient management capabilities such as searching and processing of videos, along with video transcoding. For the evaluation and demonstration of the proposed framework, we execute a use case where surveillance of critical infrastructure and detection of suspicious activities are performed. Transcoding of the collected video is also a subject of this evaluation.

  6. An Extensible Processing Framework for Eddy-covariance Data

    Science.gov (United States)

    Durden, D.; Fox, A. M.; Metzger, S.; Sturtevant, C.; Durden, N. P.; Luo, H.

    2016-12-01

    The evolution of large data-collecting networks has led not only to an increase in available information, but also to greater complexity in analyzing the observations. Timely dissemination of readily usable data products necessitates a streaming processing framework that is both automatable and flexible. Tower networks, such as ICOS, Ameriflux, and NEON, exemplify this issue by requiring large amounts of data to be processed from dispersed measurement sites. Eddy-covariance data from across the NEON network are expected to amount to 100 Gigabytes per day. The complexity of the algorithmic processing necessary to produce high-quality data products together with the continued development of new analysis techniques led to the development of a modular R-package, eddy4R. This allows algorithms provided by NEON and the larger community to be deployed in streaming processing, and to be used by community members alike. In order to control the processing environment, provide a proficient parallel processing structure, and certify dependencies are available during processing, we chose Docker as our "Development and Operations" (DevOps) platform. The Docker framework allows our processing algorithms to be developed, maintained and deployed at scale. Additionally, the eddy4R-Docker framework fosters community use and extensibility via pre-built Docker images and the Github distributed version control system. The capability to process large data sets is reliant upon efficient input and output of data, data compressibility to reduce compute resource loads, and the ability to easily package metadata. The Hierarchical Data Format (HDF5) is a file format that can meet these needs. A NEON standard HDF5 file structure and metadata attributes allow users to explore larger data sets in an intuitive "directory-like" structure adopting the NEON data product naming conventions.
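
    As a hedged, minimal illustration of the "directory-like" HDF5 layout mentioned above (written here with Python's h5py rather than eddy4R, and with made-up group and variable names rather than the NEON naming convention):

        # Write and read back a small hierarchical HDF5 file: nested groups act as
        # directories, datasets are compressed, and metadata travels as attributes.
        import h5py
        import numpy as np

        with h5py.File("tower_example.h5", "w") as f:
            grp = f.create_group("SITE01/dp0p/soni")           # hierarchical groups
            dset = grp.create_dataset("veloXaxs",
                                      data=np.random.randn(72000),
                                      compression="gzip")       # compressible storage
            dset.attrs["unit"] = "m s-1"                         # metadata as attributes

        with h5py.File("tower_example.h5", "r") as f:
            d = f["SITE01/dp0p/soni/veloXaxs"]
            print(d.shape, d.attrs["unit"], float(d[:100].mean()))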

  7. 31 CFR 235.5 - Reclamation amounts.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Reclamation amounts. 235.5 Section 235.5 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE... ON DESIGNATED DEPOSITARIES § 235.5 Reclamation amounts. Amounts received by way of reclamation on...

  8. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    A number of studies have been conducted on implementing oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effective route to carbon capture and storage. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which...... calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also result in a higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same

  9. 13 CFR 400.202 - Loan amount.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Loan amount. 400.202 Section 400.202 Business Credit and Assistance EMERGENCY STEEL GUARANTEE LOAN BOARD EMERGENCY STEEL GUARANTEE LOAN PROGRAM Steel Guarantee Loans § 400.202 Loan amount. (a) The aggregate amount of loan principal guaranteed...

  10. 13 CFR 500.202 - Loan amount.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Loan amount. 500.202 Section 500.202 Business Credit and Assistance EMERGENCY OIL AND GAS GUARANTEED LOAN BOARD EMERGENCY OIL AND GAS GUARANTEED LOAN PROGRAM Oil and Gas Guaranteed Loans § 500.202 Loan amount. The aggregate amount of loan...

  11. A review of concentrated flow erosion processes on rangelands: fundamental understanding and knowledge gaps

    Science.gov (United States)

    Concentrated flow erosion processes are distinguished from splash and sheetflow processes in their enhanced ability to mobilize and transport large amounts of soil, water and dissolved elements. On rangelands, soil, nutrients and water are scarce and only narrow margins of resource losses are tolera...

  12. FTSPlot: fast time series visualization for large datasets.

    Directory of Open Access Journals (Sweden)

    Michael Riss

    The analysis of electrophysiological recordings often involves visual inspection of time series data to locate specific experiment epochs, mask artifacts, and verify the results of signal processing steps, such as filtering or spike detection. Long-term experiments with continuous data acquisition generate large amounts of data. Rapid browsing through these massive datasets poses a challenge to conventional data plotting software because the plotting time increases proportionately to the increase in the volume of data. This paper presents FTSPlot, which is a visualization concept for large-scale time series datasets using techniques from the field of high performance computer graphics, such as hierarchic level of detail and out-of-core data handling. In a preprocessing step, time series data, event, and interval annotations are converted into an optimized data format, which then permits fast, interactive visualization. The preprocessing step has a computational complexity of O(n x log(N)); the visualization itself can be done with a complexity of O(1) and is therefore independent of the amount of data. A demonstration prototype has been implemented and benchmarks show that the technology is capable of displaying large amounts of time series data, event, and interval annotations lag-free with < 20 ms latency. The current 64-bit implementation theoretically supports datasets with up to 2^64 bytes, on the x86_64 architecture currently up to 2^48 bytes are supported, and benchmarks have been conducted with 2^40 bytes/1 TiB or 1.3 x 10^11 double precision samples. The presented software is freely available and can be included as a Qt GUI component in future software projects, providing a standard visualization method for long-term electrophysiological experiments.
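
    A hedged sketch of the hierarchic level-of-detail idea behind this kind of viewer (not FTSPlot's actual file format): precompute min/max pairs over blocks of 2, 4, 8, ... samples, then answer a display request from the coarsest level that still yields roughly one block per screen column, so drawing cost is independent of the underlying data volume.

        # Level-of-detail pyramid for time-series browsing: O(n) preprocessing,
        # roughly constant-size answers per display query.
        import numpy as np

        def build_pyramid(x):
            """Level k holds the (min, max) of each block of 2**k consecutive samples."""
            x = np.asarray(x, dtype=float)
            levels = [(x, x)]                      # level 0: every sample is its own block
            mins, maxs = x, x
            while len(mins) > 1:
                n = len(mins) // 2 * 2             # drop a trailing odd block for simplicity
                mins = np.minimum(mins[:n:2], mins[1:n:2])
                maxs = np.maximum(maxs[:n:2], maxs[1:n:2])
                levels.append((mins, maxs))
            return levels

        def envelope(levels, start, stop, width=1000):
            """Min/max envelope of samples [start, stop) using roughly `width` blocks."""
            k = 0
            while k + 1 < len(levels) and (stop - start) // 2 ** (k + 1) >= width:
                k += 1
            mins, maxs = levels[k]
            lo, hi = start // 2 ** k, -(-stop // 2 ** k)   # floor and ceiling division
            return mins[lo:hi], maxs[lo:hi]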

  13. Organizing data in arable farming : towards an ontology of processing potato

    NARCIS (Netherlands)

    Haverkort, A.J.; Top, J.L.; Verdenius, F.

    2006-01-01

    Arable farmers and their suppliers, consultants and procurers are increasingly dealing with gathering and processing of large amounts of data. Data sources are related to mandatory and voluntary registration (certification, tracing and tracking, quality control). Besides data collected for

  14. Energy Inputs Uncertainty: Total Amount, Distribution and Correlation Between Different Forms of Energy

    Science.gov (United States)

    Deng, Yue

    2014-01-01

    Describes solar energy inputs contributing to ionospheric and thermospheric weather processes, including total energy amounts, distributions and the correlation between particle precipitation and Poynting flux.

  15. 45 CFR 32.8 - Amounts withheld.

    Science.gov (United States)

    2010-10-01

    ...) of this section, or (ii) An amount equal to 25% of the debtor's disposable pay less the amount(s... first pay day after the employer receives the order. However, if the first pay day is within 10 days after receipt of the order, the employer may begin deductions on the second pay day. (k) An employer may...

  16. The power of event-driven analytics in Large Scale Data Processing

    CERN Multimedia

    CERN. Geneva; Marques, Paulo

    2011-01-01

    FeedZai is a software company specialized in creating high-throughput low-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differs from previous history, creating timely business intelligence updated to the second. The tool does predictive analytics and trend analysis, displaying data on real-time web-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition, in the Digital Models category, being considered one of the "top-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic for large-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse in...

  17. Progress in Root Cause and Fault Propagation Analysis of Large-Scale Industrial Processes

    Directory of Open Access Journals (Sweden)

    Fan Yang

    2012-01-01

    In large-scale industrial processes, a fault can easily propagate between process units due to the interconnections of material and information flows. Thus the problem of fault detection and isolation for these processes is more concerned about the root cause and fault propagation before applying quantitative methods in local models. Process topology and causality, as the key features of the process description, need to be captured from process knowledge and process data. The modelling methods from these two aspects are overviewed in this paper. From process knowledge, structural equation modelling, various causal graphs, rule-based models, and ontological models are summarized. From process data, cross-correlation analysis, Granger causality and its extensions, frequency domain methods, information-theoretical methods, and Bayesian nets are introduced. Based on these models, inference methods are discussed to find root causes and fault propagation paths under abnormal situations. Some future work is proposed in the end.
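
    As a hedged illustration of the simplest data-driven ingredient mentioned in the survey (cross-correlation analysis, not Granger causality or the knowledge-based models), the sketch below proposes a directed edge x -> y whenever the correlation of x(t) with y(t + lag) peaks at some positive lag above a threshold; the resulting edge list is a crude fault-propagation graph. Thresholds and variable names are illustrative.

        # Crude propagation graph from lagged cross-correlation between variables.
        import numpy as np

        def xcorr_edge(x, y, max_lag=50, threshold=0.6):
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            best_lag, best_r = 0, 0.0
            for lag in range(1, max_lag + 1):
                r = np.mean(x[:-lag] * y[lag:])        # corr of x(t) with y(t+lag)
                if abs(r) > abs(best_r):
                    best_lag, best_r = lag, r
            return (best_lag, best_r) if abs(best_r) >= threshold else None

        def propagation_graph(data, **kw):
            """data: dict name -> 1-D array; returns list of (src, dst, lag, corr)."""
            edges = []
            for src, xs in data.items():
                for dst, ys in data.items():
                    if src == dst:
                        continue
                    hit = xcorr_edge(np.asarray(xs, float), np.asarray(ys, float), **kw)
                    if hit:
                        edges.append((src, dst, hit[0], round(hit[1], 3)))
            return edges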

  18. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

    Science.gov (United States)

    Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification, etc. Most state of the art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement based on evolutionary algorithms and Kernel-Adatron for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
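
    As a hedged illustration of the Kernel-Adatron component named above, shown here in its plain, non-evolutionary form on toy data: the dual variables are pushed towards margins of one and clipped at zero, using an RBF kernel and no bias term. This sketches the classical update only, not the paper's evolutionary training scheme.

        # Minimal Kernel-Adatron sketch:
        # alpha_i <- max(0, alpha_i + eta * (1 - y_i * sum_j alpha_j y_j K(x_i, x_j)))
        import numpy as np

        def rbf_kernel(X, gamma=0.5):
            sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * sq)

        def kernel_adatron(X, y, eta=0.1, epochs=200):
            K = rbf_kernel(X)
            alpha = np.zeros(len(y))
            for _ in range(epochs):
                for i in range(len(y)):
                    margin = y[i] * np.dot(alpha * y, K[i])
                    alpha[i] = max(0.0, alpha[i] + eta * (1.0 - margin))
            return alpha

        def predict(alpha, X, y, Xnew, gamma=0.5):
            sq = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.sign(np.exp(-gamma * sq) @ (alpha * y))

        X = np.array([[0., 0.], [0., 1.], [3., 3.], [4., 3.]])
        y = np.array([-1., -1., 1., 1.])
        print(predict(kernel_adatron(X, y), X, y, X))   # should recover the labels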

  19. Implicit and Explicit Cognitive Processes in Incidental Vocabulary Acquisition

    Science.gov (United States)

    Ender, Andrea

    2016-01-01

    Studies on vocabulary acquisition in second language learning have revealed that a large amount of vocabulary is learned without an overt intention, in other words, incidentally. This article investigates the relevance of different lexical processing strategies for vocabulary acquisition when reading a text for comprehension among 24 advanced…

  20. Language-agnostic processing of microblog data with text embeddings

    NARCIS (Netherlands)

    Chrupala, Grzegorz

    2014-01-01

    A raw stream of posts from a microblogging platform such as Twitter contains text written in a large variety of languages and writing systems, in registers ranging from formal to internet slang. A significant amount of effort has been expended in recent years to adapt standard NLP processing pipelines to be

  1. Buffer mass test - data acquisition and data processing systems

    International Nuclear Information System (INIS)

    Hagvall, B.

    1982-08-01

    This report describes the data acquisition and data processing systems used for the Buffer Mass Test at Stripa. The data acquisition system in Stripa, designed mainly to provide high reliability, produces raw-data log tapes. Copies of these tapes are mailed to the computer center at the University of Luleå for processing of the raw data. The computer systems in Luleå offer a wide range of processing facilities: large mass storage units, several plotting facilities, programs for processing and monitoring of vast amounts of data, etc. (Author)

  2. An experimental investigation of the combustion process of a heavy-duty diesel engine enriched with H2

    Energy Technology Data Exchange (ETDEWEB)

    Liew, C.; Li, H.; Nuszkowski, J.; Liu, S.; Gatts, T.; Atkinson, R.; Clark, N. [Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, WV 26506-6106 (United States)]

    2010-10-15

    This paper investigated the effect of hydrogen (H2) addition on the combustion process of a heavy-duty diesel engine. The addition of a small amount of H2 was shown to have a mild effect on the cylinder pressure and combustion process. When operated at high load, the addition of a relatively large amount of H2 substantially increased the peak cylinder pressure and the peak heat release rate. Compared to the two-stage combustion process of diesel engines, a featured three-stage combustion process of the H2-diesel dual fuel engine was observed. The extremely high peak heat release rate represented a combination of diesel diffusion combustion and the premixed combustion of H2 consumed by multiple turbulent flames, which substantially enhanced the combustion process of the H2-diesel dual fuel engine. However, the addition of a relatively large amount of H2 at low load did not change the two-stage heat release process pattern. The premixed combustion was dramatically inhibited while the diffusion combustion was slightly enhanced and elongated. The substantially reduced peak cylinder pressure at low load was due to the deteriorated premixed combustion. (author)

  3. COPASutils: an R package for reading, processing, and visualizing data from COPAS large-particle flow cytometers.

    Directory of Open Access Journals (Sweden)

    Tyler C Shimko

    The R package COPASutils provides a logical workflow for the reading, processing, and visualization of data obtained from the Union Biometrica Complex Object Parametric Analyzer and Sorter (COPAS) or the BioSorter large-particle flow cytometers. Data obtained from these powerful experimental platforms can be unwieldy, leading to difficulties in the ability to process and visualize the data using existing tools. Researchers studying small organisms, such as Caenorhabditis elegans, Anopheles gambiae, and Danio rerio, and using these devices will benefit from this streamlined and extensible R package. COPASutils offers a powerful suite of functions for the rapid processing and analysis of large high-throughput screening data sets.

  4. 45 CFR 149.100 - Amount of reimbursement.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Amount of reimbursement. 149.100 Section 149.100... REQUIREMENTS FOR THE EARLY RETIREE REINSURANCE PROGRAM Reinsurance Amounts § 149.100 Amount of reimbursement... reimbursement in the amount of 80 percent of the costs for health benefits (net of negotiated price concessions...

  5. Method for removing trace contaminants from multicurie amounts of 144Ce

    International Nuclear Information System (INIS)

    Wagner, J.A.; Kanapilly, G.M.

    1976-01-01

    Removal of contaminants from stock solutions of 144Ce(III) was required for large quantities of 144Ce prior to incorporation into fused aluminosilicate particles for inhalation toxicology studies. Since available procedures for purification of 144Ce could not be readily adapted to our laboratory conditions and requirements, a simple procedure was developed to purify 144Ce in multicurie quantities of 144Ce(III). This procedure consists of separation of 144Ce from contaminants by precipitation and filtrations at different pH. Its simplicity and efficacy in providing a stock solution that would readily exchange into montmorillonite clay was demonstrated when it was used during the preparation of large amounts of 144Ce in fused aluminosilicate particles

  6. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time. (orig.)

  7. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time
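
    A hedged sketch of the locality-preserving sort described in these two records: particles are binned by grid-cell index and every particle array is reordered with the same permutation, so the charge-accumulation (and pushing) loops walk through grid memory, and hence virtual-memory pages, roughly in order. The grid layout and the nearest-grid-point deposition below are illustrative choices, not the original codes.

        # Reorder particle arrays by cell index so spatially adjacent particles
        # sit next to each other in memory.
        import numpy as np

        def sort_particles_by_cell(x, y, vx, vy, dx, nx):
            cell = (y // dx).astype(int) * nx + (x // dx).astype(int)   # cell index
            order = np.argsort(cell, kind="stable")                     # nominal sort
            return x[order], y[order], vx[order], vy[order]

        def deposit_charge(x, y, dx, nx, ny, q=1.0):
            rho = np.zeros((ny, nx))
            ix, iy = (x // dx).astype(int), (y // dx).astype(int)
            np.add.at(rho, (iy, ix), q)        # nearest-grid-point deposition
            return rho

        # After sorting, successive particles touch successive (iy, ix) entries, so
        # accesses to rho (and to the field arrays in the push) stay within few pages.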

  8. Large deviations for the Fleming-Viot process with neutral mutation and selection

    OpenAIRE

    Dawson, Donald; Feng, Shui

    1998-01-01

    Large deviation principles are established for the Fleming-Viot processes with neutral mutation and selection, and the corresponding equilibrium measures as the sampling rate goes to 0. All results are first proved for the finite allele model, and then generalized, through the projective limit technique, to the infinite allele model. Explicit expressions are obtained for the rate functions.

  9. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of the memory resources for models with more than 1000 electrons because of the need to store a large amount of electronic-orbital-related data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data handled with MPI or Unix inter-process communication tools, and (2) second-level parallelism for the configuration computation

  10. Methods to isolate a large amount of generative cells, sperm cells and vegetative nuclei from tomato pollen for omics analysis

    Directory of Open Access Journals (Sweden)

    Yunlong eLu

    2015-06-01

    The development of sperm cells from microspores involves a set of finely regulated molecular and cellular events and the coordination of these events. The mechanisms underlying these events and their interconnections remain a major challenge. Systems analysis of genome-wide molecular networks and functional modules with high-throughput omics approaches is crucial for understanding the mechanisms; however, this study is hindered because of the difficulty in isolating a large amount of cells of different types, especially generative cells (GCs), from the pollen. Here, we optimized the conditions of tomato pollen germination and pollen tube growth to allow for long-term growth of pollen tubes in vitro with sperm cells (SCs) generated in the tube. Using this culture system, we developed methods for isolating GCs, SCs and vegetative-cell nuclei (VN) from just-germinated tomato pollen grains and growing pollen tubes and their purification by Percoll density gradient centrifugation. The purity and viability of isolated GCs and SCs were confirmed by microscopy examination and fluorescein diacetate staining, respectively, and the integrity of VN was confirmed by propidium iodide staining. We could obtain about 1.5 million GCs and 2.0 million SCs each from 180 mg initiated pollen grains, and 10 million VN from 270 mg initiated pollen grains germinated in vitro in each experiment. These methods provide the necessary preconditions for systematic biology studies of SC development and differentiation in higher plants.

  11. Visual attention mitigates information loss in small- and large-scale neural codes.

    Science.gov (United States)

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-04-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires that sensory signals are processed in a manner that protects information about relevant stimuli from degradation. Such selective processing--or selective attention--is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, thereby providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Mass Processing of Sentinel-1 Images for Maritime Surveillance

    Directory of Open Access Journals (Sweden)

    Carlos Santamaria

    2017-07-01

    The free, full and open data policy of the EU’s Copernicus programme has vastly increased the amount of remotely sensed data available to both operational and research activities. However, this huge amount of data calls for new ways of accessing and processing such “big data”. This paper focuses on the use of Copernicus’s Sentinel-1 radar satellite for maritime surveillance. It presents a study in which ship positions have been automatically extracted from more than 11,500 Sentinel-1A images collected over the Mediterranean Sea, and compared with ship position reports from the Automatic Identification System (AIS). These images account for almost all the Sentinel-1A acquisitions taken over the area during the two-year period from the start of the operational phase in October 2014 until September 2016. A number of tools and platforms developed at the European Commission’s Joint Research Centre (JRC) that have been used in the study are described in the paper. They are: (1) Search for Unidentified Maritime Objects (SUMO), a tool for ship detection in Synthetic Aperture Radar (SAR) images; (2) the JRC Earth Observation Data and Processing Platform (JEODPP), a platform for efficient storage and processing of large amounts of satellite images; and (3) Blue Hub, a maritime surveillance GIS and data fusion platform. The paper presents the methodology and results of the study, giving insights into the new maritime surveillance knowledge that can be gained by analysing such a large dataset, and the lessons learnt in terms of handling and processing the big dataset.
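
    A hedged sketch of one step such a SAR-to-AIS comparison needs (not the SUMO/Blue Hub implementation): interpolate each AIS track to the SAR acquisition time and pair it with the nearest radar detection inside a distance gate. Gate size, track format and function names are illustrative assumptions.

        # Match SAR ship detections to AIS tracks at the image acquisition time.
        import math

        def haversine_km(lat1, lon1, lat2, lon2):
            r = 6371.0
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))

        def interp_track(track, t):
            """track: time-sorted list of (t, lat, lon); linear interpolation at t."""
            for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:]):
                if t0 <= t <= t1:
                    w = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
                    return la0 + w * (la1 - la0), lo0 + w * (lo1 - lo0)
            return None

        def match(detections, ais_tracks, t_image, gate_km=1.0):
            """detections: list of (lat, lon); returns {mmsi: detection index}."""
            pairs = {}
            for mmsi, track in ais_tracks.items():
                pos = interp_track(track, t_image)
                if pos is None:
                    continue
                dists = [haversine_km(pos[0], pos[1], la, lo) for la, lo in detections]
                if dists and min(dists) <= gate_km:
                    pairs[mmsi] = dists.index(min(dists))
            return pairs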

  13. Theoretical implications for the estimation of dinitrogen fixation by large perennial plant species using isotope dilution

    Science.gov (United States)

    Dwight D. Baker; Maurice Fried; John A. Parrotta

    1995-01-01

    Estimation of symbiotic N2 fixation associated with large perennial plant species, especially trees, poses special problems because the process must be followed over a potentially long period of time to integrate the total amount of fixation. Estimations using isotope dilution methodology have begun to be used for trees in field studies. Because...

  14. Exploring the Amount and Type of Writing Instruction during Language Arts Instruction in Kindergarten Classrooms

    Science.gov (United States)

    Puranik, Cynthia S.; Al Otaiba, Stephanie; Sidler, Jessica Folsom; Greulich, Luana

    2014-01-01

    The objective of this exploratory investigation was to examine the nature of writing instruction in kindergarten classrooms and to describe student writing outcomes at the end of the school year. Participants for this study included 21 teachers and 238 kindergarten children from nine schools. Classroom teachers were videotaped once each in the fall and winter during the 90-minute instructional block for reading and language arts to examine time allocation and the types of writing instructional practices taking place in the kindergarten classrooms. Classroom observation of writing was divided into student-practice variables (activities in which students were observed practicing writing or writing independently) and teacher-instruction variables (activities in which the teacher was observed providing direct writing instruction). In addition, participants completed handwriting fluency, spelling, and writing tasks. Large variability was observed in the amount of writing instruction occurring in the classroom, the amount of time kindergarten teachers spent on writing, and the amount of time students spent writing. Marked variability was also observed in classroom practices both within and across schools, and this was reflected in the large variability noted in kindergartners’ writing performance. PMID:24578591

  15. Exploring the Amount and Type of Writing Instruction during Language Arts Instruction in Kindergarten Classrooms.

    Science.gov (United States)

    Puranik, Cynthia S; Al Otaiba, Stephanie; Sidler, Jessica Folsom; Greulich, Luana

    2014-02-01

    The objective of this exploratory investigation was to examine the nature of writing instruction in kindergarten classrooms and to describe student writing outcomes at the end of the school year. Participants for this study included 21 teachers and 238 kindergarten children from nine schools. Classroom teachers were videotaped once each in the fall and winter during the 90-minute instructional block for reading and language arts to examine time allocation and the types of writing instructional practices taking place in the kindergarten classrooms. Classroom observation of writing was divided into student-practice variables (activities in which students were observed practicing writing or writing independently) and teacher-instruction variables (activities in which the teacher was observed providing direct writing instruction). In addition, participants completed handwriting fluency, spelling, and writing tasks. Large variability was observed in the amount of writing instruction occurring in the classroom, the amount of time kindergarten teachers spent on writing, and the amount of time students spent writing. Marked variability was also observed in classroom practices both within and across schools, and this was reflected in the large variability noted in kindergartners' writing performance.

  16. Seismic Shot Processing on GPU

    OpenAIRE

    Johansen, Owe

    2009-01-01

    Today's petroleum industry demands an ever-increasing amount of computational resources. Seismic processing applications in use by these types of companies have generally been run on large clusters of compute nodes whose only computing resource has been the CPU. However, using Graphics Processing Units (GPU) for general-purpose programming is becoming increasingly popular in the high-performance computing area. In 2007, NVIDIA corporation launched their framework for develo...

  17. Influence of social norms and palatability on amount consumed and food choice.

    Science.gov (United States)

    Pliner, Patricia; Mann, Nikki

    2004-04-01

    In two parallel studies, we examined the effect of social influence and palatability on amount consumed and on food choice. In Experiment 1, which looked at amount consumed, participants were provided with either palatable or unpalatable food; they were also given information about how much previous participants had eaten (large or small amounts) or were given no information. In the case of palatable food, participants ate more when led to believe that prior participants had eaten a great deal than when led to believe that prior participants had eaten small amounts or when provided with no information. This social-influence effect was not present when participants received unpalatable food. In Experiment 2, which looked at food choice, some participants learned that prior participants had chosen the palatable food, others learned that prior participants had chosen the unpalatable food, while still others received no information about prior participants' choices. The social-influence manipulation had no effect on participants' food choices; nearly all of them chose the palatable food. The results were discussed in the context of Churchfield's (1995) distinction between judgments about matters of fact and judgments about preferences. The results were also used to illustrate the importance of palatability as a determinant of eating behavior.

  18. On interaction of large dust grains with fusion plasma

    International Nuclear Information System (INIS)

    Krasheninnikov, S. I.; Smirnov, R. D.

    2009-01-01

    So far, the models used to study dust grain-plasma interactions in fusion plasmas neglect the effects of dust material vapor, which is always present around dust in the rather hot and dense edge plasma environment of fusion devices. However, when the vapor density and/or the amount of ionized vapor atoms becomes large enough, they can alter the grain-plasma interactions. Somewhat similar processes occur during pellet injection into fusion plasma. In this brief communication, the applicability limits of models that ignore vapor effects in grain-plasma interactions are obtained.

  19. Large datasets: Segmentation, feature extraction, and compression

    Energy Technology Data Exchange (ETDEWEB)

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output that must be scanned quickly dilutes the investigator's ability to confidently identify all the meaningful patterns and trends that may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.
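
As a toy illustration of pairing feature extraction with clustering for data surveying, the sketch below summarises fixed-length segments of a long series by two simple features and groups the segments with plain k-means. The segment length, the choice of features and the use of k-means are illustrative assumptions, not the tools developed in the project.

```python
# Minimal sketch: summarise segments of a large series by simple features
# (mean, standard deviation) and group them with k-means.  The feature choice
# and the use of k-means are illustrative assumptions, not the authors' tools.
import numpy as np

def segment_features(series, seg_len=100):
    """Split a 1-D series into segments and extract (mean, std) per segment."""
    n = len(series) // seg_len
    segs = series[: n * seg_len].reshape(n, seg_len)
    return np.column_stack([segs.mean(axis=1), segs.std(axis=1)])

def kmeans(x, k=3, iters=50, seed=0):
    """Plain k-means; returns a cluster label for each row of x."""
    rng = np.random.default_rng(seed)
    centres = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centres[None, :, :]) ** 2).sum(-1), axis=1)
        centres = np.array([x[labels == j].mean(axis=0) if np.any(labels == j) else centres[j]
                            for j in range(k)])
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    series = np.concatenate([rng.normal(0, 1, 5000), rng.normal(5, 3, 5000)])
    feats = segment_features(series)
    print(np.bincount(kmeans(feats)))   # segment counts per cluster
```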

  20. Design Principles for Improving the Process of Publishing Open data

    NARCIS (Netherlands)

    Zuiderwijk, A.M.G.; Janssen, M.F.W.H.A.; Choenni, R.; Meijer, R.F.

    2014-01-01

    Purpose: Governments create large amounts of data. However, the publication of open data is often cumbersome and there are no standard procedures and processes for opening data. This blocks the easy publication of government data. The purpose of this paper is to derive design principles for

  1. High-Temperature-Short-Time Annealing Process for High-Performance Large-Area Perovskite Solar Cells.

    Science.gov (United States)

    Kim, Minjin; Kim, Gi-Hwan; Oh, Kyoung Suk; Jo, Yimhyun; Yoon, Hyun; Kim, Ka-Hyun; Lee, Heon; Kim, Jin Young; Kim, Dong Suk

    2017-06-27

    Organic-inorganic hybrid metal halide perovskite solar cells (PSCs) are attracting tremendous research interest due to their high solar-to-electric power conversion efficiency with a high possibility of cost-effective fabrication and certified power conversion efficiency now exceeding 22%. Although many effective methods for their application have been developed over the past decade, their practical transition to large-size devices has been restricted by difficulties in achieving high performance. Here we report on the development of a simple and cost-effective production method with high-temperature and short-time annealing processing to obtain uniform, smooth, and large-size grain domains of perovskite films over large areas. With high-temperature short-time annealing at 400 °C for 4 s, a perovskite film with an average domain size of 1 μm was obtained as a result of fast solvent evaporation. Solar cells fabricated using this processing technique had a maximum power conversion efficiency exceeding 20% over a 0.1 cm² active area and 18% over a 1 cm² active area. We believe our approach will enable the realization of highly efficient large-area PSCs for practical development with a very simple and short-time procedure. This simple method should lead the field toward the fabrication of uniform large-scale perovskite films, which are necessary for the production of high-efficiency solar cells that may also be applicable to several other material systems for more widespread practical deployment.

  2. Carbon-Nitrogen Relationships during the Humification of Cellulose in Soils Containing Different Amounts of Clay

    DEFF Research Database (Denmark)

    Sørensen, Lasse Holst

    1981-01-01

    the 2 soils with the high content of clay had a relatively high content of available unlabeled soil-N which was used for synthesis of metabolic material. The proportionate retention of labeled C for a given soil was largely independent of the size of the amendments, whereas the proportionate amount... Some of the labeled organic N when mineralized was re-incorporated into organic compounds containing increasing proportions of native soil-C, whereas labeled C when mineralized as CO2 disappeared from the soils. The amount of labeled amino acid-C, formed during decomposition of the labeled cellulose... was most intense, and it held throughout the 4 yr of the incubation; proportionally it was independent of the amount of cellulose added and the temperature. The labeled amino acid-N content was not directly related to the amount of clay in the soil, presumably because more unlabeled soil-N was available...

  3. Total process surveillance: (TOPS)

    International Nuclear Information System (INIS)

    Millar, J.H.P.

    1992-01-01

    A Total Process Surveillance system is under development which can provide, in real time, additional process information from a limited number of raw measurement signals. This is achieved by using a robust model-based observer to generate estimates of the process's internal states. The observer utilises the analytical redundancy among a diverse range of transducers and can thus accommodate off-normal conditions which lead to transducer loss or damage. The modular hierarchical structure of the system enables the maximum amount of information to be assimilated from the available instrument signals, no matter how diverse. This structure also constitutes a data reduction path, thus reducing operator cognitive overload from a large number of varying, and possibly contradictory, raw plant signals. (orig.)

  4. Disentangling the Effects of Precipitation Amount and Frequency on the Performance of 14 Grassland Species

    Science.gov (United States)

    Didiano, Teresa J.; Johnson, Marc T. J.; Duval, Tim P.

    2016-01-01

    Climate change is causing shifts in the amount and frequency of precipitation in many regions, which is expected to have implications for plant performance. Most research has examined the impacts of the amount of precipitation on plants rather than the effects of both the amount and frequency of precipitation. To understand how climate-driven changes in precipitation can affect grassland plants, we asked: (i) How does the amount and frequency of precipitation affect plant performance? (ii) Do plant functional groups vary in their response to variable precipitation? To answer these questions we grew 14 monocot and eudicot grassland species and conducted a factorial manipulation of the amount (70 vs 90 mm/month) and frequency (every 3, 15, or 30 days) of precipitation under rainout shelters. Our results show that both the amount and frequency of precipitation impact plant performance, with larger effects on eudicots than monocots. Above- and below-ground biomass were affected by the amount of precipitation and/or the interaction between the amount and frequency of precipitation. Above-ground biomass increased by 21–30% when the amount of precipitation was increased. When event frequency was decreased from 3 to 15 or 30 days, below-ground biomass generally decreased by 18–34% in the 70 mm treatment, but increased by 33–40% in the 90 mm treatment. Changes in stomatal conductance were largely driven by changes in event frequency. Our results show that it is important to consider changes in both the amount and frequency of precipitation when predicting how plant communities will respond to variable precipitation. PMID:27622497

  5. Large-scale continuous process to vitrify nuclear defense waste: operating experience with nonradioactive waste

    International Nuclear Information System (INIS)

    Cosper, M.B.; Randall, C.T.; Traverso, G.M.

    1982-01-01

    The developmental program underway at SRL has demonstrated the vitrification process proposed for the sludge processing facility of the DWPF on a large scale. DWPF design criteria for production rate, equipment lifetime, and operability have all been met. The expected authorization and construction of the DWPF will result in the safe and permanent immobilization of a major quantity of existing high level waste. 11 figures, 4 tables

  6. Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Kishimoto, Yasuaki; Sugahara, Akihiro; Li, J.Q.

    2008-01-01

    Large-scale simulation using supercomputers, which generally requires long CPU time and produces large amounts of data, has been extensively studied as a third pillar in various advanced science fields, in parallel to theory and experiment. Such simulations are expected to lead to new scientific discoveries through the elucidation of complex phenomena that can hardly be identified by conventional theoretical and experimental approaches alone. In order to assist such large simulation studies, in which many collaborators working at geographically different places participate and contribute, we have developed a unique remote collaboration system, referred to as SIMON (simulation monitoring system), which is based on client-server control and introduces the idea of update processing, in contrast to the widely used post-processing. As a key ingredient, we have developed a trigger method, which transmits requests for update processing from the simulation (client) running on a supercomputer to a workstation (server); that is, the simulation running on the supercomputer actively controls the timing of update processing. The server, on receiving requests from the ongoing simulation (such as data transfer, data analysis, and visualization), starts the corresponding operations during the simulation. The server makes the latest results available to web browsers, so that collaborators can monitor the results at any place and time in the world. By applying the system to a specific simulation project of laser-matter interaction, we have confirmed that the system works well and plays an important role as a collaboration platform on which many collaborators work with one another

  7. Irregular Morphing for Real-Time Rendering of Large Terrain

    Directory of Open Access Journals (Sweden)

    S. Kalem

    2016-06-01

    Full Text Available The following paper proposes an alternative approach to the real-time adaptive triangulation problem. A new region-based multi-resolution approach for terrain rendering is described which improves, on the fly, the distribution of the density of triangles inside a tile after selecting the appropriate Level-Of-Detail by adaptive sampling. The proposed approach organizes the heightmap into a QuadTree of tiles that are processed independently. This technique combines the benefits of both the Triangular Irregular Network approach and the region-based multi-resolution approach by improving the distribution of the density of triangles inside the tile. Our technique morphs the initial regular grid of the tile to a deformed grid in order to minimize the approximation error. The proposed technique strives to combine large tile size and real-time processing while guaranteeing an upper bound on the screen-space error. Thus, this approach adapts the terrain rendering process to local surface characteristics and enables on-the-fly handling of large amounts of terrain data. Morphing is based on multi-resolution wavelet analysis. The use of the D2WT multi-resolution analysis of the terrain heightmap speeds up processing and permits interactive terrain rendering. Tests and experiments demonstrate that the Haar B-Spline wavelet, well known for its localization properties and compact support, is suitable for fast and accurate redistribution. Such a technique could be exploited in a client-server architecture to support interactive, high-quality remote visualization of very large terrains.
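
The sketch below illustrates the general idea of letting per-tile wavelet detail drive the level of detail: one level of a Haar-style averaging/differencing transform is applied to a square heightmap tile, and the energy of the detail coefficients is mapped to a discrete LOD. The thresholds, the LOD mapping and the synthetic tiles are assumptions for illustration; this is not the paper's D2WT implementation or its morphing scheme.

```python
# Minimal sketch of region-based LOD selection: a one-level, Haar-style
# averaging/differencing transform of a square heightmap tile, whose detail
# energy drives how densely the tile would be triangulated.
import numpy as np

def haar2d_level(tile):
    """One level of a Haar-style 2-D decomposition: (approximation, details)."""
    a = (tile[0::2, :] + tile[1::2, :]) / 2.0     # average pairs of rows
    d_r = (tile[0::2, :] - tile[1::2, :]) / 2.0   # row-wise details
    approx = (a[:, 0::2] + a[:, 1::2]) / 2.0      # average pairs of columns
    d_c = (a[:, 0::2] - a[:, 1::2]) / 2.0         # column-wise details
    return approx, np.concatenate([d_r.ravel(), d_c.ravel()])

def choose_lod(tile, thresholds=(0.5, 2.0, 8.0)):
    """Map the tile's detail energy to a discrete level of detail
    (0 = coarsest).  Higher detail energy -> finer triangulation."""
    _, details = haar2d_level(tile)
    energy = float(np.sqrt(np.mean(details ** 2)))
    return sum(energy > t for t in thresholds), energy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    flat = np.zeros((64, 64))
    rough = rng.normal(0.0, 10.0, (64, 64))
    print("flat tile  ->", choose_lod(flat))
    print("rough tile ->", choose_lod(rough))
```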

  8. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal of being accurate with regard to the main operational characteristics. We measure buffer occupancy by counting the number of elements in buffers, resource utilization by measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive, small periods of time. We perform studies on the error of simulation when comparing the results to a large amount of real-world ope...

  9. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this paper, we introduce a discrete event-based simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal of being accurate with regard to the main operational characteristics. We measure buffer occupancy by counting the number of elements in buffers, resource utilization by measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive, small periods of time. We perform studies on the error in simulation when comparing the results to a large amount of real-world ...
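
The toy sketch below conveys the flavour of such a discrete event-based model: arrival events push data fragments into a buffer, a fixed pool of processing units drains it, and buffer occupancy and unit utilisation are sampled over consecutive small time windows. The arrival rate, service time, pool size and window length are invented for illustration and bear no relation to the real ATLAS DAQ parameters.

```python
# Toy discrete-event sketch: a buffer fed by random arrivals and drained by a
# fixed pool of processing units; occupancy and busy units are sampled over
# small, consecutive time windows.  All parameters are made-up assumptions.
import heapq, random

def simulate(t_end=10.0, arrival_rate=1500.0, service_time=0.002, units=4, window=0.5):
    random.seed(0)
    events = [(random.expovariate(arrival_rate), "arrival")]
    buffer_len, busy = 0, 0
    samples, next_sample = [], window
    busy_time, last_t = 0.0, 0.0
    while events:
        t, kind = heapq.heappop(events)
        if t > t_end:
            break
        busy_time += busy * (t - last_t)          # integrate busy units over time
        last_t = t
        while t >= next_sample:                   # close finished sampling windows
            samples.append((next_sample, buffer_len, busy))
            next_sample += window
        if kind == "arrival":
            buffer_len += 1
            heapq.heappush(events, (t + random.expovariate(arrival_rate), "arrival"))
        else:                                     # a processing unit finished
            busy -= 1
        while busy < units and buffer_len > 0:    # start work on queued fragments
            buffer_len -= 1
            busy += 1
            heapq.heappush(events, (t + service_time, "done"))
    return samples, busy_time / t_end

if __name__ == "__main__":
    samples, mean_busy = simulate()
    for t, occ, busy in samples[:5]:
        print(f"t={t:4.1f}s  buffer={occ:4d}  busy units={busy}")
    print(f"mean busy units over the run: {mean_busy:.2f}")
```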

  10. Large-scale volcanism associated with coronae on Venus

    Science.gov (United States)

    Roberts, K. Magee; Head, James W.

    1993-01-01

    The formation and evolution of coronae on Venus are thought to be the result of mantle upwellings against the crust and lithosphere and subsequent gravitational relaxation. A variety of other features on Venus have been linked to processes associated with mantle upwelling, including shield volcanoes on large regional rises such as Beta, Atla and Western Eistla Regiones and extensive flow fields such as Mylitta and Kaiwan Fluctus near the Lada Terra/Lavinia Planitia boundary. Of these features, coronae appear to possess the smallest amounts of associated volcanism, although volcanism associated with coronae has only been qualitatively examined. An initial survey of coronae based on recent Magellan data indicated that only 9 percent of all coronae are associated with substantial amounts of volcanism, including interior calderas or edifices greater than 50 km in diameter and extensive, exterior radial flow fields. Sixty-eight percent of all coronae were found to have lesser amounts of volcanism, including interior flooding and associated volcanic domes and small shields; the remaining coronae were considered deficient in associated volcanism. It is possible that coronae are related to mantle plumes or diapirs that are lower in volume or in partial melt than those associated with the large shields or flow fields. Regional tectonics or variations in local crustal and thermal structure may also be significant in determining the amount of volcanism produced from an upwelling. It is also possible that flow fields associated with some coronae are sheet-like in nature and may not be readily identified. If coronae are associated with volcanic flow fields, then they may be a significant contributor to plains formation on Venus, as they number over 300 and are widely distributed across the planet. As a continuation of our analysis of large-scale volcanism on Venus, we have reexamined the known population of coronae and assessed quantitatively the scale of volcanism associated

  11. The Influence of Polyvinyl Alcohol Amount and Temperatures of the Sol Process on the Formation of (NH4)2U2O7 Gel

    International Nuclear Information System (INIS)

    Indra-Suryawan; Bangun-Wasito; Damunir; Hidayati; Setyo-Sulardi; Bambang-Siswanto; Ari-Handayani

    2000-01-01

    Research was carried out on the influence of polyvinyl alcohol (PVA) amount and temperature on the resulting sol solutions. The sols were fed into a gelation process using an ammonia medium, yielding spherical (NH4)2U2O7 gels. The PVA additions in the sol process were 10, 15, 20, 25 and 30 g per litre of uranyl nitrate. The temperatures in the sol process were 75, 80, 85, 90 and 95 °C. The sol that successfully yielded (NH4)2U2O7 gel in the gelation process was obtained with a PVA amount of 15 g at 90 °C. The resulting gel was then washed, dried and calcined. The shape and surface of the gel were characterized by SEM photography, and the densities of the gels were measured. For the 15 g PVA addition, the density was 8.3710 g/ml, while for the process at 90 °C, the density was 7.8871 g/ml. (author)

  12. Large deviations

    CERN Document Server

    Varadhan, S R S

    2016-01-01

    The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.

  13. Oxygen isotopes in tree rings record variation in precipitation δ18O and amount effects in the south of Mexico.

    Science.gov (United States)

    Brienen, Roel J W; Hietz, Peter; Wanek, Wolfgang; Gloor, Manuel

    2013-12-01

    Natural archives of oxygen isotopes in precipitation may be used to study changes in the hydrological cycle in the tropics, but their interpretation is not straightforward. We studied to what degree tree rings of Mimosa acantholoba from southern Mexico record variation in the isotopic composition of precipitation and which climatic processes influence oxygen isotopes in tree rings (δ18Otr). Interannual variation in δ18Otr was highly synchronized between trees and closely related to the isotopic composition of rain measured at San Salvador, 710 km to the southwest. Correlations with δ13C, growth, or local climate variables (temperature, cloud cover, vapor pressure deficit (VPD)) were relatively low, indicating weak plant physiological influences. Interannual variation in δ18Otr correlated negatively with local rainfall amount and intensity. Correlations with the amount of precipitation extended along a 1000 km long stretch of the Pacific Central American coast, probably as a result of organized storm systems uniformly affecting rainfall in the region and its isotope signal; episodic heavy precipitation events, some of which are related to cyclones, deposit strongly 18O-depleted rain in the region and seem to have affected the δ18Otr signal. Large-scale controls on the isotope signature include variation in sea surface temperatures of the tropical North Atlantic and Pacific Oceans. In conclusion, we show that δ18Otr of M. acantholoba can be used as a proxy for source water δ18O and that interannual variation in δ18Oprec is caused by a regional amount effect. This contrasts with δ18O signatures at continental sites, where cumulative rainout processes dominate and thus provide a proxy for precipitation integrated over a much larger scale. Our results confirm that processes influencing climate-isotope relations differ between sites located, e.g., in the western Amazon versus coastal Mexico, and that tree ring isotope records can help in

  14. Comparison of the amount of apical debris extrusion associated with different retreatment systems and supplementary file application during retreatment process.

    Science.gov (United States)

    Çiçek, Ersan; Koçak, Mustafa Murat; Koçak, Sibel; Sağlam, Baran Can

    2016-01-01

    The type of instrument affects the amount of debris extruded. The aim of this study was to compare the effect of retreatment systems and supplementary file application on the amount of apical debris extrusion. Forty-eight extracted mandibular premolars with a single canal and similar length were selected. The root canals were prepared with the ProTaper Universal system with a torque-controlled engine. The root canals were dried and were obturated using Gutta-percha and sealer. The specimens were randomly divided into four equal groups according to the retreatment procedures (Group 1, Mtwo retreatment files; Group 2, Mtwo retreatment files + Mtwo rotary file #30 supplementary file; Group 3, ProTaper Universal retreatment (PTUR) files; and Group 4, PTUR files + ProTaper F3 supplementary file). The debris extruded during instrumentation was collected into preweighed Eppendorf tubes. The amount of apically extruded debris was calculated by subtracting the initial weight of the tube from the final weight. Three consecutive weights were obtained for each tube. No statistically significant difference was found in the amount of apically extruded debris between Groups 1 and 3 (P = 0.590). A significant difference was observed between Groups 1 and 2. The use of a supplementary file significantly increased the amount of apically extruded debris.

  15. Surface Distresses Detection of Pavement Based on Digital Image Processing

    OpenAIRE

    Ouyang , Aiguo; Luo , Chagen; Zhou , Chao

    2010-01-01

    Pavement cracking is the main form of early pavement distress. The use of digital photography to record pavement images and subsequent crack detection and classification has undergone continuous improvement over the past decade. Digital image processing has been applied to pavement crack detection for its advantages of carrying a large amount of information and enabling automatic detection. The applications of digital image processing in pavement crack detection, distresses classificati...
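
As a minimal illustration of image-based crack detection, the sketch below flags pixels that are much darker than the surrounding pavement texture in a synthetic grey-scale image and reports a crude crack-pixel ratio. The synthetic image, the mean-minus-k-sigma threshold and the severity index are assumptions for illustration; they are not the algorithms surveyed in the paper.

```python
# Minimal sketch of intensity-threshold crack detection on a grey-scale
# pavement image: cracks are assumed to show up as locally dark pixels.
import numpy as np

def detect_crack_pixels(img, k=3.0):
    """Flag pixels darker than (mean - k * std) of the whole image."""
    return img < (img.mean() - k * img.std())

def crack_ratio(mask):
    """Fraction of pixels flagged as crack; a crude severity index."""
    return mask.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(150.0, 5.0, (200, 200))             # uniform pavement texture
    rr = np.arange(200)
    img[rr, (0.5 * rr + 40).astype(int) % 200] -= 60.0    # a dark diagonal "crack"
    mask = detect_crack_pixels(img)
    print(f"crack pixels: {mask.sum()} ({100 * crack_ratio(mask):.2f}% of image)")
```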

  16. Accelerated decomposition techniques for large discounted Markov decision processes

    Science.gov (United States)

    Larach, Abdelhadi; Chafik, S.; Daoui, C.

    2017-12-01

    Many hierarchical techniques for solving large Markov decision processes (MDPs) are based on partitioning the state space into strongly connected components (SCCs) that can be classified into levels. In each level, smaller problems named restricted MDPs are solved, and these partial solutions are then combined to obtain the global solution. In this paper, we first propose a novel algorithm, a variant of Tarjan's algorithm that simultaneously finds the SCCs and the levels they belong to. Second, a new definition of the restricted MDPs is presented to improve some hierarchical solutions of discounted MDPs using the value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and the experimental results are presented to illustrate the benefit of the proposed decomposition algorithms.
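
The sketch below shows the overall shape of such a level-wise decomposition on a tiny discounted MDP: the transition graph is split into SCCs, each SCC is assigned a level (level 0 has no successors outside itself), and value iteration is run on one restricted MDP per SCC in ascending level order, reusing the already-converged values of downstream states. The example MDP, the use of Kosaraju's algorithm instead of the paper's Tarjan variant, and the numeric parameters are illustrative assumptions.

```python
# Sketch of level-wise decomposition for a discounted MDP.
from collections import defaultdict

def strongly_connected_components(graph):
    """Kosaraju's algorithm, iterative; graph: node -> iterable of successors."""
    order, visited = [], set()
    for root in graph:
        if root in visited:
            continue
        visited.add(root)
        stack = [(root, iter(graph[root]))]
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(node)
                stack.pop()
            elif nxt not in visited:
                visited.add(nxt)
                stack.append((nxt, iter(graph.get(nxt, ()))))
    rev = defaultdict(list)
    for u in graph:
        for v in graph[u]:
            rev[v].append(u)
    comp, comps = {}, []
    for node in reversed(order):                  # second pass on the reversed graph
        if node in comp:
            continue
        comp[node] = len(comps)
        stack, members = [node], set()
        while stack:
            u = stack.pop()
            members.add(u)
            for v in rev[u]:
                if v not in comp:
                    comp[v] = len(comps)
                    stack.append(v)
        comps.append(members)
    return comp, comps

def solve_by_levels(transitions, gamma=0.9, eps=1e-8):
    """transitions: state -> action -> list of (prob, next_state, reward)."""
    graph = {s: {s2 for acts in transitions[s].values() for _, s2, _ in acts}
             for s in transitions}
    comp, comps = strongly_connected_components(graph)
    succ = defaultdict(set)
    for u in graph:
        for v in graph[u]:
            if comp[u] != comp[v]:
                succ[comp[u]].add(comp[v])
    level = {}
    def lvl(c):                                   # level 0 = no external successors
        if c not in level:
            level[c] = 1 + max((lvl(d) for d in succ[c]), default=-1)
        return level[c]
    V = {s: 0.0 for s in transitions}
    for c in sorted(range(len(comps)), key=lvl):  # solve lowest levels first
        states, delta = comps[c], 1.0
        while delta > eps:                        # value iteration on the restricted MDP
            delta = 0.0
            for s in states:
                best = max(sum(p * (r + gamma * V[s2]) for p, s2, r in acts)
                           for acts in transitions[s].values())
                delta = max(delta, abs(best - V[s]))
                V[s] = best
    return V

if __name__ == "__main__":
    T = {  # two SCCs: {a, b} (level 0) and {c, d} (level 1, feeding into {a, b})
        "a": {"stay": [(1.0, "b", 1.0)]},
        "b": {"stay": [(1.0, "a", 0.0)]},
        "c": {"loop": [(0.5, "d", 0.0), (0.5, "a", 2.0)],
              "exit": [(1.0, "a", 1.0)]},
        "d": {"back": [(1.0, "c", 0.0)]},
    }
    print(solve_by_levels(T))
```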

  17. Application of large radiation sources in chemical processing industry

    International Nuclear Information System (INIS)

    Krishnamurthy, K.

    1977-01-01

    Large radiation sources and their application in the chemical processing industry are described. Reference is also made to present developments in this field in India. Radioactive sources, notably 60Co, are employed in the production of wood-plastic and concrete-polymer composites, vulcanised rubbers, polymers, sulfochlorinated paraffin hydrocarbons and in a number of other applications which require deep penetration and high reliability of the source. Machine sources of electrons are used in the production of heat-shrinkable plastics, insulation materials for cables, the curing of paints, etc. Radiation sources have also been used for sewage hygienisation. As for the scene in India, 60Co sources, gamma chambers and batch irradiators are manufactured. A list of the on-going R and D projects and organisations engaged in research in this field is given. (M.G.B.)

  18. Curbing variations in packaging process through Six Sigma way in a large-scale food-processing industry

    Science.gov (United States)

    Desai, Darshak A.; Kotadiya, Parth; Makwana, Nikheel; Patel, Sonalinkumar

    2015-03-01

    Indian industries need overall operational excellence for sustainable profitability and growth in the present age of global competitiveness. Among different quality and productivity improvement techniques, Six Sigma has emerged as one of the most effective breakthrough improvement strategies. Though Indian industries are exploring this improvement methodology to their advantage and reaping the benefits, not much has been presented and published regarding the experience of Six Sigma in the food-processing industries. This paper is an effort to exemplify the application of the Six Sigma quality improvement drive to one of the large-scale food-processing sectors in India. The paper discusses the phase-wise implementation of define, measure, analyze, improve, and control (DMAIC) on one of the chronic problems, variation in the weight of milk powder pouches. The paper wraps up with the improvements achieved and the projected bottom-line gain to the unit from the application of the Six Sigma methodology.

  19. Moditored unsaturated soil transport processes as a support for large scale soil and water management

    Science.gov (United States)

    Vanclooster, Marnik

    2010-05-01

    The current societal demand for sustainable soil and water management is very large. The drivers of global and climate change exert many pressures on soil and water ecosystems, endangering appropriate ecosystem functioning. Unsaturated soil transport processes play a key role in soil-water system functioning, as they control the fluxes of water and nutrients from the soil to plants (the pedo-biosphere link), the infiltration flux of precipitated water to groundwater and the evaporative flux, and hence the feedback from the soil to the climate system. Yet, unsaturated soil transport processes are difficult to quantify, since they are affected by the huge variability of the governing properties at different space-time scales and by the intrinsic non-linearity of the transport processes. The incompatibility between the scale at which processes can reasonably be characterized, the scale at which the theoretical process can correctly be described and the scale at which the soil and water system needs to be managed calls for further development of scaling procedures in unsaturated zone science. It also calls for a better integration of theoretical and modelling approaches to elucidate transport processes at the appropriate scales, compatible with the sustainable soil and water management objective. Moditoring science, i.e. the interdisciplinary research domain where modelling and monitoring science are linked, is currently evolving significantly in the unsaturated zone hydrology area. In this presentation, a review of current moditoring strategies and techniques will be given and illustrated for solving large-scale soil and water management problems. This will also allow research needs to be identified in the interdisciplinary domain of modelling and monitoring, and the integration of unsaturated zone science in solving soil and water management issues to be improved. A focus will be given to examples of large-scale soil and water management problems in Europe.

  20. Operational experinece with large scale biogas production at the promest manure processing plant in Helmond, the Netherlands

    International Nuclear Information System (INIS)

    Schomaker, A.H.H.M.

    1992-01-01

    In The Netherlands a surplus of 15 million tons of liquid pig manure is produced yearly on intensive pig breeding farms. The Dutch government has set a three-way policy to reduce this manure excess: 1. conversion of animal fodder into a product with fewer and more readily digestible nutrients; 2. distribution of the surplus to regions with a shortage of animal manure; 3. processing of the remainder of the surplus in large-scale processing plants. The first large-scale plant for the processing of liquid pig manure was put into operation in 1988 as a demonstration plant at Promest in Helmond. The design capacity of this plant is 100,000 tons of pig manure per year. The plant was initiated by the Manure Steering Committee of the province of Noord-Brabant in order to prove at short notice whether large-scale manure processing might contribute to the solution of the manure surplus problem in The Netherlands. This steering committee is a cooperation between the national and provincial governments and the agricultural industry. (au)

  1. Recycle attuned catalytic exchange (RACE) for reliable and low inventory processing of highly tritiated water

    International Nuclear Information System (INIS)

    Iseli, M.; Schaub, M.; Ulrich, D.

    1992-01-01

    The detritiation of highly tritiated water by liquid-phase catalytic exchange requires dilution of the feed with water, both to bring tritium concentrations down to levels suitable for the catalyst and for safety rules and to assure flow rates large enough to wet the catalyst. Dilution by recycling detritiated water from within the exchange process has three advantages: the amount and concentration of the water used for dilution are controlled within the exchange process; there is no additional water load on processes located downstream of RACE; and the ratio of gas to liquid flow rates in the exchange column can be adjusted by using several recycles differing in amount and concentration, to avoid an excessively large number of theoretical separation stages. In this paper, the flexibility of recycle attuned catalytic exchange (RACE) and its effect on the cryogenic distillation are demonstrated for the detritiation of highly tritiated water from a tritium breeding blanket

  2. Risk Management of Large Component in Decommissioning

    International Nuclear Information System (INIS)

    Nah, Kyung Ku; Kim, Tae Ryong

    2014-01-01

    The need for energy, especially electric energy, has been increasing dramatically in Korea. Nuclear power development has therefore grown rapidly, to the point where it accounts for about 30% of electric power production. However, such large-scale nuclear power generation has been producing a significant amount of radioactive waste and raises other matters such as safety issues. In addition, owing to the severe accident at Fukushima in Japan, public concern regarding NPPs and radiation hazards has greatly increased. In Korea, KORI 1 is scheduled to reach the end of its lifetime in several years, and Wolsong 1 is under review for life extension. This is why preparation for nuclear power plant decommissioning is important at this time. Decommissioning is the final phase in the life cycle of a nuclear facility, and one of the most important management issues during decommissioning is how to deal with disused large components. Therefore, in this study, the risks associated with large components in decommissioning are identified and the key risk factor is analyzed, so that the decommissioning process can be prepared for and handled safely and efficiently. Developing dedicated acceptance criteria for large components at the disposal site was identified as the key factor. Acceptance criteria for large components, such as what size they should be and how they should be handled during the disposal process, strongly affect other major work. For example, if size limits for large components are not set at the disposal site, no dismantling work in decommissioning can be conducted. Therefore, considering the insufficient time left for decommissioning of some NPPs, it is absolutely imperative that those criteria be laid down

  3. Risk Management of Large Component in Decommissioning

    Energy Technology Data Exchange (ETDEWEB)

    Nah, Kyung Ku; Kim, Tae Ryong [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2014-10-15

    The need for energy, especially electric energy, has been increasing dramatically in Korea. Nuclear power development has therefore grown rapidly, to the point where it accounts for about 30% of electric power production. However, such large-scale nuclear power generation has been producing a significant amount of radioactive waste and raises other matters such as safety issues. In addition, owing to the severe accident at Fukushima in Japan, public concern regarding NPPs and radiation hazards has greatly increased. In Korea, KORI 1 is scheduled to reach the end of its lifetime in several years, and Wolsong 1 is under review for life extension. This is why preparation for nuclear power plant decommissioning is important at this time. Decommissioning is the final phase in the life cycle of a nuclear facility, and one of the most important management issues during decommissioning is how to deal with disused large components. Therefore, in this study, the risks associated with large components in decommissioning are identified and the key risk factor is analyzed, so that the decommissioning process can be prepared for and handled safely and efficiently. Developing dedicated acceptance criteria for large components at the disposal site was identified as the key factor. Acceptance criteria for large components, such as what size they should be and how they should be handled during the disposal process, strongly affect other major work. For example, if size limits for large components are not set at the disposal site, no dismantling work in decommissioning can be conducted. Therefore, considering the insufficient time left for decommissioning of some NPPs, it is absolutely imperative that those criteria be laid down.

  4. Effects of interactive transport and scavenging of smoke on the calculated temperature change resulting from large amounts of smoke

    International Nuclear Information System (INIS)

    MacCracken, M.C.; Walton, J.J.

    1984-12-01

    Several theoretical studies with numerical models have shown that substantial land-surface cooling can occur if very large amounts (approx. 100 x 10^12 g = 100 Tg) of highly absorbing sooty particles are injected high into the troposphere and spread instantaneously around the hemisphere (Turco et al., 1983; Covey et al. 1984; MacCracken, 1983). A preliminary step beyond these initial calculations has been made by interactively coupling the two-layer, three-dimensional Oregon State University general circulation model (GCM) to the three-dimensional GRANTOUR trace species model developed at the Lawrence Livermore National Laboratory. The GCM simulation includes treatment of tropospheric dynamics and thermodynamics and the effect of soot on solar radiation. The GRANTOUR simulation includes treatment of particle transport and scavenging by precipitation, although no satisfactory verification of the scavenging algorithm has yet been possible. We have considered the climatic effects of 150 Tg (i.e., the 100 Mt urban war scenario from Turco et al., 1983) and of 15 Tg of smoke from urban fires over North America and Eurasia. Starting with a perpetual July atmospheric situation, calculation of the climatic effects as 150 Tg of smoke are spread slowly by the winds, rather than instantaneously dispersed as in previous calculations, leads to some regions of greater cooling under the denser parts of the smoke plumes and some regions of less severe cooling where smoke arrival is delayed. As for the previous calculations, mid-latitude decreases of land surface air temperature for the 150 Tg injection are greater than 15 °C after a few weeks. For a 15 Tg injection, however, cooling of more than several degrees centigrade only occurs in limited regions under the dense smoke plumes present in the first few weeks after the injection. 10 references, 9 figures

  5. Implementing ergonomics in large-scale engineering design. Communicating and negotiating requirements in an organizational context

    Energy Technology Data Exchange (ETDEWEB)

    Wulff, Ingrid Anette

    1997-12-31

    This thesis investigates under what conditions ergonomic criteria are being adhered to in engineering design. Specifically, the thesis discusses (1) the ergonomic criteria implementation process, (2) designer recognition of ergonomic requirements and the organization of ergonomics, (3) issues important for the implementation of ergonomic requirements, (4) how different means for experience transfer in design and operation are evaluated by the designers, (5) how designers ensure usability of offshore work places, and (6) how project members experience and cope with the large amount of documentation in large-scale engineering. 84 refs., 11 figs., 18 tabs.

  6. On conservation of the baryon chirality in the processes with large momentum transfer

    International Nuclear Information System (INIS)

    Ioffe, B.L.

    1976-01-01

    The hypothesis of baryon chirality conservation in processes with large momentum transfer is suggested and some arguments in its favour are given. Experimental implications of this assumption for weak and electromagnetic form factors of transitions in the baryon octet and of the transitions N → Δ, N → Σ* are considered

  7. Internalization and cellular processing of cholecystokinin in rat pancreatic acinar cells

    International Nuclear Information System (INIS)

    Izzo, R.S.; Pellecchia, C.; Praissman, M.

    1988-01-01

    To evaluate the internalization of cholecystokinin, monoiodinated imidoester of cholecystokinin octapeptide [125I-(IE)-CCK-8] was bound to dispersed pancreatic acinar cells, and surface-bound and internalized radioligand were differentiated by treating with an acidified glycine buffer. The amount of internalized radioligand was four- and sevenfold greater at 24 and 37 °C than at 4 °C between 5 and 60 min of association. Specific binding of radioligand to cell surface receptors was not significantly different at these temperatures. Chloroquine, a lysosomotropic agent that blocks intracellular proteolysis, significantly increased the amount of CCK-8 internalized by 18 and 16% at 30 and 60 min of binding, respectively, compared with control. Dithiothreitol (DTT), a sulfhydryl reducing agent, also augmented the amount of CCK-8 radioligand internalized by 25 and 29% at 30 and 60 min, respectively. The effect of chloroquine and DTT on the processing of internalized radioligand was also considered after an initial 60 min of binding of radioligand to acinar cells. After 180 min of processing, the amount of radioligand internalized was significantly greater in the presence of chloroquine compared with controls, whereas the amount of radioligand declined in acinar cells treated with DTT. Internalized and released radioactivity from acinar cells was rebound to pancreatic membrane homogenates to determine the amount of intact radioligand during intracellular processing. Chloroquine significantly increased the amount of intact 125I-(IE)-CCK-8 radioligand in released and internalized radioactivity while DTT increased the amount of intact radioligand only in internalized samples. This study shows that pancreatic acinar cells rapidly internalize large amounts of CCK-8 and that chloroquine and DTT inhibit intracellular degradation

  8. Simplified method for the determination of strontium-90 in large amounts of bone-ash

    International Nuclear Information System (INIS)

    Patti, F.; Jeanmaire, L.

    1966-06-01

    The principle of the determination is based on a 3-step process: 1) concentrating the strontium by attacking the ash with nitric acid; 2) elimination of residual phosphoric ions by a double precipitation of strontium oxalate; and 3) extraction of yttrium-90, counted in the oxalate form. The advantages of the method are: 1) using simple techniques, it makes it possible to process 50 g of ash; 2) the initial concentration of strontium considerably reduces the volume of the solutions as well as the size of the precipitates handled. Fuming nitric acid is used in a specially designed burette. (authors) [fr

  9. Preparation of a large-scale and multi-layer molybdenum crystal and its characteristics

    International Nuclear Information System (INIS)

    Fujii, Tadayuki

    1989-01-01

    In the present work, the secondary recrystallization method was applied to obtain a large-scale, multi-layer crystal from a hot-rolled, multi-laminated molybdenum sheet doped and stacked alternately with different amounts of dopant. It was found that the time and/or temperature at which secondary recrystallization commences in the multi-layer sheet is strongly dependent on the amounts of dopants. Therefore, the potential nucleus of the secondary grain from layers with different amounts of dopant occurred first in the layer with a small amount of dopant and then grew into the layer with a large amount of dopant after an anneal at 1800 °C-2000 °C. Consequently, a large-scale, multi-layer molybdenum crystal can easily be obtained. 12 refs., 9 figs., 2 tabs. (Author)

  10. Semi-supervised learning and domain adaptation in natural language processing

    CERN Document Server

    Søgaard, Anders

    2013-01-01

    This book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how the performance of these algorithms can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason for that is data sparsity, i.e., the limited amounts of data we have available in NLP. However, in most real-world NLP applications our labeled data is also heavily biased. This book introduces extensions of supervised learning algorithms to cope with data sparsity and different kinds of sampling bias.This book is intended to be both
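
As one concrete (and deliberately simple) way of exploiting unlabeled data, the sketch below implements self-training with a nearest-centroid classifier: the model is fit on the labeled pool, the unlabeled points it is most confident about are pseudo-labeled and added, and the model is refit. The toy data, the classifier and the margin-based confidence rule are illustrative assumptions and are not taken from the book.

```python
# Minimal self-training sketch: fit on labeled data, pseudo-label the most
# confident unlabeled points, add them to the pool and refit.
import numpy as np

def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_margin(centroids, X):
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes], axis=1)
    order = np.argsort(d, axis=1)
    pred = np.array(classes)[order[:, 0]]
    margin = d[np.arange(len(X)), order[:, 1]] - d[np.arange(len(X)), order[:, 0]]
    return pred, margin          # larger margin = more confident

def self_train(X_lab, y_lab, X_unlab, rounds=5, per_round=20):
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        cent = fit_centroids(X, y)
        pred, margin = predict_with_margin(cent, pool)
        take = np.argsort(-margin)[:per_round]       # most confident pseudo-labels
        X = np.vstack([X, pool[take]])
        y = np.concatenate([y, pred[take]])
        pool = np.delete(pool, take, axis=0)
    return fit_centroids(X, y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_lab = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(4, 1, (5, 2))])
    y_lab = np.array([0] * 5 + [1] * 5)
    X_unlab = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
    print(self_train(X_lab, y_lab, X_unlab))   # centroids refined with unlabeled data
```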

  11. Genetic 'fingerprints' to characterise microbial communities during organic overloading and in large-scale biogas plants

    Energy Technology Data Exchange (ETDEWEB)

    Kleyboecker, A.; Lerm, S.; Vieth, A.; Wuerdemann, H. [GeoForschungsZentrum Potsdam, Bio-Geo-Engineering, Potsdam (Germany); Miethling-Graff, R. [Bundesforschungsanstalt fuer Landwirtschaft, Braunschweig (Germany). Inst. fuer Agraroekologie; Wittmaier, M. [Institut fuer Kreislaufwirtschaft, Bremen (Germany)

    2007-07-01

    Since fermentation is a complex process, biogas reactors are still known as 'black boxes'. Mostly they are not run at their maximum loading rate because of the risk of process failure through organic overloading. This means that there are still unused capacities to produce more biogas in less time. Investigations of different large-scale biogas plants showed that fermenters are operated with different amounts of volatile fatty acids. These amounts can vary so much that, of two digesters possessing the same VFA concentration, one may no longer produce gas while the other is still at work. A reason for this phenomenon might be found in the composition of the microbial communities or in differences in the operation of the plants. To gain a better understanding of the 'black box', structural changes in microbial communities during controlled organic overloading in the laboratory and the biocenoses of large-scale reactors were investigated. A genetic fingerprint based on 16S rDNA (PCR-SSCP) was used to characterise the microbial community. (orig.)

  12. Measurements of mitochondrial spaces are affected by the amount of mitochondria used in the determinations

    International Nuclear Information System (INIS)

    Cohen, N.S.; Cheung, C.W.; Raijman, L.

    1987-01-01

    Mitochondrial (MITL) water spaces were determined by centrifugal filtration, using 3H2O and 14C-labelled sucrose, mannitol, inulin, and dextran. The volume (in μl/mg of MITL protein) of each of the spaces was inversely proportional to the amount of MIT (mg of protein) centrifuged. For every additional mg of MIT centrifuged, the total water space (in μl/mg of protein) decreased 0.62 μl, the sucrose space 0.50 μl, the intermembrane space 0.16 μl, and the matrix space 0.12 μl. For a given amount of MIT, the volume of each space was the same when centrifugation was done at 8000 and at 15,600g, and when the MIT were incubated with the markers for 15 sec to 5 min, indicating that sucrose, mannitol and inulin do not penetrate the matrix, nor dextran the intermembrane space, under the incubation and centrifugation conditions generally used to measure MITL spaces. They conclude that: (a) calculations of the concentration of compounds in the matrix or intermembrane space may contain large errors unless the same amount of MIT is used to measure MITL spaces and the compounds of interest; (b) large errors in the calculation of transport rates, proton-motive force, etc., may arise from errors originating as in (a) above; (c) disagreements found in the literature regarding, for example, the size of the sucrose space, may have arisen from the use of different amounts of MIT in different work
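
The linear dependence reported above lends itself to a simple correction. The sketch below normalises a space measured with one amount of mitochondria to the value expected at a common reference amount, using the per-mg decreases quoted in the abstract; the 1 mg reference amount and the example measurement are assumptions for illustration.

```python
# Sketch of the linear correction implied by the abstract: the measured space
# (μl per mg of protein) falls by a fixed amount for every additional mg of
# mitochondria centrifuged.  Slopes are taken from the abstract; the 1 mg
# reference amount and the example measurement are assumptions.
SLOPE_UL_PER_MG = {           # decrease in μl/mg of protein per extra mg of MIT
    "total water": 0.62,
    "sucrose": 0.50,
    "intermembrane": 0.16,
    "matrix": 0.12,
}

def normalise_space(space, measured_ul_per_mg, mg_used, mg_ref=1.0):
    """Convert a space measured with mg_used mg of mitochondria to the value
    expected at a reference amount mg_ref (linear model from the abstract)."""
    return measured_ul_per_mg + SLOPE_UL_PER_MG[space] * (mg_used - mg_ref)

if __name__ == "__main__":
    # e.g. a matrix space of 0.80 μl/mg measured with 4 mg of mitochondria
    print(f"{normalise_space('matrix', 0.80, mg_used=4.0):.2f} μl/mg at 1 mg")
```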

  13. Large Amounts of Reactivated Virus in Tears Precedes Recurrent Herpes Stromal Keratitis in Stressed Rabbits Latently Infected with Herpes Simplex Virus.

    Science.gov (United States)

    Perng, Guey-Chuen; Osorio, Nelson; Jiang, Xianzhi; Geertsema, Roger; Hsiang, Chinhui; Brown, Don; BenMohamed, Lbachir; Wechsler, Steven L

    2016-01-01

    Recurrent herpetic stromal keratitis (rHSK), due to an immune response to reactivation of herpes simplex virus (HSV-1), can cause corneal blindness. The development of therapeutic interventions such as drugs and vaccines to decrease rHSK has been hampered by the lack of a small and reliable animal model in which rHSK occurs at a high frequency during HSV-1 latency. The aim of this study is to develop a rabbit model of rHSK in which stress from elevated temperatures increases the frequency of HSV-1 reactivations and rHSK. Rabbits latently infected with HSV-1 were subjected to elevated temperatures and the frequency of viral reactivations and rHSK were determined. In an experiment in which rabbits latently infected with HSV-1 were subjected to ill-defined stress as a result of failure of the vivarium air conditioning system, reactivation of HSV-1 occurred at over twice the normal frequency. In addition, 60% of eyes developed severe rHSK; an unusually large amount of reactivated virus was detected in the tears of those eyes, and whenever this unusually large amount of reactivated virus was detected in tears, rHSK always appeared 4-5 days later. In subsequent experiments using well-defined heat stress, the reactivation frequency was similarly increased, but no eyes developed rHSK. The results reported here support the hypothesis that rHSK is associated not simply with elevated reactivation frequency, but rather with rare episodes of very high levels of reactivated virus in tears 4-5 days earlier.

  14. Hadronic processes with large transfer momenta and quark counting rules in multiparticle dual amplitude

    International Nuclear Information System (INIS)

    Akkelin, S.V.; Kobylinskij, N.A.; Martynov, E.S.

    1989-01-01

    A dual N-particle amplitude satisfying the quark counting rules for processes with large transfer momenta is constructed. The multiparticle channels are shown to give an essential contribution to the power-law decrease of the amplitude in the hard kinematic limit. 19 refs.; 9 figs.

  15. A method to study response of large trees to different amounts of available soil water

    Science.gov (United States)

    D.H. Marx; Shi-Jean S. Sung; J.S. Cunningham; M.D. Thompson; L.M. White

    1995-01-01

    A method was developed to manipulate available soil water on large trees by intercepting thrufall with gutters placed under tree canopies and irrigating the intercepted thrufall onto other trees. With this design, trees were exposed for 2 years to either 25% less thrufall, normal thrufall, or 25% additional thrufall. Undercanopy construction in these plots moderately...

  16. Evaluation of optimal silver amount for the removal of methyl iodide on silver-impregnated adsorbents

    International Nuclear Information System (INIS)

    Park, G.I.; Cho, I.H.; Kim, J.H.; Oh, W.Z.

    2001-01-01

    The adsorption characteristics of methyl iodide generated from a simulated off-gas stream on various adsorbents, such as silver-impregnated zeolite (AgX), zeocarbon and activated carbon, were investigated. An extensive evaluation was made of the optimal silver impregnation amount for the removal of methyl iodide at temperatures up to 300 deg. C. The adsorption efficiency for methyl iodide on silver-impregnated adsorbents is strongly dependent on the impregnation amount and the process temperature. A quantitative comparison of the adsorption efficiencies of the three adsorbents in a fixed bed was made. The influence of temperature, methyl iodide concentration and silver impregnation amount on the adsorption efficiency is closely related to the pore characteristics of the adsorbents. The results show that the effective impregnation ratio was about 10 wt%, based on the degree of silver utilization for the removal of methyl iodide. The practical applicability of silver-impregnated zeolite to the removal of radioiodine generated from the DUPIC process was consequently proposed. (author)

  17. Research on Francis Turbine Modeling for Large Disturbance Hydropower Station Transient Process Simulation

    Directory of Open Access Journals (Sweden)

    Guangtao Zhang

    2015-01-01

    Full Text Available In the field of hydropower station transient process simulation (HSTPS), the characteristic graph-based iterative hydroturbine model (CGIHM) has been widely used when large disturbance hydroturbine modeling is involved. However, with this model, iteration must be used to calculate speed and pressure, and slow convergence or non-convergence may be encountered for reasons such as a special characteristic graph profile, an inappropriate iterative algorithm, or an inappropriate interpolation algorithm. Other conventional large disturbance hydroturbine models also have disadvantages and are difficult to use widely in HSTPS. Therefore, to obtain an accurate simulation result, a simple method for hydroturbine modeling is proposed. In this method, both the initial operating point and the transfer coefficients of the linear hydroturbine model keep changing during simulation. Hence, it can reflect the nonlinearity of the hydroturbine and be used for Francis turbine simulation under large disturbance conditions. To validate the proposed method, both large disturbance and small disturbance simulations of a single hydrounit supplying a resistive, isolated load were conducted. The simulation results were shown to be consistent with those of a field test. Consequently, the proposed method is an attractive option for HSTPS involving Francis turbine modeling under large disturbance conditions.

  18. Large-scale calculations of the beta-decay rates and r-process nucleosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Borzov, I N; Goriely, S [Inst. d'Astronomie et d'Astrophysique, Univ. Libre de Bruxelles, Campus Plaine, Bruxelles (Belgium); Pearson, J M [Inst. d'Astronomie et d'Astrophysique, Univ. Libre de Bruxelles, Campus Plaine, Bruxelles (Belgium); Lab. de Physique Nucleaire, Univ. de Montreal, Montreal (Canada)]

    1998-06-01

    An approximation to a self-consistent model of the ground state and {beta}-decay properties of neutron-rich nuclei is outlined. The structure of the {beta}-strength functions in stable and short-lived nuclei is discussed. The results of large-scale calculations of the {beta}-decay rates for spherical and slightly deformed nuclides of relevance to the r-process are analysed and compared with the results of existing global calculations and recent experimental data. (orig.)

  19. Process optimization of large-scale production of recombinant adeno-associated vectors using dielectric spectroscopy.

    Science.gov (United States)

    Negrete, Alejandro; Esteban, Geoffrey; Kotin, Robert M

    2007-09-01

    A well-characterized manufacturing process for the large-scale production of recombinant adeno-associated vectors (rAAV) for gene therapy applications is required to meet current and future demands for pre-clinical and clinical studies and potential commercialization. Economic considerations argue in favor of suspension culture-based production. Currently, the only feasible method for large-scale rAAV production utilizes baculovirus expression vectors and insect cells in suspension cultures. To maximize yields and achieve reproducibility between batches, online monitoring of various metabolic and physical parameters is useful for characterizing early stages of baculovirus-infected insect cells. In this study, rAAVs were produced at 40-l scale yielding ~1 x 10^15 particles. During the process, dielectric spectroscopy was performed by real-time scanning at radio frequencies between 300 kHz and 10 MHz. The corresponding permittivity values were correlated with rAAV production. Both infected and uninfected cell cultures reached a maximum permittivity value; however, only the permittivity profile of infected cultures reached a second maximum. This effect was correlated with the optimal harvest time for rAAV production. Analysis of rAAV indicated that harvesting at around 48 h post-infection (hpi) and at 72 hpi produced similar quantities of biologically active rAAV. Thus, if operated continuously, the 24-h reduction in the rAAV production process gives sufficient time for an additional 18 runs a year, corresponding to an extra production of ~2 x 10^16 particles. As part of large-scale optimization studies, this new finding will facilitate the bioprocessing scale-up of rAAV and other bioproducts.

  20. 24 CFR 201.10 - Loan amounts.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Loan amounts. 201.10 Section 201.10... MORTGAGE AND LOAN INSURANCE PROGRAMS UNDER NATIONAL HOUSING ACT AND OTHER AUTHORITIES TITLE I PROPERTY IMPROVEMENT AND MANUFACTURED HOME LOANS Loan and Note Provisions § 201.10 Loan amounts. (a) Property...

  1. Forest Policy Scenario Analysis: Sensitivity of Songbird Community to Changes in Forest Cover Amount and Configuration

    Directory of Open Access Journals (Sweden)

    Robert S. Rempel

    2007-06-01

    Full Text Available Changes in mature forest cover amount, composition, and configuration can be of significant consequence to wildlife populations. The response of wildlife to forest patterns is of concern to forest managers because it lies at the heart of such competing approaches to forest planning as aggregated vs. dispersed harvest block layouts. In this study, we developed a species assessment framework to evaluate the outcomes of forest management scenarios on biodiversity conservation objectives. Scenarios were assessed in the context of a broad range of forest structures and patterns that would be expected to occur under natural disturbance and succession processes. Spatial habitat models were used to predict the effects of varying degrees of mature forest cover amount, composition, and configuration on habitat occupancy for a set of 13 focal songbird species. We used a spatially explicit harvest scheduling program to model forest management options and simulate future forest conditions resulting from alternative forest management scenarios, and used a process-based fire-simulation model to simulate future forest conditions resulting from natural wildfire disturbance. Spatial pattern signatures were derived for both habitat occupancy and forest conditions, and these were placed in the context of the simulated range of natural variation. Strategic policy analyses were set in the context of current Ontario forest management policies. This included use of sequential time-restricted harvest blocks (created for Woodland caribou (Rangifer tarandus) conservation) and delayed harvest areas (created for American marten (Martes americana atrata) conservation). This approach increased the realism of the analysis, but reduced the generality of interpretations. We found that forest management options that create linear strips of old forest deviate the most from simulated natural patterns, and had the greatest negative effects on habitat occupancy, whereas policy options

  2. An Open Source-Based Real-Time Data Processing Architecture Framework for Manufacturing Sustainability

    Directory of Open Access Journals (Sweden)

    Muhammad Syafrudin

    2017-11-01

    Full Text Available Currently, the manufacturing industry is experiencing a data-driven revolution. There are multiple processes in the manufacturing industry that will eventually generate large amounts of data. Collecting, analyzing and storing large amounts of data are key elements of the smart manufacturing industry. To ensure that all processes within the manufacturing industry function smoothly, big data processing is needed. Thus, in this study an open source-based real-time data processing (OSRDP) architecture framework was proposed. The OSRDP architecture framework consists of several open-source technologies, including Apache Kafka, Apache Storm and NoSQL MongoDB, that are effective and cost efficient for real-time data processing. Several experiments and an impact analysis for manufacturing sustainability are provided. The results showed that the proposed system is capable of processing massive sensor data efficiently as the number of sensor data streams and devices increases. In addition, data mining based on Random Forest is presented to predict the quality of products given the sensor data as input. The Random Forest successfully classifies defect and non-defect products, and achieves high accuracy compared to other data mining algorithms. This study is expected to support management in decision-making for product quality inspection and to support manufacturing sustainability.
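
    A minimal sketch of the Random Forest defect/non-defect classification described above, assuming scikit-learn and synthetic stand-in sensor features (not the authors' pipeline or data):

```python
# Hypothetical illustration: classify defect vs. non-defect products from sensor features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for sensor readings (temperature, pressure, vibration, ...).
X = rng.normal(size=(5000, 6))
# Synthetic rule: products drift toward "defect" when two sensors are jointly high.
y = ((X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=5000)) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```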

  3. Comparing fixed-amount and progressive-amount DRO Schedules for tic suppression in youth with chronic tic disorders.

    Science.gov (United States)

    Capriotti, Matthew R; Turkel, Jennifer E; Johnson, Rachel A; Espil, Flint M; Woods, Douglas W

    2017-01-01

    Chronic tic disorders (CTDs) involve motor and/or vocal tics that often cause substantial distress and impairment. Differential reinforcement of other behavior (DRO) schedules of reinforcement produce robust, but incomplete, reductions in tic frequency in youth with CTDs; however, a more robust reduction may be needed to affect durable clinical change. Standard, fixed-amount DRO schedules have not commonly yielded such reductions, so we evaluated a novel, progressive-amount DRO schedule, based on its ability to facilitate sustained abstinence from functionally similar behaviors. Five youth with CTDs were exposed to periods of baseline, fixed-amount DRO (DRO-F), and progressive-amount DRO (DRO-P). Both DRO schedules produced decreases in tic rate and increases in intertic interval duration, but no systematic differences were seen between the two schedules on any dimension of tic occurrence. The DRO-F schedule was generally preferred to the DRO-P schedule. Possible procedural improvements and other future directions are discussed. © 2016 Society for the Experimental Analysis of Behavior.

  4. Impacts of Process and Prediction Uncertainties on Projected Hanford Waste Glass Amount

    Energy Technology Data Exchange (ETDEWEB)

    Gervasio, Vivianaluxa [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vienna, John D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kim, Dong-Sang [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kruger, Albert A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2018-02-19

    Analyses were performed to evaluate the impacts of using the advanced glass models, constraints (Vienna et al. 2016), and uncertainty descriptions on projected Hanford glass mass. The maximum allowable waste oxide loading (WOL) was estimated for waste compositions while simultaneously satisfying all applicable glass property and composition constraints with sufficient confidence. Different components of prediction and composition/process uncertainties were systematically included in the calculations to evaluate their impacts on glass mass. The analyses estimated the production of 23,360 MT of IHLW glass when no uncertainties were taken into account. Accounting for prediction and composition/process uncertainties resulted in a 5.01 relative percent increase in estimated glass mass, to 24,531 MT. Roughly equal impacts were found for prediction uncertainties (2.58 RPD) and composition/process uncertainties (2.43 RPD). ILAW mass was predicted to be 282,350 MT without uncertainty and with waste loading “line” rules in place. Accounting for prediction and composition/process uncertainties resulted in only a 0.08 relative percent increase in estimated glass mass, to 282,562 MT. Without application of the line rules, the glass mass decreases by 10.6 relative percent (252,490 MT) for the case with no uncertainties. Addition of prediction uncertainties increases glass mass by 1.32 relative percent, and the addition of composition/process uncertainties increases glass mass by an additional 7.73 relative percent (a 9.06 relative percent increase combined). The glass mass estimate without line rules (275,359 MT) was 2.55 relative percent lower than that with the line rules (282,562 MT), after accounting for all applicable uncertainties.

  5. Research on key technologies of data processing in internet of things

    Science.gov (United States)

    Zhu, Yangqing; Liang, Peiying

    2017-08-01

    Data in the Internet of Things (IoT) are characterized by polymorphism, heterogeneity, large volume and real-time processing requirements. Traditional structured, static batch processing methods no longer meet the requirements of IoT data processing. This paper studies a middleware that can integrate heterogeneous IoT data, converting different data formats into a unified format; designs an IoT data processing model based on the Storm stream computing architecture; and integrates existing Internet security technology to build a security system for IoT data processing, providing a reference for the efficient transmission and processing of IoT data.
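
    As a hedged illustration of the stream-oriented, format-unifying processing described above (plain Python generators rather than Storm; the record schema is invented):

```python
# Toy stand-in for stream processing: normalize heterogeneous readings to one format
# and smooth them with a sliding window, emitting results as they arrive.
from collections import deque

def unify(record):
    """Map device-specific records to a unified (device_id, value) format (assumed schema)."""
    if "temp_c" in record:
        return record["id"], float(record["temp_c"])
    if "temp_f" in record:
        return record["id"], (float(record["temp_f"]) - 32.0) * 5.0 / 9.0
    raise ValueError("unknown record format")

def smooth(stream, window=5):
    """Yield (device_id, moving average) for each incoming reading."""
    buffers = {}
    for record in stream:
        device, value = unify(record)
        buf = buffers.setdefault(device, deque(maxlen=window))
        buf.append(value)
        yield device, sum(buf) / len(buf)

readings = [{"id": "s1", "temp_c": 20.0}, {"id": "s1", "temp_f": 75.2}, {"id": "s1", "temp_c": 22.5}]
for device, avg in smooth(iter(readings), window=2):
    print(device, round(avg, 2))
```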

  6. Increasing the amount of usual rehabilitation improves activity after stroke: a systematic review

    Directory of Open Access Journals (Sweden)

    Emma J Schneider

    2016-10-01

    Full Text Available Questions: In people receiving rehabilitation aimed at reducing activity limitations of the lower and/or upper limb after stroke, does adding extra rehabilitation (of the same content as the usual rehabilitation) improve activity? What is the amount of extra rehabilitation that needs to be provided to achieve a beneficial effect? Design: Systematic review with meta-analysis of randomised trials. Participants: Adults aged 18 years or older who had a diagnosis of stroke. Intervention: Extra rehabilitation with the same content as usual rehabilitation aimed at reducing activity limitations of the lower and/or upper limb. Outcome measures: Activity measured as lower or upper limb ability. Results: A total of 14 studies, comprising 15 comparisons, met the inclusion criteria. Pooling data from all the included studies showed that extra rehabilitation improved activity immediately after the intervention period (SMD = 0.39, 95% CI 0.07 to 0.71, I2 = 66%). When only studies with a large increase in rehabilitation (> 100%) were included, the effect was greater (SMD 0.59, 95% CI 0.23 to 0.94, I2 = 44%). There was a trend towards a positive relationship (r = 0.53, p = 0.09) between extra rehabilitation and improved activity. The turning point on the ROC curve of false versus true benefit (AUC = 0.88, p = 0.04) indicated that at least an extra 240% of rehabilitation was needed for a significant likelihood that extra rehabilitation would improve activity. Conclusion: Increasing the amount of usual rehabilitation aimed at reducing activity limitations improves activity in people after stroke. The amount of extra rehabilitation that needs to be provided to achieve a beneficial effect is large. Trial registration: PROSPERO CRD42012003221. [Schneider EJ, Lannin NA, Ada L, Schmidt J (2016) Increasing the amount of usual rehabilitation improves activity after stroke: a systematic review. Journal of Physiotherapy 62: 182–187]
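
    As a hedged illustration of the pooling reported above (a minimal fixed-effect inverse-variance sketch with made-up trial numbers; not the review's data or its exact random-effects method):

```python
# Inverse-variance pooling of standardized mean differences (SMDs) from several trials.
# Trial data below are illustrative placeholders, not values from the review.
import math

# (smd, n_experimental, n_control) per trial
trials = [(0.45, 30, 28), (0.20, 55, 60), (0.62, 22, 25)]

weights, weighted = [], []
for d, n1, n2 in trials:
    # Approximate variance of an SMD (Cohen's d)
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    w = 1.0 / var
    weights.append(w)
    weighted.append(w * d)

pooled = sum(weighted) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
print(f"pooled SMD = {pooled:.2f}, 95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f}")
```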

  7. Large-scale production of diesel-like biofuels - process design as an inherent part of microorganism development.

    Science.gov (United States)

    Cuellar, Maria C; Heijnen, Joseph J; van der Wielen, Luuk A M

    2013-06-01

    Industrial biotechnology is playing an important role in the transition to a bio-based economy. Currently, however, industrial implementation is still modest, despite the advances made in microorganism development. Given that the fuels and commodity chemicals sectors are characterized by tight economic margins, we propose to address overall process design and efficiency at the start of bioprocess development. While current microorganism development is targeted at product formation and product yield, addressing process design at the start of bioprocess development means that microorganism selection can also be extended to other critical targets for process technology and process scale implementation, such as enhancing cell separation or increasing cell robustness at operating conditions that favor the overall process. In this paper we follow this approach for the microbial production of diesel-like biofuels. We review current microbial routes with both oleaginous and engineered microorganisms. For the routes leading to extracellular production, we identify the process conditions for large scale operation. The process conditions identified are finally translated to microorganism development targets. We show that microorganism development should be directed at anaerobic production, increasing robustness at extreme process conditions and tailoring cell surface properties. At the same time, novel process configurations integrating fermentation and product recovery, cell reuse and low-cost technologies for product separation are mandatory. This review provides a state-of-the-art summary of the latest challenges in large-scale production of diesel-like biofuels. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Full Text Available Abstract Background Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massive parallel computing power of GPUs.

  9. Processors and systems (picture processing)

    Energy Technology Data Exchange (ETDEWEB)

    Gemmar, P

    1983-01-01

    Automatic picture processing requires high performance computers and high transmission capacities in the processor units. The author examines the possibilities of operating processors in parallel in order to accelerate the processing of pictures. He therefore discusses a number of available processors and systems for picture processing and illustrates their capacities for special types of picture processing. He stresses the fact that the amount of storage required for picture processing is exceptionally high. The author concludes that it is as yet difficult to decide whether very large groups of simple processors or highly complex multiprocessor systems will provide the best solution. Both methods will be aided by the development of VLSI. New solutions have already been offered (systolic arrays and 3-d processing structures) but they also are subject to losses caused by inherently parallel algorithms. Greater efforts must be made to produce suitable software for multiprocessor systems. Some possibilities for future picture processing systems are discussed. 33 references.

  10. A methodology for fault diagnosis in large chemical processes and an application to a multistage flash desalination process: Part I

    International Nuclear Information System (INIS)

    Tarifa, Enrique E.; Scenna, Nicolas J.

    1998-01-01

    This work presents a new strategy for fault diagnosis in large chemical processes (E.E. Tarifa, Fault diagnosis in complex chemical plants: plants of large dimensions and batch processes. Ph.D. thesis, Universidad Nacional del Litoral, Santa Fe, 1995). A special decomposition of the plant is made into sectors. Afterwards, each sector is studied independently. These steps are carried out in the off-line mode and produce vital information for the diagnosis system. This system works in the on-line mode and is based on a two-tier strategy. When a fault occurs, the upper level identifies the faulty sector. Then, the lower level carries out an in-depth study that focuses only on the critical sectors to identify the fault. The loss of information produced by the process partition may cause spurious diagnoses. This problem is overcome at the second level using qualitative simulation and fuzzy logic. In the second part of this work, the new methodology is tested to evaluate its performance in practical cases. A multistage flash desalination system (MSF) is chosen because it is a complex system, with many recycles and variables to be supervised. The steps for the knowledge base generation and all the blocks included in the diagnosis system are analyzed. Evaluation of the diagnosis performance is carried out using a rigorous dynamic simulator

  11. Production of Low Cost Carbon-Fiber through Energy Optimization of Stabilization Process

    Directory of Open Access Journals (Sweden)

    Gelayol Golkarnarenji

    2018-03-01

    Full Text Available To produce high quality and low cost carbon fiber-based composites, the optimization of the carbon fiber production process and of the fiber properties is one of the main keys. The stabilization process is the most important step in carbon fiber production; it consumes a large amount of energy, and its optimization can reduce the cost to a large extent. In this study, two intelligent optimization techniques, namely Support Vector Regression (SVR) and Artificial Neural Network (ANN), were studied and compared, using a limited dataset obtained to predict a physical property (density) of oxidatively stabilized PAN fiber (OPF) in the second zone of a stabilization oven within a carbon fiber production line. The results were then used to optimize the energy consumption in the process. The case study can be beneficial to chemical industries involving carbon fiber manufacturing, for assessing and optimizing different stabilization process conditions at large.

  12. E-health, phase two: the imperative to integrate process automation with communication automation for large clinical reference laboratories.

    Science.gov (United States)

    White, L; Terner, C

    2001-01-01

    The initial efforts of e-health have fallen far short of expectations. They were buoyed by the hype and excitement of the Internet craze but limited by their lack of understanding of important market and environmental factors. E-health now recognizes that legacy systems and processes are important, that there is a technology adoption process that needs to be followed, and that demonstrable value drives adoption. Initial e-health transaction solutions have targeted mostly low-cost problems. These solutions invariably are difficult to integrate into existing systems, typically requiring manual interfacing to supported processes. This limitation in particular makes them unworkable for large volume providers. To meet the needs of these providers, e-health companies must rethink their approaches, appropriately applying technology to seamlessly integrate all steps into existing business functions. E-automation is a transaction technology that automates steps, integration of steps, and information communication demands, resulting in comprehensive automation of entire business functions. We applied e-automation to create a billing management solution for clinical reference laboratories. Large volume, onerous regulations, small margins, and only indirect access to patients challenge large laboratories' billing departments. Couple these problems with outmoded, largely manual systems and it becomes apparent why most laboratory billing departments are in crisis. Our approach has been to focus on the most significant and costly problems in billing: errors, compliance, and system maintenance and management. The core of the design relies on conditional processing, a "universal" communications interface, and ASP technologies. The result is comprehensive automation of all routine processes, driving out errors and costs. Additionally, compliance management and billing system support and management costs are dramatically reduced. The implications of e-automated processes can extend

  13. Impacts of Process and Prediction Uncertainties on Projected Hanford Waste Glass Amount

    Energy Technology Data Exchange (ETDEWEB)

    Gervasio, V.; Kim, D. S.; Vienna, J. D.; Kruger, A. A.

    2018-03-08

    Analyses were performed to evaluate the impacts of using the advanced glass models, constraints (Vienna et al. 2016), and uncertainty descriptions on projected Hanford glass mass. The maximum allowable waste oxide loading (WOL) was estimated for waste compositions while simultaneously satisfying all applicable glass property and composition constraints with sufficient confidence. Different components of prediction and composition/process uncertainties were systematically included in the calculations to evaluate their impacts on glass mass. The analyses estimated the production of 23,360 MT of immobilized high-level waste (IHLW) glass when no uncertainties were taken into account. Accounting for prediction and composition/process uncertainties resulted in a 5.01 relative percent increase in estimated glass mass, to 24,531 MT. Roughly equal impacts were found for prediction uncertainties (2.58 RPD) and composition/process uncertainties (2.43 RPD). The immobilized low-activity waste (ILAW) mass was predicted to be 282,350 MT without uncertainty and with waste loading “line” rules in place. Accounting for prediction and composition/process uncertainties resulted in only a 0.08 relative percent increase in estimated glass mass, to 282,562 MT. Without application of the line rules, the glass mass decreases by 10.6 relative percent (252,490 MT) for the case with no uncertainties. Addition of prediction uncertainties increases glass mass by 1.32 relative percent, and the addition of composition/process uncertainties increases glass mass by an additional 7.73 relative percent (a 9.06 relative percent increase combined). The glass mass estimate without line rules (275,359 MT) was 2.55 relative percent lower than that with the line rules (282,562 MT), after accounting for all applicable uncertainties.
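
    A schematic sketch of how property-prediction and composition/process uncertainties can inflate an estimated glass mass: a toy Monte Carlo with invented numbers and a single stand-in property constraint, not the PNNL glass models or constraints:

```python
# Toy illustration: required glass mass grows when waste loading must satisfy a
# property constraint with high confidence under prediction and composition noise.
import numpy as np

rng = np.random.default_rng(1)
waste_oxide_mass = 10_000.0          # MT of waste oxides to immobilize (invented)
property_limit = 1.0                 # normalized property limit (invented)

def predicted_property(wol):
    # Invented monotone property model: higher waste loading -> worse property value.
    return 0.5 + 1.2 * wol

def max_wol(pred_sigma, comp_sigma, confidence=0.95, n=20_000):
    """Largest waste-oxide loading meeting the limit with the given confidence."""
    for wol in np.arange(0.60, 0.10, -0.005):
        noisy = (predicted_property(wol)
                 + rng.normal(0, pred_sigma, n)        # model prediction uncertainty
                 + rng.normal(0, comp_sigma, n))       # composition/process uncertainty
        if np.mean(noisy <= property_limit) >= confidence:
            return wol
    return 0.10

for label, ps, cs in [("no uncertainty", 0.0, 0.0), ("with uncertainties", 0.05, 0.05)]:
    wol = max_wol(ps, cs)
    print(f"{label}: WOL = {wol:.2f}, glass mass = {waste_oxide_mass / wol:,.0f} MT")
```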

  14. Influence of ultrasonic energy on dispersion of aggregates and released amounts of organic matter and polyvalent cations

    Science.gov (United States)

    Kaiser, M.; Kleber, M.; Berhe, A. A.

    2010-12-01

    Aggregates play important roles in soil carbon storage and stabilization. Identification of scale-dependent mechanisms of soil aggregate formation and stability is necessary to predict and eventually manage the flow of carbon through terrestrial ecosystems. Application of ultrasonic energy is a common tool to disperse soil aggregates. In this study, we used ultrasonic energy (100 to 2000 J cm-3) to determine the amount of polyvalent cations and organic matter involved in aggregation processes in three arable and three forest soils that varied in soil mineral composition. To determine the amount of organic matter and cations released after application of different amounts of ultrasonic energy, we removed the coarse fraction (>250 µm). The remaining solid residue was freeze dried before we analyzed the amounts of water-extracted organic carbon (OC), Fe, Al, Ca, Mn, and Mg in the filtrates. The extracted OM and solid residues were further characterized by Fourier transform infrared spectroscopy and scanning electron microscopy. Our results show a linear increase in the amount of dissolved OC with increasing amounts of ultrasonic energy up to 1500 J cm-3, indicating maximum dispersion of soil aggregates at this energy level independent of soil type or land use. In contrast to Mn and Mg, the amounts of dissolved Ca, Fe, and Al increase with increasing ultrasonic energy up to 1500 J cm-3. At 1500 J cm-3, the absolute amounts of OC, Ca, Fe, and Al released were specific for each soil type, likely indicating differences in the type of OM-mineral interactions involved in micro-scale aggregation processes. The amounts of dissolved Fe and Al released after an application of 1500 J cm-3 are not related to oxalate- and dithionite-extractable or total Al content, indicating little disintegration of pedogenic oxides or clay minerals at high levels of ultrasonic energy.

  15. Intelligent multivariate process supervision

    International Nuclear Information System (INIS)

    Visuri, Pertti.

    1986-01-01

    This thesis addresses the difficulties encountered in managing large amounts of data in supervisory control of complex systems. Some previous alarm and disturbance analysis concepts are reviewed and a method for improving the supervision of complex systems is presented. The method, called multivariate supervision, is based on adding low level intelligence to the process control system. By using several measured variables linked together by means of deductive logic, the system can take into account the overall state of the supervised system. Thus, it can present to the operators fewer messages with higher information content than the conventional control systems which are based on independent processing of each variable. In addition, the multivariate method contains a special information presentation concept for improving the man-machine interface. (author)

  16. Modelling the Pultrusion Process of Off Shore Wind Turbine Blades

    DEFF Research Database (Denmark)

    Baran, Ismet

    together with the thermal and cure developments are addressed. A detailed survey on pultrusion is presented including numerical and experimental studies available in the literature since the 1980s. Keeping the multi-physics and large amount of variables involved in the pultrusion process in mind...... and shape distortions in the pultrusion process. Together these models present a thermo-chemical-mechanical model framework for the process which is unprecedented in literature. In this framework, the temperature and degree of cure fields already calculated in the thermo-chemical model are mapped...

  17. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance and the relationship between data transfer time and parallel computing time. Further, according to the features of different memory types, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
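
    A minimal CPU reference for the Laplacian sharpening step itself, assuming NumPy/SciPy; the record's contribution is the CUDA parallelization, which is not reproduced here:

```python
# Laplacian sharpening: subtract the Laplacian response from the image so edges are enhanced.
import numpy as np
from scipy.ndimage import convolve

def laplacian_sharpen(img, strength=1.0):
    """Sharpen a grayscale image (2-D float array in [0, 1])."""
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=float)
    lap = convolve(img, kernel, mode="reflect")
    return np.clip(img - strength * lap, 0.0, 1.0)

# Tiny synthetic example: a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 0.8
sharp = laplacian_sharpen(img)
print(sharp.shape, float(sharp.max()))
```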

  18. A Comparative Analysis of Extract, Transformation and Loading (ETL) Process

    Science.gov (United States)

    Runtuwene, J. P. A.; Tangkawarow, I. R. H. T.; Manoppo, C. T. M.; Salaki, R. J.

    2018-02-01

    The growth of data and information currently occurs rapidly, in varying amounts and media. This development will eventually produce large amounts of data, better known as Big Data. Business Intelligence (BI) utilizes large amounts of data and information for analysis so that important information can be obtained. This type of information can be used to support decision-making processes. In practice, a process that integrates existing data and information into a data warehouse is needed. This data integration process is known as Extract, Transformation and Loading (ETL). Many applications have been developed to carry out the ETL process, but selecting which application is most effective and efficient in terms of time, cost and effort can be a challenge. Therefore, the objective of the study was to provide a comparative analysis of the ETL process using Microsoft SQL Server Integration Services (SSIS) and using Pentaho Data Integration (PDI).
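
    Neither SSIS nor PDI is shown here; as a hedged, tool-agnostic sketch of the extract-transform-load steps being compared (file names and schema are invented):

```python
# Minimal ETL sketch: extract from CSV, transform (clean/derive fields), load into SQLite.
import csv, io, sqlite3

raw_csv = io.StringIO("order_id,amount,currency\n1, 10.5 ,usd\n2,7.25,USD\n")

# Extract
rows = list(csv.DictReader(raw_csv))

# Transform: trim whitespace, normalize currency codes, cast amounts to float.
cleaned = [
    (int(r["order_id"]), float(r["amount"].strip()), r["currency"].strip().upper())
    for r in rows
]

# Load
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, currency TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", cleaned)
print(con.execute("SELECT currency, SUM(amount) FROM orders GROUP BY currency").fetchall())
```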

  19. The Faculty Promotion Process. An Empirical Analysis of the Administration of Large State Universities.

    Science.gov (United States)

    Luthans, Fred

    One phase of academic management, the faculty promotion process, is systematically described and analyzed. The study encompasses three parts: (1) the justification of the use of management concepts in the analysis of academic administration; (2) a descriptive presentation of promotion policies and practices in 46 large state universities; and (3)…

  20. Determination of micro amounts of praseodymium by analogue derivative spectrophotometry

    International Nuclear Information System (INIS)

    Ishii, Hajime; Satoh, Katsuhiko.

    1986-01-01

    Derivative spectrophotometry using the analogue differentiation circuit was applied to the determination of praseodymium at the ppm level. By the proposed method, in which the second or fourth derivative spectrum of the characteristic absorption band of praseodymium(III) at 444 nm is measured, as little as 3 ppm of praseodymium can be determined directly and easily even in the presence of large amounts of other rare earths without any prior separation. Interferences from neodymium, samarium, dysprosium, holmium and erbium ions which have characteristic absorption bands around 444 nm can easily be removed by utilizing the isosbestic point in the derivative spectra of praseodymium(III) and the interfering rare earth(III). (author)
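
    A hedged numerical analogue of the measurement described above (the original uses an analogue differentiation circuit; here a Savitzky-Golay filter computes the second derivative of a synthetic absorption band, with invented band parameters):

```python
# Second-derivative spectrophotometry sketch: the derivative sharpens a weak band at 444 nm
# so it can be quantified against a broad, slowly varying background.
import numpy as np
from scipy.signal import savgol_filter

wavelength = np.linspace(400.0, 480.0, 801)                       # nm, 0.1 nm steps
band = 0.02 * np.exp(-0.5 * ((wavelength - 444.0) / 3.0) ** 2)    # weak Pr(III) band (synthetic)
background = 0.5 - 0.002 * (wavelength - 400.0)                   # broad interfering background
absorbance = band + background

d2 = savgol_filter(absorbance, window_length=51, polyorder=3,
                   deriv=2, delta=wavelength[1] - wavelength[0])

# The background is nearly linear, so its second derivative vanishes and the
# band's second-derivative minimum at 444 nm scales with concentration.
print("second-derivative minimum near:", wavelength[np.argmin(d2)], "nm")
```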

  1. 26 CFR 1.465-20 - Treatment of amounts borrowed from certain persons and amounts protected against loss.

    Science.gov (United States)

    2010-04-01

    ... not increase the taxpayer's amount at risk because they are borrowed from a person who has an interest in the activity other than that of a creditor or from a person who is related to a person (other than the taxpayer) who has an interest in the activity other than that of a creditor; and (2) Amounts...

  2. Research on photodiode detector-based spatial transient light detection and processing system

    Science.gov (United States)

    Liu, Meiying; Wang, Hu; Liu, Yang; Zhao, Hui; Nan, Meng

    2016-10-01

    In order to realize real-time signal identification and processing of spatial transient light, the features and energy of the captured target light signal are first described and quantitatively calculated. Considering that a transient light signal occurs randomly, has a short duration and has an evident beginning and ending, a photodiode detector-based spatial transient light detection and processing system is proposed and designed in this paper. This system has a large field of view and realizes non-imaging energy detection of random, transient and weak point targets against the complex background of the space environment. Extracting a weak signal from a strong background is difficult. Considering that the background signal changes slowly while the target signal changes quickly, a filter is adopted for background subtraction. Variable-speed sampling is realized by sampling data points at gradually increasing intervals. This resolves two dilemmas: the real-time processing of a large amount of data, and the power consumption required to store it. Test results with a self-made simulated signal demonstrate the effectiveness of the design scheme. The practical system operated reliably, and detection and processing of the target signal against a strong sunlight background was realized. The results indicate that the system can detect the target signal's characteristic waveform in real time and monitor the system's working parameters. The prototype design could be used in a variety of engineering applications.
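
    A hedged sketch of the background-subtraction idea described above: a slowly varying background is estimated with a moving-average filter and removed, and the fast transient is flagged by thresholding. All parameters and waveforms are invented, not the system's actual values:

```python
# Transient detection sketch: estimate the slow background with a moving-average filter,
# subtract it, and flag samples that rise well above the residual noise.
import numpy as np
from scipy.ndimage import uniform_filter1d

rng = np.random.default_rng(2)
n = 2000
t = np.arange(n)
background = 1.0 + 0.3 * np.sin(2 * np.pi * t / n)   # slowly varying background
signal = np.zeros(n)
signal[1000:1010] = 2.0                               # short, strong transient event
x = background + signal + rng.normal(scale=0.05, size=n)

baseline = uniform_filter1d(x, size=201)              # slow-background estimate
residual = x - baseline

threshold = 5.0 * residual[:500].std()                # noise level from a quiet stretch
hits = np.flatnonzero(residual > threshold)
print("transient detected at samples:", hits.min(), "to", hits.max())
```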

  3. ALSAN - A system for disturbance analysis by process computers

    International Nuclear Information System (INIS)

    Felkel, L.; Grumbach, R.

    1977-05-01

    The program system ALSAN has been developed to process the large number of signals that arise from a disturbance in a complex technical process, to recognize the information that is important for settling the disturbance within a minimum amount of time, and to display it to the operators. On the basis of the results, clear decisions can be made about which counteractions have to be taken. The system works in on-line, open-loop mode, and analyses disturbances autonomously as well as in dialog with the operators. (orig.) [de]

  4. Large-scale additive manufacturing with bioinspired cellulosic materials.

    Science.gov (United States)

    Sanandiya, Naresh D; Vijay, Yadunund; Dimopoulou, Marina; Dritsas, Stylianos; Fernandez, Javier G

    2018-06-05

    Cellulose is the most abundant and broadly distributed organic compound and industrial by-product on Earth. However, despite decades of extensive research, the bottom-up use of cellulose to fabricate 3D objects is still plagued with problems that restrict its practical applications: derivatives with vast polluting effects, use in combination with plastics, lack of scalability and high production cost. Here we demonstrate the general use of cellulose to manufacture large 3D objects. Our approach diverges from the common association of cellulose with green plants and it is inspired by the wall of the fungus-like oomycetes, which is reproduced introducing small amounts of chitin between cellulose fibers. The resulting fungal-like adhesive material(s) (FLAM) are strong, lightweight and inexpensive, and can be molded or processed using woodworking techniques. We believe this first large-scale additive manufacture with ubiquitous biological polymers will be the catalyst for the transition to environmentally benign and circular manufacturing models.

  5. Rare behavior of growth processes via umbrella sampling of trajectories

    Science.gov (United States)

    Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen

    2018-03-01

    We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s -ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.
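
    Not the authors' s-ensemble method, but a toy illustration of the underlying idea of biasing trajectory sampling by event count and reweighting: for a Poisson event count, exponential tilting is exact and lets rare large counts be estimated from few samples.

```python
# Toy umbrella/tilted sampling: estimate P(K >= K*) for K ~ Poisson(lam) by sampling
# from the tilted distribution Poisson(lam * e^theta) and reweighting each sample.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)
lam, k_star, theta, n = 5.0, 20, 1.0, 100_000

# Direct sampling: the event is so rare that few (or no) samples land in the tail.
direct = rng.poisson(lam, n)
print("direct estimate:", np.mean(direct >= k_star))

# Tilted sampling: bias toward large K, then undo the bias with importance weights
# w(K) = exp(-theta*K + lam*(e^theta - 1)), which equals P(K) / P_tilted(K).
tilted = rng.poisson(lam * np.exp(theta), n)
weights = np.exp(-theta * tilted + lam * np.expm1(theta))
print("tilted estimate:", np.mean((tilted >= k_star) * weights))

# Exact tail probability for comparison.
print("exact:          ", poisson.sf(k_star - 1, lam))
```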

  6. Development of polymers for large scale roll-to-roll processing of polymer solar cells

    DEFF Research Database (Denmark)

    Carlé, Jon Eggert

    Conjugated polymers' potential to both absorb light and transport current, as well as the prospect of low-cost, large-scale production, has made these kinds of materials attractive in solar cell research... The research field of polymer solar cells (PSCs) is rapidly progressing along three lines: improvement of efficiency and stability, together with the introduction of large-scale production methods. All three lines are explored in this work. The thesis describes low band gap polymers and why these are needed... Polymers of this type display broader absorption, resulting in better overlap with the solar spectrum and potentially higher current density. Synthesis, characterization and device performance of three series of polymers illustrate how the absorption spectrum of polymers can be manipulated synthetically...

  7. Modelling financial markets with agents competing on different time scales and with different amount of information

    Science.gov (United States)

    Wohlmuth, Johannes; Andersen, Jørgen Vitting

    2006-05-01

    We use agent-based models to study the competition among investors who use trading strategies with different amounts of information and different time scales. We find that mixing agents that trade on the same time scale but with different amounts of information has a stabilizing impact on the large and extreme fluctuations of the market. Traders with the most information are found to be more likely to arbitrage traders who use less information in their decision making. On the other hand, introducing investors who act on two different time scales has a destabilizing effect on the large and extreme price movements, increasing the volatility of the market. Closeness in the time scales used in decision making is found to facilitate the creation of local trends. The larger the overlap in commonly shared information, the more the traders in a mixed system with different time scales are found to profit from the presence of traders acting at another time scale than themselves.

  8. SIproc: an open-source biomedical data processing platform for large hyperspectral images.

    Science.gov (United States)

    Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David

    2017-04-10

    There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.

  9. Study of Drell-Yan process in CMS experiment at Large Hadron Collider

    CERN Document Server

    Jindal, Monika

    The proton-proton collisions at the Large Hadron Collider (LHC) mark the beginning of a new era in high energy physics. They enable the possibility of discoveries at the high-energy frontier and also allow the study of Standard Model physics with high precision. New physics discoveries and precision measurements can be achieved with highly efficient and accurate detectors like the Compact Muon Solenoid. In this thesis, we report the measurement of the differential production cross-section of the Drell-Yan process, $q\bar{q} \rightarrow Z/\gamma^{*} \rightarrow \mu^{+}\mu^{-}$, in proton-proton collisions at the center-of-mass energy $\sqrt{s} = 7$ TeV using the CMS experiment at the LHC. This measurement is based on the analysis of data corresponding to an integrated luminosity of $\int\mathcal{L}\,dt = 36.0 \pm 1.4$ pb$^{-1}$. The measurement of the production cross-section of the Drell-Yan process provides a first test of the Standard Model in a new energy domain and may reveal exotic physics processes. The Drell...

  10. A study on the quantitative model of human response time using the amount and the similarity of information

    International Nuclear Information System (INIS)

    Lee, Sung Jin

    2006-02-01

    The mental capacity to retain or recall information, or memory, is related to human performance during the processing of information. Although a large number of studies have been carried out on human performance, little is known about the similarity effect. The purpose of this study was to propose and validate a quantitative, predictive model of human response time in the user interface based on the basic concepts of information amount, similarity and degree of practice. It is difficult to explain human performance by similarity or information amount alone. There were two difficulties: constructing a quantitative model of human response time and validating the proposed model by experimental work. A quantitative model based on Hick's law, the law of practice and similarity theory was developed. The model was validated under various experimental conditions by measuring the participants' response time in the environment of a computer-based display. Human performance improved with the degree of similarity and practice in the user interface. We also found that human performance degraded with age. The proposed model may be useful for training operators who will handle such interfaces and for predicting human performance under changes of system design
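
    The study's specific similarity-based model is not reproduced here; as a hedged sketch of the two standard ingredients it builds on (Hick's law for information amount, and the power law of practice), with illustrative constants:

```python
# Standard building blocks: Hick's law (response time grows with the log of the number
# of equally likely alternatives) and the power law of practice (time falls with trials).
import math

def hick_response_time(n_alternatives, a=0.2, b=0.15):
    """Hick's law: RT = a + b * log2(n + 1), with illustrative constants a, b (seconds)."""
    return a + b * math.log2(n_alternatives + 1)

def practiced_time(base_time, trial, alpha=0.3):
    """Power law of practice: time on the N-th trial decays as N**(-alpha)."""
    return base_time * trial ** (-alpha)

for n in (1, 3, 7):
    rt = hick_response_time(n)
    print(f"{n} alternatives: first trial {rt:.2f}s, 50th trial {practiced_time(rt, 50):.2f}s")
```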

  11. Investigation of deep inelastic scattering processes involving large p$_{t}$ direct photons in the final state

    CERN Multimedia

    2002-01-01

    This experiment will investigate various aspects of photon-parton scattering and will be performed in the H2 beam of the SPS North Area with high intensity hadron beams up to 350 GeV/c. a) The directly produced photon yield in deep inelastic hadron-hadron collisions. Large p$_{t}$ direct photons from hadronic interactions are presumably the result of a simple annihilation process of quarks and antiquarks or of a QCD-Compton process. The relative contribution of the two processes can be studied by using various incident beam projectiles $\pi^{+}, \pi^{-}, p$ and, in the future, $\bar{p}$. b) The correlations between directly produced photons and their accompanying hadronic jets. We will examine events with a large p$_{t}$ direct photon for away-side jets. If jets are recognised, their properties will be investigated. Differences between a gluon and a quark jet may become observable by comparing reactions where valence quark annihilations (away-side jet originates from a gluon) dominate over the QCD-Compton...

  12. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    Directory of Open Access Journals (Sweden)

    Qianghui Zhang

    2016-07-01

    Full Text Available Free of the constraints of orbit mechanisms, weather conditions and minimum antenna area, synthetic aperture radar (SAR) equipped on a near-space platform is more suitable for sustained large-scene imaging compared with the spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), which is a novel wide-swath imaging mode and allows the beam of SAR to scan along the azimuth, can reduce the time of echo acquisition for large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, firstly, two-step processing (TSP) is adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging application.

  13. COMPUTER PROCESSING OF MICROSTRUCTURES OF IRON WITH DIFFERENT INCLUSIONS AMOUNTS OF LAMELLAR AND SPHERICAL GRAPHITE

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2013-01-01

    Full Text Available Based on the cast iron microstructures with different amounts of inclusions of lamellar and nodular graphite given in GOST 3443-87 "Cast iron with various forms of graphite. Methods for determining the structure", the paper shows the possibilities of automated quantitative analysis of the microstructures SG2, PG4, PG6, PG10, PG12 (lamellar graphite) and SHG2, SHG4, SHG6, SHG10, SHG12 (spheroidal graphite), which allows the development of methods for determining the amounts of lamellar and spherical graphite inclusions from microstructure images under the light microscope.
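
    A hedged sketch of the kind of automated quantitative analysis described above: thresholding a synthetic micrograph and measuring the graphite area fraction and inclusion count. This is not the authors' algorithm, and the image is generated, not a real micrograph:

```python
# Count dark graphite inclusions and measure their area fraction in a synthetic micrograph.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
img = np.full((256, 256), 0.85) + rng.normal(scale=0.02, size=(256, 256))  # bright matrix

# Paint a few dark circular "nodules" (stand-ins for spheroidal graphite).
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx, r in [(60, 70, 12), (150, 180, 9), (200, 60, 15), (90, 200, 7)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 0.2

mask = img < 0.5                         # graphite appears dark under the light microscope
labels, count = ndimage.label(mask)      # connected-component labelling of inclusions
area_fraction = mask.mean() * 100.0

print(f"inclusions found: {count}, graphite area fraction: {area_fraction:.2f}%")
```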

  14. Some Examples of Residence-Time Distribution Studies in Large-Scale Chemical Processes by Using Radiotracer Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bullock, R. M.; Johnson, P.; Whiston, J. [Imperial Chemical Industries Ltd., Billingham, Co., Durham (United Kingdom)

    1967-06-15

    The application of radiotracers to determine flow patterns in chemical processes is discussed with particular reference to the derivation of design data from model reactors for translation to large-scale units, the study of operating efficiency and design attainment in established plant, and the rapid identification of various types of process malfunction. The requirements governing the selection of tracers for various types of media are considered, and an example is given of the testing of the behaviour of a typical tracer before use in a particular large-scale process operating at 250 atm and 200°C. Information which may be derived from flow patterns is discussed, including the determination of mixing parameters, gas hold-up in gas/liquid reactions and the detection of channelling and stagnant regions. Practical results and their interpretation are given in relation to an olefine hydroformylation reaction system, a process for the conversion of propylene to isopropanol, a moving bed catalyst system for the isomerization of xylenes and a three-stage gas-liquid reaction system. The use of mean residence-time data for the detection of leakage between reaction vessels and a heat interchanger system is given as an example of the identification of process malfunction. (author)

  15. A large-scale circuit mechanism for hierarchical dynamical processing in the primate cortex

    OpenAIRE

    Chaudhuri, Rishidev; Knoblauch, Kenneth; Gariel, Marie-Alice; Kennedy, Henry; Wang, Xiao-Jing

    2015-01-01

    We developed a large-scale dynamical model of the macaque neocortex, which is based on recently acquired directed- and weighted-connectivity data from tract-tracing experiments, and which incorporates heterogeneity across areas. A hierarchy of timescales naturally emerges from this system: sensory areas show brief, transient responses to input (appropriate for sensory processing), whereas association areas integrate inputs over time and exhibit persistent activity (suitable for decision-makin...

  16. Extraterrestrial processing and manufacturing of large space systems, volume 1, chapters 1-6

    Science.gov (United States)

    Miller, R. H.; Smith, D. B. S.

    1979-01-01

    Space program scenarios for production of large space structures from lunar materials are defined. The concept of the space manufacturing facility (SMF) is presented. The manufacturing processes and equipment for the SMF are defined and the conceptual layouts are described for the production of solar cells and arrays, structures and joints, conduits, waveguides, RF equipment radiators, wire cables, and converters. A 'reference' SMF was designed and its operation requirements are described.

  17. Large earthquake rupture process variations on the Middle America megathrust

    Science.gov (United States)

    Ye, Lingling; Lay, Thorne; Kanamori, Hiroo

    2013-11-01

    The megathrust fault between the underthrusting Cocos plate and overriding Caribbean plate recently experienced three large ruptures: the August 27, 2012 (Mw 7.3) El Salvador; September 5, 2012 (Mw 7.6) Costa Rica; and November 7, 2012 (Mw 7.4) Guatemala earthquakes. All three events involve shallow-dipping thrust faulting on the plate boundary, but they had variable rupture processes. The El Salvador earthquake ruptured from about 4 to 20 km depth, with a relatively large centroid time of ˜19 s, low seismic moment-scaled energy release, and a depleted teleseismic short-period source spectrum similar to that of the September 2, 1992 (Mw 7.6) Nicaragua tsunami earthquake that ruptured the adjacent shallow portion of the plate boundary. The Costa Rica and Guatemala earthquakes had large slip in the depth range 15 to 30 km, and more typical teleseismic source spectra. Regional seismic recordings have higher short-period energy levels for the Costa Rica event relative to the El Salvador event, consistent with the teleseismic observations. A broadband regional waveform template correlation analysis is applied to categorize the focal mechanisms for larger aftershocks of the three events. Modeling of regional wave spectral ratios for clustered events with similar mechanisms indicates that interplate thrust events have corner frequencies, normalized by a reference model, that increase down-dip from anomalously low values near the Middle America trench. Relatively high corner frequencies are found for thrust events near Costa Rica; thus, variations along strike of the trench may also be important. Geodetic observations indicate trench-parallel motion of a forearc sliver extending from Costa Rica to Guatemala, and low seismic coupling on the megathrust has been inferred from a lack of boundary-perpendicular strain accumulation. The slip distributions and seismic radiation from the large regional thrust events indicate relatively strong seismic coupling near Nicoya, Costa

  18. A novel two-level dynamic parallel data scheme for large 3-D SN calculations

    International Nuclear Information System (INIS)

    Sjoden, G.E.; Shedlock, D.; Haghighat, A.; Yi, C.

    2005-01-01

    We introduce a new dynamic parallel memory optimization scheme for executing large scale 3-D discrete ordinates (Sn) simulations on distributed memory parallel computers. In order for parallel transport codes to be truly scalable, they must use parallel data storage, where only the variables that are locally computed are locally stored. Even with parallel data storage for the angular variables, cumulative storage requirements for large discrete ordinates calculations can be prohibitive. To address this problem, Memory Tuning has been implemented into the PENTRAN 3-D parallel discrete ordinates code as an optimized, two-level ('large' array, 'small' array) parallel data storage scheme. Memory Tuning can be described as the process of parallel data memory optimization. Memory Tuning dynamically minimizes the amount of required parallel data in allocated memory on each processor using a statistical sampling algorithm. This algorithm is based on the integral average and standard deviation of the number of fine meshes contained in each coarse mesh in the global problem. Because PENTRAN only stores the locally computed problem phase space, optimal two-level memory assignments can be unique on each node, depending upon the parallel decomposition used (hybrid combinations of angular, energy, or spatial). As demonstrated in the two large discrete ordinates models presented (a storage cask and an OECD MOX Benchmark), Memory Tuning can save a substantial amount of memory per parallel processor, allowing one to accomplish very large scale Sn computations. (authors)

  19. Large-Scale Sentinel-1 Processing for Solid Earth Science and Urgent Response using Cloud Computing and Machine Learning

    Science.gov (United States)

    Hua, H.; Owen, S. E.; Yun, S. H.; Agram, P. S.; Manipon, G.; Starch, M.; Sacco, G. F.; Bue, B. D.; Dang, L. B.; Linick, J. P.; Malarout, N.; Rosen, P. A.; Fielding, E. J.; Lundgren, P.; Moore, A. W.; Liu, Z.; Farr, T.; Webb, F.; Simons, M.; Gurrola, E. M.

    2017-12-01

    With the increased availability of open SAR data (e.g. Sentinel-1 A/B), new challenges are being faced with processing and analyzing the voluminous SAR datasets to make geodetic measurements. Upcoming SAR missions such as NISAR are expected to generate close to 100 TB per day. The Advanced Rapid Imaging and Analysis (ARIA) project can now generate geocoded unwrapped phase and coherence products from Sentinel-1 TOPS mode data in an automated fashion, using the ISCE software. This capability is currently being exercised on various study sites across the United States and around the globe, including Hawaii, Central California, Iceland and South America. The automated and large-scale SAR data processing and analysis capabilities use cloud computing techniques to speed the computations and provide scalable processing power and storage. Aspects such as how to process these voluminous SLCs and interferograms at global scales, how to keep up with the large daily SAR data volumes, and how to handle the voluminous data rates are being explored. Scene-partitioning approaches in the processing pipeline help in handling global-scale processing up to unwrapped interferograms with stitching done at a late stage. We have built an advanced science data system with rapid search functions to enable access to the derived data products. Rapid image processing of Sentinel-1 data to interferograms and time series is already being applied to natural hazards including earthquakes, floods, volcanic eruptions, and land subsidence due to fluid withdrawal. We will present the status of the ARIA science data system for generating science-ready data products and challenges that arise from being able to process SAR datasets to derived time series data products at large scales. For example, how do we perform large-scale data quality screening on interferograms? What approaches can be used to minimize compute, storage, and data movement costs for time series analysis in the cloud? We will also

  20. The Processing Using Memory Paradigm:In-DRAM Bulk Copy, Initialization, Bitwise AND and OR

    OpenAIRE

    Seshadri, Vivek; Mutlu, Onur

    2016-01-01

    In existing systems, the off-chip memory interface allows the memory controller to perform only read or write operations. Therefore, to perform any operation, the processor must first read the source data and then write the result back to memory after performing the operation. This approach incurs high latency and consumes substantial bandwidth and energy for operations that work on a large amount of data. Several works have proposed techniques to process data near memory by adding a small amount of compute logic...

  1. 12 CFR 347.120 - Computation of investment amounts.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Computation of investment amounts. 347.120... GENERAL POLICY INTERNATIONAL BANKING § 347.120 Computation of investment amounts. In computing the amount that may be invested in any foreign organization under §§ 347.117 through 347.119, any investments held...

  2. 7 CFR 1710.107 - Amount lent for acquisitions.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Amount lent for acquisitions. 1710.107 Section 1710... GUARANTEES Loan Purposes and Basic Policies § 1710.107 Amount lent for acquisitions. The maximum amount that will be lent for an acquisition is limited to the value of the property, as determined by RUS. If the...

  3. Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge

    Science.gov (United States)

    Park, Heon-Joon; Lee, Changyeol

    2017-04-01

    Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted numerous times. Among the controlling factors, the gravitational acceleration (g) acting on the scale models was regarded as a constant (Earth's gravity) in most of the analogue model studies, and only a few model studies considered larger gravitational accelerations by using a centrifuge (an apparatus generating a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow a large scale-down and accelerate deformation driven by density differences, such as salt diapirism, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST), allows a large surface area of the scale models, up to 70 by 70 cm, under a maximum capacity of 240 g-tons. Using the centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of a back-arc basin. Acknowledgement This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant number 2014R1A6A3A04056405).
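    For readers unfamiliar with why a centrifuge permits such a large scale-down, the standard geotechnical scaling argument can be written as follows; these are textbook relations, not statements taken from the abstract above:

```latex
% Standard centrifuge scaling: lengths reduced by a factor N, model spun at N g.
L_{\mathrm{model}} = \frac{L_{\mathrm{prototype}}}{N}, \qquad
a_{\mathrm{model}} = N\, g, \qquad
\sigma_{\mathrm{model}} = \rho\,(N g)\,\frac{h}{N} = \rho\, g\, h = \sigma_{\mathrm{prototype}},
% so self-weight stresses at homologous points match the full-scale prototype,
% which is what allows a small model to reproduce gravity-driven deformation.
```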

  4. 24 CFR 232.565 - Maximum loan amount.

    Science.gov (United States)

    2010-04-01

    ... URBAN DEVELOPMENT MORTGAGE AND LOAN INSURANCE PROGRAMS UNDER NATIONAL HOUSING ACT AND OTHER AUTHORITIES MORTGAGE INSURANCE FOR NURSING HOMES, INTERMEDIATE CARE FACILITIES, BOARD AND CARE HOMES, AND ASSISTED... Fire Safety Equipment Eligible Security Instruments § 232.565 Maximum loan amount. The principal amount...

  5. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2018-05-01

    Full Text Available Computing speed is a significant issue for large-scale flood simulations used in real-time response for disaster prevention and mitigation. Even today, most large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computation necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based, high-performance computing method using OpenACC was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transportation between the GPU and CPU (Central Processing Unit) with minimum overhead, and then both computation and data were offloaded from the CPU to the GPU, which exploited the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas and, thus, has a bright application prospect for dynamic inundation risk identification and disaster assessment.
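    A minimal sketch of the OpenACC offload pattern described above might look like the following in C; it is illustrative only, and the update rule, array names and data layout are placeholders rather than the paper's actual scheme:

```c
/* Sketch of offloading a per-cell shallow-water update with OpenACC directives.
 * The physics here is a placeholder (simple explicit depth update), not the
 * unstructured Godunov-type scheme of the paper. */
#include <stddef.h>

void update_water_depth(size_t ncell, const double *div_flux, const double *rain,
                        double *h, double dt)
{
    /* Keep the arrays resident on the GPU for the duration of the update and
       parallelize the loop over cells, minimizing CPU-GPU transfers. */
    #pragma acc data copyin(div_flux[0:ncell], rain[0:ncell]) copy(h[0:ncell])
    {
        #pragma acc parallel loop
        for (size_t i = 0; i < ncell; ++i) {
            double hnew = h[i] + dt * (rain[i] - div_flux[i]); /* explicit update   */
            h[i] = hnew > 0.0 ? hnew : 0.0;                    /* keep depth >= 0   */
        }
    }
}
```

    In a full solver the data region would normally enclose the whole time loop so that cell arrays stay resident on the GPU between steps, which is the kind of CPU-GPU transfer control the unstructured data management method aims at.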

  6. Structural and chemical analysis of process residue from biochemical conversion of wheat straw (Triticum aestivum L.) to ethanol

    DEFF Research Database (Denmark)

    Hansen, Mads Anders Tengstedt; Jørgensen, Henning; Laursen, Kristian Holst

    2013-01-01

    Biochemical conversion of lignocellulose to fermentable carbohydrates for ethanol production is now being implemented in large-scale industrial production. Applying hydrothermal pretreatment and enzymatic hydrolysis for the conversion process, a residue containing substantial amounts of lignin...

  7. Influences of large-scale convection and moisture source on monthly precipitation isotope ratios observed in Thailand, Southeast Asia

    Science.gov (United States)

    Wei, Zhongwang; Lee, Xuhui; Liu, Zhongfang; Seeboonruang, Uma; Koike, Masahiro; Yoshimura, Kei

    2018-04-01

    Many paleoclimatic records in Southeast Asia rely on rainfall isotope ratios as proxies for past hydroclimatic variability. However, the physical processes controlling modern rainfall isotopic behavior in the region are poorly constrained. Here, we combined isotopic measurements at six sites across Thailand with an isotope-incorporated atmospheric circulation model (IsoGSM) and the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model to investigate the factors that govern the variability of precipitation isotope ratios in this region. Results show that rainfall isotope ratios are correlated with both local rainfall amount and regional outgoing longwave radiation, suggesting that rainfall isotope ratios in this region are controlled not only by the local rain amount (the amount effect) but also by large-scale convection. As a transition zone between the Indian monsoon and the western North Pacific monsoon, the spatial differences in observed precipitation isotopes among sites are associated with moisture source. These results highlight the importance of regional processes in determining rainfall isotope ratios in the tropics and provide constraints on the interpretation of paleo-precipitation isotope records in the context of regional climate dynamics.

  8. Plasma processing of large curved surfaces for superconducting rf cavity modification

    Directory of Open Access Journals (Sweden)

    J. Upadhyay

    2014-12-01

    Full Text Available Plasma-based surface modification of niobium is a promising alternative to wet etching of superconducting radio frequency (SRF cavities. We have demonstrated surface layer removal in an asymmetric nonplanar geometry, using a simple cylindrical cavity. The etching rate is highly correlated with the shape of the inner electrode, radio-frequency (rf circuit elements, gas pressure, rf power, chlorine concentration in the Cl_{2}/Ar gas mixtures, residence time of reactive species, and temperature of the cavity. Using variable radius cylindrical electrodes, large-surface ring-shaped samples, and dc bias in the external circuit, we have measured substantial average etching rates and outlined the possibility of optimizing plasma properties with respect to maximum surface processing effect.

  9. Subpixelic measurement of large 1D displacements: principle, processing algorithms, performances and software.

    Science.gov (United States)

    Guelpa, Valérian; Laurent, Guillaume J; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric

    2014-03-12

    This paper presents a visual measurement method able to sense 1D rigid body displacements with very high resolutions, large ranges and high processing rates. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations (leading to high resolution), while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 µs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated, which has to be compared with the 168 µm measurement range.
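    The ambiguity-removal idea can be summarized with the standard two-period (vernier) relation; the symbols below are assumed for illustration and are not taken from the paper:

```latex
% Vernier-type ambiguity removal with two grid periods p_1 and p_2:
% each grid yields a wrapped phase \varphi_k = 2\pi x / p_k \ (\mathrm{mod}\ 2\pi),
% and the phase difference repeats only over the much longer synthetic period
\Lambda = \frac{p_1\, p_2}{\lvert p_1 - p_2 \rvert}, \qquad
x \;\equiv\; \frac{\Lambda}{2\pi}\,(\varphi_1 - \varphi_2) \pmod{\Lambda},
% which fixes the integer fringe order; x is then refined from a single
% high-resolution phase, giving the large range-to-resolution ratio.
```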

  10. Event processing time prediction at the CMS experiment of the Large Hadron Collider

    International Nuclear Information System (INIS)

    Cury, Samir; Gutsche, Oliver; Kcira, Dorian

    2014-01-01

    The physics event reconstruction is one of the biggest challenges for the computing of the LHC experiments. Among the different tasks that the computing systems of the CMS experiment perform, reconstruction takes most of the available CPU resources. The reconstruction time of single collisions varies according to event complexity. Measurements were made to quantify this correlation, providing a means to predict the reconstruction time based on the data-taking conditions of the input samples. Currently the data processing system splits tasks into groups with the same number of collisions and does not account for variations in the processing time. These variations can be large and can lead to a considerable increase in the time it takes for CMS workflows to finish. The goal of this study was to use estimates of processing time to split the workflow into jobs more efficiently. By considering the CPU time needed for each job, the spread of the job-length distribution in a workflow is reduced.
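    A minimal sketch of splitting by estimated CPU time rather than by a fixed number of collisions is shown below; it is a simple greedy rule with assumed names, not the CMS production logic:

```c
/* Illustrative greedy splitter: group events into jobs by estimated CPU time
 * instead of a fixed event count, so jobs have comparable wall-clock lengths. */
#include <stddef.h>

/* est_sec: predicted processing time per event; job_start: output array of size
 * at least n_events holding the first event index of each job.
 * Returns the number of jobs created. */
size_t split_by_cpu(const double *est_sec, size_t n_events,
                    double target_sec_per_job, size_t *job_start)
{
    size_t njobs = 0;
    double acc = 0.0;
    for (size_t i = 0; i < n_events; ++i) {
        if (i == 0 || acc + est_sec[i] > target_sec_per_job) {
            job_start[njobs++] = i;   /* open a new job when the budget would overflow */
            acc = 0.0;
        }
        acc += est_sec[i];
    }
    return njobs;
}
```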

  11. 24 CFR 891.525 - Amount and terms of financing.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Amount and terms of financing. 891... Handicapped-Section 8 Assistance § 891.525 Amount and terms of financing. (a) The amount of financing approved... financing provided shall not exceed the lesser of: (1) The dollar amounts stated in paragraphs (b) through...

  12. High-Rate Fabrication of a-Si-Based Thin-Film Solar Cells Using Large-Area VHF PECVD Processes

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Xunming [University of Toledo; Fan, Qi Hua

    2011-12-31

    The University of Toledo (UT), working in concert with its a-Si-based PV industry partner Xunlight Corporation (Xunlight), has conducted a comprehensive study to develop a large-area (3 ft × 3 ft) VHF PECVD system for high-rate, uniform fabrication of silicon absorber layers, and the large-area VHF PECVD processes to achieve high-performance a-Si/a-SiGe or a-Si/nc-Si tandem-junction solar cells, during the period of July 1, 2008 to Dec. 31, 2011, under DOE Award No. DE-FG36-08GO18073. The project had two primary goals: (i) to develop and improve a large-area (3 ft × 3 ft) VHF PECVD system for high-rate fabrication of >= 8 Å/s a-Si and >= 20 Å/s nc-Si or 4 Å/s a-SiGe absorber layers with high uniformity in film thickness and in material structure; and (ii) to develop and optimize the large-area VHF PECVD processes to achieve high-performance a-Si/nc-Si or a-Si/a-SiGe tandem-junction solar cells with >= 10% stable efficiency. Our work has met these goals and is summarized in “Accomplishments versus goals and objectives”.

  13. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    Science.gov (United States)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are an order of magnitude larger than those of present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require fast turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We will present how we enabled high-tolerance computing in order to achieve large-scale computing as well as operational cost savings.

  14. Superior PSZ-SOD Gap-Fill Process Integration Using Ultra-Low Dispensation Amount in STI for 28 nm NAND Flash Memory and Beyond

    Directory of Open Access Journals (Sweden)

    Chun Chi Lai

    2015-01-01

    Full Text Available The gap-fill performance and process of perhydropolysilazane-based inorganic spin-on dielectric (PSZ-SOD) film in shallow trench isolation (STI) with an ultra-low dispensation amount of PSZ-SOD solution have been investigated in this study. A PSZ-SOD film process includes liner deposition, PSZ-SOD coating, and furnace curing. For liner deposition, a hydrophilic surface is required to improve the contact angle and gap-fill capability of the PSZ-SOD coating. Prior to PSZ-SOD coating, an additional treatment of the liner surface is beneficial for the fluidity of the PSZ-SOD solution. Superior film thickness uniformity and gap-fill performance of the PSZ-SOD film are achieved due to the improved fluidity of the PSZ-SOD solution. In addition, the low dispensation rate of the PSZ-SOD solution leads to more PSZ-SOD filling the trenches. After PSZ-SOD coating, a high thermal curing process efficiently promotes conversion of the PSZ-SOD film into silicon oxide. Adequate conversion from PSZ-SOD into silicon oxide further increases the etching resistance inside the trenches. Integrating the above sequence of optimized factors, void-free gap-fill and well-controlled STI recess uniformity are achieved even when the PSZ-SOD solution dispensation volume is reduced 3 to 6 times compared with the conventional condition, for the 28 nm node NAND flash and beyond.

  15. Process mining in the large : a tutorial

    NARCIS (Netherlands)

    Aalst, van der W.M.P.; Zimányi, E.

    2014-01-01

    Recently, process mining emerged as a new scientific discipline on the interface between process models and event data. On the one hand, conventional Business Process Management (BPM) and Workflow Management (WfM) approaches and tools are mostly model-driven with little consideration for event data.

  16. A framework for the direct evaluation of large deviations in non-Markovian processes

    International Nuclear Information System (INIS)

    Cavallaro, Massimo; Harris, Rosemary J

    2016-01-01

    We propose a general framework to simulate stochastic trajectories with arbitrarily long memory dependence and efficiently evaluate large deviation functions associated to time-extensive observables. This extends the ‘cloning’ procedure of Giardiná et al (2006 Phys. Rev. Lett. 96 120603) to non-Markovian systems. We demonstrate the validity of this method by testing non-Markovian variants of an ion-channel model and the totally asymmetric exclusion process, recovering results obtainable by other means. (letter)
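    For context, the quantities that such cloning simulations estimate are the standard large deviation objects; these are textbook definitions, not the paper's non-Markovian construction:

```latex
% For a time-extensive observable A_T accumulated over a trajectory of length T:
P\!\left(\tfrac{A_T}{T} \approx a\right) \asymp e^{-T\, I(a)}, \qquad
\lambda(s) = \lim_{T\to\infty} \tfrac{1}{T} \ln \mathbb{E}\!\left[ e^{s A_T} \right],
\qquad
I(a) = \sup_{s} \{\, s a - \lambda(s) \,\},
% where the Legendre transform holds when \lambda is differentiable; cloning
% algorithms estimate \lambda(s) from the growth rate of a population of copies
% evolving under s-tilted dynamics.
```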

  17. An Analysis of the Number of Medical Malpractice Claims and Their Amounts.

    Directory of Open Access Journals (Sweden)

    Marco Bonetti

    Full Text Available Starting from an extensive database, pooling 9 years of data from the top three insurance brokers in Italy, and containing 38125 reported claims due to alleged cases of medical malpractice, we use an inhomogeneous Poisson process to model the number of medical malpractice claims in Italy. The intensity of the process is allowed to vary over time, and it depends on a set of covariates, like the size of the hospital, the medical department and the complexity of the medical operations performed. We choose the combination medical department by hospital as the unit of analysis. Together with the number of claims, we also model the associated amounts paid by insurance companies, using a two-stage regression model. In particular, we use logistic regression for the probability that a claim is closed with a zero payment, whereas, conditionally on the fact that an amount is strictly positive, we make use of lognormal regression to model it as a function of several covariates. The model produces estimates and forecasts that are relevant to both insurance companies and hospitals, for quality assurance, service improvement and cost reduction.
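    A generic two-part specification consistent with this description is sketched below; the notation is assumed here for illustration and is not taken from the paper:

```latex
% Number of claims for unit i (a department-by-hospital combination) over time,
% with a covariate-dependent, time-varying intensity:
N_i(t) \sim \mathrm{Poisson}\!\left( \int_0^t \lambda_i(u)\, du \right), \qquad
\lambda_i(t) = \exp\!\big( \alpha(t) + \mathbf{x}_i^{\top} \boldsymbol{\beta} \big),
% Two-stage model for the paid amount Y_{ij} of claim j in unit i:
\operatorname{logit} P(Y_{ij} = 0) = \mathbf{z}_{ij}^{\top} \boldsymbol{\gamma}, \qquad
\log Y_{ij} \mid Y_{ij} > 0 \;\sim\; \mathcal{N}\!\big( \mathbf{z}_{ij}^{\top} \boldsymbol{\delta},\ \sigma^2 \big).
```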

  18. The peculiarities of large intron splicing in animals.

    Directory of Open Access Journals (Sweden)

    Samuel Shepard

    Full Text Available In mammals a considerable 92% of genes contain introns, with hundreds and hundreds of these introns reaching the incredible size of over 50,000 nucleotides. These "large introns" must be spliced out of the pre-mRNA in a timely fashion, which involves bringing together distant 5' and 3' acceptor and donor splice sites. In invertebrates, especially Drosophila, it has been shown that larger introns can be spliced efficiently through a process known as recursive splicing-a consecutive splicing from the 5'-end at a series of combined donor-acceptor splice sites called RP-sites. Using a computational analysis of the genomic sequences, we show that vertebrates lack the proper enrichment of RP-sites in their large introns, and, therefore, require some other method to aid splicing. We analyzed over 15,000 non-redundant, large introns from six mammals, 1,600 from chicken and zebrafish, and 560 non-redundant large introns from five invertebrates. Our bioinformatic investigation demonstrates that, unlike the studied invertebrates, the studied vertebrate genomes contain consistently abundant amounts of direct and complementary strand interspersed repetitive elements (mainly SINEs and LINEs that may form stems with each other in large introns. This examination showed that predicted stems are indeed abundant and stable in the large introns of mammals. We hypothesize that such stems with long loops within large introns allow intron splice sites to find each other more quickly by folding the intronic RNA upon itself at smaller intervals and, thus, reducing the distance between donor and acceptor sites.

  19. Nitrogen And Oxygen Amount In Weld After Welding With Micro-Jet Cooling

    Directory of Open Access Journals (Sweden)

    Węgrzyn T.

    2015-06-01

    Full Text Available Micro-jet cooling after welding was tested only for the MIG welding process with argon, helium and nitrogen as shielding gases. The paper presents information about the nitrogen and oxygen content in the weld after micro-jet cooling. Information is given about gases that could be chosen both for MIG/MAG welding and for the micro-jet process, and about the influence of various micro-jet gases on the metallographic structure of steel welds. Mechanical properties of the weld are presented in terms of the nitrogen and oxygen amounts in the WMD (weld metal deposit).

  20. Cyclic process for re-use of waste water generated during the production of UO2

    International Nuclear Information System (INIS)

    Crossley, T.J.

    1976-01-01

    The process is described whereby waste water produced during the hydrolysis and ammonium hydroxide treatment of UF6 to produce ammonium diuranate is recycled for reuse. The solution containing large amounts of ammonia and fluorides and trace amounts of uranium is first treated with lime to precipitate the fluoride. The ammonia is distilled off and recycled to the UO2F2 treatment vessel. The CaF2 precipitate is separated by centrifugation and the aqueous portion is passed through cationic exchange beds.

  1. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    International Nuclear Information System (INIS)

    Arbanas, G.; Dunn, M.E.; Wiarda, D.

    2011-01-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The 235U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)
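    As an illustration of delegating the time-critical product to an optimized library, a call through the CBLAS interface (as provided, for example, by Intel MKL) looks as follows; the wrapper name and the row-major layout are assumptions, and a cuBLAS call on the GPU follows the same pattern:

```c
/* Sketch of replacing a triple-nested loop with an optimized BLAS GEMM call. */
#include <cblas.h>

/* C = A * B with A (m x k), B (k x n), C (m x n), all dense row-major arrays. */
void rpcm_multiply(int m, int n, int k,
                   const double *A, const double *B, double *C)
{
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k,
                1.0, A, k,   /* alpha, A, lda */
                     B, n,   /* B, ldb        */
                0.0, C, n);  /* beta, C, ldc  */
}
```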

  2. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    Energy Technology Data Exchange (ETDEWEB)

    Arbanas, G.; Dunn, M.E.; Wiarda, D., E-mail: arbanasg@ornl.gov, E-mail: dunnme@ornl.gov, E-mail: wiardada@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2011-07-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The 235U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)

  3. Landspotting: Social gaming to collect vast amounts of data for satellite validation

    Science.gov (United States)

    Fritz, S.; Purgathofer, P.; Kayali, F.; Fellner, M.; Wimmer, M.; Sturn, T.; Triebnig, G.; Krause, S.; Schindler, F.; Kollegger, M.; Perger, C.; Dürauer, M.; Haberl, W.; See, L.; McCallum, I.

    2012-04-01

    At present there is no single satellite-derived global land cover product that is accurate enough to provide reliable estimates of forest or cropland area to determine, e.g., how much additional land is available to grow biofuels or to tackle problems of food security. The Landspotting Project aims to improve the quality of this land cover information by vastly increasing the amount of in-situ validation data available for calibration and validation of satellite-derived land cover. The Geo-Wiki (Geo-Wiki.org) system currently allows users to compare three satellite-derived land cover products and validate them using Google Earth. However, there has so far been no incentive for anyone to provide this data, so the amount of validation through Geo-Wiki has been limited. Recent competitions have proven, though, that incentive-driven campaigns can rapidly create large amounts of input. The Landspotting Project is taking a truly innovative approach through the development of the Landspotting game. The game engages users whilst simultaneously collecting a large amount of in-situ land cover information. The development of the game is informed by the current raft of successful social gaming that is available on the internet and as mobile applications, many of which are geo-spatial in nature. Games that are integrated within a social networking site such as Facebook illustrate the power to reach and continually engage a large number of individuals. The number of active Facebook users is estimated to be greater than 400 million, where 100 million are accessing Facebook from mobile devices. The Landspotting game has game mechanics similar to the famous strategy game "Civilization" (i.e. build, harvest, research, war, diplomacy, etc.). When a player wishes to make a settlement, they must first classify the land cover over the area they wish to settle. As the game is played on the Earth's surface with Google Maps, we are able to record and store this land cover/land use classification

  4. Separation and preconcentration of trace amounts of gold using modified organo nanoclay Cloisite 15A

    Directory of Open Access Journals (Sweden)

    Sayed Zia Mohammadi

    2010-01-01

    Full Text Available The application of a 5-(4-dimethylamino-benzylidene)rhodanine-immobilized organo nanoclay as a new, easily prepared, and stable solid sorbent for preconcentration of trace amounts of Au(III) ions in aqueous solution is presented. The sorption of Au(III) ions was quantitative in the pH range of 2-4, and quantitative desorption occurred instantaneously with 10.0 mL of a mixture containing 0.5 mol L-1 Na2S2O3 and KSCN. Various parameters, such as the effect of pH, breakthrough volume, extraction time, and interference of a large number of anions and cations, have been studied. The proposed method has been applied for the determination of trace amounts of gold in water samples.

  5. Utilization of the MPI Process for in-tank solidification of heel material in large-diameter cylindrical tanks

    Energy Technology Data Exchange (ETDEWEB)

    Kauschinger, J.L.; Lewis, B.E.

    2000-01-01

    A major problem faced by the US Department of Energy is remediation of sludge and supernatant waste in underground storage tanks. Exhumation of the waste is currently the preferred remediation method. However, exhumation cannot completely remove all of the contaminated materials from the tanks. For large-diameter tanks, amounts of highly contaminated "heel" material approaching 20,000 gal can remain. Often sludge containing zeolite particles leaves "sand bars" of locally contaminated material across the floor of the tank. The best management practices for in-tank treatment (stabilization and immobilization) of wastes require an integrated approach to develop appropriate treatment agents that can be safely delivered and mixed uniformly with sludge. Ground Environmental Services has developed and demonstrated a remotely controlled, high-velocity jet delivery system termed Multi-Point-Injection (MPI). This robust jet delivery system has been field-deployed to create homogeneous monoliths containing shallow buried miscellaneous waste in trenches [fiscal year (FY) 1995] and surrogate sludge in cylindrical (FY 1998) and long, horizontal tanks (FY 1999). During the FY 1998 demonstration, the MPI process successfully formed a 32-ton uniform monolith of grout and waste surrogates in about 8 min. Analytical data indicated that 10 tons of zeolite-type physical surrogate were uniformly mixed within a 40-in.-thick monolith without lifting the MPI jetting tools off the tank floor. Over 1,000 lb of cohesive surrogates, with consistencies similar to Gunite and Associated Tank (GAAT) TH-4 and Hanford tank sludges, were easily intermixed into the monolith without exceeding a core temperature of 100°F during curing.

  6. The Effects of Cultural Transmission Are Modulated by the Amount of Information Transmitted

    Science.gov (United States)

    Griffiths, Thomas L.; Lewandowsky, Stephan; Kalish, Michael L.

    2013-01-01

    Information changes as it is passed from person to person, with this process of cultural transmission allowing the minds of individuals to shape the information that they transmit. We present mathematical models of cultural transmission which predict that the amount of information passed from person to person should affect the rate at which that…

  7. The review of recent carbonate minerals processing technology

    Science.gov (United States)

    Solihin

    2018-02-01

    Carbonate is one of the groups of minerals that can be found in relatively large amounts in the Earth's crust. The common carbonate minerals are calcium carbonate (calcite or aragonite, depending on its crystal structure), magnesium carbonate (magnesite), calcium-magnesium carbonate (dolomite), and barium carbonate (barite). A large amount of calcite can be found in many places in Indonesia such as Padalarang, Sukabumi, and Tasikmalaya (West Java Province). Dolomite can be found in large amounts in Gresik, Lamongan, and Tuban (East Java Province). Magnesite is quite rare in Indonesia, and until recent years it could only be found on Padamarang Island (South East Sulawesi Province). These carbonates have been exploited through open-pit mining. Traditionally, calcite can be ground to produce material for brick production, carved to produce craft products, or roasted to produce lime for many applications such as raw material for cement, flux for metal smelting, etc. Meanwhile, dolomite has traditionally been used as a raw material to make bricks for local buildings and to make fertilizer for coconut oil plants. Carbonate minerals actually contain important elements needed by modern applications. Calcium is one of the elements needed in artificial bone formation, slow-release fertilizer synthesis, dielectric material production, etc. Magnesium is an important material in the automotive industry for producing the alloys used in main vehicle parts. It is also used as an alloying element in the production of special steels for special purposes. Magnesium oxide can be used to produce slow-release fertilizer, catalysts and other modern applications. The aim of this review article is to present in brief the recent technology for processing carbonate minerals. This review covers both technology that has been industrially proven and technology that is still in the research and development stage. One of the industrially proven technologies to process carbonate mineral is

  8. Derivation of total ozone amounts over Japan from NOAA/TOVS data

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, S; Taguchi, M; Okano, S; Fukunishi, H [Tohoku University, Sendai (Japan). Upper Atmosphere and Space Research Laboratory; Kawamura, H [Tohoku Univ., Sendai (Japan). Center for Atmospheric and Oceanic Studies

    1992-10-25

    A new method was developed for deriving the horizontal distribution of total ozone amounts from the brightness temperature data obtained by the HIRS/2 sensor on board the NOAA satellites. This method is based on a regression approach considering the transmittance of the ozone layer, and also includes second-order terms of the brightness temperatures and the ozone-layer transmittance in the regression calculation. The total ozone data obtained by TOMS were used as the true values in determining the regression coefficients. The transmittance for the slantwise-looking condition was converted into that for the nadir-looking condition using an angle correction method. Subsequently, the angle correction was also applied to the brightness temperature using the corrected transmittance. Horizontal distributions of total ozone amounts were derived by this method with an accuracy of around 4% for the wide latitudinal region from 15° to 60°, including Japan, where total ozone varies greatly with latitude. It was demonstrated that inclusion of the second-order terms in the regression improves the accuracy of retrieval, especially in the low-latitude regions. 15 refs., 5 figs., 1 tab.
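    The retrieval regression sketched in the abstract can be written generically as follows; the symbols are assumed for illustration, and the exact set of channels and cross terms used in the paper may differ:

```latex
% Total ozone \Omega from HIRS/2 brightness temperatures T_i and an ozone-layer
% transmittance \tau, including second-order terms, with coefficients fitted
% against collocated TOMS total ozone taken as truth:
\Omega = a_0 + \sum_i a_i\, T_i + b\, \tau
       + \sum_{i \le j} c_{ij}\, T_i T_j + d\, \tau^2 .
```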

  9. Review of enhanced processes for anaerobic digestion treatment of sewage sludge

    Science.gov (United States)

    Liu, Xinyuan; Han, Zeyu; Yang, Jie; Ye, Tianyi; Yang, Fang; Wu, Nan; Bao, Zhenbo

    2018-02-01

    A great amount of sewage sludge is produced each year, leading to serious environmental pollution. Many new technologies have been developed recently, but they are hard to apply at large scales. As one of the traditional technologies, the anaerobic fermentation process is capable of recovering bioenergy through biogas production by the action of microbes. However, the anaerobic process is facing new challenges due to the low fermentation efficiency caused by the characteristics of sewage sludge itself. In order to improve the energy yield, enhancement technologies including sewage sludge pretreatment, co-digestion, high-solids digestion and two-stage fermentation have been widely studied in the literature; they are introduced in this article.

  10. Processing of oats and the impact of processing operations on nutrition and health benefits.

    Science.gov (United States)

    Decker, Eric A; Rose, Devin J; Stewart, Derek

    2014-10-01

    Oats are a uniquely nutritious food as they contain an excellent lipid profile and high amounts of soluble fibre. However, an oat kernel is largely non-digestible and thus must be utilised in milled form to reap its nutritional benefits. Milling is made up of numerous steps, the most important being dehulling to expose the digestible groat, heat processing to inactivate enzymes that cause rancidity, and cutting, rolling or grinding to convert the groat into a product that can be used directly in oatmeal or can be used as a food ingredient in products such as bread, ready-to-eat breakfast cereals and snack bars. Oats can also be processed into oat bran and fibre to obtain high-fibre-containing fractions that can be used in a variety of food products.

  11. Socioeconomic Status Moderates Genetic and Environmental Effects on the Amount of Alcohol Use

    Science.gov (United States)

    Hamdi, Nayla R; Krueger, Robert F.; South, Susan C.

    2015-01-01

    Background Much is unknown about the relationship between socioeconomic status (SES) and alcohol use, including the means by which SES may influence risk for alcohol use. Methods Using a sample of 672 twin pairs (aged 25–74) derived from the MacArthur Foundation Survey of Midlife Development in the United States (MIDUS), the present study examined whether SES, measured by household income and educational attainment, moderates genetic and environmental influences on three indices of alcohol use: amount used, frequency of use, and problem use. Results We found significant moderation for amount of alcohol used. Specifically, genetic effects were greater in low-SES conditions, shared environmental effects (i.e., environmental effects that enhance the similarity of twins from the same families) tended to increase in high-SES conditions, and non-shared environmental effects (i.e., environmental effects that distinguish twins) tended to decrease with SES. This pattern of results was found for both income and education, and it largely replicated at a second wave of assessment spaced nine years after the first. There was virtually no evidence of moderation for either frequency of alcohol use or alcohol problems. Conclusions Our findings indicate that genetic and environmental influences on drinking amount vary as a function of the broader SES context, whereas the etiologies of other drinking phenomena are less affected by this context. Efforts to find the causes underlying the amount of alcohol used are likely to be more successful if such contextual information is taken into account. PMID:25778493

  12. Socioeconomic status moderates genetic and environmental effects on the amount of alcohol use.

    Science.gov (United States)

    Hamdi, Nayla R; Krueger, Robert F; South, Susan C

    2015-04-01

    Much is unknown about the relationship between socioeconomic status (SES) and alcohol use, including the means by which SES may influence risk for alcohol use. Using a sample of 672 twin pairs (aged 25 to 74) derived from the MacArthur Foundation Survey of Midlife Development in the United States, this study examined whether SES, measured by household income and educational attainment, moderates genetic and environmental influences on 3 indices of alcohol use: amount used, frequency of use, and problem use. We found significant moderation for amount of alcohol used. Specifically, genetic effects were greater in low-SES conditions, shared environmental effects (i.e., environmental effects that enhance the similarity of twins from the same families) tended to increase in high-SES conditions, and nonshared environmental effects (i.e., environmental effects that distinguish twins) tended to decrease with SES. This pattern of results was found for both income and education, and it largely replicated at a second wave of assessment spaced 9 years after the first. There was virtually no evidence of moderation for either frequency of alcohol use or alcohol problems. Our findings indicate that genetic and environmental influences on drinking amount vary as a function of the broader SES context, whereas the etiologies of other drinking phenomena are less affected by this context. Efforts to find the causes underlying the amount of alcohol used are likely to be more successful if such contextual information is taken into account. Copyright © 2015 by the Research Society on Alcoholism.

  13. 12 CFR 209.4 - Amounts and payments.

    Science.gov (United States)

    2010-01-01

    ... CANCELLATION OF FEDERAL RESERVE BANK CAPITAL STOCK (REGULATION I) § 209.4 Amounts and payments. (a) Amount of... lesser of 15 percent or 100 shares of its Reserve Bank capital stock, it shall file with the appropriate Reserve Bank an application for issue or cancellation of Reserve Bank capital stock in order to adjust its...

  14. A Transmission-Cost-Based Model to Estimate the Amount of Market-Integrable Wind Resources

    DEFF Research Database (Denmark)

    Morales González, Juan Miguel; Pinson, Pierre; Madsen, Henrik

    2012-01-01

    In the pursuit of the large-scale integration of wind power production, it is imperative to evaluate plausible frictions among the stochastic nature of wind generation, electricity markets, and the investments in transmission required to accommodate larger amounts of wind. If wind producers are made to share the expenses in transmission derived from their integration, they may see the doors of electricity markets closed for not being competitive enough. This paper presents a model to decide the amount of wind resources that are economically exploitable at a given location from a transmission-cost perspective. This model accounts for the uncertain character of wind by using a modeling framework based on stochastic optimization, simulates market barriers by means of a bi-level structure, and considers the financial risk of investments in transmission through the conditional value-at-risk. The major...
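    The risk measure mentioned above has a standard formulation, the Rockafellar-Uryasev form commonly embedded in stochastic programs; the notation below is not taken from the paper:

```latex
% Conditional value-at-risk of a loss L at confidence level \alpha:
\mathrm{CVaR}_{\alpha}(L) \;=\; \min_{\eta \in \mathbb{R}}
\left\{ \eta + \frac{1}{1-\alpha}\, \mathbb{E}\big[ (L - \eta)^{+} \big] \right\},
% i.e. roughly the expected loss in the worst (1-\alpha) fraction of scenarios;
% this form keeps the risk term linear when embedded in a stochastic program.
```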

  15. Neuro-economics in chicks: foraging choices based on amount, delay and cost.

    Science.gov (United States)

    Matsushima, Toshiya; Kawamori, Ai; Bem-Sojka, Tiaza

    2008-06-15

    Studies on foraging choices are reviewed, with an emphasis on the neural representations of elementary factors of food (i.e., amount, delay and consumption time) in the avian brain. Domestic chicks serve as an ideal animal model in this respect, as they quickly associate cue colors with subsequently supplied food rewards, and their choices are quantitatively linked with the rewards. When a pair of such color cues was simultaneously presented, the trained chicks reliably made choices according to the profitability of food associated with each color. Two forebrain regions are involved in distinct aspects of choices; i.e., nucleus accumbens-medial striatum (Ac-MSt) and arcopallium intermedium (AI), an association area in the lateral forebrain. Localized lesions of Ac-MSt enhanced delay aversion, and the ablated chicks made impulsive choices of immediate reward more frequently than sham controls. On the other hand, lesions of AI enhanced consumption-time aversion, and the ablated chicks shifted their choices toward easily consumable reward with their impulsiveness unchanged; delay and consumption time are thus doubly dissociated. Furthermore, chicks showed distinct patterns of risk-sensitive choices depending on the factor that varied at trials. Risk aversion occurred when food amount varied, whereas consistent risk sensitivity was not found when the delay varied; amount and delay were not interchangeable. Choices thus deviate from those predicted as optimal. Instead, factors such as amount, delay and consumption time could be separately represented and processed to yield economically sub-optimal choices.

  16. Proteomic analysis of minute amount of colonic biopsies by enteroscopy sampling

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xing [Department of Analytical Chemistry and CAS Key Laboratory of Receptor Research, Shanghai Institute of Materia Medica, Chinese Academy of Sciences (China); Xu, Yanli [Fuyang People’s Hospital (China); Meng, Qian [Department of Analytical Chemistry and CAS Key Laboratory of Receptor Research, Shanghai Institute of Materia Medica, Chinese Academy of Sciences (China); Zheng, Qingqing [Digestive Endoscopic Center, Shanghai Jiaotong University Affiliated Sixth People’s Hospital (China); Wu, Jianhong [Department of Analytical Chemistry and CAS Key Laboratory of Receptor Research, Shanghai Institute of Materia Medica, Chinese Academy of Sciences (China); Wang, Chen; Jia, Weiping [Shanghai Key Laboratory of Diabetes Mellitus, Department of Endocrinology and Metabolism, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai Jiao Tong University Affiliated Sixth People’s Hospital (China); Figeys, Daniel [Department of Biochemistry, Microbiology and Immunology, and Department of Chemistry and Biomolecular Sciences, University of Ottawa (Canada); Chang, Ying, E-mail: emulan@163.com [Digestive Endoscopic Center, Shanghai Jiaotong University Affiliated Sixth People’s Hospital (China); Zhou, Hu, E-mail: zhouhu@simm.ac.cn [Department of Analytical Chemistry and CAS Key Laboratory of Receptor Research, Shanghai Institute of Materia Medica, Chinese Academy of Sciences (China)

    2016-08-05

    Colorectal cancer (CRC) is one of the most common types of malignant tumor worldwide. Currently, although many researchers have devoted themselves to CRC studies, progress in identifying biomarkers for CRC early diagnosis and prognosis is still very slow. Using a centrifugal proteomic reactor-based proteomic analysis of minute amounts of colonic biopsies obtained by enteroscopy sampling, 2620 protein groups were quantified between cancer mucosa and adjacent normal colorectal mucosa. Of these, 403 protein groups were differentially expressed with statistical significance between cancer and normal tissues, including 195 up-regulated and 208 down-regulated proteins in cancer tissues. Three proteins (SOD3, PRELP and NGAL) were selected for further Western blot validation, and the Western blot results were consistent with the quantitative proteomic data. SOD3 and PRELP are down-regulated in CRC mucosa compared to adjacent normal tissue, while NGAL is up-regulated in CRC mucosa. In conclusion, the centrifugal proteomic reactor-based label-free quantitative proteomic approach provides a highly sensitive and powerful tool for analyzing minute protein samples from tiny colorectal biopsies, which may facilitate CRC biomarker discovery for diagnosis and prognosis. -- Highlights: •Minute amounts of colonic biopsies obtained by endoscopy are suitable for proteomic analysis. •The centrifugal proteomic reactor can be used for processing tiny clinical biopsy samples. •SOD3 and PRELP are down-regulated in CRC, while NGAL is up-regulated in CRC.

  17. Proteomic analysis of minute amount of colonic biopsies by enteroscopy sampling

    International Nuclear Information System (INIS)

    Liu, Xing; Xu, Yanli; Meng, Qian; Zheng, Qingqing; Wu, Jianhong; Wang, Chen; Jia, Weiping; Figeys, Daniel; Chang, Ying; Zhou, Hu

    2016-01-01

    Colorectal cancer (CRC) is one of the most common types of malignant tumor worldwide. Currently, although many researchers have devoted themselves to CRC studies, progress in identifying biomarkers for CRC early diagnosis and prognosis is still very slow. Using a centrifugal proteomic reactor-based proteomic analysis of minute amounts of colonic biopsies obtained by enteroscopy sampling, 2620 protein groups were quantified between cancer mucosa and adjacent normal colorectal mucosa. Of these, 403 protein groups were differentially expressed with statistical significance between cancer and normal tissues, including 195 up-regulated and 208 down-regulated proteins in cancer tissues. Three proteins (SOD3, PRELP and NGAL) were selected for further Western blot validation, and the Western blot results were consistent with the quantitative proteomic data. SOD3 and PRELP are down-regulated in CRC mucosa compared to adjacent normal tissue, while NGAL is up-regulated in CRC mucosa. In conclusion, the centrifugal proteomic reactor-based label-free quantitative proteomic approach provides a highly sensitive and powerful tool for analyzing minute protein samples from tiny colorectal biopsies, which may facilitate CRC biomarker discovery for diagnosis and prognosis. -- Highlights: •Minute amounts of colonic biopsies obtained by endoscopy are suitable for proteomic analysis. •The centrifugal proteomic reactor can be used for processing tiny clinical biopsy samples. •SOD3 and PRELP are down-regulated in CRC, while NGAL is up-regulated in CRC.

  18. Decontamination experience using the EMMAC process in EDF nuclear power plants

    International Nuclear Information System (INIS)

    Noel, D.; Spychala, H. B.; Dupin, M.; Lantes, B.; Goulain, F.; Gregoire, J.; Jeandrot, S.

    1997-01-01

    The EMMA, EMMAC and EMMAC-PLUS decontamination processes, nondestructive tests and waste treatment are presented. The various applications of the new EMMAC soft decontamination process, used by EDF since 1995, have shown that it is a very effective tool and, at the same time, only mildly corrosive to the materials that have been treated. The improved efficiency, compared to the previous EMMA process, allowed us to obtain good decontamination factors with only one cycle instead of two. At the same time, changes in chemical composition and waste treatment produced a large reduction in the amount of radioactive waste generated. Further improvements are still being sought. (authors)

  19. Geoinformation web-system for processing and visualization of large archives of geo-referenced data

    Science.gov (United States)

    Gordov, E. P.; Okladnikov, I. G.; Titov, A. G.; Shulgina, T. M.

    2010-12-01

    A working model of an information-computational system aimed at scientific research in the area of climate change is presented. The system will allow processing and analysis of large archives of geophysical data obtained both from observations and from modeling. Accumulated experience in developing information-computational web-systems providing computational processing and visualization of large archives of geo-referenced data was used during the implementation (Gordov et al, 2007; Okladnikov et al, 2008; Titov et al, 2009). The functional capabilities of the system comprise a set of procedures for mathematical and statistical analysis, processing and visualization of data. At present, five archives of data are available for processing: the 1st and 2nd editions of the NCEP/NCAR Reanalysis, the ECMWF ERA-40 Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, and the NOAA-CIRES XX Century Global Reanalysis Version I. To provide data processing functionality, a computational modular kernel and a class library providing data access for computational modules were developed. Currently a set of computational modules for climate change indices approved by the WMO is available. A special module providing visualization of results and writing to Encapsulated Postscript, GeoTIFF and ESRI shape files was also developed. As a technological basis for the representation of cartographic information on the Internet, the GeoServer software conforming to OpenGIS standards is used. Integration of GIS functionality with web-portal software has been performed to provide a basis for the web portal's development as part of the geoinformation web-system. Such a geoinformation web-system is a next step in the development of applied information-telecommunication systems, offering specialists from various scientific fields unique opportunities to perform reliable analysis of heterogeneous geophysical data using approved computational algorithms. It will allow a wide range of researchers to work with geophysical data without specific programming
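    As a hypothetical example of the kind of small computational module mentioned above, the WMO/ETCCDI "frost days" index (annual count of days with daily minimum temperature below 0 °C) could be computed per grid cell as follows; the array layout and function name are assumptions, not the system's actual API:

```c
/* Illustrative climate-index module: frost days per grid cell. */
#include <stddef.h>

/* tmin: [nday][ncell] daily minimum temperature in degrees C (row-major);
 * fd:   [ncell] output count of frost days over the nday period. */
void frost_days(const double *tmin, size_t nday, size_t ncell, int *fd)
{
    for (size_t c = 0; c < ncell; ++c)
        fd[c] = 0;
    for (size_t d = 0; d < nday; ++d)
        for (size_t c = 0; c < ncell; ++c)
            if (tmin[d * ncell + c] < 0.0)   /* day counts as a frost day */
                ++fd[c];
}
```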

  20. Access to gram scale amounts of functional globular adiponectin from E. coli inclusion bodies by alkaline-shock solubilization.

    Science.gov (United States)

    Heiker, John T; Klöting, Nora; Blüher, Matthias; Beck-Sickinger, Annette G

    2010-07-16

    The adipose tissue derived protein adiponectin exerts anti-diabetic, anti-inflammatory and anti-atherosclerotic effects. Adiponectin serum concentrations are in the microgram per milliliter range in healthy humans and inversely correlate with obesity and metabolic disorders. Accordingly, raising circulating adiponectin levels by direct administration may be an intriguing strategy in the treatment of obesity-related metabolic disorders. However, production of large amounts of recombinant adiponectin protein has so far been a primary obstacle. Here, we report a novel method for large-amount production of globular adiponectin from E. coli inclusion bodies utilizing an alkaline-shock solubilization method without chaotropic agents, followed by precipitation of the readily renaturing protein. Precipitation of the mildly solubilized protein capitalizes on the advantages of inclusion body formation. This approach to inclusion body protein recovery provides access to gram-scale amounts of globular adiponectin with standard laboratory equipment, avoiding vast dilution or dialysis steps to neutralize the pH and renature the protein, thus saving chemicals and time. The precipitated protein readily renatures in buffer, is of adequate purity without a chromatography step, shows biological activity in cultured MCF7 cells, and significantly lowered blood glucose levels in mice with streptozotocin-induced type 1 diabetes. Copyright 2010 Elsevier Inc. All rights reserved.

  1. Large wood mobility processes in low-order Chilean river channels

    Science.gov (United States)

    Iroumé, Andrés; Mao, Luca; Andreoli, Andrea; Ulloa, Héctor; Ardiles, María Paz

    2015-01-01

    Large wood (LW) mobility was studied over several time periods in channel segments of four low-order mountain streams, southern Chile. All wood pieces found within the bankfull channels and on the streambanks extending into the channel with dimensions more than 10 cm in diameter and 1 m in length were measured and their position was referenced. Thirty six percent of measured wood pieces were tagged to investigate log mobility. All segments were first surveyed in summer and then after consecutive rainy winter periods. Annual LW mobility ranged between 0 and 28%. Eighty-four percent of the moved LW had diameters ≤ 40 cm and 92% had lengths ≤ 7 m. Large wood mobility was higher in periods when maximum water level (Hmax) exceeded channel bankfull depth (HBk) than in periods with flows less than HBk, but the difference was not statistically significant. Dimensions of moved LW showed no significant differences between periods with flows exceeding and with flows less than bankfull stage. Statistically significant relationships were found between annual LW mobility (%) and unit stream power (for Hmax) and Hmax/HBk. The mean diameter of transported wood pieces per period was significantly correlated with unit stream power for H15% and H50% (the level above which the flow remains for 15 and 50% of the time, respectively). These results contribute to an understanding of the complexity of LW mobilization processes in mountain streams and can be used to assess and prevent potential damage caused by LW mobilization during floods.

  2. Asymptotic description of two metastable processes of solidification for the case of large relaxation time

    International Nuclear Information System (INIS)

    Omel'yanov, G.A.

    1995-07-01

    The non-isothermal Cahn-Hilliard equations in the n-dimensional case (n = 2,3) are considered. The interaction length is proportional to a small parameter, and the relaxation time is proportional to a constant. The asymptotic solutions describing two metastable processes are constructed and justified. The soliton-type solution describes the first stage of separation in the alloy, when a set of "superheated liquid" appears inside the "solid" part. The Van der Waals-type solution describes the free interface dynamics for large time. The smoothness of temperature is established for large time and the Mullins-Sekerka problem describing the free interface is derived. (author). 46 refs
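
    For reference (not stated in the record itself), the standard isothermal Cahn-Hilliard equation, of which the paper treats a non-isothermal, coupled variant, can be written with mobility M, double-well free energy f(u) and small interface parameter ε as:

```latex
% Isothermal Cahn-Hilliard equation (standard reference form)
\frac{\partial u}{\partial t}
  \;=\; \nabla \cdot \bigl( M \, \nabla \mu \bigr),
\qquad
\mu \;=\; f'(u) \;-\; \varepsilon^{2} \Delta u
```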

  3. Forest landscape models, a tool for understanding the effect of the large-scale and long-term landscape processes

    Science.gov (United States)

    Hong S. He; Robert E. Keane; Louis R. Iverson

    2008-01-01

    Forest landscape models have become important tools for understanding large-scale and long-term landscape (spatial) processes such as climate change, fire, windthrow, seed dispersal, insect outbreak, disease propagation, forest harvest, and fuel treatment, because controlled field experiments designed to study the effects of these processes are often not possible (...

  4. Nuclear operator. Liability amounts and financial security limits

    International Nuclear Information System (INIS)

    2015-07-01

    This paper gives, for numerous countries involved (or that would be involved) in nuclear activities, financial information on the liability amount imposed on the operator, the amounts provided from public funds beyond the Operator's Liability Amount, to be made available by the State in whose territory the nuclear installation of the liable operator is situated, and the public funds contributed jointly by all the States parties to the BSC or CSC according to a pre-determined formula

  5. QUAL-NET, a high temporal-resolution eutrophication model for large hydrographic networks

    Science.gov (United States)

    Minaudo, Camille; Curie, Florence; Jullian, Yann; Gassama, Nathalie; Moatar, Florentina

    2018-04-01

    To allow climate change impact assessment of water quality in river systems, the scientific community lacks efficient deterministic models able to simulate hydrological and biogeochemical processes in drainage networks at the regional scale, with high temporal resolution and water temperature explicitly determined. The model QUALity-NETwork (QUAL-NET) was developed and tested on the Middle Loire River Corridor, a sub-catchment of the Loire River in France, prone to eutrophication. Hourly variations computed efficiently by the model helped disentangle the complex interactions existing between hydrological and biological processes across different timescales. Phosphorus (P) availability was the most constraining factor for phytoplankton development in the Loire River, but simulating bacterial dynamics in QUAL-NET surprisingly evidenced large amounts of organic matter recycled within the water column through the microbial loop, which delivered significant fluxes of available P and enhanced phytoplankton growth. This explained why severe blooms still occur in the Loire River despite large P input reductions since 1990. QUAL-NET could be used to study past evolutions or predict future trajectories under climate change and land use scenarios.

  6. QUAL-NET, a high temporal-resolution eutrophication model for large hydrographic networks

    Directory of Open Access Journals (Sweden)

    C. Minaudo

    2018-04-01

    Full Text Available To allow climate change impact assessment of water quality in river systems, the scientific community lacks efficient deterministic models able to simulate hydrological and biogeochemical processes in drainage networks at the regional scale, with high temporal resolution and water temperature explicitly determined. The model QUALity-NETwork (QUAL-NET) was developed and tested on the Middle Loire River Corridor, a sub-catchment of the Loire River in France, prone to eutrophication. Hourly variations computed efficiently by the model helped disentangle the complex interactions existing between hydrological and biological processes across different timescales. Phosphorus (P) availability was the most constraining factor for phytoplankton development in the Loire River, but simulating bacterial dynamics in QUAL-NET surprisingly evidenced large amounts of organic matter recycled within the water column through the microbial loop, which delivered significant fluxes of available P and enhanced phytoplankton growth. This explained why severe blooms still occur in the Loire River despite large P input reductions since 1990. QUAL-NET could be used to study past evolutions or predict future trajectories under climate change and land use scenarios.

  7. In-database processing of a large collection of remote sensing data: applications and implementation

    Science.gov (United States)

    Kikhtenko, Vladimir; Mamash, Elena; Chubarov, Dmitri; Voronina, Polina

    2016-04-01

    Large archives of remote sensing data are now available to scientists, yet the need to work with individual satellite scenes or product files constrains studies that span a wide temporal range or spatial extent. The resources (storage capacity, computing power and network bandwidth) required for such studies are often beyond the capabilities of individual geoscientists. This problem has been tackled before in remote sensing research and has inspired several information systems. Some of them, such as NASA Giovanni [1] and Google Earth Engine, have already proved their utility for science. Analysis tasks involving large volumes of numerical data are not unique to Earth Sciences. Recent advances in data science are enabled by the development of in-database processing engines that bring processing closer to storage, use declarative query languages to facilitate parallel scalability and provide a high-level abstraction of the whole dataset. We build on the idea of bridging the gap between file archives containing remote sensing data and databases by integrating files into a relational database as foreign data sources and performing analytical processing inside the database engine. Thereby a higher-level query language can efficiently address problems of arbitrary size: from accessing the data associated with a specific pixel or grid cell to complex aggregation over spatial or temporal extents across a large number of individual data files. This approach was implemented using PostgreSQL for a Siberian regional archive of satellite data products holding hundreds of terabytes of measurements from multiple sensors and missions taken over a decade-long span. While preserving the original storage layout, and therefore compatibility with existing applications, the in-database processing engine provides a toolkit for provisioning remote sensing data in scientific workflows and applications. The use of SQL - a widely used higher-level declarative query language - simplifies interoperability
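
    A minimal sketch of the kind of query such an engine makes possible, assuming a hypothetical foreign table (modis_lst) with invented column names; this illustrates the general pattern of pushing aggregation into the database rather than the actual system described above:

```python
# Aggregate a per-cell time series inside the database and fetch only the result.
# Table and column names (modis_lst, acquired_at, value, grid_row, grid_col) are
# invented placeholders; the foreign table is assumed to be backed by archive files.
import psycopg2

QUERY = """
    SELECT date_trunc('month', acquired_at) AS month,
           avg(value)                       AS mean_value
    FROM   modis_lst
    WHERE  grid_row = %(row)s AND grid_col = %(col)s
    GROUP  BY 1
    ORDER  BY 1;
"""

def monthly_series(conn_str, row, col):
    """Run the aggregation in the database engine; only the small result crosses the wire."""
    with psycopg2.connect(conn_str) as conn, conn.cursor() as cur:
        cur.execute(QUERY, {"row": row, "col": col})
        return cur.fetchall()
```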

  8. Determination of micro amounts of samarium and europium by analogue derivative spectrophotometry

    International Nuclear Information System (INIS)

    Ishii, H.; Satoh, K.

    1982-01-01

    Derivative spectrophotometry using an analogue differentiation circuit was applied to the determination of samarium and europium at ppm levels. By measuring the second or fourth derivative spectra of the characteristic absorption bands of both rare earth ions around 400 nm, they can be determined directly and selectively in the presence of large amounts of most other rare earths without any prior separation. Further, by aptly selecting the conditions for measuring the derivative spectra, simultaneous determination of both rare earth elements was feasible. The principle and characteristics of analogue derivative spectrophotometry are also described. (orig.) [de
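
    The record describes an analogue differentiation circuit; as a rough numerical counterpart of the same idea (a deliberate substitution, not the authors' method), a second-derivative spectrum can be computed digitally, for example with a Savitzky-Golay filter over placeholder data:

```python
# Numerical analogue of derivative spectrophotometry: smoothed second-derivative
# absorbance spectrum. The wavelength grid and the mock absorption band are
# placeholders, not measured Sm/Eu data.
import numpy as np
from scipy.signal import savgol_filter

wavelength = np.linspace(380.0, 420.0, 401)                  # nm, 0.1 nm steps
absorbance = np.exp(-((wavelength - 401.0) / 1.5) ** 2)      # mock band near 400 nm

step = wavelength[1] - wavelength[0]
d2A = savgol_filter(absorbance, window_length=21, polyorder=3,
                    deriv=2, delta=step)                     # d2A / dlambda^2

band_centre_nm = wavelength[np.argmin(d2A)]                  # sharp minimum at the band centre
```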

  9. Large-scale simulation of ductile fracture process of microstructured materials

    International Nuclear Information System (INIS)

    Tian Rong; Wang Chaowei

    2011-01-01

    The promise of computational science in the extreme-scale computing era is to reduce and decompose macroscopic complexities into microscopic simplicities at the expense of high spatial and temporal resolution of computing. In materials science and engineering, the direct combination of 3D microstructure data sets and 3D large-scale simulations provides a unique opportunity for developing a comprehensive understanding of nano/microstructure-property relationships in order to systematically design materials with specific desired properties. In this paper, we present a framework for simulating the ductile fracture process zone in microstructural detail. The experimentally reconstructed microstructural data set is directly embedded into a FE mesh model to improve the simulation fidelity of microstructure effects on fracture toughness. To the best of our knowledge, this is the first time that fracture toughness has been linked directly to multiscale microstructures in a realistic 3D numerical model. (author)

  10. Cryogenic and radiation hard ASIC design for large format NIR/SWIR detector

    Science.gov (United States)

    Gao, Peng; Dupont, Benoit; Dierickx, Bart; Müller, Eric; Verbruggen, Geert; Gielis, Stijn; Valvekens, Ramses

    2014-10-01

    An ASIC was developed for the control and data quantization of large-format NIR/SWIR detector arrays. Both cryogenic operation and the space radiation environment were considered during the design. The ASIC can therefore be integrated inside the cryogenic chamber, which significantly reduces the large number of long wires going into and out of the chamber; this benefits EMI and noise performance as well as the power consumption of the cooling system and interfacing circuits. In this paper, we describe the development of this prototype ASIC for image sensor driving and signal processing, as well as its testing at both room and cryogenic temperatures.

  11. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  12. Enrichment and determination of small amounts of 90Sr/90Y in water samples

    International Nuclear Information System (INIS)

    Mundschenk, H.

    1979-01-01

    Small amounts of 90Sr/90Y can be concentrated from large volumes of surface water (100 l) by precipitation of the phosphates, using bentonite as an adsorber matrix. In the case of samples containing no or nearly no suspended matter (tap water, ground water, sea water), the daughter 90Y can be extracted directly by using filter beds impregnated with HDEHP. The applicability of both techniques is demonstrated under realistic conditions. (orig.) 891 HP/orig. 892 MKO [de

  13. Large 3D resistivity and induced polarization acquisition using the Fullwaver system: towards an adapted processing methodology

    Science.gov (United States)

    Truffert, Catherine; Leite, Orlando; Gance, Julien; Texier, Benoît; Bernard, Jean

    2017-04-01

    Driven by the mineral exploration market's need for ever faster and easier set-up of large 3D resistivity and induced polarization surveys, autonomous, cableless recording systems have come to the forefront. In contrast to traditional centralized acquisition, this new system permits a completely random distribution of receivers over the survey area, allowing true 3D imaging. This work presents the results of a 3 km² experiment, down to 600 m depth, performed with a new type of autonomous distributed receiver: the I&V-Fullwaver. With such a system, all the usual drawbacks induced by long cable set-ups over large 3D areas - time consumption, lack of accessibility, heavy weight, electromagnetic induction, etc. - disappear. The V-Fullwavers record the entire time series of voltage on two perpendicular axes, for a good determination of data quality, while the I-Fullwaver records the injected current simultaneously. For this survey, despite good assessment of each individual signal quality on each channel of the set of Fullwaver systems, a significant number of negative apparent resistivities and chargeabilities remain in the dataset (around 15%). These values are commonly not taken into account in inversion software, although they may be due to complex geological structures of interest (e.g. linked to the presence of sulfides in the earth). Given that such a distributed recording system aims to deliver the best 3D resistivity and IP tomography, how can 3D inversion be improved? In this work, we present the dataset, the processing chain and the quality control of a large 3D survey. We show that the quality of the selected data is good enough to include it in the inversion processing. We propose a second way of processing, based on the modulus of the apparent resistivity, that stabilizes the inversion. We then discuss the results of both processing approaches. We conclude that an effort could be made on the inclusion of negative apparent resistivity in the inversion

  14. The effect of amount and tangibility of endowment and certainty of recipients on selfishness in a modified dictator game.

    Science.gov (United States)

    Chang, Shao-Chuan; Lin, Li-Yun; Horng, Ruey-Yun; Wang, Yau-De

    2014-06-01

    Taiwanese college students (N = 101) participated in the study to examine the effects of the amount of an endowment, the tangibility of an endowment, and the certainty of the recipient on selfishness in a modified dictator game. Results showed that dictators were more selfish when allocating tangible (money) than less tangible (honor credits) endowments. Selfishness was higher when large amounts of money were involved. The certainty of the recipient was manipulated by whether the recipient was chosen and announced before or after the decision. Unexpectedly, participants were more self-interested in the certain-recipient condition than in the uncertain-recipient condition. In the honor condition, the amount of an endowment and the certainty of the recipient did not affect participants' allocations.

  15. Large deviations and idempotent probability

    CERN Document Server

    Puhalskii, Anatolii

    2001-01-01

    In the view of many probabilists, author Anatolii Puhalskii's research results stand among the most significant achievements in the modern theory of large deviations. In fact, his work marked a turning point in the depth of our understanding of the connections between the large deviation principle (LDP) and well-known methods for establishing weak convergence results. Large Deviations and Idempotent Probability expounds upon the recent methodology of building large deviation theory along the lines of weak convergence theory. The author develops an idempotent (or maxitive) probability theory, introduces idempotent analogues of martingales (maxingales), Wiener and Poisson processes, and Ito differential equations, and studies their properties. The large deviation principle for stochastic processes is formulated as a certain type of convergence of stochastic processes to idempotent processes. The author calls this large deviation convergence. The approach to establishing large deviation convergence uses novel com...

  16. Automated data processing of high-resolution mass spectra

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg; Smedsgaard, Jørn

    of the massive amounts of data. We present an automated data processing method to quantitatively compare large numbers of spectra from the analysis of complex mixtures, exploiting the full quality of high-resolution mass spectra. By projecting all detected ions - within defined intervals on both the time...... infusion of crude extracts into the source, taking advantage of the high sensitivity, high mass resolution and accuracy, and the limited fragmentation. Unfortunately, there has not been a comparable development in the data processing techniques to fully exploit the gain in resolution and accuracy...... infusion analyses of crude extracts to find the relationship between species from several terverticillate Penicillium species, and also that the ions responsible for the segregation can be identified. Furthermore, the process of detecting unique species and unique metabolites can be automated....

  17. 46 CFR 308.303 - Amounts insured under interim binder.

    Science.gov (United States)

    2010-10-01

    ... 308.303 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EMERGENCY OPERATIONS WAR RISK INSURANCE Second Seamen's War Risk Insurance § 308.303 Amounts insured under interim binder. The amounts insured are the amounts specified in the Second Seamen's War Risk Policy (1955) or as modified by shipping...

  18. An experimental test of the habitat-amount hypothesis for saproxylic beetles in a forested region.

    Science.gov (United States)

    Seibold, Sebastian; Bässler, Claus; Brandl, Roland; Fahrig, Lenore; Förster, Bernhard; Heurich, Marco; Hothorn, Torsten; Scheipl, Fabian; Thorn, Simon; Müller, Jörg

    2017-06-01

    The habitat-amount hypothesis challenges traditional concepts that explain species richness within habitats, such as the habitat-patch hypothesis, where species number is a function of patch size and patch isolation. It posits that effects of patch size and patch isolation are driven by effects of sample area, and thus that the number of species at a site is basically a function of the total habitat amount surrounding this site. We tested the habitat-amount hypothesis for saproxylic beetles and their habitat of dead wood by using an experiment comprising 190 plots with manipulated patch sizes situated in a forested region with a high variation in habitat amount (i.e., density of dead trees in the surrounding landscape). Although dead wood is a spatio-temporally dynamic habitat, saproxylic insects have life cycles shorter than the time needed for habitat turnover and they closely track their resource. Patch size was manipulated by adding various amounts of downed dead wood to the plots (~800 m³ in total); dead trees in the surrounding landscape (~240 km²) were identified using airborne laser scanning (light detection and ranging). Over 3 yr, 477 saproxylic species (101,416 individuals) were recorded. Considering 20-1,000 m radii around the patches, local landscapes were identified as having a radius of 40-120 m. Both patch size and habitat amount in the local landscapes independently affected species numbers without a significant interaction effect, hence refuting the island effect. Species accumulation curves relative to cumulative patch size were not consistent with either the habitat-patch hypothesis or the habitat-amount hypothesis: several small dead-wood patches held more species than a single large patch with an amount of dead wood equal to the sum of that of the small patches. Our results indicate that conservation of saproxylic beetles in forested regions should primarily focus on increasing the overall amount of dead wood without considering its

  19. Study on isothermal precision forging process of rare earth intensifying magnesium alloy

    International Nuclear Information System (INIS)

    Shan, Debin; Xu, Wenchen; Han, Xiuzhu; Huang, Xiaolei

    2012-01-01

    A three-dimensional rigid-plastic finite element model is established in DEFORM 3D to simulate the isothermal precision forging process of a magnesium alloy bracket, in order to analyze the material flow and determine the forging process scheme. Problems such as underfilling and excessive forging pressure are predicted and successfully resolved by optimizing the billet shape. Compared to the initial microstructure, the isothermally forged microstructure of the alloy is obviously refined, and considerable amounts of secondary phases precipitate in the matrix during the isothermal forging process. In the subsequent ageing process, large quantities of secondary phases precipitate from the α-Mg matrix with increasing ageing time. The optimal comprehensive mechanical properties of the alloy were obtained after ageing at 473 K for 63 h, with ultimate tensile strength, tensile yield strength and elongation of 380 MPa, 243 MPa and 4.07%, respectively, which shows good potential for the application of isothermal forging of rare earth intensifying magnesium alloys.

  20. Limitation on the amount of accessible information in a quantum channel

    International Nuclear Information System (INIS)

    Schumacher, B.; Westmoreland, M.; Wootters, W.K.

    1996-01-01

    We prove a new result limiting the amount of accessible information in a quantum channel. This generalizes Kholevo's theorem and implies it as a simple corollary. Our proof uses the strong subadditivity of the von Neumann entropy functional S(ρ) and a specific physical analysis of the measurement process. The result presented here has application to information obtained from "weak" measurements, such as those sometimes considered in quantum cryptography. copyright 1996 The American Physical Society
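
    As background (not stated in the record itself), the Kholevo/Holevo bound that this result generalizes limits the accessible information for an ensemble of states prepared with probabilities p_i:

```latex
% Holevo (Kholevo) bound on accessible information for the ensemble {p_i, rho_i}
I_{\mathrm{acc}}
  \;\le\; \chi
  \;=\; S\!\Bigl(\sum_i p_i \rho_i\Bigr) \;-\; \sum_i p_i\, S(\rho_i)
```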

  1. Does the amount of school choice matter for student engagement?

    Science.gov (United States)

    Vaughn, Michael G.; Witko, Christopher

    2013-01-01

    School choice may increase student engagement by enabling students to attend schools that more closely match their needs and preferences. But this effect on engagement may depend on the characteristics of the choices available. Therefore, we consider how the amount of educational choice of different types in a local educational marketplace affects student engagement, using a large, national population of 8th grade students. We find that more choice of regular public schools in the elementary and middle school years is associated with a lower likelihood that students will be severely disengaged in eighth grade, and that more choice of public schools of choice has a similar effect, but only in urban areas. In contrast, more private sector choice does not have such a general beneficial effect. PMID:23682202

  2. Processing and properties of large-sized ceramic slabs

    Directory of Open Access Journals (Sweden)

    Fossa, L.

    2010-10-01

    Full Text Available Large-sized ceramic slabs – with dimensions up to 360 × 120 cm and thickness down to 2 mm – are manufactured through an innovative ceramic process, starting from porcelain stoneware formulations and involving wet ball milling, spray drying, die-less slow-rate pressing, a single stage of fast drying-firing, and finishing (trimming, assembling of ceramic-fiberglass composites). Fired and unfired industrial slabs were selected and characterized from the technological, compositional (XRF, XRD) and microstructural (SEM) viewpoints. Semi-finished products exhibit a remarkable microstructural uniformity and stability in a rather wide window of firing schedules. The phase composition and compact microstructure of fired slabs are very similar to those of porcelain stoneware tiles. The values of water absorption, bulk density, closed porosity, functional performances as well as mechanical and tribological properties conform to the top quality range of porcelain stoneware tiles. However, the large size coupled with low thickness bestows on the slab a certain degree of flexibility, which is emphasized in ceramic-fiberglass composites. These outstanding performances make the large-sized slabs suitable for novel applications: building and construction (new floorings without dismantling the previous paving, ventilated façades, tunnel coverings, insulating panelling), indoor furniture (table tops, doors), and supports for photovoltaic ceramic panels.

    Large-format pieces, with dimensions up to 360 × 120 cm and thickness below 2 mm, have been manufactured using innovative fabrication methods, starting from porcelain stoneware compositions and employing wet ball milling, spray drying, slow-rate die-less pressing, fast single-stage drying and firing, and a finishing step that includes bonding fiberglass to the ceramic support and grinding of the final piece. ...

  3. Large deviations in stochastic heat-conduction processes provide a gradient-flow structure for heat conduction

    International Nuclear Information System (INIS)

    Peletier, Mark A.; Redig, Frank; Vafayi, Kiamars

    2014-01-01

    We consider three one-dimensional continuous-time Markov processes on a lattice, each of which models the conduction of heat: the family of Brownian Energy Processes with parameter m (BEP(m)), a Generalized Brownian Energy Process, and the Kipnis-Marchioro-Presutti (KMP) process. The hydrodynamic limit of each of these three processes is a parabolic equation, the linear heat equation in the case of the BEP(m) and the KMP, and a nonlinear heat equation for the Generalized Brownian Energy Process with parameter a (GBEP(a)). We prove the hydrodynamic limit rigorously for the BEP(m), and give a formal derivation for the GBEP(a). We then formally derive the pathwise large-deviation rate functional for the empirical measure of the three processes. These rate functionals imply gradient-flow structures for the limiting linear and nonlinear heat equations. We contrast these gradient-flow structures with those for processes describing the diffusion of mass, most importantly the class of Wasserstein gradient-flow systems. The linear and nonlinear heat-equation gradient-flow structures are each driven by entropy terms of the form −log ρ; they involve dissipation or mobility terms of order ρ² for the linear heat equation, and a nonlinear function of ρ for the nonlinear heat equation

  4. Modeling metabolic response to changes of enzyme amount in ...

    African Journals Online (AJOL)

    Based on the work of Hynne et al. (2001), an in silico model of glycolysis in Saccharomyces cerevisiae is established by introducing an enzyme amount multiplication factor (α) into the kinetic equations. The model is aimed at predicting the metabolic response to changes in enzyme amount. With the help of α, the amounts of ...

  5. 29 CFR 4.142 - Contracts in an indefinite amount.

    Science.gov (United States)

    2010-07-01

    ... McNamara-O'Hara Service Contract Act Determining Amount of Contract § 4.142 Contracts in an indefinite amount. (a) Every contract subject to this Act which is indefinite in amount is required to contain the....), a case arising under the Walsh-Healey Public Contracts Act. Such a contract, which may be in the...

  6. Large wood recruitment processes and transported volumes in Swiss mountain streams during the extreme flood of August 2005

    Science.gov (United States)

    Steeb, Nicolas; Rickenmann, Dieter; Badoux, Alexandre; Rickli, Christian; Waldner, Peter

    2017-02-01

    The extreme flood event that occurred in August 2005 was the most costly (documented) natural hazard event in the history of Switzerland. The flood was accompanied by the mobilization of > 69,000 m³ of large wood (LW) throughout the affected area. As recognized afterward, wood played an important role in exacerbating the damages, mainly because of log jams at bridges and weirs. The present study aimed at assessing the risk posed by wood in various catchments by investigating the amount and spatial variability of recruited and transported LW. Data regarding LW quantities were obtained by field surveys, remote sensing techniques (LiDAR), and GIS analysis and were subsequently translated into a conceptual model of wood transport mass balance. Detailed wood budgets and transport diagrams were established for four study catchments of Swiss mountain streams, showing the spatial variability of LW recruitment and deposition. Despite some uncertainties with regard to parameter assumptions, the sum of reconstructed wood input and observed deposition volumes agree reasonably well. Mass wasting such as landslides and debris flows were the dominant recruitment processes in headwater streams. In contrast, LW recruitment from lateral bank erosion became significant in the lower part of mountain streams where the catchment reached a size of about 100 km². According to our analysis, 88% of the reconstructed total wood input was fresh, i.e., coming from living trees that were recruited from adjacent areas during the event. This implies an average deadwood contribution of 12%, most of which was estimated to have been in-channel deadwood entrained during the flood event.

  7. On the possibility of the multiple inductively coupled plasma and helicon plasma sources for large-area processes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jin-Won; Lee, Yun-Seong, E-mail: leeeeys@kaist.ac.kr; Chang, Hong-Young [Low-temperature Plasma Laboratory, Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of); An, Sang-Hyuk [Agency of Defense Development, Yuseong-gu, Daejeon 305-151 (Korea, Republic of)

    2014-08-15

    In this study, we attempted to determine the feasibility of multiple inductively coupled plasma (ICP) and helicon plasma sources for large-area processes. Experiments were performed with one and two coils to measure plasma and electrical parameters, and a circuit simulation was performed to determine the current at each coil in the two-coil experiment. Based on the results, multiple ICP sources appear feasible, owing to the direct change of impedance with current and the saturation of impedance caused by the skin-depth effect. However, a helicon plasma source is difficult to adapt to multiple sources because of the continual change of real impedance caused by mode transitions and the low uniformity of the B-field confinement. As a result, it is expected that ICP can be adapted to multiple sources for large-area processes.

  8. Manufacturing Process Simulation of Large-Scale Cryotanks

    Science.gov (United States)

    Babai, Majid; Phillips, Steven; Griffin, Brian

    2003-01-01

    NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting between the various manufacturing approaches. By creating 3-D manufacturing process simulations, the varying approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows for cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI. As part of the SLI, The Boeing Company was awarded a basic period contract to research and propose options for both a metallic and a composite cryotank. Boeing then entered into a task agreement with the Marshall Space Flight Center to provide manufacturing

  9. Decision process in MCDM with large number of criteria and heterogeneous risk preferences

    Directory of Open Access Journals (Sweden)

    Jian Liu

    Full Text Available A new decision process is proposed to address the challenges posed by a large number of criteria in multi-criteria decision making (MCDM) problems and by decision makers with heterogeneous risk preferences. First, from the perspective of objective data, the effective criteria are extracted based on the similarity relations between criterion values, and the criteria are weighted accordingly. Second, corresponding theoretical models of risk-preference expectations are built, based on the possibility and similarity between criterion values, to resolve the problem of different interval numbers having the same expectation. The risk preferences (risk-seeking, risk-neutral and risk-averse) are then embedded in the decision process, and the optimal decision object is selected according to the risk preferences of the decision makers based on the corresponding theoretical model. Finally, a new information aggregation algorithm is proposed based on fairness maximization of decision results for group decisions, considering the coexistence of decision makers with heterogeneous risk preferences. The scientific rationality of this new method is verified through the analysis of a real case. Keywords: Heterogeneous, Risk preferences, Fairness, Decision process, Group decision

  10. 20 CFR 418.1305 - What is not an initial determination regarding your income-related monthly adjustment amount?

    Science.gov (United States)

    2010-04-01

    ... Amount Determinations and the Administrative Review Process § 418.1305 What is not an initial... process as provided by §§ 418.1320 through 418.1325 and §§ 418.1340 through 418.1355, and they are not subject to judicial review. These actions include, but are not limited to, our dismissal of a request for...

  11. Trace amount analysis using spark mass spectrometry

    International Nuclear Information System (INIS)

    Stefani, Rene

    1975-01-01

    The characteristics of spark mass spectrometers (ion source, properties of the ion beam, ion optics, and performance) and their use in qualitative and quantitative analysis are described. This technique is very interesting for the semi-quantitative analysis of trace amounts, down to 10⁻⁸ atoms. Examples of applications such as the analysis of high-purity materials and non-conducting mineral samples, and the determination of carbon and gas trace amounts, are presented. (50 references) [fr

  12. Text mining from ontology learning to automated text processing applications

    CERN Document Server

    Biemann, Chris

    2014-01-01

    This book comprises a set of articles that specify the methodology of text mining, describe the creation of lexical resources in the framework of text mining and use text mining for various tasks in natural language processing (NLP). The analysis of large amounts of textual data is a prerequisite to build lexical resources such as dictionaries and ontologies and also has direct applications in automated text processing in fields such as history, healthcare and mobile applications, just to name a few. This volume gives an update in terms of the recent gains in text mining methods and reflects

  13. Data on the impact of increasing the W amount on the mass density and compressive properties of Ni-W alloys processed by spark plasma sintering.

    Science.gov (United States)

    Sadat, T; Hocini, A; Lilensten, L; Faurie, D; Tingaud, D; Dirras, G

    2016-06-01

    Bulk Ni-W alloys having composite-like microstructures are processed by a spark plasma sintering (SPS) route from Ni and W powder blends, as reported in a recent study by Sadat et al. (2016) (DOI of original article: doi:10.1016/j.matdes.2015.10.083) [1]. The present dataset deals with the determination of mass density and the evaluation of room-temperature compressive mechanical properties as a function of the W content (wt.% basis). The presented data concern: (i) measurement of the mass of each investigated Ni-W alloy, which is subsequently used to compute the mass density of the alloy, and (ii) the raw stress (MPa) and strain data, which can subsequently be used for stress-strain plots.

  14. Nonlinear adaptive synchronization rule for identification of a large amount of parameters in dynamical models

    International Nuclear Information System (INIS)

    Ma Huanfei; Lin Wei

    2009-01-01

    The existing adaptive synchronization technique based on the stability theory and invariance principle of dynamical systems, though theoretically proved to be valid for parameter identification in specific models, often shows a slow convergence rate and may even fail in practice when the number of parameters becomes large. Here, a novel nonlinear adaptive rule for parameter updating is proposed to accelerate convergence. Its feasibility is validated by analytical arguments as well as by specific parameter identification in the Lotka-Volterra model with multiple species. Two adjustable factors in this rule influence the identification accuracy, which means that a proper choice of these factors leads to an optimal performance of this rule. In addition, a feasible method for avoiding the occurrence of approximate linear dependence among terms with parameters on the synchronized manifold is also proposed.

  15. Neutral processes forming large clones during colonization of new areas.

    Science.gov (United States)

    Rafajlović, M; Kleinhans, D; Gulliksson, C; Fries, J; Johansson, D; Ardehed, A; Sundqvist, L; Pereyra, R T; Mehlig, B; Jonsson, P R; Johannesson, K

    2017-08-01

    In species reproducing both sexually and asexually, clones are often more common in recently established populations. Earlier studies have suggested that this pattern arises due to natural selection favouring generally or locally successful genotypes in new environments. Alternatively, as we show here, this pattern may result from neutral processes during species' range expansions. We model a dioecious species expanding into a new area in which all individuals are capable of both sexual and asexual reproduction, and all individuals have equal survival rates and dispersal distances. Even under conditions that favour sexual recruitment in the long run, colonization starts with an asexual wave. After colonization is completed, a sexual wave erodes clonal dominance. If individuals reproduce for more than one season, and with only local dispersal, a few large clones typically dominate for thousands of reproductive seasons. Adding occasional long-distance dispersal, more dominant clones emerge, but they persist for a shorter period of time. The general mechanism involved is simple: edge effects at the expansion front favour asexual (uniparental) recruitment where potential mates are rare. Specifically, our model shows that neutral processes (with respect to genotype fitness) during the population expansion, such as random dispersal and demographic stochasticity, produce genotype patterns that differ from the patterns arising in a selection model. The comparison with empirical data from a post-glacially established seaweed species (Fucus radicans) shows that in this case, a neutral mechanism is strongly supported. © 2017 The Authors. Journal of Evolutionary Biology published by John Wiley & Sons Ltd on behalf of the European Society for Evolutionary Biology.

  16. Corrosion of Pipeline and Wellbore Steel by Liquid CO2 Containing Trace Amounts of Water and SO2

    Science.gov (United States)

    McGrail, P.; Schaef, H. T.; Owen, A. T.

    2009-12-01

    Carbon dioxide capture and storage in deep saline formations is currently considered the most attractive option to reduce greenhouse gas emissions with continued use of fossil fuels for energy production. Transporting captured CO2 and injection into suitable formations for storage will necessarily involve pipeline systems and wellbores constructed of carbon steels. Industry standards currently require nearly complete dehydration of liquid CO2 to reduce corrosion in the pipeline transport system. However, it may be possible to establish a corrosion threshold based on H2O content in the CO2 that could allow for minor amounts of H2O to remain in the liquid CO2 and thereby eliminate a costly dehydration step. Similarly, trace amounts of sulfur and nitrogen compounds common in flue gas streams are currently removed through expensive desulfurization and catalytic reduction processes. Provided these contaminants could be safely and permanently transported and stored in the geologic reservoir, retrofits of existing fossil-fuel plants could address comprehensive emissions reductions, including CO2 at perhaps nearly the same capital and operating cost. Because CO2-SO2 mixtures have never been commercially transported or injected, both experimental and theoretical work is needed to understand corrosion mechanisms of various steels in these gas mixtures containing varying amounts of water. Experiments were conducted with common tool steel (AISI-01) and pipeline steel (X65) immersed in liquid CO2 at room temperature containing ~1% SO2 and varying amounts of H2O (0 to 2500 ppmw). A threshold concentration of H2O in the liquid CO2-SO2 mixture was established based on the absence of visible surface corrosion. For example, experiments exposing steel to liquid CO2-SO2 containing ~300 ppmw H2O showed a delay in onset of visible corrosion products and minimal surface corrosion was visible after five days of testing. However increasing the water content to 760 ppmw produced extensive

  17. Development of Integrated Die Casting Process for Large Thin-Wall Magnesium Applications

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Jon T. [General Motors LLC, Warren, MI (United States); Wang, Gerry [Meridian Lightweight Technologies, Plymouth MI (United States); Luo, Alan [General Motors LLC, Warren, MI (United States)

    2017-11-29

    The purpose of this project was to develop a process and product which would utilize magnesium die casting and result in energy savings when compared to the baseline steel product. The specific product chosen was a side door inner panel for a mid-size car. The scope of the project included: re-design of major structural parts of the door, design and build of the tooling required to make the parts, making of parts, assembly of doors, and testing (both physical and simulation) of doors. Additional work was done on alloy development, vacuum die casting, and overcasting, all in order to improve the performance of the doors and reduce cost. The project achieved the following objectives: 1. Demonstrated ability to design a large thin-wall magnesium die casting. 2. Demonstrated ability to manufacture a large thin-wall magnesium die casting in AM60 alloy. 3. Tested via simulations and/or physical tests the mechanical behavior and corrosion behavior of magnesium die castings and/or lightweight experimental automotive side doors which incorporate a large, thin-wall, powder coated, magnesium die casting. Under some load cases, the results revealed cracking of the casting, which can be addressed with re-design and better material models for CAE analysis. No corrosion of the magnesium panel was observed. 4. Using life cycle analysis models, compared the energy consumption and global warming potential of the lightweight door with those of a conventional steel door, both during manufacture and in service. Compared to a steel door, the lightweight door requires more energy to manufacture but less energy during operation (i.e., fuel consumption when driving vehicle). Similarly, compared to a steel door, the lightweight door has higher global warming potential (GWP) during manufacture, but lower GWP during operation. 5. Compared the conventional magnesium die casting process with the “super-vacuum” die casting process. Results achieved with cast tensile bars suggest some

  18. 28 CFR 70.73 - Collection of amounts due.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Collection of amounts due. 70.73 Section 70.73 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) UNIFORM ADMINISTRATIVE REQUIREMENTS... OTHER NON-PROFIT ORGANIZATIONS After-the-Award Requirements § 70.73 Collection of amounts due. (a) Any...

  19. Large scale disposal of waste sulfur: From sulfide fuels to sulfate sequestration

    International Nuclear Information System (INIS)

    Rappold, T.A.; Lackner, K.S.

    2010-01-01

    Petroleum industries produce more byproduct sulfur than the market can absorb. As a consequence, most sulfur mines around the world have closed down, large stocks of yellow sulfur have piled up near remote operations, and growing amounts of toxic H2S are disposed of in the subsurface. Unless sulfur demand drastically increases or thorough disposal practices are developed, byproduct sulfur will persist as a chemical waste problem on the scale of 10⁷ tons per year. We review industrial practices, salient sulfur chemistry, and the geochemical cycle to develop sulfur management concepts at the appropriate scale. We contend that the environmentally responsible disposal of sulfur would involve conversion to sulfuric acid followed by chemical neutralization with equivalent amounts of base, which common alkaline rocks can supply cheaply. The resulting sulfate salts are benign and suitable for brine injection underground or release to the ocean, where they would cause minimal disturbance to ecosystems. Sequestration costs can be recouped by taking advantage of the fuel-grade thermal energy released in the process of oxidizing reduced compounds and sequestering the products. Sulfate sequestration can eliminate stockpiles and avert the proliferation of enriched H2S stores underground while providing plenty of carbon-free energy to hydrocarbon processing.

  20. 13 CFR 120.930 - Amount.

    Science.gov (United States)

    2010-01-01

    ... percent of total Project cost plus 100 percent of eligible administrative costs. For good cause shown, SBA... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Amount. 120.930 Section 120.930 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Development Company Loan...

  1. Chaotic Traversal (CHAT): Very Large Graphs Traversal Using Chaotic Dynamics

    Science.gov (United States)

    Changaival, Boonyarit; Rosalie, Martin; Danoy, Grégoire; Lavangnananda, Kittichai; Bouvry, Pascal

    2017-12-01

    Graph traversal algorithms find applications in various fields such as routing problems, natural language processing and database querying. Exploration can be considered a first stepping stone towards knowledge extraction from a graph, which is now a popular topic. Classical solutions such as Breadth First Search (BFS) and Depth First Search (DFS) require huge amounts of memory for exploring very large graphs. In this research, we present a novel memoryless graph traversal algorithm, Chaotic Traversal (CHAT), which integrates chaotic dynamics to traverse large unknown graphs via the Lozi map and the Rössler system. To compare the effects of different dynamics on our algorithm, we present an original way to explore a parameter space using a bifurcation diagram with respect to the topological structure of the attractors. The resulting algorithm is efficient and undemanding of resources, and is therefore very suitable for partial traversal of very large and/or unknown environment graphs. CHAT using the Lozi map is shown to outperform the commonly known random walk in terms of the number of nodes visited (coverage percentage) and computation time where the environment is unknown and memory usage is restricted.
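
    A minimal, hypothetical sketch of the general idea (a chaos-driven, memoryless walk): the Lozi map supplies a deterministic yet aperiodic sequence that is used to pick the next neighbour. The parameter values and the neighbour-selection rule below are illustrative assumptions, not the authors' exact scheme.

```python
# Memoryless, chaos-driven graph walk: a Lozi map (x' = 1 - a|x| + y, y' = b*x)
# generates the sequence that selects the next neighbour at each step.
from typing import Dict, List

def lozi_walk(graph: Dict[int, List[int]], start: int, steps: int,
              a: float = 1.7, b: float = 0.5) -> List[int]:
    x, y = 0.1, 0.1                        # seed of the chaotic sequence
    node, visited = start, [start]
    for _ in range(steps):
        x, y = 1.0 - a * abs(x) + y, b * x     # one Lozi-map iteration
        nbrs = graph[node]
        if not nbrs:
            break
        node = nbrs[int(abs(x) * 1e6) % len(nbrs)]   # chaotic state -> neighbour index
        visited.append(node)
    return visited

# toy usage on a small graph given as adjacency lists
g = {0: [1, 2], 1: [0, 2, 3], 2: [0, 3], 3: [1]}
print(lozi_walk(g, start=0, steps=10))
```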

  2. Process Mining Methodology for Health Process Tracking Using Real-Time Indoor Location Systems

    Science.gov (United States)

    Fernandez-Llatas, Carlos; Lizondo, Aroa; Monton, Eduardo; Benedi, Jose-Miguel; Traver, Vicente

    2015-01-01

    The definition of efficient and accurate health processes in hospitals is crucial for ensuring an adequate quality of service. Knowing and improving the behavior of the surgical processes in a hospital can increase the number of patients that can be operated on using the same resources. However, measurement of these processes is usually made in an obtrusive way, forcing nurses to collect information and time data, affecting the process itself and generating inaccurate data due to human errors during the stressful journey of health staff in the operating theater. Indoor location systems can capture time information about the process in an unobtrusive way, freeing nurses and allowing them to engage in purely welfare work. However, it is necessary to present these data in an understandable way for health professionals, who cannot deal with large amounts of historical localization log data. Process mining techniques can deal with this problem, offering an easily understandable view of the process. In this paper, we present a tool and a process mining-based methodology that, using indoor location systems, enables health staff not only to represent the process, but to know precise information about the deployment of the process in an unobtrusive and transparent way. We have successfully tested this tool in a real surgical area with 3613 patients during February, March and April of 2015. PMID:26633395

  3. Process Mining Methodology for Health Process Tracking Using Real-Time Indoor Location Systems.

    Science.gov (United States)

    Fernandez-Llatas, Carlos; Lizondo, Aroa; Monton, Eduardo; Benedi, Jose-Miguel; Traver, Vicente

    2015-11-30

    The definition of efficient and accurate health processes in hospitals is crucial for ensuring an adequate quality of service. Knowing and improving the behavior of the surgical processes in a hospital can increase the number of patients that can be operated on using the same resources. However, measurement of these processes is usually made in an obtrusive way, forcing nurses to collect information and time data, affecting the process itself and generating inaccurate data due to human errors during the stressful journey of health staff in the operating theater. Indoor location systems can capture time information about the process in an unobtrusive way, freeing nurses and allowing them to engage in purely welfare work. However, it is necessary to present these data in an understandable way for health professionals, who cannot deal with large amounts of historical localization log data. Process mining techniques can deal with this problem, offering an easily understandable view of the process. In this paper, we present a tool and a process mining-based methodology that, using indoor location systems, enables health staff not only to represent the process, but to know precise information about the deployment of the process in an unobtrusive and transparent way. We have successfully tested this tool in a real surgical area with 3613 patients during February, March and April of 2015.

  4. Risk-based Strategy to Determine Testing Requirement for the Removal of Residual Process Reagents as Process-related Impurities in Bioprocesses.

    Science.gov (United States)

    Qiu, Jinshu; Li, Kim; Miller, Karen; Raghani, Anil

    2015-01-01

    The purpose of this article is to recommend a risk-based strategy for determining the clearance testing requirements of the process reagents used in manufacturing biopharmaceutical products. The strategy takes account of four risk factors. First, the process reagents are classified into two categories according to their safety profile and history of use: generally recognized as safe (GRAS) and potential safety concern (PSC) reagents. Clearance testing of GRAS reagents can be eliminated because of their historically safe use and the capability of the process to remove them. An estimated safety margin (Se) value, the ratio of the exposure limit to the estimated maximum reagent amount, is then used to evaluate the necessity of testing PSC reagents at an early development stage. The Se value is calculated from two risk factors: the starting PSC reagent amount per maximum product dose (Me) and the exposure limit (Le). A worst-case scenario is assumed to estimate the Me value: the PSC reagent of interest is co-purified with the product and no clearance occurs throughout the entire purification process. No clearance testing is required for a PSC reagent if its Se value is ≥ 1; otherwise clearance testing is needed. Finally, the point at which a process reagent is introduced into the process is also considered in determining the necessity of clearance testing. How to use the measured safety margin as a criterion for determining PSC reagent testing at the process characterization, process validation, and commercial production stages is also described. A large number of process reagents are used in biopharmaceutical manufacturing to control process performance. Clearance testing for all of the process reagents would be an enormous analytical task. In this article, a risk-based strategy is described to eliminate unnecessary clearance testing for the majority of the process reagents using four risk factors. The risk factors included
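
    A minimal sketch of the screening arithmetic described above (Se = Le / Me, with testing waived when Se ≥ 1); the numbers are invented placeholders, not values from the article:

```python
# Safety-margin screen for a PSC reagent: Se = exposure limit (Le) divided by the
# worst-case reagent amount per maximum product dose (Me), assuming no clearance.
def needs_clearance_testing(exposure_limit_ug: float, worst_case_amount_ug: float) -> bool:
    se = exposure_limit_ug / worst_case_amount_ug
    return se < 1.0            # Se >= 1  ->  no clearance testing required

# hypothetical reagent: Le = 50 ug per dose, worst-case Me = 20 ug per dose
print(needs_clearance_testing(exposure_limit_ug=50.0, worst_case_amount_ug=20.0))  # False
```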

  5. 22 CFR 226.73 - Collection of amounts due.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Collection of amounts due. 226.73 Section 226.73 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT ADMINISTRATION OF ASSISTANCE AWARDS TO U.S. NON-GOVERNMENTAL ORGANIZATIONS After-the-Award Requirements § 226.73 Collection of amounts due. (a...

  6. Further development of technology for liquid waste processing

    International Nuclear Information System (INIS)

    Hashimoto, Shoji

    1998-01-01

    The passage of radiation through materials causes chemical and physical changes. These effects can be utilized for the decomposition of organic compounds and the precipitation of small suspended particles, and the clarification of waste water using radiation has therefore been investigated. This report summarizes the principle, the studies and the trend toward practical use of waste water processing with radiation. Generally, γ-rays from 60Co and electron beams from an electron accelerator are usable for water treatment. The penetrating power of an electron beam is smaller than that of γ-rays, but the former is more suitable for processing a large amount of waste water since high-power electron accelerators are now available. The use of radiation has been examined for the degradation of toxic organic compounds, the sterilization and inactivation of pathogenic microbes and viruses, and the reactivation of used activated carbon, and radiation was found applicable to all these purposes. (M.N.)

  7. Abnormal binding and disruption in large scale networks involved in human partial seizures

    Directory of Open Access Journals (Sweden)

    Bartolomei Fabrice

    2013-12-01

    Full Text Available There is a marked increase in the amount of electrophysiological and neuroimaging work dealing with the study of large-scale brain connectivity in the epileptic brain. Our view of the epileptogenic process in the brain has evolved considerably over the last twenty years, from the historical concept of the “epileptic focus” to a more complex description of “epileptogenic networks” involved in the genesis and “propagation” of epileptic activities. In particular, a large number of studies have been dedicated to the analysis of intracerebral EEG signals to characterize the dynamics of interactions between brain areas during temporal lobe seizures. These studies have reported that large-scale functional connectivity is dramatically altered during seizures, particularly during temporal lobe seizure genesis and development. Dramatic changes in neural synchrony provoked by epileptic rhythms are also responsible for the production of ictal symptoms or changes in patients' behaviour such as automatisms, emotional changes or consciousness alteration. Besides these studies dedicated to seizures, large-scale network connectivity during the interictal state has also been investigated, not only to define biomarkers of epileptogenicity but also to better understand the cognitive impairments observed between seizures.

  8. High-energy, large-momentum-transfer processes: Ladder diagrams in φ3 theory. Pt. 1

    International Nuclear Information System (INIS)

    Osland, P.; Wu, T.T.; Harvard Univ., Cambridge, MA

    1987-01-01

    Relativistic quantum field theories may give us useful guidance to understanding high-energy, large-momentum-transfer processes, where the center-of-mass energy is much larger than the transverse momentum transfers, which are in turn much larger than the masses of the participating particles. With this possibility in mind, we study the ladder diagrams in φ³ theory. In this paper, some of the necessary techniques are developed and applied to the simplest cases of the fourth- and sixth-order ladder diagrams. (orig.)

  9. Concrete Waste Recycling Process for High Quality Aggregate

    International Nuclear Information System (INIS)

    Ishikura, Takeshi; Fujii, Shin-ichi

    2008-01-01

    A large amount of concrete waste is generated during nuclear power plant (NPP) dismantling. Non-contaminated concrete waste is assumed to be disposed of in landfill sites, but this will not remain a solution, especially in the future, because of the decreasing availability of such sites and of natural resources. Concerning concrete recycling, demand for roadbeds and backfill tends to be less than the amount of dismantled concrete generated at a single rural site, and conventional recycled aggregate is limited to use in non-structural concrete because its quality is inferior to that of ordinary natural aggregate. Therefore, it is vital to develop high quality recycled aggregate for general use of dismantled concrete. If recycled aggregate becomes suitable for structural concrete, dismantled concrete can be recycled as aggregate for industry, including the nuclear field. The authors developed techniques for reclaiming high quality aggregate from the large amount of concrete generated during NPP decommissioning. Concrete from NPP buildings has good features for aggregate recycling: a large quantity of high quality aggregate of the same origin, record keeping of the aggregate origin, and few impurities such as wood and plastics in the dismantled concrete. The target for the recycled aggregate in this development is to meet the quality criteria for NPP concrete as prescribed in JASS 5N 'Specification for Nuclear Power Facility Reinforced Concrete' and JASS 5 'Specification for Reinforced Concrete Work'. The target for recycled aggregate concrete is performance comparable to that of ordinary aggregate concrete. The high quality recycled aggregate production techniques are intended for recycling the large amount of non-contaminated concrete, and can also be applied to slightly contaminated concrete dismantled from the radiological control area (RCA), together with a free-release survey. In conclusion, a technology for recycling dismantled concrete into high quality aggregate was developed.

  10. Using GRACE to constrain precipitation amount over cold mountainous basins

    Science.gov (United States)

    Behrangi, Ali; Gardner, Alex S.; Reager, John T.; Fisher, Joshua B.

    2017-01-01

    Despite the importance for hydrology and climate-change studies, current quantitative knowledge on the amount and distribution of precipitation in mountainous and high-elevation regions is limited due to instrumental and retrieval shortcomings. Here, by focusing on two large endorheic basins in High Mountain Asia, we show that satellite gravimetry (Gravity Recovery and Climate Experiment (GRACE)) can be used to provide an independent estimate of monthly accumulated precipitation using a mass balance equation. Results showed that the GRACE-based precipitation estimate has the highest agreement with most of the commonly used precipitation products in summer, but deviates from them in cold months, when the other products are expected to have larger errors. It was found that most of the products capture about 50% or less of the total precipitation estimated using GRACE in winter. Overall, the Global Precipitation Climatology Project (GPCP) showed better agreement with the GRACE estimate than the other products; yet, on average, GRACE showed 30% more annual precipitation than GPCP in the study basins. In basins of appropriate size that lack dense ground measurements, as is typical of cold mountainous regions, we find that GRACE can be a viable means of constraining monthly and seasonal precipitation estimates from other remotely sensed precipitation products that show large biases.
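
    A minimal sketch of the water-balance idea (not the authors' exact formulation): for a closed endorheic basin with negligible outflow, monthly precipitation can be approximated from the GRACE storage change plus an independent evapotranspiration estimate. All numbers below are invented.

        # P ≈ ΔS + ET for an endorheic basin; values in mm water equivalent per month (assumed).

        storage = [12.0, 18.5, 25.0, 22.0]   # GRACE total water storage anomalies, monthly
        et = [5.0, 6.5, 9.0, 11.0]           # evapotranspiration estimates, monthly

        def precipitation_from_balance(storage, et):
            """Return P ≈ ΔS + ET for each month after the first."""
            return [(storage[i] - storage[i - 1]) + et[i] for i in range(1, len(storage))]

        print(precipitation_from_balance(storage, et))   # [13.0, 15.5, 8.0]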

  11. Plasma analysis of different TiN PVD processes at various process parameters

    International Nuclear Information System (INIS)

    Strauss, G.N.; Schlichtherle, S.; Pulker, H.K.; Meyer, M.; Jehn, H.; Balzer, M.; Misiano, C.; Silipo, V.

    2002-01-01

    TiN coatings of some microns in thickness were deposited by different reactive plasma deposition technologies (Magnetron Sputtering Magnetically Assisted, Arc Source Ion Plating, Sputter Ion Plating Plasma Assisted) on various metal parts. The experiments were carried out in specially designed plants under variable vacuum and plasma conditions. The plasma properties of the different processes were investigated by mass spectrometry and the energy distribution of process relevant particles was additionally determined. The aim of this work was to find proper processes and conditions for a reliable low cost deposition of hard coatings at relatively high gas pressures. It was found that the magnetically forced and medium frequency pulsed biased dc magnetron sputter deposition variants, operating in the 10 -3 mbar gas pressure range, showed a relatively large amount of single and double charged positive ions with kinetic energies up to 55 and 95 eV, as consequence of the applied modifications. Cathodic arc deposition, in the same gas pressure range of 10 - 3 mbar, showed a very high number of such ions with energies up to more than 100 eV, depending on the value of the applied arc current. However, at constant distance between source and substrate the higher gas pressure increases also the number of energy reducing collisions of the coating-material vapour-species with the gas molecules. The arc source process, even when performed at high gas pressures of about 10 -1 mbar, showed a remarkable amount of ions with energies up to 75 eV resulting in high performance TiN films of quite proper 3D homogeneity. The arc source technique is able to increase film thickness uniformity up to 3 times with respect to the traditional coatings if the samples are mounted in a way that they do not influence each other. (nevyjel)

  12. Multi-format all-optical processing based on a large-scale, hybridly integrated photonic circuit.

    Science.gov (United States)

    Bougioukos, M; Kouloumentas, Ch; Spyropoulou, M; Giannoulis, G; Kalavrouziotis, D; Maziotis, A; Bakopoulos, P; Harmon, R; Rogers, D; Harrison, J; Poustie, A; Maxwell, G; Avramopoulos, H

    2011-06-06

    We investigate through numerical studies and experiments the performance of a large scale, silica-on-silicon photonic integrated circuit for multi-format regeneration and wavelength-conversion. The circuit encompasses a monolithically integrated array of four SOAs inside two parallel Mach-Zehnder structures, four delay interferometers and a large number of silica waveguides and couplers. Exploiting phase-incoherent techniques, the circuit is capable of processing OOK signals at variable bit rates, DPSK signals at 22 or 44 Gb/s and DQPSK signals at 44 Gbaud. Simulation studies reveal the wavelength-conversion potential of the circuit with enhanced regenerative capabilities for OOK and DPSK modulation formats and acceptable quality degradation for DQPSK format. Regeneration of 22 Gb/s OOK signals with amplified spontaneous emission (ASE) noise and DPSK data signals degraded with amplitude, phase and ASE noise is experimentally validated demonstrating a power penalty improvement up to 1.5 dB.

  13. Summer Decay Processes in a Large Tabular Iceberg

    Science.gov (United States)

    Wadhams, P.; Wagner, T. M.; Bates, R.

    2012-12-01

    Summer Decay Processes in a Large Tabular Iceberg Peter Wadhams (1), Till J W Wagner(1) and Richard Bates(2) (1) Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK (2) Scottish Oceans Institute, School of Geography and Geosciences, University of St Andrews, St. Andrews, Scotland KY16 9AL We present observational results from an experiment carried out during July-August 2012 on a giant grounded tabular iceberg off Baffin Island. The iceberg studied was part of the Petermann Ice Island B1 (PIIB1) which calved off the Petermann Glacier in NW Greenland in 2010. Since 2011 it has been aground in 100 m of water on the Baffin Island shelf at 69 deg 06'N, 66 deg 06'W. As part of the project a set of high resolution GPS sensors and tiltmeters was placed on the ice island to record rigid body motion as well as flexural responses to wind, waves, current and tidal forces, while a Waverider buoy monitored incident waves and swell. On July 31, 2012 a major breakup event was recorded, with a piece of 25,000 sq m surface area calving off the iceberg. At the time of breakup, GPS sensors were collecting data both on the main berg as well as on the newly calved piece, while two of us (PW and TJWW) were standing on the broken-out portion which rose by 0.6 m to achieve a new isostatic equilibrium. Crucially, there was no significant swell at the time of breakup, which suggests a melt-driven decay process rather than wave-driven flexural break-up. The GPS sensors recorded two disturbances during the hour preceding the breakup, indicative of crack growth and propagation. Qualitative observation during the two weeks in which our research ship was moored to, or was close to, the ice island edge indicates that an important mechanism for summer ablation is successive collapses of the overburden from above an unsupported wave cut, which creates a submerged ram fringing the berg. A model of buoyancy stresses induced by

  14. Benchmarking processes for managing large international space programs

    Science.gov (United States)

    Mandell, Humboldt C., Jr.; Duke, Michael B.

    1993-01-01

    The relationship between management style and program costs is analyzed to determine the feasibility of financing large international space missions. The incorporation of management systems is considered to be essential to realizing low-cost spacecraft and planetary surface systems. Several companies, ranging from the large Lockheed 'Skunk Works' to small companies including Space Industries, Inc., Rocket Research Corp., and Orbital Sciences Corp., were studied. It is concluded that, to lower prices, the ways in which spacecraft and hardware are developed must be changed. Benchmarking of successful low-cost space programs has revealed a number of prescriptive rules for low-cost management, including major changes in the relationships between the public and private sectors.

  15. Possible chaoticity for the time series of the amount of nuclear information released by the newsmedia

    International Nuclear Information System (INIS)

    Ohnishi, T.

    1995-01-01

    The amount of information concerning nuclear problems released by three types of news media in Japan (the press, television and magazines) during the past 20 years was analysed as a time series. The time series of the logarithmic value of the amount released by some of the news media was found to be possibly chaotic, or at least non-stochastic. Such a characteristic of the time series can be interpreted as the result of a certain kind of selection process operating within the news media when deciding whether an event is to be released as news. (author)

  16. Assembling large, complex environmental metagenomes

    Energy Technology Data Exchange (ETDEWEB)

    Howe, A. C. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Plant Soil and Microbial Sciences; Jansson, J. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division; Malfatti, S. A. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Tringe, S. G. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Tiedje, J. M. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Plant Soil and Microbial Sciences; Brown, C. T. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Computer Science and Engineering

    2012-12-28

    The large volumes of sequencing data required to sample complex environments deeply pose new challenges to sequence analysis approaches. De novo metagenomic assembly effectively reduces the total amount of data to be analyzed but requires significant computational resources. We apply two pre-assembly filtering approaches, digital normalization and partitioning, to make large metagenome assemblies more computationally tractable. Using a human gut mock community dataset, we demonstrate that these methods result in assemblies nearly identical to assemblies from unprocessed data. We then assemble two large soil metagenomes from matched Iowa corn and native prairie soils. The predicted functional content and phylogenetic origin of the assembled contigs indicate significant taxonomic differences despite similar function. The assembly strategies presented are generic and can be extended to any metagenome; full source code is freely available under a BSD license.
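
    The digital-normalization step mentioned above can be sketched as follows (a toy illustration, not the khmer implementation used by the authors): a read is kept only if its median k-mer abundance, given the reads accepted so far, is still below a target coverage.

        from collections import Counter
        from statistics import median

        K = 5        # k-mer size (toy value)
        CUTOFF = 1   # target coverage (toy value; real analyses use e.g. 20)

        kmer_counts = Counter()

        def kmers(seq, k=K):
            return [seq[i:i + k] for i in range(len(seq) - k + 1)]

        def keep_read(seq):
            """Accept a read only if it still adds information at the target coverage."""
            counts = [kmer_counts[km] for km in kmers(seq)]
            if not counts or median(counts) < CUTOFF:
                kmer_counts.update(kmers(seq))   # accept: record its k-mers
                return True
            return False                         # redundant read: drop it

        reads = ["ACGTACGTAC", "ACGTACGTAC", "TTGCATGCAA"]
        kept = [r for r in reads if keep_read(r)]
        print(f"{len(kept)} of {len(reads)} reads kept")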

  17. 5 CFR 870.704 - Amount of Option A.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Amount of Option A. 870.704 Section 870... of Option A. (a) The amount of Option A coverage an annuitant or compensationer can continue is $10,000. (b) An annuitant's or compensationer's Option A coverage reduces by 2 percent of the original...

  18. 46 CFR 282.20 - Amount of subsidy payable.

    Science.gov (United States)

    2010-10-01

    ... Rates. Daily ODS rates shall be used to quantify the amount of ODS payable. The daily ODS rate... items is the daily amount of ODS payable for approved vessel operating days, excluding reduced crew... the daily wage ODS rate to conform to the complement remaining on the vessel. The man-day reduction...

  19. Drell–Yan process at Large Hadron Collider

    Indian Academy of Sciences (India)

    The Drell–Yan process at the LHC, qq̄ → Z/γ* → ℓ⁺ℓ⁻, is one of the benchmarks for confirmation of the Standard Model at the TeV energy scale. Since the theoretical prediction for the rate is precise and the final state is clean as well as relatively easy to measure, the process can be studied at the LHC even at relatively low luminosity.

  20. A rapid and practical strategy for the determination of platinum, palladium, ruthenium, rhodium, iridium and gold in large amounts of ultrabasic rock by inductively coupled plasma optical emission spectrometry combined with ultrasound extraction

    Science.gov (United States)

    Zhang, Gai; Tian, Min

    2015-04-01

    The proposed method describes, for the first time, the determination of platinum, palladium, ruthenium, rhodium, iridium and gold in platinum-group ores by nickel sulfide fire assay with inductively coupled plasma optical emission spectrometry (ICP-OES), combined with ultrasound extraction. The quantitative limits were 0.013-0.023 μg/g. The samples were fused to separate the platinum-group elements from the matrix. The nickel sulfide button was then dissolved with hydrochloric acid, the insoluble platinum-group sulfide residue was dissolved with aqua regia in an ultrasound bath, and the analytes were finally determined by ICP-OES. The proposed method has been applied to the determination of platinum-group elements and gold in large amounts of ultrabasic rock from the Great Dyke of Zimbabwe.

  1. Grain refinement of Aluminium alloys using friction stir processing

    International Nuclear Information System (INIS)

    Khraisheh, M.

    2004-01-01

    Full text. Friction Stir Processing (FSP) is a new advanced material processing technique used to refine and homogenize the microstructure of sheet metals. FSP is a solid-state processing technique that uses a rapidly rotating, non-consumable, high-strength tool steel pin that extends from a cylindrical shoulder. The rotating pin is forced into the workpiece with a predetermined load and moved along it, while the rotating pin deforms and stirs the locally heated material. It is a hot working process in which a large amount of deformation is imparted to the sheet. The FS-processed zone is characterized by dynamic recrystallization, which results in grain refinement. This promising emerging process needs further investigation to develop optimum process parameters that produce the desired microstructure. In this work, we present preliminary results on the effects of rotational and translational speeds on grain refinement of AA5052. Under certain processing conditions, a sub-micron grain structure was produced using this technique.

  2. Preserving Medieval Farm Mounds in a Large Stormwater Retention Area

    NARCIS (Netherlands)

    Vorenhout, M.

    2016-01-01

    The Netherlands has designated large areas as stormwater retention areas. These areas function as temporary storage locations for stormwater when rivers cannot cope with the amount of water. A large area, the Onlanden — 2,500 hectares — was developed as such a storage area between 2008 and 2013. This

  3. 45 CFR 1225.11 - Amount of attorney fees.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Amount of attorney fees. 1225.11 Section 1225.11... § 1225.11 Amount of attorney fees. (a) When a decision of the agency provides for an award of attorney's fees or costs, the complainant's attorney shall submit a verified statement of costs and attorney's...

  4. Innovation Processes in Large-Scale Public Foodservice-Case Findings from the Implementation of Organic Foods in a Danish County

    DEFF Research Database (Denmark)

    Mikkelsen, Bent Egberg; Nielsen, Thorkild; Kristensen, Niels Heine

    2005-01-01

    is the idea that large-scale foodservice operations such as hospital food services should adopt a buy-organic policy due to their large buying volume. But whereas implementation of organic foods has developed quite unproblematically in smaller institutions such as kindergartens and nurseries, introduction of organic...... foods into large-scale foodservice such as that taking place in hospitals and larger homes for the elderly has proven to be quite difficult. The very complex planning, procurement and processing procedures used in such facilities are among the reasons for this. Against this background an evaluation......

  5. A review of Vendor Managed Inventory (VMI): from concept to processes

    OpenAIRE

    Marquès , Guillaume; Thierry , Caroline; Lamothe , Jacques; Gourc , Didier

    2011-01-01

    International audience; In the modern supplier-customer relationship, Vendor Managed Inventory (VMI) is used to monitor the customer's inventory replenishment. Despite the large amount of literature on the subject, it is difficult to clearly define VMI and the main associated processes. Beyond the short-term pull system inventory replenishment often studied in academic works, partners have to share their vision of the demand, their requirements and their constraints in order to fix shared obj...

  6. Design of an RF Antenna for a Large-Bore, High Power, Steady State Plasma Processing Chamber for Material Separation

    International Nuclear Information System (INIS)

    Rasmussen, D.A.; Freeman, R.L.

    2001-01-01

    The purpose of this Cooperative Research and Development Agreement (CRADA) between UT-Battelle, LLC (Contractor), and Archimedes Technology Group (Participant) is to evaluate the design of an RF antenna for a large-bore, high power, steady state plasma processing chamber for material separation. Criteria for optimization will be to maximize the power deposition in the plasma while operating at acceptable voltages and currents in the antenna structure.

  7. Research on the drawing process with a large total deformation wires of AZ31 alloy

    International Nuclear Information System (INIS)

    Bajor, T; Muskalski, Z; Suliga, M

    2010-01-01

    Magnesium and its alloys have been extensively studied in recent years, not only because of their potential applications as light-weight engineering materials, but also owing to their biodegradability. Due to their hexagonal close-packed crystallographic structure, cold plastic processing of magnesium alloys is difficult. Preliminary research carried out by the authors has indicated that applying the KOBO method, which is based on the effect of cyclic strain path change, to the deformation of magnesium alloys makes it possible to obtain a fine-grained material suitable for further cold plastic processing with a large total deformation. The main purpose of this work is to present research findings concerning a detailed analysis of the mechanical properties and the structural changes occurring in AZ31 alloy wire during the multistage cold drawing process. The appropriate selection of drawing parameters and the application of multistep heat treatment operations enable the deformation of the AZ31 alloy in the cold drawing process with a total draft of about 90%.

  8. The big data processing platform for intelligent agriculture

    Science.gov (United States)

    Huang, Jintao; Zhang, Lichen

    2017-08-01

    Big data technology is another popular technology following the Internet of Things and cloud computing. Big data is widely used in many fields such as social platforms, e-commerce and financial analysis. Intelligent agriculture produces large amounts of data of complex structure in the course of its operation, and fully mining the value of these data will be very meaningful for the development of agriculture. This paper proposes an intelligent data processing platform based on Storm and Cassandra to realize the storage and management of big data for intelligent agriculture.

  9. Improving the Phosphoproteome Coverage for Limited Sample Amounts Using TiO2-SIMAC-HILIC (TiSH) Phosphopeptide Enrichment and Fractionation

    DEFF Research Database (Denmark)

    Engholm-Keller, Kasper; Larsen, Martin R

    2016-01-01

    spectrometry (LC-MS/MS) analysis. Due to the sample loss resulting from fractionation, this procedure is mainly performed when large quantities of sample are available. To make large-scale phosphoproteomics applicable to smaller amounts of protein we have recently combined highly specific TiO2-based...... protocol we describe the procedure step by step to allow for comprehensive coverage of the phosphoproteome utilizing only a few hundred micrograms of protein....

  10. 39 CFR 601.111 - Interest on claim amounts.

    Science.gov (United States)

    2010-07-01

    ... that date is later, until the date of payment. Simple interest will be paid at the rate established by... 39 Postal Service 1 2010-07-01 2010-07-01 false Interest on claim amounts. 601.111 Section 601.111... PROPERTY RIGHTS OTHER THAN PATENTS PURCHASING OF PROPERTY AND SERVICES § 601.111 Interest on claim amounts...

  11. Improved processes of molybdenum-99 production

    International Nuclear Information System (INIS)

    Dadachova, K.; La Riviere, K.; Anderon, P.

    1997-01-01

    Two improved processes for molybdenum-99 (Mo-99) production have been developed at ANSTO on a laboratory scale. The first allows Mo of natural isotopic composition to be purified from tungsten impurities by using preferential adsorption of tungsten on hydrated tin(IV) oxide, SnO2·nH2O, before irradiation in the nuclear reactor. Mo-99 obtained via this route can be used for the production of 'instant' Tc-99m. As the starting material MoO3 contains considerable amounts of tungsten impurity (W > 60 ppm), 5-7 days of irradiation results in the generation of W-188 in amounts sufficient to contaminate the final Tc-99m product with rhenium-188 (Re-188, 16.8 h half-life), the radioactive daughter of W-188. To overcome this problem, a method of purifying MoO3 from W, based on preferential adsorption of W by hydrated tin(IV) oxide, has been developed. The W content of MoO3 purified by this technique became < 3 ppm. The second process is based on retention of Mo-99 on a large alumina column; Mo-99 is stripped off the column with 200 mL of 1M NH4OH and this solution is then loaded onto the AG 1x8 column. The next steps are different for each version of the separation process.

  12. Quenches in large superconducting magnets

    International Nuclear Information System (INIS)

    Eberhard, P.H.; Alston-Garnjost, M.; Green, M.A.; Lecomte, P.; Smits, R.G.; Taylor, J.D.; Vuillemin, V.

    1977-08-01

    The development of large high current density superconducting magnets requires an understanding of the quench process by which the magnet goes normal. A theory which describes the quench process in large superconducting magnets is presented and compared with experimental measurements. The use of a quench theory to improve the design of large high current density superconducting magnets is discussed

  13. Processing and utilization of metallurgical slag

    Directory of Open Access Journals (Sweden)

    Alena Pribulová

    2016-06-01

    Full Text Available The metallurgical and foundry industries create huge amounts of slag as by-products of the production of pig iron, steel and cast iron. Slag is produced in very large amounts in pyrometallurgical processes and is a huge source of waste if not properly recycled and utilized. With the rapid growth of industrialization, the land available for landfilling large quantities of metallurgical slag is being reduced all over the world and disposal costs are becoming increasingly high. Slag from different metallurgical processes is treated and utilized in different ways, based on its characteristics. The most economic and efficient option for reducing metallurgical waste is recycling, which makes a significant contribution to saving natural resources and reducing CO2 emissions. The characteristics of slags, as well as their treatment and utilization, are given in the paper. Slag from pig iron and steel production is used most frequently in the building industry. From experiments using blast furnace slag and granulated blast furnace slag as gravel and water glass as a binder, it can be concluded that the best results – the best values of compression strength and tensile strength – were reached by using 18% water glass as a solidification-activating agent. According to its cube compression strength, a mixture of 50% blast furnace gravel, 50% granulated blast furnace slag and 18% water glass falls into the C35/45 concrete class. Such concrete also fulfils the strength requirements for road concrete; moreover, it even exceeds them considerably and can therefore find application in the construction of roads or in the production of concrete slabs.

  14. Different Amounts of DNA in Newborn Cells of Escherichia coli Preclude a Role for the Chromosome in Size Control According to the "Adder" Model.

    Science.gov (United States)

    Huls, Peter G; Vischer, Norbert O E; Woldringh, Conrad L

    2018-01-01

    According to the recently-revived adder model for cell size control, newborn cells of Escherichia coli will grow and divide after having added a constant size or length, ΔL, irrespective of their size at birth. Assuming exponential elongation, this implies that large newborns will divide earlier than small ones. The molecular basis for the constant size increment is still unknown. As DNA replication and cell growth are coordinated, the constant ΔL could be based on duplication of an equal amount of DNA, ΔG, present in newborn cells. To test this idea, we measured amounts of DNA and lengths of nucleoids in DAPI-stained cells growing in batch culture at slow and fast rates. Deeply-constricted cells were divided into two subpopulations of longer and shorter lengths than average; these were considered to represent large and small prospective daughter cells, respectively. While at slow growth large and small prospective daughter cells contained similar amounts of DNA, fast-growing cells with multiforked replicating chromosomes showed a significantly higher amount of DNA (20%) in the larger cells. This observation precludes the hypothesis that ΔL is based on the synthesis of a constant ΔG. Growth curves were constructed for siblings generated by asymmetric division and growing according to the adder model. Under the assumption that all cells at the same growth rate exhibit the same time between initiation of DNA replication and cell division (i.e., a constant C+D period), the constructions predict that initiation occurs at different sizes (Li) and that, at fast growth, large newborn cells transiently contain more DNA than small newborns, in accordance with the observations. Because the state of segregation, measured as the distance between separated nucleoids, was found to be more advanced in larger deeply-constricted cells, we propose that in larger newborns nucleoid separation occurs faster and at a shorter length, allowing them to divide earlier. We propose
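
    A toy numerical illustration of the adder rule discussed above (parameter values are arbitrary, not fitted to the data): each cell elongates exponentially and divides once it has added a fixed increment ΔL to its birth length, so a larger newborn reaches its increment, and divides, sooner.

        import math

        DELTA_L = 1.0       # constant added length per generation (arbitrary units), assumed
        GROWTH_RATE = 0.02  # exponential elongation rate (1/min), assumed

        def generation(birth_length):
            """Return (division length, interdivision time) under the adder rule."""
            division_length = birth_length + DELTA_L
            # exponential growth L(t) = L0 * exp(k*t)  =>  t = ln(Ld/L0) / k
            t_div = math.log(division_length / birth_length) / GROWTH_RATE
            return division_length, t_div

        for l0 in (0.6, 1.4):   # a small and a large newborn
            ld, t = generation(l0)
            print(f"birth {l0:.1f} -> divides at {ld:.1f} after {t:.0f} min")
        # Successive birth lengths follow L_next = (L + DELTA_L) / 2 and converge toward DELTA_L.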

  15. 26 CFR 1.6655-4 - Large corporations.

    Science.gov (United States)

    2010-04-01

    ... Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Additions to the Tax, Additional Amounts, and Assessable Penalties § 1.6655-4 Large... not to be taken into account and the taxable income of a corporation for any taxable year during the...

  16. Statistical process control charts for attribute data involving very large sample sizes: a review of problems and solutions.

    Science.gov (United States)

    Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard

    2013-04-01

    The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
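
    The adjustment proposed by Laney, referred to above, can be sketched as follows (with invented counts, not the authors' data): conventional p-chart limits are widened by a factor sigma_z that captures between-subgroup variation, estimated from the moving range of the z-scores.

        import math

        counts = [820, 790, 905, 860, 940, 875]             # events per subgroup, invented
        sizes = [10000, 9800, 11000, 10500, 11500, 10200]   # subgroup sizes, invented

        pbar = sum(counts) / sum(sizes)
        p = [c / n for c, n in zip(counts, sizes)]

        # Within-subgroup standard deviations and z-scores (ordinary p-chart ingredients)
        sigma_p = [math.sqrt(pbar * (1 - pbar) / n) for n in sizes]
        z = [(pi - pbar) / si for pi, si in zip(p, sigma_p)]

        # Between-subgroup inflation factor from the average moving range of the z-scores
        mr = [abs(z[i] - z[i - 1]) for i in range(1, len(z))]
        sigma_z = (sum(mr) / len(mr)) / 1.128   # d2 constant for moving ranges of size 2

        for pi, si in zip(p, sigma_p):
            ucl = pbar + 3 * si * sigma_z
            lcl = max(0.0, pbar - 3 * si * sigma_z)
            print(f"p = {pi:.4f}  p'-limits = [{lcl:.4f}, {ucl:.4f}]")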

  17. Consultancy on Large-Scale Submerged Aerobic Cultivation Process Design - Final Technical Report: February 1, 2016 -- June 30, 2016

    Energy Technology Data Exchange (ETDEWEB)

    Crater, Jason [Gemomatica, Inc., San Diego, CA (United States); Galleher, Connor [Gemomatica, Inc., San Diego, CA (United States); Lievense, Jeff [Gemomatica, Inc., San Diego, CA (United States)

    2017-05-12

    NREL is developing an advanced aerobic bubble column model using Aspen Custom Modeler (ACM). The objective of this work is to integrate the new fermentor model with existing techno-economic models in Aspen Plus and Excel to establish a new methodology for guiding process design. To assist this effort, NREL has contracted Genomatica to critique and make recommendations for improving NREL's bioreactor model and large scale aerobic bioreactor design for biologically producing lipids at commercial scale. Genomatica has highlighted a few areas for improving the functionality and effectiveness of the model. Genomatica recommends using a compartment model approach with an integrated black-box kinetic model of the production microbe. We also suggest including calculations for stirred tank reactors to extend the model's functionality and adaptability for future process designs. Genomatica also suggests making several modifications to NREL's large-scale lipid production process design. The recommended process modifications are based on Genomatica's internal techno-economic assessment experience and are focused primarily on minimizing capital and operating costs. These recommendations include selecting/engineering a thermotolerant yeast strain with lipid excretion; using bubble column fermentors; increasing the size of production fermentors; reducing the number of vessels; employing semi-continuous operation; and recycling cell mass.

  18. Nuclear liability amounts on the rise for nuclear installations

    International Nuclear Information System (INIS)

    Vasquez-Maignan, Ximena; Schwartz, Julia; Kuzeyli, Kaan

    2015-01-01

    The NEA Table on Nuclear Operator Liability Amounts and Financial Security Limits (NEA 'Liability Table'), which covers 71 countries, aims to provide one of the most comprehensive listings of nuclear liability amounts and financial security limits. The current and revised Paris and Brussels Supplementary Conventions ('Paris-Brussels regime'), the original and revised Vienna Conventions ('Vienna regime') and the Convention on Supplementary Compensation for Nuclear Damage, which newly entered into force in April 2015, provide for the minimum amounts to be transposed into the national legislation of states parties to the conventions, and have served as guidelines for non-convention states. This article examines in more detail the increases in the liability amounts provided for under these conventions, as well as examples from non-convention states (China, India and Korea).

  19. Extending Practical Pre-Aggregation in On-Line Analytical Processing

    DEFF Research Database (Denmark)

    Pedersen, Torben Bach; Jensen, Christian Søndergaard; Dyreson, Curtis E.

    On-Line Analytical Processing (OLAP) based on a dimensional view of data is being used increasingly in traditional business applications as well as in applications such as health care for the purpose of analyzing very large amounts of data. Pre-aggregation, the prior materialization of aggregate...... select combinations of aggregates and then re-use these for efficiently computing other aggregates. However, this re-use of aggregates is contingent on the dimension hierarchies and the relationships between facts and dimensions satisfying stringent constraints. This severely limits the scope...
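
    A toy illustration of the aggregate re-use idea (not the algorithms of the paper): a materialized day-level total can answer month-level queries without touching the base facts, but only because every day maps to exactly one month, which is the kind of hierarchy constraint the abstract refers to.

        from collections import defaultdict

        # Materialized (pre-aggregated) measure at the day level; values are invented.
        sales_by_day = {"2024-01-03": 120.0, "2024-01-17": 80.0, "2024-02-02": 200.0}

        def roll_up_to_month(day_totals):
            """Re-use the day-level aggregate to compute month-level totals."""
            month_totals = defaultdict(float)
            for day, total in day_totals.items():
                month_totals[day[:7]] += total   # day -> month is strictly many-to-one
            return dict(month_totals)

        print(roll_up_to_month(sales_by_day))    # {'2024-01': 200.0, '2024-02': 200.0}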

  20. Gene Expression Browser: Large-Scale and Cross-Experiment Microarray Data Management, Search & Visualization

    Science.gov (United States)

    The amount of microarray gene expression data in public repositories has been increasing exponentially for the last couple of decades. High-throughput microarray data integration and analysis has become a critical step in exploring the large amount of expression data for biological discovery. Howeve...

  1. Amount, composition and seasonality of dissolved organic carbon and nitrogen export from agriculture in contrasting climates

    DEFF Research Database (Denmark)

    Graeber, Daniel; Meerhof, Mariana; Zwirnmann, Elke

    2014-01-01

    Agricultural catchments are potentially important but often neglected sources of dissolved organic matter (DOM), of which a large part is dissolved organic carbon (DOC) and nitrogen (DON). DOC is an important source of aquatic microbial respiration and DON may be an important source of nitrogen...... to aquatic ecosystems. However, there is still a lack of comprehensive studies on the amount, composition and seasonality of DOM export from agricultural catchments in different climates. The aim of our study was to assess the amount, composition and seasonality of DOM in a total of four streams in the wet......-temperate and subtropical climate of Denmark and Uruguay, respectively. In each climate, we investigated one stream with extensive agriculture (mostly pasture) and one stream with intensive agriculture (mostly intensively used arable land) in the catchment. We sampled each stream taking grab samples fortnightly for two...

  2. Amount of trace elements in marine cephalopods

    International Nuclear Information System (INIS)

    Ueda, Taiji; Nakahara, Motokazu; Ishii, Toshiaki; Suzuki, Yuzuru; Suzuki, Hamaji.

    1979-01-01

    For the estimation of internal radiation doses to human beings, the amounts of Mn, Fe, Cu, Zn, Co and Cs in 5 species of marine cephalopods were determined by atomic absorption spectrophotometry and neutron activation analysis, and the concentration factors were then calculated. The average amount and the concentration factor of each element in the edible parts (mantle, arms and tentacles) of the cephalopods are as follows: 0.14 mg and 2 × 10² for Mn, 1.8 mg and 2 × 10² for Fe, 2.0 mg and 7 × 10² for Cu, 12.2 mg and 1 × 10³ for Zn, 5.3 μg and 6 × 10¹ for Co, and 3.4 μg and 7 for Cs. The amounts of Fe, Co, Cu and Zn in the liver and the branchial heart were much higher than those in the edible parts, although those of Cs and Mn were almost the same. The Co content in the visceral organs of O. vulgaris showed an extremely high value, particularly in the branchial heart. (author)

  3. Pressurized Recuperator For Heat Recovery In Industrial High Temperature Processes

    Directory of Open Access Journals (Sweden)

    Gil S.

    2015-09-01

    Full Text Available Recuperators and regenerators are important devices for heat recovery systems in the technological lines of industrial processes and should provide a high air-preheating temperature, low flow resistance and a long service life. The use of heat recovery systems is particularly important in high-temperature industrial processes (especially in metallurgy), where large amounts of thermal energy are lost to the environment. The article presents the process design for a high-efficiency recuperator intended to work at high operating parameters: air pressure up to 1.2 MPa and a heating temperature of up to 900°C. The results of the thermal and gas-dynamic calculations were based on an algorithm developed for determining the recuperation process parameters. The proposed technical solution for the recuperator and the determined recuperation parameters ensure its operation under maximum temperature conditions.

  4. Process technology for vitrification of defense high-level waste at the Savannah River Plant

    International Nuclear Information System (INIS)

    Boersma, M.D.

    1984-01-01

    Vitrification in borosilicate glass is now the leading worldwide process for immobilizing high-level radioactive waste. Each vitrification project, however, has its unique mission and technical challenges. The Defense Waste Processing Facility (DWPF) now under construction at the Savannah River Plant will concentrate and vitrify a large amount of relatively low-power alkaline waste. Process research and development for the DWPF have produced significant advances in remote chemical operations, glass melting, off-gas treatment, slurry handling, decontamination, and welding. 6 references, 1 figure, 5 tables

  5. Individual differences influence two-digit number processing, but not their analog magnitude processing: a large-scale online study.

    Science.gov (United States)

    Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba

    2017-12-23

    Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to the individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study of 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.

  6. 45 CFR 160.404 - Amount of a civil money penalty.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Amount of a civil money penalty. 160.404 Section... RELATED REQUIREMENTS GENERAL ADMINISTRATIVE REQUIREMENTS Imposition of Civil Money Penalties § 160.404 Amount of a civil money penalty. (a) The amount of a civil money penalty will be determined in accordance...

  7. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    Science.gov (United States)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest-neighbor searching among a large database of observations can lead to reliable prediction results. However, in the real-time application of Earthquake Early Warning (EEW) systems, accurate prediction using a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and thereby reduce the processing time of the nearest-neighbor searches used for prediction. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Applying the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward and the results will reduce the overall time of warning delivery for EEW.
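
    The speed-up mechanism can be sketched with an off-the-shelf KD tree (SciPy's cKDTree here, rather than the authors' own implementation); the feature vectors and ground-motion values below are random stand-ins for the filter-bank characteristics stored in the real database.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)
        features = rng.normal(size=(100_000, 9))       # stored feature sets (dimensionality assumed)
        pga = rng.lognormal(mean=-2.0, size=100_000)   # peak ground acceleration per record, synthetic

        tree = cKDTree(features)                       # built once, offline

        incoming = rng.normal(size=(1, 9))             # features of a new real-time waveform
        dist, idx = tree.query(incoming, k=30)         # 30 nearest neighbours, no exhaustive scan

        predicted_pga = pga[idx].mean()                # simple neighbour average as the prediction
        print(f"predicted PGA ≈ {predicted_pga:.3f} g")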

  8. Process evaluation of treatment times in a large radiotherapy department

    International Nuclear Information System (INIS)

    Beech, R.; Burgess, K.; Stratford, J.

    2016-01-01

    Purpose/objective: The Department of Health (DH) recognises access to appropriate and timely radiotherapy (RT) services as crucial in improving cancer patient outcomes, especially when facing a predicted increase in cancer diagnosis. There is a lack of ‘real-time’ data regarding daily demand of a linear accelerator, the impact of increasingly complex techniques on treatment times, and whether current scheduling reflects time needed for RT delivery, which would be valuable in highlighting current RT provision. Material/methods: A systematic quantitative process evaluation was undertaken in a large regional cancer centre, including a satellite centre, between January and April 2014. Data collected included treatment room-occupancy time, RT site, RT and verification technique and patient mobility status. Data was analysed descriptively; average room-occupancy times were calculated for RT techniques and compared to historical standardised treatment times within the department. Results: Room-occupancy was recorded for over 1300 fractions, over 50% of which overran their allotted treatment time. In a focused sample of 16 common techniques, 10 overran their allocated timeslots. Verification increased room-occupancy by six minutes (50%) over non-imaging. Treatments for patients requiring mobility assistance took four minutes (29%) longer. Conclusion: The majority of treatments overran their standardised timeslots. Although technique advancement has reduced RT delivery time, room-occupancy has not necessarily decreased. Verification increases room-occupancy and needs to be considered when moving towards adaptive techniques. Mobility affects room-occupancy and will become increasingly significant in an ageing population. This evaluation assesses validity of current treatment times in this department, and can be modified and repeated as necessary. - Highlights: • A process evaluation examined room-occupancy for various radiotherapy techniques. • Appointment lengths

  9. Modeling and simulation of large HVDC systems

    Energy Technology Data Exchange (ETDEWEB)

    Jin, H.; Sood, V.K.

    1993-01-01

    This paper addresses the complexity and the amount of work involved in preparing simulation data and in implementing various converter control schemes, as well as the excessive simulation time involved in the modelling and simulation of large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems, and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation time and results are provided in the paper.

  10. The effects of large scale processing on caesium leaching from cemented simulant sodium nitrate waste

    International Nuclear Information System (INIS)

    Lee, D.J.; Brown, D.J.

    1982-01-01

    The effects of large scale processing on the properties of cemented simulant sodium nitrate waste have been investigated. Leach tests have been performed on full-size drums, cores and laboratory samples of cement formulations containing Ordinary Portland Cement (OPC), Sulphate Resisting Portland Cement (SRPC) and a blended cement (90% ground granulated blast furnace slag/10% OPC). In addition, development of the cement hydration exotherms with time and the temperature distribution in 220 dm³ samples have been followed. (author)

  11. Process Simulation and Characterization of Substrate Engineered Silicon Thin Film Transistor for Display Sensors and Large Area Electronics

    International Nuclear Information System (INIS)

    Hashmi, S M; Ahmed, S

    2013-01-01

    Design, simulation, fabrication and post-process qualification of substrate-engineered Thin Film Transistors (TFTs) are carried out to suggest an alternative manufacturing process step focused on display sensors and large area electronics applications. Damage created by ion implantation of helium and silicon ions into a single-crystalline n-type silicon substrate provides an alternative route to creating an amorphized region responsible for the fabrication of TFT structures with controllable and application-specific output parameters. The post-process qualification of the starting material and full-cycle devices using Rutherford Backscattering Spectrometry (RBS) and Proton or Particle Induced X-ray Emission (PIXE) techniques also provides insight for optimizing the process protocols as well as their applicability in the manufacturing cycle

  12. 26 CFR 25.2701-3 - Determination of amount of gift.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 14 2010-04-01 2010-04-01 false Determination of amount of gift. 25.2701-3... AND GIFT TAXES GIFT TAX; GIFTS MADE AFTER DECEMBER 31, 1954 Special Valuation Rules § 25.2701-3 Determination of amount of gift. (a) Overview—(1) In general. The amount of the gift resulting from any transfer...

  13. Novel Process Windows for the safe and continuous synthesis of tert.-butyl peroxypivalate with micro process technology

    NARCIS (Netherlands)

    Illg, T.

    2013-01-01

    Based on the economy of scale, the classical chemical industry uses large scale reactors to increase production output and to decrease the average unit costs. This results in large footprint plants consuming land and a huge amount of resources. This large scale character bears certain risks for the

  14. Automated sampling and data processing derived from biomimetic membranes

    DEFF Research Database (Denmark)

    Perry, Mark; Vissing, Thomas; Boesen, P.

    2009-01-01

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition...... applicable to long-time series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet (TM)) for efficient data management......

  15. Advantages of thermal processes to reduce the amounts of sludge; Interet des procedes thermiques dans la problematique de la reduction et/ou de l'elimination des boues?

    Energy Technology Data Exchange (ETDEWEB)

    Cretenot, D. [Vivendi Water System, 94 - Saint-Maurice (France); Chauzy, J.; Fernandes, P.; Patria, L. [Anjou Recherche, 78 - Maisons-Laffite (France)

    2003-02-01

    All the actors in the water field have to face the question of the fate of the sludge generated by wastewater treatment. The challenge they have to take up concerns not only the quality of the final effluent but also the performance of by-product treatment. Reduction, stabilization and pasteurization are the keywords in the present trend of sludge treatment. There are many technologies that reduce the final volume of sludge by lowering their dry solids concentration, but only treatment lines with thermal processes can both reduce the amount of sludge generated and produce sludge that meets increasingly stringent constraints on sanitary quality.

  16. Measurement of Electroweak Gauge Boson Scattering in the Channel $pp \\rightarrow W^{\\pm}W^{\\pm}jj$ with the ATLAS Detector at the Large Hadron Collider

    CERN Document Server

    AUTHOR|(CDS)2080413; Kobel, Michael; Heinemann, Beate; Klein, Uta

    Particle physics deals with the elementary constituents of our universe and their interactions. The electroweak symmetry breaking mechanism in the Standard Model of Particle Physics is of paramount importance and it plays a central role in the physics programmes of current high-energy physics experiments at the Large Hadron Collider. The study of scattering processes of massive electroweak gauge bosons provides an approach complementary to the precise measurement of the properties of the recently discovered Higgs boson. Owing to the unprecedented energies achieved in proton-proton collisions at the Large Hadron Collider and the large amount of data collected, experimental studies of these processes become feasible for the first time. Especially the scattering of two $W^{\\pm}$ bosons of identical electric charge is considered a promising process for an initial study due to its distinct experimental signature. In the course of this work, $20.3 \\, \\mathrm{fb}^{−1}$ of proton-proton collision data recorded by t...

  17. The purchase decision process and involvement of the elderly regarding nonprescription products.

    Science.gov (United States)

    Reisenwitz, T H; Wimbish, G J

    1997-01-01

    The elderly or senior citizen is a large and growing market segment that purchases a disproportionate amount of health care products, particularly nonprescription products. This study attempts to examine the elderly's level of involvement (high versus low) and their purchase decision process regarding nonprescription or over-the-counter (OTC) products. Frequencies and percentages are calculated to indicate level of involvement as well as purchase decision behavior. Previous research is critiqued and managerial implications are discussed.

  18. Laser velocimeter data acquisition, processing, and control system

    International Nuclear Information System (INIS)

    Croll, R.H. Jr.; Peterson, C.W.

    1975-01-01

    The use of a mini-computer for data acquisition, processing, and control of a two-velocity-component dual beam laser velocimeter in a low-speed wind tunnel is described in detail. Digital stepping motors were programmed to map the mean-flow and turbulent fluctuating velocities in the test section boundary layer and free stream. The mini-computer interface controlled the operation of the LV processor and the high-speed selection of the photomultiplier tube whose output was to be processed. A statistical analysis of the large amount of data from the LV processor was performed by the computer while the experiment was in progress. The resulting velocities are in good agreement with hot-wire survey data obtained in the same facility
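
    The kind of in-progress statistical reduction described above can be sketched with a running-moments accumulator (Welford's algorithm); the abstract does not say which algorithm the mini-computer actually used, and the velocity samples below are invented.

        class RunningVelocityStats:
            """Accumulate mean velocity and RMS fluctuation without storing all samples."""

            def __init__(self):
                self.n = 0
                self.mean = 0.0
                self.m2 = 0.0   # sum of squared deviations from the running mean

            def add(self, u):
                self.n += 1
                delta = u - self.mean
                self.mean += delta / self.n
                self.m2 += delta * (u - self.mean)

            @property
            def rms_fluctuation(self):
                return (self.m2 / self.n) ** 0.5 if self.n else 0.0

        stats = RunningVelocityStats()
        for u in (12.1, 11.8, 12.4, 12.0, 11.9):   # individual LV velocity samples (m/s), invented
            stats.add(u)
        print(f"mean = {stats.mean:.2f} m/s, u' = {stats.rms_fluctuation:.3f} m/s")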

  19. Distributed computing strategies for processing of FT-ICR MS imaging datasets for continuous mode data visualization

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Donald F.; Schulz, Carl; Konijnenburg, Marco; Kilic, Mehmet; Heeren, Ronald M.

    2015-03-01

    High-resolution Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry imaging enables the spatial mapping and identification of biomolecules from complex surfaces. The need for long time-domain transients, and thus large raw file sizes, results in a large amount of raw data (“big data”) that must be processed efficiently and rapidly. This can be compounded by large-area imaging and/or high spatial resolution imaging. For FT-ICR, data processing and data reduction must not compromise the high mass resolution afforded by the mass spectrometer. The continuous mode “Mosaic Datacube” approach allows high mass resolution visualization (0.001 Da) of mass spectrometry imaging data, but requires additional processing as compared to feature-based processing. We describe the use of distributed computing for processing of FT-ICR MS imaging datasets with generation of continuous mode Mosaic Datacubes for high mass resolution visualization. An eight-fold improvement in processing time is demonstrated using a Dutch nationally available cloud service.

  20. Large trees drive forest aboveground biomass variation in moist lowland forests across the tropics

    NARCIS (Netherlands)

    Slik, J.W.F.; Paoli, G.; McGuire, K.; Amaral, I.; Barroso, J.; Bongers, F.; Poorter, L.

    2013-01-01

    Aim - Large trees (d.b.h. ≥ 70 cm) store large amounts of biomass. Several studies suggest that large trees may be vulnerable to changing climate, potentially leading to declining forest biomass storage. Here we determine the importance of large trees for tropical forest biomass storage and explore

  1. Survey of high-voltage pulse technology suitable for large-scale plasma source ion implantation processes

    International Nuclear Information System (INIS)

    Reass, W.A.

    1994-01-01

    Many new plasma process ideas are finding their way from the research lab to the manufacturing plant floor. These require high voltage (HV) pulse power equipment, which must be optimized for application, system efficiency, and reliability. Although no single HV pulse technology is suitable for all plasma processes, various classes of high voltage pulsers may offer a greater versatility and economy to the manufacturer. Technology developed for existing radar and particle accelerator modulator power systems can be utilized to develop a modern large scale plasma source ion implantation (PSII) system. The HV pulse networks can be broadly defined by two classes of systems: those that generate the voltage directly, and those that use some type of pulse forming network and step-up transformer. This article will examine these HV pulse technologies and discuss their applicability to the specific PSII process. Typical systems that will be reviewed will include high power solid state, hard tube systems such as crossed-field "hollow beam" switch tubes and planar tetrodes, and "soft" tube systems with crossatrons and thyratrons. Results will be tabulated and suggestions provided for a particular PSII process.

  2. A progress report for the large block test of the coupled thermal-mechanical-hydrological-chemical processes

    International Nuclear Information System (INIS)

    Lin, W.; Wilder, D.G.; Blink, J.

    1994-10-01

    This is a progress report on the Large Block Test (LBT) project. The purpose of the LBT is to study some of the coupled thermal-mechanical-hydrological-chemical (TMHC) processes in the near field of a nuclear waste repository under controlled boundary conditions. To do so, a large block of Topopah Spring tuff will be heated from within for about 4 to 6 months, then cooled down for about the same duration. Instruments to measure temperature, moisture content, stress, displacement, and chemical changes will be installed in three directions in the block. Meanwhile, laboratory tests will be conducted on small blocks to investigate individual thermal-mechanical, thermal-hydrological, and thermal-chemical processes. The fractures in the large block will be characterized from five exposed surfaces. The minerals on fracture surfaces will be studied before and after the test. The results from the LBT will be useful for testing and building confidence in models that will be used to predict TMHC processes in a repository. The boundary conditions to be controlled on the block include zero moisture flux and zero heat flux on the sides, constant temperature on the top, and constant stress on the outside surfaces of the block. To control these boundary conditions, a load-retaining frame is required. A 3 × 3 × 4.5 m block of Topopah Spring tuff has been isolated on the outcrop at Fran Ridge, Nevada Test Site. Pre-test model calculations indicate that a permeability of at least 10⁻¹⁵ m² is required so that a dryout zone can be created within a practical time frame when the block is heated from within. Neutron logging was conducted in some of the vertical holes to estimate the initial moisture content of the block. It was found that about 60 to 80% of the pore volume of the block is saturated with water. Cores from the vertical holes have been used to map the fractures and to determine the properties of the rock. A current schedule is included in the report.

  3. Massive Cloud Computing Processing of P-SBAS Time Series for Displacement Analyses at Large Spatial Scale

    Science.gov (United States)

    Casu, F.; de Luca, C.; Lanari, R.; Manunta, M.; Zinno, I.

    2016-12-01

    A methodology for computing surface deformation time series and mean velocity maps of large areas is presented. Our approach relies on the availability of a multi-temporal set of Synthetic Aperture Radar (SAR) data collected from ascending and descending orbits over an area of interest, and also permits estimation of the vertical and horizontal (East-West) displacement components of the Earth's surface. The adopted methodology is based on an advanced Cloud Computing implementation of the Differential SAR Interferometry (DInSAR) Parallel Small Baseline Subset (P-SBAS) processing chain which allows the unsupervised processing of large SAR data volumes, from the raw data (level-0) imagery up to the generation of DInSAR time series and maps. The presented solution, which is highly scalable, has been tested on the ascending and descending ENVISAT SAR archives, which have been acquired over a large area of Southern California (US) that extends for about 90,000 km². Such an input dataset has been processed in parallel by exploiting 280 computing nodes of the Amazon Web Services Cloud environment. Moreover, to produce the final mean deformation velocity maps of the vertical and East-West displacement components of the whole investigated area, we also took advantage of the information available from external GPS measurements, which make it possible to account for possible regional trends not easily detectable by DInSAR and to refer the P-SBAS measurements to an external geodetic datum. The presented results clearly demonstrate the effectiveness of the proposed approach, which paves the way to the extensive use of the available ERS and ENVISAT SAR data archives. Furthermore, the proposed methodology is particularly suitable for dealing with the very large data flow provided by the Sentinel-1 constellation, thus permitting extension of the DInSAR analyses to a nearly global scale. This work is partially supported by: the DPC-CNR agreement, the EPOS-IP project and the ESA GEP project.

  4. Statistical processing of large image sequences.

    Science.gov (United States)

    Khellah, F; Fieguth, P; Murray, M J; Allen, M

    2005-01-01

    The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.
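
    The impracticality of a conventional Kalman filter at this image size can be checked with a quick storage estimate (an illustrative calculation, not taken from the paper): the state has one element per pixel, so the dense error covariance alone would require roughly half a terabyte.

```python
# Back-of-envelope check (illustrative, not from the paper): memory for a dense
# error-covariance matrix in a conventional Kalman filter on a 512 x 512 image.
n_state = 512 * 512                  # one state element per pixel
cov_entries = n_state ** 2           # dense covariance is n x n
cov_bytes = cov_entries * 8          # float64
print(f"state dimension:    {n_state:,}")
print(f"covariance entries: {cov_entries:,}")
print(f"covariance storage: {cov_bytes / 1e12:.2f} TB")   # ~0.55 TB
```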

  5. Processing and Analysis of Multichannel Extracellular Neuronal Signals: State-of-the-art and Challenges

    Directory of Open Access Journals (Sweden)

    Mufti eMahmud

    2016-06-01

    Full Text Available In recent years multichannel neuronal signal acquisition systems have allowed scientists to focus on research questions which were otherwise impossible. They act as a powerful means to study brain (dys)functions in in-vivo and in-vitro animal models. Typically, each session of electrophysiological experiments with multichannel data acquisition systems generates a large amount of raw data. For example, a 128 channel signal acquisition system with 16 bits A/D conversion and 20 kHz sampling rate will generate approximately 17 GB of data per hour (uncompressed). This poses an important and challenging problem of inferring conclusions from the large amounts of acquired data. Thus, automated signal processing and analysis tools are becoming a key component in neuroscience research, facilitating extraction of relevant information from neuronal recordings in a reasonable time. The purpose of this review is to introduce the reader to the current state-of-the-art of open-source packages for (semi)automated processing and analysis of multichannel extracellular neuronal signals (i.e., neuronal spikes, local field potentials, electroencephalograms, etc.), and the existing Neuroinformatics infrastructure for tool and data sharing. The review is concluded by pinpointing some major challenges that are to be faced, which include the development of novel benchmarking techniques, cloud-based distributed processing and analysis tools, as well as defining novel means to share and standardize data.
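
    The quoted raw-data rate can be reproduced with a one-line estimate (a sanity check, not part of the review): 128 channels at 2 bytes per sample and 20,000 samples per second for one hour give about 18.4 GB, or roughly 17 GiB when counted in binary gigabytes.

```python
# Sanity check of the quoted raw-data rate (not from the review itself).
channels = 128
bytes_per_sample = 2          # 16-bit A/D conversion
sampling_rate_hz = 20_000
seconds_per_hour = 3_600

bytes_per_hour = channels * bytes_per_sample * sampling_rate_hz * seconds_per_hour
print(f"{bytes_per_hour / 1e9:.1f} GB per hour (decimal)")    # ~18.4 GB
print(f"{bytes_per_hour / 2**30:.1f} GiB per hour (binary)")  # ~17.2 GiB
```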

  6. Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing.

    Science.gov (United States)

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance; Messina, Thomas; Fan, Hongtao; Jaeger, Edward; Stephens, Susan

    2013-06-27

    Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log the running metrics in data processing and monitoring multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. Rainbow is available
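
    One of the listed improvements is splitting large sequence files for better downstream load balance. A minimal sketch of that idea (hypothetical code, not Rainbow's implementation) is to chunk a FASTQ file into batches of a fixed number of reads, relying on the four-lines-per-read FASTQ layout; the file name and chunk size are made up for the example.

```python
# Hypothetical sketch (not Rainbow's code): split a FASTQ file into chunks of N reads
# so that downstream alignment jobs receive workloads of similar size.
from itertools import islice

def split_fastq(path, reads_per_chunk=1_000_000, prefix="chunk"):
    """Write chunk_0000.fastq, chunk_0001.fastq, ... and return the number of chunks."""
    with open(path) as fh:
        chunk_idx = 0
        while True:
            lines = list(islice(fh, reads_per_chunk * 4))  # FASTQ: 4 lines per read
            if not lines:
                break
            with open(f"{prefix}_{chunk_idx:04d}.fastq", "w") as out:
                out.writelines(lines)
            chunk_idx += 1
    return chunk_idx

# Example usage (assumed input file name):
# n_chunks = split_fastq("sample_R1.fastq", reads_per_chunk=500_000)
```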

  7. Automation of Survey Data Processing, Documentation and Dissemination: An Application to Large-Scale Self-Reported Educational Survey.

    Science.gov (United States)

    Shim, Eunjae; Shim, Minsuk K.; Felner, Robert D.

    Automation of the survey process has proved successful in many industries, yet it is still underused in educational research. This is largely due to the facts (1) that number crunching is usually carried out using software that was developed before information technology existed, and (2) that the educational research is to a great extent trapped…

  8. 33 CFR 135.203 - Amount required.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Amount required. 135.203 Section 135.203 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OFFSHORE OIL POLLUTION COMPENSATION FUND...

  9. Developing a semi/automated protocol to post-process large volume, High-resolution airborne thermal infrared (TIR) imagery for urban waste heat mapping

    Science.gov (United States)

    Rahman, Mir Mustafizur

    In collaboration with The City of Calgary 2011 Sustainability Direction and as part of the HEAT (Heat Energy Assessment Technologies) project, the focus of this research is to develop a semi/automated 'protocol' to post-process large volumes of high-resolution (H-res) airborne thermal infrared (TIR) imagery to enable accurate urban waste heat mapping. HEAT is a free GeoWeb service, designed to help Calgary residents improve their home energy efficiency by visualizing the amount and location of waste heat leaving their homes and communities, as easily as clicking on their house in Google Maps. HEAT metrics are derived from 43 flight lines of TABI-1800 (Thermal Airborne Broadband Imager) data acquired on May 13–14, 2012 at night (11:00 pm–5:00 am) over The City of Calgary, Alberta (~825 km²) at a 50 cm spatial resolution and 0.05°C thermal resolution. At present, the only way to generate a large-area, high-spatial-resolution TIR scene is to acquire separate airborne flight lines and mosaic them together. However, the ambient sensed temperature within and between flight lines naturally changes during acquisition (due to varying atmospheric and local micro-climate conditions), resulting in mosaicked images with different temperatures for the same scene components (e.g. roads, buildings), and mosaic join-lines that arbitrarily bisect many thousands of homes. In combination, these effects result in reduced utility and classification accuracy, including poorly defined HEAT metrics, inaccurate hotspot detection and raw imagery that is difficult to interpret. In an effort to minimize these effects, three new semi/automated post-processing algorithms (the protocol) are described, which are then used to generate a 43 flight line mosaic of TABI-1800 data from which accurate Calgary waste heat maps and HEAT metrics can be generated. These algorithms (presented as four peer-reviewed papers) are: (a) Thermal Urban Road Normalization (TURN), used to mitigate the microclimatic

  10. Distributed Processing of SETI Data

    Science.gov (United States)

    Korpela, Eric

    As you have read in prior chapters, researchers have been performing progressively more sensitive SETI searches since 1960. Each search has been limited by the technologies available at the time. As radio frequency technologies have become more efficient and computers have become faster, the searches have increased in capacity and become more sensitive. Often the hardware that performs the calculations required to process the telescope data and expose any embedded signals is what limits the sensitivity of the search. Shortly before the start of the 21st century, projects began to appear that exploited the processing capabilities of computers connected to the Internet in order to solve problems that required a large amount of computing power. The SETI@home project, managed by myself and a group of researchers at the Space Sciences Laboratory of the University of California, Berkeley, was the first attempt to use large-scale distributed computing to solve the problems of performing a sensitive search for narrow band radio signals from extraterrestrial civilizations. (Korpela et al., 2001) A follow-on project, Astropulse, searches for extraterrestrial signals with wider bandwidths and shorter time durations. Both projects are ongoing at the present time (mid-2010).

  11. Kinetic determination of ultramicro amounts of As(III) in solution

    Directory of Open Access Journals (Sweden)

    RANGEL P. IGOV

    2003-02-01

    Full Text Available A new catalytic reaction is proposed and a kinetic method developed for the determination of ultramicro amounts of As(III) on the basis of its catalytic activity in the oxidation of ethylenediamine-N,N’-diacetic-N,N’-dipropionic acid (EAP) by KMnO4 in the presence of hydrochloric acid. Under optimal conditions, the sensitivity of the method is 20 ng/cm³. The probable relative error is 7.6–14.5% for the concentration range 50–200 ng/cm³ As(III). The effects of certain foreign ions upon the reaction rate were determined for the assessment of the selectivity of the method. The method has relatively good selectivity. Kinetic equations were proposed for the investigated process.

  12. A new method for wafer quality monitoring using semiconductor process big data

    Science.gov (United States)

    Sohn, Younghoon; Lee, Hyun; Yang, Yusin; Jun, Chungsam

    2017-03-01

    In this paper we propose a new semiconductor quality monitoring methodology - Process Sensor Log Analysis (PSLA) - using process sensor data for the detection of wafer defectivity and quality monitoring. We developed an exclusive key parameter selection algorithm and a user-friendly system able to handle large amounts of data very effectively. Several production wafers were selected and analyzed based on the risk analysis of process-driven defects, for example the alignment quality of process layers. The thickness of spin-coated material can be measured using PSLA without a conventional metrology process. In addition, chip yield impact was verified by matching key parameter changes with electrical die sort (EDS) fail maps at the end of the production step. From this work, we were able to determine that process robustness and product yields could be improved by monitoring the key factors in the process big data.
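
    A hedged sketch of the underlying idea (not the proprietary PSLA algorithm): summarize each sensor channel per wafer and rank channels by how strongly that summary correlates with a defect or yield metric. All names and the synthetic data below are illustrative assumptions.

```python
# Hedged sketch (not the proprietary PSLA algorithm): rank process-sensor channels
# by the correlation between their per-wafer mean and a wafer defect metric.
import numpy as np

def rank_sensor_parameters(sensor_traces, defect_counts):
    """sensor_traces: {name: array of shape (n_wafers, n_samples)}; returns names by |corr|."""
    scores = {}
    for name, traces in sensor_traces.items():
        per_wafer_mean = traces.mean(axis=1)                     # per-wafer summary statistic
        scores[name] = abs(np.corrcoef(per_wafer_mean, defect_counts)[0, 1])
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage with synthetic data (illustrative only).
rng = np.random.default_rng(1)
defects = rng.poisson(5, size=50).astype(float)
traces = {
    "chamber_pressure": rng.normal(size=(50, 200)) + defects[:, None] * 0.05,  # weakly linked
    "spin_speed": rng.normal(size=(50, 200)),                                  # unrelated
}
print(rank_sensor_parameters(traces, defects))
```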

  13. Signal and image processing algorithm performance in a virtual and elastic computing environment

    Science.gov (United States)

    Bennett, Kelly W.; Robertson, James

    2013-05-01

    The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting large amount of data, and its associated high-performance computing needs, strains existing computing infrastructures. Purchasing computer power as a commodity using a Cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions for developing and optimizing algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results will provide performance comparisons with existing infrastructure. A discussion of using cloud computing with government data covers best security practices that exist within cloud services, such as AWS.

  14. 41 CFR 301-71.307 - How do we collect the amount of a travel advance in excess of the amount of travel expenses...

    Science.gov (United States)

    2010-07-01

    ... amount of a travel advance in excess of the amount of travel expenses substantiated by the employee? 301-71.307 Section 301-71.307 Public Contracts and Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES AGENCY RESPONSIBILITIES 71-AGENCY TRAVEL ACCOUNTABILITY REQUIREMENTS...

  15. Large-scale deposition of weathered oil in the Gulf of Mexico following a deep-water oil spill.

    Science.gov (United States)

    Romero, Isabel C; Toro-Farmer, Gerardo; Diercks, Arne-R; Schwing, Patrick; Muller-Karger, Frank; Murawski, Steven; Hollander, David J

    2017-09-01

    The blowout of the Deepwater Horizon (DWH) drilling rig in 2010 released an unprecedented amount of oil at depth (1,500 m) into the Gulf of Mexico (GoM). Sedimentary geochemical data from an extensive area (∼194,000 km²) was used to characterize the amount, chemical signature, distribution, and extent of the DWH oil deposited on the seafloor in 2010-2011 from coastal to deep-sea areas in the GoM. The analysis of numerous hydrocarbon compounds (N = 158) and sediment cores (N = 2,613) suggests that 1.9 ± 0.9 × 10⁴ metric tons of hydrocarbons (>C9 saturated and aromatic fractions) were deposited in 56% of the studied area, containing 21 ± 10% (up to 47%) of the total amount of oil discharged and not recovered from the DWH spill. Examination of the spatial trends and chemical diagnostic ratios indicates large deposition of weathered DWH oil in coastal and deep-sea areas and negligible deposition on the continental shelf (behaving as a transition zone in the northern GoM). The large-scale analysis of deposited hydrocarbons following the DWH spill helps in understanding the possible long-term fate of the released oil in 2010, including sedimentary transformation processes, redistribution of deposited hydrocarbons, and persistence in the environment as recycled petrocarbon. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Large-scale seismic signal analysis with Hadoop

    Science.gov (United States)

    Addair, T. G.; Dodge, D. A.; Walter, W. R.; Ruppert, S. D.

    2014-05-01

    In seismology, waveform cross correlation has been used for years to produce high-precision hypocenter locations and for sensitive detectors. Because correlated seismograms generally are found only at small hypocenter separation distances, correlation detectors have historically been reserved for spotlight purposes. However, many regions have been found to produce large numbers of correlated seismograms, and there is growing interest in building next-generation pipelines that employ correlation as a core part of their operation. In an effort to better understand the distribution and behavior of correlated seismic events, we have cross correlated a global dataset consisting of over 300 million seismograms. This was done using a conventional distributed cluster, and required 42 days. In anticipation of processing much larger datasets, we have re-architected the system to run as a series of MapReduce jobs on a Hadoop cluster. In doing so we achieved a factor of 19 performance increase on a test dataset. We found that fundamental algorithmic transformations were required to achieve the maximum performance increase. Whereas in the original IO-bound implementation, we went to great lengths to minimize IO, in the Hadoop implementation where IO is cheap, we were able to greatly increase the parallelism of our algorithms by performing a tiered series of very fine-grained (highly parallelizable) transformations on the data. Each of these MapReduce jobs required reading and writing large amounts of data. But, because IO is very fast, and because the fine-grained computations could be handled extremely quickly by the mappers, the net was a large performance gain.
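
    A toy map/reduce-style sketch of the core operation (illustrative only, not the authors' Hadoop code): group waveforms by a coarse spatial bin in the "map" step, then cross-correlate every pair within each bin in the "reduce" step. The bin size and correlation threshold are assumptions.

```python
# Illustrative map/reduce-style sketch (not the authors' Hadoop implementation).
from collections import defaultdict
from itertools import combinations
import numpy as np

BIN_DEG = 0.5      # assumed spatial bin size in degrees
THRESHOLD = 0.8    # assumed normalized-correlation threshold

def map_phase(events):
    """events: list of (event_id, lat, lon, waveform) -> {bin_key: [(id, waveform), ...]}."""
    binned = defaultdict(list)
    for event_id, lat, lon, waveform in events:
        binned[(round(lat / BIN_DEG), round(lon / BIN_DEG))].append((event_id, waveform))
    return binned

def reduce_phase(binned):
    """Cross-correlate every pair of waveforms that share a spatial bin."""
    hits = []
    for records in binned.values():
        for (id_a, wf_a), (id_b, wf_b) in combinations(records, 2):
            a = (wf_a - wf_a.mean()) / (wf_a.std() + 1e-12)
            b = (wf_b - wf_b.mean()) / (wf_b.std() + 1e-12)
            corr = np.max(np.correlate(a, b, mode="full")) / len(a)
            if corr >= THRESHOLD:
                hits.append((id_a, id_b, round(float(corr), 3)))
    return hits

rng = np.random.default_rng(2)
template = rng.normal(size=256)
events = [("ev1", 35.1, -117.6, template + 0.1 * rng.normal(size=256)),
          ("ev2", 35.2, -117.7, template + 0.1 * rng.normal(size=256)),
          ("ev3", 40.0, -120.0, rng.normal(size=256))]
print(reduce_phase(map_phase(events)))   # only the nearby, similar pair should match
```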

  17. Software Defined Optics and Networking for Large Scale Data Centers

    DEFF Research Database (Denmark)

    Mehmeri, Victor; Andrus, Bogdan-Mihai; Tafur Monroy, Idelfonso

    Big data imposes correlations of large amounts of information between numerous systems and databases. This leads to large, dynamically changing flows and traffic patterns between clusters and server racks that result in a decrease of the quality of transmission and degraded application performance. ... Highly interconnected topologies combined with flexible, on-demand network configuration can become a solution to the ever-increasing dynamic traffic...

  18. Reducing Plug and Process Loads for a Large Scale, Low Energy Office Building: NREL's Research Support Facility; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Lobato, C.; Pless, S.; Sheppy, M.; Torcellini, P.

    2011-02-01

    This paper documents the design and operational plug and process load energy efficiency measures needed to allow a large-scale office building to reach ultra-high-efficiency building goals. The appendices of this document contain a wealth of documentation pertaining to plug and process load design in the RSF, including a list of the equipment selected for use.

  19. 20 CFR 362.12 - Computation of amount of reimbursement.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computation of amount of reimbursement. 362.12 Section 362.12 Employees' Benefits RAILROAD RETIREMENT BOARD INTERNAL ADMINISTRATION, POLICY AND... the cost of repair is the amount payable. (b) Depreciation in value of an item of personal property is...

  20. Increasing the amount of payment to research subjects

    Science.gov (United States)

    Resnick, DB

    2014-01-01

    This article discusses some ethical issues that can arise when researchers decide to increase the amount of payment offered to research subjects to boost enrollment. Would increasing the amount of payment be unfair to subjects who have already consented to participate in the study? This article considers how five different models of payment—the free market model, the wage payment model, the reimbursement model, the appreciation model, and the fair benefits model—would approach this issue. The article also considers several practical problems related to changing the amount of payment, including determining whether there is enough money in the budget to offer additional payments to subjects who have already enrolled, ascertaining how difficult it will be to re-contact subjects, and developing a plan of action for responding to subjects who find out they are receiving less money and demand an explanation. PMID:18757614

  1. Research to lessen the amounts of curing agents in processed meat through use of rock salt and carbon monoxide

    Science.gov (United States)

    Sakata, R.; Takeda, S.; Kinoshita, Y.; Waga, M.

    2017-09-01

    This study was carried out to examine the reddening of meat products due to the addition of natural yellow salt (YS) and carbon monoxide (CO). Following YS or NaCl addition at 2% to pork subsequent to nitrite (0∼100 ppm) treatment, color development due to this addition was analyzed visually. Heme pigment content in the meat was also determined spectrophotometrically. YS was found to bring about greater reddening than NaCl, indicating residual nitrite and nitrate content to be significantly higher in meat containing YS, though the amount of either was quite small. The amount of nitrite required for a red color to develop was noted to vary significantly from one meat product to another. CO treatment of pork caused the formation of carboxy myoglobin (COMb) with consequent reddening of the meat. COMb was shown to be heat-stable and form stably at pH 5.0 to ∼8.0 and to be extractable with water, but was barely extractable at all with acetone. Nitric oxide was found to have greater affinity toward myoglobin (Mb) than CO. Nitrosyl Mb was noted to be stable in all meat products examined. CO was seen to be capable of controlling the extent of lipid oxidation.

  2. Large Spatial Scale Ground Displacement Mapping through the P-SBAS Processing of Sentinel-1 Data on a Cloud Computing Environment

    Science.gov (United States)

    Casu, F.; Bonano, M.; de Luca, C.; Lanari, R.; Manunta, M.; Manzo, M.; Zinno, I.

    2017-12-01

    Since its launch in 2014, the Sentinel-1 (S1) constellation has played a key role in SAR data availability and dissemination all over the world. Indeed, the free and open access data policy adopted by the European Copernicus program, together with the global coverage acquisition strategy, makes the Sentinel constellation a game changer in the Earth Observation scenario. As SAR data have become ubiquitous, the technological and scientific challenge is focused on maximizing the exploitation of such a huge data flow. In this direction, the use of innovative processing algorithms and distributed computing infrastructures, such as the Cloud Computing platforms, can play a crucial role. In this work we present a Cloud Computing solution for the advanced interferometric (DInSAR) processing chain based on the Parallel SBAS (P-SBAS) approach, aimed at processing S1 Interferometric Wide Swath (IWS) data for the generation of large spatial scale deformation time series in an efficient, automatic and systematic way. Such a DInSAR chain ingests Sentinel 1 SLC images and carries out several processing steps, to finally compute deformation time series and mean deformation velocity maps. Different parallel strategies have been designed ad hoc for each processing step of the P-SBAS S1 chain, encompassing both multi-core and multi-node programming techniques, in order to maximize the computational efficiency achieved within a Cloud Computing environment and cut down the relevant processing times. The presented P-SBAS S1 processing chain has been implemented on the Amazon Web Services platform and a thorough analysis of the attained parallel performances has been performed to identify and overcome the major bottlenecks to scalability. The presented approach is used to perform national-scale DInSAR analyses over Italy, involving the processing of more than 3000 S1 IWS images acquired from both ascending and descending orbits. Such an experiment confirms the big advantage of
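
    A minimal sketch of the multi-core part of that idea (assumptions marked; this is not the actual P-SBAS code): independent small-baseline interferometric pairs are distributed across a process pool, with the per-pair work reduced to a placeholder function. The 12-day repeat cycle, the 48-day baseline limit and the pool size are example values.

```python
# Minimal sketch (not the actual P-SBAS chain): distribute independent small-baseline
# interferometric pairs across worker processes. form_interferogram() is a placeholder
# for reading two S1 acquisitions, coregistering them and forming the interferogram.
from multiprocessing import Pool
from itertools import combinations
import datetime as dt

def small_baseline_pairs(dates, max_days=48):
    """Keep only acquisition pairs with a short temporal baseline (SBAS criterion)."""
    return [(a, b) for a, b in combinations(sorted(dates), 2) if (b - a).days <= max_days]

def form_interferogram(pair):
    master, slave = pair
    return f"ifg_{master:%Y%m%d}_{slave:%Y%m%d}"   # placeholder result label

if __name__ == "__main__":
    dates = [dt.date(2016, 1, 1) + dt.timedelta(days=12 * k) for k in range(20)]  # 12-day repeat
    pairs = small_baseline_pairs(dates)
    with Pool(processes=8) as pool:    # multi-core; a multi-node run would shard `pairs` first
        interferograms = pool.map(form_interferogram, pairs)
    print(len(pairs), "pairs ->", len(interferograms), "interferograms")
```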

  3. Inverse problem to constrain the controlling parameters of large-scale heat transport processes: The Tiberias Basin example

    Science.gov (United States)

    Goretzki, Nora; Inbar, Nimrod; Siebert, Christian; Möller, Peter; Rosenthal, Eliyahu; Schneider, Michael; Magri, Fabien

    2015-04-01

    Salty and thermal springs exist along the lakeshore of the Sea of Galilee, which covers most of the Tiberias Basin (TB) in the northern Jordan-Dead Sea Transform, Israel/Jordan. As it is the only freshwater reservoir of the entire area, it is important to study the salinisation processes that pollute the lake. Simulations of thermohaline flow along a 35 km NW-SE profile show that meteoric and relic brines are flushed by the regional flow from the surrounding heights and thermally induced groundwater flow within the faults (Magri et al., 2015). Several model runs with trial and error were necessary to calibrate the hydraulic conductivity of both faults and major aquifers in order to fit temperature logs and spring salinity. It turned out that the hydraulic conductivity of the faults ranges between 30 and 140 m/yr whereas the hydraulic conductivity of the Upper Cenomanian aquifer is as high as 200 m/yr. However, large-scale transport processes are also dependent on other physical parameters such as thermal conductivity, porosity and the fluid thermal expansion coefficient, which are hardly known. Here, inverse problems (IP) are solved along the NW-SE profile to better constrain the physical parameters (a) hydraulic conductivity, (b) thermal conductivity and (c) thermal expansion coefficient. The PEST code (Doherty, 2010) is applied via the graphical interface FePEST in FEFLOW (Diersch, 2014). The results show that both thermal and hydraulic conductivity are consistent with the values determined with the trial and error calibrations. Besides being an automatic approach that speeds up the calibration process, the IP allows covering a wide range of parameter values, providing additional solutions not found with the trial and error method. Our study shows that geothermal systems like TB are more comprehensively understood when inverse models are applied to constrain coupled fluid flow processes over large spatial scales. References Diersch, H.-J.G., 2014. FEFLOW Finite

  4. Large-scale preparation of plasmid DNA.

    Science.gov (United States)

    Heilig, J S; Elbing, K L; Brent, R

    2001-05-01

    Although the need for large quantities of plasmid DNA has diminished as techniques for manipulating small quantities of DNA have improved, occasionally large amounts of high-quality plasmid DNA are desired. This unit describes the preparation of milligram quantities of highly purified plasmid DNA. The first part of the unit describes three methods for preparing crude lysates enriched in plasmid DNA from bacterial cells grown in liquid culture: alkaline lysis, boiling, and Triton lysis. The second part describes four methods for purifying plasmid DNA in such lysates away from contaminating RNA and protein: CsCl/ethidium bromide density gradient centrifugation, polyethylene glycol (PEG) precipitation, anion-exchange chromatography, and size-exclusion chromatography.

  5. Loss aversion, large deviation preferences and optimal portfolio weights for some classes of return processes

    Science.gov (United States)

    Duffy, Ken; Lobunets, Olena; Suhov, Yuri

    2007-05-01

    We propose a model of a loss-averse investor who aims to maximize his expected wealth under certain constraints. The constraints are that he avoids, with high probability, incurring a (suitably defined) unacceptable loss. The methodology employed comes from the theory of large deviations. We explore a number of fundamental properties of the model and illustrate its desirable features. We demonstrate its utility by analyzing assets that follow some commonly used financial return processes: Fractional Brownian Motion, Jump Diffusion, Variance Gamma and Truncated Lévy.

  6. GDC 2: Compression of large collections of genomes.

    Science.gov (United States)

    Deorowicz, Sebastian; Danek, Agnieszka; Niemiec, Marcin

    2015-06-25

    The falling price of high-throughput genome sequencing is changing the landscape of modern genomics. A number of large-scale projects aimed at sequencing many human genomes are in progress. Genome sequencing is also becoming an important aid in personalized medicine. One of the significant side effects of this change is the necessity of storing and transferring huge amounts of genomic data. In this paper we deal with the problem of compression of large collections of complete genomic sequences. We propose an algorithm that is able to compress the collection of 1092 human diploid genomes about 9,500 times. This result is about 4 times better than what is offered by the other existing compressors. Moreover, our algorithm is very fast as it processes the data at a speed of 200 MB/s on a modern workstation. As a consequence, the proposed algorithm allows storing complete genomic collections at low cost, e.g., the examined collection of 1092 human genomes needs only about 700 MB when compressed, which can be compared to about 6.7 TB of uncompressed FASTA files. The source code is available at http://sun.aei.polsl.pl/REFRESH/index.php?page=projects&project=gdc&subpage=about.
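
    The reported ratio is easy to sanity-check (illustrative arithmetic, not from the paper): 6.7 TB compressed to about 700 MB is roughly a 9,600-fold reduction, consistent with the stated figure of about 9,500 times.

```python
# Sanity check of the reported compression ratio (illustrative, not from the paper).
uncompressed_bytes = 6.7e12   # ~6.7 TB of FASTA files
compressed_bytes = 700e6      # ~700 MB compressed
print(f"compression ratio ~ {uncompressed_bytes / compressed_bytes:,.0f}x")  # ~9,571x
```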

  7. Fabrication of silica ceramic membrane via sol-gel dip-coating method at different nitric acid amount

    Science.gov (United States)

    Kahlib, N. A. Z.; Daud, F. D. M.; Mel, M.; Hairin, A. L. N.; Azhar, A. Z. A.; Hassan, N. A.

    2018-01-01

    Fabrication of silica ceramics via the sol-gel method offers advantages over other methods of ceramic membrane fabrication, such as simple operation, high purity, a homogeneous and well-defined structure, and complex shapes of end products. This work presents the fabrication of silica ceramic membranes via the sol-gel dip-coating method with varying amounts of nitric acid. Nitric acid plays an important role as a catalyst in the fabrication reaction, which involves hydrolysis and condensation processes. The tubular ceramic support, used as the substrate, was dipped into a sol of tetraethylorthosilicate (TEOS), distilled water and ethanol with the addition of nitric acid. The fabricated silica membrane was then characterized by Field Emission Scanning Electron Microscopy (FESEM) and Fourier transform infrared spectroscopy (FTIR) to determine its structural and chemical properties at different amounts of acid. The XRD analysis of the fabricated silica ceramic membrane showed the existence of silicate hydrate in the final product. FESEM images indicated that the silica ceramic membrane had been deposited on the tubular ceramic support as a substrate and penetrated into the pore walls. The intensity of the FTIR peaks decreased with increasing amounts of acid. Hence, 8 ml of acid was found to be the appropriate amount of catalyst for fabricating a silica ceramic membrane with good physical and chemical characteristics.

  8. The microbial fermentation characteristics depend on both carbohydrate source and heat processing: a model experiment with ileo-cannulated pigs

    DEFF Research Database (Denmark)

    Nielsen, Tina Skau; Jørgensen, Henry Johs. Høgh; Knudsen, Knud Erik Bach

    2017-01-01

    The effects of carbohydrate (CHO) source and processing (extrusion cooking) on large intestinal fermentation products were studied in ileo-cannulated pigs as a model for humans. Pigs were fed diets containing barley, pea or a mixture of potato starch:wheat bran (PSWB) either raw or extrusion cooked. Extrusion cooking reduced the amount of starch fermented in the large intestine by 52–96% depending on the CHO source and the total pool of butyrate in the distal small intestine + large intestine by on average 60% across diets. Overall, extrusion cooking caused a shift in the composition of short-chain fatty acids (SCFA) produced towards more acetate and less propionate and butyrate. The CHO source and processing highly affected the fermentation characteristics and extrusion cooking generally reduced large intestinal fermentation and resulted in a less desirable composition of the fermentation...

  9. DB-XES : enabling process discovery in the large

    NARCIS (Netherlands)

    Syamsiyah, A.; van Dongen, B.F.; van der Aalst, W.M.P.; Ceravolo, P.; Guetl, C.; Rinderle-Ma, S.

    2018-01-01

    Dealing with the abundance of event data is one of the main process discovery challenges. Current process discovery techniques are able to efficiently handle imported event log files that fit in the computer’s memory. Once data files get bigger, scalability quickly drops since the speed required to

  10. Radiation entropy influx as a measure of planetary dissipative processes

    International Nuclear Information System (INIS)

    Izakov, M.N.

    1989-01-01

    Dissipative processes involving large flows of matter and energy occur on the planets. The radiation negentropy influx, resulting from the difference between the entropy fluxes of the incoming solar radiation and the outgoing thermal radiation of the planet, is a measure of all these processes. A large share of the radiation negentropy influx is spent on the vertical thermal fluxes that maintain the temperature conditions of the planet. The next share of radiation negentropy consumption on the Earth is water evaporation. The remaining part is used for the dynamics, which is explained by the insignificant efficiency of the heat engine that generates motions in the atmosphere and ocean. A substantially higher share of the radiation negentropy influx than on the Earth is spent on Venus, where there is practically no water.

  11. Waste processing system for nuclear power plant

    International Nuclear Information System (INIS)

    Higashinakagawa, Emiko; Tezuka, Fuminobu; Maesawa, Yukishige; Irie, Hiromitsu; Daibu, Etsuji.

    1996-01-01

    The present invention concerns a waste processing system of a nuclear power plant, which can reduce the volume of a large amount of plastics without burying them. Among burnable wastes and plastic wastes to be discarded in the power plant located on the sea side, the plastic wastes are heated and converted into oils, and the burnable wastes are burnt using the oils as a fuel. The system is based on the finding that the presence of Na₂O and K₂O contained in the wastes catalytically improves the efficiency of thermal decomposition in a heating atmosphere, in the method of heating plastics and converting them into oils. (T.M.)

  12. 42 CFR 438.704 - Amounts of civil money penalties.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Amounts of civil money penalties. 438.704 Section... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS MANAGED CARE Sanctions § 438.704 Amounts of civil money penalties. (a) General rule. The limit on, or the maximum civil money penalty the State may impose varies...

  13. Health information search to deal with the exploding amount of health information produced.

    Science.gov (United States)

    Müller, H; Hanbury, A; Al Shorbaji, N

    2012-01-01

    This focus theme deals with the various aspects of health information search that are necessary to cope with the challenges of an increasing amount and complexity of medical information currently produced. This editorial reviews the main challenges of health information search and summarizes the five papers of this focus theme. The five papers of the focus theme cover a large part of the current challenges in health information search such as coding standards, information extraction from complex data, user requirements analysis, multimedia data analysis and the access to big data. Several future challenges are identified such as the combination of visual and textual data for information search and the difficulty to scale when analyzing big data.

  14. A software package to process an INIS magnetic tape on the VAX computer

    International Nuclear Information System (INIS)

    Omar, A.A.; Mohamed, F.A.

    1991-01-01

    This paper presents a software package whose function is to process, on VAX computers, the magnetic tapes distributed by the Atomic Energy Agency. These tapes contain abstracts of papers in the different branches of the nuclear field and are supplied through the International Nuclear Information System (INIS). This paper has two goals. First, it gives a procedure for processing any foreign magnetic tape on VAX computers. Second, it solves the problem of reading the INIS tapes on a non-IBM computer, thus allowing specialists to benefit from the large amount of information contained in these tapes. 11 fig

  15. Compliance with Environmental Regulations through Complex Geo-Event Processing

    Directory of Open Access Journals (Sweden)

    Federico Herrera

    2017-11-01

    Full Text Available In a context of e-government, there are usually regulatory compliance requirements that support systems must monitor, control and enforce. These requirements may come from environmental laws and regulations that aim to protect the natural environment and mitigate the effects of pollution on human health and ecosystems. Monitoring compliance with these requirements involves processing a large volume of data from different sources, which is a major challenge. This volume is also increased with data coming from autonomous sensors (e.g. reporting carbon emission in protected areas) and from citizens providing information (e.g. illegal dumping) in a voluntary way. Complex Event Processing (CEP) technologies allow processing large amounts of event data and detecting patterns from them. However, they do not provide native support for the geographic dimension of events, which is essential for monitoring requirements that apply to specific geographic areas. This paper proposes a geospatial extension for CEP that allows monitoring environmental requirements considering the geographic location of the processed data. We extend an existing platform-independent, model-driven approach for CEP, adding the geographic location to events and specifying patterns using geographic operators. The use and technical feasibility of the proposal are shown through the development of a case study and the implementation of a prototype.
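
    A minimal sketch of the core idea (hypothetical code, not the authors' CEP platform): a geographic predicate is combined with an ordinary threshold pattern over a stream of emission events, so an event only matches when it both lies inside a monitored area and exceeds a limit. The polygon, the threshold and the event fields are made-up values.

```python
# Hypothetical sketch (not the authors' CEP platform): a geographic predicate plus a
# simple threshold pattern over a stream of emission events.
def point_in_polygon(lon, lat, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (lon, lat) vertices."""
    inside = False
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % len(polygon)]
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

PROTECTED_AREA = [(-56.3, -34.9), (-56.0, -34.9), (-56.0, -34.7), (-56.3, -34.7)]  # made-up area
CO2_LIMIT = 400.0                                                                  # made-up limit

def matches_pattern(event):
    """Match when the event lies inside the protected area AND exceeds the emission limit."""
    return (point_in_polygon(event["lon"], event["lat"], PROTECTED_AREA)
            and event["co2_ppm"] > CO2_LIMIT)

stream = [
    {"sensor": "s1", "lon": -56.1, "lat": -34.8, "co2_ppm": 420.0},  # inside and above limit
    {"sensor": "s2", "lon": -55.0, "lat": -34.8, "co2_ppm": 500.0},  # outside the area
]
print([e for e in stream if matches_pattern(e)])
```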

  16. The large-scale process of microbial carbonate precipitation for nickel remediation from an industrial soil.

    Science.gov (United States)

    Zhu, Xuejiao; Li, Weila; Zhan, Lu; Huang, Minsheng; Zhang, Qiuzhuo; Achal, Varenyam

    2016-12-01

    Microbial carbonate precipitation is known as an efficient process for the remediation of heavy metals from contaminated soils. In the present study, a urease positive bacterial isolate, identified as Bacillus cereus NS4 through 16S rDNA sequencing, was utilized on a large scale to remove nickel from industrial soil contaminated by the battery industry. The soil was highly contaminated with an initial total nickel concentration of approximately 900 mg kg⁻¹. The soluble-exchangeable fraction was reduced to 38 mg kg⁻¹ after treatment. The primary objective of metal stabilization was achieved by reducing the bioavailability through immobilizing the nickel in the urease-driven carbonate precipitation. The nickel removal in the soils contributed to the transformation of nickel from mobile species into stable biominerals identified as calcite, vaterite, aragonite and nickelous carbonate when analyzed under XRD. It was proven that during precipitation of calcite, Ni²⁺ with an ion radius close to Ca²⁺ was incorporated into the CaCO₃ crystal. The biominerals were also characterized by using SEM-EDS to observe the crystal shape and Raman-FTIR spectroscopy to predict responsible bonding during bioremediation with respect to Ni immobilization. The electronic structure and chemical-state information of the detected elements during the MICP bioremediation process was studied by XPS. This is the first study in which microbial carbonate precipitation was used for the large-scale remediation of metal-contaminated industrial soil. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Decontamination processes for waste glass canisters

    International Nuclear Information System (INIS)

    Rankin, W.N.

    1982-01-01

    A Defense Waste Processing Facility (DWPF) is currently being designed to convert Savannah River Plant liquid, high-level radioactive waste into a solid form, such as borosilicate glass. To prevent the spread of radioactivity, the outside of the canisters of waste glass must have very low levels of smearable radioactive contamination before they are removed from the DWPF. Several techniques were considered for canister decontamination: high-pressure water spray, electropolishing, chemical dissolution, and abrasive blasting. An abrasive blasting technique using a glass frit slurry has been selected for use in the DWPF. No additional equipment is needed to process waste generated from decontamination. Frit used as the abrasive will be mixed with the waste and fed to the glass melter. In contrast, chemical and electrochemical techniques require more space in the DWPF, and produce large amounts of contaminated by-products, which are difficult to immobilize by vitrification

  18. 29 CFR 530.302 - Amounts of civil money penalties.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Amounts of civil money penalties. 530.302 Section 530.302... EMPLOYMENT OF HOMEWORKERS IN CERTAIN INDUSTRIES Civil Money Penalties § 530.302 Amounts of civil money penalties. (a) A civil money penalty, not to exceed $500 per affected homeworker for any one violation, may...

  19. Chinese large solar telescopes site survey

    Science.gov (United States)

    Liu, Yu

    2017-04-01

    In order to observe the solar surface with unprecedented resolution, the Chinese solar physics community decided to launch its solar site survey project in 2010 as the first step in looking for the best candidate sites for the Chinese next-generation large-aperture solar telescopes, i.e., the 5-8 meter Chinese Giant Solar Telescope and the 1-meter-class coronagraph. We have built two long-term monitoring sites in Daocheng, at altitudes of around 4800 meters above sea level in the large Shangri-La mountain area, and we have collected systematic site data since 2014. Clear evidence, including the key parameters of seeing, sky brightness and water vapor content, indicates that the large Shangri-La area offers excellent seeing and a sufficient number of clear-sky hours, making it suitable for developing large solar telescopes. We will review the site survey progress and present the preliminary statistical results in this talk.

  20. Estimation of the radiological consequences of dumping into the atmosphere and upon the surface waters caused by non-nuclear industrial processes in the Netherlands

    International Nuclear Information System (INIS)

    Punte, A.; Meijer, R.J. de; Put, L.W.

    1988-01-01

    The objective of this report is to estimate the radiological burden on the Dutch population caused by releases into the atmosphere and onto the surface waters by the non-nuclear industry in the Netherlands. All minerals and raw materials contain small quantities of radioactive materials. Although the concentrations in most minerals are small, the total amount of radioactivity can be considerable when large amounts of minerals are used. As a result of losses, storage and/or reuse of the residual materials liberated in these processes, a large part of the population may be exposed to an extra amount of ionizing radiation. In this report the risks and risk classes are formulated by which the industrial branches may be subdivided. To that end, an estimation is made of the radionuclide transport of the raw materials in various industrial branches. Next, it is indicated how the amount of the losses from the radionuclide transport can be estimated and how the limits of the risk classes can be translated into limits on the radionuclide transport. Finally, the risks for members of the critical groups and the general individual risks resulting from the estimated losses and the doses resulting from them are given for the distinguished industry branches. (author). 155 refs.; 6 figs.; 32 tabs