WorldWideScience

Sample records for distributed processing acm

  1. Distribution of the ACME-arcA gene among meticillin-resistant Staphylococcus haemolyticus and identification of a novel ccr allotype in ACME-arcA-positive isolates.

    Science.gov (United States)

    Pi, Borui; Yu, Meihong; Chen, Yagang; Yu, Yunsong; Li, Lanjuan

    2009-06-01

    The aim of this study was to investigate the prevalence and characteristics of ACME (arginine catabolic mobile element)-arcA-positive isolates among meticillin-resistant Staphylococcus haemolyticus (MRSH). ACME-arcA, native arcA and SCCmec elements were detected by PCR. Susceptibilities to 10 antimicrobial agents were compared between ACME-arcA-positive and -negative isolates by the chi-square test. PFGE was used to investigate the clonal relatedness of ACME-arcA-positive isolates. The phylogenetic relationships of ACME-arcA and native arcA were analysed using the neighbour-joining method of MEGA software. A total of 42 (47.7%) of 88 isolates, distributed among 13 PFGE types, were positive for the ACME-arcA gene. There were no significant differences in antimicrobial susceptibility between ACME-arcA-positive and -negative isolates. A novel ccr allotype (ccrAB(SHP)) was identified in ACME-arcA-positive isolates. Among the 42 ACME-arcA-positive isolates: 8 isolates harboured SCCmec V; 8 isolates harboured the class C1 mec complex and ccrAB(SHP); 22 isolates harbouring the class C1 mec complex and 4 isolates harbouring the class C2 mec complex were negative for all known ccr allotypes. ACME-arcA-positive isolates were thus found for the first time in MRSH, with high prevalence and clonal diversity, which suggests mobility of ACME within MRSH. The results from this study reveal that MRSH is likely to be one of the potential reservoirs of ACME for Staphylococcus aureus.
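    A minimal sketch of the chi-square comparison described above, with invented counts (not the study's data); scipy is assumed available.

      # Chi-square test on a hypothetical 2x2 susceptibility table, mirroring
      # the comparison described in the abstract (counts are invented).
      from scipy.stats import chi2_contingency

      # rows: ACME-arcA-positive / -negative isolates
      # columns: susceptible / resistant to one antimicrobial agent
      table = [[30, 12],   # 42 positive isolates
               [33, 13]]   # 46 negative isolates

      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.3f}, p = {p:.3f}")   # p > 0.05: no significant difference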

  2. Improving simulated spatial distribution of productivity and biomass in Amazon forests using the ACME land model

    Science.gov (United States)

    Yang, X.; Thornton, P. E.; Ricciuto, D. M.; Shi, X.; Xu, M.; Hoffman, F. M.; Norby, R. J.

    2017-12-01

    Tropical forests play a crucial role in the global carbon cycle, accounting for one third of global NPP and containing about 25% of global vegetation biomass and soil carbon. This is particularly true for tropical forests in the Amazon region, which comprises approximately 50% of the world's tropical forests. It is therefore important to understand and represent the processes that determine the fluxes and storage of carbon in these forests. In this study, we show that the implementation of the phosphorus (P) cycle and P limitation in the ACME Land Model (ALM) improves the simulated spatial pattern of NPP. The P-enabled ALM is able to capture the west-to-east gradient of productivity, consistent with field observations. We also show that by improving the representation of mortality processes, ALM is able to reproduce the observed spatial pattern of above-ground biomass across the Amazon region.

  3. HPDC '12: proceedings of the 21st ACM symposium on high-performance parallel and distributed computing, June 18-22, 2012, Delft, The Netherlands

    NARCIS (Netherlands)

    Epema, D.H.J.; Kielmann, T.; Ripeanu, M.

    2012-01-01

    Welcome to ACM HPDC 2012! This is the twenty-first year of HPDC and we are pleased to report that our community continues to grow in size, quality and reputation. The program consists of three days packed with presentations on the latest developments in high-performance parallel and distributed

  4. ACME-III and ACME-IV Final Campaign Reports

    Energy Technology Data Exchange (ETDEWEB)

    Biraud, S. C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-01-01

    The goals of the Atmospheric Radiation Measurement (ARM) Climate Research Facility’s third and fourth Airborne Carbon Measurements (ACME) field campaigns, ACME-III and ACME-IV, are: 1) to measure and model the exchange of CO2, water vapor, and other greenhouse gases by the natural, agricultural, and industrial ecosystems of the Southern Great Plains (SGP) region; 2) to develop quantitative approaches to relate these local fluxes to the concentration of greenhouse gases measured at the Central Facility tower and in the atmospheric column above the ARM SGP Central Facility, 3) to develop and test bottom-up measurement and modeling approaches to estimate regional scale carbon balances, and 4) to develop and test inverse modeling approaches to estimate regional scale carbon balance and anthropogenic sources over continental regions. Regular soundings of the atmosphere from near the surface into the mid-troposphere are essential for this research.

  5. ACMS-Data

    Data.gov (United States)

    Department of Homeland Security — Records of CBP training activities in the academies and in-service field training. This data is processed by the COTS application Acadis Readiness Suite and is...

  6. Process evaluation distributed system

    Science.gov (United States)

    Moffatt, Christopher L. (Inventor)

    2006-01-01

    The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on the observation criteria information. The process evaluation module utilizes a personal digital assistant (PDA). The data display module is in communication with the database server and includes a website for viewing collected process data in a desired metrics form; it also provides for editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module minimizes the requirement for manual input of the collected process data.
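    A minimal sketch of the module interactions described above, using an in-memory SQLite database as a stand-in for the database server; all table, column and function names are illustrative, not from the patent.

      # Administration module writes observation criteria; the process
      # evaluation module reads them and records collected data; the data
      # display module queries the stored results as metrics.
      import sqlite3

      db = sqlite3.connect(":memory:")   # stand-in for the database server
      db.execute("CREATE TABLE criteria (id INTEGER PRIMARY KEY, name TEXT)")
      db.execute("CREATE TABLE observations (criterion_id INTEGER, value REAL)")

      def admin_add_criterion(name):       # administration module
          db.execute("INSERT INTO criteria (name) VALUES (?)", (name,))

      def evaluate(criterion_id, value):   # process evaluation module (e.g. on a PDA)
          db.execute("INSERT INTO observations VALUES (?, ?)", (criterion_id, value))

      def display_metrics():               # data display module
          return db.execute(
              "SELECT c.name, AVG(o.value) FROM criteria c "
              "JOIN observations o ON o.criterion_id = c.id GROUP BY c.id").fetchall()

      admin_add_criterion("cycle_time")
      evaluate(1, 4.2)
      evaluate(1, 3.8)
      print(display_metrics())             # [('cycle_time', 4.0)]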

  7. ACM Bundles on Del Pezzo surfaces

    Directory of Open Access Journals (Sweden)

    Joan Pons-Llopis

    2009-11-01

    ACM rank 1 bundles on del Pezzo surfaces are classified in terms of the rational normal curves that they contain. A complete list of ACM line bundles is provided. Moreover, for any del Pezzo surface X of degree less than or equal to six and for any n ≥ 2, we construct a family of dimension ≥ n − 1 of non-isomorphic simple ACM bundles of rank n on X.

  8. Asbestos-Containing Materials (ACM) and Demolition

    Science.gov (United States)

    Specific federal regulations require the identification of asbestos-containing materials (ACM) in many of the residential buildings that are being demolished or renovated by a municipality.

  9. ACME: A Basis for Architecture Exchange

    National Research Council Canada - National Science Library

    Wile, David

    2003-01-01

    .... It remains useful in that role, but since the project's inception the Acme language and its support toolkit have grown into a solid foundation upon which new software architecture design and analysis...

  10. Distributed genetic process mining

    NARCIS (Netherlands)

    Bratosin, C.C.; Sidorova, N.; Aalst, van der W.M.P.

    2010-01-01

    Process mining aims at discovering process models from data logs in order to offer insight into the real use of information systems. Most of the existing process mining algorithms fail to discover complex constructs or have problems dealing with noise and infrequent behavior. The genetic process

  11. Quark ACM with topologically generated gluon mass

    Science.gov (United States)

    Choudhury, Ishita Dutta; Lahiri, Amitabha

    2016-03-01

    We investigate the effect of a small, gauge-invariant mass of the gluon on the anomalous chromomagnetic moment (ACM) of quarks by perturbative calculations at the one-loop level. The mass of the gluon is taken to have been generated via a topological mass generation mechanism, in which the gluon acquires a mass through its interaction with an antisymmetric tensor field Bμν. For a small gluon mass, we calculate the quark ACM at momentum transfer q^2 = -M_Z^2. We compare the results with the ACM calculated for a gluon mass arising from a Proca mass term. We find that the ACM of the up, down, strange and charm quarks varies significantly with the gluon mass, while the ACM of the top and bottom quarks shows negligible gluon-mass dependence. The mechanism of gluon mass generation is most important for the ACM of the strange quark, but much less so for the other quarks. We also show the results at q^2 = -m_t^2. We find that the dependence on the gluon mass at q^2 = -m_t^2 is much weaker than at q^2 = -M_Z^2 for all quarks.

  12. Additive Construction with Mobile Emplacement (ACME)

    Science.gov (United States)

    Vickers, John

    2015-01-01

    The Additive Construction with Mobile Emplacement (ACME) project is developing technology to build structures on planetary surfaces using in-situ resources. The project focuses on the construction of both 2D (landing pads, roads, and structure foundations) and 3D (habitats, garages, radiation shelters, and other structures) infrastructure needs for planetary surface missions. The ACME project seeks to raise the Technology Readiness Level (TRL) of two components needed for planetary surface habitation and exploration: 3D additive construction (e.g., contour crafting), and excavation and handling technologies (to effectively and continuously produce in-situ feedstock). Additionally, the ACME project supports the research and development of new materials for planetary surface construction, with the goal of reducing the amount of material to be launched from Earth.

  13. Safe Distribution of Declarative Processes

    DEFF Research Database (Denmark)

    Hildebrandt, Thomas; Mukkamala, Raghava Rao; Slaats, Tijs

    2011-01-01

    ... process model generalizing labelled prime event structures to a systems model able to finitely represent ω-regular languages. An operational semantics given as a transition semantics between markings of the graph allows DCR Graphs to be conveniently used as both specification and execution model. The technique for distribution is based on a new general notion of projection of DCR Graphs relative to a subset of labels and events identifying the set of external events that must be communicated from the other processes in the network in order for the distribution to be safe. We prove that for any vector of projections that covers a DCR Graph, the network of synchronously communicating DCR Graphs given by the projections is bisimilar to the original global process graph. We exemplify the distribution technique on a process identified in a case study of a cross-organizational case management system carried...

  14. Porcine bladder acellular matrix (ACM): protein expression, mechanical properties

    Energy Technology Data Exchange (ETDEWEB)

    Farhat, Walid A [Department of Surgery, Division of Urology, University of Toronto and Hospital for Sick Children, Toronto, ON M5G 1X8 (Canada); Chen Jun; Haig, Jennifer; Antoon, Roula; Litman, Jessica; Yeger, Herman [Department of Developmental and Stem Cell Biology, Research Institute, Hospital for Sick Children, Toronto, ON M5G 1X8 (Canada); Sherman, Christopher [Department of Anatomic Pathology, Sunnybrook and Women' s College Health Sciences Centre, Toronto, ON (Canada); Derwin, Kathleen [Department of Biomedical Engineering, Lerner Research Institute and Orthopaedic Research Center, Cleveland Clinic Foundation, Cleveland, OH (United States)], E-mail: walid.farhat@sickkids.ca

    2008-06-01

    Experimentally, porcine bladder acellular matrix (ACM) that mimics extracellular matrix has excellent potential as a bladder substitute. Herein we investigated the spatial localization and expression of key cellular and extracellular proteins in the ACM; furthermore, we evaluated the inherent mechanical properties of the resultant ACM prior to implantation. Using a proprietary decellularization method, the DNA contents in both ACM and normal bladder were measured; in addition we used immunohistochemistry and western blots to quantify and localize the different cellular and extracellular components, and finally mechanical testing was performed using a uniaxial mechanical testing machine. The mean DNA content was significantly lower in the ACM than in the normal bladder. The immunohistochemical and western blot analyses showed that collagens I and IV were preserved in the ACM, whereas collagen III was possibly denatured; elastin, laminin and fibronectin were mildly reduced. Although the ACM did not exhibit nucleated cells, residual cellular components (actin, myosin, vimentin and others) were still present. There was, on the other hand, no significant difference in mean stiffness between the ACM and the bladder. Although our decellularization method is effective in removing nuclear material from the bladder while maintaining its inherent mechanical properties, further work is needed to determine whether the residual DNA and cellular remnants would lead to any immune reaction, and whether the mechanical properties of the ACM are preserved upon implantation and cellularization.

  15. Porcine bladder acellular matrix (ACM): protein expression, mechanical properties

    International Nuclear Information System (INIS)

    Farhat, Walid A; Chen Jun; Haig, Jennifer; Antoon, Roula; Litman, Jessica; Yeger, Herman; Sherman, Christopher; Derwin, Kathleen

    2008-01-01

    Experimentally, porcine bladder acellular matrix (ACM) that mimics extracellular matrix has excellent potential as a bladder substitute. Herein we investigated the spatial localization and expression of key cellular and extracellular proteins in the ACM; furthermore, we evaluated the inherent mechanical properties of the resultant ACM prior to implantation. Using a proprietary decellularization method, the DNA contents in both ACM and normal bladder were measured; in addition we used immunohistochemistry and western blots to quantify and localize the different cellular and extracellular components, and finally mechanical testing was performed using a uniaxial mechanical testing machine. The mean DNA content was significantly lower in the ACM than in the normal bladder. The immunohistochemical and western blot analyses showed that collagens I and IV were preserved in the ACM, whereas collagen III was possibly denatured; elastin, laminin and fibronectin were mildly reduced. Although the ACM did not exhibit nucleated cells, residual cellular components (actin, myosin, vimentin and others) were still present. There was, on the other hand, no significant difference in mean stiffness between the ACM and the bladder. Although our decellularization method is effective in removing nuclear material from the bladder while maintaining its inherent mechanical properties, further work is needed to determine whether the residual DNA and cellular remnants would lead to any immune reaction, and whether the mechanical properties of the ACM are preserved upon implantation and cellularization.

  16. Porcine bladder acellular matrix (ACM): protein expression, mechanical properties.

    Science.gov (United States)

    Farhat, Walid A; Chen, Jun; Haig, Jennifer; Antoon, Roula; Litman, Jessica; Sherman, Christopher; Derwin, Kathleen; Yeger, Herman

    2008-06-01

    Experimentally, porcine bladder acellular matrix (ACM) that mimics extracellular matrix has excellent potential as a bladder substitute. Herein we investigated the spatial localization and expression of key cellular and extracellular proteins in the ACM; furthermore, we evaluated the inherent mechanical properties of the resultant ACM prior to implantation. Using a proprietary decellularization method, the DNA contents in both ACM and normal bladder were measured; in addition we used immunohistochemistry and western blots to quantify and localize the different cellular and extracellular components, and finally mechanical testing was performed using a uniaxial mechanical testing machine. The mean DNA content was significantly lower in the ACM than in the normal bladder. The immunohistochemical and western blot analyses showed that collagens I and IV were preserved in the ACM, whereas collagen III was possibly denatured; elastin, laminin and fibronectin were mildly reduced. Although the ACM did not exhibit nucleated cells, residual cellular components (actin, myosin, vimentin and others) were still present. There was, on the other hand, no significant difference in mean stiffness between the ACM and the bladder. Although our decellularization method is effective in removing nuclear material from the bladder while maintaining its inherent mechanical properties, further work is needed to determine whether the residual DNA and cellular remnants would lead to any immune reaction, and whether the mechanical properties of the ACM are preserved upon implantation and cellularization.

  17. CLIC-ACM: Acquisition and Control System

    CERN Document Server

    Bielawski, B; Magnoni, S

    2014-01-01

    CLIC [1] (Compact Linear Collider) is a world-wide collaboration to study the next terascale lepton collider, relying upon a very innovative concept of two-beam acceleration. In this scheme, the power is transported to the main accelerating structures by a primary electron beam. The Two Beam Module (TBM) is a compact integration with a high filling factor of all components: RF, magnets, instrumentation, vacuum, alignment and stabilization. This paper describes the very challenging aspects of designing the compact system to serve as a dedicated Acquisition & Control Module (ACM) for all signals of the TBM. Very delicate conditions must be considered, in particular radiation doses that could reach several kGy in the tunnel. In such severe conditions, shielding and hardened electronics will have to be taken into consideration. In addition, with more than 300 ADC and DAC channels per ACM and about 21000 ACMs in total, it is clear that power consumption will be an important issue. It is also obvious that...

  18. Porosity of porcine bladder acellular matrix: impact of ACM thickness.

    Science.gov (United States)

    Farhat, Walid; Chen, Jun; Erdeljan, Petar; Shemtov, Oren; Courtman, David; Khoury, Antoine; Yeger, Herman

    2003-12-01

    The objectives of this study are to examine the porosity of bladder acellular matrix (ACM) using deionized (DI) water as the model fluid and dextran as the indicator macromolecule, and to correlate the porosity to the ACM thickness. Porcine urinary bladders from pigs weighing 20-50 kg were sequentially extracted in detergent-containing solutions, and to modify the ACM thickness, stretched bladders were acellularized in the same manner. Luminal and abluminal ACM specimens were subjected to fixed static DI water pressure (10 cm), and water passing through the specimens was collected at specific time intervals. For the macromolecule porosity testing, the diffusion rate and direction of 10,000 MW fluorescein-labeled dextrans across ACM specimens mounted in Ussing chambers were measured. Both experiments were repeated on the thin stretched ACM. In both ACM types, the fluid porosity in both directions did not decrease with increased test duration (3 h); in addition, the abluminal surface was more porous to fluid than the luminal surface. On the other hand, when comparing thin to thick ACM, the porosity in either direction was higher in the thick ACM. Macromolecule porosity, as measured by absorbance, was higher for the abluminal thick ACM than the luminal side, but this characteristic was reversed in the thin ACM. Comparing thin to thick ACM, the luminal side in the thin ACM was more porous to dextran than in the thick ACM, but this characteristic was reversed for the abluminal side. The porcine bladder ACM possesses directional porosity, and acellularizing stretched urinary bladders may increase structural density and alter fluid and macromolecule porosity. Copyright 2003 Wiley Periodicals, Inc. J Biomed Mater Res 67A: 970-974, 2003

  19. Representation of deforestation impacts on climate, water, and nutrient cycles in the ACME earth system model

    Science.gov (United States)

    Cai, X.; Riley, W. J.; Zhu, Q.

    2017-12-01

    Deforestation causes a series of changes to the climate, water, and nutrient cycles. Employing a state-of-the-art earth system model—ACME (Accelerated Climate Modeling for Energy), we comprehensively investigate the impacts of deforestation on these processes. We first assess the performance of the ACME Land Model (ALM) in simulating runoff, evapotranspiration, albedo, and plant productivity at 42 FLUXNET sites. The single column mode of ACME is then used to examine climate effects (temperature cooling/warming) and responses of runoff, evapotranspiration, and nutrient fluxes to deforestation. This approach separates local effects of deforestation from global circulation effects. To better understand the deforestation effects in a global context, we use the coupled (atmosphere, land, and slab ocean) mode of ACME to demonstrate the impacts of deforestation on global climate, water, and nutrient fluxes. Preliminary results showed that the land component of ACME has advantages in simulating these processes and that local deforestation has potentially large impacts on runoff and atmospheric processes.

  20. ACM CCS 2013-2015 Student Travel Support

    Science.gov (United States)

    2016-10-29

    Under the ARO-funded effort titled “ACM CCS 2013-2015 Student Travel Support,” from 2013 to 2015, George... Computer and Communications Security (ACM CCS). The views, opinions and/or findings contained in this report are those of the author(s) and should not...

  1. AcmD, a homolog of the major autolysin AcmA of Lactococcus lactis, binds to the cell wall and contributes to cell separation and autolysis

    NARCIS (Netherlands)

    Visweswaran, Ganesh Ram R; Steen, Anton; Leenhouts, Kees; Szeliga, Monika; Ruban, Beata; Hesseling-Meinders, Anne; Dijkstra, Bauke W; Kuipers, Oscar P; Kok, Jan; Buist, Girbe

    2013-01-01

    Lactococcus lactis expresses the homologous glucosaminidases AcmB, AcmC, AcmA and AcmD. The latter two have three C-terminal LysM repeats for peptidoglycan binding. AcmD has much shorter intervening sequences separating the LysM repeats and a lower iso-electric point (4.3) than AcmA (10.3). Under

  2. Distributed Processing of SETI Data

    Science.gov (United States)

    Korpela, Eric

    As you have read in prior chapters, researchers have been performing progressively more sensitive SETI searches since 1960. Each search has been limited by the technologies available at the time. As radio frequency technologies have become more efficient and computers have become faster, the searches have increased in capacity and become more sensitive. Often the sensitivity of a search is limited by the hardware that performs the calculations required to process the telescope data and expose any embedded signals. Shortly before the start of the 21st century, projects began to appear that exploited the processing capabilities of computers connected to the Internet in order to solve problems that required a large amount of computing power. The SETI@home project, managed by myself and a group of researchers at the Space Sciences Laboratory of the University of California, Berkeley, was the first attempt to use large-scale distributed computing to solve the problems of performing a sensitive search for narrow-band radio signals from extraterrestrial civilizations (Korpela et al., 2001). A follow-on project, Astropulse, searches for extraterrestrial signals with wider bandwidths and shorter time durations. Both projects are ongoing at the present time (mid-2010).

  3. Air Traffic Complexity Measurement Environment (ACME): Software User's Guide

    Science.gov (United States)

    1996-01-01

    A user's guide for the Air Traffic Complexity Measurement Environment (ACME) software is presented. The ACME consists of two major components, a complexity analysis tool and a user interface. The Complexity Analysis Tool (CAT) analyzes complexity off-line, producing data files which may be examined interactively via the Complexity Data Analysis Tool (CDAT). The Complexity Analysis Tool is composed of three independently executing processes that communicate via PVM (Parallel Virtual Machine) and Unix sockets. The Runtime Data Management and Control process (RUNDMC) extracts flight plan and track information from a SAR input file and sends the information to GARP (Generate Aircraft Routes Process) and CAT (Complexity Analysis Task). GARP in turn generates aircraft trajectories, which are utilized by CAT to calculate sector complexity. CAT writes flight plan, track and complexity data to an output file, which can be examined interactively. The Complexity Data Analysis Tool (CDAT) provides an interactive graphic environment for examining the complexity data produced by the Complexity Analysis Tool (CAT). CDAT can also play back track data extracted from System Analysis Recording (SAR) tapes. The CDAT user interface consists of a primary window, a controls window, and miscellaneous pop-ups. Aircraft track and position data are displayed in the main viewing area of the primary window. The controls window contains miscellaneous control and display items. Complexity data are displayed in pop-up windows. CDAT plays back sector complexity and aircraft track and position data as a function of time. Controls are provided to start and stop playback, adjust the playback rate, and reposition the display to a specified time.
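    The pipeline of independently executing processes exchanging data over Unix sockets can be suggested with a toy two-process sketch (POSIX only); the RUNDMC/CAT roles and the "complexity" metric here are stand-ins, and PVM itself is not shown.

      # Parent ("RUNDMC") sends track data to a forked child ("CAT"), which
      # returns a placeholder sector-complexity value over the socket pair.
      import json, os, socket

      parent_sock, child_sock = socket.socketpair()

      if os.fork() == 0:                        # child process: "CAT"
          parent_sock.close()
          track = json.loads(child_sock.recv(4096).decode())
          complexity = len(track["positions"])  # placeholder complexity metric
          child_sock.sendall(str(complexity).encode())
          child_sock.close()
          os._exit(0)
      else:                                     # parent process: "RUNDMC"
          child_sock.close()
          msg = {"flight": "ACME123", "positions": [[0, 0], [1, 2], [2, 4]]}
          parent_sock.sendall(json.dumps(msg).encode())
          print("sector complexity:", parent_sock.recv(64).decode())
          parent_sock.close()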

  4. De afschaffing van de bezwaarfase bij boetebesluiten van de ACM [The abolition of the objection phase for ACM fining decisions]

    NARCIS (Netherlands)

    Jans, J.H.; Outhuijse, A.

    On 1 March 2013, the Autoriteit Consument en Markt (ACM) comes into being through the merger of the NMa, OPTA and the Consumentenautoriteit. In order to allow the ACM to operate decisively, it is proposed to simplify its enforcement toolkit with respect to market supervision. One of the proposals

  5. Understanding flexible and distributed software development processes

    OpenAIRE

    Agerfalk, Par J.; Fitzgerald, Brian

    2006-01-01

    The minitrack on Flexible and Distributed Software Development Processes addresses two important and partially intertwined current themes in software development: process flexibility and globally distributed software development.

  6. Palm distributions for log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge

    2017-01-01

    This paper establishes a remarkable result regarding Palm distributions for a log Gaussian Cox process: the reduced Palm distribution for a log Gaussian Cox process is itself a log Gaussian Cox process that only differs from the original log Gaussian Cox process in the intensity function. This new result is used to study functional summaries for log Gaussian Cox processes.
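    In symbols, the stated result reads roughly as follows (our paraphrase of the abstract, with standard GP/LGCP notation assumed):

      % If the point process X is an LGCP with random intensity
      % Lambda(s) = exp{Y(s)}, where Y is a Gaussian process with mean
      % function mu and covariance function c, then the reduced Palm
      % distribution of X at a location u is again an LGCP, with the same
      % covariance and the log-intensity mean shifted by c(u, .).
      \[
        \Lambda(s) = \exp\{Y(s)\}, \quad Y \sim \mathrm{GP}(\mu, c)
        \;\Longrightarrow\;
        X_u^! \sim \mathrm{LGCP}\bigl(\mu(\cdot) + c(u,\cdot),\; c\bigr)
      \]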

  7. Transient Inverse Calibration of the Site-Wide Groundwater Flow Model (ACM-2): FY03 Progress Report

    International Nuclear Information System (INIS)

    Vermeul, Vince R.; Bergeron, Marcel P.; Cole, C R.; Murray, Christopher J.; Nichols, William E.; Scheibe, Timothy D.; Thorne, Paul D.; Waichler, Scott R.; Xie, YuLong

    2003-01-01

    DOE and PNNL are working to strengthen the technical defensibility of the groundwater flow and transport model at the Hanford Site and to incorporate uncertainty into the model. One aspect of the initiative is developing and using a three-dimensional transient inverse model to estimate the hydraulic conductivities, specific yields, and other parameters using data from Hanford since 1943. The focus of the alternative conceptual model (ACM-2) inverse modeling initiative documented in this report was to address limitations identified in the ACM-1 model, complete the facies-based approach for representing the hydraulic conductivity distribution in the Hanford and middle Ringold Formations, develop the approach and implementation methodology for generating multiple ACMs based on geostatistical data analysis, and develop an approach for inverse modeling of these stochastic ACMs. The primary modifications to the ACM-2 transient inverse model include facies-based zonation of Units 1 (Hanford) and 5 (middle Ringold); an improved approach for handling run-on recharge from upland areas based on watershed modeling results; an improved approach for representing artificial discharges from site operations; and minor changes to the geologic conceptual model. ACM-2 is the first attempt to fully incorporate the facies-based approach to represent the hydrogeologic structure. Further refinement and additional improvements to overall model fit will be realized during future inverse simulations of groundwater flow and transport. In addition, preliminary work was completed on an approach and implementation for generating and inverse modeling stochastic ACMs. These techniques were applied to assess the uncertainty in the facies-based zonation of the Hanford formation and the geological structure of the Ringold mud units. The geostatistical analysis used a preliminary interpretation of the facies-based zonation that was not consistent with that used in ACM-2. Although the overall objective of

  8. General distributions in process algebra

    NARCIS (Netherlands)

    Katoen, Joost P.; d' Argenio, P.R.; Brinksma, Hendrik; Hermanns, H.; Katoen, Joost P.

    2001-01-01

    This paper is an informal tutorial on stochastic process algebras, i.e., process calculi where action occurrences may be subject to a delay that is governed by a (mostly continuous) random variable. Whereas most stochastic process algebras consider delays determined by negative exponential

  9. Analysis on working pressure selection of ACME integral test facility

    International Nuclear Information System (INIS)

    Chen Lian; Chang Huajian; Li Yuquan; Ye Zishen; Qin Benke

    2011-01-01

    An integral effects test facility, the advanced core cooling mechanism experiment (ACME) facility, was designed to verify the performance of the passive safety system and to validate the safety analysis codes of a pressurized water reactor power plant. Three test facilities for the AP1000 design are introduced and reviewed, and the problems resulting from the different working pressures of these test facilities are analyzed. A detailed description is then presented of the working pressure selection of the ACME facility and its characteristics, and the approach to establishing the desired initial test conditions is discussed. The selected working pressure of 9.3 MPa covers almost all important passive safety system operating conditions, enabling the ACME to simulate LOCAs with the same pressure and property similitude as the prototype. It is expected that the ACME will be an advanced integral test facility design for core cooling. (authors)

  10. Web-Based Distributed XML Query Processing

    NARCIS (Netherlands)

    Smiljanic, M.; Feng, L.; Jonker, Willem; Blanken, Henk; Grabs, T.; Schek, H-J.; Schenkel, R.; Weikum, G.

    2003-01-01

    Web-based distributed XML query processing has gained in importance in recent years due to the widespread popularity of XML on the Web. Unlike centralized and tightly coupled distributed systems, Web-based distributed database systems are highly unpredictable and uncontrollable, with a rather

  11. Palm distributions for log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus

    This paper reviews useful results related to Palm distributions of spatial point processes and provides a new result regarding the characterization of Palm distributions for the class of log Gaussian Cox processes. This result is used to study functional summary statistics for a log Gaussian Cox...

  12. On Distributed Port-Hamiltonian Process Systems

    NARCIS (Netherlands)

    Lopezlena, Ricardo; Scherpen, Jacquelien M.A.

    2004-01-01

    In this paper we use the term distributed port-Hamiltonian Process Systems (DPHPS) to refer to the result of merging the theory of distributed Port-Hamiltonian systems (DPHS) with the theory of process systems (PS). Such concept is useful for combining the systematic interconnection of PHS with the

  13. Parallel and Distributed Data Processing Using Autonomous ...

    African Journals Online (AJOL)

    Looking at the distributed nature of these networks, data is processed by remote login or Remote Procedure Calls (RPC), which causes congestion in the network bandwidth. This paper proposes a framework in which software agents are assigned duties to process the distributed data concurrently and assemble the ...

  14. Pomegranate MR images analysis using ACM and FCM algorithms

    Science.gov (United States)

    Morad, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

    Segmentation of an image plays an important role in image processing applications. In this paper, segmentation of pomegranate magnetic resonance (MR) images is explored. Pomegranate has healthy nutritional and medicinal properties, and its maturity indices and the quality of its internal tissues play an important role in the sorting process, where reliable determination of these features cannot easily be achieved by a human operator. Seeds and soft tissues are the main internal components of pomegranate. For research purposes, such as non-destructive investigation to determine the ripening index and the percentage of seeds during the growth period, segmentation of the internal structures should be performed as exactly as possible. In this paper, we present an automatic algorithm to segment the internal structure of pomegranate. Since the intensity of the stem and calyx is close to that of the internal tissues, stem and calyx pixels are usually mislabeled as internal tissue by a segmentation algorithm. To solve this problem, first, the fruit shape is extracted from its background using an active contour model (ACM). Then the stem and calyx are removed using morphological filters. Finally, the image is segmented by fuzzy c-means (FCM). The experimental results show an accuracy of 95.91% in the presence of stem and calyx, while the segmentation accuracy increases to 97.53% when the stem and calyx are first removed by morphological filters.
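    The final clustering step can be illustrated with a compact fuzzy c-means implementation on pixel intensities; this is a generic sketch with invented data, not the authors' code, and the active-contour and morphological steps are omitted.

      # Fuzzy c-means (FCM) on 1-D pixel intensities: alternate between
      # computing weighted cluster centers and updating fuzzy memberships.
      import numpy as np

      def fcm(x, c=2, m=2.0, iters=50):
          """x: 1-D array of intensities; returns (centers, memberships)."""
          rng = np.random.default_rng(0)
          u = rng.dirichlet(np.ones(c), size=x.size)     # fuzzy memberships
          for _ in range(iters):
              um = u ** m
              centers = um.T @ x / um.sum(axis=0)        # weighted centers
              d = np.abs(x[:, None] - centers[None, :]) + 1e-9
              u = 1.0 / d ** (2.0 / (m - 1.0))           # closer -> higher weight
              u /= u.sum(axis=1, keepdims=True)          # rows sum to 1
          return centers, u

      rng = np.random.default_rng(1)
      pixels = np.concatenate([rng.normal(50, 5, 500),    # "soft tissue"
                               rng.normal(150, 10, 500)]) # "seeds"
      centers, u = fcm(pixels)
      print(np.sort(centers))   # two intensity clusters near 50 and 150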

  15. Experiments to Distribute Map Generalization Processes

    Science.gov (United States)

    Berli, Justin; Touya, Guillaume; Lokhat, Imran; Regnauld, Nicolas

    2018-05-01

    Automatic map generalization requires computationally intensive processes that are often unable to deal with large datasets. Distributing the generalization process is the only way to make it scalable and usable in practice. But map generalization is a highly contextual process: the surroundings of a map feature need to be known to generalize the feature, which is a problem, as distribution might partition the dataset and parallelize the processing of each part. This paper proposes experiments to evaluate past proposals for distributing map generalization and to identify the main remaining issues. Past proposals for distributing map generalization are first discussed, and then the experiment hypotheses and apparatus are described. The experiments confirmed that regular partitioning was the quickest strategy, but also the least effective in taking context into account. Geographical partitioning, though less effective for now, is quite promising regarding the quality of the results, as it better integrates the geographical context.
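    The regular-partitioning strategy can be suggested by a small sketch: cut the extent into square cells, duplicating near-boundary features into neighboring cells so each partition keeps some context. The cell size, buffer and data below are illustrative, not from the paper.

      # Group point features into grid cells; a feature within `buffer` of a
      # cell boundary is copied into the adjacent cell(s) as context.
      from typing import Dict, List, Tuple

      Point = Tuple[float, float]

      def grid_partition(points: List[Point], cell: float,
                         buffer: float = 0.0) -> Dict[Tuple[int, int], List[Point]]:
          cells: Dict[Tuple[int, int], List[Point]] = {}
          for x, y in points:
              for cx in {int((x - buffer) // cell), int((x + buffer) // cell)}:
                  for cy in {int((y - buffer) // cell), int((y + buffer) // cell)}:
                      cells.setdefault((cx, cy), []).append((x, y))
          return cells

      pts = [(0.2, 0.3), (0.9, 0.9), (1.1, 1.0), (2.5, 0.4)]
      for key, members in sorted(grid_partition(pts, cell=1.0, buffer=0.15).items()):
          print(key, members)   # boundary points appear in more than one cell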

  16. Formation of personality’s acme-qualities as a component of physical education specialists’ acmeological competence

    Directory of Open Access Journals (Sweden)

    T.Hr. Dereka

    2016-10-01

    Purpose: to determine the characteristics of the formation of acme-qualities in physical education specialists and the correlations between components. Material: students of the “Physical education” specialty (n=194) participated in the research. Personality qualities were assessed with special tests: organizational abilities, communicative abilities, creative potential, need for achievement, emotional awareness, control of emotions, etc. Results: we determined the components of the acme-competence of physical education specialists, and found the density and direction of the correlations and the influence of acme-qualities on the personality component. Based on factorial analysis, we grouped and classified the components into four factors and created a visual picture of them. The accumulated percentage of dispersion of the studied factors was determined. Conclusions: continuous professional training of physical education specialists on acme-principles results in the formation of personality acme-qualities. These facilitate the manifestation of personal activity in the process of professional formation and constant self-perfection.

  17. Proceedings of the ACM SIGIR Workshop "Searching Spontaneous Conversational Speech"

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Oard, Douglas; Ordelman, Roeland J.F.; Raaijmakers, Stephan

    2007-01-01

    The Proceedings contain the contributions to the workshop on Searching Spontaneous Conversational Speech organized in conjunction with the 30th ACM SIGIR, Amsterdam 2007. The papers reflect some of the emerging focus areas and cross-cutting research topics, together addressing evaluation metrics,

  18. Preliminary proceedings of the 2001 ACM SIGPLAN Haskell workshop

    NARCIS (Netherlands)

    Hinze, R.

    2001-01-01

    This volume contains the preliminary proceedings of the 2001 ACM SIGPLAN Haskell Workshop, which was held on 2nd September 2001 in Firenze, Italy. The final proceedings will be published by Elsevier Science as an issue of Electronic Notes in Theoretical Computer Science (Volume 59). The

  19. Automated software system for checking the structure and format of ACM SIG documents

    Science.gov (United States)

    Mirza, Arsalan Rahman; Sah, Melike

    2017-04-01

    Microsoft (MS) Office Word is one of the most commonly used software tools for creating documents. MS Word 2007 and above uses XML to represent the structure of MS Word documents. Metadata about the documents are automatically created using Office Open XML (OOXML) syntax. We develop a new framework, called ADFCS (Automated Document Format Checking System), that takes advantage of the OOXML metadata in order to extract semantic information from MS Office Word documents. In particular, we develop a new ontology for Association for Computing Machinery (ACM) Special Interest Group (SIG) documents, representing the structure and format of these documents using OWL (Web Ontology Language). The metadata is then extracted automatically in RDF (Resource Description Framework) according to this ontology using the developed software. Finally, we generate extensive rules in order to infer whether the documents are formatted according to ACM SIG standards. This paper introduces the ACM SIG ontology, the metadata extraction process, the inference engine, the ADFCS online user interface, the system evaluation, and user study evaluations.
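    Since a .docx file is a ZIP archive whose main part is word/document.xml, the kind of OOXML metadata extraction described above can be sketched with the Python standard library alone; the file name and the style check are illustrative, not part of ADFCS.

      # Read paragraph styles and text from a .docx via its OOXML part.
      import zipfile
      import xml.etree.ElementTree as ET

      W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

      def paragraph_styles(path):
          """Yield (style_name, text) for each paragraph in the document."""
          with zipfile.ZipFile(path) as docx:
              root = ET.fromstring(docx.read("word/document.xml"))
          for p in root.iter(f"{W}p"):
              style = p.find(f"{W}pPr/{W}pStyle")
              name = style.get(f"{W}val") if style is not None else "Normal"
              text = "".join(t.text or "" for t in p.iter(f"{W}t"))
              yield name, text

      # e.g. flag a submission whose first paragraph is not styled "Title"
      styles = list(paragraph_styles("paper.docx"))
      if styles and styles[0][0] != "Title":
          print("format check: first paragraph should use the Title style")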

  20. Proposed Expansion of Acme Landfill Operations.

    Science.gov (United States)

    1982-08-01

    hazardous waste ponds that use solar evaporation processes to dispose of liquid hazardous wastes have an indefinite life, the quantity of liquid that may...determined at a later date. The use of solar evaporation ponds, for example, would preclude the use of spreading and compaction equipment used for...pesticides in spray cans, residual chemical solvents in steel drums, herbicide residues on grass clippings, or organic wastes in disposable baby diapers, a

  1. News from the Library: A one-stop-shop for computing literature: ACM Digital Library

    CERN Multimedia

    CERN Library

    2011-01-01

    The Association for Computing Machinery, ACM, is the world’s largest educational and scientific computing society. Among other services, the ACM provides the computing field's premier Digital Library and serves its members and the computing profession with leading-edge publications, conferences, and career resources. ACM Digital Library is available to the CERN community. The most popular journal here at CERN is Communications of the ACM. However, the collection offers access to a series of other important academic journals, such as the Journal of the ACM, and even the full text of a series of classic books. In addition, users have access to the ACM Guide to Computing Literature, the most comprehensive bibliographic database focusing on computing, integrated with ACM’s full-text articles and including features such as ACM Author Profile Pages, which provide bibliographic and bibliometric data for over 1,000,000 authors in the field. ACM Digital Library is an excellent com...

  2. Proceedings of the 2014 ACM international conference on Interactive experiences for TV and online video

    NARCIS (Netherlands)

    P. Olivier (Patrick); P. Wright; T. Bartindale; M. Obrist (Marianna); P.S. Cesar Garcia (Pablo Santiago); S. Basapur

    2014-01-01

    It is our great pleasure to introduce the 2014 ACM International Conference on Interactive Experiences for Television and Online Video -- ACM TVX 2014. ACM TVX is a leading annual conference that brings together international researchers and practitioners from a wide range of

  3. Proceedings of the 2nd ACM SIGSPATIAL International Workshop on Indoor Spatial Awareness

    DEFF Research Database (Denmark)

    These proceedings contain the papers selected for presentation at the Second International Workshop on Indoor Spatial Awareness, hosted by ACM SIGSPATIAL and held in conjunction with the 18th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS...

  4. 76 FR 64943 - Proposed Cercla Administrative Cost Recovery Settlement; ACM Smelter and Refinery Site, Located...

    Science.gov (United States)

    2011-10-19

    ... Settlement; ACM Smelter and Refinery Site, Located in Cascade County, MT AGENCY: Environmental Protection... projected future response costs concerning the ACM Smelter and Refinery NPL Site (Site), Operable Unit 1..., Helena, MT 59626. Mr. Sturn can be reached at (406) 457-5027. Comments should reference the ACM Smelter...

  5. Fouling distribution in forward osmosis membrane process.

    Science.gov (United States)

    Lee, Junseok; Kim, Bongchul; Hong, Seungkwan

    2014-06-01

    Fouling behavior along the length of a membrane module was systematically investigated by performing simple modeling and lab-scale experiments on the forward osmosis (FO) membrane process. The flux distribution model developed in this study showed good agreement with experimental results, validating the robustness of the model. This model demonstrated, as expected, that the permeate flux decreased along the membrane channel due to the decreasing osmotic pressure differential across the FO membrane. A series of fouling experiments were conducted with draw and feed solutions at various recoveries simulated by the model. The simulated fouling experiments revealed that higher organic (alginate) fouling, and thus more flux decline, was observed at the last section of a membrane channel, as foulants in the feed solution became more concentrated. Furthermore, the water flux in the FO process declined more severely as the recovery increased, due to more foulants being transported to the membrane surface with elevated solute concentrations at higher recovery, which created favorable solution environments for organic adsorption. The fouling reversibility also decreased at the last section of the membrane channel, suggesting that the fouling distribution on an FO membrane along the module should be carefully examined to improve overall cleaning efficiency. Lastly, it was found that the fouling distribution observed with co-current flow operation became less pronounced in counter-current flow operation of the FO membrane process. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
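    The flux-distribution idea lends itself to a toy discretization: at each slice of the channel the water flux is proportional to the local osmotic pressure difference, and the permeate flow dilutes the draw while concentrating the feed, so the flux decays toward the outlet. All values below are normalized and illustrative, not the paper's model parameters.

      # March down the module: local flux Jw = A * (pi_draw - pi_feed),
      # then update both streams by a simple solute mass balance.
      A = 0.05                          # normalized water permeability
      pi_draw, pi_feed = 1.0, 0.05      # normalized inlet osmotic pressures
      q_draw, q_feed = 1.0, 1.0         # normalized volumetric flows

      for i in range(10):               # slices along the membrane channel
          jw = A * (pi_draw - pi_feed)  # local water flux
          pi_draw = pi_draw * q_draw / (q_draw + jw)   # draw dilutes
          pi_feed = pi_feed * q_feed / (q_feed - jw)   # feed concentrates
          q_draw, q_feed = q_draw + jw, q_feed - jw
          print(f"slice {i}: Jw = {jw:.4f}")           # declines toward outlet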

  6. Distributed Aerodynamic Sensing and Processing Toolbox

    Science.gov (United States)

    Brenner, Martin; Jutte, Christine; Mangalam, Arun

    2011-01-01

    A Distributed Aerodynamic Sensing and Processing (DASP) toolbox was designed and fabricated for flight test applications with an Aerostructures Test Wing (ATW) mounted under the fuselage of an F-15B on the Flight Test Fixture (FTF). DASP monitors and processes the aerodynamics with the structural dynamics using nonintrusive, surface-mounted, hot-film sensing. This aerodynamic measurement tool benefits programs devoted to static/dynamic load alleviation, body freedom flutter suppression, buffet control, improvement of aerodynamic efficiency through cruise control, supersonic wave drag reduction through shock control, etc. This DASP toolbox measures local and global unsteady aerodynamic load distribution with distributed sensing. It determines correlation between aerodynamic observables (aero forces) and structural dynamics, and allows control authority increase through aeroelastic shaping and active flow control. It offers improvements in flutter suppression and, in particular, body freedom flutter suppression, as well as aerodynamic performance of wings for increased range/endurance of manned/ unmanned flight vehicles. Other improvements include inlet performance with closed-loop active flow control, and development and validation of advanced analytical and computational tools for unsteady aerodynamics.

  7. Distributed data processing for public health surveillance

    Directory of Open Access Journals (Sweden)

    Yih Katherine

    2006-09-01

    Background Many systems for routine public health surveillance rely on centralized collection of potentially identifiable, individual personal health information (PHI) records. Although individual, identifiable patient records are essential for conditions for which there is mandated reporting, such as tuberculosis or sexually transmitted diseases, they are not routinely required for effective syndromic surveillance. Public concern about the routine collection of large quantities of PHI to support non-traditional public health functions may make alternative surveillance methods that do not rely on centralized identifiable PHI databases increasingly desirable. Methods The National Bioterrorism Syndromic Surveillance Demonstration Program (NDP) is an example of one alternative model. All PHI in this system is initially processed within the secured infrastructure of the health care provider that collects and holds the data, using uniform software distributed and supported by the NDP. Only highly aggregated count data is transferred to the datacenter for statistical processing and display. Results Detailed, patient-level information is readily available to the health care provider to elucidate signals observed in the aggregated data, or for ad hoc queries. We briefly describe the benefits and disadvantages associated with this distributed processing model for routine automated syndromic surveillance. Conclusion For well-defined surveillance requirements, the model can be successfully deployed with very low risk of inadvertent disclosure of PHI – a feature that may make participation in surveillance systems more feasible for organizations and more appealing to the individuals whose PHI they hold. It is possible to design and implement distributed systems to support non-routine public health needs if required.
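    The core of the distributed model is that each provider reduces its own identifiable records to counts before anything leaves its infrastructure. A minimal sketch, with invented records and syndrome codes:

      # Runs inside the provider's infrastructure; identifiable records
      # (PHI) stay local, and only (date, syndrome) counts are sent out.
      from collections import Counter
      from datetime import date

      def local_aggregate(visits):
          return Counter((v["date"], v["syndrome"]) for v in visits)

      visits = [   # provider-side records (identifiable; never transmitted)
          {"name": "J. Doe", "date": date(2006, 9, 1), "syndrome": "ILI"},
          {"name": "A. Roe", "date": date(2006, 9, 1), "syndrome": "ILI"},
          {"name": "B. Poe", "date": date(2006, 9, 1), "syndrome": "GI"},
      ]

      to_datacenter = local_aggregate(visits)   # highly aggregated counts only
      print(dict(to_datacenter))   # 2 ILI and 1 GI count for 2006-09-01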

  8. Highlights from ACM SIGSPATIAL GIS 2011: the 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems: (Chicago, Illinois - November 1 - 4, 2011)

    DEFF Research Database (Denmark)

    Jensen, Christian S.; Ofek, Eyal; Tanin, Egemen

    2012-01-01

    ACM SIGSPATIAL GIS 2011 was the 19th gathering of the premier event on spatial information and Geographic Information Systems (GIS). It is also the fourth year that the conference was held under the auspices of ACM's most recent special interest group, SIGSPATIAL. Since its start in 1993, the con...

  9. Design of ACM system based on non-greedy punctured LDPC codes

    Science.gov (United States)

    Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng

    2017-08-01

    In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method which shows good performance in the high code rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is reduced. Simulations show that increasingly significant coding gains are obtained by the proposed ACM system with higher throughput.
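    The adaptive part of such a scheme can be suggested by a rate-selection rule: pick the highest code rate whose SNR threshold the measured channel still clears. The thresholds below are invented for illustration, not simulation results from the paper.

      # Choose the most efficient RC-LDPC code rate supported at a given SNR.
      RATE_THRESHOLDS = [   # (code rate, minimum required SNR in dB), invented
          (5 / 6, 6.5),
          (4 / 5, 5.5),
          (3 / 4, 4.5),
          (2 / 3, 3.5),
      ]

      def select_rate(snr_db):
          for rate, threshold in RATE_THRESHOLDS:   # highest rate first
              if snr_db >= threshold:
                  return rate
          return None   # channel too poor: fall back / retransmit

      for snr in (7.0, 5.0, 3.0):
          print(snr, "dB ->", select_rate(snr))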

  10. AVIRIS and TIMS data processing and distribution at the land processes distributed active archive center

    Science.gov (United States)

    Mah, G. R.; Myers, J.

    1993-01-01

    The U.S. Government has initiated the Global Change Research Program, a systematic study of the Earth as a complete system. NASA's contribution to the Global Change Research Program is the Earth Observing System (EOS), a series of orbital sensor platforms and an associated data processing and distribution system. The EOS Data and Information System (EOSDIS) is the archiving, production, and distribution system for data collected by the EOS space segment and uses a multilayer architecture for processing, archiving, and distributing EOS data. The first layer consists of the spacecraft ground stations and processing facilities that receive the raw data from the orbiting platforms and then separate the data by individual sensors. The second layer consists of Distributed Active Archive Centers (DAAC) that process, distribute, and archive the sensor data. The third layer consists of a user science processing network. The EOSDIS is being developed in a phased implementation. The initial phase, Version 0, is a prototype of the operational system. Version 0 activities are based upon existing systems and are designed to provide an EOSDIS-like capability for information management and distribution. An important science support task is the creation of simulated data sets for EOS instruments from precursor aircraft or satellite data. The Land Processes DAAC, at the EROS Data Center (EDC), is responsible for archiving and processing EOS precursor data from airborne instruments such as the Thermal Infrared Multispectral Scanner (TIMS), the Thematic Mapper Simulator (TMS), and the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS). AVIRIS, TIMS, and TMS are flown by the NASA Ames Research Center (ARC) on an ER-2. The ER-2 flies at 65,000 feet and can carry up to three sensors simultaneously. Most jointly collected data sets are somewhat boresighted and roughly registered. The instrument data are being used to construct data sets that simulate the spectral and spatial

  11. Distributed resource management across process boundaries

    KAUST Repository

    Suresh, Lalith

    2017-09-27

    Multi-tenant distributed systems composed of small services, such as Service-oriented Architectures (SOAs) and Micro-services, raise new challenges in attaining high performance and efficient resource utilization. In these systems, a request execution spans tens to thousands of processes, and the execution paths and resource demands on different services are generally not known when a request first enters the system. In this paper, we highlight the fundamental challenges of regulating load and scheduling in SOAs while meeting end-to-end performance objectives on metrics of concern to both tenants and operators. We design Wisp, a framework for building SOAs that transparently adapts rate limiters and request schedulers system-wide according to operator policies to satisfy end-to-end goals while responding to changing system conditions. In evaluations against production as well as synthetic workloads, Wisp successfully enforces a range of end-to-end performance objectives, such as reducing average latencies, meeting deadlines, providing fairness and isolation, and avoiding system overload.
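    One building block that a framework like Wisp adapts at runtime is a rate limiter whose rate can be retuned by policy; a token-bucket sketch (generic, not the paper's implementation):

      # Token bucket: tokens accrue at `rate` per second up to `capacity`;
      # each admitted request spends one token, and set_rate() lets an
      # operator policy adjust the limiter while the system runs.
      import time

      class TokenBucket:
          def __init__(self, rate, capacity):
              self.rate, self.capacity = rate, capacity
              self.tokens, self.last = capacity, time.monotonic()

          def set_rate(self, rate):
              self.rate = rate      # e.g. driven by an end-to-end objective

          def allow(self):
              now = time.monotonic()
              self.tokens = min(self.capacity,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now
              if self.tokens >= 1.0:
                  self.tokens -= 1.0
                  return True
              return False          # shed or queue the request

      limiter = TokenBucket(rate=100.0, capacity=10.0)
      print(sum(limiter.allow() for _ in range(20)))   # ~10 immediate admits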

  12. Distributed resource management across process boundaries

    KAUST Repository

    Suresh, Lalith; Bodik, Peter; Menache, Ishai; Canini, Marco; Ciucu, Florin

    2017-01-01

    Multi-tenant distributed systems composed of small services, such as Service-oriented Architectures (SOAs) and Micro-services, raise new challenges in attaining high performance and efficient resource utilization. In these systems, a request execution spans tens to thousands of processes, and the execution paths and resource demands on different services are generally not known when a request first enters the system. In this paper, we highlight the fundamental challenges of regulating load and scheduling in SOAs while meeting end-to-end performance objectives on metrics of concern to both tenants and operators. We design Wisp, a framework for building SOAs that transparently adapts rate limiters and request schedulers system-wide according to operator policies to satisfy end-to-end goals while responding to changing system conditions. In evaluations against production as well as synthetic workloads, Wisp successfully enforces a range of end-to-end performance objectives, such as reducing average latencies, meeting deadlines, providing fairness and isolation, and avoiding system overload.

  13. Parallel and distributed processing: applications to power systems

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Felix; Murphy, Liam [California Univ., Berkeley, CA (United States). Dept. of Electrical Engineering and Computer Sciences

    1994-12-31

    Applications of parallel and distributed processing to power systems problems are still in the early stages. Rapid progress in computing and communications promises a revolutionary increase in the capacity of distributed processing systems. In this paper, the state of the art in distributed processing technology and applications is reviewed and future trends are discussed. (author) 14 refs., 1 tab.

  14. Contribution of the collagen adhesin Acm to pathogenesis of Enterococcus faecium in experimental endocarditis.

    Science.gov (United States)

    Nallapareddy, Sreedhar R; Singh, Kavindra V; Murray, Barbara E

    2008-09-01

    Enterococcus faecium is a multidrug-resistant opportunist causing difficult-to-treat nosocomial infections, including endocarditis, but there are no reports experimentally demonstrating E. faecium virulence determinants. Our previous studies showed that some clinical E. faecium isolates produce a cell wall-anchored collagen adhesin, Acm, and that an isogenic acm deletion mutant of the endocarditis-derived strain TX0082 lost collagen adherence. In this study, we show with a rat endocarditis model that TX0082 Δacm::cat is highly attenuated versus wild-type TX0082, both in established (72 h) vegetations (P ...), making Acm the first factor shown to be important for E. faecium pathogenesis. In contrast, no mortality differences were observed in a mouse peritonitis model. While 5 of 17 endocarditis isolates were Acm nonproducers and failed to adhere to collagen in vitro, all had an intact, highly conserved acm locus. Highly reduced acm mRNA levels (≥50-fold reduction relative to an Acm producer) were found in three of these five nonadherent isolates, including the sequenced strain TX0016, by quantitative reverse transcription-PCR, indicating that acm transcription is downregulated in vitro in these isolates. However, examination of TX0016 cells obtained directly from infected rat vegetations by flow cytometry showed that Acm was present on 40% of cells grown during infection. Finally, we demonstrated a significant reduction in E. faecium collagen adherence by affinity-purified anti-Acm antibodies from E. faecium endocarditis patient sera, suggesting that Acm may be a potential immunotarget for strategies to control this emerging pathogen.

  15. Autolysis of Lactococcus lactis caused by induced overproduction of its major autolysin, AcmA

    NARCIS (Netherlands)

    Buist, Girbe; Karsens, H; Nauta, A; van Sinderen, D; Venema, G; Kok, J

    The optical density of a culture of Lactococcus lactis MG1363 was reduced by more than 60% during prolonged stationary phase. This reduction in optical density (autolysis) was almost absent in a culture of an isogenic mutant containing a deletion in the major autolysin gene, acmA. An acmA mutant carrying

  16. Study on the percent of frequency of ACME-Arca in clinical isolates ...

    African Journals Online (AJOL)

    ACME is an arginine catabolic mobile element in Staphylococcus epidermidis that encodes specific virulence factors. The purpose of this study was to examine the specific features and prevalence of ACME-arcA in methicillin-resistant Staphylococcus epidermidis isolates from clinical samples in Isfahan.

  17. Distributed Real-Time Embedded Video Processing

    National Research Council Canada - National Science Library

    Lv, Tiehan

    2004-01-01

    .... A deployable multi-camera video system must perform distributed computation, including computation near the camera as well as remote computations, in order to meet performance and power requirements...

  18. Preventive maintenance plan of the air-conditioning duct using the ACM-sensor

    International Nuclear Information System (INIS)

    Fukuba, Kazushi; Ito, Takanobu; Kojima, Akiko; Tanji, Kazuhiro; Sato, Yuki

    2013-01-01

    It is difficult to predict when corrosion severe enough to affect the function of an air-conditioning duct will occur, so current conservation practice is mostly corrective maintenance. We therefore used six types of test pieces together with an ACM-sensor in order to derive the corrosion rate from the corrosion environment and its relationship to the amount of corrosion on the test pieces. In addition, various molded duct parts were used to check the degree of corrosion introduced when a duct is fabricated. From the corrosion rates of the various test pieces, we selected the optimal combination of duct body and flange. Predicting duct life from the corrosion rate thus allows preventive action before function-affecting corrosion occurs, leading to stable and safe operation through appropriate maintenance of the equipment. (author)

  19. ARM Airborne Carbon Measurements VI (ARM-ACME VI) Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Biraud, Sebastien [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-05-01

    From October 1, 2015 through September 30, 2016, the AAF deployed a Cessna 206 aircraft over the Southern Great Plains, collecting observations of trace gas mixing ratios over the ARM/SGP Central Facility. The aircraft payload included two Atmospheric Observing Systems (AOS Inc.) analyzers for continuous measurements of CO2 and a 12-flask sampler for analysis of carbon cycle gases (CO2, CO, CH4, N2O, 13CO2). The aircraft payload also included solar/infrared radiation measurements. This research (supported by the DOE ARM and TES programs) builds upon previous ARM-ACME missions. The goal of these measurements is to improve understanding of: (a) the carbon exchange of the ARM region; (b) how CO2 and associated water and energy fluxes influence radiative forcing, convective processes, and CO2 concentrations over the ARM region; and (c) how greenhouse gases are transported on continental scales.

  20. A tutorial on Palm distributions for spatial point processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge

    2017-01-01

    This tutorial provides an introduction to Palm distributions for spatial point processes. Initially, in the context of finite point processes, we give an explicit definition of Palm distributions in terms of their density functions. Then we review Palm distributions in the general case. Finally, we...
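
    As background for such a tutorial, the reduced Palm distributions of a point process X with intensity function ρ are characterized by the Campbell-Mecke formula; the notation below is standard and not taken from the paper:

    ```latex
    % Campbell-Mecke formula: for any non-negative measurable h, the reduced
    % Palm expectations E_x^! of a point process X with intensity rho satisfy
    \mathbb{E} \sum_{u \in X} h\bigl(u,\, X \setminus \{u\}\bigr)
      = \int \rho(x)\, \mathbb{E}_x^{!}\bigl[h(x, X)\bigr]\, \mathrm{d}x .
    ```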

  1. Multivariate semi-logistic distribution and processes | Umar | Journal ...

    African Journals Online (AJOL)

    Multivariate semi-logistic distribution is introduced and studied. Some characterization properties of the multivariate semi-logistic distribution are presented. First-order autoregressive minification processes and their generalization to kth-order autoregressive minification processes with multivariate semi-logistic distribution as ...

  2. A Novel Observation-Guided Approach for Evaluating Mesoscale Convective Systems Simulated by the DOE ACME Model

    Science.gov (United States)

    Feng, Z.; Ma, P. L.; Hardin, J. C.; Houze, R.

    2017-12-01

    Mesoscale convective systems (MCSs) are the largest type of convective storms that develop when convection aggregates and induces mesoscale circulation features. Over North America, MCSs contribute over 60% of the total warm-season precipitation and over half of the extreme daily precipitation in the central U.S. Our recent study (Feng et al. 2016) found that the observed increases in springtime total and extreme rainfall in this region are dominated by increased frequency and intensity of long-lived MCSs*. To date, global climate models typically do not run at a resolution high enough to explicitly simulate individual convective elements and may not have adequate process representations for MCSs, resulting in a large deficiency in projecting changes in the frequency of extreme precipitation events in a future climate. In this study, we developed a novel observation-guided approach specifically designed to evaluate simulated MCSs in the Department of Energy's climate model, Accelerated Climate Modeling for Energy (ACME). The ACME model has advanced treatments for convection and subgrid variability and for this study is run at 25 km and 100 km grid spacings. We constructed a robust MCS database consisting of over 500 MCSs from three warm seasons of observations by applying a feature-tracking algorithm to 4-km resolution merged geostationary satellite and 3-D NEXRAD radar network data over the continental US. This high-resolution MCS database is then down-sampled to the 25 and 100 km ACME grids to re-characterize key MCS properties. The feature-tracking algorithm is adapted with the adjusted characteristics to identify MCSs in ACME model simulations. We demonstrate that this new analysis framework is useful for evaluating ACME's warm-season precipitation statistics associated with MCSs, and provides insights into the model process representations related to extreme precipitation events for future improvement. *Feng, Z., L. R. Leung, S. Hagos, R. A. Houze, C. D. Burleyson

  3. Cumulative processes and quark distribution in nuclei

    International Nuclear Information System (INIS)

    Kondratyuk, L.; Shmatikov, M.

    1984-01-01

    Assuming the existence of multiquark (mainly 12q) bags in nuclei, the spectra of cumulative nucleons and mesons produced in high-energy particle-nucleus collisions are discussed. The exponential form of the quark momentum distribution in a 12q bag (agreeing well with the experimental data on lepton-nucleus interactions at large q²) is shown to result in a quasi-exponential distribution of cumulative particles over the light-cone variable α_B. The dependence of f(α_B; p_⊥) (where p_⊥ is the transverse momentum of the bag) upon p_⊥ is considered. The yields of cumulative resonances, as well as effects related to the differing u- and d-quark distributions in N > Z nuclei, are discussed

  4. acme: The Amendable Coal-Fire Modeling Exercise. A C++ Class Library for the Numerical Simulation of Coal-Fires

    Science.gov (United States)

    Wuttke, Manfred W.

    2017-04-01

    At LIAG, we use numerical models to develop and enhance understanding of coupled transport processes and to predict the dynamics of the system under consideration. Topics include geothermal heat utilization, subrosion processes, and spontaneous underground coal fires. Although the details make it inconvenient if not impossible to apply a single code implementation to all systems, their investigations go along similar paths: they all depend on the solution of coupled transport equations. We thus saw a need for a modular code system with open access for the various communities to maximize the shared synergistic effects. To this purpose we develop the oops! (open object-oriented parallel solutions) toolkit, a C++ class library for the numerical solution of mathematical models of coupled thermal, hydraulic and chemical processes. This is used to develop problem-specific libraries like acme (amendable coal-fire modeling exercise), a class library for the numerical simulation of coal fires, and applications like kobra (Kohlebrand, German for coal fire), a numerical simulation code for standard coal-fire models. The basic principle of the oops! code system is the provision of data types for describing space- and time-dependent data fields, terms of partial differential equations (PDEs), and their discretisation and solution methods. Coupling of different processes, each described by its particular PDE, is modeled by an automatic timescale-ordered operator-splitting technique. acme is a derived coal-fire-specific application library, depending on oops!. If specific functionalities of general interest are implemented and have been tested, they will be assimilated into the main oops! library. Interfaces to external pre- and post-processing tools are easily implemented. Thus a construction kit which can be arbitrarily amended is formed. With the kobra application constructed with acme we study the processes and propagation of shallow coal seam fires in particular in

  5. Signal processing for distributed readout using TESs

    International Nuclear Information System (INIS)

    Smith, Stephen J.; Whitford, Chris H.; Fraser, George W.

    2006-01-01

    We describe optimal filtering algorithms for determining energy and position resolution in position-sensitive Transition Edge Sensor (TES) Distributed Read-Out Imaging Devices (DROIDs). Improved algorithms, developed using a small-signal finite-element model, are based on least-squares minimisation of the total noise power in the correlated dual TES DROID. Through numerical simulations we show that significant improvements in energy and position resolution are theoretically possible over existing methods
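
    The authors' correlated dual-TES algorithm is not reproduced here, but the core idea of an optimal (inverse-noise-weighted least-squares) filter for pulse amplitude estimation can be sketched as follows; the pulse shape and noise level are made up for the demonstration:

    ```python
    import numpy as np

    def optimal_filter_amplitude(data, template, noise_psd):
        """Frequency-domain optimal filter: least-squares estimate of the
        amplitude of a known pulse template in noise with PSD `noise_psd`."""
        D = np.fft.rfft(data)
        S = np.fft.rfft(template)
        w = 1.0 / noise_psd  # inverse-noise weight per frequency bin
        return np.real(np.sum(np.conj(S) * D * w) / np.sum(np.abs(S) ** 2 * w))

    # Toy check: a scaled template buried in white noise.
    rng = np.random.default_rng(0)
    t = np.arange(1024)
    template = np.exp(-t / 50.0) - np.exp(-t / 5.0)   # generic pulse shape
    data = 3.0 * template + rng.normal(0, 0.1, t.size)
    psd = np.full(t.size // 2 + 1, 0.1 ** 2)          # flat noise PSD
    print(optimal_filter_amplitude(data, template, psd))  # ~3.0
    ```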

  6. VANET '13: Proceeding of the Tenth ACM International Workshop on Vehicular Inter-networking, Systems, and Applications

    NARCIS (Netherlands)

    Gozalvez, J.; Kargl, Frank; Mittag, J.; Kravets, R.; Tsai, M.; Unknown, [Unknown

    This year marks a very important date for the ACM international workshop on Vehicular inter-networking, systems, and applications as ACM VANET celebrates now its 10th edition. Starting in 2004 as "ACM international workshop on Vehicular ad hoc networks" already the change in title indicates that

  7. Characterization of a Novel Arginine Catabolic Mobile Element (ACME) and Staphylococcal Chromosomal Cassette mec Composite Island with Significant Homology to Staphylococcus epidermidis ACME type II in Methicillin-Resistant Staphylococcus aureus Genotype ST22-MRSA-IV.

    LENUS (Irish Health Repository)

    Shore, Anna C

    2011-02-22

    The arginine catabolic mobile element (ACME) is prevalent among ST8-MRSA-IVa (USA300) isolates and evidence suggests that ACME enhances the ability of ST8-MRSA-IVa to grow and survive on its host. ACME has been identified in a small number of isolates belonging to other MRSA clones but is widespread among coagulase-negative staphylococci (CoNS). This study reports the first description of ACME in two distinct strains of the pandemic ST22-MRSA-IV clone. A total of 238 MRSA isolates recovered in Ireland between 1971 and 2008 were investigated for ACME using a DNA microarray. Twenty-three isolates (9.7%) were ACME-positive, all were either MRSA genotype ST8-MRSA-IVa (7/23, 30%) or ST22-MRSA-IV (16/23, 70%). Whole-genome sequencing and comprehensive molecular characterization revealed the presence of a novel 46-kb ACME and SCCmec composite island (ACME/SCCmec-CI) in ST22-MRSA-IVh isolates (n = 15). This ACME/SCCmec-CI consists of a 12-kb DNA region previously identified in ACME type II in S. epidermidis ATCC 12228, a truncated copy of the J1 region of SCCmec I and a complete SCCmec IVh element. The composite island has a novel genetic organization with ACME located within orfX and SCCmec located downstream of ACME. One pvl-positive ST22-MRSA-IVa isolate carried ACME located downstream of SCCmec IVa as previously described in ST8-MRSA-IVa. These results suggest that ACME has been acquired by ST22-MRSA-IV on two independent occasions. At least one of these instances may have involved horizontal transfer and recombination events between MRSA and CoNS. The presence of ACME may enhance dissemination of ST22-MRSA-IV, an already successful MRSA clone.

  8. Numerical simulation of distributed parameter processes

    CERN Document Server

    Colosi, Tiberiu; Unguresan, Mihaela-Ligia; Muresan, Vlad

    2013-01-01

    The present monograph defines, interprets and uses the matrix of partial derivatives of the state vector with applications for the study of some common categories of engineering. The book covers broad categories of processes that are formed by systems of partial derivative equations (PDEs), including systems of ordinary differential equations (ODEs). The work includes numerous applications specific to Systems Theory based on Mpdx, such as parallel, serial as well as feed-back connections for the processes defined by PDEs. For similar, more complex processes based on Mpdx with PDEs and ODEs as components, we have developed control schemes with PID effects for the propagation phenomena, in continuous media (spaces) or discontinuous ones (chemistry, power system, thermo-energetic) or in electro-mechanics (railway – traction) and so on. The monograph has a purely engineering focus and is intended for a target audience working in extremely diverse fields of application (propagation phenomena, diffusion, hydrodyn...

  9. Computer program for source distribution process in radiation facility

    International Nuclear Information System (INIS)

    Al-Kassiri, H.; Abdul Ghani, B.

    2007-08-01

    A computer simulation of dose distribution has been written in Visual Basic according to the arrangement and activities of the Co-60 sources. The program provides the dose distribution in treated products depending on the product density and desired dose, and is useful for optimizing source distribution during the loading process. There is good agreement between the program's calculated data and experimental data. (Author)
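
    The record gives no algorithmic detail; as a toy illustration of the kind of computation involved, the sketch below sums attenuated inverse-square contributions from an arrangement of point sources (the attenuation coefficient, geometry and units are assumptions, not the program's):

    ```python
    import math

    MU = 0.006  # assumed effective attenuation coefficient of product, 1/mm

    def relative_dose(point, sources):
        """Sum inverse-square, exponentially attenuated contributions of
        point sources; `sources` is a list of (x, y, z, activity) tuples."""
        dose = 0.0
        for sx, sy, sz, activity in sources:
            r = math.dist(point, (sx, sy, sz))
            dose += activity * math.exp(-MU * r) / (4 * math.pi * r ** 2)
        return dose

    rack = [(0, y, 0, 1.0) for y in range(-200, 201, 100)]  # 5 equal sources
    print(relative_dose((150.0, 0.0, 0.0), rack))
    ```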

  10. Modelling aspects of distributed processing in telecommunication networks

    NARCIS (Netherlands)

    Tomasgard, A; Audestad, JA; Dye, S; Stougie, L; van der Vlerk, MH; Wallace, SW

    1998-01-01

    The purpose of this paper is to formally describe new optimization models for telecommunication networks with distributed processing. Modern distributed networks put more focus on the processing of information and less on the actual transportation of data than we are traditionally used to in

  11. Inhibition of Enterococcus faecium adherence to collagen by antibodies against high-affinity binding subdomains of Acm.

    Science.gov (United States)

    Nallapareddy, Sreedhar R; Sillanpää, Jouko; Ganesh, Vannakambadi K; Höök, Magnus; Murray, Barbara E

    2007-06-01

    Strains of Enterococcus faecium express a cell wall-anchored protein, Acm, which mediates adherence to collagen. Here, we (i) identify the minimal and high-affinity binding subsegments of Acm and (ii) show that anti-Acm immunoglobulin Gs (IgGs) purified against these subsegments reduced E. faecium TX2535 strain collagen adherence by up to 73 and 50%, respectively, significantly more than the total IgGs against the full-length Acm A domain (28%). The ability to reduce Acm-mediated adherence with functional subsegment-specific antibodies raises the possibility of their use as therapeutic or prophylactic agents.

  12. Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme

    Science.gov (United States)

    Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook

    1995-01-01

    Three different schemes are employed to solve the benchmark problem. The first is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third is an optimized compact finite difference scheme modified by us: 4th-order Runge-Kutta time stepping and a 4th-order pentadiagonal compact spatial discretization with maximum resolution characteristics. The problems of category 1 are solved using the second (UNO3-ACM) and third (optimized compact) schemes. The problems of category 2 are solved using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved using the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.
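
    The abstract names the schemes without detail. The essential TVD-MUSCL ingredient is a slope limiter such as minmod; a minimal reconstruction sketch, assuming a periodic 1-D grid of cell averages:

    ```python
    import numpy as np

    def minmod(a, b):
        """Minmod slope limiter: zero at extrema, smaller slope otherwise."""
        return np.where(a * b > 0,
                        np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def muscl_interface_states(u):
        """Second-order TVD-MUSCL reconstruction of the left/right states
        at cell interfaces from cell averages `u` (periodic grid)."""
        du_minus = u - np.roll(u, 1)        # backward differences
        du_plus = np.roll(u, -1) - u        # forward differences
        slope = minmod(du_minus, du_plus)
        u_left = u + 0.5 * slope                     # left state at i+1/2
        u_right = np.roll(u - 0.5 * slope, -1)       # right state at i+1/2
        return u_left, u_right

    u = np.array([0.0, 0.0, 1.0, 1.0, 0.5, 0.0])
    uL, uR = muscl_interface_states(u)
    print(uL, uR, sep="\n")
    ```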

  13. CNTF-ACM promotes mitochondrial respiration and oxidative stress in cortical neurons through upregulating L-type calcium channel activity.

    Science.gov (United States)

    Sun, Meiqun; Liu, Hongli; Xu, Huanbai; Wang, Hongtao; Wang, Xiaojing

    2016-09-01

    A specialized culture medium termed ciliary neurotrophic factor-treated astrocyte-conditioned medium (CNTF-ACM) allows investigators to assess the peripheral effects of CNTF-induced activated astrocytes upon cultured neurons. CNTF-ACM has been shown to upregulate neuronal L-type calcium channel current activity, which has been previously linked to changes in mitochondrial respiration and oxidative stress. Therefore, the aim of this study was to evaluate CNTF-ACM's effects upon mitochondrial respiration and oxidative stress in rat cortical neurons. Cortical neurons, CNTF-ACM, and untreated control astrocyte-conditioned medium (UC-ACM) were prepared from neonatal Sprague-Dawley rat cortical tissue. Neurons were cultured in either CNTF-ACM or UC-ACM for a 48-h period. Changes in the following parameters before and after treatment with the L-type calcium channel blocker isradipine were assessed: (i) intracellular calcium levels, (ii) mitochondrial membrane potential (ΔΨm), (iii) oxygen consumption rate (OCR) and adenosine triphosphate (ATP) formation, (iv) intracellular nitric oxide (NO) levels, (v) mitochondrial reactive oxygen species (ROS) production, and (vi) susceptibility to the mitochondrial complex I toxin rotenone. CNTF-ACM neurons displayed significant changes in these parameters relative to UC-ACM neurons, including increased intracellular calcium levels, and the effects were sensitive to isradipine treatment. In conclusion, CNTF-ACM promotes mitochondrial respiration and oxidative stress in cortical neurons through elevating L-type calcium channel activity.

  14. Design of distributed systems of hydrolithosphere processes management. A synthesis of distributed management systems

    Science.gov (United States)

    Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.

    2017-10-01

    The paper considers the important problem of designing distributed systems for the management of hydrolithosphere processes. The control actions on the hydrolithosphere processes under consideration are implemented by a set of extraction wells. The article shows a method of defining approximation links for describing the dynamic characteristics of hydrolithosphere processes. The structure of the distributed regulators used in the management systems for the considered processes is presented. The paper analyses the results of the synthesis of the distributed management system and the results of modelling the closed-loop control system with respect to the parameters of the hydrolithosphere process.

  15. The acellular matrix (ACM) for bladder tissue engineering: A quantitative magnetic resonance imaging study.

    Science.gov (United States)

    Cheng, Hai-Ling Margaret; Loai, Yasir; Beaumont, Marine; Farhat, Walid A

    2010-08-01

    Bladder acellular matrices (ACMs) derived from natural tissue are gaining increasing attention for their role in tissue engineering and regeneration. Unlike conventional scaffolds based on biodegradable polymers or gels, ACMs possess native biomechanical and many acquired biologic properties. Efforts to optimize ACM-based scaffolds are ongoing and would be greatly assisted by a noninvasive means to characterize scaffold properties and monitor interaction with cells. MRI is well suited to this role, but research with MRI for scaffold characterization has been limited. This study presents initial results from quantitative MRI measurements for bladder ACM characterization and investigates the effects of incorporating hyaluronic acid, a natural biomaterial useful in tissue engineering and regeneration. Measured MR relaxation times (T1, T2) and diffusion coefficient were consistent with the increased water uptake and glycosaminoglycan content observed on biochemistry in hyaluronic acid ACMs. Multicomponent MRI provided greater specificity, with diffusion data showing an acellular environment and T2 components distinguishing the separate effects of increased glycosaminoglycans and hydration. These results suggest that quantitative MRI may provide useful information on matrix composition and structure, which is valuable in guiding further development using bladder ACMs for organ regeneration and in strategies involving the use of hyaluronic acid.

  16. Point processes and the position distribution of infinite boson systems

    International Nuclear Information System (INIS)

    Fichtner, K.H.; Freudenberg, W.

    1987-01-01

    It is shown that to each locally normal state of a boson system one can associate a point process that can be interpreted as the position distribution of the state. The point process contains all information one can get by position measurements and is determined by the latter. On the other hand, to each so-called Σ^c-point process Q one can relate a locally normal state with position distribution Q

  17. Processing and Distribution of STO2 Data

    Science.gov (United States)

    Goldsmith, Paul

    We propose in this ADAP to reduce the data obtained in the December 2016 flight of the STO2 Antarctic balloon observatory. In just over 20 days of taking data, STO2 observed over 2.5 square degrees of the inner Milky Way in the 1900 GHz (158 μm) fine structure line of ionized carbon ([CII]). This includes over 320,000 spectra with a velocity resolution of 0.16 km/s and an angular resolution of 1. In common with the higher bands of the Herschel HIFI instrument, which also employed hot electron bolometer (HEB) mixers, there are significant baseline issues with the data that make reduction a significant challenge. Because the STO2 launch was postponed a year due to weather in the 2015/16 season, funds for data analysis were largely redirected to support the team who enabled the successful launch and flight. A supplementary focused effort is thus needed to make STO2 data readily usable by the astronomical community, which is what we propose here. This ADAP will be a two-year program, including the following steps: (1) Refine and optimize algorithms for excision of bad channels, correction for receiver gain changes, removal of variable bad baselines, final baseline adjustment, and verification of calibration. (2) Develop an integrated pipeline incorporating the optimized algorithms, process the entire STO2 data set using the pipeline, and make an initial release of the data (DR1) to the public. (3) Refine the data calibration, including ancillary data sets coincident with the STO2 fields, and make the data VO-compliant. (4) Write documentation for the pipeline and publish it in an appropriate journal; release the final second data release (DR2) to the public, and hand the data off to permanent repositories: the NASA/IPAC IRSA database, the Harvard University Dataverse, and Cyverse, led by the University of Arizona. Members of the STO2 data reduction team have extensive experience with HIFI data, and particularly with HEB fine structure spectra. We are thus confident that we can build on this
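
    The pipeline algorithms of step (1) are still being refined; one common baseline-removal approach for HEB spectra, fitting a low-order polynomial to line-free channels and subtracting it, can be sketched as follows (all parameters are illustrative, not the STO2 pipeline's):

    ```python
    import numpy as np

    def remove_baseline(spectrum, line_mask, order=5):
        """Fit a low-order polynomial to line-free channels and subtract it.
        `line_mask` is True where spectral-line emission is expected."""
        x = np.arange(spectrum.size)
        coeffs = np.polyfit(x[~line_mask], spectrum[~line_mask], order)
        return spectrum - np.polyval(coeffs, x)

    rng = np.random.default_rng(1)
    x = np.arange(512)
    ripple = 0.5 * np.sin(x / 60.0) + 1e-3 * x           # distorted baseline
    line = 2.0 * np.exp(-0.5 * ((x - 256) / 8.0) ** 2)   # emission line
    spec = ripple + line + rng.normal(0, 0.05, x.size)
    clean = remove_baseline(spec, np.abs(x - 256) < 30)
    print(round(float(clean.max()), 1))  # ~2.0, the recovered line peak
    ```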

  18. Transparent checkpointing and process migration in a distributed system

    OpenAIRE

    2004-01-01

    A distributed system for creating a checkpoint for a plurality of processes running on the distributed system. The distributed system includes a plurality of compute nodes with an operating system executing on each compute node. A checkpoint library resides at the user level on each of the compute nodes, and the checkpoint library is transparent to the operating system residing on the same compute node and to the other compute nodes. Each checkpoint library uses a windowed messaging logging p...

  19. Tempered stable distributions stochastic models for multiscale processes

    CERN Document Server

    Grabchak, Michael

    2015-01-01

    This brief is concerned with tempered stable distributions and their associated Lévy processes. It is a good text for researchers interested in learning about tempered stable distributions. A tempered stable distribution is one which takes a stable distribution and modifies its tails to make them lighter. The motivation for this class comes from the fact that infinite variance stable distributions appear to provide a good fit to data in a variety of situations, but the extremely heavy tails of these models are not realistic for most real world applications. The idea of using distributions that modify the tails of stable models to make them lighter seems to have originated in the influential paper of Mantegna and Stanley (1994). Since then, these distributions have been extended and generalized in a variety of ways. They have been applied to a wide variety of areas including mathematical finance, biostatistics, computer science, and physics.
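
    For concreteness, the tail modification the brief describes can be written down explicitly for the one-sided case; this is the classical form, sketched here rather than quoted from the book:

    ```latex
    % One-sided stable Lévy measure (heavy polynomial tail), 0 < alpha < 2:
    \nu_{\text{stable}}(\mathrm{d}x) = c\, x^{-1-\alpha}\, \mathrm{d}x, \qquad x > 0,
    % and its classical tempering: the same tail, damped exponentially:
    \nu_{\text{tempered}}(\mathrm{d}x) = c\, e^{-\lambda x}\, x^{-1-\alpha}\, \mathrm{d}x, \qquad \lambda > 0.
    ```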

  20. Limiting conditional distributions for birth-death processes

    NARCIS (Netherlands)

    Kijima, M.; Nair, M.G.; Pollett, P.K.; van Doorn, Erik A.

    1997-01-01

    In a recent paper one of us identified all of the quasi-stationary distributions for a non-explosive, evanescent birth-death process for which absorption is certain, and established conditions for the existence of the corresponding limiting conditional distributions. Our purpose is to extend these

  1. Aspects Concerning the Optimization of Authentication Process for Distributed Applications

    Directory of Open Access Journals (Sweden)

    Ion Ivan

    2008-06-01

    The types of distributed applications are presented, and the quality characteristics of distributed applications are analyzed. Ways to assign access rights are established, and the authentication category is analyzed. We propose an algorithm for the optimization of the authentication process, and apply it to the application "Evaluation of TIC projects".

  2. On relation between distribution functions in hard and soft processes

    International Nuclear Information System (INIS)

    Kisselev, A.V.; Petrov, V.A.

    1992-10-01

    It is shown that in the particle-exchange model the hadron-hadron scattering amplitude admits a parton-like representation, with the distribution functions coinciding with those extracted from deep inelastic processes. (author). 13 refs.

  3. Accelerated life testing design using geometric process for pareto distribution

    OpenAIRE

    Mustafa Kamal; Shazia Zarrin; Arif Ul Islam

    2013-01-01

    In this paper the geometric process is used for the analysis of accelerated life testing under constant stress for the Pareto distribution. Assuming that the lifetimes under increasing stress levels form a geometric process, estimates of the parameters are obtained by using the maximum likelihood method for complete data. In addition, asymptotic interval estimates of the parameters of the distribution using the Fisher information matrix are also obtained. The statistical properties of the parameters ...
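
    A sketch of the setup the abstract describes: lifetimes across stress levels form a geometric process with Pareto marginals, and the shape parameter has a closed-form MLE once the data are pooled. The simplifying assumption below, that the geometric ratio a and the scale xm are known, is ours, not the paper's (which treats estimation more generally):

    ```python
    import numpy as np

    # Assumed setup: lifetimes at stress level k (k = 1..4) form a geometric
    # process, X_k = Y / a**(k-1), with Y ~ classical Pareto(xm, alpha).
    rng = np.random.default_rng(2)
    xm, alpha, a, n = 1.0, 2.5, 1.4, 200
    data = {k: xm * (1 + rng.pareto(alpha, n)) / a ** (k - 1)
            for k in range(1, 5)}

    # With a and xm known, rescaling pools all levels into one Pareto sample,
    # and the shape MLE has the closed form alpha_hat = N / sum(log(x / xm)).
    pooled = np.concatenate([x * a ** (k - 1) for k, x in data.items()])
    alpha_hat = pooled.size / np.sum(np.log(pooled / xm))
    print(alpha_hat)  # close to 2.5
    ```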

  4. SUPERPOSITION OF STOCHASTIC PROCESSES AND THE RESULTING PARTICLE DISTRIBUTIONS

    International Nuclear Information System (INIS)

    Schwadron, N. A.; Dayeh, M. A.; Desai, M.; Fahr, H.; Jokipii, J. R.; Lee, M. A.

    2010-01-01

    Many observations of suprathermal and energetic particles in the solar wind and the inner heliosheath show that distribution functions scale approximately with the inverse of particle speed (v) to the fifth power. Although there are exceptions to this behavior, there is a growing need to understand why this type of distribution function appears so frequently. This paper develops the concept that a superposition of exponential and Gaussian distributions with different characteristic speeds and temperatures shows power-law tails. The particular type of distribution function, f ∝ v^-5, appears in a number of different ways: (1) a series of Poisson-like processes where entropy is maximized with the rates of individual processes inversely proportional to the characteristic exponential speed, (2) a series of Gaussian distributions where the entropy is maximized with the rates of individual processes inversely proportional to temperature and the density of individual Gaussian distributions proportional to temperature, and (3) a series of different diffusively accelerated energetic particle spectra with individual spectra derived from observations (1997-2002) of a multiplicity of different shocks. Thus, we develop a proof-of-concept for the superposition of stochastic processes that give rise to power-law distribution functions.
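
    A quick numerical check of the superposition idea: integrating normalized exponential (in speed) distributions over a broad range of characteristic speeds with a suitable weight reproduces the v^-5 tail. The particular weight below is one illustrative choice that yields this scaling, not necessarily the paper's:

    ```python
    import numpy as np

    # Superpose 3-D-normalized exponential distributions
    # f(v; v0) = exp(-v/v0) / (8*pi*v0**3) with weight w(v0) ~ v0**-3.
    v = np.logspace(0, 2, 50)        # speeds at which to evaluate f
    v0 = np.logspace(-2, 3, 2000)    # characteristic speeds to superpose
    w = v0 ** -3
    f = np.trapz(w * np.exp(-np.outer(v, 1 / v0)) / (8 * np.pi * v0 ** 3),
                 v0, axis=1)
    slope = np.polyfit(np.log(v), np.log(f), 1)[0]
    print(slope)  # ~ -5, i.e. f ∝ v^-5
    ```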

  5. Mistaking geography for biology: inferring processes from species distributions.

    Science.gov (United States)

    Warren, Dan L; Cardillo, Marcel; Rosauer, Dan F; Bolnick, Daniel I

    2014-10-01

    Over the past few decades, there has been a rapid proliferation of statistical methods that infer evolutionary and ecological processes from data on species distributions. These methods have led to considerable new insights, but they often fail to account for the effects of historical biogeography on present-day species distributions. Because the geography of speciation can lead to patterns of spatial and temporal autocorrelation in the distributions of species within a clade, this can result in misleading inferences about the importance of deterministic processes in generating spatial patterns of biodiversity. In this opinion article, we discuss ways in which patterns of species distributions driven by historical biogeography are often interpreted as evidence of particular evolutionary or ecological processes. We focus on three areas that are especially prone to such misinterpretations: community phylogenetics, environmental niche modelling, and analyses of beta diversity (compositional turnover of biodiversity). Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.

  6. Characteristics of the Audit Processes for Distributed Informatics Systems

    Directory of Open Access Journals (Sweden)

    Marius POPA

    2009-01-01

    The paper addresses the main characteristics of distributed informatics systems, with examples, and the principal differences among them; the concepts, principles, techniques and fields involved in auditing distributed informatics systems; and the concept and classes of standards, their characteristics, and examples of standards, guidelines, procedures and controls for auditing distributed informatics systems. Distributed informatics systems are characterized by the following issues: development process, resources, implemented functionalities, architectures, system classes, and particularities. The audit framework has two sides: the audit process and the auditors. The audit process must be led in accordance with the standard specifications in the IT&C field, and the auditors must meet ethical principles and have a high level of professional skill and competence in the IT&C field.

  7. Image exploitation and dissemination prototype of distributed image processing

    International Nuclear Information System (INIS)

    Batool, N.; Huqqani, A.A.; Mahmood, A.

    2003-05-01

    Image processing application requirements can often be best met by using a distributed environment. This report presents a system that draws on existing LAN resources in a distributed computing environment, using Java and web technology for extensive processing, to make it truly system independent. Although the environment has been tested using image processing applications, its design and architecture are truly general and modular, so that it can be used for other applications as well that require distributed processing. Images originating from the server are fed to the workers along with the desired operations to be performed on them. The server distributes the task among the workers, who carry out the required operations and send back the results. This application has been implemented using the Remote Method Invocation (RMI) feature of Java. Java RMI allows an object running in one Java Virtual Machine (JVM) to invoke methods on another JVM, thus providing remote communication between programs written in the Java programming language. RMI can therefore be used to develop distributed applications [1]. We undertook this project to gain a better understanding of distributed systems concepts and their use for resource-hungry jobs. The image processing application is developed under this environment
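
    The report's implementation is Java RMI and is not reproduced here; as a language-neutral sketch of the same pattern, a server splitting an image among workers and invoking a remote operation, Python's standard xmlrpc module serves as an analogue:

    ```python
    import threading
    import time
    from xmlrpc.server import SimpleXMLRPCServer
    from xmlrpc.client import ServerProxy

    def invert(pixels):
        """Toy 'image processing' operation on a flat list of pixel values."""
        return [255 - p for p in pixels]

    # Worker: registers the operation for remote invocation (RMI analogue).
    server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
    server.register_function(invert)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    time.sleep(0.2)  # let the worker thread start accepting requests

    # "Server" side: split the image among workers, invoke remotely, merge.
    workers = [ServerProxy("http://localhost:8000")]
    image = list(range(256))
    chunks = [image[i::len(workers)] for i in range(len(workers))]
    results = [w.invert(chunk) for w, chunk in zip(workers, chunks)]
    print(results[0][:5])  # [255, 254, 253, 252, 251]
    ```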

  8. Wigner Ville Distribution in Signal Processing, using Scilab Environment

    Directory of Open Access Journals (Sweden)

    Petru Chioncel

    2011-01-01

    The Wigner-Ville distribution offers a visual display of quantitative information about the way a signal's energy is distributed in both time and frequency. Through that, this distribution embodies the fundamental concepts of Fourier and time-domain analysis. The energy of the signal is distributed so that specific frequencies are localized in time by the group delay, and at specific instants in time the frequency content is given by the instantaneous frequency. The net positive volume of the Wigner distribution is numerically equal to the signal's total energy. The paper shows the application of the Wigner-Ville distribution in the field of signal processing, using the Scilab environment.
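
    A minimal discrete Wigner-Ville implementation (in Python rather than the paper's Scilab) makes the definition concrete: an FFT over the lag variable of the instantaneous autocorrelation x[t+k]x*[t-k]:

    ```python
    import numpy as np

    def wigner_ville(x):
        """Discrete Wigner-Ville distribution of an analytic signal x:
        an FFT over the lag k of the instantaneous autocorrelation
        x[t+k] * conj(x[t-k]); rows are frequency bins, columns time."""
        n = len(x)
        W = np.zeros((n, n))
        for t in range(n):
            kmax = min(t, n - 1 - t)
            k = np.arange(-kmax, kmax + 1)
            r = np.zeros(n, dtype=complex)
            r[k % n] = x[t + k] * np.conj(x[t - k])
            W[:, t] = np.fft.fft(r).real  # real for analytic signals
        return W

    t = np.arange(256)
    x = np.exp(1j * 2 * np.pi * (0.05 + 0.0005 * t) * t)  # linear chirp
    W = wigner_ville(x)
    # Energy concentrates along bin 2*f_inst(t)*N (mod N); at t = 128,
    # f_inst = 0.05 + 0.001*128 = 0.178, so the peak bin is near 91.
    print(np.argmax(W[:, 128]))
    ```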

  9. Distributed learning process: principles of design and implementation

    Directory of Open Access Journals (Sweden)

    G. N. Boychenko

    2016-01-01

    At the present stage, broad usage of information and communication technologies (ICT) in educational practice is one of the leading trends in the development of the global education system. This trend has transformed models of instructional interaction. Scientists have developed the theory of distributed cognition (Salomon, G.; Hutchins, E.) and of distributed education and training (Fiore, S. M.; Salas, E.; Oblinger, D. G.; Barone, C. A.; Hawkins, B. L.). The educational process is based on two sub-processes, learning and teaching, separated in time and space and aimed at organizing flexible interactions between learners, teachers and educational content located in different, non-centralized places. The purpose of this design research is to find a solution to the problem of formalizing the design and realization of a distributed learning process, a problem significant in instructional design. The solution should take into account the specifics of distributed interactions between team members, who become the collective subject of distributed cognition in a distributed learning process. This makes it necessary to design the roles and functions of the individual team members performing distributed educational activities. Personal educational objectives should be determined by decomposing team objectives into the functional roles of team members, considering the personal and learning needs and interests of students. Theoretical and empirical methods used in the study: theoretical analysis of philosophical, psychological, and pedagogical literature on the issue; analysis of international standards in the e-learning domain; exploration of the practical usage of distributed learning in the academic and corporate sectors; and generalization, abstraction, cognitive modelling, and ontology engineering methods. The result of the research is a methodology for the design and implementation of a distributed learning process based on the competency approach. Methodology proposed by

  10. Parallel and distributed processing in power system simulation and control

    Energy Technology Data Exchange (ETDEWEB)

    Falcao, Djalma M [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia

    1994-12-31

    Recent advances in computer technology will certainly have a great impact on the methodologies used in power system expansion and operational planning as well as in real-time control. Parallel and distributed processing are among the new technologies that present great potential for application in these areas. Parallel computers use multiple functional or processing units to speed up computation, while distributed processing computer systems are collections of computers joined together by high-speed communication networks, with many objectives and advantages. The paper presents some ideas for the use of parallel and distributed processing in power system simulation and control. It also comments on some of the current research work on these topics and presents a summary of the work presently being developed at COPPE. (author) 53 refs., 2 figs.

  11. Parallel Distributed Processing theory in the age of deep networks

    OpenAIRE

    Bowers, Jeffrey

    2017-01-01

    Parallel Distributed Processing (PDP) models in psychology are the precursors of deep networks used in computer science. However, only PDP models are associated with two core psychological claims, namely, that all knowledge is coded in a distributed format, and cognition is mediated by non-symbolic computations. These claims have long been debated within cognitive science, and recent work with deep networks speaks to this debate. Specifically, single-unit recordings show that deep networks le...

  12. Resource depletion promotes automatic processing: implications for distribution of practice.

    Science.gov (United States)

    Scheel, Matthew H

    2010-12-01

    Recent models of cognition include two processing systems: an automatic system that relies on associative learning, intuition, and heuristics, and a controlled system that relies on deliberate consideration. Automatic processing requires fewer resources and is more likely when resources are depleted. This study showed that prolonged practice on a resource-depleting mental arithmetic task promoted automatic processing on a subsequent problem-solving task, as evidenced by faster responding and more errors. Distribution of practice effects (0, 60, 120, or 180 sec. between problems) on rigidity also disappeared when groups had equal time on resource-depleting tasks. These results suggest that distribution of practice effects is reducible to resource availability. The discussion includes implications for interpreting discrepancies in the traditional distribution of practice effect.

  13. Listening to professional voices: draft 2 of the ACM code of ethics and professional conduct

    OpenAIRE

    Flick, Catherine; Brinkman, Bo; Gotterbarn, D. W.; Miller, Keith; Vazansky, Kate; Wolf, Marty J.

    2017-01-01

    For the first time since 1992, the ACM Code of Ethics and Professional Conduct (the Code) is being updated. The Code Update Task Force in conjunction with the Committee on Professional Ethics is seeking advice from ACM members on the update. We indicated many of the motivations for changing the Code when we shared Draft 1 of Code 2018 with the ...

  14. Theoretical interpretation of the nuclear structure of 88Se within the ACM and the QPM models.

    Science.gov (United States)

    Gratchev, I. N.; Thiamova, G.; Alexa, P.; Simpson, G. S.; Ramdhane, M.

    2018-02-01

    The four-parameter algebraic collective model (ACM) Hamiltonian is used to describe the nuclear structure of 88Se. It is shown that the ACM is capable of providing a reasonable description of the excitation energies and relative positions of the ground-state band and γ band. The most probable interpretation of the nuclear structure of 88Se is that of a transitional nucleus. The Quasiparticle-plus-Phonon Model (QPM) was also applied to describe the nuclear motion in 88Se. Preliminary calculations show that the collectivity of the second excited state 2_2^+ is weak and that this state contains a strong two-quasiparticle component.

  15. A Technical Survey on Optimization of Processing Geo Distributed Data

    Science.gov (United States)

    Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.

    2018-04-01

    With growing cloud services and technology, there is growth in the number of geographically distributed data centers storing large amounts of data. Analysis of geo-distributed data is required by various services for data processing, storage of essential information, etc.; processing this geo-distributed data and performing analytics on it is a challenging task. Distributed data processing is accompanied by issues in storage, computation and communication, and the key issues to be dealt with are time efficiency, cost minimization and utility maximization. This paper describes various optimization methods, such as end-to-end multiphase and G-MR, using techniques such as Map-Reduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE and AMP (Ant Colony Optimization) to handle these issues. In this paper the various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving quality of service with reduced computation and communication cost; and SAGE achieves performance improvement in processing geo-distributed data sets.

  16. Characterization of a novel arginine catabolic mobile element (ACME) and staphylococcal chromosomal cassette mec composite island with significant homology to Staphylococcus epidermidis ACME type II in methicillin-resistant Staphylococcus aureus genotype ST22-MRSA-IV.

    LENUS (Irish Health Repository)

    Shore, Anna C

    2011-05-01

    The arginine catabolic mobile element (ACME) is prevalent among methicillin-resistant Staphylococcus aureus (MRSA) isolates of sequence type 8 (ST8) and staphylococcal chromosomal cassette mec (SCCmec) type IVa (USA300) (ST8-MRSA-IVa isolates), and evidence suggests that ACME enhances the ability of ST8-MRSA-IVa to grow and survive on its host. ACME has been identified in a small number of isolates belonging to other MRSA clones but is widespread among coagulase-negative staphylococci (CoNS). This study reports the first description of ACME in two distinct strains of the pandemic ST22-MRSA-IV clone. A total of 238 MRSA isolates recovered in Ireland between 1971 and 2008 were investigated for ACME using a DNA microarray. Twenty-three isolates (9.7%) were ACME positive, and all were either MRSA genotype ST8-MRSA-IVa (7/23, 30%) or MRSA genotype ST22-MRSA-IV (16/23, 70%). Whole-genome sequencing and comprehensive molecular characterization revealed the presence of a novel 46-kb ACME and staphylococcal chromosomal cassette mec (SCCmec) composite island (ACME/SCCmec-CI) in ST22-MRSA-IVh isolates (n=15). This ACME/SCCmec-CI consists of a 12-kb DNA region previously identified in ACME type II in S. epidermidis ATCC 12228, a truncated copy of the J1 region of SCCmec type I, and a complete SCCmec type IVh element. The composite island has a novel genetic organization, with ACME located within orfX and SCCmec located downstream of ACME. One PVL locus-positive ST22-MRSA-IVa isolate carried ACME located downstream of SCCmec type IVa, as previously described in ST8-MRSA-IVa. These results suggest that ACME has been acquired by ST22-MRSA-IV on two independent occasions. At least one of these instances may have involved horizontal transfer and recombination events between MRSA and CoNS. The presence of ACME may enhance dissemination of ST22-MRSA-IV, an already successful MRSA clone.

  17. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
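
    The information sharing protocol itself is not given in the abstract; the pattern it describes, producers publishing processed telemetry parameters and flight-controller applications subscribing by name, can be sketched minimally (all names are illustrative):

    ```python
    from collections import defaultdict

    class TelemetryHub:
        """Minimal publish/subscribe sketch of an information-sharing layer:
        producers publish named parameters, consumers register callbacks."""

        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, name, callback):
            self.subscribers[name].append(callback)

        def publish(self, name, value):
            # Fan the update out to every application that asked for it.
            for callback in self.subscribers[name]:
                callback(name, value)

    hub = TelemetryHub()
    hub.subscribe("cabin_pressure", lambda n, v: print(f"{n} = {v} kPa"))
    hub.publish("cabin_pressure", 101.3)
    ```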

  19. Multilingual Information Discovery and AccesS (MIDAS): A Joint ACM DL'99/ ACM SIGIR'99 Workshop.

    Science.gov (United States)

    Oard, Douglas; Peters, Carol; Ruiz, Miguel; Frederking, Robert; Klavans, Judith; Sheridan, Paraic

    1999-01-01

    Discusses a multidisciplinary workshop that addressed issues concerning internationally distributed information networks. Highlights include multilingual information access in media other than character-coded text; cross-language information retrieval and multilingual metadata; and evaluation of multilingual systems. (LRW)

  20. Benchmarking Distributed Stream Processing Platforms for IoT Applications

    OpenAIRE

    Shukla, Anshu; Simmhan, Yogesh

    2016-01-01

    Internet of Things (IoT) is a technology paradigm where millions of sensors monitor, and help inform or manage, physical, environmental and human systems in real-time. The inherent closed-loop responsiveness and decision making of IoT applications makes them ideal candidates for using low latency and scalable stream processing platforms. Distributed Stream Processing Systems (DSPS) are becoming essential components of any IoT stack, but the efficacy and performance of contemporary DSP...

  1. Particularities of Verification Processes for Distributed Informatics Applications

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2013-01-01

    This paper presents distributed informatics applications and the characteristics of their development cycle. It defines the concept of verification and identifies the differences from software testing. Particularities of the software testing and software verification processes are described. The verification steps and necessary conditions are presented, and influence factors of verification quality are established. Software optimality verification is analyzed, and some metrics are defined for the verification process.

  2. Distributed processing and analysis of ATLAS experimental data

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment has been taking data steadily since Autumn 2009, collecting close to 1 fb^-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the first...

  3. Distributed processing and analysis of ATLAS experimental data

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment has been taking data steadily since Autumn 2009, and has collected so far over 5 fb^-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the...

  4. Just-in-time Data Distribution for Analytical Query Processing

    NARCIS (Netherlands)

    M.G. Ivanova (Milena); M.L. Kersten (Martin); F.E. Groffen (Fabian)

    2012-01-01

    Distributed processing commonly requires data spread across machines using a priori static or hash-based data allocation. In this paper, we explore an alternative approach that starts from a master node in control of the complete database, and a variable number of worker nodes

  5. Distributed Iterative Processing for Interference Channels with Receiver Cooperation

    DEFF Research Database (Denmark)

    Badiu, Mihai Alin; Manchón, Carles Navarro; Bota, Vasile

    2012-01-01

    We propose a method for the design and evaluation of distributed iterative algorithms for receiver cooperation in interference-limited wireless systems. Our approach views the processing within and collaboration between receivers as the solution to an inference problem in the probabilistic model...

  6. First-Passage-Time Distribution for Variable-Diffusion Processes

    Science.gov (United States)

    Barney, Liberty; Gunaratne, Gemunu H.

    2017-05-01

    First-passage-time distribution, which presents the likelihood of a stock reaching a pre-specified price at a given time, is useful in establishing the value of financial instruments and in designing trading strategies. The first-passage-time distribution for Wiener processes has a single peak, while that for stocks exhibits a notable second peak within a trading day. This feature has only been discussed sporadically, often dismissed as due to insufficient or incorrect data or circumvented by conversion to tick time, and to the best of our knowledge has not been explained in terms of the underlying stochastic process. It was shown previously that intra-day variations in the market can be modeled by a stochastic process containing two variable-diffusion processes (Hua et al., Physica A 419:221-233, 2015). We show here that the first-passage-time distribution of this two-stage variable-diffusion model does exhibit a behavior similar to the empirical observation. In addition, we find that an extended model incorporating overnight price fluctuations exhibits intra- and inter-day behavior similar to that of empirical first-passage-time distributions.
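
    A Monte Carlo sketch of the idea: Brownian paths whose diffusion coefficient is elevated near the open and close of a trading day produce a first-passage-time histogram with structure beyond the single Wiener peak. The U-shaped volatility profile below is an illustrative stand-in for the two-stage variable-diffusion model, not the authors' calibration:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_paths, n_steps, dt, barrier = 20000, 390, 1.0, 4.0  # a 390-minute day
    t = np.arange(n_steps) * dt
    # U-shaped intraday volatility: elevated near the open and the close.
    sigma = 1.0 + 0.8 * np.exp(-t / 30.0) + 0.8 * np.exp(-(t[-1] - t) / 30.0)
    steps = rng.normal(0.0, 1.0, (n_paths, n_steps)) * sigma * np.sqrt(dt)
    paths = np.cumsum(steps, axis=1)
    hit = paths >= barrier
    first = np.where(hit.any(axis=1), hit.argmax(axis=1), -1)
    fpt = first[first >= 0] * dt
    hist, _ = np.histogram(fpt, bins=39, range=(0.0, n_steps * dt))
    print(hist)  # passage counts cluster early and rise again near the close
    ```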

  7. Hypofractionated stereotactic radiotherapy (HFSRT) for who grade I anterior clinoid meningiomas (ACM).

    Science.gov (United States)

    Demiral, Selcuk; Dincoglan, Ferrat; Sager, Omer; Gamsiz, Hakan; Uysal, Bora; Gundem, Esin; Elcim, Yelda; Dirican, Bahar; Beyzadeoglu, Murat

    2016-11-01

    While microsurgical resection plays a central role in the management of ACMs, extensive surgery may be associated with substantial morbidity, particularly for tumors in intimate association with critical structures. In this study, we evaluated the use of HFSRT in the management of ACM. A total of 22 patients with ACM were treated using HFSRT. Frameless image-guided volumetric modulated arc therapy (VMAT) was performed with a 6 MV linear accelerator (LINAC). The total dose was 25 Gy delivered in five fractions over five consecutive treatment days. Local control (LC) and progression-free survival (PFS) rates were calculated using the Kaplan-Meier method. The Common Terminology Criteria for Adverse Events, version 4.0, was used in toxicity grading. Of the total 22 patients, the outcomes of 19 patients with at least 36 months of periodic follow-up were assessed. Median patient age was 40 years (range 24-77 years). Median follow-up time was 53 months (range 36-63 months). LC and PFS rates were 100% and 89.4% at 1 and 3 years, respectively. Only two patients (10.5%) experienced clinical deterioration during the follow-up period. LINAC-based HFSRT offers high rates of LC and PFS for patients with ACMs.

  8. Proceedings of the 16th ACM SIGPLAN international conference on Functional programming

    DEFF Research Database (Denmark)

    Danvy, Olivier

    Welcome to the 16th ACM SIGPLAN International Conference on Functional Programming -- ICFP'11. The picture, on the front cover, is of Mount Fuji, seen from the 20th floor of the National Institute of Informatics (NII). It was taken by Sebastian Fischer in January 2011. In Japanese, the characters...

  9. Searching Spontaneous Conversational Speech. Proceedings of ACM SIGIR Workshop (SSCS2008)

    NARCIS (Netherlands)

    Köhler, J.; Larson, M.; de Jong, Franciska M.G.; Ordelman, Roeland J.F.; Kraaij, W.

    2008-01-01

    The second workshop on Searching Spontaneous Conversational Speech (SSCS 2008) was held in Singapore on July 24, 2008 in conjunction with the 31st Annual International ACM SIGIR Conference. The goal of the workshop was to bring the speech community and the information retrieval community together.

  10. Validation of the Adolescent Concerns Measure (ACM): Evidence from Exploratory and Confirmatory Factor Analysis

    Science.gov (United States)

    Ang, Rebecca P.; Chong, Wan Har; Huan, Vivien S.; Yeo, Lay See

    2007-01-01

    This article reports the development and initial validation of scores obtained from the Adolescent Concerns Measure (ACM), a scale which assesses concerns of Asian adolescent students. In Study 1, findings from exploratory factor analysis using 619 adolescents suggested a 24-item scale with four correlated factors--Family Concerns (9 items), Peer…

  11. ACME: A scalable parallel system for extracting frequent patterns from a very long sequence

    KAUST Repository

    Sahli, Majed; Mansour, Essam; Kalnis, Panos

    2014-01-01

    This paper presents ACME, a combinatorial approach that scales to gigabyte-long sequences and is the first to support supermaximal motifs. ACME is a versatile parallel system that can be deployed on desktop multi-core systems, or on thousands of CPUs in the cloud. However, merely using more compute nodes does not guarantee efficiency, because of the related overheads...

  12. ACME: A scalable parallel system for extracting frequent patterns from a very long sequence

    KAUST Repository

    Sahli, Majed

    2014-10-02

    Modern applications, including bioinformatics, time series, and web log analysis, require the extraction of frequent patterns, called motifs, from one very long (i.e., several gigabytes) sequence. Existing approaches are either heuristics that are error-prone, or exact (also called combinatorial) methods that are extremely slow, therefore, applicable only to very small sequences (i.e., in the order of megabytes). This paper presents ACME, a combinatorial approach that scales to gigabyte-long sequences and is the first to support supermaximal motifs. ACME is a versatile parallel system that can be deployed on desktop multi-core systems, or on thousands of CPUs in the cloud. However, merely using more compute nodes does not guarantee efficiency, because of the related overheads. To this end, ACME introduces an automatic tuning mechanism that suggests the appropriate number of CPUs to utilize, in order to meet the user constraints in terms of run time, while minimizing the financial cost of cloud resources. Our experiments show that, compared to the state of the art, ACME supports three orders of magnitude longer sequences (e.g., DNA for the entire human genome); handles large alphabets (e.g., English alphabet for Wikipedia); scales out to 16,384 CPUs on a supercomputer; and supports elastic deployment in the cloud.
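
    The automatic tuning idea lends itself to a compact illustration: choose the cheapest CPU count whose predicted run time meets the user's deadline. The Amdahl-style speedup model and prices below are assumptions for the sketch, not ACME's actual cost model.

```python
# Illustrative sketch of the tuning decision described above: pick a CPU
# count that meets a user run-time constraint while minimizing cloud cost.
# The speedup model and prices are assumed, not taken from ACME.
def choose_cpus(serial_hours, serial_fraction, deadline_hours,
                price_per_cpu_hour, max_cpus=16_384):
    best = None
    for n in range(1, max_cpus + 1):
        # Amdahl-style runtime estimate for n CPUs
        runtime = serial_hours * (serial_fraction + (1 - serial_fraction) / n)
        if runtime <= deadline_hours:
            cost = runtime * n * price_per_cpu_hour
            if best is None or cost < best[2]:
                best = (n, runtime, cost)
    return best  # (cpus, hours, dollars), or None if the deadline is infeasible

# Prints the cheapest feasible configuration for a 100-hour serial job.
print(choose_cpus(serial_hours=100, serial_fraction=0.02,
                  deadline_hours=3.1, price_per_cpu_hour=0.05))
```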

  13. Microstructure and chemical bonding of DLC films deposited on ACM rubber by PACVD

    NARCIS (Netherlands)

    Martinez-Martinez, D.; Schenkel, M.; Pei, Y.T.; Sánchez-López, J.C.; Hosson, J.Th.M. De

    2011-01-01

    The microstructure and chemical bonding of DLC films prepared by plasma assisted chemical vapor deposition on acrylic rubber (ACM) are studied in this paper. The temperature variation produced by the ion impingement during plasma cleaning and subsequent film deposition was used to modify the film...

  14. ACME - Algorithms for Contact in a Multiphysics Environment API Version 1.0

    International Nuclear Information System (INIS)

    BROWN, KEVIN H.; SUMMERS, RANDALL M.; GLASS, MICHEAL W.; GULLERUD, ARNE S.; HEINSTEIN, MARTIN W.; JONES, REESE E.

    2001-01-01

    An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library

  15. Extra-Margins in ACM's Adjusted NMa ‘Mortgage-Rate-Calculation Method’

    NARCIS (Netherlands)

    Dijkstra, M.; Schinkel, M.P.

    2013-01-01

    We analyse the development since 2004 of our concept of extra-margins on Dutch mortgages (Dijkstra & Schinkel, 2012), based on funding cost estimations in ACM (2013), which are an update of those in NMa (2011). Neither costs related to increased mortgage-specific risks, nor the inclusion of Basel...

  16. Proceedings: Distributed digital systems, plant process computers, and networks

    International Nuclear Information System (INIS)

    1995-03-01

    These are the proceedings of a workshop on Distributed Digital Systems, Plant Process Computers, and Networks held in Charlotte, North Carolina on August 16--18, 1994. The purpose of the workshop was to provide a forum for technology transfer, technical information exchange, and education. The workshop was attended by more than 100 representatives of electric utilities, equipment manufacturers, engineering service organizations, and government agencies. The workshop consisted of three days of presentations, exhibitions, a panel discussion and attendee interactions. The original plant process computers at nuclear power plants are becoming obsolete, making it increasingly difficult for them to support plant operations and maintenance effectively. Some utilities have already replaced their plant process computers with more powerful modern computers, while many other utilities intend to replace their aging plant process computers in the future. Information on recent and planned implementations is presented. Choosing an appropriate communications and computing network architecture facilitates integrating new systems and provides functional modularity for both hardware and software. Control room improvements such as CRT-based distributed monitoring and control, as well as digital decision and diagnostic aids, can improve plant operations. Commercially available digital products connected to the plant communications system are now readily available to provide distributed processing where needed. Plant operations, maintenance activities, and engineering analyses can be supported in a cost-effective manner. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  17. Distributed quantum information processing via quantum dot spins

    International Nuclear Information System (INIS)

    Jun, Liu; Qiong, Wang; Le-Man, Kuang; Hao-Sheng, Zeng

    2010-01-01

    We propose a scheme to engineer a non-local two-qubit phase gate between two remote quantum-dot spins. Along with one-qubit local operations, one can in principle perform various types of distributed quantum information processing. The scheme employs a linearly polarised photon interacting one after the other with two remote quantum-dot spins in cavities. Due to the optical spin selection rule, the photon obtains a Faraday rotation after the interaction process. By measuring the polarisation of the final output photon, a non-local two-qubit phase gate between the two remote quantum-dot spins is constituted. Our scheme may have very important applications in distributed quantum information processing.
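
    For reference, a two-qubit phase gate of the kind constituted here acts trivially on all computational basis states except |11⟩; in matrix form (the standard textbook definition, not a detail taken from this proposal):

```latex
% Two-qubit controlled-phase gate in the computational basis
% {|00>, |01>, |10>, |11>}; phi is the accumulated phase.
U_{\mathrm{phase}}(\varphi) =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & e^{i\varphi}
\end{pmatrix}
```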

  18. Radar data processing using a distributed computational system

    Science.gov (United States)

    Mota, Gilberto F.

    1992-06-01

    This research specifies and validates a new concurrent decomposition scheme, called Confined Space Search Decomposition (CSSD), to exploit parallelism of Radar Data Processing algorithms using a Distributed Computational System. To formalize the specification, we propose and apply an object-oriented methodology called Decomposition Cost Evaluation Model (DCEM). To reduce the penalties of load imbalance, we propose a distributed dynamic load balance heuristic called Object Reincarnation (OR). To validate the research, we first compare our decomposition with an identified alternative using the proposed DCEM model and then develop a theoretical prediction of selected parameters. We also develop a simulation to check the Object Reincarnation Concept.

  19. Beowulf Distributed Processing and the United States Geological Survey

    Science.gov (United States)

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing

  20. Distribution of Selected Trace Elements in the Bayer Process

    Directory of Open Access Journals (Sweden)

    Johannes Vind

    2018-05-01

    The aim of this work was to achieve an understanding of the distribution of selected bauxite trace elements (gallium (Ga), vanadium (V), arsenic (As), chromium (Cr), rare earth elements (REEs), scandium (Sc)) in the Bayer process. The assessment was designed as a case study in an alumina plant in operation to provide an overview of the trace elements' behaviour in an actual industrial setup. A combination of analytical techniques was used, mainly inductively coupled plasma mass spectrometry and optical emission spectroscopy as well as instrumental neutron activation analysis. It was found that Ga, V and As as well as, to a minor extent, Cr are principally accumulated in Bayer process liquors. In addition, Ga is also fractionated to alumina at the end of the Bayer processing cycle. The rest of these elements pass to bauxite residue. REEs and Sc have the tendency to remain practically unaffected in the solid phases of the Bayer process and, therefore, at least 98% of their mass is transferred to bauxite residue. The interest in such a study originates from the fact that many of these trace constituents of bauxite ore could potentially become valuable by-products of the Bayer process; therefore, the understanding of their behaviour needs to be expanded. In fact, Ga and V are already by-products of the Bayer process, but their distribution patterns have not been provided in the existing open literature.

  1. Compiling software for a hierarchical distributed processing system

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-12-31

    Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendents; sending to the selected node only the compiled software to be executed by the selected node or selected node's descendent.
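
    The claimed flow can be sketched in a few lines: a compiling node keeps the binaries targeted at itself and forwards to each next-tier node only the binaries needed somewhere in that node's subtree. The tree shape and naming below are hypothetical, not taken from the patent.

```python
# Minimal sketch of hierarchical distribution of compiled software:
# each node keeps its own binary and forwards to a child only the
# binaries needed within that child's subtree.
def subtree(tree, node):
    """All nodes in the subtree rooted at `node` (including `node`)."""
    nodes = {node}
    for child in tree.get(node, []):
        nodes |= subtree(tree, child)
    return nodes

def distribute(tree, binaries, node):
    """binaries: {target_node: compiled_software}; returns delivery map."""
    deliveries = {node: binaries.get(node)}
    for child in tree.get(node, []):
        needed = {n: b for n, b in binaries.items() if n in subtree(tree, child)}
        deliveries.update(distribute(tree, needed, child))
    return deliveries

tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": []}
binaries = {n: f"bin-{n}" for n in ["root", "a", "a1", "a2", "b"]}
print(distribute(tree, binaries, "root"))
```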

  2. An Effective Framework for Distributed Geospatial Query Processing in Grids

    Directory of Open Access Journals (Sweden)

    CHEN, B.

    2010-08-01

    The emergence of the Internet has greatly revolutionized the way that geospatial information is collected, managed, processed and integrated. There are several important research issues to be addressed for distributed geospatial applications. First, the performance of geospatial applications needs to be considered in the Internet environment. In this regard, the Grid as an effective distributed computing paradigm is a good choice. The Grid uses a series of middleware to interconnect and merge various distributed resources into a super-computer with capability of high performance computation. Secondly, it is necessary to ensure the secure use of independent geospatial applications in the Internet environment. The Grid just provides the utility of secure access to distributed geospatial resources. Additionally, it makes good sense to overcome the heterogeneity between individual geospatial information systems in the Internet. The Open Geospatial Consortium (OGC) proposes a number of generalized geospatial standards, e.g. OGC Web Services (OWS), to achieve interoperable access to geospatial applications. The OWS solution is feasible and widely adopted by both the academic community and the industry community. Therefore, we propose an integrated framework by incorporating OWS standards into Grids. Upon the framework distributed geospatial queries can be performed in an interoperable, high-performance and secure Grid environment.

  3. Distributed Sensing and Processing Adaptive Collaboration Environment (D-SPACE)

    Science.gov (United States)

    2014-07-01

    ... "cloud" technologies are not appropriate for situation understanding in areas of denial, where computation resources are limited, data not easily ... graph matching process. D-SPACE distributes graph exploitation among a network of autonomous computational resources, designs the collaboration policy ...

  4. A Process for Comparing Dynamics of Distributed Space Systems Simulations

    Science.gov (United States)

    Cures, Edwin Z.; Jackson, Albert A.; Morris, Jeffery C.

    2009-01-01

    The paper describes a process that was developed for comparing the primary orbital dynamics behavior between space systems distributed simulations. This process is used to characterize and understand the fundamental fidelities and compatibilities of the modeling of orbital dynamics between spacecraft simulations. This is required for high-latency distributed simulations such as NASA's Integrated Mission Simulation and must be understood when reporting results from simulation executions. This paper presents 10 principal comparison tests along with their rationale and examples of the results. The Integrated Mission Simulation (IMSim) (formerly known as the Distributed Space Exploration Simulation (DSES)) is a NASA research and development project focusing on the technologies and processes that are related to the collaborative simulation of complex space systems involved in the exploration of our solar system. Currently, the NASA centers that are actively participating in the IMSim project are the Ames Research Center, the Jet Propulsion Laboratory (JPL), the Johnson Space Center (JSC), the Kennedy Space Center, the Langley Research Center and the Marshall Space Flight Center. In concept, each center participating in IMSim has its own set of simulation models and environment(s). These simulation tools are used to build the various simulation products that are used for scientific investigation, engineering analysis, system design, training, planning, operations and more. Working individually, these production simulations provide important data to various NASA projects.

  5. Syntactic processing is distributed across the language system.

    Science.gov (United States)

    Blank, Idan; Balewski, Zuzanna; Mahowald, Kyle; Fedorenko, Evelina

    2016-02-15

    Language comprehension recruits an extended set of regions in the human brain. Is syntactic processing localized to a particular region or regions within this system, or is it distributed across the entire ensemble of brain regions that support high-level linguistic processing? Evidence from aphasic patients is more consistent with the latter possibility: damage to many different language regions and to white-matter tracts connecting them has been shown to lead to similar syntactic comprehension deficits. However, brain imaging investigations of syntactic processing continue to focus on particular regions within the language system, often parts of Broca's area and regions in the posterior temporal cortex. We hypothesized that, whereas the entire language system is in fact sensitive to syntactic complexity, the effects in some regions may be difficult to detect because of the overall lower response to language stimuli. Using an individual-subjects approach to localizing the language system, shown in prior work to be more sensitive than traditional group analyses, we indeed find responses to syntactic complexity throughout this system, consistent with the findings from the neuropsychological patient literature. We speculate that such distributed nature of syntactic processing could perhaps imply that syntax is inseparable from other aspects of language comprehension (e.g., lexico-semantic processing), in line with current linguistic and psycholinguistic theories and evidence. Neuroimaging investigations of syntactic processing thus need to expand their scope to include the entire system of high-level language processing regions in order to fully understand how syntax is instantiated in the human brain. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. An Educational Tool for Interactive Parallel and Distributed Processing

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2011-01-01

    In this paper we try to describe how the Modular Interactive Tiles System (MITS) can be a valuable tool for introducing students to interactive parallel and distributed processing programming. This is done by providing an educational hands-on tool that allows a change of representation of the abstract problems related to designing interactive parallel and distributed systems. Indeed, MITS seems to bring a series of goals into the education, such as parallel programming, distributedness, communication protocols, master dependency, software behavioral models, adaptive interactivity, feedback, connectivity, topology, island modeling, user and multiuser interaction, which can hardly be found in other tools. Finally, we introduce the system of modular interactive tiles as a tool for easy, fast, and flexible hands-on exploration of these issues, and through examples show how to implement interactive...

  7. Flexible distributed architecture for semiconductor process control and experimentation

    Science.gov (United States)

    Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.

    1997-01-01

    Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD interferometry based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that utilizes the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: specific implementation of any one task does not restrict the implementation of another. The low level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server manages connections from internet/intranet (web) based clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.
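
    As a rough illustration of a "predefined set of TCP/IP socket based messages," the sketch below frames one request as length-prefixed JSON. The message schema, host name, and port are purely hypothetical, not the MIT system's actual protocol.

```python
# Hypothetical sketch of one framed request/reply exchange between
# controllers over TCP; schema, host, and port are invented.
import json
import socket

def send_message(host, port, msg_type, payload):
    """Send one length-prefixed JSON message and return the decoded reply."""
    body = json.dumps({"type": msg_type, "payload": payload}).encode()
    with socket.create_connection((host, port)) as s:
        s.sendall(len(body).to_bytes(4, "big") + body)
        n = int.from_bytes(s.recv(4), "big")   # single recv() for brevity
        return json.loads(s.recv(n))

# e.g. send_message("cell-controller", 5000, "SET_RECIPE", {"step": 3})
```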

  8. Heat and work distributions for mixed Gauss–Cauchy process

    International Nuclear Information System (INIS)

    Kuśmierz, Łukasz; Gudowska-Nowak, Ewa; Rubi, J Miguel

    2014-01-01

    We analyze energetics of a non-Gaussian process described by a stochastic differential equation of the Langevin type. The process represents a paradigmatic model of a nonequilibrium system subject to thermal fluctuations and additional external noise, with both sources of perturbations considered as additive and statistically independent forcings. We define thermodynamic quantities for trajectories of the process and analyze contributions to mechanical work and heat. As a working example we consider a particle subjected to a drag force and two statistically independent Lévy white noises with stability indices α = 2 and α = 1. The fluctuations of dissipated energy (heat) and distribution of work performed by the force acting on the system are addressed by examining contributions of Cauchy fluctuations (α = 1) to either bath or external force acting on the system. (paper)
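
    The working example translates directly into a simulation sketch: an overdamped particle under a drag force and a constant external force, driven by independent Gaussian (alpha = 2) and Cauchy (alpha = 1) white noises, with the work along each trajectory computed as W = f * (x_T - x_0). Parameter values below are arbitrary illustrations, not the paper's.

```python
# Sketch: overdamped Langevin dynamics with a constant force plus two
# independent Levy white noises, alpha=2 (Gaussian) and alpha=1 (Cauchy).
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, dt = 5_000, 1_000, 1e-3
gamma, f, D_gauss, D_cauchy = 1.0, 0.5, 1.0, 0.2

x = np.zeros(n_paths)
for _ in range(n_steps):
    gauss = np.sqrt(2 * D_gauss * dt) * rng.standard_normal(n_paths)
    cauchy = D_cauchy * dt * rng.standard_cauchy(n_paths)  # alpha=1: scales as dt
    x += (-gamma * x + f) * dt + gauss + cauchy

work = f * x  # trajectories started from x_0 = 0
print("median work:", np.median(work))  # heavy tails make the mean unstable
```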

  9. Learning from the History of Distributed Query Processing

    DEFF Research Database (Denmark)

    Betz, Heiko; Gropengießer, Francis; Hose, Katja

    2012-01-01

    The vision of the Semantic Web has triggered the development of various new applications and opened up new directions in research. Recently, much effort has been put into the development of techniques for query processing over Linked Data. Being based upon techniques originally developed for distributed and federated databases, some of them inherit the same or similar problems. Thus, the goal of this paper is to point out pitfalls that the previous generation of researchers has already encountered and to introduce Linked Data as a Service as an idea that has the potential to solve the problem in some scenarios. Hence, this paper discusses nine theses about Linked Data processing and sketches a research agenda for future endeavors in the area of Linked Data processing.

  10. An educational tool for interactive parallel and distributed processing

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2012-01-01

    In this article we try to describe how the modular interactive tiles system (MITS) can be a valuable tool for introducing students to interactive parallel and distributed processing programming. This is done by providing a hands-on educational tool that allows a change in the representation of abstract problems related to designing interactive parallel and distributed systems. Indeed, the MITS seems to bring a series of goals into education, such as parallel programming, distributedness, communication protocols, master dependency, software behavioral models, adaptive interactivity, feedback, connectivity, topology, island modeling, and user and multi-user interaction which can rarely be found in other tools. Finally, we introduce the system of modular interactive tiles as a tool for easy, fast, and flexible hands-on exploration of these issues, and through examples we show how to implement...

  11. Parallel Distributed Processing Theory in the Age of Deep Networks.

    Science.gov (United States)

    Bowers, Jeffrey S

    2017-12-01

    Parallel distributed processing (PDP) models in psychology are the precursors of deep networks used in computer science. However, only PDP models are associated with two core psychological claims, namely that all knowledge is coded in a distributed format and cognition is mediated by non-symbolic computations. These claims have long been debated in cognitive science, and recent work with deep networks speaks to this debate. Specifically, single-unit recordings show that deep networks learn units that respond selectively to meaningful categories, and researchers are finding that deep networks need to be supplemented with symbolic systems to perform some tasks. Given the close links between PDP and deep networks, it is surprising that research with deep networks is challenging PDP theory. Copyright © 2017. Published by Elsevier Ltd.

  12. Effects of activated ACM on expression of signal transducers in cerebral cortical neurons of rats.

    Science.gov (United States)

    Wang, Xiaojing; Li, Zhengli; Zhu, Changgeng; Li, Zhongyu

    2007-06-01

    To explore the roles of astrocytes in epileptogenesis, astrocytes and neurons were isolated, purified and cultured in vitro from the cerebral cortex of rats. The astrocytes were activated by ciliary neurotrophic factor (CNTF) and astrocytic conditioned medium (ACM) was collected to treat neurons for 4, 8 and 12 h. By using Western blot, the expression of calmodulin dependent protein kinase II (CaMK II), inducible nitric oxide synthase (iNOS) and adenylate cyclase (AC) was detected in neurons. The results showed that the expression of CaMK II, iNOS and AC was increased significantly in the neurons treated with ACM from 4 h to 12 h (P<0.05), suggesting that ACM and such signal pathways as NOS-NO-cGMP, Ca2+/CaM-CaMK II and AC-cAMP-PKA might take part in the signal transduction of epileptogenesis.

  13. ARM Airborne Carbon Measurements (ARM-ACME) and ARM-ACME 2.5 Final Campaign Reports

    Energy Technology Data Exchange (ETDEWEB)

    Biraud, S. C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Torn, M. S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sweeney, C. [NOAA Earth Systems Research Lab., Boulder, CO (United States)

    2016-01-01

    We report on a 5-year multi-institution and multi-agency airborne study of atmospheric composition and carbon cycling at the Atmospheric Radiation Measurement (ARM) Climate Research Facility’s Southern Great Plains (SGP) site, with scientific objectives that are central to the carbon-cycle and radiative-forcing goals of the U.S. Global Change Research Program and the North American Carbon Program (NACP). The goal of these measurements is to improve understanding of 1) the carbon exchange of the Atmospheric Radiation Measurement (ARM) SGP region; 2) how CO2 and associated water and energy fluxes influence radiative-forcing, convective processes, and CO2 concentrations over the ARM SGP region, and 3) how greenhouse gases are transported on continental scales.

  14. Modeling nanoparticle uptake and intracellular distribution using stochastic process algebras

    Energy Technology Data Exchange (ETDEWEB)

    Dobay, M. P. D., E-mail: maria.pamela.david@physik.uni-muenchen.de; Alberola, A. Piera; Mendoza, E. R.; Raedler, J. O., E-mail: joachim.raedler@physik.uni-muenchen.de [Ludwig-Maximilians University, Faculty of Physics, Center for NanoScience (Germany)

    2012-03-15

    Computational modeling is increasingly important to help understand the interaction and movement of nanoparticles (NPs) within living cells, and to come to terms with the wealth of data that microscopy imaging yields. A quantitative description of the spatio-temporal distribution of NPs inside cells, however, is challenging due to the complexity of multiple compartments such as endosomes and nuclei, which themselves are dynamic and can undergo fusion and fission and exchange their content. Here, we show that stochastic pi calculus, a widely-used process algebra, is well suited for mapping surface and intracellular NP interactions and distributions. In stochastic pi calculus, each NP is represented as a process, which can adopt various states such as bound or aggregated, as well as be passed between processes representing location, as a function of predefined stochastic channels. We created a pi calculus model of gold NP uptake and intracellular movement and compared the evolution of surface-bound, cytosolic, endosomal, and nuclear NP densities with electron microscopy data. We demonstrate that the computational approach can be extended to include specific molecular binding and potential interaction with signaling cascades as characteristic for NP-cell interactions in a wide range of applications such as nanotoxicity, viral infection, and drug delivery.
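
    The paper uses stochastic pi calculus; the sketch below captures the same modelling idea in its simplest form, with each NP being a process whose location changes along predefined stochastic channels, simulated Gillespie-style. The compartment chain and rate constants are invented for illustration, not the authors' calibrated model.

```python
# Gillespie-style simulation of NPs moving through compartments along
# stochastic channels; all rate constants (per hour) are hypothetical.
import random

RATES = {  # (from_state, to_state): rate
    ("surface", "endosome"): 0.8,
    ("endosome", "cytosol"): 0.3,
    ("cytosol", "nucleus"): 0.05,
}

def simulate(n_particles=1_000, t_end=10.0):
    census = {"surface": n_particles, "endosome": 0, "cytosol": 0, "nucleus": 0}
    t = 0.0
    while t < t_end:
        propensities = {ch: census[ch[0]] * r for ch, r in RATES.items()}
        total = sum(propensities.values())
        if total == 0:
            break
        t += random.expovariate(total)                       # next-event time
        ch = random.choices(list(propensities),
                            weights=list(propensities.values()))[0]
        census[ch[0]] -= 1                                   # fire the channel
        census[ch[1]] += 1
    return census

print(simulate())  # compartment census after 10 h of simulated uptake
```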

  15. Modeling nanoparticle uptake and intracellular distribution using stochastic process algebras

    International Nuclear Information System (INIS)

    Dobay, M. P. D.; Alberola, A. Piera; Mendoza, E. R.; Rädler, J. O.

    2012-01-01

    Computational modeling is increasingly important to help understand the interaction and movement of nanoparticles (NPs) within living cells, and to come to terms with the wealth of data that microscopy imaging yields. A quantitative description of the spatio-temporal distribution of NPs inside cells, however, is challenging due to the complexity of multiple compartments such as endosomes and nuclei, which themselves are dynamic and can undergo fusion and fission and exchange their content. Here, we show that stochastic pi calculus, a widely-used process algebra, is well suited for mapping surface and intracellular NP interactions and distributions. In stochastic pi calculus, each NP is represented as a process, which can adopt various states such as bound or aggregated, as well as be passed between processes representing location, as a function of predefined stochastic channels. We created a pi calculus model of gold NP uptake and intracellular movement and compared the evolution of surface-bound, cytosolic, endosomal, and nuclear NP densities with electron microscopy data. We demonstrate that the computational approach can be extended to include specific molecular binding and potential interaction with signaling cascades as characteristic for NP-cell interactions in a wide range of applications such as nanotoxicity, viral infection, and drug delivery.

  16. Modeling nanoparticle uptake and intracellular distribution using stochastic process algebras

    Science.gov (United States)

    Dobay, M. P. D.; Alberola, A. Piera; Mendoza, E. R.; Rädler, J. O.

    2012-03-01

    Computational modeling is increasingly important to help understand the interaction and movement of nanoparticles (NPs) within living cells, and to come to terms with the wealth of data that microscopy imaging yields. A quantitative description of the spatio-temporal distribution of NPs inside cells, however, is challenging due to the complexity of multiple compartments such as endosomes and nuclei, which themselves are dynamic and can undergo fusion and fission and exchange their content. Here, we show that stochastic pi calculus, a widely-used process algebra, is well suited for mapping surface and intracellular NP interactions and distributions. In stochastic pi calculus, each NP is represented as a process, which can adopt various states such as bound or aggregated, as well as be passed between processes representing location, as a function of predefined stochastic channels. We created a pi calculus model of gold NP uptake and intracellular movement and compared the evolution of surface-bound, cytosolic, endosomal, and nuclear NP densities with electron microscopy data. We demonstrate that the computational approach can be extended to include specific molecular binding and potential interaction with signaling cascades as characteristic for NP-cell interactions in a wide range of applications such as nanotoxicity, viral infection, and drug delivery.

  17. IPNS distributed-processing data-acquisition system

    International Nuclear Information System (INIS)

    Haumann, J.R.; Daly, R.T.; Worlton, T.G.; Crawford, R.K.

    1981-01-01

    The Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory is a major new user-oriented facility which has come on line for basic research in neutron scattering and neutron radiation damage. This paper describes the distributed-processing data-acquisition system which handles data collection and instrument control for the time-of-flight neutron-scattering instruments. The topics covered include the overall system configuration, each of the computer subsystems, communication protocols linking each computer subsystem, and an overview of the software which has been developed

  18. A Coordinated Initialization Process for the Distributed Space Exploration Simulation

    Science.gov (United States)

    Crues, Edwin Z.; Phillips, Robert G.; Dexter, Dan; Hasan, David

    2007-01-01

    A viewgraph presentation on the federate initialization process for the Distributed Space Exploration Simulation (DSES) is described. The topics include: 1) Background: DSES; 2) Simulation requirements; 3) Nine Step Initialization; 4) Step 1: Create the Federation; 5) Step 2: Publish and Subscribe; 6) Step 3: Create Object Instances; 7) Step 4: Confirm All Federates Have Joined; 8) Step 5: Achieve initialize Synchronization Point; 9) Step 6: Update Object Instances With Initial Data; 10) Step 7: Wait for Object Reflections; 11) Step 8: Set Up Time Management; 12) Step 9: Achieve startup Synchronization Point; and 13) Conclusions

  19. The CANDU 9 distributed control system design process

    International Nuclear Information System (INIS)

    Harber, J.E.; Kattan, M.K.; Macbeth, M.J.

    1997-01-01

    Canadian designed CANDU pressurized heavy water nuclear reactors have been world leaders in electrical power generation. The CANDU 9 project is AECL's next reactor design. Plant control for the CANDU 9 station design is performed by a distributed control system (DCS) as compared to centralized control computers, analog control devices and relay logic used in previous CANDU designs. The selection of a DCS as the platform to perform the process control functions and most of the data acquisition of the plant, is consistent with the evolutionary nature of the CANDU technology. The control strategies for the DCS control programs are based on previous CANDU designs but are implemented on a new hardware platform taking advantage of advances in computer technology. This paper describes the design process for developing the CANDU 9 DCS. Various design activities, prototyping and analyses have been undertaken in order to ensure a safe, functional, and cost-effective design. (author)

  20. Design and simulation for real-time distributed processing systems

    International Nuclear Information System (INIS)

    Legrand, I.C.; Gellrich, A.; Gensah, U.; Leich, H.; Wegner, P.

    1996-01-01

    The aim of this work is to provide a proper framework for the simulation and the optimization of the event building, the on-line third level trigger, and complete event reconstruction processor farm for the future HERA-B experiment. A discrete event, process oriented, simulation developed in concurrent μC++ is used for modelling the farm nodes running with multi-tasking constraints and different types of switching elements and digital signal processors interconnected for distributing the data through the system. An adequate graphic interface to the simulation part, which allows monitoring features on-line and analyzing trace files, provides a powerful development tool for evaluating and designing parallel processing architectures. Control software and data flow protocols for event building and dynamic processor allocation are presented for two architectural models. (author)
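
    A bare-bones discrete-event sketch in the same spirit (though far simpler than the concurrent μC++ model): events arrive at a fixed rate and a dispatcher assigns each to the farm node that frees up earliest, a crude form of dynamic processor allocation. Arrival and processing times are invented.

```python
# Toy discrete-event model of a processor farm: a heap tracks when each
# node becomes free; every event goes to the earliest-available node.
import heapq
import random

random.seed(3)
N_NODES, N_EVENTS, ARRIVAL_DT, PROC_MEAN = 8, 10_000, 1.0, 7.0

heap = [(0.0, n) for n in range(N_NODES)]  # (time node is free, node id)
latencies = []

for k in range(N_EVENTS):
    arrival = k * ARRIVAL_DT
    t_free, node = heapq.heappop(heap)             # earliest-available node
    start = max(arrival, t_free)
    finish = start + random.expovariate(1.0 / PROC_MEAN)
    latencies.append(finish - arrival)
    heapq.heappush(heap, (finish, node))

print("mean event latency:", sum(latencies) / len(latencies))
```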

  1. Software/hardware distributed processing network supporting the Ada environment

    Science.gov (United States)

    Wood, Richard J.; Pryk, Zen

    1993-09-01

    A high-performance, fault-tolerant, distributed network has been developed, tested, and demonstrated. The network is based on the MIPS Computer Systems, Inc. R3000 Risc for processing, VHSIC ASICs for high speed, reliable, inter-node communications and compatible commercial memory and I/O boards. The network is an evolution of the Advanced Onboard Signal Processor (AOSP) architecture. It supports Ada application software with an Ada- implemented operating system. A six-node implementation (capable of expansion up to 256 nodes) of the RISC multiprocessor architecture provides 120 MIPS of scalar throughput, 96 Mbytes of RAM and 24 Mbytes of non-volatile memory. The network provides for all ground processing applications, has merit for space-qualified RISC-based network, and interfaces to advanced Computer Aided Software Engineering (CASE) tools for application software development.

  2. Model Diagnostics for the Department of Energy's Accelerated Climate Modeling for Energy (ACME) Project

    Science.gov (United States)

    Smith, B.

    2015-12-01

    In 2014, eight Department of Energy (DOE) national laboratories, four academic institutions, one company, and the National Centre for Atmospheric Research combined forces in a project called Accelerated Climate Modeling for Energy (ACME) with the goal to speed Earth system model development for climate and energy. Over the planned 10-year span, the project will conduct simulations and modeling on DOE's most powerful high-performance computing systems at Oak Ridge, Argonne, and Lawrence Berkeley Leadership Compute Facilities. A key component of the ACME project is the development of an interactive test bed for the advanced Earth system model. Its execution infrastructure will accelerate model development and testing cycles. The ACME Workflow Group is leading the efforts to automate labor-intensive tasks, provide intelligent support for complex tasks and reduce duplication of effort through collaboration support. As part of this new workflow environment, we have created a diagnostic, metric, and intercomparison Python framework, called UVCMetrics, to aid in the testing-to-production execution of the ACME model. The framework exploits similarities among different diagnostics to compactly support diagnosis of new models. It presently focuses on atmosphere and land but is designed to support ocean and sea ice model components as well. This framework is built on top of the existing open-source software framework known as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT). Because of its flexible framework design, scientists and modelers now can generate thousands of possible diagnostic outputs. These diagnostics can compare model runs, compare model vs. observation, or simply verify a model is physically realistic. Additional diagnostics are easily integrated into the framework, and our users have already added several. Diagnostics can be generated, viewed, and manipulated from the UV-CDAT graphical user interface, Python command line scripts and programs

  3. Distributed and cooperative task processing: Cournot oligopolies on a graph.

    Science.gov (United States)

    Pavlic, Theodore P; Passino, Kevin M

    2014-06-01

    This paper introduces a novel framework for the design of distributed agents that must complete externally generated tasks but also can volunteer to process tasks encountered by other agents. To reduce the computational and communication burden of coordination between agents to perfectly balance load around the network, the agents adjust their volunteering propensity asynchronously within a fictitious trading economy. This economy provides incentives for nontrivial levels of volunteering for remote tasks, and thus load is shared. Moreover, the combined effects of diminishing marginal returns and network topology lead to competitive equilibria that have task reallocations that are qualitatively similar to what is expected in a load-balancing system with explicit coordination between nodes. In the paper, topological and algorithmic conditions are given that ensure the existence and uniqueness of a competitive equilibrium. Additionally, a decentralized distributed gradient-ascent algorithm is given that is guaranteed to converge to this equilibrium while not causing any node to over-volunteer beyond its maximum task-processing rate. The framework is applied to an autonomous-air-vehicle example, and connections are drawn to classic studies of the evolution of cooperation in nature.
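
    A toy version of the decentralized gradient ascent conveys the flavor: each agent adjusts its volunteering rate to maximize a concave (diminishing-returns) payoff, never exceeding its maximum task-processing rate. The payoff form and the fixed price are illustrative assumptions, not the paper's trading economy.

```python
# Projected gradient ascent on concave per-agent payoffs
# u_i(v) = a_i*log(1+v) - p*v, with rates capped at v_max.
import math

a = [1.0, 2.0, 4.0]   # marginal-benefit scales per agent (assumed)
p = 0.5               # fictitious price of volunteered work
v_max = 5.0           # per-agent maximum task-processing rate
v = [0.0, 0.0, 0.0]
step = 0.1

for _ in range(2_000):
    for i in range(len(v)):
        grad = a[i] / (1.0 + v[i]) - p                   # d u_i / d v_i
        v[i] = min(max(v[i] + step * grad, 0.0), v_max)  # projected ascent

print([round(x, 2) for x in v])  # analytic optimum a_i/p - 1, clipped to [0, v_max]
```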

  4. Adaptive Dynamic Process Scheduling on Distributed Memory Parallel Computers

    Directory of Open Access Journals (Sweden)

    Wei Shu

    1994-01-01

    One of the challenges in programming distributed memory parallel machines is deciding how to allocate work to processors. This problem is particularly important for computations with unpredictable dynamic behaviors or irregular structures. We present a scheme for dynamic scheduling of medium-grained processes that is useful in this context. The adaptive contracting within neighborhood (ACWN) is a dynamic, distributed, load-dependent, and scalable scheme. It deals with dynamic and unpredictable creation of processes and adapts to different systems. The scheme is described and contrasted with two other schemes that have been proposed in this context, namely the randomized allocation and the gradient model. The performance of the three schemes on an Intel iPSC/2 hypercube is presented and analyzed. The experimental results show that even though the ACWN algorithm incurs somewhat larger overhead than the randomized allocation, it achieves better performance in most cases due to its adaptiveness. Its feature of quickly spreading the work helps it outperform the gradient model in performance and scalability.
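
    The contracting-within-neighborhood rule can be sketched directly: a newly created process stays local unless a neighbor's load is lower by more than a threshold, in which case it is contracted to the least-loaded neighbor. Topology, loads, and the threshold below are hypothetical, not the paper's parameters.

```python
# Minimal sketch of load-dependent "contracting within neighborhood".
NEIGHBORS = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}  # tiny 2-D hypercube
load = {0: 9, 1: 3, 2: 6, 3: 2}
THRESHOLD = 2

def place_new_process(origin):
    """Keep the process local unless a neighbor is clearly less loaded."""
    best = min(NEIGHBORS[origin], key=lambda n: load[n])
    target = best if load[origin] - load[best] > THRESHOLD else origin
    load[target] += 1
    return target

print(place_new_process(0))  # contracts to node 1 (load 3) rather than stay at 0
```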

  5. Application of signal processing techniques for islanding detection of distributed generation in distribution network: A review

    International Nuclear Information System (INIS)

    Raza, Safdar; Mokhlis, Hazlie; Arof, Hamzah; Laghari, J.A.; Wang, Li

    2015-01-01

    Highlights: • Pros & cons of conventional islanding detection techniques (IDTs) are discussed. • Signal processing techniques (SPTs) ability in detecting islanding is discussed. • SPTs ability in improving performance of passive techniques are discussed. • Fourier, s-transform, wavelet, HHT & tt-transform based IDTs are reviewed. • Intelligent classifiers (ANN, ANFIS, Fuzzy, SVM) application in SPT are discussed. - Abstract: High penetration of distributed generation resources (DGR) in distribution network provides many benefits in terms of high power quality, efficiency, and low carbon emissions in power system. However, efficient islanding detection and immediate disconnection of DGR is critical in order to avoid equipment damage, grid protection interference, and personnel safety hazards. Islanding detection techniques are mainly classified into remote, passive, active, and hybrid techniques. From these, passive techniques are more advantageous due to lower power quality degradation, lower cost, and widespread usage by power utilities. However, the main limitations of these techniques are that they possess large non-detection zones and require threshold setting. Various signal processing techniques and intelligent classifiers have been used to overcome the limitations of passive islanding detection. Signal processing techniques, in particular, are adopted due to their versatility, stability, cost effectiveness, and ease of modification. This paper presents a comprehensive overview of signal processing techniques used to improve common passive islanding detection techniques. A performance comparison between signal-processing-based islanding detection techniques and existing techniques is also provided. Finally, this paper outlines the relative advantages and limitations of the signal processing techniques in order to provide basic guidelines for researchers and field engineers in determining the best method for their system.
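
    To make the signal-processing angle concrete, the toy passive detector below tracks the dominant frequency of a simulated point-of-common-coupling voltage in sliding FFT windows and flags a drift, one of many possible spectral features such techniques use. The waveform, drift, and threshold are invented for illustration.

```python
# Toy passive islanding detector: sliding-window FFT peak-frequency drift.
import numpy as np

fs, f_grid = 2_000, 50.0
t = np.arange(0, 4.0, 1 / fs)
f_inst = np.where(t < 2.0, f_grid, f_grid + 1.5)   # frequency drifts at t = 2 s
v = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)     # phase = integral of f_inst

win = fs                                           # 1 s windows -> 1 Hz bins
for k in range(0, len(v) - win + 1, win):
    seg = v[k:k + win] * np.hanning(win)
    f_peak = np.fft.rfftfreq(win, 1 / fs)[np.argmax(np.abs(np.fft.rfft(seg)))]
    flag = "ISLANDING?" if abs(f_peak - f_grid) > 0.7 else "ok"
    print(f"t = {k / fs:.0f}-{(k + win) / fs:.0f} s   f_peak = {f_peak:5.2f} Hz   {flag}")
```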

  6. Molecular characteristics of clinical methicillin-resistant Staphylococcus pseudintermedius harboring arginine catabolic mobile element (ACME) from dogs and cats.

    Science.gov (United States)

    Yang, Ching; Wan, Min-Tao; Lauderdale, Tsai-Ling; Yeh, Kuang-Sheng; Chen, Charles; Hsiao, Yun-Hsia; Chou, Chin-Cheng

    2017-06-01

    This study aimed to investigate the presence of arginine catabolic mobile element (ACME) and its associated molecular characteristics in methicillin-resistant Staphylococcus pseudintermedius (MRSP). Among the 72 S. pseudintermedius recovered from various infection sites of dogs and cats, 52 (72.2%) were MRSP. ACME-arcA was detected commonly (69.2%) in these MRSP isolates, and was more frequently detected in those from the skin than from other body sites (P=0.047). There was a wide genetic diversity among the ACME-arcA-positive MRSP isolates, which comprised three SCCmec types (II-III, III and V) and 15 dru types with two predominant clusters (9a and 11a). Most MRSP isolates were multidrug-resistant. Since S. pseudintermedius could serve as a reservoir of ACME, further research on this putative virulence factor is recommended. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    Science.gov (United States)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention on networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned by the affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. When the system decomposition is finished, the online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
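
    On synthetic data, the two-stage recipe can be sketched with standard tools: (1) cluster the controlled variables with affinity propagation on their correlation matrix, then (2) rank candidate inputs for each cluster by their first canonical correlation. Data shapes, noise levels, and the exact scoring rule below are assumptions, not the paper's procedure.

```python
# Sketch: affinity-propagation decomposition plus CCA-based input selection.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
n = 500
U = rng.standard_normal((n, 4))                                  # candidate inputs
Y = np.column_stack([U[:, 0] + 0.1 * rng.standard_normal(n),            # cv 1
                     U[:, 0] + U[:, 1] + 0.1 * rng.standard_normal(n),  # cv 2
                     U[:, 2] + 0.1 * rng.standard_normal(n)])           # cv 3

# (1) cluster controlled variables on their pairwise correlation matrix
S = np.corrcoef(Y, rowvar=False)
labels = AffinityPropagation(affinity="precomputed", random_state=0).fit(S).labels_

# (2) for each cluster, score each input by its first canonical correlation
for cluster in np.unique(labels):
    Yc = Y[:, labels == cluster]
    scores = []
    for j in range(U.shape[1]):
        cca = CCA(n_components=1).fit(U[:, [j]], Yc)
        xs, ys = cca.transform(U[:, [j]], Yc)
        scores.append(abs(np.corrcoef(xs[:, 0], ys[:, 0])[0, 1]))
    print(f"cluster {cluster}: input scores {np.round(scores, 2)}")
```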

  8. Timely activation of budding yeast APCCdh1 involves degradation of its inhibitor, Acm1, by an unconventional proteolytic mechanism.

    Directory of Open Access Journals (Sweden)

    Michael Melesse

    Regulated proteolysis mediated by the ubiquitin proteasome system is a fundamental and essential feature of the eukaryotic cell division cycle. Most proteins with cell cycle-regulated stability are targeted for degradation by one of two related ubiquitin ligases, the Skp1-cullin-F box protein (SCF) complex or the anaphase-promoting complex (APC). Here we describe an unconventional cell cycle-regulated proteolytic mechanism that acts on the Acm1 protein, an inhibitor of the APC activator Cdh1 in budding yeast. Although Acm1 can be recognized as a substrate by the Cdc20-activated APC (APCCdc20) in anaphase, APCCdc20 is neither necessary nor sufficient for complete Acm1 degradation at the end of mitosis. An APC-independent, but 26S proteasome-dependent, mechanism is sufficient for complete Acm1 clearance from late mitotic and G1 cells. Surprisingly, this mechanism appears distinct from the canonical ubiquitin targeting pathway, exhibiting several features of ubiquitin-independent proteasomal degradation. For example, Acm1 degradation in G1 requires neither lysine residues in Acm1 nor assembly of polyubiquitin chains. Acm1 was stabilized though by conditional inactivation of the ubiquitin activating enzyme Uba1, implying some requirement for the ubiquitin pathway, either direct or indirect. We identified an amino terminal predicted disordered region in Acm1 that contributes to its proteolysis in G1. Although ubiquitin-independent proteasome substrates have been described, Acm1 appears unique in that its sensitivity to this mechanism is strictly cell cycle-regulated via cyclin-dependent kinase (Cdk) phosphorylation. As a result, Acm1 expression is limited to the cell cycle window in which Cdk is active. We provide evidence that failure to eliminate Acm1 impairs activation of APCCdh1 at mitotic exit, justifying its strict regulation by cell cycle-dependent transcription and proteolytic mechanisms. Importantly, our results reveal that strict cell...

  9. Timely Activation of Budding Yeast APCCdh1 Involves Degradation of Its Inhibitor, Acm1, by an Unconventional Proteolytic Mechanism

    Science.gov (United States)

    Melesse, Michael; Choi, Eunyoung; Hall, Hana; Walsh, Michael J.; Geer, M. Ariel; Hall, Mark C.

    2014-01-01

    Regulated proteolysis mediated by the ubiquitin proteasome system is a fundamental and essential feature of the eukaryotic cell division cycle. Most proteins with cell cycle-regulated stability are targeted for degradation by one of two related ubiquitin ligases, the Skp1-cullin-F box protein (SCF) complex or the anaphase-promoting complex (APC). Here we describe an unconventional cell cycle-regulated proteolytic mechanism that acts on the Acm1 protein, an inhibitor of the APC activator Cdh1 in budding yeast. Although Acm1 can be recognized as a substrate by the Cdc20-activated APC (APCCdc20) in anaphase, APCCdc20 is neither necessary nor sufficient for complete Acm1 degradation at the end of mitosis. An APC-independent, but 26S proteasome-dependent, mechanism is sufficient for complete Acm1 clearance from late mitotic and G1 cells. Surprisingly, this mechanism appears distinct from the canonical ubiquitin targeting pathway, exhibiting several features of ubiquitin-independent proteasomal degradation. For example, Acm1 degradation in G1 requires neither lysine residues in Acm1 nor assembly of polyubiquitin chains. Acm1 was stabilized though by conditional inactivation of the ubiquitin activating enzyme Uba1, implying some requirement for the ubiquitin pathway, either direct or indirect. We identified an amino terminal predicted disordered region in Acm1 that contributes to its proteolysis in G1. Although ubiquitin-independent proteasome substrates have been described, Acm1 appears unique in that its sensitivity to this mechanism is strictly cell cycle-regulated via cyclin-dependent kinase (Cdk) phosphorylation. As a result, Acm1 expression is limited to the cell cycle window in which Cdk is active. We provide evidence that failure to eliminate Acm1 impairs activation of APCCdh1 at mitotic exit, justifying its strict regulation by cell cycle-dependent transcription and proteolytic mechanisms. Importantly, our results reveal that strict cell-cycle expression profiles

  10. Distributed Processing of Sentinel-2 Products using the BIGEARTH Platform

    Science.gov (United States)

    Bacu, Victor; Stefanut, Teodor; Nandra, Constantin; Mihon, Danut; Gorgan, Dorian

    2017-04-01

    The constellation of observational satellites orbiting around Earth is constantly increasing, providing more data that need to be processed in order to extract meaningful information and knowledge from it. Sentinel-2 satellites, part of the Copernicus Earth Observation program, aim to be used in agriculture, forestry and many other land management applications. ESA's SNAP toolbox can be used to process data gathered by Sentinel-2 satellites but is limited to the resources provided by a stand-alone computer. In this paper we present a cloud based software platform that makes use of this toolbox together with other remote sensing software applications to process Sentinel-2 products. The BIGEARTH software platform [1] offers an integrated solution for processing Earth Observation data coming from different sources (such as satellites or on-site sensors). The flow of processing is defined as a chain of tasks based on the WorDeL description language [2]. Each task could rely on a different software technology (such as Grass GIS and ESA's SNAP) in order to process the input data. One important feature of the BIGEARTH platform comes from this possibility of interconnection and integration, throughout the same flow of processing, of the various well known software technologies. All this integration is transparent from the user perspective. The proposed platform extends the SNAP capabilities by enabling specialists to easily scale the processing over distributed architectures, according to their specific needs and resources. The software platform [3] can be used in multiple configurations. In the basic one the software platform runs as a standalone application inside a virtual machine. Obviously in this case the computational resources are limited but it will give an overview of the functionalities of the software platform, and also the possibility to define the flow of processing and later on to execute it on a more complex infrastructure. The most complex and robust

  11. Extending the LWS Data Environment: Distributed Data Processing and Analysis

    Science.gov (United States)

    Narock, Thomas

    2005-01-01

    The final stages of this work saw changes to the original framework, as well as the completion and integration of several data processing services. Initially, it was thought that a peer-to-peer architecture was necessary to make this work possible. The peer-to-peer architecture provided many benefits including the dynamic discovery of new services that would be continually added. A prototype example was built and while it showed promise, a major disadvantage was seen in that it was not easily integrated into the existing data environment. While the peer-to-peer system worked well for finding and accessing distributed data processing services, it was found that its use was limited by the difficulty in calling it from existing tools and services. After collaborations with members of the data community, it was determined that our data processing system was of high value and that a new interface should be pursued in order for the community to take full advantage of it. As such, the framework was modified from a peer-to-peer architecture to a more traditional web service approach. Following this change, multiple data processing services were added. These services include such things as coordinate transformations and subsetting of data. ... Observatory (VHO) assisted with integrating the new architecture into the VHO. This allows anyone using the VHO to search for data, to then pass that data through our processing services prior to downloading it. As a second attempt at demonstrating the new system, a collaboration was established with the Collaborative Sun Earth Connector (CoSEC) group at Lockheed Martin. This group is working on a graphical user interface to the Virtual Observatories and data processing software. The intent is to provide a high-level easy-to-use graphical interface that will allow access to the existing Virtual Observatories and data processing services from one convenient application. Working with the CoSEC group we provided access to our data...

  12. Calculation of the spallation product distribution in the evaporation process

    International Nuclear Information System (INIS)

    Nishida, T.; Kanno, I.; Nakahara, Y.; Takada, H.

    1989-01-01

    Some investigations are performed for the calculational model of nuclear spallation reactions in the evaporation process. A new version of the spallation reaction simulation code NUCLEUS has been developed by incorporating the newly revised Uno and Yamada's mass formula and extending the counting region of produced nuclei. The differences between the new and original mass formulas are shown in comparisons of mass excess values. The distributions of spallation products of a uranium target nucleus bombarded by energetic (0.38-2.9 GeV) protons have been calculated with the new and original versions of NUCLEUS. In the fission component, Uno and Yamada's mass formula reproduces the measured data obtained from thin-foil experiments significantly better, especially on the neutron-excess side, than the combination of Cameron's mass formula and the mass table compiled by Wapstra et al. in the original version of NUCLEUS. Discussions are also made on how the mass-yield distribution of products varies depending on the level density parameter α characterizing the particle evaporation. 16 refs., 7 figs., 1 tab

  14. Fuel distribution process risk analysis in East Borneo

    Directory of Open Access Journals (Sweden)

    Laksmita Raizsa

    2018-01-01

    Fuel distribution is an important aspect of fulfilling customers' needs. It is a risky process, because delays in distribution can cause fuel scarcity. Many risks occur in the distribution process. House of Risk is a method used for mitigating such risks. Here it identified seven risk events and nine risk agents. An occurrence-severity matrix is used to eliminate risks with minor impact. House of Risk 1 is used to determine the Aggregate Risk Potential (ARP). A Pareto diagram is applied to prioritize, based on ARP, the risks that must be mitigated by preventive actions. It identifies four priority risks, namely A8 (car trouble), A4 (human error), A3 (erroneous deposits via bank and underpayment) and A6 (traffic accidents), which should be mitigated. House of Risk 2 maps the preventive actions to the risk agents and yields the Effectiveness to Difficulty ratio (ETD) for each mitigating action. Conducting a routine safety talk once every three days, with an ETD of 2088, is the primary preventive action.
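
    In the standard House of Risk formulation (our notation; the paper may use slightly different symbols), the Aggregate Risk Potential of risk agent j combines its occurrence O_j with the severities S_i of the risk events it can trigger and their correlations R_ij, while the ETD of a preventive action k weighs its total effectiveness against its difficulty D_k:

        ARP_j = O_j \sum_i S_i\, R_{ij}, \qquad
        TE_k = \sum_j ARP_j\, E_{jk}, \qquad
        ETD_k = TE_k / D_k

    Here E_jk is the effectiveness of action k against agent j; actions are ranked by ETD, which is how a single measure such as the routine safety talk can emerge as the top priority.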

  15. Quantifying evenly distributed states in exclusion and nonexclusion processes

    Science.gov (United States)

    Binder, Benjamin J.; Landman, Kerry A.

    2011-04-01

    Spatial-point data sets, generated from a wide range of physical systems and mathematical models, can be analyzed by counting the number of objects in equally sized bins. We find that the bin counts are related to the Pólya distribution. New measures are developed which indicate whether or not a spatial data set, generated from an exclusion process, is at its most evenly distributed state, the complete spatial randomness (CSR) state. To this end, we define an index in terms of the variance between the bin counts. Limiting values of the index are determined when objects have access to the entire domain and when there are subregions of the domain that are inaccessible to objects. Using three case studies (Lagrangian fluid particles in chaotic laminar flows, cellular automata agents in discrete models, and biological cells within colonies), we calculate the indexes and verify that our theoretical CSR limit accurately predicts the state of the system. These measures should prove useful in many biological applications.
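
    The bin-count construction is straightforward to compute. A small sketch in one dimension, using an illustrative index (observed bin-count variance over the variance expected at complete spatial randomness); the paper's own index and its exclusion-process limits differ in detail:

        import numpy as np

        def csr_index(points, domain=1.0, n_bins=10):
            """Bin 1-D point positions and compare bin-count variance to CSR."""
            counts, _ = np.histogram(points, bins=n_bins, range=(0.0, domain))
            n = counts.sum()
            p = 1.0 / n_bins
            expected_var = n * p * (1.0 - p)   # multinomial bin-count variance under CSR
            return counts.var(ddof=0) / expected_var

        rng = np.random.default_rng(0)
        print(csr_index(rng.uniform(0.0, 1.0, 500)))    # close to 1 for a CSR-like pattern
        print(csr_index(rng.normal(0.5, 0.05, 500)))    # much greater than 1 when clustered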

  16. Radon: Chemical and physical processes associated with its distribution

    International Nuclear Information System (INIS)

    Castleman, A.W. Jr.

    1992-01-01

    Assessing the mechanisms which govern the distribution, fate, and pathways of entry into biological systems, as well as the ultimate hazards associated with the radon progeny and their secondary reaction products, depends on knowledge of their chemistry. Our studies are directed toward developing fundamental information which will provide a basis for modeling studies that are requisite in obtaining a complete picture of growth, attachment to aerosols, and transport to the bioreceptor and ultimate incorporation within. Our program is divided into three major areas of research. These include measurement of the determination of their mobilities, study of the role of radon progeny ions in affecting reactions, including study of the influence of the degree of solvation (clustering), and examination of the important secondary reaction products, with particular attention to processes leading to chemical conversion of either the core ions or the ligands as a function of the degree of clustering

  17. Lifetime-Based Memory Management for Distributed Data Processing Systems

    DEFF Research Database (Denmark)

    Lu, Lu; Shi, Xuanhua; Zhou, Yongluan

    2016-01-01

    In-memory caching of intermediate data and eager combining of data in shuffle buffers have been shown to be very effective in minimizing the re-computation and I/O cost in distributed data processing systems like Spark and Flink. However, it has also been widely reported that these techniques would create a large amount of long-living data objects in the heap, which may quickly saturate the garbage collector, especially when handling a large dataset, and hence would limit the scalability of the system. To eliminate this problem, we propose a lifetime-based memory management framework, which ... is shown 1) to reduce the garbage collection time by up to 99.9%, 2) to achieve up to a 22.7x speedup in terms of execution time in cases without data spilling and a 41.6x speedup in cases with data spilling, and 3) to consume up to 46.6% less memory.

  18. Spatial Data Exploring by Satellite Image Distributed Processing

    Science.gov (United States)

    Mihon, V. D.; Colceriu, V.; Bektas, F.; Allenbach, K.; Gvilava, M.; Gorgan, D.

    2012-04-01

    Societal needs and environmental predictions encourage the development of applications oriented toward supervising and analyzing various Earth Science phenomena. Satellite images can be explored to discover information concerning land cover, hydrology, air quality, and water and soil pollution. Spatial and environment-related data can be acquired by imagery classification, consisting of data mining across the multispectral bands. The process takes into account a large set of variables, such as the satellite image type (e.g. MODIS, Landsat), the particular geographic area, soil composition, vegetation cover, and the general context (e.g. clouds, snow, and season). All these specific and variable conditions require flexible tools and applications to support an optimal search for appropriate solutions, as well as high-power computation resources. The research concerns experiments on flexible, visual descriptions of satellite image processing over distributed infrastructures (e.g. Grid, Cloud, and GPU clusters). This presentation highlights the Grid-based implementation of the GreenLand application. The GreenLand application development is based on simple but powerful notions of mathematical operators and workflows that are used in distributed and parallel executions over the Grid infrastructure. Currently it is used in three major case studies concerning the Istanbul geographical area, the Rioni River in Georgia, and the Black Sea catchment region. The GreenLand application offers a friendly user interface for viewing and editing workflows and operators. The description involves the basic operators provided by the GRASS library [1] as well as many other image-related operators supported by the ESIP platform [2]. The processing workflows are represented as directed graphs, giving the user a fast and easy way to describe complex parallel algorithms without prior knowledge of any programming language or application commands.
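
    The workflow-as-directed-graph idea can be sketched as a topological execution of operator nodes (the operator names below are placeholders, not actual GRASS or ESIP operator signatures):

        from graphlib import TopologicalSorter

        # Placeholder operators; real GreenLand workflows bind GRASS/ESIP operators.
        def ndvi(inputs): return "ndvi.tif"
        def threshold(inputs): return "mask.tif"
        def zonal_stats(inputs): return "stats.csv"

        ops = {"ndvi": ndvi, "threshold": threshold, "stats": zonal_stats}
        # Each node maps to the set of nodes whose outputs it consumes.
        edges = {"ndvi": set(), "threshold": {"ndvi"}, "stats": {"threshold"}}

        results = {}
        for node in TopologicalSorter(edges).static_order():
            results[node] = ops[node]([results[d] for d in edges[node]])
        print(results)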

  19. Coalescent Processes with Skewed Offspring Distributions and Nonequilibrium Demography.

    Science.gov (United States)

    Matuszewski, Sebastian; Hildebrandt, Marcel E; Achaz, Guillaume; Jensen, Jeffrey D

    2018-01-01

    Nonequilibrium demography impacts coalescent genealogies, leaving detectable, well-studied signatures of variation. However, similar genomic footprints are also expected under models of large reproductive skew, posing a serious problem when trying to make inference. Furthermore, current approaches consider only one of the two processes at a time, neglecting any genomic signal that could arise from their simultaneous effects, preventing the possibility of jointly inferring parameters relating to both the offspring distribution and population history. Here, we develop an extended Moran model with exponential population growth, and demonstrate that the underlying ancestral process converges to a time-inhomogeneous psi-coalescent. However, by applying a nonlinear change of time scale, analogous to the Kingman coalescent, we find that the ancestral process can be rescaled to its time-homogeneous analog, allowing the process to be simulated quickly and efficiently. Furthermore, we derive analytical expressions for the expected site-frequency spectrum under the time-inhomogeneous psi-coalescent, and develop an approximate-likelihood framework for the joint estimation of the coalescent and growth parameters. By means of extensive simulation, we demonstrate that both can be estimated accurately from whole-genome data. In addition, not accounting for demography can lead to serious biases in the inferred coalescent model, with broad implications for genomic studies ranging from ecology to conservation biology. Finally, we use our method to analyze sequence data from Japanese sardine populations, and find evidence of high variation in individual reproductive success, but few signs of a recent demographic expansion.
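
    The rescaling step can be stated generically (a sketch of the standard construction; the paper's exact change of variables for the psi-coalescent is more involved): if the coalescence rates at time t are inflated by a factor \lambda(t) relative to the homogeneous process, then on the new clock

        \tau(t) = \int_0^t \lambda(s)\, ds

    the genealogy evolves with constant rates, so standard time-homogeneous simulation algorithms apply on the \tau scale.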

  20. Experimental determination of the partitioning coefficient and volatility of important BVOC oxidation products using the Aerosol Collection Module (ACM) coupled to a PTR-ToF-MS

    Science.gov (United States)

    Gkatzelis, G.; Hohaus, T.; Tillmann, R.; Schmitt, S. H.; Yu, Z.; Schlag, P.; Wegener, R.; Kaminski, M.; Kiendler-Scharr, A.

    2015-12-01

    Atmospheric aerosol can alter the Earth's radiative budget and global climate, but can also affect human health. A dominant contributor to submicrometer particulate matter (PM) is organic aerosol (OA). OA can be either directly emitted through, e.g., combustion processes (primary OA) or formed through the oxidation of organic gases (secondary organic aerosol, SOA). A detailed understanding of SOA formation is important as SOA constitutes a major contribution to the total OA. The partitioning between the gas and particle phase, as well as the volatility of individual components of SOA, is as yet poorly understood, adding uncertainties and thus complicating climate modelling. In this work, a new experimental methodology was used for compound-specific analysis of organic aerosol. The Aerosol Collection Module (ACM) is a newly developed instrument that deploys an aerodynamic lens to separate the gas and particle phases of an aerosol. The particle phase is directed to a cooled sampling surface. After collection, particles are thermally desorbed and transferred to a detector for further analysis. In the present work, the ACM was coupled to a Proton Transfer Reaction-Time of Flight-Mass Spectrometer (PTR-ToF-MS) to detect and quantify organic compounds partitioning between the gas and particle phase. This experimental approach was used in a set of experiments at the atmosphere simulation chamber SAPHIR to investigate SOA formation. Ozone oxidation, with subsequent photochemical aging, of β-pinene, limonene and real plant emissions from Pinus sylvestris (Scots pine) was studied. Simultaneous measurement of the gas and particle phase using the ACM-PTR-ToF-MS allows partitioning coefficients of important BVOC oxidation products to be reported. Additionally, volatility trends and changes of the SOA with photochemical aging are investigated and compared for all systems studied.
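
    For reference, one common convention for the quantities involved (not necessarily the exact definitions used in this work): with C_g and C_p the gas- and particle-phase mass concentrations of a compound and C_OA the total organic aerosol mass concentration,

        F_p = \frac{C_p}{C_p + C_g}, \qquad K_p = \frac{C_p / C_{OA}}{C_g},

    where F_p is the particle-phase fraction and K_p the equilibrium partitioning coefficient; more volatile oxidation products have smaller K_p and hence smaller F_p at a given OA loading.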

  1. The Marginal Distributions of a Crossing Time and Renewal Numbers Related with Two Poisson Processes are as Ph-Distributions

    Directory of Open Access Journals (Sweden)

    Mir G. H. Talpur

    2006-01-01

    In this paper we consider how to find the marginal distributions of a crossing time and renewal numbers related to two Poisson processes, using probability arguments. The results obtained show that the one-dimensional marginal distributions are PH-distributions of order N+1.

  2. Study of cache performance in distributed environment for data processing

    International Nuclear Information System (INIS)

    Makatun, Dzmitry; Lauret, Jérôme; Šumbera, Michal

    2014-01-01

    Processing data in a distributed environment has found application in many fields of science (Nuclear and Particle Physics (NPP), astronomy and biology, to name only a few). Efficiently transferring data between sites is an essential part of such processing. The implementation of caching strategies in data transfer software and tools, such as the Reasoner for Intelligent File Transfer (RIFT) being developed in the STAR collaboration, can significantly decrease network load and waiting time by reusing knowledge of data provenance as well as data placed in the transfer cache, further expanding the availability of sources for files and data-sets. Although a great variety of caching algorithms is known, a study is needed to evaluate which one delivers the best performance in data access given realistic demand patterns. Records of access to the complete data-sets of NPP experiments were analyzed and used as input for computer simulations. A series of simulations was done in order to estimate the possible cache hits and cache hits per byte for known caching algorithms. The simulations were done for caches of different sizes within the interval 0.001-90% of the complete data-set and low-watermarks within 0-90%. Records of data access were taken from several experiments and within different time intervals in order to validate the results. In this paper, we discuss the different data caching strategies, from canonical algorithms to hybrid cache strategies, present the results of our simulations for the diverse algorithms, and identify the best algorithm in the context of physics data analysis in NPP. While the results of these studies have been implemented in RIFT, they can also be used when setting up a cache in any other computational workflow (Cloud processing, for example) or when managing data storages with partial replicas of the entire data-set.
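
    A minimal sketch of the kind of trace-driven simulation described, assuming an LRU policy and a low-watermark batch-eviction rule (details of the actual RIFT simulator are not given in the abstract):

        from collections import OrderedDict

        def simulate(trace, sizes, capacity, low_watermark=0.8):
            """Replay a file-access trace; on overflow evict LRU entries in a batch."""
            cache, used, hits, hit_bytes = OrderedDict(), 0, 0, 0
            for f in trace:
                if f in cache:
                    cache.move_to_end(f)              # mark as most recently used
                    hits += 1
                    hit_bytes += sizes[f]
                    continue
                if used + sizes[f] > capacity:
                    # Evict enough to fit, then keep going down to the low watermark.
                    target = min(low_watermark * capacity, capacity - sizes[f])
                    while cache and used > target:
                        _, s = cache.popitem(last=False)
                        used -= s
                cache[f] = sizes[f]
                used += sizes[f]
            return hits / len(trace), hit_bytes

        # Example: a 3-file working set replayed against a small cache.
        sizes = {"a": 40, "b": 50, "c": 30}
        print(simulate(["a", "b", "a", "c", "a", "b"], sizes, capacity=100))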

  3. Stationary distributions of stochastic processes described by a linear neutral delay differential equation

    International Nuclear Information System (INIS)

    Frank, T D

    2005-01-01

    Stationary distributions of processes are derived that involve a time delay and are defined by a linear stochastic neutral delay differential equation. The distributions are Gaussian distributions. The variances of the Gaussian distributions are either monotonically increasing or decreasing functions of the time delays. The variances become infinite when fixed points of corresponding deterministic processes become unstable. (letter to the editor)
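
    For concreteness, a linear stochastic neutral delay differential equation has the generic form (a sketch; the letter's exact model and coefficients may differ):

        \frac{d}{dt}\left[ X(t) - p\, X(t-\tau) \right]
            = -a\, X(t) - b\, X(t-\tau) + \sqrt{Q}\,\Gamma(t),

    where \tau is the time delay, \Gamma(t) is Gaussian white noise, and the "neutral" character comes from the delayed term appearing inside the time derivative; the stationary density is then a Gaussian whose variance depends monotonically on \tau and diverges at the deterministic stability boundary.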

  4. Calibration process of highly parameterized semi-distributed hydrological model

    Science.gov (United States)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, hydrological modelling is a complex process that has not been researched enough. Calibration is the procedure of determining those parameters of a model that are not known well enough: input and output variables and the mathematical model expressions are known, while some parameters are unknown and are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that leave the modeller no possibility to manage the process, and the results are not always the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST. PEST is a parameter estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by the expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure were left entirely to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas. This step requires geological, meteorological, hydraulic and hydrological knowledge on the modeller's part. The second step is to set the initial parameter values to their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observations group
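
    As a rough illustration of the coupling (a sketch only: PEST itself is driven by template and instruction control files, and the HBV-light-CLI command line and parameter set below are hypothetical), the expert's weighting of flood peaks can enter the objective function directly:

        import subprocess
        import numpy as np
        from scipy.optimize import least_squares

        observed = np.loadtxt("qobs.txt")          # observed discharge series

        def run_hbv(params):
            """Write a parameter file, run the HBV-light CLI, read simulated discharge."""
            np.savetxt("params.txt", params)
            subprocess.run(["hbv-light-cli", "run", "catchment_dir"], check=True)  # hypothetical argv
            return np.loadtxt("qsim.txt")

        def residuals(params, weights=1.0):
            # Weights let the modeller emphasize flood peaks, as in expert-driven calibration.
            return weights * (run_hbv(params) - observed)

        x0 = np.array([1.0, 0.3, 0.05, 2.5])       # initial values from expert knowledge
        fit = least_squares(residuals, x0, bounds=(0, 10))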

  5. Classification of bacterial contamination using image processing and distributed computing.

    Science.gov (United States)

    Ahmed, W M; Bayraktar, B; Bhunia, A; Hirleman, E D; Robinson, J P; Rajwa, B

    2013-01-01

    Disease outbreaks due to contaminated food are a major concern not only for the food-processing industry but also for the public at large. Techniques for automated detection and classification of microorganisms can be a great help in preventing outbreaks and maintaining the safety of the nation's food supply. Identification and classification of foodborne pathogens using colony scatter patterns is a promising new label-free technique that utilizes image-analysis and machine-learning tools. However, the feature-extraction tools employed for this approach are computationally complex, and choosing the right combination of scatter-related features requires extensive testing with different feature combinations. In the presented work we used computer clusters to speed up the feature-extraction process, which enabled us to analyze the contribution of different scatter-based features to the overall classification accuracy. A set of 1000 scatter patterns representing ten different bacterial strains was used. Zernike and Chebyshev moments as well as Haralick texture features were computed from the available light-scatter patterns. The most promising features were first selected using Fisher's discriminant analysis, and subsequently a support-vector-machine (SVM) classifier with a linear kernel was used. With extensive testing we were able to identify a small subset of features that produced the desired results in terms of classification accuracy and execution speed. The use of distributed computing for scatter-pattern analysis, feature extraction, and selection provides a feasible mechanism for large-scale deployment of a light-scatter-based approach to bacterial classification.
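
    A compact sketch of the described pipeline, assuming feature matrices have already been extracted from the scatter patterns (Zernike/Chebyshev moments and Haralick textures); the univariate F-score used here stands in for the paper's Fisher discriminant ranking, and the data below are random placeholders:

        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # X: (1000, n_features) scatter-pattern features, y: 10 strain labels.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 200))            # placeholder feature matrix
        y = rng.integers(0, 10, 1000)               # placeholder strain labels

        clf = make_pipeline(StandardScaler(),
                            SelectKBest(f_classif, k=50),   # keep the most discriminative features
                            SVC(kernel="linear"))
        print(cross_val_score(clf, X, y, cv=5).mean())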

  6. In vitro simulation of distribution processes following intramuscular injection

    Directory of Open Access Journals (Sweden)

    Probst Mareike

    2016-09-01

    There is an urgent need for in vitro dissolution test setups for intramuscularly applied dosage forms. In particular, biorelevant methods are needed to predict the in vivo behavior of newly developed dosage forms in a realistic way. There is a lack of knowledge regarding the critical in vivo parameters influencing the release and absorption behavior of an intramuscularly applied drug. In the presented work, the focus was set on simulating blood perfusion and muscle tissue. A solid agarose gel, incorporated in an open-pored foam, was used to mimic the gel phase of muscle tissue and implemented in a flow-through cell. An aqueous solution of fluorescein sodium was injected. Compared to recently obtained in vivo results, the distribution of the model substance was very slow. Furthermore, an agarose gel of lower viscosity, an open-pored foam, and phosphate-buffered saline (pH 7.4) were implemented in a multi-channel ceramic membrane serving as a holder for the muscle-imitating material. Blood-simulating release medium was perfused through the ceramic membrane, including the filling materials. Transport of the dissolved fluorescein sodium was, in the case of the gel, determined not only by diffusion but also by convective transport processes. The more realistically the muscle-simulating materials were constituted, the less reproducible were the results obtained with the designed test setups.

  7. TWO NOVEL ACM (ACTIVE CONTOUR MODEL) METHODS FOR INTRAVASCULAR ULTRASOUND IMAGE SEGMENTATION

    International Nuclear Information System (INIS)

    Chen, Chi Hau; Potdat, Labhesh; Chittineni, Rakesh

    2010-01-01

    One of the attractive image segmentation methods is the Active Contour Model (ACM), which has been widely used in medical imaging as it always produces sub-regions with continuous boundaries. Intravascular ultrasound (IVUS) is a catheter-based medical imaging technique used for quantitative assessment of atherosclerotic disease. Two ACM realizations are presented in this paper. The gradient descent flow based on minimizing an energy functional can be used for segmentation of IVUS images; however, this local operation alone may not be adequate for the complex IVUS images. The first method presented basically combines local geodesic active contours and global region-based active contours. The advantage of combining the local and global operations is to allow curves deforming under the energy to find only significant local minima and to delineate object borders despite noise, poor edge information and heterogeneous intensity profiles. Results for this algorithm are compared to standard techniques to demonstrate the method's robustness and accuracy. In the second method, the energy function is appropriately modified and minimized using a Hopfield neural network. Proper modifications in the definition of the bias of the neurons have been introduced to incorporate image characteristics. The method overcomes distortions in the expected image pattern, such as those due to the presence of calcium, and employs a specialized structure of the neural network and boundary correction schemes that are based on a priori knowledge about the vessel geometry. The presented method is very fast and has been evaluated using sequences of IVUS frames.
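
    The two energies being combined can be written in standard form (a sketch; the paper's exact functional may differ). The geodesic term attracts the curve C to image edges, while the region term compares intensities inside and outside:

        E_{geo}(C) = \int_0^1 g\big( |\nabla I(C(s))| \big)\, |C'(s)|\, ds, \qquad
        E_{reg}(C) = \int_{in(C)} (I - c_1)^2\, dx + \int_{out(C)} (I - c_2)^2\, dx,

    where g is a decreasing edge-indicator function and c_1, c_2 are the mean intensities of the two regions; a weighted sum E_geo + \mu E_reg minimized by gradient descent gives the combined local-global flow.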

  8. Clinical isolates of Enterococcus faecium exhibit strain-specific collagen binding mediated by Acm, a new member of the MSCRAMM family.

    Science.gov (United States)

    Nallapareddy, Sreedhar R; Weinstock, George M; Murray, Barbara E

    2003-03-01

    A collagen-binding adhesin of Enterococcus faecium, Acm, was identified. Acm shows 62% similarity to the Staphylococcus aureus collagen adhesin Cna over the entire protein and is more similar to Cna (60% and 75% similarity with Cna A and B domains respectively) than to the Enterococcus faecalis collagen-binding adhesin, Ace, which shares homology with Acm only in the A domain. Despite the detection of acm in 32 out of 32 E. faecium isolates, only 11 of these (all clinical isolates, including four vancomycin-resistant endocarditis isolates and seven other isolates) exhibited binding to collagen type I (CI). Although acm from three CI-binding vancomycin-resistant E. faecium clinical isolates showed 100% identity, analysis of acm genes and their promoter regions from six non-CI-binding strains identified deletions or mutations that introduced stop codons and/or IS elements within the gene or the promoter region in five out of six strains, suggesting that the presence of an intact functional acm gene is necessary for binding of E. faecium strains to CI. Recombinant Acm A domain showed specific and concentration-dependent binding to collagen, and this protein competed with E. faecium binding to immobilized CI. Consistent with the adherence phenotype and sequence data, probing with Acm-specific IgGs purified from anti-recombinant Acm A polyclonal rabbit serum confirmed the surface expression of Acm in three out of three collagen-binding clinical isolates of E. faecium tested, but in none of the strains with a non-functional pseudo acm gene. Introduction of a functional acm gene into two non-CI-binding natural acm mutant strains conferred a CI-binding phenotype, further confirming that native Acm is sufficient for the binding of E. faecium to CI. These results demonstrate that acm, which encodes a potential virulence factor, is functional only in certain infection-derived clinical isolates of E. faecium, and suggest that Acm is the primary adhesin responsible for the

  9. An Unexpected Location of the Arginine Catabolic Mobile Element (ACME) in a USA300-Related MRSA Strain

    DEFF Research Database (Denmark)

    Damkjær Bartels, Mette; Hansen, Lars H.; Boye, Kit

    2011-01-01

    In methicillin-resistant Staphylococcus aureus (MRSA), the arginine catabolic mobile element (ACME) was initially described in USA300 (t008-ST8), where it is located downstream of the staphylococcal cassette chromosome mec (SCCmec). A common health-care-associated MRSA in Copenhagen, Denmark (t024-ST8) is clonally related to USA300 and is frequently PCR-positive for the ACME-specific arcA gene. This study is the first to describe an ACME element upstream of the SCCmec in MRSA. By traditional SCCmec typing schemes, the SCCmec of t024-ST8 strain M1 carries SCCmec IVa, but full sequencing ... of SCCmec showed that M1 had two new direct repeats (DR) between the orfX gene and the J3 region of the SCCmec. The region between the orfX DR (DR1) and DR2 contained the ccrAB4 genes. An ACME II-like element was located between DR2 and DR3. The entire 26,468 bp sequence between DR1 and DR3 was highly similar to parts of the ACME

  10. On-resin conversion of Cys(Acm)-containing peptides to their corresponding Cys(Scm) congeners.

    Science.gov (United States)

    Mullen, Daniel G; Weigel, Benjamin; Barany, George; Distefano, Mark D

    2010-05-01

    The Acm protecting group for the thiol functionality of cysteine is removed under conditions (Hg²⁺) that are orthogonal to the acidic milieu used for global deprotection in Fmoc-based solid-phase peptide synthesis. This use of a toxic heavy metal for deprotection has limited the usefulness of Acm in peptide synthesis. The Acm group may be converted to the Scm derivative, which can then be used as a reactive intermediate for unsymmetrical disulfide formation; it may also be removed under mild reductive conditions to generate unprotected cysteine. Conversion of Cys(Acm)-containing peptides to their corresponding Cys(Scm) derivatives in solution is often problematic because the sulfenyl chloride reagent used for this conversion may react with the sensitive amino acids tyrosine and tryptophan. In this protocol, we report a method for on-resin Acm to Scm conversion that allows the preparation of Cys(Scm)-containing peptides under conditions that do not modify other amino acids.

  11. ADAPTIVE CONSERVATION (ACM) MODEL IN INCREASING FAMILY SUPPORT AND TREATMENT COMPLIANCE IN PATIENTS WITH PULMONARY TUBERCULOSIS IN THE SURABAYA CITY REGION

    Directory of Open Access Journals (Sweden)

    Siti Nur Kholifah

    2017-04-01

    Introduction: Tuberculosis (TB) in Indonesia is still a health problem, and the prevalence rate is high. Discontinued medication and lack of family support are among the causes. A number of strategies to overcome this have seemingly not succeeded. The roles and responsibilities of family nursing are crucial for improving the participation and motivation of individuals, families and communities in prevention, including of pulmonary tuberculosis. Unfortunately, suitable models for pulmonary tuberculosis care are currently unavailable. The combination of adaptation and conservation to complementarily improve family support and medication compliance is introduced in this study. Method: This research analyzed the Adaptive Conservation Model (ACM) in extending family support and treatment compliance. The modeling steps included model analysis, expert validation, field trial, implementation and recommendation of the output model. The research subjects comprised 15 families who implemented family Assistance and Supervision in Medication (ASM) and another 15 families with ACM. Result: The study revealed that ACM is better than ASM with respect to family support and medication compliance. It supports the role of the environment as an influential factor in individual health beliefs, values and decision making. Therefore, it is advised to apply ACM to enhance family support and the compliance of pulmonary TB patients. Discussion: Social and family support in the ACM group was obtained by developing interaction through communication. Family interaction is necessary to improve family support for pulmonary tuberculosis patients, and social support acts as a motivator to maintain medication compliance.

  12. Improving simulated long-term responses of vegetation to temperature and precipitation extremes using the ACME land model

    Science.gov (United States)

    Ricciuto, D. M.; Warren, J.; Guha, A.

    2017-12-01

    While carbon and energy fluxes in current Earth system models generally have reasonable instantaneous responses to extreme temperature and precipitation events, they often do not adequately represent the long-term impacts of these events. For example, simulated net primary productivity (NPP) may decrease during an extreme heat wave or drought, but may recover rapidly to pre-event levels following the conclusion of the extreme event. However, field measurements indicate that long-lasting damage to leaves and other plant components often occur, potentially affecting the carbon and energy balance for months after the extreme event. The duration and frequency of such extreme conditions is likely to shift in the future, and therefore it is critical for Earth system models to better represent these processes for more accurate predictions of future vegetation productivity and land-atmosphere feedbacks. Here we modify the structure of the Accelerated Climate Model for Energy (ACME) land surface model to represent long-term impacts and test the improved model against observations from experiments that applied extreme conditions in growth chambers. Additionally, we test the model against eddy covariance measurements that followed extreme conditions at selected locations in North America, and against satellite-measured vegetation indices following regional extreme events.

  13. Evidence for heterogeneity of astrocyte de-differentiation in vitro: astrocytes transform into intermediate precursor cells following induction of ACM from scratch-insulted astrocytes.

    Science.gov (United States)

    Yang, Hao; Qian, Xin-Hong; Cong, Rui; Li, Jing-wen; Yao, Qin; Jiao, Xi-Ying; Ju, Gong; You, Si-Wei

    2010-04-01

    Our previous study clearly demonstrated that mature astrocytes can undergo a de-differentiation process and further transform into pluripotent neural stem cells (NSCs), which might well arise from the effect of diffusible factors released from scratch-insulted astrocytes. However, neurospheres passaged from a single neurosphere derived from de-differentiated astrocytes possessed completely distinct differentiation behaviors, namely heterogeneity of differentiation. The heterogeneity in cell differentiation has become a crucial but elusive issue. In this study, we show that purified astrocytes can de-differentiate into intermediate precursor cells (IPCs) when scratch-insulted astrocyte-conditioned medium (ACM) is added to the culture, expressing the IPC markers NG2 and A2B5. Along with the number of NG2(+) and A2B5(+) cells, the percentage of proliferative cells, as labeled with BrdU, progressively increased with prolonged culture periods ranging from 1 to 10 days. Meanwhile, the protein level of A2B5 in the cells also increased significantly. These results reveal that not all astrocytes de-differentiate fully into NSCs directly when induced by ACM; rather, they generate intermediate or more restricted precursor cells that might undergo progressive de-differentiation to generate NSCs.

  14. A model for the distribution channels planning process

    NARCIS (Netherlands)

    Neves, M.F.; Zuurbier, P.; Campomar, M.C.

    2001-01-01

    A review of the existing literature reveals several models (sequences of steps) for companies that want to plan distribution channels. None of these models draws strongly on transaction cost economics, which opens the possibility of elaborating a "distribution channels planning model" with these

  15. Distributed Prognostic Health Management with Gaussian Process Regression

    Science.gov (United States)

    Saha, Sankalita; Saha, Bhaskar; Saxena, Abhinav; Goebel, Kai Frank

    2010-01-01

    Distributed prognostics architecture design is an enabling step for efficient implementation of health management systems. A major challenge encountered in such design is the formulation of optimal distributed prognostics algorithms. In this paper, we present a distributed GPR-based prognostics algorithm whose target platform is a wireless sensor network. In addition to the challenges encountered in a distributed implementation, a wireless network poses constraints on communication patterns, thereby making the problem more challenging. The application used to demonstrate our new algorithms is battery prognostics. In order to present the trade-offs among different prognostic approaches, we present a comparison with a distributed implementation of particle-filter-based prognostics for the same battery data.
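
    A minimal single-node sketch of GPR-based battery prognostics (the paper's contribution is the distributed, wireless-sensor-network formulation, which partitions this computation across nodes; the fade curve and end-of-life threshold below are synthetic assumptions):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Synthetic capacity-fade history (cycle index vs. capacity in Ah); real
        # data would come from the battery test sets used in the paper.
        rng = np.random.default_rng(1)
        cycles = np.arange(0.0, 200.0, 5.0).reshape(-1, 1)
        capacity = 2.0 * np.exp(-cycles.ravel() / 400.0) + 0.01 * rng.normal(size=len(cycles))

        gpr = GaussianProcessRegressor(kernel=RBF(length_scale=50.0) + WhiteKernel(1e-4),
                                       normalize_y=True).fit(cycles, capacity)

        # Predict forward and look for the first crossing of a hypothetical
        # end-of-life threshold (70% of the 2.0 Ah nominal capacity); note that
        # the extrapolation is only trustworthy near the training data.
        future = np.arange(200.0, 600.0, 10.0).reshape(-1, 1)
        mean, std = gpr.predict(future, return_std=True)
        crossings = future[mean < 0.7 * 2.0]
        print(crossings[0] if crossings.size else "no EOL crossing predicted in horizon")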

  16. Comparison of the chemical behaviour of humanized ACMS VS. Human IGG radiolabeled with 99mTc

    International Nuclear Information System (INIS)

    Rivero Santamaria, Alejandro; Zayas Crespo, Francisco; Mesa Duennas, Niurka; Castillo Vitloch, Adolfo J.

    2003-01-01

    The purpose of this work is to compare the chemical behaviour of humanized AcMs vs. human IgG radiolabelled with 99mTc. To this end, three immunoglobulins were analyzed: human IgG, the humanized monoclonal antibody R3 (AcM-R3h) and the humanized monoclonal antibody T1. The results obtained reveal slight differences in the behaviour of these immunoglobulins upon labelling with 99mTc, which shows differences in the chemical behaviour of these proteins. Although in theory the modifications made to AcMs in order to humanize them must not affect their chemical behaviour, the data obtained indicate that the conditions for their radiolabelling should not be extrapolated from other proteins; on the contrary, particular procedures should be elaborated for each AcM-h.

  17. AIRSAR Automated Web-based Data Processing and Distribution System

    Science.gov (United States)

    Chu, Anhua; vanZyl, Jakob; Kim, Yunjin; Lou, Yunling; Imel, David; Tung, Wayne; Chapman, Bruce; Durden, Stephen

    2005-01-01

    In this paper, we present an integrated, end-to-end synthetic aperture radar (SAR) processing system that accepts data processing requests, submits processing jobs, performs quality analysis, delivers and archives processed data. This fully automated SAR processing system utilizes database and internet/intranet web technologies to allow external users to browse and submit data processing requests and receive processed data. It is a cost-effective way to manage a robust SAR processing and archival system. The integration of these functions has reduced operator errors and increased processor throughput dramatically.

  18. Standard services for the capture, processing, and distribution of packetized telemetry data

    Science.gov (United States)

    Stallings, William H.

    1989-01-01

    Standard functional services for the capture, processing, and distribution of packetized data are discussed, with particular reference to the future implementation of packet processing systems such as those for Space Station Freedom. The major functions fall under the following categories: input processing, packet processing, and output processing. A functional block diagram of a packet data processing facility is presented, showing the distribution of the various processing functions as well as the primary data flow through the facility.
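
    The three functional categories map naturally onto a staged pipeline. A sketch with illustrative placeholder stages (not the actual Space Station Freedom design):

        from dataclasses import dataclass

        @dataclass
        class Frame:
            raw: bytes

        def input_processing(stream):
            """Capture: frame sync, decode, strip transport headers."""
            for raw in stream:
                yield Frame(raw)

        def packet_processing(frames):
            """Reassemble source packets, check sequence counts, sort by APID."""
            for f in frames:
                yield {"apid": f.raw[0], "payload": f.raw[1:]}

        def output_processing(packets):
            """Distribute: route each packet to its destination or archive."""
            for p in packets:
                print(f"route apid={p['apid']} ({len(p['payload'])} bytes)")

        output_processing(packet_processing(input_processing([b"\x01abc", b"\x02xyz"])))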

  19. Distributed collaborative team effectiveness: measurement and process improvement

    Science.gov (United States)

    Wheeler, R.; Hihn, J.; Wilkinson, B.

    2002-01-01

    This paper describes a measurement methodology developed for assessing the readiness, and identifying opportunities for improving the effectiveness, of distributed collaborative design teams preparing to conduct a concurrent design session.

  20. Clinical Laboratory Data Management: A Distributed Data Processing Solution

    OpenAIRE

    Levin, Martin; Morgner, Raymond; Packer, Bernice

    1980-01-01

    Two turn-key systems, one for patient registration and the other for the clinical laboratory have been installed and linked together at the Hospital of the University of Pennsylvania, forming the nucleus of an evolving distributed Hospital Information System.

  1. Radiation chemistry of polymer degradation processes: molecular weight distribution effects

    International Nuclear Information System (INIS)

    Viswanathan, N.S.

    1976-01-01

    The molecular weight distributions of poly(methyl methacrylate) irradiated at 15 and 25 MeV with electron beams were investigated. The experimental values for the effect of chain scissions on the dispersivity agreed well with theoretical predictions

  2. Distributed Multiscale Data Analysis and Processing for Sensor Networks

    National Research Council Canada - National Science Library

    Wagner, Raymond; Sarvotham, Shriram; Choi, Hyeokho; Baraniuk, Richard

    2005-01-01

    .... Second, the communication overhead of multiscale algorithms can become prohibitive. In this paper, we take a first step in addressing both shortcomings by introducing two new distributed multiresolution transforms...

  3. 40 CFR 761.80 - Manufacturing, processing and distribution in commerce exemptions.

    Science.gov (United States)

    2010-07-01

    Title 40 (Protection of Environment), revised as of 2010-07-01: MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS; Exemptions; § 761.80, Manufacturing, processing and distribution in commerce exemptions, covering ... any change in the manner of processing and distributing, importing (manufacturing), or exporting of ...

  5. Prediction of residence time distributions in food processing machinery

    DEFF Research Database (Denmark)

    Karlson, Torben; Friis, Alan; Szabo, Peter

    1996-01-01

    The velocity field in a co-rotating disc scraped surface heat exchanger (CDHE) is calculated using a finite element method. The residence time distribution for the CDHE is then obtained by tracing particles introduced in the inlet.

  6. Frequency distributions from birth, death, and creation processes.

    Science.gov (United States)

    Bartley, David L; Ogden, Trevor; Song, Ruiguang

    2002-01-01

    The time-dependent frequency distribution of groups of individuals versus group size was investigated within a continuum approximation, assuming a simplified individual growth, death and creation model. The analogy of the system to a physical fluid exhibiting both convection and diffusion was exploited in obtaining various solutions to the distribution equation. A general solution was approximated through the application of a Green's function. More specific exact solutions were also found to be useful. The solutions were continually checked against the continuum approximation through extensive simulation of the discrete system. Over limited ranges of group size, the frequency distributions were shown to closely exhibit a power-law dependence on group size, as found in many realizations of this type of system, ranging from colonies of mutated bacteria to the distribution of surnames in a given population. As an example, the modeled distributions were successfully fit to the distribution of surnames in several countries by adjusting the parameters specifying growth, death and creation rates.
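
    In the continuum approximation, the fluid analogy described above corresponds to a convection-diffusion equation for the frequency f(x, t) of groups of size x (a sketch consistent with the description; the paper's coefficients follow from its specific growth, death and creation rates):

        \partial_t f(x,t) = -\partial_x\big[ v(x)\, f(x,t) \big]
                            + \tfrac{1}{2}\,\partial_x^2\big[ D(x)\, f(x,t) \big] + s(x,t),

    where v(x) is the net drift from growth minus death, D(x) the corresponding fluctuation (diffusion) coefficient, and s(x, t) a source term for the creation of new groups; Green's-function methods then yield the general solution, with power-law behavior emerging over limited ranges of group size.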

  7. AcmB Is an S-Layer-Associated β-N-Acetylglucosaminidase and Functional Autolysin in Lactobacillus acidophilus NCFM.

    Science.gov (United States)

    Johnson, Brant R; Klaenhammer, Todd R

    2016-09-15

    Autolysins, also known as peptidoglycan hydrolases, are enzymes that hydrolyze specific bonds within bacterial cell wall peptidoglycan during cell division and daughter cell separation. Within the genome of Lactobacillus acidophilus NCFM, there are 11 genes encoding proteins with peptidoglycan hydrolase catalytic domains, 9 of which are predicted to be functional. Notably, 5 of the 9 putative autolysins in L. acidophilus NCFM are S-layer-associated proteins (SLAPs), noncovalently colocalized along with the surface (S)-layer at the cell surface. One of these SLAPs, AcmB, a β-N-acetylglucosaminidase encoded by the gene lba0176 (acmB), was selected for functional analysis. In silico analysis revealed that acmB orthologs are found exclusively in S-layer-forming species of Lactobacillus. Chromosomal deletion of acmB resulted in aberrant cell division, autolysis, and autoaggregation. Complementation of acmB in the ΔacmB mutant restored the wild-type phenotype, confirming the role of this SLAP in cell division. The absence of AcmB within the exoproteome had a pleiotropic effect on the extracellular proteins covalently and noncovalently bound to the peptidoglycan, which likely led to the observed decrease in the binding capacity of the ΔacmB strain for mucin and the extracellular matrix components fibronectin, laminin, and collagen in vitro. These data suggest a functional association between the S-layer and the multiple autolysins noncovalently colocalized at the cell surface of L. acidophilus NCFM and other S-layer-producing Lactobacillus species. Lactobacillus acidophilus is one of the most widely used probiotic microbes, incorporated in many dairy foods and dietary supplements. This organism produces a surface (S)-layer, which is a self-assembling crystalline array found as the outermost layer of the cell wall. The S-layer, along with colocalized associated proteins, is an important mediator of probiotic activity through intestinal adhesion and modulation of the mucosal immune

  8. Novel active contour model based on multi-variate local Gaussian distribution for local segmentation of MR brain images

    Science.gov (United States)

    Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong

    2017-12-01

    The active contour model (ACM) has been one of the most widely utilized methods in magnetic resonance (MR) brain image segmentation because of its ability to capture topology changes. However, most existing ACMs consider only single-slice information in MR brain image data, i.e., the information used in ACM-based segmentation methods is extracted from only one slice of the MR brain image. This cannot take full advantage of the information in adjacent slices and cannot satisfy the local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve this problem; it is based on a multi-variate local Gaussian distribution and combines information from adjacent slices in the MR brain image data. The segmentation is finally achieved by maximizing the likelihood estimate. Experiments demonstrate the advantages of the proposed ACM over the single-slice ACM in local segmentation of MR brain image series.
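
    One way to write such a local multivariate-Gaussian criterion (a sketch, not necessarily the paper's exact functional): for a localizing kernel \omega and regions i = 1, 2 separated by the contour,

        E = -\sum_{i=1}^{2} \int \int \omega(x - y)\, \log p_{i,x}\big(\mathbf{I}(y)\big)\, M_i(y)\, dy\, dx,

    where \mathbf{I}(y) stacks the intensities of the current and adjacent slices into a vector, p_{i,x} is a local multivariate Gaussian with its own mean vector and covariance matrix around the point x, and M_i selects the inside/outside of the contour; segmentation proceeds by maximizing this local likelihood, i.e. minimizing E.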

  9. Can Pearlite form Outside of the Hultgren Extrapolation of the Ae3 and Acm Phase Boundaries?

    Science.gov (United States)

    Aranda, M. M.; Rementeria, R.; Capdevila, C.; Hackenberg, R. E.

    2016-02-01

    It is usually assumed that ferrous pearlite can form only when the average austenite carbon concentration C0 lies between the extrapolated Ae3 (γ/α) and Acm (γ/θ) phase boundaries (the "Hultgren extrapolation"). This "mutual supersaturation" criterion for cooperative lamellar nucleation and growth is critically examined from a historical perspective and in light of recent experiments on coarse-grained hypoeutectoid steels which show pearlite formation outside the Hultgren extrapolation. This criterion, at least as interpreted in terms of the average austenite composition, is shown to be unnecessarily restrictive. The carbon fluxes evaluated from Brandt's solution are sufficient to allow pearlite growth both inside and outside the Hultgren extrapolation. As for the feasibility of the nucleation events leading to pearlite, the only criterion is that there are some local regions of austenite inside the Hultgren extrapolation, even if the average austenite composition is outside.

  10. Additive Construction with Mobile Emplacement (ACME) / Automated Construction of Expeditionary Structures (ACES) Materials Delivery System (MDS)

    Science.gov (United States)

    Mueller, R. P.; Townsend, I. I.; Tamasy, G. J.; Evers, C. J.; Sibille, L. J.; Edmunson, J. E.; Fiske, M. R.; Fikes, J. C.; Case, M.

    2018-01-01

    The purpose of the Automated Construction of Expeditionary Structures, Phase 3 (ACES 3) project is to incorporate the Liquid Goods Delivery System (LGDS) into the Dry Goods Delivery System (DGDS) structure to create an integrated and automated Materials Delivery System (MDS) for 3D printing structures with ordinary Portland cement (OPC) concrete. ACES 3 is a prototype for 3-D printing barracks for soldiers in forward bases, here on Earth. The LGDS supports ACES 3 by storing liquid materials, mixing recipe batches of liquid materials, and working with the Dry Goods Feed System (DGFS) previously developed for ACES 2, combining the materials that are eventually extruded out of the print nozzle. Automated Construction of Expeditionary Structures, Phase 3 (ACES 3) is a project led by the US Army Corps of Engineers (USACE) and supported by NASA. The equivalent 3D printing system for construction in space is designated Additive Construction with Mobile Emplacement (ACME) by NASA.

  11. Autonomic Cluster Management System (ACMS): A Demonstration of Autonomic Principles at Work

    Science.gov (United States)

    Baldassari, James D.; Kopec, Christopher L.; Leshay, Eric S.; Truszkowski, Walt; Finkel, David

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of achieving significant computational capabilities for high-performance computing applications, while simultaneously affording the ability to increase that capability simply by adding more (inexpensive) processors. However, the task of manually managing and configuring a cluster quickly becomes impossible as the cluster grows in size. Autonomic computing is a relatively new approach to managing complex systems that can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management.

  12. Ionization processes in a transient hollow cathode discharge before electric breakdown: statistical distribution

    International Nuclear Information System (INIS)

    Zambra, M.; Favre, M.; Moreno, J.; Wyndham, E.; Chuaqui, H.; Choi, P.

    1998-01-01

    The charge formation processes in the hollow cathode region (HCR) of a transient hollow cathode discharge have been studied in the final phase before electric breakdown. The statistical distributions that describe the different ionization processes have been represented by Gaussian distributions. Nevertheless, a better representation of these distributions was observed when the pressure is near a minimum value, just before breakdown.

  13. The redesign of a warranty distribution network with recovery processes

    NARCIS (Netherlands)

    Ashayeri, J.; Ma, N.; Sotirov, R.

    A warranty distribution network provides aftersales warranty services to customers and resembles a closed-loop supply chain network with specific challenges for reverse flows management like recovery, repair, and reflow of refurbished products. We present here a nonlinear and nonconvex mixed integer

  14. Joint Probability Distributions for a Class of Non-Markovian Processes

    OpenAIRE

    Baule, A.; Friedrich, R.

    2004-01-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H.C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single-time probability distributions to the case of N-time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fractional derivatives.

  15. Parallel and distributed processing in two SGBDS: A case study

    Directory of Open Access Journals (Sweden)

    Francisco Javier Moreno

    2017-04-01

    Context: One of the strategies for managing large volumes of data is distributed and parallel computing. Among the tools that support these capabilities are some database management systems (DBMS), such as Oracle, DB2, and SQL Server. Method: In this paper we present a case study in which we evaluate the performance of an SQL query in two of these DBMS. The evaluation is done through various forms of data distribution in a computer network with different degrees of parallelism. Results: The tests of the SQL query revealed the performance differences between the two DBMS analyzed; however, more thorough testing and a wider variety of queries are needed. Conclusions: The differences in performance between the two DBMSs analyzed show that when evaluating this aspect, it is necessary to consider the particularities of each DBMS and the degree of parallelism of the queries.

  16. Process planning and accuracy distribution of marine power plant modularization

    Directory of Open Access Journals (Sweden)

    ZHANG Jinguo

    2018-02-01

    [Objectives] Modular shipbuilding can shorten the design and construction cycle, lower production costs and improve product quality, but it requires greater shipbuilding capability, especially for the installation of power plants. Because of such characteristics of modular shipbuilding as high-precision docking links, a long equipment installation chain and numerous docking interfaces, docking installation is very difficult; high docking deviation and low docking-installation accuracy lead to abnormal vibration of the equipment. [Methods] In order to solve this problem, on the basis of domestic shipbuilding capability, numerical calculation methods are used to analyze the accuracy distribution of modular installation. [Results] The results show that the accuracy distribution over the different docking links is reasonable and feasible, and that the setting of the adjusting allowance matches the requirements of shipbuilding. [Conclusions] This method provides a reference for the modular construction of marine power plants.

  17. Distributed automatic control of technological processes in conditions of weightlessness

    Science.gov (United States)

    Kukhtenko, A. I.; Merkulov, V. I.; Samoylenko, Y. I.; Ladikov-Royev, Y. P.

    1986-01-01

    Some problems associated with the automatic control of liquid metal and plasma systems under conditions of weightlessness are examined, with particular reference to the problem of stability of liquid equilibrium configurations. The theoretical fundamentals of automatic control of processes in electrically conducting continuous media are outlined, and means of using electromagnetic fields for simulating technological processes in a space environment are discussed.

  18. Processes determining the marine alkalinity and carbonate saturation distributions

    OpenAIRE

    B. R. Carter; J. R. Toggweiler; R. M. Key; J. L. Sarmiento

    2014-01-01

    We introduce a composite tracer, Alk*, that has a global distribution primarily determined by CaCO3 precipitation and dissolution. Alk* also highlights riverine alkalinity plumes that are due to dissolved calcium carbonate from land. We estimate that the Arctic receives approximately twice as much riverine alkalinity per unit area as the Atlantic, and 8 times that of the other oceans. Riverine inputs broadly elevate Alk* in the Arctic surface and particularly near ri...

  19. Asymmetric photoelectron angular distributions from interfering photoionization processes

    International Nuclear Information System (INIS)

    Yin, Y.; Chen, C.; Elliott, D.S.; Smith, A.V.

    1992-01-01

    We have measured asymmetric photoelectron angular distributions for atomic rubidium. Ionization is induced by a one-photon interaction with 280 nm light and by a two-photon interaction with 560 nm light. Interference between the even- and odd-parity free-electron wave functions allows us to control the direction of maximum electron flux by varying the relative phase of the two laser fields
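
    The effect can be sketched as one- vs. two-photon interference (a generic form from coherent control, not the paper's exact amplitudes): the outgoing electron is a coherent sum of an odd-parity one-photon partial wave and an even-parity two-photon partial wave,

        I(\theta) \propto \left| a_1 e^{i\phi_1} f_{odd}(\theta) + a_2 e^{2i\phi_2} f_{even}(\theta) \right|^2
                   = |a_1 f_{odd}|^2 + |a_2 f_{even}|^2
                     + 2 a_1 a_2 f_{odd}(\theta)\, f_{even}(\theta) \cos\Delta\phi,

    with \Delta\phi = \phi_1 - 2\phi_2 set by the relative phase of the 280 nm and 560 nm fields. Because f_odd changes sign under \theta \to \pi - \theta while f_even does not, the cross term is asymmetric, and tuning \Delta\phi steers the direction of maximum electron flux.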

  20. A functional collagen adhesin gene, acm, in clinical isolates of Enterococcus faecium correlates with the recent success of this emerging nosocomial pathogen.

    Science.gov (United States)

    Nallapareddy, Sreedhar R; Singh, Kavindra V; Okhuysen, Pablo C; Murray, Barbara E

    2008-09-01

    Enterococcus faecium recently evolved from a generally avirulent commensal into a multidrug-resistant health care-associated pathogen causing difficult-to-treat infections, but little is known about the factors responsible for this change. We previously showed that some E. faecium strains express a cell wall-anchored collagen adhesin, Acm. Here we analyzed 90 E. faecium isolates (99% acm(+)) and found that the Acm protein was detected predominantly in clinically derived isolates, while the acm gene was present as a transposon-interrupted pseudogene in 12 of 47 isolates of nonclinical origin. A highly significant association between clinical (versus fecal or food) origin and collagen adherence was observed, with Acm detected by whole-cell enzyme-linked immunosorbent assay and flow cytometry. Thirty-seven of 41 sera from patients with E. faecium infections showed reactivity with recombinant Acm, while only 4 of 30 community and hospitalized patient control group sera reacted; antibodies against Acm were present in all 14 E. faecium endocarditis patient sera. Although pulsed-field gel electrophoresis indicated that multiple strains expressed collagen adherence, multilocus sequence typing demonstrated that the majority of collagen-adhering isolates, as well as 16 of 17 endocarditis isolates, are part of the hospital-associated E. faecium genogroup referred to as clonal complex 17 (CC17), which has emerged globally. Taken together, our findings support the hypothesis that Acm has contributed to the emergence of E. faecium and CC17 in nosocomial infections.

  1. HT 2011 : Proceedings of the 22nd ACM Conference on Hypertext and Hypermedia, Eindhoven, The Netherlands, June 6-9, 2011

    NARCIS (Netherlands)

    De Bra, P.M.E.; Gronbak, K.

    2011-01-01

    Foreword. It is our great pleasure to welcome you to ACM Hypertext 2011, the 22nd ACM Conference on Hypertext and Hypermedia, and to the "Land of the Innovator", the campus of the Eindhoven University of Technology, located in the "city of light" Eindhoven, the Netherlands. Hypertext is an exciting

  2. Parallel Hyperspectral Image Processing on Distributed Multi-Cluster Systems

    NARCIS (Netherlands)

    Liu, F.; Seinstra, F.J.; Plaza, A.J.

    2011-01-01

    Computationally efficient processing of hyperspectral image cubes can be greatly beneficial in many application domains, including environmental modeling, risk/hazard prevention and response, and defense/security. As individual cluster computers often cannot satisfy the computational demands of

  3. Integration of distributed computing into the drug discovery process.

    Science.gov (United States)

    von Korff, Modest; Rufener, Christian; Stritt, Manuel; Freyss, Joel; Bär, Roman; Sander, Thomas

    2011-02-01

    Grid computing offers an opportunity to gain massive computing power at low cost. We give a short introduction into the drug discovery process and exemplify the use of grid computing for image processing, docking and 3D pharmacophore descriptor calculations. The principle of a grid and its architecture are briefly explained. More emphasis is laid on the issues related to a company-wide grid installation and on embedding the grid into the research process. The future of grid computing in drug discovery is discussed in the expert opinion section. Most needed, besides reliable algorithms to predict compound properties, is seamless embedding of the grid into the discovery process. User-friendly access to powerful algorithms without restrictions such as a limited number of licenses has to be the goal of grid computing in drug discovery.

  4. Analysis of the logistics processes in the wine distribution

    OpenAIRE

    Slavkovský, Matúš

    2011-01-01

    This Master's thesis addresses the importance of logistics in the retail business and the importance of reducing logistics costs. It includes theoretical knowledge as well as an analysis of the relevant markets that produce and consume wine in the largest quantities. The thesis focuses on an analysis of the logistical processes and costs of an e-shop. Based on this analysis, measures to improve the company's logistics process are proposed. The goal of the Master's thesis is...

  5. 80 A/cm2 electron beams from metal targets irradiated by KrCl and XeCl excimer lasers

    Science.gov (United States)

    Beloglazov, A.; Martino, M.; Nassisi, V.

    1996-05-01

    Due to the growing demand for high-current and long-duration electron-beam devices, laser electron sources were investigated in our laboratory. Experiments on electron-beam generation and propagation from aluminium and copper targets illuminated by XeCl (308 nm) and KrCl (222 nm) excimer lasers were carried out under plasma ignition due to laser irradiation. This plasma supplied a spontaneous accelerating electric field of about 370 kV/m without an external accelerating voltage. By applying the modified one-dimensional Poisson equation, we computed the expected current and we also estimated the plasma concentration during the accelerating process. At 40 kV of accelerating voltage, an output current pulse of about 80 A/cm2 was detected from an Al target irradiated by the shorter wavelength laser.

  6. Prescription-induced jump distributions in multiplicative Poisson processes.

    Science.gov (United States)

    Suweis, Samir; Porporato, Amilcare; Rinaldo, Andrea; Maritan, Amos

    2011-06-01

    Generalized Langevin equations (GLE) with multiplicative white Poisson noise pose the usual prescription dilemma leading to different evolution equations (master equations) for the probability distribution. Contrary to the case of multiplicative Gaussian white noise, the Stratonovich prescription does not correspond to the well-known midpoint (or any other intermediate) prescription. By introducing an inertial term in the GLE, we show that the Itô and Stratonovich prescriptions naturally arise depending on two time scales, one induced by the inertial term and the other determined by the jump event. We also show that, when the multiplicative noise is linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We apply these results to a recently proposed stochastic model describing the dynamics of primary soil salinization, in which the salt mass balance within the soil root zone requires the analysis of different prescriptions arising from the resulting stochastic differential equation forced by multiplicative white Poisson noise, the features of which are tailored to the characters of the daily precipitation. A method is finally suggested to infer the most appropriate prescription from the data.
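
    A hedged numerical sketch of the prescription dilemma (the model and all parameters are my own, loosely echoing the input-loss balance of the salinization example): a GLE dx/dt = a - k*x + x*xi(t), with xi(t) white Poisson noise of rate lam and amplitude h. Under an Ito-like reading the jump is x -> x*(1 + h), the coefficient being evaluated just before the jump; under a Stratonovich/Marcus-like reading the jump is integrated through, x -> x*exp(h), and the stationary means differ accordingly.

        import math, random

        RNG = random.Random(0)

        def simulate(prescription, a=1.0, k=1.0, lam=2.0, h=0.3, T=1000.0, dt=1e-3):
            # time-average of x under dx/dt = a - k*x plus multiplicative Poisson jumps
            x, acc, steps = 1.0, 0.0, int(T / dt)
            for _ in range(steps):
                x += (a - k * x) * dt             # deterministic input-loss balance
                if RNG.random() < lam * dt:       # a jump of the Poisson noise occurs
                    if prescription == "ito":
                        x *= 1.0 + h              # coefficient taken just before the jump
                    else:
                        x *= math.exp(h)          # Stratonovich/Marcus: jump integrated through
                acc += x
            return acc / steps

        for p, h_eff in (("ito", 0.3), ("stratonovich", math.exp(0.3) - 1.0)):
            print(f"{p:>13s}: simulated mean ~ {simulate(p):.2f}, "
                  f"theory a/(k - lam*h_eff) = {1.0 / (1.0 - 2.0 * h_eff):.2f}")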

  8. The certification process of the LHCb distributed computing software

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    DIRAC contains around 200 thousand lines of Python code, and LHCbDIRAC around 120 thousand. The testing process for each release consists of a number of steps that include static code analysis, unit tests, integration tests, regression tests and system tests. We dubbed the full p...

  9. Novel scaling of the multiplicity distributions in the sequential fragmentation process and in the percolation

    International Nuclear Information System (INIS)

    Botet, R.

    1996-01-01

    A novel scaling of the multiplicity distributions is found in the shattering phase of the sequential fragmentation process with inhibition. The same scaling law is shown to hold in the percolation process. (author)

  10. Joint probability distributions for a class of non-Markovian processes.

    Science.gov (United States)

    Baule, A; Friedrich, R

    2005-02-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H. C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single-time probability distributions to the case of N -time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fractional time derivatives reflecting the non-Markovian character of the process.

  11. Log-normal distribution from a process that is not multiplicative but is additive.

    Science.gov (United States)

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
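
    An illustrative simulation of this observation (the sample sizes and the choice of log-normal summands are my assumptions): the skewness of a sum of positive, right-skewed variables stays close to that of a moment-fitted log-normal even for many summands, whereas a Gaussian would have skewness 0.

        import math, random, statistics

        def skewness(xs):
            m, s = statistics.fmean(xs), statistics.pstdev(xs)
            return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

        rng = random.Random(1)
        for n in (1, 10, 100):                               # number of summands
            sums = [sum(rng.lognormvariate(0.0, 1.0) for _ in range(n))
                    for _ in range(20000)]
            # skewness implied by a log-normal fitted to the moments of log(sum)
            s2 = statistics.pvariance([math.log(s) for s in sums])
            ln_skew = (math.exp(s2) + 2.0) * math.sqrt(math.exp(s2) - 1.0)
            print(f"n={n:3d}: empirical skew = {skewness(sums):5.2f}, "
                  f"fitted log-normal skew = {ln_skew:5.2f}, Gaussian skew = 0")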

  12. Investigation of Thermal Stress Distribution in Laser Spot Welding Process

    OpenAIRE

    Osamah F. Abdulateef

    2009-01-01

    The objective of this paper was to study the laser spot welding process of low carbon steel sheet. The investigations were based on analytical and finite element analyses. The analytical analysis was focused on a consistent set of equations representing the interaction of the laser beam with materials. The numerical analysis was based on a 3-D finite element analysis of heat flow during laser spot welding, taking into account the temperature dependence of the physical properties and latent heat of transf...

  13. A trial of distributed portable data acquisition and processing system implementation: the qdpb - data processing with branchpoints

    International Nuclear Information System (INIS)

    Gritsaj, K.I.; Isupov, A.Yu.

    2001-01-01

    A trial implementation of the distributed portable data acquisition and processing system qdpb is presented. Experimental-setup-dependent data and hardware-dependent code are separated from the generic part of the qdpb system. The implementation of the generic part is described

  14. Distributed Processing System for Restoration of Electric Power Distribution Network Using Two-Layered Contract Net Protocol

    Science.gov (United States)

    Kodama, Yu; Hamagami, Tomoki

    A distributed processing system for the restoration of electric power distribution networks using a two-layered CNP is proposed. The goal of this study is to develop a restoration system suited to the future power network with distributed generators. The novelty of this study is that the two-layered CNP is applied in a practical distributed computing environment. The two-layered CNP has two classes of agents in the network, named field agents and operating agents. In order to avoid conflicts between tasks, an operating agent controls the privilege for managers to send task announcement messages in the CNP. This technique realizes coordination between agents that work asynchronously in parallel with each other. Moreover, this study implements the distributed processing system using a de facto standard multi-agent framework, JADE (Java Agent DEvelopment Framework). This study conducts simulation experiments on power distribution network restoration and compares the proposed system with the previous system. The results confirm the effectiveness of the proposed system.
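
    A toy sketch of the two-layered contract-net interaction (my own simplification in plain Python, not the authors' JADE implementation): an operating agent serializes the privilege to announce tasks so that concurrent managers cannot issue conflicting announcements, and field agents bid on the announced restoration task.

        import random

        class FieldAgent:
            def __init__(self, name):
                self.name = name
            def bid(self, task):
                # stand-in cost metric for handling this restoration task
                return random.uniform(1.0, 10.0)

        class OperatingAgent:
            def __init__(self, agents):
                self.agents = agents
                self.privilege_holder = None      # only one announcer at a time
            def announce(self, manager, task):
                if self.privilege_holder not in (None, manager):
                    return None                   # privilege busy: no conflicting announcements
                self.privilege_holder = manager   # grant the announcement privilege
                bids = {a.name: a.bid(task) for a in self.agents if a is not manager}
                winner = min(bids, key=bids.get)  # award the task to the cheapest bidder
                self.privilege_holder = None      # release the privilege
                return winner, bids[winner]

        random.seed(0)
        field_agents = [FieldAgent(f"feeder-{i}") for i in range(4)]
        operator = OperatingAgent(field_agents)
        print(operator.announce(field_agents[0], task="restore de-energized section"))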

  15. The Distribution of the Interval between Events of a Cox Process with Shot Noise Intensity

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2008-01-01

    Full Text Available Applying piecewise deterministic Markov process theory, the probability generating function of a Cox process incorporating a shot noise process as the claim intensity is obtained. We also derive the Laplace transform of the distribution of the shot noise process at claim jump times, using the stationarity assumption of the shot noise process. Based on this Laplace transform and on the probability generating function of a Cox process with shot noise intensity, we obtain the distribution of the interval between events of a Cox process with shot noise intensity for insurance claims and its moments, that is, the mean and variance.
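
    A simulation sketch of such a Cox process (the discretization and parameter values are my assumptions): primary shocks arrive at Poisson rate rho with exponentially distributed sizes and decay at rate delta, and claims are generated by thinning against the current intensity; the empirical interval moments can then be compared with the paper's analytical ones.

        import math, random, statistics

        def simulate_cox(T=5000.0, dt=1e-2, rho=0.5, delta=1.0, jump_mean=2.0,
                         rng=random.Random(2)):
            lam, t, claims = 0.0, 0.0, []
            while t < T:
                if rng.random() < rho * dt:                 # primary shock arrives
                    lam += rng.expovariate(1.0 / jump_mean) # shot of random size
                if rng.random() < lam * dt:                 # claim of the Cox process
                    claims.append(t)
                lam *= math.exp(-delta * dt)                # shot-noise decay
                t += dt
            return claims

        claims = simulate_cox()
        gaps = [b - a for a, b in zip(claims, claims[1:])]
        print(f"{len(claims)} claims; interval mean = {statistics.fmean(gaps):.3f}, "
              f"variance = {statistics.pvariance(gaps):.3f} "
              f"(a Poisson process would give variance = mean**2)")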

  16. On process capability and system availability analysis of the inverse Rayleigh distribution

    Directory of Open Access Journals (Sweden)

    Sajid Ali

    2015-04-01

    Full Text Available In this article, process capability and system availability analyses are discussed for the inverse Rayleigh lifetime distribution. A Bayesian approach with a conjugate gamma prior is adopted for the analysis. Different types of loss functions are considered to find Bayes estimates of the process capability and system availability. A simulation study is conducted to compare the different loss functions.
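
    A minimal sketch of the conjugate machinery described above (parameter names and values are mine): for the inverse Rayleigh distribution F(x) = exp(-lam/x**2), the likelihood is proportional to lam**n * exp(-lam * sum(1/x_i**2)), so a Gamma(a, b) prior on lam yields a Gamma(a + n, b + sum(1/x_i**2)) posterior; under squared-error loss the Bayes estimate is the posterior mean, and the posterior-mean availability follows by Monte Carlo.

        import math, random

        def sample_inverse_rayleigh(lam, n, rng):
            # inverse-CDF sampling from F(x) = exp(-lam/x**2): x = sqrt(-lam/ln(U))
            return [math.sqrt(-lam / math.log(rng.random())) for _ in range(n)]

        rng = random.Random(3)
        true_lam, a, b = 2.0, 1.0, 1.0     # "true" parameter and Gamma(a, b) prior
        data = sample_inverse_rayleigh(true_lam, 50, rng)
        a_post = a + len(data)             # conjugate gamma update
        b_post = b + sum(1.0 / x ** 2 for x in data)
        lam_hat = a_post / b_post          # posterior mean: Bayes estimate under squared-error loss
        t = 1.5                            # mission time for availability R(t) = 1 - exp(-lam/t**2)
        draws = [rng.gammavariate(a_post, 1.0 / b_post) for _ in range(10000)]
        avail = sum(1.0 - math.exp(-lam / t ** 2) for lam in draws) / len(draws)
        print(f"Bayes estimate of lambda: {lam_hat:.3f}; "
              f"posterior-mean availability at t={t}: {avail:.3f}")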

  17. Equivalence of functional limit theorems for stationary point processes and their Palm distributions

    NARCIS (Netherlands)

    Nieuwenhuis, G.

    1989-01-01

    Let P be the distribution of a stationary point process on the real line and let P0 be its Palm distribution. In this paper we consider two types of functional limit theorems, those in terms of the number of points of the point process in (0, t] and those in terms of the location of the nth point

  18. Design of distributed systems of hydrolithosphere processes management. Selection of optimal number of extracting wells

    Science.gov (United States)

    Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.

    2017-10-01

    The article considers the important issue of designing distributed systems for the management of hydrolithosphere processes. Control actions on the hydrolithosphere processes are implemented by a set of extraction wells. The article shows how to determine the optimal number of extraction wells that provide a distributed control action on the managed object.

  19. A reconfigurable strategy for distributed digital process control

    International Nuclear Information System (INIS)

    Garcia, H.E.; Ray, A.; Edwards, R.M.

    1990-01-01

    A reconfigurable control scheme is proposed which, unlike a preprogrammed one, uses stochastic automata to learn the current operating status of the environment (i.e., the plant, controller, and communication network) by dynamically monitoring the system performance and then switching to the appropriate controller on the basis of these observations. The potential applicability of this reconfigurable control scheme to electric power plants is being investigated. The plant under consideration is the Experimental Breeder Reactor (EBR-II) at the Argonne National Laboratory site in Idaho. The distributed control system is emulated on a ring network where the individual subsystems are hosted as follows: (1) the reconfigurable control modules are located in one of the network modules called Multifunction Controller; (2) the learning modules are resident in a VAX 11/785 mainframe computer; and (3) a detailed model of the plant under control is executed in the same mainframe. This configuration is a true representation of the network-based control system in the sense that it operates in real time and is capable of interacting with the actual plant

  20. Marketing promotion in the consumer goods’ retail distribution process

    Directory of Open Access Journals (Sweden)

    S.Bălăşescu

    2013-06-01

    Full Text Available The fundamental characteristic of contemporary marketing is its total opening towards three major directions: consumer needs, organization needs and society's needs. The continuous expansion of marketing has been accompanied by a process of differentiation and specialization. Differentiation has led to so-called "specific marketing". In this paper, we aim to explain that in retail companies the concept of sales marketing can be distinguished as an independent marketing specialization. The main objectives of this paper are: the definition and delimitation of consumer goods' sales marketing in the retail business, and the sectoral approach to the marketing concept and its specific techniques for retail activities.

  1. Distributed Sensing and Processing for Multi-Camera Networks

    Science.gov (United States)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  2. Data processing and distribution in the PAMELA experiment

    International Nuclear Information System (INIS)

    Casolino, M.; Nagni, M.

    2007-01-01

    YODA is a semi-automated data handling and analysis system for the PAMELA space experiment. The core routines have been developed to process a stream of raw data downlinked from the Resurs DK1 satellite (housing PAMELA) to the ground station in Moscow. Raw data consist of scientific data and engineering information. Housekeeping information is analyzed within a short time of download (∼hours) in order to monitor the status of the experiment and for mission planning. A prototype for data visualization runs on an APACHE TOMCAT web application server, providing an off-line analysis tool usable from a browser and part of the code for system maintenance. A quicklook system with a GUI interface is used for operator monitoring and fast macrocommand issuing. On a longer timescale, scientific data are analyzed, calibrations performed and the database updated. The data storage core is composed of CERN's ROOT file structure and MySQL as a relational database. YODA++ is currently being used in the integration and testing of ground PAMELA data

  3. A role for distributed processing in advanced nuclear materials control and accountability systems

    International Nuclear Information System (INIS)

    Tisinger, R.M.; Whitty, W.J.; Ford, W.; Strittmatter, R.B.

    1986-01-01

    Networking and distributed processing hardware and software have the potential of greatly enhancing nuclear materials control and accountability (MC&A) systems, both from safeguards and process operations perspectives, while allowing timely integrated safeguards activities and enhanced computer security at reasonable cost. A hierarchical distributed system is proposed consisting of groups of terminals and instruments in plant production and support areas connected to microprocessors that are connected to either larger microprocessors or minicomputers. The structuring and development of a limited distributed MC&A prototype system, including human engineering concepts, are described. Implications of integrated safeguards and computer security concepts for the distributed system design are discussed

  4. Selective desulfurization of cysteine in the presence of Cys(Acm) in polypeptides obtained by native chemical ligation.

    Science.gov (United States)

    Pentelute, Brad L; Kent, Stephen B H

    2007-02-15

    Increased versatility for the synthesis of proteins and peptides by native chemical ligation requires the ability to ligate at positions other than Cys. Here, we report that Raney nickel can be used under standard conditions for the selective desulfurization of Cys in the presence of Cys(Acm). This simple and practical tactic enables the more common Xaa-Ala junctions to be used as ligation sites for the chemical synthesis of Cys-containing peptides and proteins. [reaction: see text].

  5. Segmentation of solid subregion of high grade gliomas in MRI images based on active contour model (ACM)

    Science.gov (United States)

    Seow, P.; Win, M. T.; Wong, J. H. D.; Abdullah, N. A.; Ramli, N.

    2016-03-01

    Gliomas are tumours arising from the interstitial tissue of the brain which are heterogeneous, infiltrative and possess ill-defined borders. Tumour subregions (e.g. solid enhancing part, edema and necrosis) are often used for tumour characterisation. Tumour demarcation into substructures facilitates glioma staging and provides essential information. Manual segmentation has several drawbacks: it is laborious, time consuming, subject to intra- and inter-rater variability, and hindered by the diversity in the appearance of tumour tissues. In this work, an active contour model (ACM) was used to segment the solid enhancing subregion of the tumour. 2D brain image acquisition data, using a 3T MRI fast spoiled gradient echo sequence post gadolinium, were obtained from four histologically proven high-grade glioma patients. Preprocessing of the images, which included subtraction and skull stripping, was performed, followed by ACM segmentation. The results of the automatic segmentation method were compared against manual delineation of the tumour by a trainee radiologist. Both results were further validated by an experienced neuroradiologist, and brief quantitative evaluations (pixel area and difference ratio) were performed. Preliminary results on the clinical data showed the potential of the ACM for fast and large-scale tumour segmentation in medical imaging.

  6. Segmentation of solid subregion of high grade gliomas in MRI images based on active contour model (ACM)

    International Nuclear Information System (INIS)

    Seow, P; Win, M T; Wong, J H D; Ramli, N; Abdullah, N A

    2016-01-01

    Gliomas are tumours arising from the interstitial tissue of the brain which are heterogeneous, infiltrative and possess ill-defined borders. Tumour subregions (e.g. solid enhancing part, edema and necrosis) are often used for tumour characterisation. Tumour demarcation into substructures facilitates glioma staging and provides essential information. Manual segmentation has several drawbacks: it is laborious, time consuming, subject to intra- and inter-rater variability, and hindered by the diversity in the appearance of tumour tissues. In this work, an active contour model (ACM) was used to segment the solid enhancing subregion of the tumour. 2D brain image acquisition data, using a 3T MRI fast spoiled gradient echo sequence post gadolinium, were obtained from four histologically proven high-grade glioma patients. Preprocessing of the images, which included subtraction and skull stripping, was performed, followed by ACM segmentation. The results of the automatic segmentation method were compared against manual delineation of the tumour by a trainee radiologist. Both results were further validated by an experienced neuroradiologist, and brief quantitative evaluations (pixel area and difference ratio) were performed. Preliminary results on the clinical data showed the potential of the ACM for fast and large-scale tumour segmentation in medical imaging. (paper)

  7. Recent changes in flood damage in the United States from observations and ACME model

    Science.gov (United States)

    Leng, G.; Leung, L. R.

    2017-12-01

    Despite efforts to mitigate flood hazards in flood-prone areas, survey- and report-based flood databases show that flood damage has increased and emerged as one of the most costly disasters in the United States since the 1990s. Understanding the mechanisms driving the changes in flood damage is therefore critical for reducing flood risk. In this study, we first conduct a comprehensive analysis of the changing characteristics of flood damage at the local, state and country levels. Results show a significant increasing trend in the number of flood hazards, causing economic losses of up to $7 billion per year. The ratio of flood events that caused tangible economic cost to the total number of flood events exhibited a non-significant increasing trend before 2007 followed by a significant decrease, indicating a changing vulnerability to floods. Analysis also reveals distinct spatial and temporal patterns in the threshold intensity of flood hazards with tangible economic cost. To understand the mechanism behind the increasing flood damage, we develop a flood damage economic model coupled with the integrated hydrological modeling system of ACME that features a river routing model with an inundation parameterization and a water use and regulation model. The model is evaluated over the country against historical records. Several numerical experiments are then designed to explore the mechanisms behind the recent changes in flood damage from the perspectives of flood hazard, exposure and vulnerability, which together constitute flood damage. The role of human activities such as reservoir operations and water use in modifying regional floods is also explored using the new tool, with the goal of improving understanding and modeling of vulnerability to flood hazards.

  8. A mechanistic diagnosis of the simulation of soil CO2 efflux of the ACME Land Model

    Science.gov (United States)

    Liang, J.; Ricciuto, D. M.; Wang, G.; Gu, L.; Hanson, P. J.; Mayes, M. A.

    2017-12-01

    Accurate simulation of the CO2 efflux from soils (i.e., soil respiration) to the atmosphere is critical to project global biogeochemical cycles and the magnitude of climate change in Earth system models (ESMs). Currently, the soil respiration simulated by ESMs still has a large uncertainty. In this study, a mechanistic diagnosis of soil respiration in the Accelerated Climate Model for Energy (ACME) Land Model (ALM) was conducted using long-term observations at the Missouri Ozark AmeriFlux (MOFLUX) forest site in the central U.S. The results showed that the ALM default run significantly underestimated annual soil respiration and gross primary production (GPP), while incorrectly estimating soil water potential. Improved simulations of soil water potential with site-specific data significantly improved the modeled annual soil respiration, primarily because annual GPP was simultaneously improved. Therefore, simulations of soil water potential must be carefully calibrated in ESMs. Despite improved annual soil respiration, the ALM continued to underestimate soil respiration during peak growing seasons, and to overestimate it during non-peak growing seasons. Simulations involving increased GPP during peak growing seasons increased soil respiration, while neither improved plant phenology nor increased temperature sensitivity affected the simulation of soil respiration during non-peak growing seasons. One potential reason for the overestimation during non-peak growing seasons may be that the current model structure is substrate-limited, while microbial dormancy under stress may cause the system to become decomposer-limited. Further studies with more microbial data are required to provide an adequate representation of soil respiration and to understand the underlying reasons for inaccurate model simulations.

  9. Gamma processes and peaks-over-threshold distributions for time-dependent reliability

    International Nuclear Information System (INIS)

    Noortwijk, J.M. van; Weide, J.A.M. van der; Kallen, M.J.; Pandey, M.D.

    2007-01-01

    In the evaluation of structural reliability, a failure is defined as the event in which stress exceeds a resistance that is liable to deterioration. This paper presents a method to combine the two stochastic processes of deteriorating resistance and fluctuating load for computing the time-dependent reliability of a structural component. The deterioration process is modelled as a gamma process, which is a stochastic process with independent non-negative increments having a gamma distribution with identical scale parameter. The stochastic process of loads is generated by a Poisson process. The variability of the random loads is modelled by a peaks-over-threshold distribution (such as the generalised Pareto distribution). These stochastic processes of deterioration and load are combined to evaluate the time-dependent reliability
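
    A Monte Carlo sketch of the combination described above (all parameter values are illustrative, not from the paper): resistance deteriorates as a gamma process, loads arrive as a Poisson process, each load magnitude is drawn from a generalized Pareto peaks-over-threshold distribution, and failure occurs the first time a load exceeds the current resistance.

        import random

        def gpd(u, sigma, xi, rng):
            # generalized Pareto sample above threshold u (inverse-CDF method)
            y = rng.random()
            return u + sigma * ((1 - y) ** (-xi) - 1) / xi

        def failed_by(T, rng, r0=100.0, shape_rate=1.0, scale=0.5,
                      load_rate=0.2, u=50.0, sigma=8.0, xi=0.1, dt=1.0):
            r, t = r0, 0.0
            while t < T:
                # gamma-process deterioration: independent gamma increment per step
                r -= rng.gammavariate(shape_rate * dt, scale)
                if rng.random() < load_rate * dt:      # Poisson load arrival
                    if gpd(u, sigma, xi, rng) > r:     # stress exceeds resistance
                        return True
                t += dt
            return False

        rng = random.Random(4)
        for T in (10, 25, 50):
            p = sum(failed_by(T, rng) for _ in range(4000)) / 4000
            print(f"P(failure by t={T:2d}) ~ {p:.3f}")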

  10. Distributed processing in receivers based on tensor for cooperative communications systems

    OpenAIRE

    Igor FlÃvio SimÃes de Sousa

    2014-01-01

    In this dissertation, we present a distributed data estimation and detection approach for the uplink of a network that uses CDMA at transmitters (users). The analyzed network can be represented by an undirected and connected graph, where the nodes use a distributed estimation algorithm based on consensus averaging to perform joint channel and symbol estimation using a receiver based on tensor signal processing. The centralized receiver, developed for a central base station, and the distribute...

  11. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    Science.gov (United States)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.

  12. A New Initiative for Tiling, Stitching and Processing Geospatial Big Data in Distributed Computing Environments

    Science.gov (United States)

    Olasz, A.; Nguyen Thai, B.; Kristóf, D.

    2016-06-01

    Within recent years, several new approaches and solutions for Big Data processing have been developed. The Geospatial world is still facing the lack of well-established distributed processing solutions tailored to the amount and heterogeneity of geodata, especially when fast data processing is a must. The goal of such systems is to improve processing time by distributing data transparently across processing (and/or storage) nodes. These types of methodology are based on the concept of divide and conquer. Nevertheless, in the context of geospatial processing, most of the distributed computing frameworks have important limitations regarding both data distribution and data partitioning methods. Moreover, flexibility and expandability for handling various data types (often in binary formats) are also strongly required. This paper presents a concept for tiling, stitching and processing of big geospatial data. The system is based on the IQLib concept (https://github.com/posseidon/IQLib/) developed in the frame of the IQmulus EU FP7 research and development project (http://www.iqmulus.eu). The data distribution framework has no limitations on programming language environment and can execute scripts (and workflows) written in different development frameworks (e.g. Python, R or C#). It is capable of processing raster, vector and point cloud data. The above-mentioned prototype is presented through a case study dealing with country-wide processing of raster imagery. Further investigations on algorithmic and implementation details are in focus for the near future.

  13. A NEW INITIATIVE FOR TILING, STITCHING AND PROCESSING GEOSPATIAL BIG DATA IN DISTRIBUTED COMPUTING ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    A. Olasz

    2016-06-01

    Full Text Available Within recent years, several new approaches and solutions for Big Data processing have been developed. The Geospatial world is still facing the lack of well-established distributed processing solutions tailored to the amount and heterogeneity of geodata, especially when fast data processing is a must. The goal of such systems is to improve processing time by distributing data transparently across processing (and/or storage) nodes. These types of methodology are based on the concept of divide and conquer. Nevertheless, in the context of geospatial processing, most of the distributed computing frameworks have important limitations regarding both data distribution and data partitioning methods. Moreover, flexibility and expandability for handling various data types (often in binary formats) are also strongly required. This paper presents a concept for tiling, stitching and processing of big geospatial data. The system is based on the IQLib concept (https://github.com/posseidon/IQLib/) developed in the frame of the IQmulus EU FP7 research and development project (http://www.iqmulus.eu). The data distribution framework has no limitations on programming language environment and can execute scripts (and workflows) written in different development frameworks (e.g. Python, R or C#). It is capable of processing raster, vector and point cloud data. The above-mentioned prototype is presented through a case study dealing with country-wide processing of raster imagery. Further investigations on algorithmic and implementation details are in focus for the near future.
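
    A minimal tiling-and-stitching sketch in the spirit of this concept (plain Python, not IQLib itself): a raster is split into tiles, the tiles are processed by a pool of workers, and the results are stitched back by tile origin. A real system would additionally handle halos/overlaps, vector and point-cloud partitioning, and remote workers.

        from multiprocessing import Pool

        TILE = 256

        def split(raster, w, h):
            # cut a flat one-band raster (row-major list) into origin-tagged tiles
            tiles = []
            for y in range(0, h, TILE):
                for x in range(0, w, TILE):
                    rows = [raster[(y + j) * w + x:(y + j) * w + min(x + TILE, w)]
                            for j in range(min(TILE, h - y))]
                    tiles.append(((x, y), rows))
            return tiles

        def process_tile(tile):
            (x, y), rows = tile
            # stand-in per-tile computation, e.g. thresholding imagery
            return (x, y), [[1 if v > 127 else 0 for v in row] for row in rows]

        if __name__ == "__main__":
            w = h = 1024
            raster = [i % 256 for i in range(w * h)]    # fake one-band image
            with Pool() as pool:
                results = pool.map(process_tile, split(raster, w, h))
            out = [[0] * w for _ in range(h)]           # stitch by tile origin
            for (x, y), rows in results:
                for j, row in enumerate(rows):
                    out[y + j][x:x + len(row)] = row
            print("tiles processed:", len(results))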

  14. Bounds for the probability distribution function of the linear ACD process

    OpenAIRE

    Fernandes, Marcelo

    2003-01-01

    This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.
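
    For concreteness, a sketch (mine) of the linear ACD(1,1) duration process to which such bounds apply: durations x_i = psi_i * eps_i with conditional mean psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1} and, here, unit-exponential innovations; the empirical distribution function of the simulated durations is what the lower and upper bounds bracket.

        import random

        def simulate_acd(n, omega=0.1, alpha=0.1, beta=0.8, rng=random.Random(5)):
            x, psi, out = 1.0, 1.0, []
            for _ in range(n):
                psi = omega + alpha * x + beta * psi   # conditional expected duration
                x = psi * rng.expovariate(1.0)         # realized duration
                out.append(x)
            return out

        durations = simulate_acd(100000)
        for q in (0.5, 1.0, 2.0):
            p = sum(d <= q for d in durations) / len(durations)
            print(f"empirical P(X <= {q}) ~ {p:.3f}")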

  15. Bridging the gap between a stationary point process and its Palm distribution

    NARCIS (Netherlands)

    Nieuwenhuis, G.

    1994-01-01

    In the context of stationary point processes, measurements are usually made from a time point chosen at random or from an occurrence chosen at random. That is, either the stationary distribution P or its Palm distribution P° is the ruling probability measure. In this paper an approach is presented to

  16. Time Optimal Run-time Evaluation of Distributed Timing Constraints in Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.; Kristensen, C.H.

    1993-01-01

    This paper considers run-time evaluation of an important class of constraints: timing constraints. These appear extensively in process control systems. Timing constraints are considered in distributed systems, i.e. systems consisting of multiple autonomous nodes...

  17. Development and first application of an Aerosol Collection Module (ACM) for quasi online compound specific aerosol measurements

    Science.gov (United States)

    Hohaus, Thorsten; Kiendler-Scharr, Astrid; Trimborn, Dagmar; Jayne, John; Wahner, Andreas; Worsnop, Doug

    2010-05-01

    Atmospheric aerosols influence climate and human health on regional and global scales (IPCC, 2007). In many environments organics are a major fraction of the aerosol, influencing its properties. Due to the huge variety of organic compounds present in atmospheric aerosol, current measurement techniques are far from providing a full speciation of organic aerosol (Hallquist et al., 2009). The development of new techniques for compound-specific measurements with high time resolution is a timely issue in organic aerosol research. Here we present first laboratory characterisations of an aerosol collection module (ACM) which was developed to allow for the sampling and transfer of atmospheric PM1 aerosol. The system consists of an aerodynamic lens system focussing particles onto a beam. This beam is directed at a surface 3.4 mm in diameter which is cooled to -30 °C with liquid nitrogen. After collection, the aerosol sample can be evaporated from the surface by heating it to up to 270 °C. The sample is transferred through a 60 cm long line with a carrier gas. In order to test the ACM for linearity and sensitivity we combined it with a GC-MS system. The tests were performed with octadecane aerosol. The octadecane mass as measured with the ACM-GC-MS was compared with the mass calculated from the SMPS-derived total volume. The data correlate well (R² = 0.99, slope of linear fit 1.1), indicating 100% collection efficiency. From 150 °C to 270 °C no effect of desorption temperature on transfer efficiency could be observed. The ACM-GC-MS system was proven to be linear over the mass range 2-100 ng and has a detection limit of ~2 ng. First experiments applying the ACM-GC-MS system were conducted at the Jülich Aerosol Chamber. Secondary organic aerosol (SOA) was formed from ozonolysis of 600 ppbv of β-pinene. The major oxidation product nopinone was detected in the aerosol and could be shown to decrease from 2% of the total aerosol to 0.5% of the aerosol over the 48 hours of

  18. A Prediction of the Damping Properties of Hindered Phenol AO-60/polyacrylate Rubber (AO-60/ACM) Composites through Molecular Dynamics Simulation

    Science.gov (United States)

    Yang, Da-Wei; Zhao, Xiu-Ying; Zhang, Geng; Li, Qiang-Guo; Wu, Si-Zhu

    2016-05-01

    Molecular dynamics (MD) simulation, a molecular-level method, was applied to predict the damping properties of AO-60/polyacrylate rubber (AO-60/ACM) composites before experimental measurements were performed. MD simulation results revealed that two types of hydrogen bond were formed: type A, (AO-60)-OH···O=C-(ACM), and type B, (AO-60)-OH···O=C-(AO-60). The AO-60/ACM composites were then fabricated and tested to verify the accuracy of the MD simulation through dynamic mechanical thermal analysis (DMTA). DMTA results showed that the introduction of AO-60 could remarkably improve the damping properties of the composites, including an increase of the glass transition temperature (Tg) alongside the loss factor (tan δ), and indicated that the AO-60/ACM (98/100) composite had the best damping performance among the composites, as verified by experiment.

  19. Analysing the Outbound logistics process enhancements in Nokia-Siemens Networks Global Distribution Center

    OpenAIRE

    Marjeta, Katri

    2011-01-01

    Marjeta, Katri. 2011. Analysing the outbound logistics process enhancements in Nokia-Siemens Networks Global Distribution Center. Master's thesis. Kemi-Tornio University of Applied Sciences. Business and Culture. Pages 57. Due to confidentiality issues, this work has been modified from its original form. The aim of this Master's thesis is to describe and analyze the outbound logistics process enhancement projects executed in Nokia-Siemens Networks Global Distribution Center after the N...

  20. ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS

    Directory of Open Access Journals (Sweden)

    Dietrich Stoyan

    2011-05-01

    Full Text Available This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.
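
    A minimal border-corrected estimation sketch (my own reduced version, with a binomial point pattern standing in for a Poisson process): the minus-sampling estimator of the nearest-neighbour distance distribution D(r) uses only points farther than r from the window boundary, so that the true nearest neighbour cannot lie outside the observation window; Hanisch's estimator refines this with Horvitz-Thompson-type weights.

        import math, random

        def nn_dist(p, pts):
            # distance from p to its nearest neighbour among the other points
            return min(math.dist(p, q) for q in pts if q is not p)

        def minus_sampling_D(pts, r):
            # restrict to points in the window eroded by r (unit square window)
            eligible = [p for p in pts
                        if min(p[0], p[1], 1 - p[0], 1 - p[1]) > r]
            if not eligible:
                return float("nan")
            return sum(nn_dist(p, pts) <= r for p in eligible) / len(eligible)

        rng = random.Random(6)
        pts = [(rng.random(), rng.random()) for _ in range(500)]
        lam = len(pts)                                # intensity in the unit square
        for r in (0.02, 0.05, 0.1):
            theo = 1 - math.exp(-lam * math.pi * r * r)   # Poisson benchmark D(r)
            print(f"r={r}: minus-sampling = {minus_sampling_D(pts, r):.3f}, "
                  f"Poisson theory = {theo:.3f}")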

  1. Cross-coherent vector sensor processing for spatially distributed glider networks.

    Science.gov (United States)

    Nichols, Brendan; Sabra, Karim G

    2015-09-01

    Autonomous underwater gliders fitted with vector sensors can be used as a spatially distributed sensor array to passively locate underwater sources. However, to date, the positional accuracy required for robust array processing (especially coherent processing) is not achievable using dead-reckoning while the gliders remain submerged. To obtain such accuracy, the gliders can be temporarily surfaced to allow for global positioning system contact, but the acoustically active sea surface introduces locally additional sensor noise. This letter demonstrates that cross-coherent array processing, which inherently mitigates the effects of local noise, outperforms traditional incoherent processing source localization methods for this spatially distributed vector sensor network.

  2. Reaction Mechanism and Distribution Behavior of Arsenic in the Bottom Blown Copper Smelting Process

    Directory of Open Access Journals (Sweden)

    Qinmeng Wang

    2017-08-01

    Full Text Available The control of arsenic, a toxic and carcinogenic element, is an important issue for all copper smelters. In this work, the reaction mechanism and distribution behavior of arsenic in the bottom blown copper smelting process (SKS process) were investigated and compared to the flash smelting process. There are obvious differences in arsenic distribution between the SKS process and the flash process, resulting from differences in oxygen potential, volatilization, smelting temperature, reaction intensity, and mass transfer. Under stable production conditions, the distributions of arsenic among the matte, slag, and gas phases are 6%, 12%, and 82%, respectively. Less arsenic reports to the gas phase in the flash process than in the SKS process. The main arsenic species in the gas phase are AsS(g), AsO(g), and As2(g). Arsenic exists in the slag predominantly as As2O3(l), and in the matte as As(l). A high matte grade is harmful to the elimination of arsenic to the gas phase. Changing the Fe/SiO2 ratio has only slight effects on the distribution of arsenic. In order to enhance the removal of arsenic from the SKS smelting system to the gas phase, a low oxygen concentration, a low oxygen/ore ratio, and a low matte grade should be chosen. In the SKS smelting process, no dust is recycled; almost all dust is collected and further treated to eliminate arsenic and recover valuable metals in other process streams.

  3. Responses of Mixed-Phase Cloud Condensates and Cloud Radiative Effects to Ice Nucleating Particle Concentrations in NCAR CAM5 and DOE ACME Climate Models

    Science.gov (United States)

    Liu, X.; Shi, Y.; Wu, M.; Zhang, K.

    2017-12-01

    Mixed-phase clouds frequently observed in the Arctic and mid-latitude storm tracks have substantial impacts on the surface energy budget, precipitation and climate. In this study, we first implement the two empirical parameterizations (Niemand et al. 2012 and DeMott et al. 2015) of heterogeneous ice nucleation for mixed-phase clouds in the NCAR Community Atmosphere Model Version 5 (CAM5) and the DOE Accelerated Climate Model for Energy Version 1 (ACME1). Model-simulated ice nucleating particle (INP) concentrations based on Niemand et al. and DeMott et al. are compared with those from the default ice nucleation parameterization based on classical nucleation theory (CNT) in CAM5 and ACME, and with in situ observations. Significantly higher INP concentrations (by up to a factor of 5) are simulated from Niemand et al. than from DeMott et al. and CNT, especially over the dust source regions in both CAM5 and ACME. Interestingly, the ACME model simulates higher INP concentrations than CAM5, especially in the polar regions. This is also the case when we nudge the two models' winds and temperature towards the same reanalysis, indicating more efficient transport of aerosols (dust) to the polar regions in ACME. Next, we examine the responses of model-simulated cloud liquid and ice water contents to the different INP concentrations from the three ice nucleation parameterizations (Niemand et al., DeMott et al., and CNT) in CAM5 and ACME. Changes in liquid water path (LWP) reach as much as 20% in the Arctic regions in ACME between the three parameterizations, while the LWP changes are smaller and limited to the Northern Hemispheric mid-latitudes in CAM5. Finally, the impacts on cloud radiative forcing and dust indirect effects on mixed-phase clouds are quantified with the three ice nucleation parameterizations in CAM5 and ACME.

  4. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    functionality by statistically modelling the distribution of the synaptic vesicles in two groups of rats: a control group subjected to sham stress and a stressed group subjected to a single acute foot-shock (FS)-stress episode. We hypothesize that the synaptic vesicles have different spatial distributions...... in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions....

  5. Building enterprise systems with ODP an introduction to open distributed processing

    CERN Document Server

    Linington, Peter F; Tanaka, Akira; Vallecillo, Antonio

    2011-01-01

    The Reference Model of Open Distributed Processing (RM-ODP) is an international standard that provides a solid basis for describing and building widely distributed systems and applications in a systematic way. It stresses the need to build these systems with evolution in mind by identifying the concerns of major stakeholders and then expressing the design as a series of linked viewpoints. Although RM-ODP has been a standard for more than ten years, many practitioners are still unaware of it. Building Enterprise Systems with ODP: An Introduction to Open Distributed Processing offers a gentle pa

  6. Digi-Clima Grid: image processing and distributed computing for recovering historical climate data

    Directory of Open Access Journals (Sweden)

    Sergio Nesmachnow

    2015-12-01

    Full Text Available This article describes the Digi-Clima Grid project, whose main goals are to design and implement semi-automatic techniques for digitizing and recovering historical climate records by applying parallel computing techniques over distributed computing infrastructures. The specific tool developed for image processing is described, and the implementation over grid and cloud infrastructures is reported. An experimental analysis over institutional and volunteer-based grid/cloud distributed systems demonstrates that the proposed approach is an efficient tool for recovering historical climate data. The parallel implementations allow the processing load to be distributed, achieving accurate speedup values.

  7. An Extended Genetic Algorithm for Distributed Integration of Fuzzy Process Planning and Scheduling

    Directory of Open Access Journals (Sweden)

    Shuai Zhang

    2016-01-01

    Full Text Available The distributed integration of process planning and scheduling (DIPPS) aims to simultaneously arrange the two most important manufacturing stages, process planning and scheduling, in a distributed manufacturing environment. Meanwhile, given its advantage in representing actual situations, the triangular fuzzy number (TFN) is adopted in DIPPS to represent machine processing and transportation times. In order to solve this problem and obtain the optimal or near-optimal solution, an extended genetic algorithm (EGA) with an innovative three-class encoding method and improved crossover and mutation strategies is proposed. Furthermore, a local enhancement strategy featuring machine replacement and order exchange is added to strengthen the local search capability of the basic genetic algorithm. Experimental verification shows that the EGA achieves satisfactory results in a very short period of time and demonstrates powerful performance in dealing with the distributed integration of fuzzy process planning and scheduling (DIFPPS).

  8. Managing Distributed Innovation Processes in Virtual Organizations by Applying the Collaborative Network Relationship Analysis

    Science.gov (United States)

    Eschenbächer, Jens; Seifert, Marcus; Thoben, Klaus-Dieter

    Distributed innovation processes are considered a new option for handling both the complexity and the speed with which new products and services need to be prepared. Indeed, most research on innovation processes has focused on multinational companies with an intra-organisational perspective. The phenomenon of innovation processes in networks - with an inter-organisational perspective - has been almost neglected. Collaborative networks present a perfect playground for such distributed innovation processes, and the authors focus specifically on Virtual Organisations because of their dynamic behaviour. Research activities supporting distributed innovation processes in Virtual Organisations are rather new, so little knowledge about the management of such research is available. The presentation of the collaborative network relationship analysis addresses this gap. It will be shown that qualitative planning of collaboration intensities can support real business cases by providing knowledge and planning data.

  9. The influence of emotion on lexical processing: insights from RT distributional analysis.

    Science.gov (United States)

    Yap, Melvin J; Seow, Cui Shan

    2014-04-01

    In two lexical decision experiments, the present study was designed to examine emotional valence effects on visual lexical decision (standard and go/no-go) performance, using traditional analyses of means and distributional analyses of response times. Consistent with an earlier study by Kousta, Vinson, and Vigliocco (Cognition 112:473-481, 2009), we found that emotional words (both negative and positive) were responded to faster than neutral words. Finer-grained distributional analyses further revealed that the facilitation afforded by valence was reflected by a combination of distributional shifting and an increase in the slow tail of the distribution. This suggests that emotional valence effects in lexical decision are unlikely to be entirely mediated by early, preconscious processes, which are associated with pure distributional shifting. Instead, our results suggest a dissociation between early preconscious processes and a later, more task-specific effect that is driven by feedback from semantically rich representations.

  10. Core power distribution measurement and data processing in Daya Bay Nuclear Power Station

    International Nuclear Information System (INIS)

    Zhang Hong

    1997-01-01

    For the first time in China, Daya Bay Nuclear Power Station applied the advanced technology of commercial pressurized water reactors worldwide to its in-core detectors, the leading ex-core six-chamber instrumentation for precise axial power distribution measurement, and the related data processing. This article describes the neutron flux measurement in Daya Bay Nuclear Power Station and the detailed data processing

  11. Climate Science's Globally Distributed Infrastructure

    Science.gov (United States)

    Williams, D. N.

    2016-12-01

    The Earth System Grid Federation (ESGF) is primarily funded by the Department of Energy's (DOE's) Office of Science (the Office of Biological and Environmental Research [BER] Climate Data Informatics Program and the Office of Advanced Scientific Computing Research Next Generation Network for Science Program), the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), the European Infrastructure for the European Network for Earth System Modeling (IS-ENES), and the Australian National University (ANU). Support also comes from other U.S. federal and international agencies. The federation works across multiple worldwide data centers and spans seven international network organizations to provide users with the ability to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. Its architecture employs a series of geographically distributed peer nodes that are independently administered and united by common federation protocols and application programming interfaces (APIs). The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP; output used by the Intergovernmental Panel on Climate Change assessment reports), multiple model intercomparison projects (MIPs; endorsed by the World Climate Research Programme [WCRP]), and the Accelerated Climate Modeling for Energy project (ACME; ESGF is included in the overarching ACME workflow process to store model output). ESGF is a successful example of the integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community. Data served by ESGF include not only model output but also observational data from satellites and instruments, reanalysis, and generated images.

  12. DISCO - A concept of a system for integrated data base management in distributed data processing systems

    International Nuclear Information System (INIS)

    Holler, E.

    1980-01-01

    The development of data processing technology favors the trend towards distributed data processing systems: the large-scale integration of semiconductor devices has led to very efficient (approx. 10^6 operations per second) and relatively cheap low-end computers being offered today, which allow distributed data processing systems to be installed with a total capacity approaching that of large-scale data processing plants at a tolerable investment expenditure. The technologies of communication and data banks have, each by itself, reached a state of development justifying their routine application. The integration of both technologies in the development of systems for integrated distributed data bank management, however, is new territory for engineering. (orig.)

  13. The Analysis of process optimization during the loading distribution test for steam turbine

    International Nuclear Information System (INIS)

    Li Jiangwei; Cao Yuhua; Li Dawei

    2014-01-01

    The loading distribution test of a steam turbine needs to be completed six times in total: the first is completed when the turbine cylinder is buckled, and the rest must be completed in order during the installation of the GVP pipe. Completing the five loading distribution tests and the installation of the GVP pipe usually takes around 90 days in most nuclear plants, while Unit 1 of Fuqing Nuclear Power Station compressed it to about 45 days by optimizing the installation process. This article describes the successful experience of how Unit 1 of Fuqing Nuclear Power Station finished the five loading distribution tests and the installation of the GVP pipe in 45 days by optimizing the process. It also analyzes the advantages and disadvantages by comparing the optimized process with the process provided by the suppliers, which leads to some rationalization proposals for the installation work on the follow-up units of the plant. (authors)

  14. Distributed system for parallel data processing of ECT signals for electromagnetic flaw detection in materials

    International Nuclear Information System (INIS)

    Guliashki, Vassil; Marinova, Galia

    2002-01-01

    The paper proposes a distributed system for the parallel processing of ECT signal data for flaw detection in materials. The measured data are stored in files on a host computer, where a Java server is located. The host computer is connected through the Internet to a set of geographically distributed client computers. The data are distributed from the host computer by the Java server to the client computers according to their requests. The software necessary for the data processing is installed on each client computer in advance. Organizing the data processing on many computers working simultaneously in parallel greatly reduces processing time, especially in cases when huge amounts of data must be processed in a very short time. (Author)
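
    The record above describes a divide-distribute-merge pattern (a Java server handing measurement data to remote clients). A minimal Python sketch of the same pattern follows; it is an illustration only, with a local process pool standing in for the Internet-distributed Java/client setup, and process_chunk is a hypothetical placeholder for the actual ECT signal processing.

```python
# Minimal sketch of the divide-distribute-merge pattern described above.
# A local process pool stands in for the geographically distributed clients;
# process_chunk is a hypothetical placeholder for real ECT signal processing.
from multiprocessing import Pool

def process_chunk(chunk):
    """Stand-in for the ECT signal processing done on one client."""
    return [abs(sample) for sample in chunk]   # e.g., rectify the signal

def split(data, n_workers):
    """Split the measured data into one chunk per worker."""
    step = (len(data) + n_workers - 1) // n_workers
    return [data[i:i + step] for i in range(0, len(data), step)]

if __name__ == "__main__":
    data = list(range(-500, 500))        # stand-in for one measurement file
    with Pool(4) as pool:                # four "clients" working in parallel
        parts = pool.map(process_chunk, split(data, 4))
    merged = [x for part in parts for x in part]
    print(len(merged), "samples processed")
```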

  15. When the mean is not enough: Calculating fixation time distributions in birth-death processes.

    Science.gov (United States)

    Ashcroft, Peter; Traulsen, Arne; Galla, Tobias

    2015-10-01

    Studies of fixation dynamics in Markov processes predominantly focus on the mean time to absorption. This may be inadequate if the distribution is broad and skewed. We compute the distribution of fixation times in one-step birth-death processes with two absorbing states. These are expressed in terms of the spectrum of the process, and we provide different representations as forward-only processes in eigenspace. These allow efficient sampling of fixation time distributions. As an application we study evolutionary game dynamics, where invading mutants can reach fixation or go extinct. We also highlight the median fixation time as a possible analog of mixing times in systems with small mutation rates and no absorbing states, whereas the mean fixation time has no such interpretation.
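
    The point about broad, skewed fixation-time distributions is easy to reproduce numerically. The following sketch (illustrative parameters, not taken from the paper) samples absorption times of the embedded jump chain of a Moran birth-death process with two absorbing states and compares the mean with the median.

```python
# Monte Carlo sketch: fixation/extinction times of a Moran birth-death chain
# with absorbing states 0 and N. Only state-changing events are counted
# (the embedded jump chain); all parameter values are illustrative.
import random
import statistics

def absorption_time(N=50, i0=1, r=1.05):
    """Jumps until the mutant count i hits 0 or N, for relative fitness r."""
    p_up = r / (1.0 + r)   # conditional probability of a birth event (Moran)
    i, t = i0, 0
    while 0 < i < N:
        i += 1 if random.random() < p_up else -1
        t += 1
    return t

random.seed(0)
times = [absorption_time() for _ in range(5000)]
print("mean  :", statistics.mean(times))
print("median:", statistics.median(times))   # far below the mean: skewness
```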

  16. Fabrication of 93.7 m long PLD-EuBCO + BaHfO_3 coated conductors with 103 A/cm W at 77 K under 3 T

    International Nuclear Information System (INIS)

    Yoshida, T.; Ibi, A.; Takahashi, T.; Yoshizumi, M.; Izumi, T.; Shiohara, Y.

    2015-01-01

    Highlights: • A 93.7 m long EuBCO + BHO CC with 103 A/cm W at 77 K under 3 T was obtained. • The 93.7 m long CC showed high I_c values and high n-values with high uniformity. • The average I_c value at 77 K under 3 T could be estimated from that at 77 K under 0.3 T. - Abstract: The introduction of artificial pinning centers such as BaHfO_3 (BHO), BaZrO_3 (BZO) and BaSnO_3 (BSO) into REBa_2Cu_3O_{7−δ} (REBCO) coated conductor (CC) layers can improve the in-field critical currents (I_c) over wide ranges of temperatures and magnetic fields. In particular, the combination EuBCO + BHO has been found to be effective for attaining high in-field I_c performance in short samples prepared by the IBAD/PLD process. In this work, we have successfully fabricated a 93.7 m long EuBCO + BHO CC with 103 A/cm W at 77 K under a magnetic field (B) of 3 T applied perpendicular to the CC (B//c). The 93.7 m long EuBCO + BHO CC showed high uniformity of I_c values and n-values without any trend of fluctuations, independent of the external field up to 0.3 T. I_c–B–applied angle (θ) profiles of the 93.7 m long EuBCO + BHO CC sample showed high in-field I_c values in all directions of the applied magnetic field, especially for B//c (at θ ∼ 180°, I_c = 157 A/cm W), at 77 K under 3 T. The profiles were about the same as those of a short sample.

  17. Determination of material distribution in heading process of small bimetallic bar

    Science.gov (United States)

    Presz, Wojciech; Cacko, Robert

    2018-05-01

    Electrical connectors mostly have silver contacts joined by riveting. To reduce costs, the core of the contact rivet can be replaced with a cheaper material, e.g. copper. A wide range of bimetallic (silver-copper) rivets for the production of contacts is commercially available. As a consequence, new conditions arise in the riveting process because the riveted object is bimetallic. In the analyzed example it is an object of small size, which can be placed at the border of microforming. Based on FEM modeling of the loading process of bimetallic rivets with different material distributions, the desired distribution was chosen and the choice justified. Possible material distributions were parameterized with two parameters referring to the desired distribution characteristics. A parameter, the Coefficient of Mutual Interaction of Plastic Deformations, and a method for its determination are proposed. The parameter is determined on the basis of two-parameter stress-strain curves and is a function of these parameters and of the range of equivalent strains occurring in the analyzed process. The proposed method was applied to the upsetting process of the bimetallic head of an electrical contact. A nomogram was established to predict the distribution of materials in the rivet head and to support the selection of a pair of materials that achieves the desired distribution.

  18. Exact run length distribution of the double sampling x-bar chart with estimated process parameters

    Directory of Open Access Journals (Sweden)

    Teoh, W. L.

    2016-05-01

    Full Text Available Since the run length distribution is generally highly skewed, a significant concern about focusing too much on the average run length (ARL) criterion is that we may miss crucial information about a control chart's performance. It is therefore important to investigate the entire run length distribution of a control chart for an in-depth understanding before implementing the chart in process monitoring. In this paper, the percentiles of the run length distribution for the double sampling (DS) X-bar chart with estimated process parameters are computed. Knowledge of the percentiles of the run length distribution provides a more comprehensive understanding of the expected behaviour of the run length. This additional information includes early false alarms, the skewness of the run length distribution, and the median run length (MRL). A comparison of the run length distribution between the optimal ARL-based and MRL-based DS X-bar charts with estimated process parameters is presented. Examples of applications are given to help practitioners select the best design scheme of the DS X-bar chart with estimated process parameters, based on their specific purpose.
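
    The role of run-length percentiles is easiest to see in the simplest case: for a chart with a constant per-sample signal probability p (a Shewhart chart with known parameters), the run length is geometric and its percentiles have a closed form. The toy computation below is not the DS X-bar chart of the paper, whose run-length distribution with estimated parameters requires a dedicated computation, but it shows how far the median and the 5th percentile sit below the ARL.

```python
# Percentiles of a geometric run-length distribution: a toy stand-in for
# the full DS X-bar run-length computation discussed above.
import math

def rl_percentile(p, q):
    """Smallest n with P(RL <= n) >= q when each sample signals w.p. p."""
    return math.ceil(math.log(1.0 - q) / math.log(1.0 - p))

p = 1.0 / 370.4                  # in-control signal rate of a 3-sigma chart
print("ARL            :", round(1.0 / p, 1))        # ~370
print("median run len :", rl_percentile(p, 0.50))   # ~257, well below ARL
print("5th percentile :", rl_percentile(p, 0.05))   # ~19: early false alarms
```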

  19. On the joint distribution of excursion duration and amplitude of a narrow-band Gaussian process

    DEFF Research Database (Denmark)

    Ghane, Mahdi; Gao, Zhen; Blanke, Mogens

    2018-01-01

    The probability density of crest amplitude and of the duration of exceeding a given level are used in many theoretical and practical problems in engineering. The joint density is essential for the design of constructions that are subjected to waves and wind. The presently available joint distributions of amplitude and period are limited to excursions through a mean level or to the asymptotic behavior of high-level excursions. This paper extends the knowledge by presenting a theoretical derivation of the probability of wave exceedance amplitude and duration for a narrow-band Gaussian process. The results show … distribution, as expected, and that the marginal distribution of excursion duration works both for asymptotic and non-asymptotic cases. The suggested model is found to be a good replacement for the empirical distributions that are widely used. Results from simulations of narrow-band Gaussian processes, real...

  20. THE FEATURES OF LASER EMISSION ENERGY DISTRIBUTION AT MATHEMATIC MODELING OF WORKING PROCESS

    Directory of Open Access Journals (Sweden)

    A. M. Avsiyevich

    2013-01-01

    Full Text Available The spatial distribution of laser emission energy from different continuous-operation laser systems depends on many factors, first of all on the design of the system. To describe the intensity distribution of multimode laser emission more accurately, an experimental-theoretical model is proposed, based on representing the experimentally measured emission distribution, within a given accuracy rating, as a superposition of basis functions. This model yields an approximation error of only 2.2%, as compared with 24.6% and 61% for uniform and Gaussian approximations, respectively. Using the proposed model allows the peculiarities of the interaction between the laser emission and the working surface to be taken into account more accurately, and increases the accuracy of temperature field calculations in the mathematical modeling of laser treatment processes. A method for the experimental study of the laser emission energy distribution of a given source is presented, together with the mathematical apparatus for calculating the parameters of the laser emission energy distribution intensity as a function of the distance in the radial direction within the surface heating zone.
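
    The superposition idea translates directly into a linear least-squares fit. The sketch below (synthetic profile, arbitrary Gaussian basis functions; not the basis used by the author) approximates a multimode intensity profile as a weighted sum of basis functions and reports the relative approximation error.

```python
# Sketch: approximate a measured multimode intensity profile as a
# superposition of fixed Gaussian basis functions fitted by least squares.
# The profile and the basis (centres, width) are illustrative choices.
import numpy as np

r = np.linspace(-3.0, 3.0, 121)                          # radial coordinate
measured = np.exp(-r**2) * (1.0 + 0.3 * np.cos(2.0 * r)) # synthetic profile

centres = np.linspace(-2.5, 2.5, 11)
width = 0.8
basis = np.exp(-((r[:, None] - centres[None, :]) / width) ** 2)

weights, *_ = np.linalg.lstsq(basis, measured, rcond=None)
approx = basis @ weights
rel_err = np.linalg.norm(approx - measured) / np.linalg.norm(measured)
print(f"relative approximation error: {100 * rel_err:.2f}%")
```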

  1. Distribution of chirality in the quantum walk: Markov process and entanglement

    International Nuclear Information System (INIS)

    Romanelli, Alejandro

    2010-01-01

    The asymptotic behavior of the quantum walk on the line is investigated, focusing on the probability distribution of chirality independently of position. It is shown analytically that this distribution has a long-time limit that is stationary and depends on the initial conditions. This result is unexpected in the context of the unitary evolution of the quantum walk, as stationarity is usually linked to a Markovian process. The asymptotic value of the entanglement between the coin and the position is determined by the chirality distribution. For given asymptotic values of both the entanglement and the chirality distribution, it is possible to find the corresponding initial conditions within a particular class of spatially extended Gaussian distributions.
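
    The chirality distribution is straightforward to compute numerically: evolve the two-component wave function of the walk and trace out the position. The sketch below (a standard Hadamard walk with an arbitrary symmetric initial coin state, not the Gaussian initial conditions of the paper) prints P(L) and P(R) after many steps.

```python
# Sketch: discrete-time Hadamard walk on the line; the chirality
# distribution P(L), P(R) is obtained by summing |amplitude|^2 over position.
import numpy as np

steps = 200
n = 2 * steps + 1                       # positions -steps .. steps
psi = np.zeros((n, 2), dtype=complex)   # columns: (left, right) chirality
psi[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric initial coin

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard coin

for _ in range(steps):
    psi = psi @ H.T                     # apply the coin at every site
    shifted = np.zeros_like(psi)
    shifted[:-1, 0] = psi[1:, 0]        # left component moves one site left
    shifted[1:, 1] = psi[:-1, 1]        # right component moves one site right
    psi = shifted

p_left = float(np.sum(np.abs(psi[:, 0]) ** 2))
print(f"P(L) = {p_left:.4f}, P(R) = {1 - p_left:.4f}")
```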

  2. Distributed Random Process for a Large-Scale Peer-to-Peer Lottery

    OpenAIRE

    Grumbach, Stéphane; Riemann, Robert

    2017-01-01

    Most online lotteries today fail to ensure the verifiability of the random process and rely on a trusted third party. This issue has received little attention since the emergence of distributed protocols like Bitcoin that demonstrated the potential of protocols with no trusted third party. We argue that the security requirements of online lotteries are similar to those of online voting, and propose a novel distributed online lottery protocol that applies techniques dev...

  3. Problem of uniqueness in the renewal process generated by the uniform distribution

    Directory of Open Access Journals (Sweden)

    D. Ugrin-Šparac

    1992-01-01

    Full Text Available The renewal process generated by the uniform distribution, when interpreted as a transformation of the uniform distribution into a discrete distribution, gives rise to the question of the uniqueness of the inverse image. The paper deals with a particular problem from the described domain that arose in the construction of a complex stochastic test intended to evaluate pseudo-random number generators. The connection of the treated problem with the question of a unique integral representation of the Gamma function is also mentioned.

  4. Gradient-based reliability maps for ACM-based segmentation of hippocampus.

    Science.gov (United States)

    Zarpalas, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-04-01

    Automatic segmentation of deep brain structures, such as the hippocampus (HC), in MR images has attracted considerable scientific attention due to the widespread use of MRI and to the principal role of some structures in various mental disorders. In this literature, there exists a substantial amount of work relying on deformable models incorporating prior knowledge about structures' anatomy and shape information. However, shape priors capture global shape characteristics and thus fail to model boundaries of varying properties; HC boundaries present rich, poor, and missing gradient regions. On top of that, shape prior knowledge is blended with image information in the evolution process, through global weighting of the two terms, again neglecting the spatially varying boundary properties, causing segmentation faults. An innovative method is hereby presented that aims to achieve highly accurate HC segmentation in MR images, based on the modeling of boundary properties at each anatomical location and the inclusion of appropriate image information for each of those, within an active contour model framework. Hence, blending of image information and prior knowledge is based on a local weighting map, which mixes gradient information, regional and whole brain statistical information with a multi-atlas-based spatial distribution map of the structure's labels. Experimental results on three different datasets demonstrate the efficacy and accuracy of the proposed method.

  5. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    Science.gov (United States)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array includes two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and the self-organized map (SOM) neural network. The vision processor is fabricated using a 0.18 µm CMOS technology. The circuit area of the distributed memory is reduced markedly into 1/3 of that of the conventional memory so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design achieves correct functions.

  6. Cumulative distribution functions associated with bubble-nucleation processes in cavitation

    KAUST Repository

    Watanabe, Hiroshi

    2010-11-15

    Bubble-nucleation processes of a Lennard-Jones liquid are studied by molecular dynamics simulations. Waiting time, which is the lifetime of a superheated liquid, is determined for several system sizes, and the apparent finite-size effect of the nucleation rate is observed. From the cumulative distribution function of the nucleation events, the bubble-nucleation process is found to be not a simple Poisson process but a Poisson process with an additional relaxation time. The parameters of the exponential distribution associated with the process are determined by taking the relaxation time into account, and the apparent finite-size effect is removed. These results imply that the use of the arithmetic mean of the waiting time until a bubble grows to the critical size leads to an incorrect estimation of the nucleation rate. © 2010 The American Physical Society.
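
    The correction described above has a simple one-dimensional analogue: if nucleation becomes Poissonian only after a relaxation time t0, waiting times are shifted-exponential, and the raw arithmetic mean biases the estimated rate low. The sketch below uses synthetic waiting times; all parameter values are made up for illustration.

```python
# Shifted-exponential waiting times: naive rate estimate vs. a corrected
# estimate that accounts for the relaxation time, as discussed above.
import random

random.seed(1)
t0, rate = 5.0, 0.2                    # relaxation time and true rate
waits = [t0 + random.expovariate(rate) for _ in range(10000)]

naive_rate = len(waits) / sum(waits)   # from the arithmetic mean, ignores t0
t0_hat = min(waits)                    # simple estimate of the shift
tau_hat = sum(w - t0_hat for w in waits) / len(waits)
print(f"naive rate: {naive_rate:.3f}, corrected rate: {1 / tau_hat:.3f}")
```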

  7. Ruin Probabilities and Aggregrate Claims Distributions for Shot Noise Cox Processes

    DEFF Research Database (Denmark)

    Albrecher, H.; Asmussen, Søren

    We consider a risk process R_t where the claim arrival process is a superposition of a homogeneous Poisson process and a Cox process with a Poisson shot noise intensity process, capturing the effect of sudden increases of the claim intensity due to external events. The distribution of the aggregate claim size is investigated under these assumptions. For both light-tailed and heavy-tailed claim size distributions, asymptotic estimates for infinite-time and finite-time ruin probabilities are derived. Moreover, we discuss an extension of the model to an adaptive premium rule that is dynamically adjusted according to past claims experience.

  8. Unified theory for stochastic modelling of hydroclimatic processes: Preserving marginal distributions, correlation structures, and intermittency

    Science.gov (United States)

    Papalexiou, Simon Michael

    2018-05-01

    Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
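
    The core of the strategy is easy to sketch: generate the parent Gaussian process, uniformize it, and push it through the inverse CDF of the target marginal. The fragment below illustrates this with an AR(1) parent and a gamma marginal (both arbitrary choices; the paper's parametric correlation transformation functions are not reproduced here).

```python
# Sketch of the parent-Gaussian scheme: an AR(1) Gaussian process mapped to
# a gamma marginal via the inverse-CDF transform. Parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, rho = 100000, 0.7

z = np.empty(n)                         # parent Gaussian AR(1) process
z[0] = rng.standard_normal()
for t in range(1, n):
    z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

u = stats.norm.cdf(z)                   # uniformize the parent process
x = stats.gamma.ppf(u, a=2.0, scale=1.5)  # back-transform to target marginal

print(f"target mean 3.0, sample mean {x.mean():.2f}")
print(f"lag-1 correlation of x: {np.corrcoef(x[:-1], x[1:])[0, 1]:.2f}")
```

    Note that the lag-1 correlation of the transformed series comes out slightly below the parent value of 0.7; this attenuation is exactly what the correlation transformation functions mentioned in the abstract are designed to compensate for.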

  9. The brain as a distributed intelligent processing system: an EEG study.

    Science.gov (United States)

    da Rocha, Armando Freitas; Rocha, Fábio Theoto; Massad, Eduardo

    2011-03-15

    Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. The present results support these claims and the neural efficiency hypothesis.

  10. The brain as a distributed intelligent processing system: an EEG study.

    Directory of Open Access Journals (Sweden)

    Armando Freitas da Rocha

    Full Text Available BACKGROUND: Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. METHODOLOGY AND PRINCIPAL FINDINGS: In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. CONCLUSION: The present results support these claims and the neural efficiency hypothesis.

  11. The Brain as a Distributed Intelligent Processing System: An EEG Study

    Science.gov (United States)

    da Rocha, Armando Freitas; Rocha, Fábio Theoto; Massad, Eduardo

    2011-01-01

    Background Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. Methodology and Principal Findings In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. Conclusion The present results support these claims and the neural efficiency hypothesis. PMID:21423657

  12. Simulation of business processes of processing and distribution of orders in transportation

    Directory of Open Access Journals (Sweden)

    Ольга Ігорівна Проніна

    2017-06-01

    Full Text Available Analyzing modern passenger transportation in Ukraine, we can conclude that as the urban population grows, the need to develop passenger traffic and to improve the quality of transport services increases as well. The paper examines three existing models of private passenger transportation (taxi): a model with a dispatching service, a model without a dispatching service, and a mixed model. An algorithm for receiving an order, processing it, and executing it under the given model is considered. Several arrangement schemes that characterize the operation of the system are shown in the work as well. The interrelation between the client making an order and the driver who receives and executes the order is represented, the server being the connecting link between the customer and the driver and regulating the system as a whole. The business process of private passenger transportation without a dispatching service was simulated. Based on the simulation results, it was proposed to supplement the model of private transportation with an advice system, as well as to improve the car selection algorithm. The advice system provides the optimal choice of car, taking many factors into account, and it will also make it possible to use the specific additional services provided by the drivers more efficiently. Optimizing the order handling process makes it possible to increase the capacity of the drivers and thus their profits. Passenger transportation without a dispatching service has some weak points, and they were identified. Application of the system will improve the transport structure under modern conditions and improve transportation based on a modern operating system

  13. Distributed inter process communication framework of BES III DAQ online software

    International Nuclear Information System (INIS)

    Li Fei; Liu Yingjie; Ren Zhenyu; Wang Liang; Chinese Academy of Sciences, Beijing; Chen Mali; Zhu Kejun; Zhao Jingwei

    2006-01-01

    The DAQ (Data Acquisition) system is an important part of BES III, the large-scale high-energy physics detector at BEPC. The inter-process communication (IPC) of online software in distributed environments is pivotal to the design and implementation of a DAQ system. This article introduces a distributed inter-process communication framework that is based on CORBA and used in the BES III DAQ online software. The article mainly presents the design and implementation of the IPC framework and applications based on the IPC. (authors)

  14. Comparison of Environment Impact between Conventional and Cold Chain Management System in Paprika Distribution Process

    Directory of Open Access Journals (Sweden)

    Eidelweijs A Putri

    2012-09-01

    Full Text Available Pasir Langu village in Cisarua, West Java, is the largest central production area of paprika in Indonesia. On average, for every 200 kilograms of paprika produced, 3 kilograms are rejected, which results in money lost for wholesalers and in waste; in one year this loss can amount to approximately 11.7 million Indonesian rupiah. Paprika wholesalers in Pasir Langu village are currently developing a cold chain management system to maintain the quality of paprika so that the number of rejections can be reduced. The objective of this study is to compare the environmental impacts of the conventional and the cold chain management systems in the paprika distribution process using the Life Cycle Assessment (LCA) methodology, and to propose a photovoltaic (PV) system for the paprika distribution process. The results imply that the cold chain system produces more CO2 emissions than the conventional system; however, with the promotion of the PV system, the emissions would be reduced. For future research, it is necessary to reduce CO2 emissions from the transportation process, since this process is the biggest contributor of CO2 emissions in the whole distribution process. Keywords: LCA, environmentally friendly distribution, paprika, cold chain, PV system

  15. Integration of distributed plant process computer systems to nuclear power generation facilities

    International Nuclear Information System (INIS)

    Bogard, T.; Finlay, K.

    1996-01-01

    Many operating nuclear power generation facilities are replacing their plant process computers. Such replacement projects are driven by equipment obsolescence issues and the associated objectives to improve plant operability, increase access to plant information, improve man-machine interface characteristics, and reduce operation and maintenance costs. This paper describes a few recently completed and ongoing replacement projects, with emphasis on the application of integrated distributed plant process computer systems. By presenting a few recent projects, the variations in distributed system design show how various configurations can address needs for flexibility, open architecture, and integration of technological advancements in instrumentation and control technology. Architectural considerations for the optimal integration of the plant process computer with plant process instrumentation & control are evident from the variations of design features

  16. Distributed process control system for remote control and monitoring of the TFTR tritium systems

    International Nuclear Information System (INIS)

    Schobert, G.; Arnold, N.; Bashore, D.; Mika, R.; Oliaro, G.

    1989-01-01

    This paper reviews the progress made in the application of a commercially available distributed process control system to support the requirements established for the Tritium REmote Control And Monitoring System (TRECAMS) of the Tokamak Fusion Test Reactor (TFTR). The system to be discussed was purchased from Texas Instruments (TI) Automation Controls Division and was previously marketed by Rexnord Automation. It consists of three fully redundant distributed process controllers interfaced to over 1800 analog and digital I/O points. The operator consoles located throughout the facility are supported by four Digital Equipment Corporation (DEC) PDP-11/73 computers. The PDP-11/73s and the three process controllers communicate over a fully redundant one-megabaud fiber optic network. All system functionality is based on a set of completely integrated databases loaded to the process controllers and the PDP-11/73s. (author). 2 refs.; 2 figs

  17. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    Science.gov (United States)

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multi-atlas, and mean-shift techniques to segment the liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and the other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image is segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result is obtained by fusing the segmentation results from all atlas spaces via a multi-classifier fusion technique. In particular, in order to speed up segmentation, given a test image we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original, inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.
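
    Of the pipeline above, the multi-classifier fusion step is the simplest to illustrate. The sketch below (random stand-in label volumes; the ACM classifiers and mean-shift over-segmentation are not reproduced) fuses per-atlas binary segmentations by per-voxel majority voting, one common choice of fusion technique.

```python
# Sketch: per-voxel majority-vote fusion of segmentations obtained in
# several atlas spaces. Inputs are random stand-ins for real label volumes.
import numpy as np

def fuse_majority(segmentations):
    """segmentations: list of equally shaped binary (0/1) label volumes."""
    stack = np.stack(segmentations)       # (n_atlases, *volume_shape)
    # For binary labels, majority voting is a threshold on the mean.
    return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

atlas_results = [np.random.default_rng(s).integers(0, 2, size=(4, 4, 4))
                 for s in range(5)]       # stand-ins for five atlas spaces
fused = fuse_majority(atlas_results)
print(fused.shape, int(fused.sum()), "voxels labeled foreground")
```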

  18. Estimating the transmission potential of supercritical processes based on the final size distribution of minor outbreaks.

    Science.gov (United States)

    Nishiura, Hiroshi; Yan, Ping; Sleeman, Candace K; Mode, Charles J

    2012-02-07

    Use of the final size distribution of minor outbreaks for the estimation of the reproduction numbers of supercritical epidemic processes has yet to be considered. We used a branching process model to derive the final size distribution of minor outbreaks, assuming a reproduction number above unity, and applied the method to final size data for pneumonic plague. Pneumonic plague is a rare disease with only one documented major epidemic in a spatially limited setting. Because the final size distribution of a minor outbreak needs to be normalized by the probability of extinction, we assume that the dispersion parameter (k) of the negative-binomial offspring distribution is known, and examine the sensitivity of the reproduction number to variation in dispersion. Assuming a geometric offspring distribution with k=1, the reproduction number was estimated at 1.16 (95% confidence interval: 0.97-1.38). When less dispersed, with k=2, the maximum likelihood estimate of the reproduction number was 1.14. These estimates agree with those published from transmission network analysis, indicating that the human-to-human transmission potential of pneumonic plague is not very high. Given only minor outbreaks, transmission potential is not sufficiently assessed by directly counting the number of offspring. Since the absence of a major epidemic does not guarantee a subcritical process, the proposed method allows us to conservatively regard epidemic data from minor outbreaks as supercritical, and to yield estimates of threshold values above unity. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
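
    A sketch of the estimation idea follows. It uses the standard final-size pmf of a Galton-Watson branching process with negative-binomial offspring (mean R, dispersion k), P(Y=j) = Γ(kj+j−1) / (Γ(kj) Γ(j+1)) · (R/k)^(j−1) / (1+R/k)^(kj+j−1), normalized by the extinction probability q (the smallest fixed point of the offspring pgf) so that it is a proper distribution over minor outbreaks when R > 1. The outbreak-size data are invented for illustration; this is not the paper's code.

```python
# Hedged sketch: ML estimation of R from final sizes of minor outbreaks,
# conditioning the branching-process final-size pmf on extinction.
import math
from scipy.optimize import minimize_scalar

def log_pmf(j, R, k):
    """Log final-size probability P(Y=j), NB(mean R, dispersion k) offspring."""
    return (math.lgamma(k * j + j - 1) - math.lgamma(k * j)
            - math.lgamma(j + 1) + (j - 1) * math.log(R / k)
            - (k * j + j - 1) * math.log(1 + R / k))

def extinction_prob(R, k, iters=1000):
    """Smallest fixed point q = g(q) of the NB offspring pgf g."""
    q = 0.0
    for _ in range(iters):
        q = (1 + R * (1 - q) / k) ** (-k)
    return q

def neg_loglik(R, sizes, k):
    q = extinction_prob(R, k)
    return -sum(log_pmf(j, R, k) - math.log(q) for j in sizes)

sizes = [1, 1, 1, 2, 1, 3, 1, 1, 5, 2, 1, 1]   # hypothetical minor outbreaks
res = minimize_scalar(neg_loglik, bounds=(1.0001, 5.0), args=(sizes, 1.0),
                      method="bounded")
print(f"MLE of R for k=1: {res.x:.3f}")
```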

  19. A Java based environment to control and monitor distributed processing systems

    International Nuclear Information System (INIS)

    Legrand, I.C.

    1997-01-01

    Distributed processing systems are being considered to meet the challenging requirements of triggering and data acquisition systems for future HEP experiments. The aim of this work is to present a software environment to control and monitor large-scale parallel processing systems, based on a distributed client-server approach developed in Java. One server task may control several processing nodes, switching elements, or controllers for different sub-systems. Servers are designed as multi-threaded applications for efficient communication with other objects. Servers communicate among themselves by using Remote Method Invocation (RMI) in a peer-to-peer mechanism. This distributed server layer has to provide dynamic and transparent access from any client to all the resources in the system. The graphical user interface programs, which are platform independent, may be transferred to any client via the HTTP protocol. In this scheme the control and monitor tasks are distributed among servers, and the network controls the flow of information among servers and clients, providing a flexible mechanism for monitoring and controlling large heterogeneous distributed systems. (author)
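
    The client-server scheme described above maps naturally onto any RPC mechanism. The following minimal Python analogue (xmlrpc in place of Java RMI; the method name and status data are hypothetical) shows one server task exposing monitoring information that any networked client can query.

```python
# Python analogue of the RMI-based monitoring scheme above: a server exposes
# a remote method, a client invokes it over the network. Names are made up.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

node_status = {"node1": "running", "node2": "idle"}   # stand-in resources

def get_status(node):
    """Remote method that clients may invoke on this server."""
    return node_status.get(node, "unknown")

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(get_status)
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy("http://localhost:8000")  # any client on the network
print(client.get_status("node1"))              # -> "running"
```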

  20. Towards a New Paradigm of Software Development: an Ambassador Driven Process in Distributed Software Companies

    Science.gov (United States)

    Kumlander, Deniss

    The globalization of companies' operations and the competition between software vendors demand improved quality of delivered software at a decreased overall cost. At the same time, these factors introduce many problems into the software development process, as they produce distributed organizations that break the co-location rule of modern software development methodologies. Here we propose a reformulation of the ambassador position that increases its productivity, in order to bridge the communication and workflow gap by managing the entire communication process rather than concentrating purely on the communication result.

  1. Enforcement of entailment constraints in distributed service-based business processes.

    Science.gov (United States)

    Hummer, Waldemar; Gaubatz, Patrick; Strembeck, Mark; Zdun, Uwe; Dustdar, Schahram

    2013-11-01

    A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and the execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s). We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes. Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications which enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five selected service-based processes from the existing literature. Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to the order of several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and clean of security enforcement code. Our approach decouples the concerns of (non-technical) domain experts from the technical details of entailment constraint enforcement. The developed framework integrates seamlessly with WS-BPEL and the Web
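
    The two constraint types are simple to state operationally, as the sketch below shows: mutual exclusion forbids one subject from performing both tasks, while binding requires the subject of one task to also perform the bound task. This is an illustrative checker over a task-execution log, not the paper's WS-BPEL/DSL enforcement machinery; task and subject names are made up.

```python
# Illustrative check of mutual-exclusion and binding entailment constraints
# over a log of (task, subject) pairs. All names are hypothetical.
def check_entailment(log, mutual_exclusions, bindings):
    performer = {task: subject for task, subject in log}
    violations = []
    for a, b in mutual_exclusions:        # same subject must NOT do both
        if a in performer and b in performer and performer[a] == performer[b]:
            violations.append(f"mutual exclusion {a}/{b}: {performer[a]}")
    for a, b in bindings:                 # same subject MUST do both
        if a in performer and b in performer and performer[a] != performer[b]:
            violations.append(f"binding {a}->{b}: {performer[a]} vs {performer[b]}")
    return violations

log = [("prepare_payment", "alice"), ("approve_payment", "alice"),
       ("sign_contract", "bob"), ("countersign_contract", "carol")]
print(check_entailment(
    log,
    mutual_exclusions=[("prepare_payment", "approve_payment")],
    bindings=[("sign_contract", "countersign_contract")]))
```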

  2. Distributed real time data processing architecture for the TJ-II data acquisition system

    International Nuclear Information System (INIS)

    Ruiz, M.; Barrera, E.; Lopez, S.; Machon, D.; Vega, J.; Sanchez, E.

    2004-01-01

    This article describes the performance of a new model of architecture that has been developed for the TJ-II data acquisition system in order to increase its real-time data processing capabilities. The current model consists of several PCI eXtensions for Instrumentation (PXI) standard chassis, each one with various digitizers. In this architecture, the data processing capability is restricted by the PXI controller's own performance, since the controller must share its CPU resources between the data processing and the data acquisition tasks. In the new model, a distributed data processing architecture has been developed. The solution adds one or more processing cards to each PXI chassis. This way it is possible to plan how to distribute the data processing of all acquired signals among the processing cards and the available resources of the PXI controller. This model allows scalability of the system: more or fewer processing cards can be added based on the requirements of the system. The processing algorithms are implemented in LabVIEW (from National Instruments), providing efficient and time-saving application development when compared with other solutions

  3. Dust charging processes with a Cairns-Tsallis distribution function with negative ions

    International Nuclear Information System (INIS)

    Abid, A. A.; Khan, M. Z.; Yap, S. L.; Terças, H.; Mahmood, S.

    2016-01-01

    Dust grain charging processes are presented in a non-Maxwellian dusty plasma following the Cairns-Tsallis (q, α)–distribution, whose constituents are the electrons, as well as the positive/negative ions and negatively charged dust grains. For this purpose, we have solved the current balance equation for a negatively charged dust grain to achieve an equilibrium state value (viz., q_d = constant) in the presence of the Cairns-Tsallis (q, α)–distribution. In fact, the current balance equation becomes modified due to the Boltzmannian/streaming distributed negative ions. It is numerically found that the relevant plasma parameters, such as the spectral indexes q and α, the positive ion-to-electron temperature ratio, and the negative ion streaming speed (U_0) significantly affect the dust grain surface potential. It is also shown that in the limit q → 1 the Cairns-Tsallis distribution reduces to the Cairns distribution; for α = 0 the Cairns-Tsallis distribution reduces to the pure Tsallis distribution, and the latter reduces to the Maxwellian distribution for q → 1 and α = 0

  4. Dust charging processes with a Cairns-Tsallis distribution function with negative ions

    Energy Technology Data Exchange (ETDEWEB)

    Abid, A. A., E-mail: abidaliabid1@hotmail.com [Applied Physics Department, Federal Urdu University of Arts, Science and Technology, Islamabad Campus, Islamabad 45320 (Pakistan); Khan, M. Z., E-mail: mzk-qau@yahoo.com [Applied Physics Department, Federal Urdu University of Arts, Science and Technology, Islamabad Campus, Islamabad 45320 (Pakistan); Plasma Technology Research Center, Department of Physics, Faculty of Science, University of Malaya, Kuala Lumpur 50603 (Malaysia); Yap, S. L. [Plasma Technology Research Center, Department of Physics, Faculty of Science, University of Malaya, Kuala Lumpur 50603 (Malaysia); Terças, H., E-mail: hugo.tercas@tecnico.ul.pt [Physics of Information Group, Instituto de Telecomunicações, Av. Rovisco Pais, Lisbon 1049-001 (Portugal); Mahmood, S. [Science Place, University of Saskatchewan, Saskatoon, Saskatchewan S7N5A2 (Canada)

    2016-01-15

    Dust grain charging processes are presented in a non-Maxwellian dusty plasma following the Cairns-Tsallis (q, α)–distribution, whose constituents are the electrons, as well as the positive/negative ions and negatively charged dust grains. For this purpose, we have solved the current balance equation for a negatively charged dust grain to achieve an equilibrium state value (viz., q_d = constant) in the presence of the Cairns-Tsallis (q, α)–distribution. In fact, the current balance equation becomes modified due to the Boltzmannian/streaming distributed negative ions. It is numerically found that the relevant plasma parameters, such as the spectral indexes q and α, the positive ion-to-electron temperature ratio, and the negative ion streaming speed (U_0) significantly affect the dust grain surface potential. It is also shown that in the limit q → 1 the Cairns-Tsallis distribution reduces to the Cairns distribution; for α = 0 the Cairns-Tsallis distribution reduces to the pure Tsallis distribution, and the latter reduces to the Maxwellian distribution for q → 1 and α = 0.

  5. Predicting cycle time distributions for integrated processing workstations : an aggregate modeling approach

    NARCIS (Netherlands)

    Veeger, C.P.L.; Etman, L.F.P.; Lefeber, A.A.J.; Adan, I.J.B.F.; Herk, van J.; Rooda, J.E.

    2011-01-01

    To predict cycle time distributions of integrated processing workstations, detailed simulation models are almost exclusively used; these models require considerable development and maintenance effort. As an alternative, we propose an aggregate model that is a lumped-parameter representation of the

  6. Distribution flow: a general process in the top layer of water repellent soils

    NARCIS (Netherlands)

    Ritsema, C.J.; Dekker, L.W.

    1995-01-01

    Distribution flow is the process of water and solute flowing in a lateral direction over and through the very first millimetre or centimetre of the soil profile. A potassium bromide tracer was applied in two water-repellent sandy soils to follow the actual flow paths of water and solutes in the

  7. Innovation as a distributed, collaborative process of knowledge generation: open, networked innovation

    NARCIS (Netherlands)

    Sloep, Peter

    2009-01-01

    Sloep, P. B. (2009). Innovation as a distributed, collaborative process of knowledge generation: open, networked innovation. In V. Hornung-Prähauser & M. Luckmann (Eds.), Kreativität und Innovationskompetenz im digitalen Netz - Creativity and Innovation Competencies in the Web, Sammlung von

  8. The constitutive distributed parameter model of multicomponent chemical processes in gas, fluid and solid phase

    International Nuclear Information System (INIS)

    Niemiec, W.

    1985-01-01

    The literature on distributed parameter modelling of real processes does not consider the class of multicomponent chemical processes in the gas, fluid and solid phases. The aim of this paper is a constitutive distributed parameter physicochemical model, constructed from the kinetics and a phenomenological analysis of multicomponent chemical processes in the gas, fluid and solid phases. The mass, energy and momentum aspects of these multicomponent chemical reactions and the associated phenomena are utilized in balance operations, under the conditions of: constitutive invariance for continuous media with space and time memories; the reciprocity principle for isotropic and anisotropic nonhomogeneous media with space and time memories; and application of the definitions of the material derivative and the equation of continuity, to construct systems of partial differential constitutive state equations in material derivative form for the gas, fluid and solid phases. Formulated in this way, the physicochemical conditions of multicomponent chemical processes in the gas, fluid and solid phases constitute a new form of constitutive distributed parameter model for automatics, and its systems of equations are a new form of systems of partial differential constitutive state equations in the sense of phenomenological distributed parameter control

  9. Kaplan-Meier estimators of distance distributions for spatial point processes

    NARCIS (Netherlands)

    Baddeley, A.J.; Gill, R.D.

    1997-01-01

    When a spatial point process is observed through a bounded window, edge effects hamper the estimation of characteristics such as the empty space function $F$, the nearest neighbour distance distribution $G$, and the reduced second order moment function $K$. Here we propose and study product-limit
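
    The product-limit idea can be sketched in a few lines: each point's nearest-neighbour distance is treated as right-censored at that point's distance to the window boundary, and the Kaplan-Meier estimator is applied to the resulting censored sample. The code below (synthetic uniform points in the unit square, ties ignored) is an illustration of this style of estimator for $G$, not the authors' implementation.

```python
# Kaplan-Meier-style estimate of the nearest-neighbour distance
# distribution G under right-censoring by the window boundary.
import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(size=(300, 2))                   # points in the unit square

d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
np.fill_diagonal(d2, np.inf)
nn = np.sqrt(d2.min(axis=1))                       # nearest-neighbour distances
border = np.minimum(pts, 1.0 - pts).min(axis=1)    # distance to the boundary

observed = nn <= border                            # uncensored events
times = np.where(observed, nn, border)             # event or censoring time

order = np.argsort(times)
t, e = times[order], observed[order]
at_risk = len(t) - np.arange(len(t))               # risk set sizes
surv = np.cumprod(np.where(e, 1.0 - 1.0 / at_risk, 1.0))
G_hat = 1.0 - surv                                 # product-limit estimate of G
mid = len(t) // 2
print(f"G_hat({t[mid]:.3f}) = {G_hat[mid]:.3f}")
```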

  10. Spatial patterns in the distribution of kimberlites: relationship to tectonic processes and lithosphere structure

    DEFF Research Database (Denmark)

    Chemia, Zurab; Artemieva, Irina; Thybo, Hans

    2015-01-01

    of kimberlite melts through the lithospheric mantle, which forms the major pipe. Stage 2 (second-order process) begins when the major pipe splits into daughter sub-pipes (tree-like pattern) at crustal depths. We apply cluster analysis to the spatial distribution of all known kimberlite fields with the goal...

  11. Spatial Patterns in Distribution of Kimberlites: Relationship to Tectonic Processes and Lithosphere Structure

    DEFF Research Database (Denmark)

    Chemia, Zurab; Artemieva, Irina; Thybo, Hans

    2014-01-01

    of kimberlite melts through the lithospheric mantle, which forms the major pipe. Stage 2 (second-order process) begins when the major pipe splits into daughter sub-pipes (tree-like pattern) at crustal depths. We apply cluster analysis to the spatial distribution of all known kimberlite fields with the goal...

  12. Parallel Distributed Processing at 25: Further Explorations in the Microstructure of Cognition

    Science.gov (United States)

    Rogers, Timothy T.; McClelland, James L.

    2014-01-01

    This paper introduces a special issue of "Cognitive Science" initiated on the 25th anniversary of the publication of "Parallel Distributed Processing" (PDP), a two-volume work that introduced the use of neural network models as vehicles for understanding cognition. The collection surveys the core commitments of the PDP…

  13. Cumulative distribution functions associated with bubble-nucleation processes in cavitation

    KAUST Repository

    Watanabe, Hiroshi; Suzuki, Masaru; Ito, Nobuyasu

    2010-01-01

    of the exponential distribution associated with the process are determined by taking the relaxation time into account, and the apparent finite-size effect is removed. These results imply that the use of the arithmetic mean of the waiting time until a bubble grows

  14. Neighbor-dependent Ramachandran probability distributions of amino acids developed from a hierarchical Dirichlet process model.

    Directory of Open Access Journals (Sweden)

    Daniel Ting

    2010-04-01

    Full Text Available Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: (1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); (2) filtering of suspect conformations and outliers using B-factors or other features; (3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); (4) the method used for determining probability densities, ranging from simple histograms to modern nonparametric density estimation; and (5) whether they include nearest-neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp.

  15. Distribution and interplay of geologic processes on Titan from Cassini radar data

    Science.gov (United States)

    Lopes, R.M.C.; Stofan, E.R.; Peckyno, R.; Radebaugh, J.; Mitchell, K.L.; Mitri, Giuseppe; Wood, C.A.; Kirk, R.L.; Wall, S.D.; Lunine, J.I.; Hayes, A.; Lorenz, R.; Farr, Tom; Wye, L.; Craig, J.; Ollerenshaw, R.J.; Janssen, M.; LeGall, A.; Paganelli, F.; West, R.; Stiles, B.; Callahan, P.; Anderson, Y.; Valora, P.; Soderblom, L.

    2010-01-01

    The Cassini Titan Radar Mapper is providing an unprecedented view of Titan's surface geology. Here we use Synthetic Aperture Radar (SAR) image swaths (Ta-T30) obtained from October 2004 to December 2007 to infer the geologic processes that have shaped Titan's surface. These SAR swaths cover about 20% of the surface, at a spatial resolution ranging from ∼350 m to ∼2 km. The SAR data are distributed over a wide latitudinal and longitudinal range, enabling some conclusions to be drawn about the global distribution of processes. They reveal a geologically complex surface that has been modified by all the major geologic processes seen on Earth - volcanism, tectonism, impact cratering, and erosion and deposition by fluvial and aeolian activity. In this paper, we map geomorphological units from SAR data and analyze their areal distribution and relative ages of modification in order to infer the geologic evolution of Titan's surface. We find that dunes and hummocky and mountainous terrains are more widespread than lakes, putative cryovolcanic features, mottled plains, and craters and crateriform structures that may be due to impact. Undifferentiated plains are the largest areal unit; their origin is uncertain. In terms of latitudinal distribution, dunes and hummocky and mountainous terrains are located mostly at low latitudes (less than 30°), with no dunes being present above 60°. Channels formed by fluvial activity are present at all latitudes, but lakes are at high latitudes only. Crateriform structures that may have been formed by impact appear to be uniformly distributed with latitude, but the well-preserved impact craters are all located at low latitudes, possibly indicating that more resurfacing has occurred at higher latitudes. Cryovolcanic features are not ubiquitous, and are mostly located between 30° and 60° north. We examine temporal relationships between units wherever possible, and conclude that aeolian and fluvial/pluvial/lacustrine processes are the

  16. Strange quark distribution and parton charge symmetry violation in a semi-inclusive process

    International Nuclear Information System (INIS)

    Kitagawa, Hisashi; Sakemi, Yasuhiro

    2000-01-01

    It is possible to observe a semi-inclusive reaction with tagged charged kaons using the RICH detector at DESY-HERA. Using the semi-inclusive process we study two kinds of parton properties in the nucleon. We study relations between cross sections and strange quark distributions, which are expected to be measured more precisely in such a process than in the process in which pions are tagged. We also investigate charge symmetry violation (CSV) in the nucleon, which appears in the region x ≤ 0.1. (author)

  17. Standardization of a method to study the distribution of Americium in purex process

    International Nuclear Information System (INIS)

    Dapolikar, T.T.; Pant, D.K.; Kapur, H.N.; Kumar, Rajendra; Dubey, K.

    2017-01-01

    In the present work the distribution of americium in the PUREX process is investigated in various process streams. For this purpose a method has been standardized for the determination of Am in process samples. The method involves extraction of Am, with the associated actinides, using 30% TRPO-NPH at 0.3 M HNO_3, followed by selective stripping of Am from the organic phase into the aqueous phase at 6 M HNO_3. The assay of the aqueous phase for Am content is carried out by alpha radiometry. The investigation has revealed that 100% of the Am follows the HLLW route. (author)

  18. Bayesian inference for multivariate point processes observed at sparsely distributed times

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl; Møller, Jesper; Aukema, B.H.

    We consider statistical and computational aspects of simulation-based Bayesian inference for a multivariate point process which is only observed at sparsely distributed times. For specificity we consider a particular data set which has earlier been analyzed by a discrete time model involving unknown normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared to discrete time processes in the setting of the present paper, as well as in other spatial-temporal situations. Keywords: Bark beetle, conditional intensity, forest entomology, Markov chain Monte Carlo...

  19. Modelling spatiotemporal distribution patterns of earthworms in order to indicate hydrological soil processes

    Science.gov (United States)

    Palm, Juliane; Klaus, Julian; van Schaik, Loes; Zehe, Erwin; Schröder, Boris

    2010-05-01

    Soils provide central ecosystem functions in recycling nutrients, detoxifying harmful chemicals, and regulating microclimate and local hydrological processes. The internal regulation of these functions, and therefore the development of healthy and fertile soils, mainly depends on the functional diversity of plants and animals. Soil organisms drive essential processes such as litter decomposition, nutrient cycling, water dynamics, and soil structure formation. Disturbances by different soil management practices (e.g., soil tillage, fertilization, pesticide application) affect the distribution and abundance of soil organisms and hence influence regulating processes. The strong relationship between environmental conditions and soil organisms gives us the opportunity to link spatiotemporal distribution patterns of indicator species with the potential provision of essential soil processes on different scales. Earthworms are key organisms for soil function and affect, among other things, water dynamics and solute transport in soils. Through their burrowing activity, earthworms increase the number of macropores by building semi-permanent burrow systems. In the unsaturated zone, earthworm burrows act as preferential flow pathways and affect water infiltration, surface, subsurface and matrix flow, as well as the transport of water and solutes into deeper soil layers. Different ecological earthworm types differ in importance in this respect: deep-burrowing anecic species (e.g., Lumbricus terrestris) affect vertical flow and thus increase the risk of potential contamination of groundwater with agrochemicals, whereas horizontally burrowing endogeic (e.g., Aporrectodea caliginosa) and epigeic species (e.g., Lumbricus rubellus) increase water conductivity and the diffuse distribution of water and solutes in the upper soil layers. The question of which processes are more relevant is pivotal for soil management and risk assessment. Thus, finding relevant

  20. A distributed process monitoring system for nuclear powered electrical generating facilities

    International Nuclear Information System (INIS)

    Sweney, A.D.

    1991-01-01

    Duke Power Company is one of the largest investor-owned utilities in the United States, with a service area of 20,000 square miles extending across North and South Carolina. Oconee Nuclear Station, one of Duke Power's three nuclear generating facilities, is a three-unit pressurized water reactor site and has, over the course of its 15-year operating lifetime, effectively run out of plant processing capability. From a severely overcrowded cable spread room to an aging, overtaxed Operator Aid Computer, the problems with trying to add additional process variables to the present centralized Operator Aid Computer are almost insurmountable. This paper reports that, for this reason, and to realize the inherent benefits of a distributed process monitoring and control system, Oconee has embarked on a project to demonstrate the ability of a distributed system to perform in the nuclear power plant environment

  1. Method for distributed agent-based non-expert simulation of manufacturing process behavior

    Science.gov (United States)

    Ivezic, Nenad; Potok, Thomas E.

    2004-11-30

    A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
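
    The claimed message-loop structure translates naturally into code. Below is a minimal sketch of the agent/event pattern the abstract describes; the three event names mirror the message types listed, while the agent's internal logic and parameters are hypothetical:

    ```python
    from collections import deque

    # Discrete events mirroring the three message types named in the abstract.
    CLOCK_TICK = "clock_tick"
    RESOURCES_RECEIVED = "resources_received"
    REQUEST_OUTPUT = "request_for_output_production"

    class ProcessAgent:
        """An agent associated with one manufacturing process (hypothetical logic)."""
        def __init__(self, name, cycle_time):
            self.name = name
            self.cycle_time = cycle_time   # ticks needed to turn inputs into outputs
            self.stock = 0                 # resources on hand
            self.progress = 0

        def handle(self, event):
            # Each discrete event triggers a programmed response.
            if event == RESOURCES_RECEIVED:
                self.stock += 1
            elif event == CLOCK_TICK and self.stock > 0:
                self.progress += 1
            elif event == REQUEST_OUTPUT and self.progress >= self.cycle_time:
                self.stock -= 1
                self.progress = 0
                return f"{self.name}: unit produced"
            return None

    # Single-processor message loop: every event is delivered to every agent.
    agents = [ProcessAgent("milling", 3), ProcessAgent("assembly", 5)]
    events = deque([RESOURCES_RECEIVED] + [CLOCK_TICK] * 6 + [REQUEST_OUTPUT])
    while events:
        ev = events.popleft()
        for agent in agents:
            out = agent.handle(ev)
            if out:
                print(out)
    ```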

  2. Radial transport processes as a precursor to particle deposition in drinking water distribution systems.

    Science.gov (United States)

    van Thienen, P; Vreeburg, J H G; Blokker, E J M

    2011-02-01

    Various particle transport mechanisms play a role in the build-up of discoloration potential in drinking water distribution networks. In order to enhance our understanding of and ability to predict this build-up, it is essential to recognize and understand their role. Gravitational settling with drag has primarily been considered in this context. However, since flow in water distribution pipes is nearly always in the turbulent regime, turbulent processes should be considered also. In addition to these, single particle effects and forces may affect radial particle transport. In this work, we present an application of a previously published turbulent particle deposition theory to conditions relevant for drinking water distribution systems. We predict quantitatively under which conditions turbophoresis, including the virtual mass effect, the Saffman lift force, and the Magnus force may contribute significantly to sediment transport in radial direction and compare these results to experimental observations. The contribution of turbophoresis is mostly limited to large particles (>50 μm) in transport mains, and not expected to play a major role in distribution mains. The Saffman lift force may enhance this process to some degree. The Magnus force is not expected to play any significant role in drinking water distribution systems. © 2010 Elsevier Ltd. All rights reserved.

  3. Distributed dendritic processing facilitates object detection: a computational analysis on the visual system of the fly.

    Science.gov (United States)

    Hennig, Patrick; Möller, Ralf; Egelhaaf, Martin

    2008-08-28

    Detecting objects is an important task when moving through a natural environment. Flies, for example, may land on salient objects or may avoid collisions with them. The neuronal ensemble of Figure Detection cells (FD-cells) in the visual system of the fly is likely to be involved in controlling these behaviours, as these cells are more sensitive to objects than to extended background structures. Until now, the computations in the presynaptic neuronal network of FD-cells and, in particular, the functional significance of the experimentally established distributed dendritic processing of excitatory and inhibitory inputs have not been understood. We use model simulations to analyse the neuronal computations responsible for the preference of FD-cells for small objects. We employed a new modelling approach which allowed us to account for the spatial spread of electrical signals in the dendrites while avoiding detailed compartmental modelling. The models are based on available physiological and anatomical data. Three models were tested, each implementing an inhibitory neural circuit but differing in the spatial arrangement of the inhibitory interaction. Parameter optimisation with an evolutionary algorithm revealed that only distributed dendritic processing satisfies the constraints arising from electrophysiological experiments. In contrast to a direct dendro-dendritic inhibition of the FD-cell (Direct Distributed Inhibition model), an inhibition of its presynaptic retinotopic elements (Indirect Distributed Inhibition model) requires smaller changes in input resistance in the inhibited neurons during visual stimulation. Distributed dendritic inhibition of retinotopic elements as implemented in our Indirect Distributed Inhibition model is the most plausible wiring scheme for the neuronal circuit of FD-cells. This microcircuit is computationally similar to lateral inhibition between the retinotopic elements. Hence, distributed inhibition might be an alternative explanation of

  4. Comparison of the depth distribution processes for 137Cs and 210Pbex in cultivated soils

    International Nuclear Information System (INIS)

    Zhang Yunqi; Zhang Xinbao; Long Yi; He Xiubin; Yu Xingxiu

    2012-01-01

    This paper focuses on the different processes of 137Cs and 210Pbex depth distribution in cultivated soils. In view of their different fallout deposition processes, and considering that radionuclides will diffuse from the plough layer to the plough pan layer due to the concentration gradient between the two layers, the 137Cs and 210Pbex depth distribution processes were theoretically derived. Additionally, the theoretical derivation was verified against the measured 137Cs and 210Pbex values in a soil core collected from a wheat field in Fujianzhuang, Shanxi Province, China, and the variation of 137Cs and 210Pbex concentrations with depth in the soils of the wheat field was explained rationally. The 137Cs depth distribution in cultivated soils will vary continually with time due to the continual decay and diffusion of 137Cs, an artificial radionuclide without sustained fallout input since the 1960s. In contrast, the 210Pbex depth distribution in cultivated soils will reach a steady state because of the sustained deposition of naturally occurring 210Pbex fallout. It can be concluded that the differences between the theoretical and the measured values, especially for 210Pbex, might be associated with the history of plough depth variation or LUCC. (authors)

  5. Power Distribution Analysis For Electrical Usage In Province Area Using Olap (Online Analytical Processing

    Directory of Open Access Journals (Sweden)

    Samsinar Riza

    2018-01-01

    The distribution network is the part of the power grid closest to the customers of electric service providers such as PT. PLN. The dispatching center of a power grid company is also the data center of the power grid, where a great amount of operating information is gathered. The valuable information contained in these data means a great deal for power grid operating management. The technique of data warehousing with online analytical processing (OLAP) has been used to manage and analyse this large volume of data. The specific outputs of the online analytics information system resulting from data warehouse processing with OLAP are chart and query reporting. The chart reporting consists of load distribution charts over time, load distribution charts by area, substation region charts, and electric load usage charts. The results of the OLAP process show the development of the electric load distribution, provide analysis of electric power consumption loads, and offer an alternative way of presenting information related to peak load.

  6. Power Distribution Analysis For Electrical Usage In Province Area Using Olap (Online Analytical Processing)

    Science.gov (United States)

    Samsinar, Riza; Suseno, Jatmiko Endro; Widodo, Catur Edi

    2018-02-01

    The distribution network is the part of the power grid closest to the customers of electric service providers such as PT. PLN. The dispatching center of a power grid company is also the data center of the power grid, where a great amount of operating information is gathered. The valuable information contained in these data means a great deal for power grid operating management. The technique of data warehousing with online analytical processing (OLAP) has been used to manage and analyse this large volume of data. The specific outputs of the online analytics information system resulting from data warehouse processing with OLAP are chart and query reporting. The chart reporting consists of load distribution charts over time, load distribution charts by area, substation region charts, and electric load usage charts. The results of the OLAP process show the development of the electric load distribution, provide analysis of electric power consumption loads, and offer an alternative way of presenting information related to peak load.
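
    To make the chart-and-query reporting concrete: the load-distribution views described in these two records are essentially OLAP roll-ups over time and area dimensions. A minimal sketch in Python/pandas follows; the column names and sample figures are hypothetical, and the papers themselves use a dedicated data warehouse rather than pandas:

    ```python
    import pandas as pd

    # Hypothetical grid operating records: one row per (hour, area, substation).
    records = pd.DataFrame({
        "hour":       [0, 0, 6, 6, 12, 12, 18, 18],
        "area":       ["North", "South"] * 4,
        "substation": ["SS-1", "SS-2"] * 4,
        "load_mw":    [42.0, 37.5, 55.1, 49.3, 71.8, 64.2, 90.4, 83.7],
    })

    # OLAP-style roll-up: load distribution by repetition of time (hour) and by
    # area, analogous to the chart reports produced from the data warehouse.
    load_by_hour_area = records.pivot_table(
        index="hour", columns="area", values="load_mw", aggfunc="sum")
    print(load_by_hour_area)

    # Slice ("query reporting"): peak load per area across the day.
    print(records.groupby("area")["load_mw"].max())
    ```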

  7. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Process capability indices are very important process quality assessment tools in the automotive industry. The common process capability indices (PCIs) Cp, Cpk and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed under the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few sources offer a comprehensive evaluation and comparison of their ability to capture true capability in non-normal situations. In this paper, five methods are reviewed and a capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
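
    Two of the reviewed approaches are easy to state in code: a Clements-style percentile index replaces the 3-sigma limits with the 0.135% and 99.865% percentiles of the data, while the Box-Cox route transforms the data (and the specification limits) toward normality before applying the usual formulas. A minimal sketch, with made-up specification limits and synthetic skewed data:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.weibull(1.5, 2000) * 10          # skewed "process" data (hypothetical)
    LSL, USL = 0.5, 30.0                     # hypothetical specification limits

    # Percentile-based (Clements-style) capability: replace mu +/- 3*sigma with
    # the empirical 0.135% / 50% / 99.865% percentiles.
    p_lo, med, p_hi = np.percentile(x, [0.135, 50, 99.865])
    cp_percentile = (USL - LSL) / (p_hi - p_lo)
    cpk_percentile = min((USL - med) / (p_hi - med), (med - LSL) / (med - p_lo))

    # Box-Cox route: transform data and spec limits, then use the normal formulas.
    y, lam = stats.boxcox(x)
    y_lsl, y_usl = stats.boxcox([LSL, USL], lam)
    cpk_boxcox = min(y_usl - y.mean(), y.mean() - y_lsl) / (3 * y.std(ddof=1))

    print(f"percentile Cp={cp_percentile:.2f} Cpk={cpk_percentile:.2f} "
          f"Box-Cox Cpk={cpk_boxcox:.2f}")
    ```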

  8. Distribution ratios on Dowex 50W resins of metal leached in the caron nickel recovery process

    International Nuclear Information System (INIS)

    Reynolds, B.A.; Metsa, J.C.; Mullins, M.E.

    1980-05-01

    Pressurized ion exchange on Dowex 50W-X8 and 50W-X12 resins was investigated using elution techniques to determine distribution ratios for copper, nickel, and cobalt complexes contained in ammonium carbonate solution, a mixture which approximates the waste liquor from the Caron nickel recovery process. Results were determined for different feed concentrations, as well as for different concentrations and pH values of the ammonium carbonate eluant. Distribution ratios were compared with those previously obtained from a continuous annular chromatographic system. Separation of copper and nickel was not conclusively observed at any of the conditions examined

  9. Distribution ratios on Dowex 50W resins of metal leached in the caron nickel recovery process

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, B.A.; Metsa, J.C.; Mullins, M.E.

    1980-05-01

    Pressurized ion exchange on Dowex 50W-X8 and 50W-X12 resins was investigated using elution techniques to determine distribution ratios for copper, nickel, and cobalt complexes contained in ammonium carbonate solution, a mixture which approximates the waste liquor from the Caron nickel recovery process. Results were determined for different feed concentrations, as well as for different concentrations and pH values of the ammonium carbonate eluant. Distribution ratios were compared with those previously obtained from a continuous annular chromatographic system. Separation of copper and nickel was not conclusively observed at any of the conditions examined.

  10. Distributed control and monitoring of high-level trigger processes on the LHCb online farm

    CERN Document Server

    Vannerem, P; Jost, B; Neufeld, N

    2003-01-01

    The on-line data taking of the LHCb experiment at the future LHC collider will be controlled by a fully integrated and distributed Experiment Control System (ECS). The ECS will supervise both the detector operation (DCS) and the trigger and data acquisition (DAQ) activities of the experiment. These tasks require a large distributed information management system. The aim of this paper is to show how the control and monitoring of software processes such as trigger algorithms are integrated in the ECS of LHCb.

  11. The Particle Distribution in Liquid Metal with Ceramic Particles Mould Filling Process

    Science.gov (United States)

    Dong, Qi; Xing, Shu-ming

    2017-09-01

    Adding ceramic particles to the plate hammer is an effective method of increasing the wear resistance of the hammer. The liquid-phase method considered here prepares a ZTA ceramic particle reinforced high-chromium cast iron hammer by flow mixing and liquid forging. For this preparation route, CFD simulation is used to analyse the particle distribution during the flow-mixing and mould-filling process. Taking a high-chromium cast iron hammer with a 30% volume fraction of ZTA ceramic composite as an example, the particle distribution before solidification can be controlled, and reasonably predicted, by adjusting the viscosity of the liquid metal.

  12. Charged particle multiplicity distributions in e+e--annihilation processes in the LEP experiments

    International Nuclear Information System (INIS)

    Shlyapnikov, P.V.

    1992-01-01

    Results of studies of the charged particle multiplicity distributions in the process of e+e- annihilation into hadrons, obtained in experiments at the LEP accelerator at CERN, are reviewed. Universality in the energy dependence of the average charged particle multiplicity in e+e- and p+-p collisions, evidence for KNO scaling in e+e- data, structure in the multiplicity distribution and its relation to the jet structure of events, average particle multiplicities of quark and gluon jets, the 'clan' picture and other topics are discussed. 73 refs.; 20 figs.; 3 tabs

  13. Effect of process parameters on temperature distribution in twin-electrode TIG coupling arc

    Science.gov (United States)

    Zhang, Guangjun; Xiong, Jun; Gao, Hongming; Wu, Lin

    2012-10-01

    The twin-electrode TIG coupling arc is a new type of welding heat source, which is generated in a single welding torch that has two tungsten electrodes insulated from each other. This paper aims at determining the distribution of temperature for the coupling arc using the Fowler-Milne method under the assumption of local thermodynamic equilibrium. The influences of welding current, arc length, and distance between both electrode tips on temperature distribution of the coupling arc were analyzed. Based on the results, a better understanding of the twin-electrode TIG welding process was obtained.

  14. Enabling Chemistry of Gases and Aerosols for Assessment of Short-Lived Climate Forcers: Improving Solar Radiation Modeling in the DOE-ACME and CESM models

    Energy Technology Data Exchange (ETDEWEB)

    Prather, Michael [Univ. of California, Irvine, CA (United States)

    2018-01-12

    This proposal seeks to maintain the DOE-ACME (offshoot of CESM) as one of the leading CCMs to evaluate near-term climate mitigation. It will implement, test, and optimize the new UCI photolysis codes within CESM CAM5 and new CAM versions in ACME. Fast-J is a high-order-accuracy (8 stream) code for calculating solar scattering and absorption in a single column atmosphere containing clouds, aerosols, and gases that was developed at UCI and implemented in CAM5 under the previous BER/SciDAC grant.

  15. CLIC-ACM: generic modular rad-hard data acquisition system based on CERN GBT versatile link

    International Nuclear Information System (INIS)

    Bielawski, B.; Locci, F.; Magnoni, S.

    2015-01-01

    CLIC is a world-wide collaboration to study the next 'terascale' lepton collider, relying upon a very innovative concept of two-beam acceleration. This accelerator, currently under study, will be composed of a sequence of 21,000 two-beam modules. Each module requires more than 300 analogue and digital signals which need to be acquired and controlled in a synchronous way. CLIC-ACM (Acquisition and Control Module) is the 'generic' control and acquisition module developed to accommodate the controls of all these signals for the various sub-systems and their related specifications in terms of data bandwidth, triggering and timing synchronization. This paper describes the system architecture with respect to its radiation-tolerance, power consumption and scalability

  16. Distribution

    Science.gov (United States)

    John R. Jones

    1985-01-01

    Quaking aspen is the most widely distributed native North American tree species (Little 1971, Sargent 1890). It grows in a great diversity of regions, environments, and communities (Harshberger 1911). Only one deciduous tree species in the world, the closely related Eurasian aspen (Populus tremula), has a wider range (Weigle and Frothingham 1911)....

  17. Research on distributed optical fiber sensing data processing method based on LabVIEW

    Science.gov (United States)

    Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing

    2018-01-01

    Pipeline leak detection and leak location have received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline, and the data processing method for distributed optical fiber sensing based on LabVIEW is studied in detail. The hardware system includes a laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card and computer. The software system, developed in LabVIEW, adopts a wavelet denoising method to process the temperature information, which improves the SNR. By extracting characteristic values from the fiber temperature information, the system realizes temperature measurement, leak location, and measurement signal storage and retrieval. Compared with the traditional negative pressure wave or acoustic signal methods, the distributed optical fiber temperature measuring system can measure several temperatures in one measurement and locate the leak point accurately. It has broad application prospects.
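
    The paper's software is built in LabVIEW, but the wavelet-denoising step it describes can be sketched compactly in Python with the PyWavelets package; the wavelet choice, threshold rule, and synthetic signal below are illustrative assumptions, not the authors' settings:

    ```python
    import numpy as np
    import pywt

    # Synthetic fiber temperature trace: a warm anomaly (leak) buried in noise.
    rng = np.random.default_rng(0)
    z = np.linspace(0, 1000, 4096)                     # position along fiber, m
    signal = 20 + 5 * np.exp(-((z - 620) / 15) ** 2)   # 5 K hot spot near 620 m
    noisy = signal + rng.normal(0, 1.0, z.size)

    # Wavelet denoising: decompose, soft-threshold detail coefficients, reconstruct.
    coeffs = pywt.wavedec(noisy, "db4", level=5)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate (MAD rule)
    thresh = sigma * np.sqrt(2 * np.log(noisy.size))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, "db4")[: z.size]

    # Leak location: position of the maximum temperature deviation.
    print(f"estimated leak position: {z[np.argmax(denoised)]:.1f} m")
    ```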

  18. Development of laboratory and process sensors to monitor particle size distribution of industrial slurries

    Energy Technology Data Exchange (ETDEWEB)

    Pendse, H.P.

    1992-10-01

    In this paper we present a novel measurement technique for monitoring particle size distributions of industrial colloidal slurries based on ultrasonic spectroscopy and mathematical deconvolution. An on-line sensor prototype has been developed and tested extensively in laboratory and production settings using mineral pigment slurries. Evaluation to date shows that the sensor is capable of providing particle size distributions, without any assumptions regarding their functional form, over diameters ranging from 0.1 to 100 micrometers in slurries with particle concentrations of 10 to 50 volume percent. The newly developed on-line sensor allows one to obtain particle size distributions of commonly encountered inorganic pigment slurries under industrial processing conditions without dilution.
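
    The "mathematical deconvolution" step can be illustrated as follows: the measured ultrasonic attenuation spectrum is modeled as a kernel matrix (attenuation per unit concentration for each candidate size, at each frequency) multiplied by the unknown size distribution, which is then recovered without assuming a functional form via non-negative least squares. The kernel below is a stand-in; a real sensor would derive it from an acoustic attenuation model:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    freqs = np.linspace(1e6, 50e6, 40)          # interrogation frequencies, Hz
    sizes = np.logspace(-1, 2, 25)              # candidate diameters, 0.1-100 um

    # Hypothetical kernel K[i, j]: attenuation at freqs[i] per unit volume
    # fraction of particles with diameter sizes[j] (a real acoustic model
    # would replace this expression).
    K = np.sqrt(freqs[:, None]) * sizes[None, :] / (1 + (sizes[None, :] / 20) ** 2)

    true_dist = np.exp(-0.5 * ((np.log(sizes) - np.log(5)) / 0.4) ** 2)  # ~5 um mode
    spectrum = K @ true_dist + np.random.default_rng(2).normal(0, 1e-3, freqs.size)

    # Deconvolution: recover the size distribution without assuming its shape.
    recovered, residual = nnls(K, spectrum)
    print("mode of recovered distribution: %.2f um" % sizes[np.argmax(recovered)])
    ```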

  19. Architecture of distributed picture archiving and communication systems for storing and processing high resolution medical images

    Directory of Open Access Journals (Sweden)

    Tokareva Victoria

    2018-01-01

    New-generation medicine demands better quality of analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. It thus becomes urgent not only to develop advanced modern hardware, but also to implement the special software infrastructure needed to use it in everyday clinical practice, the so-called Picture Archiving and Communication Systems (PACS). Developing distributed PACS is a challenging task for today's medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to the needs of end users as possible.

  20. Architecture of distributed picture archiving and communication systems for storing and processing high resolution medical images

    Science.gov (United States)

    Tokareva, Victoria

    2018-04-01

    New-generation medicine demands better quality of analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. It thus becomes urgent not only to develop advanced modern hardware, but also to implement the special software infrastructure needed to use it in everyday clinical practice, the so-called Picture Archiving and Communication Systems (PACS). Developing distributed PACS is a challenging task for today's medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to the needs of end users as possible.
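
    The proposed MapReduce decomposition can be emulated on a single machine to show its shape: mappers each process one slice of a large image volume independently, and the reducer reassembles the partial results. The sketch below uses Python's multiprocessing in place of Hadoop and a toy per-slice normalization in place of the actual reconstruction algorithms:

    ```python
    import numpy as np
    from multiprocessing import Pool

    def map_slice(args):
        """Map step: process one slice of the volume independently."""
        idx, slc = args
        # Toy per-slice work standing in for a real reconstruction kernel:
        out = (slc - slc.min()) / max(np.ptp(slc), 1e-9)
        return idx, out

    def reduce_slices(pairs, shape):
        """Reduce step: reassemble processed slices into one volume."""
        volume = np.empty(shape)
        for idx, slc in pairs:
            volume[idx] = slc
        return volume

    if __name__ == "__main__":
        volume = np.random.default_rng(3).normal(size=(64, 256, 256))  # toy scan
        with Pool(4) as pool:
            pairs = pool.map(map_slice, list(enumerate(volume)))
        result = reduce_slices(pairs, volume.shape)
        print(result.shape, result.min(), result.max())
    ```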

  1. Energy Consumption in the Process of Excavator-Automobile Complexes Distribution at Kuzbass Open Pit Mines

    Directory of Open Access Journals (Sweden)

    Panachev Ivan

    2017-01-01

    Every year, coal mining companies worldwide seek to keep renewing their mining machine fleets, and various activities are implemented to maintain the service life of mining equipment already in operation. In this regard, the urgent issue is the efficient distribution of the available machines across different geological conditions. The problem of effectively distributing “excavator-automobile” complexes arises when heavy dump trucks are used in mining, because excavation and transportation of the blasted rock mass are the most labor-intensive and costly processes, considering the volume of transported overburden and coal as well as the costs of diesel fuel, electricity, fuel and lubricants, consumables for repair works, downtime, etc. Currently, it is recommended to take the number of loading buckets in the range of 3 to 5, according to which the dump trucks are distributed to the faces.

  2. Distribution of radioactivity in the Esk Estuary and its relationship to sedimentary processes

    International Nuclear Information System (INIS)

    Kelly, M.; Emptage, M.

    1992-01-01

    In the Esk Estuary, Cumbria, the distribution of sediment lithology and facies has been determined and related to the radionuclide surface and sub-surface distributions. The total volume of sediment contaminated with artificial radionuclides is estimated at 1.2 Mm³ and the inventory of 137Cs at 4.5 TBq. The fine-grained sediments of the bank facies are the main reservoir for radionuclides, comprising 73% of the 137Cs inventory. Time scales for the reworking of these sediments are estimated at tens to hundreds of years. Measurements of sediment and radionuclide deposition demonstrate that direct sediment deposition is the main method of radionuclide recruitment to the deposits, but solution labelling can also occur. Bioturbation and other diagenetic processes modify the distribution of radionuclides in the deposits. Gamma dose rates in air can be related to the sediment grain size and sedimentation rate. (Author)

  3. Methods, media and systems for managing a distributed application running in a plurality of digital processing devices

    Science.gov (United States)

    Laadan, Oren; Nieh, Jason; Phung, Dan

    2012-10-02

    Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.

  4. Phase distribution measurements in narrow rectangular channels using image processing techniques

    International Nuclear Information System (INIS)

    Bentley, C.; Ruggles, A.

    1991-01-01

    Many high flux research reactor fuel assemblies are cooled by systems of parallel narrow rectangular channels. The HFIR is cooled by single-phase forced convection under normal operating conditions. However, two-phase forced convection or two-phase mixed convection can occur in the fueled region as a result of some hypothetical accidents. Such flow conditions would occur only at decay power levels. The system pressure would be around 0.15 MPa in such circumstances. The phase distribution of air-water flow in a narrow rectangular channel is examined using image processing techniques. Ink is added to the water, and clear channel walls are used to allow high-speed still photographs and video tape to be taken of the air-water flow field. Flow field images are digitized and stored in a Macintosh 2ci computer using a frame grabber board. Local grey levels are related to liquid thickness in the flow channel using a calibration fixture. Image processing shareware is used to calculate the spatially averaged liquid thickness from the image of the flow field. Time-averaged spatial liquid distributions are calculated using image calculation algorithms. The spatially averaged liquid distribution is calculated from the time-averaged spatial liquid distribution to formulate the combined temporally and spatially averaged liquid fraction values. The temporally and spatially averaged liquid fractions measured using this technique compare well to those predicted from pressure gradient measurements at zero superficial liquid velocity
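
    A compact sketch of the measurement chain described here (in Python rather than the original shareware): grey levels are mapped to liquid thickness through a calibration curve, then averaged over time and space. The linear calibration, channel gap, and synthetic frames are assumptions for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Calibration fixture: known liquid thicknesses vs. measured grey levels.
    cal_thickness = np.array([0.0, 0.5, 1.0, 1.5, 2.0])      # mm
    cal_grey      = np.array([250., 200., 150., 100., 50.])  # darker = thicker ink
    fit = np.polyfit(cal_grey, cal_thickness, 1)             # linear calibration

    # Stack of digitized frames of the flow field (toy data: 100 frames, 64x64 px).
    frames = rng.uniform(50, 250, size=(100, 64, 64))
    thickness = np.polyval(fit, frames)                      # grey level -> mm

    time_avg_field = thickness.mean(axis=0)        # time-averaged spatial map
    liquid_fraction = time_avg_field.mean() / 2.0  # fraction of assumed 2 mm gap
    print(f"temporally and spatially averaged liquid fraction: {liquid_fraction:.2f}")
    ```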

  5. A Formal Approach to Run-Time Evaluation of Real-Time Behaviour in Distributed Process Control Systems

    DEFF Research Database (Denmark)

    Kristensen, C.H.

    This thesis advocates a formal approach to run-time evaluation of real-time behaviour in distributed process control systems, motivated by a growing interest in applying the increasingly popular formal methods in the application area of distributed process control systems. We propose to evaluate ... because the real-time aspects of distributed process control systems are considered to be among the hardest and most interesting to handle.

  6. Single- versus dual-process models of lexical decision performance: insights from response time distributional analysis.

    Science.gov (United States)

    Yap, Melvin J; Balota, David A; Cortese, Michael J; Watson, Jason M

    2006-12-01

    This article evaluates 2 competing models that address the decision-making processes mediating word recognition and lexical decision performance: a hybrid 2-stage model of lexical decision performance and a random-walk model. In 2 experiments, nonword type and word frequency were manipulated across 2 contrasts (pseudohomophone-legal nonword and legal-illegal nonword). When nonwords became more wordlike (i.e., BRNTA vs. BRANT vs. BRANE), response latencies to nonwords were slowed and the word frequency effect increased. More important, distributional analyses revealed that the Nonword Type × Word Frequency interaction was modulated by different components of the response time distribution, depending on the specific nonword contrast. A single-process random-walk model was able to account for this particular set of findings more successfully than the hybrid 2-stage model. (c) 2006 APA, all rights reserved.
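
    The single-process random-walk account is easy to simulate: evidence accumulates in unit time steps toward a "word" or "nonword" boundary, with the drift rate set by word frequency or nonword wordlikeness; slowing then shows up disproportionately in the distribution tail. The parameter values below are arbitrary illustrations, not the fitted values from the article:

    ```python
    import numpy as np

    def random_walk_rt(drift, bound=30.0, noise=1.0, t0=300, rng=None):
        """Return (response, RT in ms) for one trial of a symmetric random walk."""
        rng = rng or np.random.default_rng()
        x, steps = 0.0, 0
        while abs(x) < bound:
            x += drift + rng.normal(0, noise)
            steps += 1
        return ("word" if x > 0 else "nonword"), t0 + steps  # 1 step ~ 1 ms

    rng = np.random.default_rng(5)
    # Higher drift for high-frequency words; wordlike nonwords lower the drift
    # magnitude, slowing responses and stretching the RT distribution's tail.
    for label, drift in [("HF word", 0.9), ("LF word", 0.5), ("wordlike nonword", -0.4)]:
        rts = np.array([random_walk_rt(drift, rng=rng)[1] for _ in range(2000)])
        print(f"{label:17s} mean={rts.mean():6.0f} ms  "
              f"90th pct={np.percentile(rts, 90):6.0f} ms")
    ```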

  7. Gene tree rooting methods give distributions that mimic the coalescent process.

    Science.gov (United States)

    Tian, Yuan; Kubatko, Laura S

    2014-01-01

    Multi-locus phylogenetic inference is commonly carried out via models that incorporate the coalescent process to model the possibility that incomplete lineage sorting leads to incongruence between gene trees and the species tree. An interesting question that arises in this context is whether data "fit" the coalescent model. Previous work (Rosenfeld et al., 2012) has suggested that rooting of gene trees may account for variation in empirical data that has been previously attributed to the coalescent process. We examine this possibility using simulated data. We show that, in the case of four taxa, the distribution of gene trees observed from rooting estimated gene trees with either the molecular clock or with outgroup rooting can be closely matched by the distribution predicted by the coalescent model with specific choices of species tree branch lengths. We apply commonly-used coalescent-based methods of species tree inference to assess their performance in these situations. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Design and simulation of parallel and distributed architectures for images processing

    International Nuclear Information System (INIS)

    Pirson, Alain

    1990-01-01

    The exploitation of visual information requires special computers. The diversity of operations and the computing power involved bring about structures founded on the concepts of concurrency and distributed processing. This work identifies a vision computer with an association of dedicated intelligent entities exchanging messages according to the model of parallelism introduced by the language Occam. It puts forward an architecture of the 'enriched processor network' type, consisting of a classical multiprocessor structure where each node is provided with specific devices. These devices perform processing tasks as well as inter-node dialogues. Such an architecture benefits from the homogeneity of multiprocessor networks and the power of dedicated resources. Its implementation corresponds to that of a distributed structure, with tasks allocated to each computing element. This approach culminates in an original architecture called ATILA. This modular structure is based on a transputer network supplied with vision-dedicated co-processors and powerful communication devices. (author) [fr

  9. The effect of electrodeposition process parameters on the current density distribution in an electrochemical cell

    Directory of Open Access Journals (Sweden)

    R. M. STEVANOVIC

    2001-02-01

    Cell voltage-current density dependences for a model electrochemical cell of fixed geometry were calculated for different electrolyte conductivities, Tafel slopes and cathodic exchange current densities. The ratio between the current density at the part of the cathode nearest to the anode and that at the part furthest away was taken as a measure for estimating the current density distribution. The calculations reveal that increasing the conductivity of the electrolyte, as well as increasing the cathodic Tafel slope, should both improve the current density distribution. Also, the distribution should be better under total activation control or total diffusion control than under mixed activation-diffusion-ohmic control of the deposition process. On the contrary, changes in the exchange current density should not affect it. These results, being in agreement with common knowledge about the influence of different parameters on the current distribution in an electrochemical cell, demonstrate that a quick estimation of the current distribution can be performed by a simple comparison of the current density at the point of the cathode closest to the anode with that at the furthest point.
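
    The reported trends can be reproduced with a minimal one-dimensional model: at each cathode point the available voltage is split between a Tafel activation term and the Ohmic drop over the electrolyte path, so fixing the current density at the near point lets one solve for the far point and compare. All numbers below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def far_current(i_near, L_near, L_far, kappa, b, i0=1e-2):
        """Given the current density at the near point, solve for the far point.

        At each cathode point the applied voltage is split between the Tafel
        activation term b*log10(i/i0) and the Ohmic drop i*L/kappa.
        """
        E = b * np.log10(i_near / i0) + i_near * L_near / kappa
        f = lambda i: b * np.log10(i / i0) + i * L_far / kappa - E
        return brentq(f, 1e-12, i_near)   # i_far always lies below i_near

    L_near, L_far = 0.02, 0.10     # electrolyte path lengths, m (illustrative)
    for kappa, b in [(5.0, 0.12), (50.0, 0.12), (5.0, 0.24)]:
        ratio = far_current(100.0, L_near, L_far, kappa, b) / 100.0
        print(f"kappa={kappa:5.1f} S/m, Tafel slope={b:.2f} V/dec -> "
              f"i_far/i_near = {ratio:.2f}")
    ```

    Running this shows the ratio moving toward 1 as the conductivity kappa or the Tafel slope b increases, which is exactly the qualitative conclusion of the abstract.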

  10. TCP (truncated compound Poisson) process for multiplicity distributions in high energy collisions

    International Nuclear Information System (INIS)

    Srivastave, P.P.

    1990-01-01

    On using the Poisson distribution truncated at zero for intermediate cluster decay in a compound Poisson process, the authors obtain the TCP distribution, which describes quite well the multiplicity distributions in high energy collisions. A detailed comparison is made between TCP and the negative binomial (NB) distribution for UA5 data. The reduced moments up to the fifth agree very well with the observed ones. The TCP curves are narrower than NB at the high multiplicity tail, look narrower at very high energy, and develop shoulders and oscillations which become increasingly pronounced as the energy grows. At lower energies the distributions also describe the data for fixed intervals of rapidity for UA5 data and the low-energy data for e+e- annihilation and pion-proton, proton-proton and muon-proton scattering. A discussion of the compound Poisson distribution, expressions for the reduced moments, and Poisson transforms is also given. The TCP curves and curves of the reduced moments for different values of the parameters are also presented
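
    The TCP construction is concrete enough to sample directly: the number of clusters in an event is Poisson distributed, and each cluster decays into a zero-truncated Poisson number of particles; the event multiplicity is the sum. A brief sketch with arbitrary parameters, showing the overdispersion relative to a plain Poisson:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def sample_tcp(n_events, mu_clusters, lam_decay):
        """Event multiplicity = sum over Poisson(mu) clusters of zero-truncated
        Poisson(lam) cluster sizes (the TCP construction)."""
        totals = np.zeros(n_events, dtype=int)
        n_clusters = rng.poisson(mu_clusters, n_events)
        for k, n_c in enumerate(n_clusters):
            sizes = rng.poisson(lam_decay, n_c)
            # Truncate at zero: redraw any zero cluster sizes.
            while np.any(sizes == 0):
                sizes[sizes == 0] = rng.poisson(lam_decay,
                                                np.count_nonzero(sizes == 0))
            totals[k] = sizes.sum()
        return totals

    n = sample_tcp(50_000, mu_clusters=8.0, lam_decay=2.5)
    mean, var = n.mean(), n.var()
    print(f"mean={mean:.2f}  var/mean={var / mean:.2f}")  # var/mean > 1: overdispersed
    ```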

  11. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes.

    Science.gov (United States)

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.

  12. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes

    Science.gov (United States)

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V.

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.
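
    For reference, the basic two-stage model named in these two records (transcription at rate k_m, translation at rate k_p per mRNA, first-order mRNA and protein decay at rates g_m and g_p) can be simulated with Gillespie's algorithm and checked against the well-known steady-state mean protein level (k_m/g_m)(k_p/g_p). The rate constants below are arbitrary:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    k_m, g_m = 2.0, 1.0     # mRNA synthesis / decay rates
    k_p, g_p = 5.0, 0.1     # protein synthesis (per mRNA) / decay rates

    def gillespie_two_stage(t_end=5000.0, t_burn=500.0):
        """Exact stochastic simulation; returns the time-averaged protein count."""
        t, m, p = 0.0, 0, 0
        acc, acc_t = 0.0, 0.0
        while t < t_end:
            rates = np.array([k_m, g_m * m, k_p * m, g_p * p])
            total = rates.sum()
            dt = rng.exponential(1.0 / total)
            if t > t_burn:                       # time-weighted average past burn-in
                acc += p * dt
                acc_t += dt
            t += dt
            r = rng.uniform(0, total)
            if r < rates[0]:            m += 1   # transcription
            elif r < rates[:2].sum():   m -= 1   # mRNA decay
            elif r < rates[:3].sum():   p += 1   # translation
            else:                       p -= 1   # protein decay
        return acc / acc_t

    mean_p = gillespie_two_stage()
    print(f"simulated mean protein = {mean_p:.1f}, "
          f"analytical = {(k_m / g_m) * (k_p / g_p):.1f}")   # 2 * 50 = 100
    ```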

  13. Medication errors in residential aged care facilities: a distributed cognition analysis of the information exchange process.

    Science.gov (United States)

    Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna

    2013-05-01

    Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. The existing literature offers limited insight into the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery in order to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of the data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors, namely: (1) the design of medication charts, which complicates order processing and record keeping; (2) the lack of coordination mechanisms between participants, which results in misalignment of local practices; and (3) the reliance on restricted-bandwidth communication channels, mainly telephone and fax, which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange. Understanding

  14. Unitarity corrections in the pT distribution for the Drell-Yan process

    International Nuclear Information System (INIS)

    Betempts, M.A.; Gay Ducaty, M.B.; Machado, M.V.T.

    2001-01-01

    In this contribution we investigate the Drell-Yan transverse momentum distribution in the color dipole approach, taking into account unitarity aspects in the dipole cross section. The process is analyzed at current energies in pp collisions (√s = 62 GeV) and at LHC energies (√s = 8.8 TeV). The unitarity corrections are implemented through the multiple scattering Glauber-Mueller approach. (author)

  15. Distributed collaborative processing in wireless sensor networks with application to target localization and beamforming

    OpenAIRE

    Béjar Haro, Benjamín

    2013-01-01

    The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is that of target localization, where the nodes of the network try to estimate the position of an unknown target that lies within its coverage area. Particularly challenging is the problem of es...

  16. Quasi-stationary distributions for structured birth and death processes with mutations

    OpenAIRE

    Collet , Pierre; Martinez , Servet; Méléard , Sylvie; San Martin , Jaime

    2009-01-01

    We study the probabilistic evolution of a birth and death continuous time measure-valued process with mutations and ecological interactions. The individuals are characterized by (phenotypic) traits that take values in a compact metric space. Each individual can die or generate a new individual. The birth and death rates may depend on the environment through the action of the whole population. The offspring can have the same trait or can mutate to a randomly distributed trait. We ass...

  17. Conditions for the existence of quasi-stationary distributions for birth–death processes with killing

    OpenAIRE

    van Doorn, Erik A.

    2012-01-01

    We consider birth-death processes on the nonnegative integers, where {1, 2, ...} is an irreducible class and 0 an absorbing state, with the additional feature that a transition to state 0 (killing) may occur from any state. Assuming that absorption at 0 is certain, we are interested in additional conditions on the transition rates for the existence of a quasi-stationary distribution. Inspired by results of M. Kolb and D. Steinsaltz (Quasilimiting behaviour for one-dimensional diffusion...
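
    For readers unfamiliar with the object being sought: a probability measure ν on {1, 2, ...} is a quasi-stationary distribution (QSD) for such a process if conditioning on survival preserves it. In the standard notation, with τ0 the absorption time at 0,

    ```latex
    \nu \text{ is a QSD} \iff
    \mathbb{P}_{\nu}\left( X_t = j \mid \tau_0 > t \right) = \nu(j)
    \quad \text{for all } j \geq 1,\; t \geq 0 .
    ```

    This is the textbook definition rather than anything specific to the cited paper; the paper's contribution concerns conditions on the transition rates under which such a ν exists when killing can occur from every state.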

  18. State-Level Comparison of Processes and Timelines for Distributed Photovoltaic Interconnection in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Ardani, K. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Davidson, C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Nobler, E. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-01-01

    This report presents results from an analysis of distributed photovoltaic (PV) interconnection and deployment processes in the United States. Using data from more than 30,000 residential (up to 10 kilowatts) and small commercial (10-50 kilowatts) PV systems, installed from 2012 to 2014, we assess the range in project completion timelines nationally (across 87 utilities in 16 states) and in five states with active solar markets (Arizona, California, New Jersey, New York, and Colorado).

  19. Bubble size distribution analysis and control in high frequency ultrasonic cleaning processes

    International Nuclear Information System (INIS)

    Hauptmann, M; Struyf, H; Mertens, P; Heyns, M; Gendt, S De; Brems, S; Glorieux, C

    2012-01-01

    In the semiconductor industry, the ongoing down-scaling of nanoelectronic elements has led to an increasing complexity of their fabrication. Hence, the individual fabrication processes become increasingly difficult to handle. To minimize cross-contamination, intermediate surface cleaning and preparation steps are inevitable parts of the semiconductor process chain. Here, one major challenge is the removal of residual nano-particulate contamination resulting from abrasive processes such as polishing and etching. In the past, physical cleaning techniques such as megasonic cleaning have been proposed as suitable solutions. However, the soaring fragility of the smallest structures is constraining the forces of the involved physical removal mechanisms. In the case of 'megasonic' cleaning (cleaning with ultrasound in the MHz domain), the main cleaning action arises from strongly oscillating microbubbles which emerge from the periodically changing tensile strain in the cleaning liquid during sonication. These bubbles grow, oscillate and collapse due to a complex interplay of rectified diffusion, bubble coalescence, non-linear pulsation and the onset of shape instabilities. Hence, the resulting bubble size distribution does not remain static but changes continuously. Only the microbubbles in this distribution that show a high oscillatory response are responsible for the cleaning action. Therefore, the cleaning process efficiency can be improved by keeping the majority of bubbles around their resonance size. In this paper, we propose a method to control and characterize the bubble size distribution by means of 'pulsed' sonication and measurements of acoustic cavitation spectra, respectively. We show that the so-obtained bubble size distributions can be related to theoretical predictions of the oscillatory responses of, and the onset of shape instabilities for, the respective bubbles. We also propose a mechanism to explain the enhancement of both acoustic and cleaning

  20. Combining Different Tools for EEG Analysis to Study the Distributed Character of Language Processing

    Directory of Open Access Journals (Sweden)

    Armando Freitas da Rocha

    2015-01-01

    Recent studies on language processing indicate that language cognition is better understood if assumed to be supported by a distributed intelligent processing system enrolling neurons located all over the cortex, in contrast to reductionism, which proposes to localize cognitive functions in specific cortical structures. Here, brain activity was recorded using the electroencephalogram (EEG) while volunteers were listening to or reading small texts and had to select pictures that translate the meaning of these texts. Several techniques for EEG analysis were used to show the distributed character of the neuronal enrollment associated with the comprehension of oral and written descriptive texts. Low Resolution Tomography identified the many different sets (si) of neurons activated in several distinct cortical areas by text understanding. Linear correlation was used to calculate the information H(ei) provided by each electrode of the 10/20 system about the identified si. Principal Component Analysis (PCA) of H(ei) was used to study the temporal and spatial activation of these sources si. This analysis evidenced 4 different patterns of H(ei) covariation that are generated by neurons located at different cortical locations. These results show that the distributed character of language processing is clearly evidenced by combining available EEG technologies.
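
    A rough Python sketch of the analysis chain as described is given below, with the details the abstract leaves unspecified filled in by assumption: the per-electrode "information" about a source is derived from the linear correlation r via the Gaussian-channel formula H = -0.5*log2(1 - r^2), and the electrode-by-window information matrix is then decomposed with PCA. The synthetic data and both of those modeling choices are illustrative guesses, not the authors' pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_electrodes, n_windows, n_samples = 20, 40, 256   # ~10/20 system channels

    # Toy data: a cortical source time course plus electrode-specific gain and noise.
    source = rng.normal(size=(n_windows, n_samples))
    gains = rng.uniform(0, 1, n_electrodes)
    eeg = gains[None, :, None] * source[:, None, :] + rng.normal(
        0, 0.8, size=(n_windows, n_electrodes, n_samples))

    # H(ei): information each electrode carries about the source in each window,
    # from the linear correlation r via the Gaussian formula (an assumption here).
    H = np.empty((n_windows, n_electrodes))
    for w in range(n_windows):
        for e in range(n_electrodes):
            r = np.corrcoef(eeg[w, e], source[w])[0, 1]
            H[w, e] = -0.5 * np.log2(1 - r**2)

    # PCA of H(ei) across windows: patterns of covariation among electrodes.
    Hc = H - H.mean(axis=0)
    _, s, _ = np.linalg.svd(Hc, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    print("variance explained by first 4 components:", np.round(explained[:4], 2))
    ```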

  1. Efficient LIDAR Point Cloud Data Managing and Processing in a Hadoop-Based Distributed Framework

    Science.gov (United States)

    Wang, C.; Hu, F.; Sha, D.; Han, X.

    2017-10-01

    Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and other fields. However, it is challenging to efficiently store, query and analyze the high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, taking advantage of Hadoop's storage and computing capabilities. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to run the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experimental results show that the proposed framework can efficiently manage and process big LiDAR data.
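
    The Hadoop pattern proposed here can be shown with a Hadoop Streaming pair of scripts: the mapper bins each LiDAR return into a spatial tile, and the reducer aggregates per-tile statistics (point count and maximum elevation). The x y z text format and 100 m tile size are assumptions; the actual framework integrates PCL for the heavier algorithms.

    ```python
    # mapper.py - reads "x y z" points from stdin, emits "tile_x:tile_y<TAB>z"
    import sys

    TILE = 100.0  # tile size in meters (assumed)

    for line in sys.stdin:
        try:
            x, y, z = map(float, line.split()[:3])
        except ValueError:
            continue                      # skip malformed records
        print(f"{int(x // TILE)}:{int(y // TILE)}\t{z}")
    ```

    ```python
    # reducer.py - input sorted by tile key; emits per-tile count and max elevation
    import sys

    tile, count, zmax = None, 0, float("-inf")
    for line in sys.stdin:
        key, z = line.rstrip("\n").split("\t")
        z = float(z)
        if key != tile:
            if tile is not None:
                print(f"{tile}\t{count}\t{zmax}")
            tile, count, zmax = key, 0, float("-inf")
        count += 1
        zmax = max(zmax, z)
    if tile is not None:
        print(f"{tile}\t{count}\t{zmax}")
    ```

    Locally the pair can be tested with `cat points.txt | python3 mapper.py | sort | python3 reducer.py`; under Hadoop Streaming the same two scripts are passed as the -mapper and -reducer arguments.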

  2. A convergent model for distributed processing of Big Sensor Data in urban engineering networks

    Science.gov (United States)

    Parygin, D. S.; Finogeev, A. G.; Kamaev, V. A.; Finogeev, A. A.; Gnedkova, E. P.; Tyukov, A. P.

    2017-01-01

    The problems of developing and researching a convergent model of grid, cloud, fog and mobile computing for analytical Big Sensor Data processing are reviewed. The model is meant for creating monitoring systems for spatially distributed objects and processes of urban engineering networks. The proposed approach converges these models for organizing distributed data processing. The fog computing model is used for the processing and aggregation of sensor data at the network nodes and/or industrial controllers; program agents are loaded onto them to perform the primary processing and data aggregation tasks. The grid and cloud computing models are used for mining and accumulating integral indicators. The computing cluster has a three-tier architecture, which includes the main server at the first level, a cluster of SCADA system servers at the second level, and many GPU video cards supporting the Compute Unified Device Architecture (CUDA) at the third level. The mobile computing model is applied to visualize the results of the intelligent analysis with elements of augmented reality and geo-information technologies. The integral indicators are transferred to the data center for accumulation in a multidimensional storage for the purpose of data mining and knowledge gaining.

  3. Comparing performances of clements, box-cox, Johnson methods with weibull distributions for assessing process capability

    Energy Technology Data Exchange (ETDEWEB)

    Senvar, O.; Sennaroglu, B.

    2016-07-01

    This study examines the Clements Approach (CA), Box-Cox transformation (BCT), and Johnson transformation (JT) methods for process capability assessment using Weibull-distributed data with different parameters, in order to determine the effects of tail behaviour on process capability, and compares their estimation performance in terms of accuracy and precision. Design/methodology/approach: The process performance index (PPI) Ppu is used for the process capability analysis (PCA) because the comparisons are performed on generated Weibull data without subgroups. Box plots, descriptive statistics, the root-mean-square deviation (RMSD), which is used as a measure of error, and a radar chart are utilized together for evaluating the performance of the methods. In addition, the bias of the estimated values is important, as is the efficiency measured by the mean square error. In this regard, the Relative Bias (RB) and the Relative Root Mean Square Error (RRMSE) are also considered. Findings: The results reveal that the performance of a method depends on its capability to fit the tail behavior of the Weibull distribution and on the targeted values of the PPIs. It is observed that the effect of tail behavior is more significant when the process is more capable. Research limitations/implications: Some other methods, such as the Weighted Variance method, which also give good results, were also examined. However, we later realized that it would be confusing in terms of comparison issues between the methods for consistent interpretations... (Author)
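
    The two accuracy/precision measures named here have standard definitions. With \hat{P}_{pu,i} denoting the estimate from the i-th simulated sample, N the number of samples, and P_{pu} the known target value, they are typically computed as

    ```latex
    \mathrm{RB} = \frac{\bar{\hat{P}}_{pu} - P_{pu}}{P_{pu}},
    \qquad
    \mathrm{RRMSE} = \frac{1}{P_{pu}}
        \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( \hat{P}_{pu,i} - P_{pu} \right)^{2}},
    \qquad
    \bar{\hat{P}}_{pu} = \frac{1}{N} \sum_{i=1}^{N} \hat{P}_{pu,i}.
    ```

    Smaller |RB| indicates better accuracy and smaller RRMSE better overall precision; these are the conventional forms, and the paper may differ in normalization details.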

  4. Remote-Sensing Data Distribution and Processing in the Cloud at the ASF DAAC

    Science.gov (United States)

    Stoner, C.; Arko, S. A.; Nicoll, J. B.; Labelle-Hamer, A. L.

    2016-12-01

    The Alaska Satellite Facility (ASF) Distributed Active Archive Center (DAAC) has been tasked to archive and distribute data from both SENTINEL-1 satellites and from the NASA-ISRO Synthetic Aperture Radar (NISAR) satellite in a cost-effective manner. In order to best support the processing and distribution of these large data sets for users, the ASF DAAC enhanced its data system in a number of ways that will be detailed in this presentation. The SENTINEL-1 mission comprises a constellation of two polar-orbiting satellites operating day and night, performing C-band Synthetic Aperture Radar (SAR) imaging that enables them to acquire imagery regardless of the weather. SENTINEL-1A was launched by the European Space Agency (ESA) in April 2014. SENTINEL-1B is scheduled to launch in April 2016. The NISAR satellite is designed to observe and take measurements of some of the planet's most complex processes, including ecosystem disturbances, ice-sheet collapse, and natural hazards such as earthquakes, tsunamis, volcanoes and landslides. NISAR will employ radar imaging, polarimetry, and interferometry techniques using the SweepSAR technology for full-resolution wide-swath imaging. NISAR data files are large, making storage and processing a challenge for conventional store-and-download systems. To effectively process, store, and distribute petabytes of data in a high-performance computing environment, ASF took a long view with regard to technology choices and picked a path of maximum flexibility and software re-use. To that end, this software tools and services presentation will cover Web Object Storage (WOS) and the ability to seamlessly move from local sunk-cost hardware to a public cloud such as Amazon Web Services (AWS). A prototype of the SENTINEL-1A system that is in AWS, as well as a local hardware solution, will be examined to explain the pros and cons of each. In preparation for NISAR files, which will be even larger than SENTINEL-1A's, ASF has embarked on a number of cloud

  5. Proceeding of the ACM/IEEE-CS Joint Conference on Digital Libraries (1st, Roanoke, Virginia, June 24-28, 2001).

    Science.gov (United States)

    Association for Computing Machinery, New York, NY.

    Papers in this Proceedings of the ACM/IEEE-CS Joint Conference on Digital Libraries (Roanoke, Virginia, June 24-28, 2001) discuss: automatic genre analysis; text categorization; automated name authority control; automatic event generation; linked active content; designing e-books for legal research; metadata harvesting; mapping the…

  6. Variation behavior of residual stress distribution by manufacturing processes in welded pipes of austenitic stainless steel

    International Nuclear Information System (INIS)

    Ihara, Ryohei; Hashimoto, Tadafumi; Mochizuki, Masahito

    2012-01-01

    Stress corrosion cracking (SCC) has been observed near the heat-affected zone (HAZ) of primary loop recirculation pipes made of low-carbon austenitic stainless steel type 316L in nuclear power plants. For this non-sensitized material, residual stress is the important factor in SCC, and it is generated by machining and welding. In actual plants, welding is conducted after machining in the manufacturing of welded pipes. It can therefore be considered that the residual stress generated by machining is modified by welding as a subsequent process. This paper presents the residual stress variation due to the manufacturing processes of pipes, measured using the X-ray diffraction method. The residual stress distribution due to welding after machining had a local maximum stress in the HAZ, and this value was higher than the residual stress generated by welding or machining alone. Vickers hardness also had a local maximum in the HAZ. In order to clarify the hardness variation, crystal orientation analysis with the EBSD method was performed. Recovery and recrystallization occurred due to welding heat near the weld metal, leading to a hardness decrease. The local maximum region showed no microstructure evolution; in this region, the machined layer remained. Therefore, the local maximum hardness was generated in the machined layer. The local maximum stress was caused by the superposition of the residual stress distributions due to machining and welding. Moreover, these local maximum residual stress and hardness values exceed the critical values for SCC initiation. In order to clarify the effect of residual stress on SCC initiation, evaluation including the manufacturing processes is important. (author)

  7. Variation in recombination frequency and distribution across eukaryotes: patterns and processes

    Science.gov (United States)

    Feulner, Philine G. D.; Johnston, Susan E.; Santure, Anna W.; Smadja, Carole M.

    2017-01-01

    Recombination, the exchange of DNA between maternal and paternal chromosomes during meiosis, is an essential feature of sexual reproduction in nearly all multicellular organisms. While the role of recombination in the evolution of sex has received theoretical and empirical attention, less is known about how recombination rate itself evolves and what influence this has on evolutionary processes within sexually reproducing organisms. Here, we explore the patterns of, and processes governing recombination in eukaryotes. We summarize patterns of variation, integrating current knowledge with an analysis of linkage map data in 353 organisms. We then discuss proximate and ultimate processes governing recombination rate variation and consider how these influence evolutionary processes. Genome-wide recombination rates (cM/Mb) can vary more than tenfold across eukaryotes, and there is large variation in the distribution of recombination events across closely related taxa, populations and individuals. We discuss how variation in rate and distribution relates to genome architecture, genetic and epigenetic mechanisms, sex, environmental perturbations and variable selective pressures. There has been great progress in determining the molecular mechanisms governing recombination, and with the continued development of new modelling and empirical approaches, there is now also great opportunity to further our understanding of how and why recombination rate varies. This article is part of the themed issue ‘Evolutionary causes and consequences of recombination rate variation in sexual organisms’. PMID:29109219

  8. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    Science.gov (United States)

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  9. Using Java for distributed computing in the Gaia satellite data processing

    Science.gov (United States)

    O'Mullane, William; Luri, Xavier; Parsons, Paul; Lammers, Uwe; Hoar, John; Hernandez, Jose

    2011-10-01

    In recent years Java has matured into a stable, easy-to-use language with the flexibility of an interpreter (for reflection etc.) but the performance and type checking of a compiled language. When we started using Java for astronomical applications around 1999, they were the first of their kind in astronomy. Now a great deal of astronomy software is written in Java, as are many business applications. We discuss the current environment and trends concerning the language and present an actual example of scientific use of Java for high-performance distributed computing: ESA's mission Gaia. The Gaia scanning satellite will perform a galactic census of about 1,000 million objects in our galaxy. The Gaia community has chosen to write its processing software in Java. We explore the manifold reasons for choosing Java for this large science collaboration. Gaia processing is numerically complex but highly distributable, some parts being embarrassingly parallel. We describe the Gaia processing architecture and its realisation in Java. We delve into the astrometric solution, which is the most advanced and most complex part of the processing. The Gaia simulator is also written in Java and is the most mature code in the system. It has been running successfully since about 2005 on the supercomputer MareNostrum in Barcelona. We relate experiences of using Java on a large shared machine. Finally we discuss Java, including some of its problems, for scientific computing.

  10. The spatial distribution of microfabric around gravel grains: indicator of till formation processes

    Science.gov (United States)

    KalväNs, Andis; Saks, Tomas

    2010-05-01

    Till micromorphology study in thin sections is an established tool in the field of glacial geology. Often the thin sections are inspected only visually with the help of a mineralogical microscope, which can lead to subjective interpretation of the observed structures. A more objective method used in till micromorphology is the measurement of apparent microfabric, usually seen as a preferred orientation of elongated sand grains. In these studies only a small fraction of the elongated sand grains, often confined to a small area of the thin section, is usually measured. We present a method for automated measurement of almost all elongated sand grains across the full area of the thin section. Apparently elongated sand grains are measured using simple image analysis tools, the data are processed in a way similar to regular till fabric data, and the results are visualised as a grid of rose diagrams. The method allows statistical information to be drawn about the spatial variation of microfabric preferred orientation and fabric strength with a resolution as fine as 1 mm. Late Weichselian tills from several sites in Western Latvia were studied, and large variations in fabric strength and spatial distribution were observed in macroscopically similar till units. The observed types of microfabric spatial distribution include strong, monomodal and uniform distributions; weak distributions highly variable over small distances; consistently bimodal distributions; and domain-like patterns of preferred sand grain orientation. We suggest that the method can be readily used to identify the basic deformation and sedimentation processes active during the final stages of till formation. It is understood that the microfabric orientation will be significantly affected by nearby large particles. Till is a highly heterogeneous sediment, and the source of microfabric perturbations observed in a thin section might lie outside the section plane. Therefore we suggest that microfabric distribution around visible sources of perturbation - gravel grains cut

  11. Interevent Time Distribution of Renewal Point Process, Case Study: Extreme Rainfall in South Sulawesi

    Science.gov (United States)

    Sunusi, Nurtiti

    2018-03-01

    The study of the time distribution of occurrences of extreme rain phenomena plays a very important role in analysis and weather forecasting for an area. The timing of extreme rainfall is difficult to predict because its occurrence is random. This paper aims to determine the interevent time distribution of extreme rain events and the minimum waiting time until the occurrence of the next extreme event through a point process approach. The phenomenon of extreme rain events over a given period of time follows a renewal process in which the time between events is a random variable τ. The distribution of the random variable τ is assumed to be Pareto, Log Normal, or Gamma, and a method of moments is used to estimate the model parameters. Consider R_t, the time elapsed since the last extreme rain event at a location: if no extreme rain event has occurred up to t_0, there is a probability of an extreme rainfall event occurring in (t_0, t_0 + δt_0). From the three models reviewed, the minimum waiting time until the next extreme rainfall is then determined. The result shows that the Log Normal model is better than the Pareto and Gamma models for predicting the next extreme rainfall in South Sulawesi, while the Pareto model cannot be used.
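
    The moment-based fitting step described above can be illustrated concretely. The sketch below is a minimal method-of-moments fit of the three candidate interevent-time models; the array `tau` of interevent times is hypothetical synthetic data, and the closed-form moment inversions are standard textbook forms, not taken from the paper.

    ```python
    import numpy as np

    def fit_interevent_moments(tau):
        """Method-of-moments estimates for three candidate interevent-time models."""
        m, v = np.mean(tau), np.var(tau)

        # Gamma(shape k, scale theta): m = k*theta, v = k*theta^2
        gamma = {"k": m**2 / v, "theta": v / m}

        # Log-normal: m = exp(mu + s2/2), v = (exp(s2) - 1) * m^2
        s2 = np.log(1.0 + v / m**2)
        lognorm = {"mu": np.log(m) - 0.5 * s2, "sigma": np.sqrt(s2)}

        # Pareto(shape a, scale xm): v/m^2 = 1/(a*(a-2)) for a > 2
        a = 1.0 + np.sqrt(1.0 + m**2 / v)
        pareto = {"a": a, "xm": m * (a - 1.0) / a}

        return {"gamma": gamma, "lognormal": lognorm, "pareto": pareto}

    # Example with synthetic interevent times (hours):
    rng = np.random.default_rng(0)
    tau = rng.lognormal(mean=3.0, sigma=1.0, size=500)
    print(fit_interevent_moments(tau))
    ```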

  12. How Are Distributed Groups Affected by an Imposed Structuring of their Decision-Making Process?

    DEFF Research Database (Denmark)

    Lundell, Anders Lorentz; Hertzum, Morten

    2011-01-01

    Groups often suffer from ineffective communication and decision making. This experimental study compares distributed groups solving a preference task with support from either a communication system or a system providing both communication and a structuring of the decision-making process. Results show that groups using the latter system spend more time solving the task, spend more of their time on solution analysis, spend less of their time on disorganized activity, and arrive at task solutions with less extreme preferences. Thus, the type of system affects the decision-making process as well as its outcome. Notably, the task solutions arrived at by the groups using the system that imposes a structuring of the decision-making process show limited correlation with the task solutions suggested by the system on the basis of the groups’ explicitly stated criteria. We find no differences in group

  13. Comparing performances of Clements, Box-Cox, Johnson methods with Weibull distributions for assessing process capability

    Directory of Open Access Journals (Sweden)

    Ozlem Senvar

    2016-08-01

    Full Text Available Purpose: This study examines the Clements’ Approach (CA), Box-Cox transformation (BCT), and Johnson transformation (JT) methods for process capability assessment of Weibull-distributed data with different parameters, to figure out the effects of tail behaviour on process capability, and compares their estimation performances in terms of accuracy and precision. Design/methodology/approach: The process performance index (PPI) Ppu is used for process capability analysis (PCA) because the comparisons are performed on Weibull data generated without subgroups. Box plots, descriptive statistics, the root-mean-square deviation (RMSD), which is used as a measure of error, and a radar chart are utilized together for evaluating the performances of the methods. In addition, the bias of the estimated values is as important as the efficiency measured by the mean square error; in this regard, the Relative Bias (RB) and the Relative Root Mean Square Error (RRMSE) are also considered. Findings: The results reveal that the performance of a method depends on its capability to fit the tail behavior of the Weibull distribution and on the targeted values of the PPIs. It is observed that the effect of tail behavior is more significant when the process is more capable. Research limitations/implications: Some other methods, such as the Weighted Variance method, which also give good results, were also conducted; however, including them would have been confusing for consistent interpretation of the comparisons between the methods. Practical implications: The Weibull distribution covers a wide class of non-normal processes due to its capability to yield a variety of distinct curves based on its parameters. Weibull distributions are known to have significantly different tail behaviors, which greatly affect process capability. In quality and reliability applications, they are widely used for the analyses of failure data in order to understand how
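
    As an illustration of the BCT route to Ppu, the sketch below transforms Weibull data and an upper specification limit with a fitted Box-Cox lambda and computes Ppu on the transformed scale. The Weibull parameters and the specification limit are hypothetical, chosen only for the example; this is a generic illustration, not the paper's experimental setup.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = 2.0 * rng.weibull(1.5, size=500)   # hypothetical Weibull process data (shape 1.5, scale 2.0)
    usl = 8.0                              # hypothetical upper specification limit

    y, lam = stats.boxcox(x)               # transform data, estimating lambda by maximum likelihood

    def boxcox_point(value, lam):
        """Apply the fitted Box-Cox transform to a single value (e.g., a spec limit)."""
        return np.log(value) if lam == 0 else (value**lam - 1.0) / lam

    usl_t = boxcox_point(usl, lam)
    ppu = (usl_t - y.mean()) / (3.0 * y.std(ddof=1))   # Ppu on the (approximately normal) transformed scale
    print(f"lambda = {lam:.3f}, Ppu = {ppu:.3f}")
    ```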

  14. Distributed cognition and process management enabling individualized translational research: The NIH Undiagnosed Diseases Program experience

    Directory of Open Access Journals (Sweden)

    Amanda E Links

    2016-10-01

    Full Text Available The National Institutes of Health Undiagnosed Diseases Program (NIH UDP) applies translational research systematically to diagnose patients with undiagnosed diseases. The challenge is to implement an information system enabling scalable translational research. The authors hypothesized that similarly complex problems are resolvable through process management and the distributed cognition of communities. The team therefore built the NIH UDP Integrated Collaboration System (UDPICS) to form virtual collaborative multidisciplinary research networks or communities. UDPICS supports these communities through integrated process management, ontology-based phenotyping, biospecimen management, cloud-based genomic analysis, and an electronic laboratory notebook. UDPICS provided a mechanism for efficient, transparent, and scalable translational research and thereby addressed many of the complex and diverse research and logistical problems of the NIH UDP. Full definition of the strengths and deficiencies of UDPICS will require formal qualitative and quantitative usability and process improvement measurement.

  15. Distributed Cognition and Process Management Enabling Individualized Translational Research: The NIH Undiagnosed Diseases Program Experience.

    Science.gov (United States)

    Links, Amanda E; Draper, David; Lee, Elizabeth; Guzman, Jessica; Valivullah, Zaheer; Maduro, Valerie; Lebedev, Vlad; Didenko, Maxim; Tomlin, Garrick; Brudno, Michael; Girdea, Marta; Dumitriu, Sergiu; Haendel, Melissa A; Mungall, Christopher J; Smedley, Damian; Hochheiser, Harry; Arnold, Andrew M; Coessens, Bert; Verhoeven, Steven; Bone, William; Adams, David; Boerkoel, Cornelius F; Gahl, William A; Sincan, Murat

    2016-01-01

    The National Institutes of Health Undiagnosed Diseases Program (NIH UDP) applies translational research systematically to diagnose patients with undiagnosed diseases. The challenge is to implement an information system enabling scalable translational research. The authors hypothesized that similar complex problems are resolvable through process management and the distributed cognition of communities. The team, therefore, built the NIH UDP integrated collaboration system (UDPICS) to form virtual collaborative multidisciplinary research networks or communities. UDPICS supports these communities through integrated process management, ontology-based phenotyping, biospecimen management, cloud-based genomic analysis, and an electronic laboratory notebook. UDPICS provided a mechanism for efficient, transparent, and scalable translational research and thereby addressed many of the complex and diverse research and logistical problems of the NIH UDP. Full definition of the strengths and deficiencies of UDPICS will require formal qualitative and quantitative usability and process improvement measurement.

  16. Process-based distributed modeling approach for analysis of sediment dynamics in a river basin

    Directory of Open Access Journals (Sweden)

    M. A. Kabir

    2011-04-01

    Full Text Available Modeling of sediment dynamics for developing best management practices of reducing soil erosion and of sediment control has become essential for sustainable management of watersheds. Precise estimation of sediment dynamics is very important since soils are a major component of enormous environmental processes and sediment transport controls lake and river pollution extensively. Different hydrological processes govern sediment dynamics in a river basin, and these are highly variable in spatial and temporal scales. This paper presents a process-based distributed modeling approach for analysis of sediment dynamics at river basin scale by integrating sediment processes (soil erosion, sediment transport and deposition) with an existing process-based distributed hydrological model. In this modeling approach, the watershed is divided into an array of homogeneous grids to capture the catchment spatial heterogeneity. Hillslope and river sediment dynamic processes have been modeled separately and linked to each other consistently. Water flow and sediment transport at different land grids and river nodes are modeled using a one-dimensional kinematic wave approximation of the Saint-Venant equations. The mechanics of sediment dynamics are integrated into the model using representative physical equations after a comprehensive review. The model has been tested on river basins in two different hydro-climatic areas, the Abukuma River Basin, Japan and the Latrobe River Basin, Australia. Sediment transport and deposition are modeled using the Govers transport capacity equation. All spatial datasets, such as the Digital Elevation Model (DEM), land use and soil classification data, etc., have been prepared using raster Geographic Information System (GIS) tools. The results of relevant statistical checks (Nash-Sutcliffe efficiency and R-squared value) indicate that the model simulates basin hydrology and its associated sediment dynamics reasonably well. This paper presents the

  17. Immature osteoblastic MG63 cells possess two calcitonin gene-related peptide receptor subtypes that respond differently to [Cys(Acm)(2,7)] calcitonin gene-related peptide and CGRP(8-37).

    Science.gov (United States)

    Kawase, Tomoyuki; Okuda, Kazuhiro; Burns, Douglas M

    2005-10-01

    Calcitonin gene-related peptide (CGRP) is clearly an anabolic factor in skeletal tissue, but the distribution of CGRP receptor (CGRPR) subtypes in osteoblastic cells is poorly understood. We previously demonstrated that the CGRPR expressed in osteoblastic MG63 cells does not match exactly the known characteristics of the classic subtype 1 receptor (CGRPR1). The aim of the present study was to further characterize the MG63 CGRPR using a selective agonist of the putative CGRPR2, [Cys(Acm)(2,7)]CGRP, and a relatively specific antagonist of CGRPR1, CGRP(8-37). [Cys(Acm)(2,7)]CGRP acted as a significant agonist only upon ERK dephosphorylation, whereas this analog effectively antagonized CGRP-induced cAMP production and phosphorylation of cAMP response element-binding protein (CREB) and p38 MAPK. Although it had no agonistic action when used alone, CGRP(8-37) potently blocked CGRP actions on cAMP, CREB, and p38 MAPK but had less of an effect on ERK. Schild plot analysis of the latter data revealed that the apparent pA2 value for ERK is clearly distinguishable from those of the other three plots as judged using the 95% confidence intervals. Additional assays using 3-isobutyl-1-methylxanthine or the PKA inhibitor N-(2-[p-bromocinnamylamino]ethyl)-5-isoquinolinesulfonamide hydrochloride (H-89) indicated that the cAMP-dependent pathway was predominantly responsible for CREB phosphorylation, partially involved in ERK dephosphorylation, and not involved in p38 MAPK phosphorylation. Considering previous data from Scatchard analysis of [125I]CGRP binding in connection with these results, these findings suggest that MG63 cells possess two functionally distinct CGRPR subtypes that show almost identical affinity for CGRP but different sensitivity to CGRP analogs: one is best characterized as a variation of CGRPR1, and the second may be a novel variant of CGRPR2.

  18. Real World Awareness in Distributed Organizations: A View on Informal Processes

    Directory of Open Access Journals (Sweden)

    Eldar Sultanow

    2011-06-01

    Full Text Available Geographically distributed development has to deal with the challenge of maintaining awareness far more intensively than locally concentrated development. Awareness denotes the state of being informed, combined with an understanding of the project-related activities, states, and relationships of each individual employee within a given group as a whole. In complex offices, where social interaction is necessary in order to distribute and locate information together with experts, awareness becomes a concurrent process, which amplifies the need for easy routes by which staff can access this information, deferred or decentralized, in a formalized and problem-oriented way. Although the subject of awareness has immensely increased in importance, there is extensive disagreement about how this transparency can be conceptually and technically implemented [1]. This paper introduces a model to visualize and navigate this information in three tiers using semantic networks, GIS and Web3D.

  19. Analysis the Transient Process of Wind Power Resources when there are Voltage Sags in Distribution Grid

    Science.gov (United States)

    Nhu Y, Do

    2018-03-01

    Vietnam has many advantages in wind power resources. Over time, both the capacity and the number of wind power projects in Vietnam have increased. Corresponding to the increase of wind power fed into the national grid, it is necessary to research and analyze the grid in order to ensure the safety and reliability of the wind power connection. In the national distribution grid, voltage sags occur regularly and can strongly influence the operation of wind power; the most serious consequence is disconnection. The paper presents an analysis of the distribution grid's transient process when the voltage sags. Based on the analysis, solutions are recommended to improve the reliability and effective operation of wind power resources.

  20. A Coordinated Initialization Process for the Distributed Space Exploration Simulation (DSES)

    Science.gov (United States)

    Phillips, Robert; Dexter, Dan; Hasan, David; Crues, Edwin Z.

    2007-01-01

    This document describes the federate initialization process that was developed at the NASA Johnson Space Center with the H-II Transfer Vehicle Flight Controller Trainer (HTV FCT) simulations and refined in the Distributed Space Exploration Simulation (DSES). These simulations use the High Level Architecture (HLA) IEEE 1516 standard to provide the communication and coordination between the distributed parts of the simulation. The purpose of the paper is to describe a generic initialization sequence that can be used to create a federate that can: 1. properly initialize all HLA objects, object instances, interactions, and time management; 2. check for the presence of all federates; 3. coordinate startup with other federates; and 4. robustly initialize and share initial object instance data with other federates.

  1. Effect of process parameters on temperature distribution in twin-electrode TIG coupling arc

    International Nuclear Information System (INIS)

    Zhang, Guangjun; Xiong, Jun; Gao, Hongming; Wu, Lin

    2012-01-01

    The twin-electrode TIG coupling arc is a new type of welding heat source, generated in a single welding torch that has two tungsten electrodes insulated from each other. This paper aims at determining the temperature distribution of the coupling arc using the Fowler–Milne method under the assumption of local thermodynamic equilibrium. The influences of welding current, arc length, and the distance between the two electrode tips on the temperature distribution of the coupling arc were analyzed. Based on the results, a better understanding of the twin-electrode TIG welding process was obtained. Highlights: Increasing arc current will increase the coupling arc temperature. Arc length seldom affects the peak temperature of the coupling arc. Increasing arc length will increase the extension of temperature near the anode. Increasing the distance will decrease temperatures in the central part of the arc.

  2. Secured Session-key Distribution using control Vector Encryption / Decryption Process

    International Nuclear Information System (INIS)

    Ismail Jabiullah, M.; Abdullah Al-Shamim; Khaleqdad Khan, ANM; Lutfar Rahman, M.

    2006-01-01

    Frequent key changes are very much desirable for secret communications and are thus in high demand. A session-key distribution technique has been designed and implemented in the programming language C, in which the communication between end users is encrypted with a key used for the duration of a logical connection. Each session key is obtained from the key distribution center (KDC) over the same networking facilities used for end-user communication. The control vector is cryptographically coupled with the session key at the time of key generation in the KDC. For this, a hash function, the master key and the session key are used to produce the encrypted session key, which is then transferred. All operations have been performed using the C programming language. This process can be widely applied to all sorts of electronic transactions, online or offline, commercial and academic. (authors)
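
    A minimal sketch of the control-vector idea, assuming the classic construction in which the session key is encrypted under the master key XORed with a hash of the control vector. The hash-based keystream below stands in for a real block cipher, and all key and vector values are hypothetical, so this illustrates only the coupling, not a secure implementation.

    ```python
    import hashlib

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def couple(master_key: bytes, control_vector: bytes, session_key: bytes) -> bytes:
        """Encrypt a 32-byte session key under KM XOR H(CV)."""
        kek = xor(master_key, hashlib.sha256(control_vector).digest())
        keystream = hashlib.sha256(kek).digest()     # toy stand-in for a block cipher
        return xor(session_key, keystream)

    # Recovery works only with the exact control vector used at generation time:
    km = bytes(32)                                   # hypothetical 32-byte master key
    cv = b"usage=session;expiry=2006-12-31"          # hypothetical control vector
    ks = hashlib.sha256(b"fresh session key").digest()

    enc = couple(km, cv, ks)
    assert couple(km, cv, enc) == ks                 # XOR keystream is its own inverse
    assert couple(km, b"usage=tampered", enc) != ks  # altered control vector fails
    ```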

  3. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design of error detection methods includes a high-level software specification.

  4. A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling.

    Science.gov (United States)

    Bouguila, Nizar; Ziou, Djemel

    2010-01-01

    In this paper, we propose a clustering algorithm based on both Dirichlet processes and generalized Dirichlet distribution which has been shown to be very flexible for proportional data modeling. Our approach can be viewed as an extension of the finite generalized Dirichlet mixture model to the infinite case. The extension is based on nonparametric Bayesian analysis. This clustering algorithm does not require the specification of the number of mixture components to be given in advance and estimates it in a principled manner. Our approach is Bayesian and relies on the estimation of the posterior distribution of clusterings using Gibbs sampler. Through some applications involving real-data classification and image databases categorization using visual words, we show that clustering via infinite mixture models offers a more powerful and robust performance than classic finite mixtures.
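
    The infinite mixture described above rests on the Dirichlet process, whose mixing weights admit a stick-breaking construction; the sketch below draws a truncated set of such weights, which is a common building block of Gibbs samplers for models of this kind. This is a generic illustration, not the authors' code, and the concentration parameter and truncation level are hypothetical.

    ```python
    import numpy as np

    def stick_breaking(alpha, truncation, rng):
        """Truncated stick-breaking weights of a Dirichlet process DP(alpha)."""
        betas = rng.beta(1.0, alpha, size=truncation)            # break points
        remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
        return betas * remaining                                 # weights (sum < 1 under truncation)

    rng = np.random.default_rng(42)
    w = stick_breaking(alpha=1.0, truncation=20, rng=rng)
    print(w[:5], w.sum())   # a few dominant components; total mass close to 1
    ```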

  5. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design of error detection methods includes a high-level software specification. This has the purpose of illustrating that the design can be used in practice.

  6. Distribution and avoidance of debris on epoxy resin during UV ns-laser scanning processes

    Science.gov (United States)

    Veltrup, Markus; Lukasczyk, Thomas; Ihde, Jörg; Mayer, Bernd

    2018-05-01

    In this paper the distribution of debris generated by a nanosecond UV laser (248 nm) on epoxy resin and the prevention of the corresponding re-deposition effects by parameter selection for a ns-laser scanning process were investigated. In order to understand the mechanisms behind the debris generation, in-situ particle measurements were performed during laser treatment. These measurements enabled the determination of the ablation threshold of the epoxy resin as well as the particle density and size distribution in relation to the applied laser parameters. The experiments showed that it is possible to reduce debris on the surface with an adapted selection of pulse overlap with respect to laser fluence. A theoretical model for the parameter selection was developed and tested. Based on this model, the correct choice of laser parameters with reduced laser fluence resulted in a surface without any re-deposited micro-particles.

  7. Hydraulic experimental investigation on spatial distribution and formation process of tsunami deposit on a slope

    Science.gov (United States)

    Harada, K.; Takahashi, T.; Yamamoto, A.; Sakuraba, M.; Nojima, K.

    2017-12-01

    An important aim of the study of tsunami deposits is to estimate the characteristics of past tsunamis from the deposits found locally. Based on the tsunami characteristics estimated from a deposit, it is possible to carry out tsunami risk assessment in coastal areas. Tsunami deposits are thought to form according to the dynamic relationship between the tsunami's hydraulic parameters, sediment particle size, topography, etc. However, it is currently not possible to fully evaluate the characteristics of tsunamis from their deposits, partly because the formation process of tsunami deposits is not sufficiently understood. In this study, we analyze the measurement results of a hydraulic experiment (Yamamoto et al., 2016) and focus on the formation process and distribution of tsunami deposits. The hydraulic experiment was conducted in a two-dimensional water channel with a slope, with the tsunami input as a bore flow. A movable bed section was installed as a seabed slope connecting to the shoreline, and several grain size distributions were tested. The water level was measured using ultrasonic displacement gauges, and the flow velocity was measured using propeller current meters and an electromagnetic current meter, both at several points. The distribution of the tsunami deposit was measured from the shoreline to the run-up limit on the slope. Yamamoto et al. (2016) reported the measured distribution of the tsunami deposit as a function of wave height and sand grain size. In this study, therefore, a hydraulic analysis of the tsunami deposit formation process was carried out based on the measurement data. Time series of hydraulic parameters such as the Froude number, Shields number, and Rouse number were calculated to understand the formation process of the tsunami deposit. In the front part of the tsunami, the flow is strong from the shoreline to around the middle of the slope. From
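
    The three dimensionless numbers named above follow from standard definitions; the sketch below computes them from depth-averaged velocity, flow depth, shear velocity, grain diameter, and settling velocity. All input values are hypothetical bore-front conditions, not measurements from the experiment.

    ```python
    import numpy as np

    RHO_W, RHO_S = 1000.0, 2650.0   # water and quartz-sand densities, kg/m^3
    G, KAPPA = 9.81, 0.40           # gravity (m/s^2) and von Karman constant

    def froude(u, h):
        """Fr = U / sqrt(g h): supercritical flow for Fr > 1."""
        return u / np.sqrt(G * h)

    def shields(u_star, d):
        """theta = rho u*^2 / ((rho_s - rho) g d): sediment mobility."""
        return RHO_W * u_star**2 / ((RHO_S - RHO_W) * G * d)

    def rouse(w_s, u_star):
        """P = w_s / (kappa u*): suspension dominates for small P."""
        return w_s / (KAPPA * u_star)

    # Hypothetical bore-front conditions: u = 1.2 m/s, h = 0.08 m, u* = 0.06 m/s,
    # d = 0.3 mm sand with settling velocity w_s = 0.04 m/s
    print(froude(1.2, 0.08), shields(0.06, 3e-4), rouse(0.04, 0.06))
    ```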

  8. USL NASA/RECON project presentations at the 1985 ACM Computer Science Conference: Abstracts and visuals

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Chum, Frank Y.; Gallagher, Suzy; Granier, Martin; Hall, Philip P.; Moreau, Dennis R.; Triantafyllopoulos, Spiros

    1985-01-01

    This Working Paper Series entry represents the abstracts and visuals associated with presentations delivered by six USL NASA/RECON research team members at the above named conference. The presentations highlight various aspects of NASA contract activities pursued by the participants as they relate to individual research projects. The titles of the six presentations are as follows: (1) The Specification and Design of a Distributed Workstation; (2) An Innovative, Multidisciplinary Educational Program in Interactive Information Storage and Retrieval; (3) Critical Comparative Analysis of the Major Commercial IS and R Systems; (4) Design Criteria for a PC-Based Common User Interface to Remote Information Systems; (5) The Design of an Object-Oriented Graphics Interface; and (6) Knowledge-Based Information Retrieval: Techniques and Applications.

  9. Number size distribution of fine and ultrafine fume particles from various welding processes.

    Science.gov (United States)

    Brand, Peter; Lenz, Klaus; Reisgen, Uwe; Kraus, Thomas

    2013-04-01

    Studies in the field of environmental epidemiology indicate that for the adverse effect of inhaled particles not only particle mass but also particle size is crucial. Ultrafine particles with diameters below 100 nm are of special interest since these particles have a high surface area to mass ratio and have properties which differ from those of larger particles. In this paper, particle size distributions of various welding and joining techniques were measured close to the welding process using a fast mobility particle sizer (FMPS). It turned out that welding processes with high mass emission rates (manual metal arc welding, metal active gas welding, metal inert gas welding, metal inert gas soldering, and laser welding) show mainly agglomerated particles with diameters above 100 nm and only few particles in the size range below 50 nm (10 to 15%). Welding processes with low mass emission rates (tungsten inert gas welding and resistance spot welding) emit predominantly ultrafine particles with diameters well below 100 nm. This finding can be explained by considerably faster agglomeration in processes with high mass emission rates. Although the mass emission is low for tungsten inert gas welding and resistance spot welding, these processes cannot be labeled as toxicologically irrelevant due to the small particle size of the fume, and they should be further investigated.

  10. The distributed neural system for top-down letter processing: an fMRI study

    Science.gov (United States)

    Liu, Jiangang; Feng, Lu; Li, Ling; Tian, Jie

    2011-03-01

    This fMRI study used psychophysiological interaction (PPI) analysis to investigate top-down letter processing with an illusory letter detection task. After initial training that became increasingly difficult, participants were instructed to detect a letter in pure noise images that actually contained no letter. This experimental paradigm allowed the top-down components of letter processing to be isolated while minimizing the influence of bottom-up perceptual input. A distributed cortical network for top-down letter processing was identified by analyzing the functional connectivity patterns of the letter-preferential area (LA) within the left fusiform gyrus. This network extends from the visual cortex to high-level cognitive cortices, including the left middle frontal gyrus, left medial frontal gyrus, left superior parietal gyrus, bilateral precuneus, and left inferior occipital gyrus. These findings suggest that top-down letter processing engages not only regions for processing letter phonology and appearance, but also those involved in internal information generation and maintenance, attention, and memory.
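
    For readers unfamiliar with PPI, the analysis regresses each target region's signal on a seed timecourse, a task regressor, and their product; a reliably non-zero interaction coefficient indicates task-dependent coupling. Below is a minimal synthetic-data sketch of that regression, ignoring hemodynamic deconvolution and the other practical details of fMRI PPI; all variables are simulated, not from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    seed = rng.standard_normal(n)                 # physiological regressor (seed-region signal)
    task = np.repeat([0.0, 1.0], n // 2)          # psychological regressor (task on/off)
    ppi = seed * (task - task.mean())             # interaction term with mean-centred task

    # synthetic target-region signal with genuine task-dependent coupling (0.5 * ppi)
    target = 0.4 * seed + 0.2 * task + 0.5 * ppi + rng.standard_normal(n)

    X = np.column_stack([np.ones(n), seed, task, ppi])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    print(beta)   # beta[3] estimates the PPI (connectivity-modulation) effect
    ```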

  11. AcmA of Lactococcus lactis, a cell-binding major autolysin

    NARCIS (Netherlands)

    Buist, Girbe

    1997-01-01

    Considering the amount of daily consumed foods which are produced by means of fermentation, such as breads, wines, beers, cheeses, fermented vegetables/fruits and sausages, the economic importance of these biotechnological processes can hardly be overestimated. Lactic acid bacteria (LAB) play an

  12. Fair fund distribution for a municipal incinerator using GIS-based fuzzy analytic hierarchy process.

    Science.gov (United States)

    Chang, Ni-Bin; Chang, Ying-Hsi; Chen, Ho-Wen

    2009-01-01

    Burning municipal solid waste (MSW) can generate energy and reduce waste volume, which benefits society through resource conservation. But current practices are not sustainable, because the environmental impacts of waste incineration on urbanized regions have been a long-standing concern in local communities. Public reluctance to accept incinerators as typical utilities often results in an intensive debate concerning how much welfare is lost by residents living in the vicinity of these incinerators. As the measure of welfare change with respect to environmental quality constraints near these incinerators remains critical, new arguments about how to allocate a fair fund among affected communities have become a focal point in environmental management. Given that most county fair fund rules allow a great deal of flexibility for redistribution, little is known about what type of methodology is suited to determining the distribution of such a fair fund under uncertainty. This paper demonstrates a system-based approach that supports fair fund distribution made with respect to residents' possible claims for damages due to the installation of a new incinerator. A case study using an integrated geographic information system (GIS) and fuzzy analytic hierarchy process (FAHP) to find the most appropriate distribution strategy between two neighboring towns in Taipei County, Taiwan demonstrates the application potential. Participants in determining the use of the fair fund also followed a highly democratic procedure in which all stakeholders involved eventually expressed a high level of satisfaction with the results facilitating the final decision-making process. It ensures that plans for the distribution of such a fair fund were carefully thought out and justified with a multi-faceted nature that covers political, socio-economic, technical, environmental, public
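
    As background to the FAHP step, the sketch below shows the crisp AHP core: priority weights derived from a pairwise comparison matrix via its principal eigenvector, plus Saaty's consistency check. The matrix entries and criteria names are hypothetical; the paper's fuzzy variant would replace these crisp judgments with fuzzy numbers.

    ```python
    import numpy as np

    # Hypothetical pairwise comparisons over three allocation criteria
    # (e.g., environmental burden, proximity, affected population), Saaty 1-9 scale.
    A = np.array([[1.0, 3.0, 0.5],
                  [1/3, 1.0, 0.25],
                  [2.0, 4.0, 1.0]])

    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)
    w = np.abs(vecs[:, i].real)
    w /= w.sum()                      # priority weights of the criteria

    n = A.shape[0]
    ci = (vals.real[i] - n) / (n - 1) # consistency index
    cr = ci / 0.58                    # random index RI = 0.58 for n = 3
    print(w, cr)                      # acceptably consistent if cr < 0.1
    ```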

  13. DEVELOPMENT OF A HETEROGENIC DISTRIBUTED ENVIRONMENT FOR SPATIAL DATA PROCESSING USING CLOUD TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    A. S. Garov

    2016-06-01

    Full Text Available We are developing a unified distributed communication environment for processing of spatial data which integrates web-, desktop- and mobile platforms and combines the volunteer computing model and public cloud possibilities. The main idea is to create a flexible working environment for research groups, which may be scaled according to required data volume and computing power, while keeping infrastructure costs at minimum. It is based upon the "single window" principle, which combines data access via geoportal functionality, processing possibilities and communication between researchers. Using an innovative software environment the recently developed planetary information system (http://cartsrv.mexlab.ru/geoportal) will be updated. The new system will provide spatial data processing, analysis and 3D-visualization and will be tested based on freely available Earth remote sensing data as well as Solar system planetary images from various missions. Based on this approach it will be possible to organize the research and representation of results on a new technology level, which provides more possibilities for immediate and direct reuse of research materials, including data, algorithms, methodology, and components. The new software environment is targeted at remote scientific teams, and will provide access to existing spatial distributed information for which we suggest implementation of a user interface as an advanced front-end, e.g., for a virtual globe system.

  14. Mammal Distribution in Nunavut: Inuit Harvest Data and COSEWIC's Species at Risk Assessment Process

    Directory of Open Access Journals (Sweden)

    Karen A. Kowalchuk

    2012-09-01

    Full Text Available The Committee on the Status of Endangered Wildlife in Canada (COSEWIC) assesses risk potential for a species by evaluating the best available information from all knowledge sources, including Aboriginal traditional knowledge (ATK). Effective application of ATK in this process has been challenging. Inuit knowledge (IK) of mammal distribution in Nunavut is reflected, in part, in the harvest spatial data from two comprehensive studies: the Use and Occupancy Mapping (UOM) Study conducted by the Nunavut Planning Commission (NPC) and the Nunavut Wildlife Harvest Study (WHS) conducted by the Nunavut Wildlife Management Board (NWMB). The geographic range values of extent of occurrence (EO) and area of occupancy (AO) were derived from the harvest data for a selected group of mammals and applied to Phase I of the COSEWIC assessment process. Values falling below threshold values can trigger a potential risk designation of either endangered (EN) or threatened (TH) for the species being assessed. The IK values and status designations were compared with available COSEWIC data. There was little congruency between the two sets of data. We conclude that there are major challenges within the risk assessment process, and specifically the calculation of AO, that contributed to the disparity in results. Nonetheless, this application illustrated that Inuit harvest data in Nunavut represent a unique and substantial source of ATK that should be used to enrich the knowledge base on arctic mammal distribution and enhance wildlife management and conservation planning.

  15. Estimation of dislocations density and distribution of dislocations during ECAP-Conform process

    Science.gov (United States)

    Derakhshan, Jaber Fakhimi; Parsa, Mohammad Habibi; Ayati, Vahid; Jafarian, Hamidreza

    2018-01-01

    The dislocation density of a coarse-grained aluminum AA1100 alloy (140 µm) severely deformed by Equal Channel Angular Pressing-Conform (ECAP-Conform) is studied at various stages of the process by the electron backscatter diffraction (EBSD) method. The geometrically necessary dislocation (GND) and statistically stored dislocation (SSD) densities were estimated, the total dislocation densities were calculated, and the dislocation distributions are presented as contour maps. The estimated average dislocation density of about 2×10^12 m^-2 in the annealed state increases to 4×10^13 m^-2 at the middle of the groove (135° from the entrance) and reaches 6.4×10^13 m^-2 at the end of the groove just before the ECAP region. The calculated average dislocation density for a one-pass severely deformed Al sample reached 6.2×10^14 m^-2. At the micrometer scale, the behavior of metals, especially the mechanical properties, largely depends on the dislocation density and dislocation distribution. Yield stresses at the different conditions were therefore estimated from the calculated dislocation densities; the estimated yield stresses were compared with experimental results and good agreement was found. Although the grain size of the material did not clearly change, the yield stress showed a marked increase due to the development of the cell structure. The considerable increase in dislocation density in this process is a good justification for the formation of subgrains and cell structures during the process, which can explain the increase in yield stress.
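
    The link from dislocation density to yield stress mentioned above is usually made through the Taylor hardening relation, sigma_y = sigma_0 + M * alpha * G * b * sqrt(rho). The sketch below evaluates it at the densities reported in the abstract, using typical literature parameter values for aluminum rather than the ones used in the paper.

    ```python
    import numpy as np

    # Taylor hardening: sigma_y = sigma_0 + M * alpha * G * b * sqrt(rho)
    # Hypothetical parameter values, typical for aluminum:
    M, ALPHA = 3.06, 0.3        # Taylor factor, geometric constant
    G_SHEAR = 26e9              # shear modulus, Pa
    B = 2.86e-10                # Burgers vector, m
    SIGMA_0 = 10e6              # friction stress, Pa

    def yield_stress(rho):
        return SIGMA_0 + M * ALPHA * G_SHEAR * B * np.sqrt(rho)

    for rho in (2e12, 6.4e13, 6.2e14):   # densities reported in the abstract, m^-2
        print(f"rho = {rho:.1e} m^-2 -> sigma_y = {yield_stress(rho)/1e6:.0f} MPa")
    ```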

  16. Wear Process Analysis of the Polytetrafluoroethylene/Kevlar Twill Fabric Based on the Components’ Distribution Characteristics

    Directory of Open Access Journals (Sweden)

    Gu Dapeng

    2017-12-01

    Full Text Available Polytetrafluoroethylene (PTFE)/Kevlar fabric or fabric composites with excellent tribological properties have for years been considered important materials for bearings and bushings. The distribution of the components (PTFE, Kevlar, and the gap between PTFE and Kevlar) in the PTFE/Kevlar fabric is uneven due to the textile structure, and this controls the wear process and behavior. The components' area ratio on the worn surface varying with the wear depth was analyzed not only by the wear experiment but also by theoretical calculations with our previous wear geometry model. The wear process and behavior of the PTFE/Kevlar twill fabric were investigated under dry sliding conditions against AISI 1045 steel using a ring-on-plate tribometer. The morphologies of the worn surface were observed by confocal laser scanning microscopy (CLSM). The wear process of the PTFE/Kevlar twill fabric was divided into five layers according to the distribution characteristics of Kevlar. The friction coefficients and wear rates changed with the wear depth; the order of the antiwear performance of the first three layers was Layer III > Layer II > Layer I due to the variation of the area ratio of PTFE and Kevlar with the wear depth.

  17. Acquiring and processing verb argument structure: distributional learning in a miniature language.

    Science.gov (United States)

    Wonnacott, Elizabeth; Newport, Elissa L; Tanenhaus, Michael K

    2008-05-01

    Adult knowledge of a language involves correctly balancing lexically-based and more language-general patterns. For example, verb argument structures may sometimes readily generalize to new verbs, yet with particular verbs may resist generalization. From the perspective of acquisition, this creates significant learnability problems, with some researchers claiming a crucial role for verb semantics in the determination of when generalization may and may not occur. Similarly, there has been debate regarding how verb-specific and more generalized constraints interact in sentence processing and on the role of semantics in this process. The current work explores these issues using artificial language learning. In three experiments using languages without semantic cues to verb distribution, we demonstrate that learners can acquire both verb-specific and verb-general patterns, based on distributional information in the linguistic input regarding each of the verbs as well as across the language as a whole. As with natural languages, these factors are shown to affect production, judgments and real-time processing. We demonstrate that learners apply a rational procedure in determining their usage of these different input statistics and conclude by suggesting that a Bayesian perspective on statistical learning may be an appropriate framework for capturing our findings.

  18. Development of a Heterogenic Distributed Environment for Spatial Data Processing Using Cloud Technologies

    Science.gov (United States)

    Garov, A. S.; Karachevtseva, I. P.; Matveev, E. V.; Zubarev, A. E.; Florinsky, I. V.

    2016-06-01

    We are developing a unified distributed communication environment for processing of spatial data which integrates web-, desktop- and mobile platforms and combines the volunteer computing model and public cloud possibilities. The main idea is to create a flexible working environment for research groups, which may be scaled according to required data volume and computing power, while keeping infrastructure costs at minimum. It is based upon the "single window" principle, which combines data access via geoportal functionality, processing possibilities and communication between researchers. Using an innovative software environment the recently developed planetary information system (http://cartsrv.mexlab.ru/geoportal) will be updated. The new system will provide spatial data processing, analysis and 3D-visualization and will be tested based on freely available Earth remote sensing data as well as Solar system planetary images from various missions. Based on this approach it will be possible to organize the research and representation of results on a new technology level, which provides more possibilities for immediate and direct reuse of research materials, including data, algorithms, methodology, and components. The new software environment is targeted at remote scientific teams, and will provide access to existing spatial distributed information for which we suggest implementation of a user interface as an advanced front-end, e.g., for a virtual globe system.

  19. A Photo Storm Report Mobile Application, Processing/Distribution System, and AWIPS-II Display Concept

    Science.gov (United States)

    Longmore, S. P.; Bikos, D.; Szoke, E.; Miller, S. D.; Brummer, R.; Lindsey, D. T.; Hillger, D.

    2014-12-01

    The increasing use of mobile phones equipped with digital cameras, together with the ability to post images and information to the Internet in real time, has significantly improved the ability to report events almost instantaneously. In the context of severe weather reports, a representative digital image conveys significantly more information than a text or phone-relayed report to a weather forecaster issuing severe weather warnings, and it allows the forecaster to reasonably judge the validity and quality of a storm report. Posting geo-located, time-stamped storm report photographs to NWS weather forecast office social media pages via a mobile phone application has generated positive feedback from forecasters. Building upon this feedback, this paper advances the concept, development, and implementation of a formalized Photo Storm Report (PSR) mobile application, a processing and distribution system, and Advanced Weather Interactive Processing System II (AWIPS-II) plug-in display software. The PSR system would be composed of three core components: i) a mobile phone application, ii) processing and distribution software and hardware, and iii) AWIPS-II data, exchange and visualization plug-in software. i) The mobile phone application would allow web-registered users to send geo-location, view direction, and time-stamped PSRs along with severe weather type and comments to the processing and distribution servers. ii) The servers would receive PSRs, convert the images and information to NWS-network-bandwidth-manageable sizes in an AWIPS-II data format, distribute them on the NWS data communications network, and archive the original PSRs for possible future research datasets. iii) The AWIPS-II data and exchange plug-ins would archive PSRs, and the visualization plug-in would display PSR locations, times and directions by hour, similar to surface observations. Hovering on individual PSRs would reveal photo thumbnails and clicking on them would display the

  20. On the estimation of the spherical contact distribution Hs(y) for spatial point processes

    International Nuclear Information System (INIS)

    Doguwa, S.I.

    1990-08-01

    Ripley (1977, Journal of the Royal Statistical Society B, 39, 172-212) proposed an estimator for the spherical contact distribution H_s(y) of a spatial point process observed in a bounded planar region. However, this estimator is not defined for some distances of interest in this bounded region. A new estimator for H_s(y) is proposed for use with a regular grid of sampling locations. This new estimator is defined for all distances of interest. It also appears to have a smaller bias and a smaller mean squared error than the previously suggested alternative. (author). 11 refs, 4 figs, 1 tab
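
    For context, H_s(y) is the empty-space (spherical contact) distribution: the probability that the distance from a sampling location to the nearest point of the process is at most y. The sketch below is one simple grid-based estimator with a minus-sampling border correction; it is a generic illustration, not the estimator proposed in the paper.

    ```python
    import numpy as np

    def empty_space_function(points, window, grid_n, radii):
        """Grid-based estimate of H_s(y) with a minus-sampling border correction."""
        (x0, x1), (y0, y1) = window
        gx, gy = np.meshgrid(np.linspace(x0, x1, grid_n), np.linspace(y0, y1, grid_n))
        grid = np.column_stack([gx.ravel(), gy.ravel()])
        # distance from every sampling location to the nearest process point
        d = np.min(np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2), axis=1)
        # distance from every sampling location to the window boundary
        b = np.minimum.reduce([grid[:, 0] - x0, x1 - grid[:, 0],
                               grid[:, 1] - y0, y1 - grid[:, 1]])
        # for each radius, use only locations whose disc of that radius fits in the window
        return np.array([np.mean(d[b >= r] <= r) if np.any(b >= r) else np.nan
                         for r in radii])

    rng = np.random.default_rng(3)
    pts = rng.uniform(0.0, 1.0, size=(200, 2))          # a binomial (CSR-like) pattern
    print(empty_space_function(pts, ((0, 1), (0, 1)), 50, radii=[0.02, 0.05, 0.1]))
    ```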

  1. Processing-Efficient Distributed Adaptive RLS Filtering for Computationally Constrained Platforms

    Directory of Open Access Journals (Sweden)

    Noor M. Khan

    2017-01-01

    Full Text Available In this paper, a novel processing-efficient architecture of a group of inexpensive and computationally limited small platforms is proposed for parallely distributed adaptive signal processing (PDASP) operation. The proposed architecture runs computationally expensive procedures, such as the complex adaptive recursive least squares (RLS) algorithm, cooperatively. The proposed PDASP architecture operates properly even if perfect time alignment among the participating platforms is not available. An RLS algorithm for the application of MIMO channel estimation is deployed on the proposed architecture. The complexity and processing time of the PDASP scheme with the MIMO RLS algorithm are compared with those of the sequentially operated MIMO RLS algorithm and the linear Kalman filter. It is observed that the PDASP scheme exhibits much lower computational complexity than the sequential MIMO RLS algorithm as well as the Kalman filter. Moreover, the proposed architecture provides 95.83% and 82.29% reductions in processing time compared to the sequentially operated Kalman filter and MIMO RLS algorithm, respectively, for a low Doppler rate. Likewise, for a high Doppler rate, the proposed architecture provides 94.12% and 77.28% reductions in processing time compared to the Kalman and RLS algorithms, respectively.
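
    For reference, one exponentially weighted RLS step, the kernel that PDASP distributes across platforms, looks as follows. This is the textbook recursion, assuming a regressor vector x, desired response d, and forgetting factor lam; it is not the authors' MIMO-specific code, and the test signal is synthetic.

    ```python
    import numpy as np

    def rls_step(w, P, x, d, lam=0.99):
        """One recursive least-squares update with forgetting factor lam."""
        Px = P @ x
        k = Px / (lam + x @ Px)        # gain vector
        e = d - w @ x                  # a priori estimation error
        w = w + k * e                  # weight update
        P = (P - np.outer(k, Px)) / lam
        return w, P, e

    n = 4
    w, P = np.zeros(n), 1e3 * np.eye(n)      # standard initialization
    rng = np.random.default_rng(0)
    w_true = rng.standard_normal(n)
    for _ in range(200):                     # identify w_true from noisy observations
        x = rng.standard_normal(n)
        d = w_true @ x + 0.01 * rng.standard_normal()
        w, P, _ = rls_step(w, P, x, d)
    print(np.round(w - w_true, 3))           # residual weight error
    ```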

  2. MALDI imaging facilitates new topical drug development process by determining quantitative skin distribution profiles.

    Science.gov (United States)

    Bonnel, David; Legouffe, Raphaël; Eriksson, André H; Mortensen, Rasmus W; Pamelard, Fabien; Stauber, Jonathan; Nielsen, Kim T

    2018-04-01

    Generation of skin distribution profiles and reliable determination of drug molecule concentration in the target region are crucial during the development of topical products for treatment of skin diseases like psoriasis and atopic dermatitis. Imaging techniques like mass spectrometric imaging (MSI) offer sufficient spatial resolution to generate meaningful distribution profiles of a drug molecule across a skin section. In this study, we use matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI-MSI) to generate quantitative skin distribution profiles based on tissue extinction coefficient (TEC) determinations of four different molecules in cross sections of human skin explants after topical administration. The four drug molecules (roflumilast, tofacitinib, ruxolitinib, and LEO 29102) have different physicochemical properties, and tofacitinib was administered in two different formulations. The study reveals that with MALDI-MSI we were able to observe differences in penetration profiles for both the four drug molecules and the two formulations, demonstrating its applicability as a screening tool in the development of a topical drug product. Furthermore, the study reveals that the sensitivity of the MALDI-MSI technique appears to be inversely correlated with the drug molecules' ability to bind to the surrounding tissue, which can be estimated from their Log D values.

  3. MODELING OF WATER DISTRIBUTION SYSTEM PARAMETERS AND THEIR PARTICULAR IMPORTANCE IN ENVIRONMENT ENGINEERING PROCESSES

    Directory of Open Access Journals (Sweden)

    Agnieszka Trębicka

    2016-05-01

    Full Text Available The object of this study is to present a mathematical model of a water-supply network and the analysis of basic parameters of the water distribution system with a digital model. The reference area is Kleosin village, municipality of Juchnowiec Kościelny in Podlaskie Province, located at the border with Białystok. The study focused on the significance of every change related to the quality and quantity of water delivered to the water distribution system (WDS) by modeling its basic parameters under different operating variants, in order to specify new, more rational modes of exploitation (decrease in pressure value) and to define conditions for the development and modernization of the water-supply network, with special analysis of the scheme in terms of identifying the most vulnerable places in the network. The analyzed processes are based on reproducing and developing the existing state of the water distribution sub-system (the WDS) with the use of mathematical modeling that includes the newest accessible computer techniques.

  4. WAITING TIME DISTRIBUTION OF SOLAR ENERGETIC PARTICLE EVENTS MODELED WITH A NON-STATIONARY POISSON PROCESS

    International Nuclear Information System (INIS)

    Li, C.; Su, W.; Fang, C.; Zhong, S. J.; Wang, L.

    2014-01-01

    We present a study of the waiting time distributions (WTDs) of solar energetic particle (SEP) events observed with the spacecraft WIND and GOES. The WTDs of both solar electron events (SEEs) and solar proton events (SPEs) display a power-law tail of ∼Δt^(-γ). The SEEs display a broken power-law WTD. The power-law index is γ_1 = 0.99 for short waiting times (<70 hr) and γ_2 = 1.92 for large waiting times (>100 hr). The break of the WTD of SEEs is probably due to the modulation of the corotating interaction regions. A power-law index of γ ∼ 1.82 is derived for the WTD of the SPEs, which is consistent with the WTD of type II radio bursts, indicating a close relationship between the shock wave and the production of energetic protons. The WTDs of SEP events can be modeled with a non-stationary Poisson process, which was proposed to understand the waiting time statistics of solar flares. We generalize the method and find that, if the SEP event rate λ = 1/Δt varies with the distribution of event rate f(λ) = Aλ^(-α) exp(-βλ), the time-dependent Poisson distribution can produce a power-law tail WTD of ∼Δt^(α-3), where 0 ≤ α < 2.
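
    The quoted Δt^(α-3) tail follows from averaging the exponential waiting-time density over the rate distribution. A sketch of the derivation, standard for a Poisson process with slowly varying rate and with the normalization constant suppressed:

    ```latex
    P(\Delta t)
      = \frac{\int_0^\infty \lambda^2 f(\lambda)\, e^{-\lambda \Delta t}\, d\lambda}
             {\int_0^\infty \lambda f(\lambda)\, d\lambda}
      \propto \int_0^\infty \lambda^{2-\alpha}\, e^{-\lambda(\beta + \Delta t)}\, d\lambda
      = \Gamma(3-\alpha)\,(\beta + \Delta t)^{\alpha-3}
      \sim \Delta t^{\,\alpha-3} \quad (\Delta t \gg \beta).
    ```

    Both integrals converge for the stated range 0 ≤ α < 2, which is why the tail index α - 3 lies between -3 and -1.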

  5. New model of Brazilian electric sector: implications of sugarcane bagasse on the distributed generation process

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Celso E.L. de; Rabi, Jose A. [Universidade de Sao Paulo (GREEN/FZEA/USP), Pirassununga, SP (Brazil). Fac. de Zootecnia e Engenharia de Alimentos. Grupo de Pesquisa em Reciclagem, Eficiencia Energetica e Simulacao Numerica], Emails: celsooli@usp.br, jrabi@usp.br; Halmeman, Maria Cristina [Universidade Estadual Paulista (FCA/UNESP), Botucatu, SP (Brazil). Fac. de Ciencias Agronomicas

    2008-07-01

    Distributed generation has become an alternative given the lack of resources for large energy projects and recent events that have changed the geopolitical panorama. The latter have increased oil prices, so that unconventional sources have become more and more feasible, an issue widely discussed in Europe and in the USA. Brazil has followed this world trend by restructuring the electric sector as well as the major related institutions, from generation to commercialization and sector regulation, while local legislation has enabled the growth of distributed generation. It regulates the role of the independent energy producer so as to allow direct business between the latter and a large consumer, which is an essential step to enlarge the energy market. Sugarcane bagasse has been used to produce both electric energy and steam, and this paper analyzes and discusses the major implications of a new model for the Brazilian electric sector based on the use of sugarcane bagasse as a means to increase distributed generation, with particular attention to the commercialization of energy surpluses. (author)

  6. Distributed Arithmetic for Efficient Base-Band Processing in Real-Time GNSS Software Receivers

    Directory of Open Access Journals (Sweden)

    Grégoire Waelchli

    2010-01-01

    The growing market of GNSS-capable mobile devices is driving interest in GNSS software solutions, as they can share many system resources (processor, memory), reducing both the size and the cost of their integration. Indeed, with the increasing performance of modern processors, it is now feasible to implement in software a multichannel GNSS receiver operating in real time. However, a major issue with this approach is the large computing resources required for the base-band processing, in particular for the correlation operations. Therefore, new algorithms need to be developed in order to reduce the overall complexity of the receiver architecture. Towards that aim, this paper first introduces the challenges of the software implementation of a GPS receiver, with the main focus on the base-band processing and correlation operations. It then describes the existing solutions and, from this, introduces a new algorithm based on distributed arithmetic.
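
    To make the approach concrete, the sketch below applies distributed arithmetic to a fixed-coefficient inner product, the kernel of correlation: the 2^K possible partial sums of the coefficients are precomputed once, and each output is then assembled from table lookups indexed by bit-slices of the inputs, with no multiplications. The coefficients and inputs are made-up small integers for illustration, not actual GNSS codes or samples.

      import numpy as np

      coeffs = np.array([3, -1, 4, 2])   # hypothetical fixed coefficients
      K = len(coeffs)

      # Lookup table: entry i holds the sum of coeffs[k] for every bit k set in i.
      table = np.array([sum(c for k, c in enumerate(coeffs) if i >> k & 1)
                        for i in range(2 ** K)])

      def da_dot(x, bits=4):
          """Inner product <coeffs, x> for unsigned `bits`-bit inputs,
          computed bit-serially with table lookups only."""
          acc = 0
          for b in range(bits):
              idx = 0
              for k in range(K):
                  idx |= ((int(x[k]) >> b) & 1) << k   # slice bit b of every input
              acc += table[idx] << b                   # weight the partial sum by 2**b
          return acc

      x = np.array([5, 9, 2, 7])
      assert da_dot(x) == int(np.dot(coeffs, x))       # 28 either way
      print(da_dot(x))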

  7. Impact of process parameters and design options on heat leaks of straight cryogenic distribution lines

    Directory of Open Access Journals (Sweden)

    P. Duda

    2017-03-01

    The Future Circular Collider (FCC) accelerator will require a helium distribution system that will exceed the presently exploited transfer lines by almost 1 order of magnitude. The helium transfer line will contain five process pipes protected against heat leaks by a common thermal shield. The design pressure of the FCC process pipe with supercritical helium will be equal to 5.0 MPa, significantly exceeding the 2.0 MPa value in the present, state-of-the-art transfer lines. The increase of the design pressure requires construction changes to be introduced to the support system, the vacuum barriers and the compensation bellows. This will influence heat flows to the helium. The paper analyses the impact of the increased design pressure on the heat flow. The paper also offers a discussion of the design modifications to the compensation system, including the replacement of stainless steel with Invar®—aimed at mitigating the pressure increase.

  8. Further improvement in ABWR (part-4) open distributed plant process computer system

    International Nuclear Information System (INIS)

    Makino, Shigenori; Hatori, Yoshinori

    1999-01-01

    In the nuclear industry of Japan, the electric power companies have promoted plant process computer (PPC) technology for nuclear power plants (NPPs). When the PPC was first introduced to NPPs, large-scale customized computers were applied because of very tight requirements such as high reliability and high-speed processing. In the computer field, the large market has contributed to remarkable progress in engineering workstation (EWS) and personal computer (PC) technology. Moreover, because data transmission technology has progressed at the same time, worldwide computer networks have been established. Thanks to the progress of both technologies, distributed computer systems have become available at a reasonable price, and Tokyo Electric Power Company (TEPCO) is working to apply them to the PPC of NPPs. (author)

  9. Impact of process parameters and design options on heat leaks of straight cryogenic distribution lines

    CERN Document Server

    Duda, Pawel; Chorowski, Maciej Pawel; Polinski, J

    2017-01-01

    The Future Circular Collider (FCC) accelerator will require a helium distribution system that will exceed the presently exploited transfer lines by almost 1 order of magnitude. The helium transfer line will contain five process pipes protected against heat leaks by a common thermal shield. The design pressure of the FCC process pipe with supercritical helium will be equal to 5.0 MPa, significantly exceeding the 2.0 MPa value in the present, state-of-the-art transfer lines. The increase of the design pressure requires construction changes to be introduced to the support system, the vacuum barriers and the compensation bellows. This will influence heat flows to the helium. The paper analyses the impact of the increased design pressure on the heat flow. The paper also offers a discussion of the design modifications to the compensation system, including the replacement of stainless steel with Invar—aimed at mitigating the pressure increase.

  10. GLN standard as a facilitator of physical location identification within process of distribution

    Directory of Open Access Journals (Sweden)

    Davor Dujak

    2017-09-01

    Background: Distribution, from the business point of view, is a set of decisions and actions that provide the right products at the right time and place, in line with customer expectations. It is a process that generates significant cost but, when implemented effectively, significantly improves the perception of the company. The Institute of Logistics and Warehousing (ILiM), based on research results related to the optimization of distribution networks and on consulting projects for companies, points to the high importance of a correct description of physical locations within supply chains in order to make transport processes more effective. Individual companies work on their own geocoding of warehouse locations and of the locations of their business partners (suppliers, customers), but the lack of standardization in this area causes delays related to delivery problems with reaching the right destination. Furthermore, the cooperating companies do not have a precise indication of the operating conditions of each location, e.g. time windows of the plant, logistic units accepted by the parties, supported transport, etc. The lack of this information generates additional costs associated with repeated operations and the cost of lost benefits when goods do not arrive on time. The solution to this problem appears to be a wide-scale implementation of the GS1 standard known as the Global Location Number (GLN), which, thanks to a broad base of information, will assist distribution processes. Material and methods: The results of a survey conducted among Polish companies in the second half of 2016 indicate an unsatisfactory degree of implementation of transport processes, resulting from incorrect or inaccurate descriptions of locations and, thus, a significant number of errors in deliveries. Accordingly, the authors studied the literature and examined case studies indicating the possibility of using the GLN standard to identify physical locations.
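
    For reference, a GLN is a 13-digit GS1 number whose final digit is a mod-10 check digit, computed exactly as for a GTIN-13. A minimal validation sketch follows; the example number is a common GS1 documentation value, not a real location.

      def gln_check_digit(digits12: str) -> int:
          """Check digit for the first 12 digits of a GLN."""
          # From the right of the 12 data digits, weights alternate 3, 1, 3, ...
          total = sum(int(d) * (3 if i % 2 == 0 else 1)
                      for i, d in enumerate(reversed(digits12)))
          return (10 - total % 10) % 10

      def is_valid_gln(gln: str) -> bool:
          return (len(gln) == 13 and gln.isdigit()
                  and gln_check_digit(gln[:12]) == int(gln[12]))

      print(is_valid_gln("5901234123457"))  # True for this documentation example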

  11. Human factors in computing systems: focus on patient-centered health communication at the ACM SIGCHI conference.

    Science.gov (United States)

    Wilcox, Lauren; Patel, Rupa; Chen, Yunan; Shachak, Aviv

    2013-12-01

    Health information technologies, such as electronic health records (EHR) and secure messaging, have already transformed interactions among patients and clinicians. In addition, technologies supporting asynchronous communication outside of clinical encounters, such as email, SMS, and patient portals, are increasingly used for follow-up, education, and data reporting. Meanwhile, patients are increasingly adopting personal tools to track various aspects of health status and therapeutic progress, wishing to review these data with clinicians during consultations. These issues have drawn increasing interest from the human-computer interaction (HCI) community, with special focus on critical challenges in patient-centered interactions and design opportunities that can address these challenges. We saw this community presenting and interacting at ACM SIGCHI 2013, the Conference on Human Factors in Computing Systems (also known as CHI), held April 27 - May 2, 2013, at the Palais des Congrès de Paris in France. CHI 2013 featured many formal avenues to pursue patient-centered health communication: a well-attended workshop, tracks of original research, and a lively panel discussion. In this report, we highlight these events and the main themes we identified. We hope that it will help bring the health care communication and HCI communities closer together.

  12. A multicopper oxidase is essential for manganese oxidation and laccase-like activity in Pedomicrobium sp. ACM 3067.

    Science.gov (United States)

    Ridge, Justin P; Lin, Marianne; Larsen, Eloise I; Fegan, Mark; McEwan, Alastair G; Sly, Lindsay I

    2007-04-01

    Pedomicrobium sp. ACM 3067 is a budding-hyphal bacterium belonging to the alpha-Proteobacteria which is able to oxidize soluble Mn²⁺ to insoluble manganese oxide. A cosmid, from a whole-genome library, containing the putative genes responsible for manganese oxidation was identified and a primer-walking approach yielded 4350 bp of novel sequence. Analysis of this sequence showed the presence of a predicted three-gene operon, moxCBA. The moxA gene product showed homology to multicopper oxidases (MCOs) and contained the characteristic four copper-binding motifs (A, B, C and D) common to MCOs. An insertion mutation of moxA showed that this gene was essential for both manganese oxidation and laccase-like activity. The moxB gene product showed homology to a family of outer membrane proteins which are essential for Type I secretion in Gram-negative bacteria. moxBA has not been observed in other manganese-oxidizing bacteria but homologues were identified in the genomes of several bacteria including Sinorhizobium meliloti 1021 and Agrobacterium tumefaciens C58. These results suggest that moxBA and its homologues constitute a family of genes encoding an MCO and a predicted component of the Type I secretion system.

  13. Process and device for determining the spatial distribution of a radioactive substance

    International Nuclear Information System (INIS)

    1977-01-01

    This invention describes a process for determining the spatial distribution of a radioactive substance, which consists in determining the positions and energy losses associated with the Compton-effect interactions and the photoelectric interactions that occur owing to the emission of gamma photons by the radioactive material, and in deducing information on the spatial distribution of the radioactive substance from the positions and energy losses associated with the Compton-effect interactions of these gamma photons and with the subsequent photoelectric interactions of the same photons. The invention also concerns a processing system for identifying, among the signals representing the positions and energy losses of the Compton-effect and photoelectric interactions of the gamma photons emitted by a radioactive source, those signals that correspond to gamma photons that have undergone an initial Compton-effect interaction and a second and final photoelectric interaction. It further concerns a system for determining, from the identified signals, the positions of the sources of several gamma photons. This Compton-interaction detector can be used with a conventional Anger-type imaging system (gamma camera) for detecting photoelectric interactions. [fr]

  14. Effects of distribution function nonequilibrium tails on relaxation and transfer processes in rarefied gases

    International Nuclear Information System (INIS)

    Grigoryev, Yu.N.; Mikhalitsyn, A.N.; Yanenko, N.N.

    1984-01-01

    Quantitative characteristics of the nonmonotone relaxation process are studied in a gas of pseudo-Maxwell molecules. The basic results are obtained by direct numerical integration of the nonlinear Boltzmann equation. The evolution of initial distributions that are finite or have exponential tail asymptotics is investigated. In particular, initial data obtained by selective excitation (absorption) against the Maxwell background, encountered in laser physics problems, are considered. It is shown that under conditions of a developed effect of nonmonotone relaxation, the overpopulation in the velocity range 4 ≤ υ ≤ 10 exceeds the equilibrium value 2-3 times on average. For the given particle energy the excitation is preserved during t ≈ 5-6, and the total relaxation time of the overpopulation wave reaches t ≈ 20. The amplitudes and the relaxation time of overpopulation in the 'cupola' region of the distribution are substantially lower than in the case of a developed effect in the tail. The influence of the effect on the kinetics of threshold chemical reactions is studied. The results show that in the process of nonmonotone relaxation the mean rates of binary threshold reactions can exceed the equilibrium values more than twofold. This estimate is valid for all power-like intermolecular repulsive potentials, from the pseudo-Maxwell model up to rigid spheres. The time intervals over which the mean reaction rate considerably exceeds the equilibrium one range from 5 to 15 mean-free-path times, increasing as the 'rigidity' of the potential decreases. (author)

  15. Process and installation for producing tomographic images of the distribution of a radiotracer

    International Nuclear Information System (INIS)

    Fonroget, Jacques; Brunol, Jean.

    1977-01-01

    The invention particularly concerns a process for obtaining tomographic images of an object formed by a radiotracer distributed spatially over three dimensions. This process, using a detection device with an appreciably plane detection surface and at least one collimation orifice provided in a partition between the detection surface and the object, enables tomographic sections to be obtained with an excellent three-dimensional resolution of the images achieved. It is employed to advantage in an installation that includes a detection device or gamma camera with an appreciably plane surface, a device having a series of collimation apertures which may be used in succession, these holes being appreciably distributed over a common plane parallel to the detection surface, and a holder for the object. This holder can be moved in translation appreciably parallel to the common plane. The aim of this invention is, inter alia, to meet two requirements: localization in space and obtaining good contrast. This aim is achieved by the fact that at least one tomographic image is obtained from a series of intermediate images of the object. [fr]

  16. Online learning of a Dirichlet process mixture of Beta-Liouville distributions via variational inference.

    Science.gov (United States)

    Fan, Wentao; Bouguila, Nizar

    2013-11-01

    A large class of problems can be formulated in terms of a clustering process. Mixture models are an increasingly important tool in statistical pattern recognition and for analyzing and clustering complex data. Two challenging aspects that should be addressed when considering mixture models are how to choose between a set of plausible models and how to estimate the model's parameters. In this paper, we address both problems simultaneously within a unified online nonparametric Bayesian framework that we develop to learn a Dirichlet process mixture of Beta-Liouville distributions (i.e., an infinite Beta-Liouville mixture model). The proposed infinite model is used for the online modeling and clustering of proportional data, for which the Beta-Liouville mixture has been shown to be effective. We propose a principled approach for approximating the intractable model's posterior distribution by a tractable one, which we develop, such that all the involved mixture parameters can be estimated simultaneously and effectively in closed form. This is done through variational inference, which enjoys important advantages such as the handling of unobserved attributes and the prevention of under- or overfitting, as we explain in detail. The effectiveness of the proposed work is evaluated on three challenging real applications, namely facial expression recognition, behavior modeling and recognition, and dynamic texture clustering.
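
    The nonparametric side of such models rests on the stick-breaking construction of the Dirichlet process, which truncated variational inference exploits. The sketch below draws from a truncated stick-breaking mixture with Gaussian components purely for brevity; the paper's components are Beta-Liouville densities and its posterior is fitted variationally rather than sampled.

      import numpy as np

      rng = np.random.default_rng(1)
      T, alpha = 20, 2.0                      # truncation level, concentration

      v = rng.beta(1.0, alpha, size=T)        # stick-breaking fractions
      v[-1] = 1.0                             # close the truncated stick
      weights = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

      means = rng.normal(0.0, 5.0, size=T)    # component parameters from the base
      z = rng.choice(T, size=500, p=weights)  # cluster assignments
      data = rng.normal(means[z], 1.0)

      # Most sticks stay tiny, which is what lets a variational posterior
      # prune unused components and so "choose" the model size.
      print("occupied components:", len(np.unique(z)))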

  17. Evaluation of Future Internet Technologies for Processing and Distribution of Satellite Imagery

    Science.gov (United States)

    Becedas, J.; Perez, R.; Gonzalez, G.; Alvarez, J.; Garcia, F.; Maldonado, F.; Sucari, A.; Garcia, J.

    2015-04-01

    Satellite imagery data centres are designed to operate a defined number of satellites. Difficulties appear, for instance, when new satellites have to be incorporated into the system, because traditional infrastructures are neither flexible nor scalable. With the appearance of Future Internet technologies, new solutions can be provided to manage large and variable amounts of data on demand. These technologies optimize resources and facilitate the appearance of new applications and services in the traditional Earth Observation (EO) market. The use of Future Internet technologies for the EO sector was validated with the GEO-Cloud experiment, part of the Fed4FIRE FP7 European project. This work presents the final results of the project, in which a constellation of satellites records the whole Earth surface on a daily basis. The satellite imagery is downloaded into a distributed network of ground stations and ingested in a cloud infrastructure, where the data is processed, stored, archived and distributed to the end users. The processing and transfer times inside the cloud, the workload of the processors, automatic cataloguing and accessibility through the Internet are evaluated to validate whether Future Internet technologies present advantages over traditional methods. The applicability of these technologies to provide high added-value services is evaluated. Finally, the advantages of using federated testbeds to carry out large-scale, industry-driven experiments are analysed, evaluating the feasibility of an experiment developed in the European infrastructure Fed4FIRE and its migration to a commercial cloud: SoftLayer, an IBM Company.

  18. Distributed processing and network of data acquisition and diagnostics control for Large Helical Device (LHD)

    International Nuclear Information System (INIS)

    Nakanishi, H.; Kojima, M.; Hidekuma, S.

    1997-11-01

    The LHD (Large Helical Device) data processing system has been designed to deal with the huge amount of diagnostics data, 600-900 MB per 10-second short-pulse experiment, in preparation for the first plasma experiment in March 1998. The recent increase in data volume made it necessary to adopt a fully distributed system structure which uses multiple data transfer paths in parallel and separates all of the computer functions into clients and servers. The fundamental element installed for every diagnostic device consists of two kinds of server computers: the data acquisition PC running Windows NT and the real-time diagnostics control VME running VxWorks. To cope with the diversity of both device control channels and diagnostics data, object-oriented methods are utilized throughout the development of this system. This not only reduces the development burden, but also widens software portability and flexibility. 100 Mbps FDDI-based fast networks will re-integrate the distributed server computers so that they can behave as one virtual macro-machine for users. The network methods applied in the LHD data processing system are based entirely on TCP/IP internet technology, which provides remote collaborators with the same accessibility that local participants have. (author)

  19. Palantiri: a distributed real-time database system for process control

    International Nuclear Information System (INIS)

    Tummers, B.J.; Heubers, W.P.J.

    1992-01-01

    The medium-energy accelerator MEA, located in Amsterdam, is controlled by a heterogeneous computer network. A large real-time database contains the parameters involved in the control of the accelerator and the experiments. This database system was implemented about ten years ago and has since been extended several times. In response to increased needs the database system has been redesigned. The new database environment, as described in this paper, consists of two new concepts: (1) a Palantir, which is a per-machine process that stores the locally declared data and forwards all non-local requests for data access to the appropriate machine. It acts as a storage device for data and a looking glass upon the world. (2) Golems: working units that define the data within the Palantir and that have knowledge of the hardware they control. Applications access the data of a Golem by name (names resemble Unix path names). The Palantir that runs on the same machine as the application handles the distribution of access requests. This paper focuses on the Palantir concept as a distributed data storage and event handling device for process control. (author)
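
    A toy sketch of the Palantir idea, with an in-process dictionary standing in for the network transport (a real system would forward over sockets or RPC, and the path names here are invented):

      registry = {}   # machine name -> Palantir, standing in for the network

      class Palantir:
          def __init__(self, machine):
              self.machine = machine
              self.local = {}                      # data declared by local Golems
              registry[machine] = self

          def declare(self, path, value):          # called by a local Golem
              self.local[path] = value

          def get(self, path):
              machine, _, rest = path.lstrip("/").partition("/")
              if machine == self.machine:
                  return self.local[path]          # local hit
              return registry[machine].get(path)   # forward to the owning machine

      mea = Palantir("mea")
      ctrl = Palantir("ctrl")
      mea.declare("/mea/magnet/current", 42.0)
      print(ctrl.get("/mea/magnet/current"))       # resolved via forwarding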

  20. EFFICIENT LIDAR POINT CLOUD DATA MANAGING AND PROCESSING IN A HADOOP-BASED DISTRIBUTED FRAMEWORK

    Directory of Open Access Journals (Sweden)

    C. Wang

    2017-10-01

    Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and other fields. However, it is challenging to efficiently store, query and analyze high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, taking advantage of Hadoop's storage and computing capabilities. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to run the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experimental results show that the proposed framework can efficiently manage and process big LiDAR data.
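
    As an illustration of the distribution pattern, here is a minimal Hadoop Streaming mapper that bins LiDAR points into grid cells so a companion reducer can aggregate per-cell statistics in parallel. The comma-separated x,y,z text format and the 10 m cell size are assumptions for the sketch; the paper's framework integrates PCL with HDFS and MapReduce rather than using streaming scripts.

      #!/usr/bin/env python3
      # mapper.py -- emit (grid cell, elevation) pairs, one per LiDAR point.
      import sys

      CELL = 10.0  # cell size in metres (illustrative)

      for line in sys.stdin:
          try:
              x, y, z = map(float, line.strip().split(",")[:3])
          except ValueError:
              continue                          # skip malformed records
          # key by grid cell; Hadoop's shuffle groups all points of a cell
          print(f"{int(x // CELL)}_{int(y // CELL)}\t{z}")

    It could then be launched with the streaming jar, e.g. hadoop jar hadoop-streaming.jar -input points -output cells -mapper mapper.py -reducer reducer.py, with a reducer that counts or averages the values received per key.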

  1. Multi-walled carbon nanotubes/polymer composites in absence and presence of acrylic elastomer (ACM).

    Science.gov (United States)

    Kumar, S; Rath, T; Mahaling, R N; Mukherjee, M; Khatua, B B; Das, C K

    2009-05-01

    Polyetherimide (PEI)/multiwall carbon nanotube (MWNT) nanocomposites containing as-received and modified (COOH-MWNT) carbon nanotubes were prepared through a melt process in an extruder and then compression molded. The thermal properties of the composites were characterized by thermogravimetric analysis (TGA). Field-emission scanning electron microscopy (FESEM) images showed that the MWNTs were well dispersed and formed intimate contact with the polymer matrix without any agglomeration. Moreover, the incorporation of modified carbon nanotubes formed a fascinating, highly crosslinked, and compact network structure throughout the polymer matrix, showing the increased adhesion of PEI to the modified MWNTs. Scanning electron microscopy (SEM) also showed a high degree of dispersion of the modified MWNTs along with broken ends. Dynamic mechanical analysis (DMA) results showed a marginal increase in storage modulus (E') and glass transition temperature (T(g)) with the addition of MWNTs. Increases in the tensile strength and impact strength of the composites confirmed the use of MWNTs as a possible reinforcement agent. Both the thermal and electrical conductivity of the composites increased, with the effect more pronounced upon modification due to the formation of a network of carbon nanotubes. The addition of acrylic elastomer to the developed PEI/MWNT (modified) nanocomposites resulted in a further increase in thermal and electrical properties due to the formation of additional bonds between the MWNTs and the acrylic elastomer at the interface. All the presented results are well corroborated by the SEM and FESEM studies.

  2. Schema architecture and their relationships to transaction processing in distributed database systems

    NARCIS (Netherlands)

    Apers, Peter M.G.; Scheuermann, P.

    1991-01-01

    We discuss the different types of schema architectures which could be supported by distributed database systems, making a clear distinction between logical, physical, and federated distribution. We elaborate on the additional mapping information required in architectures based on logical distribution.

  3. Pareto Distribution of Firm Size and Knowledge Spillover Process as a Network

    OpenAIRE

    Tomohiko Konno

    2013-01-01

    The firm size distribution is considered to follow a Pareto distribution. In the present paper, we show that the Pareto distribution of firm size results from the spillover network model introduced in Konno (2010).

  4. Optimization of business processes of a distribution network operator. Evaluation and control; Optimierung der Geschaeftsprozesse von Verteilungsnetzbetreibern. Bewerten und steuern

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Philipp; Katzfey, Joerg [E-Bridge Consulting GmbH, Bonn (Germany)

    2012-09-10

    The assessment of the business processes of a distribution network operator is often primarily cost-oriented. In order to optimize its own processes reasonably, a holistic process management has to be used to measure the costs incurred, the quality of implementation, and the quality of plan fulfillment.

  5. Automatic pre-processing for an object-oriented distributed hydrological model using GRASS-GIS

    Science.gov (United States)

    Sanzana, P.; Jankowfsky, S.; Branger, F.; Braud, I.; Vargas, X.; Hitschfeld, N.

    2012-04-01

    Landscapes are very heterogeneous, which impacts the hydrological processes occurring in catchments, especially in the modeling of peri-urban catchments. Hydrological Response Units (HRUs), resulting from the intersection of different maps, such as land use, soil types and geology, and flow networks, allow the representation of these elements in an explicit way, preserving the natural and artificial contours of the different layers. These HRUs are used as the model mesh in some distributed object-oriented hydrological models, allowing the application of a topologically oriented approach. The connectivity between polygons and polylines provides a detailed representation of the water balance and overland flow in these distributed hydrological models, based on irregular hydro-landscape units. When computing fluxes between these HRUs, geometrical parameters, such as the distance between the centroid of an HRU and the river network, or the length of the perimeter, can impact the realism of the calculated overland, sub-surface and groundwater fluxes. Therefore, it is necessary to process the original model mesh in order to avoid these numerical problems. We present an automatic pre-processing tool implemented in the open-source GRASS-GIS software, built from several Python scripts and algorithms already available, such as the Triangle software. First, some scripts were developed to improve the topology of the various elements, such as snapping the river network to the closest contours. When data are derived from remote sensing, such as vegetation areas, their perimeters have many right angles, which were smoothed. Second, the algorithms more particularly address badly shaped elements of the model mesh, such as polygons with narrow shapes, markedly irregular contours and/or a centroid outside of the polygon. To identify these elements we used shape descriptors; the convexity index was considered the best descriptor to identify them with a threshold.
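
    As an illustration, a convexity index can be computed as the ratio of a polygon's area to the area of its convex hull, so badly shaped HRUs score low. A minimal sketch with shapely; the 0.8 threshold is an assumption for illustration, not the paper's calibrated value.

      from shapely.geometry import Polygon

      def convexity_index(poly: Polygon) -> float:
          # 1.0 for convex polygons, lower for narrow or ragged shapes
          return poly.area / poly.convex_hull.area

      narrow = Polygon([(0, 0), (10, 0), (10, 1), (1, 1), (1, 5), (0, 5)])
      print(f"convexity = {convexity_index(narrow):.2f}")   # ~0.44 for this L-shape
      print("needs repair:", convexity_index(narrow) < 0.8)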

  6. Preliminary Evaluation of Cesium Distribution for Wet Sieving Process Planned for Soil Decontamination in Japan - 13104

    Energy Technology Data Exchange (ETDEWEB)

    Enokida, Y.; Tanada, Y.; Hirabayashi, D. [Graduate School of Engineering, 1 Furo-cho Nagoya-shi, Aichi-ken, 4648603 (Japan); Sawada, K. [EcoTopia Science Institute, Nagoya University, 1 Furo-cho Nagoya-shi, Aichi-ken, 4648603 (Japan)

    2013-07-01

    For the purpose of decontaminating radioactive cesium from a huge amount of soil, estimated at 1.2×10⁸ m³ by excavating to a 5-cm depth from the surface of Fukushima Prefecture, where a severe nuclear accident occurred at TEPCO's power generating site and emitted a significant amount of radioactive materials, mainly radioactive cesium, a wet sieving process was selected as one of the effective methods available in Japan. Some private companies have demonstrated this process for soil treatment in the Fukushima area by testing at their plants. The results were very promising, and a full-fledged application is expected to follow. In the present study, we spiked several aqueous samples containing soil, collected from an industrial wet sieving plant located near our university for the recycling of construction wastes, with non-radioactive cesium hydroxide. The study provides scientific data concerning the effectiveness of volume reduction of the contaminated soil by a wet sieving process, as well as the cesium distribution between the liquid phase and clay minerals for each sub-process, obtained not from the full-scale process but from a simulating plant equipped with a coagulating-sedimentation process, together with operational safety fundamentals for the plant. Especially for the latter aspect, the study showed that clay minerals of submicron size strongly bind a high content of cesium, which was only slightly removed by coagulation with natural sedimentation (1 G) or centrifugal sedimentation (3,700 G), so some of the cesium may be transferred to the effluent or recycled water. By applying ultracentrifugation (257,000 G), most of the submicron clay minerals containing cesium were removed, and the amount of cesium that might be transferred to the effluent or recycled water could be reduced to less than 2.3% of the original design by the addition of a cesium barrier consisting of ultracentrifugation or a hollow fiber membrane. (authors)

  7. Word and face processing engage overlapping distributed networks: Evidence from RSVP and EEG investigations.

    Science.gov (United States)

    Robinson, Amanda K; Plaut, David C; Behrmann, Marlene

    2017-07-01

    Words and faces have vastly different visual properties, but increasing evidence suggests that word and face processing engage overlapping distributed networks. For instance, fMRI studies have shown overlapping activity for face and word processing in the fusiform gyrus despite well-characterized lateralization of these objects to the left and right hemispheres, respectively. To investigate whether face and word perception influence perception of the other stimulus class and to elucidate the mechanisms underlying such interactions, we presented images using rapid serial visual presentation. Across 3 experiments, participants discriminated 2 face, word, and glasses targets (T1 and T2) embedded in a stream of images. As expected, T2 discrimination was impaired when it followed T1 by 200 to 300 ms relative to longer intertarget lags, the so-called attentional blink. Interestingly, T2 discrimination accuracy was significantly reduced at short intertarget lags when a face was followed by a word (face-word) compared with glasses-word and word-word combinations, indicating that face processing interfered with word perception. The reverse effect was not observed; that is, word-face performance was no different from the other object combinations. EEG results indicated that the left N170 to T1 was correlated with the word decrement for face-word trials, but not for other object combinations. Taken together, the results suggest face processing interferes with word processing, providing evidence for overlapping neural mechanisms of these 2 object types. Furthermore, asymmetrical face-word interference points to greater overlap of face and word representations in the left than in the right hemisphere.

  8. A novel method for determination of particle size distribution in-process

    Science.gov (United States)

    Salaoru, Tiberiu A.; Li, Mingzhong; Wilkinson, Derek

    2009-07-01

    The pharmaceutical and fine chemicals industries are strongly concerned with the manufacture of high value-added speciality products, often in solid form. On-line measurement of solid particle size is vital for reliable control of product properties. The established techniques, such as laser diffraction or spectral extinction, require dilution of the process suspension when measuring typical manufacturing streams because of their high concentration. Dilution to facilitate measurement can result in changes of both the size and form of particles, especially during production processes such as crystallisation. In spectral extinction, the degree of light scattering and absorption by a suspension is measured. However, for concentrated suspensions the interpretation of light extinction measurements is difficult because of multiple scattering and inter-particle interaction effects, and at higher concentrations extinction is essentially total, so the technique can no longer be applied. At the same time, scattering by a dispersion also causes a change of phase, which affects the real component of the suspension's effective refractive index; this is a function of particle size and of the particle and dispersant refractive indices. In this work, a novel prototype instrument has been developed to measure particle size distribution in concentrated suspensions in-process by measuring the suspension refractive index at incidence angles near the onset of total internal reflection. Using this technique, the light beam does not pass through the suspension being measured, so suspension turbidity does not impair the measurement.
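
    The geometry behind the measurement is Snell's law: the critical angle for total internal reflection at the probe/suspension interface shifts with the suspension's effective refractive index, which in turn depends on particle size. A small sketch with illustrative (assumed) index values:

      import numpy as np

      n_probe = 1.72                            # high-index probe window (assumed)
      for n_susp in (1.334, 1.340, 1.346):      # effective index of the suspension
          theta_c = np.degrees(np.arcsin(n_susp / n_probe))
          print(f"n_susp = {n_susp:.3f} -> critical angle = {theta_c:.3f} deg")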

  9. Mercury Distribution in the Processing of Jatiroto Gold Mine Wonogiri Central Java Indonesia

    Science.gov (United States)

    Fitri Yudiantoro, Dwi; Nurcholis, Muhammad; Sri Sayudi, Dewi; Abdurrachman, Mirzam; Paramita Haty, Intan; Pambudi, Wiryan; Subroborini, Arum

    2017-06-01

    The research area is one of the gold-producing areas of Wonogiri. In this region there are nearly 30 gold processing locations. The area has a steep morphology and is part of Mt. Mas. Gold processing is a part-time job for the local farming population: gold-bearing rocks are obtained by manually digging holes around Mt. Mas, while gold processing is carried out in the workers' homes. As a result of these activities, the distribution of mercury in the surrounding settlements was investigated. The analytical methods used in this study are the measurement of mercury content with an Hg meter for altered rocks and soil, and XRF (X-Ray Fluorescence) for plant samples. The results show significant mercury contents in altered rocks, soil and plants, which proves that mercury has polluted the environment of the surrounding residents, both those living on the hill and those in the lower plain areas. The results of this study are expected to serve as a reference to help overcome the pollution of the area.

  10. Snow-borne nanosized particles: Abundance, distribution, composition, and significance in ice nucleation processes

    Science.gov (United States)

    Rangel-Alvarado, Rodrigo Benjamin; Nazarenko, Yevgen; Ariya, Parisa A.

    2015-11-01

    Physicochemical processes of nucleation constitute a major uncertainty in understanding aerosol-cloud interactions. To improve knowledge of the ice nucleation process, we characterized the physical, chemical, and biological properties of fresh snow using a suite of state-of-the-art techniques based on mass spectrometry, electron microscopy, chromatography, and optical particle sizing. Samples were collected at two North American Arctic sites, as part of international campaigns (2006 and 2009), and in the city of Montreal, Canada, over the last decade. Particle size distribution analyses, in the range of 3 nm to 10 µm, showed that nanosized particles are the most numerous (38-71%) in fresh snow, with a significant portion (11 to 19%) less than 100 nm in size. Particles with diameters less than 200 nm consistently exhibited relatively high ice-nucleating properties (on average ranging from -19.6 ± 2.4 to -8.1 ± 2.6 °C). Chemical analysis of the nanosized fraction suggests that it contains bioorganic materials, such as amino acids, as well as inorganic compounds with characteristics similar to mineral dust. The implications of nanoparticle ubiquity and abundance in diverse snow ecosystems are discussed in the context of their importance in understanding atmospheric nucleation processes.

  11. Energy distribution in selected fragment vibrations in dissociation processes in polyatomic molecules

    International Nuclear Information System (INIS)

    Band, Y.B.; Freed, K.F.

    1977-01-01

    The full quantum theory of dissociation processes in polyatomic molecules is converted to a form enabling the isolation of a selected fragment vibration. This form enables the easy evaluation of the probability distribution for energy partitioning between this vibration and all other degrees of freedom that results from the sudden Franck-Condon rearrangement process. The resultant Franck-Condon factors involve the square of the one-dimensional overlap integral between effective oscillator wavefunctions and the wavefunctions for the selected fragment vibration, a form that resembles the simple golden rule model for polyatomic dissociation and reaction processes. The full quantum theory can, therefore, be viewed as providing both a rigorous justification for certain generic aspects of the simple golden rule model as well as providing a number of important generalizations thereof. Some of these involve dealing with initial bound-state vibrational excitation, explicit molecule, fragment and energy dependence of the effective oscillator, and the incorporation of all isotopic dependence. In certain limiting situations the full quantum theory yields simple, readily usable analytic expressions for the frequency and equilibrium position of the effective oscillator. Specific applications are presented for the direct photodissociation of HCN, DCN, and CO₂, where comparisons between the full theory and the simple golden rule are presented. We also discuss generalizations of the previous theory to incorporate the effects of distortion in the normal modes as a function of the reaction coordinate on the repulsive potential energy surface.
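
    In generic notation, the quoted golden-rule-like form reduces to a squared one-dimensional overlap, where χ_eff is the effective-oscillator wavefunction and φ_n the selected fragment vibrational state (an illustration of the structure, not the paper's exact notation):

      P(n) \;=\; \Bigl|\int_{-\infty}^{\infty} \chi_{\mathrm{eff}}(q)\,\phi_{n}(q)\,\mathrm{d}q\Bigr|^{2},
      \qquad \sum_{n} P(n) \;=\; 1 .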

  12. Fiber‐optic distributed temperature sensing: A new tool for assessment and monitoring of hydrologic processes

    Science.gov (United States)

    Lane, John W.; Day-Lewis, Frederick D.; Johnson, Carole D.; Dawson, Cian B.; Nelms, David L.; Miller, Cheryl; Wheeler, Jerrod D.; Harvey, Charles F.; Karam, Hanan N.

    2008-01-01

    Fiber‐optic distributed temperature sensing (FO DTS) is an emerging technology for characterizing and monitoring a wide range of important earth processes. FO DTS utilizes laser light to measure temperature along the entire length of standard telecommunications optical fibers. The technology can measure temperature every meter over FO cables up to 30 kilometers (km) long. Commercially available systems can measure fiber temperature as often as 4 times per minute, with thermal precision ranging from 0.1 to 0.01 °C depending on measurement integration time. In 2006, the U.S. Geological Survey initiated a project to demonstrate and evaluate DTS as a technology to support hydrologic studies. This paper demonstrates the potential of the technology to assess and monitor hydrologic processes through case‐study examples of FO DTS monitoring of stream‐aquifer interaction on the Shenandoah River near Locke's Mill, Virginia, and on Fish Creek, near Jackson Hole, Wyoming, and estuary‐aquifer interaction on Waquoit Bay, Falmouth, Massachusetts. The ability to continuously observe temperature over large spatial scales with high spatial and temporal resolution provides a new opportunity to observe and monitor a wide range of hydrologic processes with application to other disciplines including hazards, climate‐change, and ecosystem monitoring.

  13. Nanoparticles dispersion in processing functionalised PP/TiO2 nanocomposites: distribution and properties

    International Nuclear Information System (INIS)

    El-Dessouky, Hassan M.; Lawrence, Carl A.

    2011-01-01

    Future innovations in textiles and fibrous materials are likely to demand fibres with enhanced multifunctionality. Fibres can be functionalized by dispersing nanoadditives into the polymer during melt compounding/spinning. TiO₂ nanoparticles have the potential to improve UV resistance and antistatic behaviour, as well as to impart self-cleaning by photocatalysis and thereby de-odour and antimicrobial effects. In this study, a micro-lab twin-screw extruder was used to produce samples of polypropylene (PP) nanocomposite monofilaments, doped with a nano titanium oxide (TiO₂)/manganese oxide (MnO) compound with sizes ranging from 60 to 200 nm. As a control sample, PP filaments without additives were also extruded. Three samples were produced containing different concentrations (wt%) of the TiO₂ compound, i.e. 0.95, 1.24 and 1.79%. The nano metal-oxide distribution in the as-spun and drawn nanocomposite filaments was analysed. Although there are small clusters of nanoparticles, the characterization techniques showed good dispersion and distribution of the modified TiO₂ along and across the processed filaments. From UV spectroscopy and TGA, a significant enhancement of the polypropylene's UV protection and thermal stability was observed: PP with the higher percentage of TiO₂ absorbed at a UV wavelength of 387 nm and thermally decomposed at 320.16 °C, accompanied by 95% weight loss.

  14. Industrial Qualification Process for Optical Fibers Distributed Strain and Temperature Sensing in Nuclear Waste Repositories

    Directory of Open Access Journals (Sweden)

    S. Delepine-Lesoille

    2012-01-01

    Temperature and strain monitoring will be implemented in the envisioned French geological repository for high- and intermediate-level long-lived nuclear wastes. Raman and Brillouin scattering in optical fibers are efficient industrial methods to provide distributed temperature and strain measurements. Gamma radiation and hydrogen release from nuclear wastes can, however, affect the measurements. An industrial qualification process is proposed and successfully implemented. Induced measurement uncertainties and their physical origins are quantified, and the influence of the optical fiber composition is assessed. Based on radiation-hard fibers and carbon primary coatings, we showed that the proposed system can provide accurate temperature and strain measurements up to 0.5 MGy and 100% hydrogen concentration in the atmosphere, over a 200 m distance range. The selected system was successfully implemented in the Andra underground laboratory, in a one-to-one-scale mockup of future cells, in concrete liners. We demonstrated the efficiency of simultaneous Raman and Brillouin scattering measurements to provide both strain and temperature distributed measurements, and showed that a 1.3 μm working wavelength is favorable for hazardous-environment monitoring.

  15. Switching field distribution and magnetization reversal process of FePt dot patterns

    Energy Technology Data Exchange (ETDEWEB)

    Ishio, S., E-mail: ishio@gipc.akita-u.ac.jp [Department of Materials Science and Engineering, Akita University, Akita 010-8502 (Japan); Takahashi, S.; Hasegawa, T.; Arakawa, A.; Sasaki, H. [Department of Materials Science and Engineering, Akita University, Akita 010-8502 (Japan); Yan, Z.; Liu, X. [Venture Business Laboratory, Akita University, Tegata Gakuen-machi, Akita 010-8502 (Japan); Kondo, Y.; Yamane, H.; Ariake, J. [Akita Prefectural R and D Center, 4-21 Sanuki, Akita 010-1623 (Japan); Suzuki, M.; Kawamura, N.; Mizumaki, M. [Japan Synchrotron Radiation Research Institute, 1-1-1, Kouto, Sayo-cho, Sayo-gun, Hyogo 679-5198 (Japan)

    2014-06-01

    The fabrication of FePt nanodots with high structural quality and the control of their switching fields are key issues in realizing high-density bit-patterned recording. We have prepared FePt dot patterns with 15-300 nm dot diameters by electron beam lithography and re-annealing, and studied the relation between the magnetization reversal process and the structure of the FePt nanodots. The switching field (H_sw) of dot patterns re-annealed at 710 °C for 240 min showed a bimodal distribution, with a higher peak at 5-6 T and a lower peak at ∼2 T. Cross-sectional TEM analysis revealed that the dots in the pattern can be classified into two structural groups: one with a high degree of order and well-defined [0 0 1] crystalline growth, and another comprising structurally disturbed dots with [1 1 1] growth and twin crystals. This structural inhomogeneity causes the observed magnetic switching field distribution. - Highlights: • FePt dot patterns with 15-100 nm dot diameters were prepared by EB lithography. • A maximum coercivity of 30 kOe was found in the dot pattern with 30 nm diameter. • Magnetization reversal was studied on the basis of TEM analysis and LLG simulation.

  16. Performance Recognition for Sulphur Flotation Process Based on Froth Texture Unit Distribution

    Directory of Open Access Journals (Sweden)

    Mingfang He

    2013-01-01

    As an important indicator of flotation performance, froth texture is believed to be related to the operational condition in the sulphur flotation process. A novel fault detection method based on the froth texture unit distribution (TUD) is proposed to recognize the fault condition of sulphur flotation in real time. The froth texture unit number is calculated based on the texture spectrum, and the probability density function (PDF) of the froth texture unit number is defined as the texture unit distribution, which can describe the actual textural features more accurately than the grey-level dependence matrix approach. As the type of the froth TUD is unknown, a nonparametric kernel estimation method based on a fixed kernel basis is proposed, which overcomes the difficulty that comparing different TUDs obtained under various conditions is impossible with the traditional varying kernel basis. By transforming the nonparametric description into dynamic kernel weight vectors, a principal component analysis (PCA) model is established to reduce the dimensionality of the vectors. Then a threshold criterion determined by the TQ statistic based on the PCA model is proposed to realize performance recognition. The industrial application results show that accurate performance recognition of froth flotation can be achieved by using the proposed method.
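
    For concreteness, the texture unit number of the classical texture-spectrum approach can be computed as below: each of a pixel's 8 neighbours is coded 0, 1 or 2 according to whether it is darker than, equal to, or brighter than the centre, and the codes form a base-3 number in 0..6560 whose histogram is the texture spectrum. This sketch assumes a plain grayscale array; the paper's kernel estimation and PCA stages would then operate on such distributions.

      import numpy as np

      def texture_unit_numbers(img: np.ndarray) -> np.ndarray:
          # offsets of the 8 neighbours, in a fixed clockwise order
          offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
          c = img[1:-1, 1:-1].astype(int)          # interior centre pixels
          tun = np.zeros_like(c)
          for k, (dy, dx) in enumerate(offs):
              nb = img[1 + dy:img.shape[0] - 1 + dy,
                       1 + dx:img.shape[1] - 1 + dx].astype(int)
              code = np.where(nb > c, 2, np.where(nb == c, 1, 0))
              tun += code * 3 ** k                 # base-3 texture unit number
          return tun

      rng = np.random.default_rng(0)
      frame = rng.integers(0, 256, size=(64, 64))  # stand-in for a froth image
      tun = texture_unit_numbers(frame)
      spectrum, _ = np.histogram(tun, bins=6561, range=(0, 6561), density=True)
      print("texture unit range:", tun.min(), "-", tun.max())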

  17. Distributed control and data processing system with a centralized database for a BWR power plant

    International Nuclear Information System (INIS)

    Fujii, K.; Neda, T.; Kawamura, A.; Monta, K.; Satoh, K.

    1980-01-01

    Recent digital techniques based on advances in electronics and computer technologies have enabled a very wide range of computer applications in BWR power plant control and instrumentation. Multifarious computers, from micro to mega, have been introduced separately, and to obtain better control and instrumentation system performance, a hierarchical computer complex system architecture has been developed. This paper addresses the hierarchical computer complex system architecture, which enables a more efficient introduction of computer systems to a nuclear power plant. Distributed control and processing systems, which are the components of the hierarchical computer complex, are described in some detail, and the database for the hierarchical computer complex is also discussed. The hierarchical computer complex system has been developed and is now in the detailed design stage for actual power plant application. (auth)

  18. Contact pressure distribution during the polishing process of ceramic tiles: A laboratory investigation

    International Nuclear Information System (INIS)

    Sani, A S A; Hamedon, Z; Azhari, A; Sousa, F J P

    2016-01-01

    During the polishing process of porcelain tiles, the difference in scratching speed between the innermost and peripheral abrasives leads to pressure gradients linearly distributed along the radial direction of the abrasive tool. The aim of this paper is to investigate such pressure gradients at the laboratory scale. For this purpose, polishing tests were performed on ceramic tiles according to industrial practice using a custom-made CNC tribometer. Gradual wear on both the abrasives and the machined surface of the floor tile was measured. The experimental results suggest that the pressure gradient tends to cause an inclination of the abraded surfaces, which becomes stable after a given polishing period. In addition to the wear depth of the machined surface, the highest gloss value and the finest surface finish were observed at the lowest point of the worn surface of the ceramic floor tile, corresponding to the point of highest pressure and lowest scratching speed. (paper)

  19. Validation Studies of Temperature Distribution and Mould Filling Process for Composite Skeleton Castings

    Directory of Open Access Journals (Sweden)

    M. Cholewa

    2007-07-01

    In this work the authors show selected results of simulation and experimental studies on the temperature distribution during solidification of a composite skeleton casting and the mould filling process (Figs. 4, 5, 6). The basic subject of the computer simulation was the analysis of the ability of the metal to fill the channels creating the skeleton shape, prepared in the form of a core. The analysis of filling for each consecutive level of the skeleton casting was conducted for the simulation results and the real casting. The skeleton casting was manufactured according to the proposed technology (Fig. 5). The number of fully filled nodes in the simulation was higher than that obtained in the experimental studies. It was observed in the experiment that the metal did not flow through the whole channel section during pouring, which suggested the possibility of reducing the channel section and pointed out the necessity of a local pressure increase.

  20. PC-based process distribution to solve iterative Monte Carlo simulations in physical dosimetry

    International Nuclear Information System (INIS)

    Leal, A.; Sanchez-Doblado, F.; Perucha, M.; Rincon, M.; Carrasco, E.; Bernal, C.

    2001-01-01

    A distribution model to simulate physical dosimetry measurements with Monte Carlo (MC) techniques has been developed. This approach is suited to simulations where the measurement conditions (and hence the input parameters) change continuously, such as a TPR curve or the estimation of the resolution limit of an optimal densitometer in the case of small-field profiles. As a comparison, a high-resolution scan for narrow beams with no iterative process is presented. The model has been installed on networked PCs without any resident software. The only requirements for these PCs have been a small, temporary Linux partition on the hard disks and a network connection to our server PC. (orig.)
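
    A minimal sketch of the distribution pattern, with worker processes standing in for the networked PCs and a pi estimate standing in for the transport code; the batch function, seeds and job sizes are illustrative only.

      from multiprocessing import Pool
      import random

      def batch(args):
          seed, n = args
          rng = random.Random(seed)            # independent stream per worker
          hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                     for _ in range(n))
          return hits

      if __name__ == "__main__":
          jobs = [(seed, 250_000) for seed in range(8)]
          with Pool(4) as pool:                # farm batches out in parallel
              total = sum(pool.map(batch, jobs))
          n_total = sum(n for _, n in jobs)
          print("pi ~", 4.0 * total / n_total)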

  1. Distributed error and alarm processing in the CMS data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, G.; et al.

    2012-01-01

    The error and alarm system for the data acquisition of the Compact Muon Solenoid (CMS) at CERN was successfully used for the physics runs at the Large Hadron Collider (LHC) during the first three years of activity. Error and alarm processing entails the notification, collection, storing and visualization of all exceptional conditions occurring in the highly distributed CMS online system, using a uniform scheme. Alerts and reports are shown on-line by web application facilities that map them to graphical models of the system as defined by the user. A persistency service keeps a history of all exceptions that have occurred, allowing subsequent retrieval of user-defined time windows of events for later playback or analysis. This paper describes the architecture and the technologies used, and deals with operational aspects during the first years of LHC operation. In particular we focus on performance, stability, and integration with the CMS sub-detectors.

  2. Multicriterion problem of allocation of resources in the heterogeneous distributed information processing systems

    Science.gov (United States)

    Antamoshkin, O. A.; Kilochitskaya, T. R.; Ontuzheva, G. A.; Stupina, A. A.; Tynchenko, V. S.

    2018-05-01

    This study reviews the problem of resource allocation in heterogeneous distributed information processing systems, which may be formalized as a multicriterion multi-index problem with linear constraints of the transport type. Algorithms for the solution of this problem involve a search for the entire set of Pareto-optimal solutions. For some classes of hierarchical systems, it is possible to significantly speed up the procedure of verifying a system of linear algebraic inequalities for consistency, owing to their reducibility to flow models or to the application of other solution schemes (for strongly connected structures) that take into account the specifics of the hierarchies under consideration.
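
    As a small illustration of the search target, here is a brute-force Pareto filter over candidate allocations scored on two minimization criteria; the cost vectors are hypothetical.

      def dominates(a, b):
          """True if cost vector a is no worse than b everywhere and better somewhere."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def pareto_front(costs):
          return [c for c in costs
                  if not any(dominates(other, c) for other in costs)]

      candidates = [(3, 7), (4, 4), (5, 5), (2, 9), (6, 3)]   # e.g. (load, delay)
      print(pareto_front(candidates))   # -> [(3, 7), (4, 4), (2, 9), (6, 3)]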

  3. Identification and verification of critical performance dimensions. Phase 1 of the systematic process redesign of drug distribution

    NARCIS (Netherlands)

    Colen, H.B.B.; Neef, C.; Schuring, R.W.

    2003-01-01

    Background: Worldwide, patient safety has become a major social policy problem for healthcare organisations. As in other organisations, the patients in our hospital suffer from an inadequate distribution process, as becomes clear from incident reports involving medication errors.

  4. Vib-rotational energy distributions and relaxation processes in pulsed HF chemical lasers

    International Nuclear Information System (INIS)

    Ben-Shaul, A.; Kompa, K.L.; Schmailzl, U.

    1976-01-01

    The rate equations governing the temporal evolution of photon densities and level populations in pulsed F + H₂ → HF + H chemical lasers are solved for different initial conditions. The rate equations are solved simultaneously for all relevant vibrational-rotational levels and vibrational-rotational P-branch transitions. Rotational equilibrium is not assumed. Approximate expressions for the detailed state-to-state rate constants corresponding to the various energy transfer processes (V-V, V-R,T, R-R,T) coupling the vib-rotational levels are formulated on the basis of experimental data, approximate theories, and qualitative considerations. The main findings are as follows: At low pressures, R-T transfer cannot compete with the stimulated emission, and the laser output largely reflects the nonequilibrium energy distribution in the pumping reaction. The various transitions reach threshold and decay almost independently, and simultaneous lasing on several lines takes place. When a buffer gas is added in excess to the reacting mixture, the enhanced rotational relaxation leads to nearly single-line operation and to the J shift in lasing. Laser efficiency is higher at high inert gas pressures owing to a better extraction of the internal energy from partially inverted populations. V-V exchange enhances lasing from upper vibrational levels but reduces the total pulse intensity. V-R,T processes reduce the efficiency but do not substantially modify the spectral output distribution. The photon yield ranges between 0.4 and 1.4 photons/HF molecule depending on the initial conditions. Comparison with experimental data, when available, is fair.

  5. Relationship between fiber degradation and residence time distribution in the processing of long fiber reinforced thermoplastics

    Directory of Open Access Journals (Sweden)

    2008-08-01

    Long fiber reinforced thermoplastics (LFT) were processed by in-line compounding equipment with a modified single-screw extruder. A pulse stimulus response technique using PET spheres as the tracer was adopted to obtain the residence time distribution (RTD) of extrusion compounding. RTD curves were fitted by a model based on the supposition that extrusion compounding is a combination of plug flow and mixed flow. The characteristic parameters of the RTD model, including P, the fraction of the plug flow reactor (PFR), and d, the fraction of dead volume of the continuous stirred tank reactor (CSTR), were used to relate residence time to fiber degradation, represented by fiber length and dispersion. The effects of screw speed, mixing length and channel depth on the RTD curves and on the characteristic parameters of the RTD model, as well as their effects on fiber degradation, were investigated. The influence of shear force at different screw speeds and variable channel depth on fiber degradation was studied, and the main driver of fiber degradation was identified. An optimal process for balancing fiber length and dispersion is presented.
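
    The named plug-flow/mixed-flow combination has a standard exit-age form: a pure delay followed by an exponential decay. The sketch below uses illustrative parameter values, and the way d shrinks the mixed-tank time constant is an assumption for the sketch, not necessarily the paper's exact parameterization.

      import numpy as np

      def rtd(t, tau=60.0, P=0.3, d=0.1):
          """Exit-age density E(t) for a PFR (fraction P) in series with a
          CSTR whose active volume is reduced by the dead fraction d."""
          tau_p = P * tau                        # plug-flow delay
          tau_m = (1.0 - P) * (1.0 - d) * tau    # active mixed-tank time constant
          t = np.asarray(t, dtype=float)
          return np.where(t >= tau_p,
                          np.exp(-(t - tau_p) / tau_m) / tau_m,
                          0.0)

      t = np.linspace(0.0, 400.0, 4001)
      e = rtd(t)
      print("area ~", e.sum() * (t[1] - t[0]))   # should integrate to ~1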

  6. A distributive peptide cyclase processes multiple microviridin core peptides within a single polypeptide substrate.

    Science.gov (United States)

    Zhang, Yi; Li, Kunhua; Yang, Guang; McBride, Joshua L; Bruner, Steven D; Ding, Yousong

    2018-05-03

Ribosomally synthesized and post-translationally modified peptides (RiPPs) are an important family of natural products. Their biosynthesis follows a common scheme in which the leader peptide of a precursor peptide guides the modifications of a single core peptide. Here we describe biochemical studies of the processing of multiple core peptides within a precursor peptide, rare in RiPP biosynthesis. In a cyanobacterial microviridin pathway, an ATP-grasp ligase, AMdnC, installs up to two macrolactones on each of the three core peptides within AMdnA. The enzyme catalysis occurs in a distributive fashion and follows an overall N-to-C directionality that is not strict, but a strict order in macrolactonizing each core peptide. Furthermore, AMdnC is catalytically versatile enough to process unnatural substrates carrying one to four core peptides, and kinetic studies provide insights into its catalytic properties. Collectively, our results reveal a distinct biosynthetic logic of RiPPs, opening up the possibility of modular production via synthetic biology approaches.

  7. Web Services Implementations at Land Process and Goddard Earth Sciences Distributed Active Archive Centers

    Science.gov (United States)

    Cole, M.; Bambacus, M.; Lynnes, C.; Sauer, B.; Falke, S.; Yang, W.

    2007-12-01

NASA's vast array of scientific data within its Distributed Active Archive Centers (DAACs) is especially valuable to traditional research scientists as well as the emerging market of Earth Science Information Partners. For example, the air quality science and management communities are increasingly using satellite-derived observations in their analyses and decision making. The Air Quality Cluster in the Federation of Earth Science Information Partners (ESIP) uses web infrastructures of interoperability, or Service Oriented Architecture (SOA), to extend data exploration, use, and analysis, and provides a user environment for DAAC products. In an effort to continually offer these NASA data to the broadest research community audience, and reusing emerging technologies, both NASA's Goddard Earth Science (GES) and Land Process (LP) DAACs have engaged in a web services pilot project. Through these projects both GES and LP have exposed data through the Open Geospatial Consortium's (OGC) Web Services standards. Reusing several different existing applications and implementation techniques, GES and LP successfully exposed a variety of data through distributed systems to be ingested into multiple end-user systems. The results of this project will enable researchers worldwide to access some of NASA's GES and LP DAAC data through OGC protocols. This functionality encourages interdisciplinary research while increasing data use through advanced technologies. This paper will concentrate on the implementation and use of OGC Web Services, specifically Web Map and Web Coverage Services (WMS, WCS), at the GES and LP DAACs, and the value of these services within scientific applications, including integration with the DataFed air quality web infrastructure and the development of data analysis web applications.

  8. Sources and processes affecting the distribution of dissolved Nd isotopes and concentrations in the West Pacific

    Science.gov (United States)

    Behrens, Melanie K.; Pahnke, Katharina; Schnetger, Bernhard; Brumsack, Hans-Jürgen

    2018-02-01

In the Atlantic, where deep circulation is vigorous, the dissolved neodymium (Nd) isotopic composition (expressed as ɛNd) is largely controlled by water mass mixing. In contrast, the factors influencing the ɛNd distribution in the Pacific, marked by sluggish circulation, are not yet clear. Indications of regional overprints in the Pacific come from its bordering volcanic islands. Our study aims to clarify the impact and relative importance of different Nd sources (rivers, volcanic islands), vertical (bio)geochemical processes and lateral water mass transport in controlling the dissolved ɛNd and Nd concentration ([Nd]) distributions in the West Pacific between South Korea and Fiji. We find evidence of unradiogenic continental input from South Korean and Chinese rivers to the East China Sea. In the tropical West Pacific, volcanic islands supply Nd to surface and subsurface waters and modify their ɛNd to radiogenic values of up to +0.7. These radiogenic signatures allow detailed tracing of currents flowing to the east and their differentiation from westward currents with open-ocean Pacific ɛNd composition in the complex tropical Pacific zonal current system. Modified radiogenic ɛNd of West Pacific intermediate to bottom waters upstream of or within our section also indicates non-conservative behavior of ɛNd due to boundary exchange at volcanic island margins, at submarine ridges, and with hydrothermal particles. Only subsurface to deep waters (3000 m) in the open Northwest Pacific show conservative behavior of ɛNd. In contrast, we find a striking correlation of extremely low (down to 2.77 pmol/kg) and laterally constant [Nd] with the high-salinity North and South Pacific Tropical Water, indicating lateral transport of preformed [Nd] from the North and South Pacific subtropical gyres into the study area. This observation also explains the previously observed low subsurface [Nd] in the tropical West Pacific. Similarly, Western South Pacific Central Water, Antarctic

  9. Review of algorithms for modeling metal distribution equilibria in liquid-liquid extraction processes

    Directory of Open Access Journals (Sweden)

    Lozano, L. J.

    2005-10-01

This work focuses on general guidelines to be considered for the application of least-squares routines and artificial neural networks (ANN) in the estimation of metal distribution equilibria in liquid-liquid extraction processes. The goal of the procedure in the statistical method is to find the values of the equilibrium constants (Kj) for the reactions involved in the metal extraction which minimize the differences between the experimental distribution coefficients (Dexp) and the theoretical distribution coefficients according to the proposed mechanism (Dtheor). In the first part of the article, results obtained with the routine most frequently reported in the bibliography are compared with those obtained using the algorithms previously discussed. In the second part, the main features of a single back-propagation neural network for the same purpose are discussed, and the results obtained are compared with those obtained with the classical methods.

[Translated from the Spanish abstract] The work presents the general guidelines to be considered for the estimation of metal distribution equilibria in liquid-liquid extraction processes, according to two methods: a classical least-squares algorithm and artificial neural networks. The objective of the procedure, in the case of the statistical method, is to find the values of the equilibrium constants (Kj) for the reactions involved in the metal extraction that minimize the differences between the experimental distribution coefficient and the theoretical distribution coefficient, according to the proposed mechanism. In the first part of the article, the results obtained from the algorithms most commonly used in the bibliography are compared with the data obtained by means of the previously described algorithm. In the second part, the fundamental characteristics for applying a simple neural network with a back-propagation algorithm are presented, and the results obtained are compared with those of the classical methods.
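
As a rough illustration of the least-squares side of this procedure, the sketch below fits equilibrium constants by minimizing the residuals between Dexp and Dtheor. The extraction mechanism, the constant names K1 and K2, and the data are hypothetical placeholders, not taken from the paper:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical extraction mechanism: D_theor = K1*[L] / (1 + K2*[L]).
L_free = np.array([0.01, 0.02, 0.05, 0.10, 0.20, 0.50])  # extractant conc. (M)
D_exp  = np.array([0.09, 0.17, 0.40, 0.72, 1.20, 1.95])  # measured coefficients

def residuals(logK):
    K1, K2 = 10.0 ** logK              # fit on a log scale so K stays positive
    D_theor = K1 * L_free / (1.0 + K2 * L_free)
    return D_exp - D_theor

fit = least_squares(residuals, x0=[1.0, 0.0])
print("K1 = %.3g, K2 = %.3g" % tuple(10.0 ** fit.x))
```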

  10. Distribution and biophysical processes of beaded streams in Arctic permafrost landscapes

    Science.gov (United States)

    Arp, Christopher D.; Whitman, Matthew S.; Jones, Benjamin M.; Grosse, Guido; Gaglioti, Benjamin V.; Heim, Kurt C.

    2015-01-01

Beaded streams are widespread in permafrost regions and are considered a common thermokarst landform. However, little is known about their distribution, how and under what conditions they form, and how their intriguing morphology translates to ecosystem functions and habitat. Here we report on a Circum-Arctic survey of beaded streams and a watershed-scale analysis in northern Alaska using remote sensing and field studies. We mapped over 400 channel networks with beaded morphology throughout the continuous permafrost zone of northern Alaska, Canada, and Russia and found the highest abundance associated with medium- to high-ground-ice-content permafrost in moderately sloping terrain. In the Fish Creek watershed, beaded streams accounted for half of the drainage density, occurring primarily as low-order channels initiating from lakes and drained lake basins. Beaded streams predictably transition to alluvial channels with increasing drainage area and decreasing channel slope, although this transition is modified by local controls on water and sediment delivery. Comparison of one beaded channel using repeat photography between 1948 and 2013 indicates a relatively stable landform, and 14C dating of basal sediments suggests channel formation may be as early as the Pleistocene-Holocene transition. Contemporary processes, such as deep snow accumulation in riparian zones, effectively insulate channel ice and allow for perennial liquid water below most beaded stream pools. Because of this, mean annual temperatures in pool beds are greater than 2°C, leading to the development of perennial thaw bulbs or taliks underlying these thermokarst features. In the summer, some pools thermally stratify, which reduces permafrost thaw and maintains coldwater habitats. Snowmelt-generated peak flows decrease rapidly by two or more orders of magnitude to summer low flows with slow reach-scale velocity distributions ranging from 0.1 to 0.01 m/s, yet channel runs still move water rapidly.

  11. Distributed and cloud computing from parallel processing to the Internet of Things

    CERN Document Server

    Hwang, Kai; Fox, Geoffrey C

    2012-01-01

    Distributed and Cloud Computing, named a 2012 Outstanding Academic Title by the American Library Association's Choice publication, explains how to create high-performance, scalable, reliable systems, exposing the design principles, architecture, and innovative applications of parallel, distributed, and cloud computing systems. Starting with an overview of modern distributed models, the book provides comprehensive coverage of distributed and cloud computing, including: Facilitating management, debugging, migration, and disaster recovery through virtualization Clustered systems for resear

  12. Temperature profiles from mechanical bathythermograph (MBT) casts from the USS ACME in the North Pacific Ocean in support of the Fleet Observations of Oceanographic Data (FLOOD) project from 1968-04-05 to 1968-04-25 (NODC Accession 6800642)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — MBT data were collected from the USS ACME in support of the Fleet Observations of Oceanographic Data (FLOOD) project. Data were collected by US Navy; Ships of...

  13. A Scalable Infrastructure for Lidar Topography Data Distribution, Processing, and Discovery

    Science.gov (United States)

    Crosby, C. J.; Nandigam, V.; Krishnan, S.; Phan, M.; Cowart, C. A.; Arrowsmith, R.; Baru, C.

    2010-12-01

High-resolution topography data acquired with lidar (light detection and ranging) technology have emerged as a fundamental tool in the Earth sciences, and are also being widely utilized for ecological, planning, engineering, and environmental applications. Collected from airborne, terrestrial, and space-based platforms, these data are revolutionary because they permit analysis of geologic and biologic processes at resolutions essential for their appropriate representation. Public-domain lidar data collected by federal, state, and local agencies are a valuable resource to the scientific community; however, the data pose significant distribution challenges because of the volume and complexity of data that must be stored, managed, and processed. Lidar data acquisition may generate terabytes of data in the form of point clouds, digital elevation models (DEMs), and derivative products. This massive volume of data is often challenging to host for resource-limited agencies. Furthermore, these data can be technically challenging for users who lack appropriate software, computing resources, and expertise. The National Science Foundation-funded OpenTopography Facility (www.opentopography.org) has developed a cyberinfrastructure-based solution to enable online access to Earth science-oriented high-resolution lidar topography data, online processing tools, and derivative products. OpenTopography provides access to terabytes of point cloud data, standard DEMs, and Google Earth image data, all co-located with computational resources for on-demand data processing. The OpenTopography portal is built upon a cyberinfrastructure platform that utilizes a Services Oriented Architecture (SOA) to provide a modular system that is highly scalable and flexible enough to support the growing needs of the Earth science lidar community. OpenTopography strives to host and provide access to datasets as soon as they become available, and also to expose greater application-level functionalities to

  14. Experimental quantum key distribution with simulated ground-to-satellite photon losses and processing limitations

    Science.gov (United States)

    Bourgoin, Jean-Philippe; Gigov, Nikolay; Higgins, Brendon L.; Yan, Zhizhong; Meyer-Scott, Evan; Khandani, Amir K.; Lütkenhaus, Norbert; Jennewein, Thomas

    2015-11-01

    Quantum key distribution (QKD) has the potential to improve communications security by offering cryptographic keys whose security relies on the fundamental properties of quantum physics. The use of a trusted quantum receiver on an orbiting satellite is the most practical near-term solution to the challenge of achieving long-distance (global-scale) QKD, currently limited to a few hundred kilometers on the ground. This scenario presents unique challenges, such as high photon losses and restricted classical data transmission and processing power due to the limitations of a typical satellite platform. Here we demonstrate the feasibility of such a system by implementing a QKD protocol, with optical transmission and full post-processing, in the high-loss regime using minimized computing hardware at the receiver. Employing weak coherent pulses with decoy states, we demonstrate the production of secure key bits at up to 56.5 dB of photon loss. We further illustrate the feasibility of a satellite uplink by generating a secure key while experimentally emulating the varying losses predicted for realistic low-Earth-orbit satellite passes at 600 km altitude. With a 76 MHz source and including finite-size analysis, we extract 3374 bits of a secure key from the best pass. We also illustrate the potential benefit of combining multiple passes together: while one suboptimal "upper-quartile" pass produces no finite-sized key with our source, the combination of three such passes allows us to extract 165 bits of a secure key. Alternatively, we find that by increasing the signal rate to 300 MHz it would be possible to extract 21 570 bits of a secure finite-sized key in just a single upper-quartile pass.

  15. Processing and Characterization of a Novel Distributed Strain Sensor Using Carbon Nanotube-Based Nonwoven Composites

    Directory of Open Access Journals (Sweden)

    Hongbo Dai

    2015-07-01

This paper describes the development of an innovative carbon nanotube-based non-woven composite sensor that can be tailored for strain sensing properties and potentially offers a reliable and cost-effective sensing option for structural health monitoring (SHM). This novel strain sensor is fabricated using a readily scalable process of coating carbon nanotubes (CNTs) onto a nonwoven carrier fabric to form an electrically isotropic conductive network. Epoxy is then infused into the CNT-modified fabric to form a free-standing nanocomposite strain sensor. By measuring the changes in the electrical properties of the sensing composite, the deformation can be measured in real time. The sensors are repeatable and linear up to 0.4% strain. The highest elastic strain gage factors, 1.9 and 4.0, were achieved in the longitudinal and transverse directions, respectively. Although the longitudinal gage factor of the newly formed nanocomposite sensor is close to that of some metallic foil strain gages, the proposed sensing methodology offers spatial coverage, manufacturing customizability, distributed sensing capability, and transverse sensitivity.

  16. The unfolding effects of transfer functions and processing of the pulse height distributions

    Directory of Open Access Journals (Sweden)

    Avdić Senada

    2010-01-01

This paper deals with improvements to the linear artificial neural network unfolding approach aimed at accurately determining the incident neutron spectrum. The effects of the transfer functions and of pre-processing of the simulated pulse height distributions from liquid scintillation detectors on the artificial neural network's performance have been studied. A better energy resolution and higher reliability of the linear artificial neural network technique were achieved after implementing the results of this study. The optimized network structure was used to unfold both monoenergetic and continuous neutron energy spectra, such as the spectra of 252Cf and 241Am-Be sources, traditionally used in nuclear safeguards experiments. We have demonstrated that the artificial neural network energy resolution of 0.1 MeV is comparable with that obtained by the reference maximum-likelihood expectation-maximization (ML-EM) method, implemented using the one-step-late algorithm. Although the maximum-likelihood algorithm provides unfolded results of higher accuracy, especially for continuous neutron sources, the artificial neural network approach with the improved performance is more suitable for fast and robust determination of neutron spectra with sufficient accuracy.
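
For context, here is a minimal sketch of the reference ML-EM unfolding (one multiplicative update per iteration) that the network is benchmarked against. The response matrix, spectrum and counts below are synthetic stand-ins, and the authors' one-step-late variant adds a regularization term not shown here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic detector response R (pulse-height bins x energy bins) and a
# toy "true" neutron spectrum -- placeholders, not the paper's data.
n_ph, n_en = 64, 32
E = np.linspace(0.1, 10.0, n_en)
R = np.array([[np.exp(-0.5 * ((p / n_ph * 10.0 - e) / 1.0) ** 2)
               for e in E] for p in range(n_ph)])
R /= R.sum(axis=0, keepdims=True)          # normalize each energy column
true_spec = np.exp(-E) * E ** 2
measured = rng.poisson(R @ true_spec * 1e4) / 1e4

# ML-EM iterations: multiplicative update preserving non-negativity.
spec = np.ones(n_en)
for _ in range(200):
    ratio = measured / np.maximum(R @ spec, 1e-12)
    spec *= (R.T @ ratio) / R.sum(axis=0)
```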

  17. Distributions in four-fermion processes for W-physics at LEP 2

    International Nuclear Information System (INIS)

    Accomando, E.; Ballestrero, A.; Passarino, G.

    1996-01-01

The programs WPHACT and WTO, which are designed for computing cross sections and other relevant observables in e+e- annihilation into four fermions, are used to make detailed and complete predictions for the semi-leptonic and fully hadronic channels e+e- → qqlν, qqqq. Both the total cross sections in the LEP 2 energy range and some of the most relevant distributions are analyzed. Particular algorithms are introduced for the fully hadronic channels in order to analyze the WW physics and to properly define the signal versus the background. With appropriate kinematical cuts it has been shown that the neutral-current background can be made vanishingly small when the problem of determining the W-boson mass is addressed. The remaining background from the complete charge-current and mixed processes is again small but not completely negligible. A detailed discussion is performed on the validity of the most relevant approximations such as the double-resonant one. The inclusion of a final state QCD correction, in its naive form (NQCD), is discussed and various implementations are examined. (orig.)

  18. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    Science.gov (United States)

    Duarte Queirós, Sílvio M.

    2012-07-01

We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-Normal distribution is yielded. Namely, the distribution increases the tail for small (when q<1) or large (when q>1) values of the variable under analysis. The usual log-Normal distribution is retrieved when q=1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution, as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance, are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
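
A small sketch of the deformed multiplicative process, assuming the Borges q-product x ⊗q y = [x^(1-q) + y^(1-q) - 1]^(1/(1-q)) (set to zero when the bracket is negative); the step distribution and parameter values are arbitrary choices for illustration:

```python
import numpy as np

def q_product(x, y, q):
    """Borges q-product; reduces to ordinary multiplication as q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return x * y
    base = x ** (1.0 - q) + y ** (1.0 - q) - 1.0
    out = np.zeros_like(base)
    pos = base > 0.0                       # cutoff: zero outside the support
    out[pos] = base[pos] ** (1.0 / (1.0 - q))
    return out

rng = np.random.default_rng(1)

def kapteyn_q(q, n_steps=200, n_samples=100_000):
    """q-deformed Kapteyn multiplicative process; q = 1 recovers the
    ordinary process and hence the log-Normal limit."""
    x = np.ones(n_samples)
    for _ in range(n_steps):
        x = q_product(x, rng.uniform(0.9, 1.1, n_samples), q)
    return x

samples = kapteyn_q(q=0.9)   # q != 1 deforms the tails relative to log-Normal
```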

  19. Reexamination of fission fragment angular distributions and the fission process: Formalism

    International Nuclear Information System (INIS)

    Bond, P.D.

    1985-01-01

    The theory of fission fragment angular distributions is examined and the universally used expression is found to be valid only under restrictive assumptions. A more general angular distribution formula is derived and applied to recent data of high spin systems. At the same time it is shown that the strong anisotropies observed from such systems can be understood without changing the essential basis of standard fission theory. The effects of reaction mechanisms other than complete fusion on fission fragment angular distributions are discussed and possible angular distribution signatures of noncompound nucleus formation are mentioned

  20. The complete information for phenomenal distributed parameter control of multicomponent chemical processes in gas, fluid and solid phase

    International Nuclear Information System (INIS)

    Niemiec, W.

    1985-01-01

A constitutive mathematical model of distributed parameters of multicomponent chemical processes in gas, fluid and solid phase is utilized for the realization of phenomenal distributed parameter control of these processes. Original systems of partial differential constitutive state equations, in the derivative forms (I), (II) and (III), are solved in this paper from the point of view of information for phenomenal distributed parameter control of the considered processes. Obtained in this way for multicomponent chemical processes in gas, fluid and solid phase are: dynamical working space-time characteristics (analytical solutions in the working space-time of chemical reactors), dynamical phenomenal Green functions as working space-time transfer functions, statical working space characteristics (analytical solutions in the working space of chemical reactors), and statical phenomenal Green functions as working space transfer functions. These are applied as information for the realization of constitutive distributed parameter control of the mass, energy and momentum aspects of the above processes. Two cases are considered, by existence of: A° - initial conditions; B° - initial and boundary conditions, for multicomponent chemical processes in gas, fluid and solid phase.

  1. On the degree distribution of horizontal visibility graphs associated with Markov processes and dynamical systems: diagrammatic and variational approaches

    International Nuclear Information System (INIS)

    Lacasa, Lucas

    2014-01-01

Dynamical processes can be transformed into graphs through a family of mappings called visibility algorithms, enabling the possibility of (i) performing empirical time series analysis and signal processing and (ii) characterizing classes of dynamical systems and stochastic processes using the tools of graph theory. Recent works show that the degree distribution of these graphs encapsulates much information on the signals' variability, and therefore constitutes a fundamental feature for statistical learning purposes. However, exact solutions for the degree distributions are only known in a few cases, such as for uncorrelated random processes. Here we analytically explore these distributions in a list of situations. We present a diagrammatic formalism which computes, for all degrees, the corresponding probability as a series expansion in a coupling constant which is the number of hidden variables. We offer a constructive solution for general Markovian stochastic processes and deterministic maps. As case tests we focus on Ornstein–Uhlenbeck processes, fully chaotic and quasiperiodic maps. Whereas only for certain degree probabilities can all diagrams be summed exactly, in the general case we show that the perturbation theory converges. In a second part, we make use of a variational technique to predict the complete degree distribution for special classes of Markovian dynamics with fast-decaying correlations. In every case we compare the theory with numerical experiments. (paper)
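
As a concrete illustration, the sketch below builds the horizontal visibility graph of a series and compares the empirical degree distribution with the exact law P(k) = (1/3)(2/3)^(k-2) known for uncorrelated random series; it is not the diagrammatic or variational machinery of the paper:

```python
import numpy as np

def hvg_degrees(x):
    """Degrees of the horizontal visibility graph of series x: i and j
    (i < j) are linked iff every point strictly between them is lower
    than min(x[i], x[j])."""
    n = len(x)
    deg = [0] * n
    for i in range(n):
        top = float("-inf")                  # running max of points between i and j
        for j in range(i + 1, n):
            if top < min(x[i], x[j]):        # horizontal visibility criterion
                deg[i] += 1
                deg[j] += 1
            if x[j] >= x[i]:                 # i sees nothing beyond a taller point
                break
            top = max(top, x[j])
    return deg

x = np.random.default_rng(2).random(20_000)  # i.i.d. (uncorrelated) series
deg = np.asarray(hvg_degrees(x))
for k in range(2, 8):
    print(k, round(np.mean(deg == k), 4),
          round((1 / 3) * (2 / 3) ** (k - 2), 4))   # empirical vs exact law
```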

  2. Fast analysis of molecular dynamics trajectories with graphics processing units-Radial distribution function histogramming

    International Nuclear Information System (INIS)

    Levine, Benjamin G.; Stone, John E.; Kohlmeyer, Axel

    2011-01-01

    The calculation of radial distribution functions (RDFs) from molecular dynamics trajectory data is a common and computationally expensive analysis task. The rate limiting step in the calculation of the RDF is building a histogram of the distance between atom pairs in each trajectory frame. Here we present an implementation of this histogramming scheme for multiple graphics processing units (GPUs). The algorithm features a tiling scheme to maximize the reuse of data at the fastest levels of the GPU's memory hierarchy and dynamic load balancing to allow high performance on heterogeneous configurations of GPUs. Several versions of the RDF algorithm are presented, utilizing the specific hardware features found on different generations of GPUs. We take advantage of larger shared memory and atomic memory operations available on state-of-the-art GPUs to accelerate the code significantly. The use of atomic memory operations allows the fast, limited-capacity on-chip memory to be used much more efficiently, resulting in a fivefold increase in performance compared to the version of the algorithm without atomic operations. The ultimate version of the algorithm running in parallel on four NVIDIA GeForce GTX 480 (Fermi) GPUs was found to be 92 times faster than a multithreaded implementation running on an Intel Xeon 5550 CPU. On this multi-GPU hardware, the RDF between two selections of 1,000,000 atoms each can be calculated in 26.9 s per frame. The multi-GPU RDF algorithms described here are implemented in VMD, a widely used and freely available software package for molecular dynamics visualization and analysis.
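
A serial NumPy sketch of the histogramming step at the core of this analysis (minimum-image distances binned into spherical shells); the GPU tiling, atomic memory operations and load balancing of the paper are not reproduced here:

```python
import numpy as np

def rdf(pos_a, pos_b, box, r_max, n_bins):
    """Radial distribution function g(r) between two selections in a
    cubic periodic box -- a serial sketch of the histogramming step."""
    dr = r_max / n_bins
    hist = np.zeros(n_bins)
    for a in pos_a:                          # tile over selection A
        d = pos_b - a
        d -= box * np.round(d / box)         # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        idx = (r[(r > 0.0) & (r < r_max)] / dr).astype(int)
        np.add.at(hist, idx, 1)
    edges = np.arange(n_bins + 1) * dr
    shells = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = len(pos_b) / box ** 3
    return hist / (len(pos_a) * density * shells)

pos = np.random.default_rng(0).random((500, 3)) * 10.0
g = rdf(pos[:250], pos[250:], box=10.0, r_max=5.0, n_bins=50)
```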

  3. A community dataspace for distribution and processing of "long tail" high resolution topography data

    Science.gov (United States)

    Crosby, C. J.; Nandigam, V.; Arrowsmith, R.

    2016-12-01

Topography is a fundamental observable for Earth and environmental science and engineering. High resolution topography (HRT) is revolutionary for Earth science. Cyberinfrastructure that enables users to discover, manage, share, and process these data increases the impact of investments in data collection and catalyzes scientific discovery. The National Science Foundation-funded OpenTopography (OT, www.opentopography.org) employs cyberinfrastructure that includes large-scale data management, high-performance computing, and service-oriented architectures, providing researchers with efficient online access to large HRT (mostly lidar) datasets, metadata, and processing tools. HRT data are collected from satellite, airborne, and terrestrial platforms at increasingly finer resolutions, greater accuracy, and shorter repeat times. There has been a steady increase in OT data holdings due to partnerships and collaborations with various organizations within the academic NSF domain and beyond. With the decreasing costs of HRT data collection, via methods such as Structure from Motion, the number of researchers collecting these data is increasing. Researchers collecting these "long-tail" topography data (of modest size but great value) face impediments, especially the costs associated with making the data widely discoverable, shared, annotated, cited, managed, and archived. Because there are no existing central repositories or services to support storage and curation of these datasets, much of the data is isolated and difficult to locate and preserve. To overcome these barriers and provide efficient centralized access to these high-impact datasets, OT is developing a "Community DataSpace", a service built on a low-cost storage cloud (e.g. AWS S3), to make it easy for researchers to upload, curate, annotate and distribute their datasets. The system's ingestion workflow will extract metadata from uploaded data; validate it; assign a digital object identifier (DOI); and create a searchable

  4. Identification and verification of critical performance dimensions. Phase 1 of the systematic process redesign of drug distribution.

    Science.gov (United States)

    Colen, Hadewig B; Neef, Cees; Schuring, Roel W

    2003-06-01

Worldwide, patient safety has become a major social policy problem for healthcare organisations. As in other organisations, the patients in our hospital suffer from an inadequate distribution process, as becomes clear from incident reports involving medication errors. Medisch Spectrum Twente is a top primary-care, clinical, teaching hospital. The hospital pharmacy takes care of 1070 internal beds and 1120 beds in an affiliated psychiatric hospital and nursing homes. At the beginning of 1999, our pharmacy group started a large interdisciplinary research project to develop a safe, effective and efficient drug distribution system by using systematic process redesign. The process redesign includes both organisational and technological components. This article describes the identification and verification of critical performance dimensions for the design of drug distribution processes in hospitals (phase 1 of the systematic process redesign of drug distribution). Based on reported errors and related causes, we suggested six generic performance domains. To assess the role of the performance dimensions, we used three approaches: flowcharts, interviews with stakeholders, and review of existing performance using time studies and medication error studies. We were able to set targets for costs, quality of information, responsiveness, employee satisfaction, and degree of innovation. We still have to establish which drug distribution system represents the best and most cost-effective way of preventing medication errors. We intend to develop an evaluation model, using the critical performance dimensions as a starting point. This model can be used as a simulation template to compare different drug distribution concepts in order to determine the differences in quality and cost-effectiveness.

  5. Using distributed processing on a local area network to increase available computing power

    International Nuclear Information System (INIS)

    Capps, K.S.; Sherry, K.J.

    1996-01-01

    The migration from central computers to desktop computers distributed the total computing horsepower of a system over many different machines. A typical engineering office may have several networked desktop computers that are sometimes idle, especially after work hours and when people are absent. Users would benefit if applications were able to use these networked computers collectively. This paper describes a method of distributing the workload of an application on one desktop system to otherwise idle systems on the network. The authors present this discussion from a developer's viewpoint, because the developer must modify an application before the user can realize any benefit of distributed computing on available systems

  6. A Framework for Process Reengineering in Higher Education: A case study of distance learning exam scheduling and distribution

    Directory of Open Access Journals (Sweden)

    M'hammed Abdous

    2008-10-01

In this paper, we propose a conceptual and operational framework for process reengineering (PR) in higher education (HE) institutions. Using a case study aimed at streamlining exam scheduling and distribution in a distance learning (DL) unit, we outline a sequential and non-linear four-step framework designed to reengineer processes. The first two steps of this framework - initiating and analyzing - are used to initiate, document, and flowchart the process targeted for reengineering, and the last two steps - reengineering/implementing and evaluating - are intended to prototype, implement, and evaluate the reengineered process. Our early involvement of all stakeholders, and our in-depth analysis and documentation of the existing process, allowed us to avoid the traditional pitfalls associated with business process reengineering (BPR). Consequently, the outcome of our case study indicates a streamlined and efficient process with higher faculty satisfaction at substantial cost reduction.

  7. Evaluation of processing factors for selected organic contaminants during virgin olive oil production: Distribution of BTEXS during olives processing.

    Science.gov (United States)

    López-Blanco, Rafael; Gilbert-López, Bienvenida; Rojas-Jiménez, Rubén; Robles-Molina, José; Ramos-Martos, Natividad; García-Reyes, Juan F; Molina-Díaz, Antonio

    2016-05-15

The presence of BTEXS (benzene, toluene, ethylbenzene, xylenes and styrene) in virgin olive oils can be attributed to environmental contamination, but also to biological processes during oil lipogenesis (styrene). In this work, the processing factor of BTEXS from olives to olive oil during its production was evaluated at lab scale with an Abencor system. Benzene showed the lowest processing factor (15%), whereas toluene and xylenes showed an intermediate behavior (40-60% efficiency), and ethylbenzene and styrene were completely transferred (100%). In addition, an attempt to examine the contribution of potential sources to olive contamination with BTEXS was carried out for the first time. Two types of olive samples were classified according to their proximity to the contamination source (road). Although higher levels of BTEXS were found in samples close to roads, the concentrations were relatively low and do not constitute a major contribution to the BTEXS usually detected in olive oil.

  8. Stochastic processes in the social sciences: Markets, prices and wealth distributions

    Science.gov (United States)

    Romero, Natalia E.

The present work uses statistical mechanics tools to investigate the dynamics of markets, prices, trades and wealth distribution. We studied the evolution of market dynamics in different stages of historical development by analyzing commodity prices from two distinct periods: ancient Babylon, and medieval and early modern England. We find that the first-digit distributions of both Babylon and England commodity prices follow Benford's law, indicating that the data represent empirical observations typically arising from a free market. Further, we find that the normalized prices of both Babylon and England agricultural commodities are characterized by stretched exponential distributions, and exhibit persistent correlations of a power-law type over long periods of up to several centuries, in contrast to contemporary markets. Our findings suggest that similar market interactions may underlie the dynamics of ancient agricultural commodity prices, and that these interactions may remain stable across centuries. To further investigate the dynamics of markets, we present the analogy between transfers of money between individuals and the transfer of energy through particle collisions by means of the kinetic theory of gases. We introduce a theoretical framework for how the micro rules of trading lead to the emergence of income and wealth distributions. In particular, we study the effects of different types of distribution of savings/investments among individuals in a society and of different welfare/subsidy redistribution policies. Results show that while models considering savings propensities approach empirical distributions of wealth quite well, the effect of redistribution better captures specific features of the distributions, which earlier models failed to do; moreover, the models still preserve the exponential decay observed in empirical income distributions reported by tax data and surveys.
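
A minimal sketch of a kinetic wealth-exchange model of the kind described, assuming the standard savings-propensity update w_i → λw_i + ε(1-λ)(w_i + w_j) plus an optional flat-tax redistribution step; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def kinetic_exchange(n=1000, steps=100_000, lam=0.5, tax=0.0):
    """Pairwise money exchange with savings propensity lam; an optional
    flat tax, redistributed equally every n trades, sketches a
    welfare/subsidy policy."""
    w = np.ones(n)
    for step in range(steps):
        i, j = rng.integers(n, size=2)
        if i == j:
            continue
        eps = rng.random()
        pool = (1.0 - lam) * (w[i] + w[j])   # the non-saved fraction is shared
        w[i] = lam * w[i] + eps * pool
        w[j] = lam * w[j] + (1.0 - eps) * pool
        if tax > 0.0 and step % n == 0:      # periodic redistribution step
            levy = tax * w
            w = w - levy + levy.sum() / n
    return w

wealth = kinetic_exchange(tax=0.02)
```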

  9. Intelligent Monitoring System with High Temperature Distributed Fiberoptic Sensor for Power Plant Combustion Processes

    Energy Technology Data Exchange (ETDEWEB)

    Kwang Y. Lee; Stuart S. Yin; Andre Boehman

    2006-09-26

The objective of the proposed work is to develop an intelligent distributed fiber-optic sensor system for real-time monitoring of high temperature in a boiler furnace in power plants. Of particular interest is the estimation of spatial and temporal distributions of high temperatures within a boiler furnace, which is essential in assessing and controlling the mechanisms that form and remove pollutants at the source, such as NOx. The basic approach in developing the proposed sensor system is threefold: (1) development of a high-temperature distributed fiber-optic sensor capable of measuring temperatures greater than 2000 °C with spatial resolution of less than 1 cm; (2) development of distributed parameter system (DPS) models to map the three-dimensional (3D) temperature distribution for the furnace; and (3) development of an intelligent monitoring system for real-time monitoring of the 3D boiler temperature distribution. Under Task 1, we have set up a dedicated high-power, ultrafast laser system for fabricating in-fiber gratings in harsh-environment optical fibers, successfully fabricated gratings in single-crystal sapphire fibers with the high-power laser system, and developed highly sensitive long-period gratings (LPGs) by electric arc. Under Task 2, relevant mathematical modeling studies of NOx formation in practical combustors have been completed. Studies show that in boiler systems with no swirl, the distributed temperature sensor may provide information sufficient to predict trends of NOx at the boiler exit. Under Task 3, we have investigated a mathematical approach to extrapolation of the temperature distribution within a power plant boiler facility, using a combination of a modified neural network architecture and semigroup theory. Given a set of empirical data with no analytic expression, we first developed an analytic description and then extended that model along a single axis.

  10. Waiting-time distributions of magnetic discontinuities: Clustering or Poisson process?

    International Nuclear Information System (INIS)

    Greco, A.; Matthaeus, W. H.; Servidio, S.; Dmitruk, P.

    2009-01-01

Using solar wind data from the Advanced Composition Explorer spacecraft, with the support of Hall magnetohydrodynamic simulations, the waiting-time distributions of magnetic discontinuities have been analyzed. A possible phenomenon of clusterization of these discontinuities is studied in detail. We perform a local Poisson analysis in order to establish whether these intermittent events are randomly distributed or not. Possible implications for the nature of solar wind discontinuities are discussed.
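
A toy version of one ingredient of such an analysis: testing whether waiting times are compatible with a memoryless (Poisson) process by comparing them against an exponential distribution. The event times below are synthetic, and the paper's local Poisson analysis is more refined than this global KS test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Placeholder event times; in the study these would be the times of
# discontinuities identified in ACE solar-wind data.
event_times = np.cumsum(rng.exponential(1.0, 5000))

dt = np.diff(event_times)
# KS test against an exponential with the sample mean (approximate,
# since the scale parameter is estimated from the same data).
ks = stats.kstest(dt, "expon", args=(0.0, dt.mean()))
print("KS statistic %.4f, p = %.3f" % (ks.statistic, ks.pvalue))
# A small p-value rejects a memoryless process, pointing to clustering.
```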

  11. Transverse momentum dependent quark distributions and polarized Drell-Yan processes

    OpenAIRE

    Zhou, Jian; Yuan, Feng; Liang, Zuo-Tang

    2009-01-01

We study the spin-dependent quark distributions at large transverse momentum. We derive their transverse momentum behaviors in the collinear factorization approach in this region. We further calculate the angular distribution of Drell-Yan lepton pair production with polarized beams and present the results in terms of the collinear twist-three quark-gluon correlation functions. In the intermediate transverse momentum region, we find that the two approaches: the collinear factorization and t...

  12. Combining Generalized Renewal Processes with Non-Extensive Entropy-Based q-Distributions for Reliability Applications

    Directory of Open Access Journals (Sweden)

    Isis Didier Lins

    2018-03-01

The Generalized Renewal Process (GRP) is a probabilistic model for repairable systems that can represent the usual states of a system after a repair: as new, as old, or in a condition between new and old. It is often coupled with the Weibull distribution, widely used in the reliability context. In this paper, we develop novel GRP models based on probability distributions that stem from the Tsallis' non-extensive entropy, namely the q-Exponential and the q-Weibull distributions. The q-Exponential and Weibull distributions can model decreasing, constant or increasing failure intensity functions. However, the power-law behavior of the q-Exponential probability density function for specific parameter values is an advantage over the Weibull distribution when adjusting data containing extreme values. The q-Weibull probability distribution, in turn, can also fit data with bathtub-shaped or unimodal failure intensities in addition to the behaviors already mentioned. Therefore, the q-Exponential-GRP is an alternative for the Weibull-GRP model and the q-Weibull-GRP generalizes both. The method of maximum likelihood is used for their parameters' estimation by means of a particle swarm optimization algorithm, and Monte Carlo simulations are performed for the sake of validation. The proposed models and algorithms are applied to examples involving reliability-related data of complex systems and the obtained results suggest GRP plus q-distributions are promising techniques for the analyses of repairable systems.
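
For reference, a sketch of the q-Weibull density in the parameterization f(t) = (2-q)(β/η)(t/η)^(β-1)[1-(1-q)(t/η)^β]^(1/(1-q)), assumed here and valid for q < 2; it recovers the ordinary Weibull as q → 1 and the q-Exponential for β = 1:

```python
import numpy as np

def q_weibull_pdf(t, q, beta, eta):
    """q-Weibull density (assumed parameterization, q < 2); q -> 1 gives
    the Weibull and beta = 1 gives the q-Exponential."""
    t = np.asarray(t, dtype=float)
    z = (t / eta) ** beta
    if abs(q - 1.0) < 1e-9:                     # ordinary Weibull limit
        return (beta / eta) * (t / eta) ** (beta - 1.0) * np.exp(-z)
    bracket = 1.0 - (1.0 - q) * z
    pdf = np.zeros_like(t)
    ok = bracket > 0.0                          # zero outside the support
    pdf[ok] = ((2.0 - q) * (beta / eta) * (t[ok] / eta) ** (beta - 1.0)
               * bracket[ok] ** (1.0 / (1.0 - q)))
    return pdf

# For 1 < q < 2 the density decays as a power law (heavy right tail).
t = np.linspace(0.01, 50.0, 500)
heavy = q_weibull_pdf(t, q=1.4, beta=1.2, eta=2.0)
```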

  13. The probability distribution of maintenance cost of a system affected by the gamma process of degradation: Finite time solution

    International Nuclear Information System (INIS)

    Cheng, Tianjin; Pandey, Mahesh D.; Weide, J.A.M. van der

    2012-01-01

    The stochastic gamma process has been widely used to model uncertain degradation in engineering systems and structures. The optimization of the condition-based maintenance (CBM) policy is typically based on the minimization of the asymptotic cost rate. In the financial planning of a maintenance program, however, a more accurate prediction interval for the cost is needed for prudent decision making. The prediction interval cannot be estimated unless the probability distribution of cost is known. In this context, the asymptotic cost rate has a limited utility. This paper presents the derivation of the probability distribution of maintenance cost, when the system degradation is modelled as a stochastic gamma process. A renewal equation is formulated to derive the characteristic function, then the discrete Fourier transform of the characteristic function leads to the complete probability distribution of cost in a finite time setting. The proposed approach is useful for a precise estimation of prediction limits and optimization of the maintenance cost.
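
The final step of such a derivation, numerically inverting a characteristic function to obtain the cost density, can be sketched generically. The compound-Poisson characteristic function below is a stand-in for the one obtained from the paper's renewal equation, and plain quadrature replaces the discrete Fourier transform for clarity:

```python
import numpy as np

# Stand-in characteristic function: total cost of N ~ Poisson(lam)
# repairs, each cost ~ Exp(mu), so phi(w) = exp(lam * (phi_X(w) - 1))
# with phi_X(w) = 1 / (1 - i*w/mu).
lam, mu = 5.0, 1.0

def phi(w):
    return np.exp(lam * (1.0 / (1.0 - 1j * w / mu) - 1.0))

# Invert f(x) = (1/2pi) * Int phi(w) e^{-iwx} dw on a grid.
# (The probability atom e^{-lam} at x = 0 is excluded by starting x > 0.)
w = np.linspace(-200.0, 200.0, 20001)
dw = w[1] - w[0]
x = np.linspace(0.5, 20.0, 200)
pdf = np.array([(phi(w) * np.exp(-1j * w * xi)).real.sum()
                for xi in x]) * dw / (2.0 * np.pi)
# Prediction limits then follow from the cumulative sum of pdf.
```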

  14. Continuous soil maps - a fuzzy set approach to bridge the gap between aggregation levels of process and distribution models

    NARCIS (Netherlands)

    Gruijter, de J.J.; Walvoort, D.J.J.; Gaans, van P.F.M.

    1997-01-01

    Soil maps as multi-purpose models of spatial soil distribution have a much higher level of aggregation (map units) than the models of soil processes and land-use effects that need input from soil maps. This mismatch between aggregation levels is particularly detrimental in the context of precision

  15. A three-dimensional point process model for the spatial distribution of disease occurrence in relation to an exposure source

    DEFF Research Database (Denmark)

    Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten

    2015-01-01

    We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial...... the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use....

  16. Distribution and detection of Shiga toxin-producing Escherichia coli (STEC) during an industrial grinding process of beef trim

    Science.gov (United States)

    During the grinding and packaging processes, it is important to understand how Shiga toxin-producing Escherichia coli (STEC) would be distributed and how well it could be detected in beef trim. This study is important because it shows what would happen if contaminated meat is allowed into a commerc...

  17. Configuration and supervision of advanced distributed data acquisition and processing systems for long pulse experiments using JINI technology

    International Nuclear Information System (INIS)

    Gonzalez, Joaquin; Ruiz, Mariano; Barrera, Eduardo; Lopez, Juan Manuel; de Arcas, Guillermo; Vega, Jesus

    2009-01-01

    The development of tools for managing the capabilities and functionalities of distributed data acquisition systems is essential in long pulse fusion experiments. The intelligent test and measurement system (ITMS) developed by UPM and CIEMAT is a technology that permits implementation of a scalable data acquisition and processing system based on PXI or CompactPCI hardware. Several applications based on JINI technology have been developed to enable use of this platform for extensive implementation of distributed data acquisition and processing systems. JINI provides a framework for developing service-oriented, distributed applications. The applications are based on the paradigm of a JINI federation that supports mechanisms for publication, discovering, subscription, and links to remote services. The model we implemented in the ITMS platform included services in the system CPU (SCPU) and peripheral CPUs (PCPUs). The resulting system demonstrated the following capabilities: (1) setup of the data acquisition and processing to apply to the signals, (2) information about the evolution of the data acquisition, (3) information about the applied data processing and (4) detection and distribution of the events detected by the ITMS software applications. With this approach, software applications running on the ITMS platform can be understood, from the perspective of their implementation details, as a set of dynamic, accessible, and transparent services. The search for services is performed using the publication and subscription mechanisms of the JINI specification. The configuration and supervision applications were developed using remotely accessible (LAN or WAN) objects. The consequence of this approach is a hardware and software architecture that provides a transparent model of remote configuration and supervision, and thereby a means to simplify the implementation of a distributed data acquisition system with scalable and dynamic local processing capability developed in a

  18. Configuration and supervision of advanced distributed data acquisition and processing systems for long pulse experiments using JINI technology

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Joaquin; Ruiz, Mariano [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid (UPM), Ctra. Valencia Km-7, 28031, Madrid (Spain); Barrera, Eduardo [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid (UPM), Ctra. Valencia Km-7, 28031, Madrid (Spain)], E-mail: eduardo.barrera@upm.es; Lopez, Juan Manuel; de Arcas, Guillermo [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid (UPM), Ctra. Valencia Km-7, 28031, Madrid (Spain); Vega, Jesus [Asociacion EURATOM/CIEMAT para Fusion, Avda. Complutense 22, 28040, Madrid (Spain)

    2009-06-15

    The development of tools for managing the capabilities and functionalities of distributed data acquisition systems is essential in long pulse fusion experiments. The intelligent test and measurement system (ITMS) developed by UPM and CIEMAT is a technology that permits implementation of a scalable data acquisition and processing system based on PXI or CompactPCI hardware. Several applications based on JINI technology have been developed to enable use of this platform for extensive implementation of distributed data acquisition and processing systems. JINI provides a framework for developing service-oriented, distributed applications. The applications are based on the paradigm of a JINI federation that supports mechanisms for publication, discovering, subscription, and links to remote services. The model we implemented in the ITMS platform included services in the system CPU (SCPU) and peripheral CPUs (PCPUs). The resulting system demonstrated the following capabilities: (1) setup of the data acquisition and processing to apply to the signals, (2) information about the evolution of the data acquisition, (3) information about the applied data processing and (4) detection and distribution of the events detected by the ITMS software applications. With this approach, software applications running on the ITMS platform can be understood, from the perspective of their implementation details, as a set of dynamic, accessible, and transparent services. The search for services is performed using the publication and subscription mechanisms of the JINI specification. The configuration and supervision applications were developed using remotely accessible (LAN or WAN) objects. The consequence of this approach is a hardware and software architecture that provides a transparent model of remote configuration and supervision, and thereby a means to simplify the implementation of a distributed data acquisition system with scalable and dynamic local processing capability developed in a

  19. Design of ET(B) receptor agonists: NMR spectroscopic and conformational studies of ET7-21[Leu7, Aib11, Cys(Acm)15].

    Science.gov (United States)

    Hewage, Chandralal M; Jiang, Lu; Parkinson, John A; Ramage, Robert; Sadler, Ian H

    2002-03-01

In a previous report we have shown that the endothelin-B receptor-selective linear endothelin peptide, ET-1[Cys(Acm)1,15, Ala3, Leu7, Aib11], folds into an alpha-helical conformation in a methanol-d3/water co-solvent [Hewage et al. (1998) FEBS Lett., 425, 234-238]. To study the requirements for the structure-activity relationships, truncated analogues of this peptide were subjected to further studies. Here we report the solution conformation of ET7-21[Leu7, Aib11, Cys(Acm)15] in a methanol-d3/water co-solvent at pH 3.6, determined by NMR spectroscopic and molecular modelling studies. Further truncation of this short peptide results in poor agonist activity. The modelled structure shows that the peptide folds into an alpha-helical conformation between residues Lys9-His16, whereas the C-terminus prefers no fixed conformation. This truncated linear endothelin analogue is pivotal for designing endothelin-B receptor agonists.

  20. Analysis of Residual Nuclide in a ACM and ACCT of 100-MeV proton beamline By measurement X-ray Spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jeong-Min; Yun, Sang-Pil; Kim, Han-Sung; Kwon, Hyeok-Jung; Cho, Yong-Sub [Korea Atomic Energy Research Institute, Gyeongju (Korea, Republic of)

    2015-10-15

The proton beam is provided to users at various energies, ranging from 20 MeV to 100 MeV. Protons generated by the ion source are accelerated to 100 MeV and delivered to the target through a bending magnet and an AC magnet. During operation, relatively high-dose X-rays are emitted due to collisions of protons with components of the beamline. X-ray emission remains after the accelerator is turned off, and residual nuclides are analyzed through measurement of the X-ray spectrum, identifying the components that are the primary cause of the residual nuclides detected in the AC magnet (ACM) and associated components (ACCT). From analysis of the X-ray spectra generated from the AC magnet (ACM) and AC current transformer (ACCT) of the 100-MeV beamline after proton beam irradiation, most of the residual nuclides were identified, and it can be seen that they are emitted from the stainless steel as a result of beam loss.

  1. Voltage distribution in tapered winding of tesla-transformer during discharge process of PFL

    International Nuclear Information System (INIS)

    Xin Jiaqi; Chang Anbi; Li Mingjia; Kang Qiang

    2007-01-01

The operating principle of the integrated construction of a Tesla transformer and a pulse-forming line (PFL) was investigated in a Tesla-transformer-type accelerator. Experiments were carried out on the Tesla transformer's secondary winding to study the impulse voltage distribution while the PFL was discharging. The regularities of the turn-to-ground voltage distribution and the interturn voltage distribution were summarized. The voltage distribution within the PFL was calculated and compared with the experimental results. Structural windings with parallel coils at the head, parallel coils at the end, and a shading ring were used to improve the voltage distribution, and this was verified by experiment. The results indicate that the tapered winding does not affect the electric field within the PFL; the turn-to-ground voltage varies linearly, while the interturn voltage fluctuates strongly and is largest at the head of the winding. The three optimization methods help to suppress oscillation: the structural winding with parallel coils at the head remarkably decreases the interturn voltage at the head of the winding, and the parallel coils at the end decrease the interturn voltage at the end. (authors)

  2. Processing statistics: an examination of focused and distributed attention using event related potentials.

    Science.gov (United States)

    Baijal, Shruti; Nakatani, Chie; van Leeuwen, Cees; Srinivasan, Narayanan

    2013-06-07

Human observers show remarkable efficiency in statistical estimation; they are able, for instance, to estimate the mean size of visual objects, even if their number exceeds the capacity limits of focused attention. This ability has been understood as the result of a distinct mode of attention, i.e. distributed attention. Compared to the focused attention mode, working memory representations under distributed attention are proposed to be more compressed, leading to reduced working memory loads. An alternate proposal is that distributed attention uses less structured, feature-level representations. These would fill up working memory (WM) more, even when target set size is low. Using event-related potentials, we compared WM loading in a typical distributed attention task (mean size estimation) to that in a corresponding focused attention task (object recognition), using a measure called contralateral delay activity (CDA). Participants performed both tasks on 2, 4, or 8 different-sized target disks. In the recognition task, CDA amplitude increased with set size; notably, however, in the mean estimation task the CDA amplitude was high regardless of set size. In particular for set-size 2, the amplitude was higher in the mean estimation task than in the recognition task. The result showed that the task involves full WM loading even with a low target set size. This suggests that in the distributed attention mode, representations are not compressed, but rather less structured than under focused attention conditions.

  3. Multiplicity distributions and multiplicity correlations in sequential, off-equilibrium fragmentation process

    International Nuclear Information System (INIS)

    Botet, R.

    1996-01-01

A new kinetic fragmentation model, the Fragmentation-Inactivation-Binary (FIB) model, is described, in which a dissipative process randomly stops the sequential, conservative, off-equilibrium fragmentation process. (K.A.)

  4. Poisson-process generalization for the trading waiting-time distribution in a double-auction mechanism

    Science.gov (United States)

    Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico

    2005-05-01

In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model such empirical evidence. The empirical study is performed on the best-bid and best-ask data of 7 U.S. financial markets, for 30-stock time series. In particular, statistical properties of trading waiting times have been analyzed and the quality of fits is evaluated by suitable statistical tests, i.e., comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered as a generalization of a Poisson process. Moreover, the renewal process can approximate real data, and implementation in the artificial stock market can reproduce the trading activity in a realistic way.
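
A compact sketch of fitting such a generalized-Poisson model, assuming a two-component exponential mixture for the waiting times; the data are synthetic and the component count is an arbitrary choice:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Illustrative waiting times from a two-component exponential mixture,
# the kind of model fitted to inter-order times in the paper.
wt = np.where(rng.random(10_000) < 0.7,
              rng.exponential(1.0, 10_000),   # fast regime, rate 1
              rng.exponential(8.0, 10_000))   # slow regime, rate 1/8

def nll(params):
    """Negative log-likelihood of the exponential mixture."""
    p, l1, l2 = params
    pdf = p * l1 * np.exp(-l1 * wt) + (1.0 - p) * l2 * np.exp(-l2 * wt)
    return -np.log(pdf).sum()

fit = minimize(nll, x0=[0.5, 1.0, 0.2],
               bounds=[(0.01, 0.99), (1e-3, None), (1e-3, None)])
print(fit.x)   # recovers roughly (0.7, 1.0, 0.125)
```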

  5. Birth and Death Process Modeling Leads to the Poisson Distribution: A Journey Worth Taking

    Science.gov (United States)

    Rash, Agnes M.; Winkel, Brian J.

    2009-01-01

    This paper describes details of the development of the general birth and death process, from which we can extract the Poisson process as a special case. This general process is appropriate for a number of courses and course units, and can enrich the study of mathematics for students as it touches and uses a diverse set of mathematical topics, e.g.,…
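
    As a side note, the special case mentioned above can be written out explicitly (a standard textbook result, not quoted from the paper): for a pure birth process with constant rate \lambda and no deaths, the master equations

        \frac{dP_n(t)}{dt} = \lambda P_{n-1}(t) - \lambda P_n(t),
            \qquad P_{-1}(t) \equiv 0, \quad P_0(0) = 1,

    solve recursively to the Poisson distribution

        P_n(t) = \frac{(\lambda t)^n}{n!}\, e^{-\lambda t}.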

  6. On constitutive modelling and information for phenomenal distributed parameter control of multicomponent chemical processes in fluid- and solidphase

    International Nuclear Information System (INIS)

    Niemiec, W.

    1985-01-01

    The problem under consideration is to find common physicochemical conditions for the kinetics and phenomena of multicomponent chemical processes in fluid and solid phase which decide the yield and quality of the final products of these processes. The paper is devoted to the construction of a fundamental distributed-parameter constitutive theory for the physicochemical modelling of these chemical processes, treated from the viewpoint of isotropic and anisotropic nonhomogeneous media with space and time memories. On the basis of the definition of derivative and constitutive equations of continuity, an original system of partial differential constitutive state equations is deduced.

  7. Study of temperature distribution of pipes heated by moving rectangular gauss distribution heat source. Development of pipe outer surface irradiated laser stress improvement process (L-SIP)

    International Nuclear Information System (INIS)

    Ohta, Takahiro; Kamo, Kazuhiko; Asada, Seiji; Terasaki, Toshio

    2009-01-01

    The new process called L-SIP (outer surface irradiated Laser Stress Improvement Process) has been developed to convert the tensile residual stress near the inner surface of butt-welded pipe joints into compressive stress. A temperature gradient arises through the pipe wall thickness when the outer surface is heated rapidly by a laser beam. Through the thermal expansion difference between the inner surface and the outer surface, compressive stress develops near the inner surface of the pipe. In this paper, the theoretical equation for the temperature distribution of a pipe heated by a moving rectangular Gaussian-distribution heat source on the outer surface is derived. The temperature histories of pipes calculated by the theoretical equation agree well with FEM analysis results. According to the theoretical equation, the controlling parameters of the temperature distributions and histories are q/(2a_y), vh, a_x/h and a_y/h, where q is the total heat input, a_y is the heat source length in the axial direction, a_x is the Gaussian radius of the heat source in the hoop direction, v is the moving velocity, and h is the thickness of the pipe. The essential variables for L-SIP, which are defined on the basis of the measured temperature histories on the outer surface of the pipe, are Tmax, F_0 = kτ_0/h², vh, W_Q and L_Q, where Tmax is the maximum temperature at the monitoring point on the outer surface, k is the thermal diffusivity coefficient, τ_0 is the temperature rise time from 100°C to the maximum temperature at the monitoring point, W_Q = τ_0 × v, and L_Q is the uniform temperature length in the axial direction. It is verified that the essential variables for L-SIP match the controlling parameters of the theoretical equation. (author)
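
    To make the dimensionless groups concrete, here is a minimal calculation of two of the essential variables (all input values are assumed for illustration; the paper does not supply them here):

        # F_0 = k * tau_0 / h^2 and W_Q = tau_0 * v for assumed L-SIP inputs.
        k = 4.0e-6    # thermal diffusivity of austenitic steel [m^2/s] (assumed)
        h = 0.01      # pipe wall thickness [m] (assumed)
        tau_0 = 5.0   # rise time from 100 degC to peak temperature [s] (assumed)
        v = 0.002     # moving velocity of the heat source [m/s] (assumed)

        F_0 = k * tau_0 / h**2   # dimensionless heating time (Fourier number)
        W_Q = tau_0 * v          # heated length in the travel direction [m]

        print(f"F_0 = {F_0:.3f}, W_Q = {W_Q * 1000:.1f} mm")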

  8. Raw materials exploitation in Prehistory of Georgia: sourcing, processing and distribution

    Science.gov (United States)

    Tushabramishvili, Nikoloz; Oqrostsvaridze, Avthandil

    2016-04-01

    Study of raw materials has a big importance to understand the ecology, cognition, behavior, technology, culture of the Paleolithic human populations. Unfortunately, explorations of the sourcing, processing and distribution of stone raw materials had a less attention until the present days. The reasons of that were: incomplete knowledge of the archaeologists who are doing the late period archaeology (Bronze Age-Medieval) and who are little bit far from the Paleolithic technology and typology; Ignorance of the stone artifacts made on different kind of raw-materials, except flint and obsidians. Studies on the origin of the stone raw materials are becoming increasingly important since in our days. Interesting picture and situation have been detected on the different sites and in different regions of Georgia. In earlier stages of Middle Paleolithic of Djruchula Basin caves the number of basalt, andesite, argillite etc. raw materials are quite big. Since 130 000 a percent of the flint raw-material is increasing dramatically. Flint is an almost lonely dominated raw-material in Western Georgia during thousand years. Since approximately 50 000 ago the first obsidians brought from the South Georgia, appeared in Western Georgia. Similar situation has been detected by us in Eastern Georgia during our excavations of Ziari and Pkhoveli open-air sites. The early Lower Paleolithic layers are extremely rich by limestone artifacts while the flint raw-materials are dominated in the Middle Paleolithic layers. Study of these issues is possible to achieve across chronologies, the origins of the sources of raw-materials, the sites and regions. By merging archaeology with anthropology, geology and geography we are able to acquire outstanding insights about those populations. New approach to the Paleolithic stone materials, newly found Paleolithic quarries gave us an opportunities to try to achieve some results for understanding of the behavior of Paleolithic populations, geology and

  9. Process and equipment for monitoring flux distribution in a nuclear reactor outside the core

    International Nuclear Information System (INIS)

    Graham, K.F.; Gopal, R.

    1977-01-01

    This concerns the monitoring system, located outside the core of, for example, a PWR, for the axial flux distribution over the whole load operating range. Flux distribution maps can be produced continuously. The core is divided into at least three sections, formed by dividing it at right angles to the longitudinal axis, and the flux is measured outside the core using adjacent detectors. Their output signals are calibrated by amplifiers so that the load distribution in the associated sections is reproduced. A summation of the calibrated output signals and the formation of a mean load signal take place in summing stages. For monitoring, this is compared with a value which corresponds to the maximum permissible load setting. In addition, the position of the control rods in the core can be taken into account by multiplying the mean load signals by suitable peak factors. The distribution of monitoring positions, i.e. the positions of the detectors, can be progressive or symmetrical along the axis. (DG) 891 HP [de
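
    The comparison logic described here is simple enough to sketch (all numbers below are invented for illustration; the patent abstract gives none):

        # Calibrated ex-core detector signals per core section are averaged
        # into a mean load signal, corrected by a rod-position peak factor,
        # and compared with the maximum permissible load setting.
        signals = [0.95, 1.02, 0.88]   # calibrated section load signals (assumed)
        peak_factor = 1.15             # from current control-rod positions (assumed)
        limit = 1.18                   # maximum permissible load setting (assumed)

        mean_load = sum(signals) / len(signals)
        monitored = mean_load * peak_factor
        status = "ALARM" if monitored > limit else "within limit"
        print(f"mean load {mean_load:.3f}, monitored {monitored:.3f}: {status}")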

  10. Distribution and rate of microbial processes in ammonia-loaded air filter biofilm

    DEFF Research Database (Denmark)

    Juhler, Susanne; Nielsen, Lars Peter; Schramm, Andreas

    2009-01-01

    The in situ activity and distribution of heterotrophic and nitrifying bacteria and their potential interactions were investigated in a full-scale, two-section, trickling filter designed for biological degradation of volatile organics and NH3 in ventilation air from pig farms. The filter biofilm...

  11. Temporal distribution of earthquakes using renewal process in the Dasht-e-Bayaz region

    Science.gov (United States)

    Mousavi, Mehdi; Salehi, Masoud

    2018-01-01

    The temporal distribution of earthquakes with M_w > 6 in the Dasht-e-Bayaz region, eastern Iran, has been investigated using time-dependent models. In these models it is assumed that the times between consecutive large earthquakes follow a certain statistical distribution. For this purpose, four time-dependent inter-event distributions, the Weibull, Gamma, Lognormal, and Brownian Passage Time (BPT) distributions, are used in this study, and the associated parameters are estimated using the method of maximum likelihood estimation. The suitable distribution is selected based on the log-likelihood function and the Bayesian Information Criterion. The probability of the occurrence of the next large earthquake during a specified interval of time was calculated for each model. Then, the concept of conditional probability was applied to forecast the next major (M_w > 6) earthquake at the site of interest. The emphasis is on statistical methods which attempt to quantify the probability of an earthquake occurring within specified time, space, and magnitude windows. According to the obtained results, the probability of occurrence of an earthquake with M_w > 6 in the near future is significantly high.
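
    The fitting and forecasting procedure described above can be sketched in a few lines of Python (synthetic inter-event times rather than the Dasht-e-Bayaz catalogue; the BPT distribution is taken as scipy's inverse Gaussian):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        gaps = rng.weibull(1.5, size=30) * 40.0   # synthetic inter-event times [yr]

        candidates = {"Weibull": stats.weibull_min, "Gamma": stats.gamma,
                      "Lognormal": stats.lognorm, "BPT": stats.invgauss}

        best = (None, np.inf, None)
        for name, dist in candidates.items():
            params = dist.fit(gaps, floc=0)           # MLE, location fixed at 0
            loglik = dist.logpdf(gaps, *params).sum()
            k = len(params) - 1                       # free parameters
            bic = k * np.log(len(gaps)) - 2.0 * loglik
            print(f"{name:9s} logL={loglik:8.2f} BIC={bic:8.2f}")
            if bic < best[1]:
                best = (name, bic, dist(*params))

        # Conditional probability of an event within the next dt years,
        # given that t years have already elapsed since the last large event.
        t, dt = 30.0, 10.0
        S = best[2].sf
        print(f"{best[0]}: P(event in next {dt:.0f} yr | {t:.0f} yr elapsed) = "
              f"{(S(t) - S(t + dt)) / S(t):.2f}")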

  12. Simultaneous measurement of current and temperature distributions in a proton exchange membrane fuel cell during cold start processes

    International Nuclear Information System (INIS)

    Jiao Kui; Alaefour, Ibrahim E.; Karimi, Gholamreza; Li Xianguo

    2011-01-01

    Cold start is critical to the commercialization of proton exchange membrane fuel cell (PEMFC) in automotive applications. Dynamic distributions of current and temperature in PEMFC during various cold start processes determine the cold start characteristics, and are required for the optimization of design and operational strategy. This study focuses on an investigation of the cold start characteristics of a PEMFC through the simultaneous measurements of current and temperature distributions. An analytical model for quick estimate of purging duration is also developed. During the failed cold start process, the highest current density is initially near the inlet region of the flow channels, then it moves downstream, reaching the outlet region eventually. Almost half of the cell current is produced in the inlet region before the cell current peaks, and the region around the middle of the cell has the best survivability. These two regions are therefore more important than other regions for successful cold start through design and operational strategy, such as reducing the ice formation and enhancing the heat generation in these two regions. The evolution of the overall current density distribution over time remains similar during the successful cold start process; the current density is the highest near the flow channel inlets and generally decreases along the flow direction. For both the failed and the successful cold start processes, the highest temperature is initially in the flow channel inlet region, and is then around the middle of the cell after the overall peak current density is reached. The ice melting and liquid formation during the successful cold start process have negligible influence on the general current and temperature distributions.

  13. s-process studies in the light of new experimental cross sections: Distribution of neutron fluences and r-process residuals

    International Nuclear Information System (INIS)

    Kaeppeler, F.; Beer, H.; Wisshak, K.; Clayton, D.D.; Macklin, R.L.; Ward, R.A.

    1981-08-01

    A best set of neutron-capture cross sections has been evaluated for the most important s-process isotopes. With this data base, s-process studies have been carried out using the traditional model, which assumes a steady neutron flux and an exponential distribution of neutron irradiations. The calculated σN curve is in excellent agreement with the empirical σN values of pure s-process nuclei. Simultaneously, good agreement is found between the difference of solar and s-process abundances and the abundances of pure r-process nuclei. We also discuss the abundance pattern of the iron group elements, where our s-process results complement the abundances obtained from explosive nuclear burning. The results obtained from the traditional s-process model, such as seed abundances, mean neutron irradiations, or neutron densities, are compared to recent stellar model calculations which assume the He-burning shells of red giant stars as the site of the s-process. (orig.) [de
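
    For orientation, the "exponential distribution of neutron irradiations" in the traditional model is, in the classical formulation (a standard textbook result, not quoted from this paper), \rho(\tau) \propto e^{-\tau/\tau_0}, for which the σN curve takes the closed form

        \sigma_A N_A = \frac{f\, N_{56}}{\tau_0} \prod_{i=56}^{A} \left( 1 + \frac{1}{\sigma_i \tau_0} \right)^{-1},

    where f is the fraction of the iron seed (of abundance N_{56}) exposed, \sigma_i are the capture cross sections along the s-process path, and \tau_0 is the mean neutron exposure.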

  14. Influences on Distribution of Solute Atoms in Cu-8Fe Alloy Solidification Process Under Rotating Magnetic Field

    Science.gov (United States)

    Zou, Jin; Zhai, Qi-Jie; Liu, Fang-Yu; Liu, Ke-Ming; Lu, De-Ping

    2018-05-01

    A rotating magnetic field (RMF) was applied during the solidification of a Cu-8Fe alloy. Focusing on the mechanism by which the RMF acts on solid-solution Fe(Cu) atoms in the Cu-8Fe alloy, the influences of the RMF on the solidification structure, solute distribution, and material properties were discussed. Results show that the solidification behavior of the Cu-Fe alloy is influenced through changes in the temperature and solute fields in the presence of an applied RMF. The Fe dendrites were refined and transformed into rosettes or spherical grains under forced convection. The solute distribution in the Cu-rich and Fe-rich phases was changed because of the variation of the supercooling degree and the solidification rate. Further, the variation in solute distribution affected the strengthening and conduction mechanisms of the material.

  15. Proposal for Reduction of Calibration Process in Reference to Trip Distribution Method

    Directory of Open Access Journals (Sweden)

    Katalin Tanczos

    2009-01-01

    Full Text Available The macro models applied nowadays to describe the urban environment, or parts of them, can be traced back to a four-step modelling process. The paper focuses on the trip distribution process (the 2nd step) because of its significant calibration requirements. By reducing these, it is possible to make the entire modelling process more reliable (dependent upon the reliability of the available databases).
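
    To make the second step concrete: trip distribution is commonly formulated as a doubly-constrained gravity model balanced with the iterative Furness procedure. The sketch below is that generic textbook formulation with invented zone data, not the calibration-reduction method proposed in the paper:

        import numpy as np

        P = np.array([400.0, 300.0, 300.0])   # productions per origin zone
        A = np.array([500.0, 250.0, 250.0])   # attractions per destination zone
        cost = np.array([[1.0, 2.0, 3.0],
                         [2.0, 1.0, 2.0],
                         [3.0, 2.0, 1.0]])    # generalized travel costs
        beta = 0.5                            # calibrated deterrence parameter
        T = np.exp(-beta * cost)              # deterrence function f(c_ij)

        for _ in range(100):                  # Furness (biproportional) balancing
            T *= (P / T.sum(axis=1))[:, None] # match production totals
            T *= (A / T.sum(axis=0))[None, :] # match attraction totals

        print(np.round(T, 1))                 # balanced trip matrix T_ij

    The deterrence parameter beta is the main quantity that must be calibrated against observed trip data, which is exactly the effort the paper proposes to reduce.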

  16. National Space Transportation System telemetry distribution and processing, NASA-JFK Space Center/Cape Canaveral

    Science.gov (United States)

    Jenkins, George

    1986-01-01

    Prelaunch, launch, mission, and landing distribution of RF and hardline uplink/downlink information between Space Shuttle Orbiter/cargo elements, tracking antennas, and control centers at JSC, KSC, MSFC, GSFC, ESMC/RCC, and Sunnyvale is presented as functional block diagrams. Typical mismatch problems encountered during spacecraft-to-project control center telemetry transmissions are listed, along with new items for future support enhancement.

  17. The average angular distribution of emitted particles in multi-step compound processes

    International Nuclear Information System (INIS)

    Bonetti, R.; Carlson, B.V.; Hussein, M.S.; Toledo, A.S. de

    1983-05-01

    A simple model for the differential cross-section that describes the angular distribution of emitted particles in heavy-ion induced multi-step compound reactions is constructed. It is suggested that a careful analysis of the deviations of the experimental data from the pure Hauser-Feshbach behaviour may shed light on the physical nature of the pre-compound, heavy-ion configuration. (Author) [pt

  18. Processes determining the marine alkalinity and calcium carbonate saturation state distributions

    OpenAIRE

    Carter, B. R.; Toggweiler, J. R.; Key, R. M.; Sarmiento, J. L.

    2014-01-01

    We introduce a composite tracer for the marine system, Alk*, that has a global distribution primarily determined by CaCO3 precipitation and dissolution. Alk* is also affected by riverine alkalinity from dissolved terrestrial carbonate minerals. We estimate that the Arctic receives approximately twice as much riverine alkalinity per unit area as the Atlantic, and 8 times as much as the other oceans. Riverine inputs broadly elevate Alk* in the Arctic surface and particularly near riv...

  19. Picoseconds pulse generation and pulse width determination processes of a distributed feedback dye laser

    International Nuclear Information System (INIS)

    Abdul Ghani, B.; Hammadi, M.

    2004-08-01

    A mathematical model has been developed to describe the dynamic emission of a Nd-glass laser, a distributed feedback dye laser (DFDL), and the periodical grating temperature. The suggested model allows the investigation of the time behavior of the Nd-glass laser and DFDL pulses. Moreover, it allows studying the effect of the input parameters of the Nd-glass laser on the spectral characteristics of the output DFDL pulses, such as pulse width, delay time, and time separation.

  20. Effects of electric field and charge distribution on nanoelectronic processes involving conducting polymers

    International Nuclear Information System (INIS)

    Ramos, Marta M.D.; Correia, Helena M.G.

    2006-01-01

    The injection of charge carriers in conducting polymer layers gives rise to local electric fields which should have serious implications on the charge transport through the polymer layer. The charge distribution and the related electric field inside the ensemble of polymer molecules, with different molecular arrangements at nanoscale, determine whether or not intra-molecular charge transport takes place and the preferential direction for charge hopping between neighbouring molecules. Consequently, these factors play a significant role in the competition between current flow, charge trapping and recombination in polymer-based electronic devices. By suitable Monte Carlo calculations, we simulated the continuous injection of electrons and holes into polymer layers with different microstructures and followed their transport through those polymer networks. Results of these simulations provided a detailed picture of charge and electric field distribution in the polymer layer and allowed us to assess the consequences for current transport and recombination efficiency as well as the distribution of recombination events within the polymer film. In the steady state we found an accumulation of electrons and holes near the collecting electrodes giving rise to an internal electric field which is greater than the external applied field close to the electrodes and lower than the one in the central region of the polymer layer. We also found that a strong variation of electric field inside the polymer layer leads to an increase of recombination events in regions inside the polymer layer where the values of the internal electric field are lower
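
    A heavily reduced, one-dimensional toy version of such a Monte Carlo calculation is sketched below (a didactic sketch with assumed parameters, not the authors' molecular-scale model): carriers hop on a lattice, the local field is the applied field plus a space-charge term, and electron-hole pairs meeting on a site recombine:

        import numpy as np

        rng = np.random.default_rng(3)
        L_SITES = 100
        E_APPLIED = 1.0   # applied field, arbitrary units (assumed)
        COUPLING = 0.05   # space-charge field strength (assumed)

        def hop(positions, sign, E_local):
            """Move each carrier one site, biased by sign * local field."""
            out = []
            for x in positions:
                p_right = 1.0 / (1.0 + np.exp(-sign * E_local[x]))
                x += 1 if rng.random() < p_right else -1
                if 0 <= x < L_SITES:
                    out.append(x)   # carriers leaving the layer are collected
            return out

        electrons, holes, recombined = [], [], 0
        for _ in range(2000):
            electrons.append(L_SITES - 1)  # electron injected at the cathode
            holes.append(0)                # hole injected at the anode

            net = np.zeros(L_SITES)        # net charge per site
            np.add.at(net, holes, 1.0)
            np.add.at(net, electrons, -1.0)
            # 1D Gauss's law: the field at x shifts with the charge to its left.
            E_local = E_APPLIED + COUPLING * np.cumsum(net)

            holes = hop(holes, +1, E_local)          # holes drift with the field
            electrons = hop(electrons, -1, E_local)  # electrons drift against it

            remaining = []
            for x in holes:                # pairs sharing a site recombine
                if x in electrons:
                    electrons.remove(x)
                    recombined += 1
                else:
                    remaining.append(x)
            holes = remaining

        print(f"recombination events: {recombined}, "
              f"carriers still in layer: {len(electrons) + len(holes)}")

    Even this toy model reproduces the qualitative point of the abstract: accumulated space charge reshapes the internal field, which in turn shifts where recombination events occur.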

  1. A framework for analysis of abortive colony size distributions using a model of branching processes in irradiated normal human fibroblasts.

    Science.gov (United States)

    Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Ouchi, Noriyuki B; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki

    2013-01-01

    Clonogenicity gives important information about the cellular reproductive potential following ionizing irradiation, but abortive colonies that fail to continue to grow remain poorly characterized. It was recently reported that the fraction of abortive colonies increases with increasing dose. Thus, we set out to investigate the production kinetics of abortive colonies using a model of branching processes. We first plotted the experimentally determined colony size distribution of abortive colonies in irradiated normal human fibroblasts, and found a linear relationship on the log-linear or log-log plot. By applying the simple model of branching processes to the linear relationship, we found persistent reproductive cell death (RCD) over several generations following irradiation. To verify the estimated probability of RCD, the abortive colony size distribution (≤ 15 cells) and the surviving fraction were simulated by a Monte Carlo computational approach for colony expansion. Parameters estimated from the log-log fit demonstrated better performance in both simulations than those from the log-linear fit. Radiation-induced RCD, i.e. excess probability, lasted over 16 generations and mainly consisted of two components, in the early and late phases; the surviving fraction was sensitive to the RCD probability over 5 generations, whereas the abortive colony size distribution was robust against it. These results suggest that, whereas short-term RCD is critical to the abortive colony size distribution, long-lasting RCD is important for the dose response of the surviving fraction. Our present model provides a single framework for understanding the behavior of primary cell colonies in culture following irradiation.
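
    In the spirit of the colony-expansion simulation described above (parameter values are assumed, not those estimated in the paper), a minimal Monte Carlo sketch: in each generation every cell either suffers reproductive cell death with probability p_rcd or divides, and colonies with 15 cells or fewer are scored as abortive:

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(2)

        def grow_colony(p_rcd, generations=16, cap=1000):
            n = 1
            for _ in range(generations):
                if n == 0 or n > cap:   # extinct, or safely past abortive sizes
                    break
                survivors = rng.binomial(n, 1.0 - p_rcd)
                n = 2 * survivors       # surviving cells divide
            return n

        p_rcd = 0.15                    # per-generation RCD probability (assumed)
        sizes = [grow_colony(p_rcd) for _ in range(20_000)]

        abortive = Counter(s for s in sizes if 1 <= s <= 15)
        surviving = np.mean([s > 50 for s in sizes])   # ">50 cells" criterion (assumed)
        print("abortive colony sizes:", dict(sorted(abortive.items())))
        print(f"surviving fraction: {surviving:.3f}")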

  2. Separating electroweak and strong interactions in Drell-Yan processes at LHC: leptons angular distributions and reference frames

    International Nuclear Information System (INIS)

    Richter-Was, E.; Was, Z.

    2016-01-01

    Among the physics goals of the LHC experiments, precision tests of the Standard Model in the strong and electroweak sectors play an important role. Because of the nature of proton-proton processes, observables based on the measurement of the direction and energy of leptons provide the most precise signatures. In the present paper, we concentrate on the angular distribution of Drell-Yan process leptons in the lepton-pair rest frame. The vector nature of the intermediate state imposes that the distributions are, to good precision, described by spherical polynomials of at most second order. We show that with a proper choice of the coordinate frames, only one coefficient in this polynomial decomposition remains sizable, even in the presence of one or two high-p_T jets. The necessary stochastic choice of the frames relies on probabilities independent of any coupling constants. This remains true when one or two partons accompany the lepton pairs. In this way, electroweak effects can be better separated from strong-interaction ones, for the benefit of the interpretation of the measurements. Our study exploits properties of single-gluon-emission matrix elements, which are clearly visible if a conveniently chosen form of their representation is used. We also rely on distributions obtained from matrix-element-based Monte Carlo generated samples of events with two leptons and up to two additional partons in test samples. The partons of the incoming colliding protons are distributed according to PDFs and are strictly collinear to the corresponding beams. (orig.)
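
    For reference, the second-order decomposition alluded to above is conventionally written (e.g. in the Collins-Soper frame; this expansion is standard in the Drell-Yan literature rather than quoted from the paper) as

        \frac{d\sigma}{d\Omega} \propto (1 + \cos^2\theta)
          + \tfrac{1}{2} A_0 (1 - 3\cos^2\theta) + A_1 \sin 2\theta \cos\phi
          + \tfrac{1}{2} A_2 \sin^2\theta \cos 2\phi + A_3 \sin\theta \cos\phi
          + A_4 \cos\theta + A_5 \sin^2\theta \sin 2\phi
          + A_6 \sin 2\theta \sin\phi + A_7 \sin\theta \sin\phi,

    with the frame choice determining which of the coefficients A_0 ... A_7 remain sizable.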

  3. Seasonal and spatial evolution of trihalomethanes in a drinking water distribution system according to the treatment process.

    Science.gov (United States)

    Domínguez-Tello, A; Arias-Borrego, A; García-Barrera, Tamara; Gómez-Ariza, J L

    2015-11-01

    This paper comparatively shows the influence of four water treatment processes on the formation of trihalomethanes (THMs) in a water distribution system. The study was performed from February 2005 to January 2012 with analytical data from 600 samples taken at the Aljaraque water treatment plant (WTP) and 16 locations along the water distribution system (WDS) in the region of Andévalo and the coast of Huelva (southwest Spain), a region with significant seasonal and population changes. The comparison of results for the four processes studied indicated a clear link between the treatment process and the formation of THMs along the WDS. The most effective treatment process is preozonation and activated carbon filtration (P3), which is also the most stable under summer temperatures. Experiments also show low levels of THMs with the conventional process of preoxidation with potassium permanganate (P4), delaying the chlorination to the end of the WTP; however, this simple and economical treatment process is less effective and less stable than P3. In this study, strong seasonal variations were found (THM levels increase from winter to summer by a factor of 1.17 to 1.85), as well as a strong spatial variation (a factor of 1.1 to 1.7 from the WTP to the end points of the WDS), which largely depends on the treatment process applied. There was also a strong correlation between THM levels and water temperature, contact time and pH. On the other hand, it was found that THM formation is not proportional to the applied chlorine dose in the treatment process, but there is a direct relationship with the accumulated dose of chlorine. Finally, predictive models based on multiple linear regressions are proposed for each treatment process.
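
    The kind of predictive model proposed above can be sketched as an ordinary least-squares fit (the predictors follow the correlations reported, i.e. temperature, pH, contact time and accumulated chlorine dose, but all data below are synthetic, not the Aljaraque measurements):

        import numpy as np

        rng = np.random.default_rng(4)
        n = 200
        temp = rng.uniform(10, 30, n)       # water temperature [degC]
        ph = rng.uniform(6.8, 8.2, n)       # pH
        contact = rng.uniform(5, 120, n)    # contact time [h]
        cl_acc = rng.uniform(0.5, 3.0, n)   # accumulated chlorine dose [mg/L]
        thm = (2.0 * temp + 15.0 * ph + 0.3 * contact + 20.0 * cl_acc
               + rng.normal(0, 10, n))      # synthetic THM response [ug/L]

        X = np.column_stack([np.ones(n), temp, ph, contact, cl_acc])
        beta, *_ = np.linalg.lstsq(X, thm, rcond=None)   # OLS coefficients
        resid = thm - X @ beta
        r2 = 1 - (resid**2).sum() / ((thm - thm.mean())**2).sum()
        print("coefficients:", np.round(beta, 2), " R^2 =", round(r2, 3))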

  4. Fuel cell plates with skewed process channels for uniform distribution of stack compression load

    Science.gov (United States)

    Granata, Jr., Samuel J.; Woodle, Boyd M.

    1989-01-01

    An electrochemical fuel cell includes an anode electrode, a cathode electrode, an electrolyte matrix sandwiched between electrodes, and a pair of plates above and below the electrodes. The plate above the electrodes has a lower surface with a first group of process gas flow channels formed thereon and the plate below the electrodes has an upper surface with a second group of process gas flow channels formed thereon. The channels of each group extend generally parallel to one another. The improvement comprises the process gas flow channels on the lower surface of the plate above the anode electrode and the process gas flow channels on the upper surface of the plate below the cathode electrode being skewed in opposite directions such that contact areas of the surfaces of the plates through the electrodes are formed in crisscross arrangements. Also, the plates have at least one groove in areas of the surfaces thereof where the channels are absent for holding process gas and increasing electrochemical activity of the fuel cell. The groove in each plate surface intersects with the process channels therein. Also, the opposite surfaces of a bipolar plate for a fuel cell contain first and second arrangements of process gas flow channels in the respective surfaces which are skewed the same amount in opposite directions relative to the longitudinal centerline of the plate.

  5. Enterococcus faecium biofilm formation: identification of major autolysin AtlAEfm, associated Acm surface localization, and AtlAEfm-independent extracellular DNA Release.

    Science.gov (United States)

    Paganelli, Fernanda L; Willems, Rob J L; Jansen, Pamela; Hendrickx, Antoni; Zhang, Xinglin; Bonten, Marc J M; Leavis, Helen L

    2013-04-16

    Enterococcus faecium is an important multidrug-resistant nosocomial pathogen causing biofilm-mediated infections in patients with medical devices. Insight into E. faecium biofilm pathogenesis is pivotal for the development of new strategies to prevent and treat these infections. In several bacteria, a major autolysin is essential for extracellular DNA (eDNA) release in the biofilm matrix, contributing to biofilm attachment and stability. In this study, we identified and functionally characterized the major autolysin of E. faecium E1162 by a bioinformatic genome screen followed by insertional gene disruption of six putative autolysin genes. Insertional inactivation of locus tag EfmE1162_2692 resulted in resistance to lysis, reduced eDNA release, deficient cell attachment, decreased biofilm, decreased cell wall hydrolysis, and significant chaining compared to that of the wild type. Therefore, locus tag EfmE1162_2692 was considered the major autolysin in E. faecium and renamed atlAEfm. In addition, AtlAEfm was implicated in cell surface exposure of Acm, a virulence factor in E. faecium, and thereby facilitates binding to collagen types I and IV. This is a novel feature of enterococcal autolysins not described previously. Furthermore, we identified (and localized) autolysin-independent DNA release in E. faecium that contributes to cell-cell interactions in the atlAEfm mutant and is important for cell separation. In conclusion, AtlAEfm is the major autolysin in E. faecium and contributes to biofilm stability and Acm localization, making AtlAEfm a promising target for treatment of E. faecium biofilm-mediated infections. IMPORTANCE Nosocomial infections caused by Enterococcus faecium have rapidly increased, and treatment options have become more limited. This is due not only to increasing resistance to antibiotics but also to biofilm-associated infections. DNA is released in biofilm matrix via cell lysis, caused by autolysin, and acts as a matrix stabilizer. In this study

  6. PROCESSING, CATALOGUING AND DISTRIBUTION OF UAS IMAGES IN NEAR REAL TIME

    Directory of Open Access Journals (Sweden)

    I. Runkel

    2013-08-01

    Full Text Available Why are UAS such a hype? UAS make data capture flexible, fast and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture remains valid up to the end of the processing chain, all intermediate steps like data processing and data dissemination to the customer need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution. This is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device that is hooked up to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The metadata is converted into an ISO-conformant format and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, i.e. the images, as OGC-conformant services to the customer. Within seconds the UAV images are ready to use for GIS applications, image processing or direct interpretation via web applications, wherever you want. The whole processing chain is built in a generic manner. It can be adapted to a multitude of applications. The UAV imagery can be processed and catalogued as single orthoimages or as an image mosaic. Furthermore, image data from various cameras can be fused. By using WPS (web processing services), image enhancement and image analysis workflows like change detection layers can be calculated and provided to the image analysts. The WPS processing runs directly on the raster data management server. The image analyst has no data and no software on his local computer. This workflow is proven to be fast, stable and accurate. It is designed to support time critical applications for security

  7. Processing, Cataloguing and Distribution of Uas Images in Near Real Time

    Science.gov (United States)

    Runkel, I.

    2013-08-01

    Why are UAS such a hype? UAS make data capture flexible, fast and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture remains valid up to the end of the processing chain, all intermediate steps like data processing and data dissemination to the customer need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution. This is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device that is hooked up to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The metadata is converted into an ISO-conformant format and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, i.e. the images, as OGC-conformant services to the customer. Within seconds the UAV images are ready to use for GIS applications, image processing or direct interpretation via web applications, wherever you want. The whole processing chain is built in a generic manner. It can be adapted to a multitude of applications. The UAV imagery can be processed and catalogued as single orthoimages or as an image mosaic. Furthermore, image data from various cameras can be fused. By using WPS (web processing services), image enhancement and image analysis workflows like change detection layers can be calculated and provided to the image analysts. The WPS processing runs directly on the raster data management server. The image analyst has no data and no software on his local computer. This workflow is proven to be fast, stable and accurate. It is designed to support time critical applications for security demands - the images

  8. Integrated Sensing and Processing (ISP) Phase II: Demonstration and Evaluation for Distributed Sensor Networks and Missile Seeker Systems

    Science.gov (United States)

    2007-02-28


  9. NEW RARE EARTH ELEMENT ABUNDANCE DISTRIBUTIONS FOR THE SUN AND FIVE r-PROCESS-RICH VERY METAL-POOR STARS

    International Nuclear Information System (INIS)

    Sneden, Christopher; Lawler, James E.; Den Hartog, Elizabeth A.; Cowan, John J.; Ivans, Inese I.

    2009-01-01

    We have derived new abundances of the rare earth elements Pr, Dy, Tm, Yb, and Lu for the solar photosphere and for five very metal-poor, neutron-capture r-process-rich giant stars. The photospheric values for all five elements are in good agreement with meteoritic abundances. For the low-metallicity sample, these abundances have been combined with new Ce abundances from a companion paper, and reconsideration of a few other elements in individual stars, to produce internally consistent Ba, rare earth, and Hf (56 ≤ Z ≤ 72) element distributions. These have been used in a critical comparison between stellar and solar r-process abundance mixes.

  10. Negotiation processes in urban redevelopment projects: dealing with conflicts by balancing integrative and distributive approaches

    NARCIS (Netherlands)

    Baarveld, Marlijn; Smit, Marnix; Dewulf, Geert P.M.R.

    2015-01-01

    Dealing with conflict through dialogue receives considerable attention in current planning approaches. However, debate and negotiation are also inevitable features in the planning of urban redevelopment projects. Insight into the negotiation process contributes to current planning practice as

  11. IOC-UNEP review meeting on oceanographic processes of transport and distribution of pollutants in the sea

    International Nuclear Information System (INIS)

    1991-01-01

    The IOC-UNEP Review Meeting on Oceanographic Processes of Transfer and Distribution of Pollutants in the Sea was opened at the Ruder Boskovic Institute, Zagreb, Yugoslavia, on Monday, 15 May 1989. Papers presented at the meeting dealt with the physical and geochemical processes in sea water and sediment involved in the transport, mixing and dispersal of pollutants. The importance of mesoscale eddies and gyres in the open sea, wind-driven currents and upwelling events in the coastal zone, and thermohaline processes in semi-enclosed bays and estuaries was recognized. There is strong evidence that non-local forcing can drive circulation in the coastal area. Concentrations, horizontal and vertical distributions, and transport of pollutants were investigated and presented for a number of coastal areas. Riverine and atmospheric inputs of different pollutants to the western Mediterranean were discussed. Reports on two ongoing nationally/internationally co-ordinated projects (MEDMODEL, EROS 2000) were presented. Discussions during the meeting enabled an exchange of ideas between specialists in different disciplines, which is expected to promote a future interdisciplinary approach in this field. The meeting recognized the importance of physical oceanographic studies in investigating the transfer and distribution of pollutants in the sea and, in view of the importance of the interdisciplinary approach and of bilateral and/or multilateral co-operation, a number of recommendations were adopted.

  12. Electric ignition energy evaluation and the energy distribution structure of energy released in electrostatic discharge process

    International Nuclear Information System (INIS)

    Liu Qingming; Huang Jinxiang; Shao Huige; Zhang Yunming

    2017-01-01

    Ignition energy is one of the important parameters of flammable materials, and evaluating ignition energy precisely is essential to the safety of the process industries and to combustion science and technology. Using an electric spark discharge test system, a series of electric spark discharge experiments was conducted with capacitor-stored energies in the ranges of 10 J, 100 J, and 1000 J. Evaluation methods for the energy consumed by the electric spark, the wire, and the switch during the capacitor discharge process were studied, respectively. The resistance of the wire, the switch, and the plasma between the electrodes was evaluated by different methods, and an optimized evaluation method was obtained. The electric energy consumed by the wire, the electric switch, and the spark-induced plasma between the electrodes was obtained, and the structure of the capacitor-released energy was analyzed. The dynamic behaviour and the characteristic parameters (maximum power, duration) of the electric spark discharge process were analyzed. Experimental results showed that the energy consumed by the electric spark accounts for only 8%–14% of the capacitor-released energy. With increasing capacitor-released energy, the discharge process lasts longer, and the plasma accounts for a larger share of the capacitor-released energy. The power of the electric spark varies with time as a damped sinusoid, whose period and maximum value increase with the capacitor-released energy. (paper)
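
    The damped-sinusoid behaviour noted in the last sentence is what an ideal underdamped series RLC discharge predicts; below is a sketch with assumed circuit values (the paper's actual circuit parameters are not given here):

        import numpy as np

        C = 10e-6     # capacitance [F] (assumed)
        L = 2e-6      # loop inductance [H] (assumed)
        R = 0.5       # total series resistance [ohm] (assumed)
        V0 = 1000.0   # initial voltage [V]; stored energy = C*V0^2/2 = 5 J

        alpha = R / (2 * L)                          # damping coefficient
        omega = np.sqrt(1.0 / (L * C) - alpha**2)    # oscillation frequency

        t = np.linspace(0.0, 50e-6, 5001)
        i = V0 / (omega * L) * np.exp(-alpha * t) * np.sin(omega * t)
        p = R * i**2                                 # instantaneous power in R

        dissipated = float(np.sum((p[1:] + p[:-1]) * np.diff(t)) / 2.0)
        print(f"stored {0.5 * C * V0**2:.2f} J, dissipated in R {dissipated:.2f} J")
        print(f"period {2 * np.pi / omega * 1e6:.1f} us, peak current {i.max():.0f} A")

    In this idealized circuit all of the stored energy eventually appears in the total series resistance; partitioning that resistance between wire, switch and spark plasma is exactly the accounting problem the paper addresses experimentally.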

  13. Odysseus/DFS: Integration of DBMS and Distributed File System for Transaction Processing of Big Data

    OpenAIRE

    Kim, Jun-Sung; Whang, Kyu-Young; Kwon, Hyuk-Yoon; Song, Il-Yeol

    2014-01-01

    The relational DBMS (RDBMS) has been widely used since it supports various high-level functionalities such as SQL, schemas, indexes, and transactions that do not exist in the O/S file system. But the recent advent of big data technology facilitates the development of new systems that sacrifice DBMS functionality in order to manage large-scale data efficiently. Those so-called NoSQL systems use a distributed file system, which supports scalability and reliability. They support scalability of the...

  14. Distributed Leadership in Organizational Change Processes: A Qualitative Study in Public Hospital Units

    DEFF Research Database (Denmark)

    Kjeldsen, Anne Mette; Jonasson, Charlotte; Ovesen, Maria

    2015-01-01

    This paper proposes that the emergence and boundaries of distributed leadership (DL) are developed in a dynamic interplay with planned as well as emergent organizational change. The empirical findings are based on a qualitative, longitudinal case study with interviews conducted at two different....../non-routine, various goals, and organizational planning affect a simultaneous widening or restriction of the local DL. In return, such local DL also leads to ongoing changes in the form of novel work routines for improved collaboration. Moreover, the findings show that restrictions of DL are in some cases considered...

  15. A distributed water level network in ephemeral river reaches to identify hydrological processes within anthropogenic catchments

    Science.gov (United States)

    Sarrazin, B.; Braud, I.; Lagouy, M.; Bailly, J. S.; Puech, C.; Ayroles, H.

    2009-04-01

    In order to study the impact of land use change on the water cycle, distributed hydrological models are increasingly used, because they are able to take into account land surface heterogeneity and its evolution under anthropogenic pressure. These models provide continuous distributed simulations of streamflow, runoff, soil moisture, etc., which, ideally, should be evaluated against continuous distributed measurements taken at various scales and located in nested sub-catchments. Distributed networks of streamflow gauging stations are in general scarce and very expensive to maintain. Furthermore, they can hardly be installed in the upstream parts of catchments, where river beds are not well defined. In this paper, we present an alternative to standard streamflow gauging station networks, based on self-powered, high-resolution water level sensors using a capacitive water-height data logger. One of their advantages is that they can be installed even in ephemeral reaches, from channel head locations to high-order streams. Furthermore, these innovative and easily adaptable low-cost sensors offer the possibility of developing, in the near future, a wireless network application. Such a network, including 15 sensors, has been set up on nested watersheds in small and intermittent streams of a 7 km² catchment located in the mountainous "Monts du Lyonnais" area, close to the city of Lyon, France. The land use of this catchment is mostly pasture, crop and forest, but the catchment is significantly affected by human activities through the existence of a dense network of roads and paths and urbanized areas. The equipment provides water level surveys during precipitation events in the hydrological network at a very fine time step (2 min). Water levels can be related to runoff production and catchment response as a function of scale. This response will depend, amongst others, on variable soil water storage capacity, physiographic data and characteristics of

  16. The process of developing distributed-efficacy and social practice in the context of ‘ending AIDS’

    Directory of Open Access Journals (Sweden)

    Christopher Burman

    2015-07-01

    Full Text Available Introduction: This article reflects on data that emanated from a programme evaluation and focuses on a concept we label 'distributed-efficacy'. We argue that the process of developing and sustaining 'distributed-efficacy' is complex and indeterminate, and thus difficult to manage or predict. We situate the discussion within the context of UNAIDS' recent strategy, Vision 95:95:95, to 'end AIDS' by 2030, which the South African National Department of Health is currently rolling out across the country. Method: A qualitative method was applied. It included a Value Network Analysis, the Most Significant Change technique, and a thematic content analysis of factors associated with a 'competent community' model. During the analysis it was noticed that there were unexpected references to a shift in social relations. This prompted a re-analysis of the narrative findings using a second thematic content analysis focused on factors associated with complexity science, the environmental sciences, and shifts in social relations. Findings: The efficacy associated with new social practices relating to HIV risk-reduction was distributed amongst networks that included mother-son networks and participant-facilitator networks, and included a shift in social relations within these networks. Discussion: It is suggested that the emergence of new social practices requires the establishment of 'distributed-efficacy', which facilitates localised social sanctioning, sometimes including shifts in social relations; this process is a 'complex', dialectical interplay between 'agency' and 'structure'. Conclusion: The ambition of 'ending AIDS' by 2030 represents a compressed timeframe that will require the uptake of multiple new bio-social practices. This will involve many nonlinear, complex challenges, and the process of developing 'distributed-efficacy' could play a role in this process. Further research into the factors we

  17. Distribution and stability of Aflatoxin M1 during processing and ripening of traditional white pickled cheese.

    Science.gov (United States)

    Oruc, H H; Cibik, R; Yilmaz, E; Kalkanli, O

    2006-02-01

    The distribution of aflatoxin M(1) (AFM(1)) has been studied between curd, whey, cheese and pickle samples of Turkish white pickled cheese produced according to traditional techniques and its stability studied during the ripening period. Cheeses were produced in three cheese-making trials using raw milk that was artificially contaminated with AFM(1) at the levels of 50, 250 and 750 ng/l and allowed to ripen for three months. AFM(1) determinations were carried out at intervals by LC with fluorescence detection after immunoaffinity column clean-up. During the syneresis of the cheese a proportionately high concentration of AFM(1) remained in curd and for each trial the level was 3.6, 3.8 and 4.0 times higher than levels in milk. At the end of the ripening, the distribution of AFM(1) for cheese/whey + brine samples was 0.9, 1.0 and 1.3 for first, second and third spiking respectively indicating that nearly half of the AFM(1) remained in cheese. It has been found that only 2-4% of the initial spiking of AFM(1) transferred into the brine solution. During the ripening period AFM(1) levels remained constant suggesting that AFM(1) was quite stable during manufacturing and ripening.

  18. Concentration and distribution of elements in plants and soils near phosphate processing factories, Pocatello, Idaho

    International Nuclear Information System (INIS)

    Severson, R.C.; Gough, L.P.

    1976-01-01

    The processing of phosphatic shale near Pocatello, Idaho, has a direct influence on the element content of local vegetation and soil. Samples of big sagebrush (Artemisia tridentata Nutt. subsp. tridentata) and cheatgrass (Bromus tectorum L.) show important negative relations between the concentrations of certain elements (Cd, Cr, F, Ni, P, Se, U, V, and Zn) and distance from the phosphate processing factories. Plant tissues within 3 km of the processing factories contain unusually high amounts of these elements, except for Ni and Se. Important negative relations with distance were also found for certain elements (Be, F, Fe, K, Li, Pb, Rb, Th, and Zn) in A-horizon soil. The amounts of seven elements (Be, F, Li, Pb, Rb, Th, and Zn) contributed to the upper 5 cm of the soil by phosphate processing, as well as of two additional elements (U and V) suspected of being contributed to the soil, were estimated, with F showing the greatest addition (about 300 kg/ha) to soils as far as 4 km downwind from the factories. The greatest number of important relations for both plants and soils was found downwind (northeast) of the processing factories.

  19. Effect of processing conditions on residual stress distributions by bead-on-plate welding after surface machining

    International Nuclear Information System (INIS)

    Ihara, Ryohei; Mochizuki, Masahito

    2014-01-01

    Residual stress is an important factor in stress corrosion cracking (SCC), which has been observed near welded zones in nuclear power plants. Surface residual stress in particular is significant for SCC initiation. In the joining of pipes, butt welding is conducted after surface machining. Residual stress is generated by both processes, and the residual stress distribution due to surface machining is altered by the subsequent butt welding. In a previous paper, the authors reported that the residual stress distribution generated by bead-on-plate welding after surface machining has a local maximum near the weld metal. The local maximum residual stress reaches approximately 900 MPa, which exceeds the stress threshold for SCC initiation. Therefore, for the safety improvement of nuclear power plants, a study of the local maximum residual stress is important. In this study, the effect of surface machining and welding conditions on the residual stress distribution generated by welding after surface machining was investigated. Surface machining using a lathe and bead-on-plate welding with a tungsten inert gas (TIG) arc under various conditions were conducted on plate specimens made of SUS316L. Residual stress distributions were then measured by the X-ray diffraction method (XRD). As a result, the residual stress distributions have a local maximum near the weld metal in all specimens. The values of the local maximum residual stress are almost the same. The location of the local maximum residual stress varies with the welding conditions. It can be considered that the local maximum residual stress is generated by the same mechanism as the welding residual stress in the surface-machined layer, which has a high yield stress. (author)

  20. Monitoring and control of amygdala neurofeedback involves distributed information processing in the human brain.

    Science.gov (United States)

    Paret, Christian; Zähringer, Jenny; Ruf, Matthias; Gerchen, Martin Fungisai; Mall, Stephanie; Hendler, Talma; Schmahl, Christian; Ende, Gabriele

    2018-03-30

    Brain-computer interfaces provide conscious access to neural activity by means of brain-derived feedback ("neurofeedback"). Monitoring and controlling feedback are two processes necessary for effective neurofeedback therapy, yet their underlying functional neuroanatomy is still being debated. In this study, healthy subjects received visual feedback from their amygdala response to negative pictures. Activation and functional connectivity were analyzed to disentangle the role of brain regions in the different processes. Feedback monitoring was mapped to the thalamus, ventromedial prefrontal cortex (vmPFC), ventral striatum (VS), and rostral PFC (rPFC). The VS responded to feedback corresponding to instructions, while rPFC activity differentiated between conditions and predicted amygdala regulation. Control involved the lateral PFC, anterior cingulate, and insula. Monitoring and control activity overlapped in the VS and thalamus. Extending current neural models of neurofeedback, this study introduces monitoring and control of feedback as anatomically dissociated processes, and suggests their important role in voluntary neuromodulation. © 2018 Wiley Periodicals, Inc.