WorldWideScience

Sample records for distributed processing acm

  1. Distribution of the ACME-arcA gene among meticillin-resistant Staphylococcus haemolyticus and identification of a novel ccr allotype in ACME-arcA-positive isolates.

    Science.gov (United States)

    Pi, Borui; Yu, Meihong; Chen, Yagang; Yu, Yunsong; Li, Lanjuan

    2009-06-01

    The aim of this study was to investigate the prevalence and characteristics of ACME (arginine catabolic mobile element)-arcA-positive isolates among meticillin-resistant Staphylococcus haemolyticus (MRSH). ACME-arcA, native arcA and SCCmec elements were detected by PCR. Susceptibilities to 10 antimicrobial agents were compared between ACME-arcA-positive and -negative isolates by the chi-square test. PFGE was used to investigate the clonal relatedness of ACME-arcA-positive isolates. The phylogenetic relationships of ACME-arcA and native arcA were analysed using the neighbour-joining method implemented in the MEGA software. A total of 42 (47.7%) of 88 isolates, distributed across 13 PFGE types, were positive for the ACME-arcA gene. There were no significant differences in antimicrobial susceptibility between ACME-arcA-positive and -negative isolates. A novel ccr allotype (ccrAB(SHP)) was identified in ACME-arcA-positive isolates. Among the 42 ACME-arcA-positive isolates, 8 harboured SCCmec V; 8 harboured a class C1 mec complex and ccrAB(SHP); and 22 harbouring a class C1 mec complex and 4 harbouring a class C2 mec complex were negative for all known ccr allotypes. This is the first report of ACME-arcA-positive isolates in MRSH; their high prevalence and clonal diversity suggest mobility of ACME within MRSH. The results from this study indicate that MRSH is likely to be one of the potential reservoirs of ACME for Staphylococcus aureus.
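
    The susceptibility comparison described above (ACME-arcA-positive vs. -negative isolates by chi-square test) can be sketched as follows; the counts below are hypothetical illustrations, not data from the study:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], e.g. resistant/susceptible counts in the
    ACME-arcA-positive and -negative groups (hypothetical data)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n      # expected count under independence
        chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical counts: 20/42 positive isolates resistant vs. 22/46 negative
stat = chi_square_2x2(20, 22, 22, 24)
print(stat > 3.841)  # → False: below the df=1, alpha=0.05 critical value
```

    With one degree of freedom, the statistic is compared against the 3.841 critical value at the 0.05 level; a value below it corresponds to the "no significant difference" conclusion reported in the abstract.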

  2. ACME-III and ACME-IV Final Campaign Reports

    Energy Technology Data Exchange (ETDEWEB)

    Biraud, S. C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-01-01

    The goals of the Atmospheric Radiation Measurement (ARM) Climate Research Facility’s third and fourth Airborne Carbon Measurements (ACME) field campaigns, ACME-III and ACME-IV, are: 1) to measure and model the exchange of CO2, water vapor, and other greenhouse gases by the natural, agricultural, and industrial ecosystems of the Southern Great Plains (SGP) region; 2) to develop quantitative approaches to relate these local fluxes to the concentrations of greenhouse gases measured at the Central Facility tower and in the atmospheric column above the ARM SGP Central Facility; 3) to develop and test bottom-up measurement and modeling approaches to estimate regional-scale carbon balances; and 4) to develop and test inverse modeling approaches to estimate regional-scale carbon balance and anthropogenic sources over continental regions. Regular soundings of the atmosphere from near the surface into the mid-troposphere are essential for this research.

  3. ACM CCS 2013-2015 Student Travel Support

    Science.gov (United States)

    2016-10-29

    Under the ARO-funded effort titled “ACM CCS 2013-2015 Student Travel Support,” from 2013 to 2015, George...Computer and Communications Security (ACM CCS). The views, opinions and/or findings contained in this report are those of the author(s) and should not...

  4. ACM Bundles on Del Pezzo surfaces

    Directory of Open Access Journals (Sweden)

    Joan Pons-Llopis

    2009-11-01

    ACM rank 1 bundles on del Pezzo surfaces are classified in terms of the rational normal curves that they contain. A complete list of ACM line bundles is provided. Moreover, for any del Pezzo surface X of degree at most six and for any n ≥ 2, we construct a family of dimension ≥ n − 1 of non-isomorphic simple ACM bundles of rank n on X.

  5. Porcine bladder acellular matrix (ACM): protein expression, mechanical properties

    International Nuclear Information System (INIS)

    Farhat, Walid A; Chen Jun; Haig, Jennifer; Antoon, Roula; Litman, Jessica; Yeger, Herman; Sherman, Christopher; Derwin, Kathleen

    2008-01-01

    Experimentally, porcine bladder acellular matrix (ACM) that mimics extracellular matrix has excellent potential as a bladder substitute. Herein we investigated the spatial localization and expression of key cellular and extracellular proteins in the ACM; furthermore, we evaluated the inherent mechanical properties of the resultant ACM prior to implantation. Using a proprietary decellularization method, the DNA content in both ACM and normal bladder was measured; in addition, immunohistochemistry and western blots were used to quantify and localize the different cellular and extracellular components, and mechanical testing was performed using a uniaxial mechanical testing machine. The mean DNA content was significantly lower in the ACM than in the bladder. Furthermore, the immunohistochemical and western blot analyses showed that collagens I and IV were preserved in the ACM, whereas collagen III was possibly denatured. Elastin, laminin and fibronectin were mildly reduced in the ACM. Although the ACM did not contain nucleated cells, residual cellular components (actin, myosin, vimentin and others) were still present. There was, on the other hand, no significant difference in mean stiffness between the ACM and the bladder. Although our decellularization method is effective in removing nuclear material from the bladder while maintaining its inherent mechanical properties, further work is needed to determine whether the residual DNA and cellular remnants would lead to any immune reaction, and whether the mechanical properties of the ACM are preserved upon implantation and cellularization.

  6. Porcine bladder acellular matrix (ACM): protein expression, mechanical properties.

    Science.gov (United States)

    Farhat, Walid A; Chen, Jun; Haig, Jennifer; Antoon, Roula; Litman, Jessica; Sherman, Christopher; Derwin, Kathleen; Yeger, Herman

    2008-06-01

    Experimentally, porcine bladder acellular matrix (ACM) that mimics extracellular matrix has excellent potential as a bladder substitute. Herein we investigated the spatial localization and expression of key cellular and extracellular proteins in the ACM; furthermore, we evaluated the inherent mechanical properties of the resultant ACM prior to implantation. Using a proprietary decellularization method, the DNA content in both ACM and normal bladder was measured; in addition, immunohistochemistry and western blots were used to quantify and localize the different cellular and extracellular components, and mechanical testing was performed using a uniaxial mechanical testing machine. The mean DNA content was significantly lower in the ACM than in the bladder. Furthermore, the immunohistochemical and western blot analyses showed that collagens I and IV were preserved in the ACM, whereas collagen III was possibly denatured. Elastin, laminin and fibronectin were mildly reduced in the ACM. Although the ACM did not contain nucleated cells, residual cellular components (actin, myosin, vimentin and others) were still present. There was, on the other hand, no significant difference in mean stiffness between the ACM and the bladder. Although our decellularization method is effective in removing nuclear material from the bladder while maintaining its inherent mechanical properties, further work is needed to determine whether the residual DNA and cellular remnants would lead to any immune reaction, and whether the mechanical properties of the ACM are preserved upon implantation and cellularization.

  7. Porcine bladder acellular matrix (ACM): protein expression, mechanical properties

    Energy Technology Data Exchange (ETDEWEB)

    Farhat, Walid A [Department of Surgery, Division of Urology, University of Toronto and Hospital for Sick Children, Toronto, ON M5G 1X8 (Canada); Chen Jun; Haig, Jennifer; Antoon, Roula; Litman, Jessica; Yeger, Herman [Department of Developmental and Stem Cell Biology, Research Institute, Hospital for Sick Children, Toronto, ON M5G 1X8 (Canada); Sherman, Christopher [Department of Anatomic Pathology, Sunnybrook and Women's College Health Sciences Centre, Toronto, ON (Canada); Derwin, Kathleen [Department of Biomedical Engineering, Lerner Research Institute and Orthopaedic Research Center, Cleveland Clinic Foundation, Cleveland, OH (United States)], E-mail: walid.farhat@sickkids.ca

    2008-06-01

    Experimentally, porcine bladder acellular matrix (ACM) that mimics extracellular matrix has excellent potential as a bladder substitute. Herein we investigated the spatial localization and expression of key cellular and extracellular proteins in the ACM; furthermore, we evaluated the inherent mechanical properties of the resultant ACM prior to implantation. Using a proprietary decellularization method, the DNA content in both ACM and normal bladder was measured; in addition, immunohistochemistry and western blots were used to quantify and localize the different cellular and extracellular components, and mechanical testing was performed using a uniaxial mechanical testing machine. The mean DNA content was significantly lower in the ACM than in the bladder. Furthermore, the immunohistochemical and western blot analyses showed that collagens I and IV were preserved in the ACM, whereas collagen III was possibly denatured. Elastin, laminin and fibronectin were mildly reduced in the ACM. Although the ACM did not contain nucleated cells, residual cellular components (actin, myosin, vimentin and others) were still present. There was, on the other hand, no significant difference in mean stiffness between the ACM and the bladder. Although our decellularization method is effective in removing nuclear material from the bladder while maintaining its inherent mechanical properties, further work is needed to determine whether the residual DNA and cellular remnants would lead to any immune reaction, and whether the mechanical properties of the ACM are preserved upon implantation and cellularization.

  8. AcmD, a homolog of the major autolysin AcmA of Lactococcus lactis, binds to the cell wall and contributes to cell separation and autolysis

    NARCIS (Netherlands)

    Visweswaran, Ganesh Ram R; Steen, Anton; Leenhouts, Kees; Szeliga, Monika; Ruban, Beata; Hesseling-Meinders, Anne; Dijkstra, Bauke W; Kuipers, Oscar P; Kok, Jan; Buist, Girbe

    2013-01-01

    Lactococcus lactis expresses the homologous glucosaminidases AcmB, AcmC, AcmA and AcmD. The latter two have three C-terminal LysM repeats for peptidoglycan binding. AcmD has much shorter intervening sequences separating the LysM repeats and a lower iso-electric point (4.3) than AcmA (10.3). Under

  9. Quark ACM with topologically generated gluon mass

    Science.gov (United States)

    Choudhury, Ishita Dutta; Lahiri, Amitabha

    2016-03-01

    We investigate the effect of a small, gauge-invariant mass of the gluon on the anomalous chromomagnetic moment (ACM) of quarks by perturbative calculations at the one-loop level. The mass of the gluon is taken to have been generated via a topological mass generation mechanism, in which the gluon acquires a mass through its interaction with an antisymmetric tensor field B_μν. For a small gluon mass, we calculate the ACM at momentum transfer q^2 = -M_Z^2. We compare those results with the ACM calculated for a gluon mass arising from a Proca mass term. We find that the ACM of the up, down, strange and charm quarks varies significantly with the gluon mass, while the ACM of the top and bottom quarks shows negligible gluon mass dependence. The mechanism of gluon mass generation is most important for the strange quark's ACM, but not so much for the other quarks. We also show the results at q^2 = -m_t^2. We find that the dependence on the gluon mass at q^2 = -m_t^2 is much weaker than at q^2 = -M_Z^2 for all quarks.

  10. Porosity of porcine bladder acellular matrix: impact of ACM thickness.

    Science.gov (United States)

    Farhat, Walid; Chen, Jun; Erdeljan, Petar; Shemtov, Oren; Courtman, David; Khoury, Antoine; Yeger, Herman

    2003-12-01

    The objectives of this study are to examine the porosity of bladder acellular matrix (ACM) using deionized (DI) water as the model fluid and dextran as the indicator macromolecule, and to correlate the porosity to the ACM thickness. Porcine urinary bladders from pigs weighing 20-50 kg were sequentially extracted in detergent-containing solutions; to modify the ACM thickness, stretched bladders were acellularized in the same manner. Luminal and abluminal ACM specimens were subjected to a fixed static DI water pressure (10 cm), and the water passing through the specimens was collected at specific time intervals. For the macromolecule porosity testing, the diffusion rate and direction of 10,000 MW fluorescein-labeled dextrans across ACM specimens mounted in Ussing chambers were measured. Both experiments were repeated on the thin stretched ACM. In both ACM types, the fluid porosity in both directions did not decrease with increased test duration (3 h); in addition, the abluminal surface was more porous to fluid than the luminal surface. On the other hand, when comparing thin to thick ACM, the porosity in either direction was higher in the thick ACM. Macromolecule porosity, as measured by absorbance, was higher for the abluminal side of the thick ACM than for the luminal side, but this characteristic was reversed in the thin ACM. Comparing thin to thick ACM, the luminal side of the thin ACM was more porous to dextran than that of the thick ACM, but this characteristic was reversed for the abluminal side. The porcine bladder ACM possesses directional porosity, and acellularizing stretched urinary bladders may increase structural density and alter fluid and macromolecule porosity. Copyright 2003 Wiley Periodicals, Inc. J Biomed Mater Res 67A: 970-974, 2003.
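
    The constant-head water porosity measurement described above reduces to a simple flux calculation; a minimal sketch, with hypothetical readings rather than the study's data:

```python
def water_flux(volume_ml, minutes, area_cm2):
    """Volumetric flux (ml/min/cm^2) through an ACM specimen under a
    fixed static head, from the volume collected over a timed interval."""
    return volume_ml / minutes / area_cm2

# Hypothetical readings: the abluminal side passes more water than the
# luminal side, illustrating the directional porosity reported above.
abluminal = water_flux(12.0, 30.0, 1.0)
luminal = water_flux(7.5, 30.0, 1.0)
print(abluminal > luminal)  # → True
```

    Comparing such fluxes for luminal vs. abluminal orientations, and for thick vs. thin (stretched) ACM, is the kind of pairwise comparison the abstract reports.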

  11. Representation of deforestation impacts on climate, water, and nutrient cycles in the ACME earth system model

    Science.gov (United States)

    Cai, X.; Riley, W. J.; Zhu, Q.

    2017-12-01

    Deforestation causes a series of changes to the climate, water, and nutrient cycles. Employing a state-of-the-art earth system model, ACME (Accelerated Climate Modeling for Energy), we comprehensively investigate the impacts of deforestation on these processes. We first assess the performance of the ACME Land Model (ALM) in simulating runoff, evapotranspiration, albedo, and plant productivity at 42 FLUXNET sites. The single-column mode of ACME is then used to examine climate effects (temperature cooling/warming) and the responses of runoff, evapotranspiration, and nutrient fluxes to deforestation. This approach separates the local effects of deforestation from global circulation effects. To better understand the deforestation effects in a global context, we use the coupled (atmosphere, land, and slab ocean) mode of ACME to demonstrate the impacts of deforestation on global climate, water, and nutrient fluxes. Preliminary results showed that the land component of ACME has advantages in simulating these processes and that local deforestation has potentially large impacts on runoff and atmospheric processes.

  12. HPDC ´12 : proceedings of the 21st ACM symposium on high-performance parallel and distributed computing, June 18-22, 2012, Delft, The Netherlands

    NARCIS (Netherlands)

    Epema, D.H.J.; Kielmann, T.; Ripeanu, M.

    2012-01-01

    Welcome to ACM HPDC 2012! This is the twenty-first year of HPDC and we are pleased to report that our community continues to grow in size, quality and reputation. The program consists of three days packed with presentations on the latest developments in high-performance parallel and distributed

  13. Additive Construction with Mobile Emplacement (ACME)

    Science.gov (United States)

    Vickers, John

    2015-01-01

    The Additive Construction with Mobile Emplacement (ACME) project is developing technology to build structures on planetary surfaces using in-situ resources. The project focuses on the construction of both 2D (landing pads, roads, and structure foundations) and 3D (habitats, garages, radiation shelters, and other structures) infrastructure needs for planetary surface missions. The ACME project seeks to raise the Technology Readiness Level (TRL) of two components needed for planetary surface habitation and exploration: 3D additive construction (e.g., contour crafting), and excavation and handling technologies (to effectively and continuously produce in-situ feedstock). Additionally, the ACME project supports the research and development of new materials for planetary surface construction, with the goal of reducing the amount of material to be launched from Earth.

  14. Formation of personality’s acme-qualities as a component of physical education specialists’ acmeological competence

    Directory of Open Access Journals (Sweden)

    T.Hr. Dereka

    2016-10-01

    Purpose: to determine the characteristics of the formation of acme-qualities in physical education specialists and the correlations between their components. Material: students of the "Physical education" specialty (n=194) participated in the research. Special tests were used to assess personality qualities: organizational abilities, communicative abilities, creative potential, need for achievement, emotional awareness, control of emotions, etc. Results: we determined the components of the personality acme-competence of physical education specialists. We found the strength and direction of the correlations and the influence of acme-qualities on the personality component. Based on factorial analysis, the components were grouped and classified into four factors and visualized. The cumulative percentage of variance explained by the studied factors was determined. Conclusions: continuous professional training of physical education specialists on acme-principles resulted in the formation of personality acme-qualities. These facilitate the manifestation of personal activity in the process of professional formation and constant self-perfection.

  15. CLIC-ACM: Acquisition and Control System

    CERN Document Server

    Bielawski, B; Magnoni, S

    2014-01-01

    CLIC [1] (Compact Linear Collider) is a world-wide collaboration to study the next terascale lepton collider, relying upon a very innovative concept of two-beam acceleration. In this scheme, the power is transported to the main accelerating structures by a primary electron beam. The Two Beam Module (TBM) is a compact integration, with a high filling factor, of all components: RF, magnets, instrumentation, vacuum, alignment and stabilization. This paper describes the very challenging aspects of designing the compact system to serve as a dedicated Acquisition & Control Module (ACM) for all signals of the TBM. Very delicate conditions must be considered, in particular radiation doses that could reach several kGy in the tunnel. In such severe conditions, shielding and hardened electronics will have to be taken into consideration. In addition, with more than 300 ADC & DAC channels per ACM and about 21000 ACMs in total, it appears clearly that power consumption will be an important issue. It is also obvious that...

  16. Transient Inverse Calibration of the Site-Wide Groundwater Flow Model (ACM-2): FY03 Progress Report

    International Nuclear Information System (INIS)

    Vermeul, Vince R.; Bergeron, Marcel P.; Cole, C R.; Murray, Christopher J.; Nichols, William E.; Scheibe, Timothy D.; Thorne, Paul D.; Waichler, Scott R.; Xie, YuLong

    2003-01-01

    DOE and PNNL are working to strengthen the technical defensibility of the groundwater flow and transport model at the Hanford Site and to incorporate uncertainty into the model. One aspect of the initiative is developing and using a three-dimensional transient inverse model to estimate hydraulic conductivities, specific yields, and other parameters using data collected at Hanford since 1943. The focus of the alternative conceptual model (ACM-2) inverse modeling initiative documented in this report was to address limitations identified in the ACM-1 model, complete the facies-based approach for representing the hydraulic conductivity distribution in the Hanford and middle Ringold Formations, develop the approach and implementation methodology for generating multiple ACMs based on geostatistical data analysis, and develop an approach for inverse modeling of these stochastic ACMs. The primary modifications in the ACM-2 transient inverse model include facies-based zonation of Units 1 (Hanford) and 5 (middle Ringold); an improved approach for handling run-on recharge from upland areas based on watershed modeling results; an improved approach for representing artificial discharges from site operations; and minor changes to the geologic conceptual model. ACM-2 is the first attempt to fully incorporate the facies-based approach to represent the hydrogeologic structure. Further refinement and additional improvements to overall model fit will be realized during future inverse simulations of groundwater flow and transport. In addition, preliminary work was completed on an approach and implementation for generating and inverse modeling stochastic ACMs. These techniques were applied to assess the uncertainty in the facies-based zonation of the Hanford formation and the geological structure of the Ringold mud units. The geostatistical analysis used a preliminary interpretation of the facies-based zonation that was not consistent with that used in ACM-2. Although the overall objective of

  17. Design of ACM system based on non-greedy punctured LDPC codes

    Science.gov (United States)

    Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng

    2017-08-01

    In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes were constructed by a non-greedy puncturing method, which showed good performance in the high-code-rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is lowered. Simulations show that increasingly significant coding gains are obtained by the proposed ACM system at higher throughputs.
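
    An ACM system of this kind switches among the available code rates according to channel quality; a minimal sketch of threshold-based rate selection, using the paper's 2/3-5/6 rate range but entirely hypothetical SNR switching points (real thresholds would come from the simulated performance curves of the punctured codes):

```python
# Hypothetical (rate, minimum SNR in dB) pairs, lowest rate first.
RATE_THRESHOLDS = [
    (2/3, 1.5),
    (3/4, 2.5),
    (4/5, 3.2),
    (5/6, 4.0),
]

def select_rate(snr_db):
    """Pick the highest code rate whose SNR threshold the channel meets;
    fall back to the most robust (lowest) rate on a poor channel."""
    best = RATE_THRESHOLDS[0][0]
    for rate, threshold in RATE_THRESHOLDS:
        if snr_db >= threshold:
            best = rate
    return best

print(select_rate(3.5))  # → 0.8, i.e. rate 4/5 under these made-up thresholds
```

    In a rate-compatible punctured family, stepping down a rate corresponds to transmitting additional (previously punctured) parity bits, which is what makes incremental redundancy cheap to implement.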

  18. Improving simulated spatial distribution of productivity and biomass in Amazon forests using the ACME land model

    Science.gov (United States)

    Yang, X.; Thornton, P. E.; Ricciuto, D. M.; Shi, X.; Xu, M.; Hoffman, F. M.; Norby, R. J.

    2017-12-01

    Tropical forests play a crucial role in the global carbon cycle, accounting for one third of global NPP and containing about 25% of global vegetation biomass and soil carbon. This is particularly true for tropical forests in the Amazon region, which comprises approximately 50% of the world's tropical forests. It is therefore important to understand and represent the processes that determine the fluxes and storage of carbon in these forests. In this study, we show that the implementation of the phosphorus (P) cycle and P limitation in the ACME Land Model (ALM) improves the simulated spatial pattern of NPP. The P-enabled ALM is able to capture the west-to-east gradient of productivity, consistent with field observations. We also show that, by improving the representation of mortality processes, ALM is able to reproduce the observed spatial pattern of above-ground biomass across the Amazon region.

  19. Analysis on working pressure selection of ACME integral test facility

    International Nuclear Information System (INIS)

    Chen Lian; Chang Huajian; Li Yuquan; Ye Zishen; Qin Benke

    2011-01-01

    An integral effects test facility, the advanced core cooling mechanism experiment (ACME) facility, was designed to verify the performance of the passive safety system of a pressurized water reactor power plant and to validate its safety analysis codes. Three test facilities for the AP1000 design were introduced and reviewed, and the problems resulting from the different working pressures of these test facilities were analyzed. A detailed description is then presented of the working pressure selection of the ACME facility and its characteristics, and the approach to establishing the desired initial test conditions is discussed. The selected 9.3 MPa working pressure covers almost all important passive safety system conditions and enables the ACME to simulate LOCAs with the same pressure and property similitude as the prototype. It is expected that the ACME design will be an advanced core cooling integral test facility design. (authors)

  20. News from the Library: A one-stop-shop for computing literature: ACM Digital Library

    CERN Multimedia

    CERN Library

    2011-01-01

    The Association for Computing Machinery, ACM, is the world’s largest educational and scientific computing society. Among other services, the ACM provides the computing field's premier Digital Library and serves its members and the computing profession with leading-edge publications, conferences, and career resources.   ACM Digital Library is available to the CERN community. The most popular journal here at CERN is Communications of the ACM. However, the collection offers access to a series of other valuable academic journals, such as the Journal of the ACM, and even the full text of a number of classic books. In addition, users have access to the ACM Guide to Computing Literature, the most comprehensive bibliographic database focusing on computing, integrated with ACM’s full-text articles and including features such as ACM Author Profile Pages, which provide bibliographic and bibliometric data for over 1,000,000 authors in the field. ACM Digital Library is an excellent com...

  1. Characterization of a Novel Arginine Catabolic Mobile Element (ACME) and Staphylococcal Chromosomal Cassette mec Composite Island with Significant Homology to Staphylococcus epidermidis ACME type II in Methicillin-Resistant Staphylococcus aureus Genotype ST22-MRSA-IV.

    LENUS (Irish Health Repository)

    Shore, Anna C

    2011-02-22

    The arginine catabolic mobile element (ACME) is prevalent among ST8-MRSA-IVa (USA300) isolates, and evidence suggests that ACME enhances the ability of ST8-MRSA-IVa to grow and survive on its host. ACME has been identified in a small number of isolates belonging to other MRSA clones but is widespread among coagulase-negative staphylococci (CoNS). This study reports the first description of ACME in two distinct strains of the pandemic ST22-MRSA-IV clone. A total of 238 MRSA isolates recovered in Ireland between 1971 and 2008 were investigated for ACME using a DNA microarray. Twenty-three isolates (9.7%) were ACME-positive; all were either MRSA genotype ST8-MRSA-IVa (7/23, 30%) or ST22-MRSA-IV (16/23, 70%). Whole-genome sequencing and comprehensive molecular characterization revealed the presence of a novel 46-kb ACME and SCCmec composite island (ACME/SCCmec-CI) in ST22-MRSA-IVh isolates (n = 15). This ACME/SCCmec-CI consists of a 12-kb DNA region previously identified in ACME type II in S. epidermidis ATCC 12228, a truncated copy of the J1 region of SCCmec I, and a complete SCCmec IVh element. The composite island has a novel genetic organization, with ACME located within orfX and SCCmec located downstream of ACME. One pvl-positive ST22-MRSA-IVa isolate carried ACME located downstream of SCCmec IVa, as previously described in ST8-MRSA-IVa. These results suggest that ACME has been acquired by ST22-MRSA-IV on two independent occasions. At least one of these instances may have involved horizontal transfer and recombination events between MRSA and CoNS. The presence of ACME may enhance dissemination of ST22-MRSA-IV, an already successful MRSA clone.

  2. Contribution of the collagen adhesin Acm to pathogenesis of Enterococcus faecium in experimental endocarditis.

    Science.gov (United States)

    Nallapareddy, Sreedhar R; Singh, Kavindra V; Murray, Barbara E

    2008-09-01

    Enterococcus faecium is a multidrug-resistant opportunist causing difficult-to-treat nosocomial infections, including endocarditis, but there are no reports experimentally demonstrating E. faecium virulence determinants. Our previous studies showed that some clinical E. faecium isolates produce a cell wall-anchored collagen adhesin, Acm, and that an isogenic acm deletion mutant of the endocarditis-derived strain TX0082 lost collagen adherence. In this study, we show with a rat endocarditis model that TX0082 Deltaacm::cat is highly attenuated versus wild-type TX0082 in established (72 h) vegetations, making Acm the first factor shown to be important for E. faecium pathogenesis. In contrast, no mortality differences were observed in a mouse peritonitis model. While 5 of 17 endocarditis isolates were Acm nonproducers and failed to adhere to collagen in vitro, all had an intact, highly conserved acm locus. Highly reduced acm mRNA levels (≥50-fold reduction relative to an Acm producer) were found in three of these five nonadherent isolates, including the sequenced strain TX0016, by quantitative reverse transcription-PCR, indicating that acm transcription is downregulated in vitro in these isolates. However, examination of TX0016 cells obtained directly from infected rat vegetations by flow cytometry showed that Acm was present on 40% of cells grown during infection. Finally, we demonstrated a significant reduction in E. faecium collagen adherence by affinity-purified anti-Acm antibodies from E. faecium endocarditis patient sera, suggesting that Acm may be a potential immunotarget for strategies to control this emerging pathogen.

  3. Asbestos-Containing Materials (ACM) and Demolition

    Science.gov (United States)

    Specific federal regulations require the identification of asbestos-containing materials (ACM) in many of the residential buildings that are being demolished or renovated by a municipality.

  4. Proceedings of the 2nd ACM SIGSPATIAL International Workshop on Indoor Spatial Awareness

    DEFF Research Database (Denmark)

    These proceedings contain the papers selected for presentation at the Second International Workshop on Indoor Spatial Awareness, hosted by ACM SIGSPATIAL and held in conjunction with the 18th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS...

  5. The abolition of the objection phase for fining decisions of the ACM

    NARCIS (Netherlands)

    Jans, J.H.; Outhuijse, A.

    On 1 March 2013, the Authority for Consumers and Markets (ACM) was created through the merger of the NMa, OPTA and the Consumer Authority. To allow the ACM to operate decisively, it is proposed to simplify its enforcement toolkit for market supervision. One of the proposals

  6. ACME: A Basis for Architecture Exchange

    National Research Council Canada - National Science Library

    Wile, David

    2003-01-01

    .... It remains useful in that role, but since the project's inception the Acme language and its support toolkit have grown into a solid foundation upon which new software architecture design and analysis...

  7. Characterization of a novel arginine catabolic mobile element (ACME) and staphylococcal chromosomal cassette mec composite island with significant homology to Staphylococcus epidermidis ACME type II in methicillin-resistant Staphylococcus aureus genotype ST22-MRSA-IV.

    LENUS (Irish Health Repository)

    Shore, Anna C

    2011-05-01

    The arginine catabolic mobile element (ACME) is prevalent among methicillin-resistant Staphylococcus aureus (MRSA) isolates of sequence type 8 (ST8) and staphylococcal chromosomal cassette mec (SCCmec) type IVa (USA300) (ST8-MRSA-IVa isolates), and evidence suggests that ACME enhances the ability of ST8-MRSA-IVa to grow and survive on its host. ACME has been identified in a small number of isolates belonging to other MRSA clones but is widespread among coagulase-negative staphylococci (CoNS). This study reports the first description of ACME in two distinct strains of the pandemic ST22-MRSA-IV clone. A total of 238 MRSA isolates recovered in Ireland between 1971 and 2008 were investigated for ACME using a DNA microarray. Twenty-three isolates (9.7%) were ACME positive, and all were either MRSA genotype ST8-MRSA-IVa (7/23, 30%) or MRSA genotype ST22-MRSA-IV (16/23, 70%). Whole-genome sequencing and comprehensive molecular characterization revealed the presence of a novel 46-kb ACME and staphylococcal chromosomal cassette mec (SCCmec) composite island (ACME/SCCmec-CI) in ST22-MRSA-IVh isolates (n=15). This ACME/SCCmec-CI consists of a 12-kb DNA region previously identified in ACME type II in S. epidermidis ATCC 12228, a truncated copy of the J1 region of SCCmec type I, and a complete SCCmec type IVh element. The composite island has a novel genetic organization, with ACME located within orfX and SCCmec located downstream of ACME. One PVL locus-positive ST22-MRSA-IVa isolate carried ACME located downstream of SCCmec type IVa, as previously described in ST8-MRSA-IVa. These results suggest that ACME has been acquired by ST22-MRSA-IV on two independent occasions. At least one of these instances may have involved horizontal transfer and recombination events between MRSA and CoNS. The presence of ACME may enhance dissemination of ST22-MRSA-IV, an already successful MRSA clone.

  8. Highlights from ACM SIGSPATIAL GIS 2011: the 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems: (Chicago, Illinois - November 1 - 4, 2011)

    DEFF Research Database (Denmark)

    Jensen, Christian S.; Ofek, Eyal; Tanin, Egemen

    2012-01-01

    ACM SIGSPATIAL GIS 2011 was the 19th gathering of the premier event on spatial information and Geographic Information Systems (GIS). It is also the fourth year that the conference was held under the auspices of ACM's most recent special interest group, SIGSPATIAL. Since its start in 1993, the con...

  9. Automated software system for checking the structure and format of ACM SIG documents

    Science.gov (United States)

    Mirza, Arsalan Rahman; Sah, Melike

    2017-04-01

    Microsoft (MS) Office Word is one of the most commonly used software tools for creating documents. MS Word 2007 and above uses XML to represent the structure of MS Word documents. Metadata about the documents are automatically created using Office Open XML (OOXML) syntax. We develop a new framework, called ADFCS (Automated Document Format Checking System), that takes advantage of the OOXML metadata in order to extract semantic information from MS Office Word documents. In particular, we develop a new ontology for Association for Computing Machinery (ACM) Special Interest Group (SIG) documents, representing the structure and format of these documents in OWL (Web Ontology Language). The metadata is then extracted automatically in RDF (Resource Description Framework) according to this ontology using the developed software. Finally, we generate extensive rules in order to infer whether the documents are formatted according to ACM SIG standards. This paper introduces the ACM SIG ontology, the metadata extraction process, the inference engine, the ADFCS online user interface, system evaluation and user study evaluations.

  10. Proceedings of the 2014 ACM international conference on Interactive experiences for TV and online video

    NARCIS (Netherlands)

    P. Olivier (Patrick); P. Wright; T. Bartindale; M. Obrist (Marianna); P.S. Cesar Garcia (Pablo Santiago); S. Basapur

    2014-01-01

    It is our great pleasure to introduce the 2014 ACM International Conference on Interactive Experiences for Television and Online Video -- ACM TVX 2014. ACM TVX is a leading annual conference that brings together international researchers and practitioners from a wide range of

  11. Inhibition of Enterococcus faecium adherence to collagen by antibodies against high-affinity binding subdomains of Acm.

    Science.gov (United States)

    Nallapareddy, Sreedhar R; Sillanpää, Jouko; Ganesh, Vannakambadi K; Höök, Magnus; Murray, Barbara E

    2007-06-01

    Strains of Enterococcus faecium express a cell wall-anchored protein, Acm, which mediates adherence to collagen. Here, we (i) identify the minimal and high-affinity binding subsegments of Acm and (ii) show that anti-Acm immunoglobulin Gs (IgGs) purified against these subsegments reduced E. faecium TX2535 strain collagen adherence by up to 73 and 50%, respectively, significantly more than the total IgGs against the full-length Acm A domain (28%). Inhibition of Acm adherence with functional subsegment-specific antibodies raises the possibility of their use as therapeutic or prophylactic agents.

  12. Air Traffic Complexity Measurement Environment (ACME): Software User's Guide

    Science.gov (United States)

    1996-01-01

    A user's guide for the Air Traffic Complexity Measurement Environment (ACME) software is presented. The ACME consists of two major components, a complexity analysis tool and user interface. The Complexity Analysis Tool (CAT) analyzes complexity off-line, producing data files which may be examined interactively via the Complexity Data Analysis Tool (CDAT). The Complexity Analysis Tool is composed of three independently executing processes that communicate via PVM (Parallel Virtual Machine) and Unix sockets. The Runtime Data Management and Control process (RUNDMC) extracts flight plan and track information from a SAR input file, and sends the information to GARP (Generate Aircraft Routes Process) and CAT (Complexity Analysis Task). GARP in turn generates aircraft trajectories, which are utilized by CAT to calculate sector complexity. CAT writes flight plan, track and complexity data to an output file, which can be examined interactively. The Complexity Data Analysis Tool (CDAT) provides an interactive graphic environment for examining the complexity data produced by the Complexity Analysis Tool (CAT). CDAT can also play back track data extracted from System Analysis Recording (SAR) tapes. The CDAT user interface consists of a primary window, a controls window, and miscellaneous pop-ups. Aircraft track and position data is displayed in the main viewing area of the primary window. The controls window contains miscellaneous control and display items. Complexity data is displayed in pop-up windows. CDAT plays back sector complexity and aircraft track and position data as a function of time. Controls are provided to start and stop playback, adjust the playback rate, and reposition the display to a specified time.
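
    The RUNDMC → GARP → CAT dataflow described above can be mimicked with an ordinary producer/consumer pipeline. The sketch below is illustrative only: it uses in-process queues rather than PVM or Unix sockets, and the record format and the toy "complexity" metric are invented, not ACME's:

```python
import json
import queue
import threading

def rundmc(sar_records, out_q):
    """Stand-in for RUNDMC: parse raw SAR-like records into flight-plan dicts."""
    for rec in sar_records:
        out_q.put(json.loads(rec))
    out_q.put(None)  # end-of-stream marker

def garp(in_q, out_q):
    """Stand-in for GARP: expand each flight plan into a trajectory (toy: waypoint count)."""
    while (plan := in_q.get()) is not None:
        plan["trajectory_len"] = len(plan["waypoints"])
        out_q.put(plan)
    out_q.put(None)

def cat(in_q, results):
    """Stand-in for CAT: a toy sector 'complexity' = summed trajectory lengths."""
    total = 0
    while (plan := in_q.get()) is not None:
        total += plan["trajectory_len"]
    results["complexity"] = total

def run_pipeline(sar_records):
    """Wire the three stages together and return the final complexity value."""
    q1, q2, results = queue.Queue(), queue.Queue(), {}
    stages = [threading.Thread(target=rundmc, args=(sar_records, q1)),
              threading.Thread(target=garp, args=(q1, q2)),
              threading.Thread(target=cat, args=(q2, results))]
    for t in stages:
        t.start()
    for t in stages:
        t.join()
    return results["complexity"]
```

    The design point this illustrates is the same as ACME's: each stage runs concurrently and communicates only through message passing, so the transport (queues here, PVM/sockets in ACME) can be swapped without changing stage logic.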

  13. 76 FR 64943 - Proposed Cercla Administrative Cost Recovery Settlement; ACM Smelter and Refinery Site, Located...

    Science.gov (United States)

    2011-10-19

    ... Settlement; ACM Smelter and Refinery Site, Located in Cascade County, MT AGENCY: Environmental Protection... projected future response costs concerning the ACM Smelter and Refinery NPL Site (Site), Operable Unit 1..., Helena, MT 59626. Mr. Sturn can be reached at (406) 457-5027. Comments should reference the ACM Smelter...

  14. A Novel Observation-Guided Approach for Evaluating Mesoscale Convective Systems Simulated by the DOE ACME Model

    Science.gov (United States)

    Feng, Z.; Ma, P. L.; Hardin, J. C.; Houze, R.

    2017-12-01

    Mesoscale convective systems (MCSs) are the largest type of convective storms that develop when convection aggregates and induces mesoscale circulation features. Over North America, MCSs contribute over 60% of the total warm-season precipitation and over half of the extreme daily precipitation in the central U.S. Our recent study (Feng et al. 2016) found that the observed increases in springtime total and extreme rainfall in this region are dominated by increased frequency and intensity of long-lived MCSs*. To date, global climate models typically do not run at a resolution high enough to explicitly simulate individual convective elements and may not have adequate process representations for MCSs, resulting in a large deficiency in projecting changes of the frequency of extreme precipitation events in future climate. In this study, we developed a novel observation-guided approach specifically designed to evaluate simulated MCSs in the Department of Energy's climate model, Accelerated Climate Modeling for Energy (ACME). The ACME model has advanced treatments for convection and subgrid variability and for this study is run at 25 km and 100 km grid spacings. We constructed a robust MCS database consisting of over 500 MCSs from 3 warm-season observations by applying a feature-tracking algorithm to 4-km resolution merged geostationary satellite and 3-D NEXRAD radar network data over the Continental US. This high-resolution MCS database is then down-sampled to the 25 and 100 km ACME grids to re-characterize key MCS properties. The feature-tracking algorithm is adapted with the adjusted characteristics to identify MCSs from ACME model simulations. We demonstrate that this new analysis framework is useful for evaluating ACME's warm-season precipitation statistics associated with MCSs, and provides insights into the model process representations related to extreme precipitation events for future improvement. *Feng, Z., L. R. Leung, S. Hagos, R. A. Houze, C. D. Burleyson
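
    The down-sampling step above, mapping the 4-km MCS database onto the 25- and 100-km ACME grids, can be approximated by block averaging. This is a minimal pure-Python sketch with invented field values; the actual regridding of geographic data is area-weighted:

```python
def block_average(field, factor):
    """Down-sample a 2-D grid by averaging non-overlapping factor x factor blocks."""
    rows, cols = len(field), len(field[0])
    assert rows % factor == 0 and cols % factor == 0, "grid must tile evenly"
    out = []
    for i in range(0, rows, factor):
        row = []
        for j in range(0, cols, factor):
            block = [field[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```

    On the coarse grid, an isolated convective cell is smeared over the whole block, which is why the feature-tracking thresholds have to be re-tuned to the coarser resolution before identifying MCSs in the model output.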

  15. Study on the percent of frequency of ACME-Arca in clinical isolates ...

    African Journals Online (AJOL)

    ACME is an arginine catabolic mobile element in Staphylococcus epidermidis that encodes specific virulence factors. The purpose of this study was to examine the specific features and prevalence of ACME-arcA in methicillin-resistant Staphylococcus epidermidis isolates recovered from clinical samples in Isfahan.

  16. On-resin conversion of Cys(Acm)-containing peptides to their corresponding Cys(Scm) congeners.

    Science.gov (United States)

    Mullen, Daniel G; Weigel, Benjamin; Barany, George; Distefano, Mark D

    2010-05-01

    The Acm protecting group for the thiol functionality of cysteine is removed under conditions (Hg(2+)) that are orthogonal to the acidic milieu used for global deprotection in Fmoc-based solid-phase peptide synthesis. This use of a toxic heavy metal for deprotection has limited the usefulness of Acm in peptide synthesis. The Acm group may be converted to the Scm derivative that can then be used as a reactive intermediate for unsymmetrical disulfide formation. It may also be removed by mild reductive conditions to generate unprotected cysteine. Conversion of Cys(Acm)-containing peptides to their corresponding Cys(Scm) derivatives in solution is often problematic because the sulfenyl chloride reagent used for this conversion may react with the sensitive amino acids tyrosine and tryptophan. In this protocol, we report a method for on-resin Acm to Scm conversion that allows the preparation of Cys(Scm)-containing peptides under conditions that do not modify other amino acids. (c) 2010 European Peptide Society and John Wiley & Sons, Ltd.

  17. CNTF-ACM promotes mitochondrial respiration and oxidative stress in cortical neurons through upregulating L-type calcium channel activity.

    Science.gov (United States)

    Sun, Meiqun; Liu, Hongli; Xu, Huanbai; Wang, Hongtao; Wang, Xiaojing

    2016-09-01

    A specialized culture medium termed ciliary neurotrophic factor-treated astrocyte-conditioned medium (CNTF-ACM) allows investigators to assess the peripheral effects of CNTF-induced activated astrocytes upon cultured neurons. CNTF-ACM has been shown to upregulate neuronal L-type calcium channel current activity, which has been previously linked to changes in mitochondrial respiration and oxidative stress. Therefore, the aim of this study was to evaluate CNTF-ACM's effects upon mitochondrial respiration and oxidative stress in rat cortical neurons. Cortical neurons, CNTF-ACM, and untreated control astrocyte-conditioned medium (UC-ACM) were prepared from neonatal Sprague-Dawley rat cortical tissue. Neurons were cultured in either CNTF-ACM or UC-ACM for a 48-h period. Changes in the following parameters before and after treatment with the L-type calcium channel blocker isradipine were assessed: (i) intracellular calcium levels, (ii) mitochondrial membrane potential (ΔΨm), (iii) oxygen consumption rate (OCR) and adenosine triphosphate (ATP) formation, (iv) intracellular nitric oxide (NO) levels, (v) mitochondrial reactive oxygen species (ROS) production, and (vi) susceptibility to the mitochondrial complex I toxin rotenone. CNTF-ACM neurons displayed significant changes in these parameters relative to UC-ACM neurons, including increased intracellular calcium levels. These findings indicate that CNTF-ACM promotes mitochondrial respiration and oxidative stress in cortical neurons through elevating L-type calcium channel activity.

  18. The acellular matrix (ACM) for bladder tissue engineering: A quantitative magnetic resonance imaging study.

    Science.gov (United States)

    Cheng, Hai-Ling Margaret; Loai, Yasir; Beaumont, Marine; Farhat, Walid A

    2010-08-01

    Bladder acellular matrices (ACMs) derived from natural tissue are gaining increasing attention for their role in tissue engineering and regeneration. Unlike conventional scaffolds based on biodegradable polymers or gels, ACMs possess native biomechanical and many acquired biologic properties. Efforts to optimize ACM-based scaffolds are ongoing and would be greatly assisted by a noninvasive means to characterize scaffold properties and monitor interaction with cells. MRI is well suited to this role, but research with MRI for scaffold characterization has been limited. This study presents initial results from quantitative MRI measurements for bladder ACM characterization and investigates the effects of incorporating hyaluronic acid, a natural biomaterial useful in tissue-engineering and regeneration. Measured MR relaxation times (T(1), T(2)) and diffusion coefficient were consistent with increased water uptake and glycosaminoglycan content observed on biochemistry in hyaluronic acid ACMs. Multicomponent MRI provided greater specificity, with diffusion data showing an acellular environment and T(2) components distinguishing the separate effects of increased glycosaminoglycans and hydration. These results suggest that quantitative MRI may provide useful information on matrix composition and structure, which is valuable in guiding further development using bladder ACMs for organ regeneration and in strategies involving the use of hyaluronic acid.
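
    The quantitative MRI above rests on simple exponential signal models. As an illustration of the principle only (the study itself used multicomponent T(2) and diffusion analysis, not this two-point method), a mono-exponential T2 can be estimated from signal measured at two echo times via S(TE) = S0·exp(−TE/T2):

```python
import math

def t2_from_two_echoes(s1, s2, te1, te2):
    """Solve S(TE) = S0 * exp(-TE / T2) for T2 from signals at two echo times (ms)."""
    # Dividing the two measurements cancels S0: s1/s2 = exp((te2 - te1) / T2)
    return (te2 - te1) / math.log(s1 / s2)
```

    Multicomponent analysis generalizes this by fitting a sum of such exponentials, which is what lets the separate glycosaminoglycan and hydration effects be distinguished.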

  19. VANET '13: Proceeding of the Tenth ACM International Workshop on Vehicular Inter-networking, Systems, and Applications

    NARCIS (Netherlands)

    Gozalvez, J.; Kargl, Frank; Mittag, J.; Kravets, R.; Tsai, M.

    This year marks a very important date for the ACM international workshop on Vehicular inter-networking, systems, and applications as ACM VANET celebrates now its 10th edition. Starting in 2004 as "ACM international workshop on Vehicular ad hoc networks" already the change in title indicates that

  20. Clinical isolates of Enterococcus faecium exhibit strain-specific collagen binding mediated by Acm, a new member of the MSCRAMM family.

    Science.gov (United States)

    Nallapareddy, Sreedhar R; Weinstock, George M; Murray, Barbara E

    2003-03-01

    A collagen-binding adhesin of Enterococcus faecium, Acm, was identified. Acm shows 62% similarity to the Staphylococcus aureus collagen adhesin Cna over the entire protein and is more similar to Cna (60% and 75% similarity with Cna A and B domains respectively) than to the Enterococcus faecalis collagen-binding adhesin, Ace, which shares homology with Acm only in the A domain. Despite the detection of acm in 32 out of 32 E. faecium isolates, only 11 of these (all clinical isolates, including four vancomycin-resistant endocarditis isolates and seven other isolates) exhibited binding to collagen type I (CI). Although acm from three CI-binding vancomycin-resistant E. faecium clinical isolates showed 100% identity, analysis of acm genes and their promoter regions from six non-CI-binding strains identified deletions or mutations that introduced stop codons and/or IS elements within the gene or the promoter region in five out of six strains, suggesting that the presence of an intact functional acm gene is necessary for binding of E. faecium strains to CI. Recombinant Acm A domain showed specific and concentration-dependent binding to collagen, and this protein competed with E. faecium binding to immobilized CI. Consistent with the adherence phenotype and sequence data, probing with Acm-specific IgGs purified from anti-recombinant Acm A polyclonal rabbit serum confirmed the surface expression of Acm in three out of three collagen-binding clinical isolates of E. faecium tested, but in none of the strains with a non-functional pseudo acm gene. Introduction of a functional acm gene into two non-CI-binding natural acm mutant strains conferred a CI-binding phenotype, further confirming that native Acm is sufficient for the binding of E. faecium to CI. These results demonstrate that acm, which encodes a potential virulence factor, is functional only in certain infection-derived clinical isolates of E. faecium, and suggest that Acm is the primary adhesin responsible for the

  1. Hypofractionated stereotactic radiotherapy (HFSRT) for who grade I anterior clinoid meningiomas (ACM).

    Science.gov (United States)

    Demiral, Selcuk; Dincoglan, Ferrat; Sager, Omer; Gamsiz, Hakan; Uysal, Bora; Gundem, Esin; Elcim, Yelda; Dirican, Bahar; Beyzadeoglu, Murat

    2016-11-01

    While microsurgical resection plays a central role in the management of ACMs, extensive surgery may be associated with substantial morbidity particularly for tumors in intimate association with critical structures. In this study, we evaluated the use of HFSRT in the management of ACM. A total of 22 patients with ACM were treated using HFSRT. Frameless image guided volumetric modulated arc therapy (VMAT) was performed with a 6 MV linear accelerator (LINAC). The total dose was 25 Gy delivered in five fractions over five consecutive treatment days. Local control (LC) and progression free survival (PFS) rates were calculated using the Kaplan-Meier method. Common Terminology Criteria for Adverse Events, version 4.0 was used in toxicity grading. Out of the total 22 patients, outcomes of 19 patients with at least 36 months of periodic follow-up were assessed. Median patient age was 40 years old (range 24-77 years old). Median follow-up time was 53 months (range 36-63 months). LC and PFS rates were 100 and 89.4 % at 1 and 3 years, respectively. Only two patients (10.5 %) experienced clinical deterioration during the follow-up period. LINAC-based HFSRT offers high rates of LC and PFS for patients with ACMs.
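
    The local control and progression-free survival rates above were computed with the Kaplan-Meier method. A minimal product-limit estimator is sketched below; the follow-up times and event flags in the test are invented, not the study's data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimates.

    times:  follow-up durations (e.g. months)
    events: 1 = event observed (e.g. progression), 0 = censored
    Returns [(event_time, survival_probability), ...].
    """
    data = sorted(zip(times, events))
    at_risk, s = len(data), 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        total_at_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            s *= 1 - deaths / at_risk  # multiply by conditional survival at t
            curve.append((t, s))
        at_risk -= total_at_t  # events and censored subjects leave the risk set
        i += total_at_t
    return curve
```

    Censored patients (still progression-free at last follow-up) shrink the risk set without forcing the curve down, which is the estimator's whole point.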

  2. Autolysis of Lactococcus lactis caused by induced overproduction of its major autolysin, AcmA

    NARCIS (Netherlands)

    Buist, Girbe; Karsens, H; Nauta, A; van Sinderen, D; Venema, G; Kok, J

    The optical density of a culture of Lactococcus lactis MG1363 was reduced more than 60% during prolonged stationary phase. Reduction in optical density (autolysis) was almost absent in a culture of an isogenic mutant containing a deletion in the major autolysin gene, acmA. An acmA mutant carrying

  3. Timely activation of budding yeast APCCdh1 involves degradation of its inhibitor, Acm1, by an unconventional proteolytic mechanism.

    Directory of Open Access Journals (Sweden)

    Michael Melesse

    Regulated proteolysis mediated by the ubiquitin proteasome system is a fundamental and essential feature of the eukaryotic cell division cycle. Most proteins with cell cycle-regulated stability are targeted for degradation by one of two related ubiquitin ligases, the Skp1-cullin-F box protein (SCF) complex or the anaphase-promoting complex (APC). Here we describe an unconventional cell cycle-regulated proteolytic mechanism that acts on the Acm1 protein, an inhibitor of the APC activator Cdh1 in budding yeast. Although Acm1 can be recognized as a substrate by the Cdc20-activated APC (APCCdc20) in anaphase, APCCdc20 is neither necessary nor sufficient for complete Acm1 degradation at the end of mitosis. An APC-independent, but 26S proteasome-dependent, mechanism is sufficient for complete Acm1 clearance from late mitotic and G1 cells. Surprisingly, this mechanism appears distinct from the canonical ubiquitin targeting pathway, exhibiting several features of ubiquitin-independent proteasomal degradation. For example, Acm1 degradation in G1 requires neither lysine residues in Acm1 nor assembly of polyubiquitin chains. Acm1 was stabilized though by conditional inactivation of the ubiquitin activating enzyme Uba1, implying some requirement for the ubiquitin pathway, either direct or indirect. We identified an amino terminal predicted disordered region in Acm1 that contributes to its proteolysis in G1. Although ubiquitin-independent proteasome substrates have been described, Acm1 appears unique in that its sensitivity to this mechanism is strictly cell cycle-regulated via cyclin-dependent kinase (Cdk) phosphorylation. As a result, Acm1 expression is limited to the cell cycle window in which Cdk is active. We provide evidence that failure to eliminate Acm1 impairs activation of APCCdh1 at mitotic exit, justifying its strict regulation by cell cycle-dependent transcription and proteolytic mechanisms. Importantly, our results reveal that strict cell

  4. Timely Activation of Budding Yeast APCCdh1 Involves Degradation of Its Inhibitor, Acm1, by an Unconventional Proteolytic Mechanism

    Science.gov (United States)

    Melesse, Michael; Choi, Eunyoung; Hall, Hana; Walsh, Michael J.; Geer, M. Ariel; Hall, Mark C.

    2014-01-01

    Regulated proteolysis mediated by the ubiquitin proteasome system is a fundamental and essential feature of the eukaryotic cell division cycle. Most proteins with cell cycle-regulated stability are targeted for degradation by one of two related ubiquitin ligases, the Skp1-cullin-F box protein (SCF) complex or the anaphase-promoting complex (APC). Here we describe an unconventional cell cycle-regulated proteolytic mechanism that acts on the Acm1 protein, an inhibitor of the APC activator Cdh1 in budding yeast. Although Acm1 can be recognized as a substrate by the Cdc20-activated APC (APCCdc20) in anaphase, APCCdc20 is neither necessary nor sufficient for complete Acm1 degradation at the end of mitosis. An APC-independent, but 26S proteasome-dependent, mechanism is sufficient for complete Acm1 clearance from late mitotic and G1 cells. Surprisingly, this mechanism appears distinct from the canonical ubiquitin targeting pathway, exhibiting several features of ubiquitin-independent proteasomal degradation. For example, Acm1 degradation in G1 requires neither lysine residues in Acm1 nor assembly of polyubiquitin chains. Acm1 was stabilized though by conditional inactivation of the ubiquitin activating enzyme Uba1, implying some requirement for the ubiquitin pathway, either direct or indirect. We identified an amino terminal predicted disordered region in Acm1 that contributes to its proteolysis in G1. Although ubiquitin-independent proteasome substrates have been described, Acm1 appears unique in that its sensitivity to this mechanism is strictly cell cycle-regulated via cyclin-dependent kinase (Cdk) phosphorylation. As a result, Acm1 expression is limited to the cell cycle window in which Cdk is active. We provide evidence that failure to eliminate Acm1 impairs activation of APCCdh1 at mitotic exit, justifying its strict regulation by cell cycle-dependent transcription and proteolytic mechanisms. Importantly, our results reveal that strict cell-cycle expression profiles

  5. acme: The Amendable Coal-Fire Modeling Exercise. A C++ Class Library for the Numerical Simulation of Coal-Fires

    Science.gov (United States)

    Wuttke, Manfred W.

    2017-04-01

    At LIAG, we use numerical models to develop and enhance understanding of coupled transport processes and to predict the dynamics of the system under consideration. Topics include geothermal heat utilization, subrosion processes, and spontaneous underground coal fires. Although the details make it inconvenient if not impossible to apply a single code implementation to all systems, their investigations go along similar paths: they all depend on the solution of coupled transport equations. We thus saw a need for a modular code system with open access for the various communities to maximize the shared synergistic effects. To this purpose we develop the oops! (open object-oriented parallel solutions) toolkit, a C++ class library for the numerical solution of mathematical models of coupled thermal, hydraulic and chemical processes. This is used to develop problem-specific libraries like acme (amendable coal-fire modeling exercise), a class library for the numerical simulation of coal fires, and applications like kobra (Kohlebrand, German for coal fire), a numerical simulation code for standard coal-fire models. The basic principle of the oops! code system is the provision of data types for the description of space- and time-dependent data fields, description of terms of partial differential equations (PDEs), their discretisation, and solving methods. Coupling of different processes, described by their particular PDEs, is modeled by an automatic timescale-ordered operator-splitting technique. acme is a derived coal-fire-specific application library, depending on oops!. If specific functionalities of general interest are implemented and have been tested, they will be assimilated into the main oops! library. Interfaces to external pre- and post-processing tools are easily implemented. Thus a construction kit which can be arbitrarily amended is formed. With the kobra application constructed with acme we study the processes and propagation of shallow coal seam fires in particular in
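
    The timescale-ordered operator splitting mentioned above can be illustrated on a scalar toy problem du/dt = −k·u + s: each step applies the fast (stiff) decay operator first, then the slower source operator. This sketch is not oops!/acme code; the coefficients, splitting order, and step size are illustrative:

```python
import math

def lie_split_step(u, dt, k, s):
    """One Lie splitting step for du/dt = -k*u + s, fastest operator first."""
    u = u * math.exp(-k * dt)  # operator A: stiff decay, integrated exactly over dt
    u = u + s * dt             # operator B: source term, explicit over dt
    return u

def integrate(u0, t_end, dt, k, s):
    """March the split scheme from t=0 to t_end with fixed step dt."""
    u, t = u0, 0.0
    while t < t_end - 1e-12:
        u = lie_split_step(u, dt, k, s)
        t += dt
    return u
```

    For small dt the split solution converges to the true steady state s/k, with a first-order splitting error; Strang (symmetric) splitting would reduce this to second order at the cost of an extra half-step.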

  6. Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme

    Science.gov (United States)

    Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook

    1995-01-01

    Three different schemes are employed to solve the benchmark problem. The first one is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second scheme is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third scheme is an optimized compact finite difference scheme modified by us: the 4th order Runge-Kutta time stepping, the 4th order pentadiagonal compact spatial discretization with the maximum resolution characteristics. The problems of category 1 are solved by using the second (UNO3-ACM) and third (Optimized Compact) schemes. The problems of category 2 are solved by using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved by using the first (TVD3) scheme. It can be concluded from the present calculations that the Optimized Compact scheme and the UNO3-ACM show good resolutions for category 1 and category 2, respectively.
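
    The ingredient shared by the TVD-MUSCL and UNO-type schemes above is a limited slope reconstruction that suppresses spurious oscillations at discontinuities. A minimal sketch of the classic minmod limiter and a second-order MUSCL interface reconstruction (illustrative Python on a periodic grid, not the paper's implementation):

```python
def minmod(a, b):
    """minmod slope limiter: smallest slope when signs agree, zero otherwise."""
    if a * b <= 0:
        return 0.0
    return min(abs(a), abs(b)) * (1 if a > 0 else -1)

def muscl_interface_states(u):
    """Limited left/right states at each interface i+1/2 on a periodic grid."""
    n = len(u)
    slopes = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    left = [u[i] + 0.5 * slopes[i] for i in range(n)]
    right = [u[(i + 1) % n] - 0.5 * slopes[(i + 1) % n] for i in range(n)]
    return left, right
```

    At a discontinuity the one-sided differences have opposite or zero slope, so minmod returns 0 and the reconstruction falls back to first order there; the artificial compression method in UNO3-ACM then re-steepens such smeared fronts.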

  7. An Unexpected Location of the Arginine Catabolic Mobile Element (ACME) in a USA300-Related MRSA Strain

    DEFF Research Database (Denmark)

    Damkjær Bartels, Mette; Hansen, Lars H.; Boye, Kit

    2011-01-01

    In methicillin resistant Staphylococcus aureus (MRSA), the arginine catabolic mobile element (ACME) was initially described in USA300 (t008-ST8) where it is located downstream of the staphylococcal cassette chromosome mec (SCCmec). A common health-care associated MRSA in Copenhagen, Denmark (t024-ST8) is clonally related to USA300 and is frequently PCR positive for the ACME specific arcA-gene. This study is the first to describe an ACME element upstream of the SCCmec in MRSA. By traditional SCCmec typing schemes, the SCCmec of t024-ST8 strain M1 carries SCCmec IVa, but full sequencing ... of SCCmec, M1 had two new DR between the orfX gene and the J3 region of the SCCmec. The region between the orfX DR (DR1) and DR2 contained the ccrAB4 genes. An ACME II-like element was located between DR2 and DR3. The entire 26,468 bp sequence between DR1 and DR3 was highly similar to parts of the ACME...

  8. Preliminary proceedings of the 2001 ACM SIGPLAN Haskell workshop

    NARCIS (Netherlands)

    Hinze, R.

    2001-01-01

    This volume contains the preliminary proceedings of the 2001 ACM SIGPLAN Haskell Workshop, which was held on 2nd September 2001 in Firenze, Italy. The final proceedings will published by Elsevier Science as an issue of Electronic Notes in Theoretical Computer Science (Volume 59). The

  9. ACME: A scalable parallel system for extracting frequent patterns from a very long sequence

    KAUST Repository

    Sahli, Majed

    2014-10-02

    Modern applications, including bioinformatics, time series, and web log analysis, require the extraction of frequent patterns, called motifs, from one very long (i.e., several gigabytes) sequence. Existing approaches are either heuristics that are error-prone, or exact (also called combinatorial) methods that are extremely slow, therefore, applicable only to very small sequences (i.e., in the order of megabytes). This paper presents ACME, a combinatorial approach that scales to gigabyte-long sequences and is the first to support supermaximal motifs. ACME is a versatile parallel system that can be deployed on desktop multi-core systems, or on thousands of CPUs in the cloud. However, merely using more compute nodes does not guarantee efficiency, because of the related overheads. To this end, ACME introduces an automatic tuning mechanism that suggests the appropriate number of CPUs to utilize, in order to meet the user constraints in terms of run time, while minimizing the financial cost of cloud resources. Our experiments show that, compared to the state of the art, ACME supports three orders of magnitude longer sequences (e.g., DNA for the entire human genome); handles large alphabets (e.g., English alphabet for Wikipedia); scales out to 16,384 CPUs on a supercomputer; and supports elastic deployment in the cloud.
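
    The exact (combinatorial) motif extraction that ACME scales up can be stated very compactly in brute-force form. The sketch below is illustrative only: ACME itself uses a parallel trie-based search with cloud-aware tuning, and "supermaximal" is simplified here to "not a substring of any other reported motif":

```python
from collections import defaultdict

def frequent_motifs(sequence, min_len, max_len, min_support):
    """Count every substring in the length range; keep those meeting min_support."""
    counts = defaultdict(int)
    n = len(sequence)
    for length in range(min_len, max_len + 1):
        for i in range(n - length + 1):
            counts[sequence[i:i + length]] += 1
    return {m: c for m, c in counts.items() if c >= min_support}

def supermaximal(motifs):
    """Drop motifs contained in a longer reported motif (simplified notion)."""
    return {m: c for m, c in motifs.items()
            if not any(m != other and m in other for other in motifs)}
```

    This brute force is O(n · max_len) in candidates and memory, which is exactly why it stops working at gigabyte scale and why ACME's pruned, parallel traversal of the motif space is needed.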

  10. Proceedings of the ACM SIGIR Workshop ''Searching Spontaneous Conversational Speech''

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Oard, Douglas; Ordelman, Roeland J.F.; Raaijmakers, Stephan

    2007-01-01

    The Proceedings contain the contributions to the workshop on Searching Spontaneous Conversational Speech organized in conjunction with the 30th ACM SIGIR, Amsterdam 2007. The papers reflect some of the emerging focus areas and cross-cutting research topics, together addressing evaluation metrics,

  11. Molecular characteristics of clinical methicillin-resistant Staphylococcus pseudintermedius harboring arginine catabolic mobile element (ACME) from dogs and cats.

    Science.gov (United States)

    Yang, Ching; Wan, Min-Tao; Lauderdale, Tsai-Ling; Yeh, Kuang-Sheng; Chen, Charles; Hsiao, Yun-Hsia; Chou, Chin-Cheng

    2017-06-01

    This study aimed to investigate the presence of arginine catabolic mobile element (ACME) and its associated molecular characteristics in methicillin-resistant Staphylococcus pseudintermedius (MRSP). Among the 72 S. pseudintermedius recovered from various infection sites of dogs and cats, 52 (72.2%) were MRSP. ACME-arcA was detected commonly (69.2%) in these MRSP isolates, and was more frequently detected in those from the skin than from other body sites (P=0.047). There was a wide genetic diversity among the ACME-arcA-positive MRSP isolates, which comprised three SCCmec types (II-III, III and V) and 15 dru types with two predominant clusters (9a and 11a). Most MRSP isolates were multidrug-resistant. Since S. pseudintermedius could serve as a reservoir of ACME, further research on this putative virulence factor is recommended. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. AcmB Is an S-Layer-Associated β-N-Acetylglucosaminidase and Functional Autolysin in Lactobacillus acidophilus NCFM.

    Science.gov (United States)

    Johnson, Brant R; Klaenhammer, Todd R

    2016-09-15

    Autolysins, also known as peptidoglycan hydrolases, are enzymes that hydrolyze specific bonds within bacterial cell wall peptidoglycan during cell division and daughter cell separation. Within the genome of Lactobacillus acidophilus NCFM, there are 11 genes encoding proteins with peptidoglycan hydrolase catalytic domains, 9 of which are predicted to be functional. Notably, 5 of the 9 putative autolysins in L. acidophilus NCFM are S-layer-associated proteins (SLAPs) noncovalently colocalized along with the surface (S)-layer at the cell surface. One of these SLAPs, AcmB, a β-N-acetylglucosaminidase encoded by the gene lba0176 (acmB), was selected for functional analysis. In silico analysis revealed that acmB orthologs are found exclusively in S-layer-forming species of Lactobacillus. Chromosomal deletion of acmB resulted in aberrant cell division, autolysis, and autoaggregation. Complementation of acmB in the ΔacmB mutant restored the wild-type phenotype, confirming the role of this SLAP in cell division. The absence of AcmB within the exoproteome had a pleiotropic effect on the extracellular proteins covalently and noncovalently bound to the peptidoglycan, which likely led to the observed decrease in the binding capacity of the ΔacmB strain for mucin and the extracellular matrices fibronectin, laminin, and collagen in vitro. These data suggest a functional association between the S-layer and the multiple autolysins noncovalently colocalized at the cell surface of L. acidophilus NCFM and other S-layer-producing Lactobacillus species. Lactobacillus acidophilus is one of the most widely used probiotic microbes incorporated in many dairy foods and dietary supplements. This organism produces a surface (S)-layer, which is a self-assembling crystalline array found as the outermost layer of the cell wall. The S-layer, along with colocalized associated proteins, is an important mediator of probiotic activity through intestinal adhesion and modulation of the mucosal immune

  13. A functional collagen adhesin gene, acm, in clinical isolates of Enterococcus faecium correlates with the recent success of this emerging nosocomial pathogen.

    Science.gov (United States)

    Nallapareddy, Sreedhar R; Singh, Kavindra V; Okhuysen, Pablo C; Murray, Barbara E

    2008-09-01

    Enterococcus faecium recently evolved from a generally avirulent commensal into a multidrug-resistant health care-associated pathogen causing difficult-to-treat infections, but little is known about the factors responsible for this change. We previously showed that some E. faecium strains express a cell wall-anchored collagen adhesin, Acm. Here we analyzed 90 E. faecium isolates (99% acm(+)) and found that the Acm protein was detected predominantly in clinically derived isolates, while the acm gene was present as a transposon-interrupted pseudogene in 12 of 47 isolates of nonclinical origin. A highly significant association between clinical (versus fecal or food) origin and collagen adherence (P Acm detected by whole-cell enzyme-linked immunosorbent assay and flow cytometry. Thirty-seven of 41 sera from patients with E. faecium infections showed reactivity with recombinant Acm, while only 4 of 30 community and hospitalized patient control group sera reacted (P Acm were present in all 14 E. faecium endocarditis patient sera. Although pulsed-field gel electrophoresis indicated that multiple strains expressed collagen adherence, multilocus sequence typing demonstrated that the majority of collagen-adhering isolates, as well as 16 of 17 endocarditis isolates, are part of the hospital-associated E. faecium genogroup referred to as clonal complex 17 (CC17), which has emerged globally. Taken together, our findings support the hypothesis that Acm has contributed to the emergence of E. faecium and CC17 in nosocomial infections.

  14. Comparison of the chemical behaviour of humanized AcMs vs. human IgG radiolabeled with 99mTc

    International Nuclear Information System (INIS)

    Rivero Santamaria, Alejandro; Zayas Crespo, Francisco; Mesa Duennas, Niurka; Castillo Vitloch, Adolfo J.

    2003-01-01

    The purpose of this work is to compare the chemical behaviour of humanized AcMs vs. human IgG radiolabeled with 99mTc. To this end, 3 immunoglobulins were analyzed: human IgG, the humanized monoclonal antibody R3 (AcM-R3h) and the humanized monoclonal antibody T1. The results obtained reveal slight differences in the behaviour of these immunoglobulins upon labelling with 99mTc, which shows differences in the chemical behaviour of these proteins. Although in theory the modifications made to the AcMs in order to humanize them must not affect their chemical behaviour, the data obtained indicate that the conditions for their radiolabelling should not be extrapolated from other proteins; on the contrary, particular procedures should be elaborated for each AcM-h

  15. Listening to professional voices: draft 2 of the ACM code of ethics and professional conduct

    OpenAIRE

    Flick, Catherine; Brinkman, Bo; Gotterbarn, D. W.; Miller, Keith; Vazansky, Kate; Wolf, Marty J.

    2017-01-01

    The file attached to this record is the author's final peer reviewed version. The Publisher's final version can be found by following the DOI link. For the first time since 1992, the ACM Code of Ethics and Professional Conduct (the Code) is being updated. The Code Update Task Force in conjunction with the Committee on Professional Ethics is seeking advice from ACM members on the update. We indicated many of the motivations for changing the Code when we shared Draft 1 of Code 2018 with the ...

  16. Searching Spontaneous Conversational Speech. Proceedings of ACM SIGIR Workshop (SSCS2008)

    NARCIS (Netherlands)

    Köhler, J.; Larson, M; de Jong, Franciska M.G.; Ordelman, Roeland J.F.; Kraaij, W.

    2008-01-01

    The second workshop on Searching Spontaneous Conversational Speech (SSCS 2008) was held in Singapore on July 24, 2008 in conjunction with the 31st Annual International ACM SIGIR Conference. The goal of the workshop was to bring the speech community and the information retrieval community together.

  17. Model Diagnostics for the Department of Energy's Accelerated Climate Modeling for Energy (ACME) Project

    Science.gov (United States)

    Smith, B.

    2015-12-01

    In 2014, eight Department of Energy (DOE) national laboratories, four academic institutions, one company, and the National Center for Atmospheric Research combined forces in a project called Accelerated Climate Modeling for Energy (ACME) with the goal of speeding Earth system model development for climate and energy. Over the planned 10-year span, the project will conduct simulations and modeling on DOE's most powerful high-performance computing systems at the Oak Ridge, Argonne, and Lawrence Berkeley Leadership Computing Facilities. A key component of the ACME project is the development of an interactive test bed for the advanced Earth system model. Its execution infrastructure will accelerate model development and testing cycles. The ACME Workflow Group is leading the efforts to automate labor-intensive tasks, provide intelligent support for complex tasks and reduce duplication of effort through collaboration support. As part of this new workflow environment, we have created a diagnostic, metric, and intercomparison Python framework, called UVCMetrics, to aid in the testing-to-production execution of the ACME model. The framework exploits similarities among different diagnostics to compactly support diagnosis of new models. It presently focuses on atmosphere and land but is designed to support ocean and sea ice model components as well. This framework is built on top of the existing open-source software framework known as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT). Because of its flexible framework design, scientists and modelers can now generate thousands of possible diagnostic outputs. These diagnostics can compare model runs, compare model vs. observation, or simply verify that a model is physically realistic. Additional diagnostics are easily integrated into the framework, and our users have already added several. Diagnostics can be generated, viewed, and manipulated from the UV-CDAT graphical user interface, Python command line scripts and programs

  18. Extra-Margins in ACM's Adjusted NMa 'Mortgage-Rate-Calculation Method'

    NARCIS (Netherlands)

    Dijkstra, M.; Schinkel, M.P.

    2013-01-01

    We analyse the development since 2004 of our concept of extra-margins on Dutch mortgages (Dijkstra & Schinkel, 2012), based on funding cost estimations in ACM (2013), which are an update of those in NMa (2011). Neither costs related to increased mortgage-specific risks, nor the inclusion of Basel

  19. A Prediction of the Damping Properties of Hindered Phenol AO-60/polyacrylate Rubber (AO-60/ACM) Composites through Molecular Dynamics Simulation

    Science.gov (United States)

    Yang, Da-Wei; Zhao, Xiu-Ying; Zhang, Geng; Li, Qiang-Guo; Wu, Si-Zhu

    2016-05-01

    Molecular dynamics (MD) simulation, a molecular-level method, was applied to predict the damping properties of AO-60/polyacrylate rubber (AO-60/ACM) composites before experimental measurements were performed. MD simulation results revealed that two types of hydrogen bond were formed, namely type A, (AO-60) -OH•••O=C- (ACM), and type B, (AO-60) -OH•••O=C- (AO-60). The AO-60/ACM composites were then fabricated and tested by dynamic mechanical thermal analysis (DMTA) to verify the accuracy of the MD simulation. DMTA results showed that the introduction of AO-60 could remarkably improve the damping properties of the composites, including an increase of the glass transition temperature (Tg) alongside the loss factor (tan δ), and indicated that AO-60/ACM (98/100) had the best damping performance amongst the composites, as verified by experiment.

  20. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    Science.gov (United States)

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multiatlases, and mean-shift techniques to segment the liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image is segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result is obtained by fusing the segmentation results from all atlas spaces via a multiclassifier fusion technique. Specifically, in order to speed up segmentation, given a test image, we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.
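The multiclassifier fusion step described above can be reduced, in its simplest form, to a per-pixel majority vote across the label maps produced in each atlas space. This is a hedged sketch of that idea only (the paper's fusion technique may weight atlases differently); `fuse_labels` and the toy label rows are illustrative:

```python
from collections import Counter

def fuse_labels(segmentations):
    """Per-pixel majority vote: each atlas space yields one label map
    (flattened here to a list); the fused label at each pixel is the
    most common label proposed across atlas spaces."""
    fused = []
    for labels in zip(*segmentations):
        fused.append(Counter(labels).most_common(1)[0][0])
    return fused

# Three hypothetical atlas-space results for a 5-pixel row
# (1 = liver, 0 = background).
a = [1, 1, 0, 0, 1]
b = [1, 0, 0, 1, 1]
c = [1, 1, 0, 0, 0]
print(fuse_labels([a, b, c]))  # -> [1, 1, 0, 0, 1]
```

Majority voting is robust to a single atlas misregistering a region, which is one reason multi-atlas pipelines outperform any single atlas.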

  1. HT 2011 : Proceedings of the 22nd ACM Conference on Hypertext and Hypermedia, Eindhoven, The Netherlands, June 6-9, 2011

    NARCIS (Netherlands)

    De Bra, P.M.E.; Gronbak, K.

    2011-01-01

    Foreword. It is our great pleasure to welcome you to ACM Hypertext 2011, the 22nd ACM Conference on Hypertext and Hypermedia, and to the "Land of the Innovator", the campus of the Eindhoven University of Technology, located in the "city of light" Eindhoven, the Netherlands. Hypertext is an exciting

  2. ADAPTIVE CONSERVATION (ACM) MODEL IN INCREASING FAMILY SUPPORT AND TREATMENT COMPLIANCE IN PATIENTS WITH PULMONARY TUBERCULOSIS IN THE SURABAYA CITY REGION

    Directory of Open Access Journals (Sweden)

    Siti Nur Kholifah

    2017-04-01

    Full Text Available Introduction: Tuberculosis (TB) is still a health problem in Indonesia, and its prevalence rate is high. Discontinued medication and lack of family support are among the causes. Numerous strategies to overcome this have seemingly not succeeded. The roles and responsibilities of family nursing are crucial to improving the participation and motivation of individuals, families and communities in prevention, including of pulmonary tuberculosis. Unfortunately, such models for pulmonary tuberculosis are currently unavailable. This study introduces a combination of adaptation and conservation to complementarily improve family support and medication compliance. Method: This research analyzed the Adaptive Conservation Model (ACM) in extending family support and treatment compliance. The modeling steps included model analysis, expert validation, a field trial, implementation and recommendation of the output model. The research subjects comprised 15 families who implemented family Assistance and Supervision in Medication (ASM) and another 15 families with ACM. Result: The study revealed that ACM performed better than ASM with respect to family support and medication compliance. It supports the role of the environment as an influential factor in individual health beliefs, values and decision making. Therefore, applying ACM is advised to enhance family support and compliance of pulmonary TB patients. Discussion: Social and family support in the ACM group was obtained by developing interaction through communication. Family interaction is necessary to improve family support for pulmonary tuberculosis patients, and social support acts as a motivator to maintain medication compliance

  3. Development and first application of an Aerosol Collection Module (ACM) for quasi online compound specific aerosol measurements

    Science.gov (United States)

    Hohaus, Thorsten; Kiendler-Scharr, Astrid; Trimborn, Dagmar; Jayne, John; Wahner, Andreas; Worsnop, Doug

    2010-05-01

    Atmospheric aerosols influence climate and human health on regional and global scales (IPCC, 2007). In many environments organics are a major fraction of the aerosol, influencing its properties. Due to the huge variety of organic compounds present in atmospheric aerosol, current measurement techniques are far from providing a full speciation of organic aerosol (Hallquist et al., 2009). The development of new techniques for compound-specific measurements with high time resolution is a timely issue in organic aerosol research. Here we present first laboratory characterisations of an aerosol collection module (ACM) which was developed to allow for the sampling and transfer of atmospheric PM1 aerosol. The system consists of an aerodynamic lens system focussing particles into a beam. This beam is directed onto a 3.4 mm diameter surface which is cooled to -30 °C with liquid nitrogen. After collection the aerosol sample can be evaporated from the surface by heating it to up to 270 °C, and the sample is transferred through a 60 cm long line with a carrier gas. In order to test the ACM for linearity and sensitivity we combined it with a GC-MS system. The tests were performed with octadecane aerosol. The octadecane mass as measured with the ACM-GC-MS was compared with the mass calculated from the SMPS-derived total volume. The data correlate well (R² = 0.99, slope of linear fit 1.1), indicating 100 % collection efficiency. From 150 °C to 270 °C no effect of desorption temperature on transfer efficiency could be observed. The ACM-GC-MS system was proven to be linear over the mass range 2-100 ng and has a detection limit of ~2 ng. First experiments applying the ACM-GC-MS system were conducted at the Jülich Aerosol Chamber. Secondary organic aerosol (SOA) was formed from ozonolysis of 600 ppbv of β-pinene. The major oxidation product nopinone was detected in the aerosol and could be shown to decrease from 2 % of the total aerosol to 0.5 % of the aerosol over the 48 hours of

  4. Responses of Mixed-Phase Cloud Condensates and Cloud Radiative Effects to Ice Nucleating Particle Concentrations in NCAR CAM5 and DOE ACME Climate Models

    Science.gov (United States)

    Liu, X.; Shi, Y.; Wu, M.; Zhang, K.

    2017-12-01

    Mixed-phase clouds, frequently observed in the Arctic and mid-latitude storm tracks, have substantial impacts on the surface energy budget, precipitation and climate. In this study, we first implement two empirical parameterizations (Niemand et al. 2012 and DeMott et al. 2015) of heterogeneous ice nucleation for mixed-phase clouds in the NCAR Community Atmosphere Model Version 5 (CAM5) and the DOE Accelerated Climate Model for Energy Version 1 (ACME1). Model-simulated ice nucleating particle (INP) concentrations based on Niemand et al. and DeMott et al. are compared with those from the default ice nucleation parameterization based on classical nucleation theory (CNT) in CAM5 and ACME, and with in situ observations. Significantly higher INP concentrations (by up to a factor of 5) are simulated from Niemand et al. than from DeMott et al. and CNT, especially over the dust source regions, in both CAM5 and ACME. Interestingly, the ACME model simulates higher INP concentrations than CAM5, especially in the polar regions. This is also the case when we nudge the two models' winds and temperature towards the same reanalysis, indicating more efficient transport of aerosols (dust) to the polar regions in ACME. Next, we examine the responses of model-simulated cloud liquid and ice water contents to the different INP concentrations from the three ice nucleation parameterizations (Niemand et al., DeMott et al., and CNT) in CAM5 and ACME. Changes in liquid water path (LWP) reach as much as 20% in the Arctic regions in ACME between the three parameterizations, while the LWP changes are smaller and limited in the Northern Hemisphere mid-latitudes in CAM5. Finally, the impacts on cloud radiative forcing and dust indirect effects on mixed-phase clouds are quantified with the three ice nucleation parameterizations in CAM5 and ACME.

  5. Proceedings of the 16th ACM SIGPLAN international conference on Functional programming

    DEFF Research Database (Denmark)

    Danvy, Olivier

    Welcome to the 16th ACM SIGPLAN International Conference on Functional Programming -- ICFP'11. The picture, on the front cover, is of Mount Fuji, seen from the 20th floor of the National Institute of Informatics (NII). It was taken by Sebastian Fischer in January 2011. In Japanese, the characters...

  6. ARM Airborne Carbon Measurements VI (ARM-ACME VI) Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Biraud, Sebastien [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-05-01

    From October 1, 2015 through September 30, 2016, AAF deployed a Cessna 206 aircraft over the Southern Great Plains, collecting observations of trace gas mixing ratios over the ARM/SGP Central Facility. The aircraft payload included two Atmospheric Observing Systems (AOS Inc.) analyzers for continuous measurements of CO2, and a 12-flask sampler for analysis of carbon cycle gases (CO2, CO, CH4, N2O, 13CO2). The aircraft payload also includes solar/infrared radiation measurements. This research (supported by DOE ARM and TES programs) builds upon previous ARM-ACME missions. The goal of these measurements is to improve understanding of: (a) the carbon exchange of the ARM region; (b) how CO2 and associated water and energy fluxes influence radiative forcing, convective processes, and CO2 concentrations over the ARM region, and (c) how greenhouse gases are transported on continental scales.

  7. Microstructure and chemical bonding of DLC films deposited on ACM rubber by PACVD

    NARCIS (Netherlands)

    Martinez-Martinez, D.; Schenkel, M.; Pei, Y.T.; Sánchez-López, J.C.; Hosson, J.Th.M. De

    2011-01-01

    The microstructure and chemical bonding of DLC films prepared by plasma assisted chemical vapor deposition on acrylic rubber (ACM) are studied in this paper. The temperature variation produced by the ion impingement during plasma cleaning and subsequent film deposition was used to modify the film

  8. Theoretical interpretation of the nuclear structure of 88Se within the ACM and the QPM models.

    Science.gov (United States)

    Gratchev, I. N.; Thiamova, G.; Alexa, P.; Simpson, G. S.; Ramdhane, M.

    2018-02-01

    The four-parameter algebraic collective model (ACM) Hamiltonian is used to describe the nuclear structure of 88Se. It is shown that the ACM is capable of providing a reasonable description of the excitation energies and relative positions of the ground-state band and the γ band. The most probable interpretation of the nuclear structure of 88Se is that of a transitional nucleus. The Quasiparticle-plus-Phonon Model (QPM) was also applied to describe the nuclear motion in 88Se. Preliminary calculations show that the collectivity of the second excited 2+ state is weak and that this state contains a strong two-quasiparticle component.

  9. Pomegranate MR images analysis using ACM and FCM algorithms

    Science.gov (United States)

    Morad, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

    Segmentation of an image plays an important role in image processing applications. In this paper, segmentation of pomegranate magnetic resonance (MR) images is explored. Pomegranate has healthy nutritional and medicinal properties, and its maturity indices and the quality of its internal tissues play an important role in the sorting process; an admissible determination of these features cannot be easily achieved by a human operator. Seeds and soft tissues are the main internal components of pomegranate. For research purposes, such as non-destructive investigation, in order to determine the ripening index and the percentage of seeds during the growth period, segmentation of the internal structures should be performed as exactly as possible. In this paper, we present an automatic algorithm to segment the internal structure of pomegranate. Since the intensity of the stem and calyx is close to that of the internal tissues, stem and calyx pixels are usually mislabeled as internal tissue by the segmentation algorithm. To solve this problem, first, the fruit shape is extracted from its background using an active contour model (ACM). Then the stem and calyx are removed using morphological filters. Finally, the image is segmented by fuzzy c-means (FCM). The experimental results show an accuracy of 95.91% in the presence of the stem and calyx, while the accuracy increases to 97.53% when the stem and calyx are first removed by morphological filters.
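The FCM step of the pipeline above alternates between membership and centroid updates until convergence. Below is a minimal, hedged sketch of fuzzy c-means on 1-D intensities only (the paper applies it to images, after ACM extraction and morphological filtering); `fcm_1d` and the toy intensities are illustrative:

```python
def fcm_1d(values, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means on scalar intensities: alternately update
    fuzzy memberships and cluster centers (two clusters; the min/max
    initialization below assumes c=2)."""
    centers = [min(values), max(values)]
    u = []
    for _ in range(iters):
        # Membership update: depends on relative distances to the centers.
        u = []
        for x in values:
            d = [abs(x - v) or 1e-12 for v in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # Center update: membership-weighted mean of the intensities.
        centers = [sum(u[k][i] ** m * values[k] for k in range(len(values)))
                   / sum(u[k][i] ** m for k in range(len(values)))
                   for i in range(c)]
    # Hard labels: the cluster with the highest membership per value.
    labels = [max(range(c), key=lambda i: uk[i]) for uk in u]
    return labels, centers

# Hypothetical intensities: soft tissue (dark) vs. seeds (bright).
labels, centers = fcm_1d([10, 12, 11, 90, 95, 92])
print(labels)  # -> [0, 0, 0, 1, 1, 1]
```

The fuzziness exponent m controls how soft the memberships are; m → 1 recovers ordinary (hard) k-means behavior.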

  10. Preventive maintenance plan of the air-conditioning duct using the ACM-sensor

    International Nuclear Information System (INIS)

    Fukuba, Kazushi; Ito, Takanobu; Kojima, Akiko; Tanji, Kazuhiro; Sato, Yuki

    2013-01-01

    For air-conditioning ducts it is difficult to predict when corrosion severe enough to affect their function will occur; therefore, the current conservation method is mostly corrective maintenance. We therefore used six types of test piece together with an ACM sensor in order to derive the corrosion speed from the corrosion environment and its relationship with the corrosion quantity of the test pieces. In addition, various molded duct articles were used in order to check the degree of corrosion where the duct had been processed. As a result, by solving for the corrosion speed of the various test pieces, we selected the optimal combination of the crust body constituting a duct and the flange. Thus, preventive disposal can be performed before corrosion that affects the function occurs, by predicting the duct life from the corrosion speed, leading to stable and safe operation through appropriate maintenance of equipment. (author)

  11. ACME: A scalable parallel system for extracting frequent patterns from a very long sequence

    KAUST Repository

    Sahli, Majed; Mansour, Essam; Kalnis, Panos

    2014-01-01

    -long sequences and is the first to support supermaximal motifs. ACME is a versatile parallel system that can be deployed on desktop multi-core systems, or on thousands of CPUs in the cloud. However, merely using more compute nodes does not guarantee efficiency

  12. Novel active contour model based on multi-variate local Gaussian distribution for local segmentation of MR brain images

    Science.gov (United States)

    Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong

    2017-12-01

    The active contour model (ACM) has been one of the most widely utilized methods in magnetic resonance (MR) brain image segmentation because of its ability to capture topology changes. However, most existing ACMs only consider single-slice information in MR brain image data, i.e., the information used in ACM-based segmentation methods is extracted from only one slice of the MR brain image, which cannot take full advantage of the information in adjacent slices and cannot satisfy local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve the problem discussed above; it is based on a multi-variate local Gaussian distribution and combines information from adjacent slices in the MR brain image data. The segmentation is finally achieved through maximizing the likelihood estimation. Experiments demonstrate the advantages of the proposed ACM over the single-slice ACM in local segmentation of MR brain image series.
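The maximum-likelihood decision behind such Gaussian region models can be sketched in one dimension: each region is summarized by a mean and variance, and a pixel is assigned to the region whose Gaussian gives its intensity the highest likelihood. This is a hedged, 1-D, global simplification of the paper's multi-variate local formulation; the region names and statistics below are hypothetical:

```python
import math

def gaussian_loglik(x, mean, var):
    """Log-likelihood of intensity x under a 1-D Gaussian region model."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def label_pixel(x, models):
    """Assign the region whose Gaussian model maximizes the likelihood
    of the observed intensity (the maximum-likelihood decision rule)."""
    return max(models, key=lambda region: gaussian_loglik(x, *models[region]))

# Hypothetical region statistics: (mean, variance) per region.
models = {"tissue": (100.0, 25.0), "background": (30.0, 100.0)}
print(label_pixel(95.0, models))  # -> tissue
print(label_pixel(40.0, models))  # -> background
```

Using log-likelihoods rather than raw densities keeps the comparison numerically stable and turns the product over pixels into a sum, which is what the contour evolution maximizes.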

  13. ACME - Algorithms for Contact in a Multiphysics Environment API Version 1.0

    International Nuclear Information System (INIS)

    BROWN, KEVIN H.; SUMMERS, RANDALL M.; GLASS, MICHEAL W.; GULLERUD, ARNE S.; HEINSTEIN, MARTIN W.; JONES, REESE E.

    2001-01-01

    An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library

  14. Effects of activated ACM on expression of signal transducers in cerebral cortical neurons of rats.

    Science.gov (United States)

    Wang, Xiaojing; Li, Zhengli; Zhu, Changgeng; Li, Zhongyu

    2007-06-01

    To explore the roles of astrocytes in the epileptogenesis, astrocytes and neurons were isolated, purified and cultured in vitro from cerebral cortex of rats. The astrocytes were activated by ciliary neurotrophic factor (CNTF) and astrocytic conditioned medium (ACM) was collected to treat neurons for 4, 8 and 12 h. By using Western blot, the expression of calmodulin dependent protein kinase II (CaMK II), inducible nitric oxide synthase (iNOS) and adenylate cyclase (AC) was detected in neurons. The results showed that the expression of CaMK II, iNOS and AC was increased significantly in the neurons treated with ACM from 4 h to 12 h (PACM and such signal pathways as NOS-NO-cGMP, Ca2+/CaM-CaMK II and AC-cAMP-PKA might take part in the signal transduction of epileptogenesis.

  15. Segmentation of solid subregion of high grade gliomas in MRI images based on active contour model (ACM)

    Science.gov (United States)

    Seow, P.; Win, M. T.; Wong, J. H. D.; Abdullah, N. A.; Ramli, N.

    2016-03-01

    Gliomas are tumours arising from the interstitial tissue of the brain which are heterogeneous, infiltrative and possess ill-defined borders. Tumour subregions (e.g. solid enhancing part, edema and necrosis) are often used for tumour characterisation. Tumour demarcation into substructures facilitates glioma staging and provides essential information. Manual segmentation has several drawbacks: it is laborious, time consuming, subject to intra- and inter-rater variability, and hindered by the diversity in the appearance of tumour tissues. In this work, an active contour model (ACM) was used to segment the solid enhancing subregion of the tumour. 2D brain image acquisition data using a 3T MRI fast spoiled gradient echo sequence post gadolinium of four histologically proven high-grade glioma patients were obtained. Preprocessing of the images, which includes subtraction and skull stripping, was performed, followed by ACM segmentation. The results of the automatic segmentation method were compared against the manual delineation of the tumour by a trainee radiologist. Both results were further validated by an experienced neuroradiologist and brief quantitative evaluations (pixel area and difference ratio) were performed. Preliminary results of the clinical data showed the potential of the ACM model in the application of fast and large-scale tumour segmentation in medical imaging.
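The two quantitative measures mentioned, pixel area and difference ratio, can be sketched for binary masks. The abstract does not give the exact difference-ratio formula, so the mismatched-pixels-over-manual-area form below is an assumption, and the toy masks are hypothetical:

```python
def pixel_area(mask):
    """Number of foreground pixels in a binary mask (list of rows)."""
    return sum(sum(row) for row in mask)

def difference_ratio(auto, manual):
    """Assumed definition: pixels labeled differently by the automatic
    and manual segmentations, relative to the manual area."""
    diff = sum(a != m for ra, rm in zip(auto, manual)
               for a, m in zip(ra, rm))
    return diff / pixel_area(manual)

# Hypothetical 3x3 masks: the ACM result misses one manual pixel.
manual = [[0, 1, 1], [0, 1, 1], [0, 0, 0]]
auto   = [[0, 1, 1], [0, 1, 0], [0, 0, 0]]
print(pixel_area(manual))              # -> 4
print(difference_ratio(auto, manual))  # -> 0.25
```

Overlap measures such as the Dice coefficient are common alternatives when both masks are treated symmetrically.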

  16. Segmentation of solid subregion of high grade gliomas in MRI images based on active contour model (ACM)

    International Nuclear Information System (INIS)

    Seow, P; Win, M T; Wong, J H D; Ramli, N; Abdullah, N A

    2016-01-01

    Gliomas are tumours arising from the interstitial tissue of the brain which are heterogeneous, infiltrative and possess ill-defined borders. Tumour subregions (e.g. solid enhancing part, edema and necrosis) are often used for tumour characterisation. Tumour demarcation into substructures facilitates glioma staging and provides essential information. Manual segmentation has several drawbacks: it is laborious, time consuming, subject to intra- and inter-rater variability, and hindered by the diversity in the appearance of tumour tissues. In this work, an active contour model (ACM) was used to segment the solid enhancing subregion of the tumour. 2D brain image acquisition data using a 3T MRI fast spoiled gradient echo sequence post gadolinium of four histologically proven high-grade glioma patients were obtained. Preprocessing of the images, which includes subtraction and skull stripping, was performed, followed by ACM segmentation. The results of the automatic segmentation method were compared against the manual delineation of the tumour by a trainee radiologist. Both results were further validated by an experienced neuroradiologist and brief quantitative evaluations (pixel area and difference ratio) were performed. Preliminary results of the clinical data showed the potential of the ACM model in the application of fast and large-scale tumour segmentation in medical imaging. (paper)

  17. Validation of the Adolescent Concerns Measure (ACM): Evidence from Exploratory and Confirmatory Factor Analysis

    Science.gov (United States)

    Ang, Rebecca P.; Chong, Wan Har; Huan, Vivien S.; Yeo, Lay See

    2007-01-01

    This article reports the development and initial validation of scores obtained from the Adolescent Concerns Measure (ACM), a scale which assesses concerns of Asian adolescent students. In Study 1, findings from exploratory factor analysis using 619 adolescents suggested a 24-item scale with four correlated factors--Family Concerns (9 items), Peer…

  18. Influence of a transport current on the local magnetic field distribution in sintered YBa2Cu3Ox

    International Nuclear Information System (INIS)

    Zimmermann, P.; Keller, H.; Kuendig, W.; Puempin, B.; Savic, I.M.; Schneider, J.W.; Simmler, H.; Kaldis, E.; Rusiecki, S.

    1991-01-01

    The influence of a transport current on the magnetic flux-line distribution in sintered YBCO was studied by means of μSR. Pronounced differences between zero-field-cooled (ZFC) and field-cooled (FC) signals, as well as irreversible behaviour, were observed. In the ZFC case even a small transport current (10 A/cm2) considerably and irreversibly orders the inhomogeneous flux-line distribution, suggesting a broad distribution of pinning barriers. For a FC sample, however, no noticeable change in the flux distribution in the presence of a transport current (up to 40 A/cm2) was detected, indicating that the FC state represents a stable flux-line configuration. (orig.)

  19. TWO NOVEL ACM (ACTIVE CONTOUR MODEL) METHODS FOR INTRAVASCULAR ULTRASOUND IMAGE SEGMENTATION

    International Nuclear Information System (INIS)

    Chen, Chi Hau; Potdat, Labhesh; Chittineni, Rakesh

    2010-01-01

    One of the attractive image segmentation methods is the Active Contour Model (ACM), which has been widely used in medical imaging as it always produces subregions with continuous boundaries. Intravascular ultrasound (IVUS) is a catheter-based medical imaging technique used for quantitative assessment of atherosclerotic disease. Two ACM realizations are presented in this paper. A gradient descent flow based on minimizing an energy functional can be used for segmentation of IVUS images; however, this local operation alone may not be adequate for the complex IVUS images. The first method presented consists of combining local geodesic active contours and global region-based active contours. The advantage of combining the local and global operations is to allow curves deforming under the energy to find only significant local minima and to delineate object borders despite noise, poor edge information and heterogeneous intensity profiles. Results for this algorithm are compared to standard techniques to demonstrate the method's robustness and accuracy. In the second method, the energy function is appropriately modified and minimized using a Hopfield neural network. Proper modifications in the definition of the bias of the neurons have been introduced to incorporate image characteristics. The method overcomes distortions in the expected image pattern, such as those due to the presence of calcium, and employs a specialized structure of the neural network and boundary correction schemes based on a priori knowledge of the vessel geometry. The presented method is very fast and has been evaluated using sequences of IVUS frames.

  20. Enabling Chemistry of Gases and Aerosols for Assessment of Short-Lived Climate Forcers: Improving Solar Radiation Modeling in the DOE-ACME and CESM models

    Energy Technology Data Exchange (ETDEWEB)

    Prather, Michael [Univ. of California, Irvine, CA (United States)

    2018-01-12

    This proposal seeks to maintain DOE-ACME (an offshoot of CESM) as one of the leading CCMs used to evaluate near-term climate mitigation. It will implement, test, and optimize the new UCI photolysis codes within CESM CAM5 and new CAM versions in ACME. Fast-J is a high-order-accuracy (8-stream) code for calculating solar scattering and absorption in a single-column atmosphere containing clouds, aerosols, and gases; it was developed at UCI and implemented in CAM5 under the previous BER/SciDAC grant.

  1. Construction of improved temperature-sensitive and mobilizable vectors and their use for constructing mutations in the adhesin-encoding acm gene of poorly transformable clinical Enterococcus faecium strains.

    Science.gov (United States)

    Nallapareddy, Sreedhar R; Singh, Kavindra V; Murray, Barbara E

    2006-01-01

    Inactivation by allelic exchange in clinical isolates of the emerging nosocomial pathogen Enterococcus faecium has been hindered by lack of efficient tools, and, in this study, transformation of clinical isolates was found to be particularly problematic. For this reason, a vector for allelic replacement (pTEX5500ts) was constructed that includes (i) the pWV01-based gram-positive repAts replication region, which is known to confer a high degree of temperature intolerance, (ii) Escherichia coli oriR from pUC18, (iii) two extended multiple-cloning sites located upstream and downstream of one of the marker genes for efficient cloning of flanking regions for double-crossover mutagenesis, (iv) transcriptional terminator sites to terminate undesired readthrough, and (v) a synthetic extended promoter region containing the cat gene for allelic exchange and a high-level gentamicin resistance gene, aph(2'')-Id, to distinguish double-crossover recombination, both of which are functional in gram-positive and gram-negative backgrounds. To demonstrate the functionality of this vector, the vector was used to construct an acm (encoding an adhesin to collagen from E. faecium) deletion mutant of a poorly transformable multidrug-resistant E. faecium endocarditis isolate, TX0082. The acm-deleted strain, TX6051 (TX0082Deltaacm), was shown to lack Acm on its surface, which resulted in the abolishment of the collagen adherence phenotype observed in TX0082. A mobilizable derivative (pTEX5501ts) that contains oriT of Tn916 to facilitate conjugative transfer from the transformable E. faecalis strain JH2Sm::Tn916 to E. faecium was also constructed. Using this vector, the acm gene of a nonelectroporable E. faecium wound isolate was successfully interrupted. Thus, pTEX5500ts and its mobilizable derivative demonstrated their roles as important tools by helping to create the first reported allelic replacement in E. 
faecium; the constructed acm deletion mutant will be useful for assessing the

  2. Evidence for heterogeneity of astrocyte de-differentiation in vitro: astrocytes transform into intermediate precursor cells following induction of ACM from scratch-insulted astrocytes.

    Science.gov (United States)

    Yang, Hao; Qian, Xin-Hong; Cong, Rui; Li, Jing-wen; Yao, Qin; Jiao, Xi-Ying; Ju, Gong; You, Si-Wei

    2010-04-01

    Our previous study demonstrated that mature astrocytes can undergo a de-differentiation process and further transform into pluripotential neural stem cells (NSCs), which might well arise from the effect of diffusible factors released from scratch-insulted astrocytes. However, neurospheres passaged from a single neurosphere derived from de-differentiated astrocytes possessed completely distinct differentiation behaviours, namely heterogeneity of differentiation. Such heterogeneity in cell differentiation has become a crucial but elusive issue. In this study, we show that purified astrocytes can de-differentiate into intermediate precursor cells (IPCs), which express the IPC markers NG2 and A2B5, when scratch-insulted astrocyte-conditioned medium (ACM) is added to the culture. Apart from the number of NG2(+) and A2B5(+) cells, the percentage of proliferative cells labelled with BrdU progressively increased with prolonged culture, ranging from 1 to 10 days. Meanwhile, the protein level of A2B5 in cells also increased significantly. These results reveal that not all astrocytes de-differentiate fully into NSCs directly when induced by ACM; rather, they generate intermediate or more restricted precursor cells that may undergo progressive de-differentiation to generate NSCs.

  3. 80 A/cm2 electron beams from metal targets irradiated by KrCl and XeCl excimer lasers

    Science.gov (United States)

    Beloglazov, A.; Martino, M.; Nassisi, V.

    1996-05-01

    Due to the growing demand for high-current, long-duration electron-beam devices, laser electron sources were investigated in our laboratory. Experiments on electron-beam generation and propagation from aluminium and copper targets illuminated by XeCl (308 nm) and KrCl (222 nm) excimer lasers were carried out under plasma ignition due to laser irradiation. This plasma supplied a spontaneous accelerating electric field of about 370 kV/m without an external accelerating voltage. By applying the modified one-dimensional Poisson equation, we computed the expected current and also estimated the plasma concentration during the accelerating process. At 40 kV of accelerating voltage, an output current density of about 80 A/cm2 was detected from an Al target irradiated by the shorter-wavelength laser.

  4. Proceeding of the ACM/IEEE-CS Joint Conference on Digital Libraries (1st, Roanoke, Virginia, June 24-28, 2001).

    Science.gov (United States)

    Association for Computing Machinery, New York, NY.

    Papers in this Proceedings of the ACM/IEEE-CS Joint Conference on Digital Libraries (Roanoke, Virginia, June 24-28, 2001) discuss: automatic genre analysis; text categorization; automated name authority control; automatic event generation; linked active content; designing e-books for legal research; metadata harvesting; mapping the…

  5. Experimental determination of the partitioning coefficient and volatility of important BVOC oxidation products using the Aerosol Collection Module (ACM) coupled to a PTR-ToF-MS

    Science.gov (United States)

    Gkatzelis, G.; Hohaus, T.; Tillmann, R.; Schmitt, S. H.; Yu, Z.; Schlag, P.; Wegener, R.; Kaminski, M.; Kiendler-Scharr, A.

    2015-12-01

    Atmospheric aerosol can alter the Earth's radiative budget and global climate, but can also affect human health. A dominant contributor to submicrometer particulate matter (PM) is organic aerosol (OA). OA can be either directly emitted, e.g. through combustion processes (primary OA), or formed through the oxidation of organic gases (secondary organic aerosol, SOA). A detailed understanding of SOA formation is important as SOA constitutes a major contribution to the total OA. The partitioning between the gas and particle phase, as well as the volatility of individual components of SOA, is as yet poorly understood, which adds uncertainty and thus complicates climate modelling. In this work, a new experimental methodology was used for compound-specific analysis of organic aerosol. The Aerosol Collection Module (ACM) is a newly developed instrument that deploys an aerodynamic lens to separate the gas and particle phase of an aerosol. The particle phase is directed to a cooled sampling surface. After collection, particles are thermally desorbed and transferred to a detector for further analysis. In the present work, the ACM was coupled to a Proton Transfer Reaction-Time of Flight-Mass Spectrometer (PTR-ToF-MS) to detect and quantify organic compounds partitioning between the gas and particle phase. This experimental approach was used in a set of experiments at the atmosphere simulation chamber SAPHIR to investigate SOA formation. Ozone oxidation, with subsequent photochemical aging, of β-pinene, limonene and real plant emissions from Pinus sylvestris (Scots pine) was studied. Simultaneous measurement of the gas and particle phase using the ACM-PTR-ToF-MS allows partitioning coefficients of important BVOC oxidation products to be reported. Additionally, volatility trends and changes of the SOA with photochemical aging are investigated and compared for all systems studied.
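    The partitioning coefficient that such simultaneous gas- and particle-phase measurements yield can be illustrated with the standard Pankow-type definition; the concentrations below are made-up numbers, and the exact normalization used in the study is not given in the abstract:

```python
def partitioning_coefficient(c_particle, c_gas, c_oa):
    """Pankow-type gas/particle partitioning coefficient K_p (m^3/ug):
    K_p = (F / M) / A, where F is the particle-phase concentration of the
    compound, A its gas-phase concentration, and M the total organic
    aerosol mass concentration, all in ug/m^3."""
    return (c_particle / c_oa) / c_gas

# Illustrative (made-up) concentrations for a single oxidation product
kp = partitioning_coefficient(c_particle=0.5, c_gas=2.0, c_oa=10.0)
print(kp)  # 0.025
```

    A compound with a larger K_p resides preferentially in the particle phase, which is how volatility trends across photochemical aging can be compared.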

  6. Fabrication of 93.7 m long PLD-EuBCO + BaHfO_3 coated conductors with 103 A/cm W at 77 K under 3 T

    International Nuclear Information System (INIS)

    Yoshida, T.; Ibi, A.; Takahashi, T.; Yoshizumi, M.; Izumi, T.; Shiohara, Y.

    2015-01-01

    Highlights: • A 93.7 m long EuBCO + BHO CC with 103 A/cm W at 77 K under 3 T was obtained. • The 93.7 m long CC showed high I_c values and high n-values with high uniformity. • The average I_c value at 77 K under 3 T was estimated from that at 77 K under 0.3 T. - Abstract: Introduction of artificial pinning centers such as BaHfO_3 (BHO), BaZrO_3 (BZO) and BaSnO_3 (BSO) into REBa_2Cu_3O_7_−_δ (REBCO) coated conductor (CC) layers can improve the in-field critical currents (I_c) over wide ranges of temperatures and magnetic fields. In particular, the combination EuBCO + BHO has been found effective for attaining high in-field I_c performance by means of the IBAD/PLD process in short-length samples. In this work, we have successfully fabricated a 93.7 m long EuBCO + BHO CC with 103 A/cm W at 77 K under a magnetic field (B) of 3 T applied perpendicular to the CC (B//c). The 93.7 m long EuBCO + BHO CC had highly uniform I_c values and n-values without any trend of fluctuations, independent of the external field up to 0.3 T. I_c–B–applied angle (θ) profiles of the 93.7 m long EuBCO + BHO CC sample showed high in-field I_c values in all directions of the applied magnetic field, especially B//c (at θ ∼ 180°, I_c = 157 A/cm W) at 77 K under 3 T. The profiles were about the same as those of a short-length sample.

  7. Autonomic Cluster Management System (ACMS): A Demonstration of Autonomic Principles at Work

    Science.gov (United States)

    Baldassari, James D.; Kopec, Christopher L.; Leshay, Eric S.; Truszkowski, Walt; Finkel, David

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined to function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of achieving significant computational capabilities for high-performance computing applications, while simultaneously affording the ability to increase that capability simply by adding more (inexpensive) processors. However, the task of manually managing and configuring a cluster quickly becomes impossible as the cluster grows in size. Autonomic computing is a relatively new approach to managing complex systems that can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management.

  8. Analysis of Residual Nuclide in a ACM and ACCT of 100-MeV proton beamline By measurement X-ray Spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jeong-Min; Yun, Sang-Pil; Kim, Han-Sung; Kwon, Hyeok-Jung; Cho, Yong-Sub [Korea Atomic Energy Research Institute, Gyeongju (Korea, Republic of)

    2015-10-15

    The proton beam is provided to users over an energy range from 20 MeV to 100 MeV. Protons generated from the ion source are accelerated to 100 MeV and delivered to the target through a bending magnet and an AC magnet. During operation, relatively high-dose X-rays are emitted due to collisions of protons with beamline components. This emission remains after the accelerator is turned off, and residual nuclides are analysed through measurement of the X-ray spectrum to identify the components that are their primary source; residual nuclides are detected from the AC magnet (ACM) and associated components (ACCT). Analysis of the X-ray spectra from the AC magnet (ACM) and AC current transformer (ACCT) of the 100 MeV beamline after proton beam irradiation shows that most of the residual nuclides are produced in the stainless steel by beam loss.

  9. Safe Distribution of Declarative Processes

    DEFF Research Database (Denmark)

    Hildebrandt, Thomas; Mukkamala, Raghava Rao; Slaats, Tijs

    2011-01-01

    …a declarative process model generalizing labelled prime event structures to a systems model able to finitely represent ω-regular languages. An operational semantics given as a transition semantics between markings of the graph allows DCR Graphs to be conveniently used as both specification and execution model. The technique for distribution is based on a new general notion of projection of DCR Graphs relative to a subset of labels and events identifying the set of external events that must be communicated from the other processes in the network in order for the distribution to be safe. We prove that for any vector of projections that covers a DCR Graph, the network of synchronously communicating DCR Graphs given by the projections is bisimilar to the original global process graph. We exemplify the distribution technique on a process identified in a case study of a cross-organizational case management system carried out…

  10. Experiments to Distribute Map Generalization Processes

    Science.gov (United States)

    Berli, Justin; Touya, Guillaume; Lokhat, Imran; Regnauld, Nicolas

    2018-05-01

    Automatic map generalization requires the use of computationally intensive processes that are often unable to deal with large datasets. Distributing the generalization process is the only way to make them scalable and usable in practice. But map generalization is a highly contextual process: the surroundings of a generalized map feature need to be known to generalize the feature, which is a problem as distribution might partition the dataset and parallelize the processing of each part. This paper proposes experiments to evaluate past propositions to distribute map generalization and to identify the main remaining issues. The past propositions are first discussed, and then the experiment hypotheses and apparatus are described. The experiments confirmed that regular partitioning was the quickest strategy, but also the least effective in taking context into account. Geographical partitioning, though less effective for now, is quite promising regarding the quality of the results as it better integrates the geographical context.
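    A regular-partitioning strategy of the kind compared in these experiments can be sketched as below. The cell size, buffer logic and point-feature representation are illustrative assumptions; the buffer duplicates features into neighbouring cells so that each partition carries some of the surrounding context a generalization process needs:

```python
from collections import defaultdict

def regular_partition(features, cell_size, buffer=0.0):
    """Assign point features (x, y, id) to square cells of a regular grid.
    A feature is also copied into a neighbouring cell when it lies within
    `buffer` of that cell, so each partition keeps nearby context."""
    cells = defaultdict(list)
    for x, y, fid in features:
        cx, cy = int(x // cell_size), int(y // cell_size)
        # Check the containing cell and its 8 neighbours
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = cx + dx, cy + dy
                # Distance from the point to the cell's bounding box
                x0, y0 = nx * cell_size, ny * cell_size
                ddx = max(x0 - x, 0.0, x - (x0 + cell_size))
                ddy = max(y0 - y, 0.0, y - (y0 + cell_size))
                if (ddx * ddx + ddy * ddy) ** 0.5 <= buffer:
                    cells[(nx, ny)].append(fid)
    return dict(cells)

parts = regular_partition([(5.0, 5.0, "a"), (9.5, 5.0, "b")], cell_size=10, buffer=1.0)
print(parts[(0, 0)])  # ['a', 'b']
print(parts[(1, 0)])  # ['b']  -- 'b' duplicated as context for the next cell
```

    The duplication is exactly what makes regular partitioning fast but context-poor: features near a cell border are seen by both partitions, yet a feature whose relevant context lies farther than the buffer is still generalized without it.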

  11. On Distributed Port-Hamiltonian Process Systems

    NARCIS (Netherlands)

    Lopezlena, Ricardo; Scherpen, Jacquelien M.A.

    2004-01-01

    In this paper we use the term distributed port-Hamiltonian Process Systems (DPHPS) to refer to the result of merging the theory of distributed port-Hamiltonian systems (DPHS) with the theory of process systems (PS). Such a concept is useful for combining the systematic interconnection of PHS with the

  12. Climate Science's Globally Distributed Infrastructure

    Science.gov (United States)

    Williams, D. N.

    2016-12-01

    The Earth System Grid Federation (ESGF) is primarily funded by the Department of Energy's (DOE's) Office of Science (the Office of Biological and Environmental Research [BER] Climate Data Informatics Program and the Office of Advanced Scientific Computing Research Next Generation Network for Science Program), the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), and the National Science Foundation (NSF), the European Infrastructure for the European Network for Earth System Modeling (IS-ENES), and the Australian National University (ANU). Support also comes from other U.S. federal and international agencies. The federation works across multiple worldwide data centers and spans seven international network organizations to provide users with the ability to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. Its architecture employs a series of geographically distributed peer nodes that are independently administered and united by common federation protocols and application programming interfaces (APIs). The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP; output used by the Intergovernmental Panel on Climate Change assessment reports), multiple model intercomparison projects (MIPs; endorsed by the World Climate Research Programme [WCRP]), and the Accelerated Climate Modeling for Energy (ACME; ESGF is included in the overarching ACME workflow process to store model output). ESGF is a successful example of integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community. Data served by ESGF includes not only model output but also observational data from satellites and instruments, reanalysis, and generated images.

  13. Selective desulfurization of cysteine in the presence of Cys(Acm) in polypeptides obtained by native chemical ligation.

    Science.gov (United States)

    Pentelute, Brad L; Kent, Stephen B H

    2007-02-15

    Increased versatility for the synthesis of proteins and peptides by native chemical ligation requires the ability to ligate at positions other than Cys. Here, we report that Raney nickel can be used under standard conditions for the selective desulfurization of Cys in the presence of Cys(Acm). This simple and practical tactic enables the more common Xaa-Ala junctions to be used as ligation sites for the chemical synthesis of Cys-containing peptides and proteins. [reaction: see text].

  14. Palm distributions for log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge

    2017-01-01

    This paper establishes a remarkable result regarding Palm distributions for a log Gaussian Cox process: the reduced Palm distribution for a log Gaussian Cox process is itself a log Gaussian Cox process that differs from the original log Gaussian Cox process only in the intensity function. This new result is used to study functional summaries for log Gaussian Cox processes.
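    The result can be illustrated numerically: for an LGCP whose log-intensity is a Gaussian field with mean μ and covariance c, the reduced Palm distribution at a point u is again an LGCP with the same covariance and Gaussian mean shifted to μ + c(u, ·). The grid, exponential covariance model and parameter values below are arbitrary choices for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize [0, 1]; exponential covariance c(s, t) = s2 * exp(-|s - t| / phi)
n, s2, phi, mu = 200, 1.0, 0.1, 4.0
x = np.linspace(0.0, 1.0, n)
C = s2 * np.exp(-np.abs(x[:, None] - x[None, :]) / phi)

# Sample the Gaussian field Z and per-bin Poisson counts of the Cox process
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
Z = mu + L @ rng.standard_normal(n)
counts = rng.poisson(np.exp(Z) / n)  # bin width 1/n

# Reduced Palm distribution at u: same covariance, mean shifted by c(u, .)
u = n // 2
palm_mean = mu + C[u]
print(palm_mean[u])  # mu + s2 = 5.0
```

    Simulating the Palm process therefore requires only replacing the mean function; the conditioning on a point at u is absorbed entirely into the intensity.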

  15. Understanding flexible and distributed software development processes

    OpenAIRE

    Agerfalk, Par J.; Fitzgerald, Brian

    2006-01-01

    The minitrack on Flexible and Distributed Software Development Processes addresses two important and partially intertwined current themes in software development: process flexibility and globally distributed software development.

  16. Web-Based Distributed XML Query Processing

    NARCIS (Netherlands)

    Smiljanic, M.; Feng, L.; Jonker, Willem; Blanken, Henk; Grabs, T.; Schek, H-J.; Schenkel, R.; Weikum, G.

    2003-01-01

    Web-based distributed XML query processing has gained in importance in recent years due to the widespread popularity of XML on the Web. Unlike centralized and tightly coupled distributed systems, Web-based distributed database systems are highly unpredictable and uncontrollable, with a rather

  17. Design of ET(B) receptor agonists: NMR spectroscopic and conformational studies of ET7-21[Leu7, Aib11, Cys(Acm)15].

    Science.gov (United States)

    Hewage, Chandralal M; Jiang, Lu; Parkinson, John A; Ramage, Robert; Sadler, Ian H

    2002-03-01

    In a previous report we have shown that the endothelin-B receptor-selective linear endothelin peptide, ET-1[Cys (Acm)1,15, Ala3, Leu7, Aib11], folds into an alpha-helical conformation in a methanol-d3/water co-solvent [Hewage et al. (1998) FEBS Lett., 425, 234-238]. To study the requirements for the structure-activity relationships, truncated analogues of this peptide were subjected to further studies. Here we report the solution conformation of ET7-21[Leu7, Aib11, Cys(Acm)15], in a methanol-d3/water co-solvent at pH 3.6, by NMR spectroscopic and molecular modelling studies. Further truncation of this short peptide results in it displaying poor agonist activity. The modelled structure shows that the peptide folds into an alpha-helical conformation between residues Lys9-His16, whereas the C-terminus prefers no fixed conformation. This truncated linear endothelin analogue is pivotal for designing endothelin-B receptor agonists.

  18. Data processing system for real-time control

    International Nuclear Information System (INIS)

    Oasa, K.; Mochizuki, O.; Toyokawa, R.; Yahiro, K.

    1983-01-01

    Real-time control of the large tokamak JT-60 requires various data processing stages between the diagnostic devices and the control system. These stages require high-speed performance, the aim being to provide the information necessary for feedback control during discharges. The architecture of this system is therefore a hierarchical structure of processors, connected to each other by CAMAC modules and an optical communication network, a 5 Mbytes/second CAMAC serial highway. The system has two kinds of intelligence for this purpose. One is the ACM-PU pairs in some torus-hall crates, each with a microcomputerized auxiliary controller and a preprocessing unit; the other is the real-time processor, which has a minicomputer and a preprocessing unit. Most of the real-time processing, for example Abel inversion, is specific to the diagnostic devices; such processing is carried out by an ACM-PU pair in the crate dedicated to the diagnostic device. Some processing, however, computes secondary parameters as functions of primary parameters. A typical example is Zeff, which is a function of Te, Ne and bremsstrahlung intensity. The real-time processor is equipped for such secondary processing and transfers the results. The preprocessing unit (PU) attached to each ACM and to the real-time processor contains a signal processor, which executes in parallel such functions as move, add and multiply during one 200 ns micro-instruction cycle. As the experiment progressed, higher-speed processing was required, so the authors developed the PU-X module, which contains multiple signal processors. After a shot, the inter-shot processor, which consists of general-purpose computers, gathers the data into the database, analyses them, and improves these processes to be more effective.
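    A secondary-parameter computation of the kind delegated to the real-time processor can be sketched as follows. The formula assumes the usual bremsstrahlung scaling (emissivity ∝ Ne² · Zeff · ḡ / √Te); the calibration constant and Gaunt factor are hypothetical placeholders, not values from the JT-60 system:

```python
import math

def zeff(i_brem, ne, te, calib=1.0, gaunt=1.0):
    """Recover Zeff from bremsstrahlung intensity, electron density Ne and
    electron temperature Te, inverting I ~ Ne^2 * Zeff * gaunt / sqrt(Te).
    `calib` folds in geometry and unit conversions (assumed here)."""
    return calib * i_brem * math.sqrt(te) / (ne ** 2 * gaunt)

# Made-up numbers purely to exercise the formula
print(zeff(i_brem=4.0, ne=2.0, te=100.0))  # 10.0
```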

  19. Fabrication of long REBCO coated conductors by PLD process in China

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yijie, E-mail: yjli@sjtu.edu.cn [Key Laboratory of Artificial Structure and Quantum Control, Ministry of Education, Department of Physics and Astronomy, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 20040 (China); Shanghai Superconductor Technology Corporation, Ltd, 28 Jiang Chuan Road, Shanghai 200240 (China); Liu, Linfei; Wu, Xiang [Key Laboratory of Artificial Structure and Quantum Control, Ministry of Education, Department of Physics and Astronomy, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 20040 (China)

    2015-11-15

    Highlights: • SJTU fabricated 100 m class CC tapes with over 300 A/cm on RABiTS tapes in 2011. • 100 m long CC tapes with 500 A/cm have been routinely fabricated on IBAD-MgO tapes. • The process optimization for kilometer-long coated conductor tapes is underway. - Abstract: In China, the First National Key Project on the CC Program started in 2009, focused on developing hundred-meter-class CC tapes based on PLD/RABiTS processes. In this project, SJTU mainly worked on development of all of the functional layer deposition processes, while the Northwest Institute for Non-ferrous Metal Research worked on RABiTS tape fabrication. At the end of the project in 2011, SJTU successfully fabricated hundred-meter-long CC tapes with over 300 A/cm (at 77 K, self field) on RABiTS tapes. To develop high-performance CC tapes by PLD/IBAD-MgO processes, a pilot CC fabrication line was set up at Shanghai Superconductor Technology Corporation, Ltd. in 2013. High-quality long REBCO coated conductors have been successfully fabricated on flexible polycrystalline metal tapes by PLD plus magnetron sputtering and IBAD processes. Under optimized conditions, the IBAD-MgO layers showed pure (0 0 1) orientation and excellent in-plane texture; the in-plane phi-scan rocking curve is 4–6 degrees. AFM observation showed the MgO layer had a very smooth surface, with an RMS roughness of less than 1 nm. On the textured MgO layer, a sputter-deposited single cerium oxide cap-layer showed pure (0 0 1) orientation and excellent in-plane texture of 4–6 degrees. The reel-to-reel PLD process with high deposition rate has already been scaled up to a tape speed of 100 m/h. Hundred-meter-long coated conductor tapes with over 500 A/cm performance have been routinely fabricated, and the process optimization for kilometer-long coated conductor tapes is underway.

  20. Parallel and distributed processing: applications to power systems

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Felix; Murphy, Liam [California Univ., Berkeley, CA (United States). Dept. of Electrical Engineering and Computer Sciences

    1994-12-31

    Applications of parallel and distributed processing to power systems problems are still in the early stages. Rapid progress in computing and communications promises a revolutionary increase in the capacity of distributed processing systems. In this paper, the state of the art in distributed processing technology and applications is reviewed and future trends are discussed. (author) 14 refs., 1 tab.

  1. Multivariate semi-logistic distribution and processes | Umar | Journal ...

    African Journals Online (AJOL)

    Multivariate semi-logistic distribution is introduced and studied. Some characterizations properties of multivariate semi-logistic distribution are presented. First order autoregressive minification processes and its generalization to kth order autoregressive minification processes with multivariate semi-logistic distribution as ...

  2. AVIRIS and TIMS data processing and distribution at the land processes distributed active archive center

    Science.gov (United States)

    Mah, G. R.; Myers, J.

    1993-01-01

    The U.S. Government has initiated the Global Change Research Program, a systematic study of the Earth as a complete system. NASA's contribution to the Global Change Research Program is the Earth Observing System (EOS), a series of orbital sensor platforms and an associated data processing and distribution system. The EOS Data and Information System (EOSDIS) is the archiving, production, and distribution system for data collected by the EOS space segment and uses a multilayer architecture for processing, archiving, and distributing EOS data. The first layer consists of the spacecraft ground stations and processing facilities that receive the raw data from the orbiting platforms and then separate the data by individual sensors. The second layer consists of Distributed Active Archive Centers (DAACs) that process, distribute, and archive the sensor data. The third layer consists of a user science processing network. The EOSDIS is being developed in a phased implementation. The initial phase, Version 0, is a prototype of the operational system. Version 0 activities are based upon existing systems and are designed to provide an EOSDIS-like capability for information management and distribution. An important science support task is the creation of simulated data sets for EOS instruments from precursor aircraft or satellite data. The Land Processes DAAC, at the EROS Data Center (EDC), is responsible for archiving and processing EOS precursor data from airborne instruments such as the Thermal Infrared Multispectral Scanner (TIMS), the Thematic Mapper Simulator (TMS), and the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS). AVIRIS, TIMS, and TMS are flown by the NASA-Ames Research Center (ARC) on an ER-2. The ER-2 flies at 65000 feet and can carry up to three sensors simultaneously. Most jointly collected data sets are somewhat boresighted and roughly registered. The instrument data are being used to construct data sets that simulate the spectral and spatial

  3. Palm distributions for log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus

    This paper reviews useful results related to Palm distributions of spatial point processes and provides a new result regarding the characterization of Palm distributions for the class of log Gaussian Cox processes. This result is used to study functional summary statistics for a log Gaussian Cox process.

  4. Parallel and Distributed Data Processing Using Autonomous ...

    African Journals Online (AJOL)

    Looking at the distributed nature of these networks, data is processed by remote login or Remote Procedure Calls (RPC), which causes congestion in the network bandwidth. This paper proposes a framework where software agents are assigned duties to process the distributed data concurrently and assemble the ...

  5. Process evaluation distributed system

    Science.gov (United States)

    Moffatt, Christopher L. (Inventor)

    2006-01-01

    The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on the observation criteria information. The process evaluation module utilizes a personal digital assistant (PDA). The data display module, in communication with the database server, includes a website for viewing collected process data in a desired metrics form and provides for editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module minimizes the requirement for manual input of the collected process data.

  6. CLIC-ACM: generic modular rad-hard data acquisition system based on CERN GBT versatile link

    International Nuclear Information System (INIS)

    Bielawski, B.; Locci, F.; Magnoni, S.

    2015-01-01

    CLIC is a world-wide collaboration to study the next "terascale" lepton collider, relying upon a very innovative concept of two-beam acceleration. This accelerator, currently under study, will be composed of a sequence of 21,000 two-beam modules. Each module requires more than 300 analogue and digital signals which need to be acquired and controlled in a synchronous way. CLIC-ACM (Acquisition and Control Module) is the generic control and acquisition module developed to accommodate the controls of all these signals for the various sub-systems and their related specifications in terms of data bandwidth, triggering and timing synchronization. This paper describes the system architecture with respect to its radiation tolerance, power consumption and scalability.

  7. Design of distributed systems of hydrolithosphere processes management. A synthesis of distributed management systems

    Science.gov (United States)

    Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.

    2017-10-01

    The paper considers an important problem of designing distributed systems for the management of hydrolithosphere processes. The control actions on the hydrolithosphere processes under consideration are implemented by a set of extractive wells. The article shows a method of defining the approximation links for describing the dynamic characteristics of hydrolithosphere processes. The structure of the distributed regulators used in the management systems for the considered processes is presented. The paper analyses the results of the synthesis of the distributed management system and the results of modelling the closed-loop control system with respect to the parameters of the hydrolithosphere process.

  8. Development and Field Test of a Real-Time Database in the Korean Smart Distribution Management System

    Directory of Open Access Journals (Sweden)

    Sang-Yun Yun

    2014-03-01

    Recently, a distribution management system (DMS) that can conduct periodical system analysis and control by mounting various application programs has been actively developed. In this paper, we summarize the development and demonstration of a database structure that can perform real-time system analysis and control of the Korean smart distribution management system (KSDMS). The developed database structure consists of a common information model (CIM)-based off-line database (DB), a physical DB (PDB) for DB establishment of the operating server, a real-time DB (RTDB) for real-time server operation and remote terminal unit data interconnection, and an application common model (ACM) DB for running application programs. The ACM DB for real-time system analysis and control of the application programs was developed by using a parallel table structure and a linked-list model, thereby providing fast input and output as well as high execution speed of application programs. Furthermore, the ACM DB was configured with hierarchical and non-hierarchical data models to reflect the system models, reducing the DB size and increasing operation speed by excluding system elements unnecessary for analysis and control. The proposed database model was implemented and tested at the Gochaing and Jeju offices using a real system. Through data measurement of the remote terminal units, and through the operation and control of the application programs using the measurement, the performance, speed, and integrity of the proposed database model were validated, thereby demonstrating that this model can be applied to real systems.

  9. ACMS-Data

    Data.gov (United States)

    Department of Homeland Security — The Records of CBP training activities in the academies and in-service field training. This data is for processing by COTS Application Acadis Readiness Suite and is...

  10. Modelling aspects of distributed processing in telecommunication networks

    NARCIS (Netherlands)

    Tomasgard, A; Audestad, JA; Dye, S; Stougie, L; van der Vlerk, MH; Wallace, SW

    1998-01-01

    The purpose of this paper is to formally describe new optimization models for telecommunication networks with distributed processing. Modern distributed networks put more focus on the processing of information and less on the actual transportation of data than we are traditionally used to in

  11. A tutorial on Palm distributions for spatial point processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge

    2017-01-01

    This tutorial provides an introduction to Palm distributions for spatial point processes. Initially, in the context of finite point processes, we give an explicit definition of Palm distributions in terms of their density functions. Then we review Palm distributions in the general case. Finally, we...

  12. Distributed learning process: principles of design and implementation

    Directory of Open Access Journals (Sweden)

    G. N. Boychenko

    2016-01-01

    At the present stage, broad information and communication technologies (ICT) usage in educational practices is one of the leading trends of global education system development. This trend has led to the transformation of instructional interaction models. Scientists have developed the theory of distributed cognition (Salomon, G., Hutchins, E.) and of distributed education and training (Fiore, S. M., Salas, E., Oblinger, D. G., Barone, C. A., Hawkins, B. L.). The educational process is based on two sub-processes, separated in time and space, of learning and teaching, which are aimed at the organization of flexible interactions between learners, teachers and educational content located in different non-centralized places. The purpose of this design research is to find a solution for the problem of formalizing distributed learning process design and realization that is significant in instructional design. The solution to this problem should take into account the specifics of distributed interactions between team members, which become the collective subject of distributed cognition in the distributed learning process. This makes it necessary to design roles and functions of the individual team members performing distributed educational activities. Personal educational objectives should be determined by decomposition of team objectives into functional roles of its members, considering the personal and learning needs and interests of students. Theoretical and empirical methods used in the study: theoretical analysis of philosophical, psychological, and pedagogical literature on the issue; analysis of international standards in the e-learning domain; exploration of practical usage of distributed learning in the academic and corporate sectors; generalization, abstraction, cognitive modelling, and ontology engineering methods. The result of the research is a methodology for design and implementation of a distributed learning process based on the competency approach. Methodology proposed by

  13. Image exploitation and dissemination prototype of distributed image processing

    International Nuclear Information System (INIS)

    Batool, N.; Huqqani, A.A.; Mahmood, A.

    2003-05-01

    The requirements of image processing applications can best be met by using a distributed environment. This report describes a system that draws inferences by utilizing existing LAN resources in a distributed computing environment, using Java and web technology for extensive processing to make it truly system independent. Although the environment has been tested using image processing applications, its design and architecture are truly general and modular, so that it can be used for other applications as well which require distributed processing. Images originating from the server are fed to the workers along with the desired operations to be performed on them. The server distributes the task among the workers, which carry out the required operations and send back the results. This application has been implemented using the Remote Method Invocation (RMI) feature of Java. Java RMI allows an object running in one Java Virtual Machine (JVM) to invoke methods on another JVM, thus providing remote communication between programs written in the Java programming language. RMI can therefore be used to develop distributed applications [1]. We undertook this project to gain a better understanding of distributed systems concepts and their uses for resource-hungry jobs. The image processing application is developed under this environment
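
    The server/worker split described in this record (implemented there with Java RMI) can be sketched in miniature. As an illustrative stand-in only: thread-pool workers take the place of remote JVMs, and the "operation" is a hypothetical pixel inversion on bands of a toy image; none of these names come from the report.

```python
from concurrent.futures import ThreadPoolExecutor

def invert(band):
    """Hypothetical worker operation: invert 8-bit grayscale pixels."""
    return [[255 - px for px in row] for row in band]

# A tiny 4x4 "image"; the server splits it into bands, one per worker task
image = [[16 * i + j for j in range(4)] for i in range(4)]
bands = [image[0:2], image[2:4]]

# Thread-pool workers stand in for the remote workers of the RMI version;
# map() plays the role of the server distributing tasks and collecting results
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(invert, bands))

processed = [row for band in results for row in band]
print(processed[0])
```

    In the RMI design the same pattern holds, except each `invert` call crosses a JVM boundary via a remote method stub.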

  14. SUPERPOSITION OF STOCHASTIC PROCESSES AND THE RESULTING PARTICLE DISTRIBUTIONS

    International Nuclear Information System (INIS)

    Schwadron, N. A.; Dayeh, M. A.; Desai, M.; Fahr, H.; Jokipii, J. R.; Lee, M. A.

    2010-01-01

    Many observations of suprathermal and energetic particles in the solar wind and the inner heliosheath show that distribution functions scale approximately with the inverse of particle speed (v) to the fifth power. Although there are exceptions to this behavior, there is a growing need to understand why this type of distribution function appears so frequently. This paper develops the concept that a superposition of exponential and Gaussian distributions with different characteristic speeds and temperatures shows power-law tails. The particular type of distribution function, f ∝ v⁻⁵, appears in a number of different ways: (1) a series of Poisson-like processes where entropy is maximized with the rates of individual processes inversely proportional to the characteristic exponential speed, (2) a series of Gaussian distributions where the entropy is maximized with the rates of individual processes inversely proportional to temperature and the density of individual Gaussian distributions proportional to temperature, and (3) a series of different diffusively accelerated energetic particle spectra with individual spectra derived from observations (1997-2002) of a multiplicity of different shocks. Thus, we develop a proof-of-concept for the superposition of stochastic processes that give rise to power-law distribution functions.
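
    The superposition mechanism can be checked numerically with a minimal illustration (not the authors' specific construction): an exponential distribution whose characteristic rate is itself Gamma-distributed marginalizes to a Lomax distribution, whose survival function is an exact power law. All parameter choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 4.0, 1_000_000     # Gamma shape for the rate mixture; sample count

# Each elementary process is exponential with its own characteristic rate;
# drawing the rates from Gamma(k, 1) makes the superposed (marginal)
# distribution Lomax, with survival S(x) = (1 + x)^(-k): a power-law tail.
rates = rng.gamma(k, 1.0, size=n)
x = rng.exponential(1.0 / rates)

# Fit the tail exponent from the empirical survival function
xs = np.sort(x)
surv = 1.0 - np.arange(1, n + 1) / n
mask = (xs > 1.0) & (surv > 1e-4)
slope = np.polyfit(np.log1p(xs[mask]), np.log(surv[mask]), 1)[0]
print(f"fitted tail exponent: {slope:.2f} (theory: -{k:.0f})")
```

    With shape k = 4 the mixture's density falls off as x⁻⁵, the same exponent the paper singles out for f ∝ v⁻⁵.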

  15. Improving simulated long-term responses of vegetation to temperature and precipitation extremes using the ACME land model

    Science.gov (United States)

    Ricciuto, D. M.; Warren, J.; Guha, A.

    2017-12-01

    While carbon and energy fluxes in current Earth system models generally have reasonable instantaneous responses to extreme temperature and precipitation events, they often do not adequately represent the long-term impacts of these events. For example, simulated net primary productivity (NPP) may decrease during an extreme heat wave or drought, but may recover rapidly to pre-event levels following the conclusion of the extreme event. However, field measurements indicate that long-lasting damage to leaves and other plant components often occurs, potentially affecting the carbon and energy balance for months after the extreme event. The duration and frequency of such extreme conditions are likely to shift in the future, and therefore it is critical for Earth system models to better represent these processes for more accurate predictions of future vegetation productivity and land-atmosphere feedbacks. Here we modify the structure of the Accelerated Climate Model for Energy (ACME) land surface model to represent long-term impacts and test the improved model against observations from experiments that applied extreme conditions in growth chambers. Additionally, we test the model against eddy covariance measurements that followed extreme conditions at selected locations in North America, and against satellite-measured vegetation indices following regional extreme events.

  16. First-Passage-Time Distribution for Variable-Diffusion Processes

    Science.gov (United States)

    Barney, Liberty; Gunaratne, Gemunu H.

    2017-05-01

    First-passage-time distribution, which presents the likelihood of a stock reaching a pre-specified price at a given time, is useful in establishing the value of financial instruments and in designing trading strategies. First-passage-time distribution for Wiener processes has a single peak, while that for stocks exhibits a notable second peak within a trading day. This feature has only been discussed sporadically—often dismissed as due to insufficient/incorrect data or circumvented by conversion to tick time—and to the best of our knowledge has not been explained in terms of the underlying stochastic process. It was shown previously that intra-day variations in the market can be modeled by a stochastic process containing two variable-diffusion processes (Hua et al., Physica A 419:221-233, 2015). We show here that the first-passage-time distribution of this two-stage variable-diffusion model does exhibit a behavior similar to the empirical observation. In addition, we find that an extended model incorporating overnight price fluctuations exhibits intra- and inter-day behavior similar to those of empirical first-passage-time distributions.
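
    The single-peaked Wiener baseline mentioned in this record can be reproduced by direct simulation. A minimal Monte Carlo sketch with illustrative parameters (not the paper's market model): the first-passage time of drifted Brownian motion to a level is inverse-Gaussian distributed with mean level/drift.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, level = 1.0, 1.0, 1.0      # drift, volatility, barrier
dt, n_steps, n_paths = 0.002, 5000, 5000

x = np.zeros(n_paths)
fpt = np.full(n_paths, np.nan)        # NaN marks "not yet crossed"
for step in range(1, n_steps + 1):
    alive = np.isnan(fpt)
    if not alive.any():
        break
    # Euler step for the paths that have not crossed yet
    x[alive] += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
    crossed = alive & (x >= level)
    fpt[crossed] = step * dt

fpt = fpt[~np.isnan(fpt)]
# Theory: FPT of drifted Brownian motion is inverse-Gaussian, mean = level/mu
print(f"mean first-passage time: {fpt.mean():.3f} (theory: {level / mu:.3f})")
```

    The small upward bias relative to theory comes from discrete monitoring: crossings between time steps are detected late.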

  17. Limiting conditional distributions for birth-death processes

    NARCIS (Netherlands)

    Kijima, M.; Nair, M.G.; Pollett, P.K.; van Doorn, Erik A.

    1997-01-01

    In a recent paper one of us identified all of the quasi-stationary distributions for a non-explosive, evanescent birth-death process for which absorption is certain, and established conditions for the existence of the corresponding limiting conditional distributions. Our purpose is to extend these

  18. Distributed Aerodynamic Sensing and Processing Toolbox

    Science.gov (United States)

    Brenner, Martin; Jutte, Christine; Mangalam, Arun

    2011-01-01

    A Distributed Aerodynamic Sensing and Processing (DASP) toolbox was designed and fabricated for flight test applications with an Aerostructures Test Wing (ATW) mounted under the fuselage of an F-15B on the Flight Test Fixture (FTF). DASP monitors and processes the aerodynamics with the structural dynamics using nonintrusive, surface-mounted, hot-film sensing. This aerodynamic measurement tool benefits programs devoted to static/dynamic load alleviation, body freedom flutter suppression, buffet control, improvement of aerodynamic efficiency through cruise control, supersonic wave drag reduction through shock control, etc. This DASP toolbox measures local and global unsteady aerodynamic load distribution with distributed sensing. It determines correlation between aerodynamic observables (aero forces) and structural dynamics, and allows control authority increase through aeroelastic shaping and active flow control. It offers improvements in flutter suppression and, in particular, body freedom flutter suppression, as well as aerodynamic performance of wings for increased range/endurance of manned/ unmanned flight vehicles. Other improvements include inlet performance with closed-loop active flow control, and development and validation of advanced analytical and computational tools for unsteady aerodynamics.

  19. Immature osteoblastic MG63 cells possess two calcitonin gene-related peptide receptor subtypes that respond differently to [Cys(Acm)(2,7)] calcitonin gene-related peptide and CGRP(8-37).

    Science.gov (United States)

    Kawase, Tomoyuki; Okuda, Kazuhiro; Burns, Douglas M

    2005-10-01

    Calcitonin gene-related peptide (CGRP) is clearly an anabolic factor in skeletal tissue, but the distribution of CGRP receptor (CGRPR) subtypes in osteoblastic cells is poorly understood. We previously demonstrated that the CGRPR expressed in osteoblastic MG63 cells does not match exactly the known characteristics of the classic subtype 1 receptor (CGRPR1). The aim of the present study was to further characterize the MG63 CGRPR using a selective agonist of the putative CGRPR2, [Cys(Acm)(2,7)]CGRP, and a relatively specific antagonist of CGRPR1, CGRP(8-37). [Cys(Acm)(2,7)]CGRP acted as a significant agonist only upon ERK dephosphorylation, whereas this analog effectively antagonized CGRP-induced cAMP production and phosphorylation of cAMP response element-binding protein (CREB) and p38 MAPK. Although it had no agonistic action when used alone, CGRP(8-37) potently blocked CGRP actions on cAMP, CREB, and p38 MAPK but had less of an effect on ERK. Schild plot analysis of the latter data revealed that the apparent pA2 value for ERK is clearly distinguishable from those of the other three plots as judged using the 95% confidence intervals. Additional assays using 3-isobutyl-1-methylxanthine or the PKA inhibitor N-(2-[p-bromocinnamylamino]ethyl)-5-isoquinolinesulfonamide hydrochloride (H-89) indicated that the cAMP-dependent pathway was predominantly responsible for CREB phosphorylation, partially involved in ERK dephosphorylation, and not involved in p38 MAPK phosphorylation. Considering previous data from Scatchard analysis of [125I]CGRP binding in connection with these results, these findings suggest that MG63 cells possess two functionally distinct CGRPR subtypes that show almost identical affinity for CGRP but different sensitivity to CGRP analogs: one is best characterized as a variation of CGRPR1, and the second may be a novel variant of CGRPR2.

  20. Characteristics of the Audit Processes for Distributed Informatics Systems

    Directory of Open Access Journals (Sweden)

    Marius POPA

    2009-01-01

    The paper covers the following issues: the main characteristics and examples of distributed informatics systems and the main categories of differences among them; concepts, principles, techniques and fields for auditing distributed informatics systems; the concepts and classes of the standard term and its characteristics; and examples of standards, guidelines, procedures and controls for auditing distributed informatics systems. The distributed informatics systems are characterized by the following issues: development process, resources, implemented functionalities, architectures, system classes, and particularities. The audit framework has two sides: the audit process and the auditors. The audit process must be conducted in accordance with the standard specifications in the IT&C field. The auditors must meet the ethical principles and must have a high level of professional skills and competence in the IT&C field.

  1. Do Adaptive Comanagement Processes Lead to Adaptive Comanagement Outcomes? A Multicase Study of Long-term Outcomes Associated with the National Riparian Service Team's Place-based Riparian Assistance

    Directory of Open Access Journals (Sweden)

    Jill A. Smedstad

    2013-12-01

    Adaptive comanagement (ACM) is a novel approach to environmental governance that combines the dynamic learning features of adaptive management with the linking and network features of collaborative management. There is growing interest in the potential for ACM to resolve conflicts around natural resource management and contribute to greater social and ecological resilience, but little is known about how to catalyze long-lasting ACM arrangements. We contribute to knowledge on this topic by evaluating the National Riparian Service Team's (NRST) efforts to catalyze ACM of public lands riparian areas in seven cases in the western U.S. We found that the NRST's approach offers a relatively novel model for integrating joint fact-finding, multiple forms of knowledge, and collaborative problem solving to improve public lands riparian grazing management. With this approach, learning and dialogue often helped facilitate the development of shared understanding and trust, key features of ACM. Their activities also influenced changes in assessment, monitoring, and management approaches to public lands riparian area grazing, also indicative of a transition to ACM. Whereas these effects often aligned with the NRST's immediate objectives, i.e., to work through a specific issue or point of conflict, there was little evidence of long-term effects beyond the specific issue or intervention; that is, in most cases the initiative did not influence longer-term changes in place-based governance and institutions. Our results suggest that the success of interventions aimed at catalyzing the transformation of governance arrangements toward ACM may hinge on factors external to the collaborative process, such as the presence or absence of (1) dynamic local leadership and (2) high-quality agreements regarding next steps for the group. Efforts to establish long-lasting ACM institutions may also face significant constraints and barriers, including existing laws and regulations

  2. Scaleup of powder metallurgy processed Nb-Al multifilamentary wire

    International Nuclear Information System (INIS)

    Thieme, C.; Foner, S.; Otubo, J.; Pourrahimi, S.; Schwartz, B.; Zhang, H.

    1983-01-01

    Powder metallurgy processed Nb-Al superconducting wires were fabricated from billets up to 45 mm o.d. with nominal areal reduction ratios, R, up to 2 × 10⁵, Nb powder sizes from 40 to 300 μm from various sources, Al powder sizes from 9 to 75 μm, Al concentrations from 3 to 25 wt% Al, and with a wide range of heat treatments. All the compacts used tap-density powder in a Cu tube and swaging and/or rod rolling and subsequent wire drawing. Both single-strand and bundled wires were made. Overall critical current densities, J_c, of 2 × 10⁴ A/cm² at 14 T and 10⁴ A/cm² at 16 T were achieved for 6 to 8 wt% Al in Nb.

  3. Joint probability distributions for a class of non-Markovian processes.

    Science.gov (United States)

    Baule, A; Friedrich, R

    2005-02-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H. C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single-time probability distributions to the case of N-time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fractional time derivatives reflecting the non-Markovian character of the process.
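
    The "integral transform from distributions of a Markovian process" has a simple one-time analogue that can be sketched numerically: a Brownian motion evaluated at a random operational time (subordination), so the observed density is a mixture of Gaussian (Markovian) densities. The Gamma law for the operational time below is an illustrative assumption, not Fogedby's construction.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Operational (internal) time s is random; the observed variable is a
# Brownian motion evaluated at that random time, so its distribution is an
# integral transform (mixture) of Gaussian densities over the law of s.
s = rng.gamma(2.0, 1.0, size=n)        # illustrative subordinator law
x = rng.normal(0.0, np.sqrt(s))        # W(s): Gaussian with variance s

# By conditioning, Var(X) = E[s], while the mixture itself is non-Gaussian
print(f"Var(X) = {x.var():.3f}, E[s] = {s.mean():.3f}")
```

    The same conditioning argument, applied at N times jointly, is what lifts the single-time transform to the N-time joint distributions studied in the paper.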

  4. A Technical Survey on Optimization of Processing Geo Distributed Data

    Science.gov (United States)

    Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.

    2018-04-01

    With growing cloud services and technology, a number of geographically distributed data centers have emerged to store large amounts of data. Analysis of geo-distributed data is required by various services for data processing, storage of essential information, etc.; processing this geo-distributed data and performing analytics on it is a challenging task. The distributed data processing is accompanied by issues in storage, computation and communication. The key issues to be dealt with are time efficiency, cost minimization, and utility maximization. This paper describes various optimization methods like end-to-end multiphase, G-MR, etc., using techniques like Map-Reduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE, and AMP (Ant Colony Optimization) to handle these issues. In this paper the various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving Quality of Service and on reducing computation and communication cost. SAGE achieves performance improvement in processing geo-distributed data sets.
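
    A core idea behind several of the surveyed schemes is to aggregate locally at each data center so that only small partial results, not raw records, cross the wide-area links. A minimal sketch with assumed toy data (the site names and records below are hypothetical, not from the survey):

```python
from collections import Counter
from functools import reduce

# Hypothetical per-site event logs; each "data center" aggregates locally,
# so only a small Counter, rather than the raw records, crosses the WAN.
site_a = ["error", "ok", "ok", "error"]
site_b = ["ok", "error", "ok"]

def local_reduce(records):
    return Counter(records)              # map + combine at the site

partials = [local_reduce(site_a), local_reduce(site_b)]
totals = reduce(lambda acc, c: acc + c, partials)   # cheap global reduce
print(dict(totals))
```

    The communication cost here scales with the number of distinct keys per site rather than the number of records, which is the trade-off the geo-distributed schedulers exploit.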

  5. 40 CFR 761.80 - Manufacturing, processing and distribution in commerce exemptions.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Manufacturing, processing and..., PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Exemptions § 761.80 Manufacturing, processing and... any change in the manner of processing and distributing, importing (manufacturing), or exporting of...

  6. Mistaking geography for biology: inferring processes from species distributions.

    Science.gov (United States)

    Warren, Dan L; Cardillo, Marcel; Rosauer, Dan F; Bolnick, Daniel I

    2014-10-01

    Over the past few decades, there has been a rapid proliferation of statistical methods that infer evolutionary and ecological processes from data on species distributions. These methods have led to considerable new insights, but they often fail to account for the effects of historical biogeography on present-day species distributions. Because the geography of speciation can lead to patterns of spatial and temporal autocorrelation in the distributions of species within a clade, this can result in misleading inferences about the importance of deterministic processes in generating spatial patterns of biodiversity. In this opinion article, we discuss ways in which patterns of species distributions driven by historical biogeography are often interpreted as evidence of particular evolutionary or ecological processes. We focus on three areas that are especially prone to such misinterpretations: community phylogenetics, environmental niche modelling, and analyses of beta diversity (compositional turnover of biodiversity). Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.

  7. Humic acid removal from aqueous solutions by peroxielectrocoagulation process

    Directory of Open Access Journals (Sweden)

    Ahmad Reza Yazdanbakhsh

    2015-06-01

    Background: Natural organic matter is the cause of many problems associated with water treatment, such as the presence of disinfection by-products (DBPs) and membrane fouling during water filtration. In this study, the performance of the peroxi-electrocoagulation process (PEP) was investigated for the removal of humic acids (HAs) from aqueous solutions. Methods: PEP was carried out for the removal of HA using a Plexiglas reactor with a volume of 2 L, fitted with iron electrodes and a direct current (DC) supply. Samples were taken at various values of pH (2-4), current density (1 and 2 A/cm²), hydrogen peroxide (50-150 mg/L) and reaction time (5-20 minutes) and then filtered to remove sludge formed during the reaction. Finally, the HA concentration was measured by UV absorbance at 254 nm (UV254). Results: Results indicated that increasing the concentration of H2O2 from 50 to 150 mg/L increased HA removal efficiency from 83% to 94.5%. The highest removal efficiency was observed at pH 3.0; by increasing the pH to the alkaline range, the efficiency of the process was reduced. HA removal efficiency was highest at a current density of 1 A/cm²; increasing the current density beyond 1 A/cm² caused a decrease in removal efficiency. Results of this study showed that under the optimum operating range for the process ([current density] = 1 A/cm², [hydrogen peroxide concentration] = 150 mg/L, [reaction time] = 20 minutes and [pH] = 3.0), HA removal efficiency reached 98%. Conclusion: It can be concluded that PEP has the potential to be utilized for cost-effective removal of HA from aqueous solutions.

  8. Distributed quantum information processing via quantum dot spins

    International Nuclear Information System (INIS)

    Jun, Liu; Qiong, Wang; Le-Man, Kuang; Hao-Sheng, Zeng

    2010-01-01

    We propose a scheme to engineer a non-local two-qubit phase gate between two remote quantum-dot spins. Along with one-qubit local operations, one can in principle perform various types of distributed quantum information processing. The scheme employs a linearly polarised photon interacting one after the other with two remote quantum-dot spins in cavities. Due to the optical spin selection rule, the photon acquires a Faraday rotation after the interaction process. By measuring the polarisation of the final output photon, a non-local two-qubit phase gate between the two remote quantum-dot spins is constituted. Our scheme may have very important applications in distributed quantum information processing
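
    Why a phase gate plus local operations suffices for distributed processing can be checked algebraically: the two-qubit phase (controlled-Z) gate is entangling, mapping the product state |+>|+> to a maximally entangled state. A small linear-algebra sketch (the matrix below is the standard CZ, not a model of the photon-spin dynamics):

```python
import numpy as np

# Two-qubit phase (controlled-Z) gate: |11> -> -|11>, all other basis
# states unchanged; acting on |+>|+> it produces a maximally entangled state.
cz = np.diag([1.0, 1.0, 1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)

state = cz @ np.kron(plus, plus)
# Schmidt coefficients = singular values of the 2x2 reshaped state vector
schmidt = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
print(schmidt)   # two equal coefficients indicate maximal entanglement
```

    A product state would instead have a single nonzero Schmidt coefficient, which is why such a gate, together with local rotations, generates the entanglement needed for distributed protocols.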

  9. Beowulf Distributed Processing and the United States Geological Survey

    Science.gov (United States)

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing

  10. A CFD model for analysis of performance, water and thermal distribution, and mechanical related failure in PEM fuel cells

    Directory of Open Access Journals (Sweden)

    Maher A.R. Sadiq Al-Baghdadi

    2016-07-01

    This paper presents a comprehensive three-dimensional, multi-phase, non-isothermal model of a Proton Exchange Membrane (PEM) fuel cell that incorporates the significant physical processes and key parameters affecting fuel cell performance. The model construction involves equations derivation, boundary conditions setting, and a solution algorithm flow chart. Equations in the gas flow channels, gas diffusion layers (GDLs), catalyst layers (CLs), and membrane, as well as equations governing cell potential and hygro-thermal stresses, are described. The algorithm flow chart starts from input of the desired cell current density, initialization, iteration of the equations solution, and finalization by calculating the cell potential. In order to analyze performance, water and thermal distribution, and mechanical related failure in the cell, the equations are solved using a computational fluid dynamics (CFD) code. Performance analysis includes a performance curve, which plots the cell potential (V) against nominal current density (A/cm²), as well as losses. Velocity vectors of gas and liquid water, liquid water saturation, and water content profile are calculated. Thermal distribution is then calculated together with hygro-thermal stresses and deformation. The CFD model was executed under boundary conditions of 20°C room temperature, 35% relative humidity, and 1 MPa pressure on the lower surface. Parameter values of the membrane electrode assembly (MEA) and other base conditions are selected. A cell with dimensions of 1 mm × 1 mm × 50 mm is used as the object of analysis. The nominal current density of 1.4 A/cm² is given as the input of the CFD calculation. The results show that the model represents well the performance curve obtained through experiment. Moreover, it can be concluded that the model can help in understanding complex processes in the cell which are hard to study experimentally, and also provides a computer-aided tool for design and optimization of PEM

  11. Can Pearlite form Outside of the Hultgren Extrapolation of the Ae3 and Acm Phase Boundaries?

    Science.gov (United States)

    Aranda, M. M.; Rementeria, R.; Capdevila, C.; Hackenberg, R. E.

    2016-02-01

    It is usually assumed that ferrous pearlite can form only when the average austenite carbon concentration C0 lies between the extrapolated Ae3 (γ/α) and Acm (γ/θ) phase boundaries (the "Hultgren extrapolation"). This "mutual supersaturation" criterion for cooperative lamellar nucleation and growth is critically examined from a historical perspective and in light of recent experiments on coarse-grained hypoeutectoid steels which show pearlite formation outside the Hultgren extrapolation. This criterion, at least as interpreted in terms of the average austenite composition, is shown to be unnecessarily restrictive. The carbon fluxes evaluated from Brandt's solution are sufficient to allow pearlite growth both inside and outside the Hultgren extrapolation. As for the feasibility of the nucleation events leading to pearlite, the only criterion is that there are some local regions of austenite inside the Hultgren extrapolation, even if the average austenite composition is outside.

  12. Gamma processes and peaks-over-threshold distributions for time-dependent reliability

    International Nuclear Information System (INIS)

    Noortwijk, J.M. van; Weide, J.A.M. van der; Kallen, M.J.; Pandey, M.D.

    2007-01-01

    In the evaluation of structural reliability, a failure is defined as the event in which stress exceeds a resistance that is liable to deterioration. This paper presents a method to combine the two stochastic processes of deteriorating resistance and fluctuating load for computing the time-dependent reliability of a structural component. The deterioration process is modelled as a gamma process, which is a stochastic process with independent non-negative increments having a gamma distribution with identical scale parameter. The stochastic process of loads is generated by a Poisson process. The variability of the random loads is modelled by a peaks-over-threshold distribution (such as the generalised Pareto distribution). These stochastic processes of deterioration and load are combined to evaluate the time-dependent reliability
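The combination of the two stochastic processes described above lends itself to a short Monte Carlo sketch: gamma-distributed deterioration increments between Poisson load arrivals, with generalized Pareto load peaks. The function name and all parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def survival_probability(t_max, r0, c, scale, lam, gp_shape, gp_scale,
                         n_sims=2000, seed=0):
    """Monte Carlo estimate of P(no failure before t_max).

    Resistance r0 degrades by a stationary gamma process (shape rate c,
    scale parameter `scale`); load peaks arrive as a Poisson process with
    rate `lam` and have generalized Pareto magnitudes (shape gp_shape > 0).
    A failure is the event in which a load exceeds the deteriorated resistance.
    """
    rng = np.random.default_rng(seed)
    survived = 0
    for _ in range(n_sims):
        t, deterioration, failed = 0.0, 0.0, False
        while True:
            dt = rng.exponential(1.0 / lam)  # waiting time to next load peak
            t += dt
            if t > t_max:
                break
            # independent gamma increment of the deterioration over (t - dt, t]
            deterioration += rng.gamma(c * dt, scale)
            # generalized Pareto load peak via inverse-CDF sampling
            load = gp_scale * (rng.uniform() ** (-gp_shape) - 1.0) / gp_shape
            if load > r0 - deterioration:
                failed = True
                break
        survived += not failed
    return survived / n_sims
```

A quick sanity check of the sketch: a component with a large safety margin should survive almost surely over the horizon, while an initially marginal one should fail in most realizations.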

  13. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    Science.gov (United States)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
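The partitioning step can be illustrated with a toy sketch: below, a greedy correlation-threshold rule stands in for the affinity propagation clustering named in the abstract, grouping controlled variables whose trajectories move together into one subsystem. The function name, the threshold value, and the grouping rule are invented for the example.

```python
import numpy as np

def partition_by_correlation(Y, threshold=0.8):
    """Greedy grouping of controlled variables into subsystem clusters.

    Y: (samples, variables) array of controlled-variable trajectories.
    Variables whose absolute correlation with a cluster seed exceeds
    `threshold` join that seed's cluster; each cluster becomes a subsystem.
    """
    R = np.abs(np.corrcoef(Y, rowvar=False))
    unassigned = list(range(R.shape[0]))
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)          # first free variable seeds a cluster
        members = [seed]
        for j in unassigned[:]:
            if R[seed, j] >= threshold:   # strongly coupled -> same subsystem
                members.append(j)
                unassigned.remove(j)
        clusters.append(members)
    return clusters
```

With two strongly coupled variables and one independent one, the sketch yields two subsystems, mirroring the decomposition idea on a miniature scale.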

  14. Enterococcus faecium biofilm formation: identification of major autolysin AtlAEfm, associated Acm surface localization, and AtlAEfm-independent extracellular DNA Release.

    Science.gov (United States)

    Paganelli, Fernanda L; Willems, Rob J L; Jansen, Pamela; Hendrickx, Antoni; Zhang, Xinglin; Bonten, Marc J M; Leavis, Helen L

    2013-04-16

    Enterococcus faecium is an important multidrug-resistant nosocomial pathogen causing biofilm-mediated infections in patients with medical devices. Insight into E. faecium biofilm pathogenesis is pivotal for the development of new strategies to prevent and treat these infections. In several bacteria, a major autolysin is essential for extracellular DNA (eDNA) release in the biofilm matrix, contributing to biofilm attachment and stability. In this study, we identified and functionally characterized the major autolysin of E. faecium E1162 by a bioinformatic genome screen followed by insertional gene disruption of six putative autolysin genes. Insertional inactivation of locus tag EfmE1162_2692 resulted in resistance to lysis, reduced eDNA release, deficient cell attachment, decreased biofilm, decreased cell wall hydrolysis, and significant chaining compared to that of the wild type. Therefore, locus tag EfmE1162_2692 was considered the major autolysin in E. faecium and renamed atlAEfm. In addition, AtlAEfm was implicated in cell surface exposure of Acm, a virulence factor in E. faecium, and thereby facilitates binding to collagen types I and IV. This is a novel feature of enterococcal autolysins not described previously. Furthermore, we identified (and localized) autolysin-independent DNA release in E. faecium that contributes to cell-cell interactions in the atlAEfm mutant and is important for cell separation. In conclusion, AtlAEfm is the major autolysin in E. faecium and contributes to biofilm stability and Acm localization, making AtlAEfm a promising target for treatment of E. faecium biofilm-mediated infections. IMPORTANCE Nosocomial infections caused by Enterococcus faecium have rapidly increased, and treatment options have become more limited. This is due not only to increasing resistance to antibiotics but also to biofilm-associated infections. DNA is released in biofilm matrix via cell lysis, caused by autolysin, and acts as a matrix stabilizer. 
In this study

  15. Tempered stable distributions stochastic models for multiscale processes

    CERN Document Server

    Grabchak, Michael

    2015-01-01

    This brief is concerned with tempered stable distributions and their associated Lévy processes. It is a good text for researchers interested in learning about tempered stable distributions. A tempered stable distribution is one which takes a stable distribution and modifies its tails to make them lighter. The motivation for this class comes from the fact that infinite-variance stable distributions appear to provide a good fit to data in a variety of situations, but the extremely heavy tails of these models are not realistic for most real-world applications. The idea of using distributions that modify the tails of stable models to make them lighter seems to have originated in the influential paper of Mantegna and Stanley (1994). Since then, these distributions have been extended and generalized in a variety of ways. They have been applied to a wide variety of areas including mathematical finance, biostatistics, computer science, and physics.

  16. Parallel and distributed processing in power system simulation and control

    Energy Technology Data Exchange (ETDEWEB)

    Falcao, Djalma M [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia

    1994-12-31

    Recent advances in computer technology will certainly have a great impact on the methodologies used in power system expansion and operational planning as well as in real-time control. Parallel and distributed processing are among the new technologies that present great potential for application in these areas. Parallel computers use multiple functional or processing units to speed up computation, while distributed processing computer systems are collections of computers joined together by high-speed communication networks, having many objectives and advantages. The paper presents some ideas for the use of parallel and distributed processing in power system simulation and control. It also comments on some of the current research work in these topics and presents a summary of the work presently being developed at COPPE. (author) 53 refs., 2 figs.

  17. Distributed Processing System for Restoration of Electric Power Distribution Network Using Two-Layered Contract Net Protocol

    Science.gov (United States)

    Kodama, Yu; Hamagami, Tomoki

    A distributed processing system for restoration of electric power distribution networks using a two-layered CNP is proposed. The goal of this study is to develop a restoration system which adjusts to the future power network with distributed generators. The state of the art of this study is that the two-layered CNP is applied to a distributed computing environment in practical use. The two-layered CNP has two classes of agents in the network, named field agents and operating agents. In order to avoid conflicts of tasks, the operating agent controls the privilege for managers to send task announcement messages in CNP. This technique realizes coordination between agents which work asynchronously in parallel with others. Moreover, this study implements the distributed processing system using a de facto standard multi-agent framework, JADE (Java Agent DEvelopment Framework). This study conducts simulation experiments of power distribution network restoration and compares the proposed system with the previous system. The results confirm the effectiveness of the proposed system.
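The announce-bid-award cycle at the heart of CNP can be sketched in a few lines. This single-layer sketch omits the operating agent's privilege control from the two-layered scheme; the agent class, capacity field, and award rule are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class FieldAgent:
    """A bidder: offers its spare restoration capacity if it can cover the task."""
    name: str
    spare_capacity: float

    def bid(self, demand):
        # respond to a task announcement only if the load can be restored
        return self.spare_capacity if self.spare_capacity >= demand else None

def contract_net(demand, agents):
    """One manager round: announce the task, collect bids, award the contract."""
    bids = {a.name: a.bid(demand) for a in agents}
    valid = {name: b for name, b in bids.items() if b is not None}
    if not valid:
        return None  # no agent can take the task; re-announce or split it
    return max(valid, key=valid.get)  # award to the largest spare capacity
```

In the two-layered variant, a manager would first obtain announcement privilege from the operating agent before calling a round like this, which is what prevents two managers from contracting the same field agent for conflicting tasks.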

  18. Point processes and the position distribution of infinite boson systems

    International Nuclear Information System (INIS)

    Fichtner, K.H.; Freudenberg, W.

    1987-01-01

    It is shown that to each locally normal state of a boson system one can associate a point process that can be interpreted as the position distribution of the state. The point process contains all information one can get by position measurements and is determined by the latter. On the other hand, to each so-called Σ^c-point process Q they relate a locally normal state with position distribution Q

  19. Computer program for source distribution process in radiation facility

    International Nuclear Information System (INIS)

    Al-Kassiri, H.; Abdul Ghani, B.

    2007-08-01

    Computer simulation of dose distribution using Visual Basic has been done according to the arrangement and activities of Co-60 sources. This program provides the dose distribution in treated products depending on the product density and desired dose. The program is useful for optimization of the source distribution during the loading process. There is good agreement between calculated data from the program and experimental data. (Author)

  20. Central role for GSK3β in the pathogenesis of arrhythmogenic cardiomyopathy.

    Science.gov (United States)

    Chelko, Stephen P; Asimaki, Angeliki; Andersen, Peter; Bedja, Djahida; Amat-Alarcon, Nuria; DeMazumder, Deeptankar; Jasti, Ravirasmi; MacRae, Calum A; Leber, Remo; Kleber, Andre G; Saffitz, Jeffrey E; Judge, Daniel P

    2016-04-21

    Arrhythmogenic cardiomyopathy (ACM) is characterized by redistribution of junctional proteins, arrhythmias, and progressive myocardial injury. We previously reported that SB216763 (SB2), annotated as a GSK3β inhibitor, reverses disease phenotypes in a zebrafish model of ACM. Here, we show that SB2 prevents myocyte injury and cardiac dysfunction in vivo in two murine models of ACM at baseline and in response to exercise. SB2-treated mice with desmosome mutations showed improvements in ventricular ectopy and myocardial fibrosis/inflammation as compared with vehicle-treated (Veh-treated) mice. GSK3β inhibition improved left ventricle function and survival in sedentary and exercised Dsg2 mut/mut mice compared with Veh-treated Dsg2 mut/mut mice and normalized intercalated disc (ID) protein distribution in both mutant mice. GSK3β showed diffuse cytoplasmic localization in control myocytes but ID redistribution in ACM mice. Identical GSK3β redistribution is present in ACM patient myocardium but not in normal hearts or other cardiomyopathies. SB2 reduced total GSK3β protein levels but not phosphorylated Ser9-GSK3β in ACM mice. Constitutively active GSK3β worsens ACM in mutant mice, while GSK3β shRNA silencing in ACM cardiomyocytes prevents abnormal ID protein distribution. These results highlight a central role for GSK3β in the complex phenotype of ACM and provide further evidence that pharmacologic GSK3β inhibition improves cardiomyopathies due to desmosome mutations.

  1. Achieving Uniform Carriers Distribution in MBE Grown Compositionally Graded InGaN Multiple-Quantum-Well LEDs

    KAUST Repository

    Mishra, Pawan; Janjua, Bilal; Ng, Tien Khee; Shen, Chao; Salhi, Abdelmajid; Alyamani, Ahmed; El-Desouki, Munir; Ooi, Boon S.

    2015-01-01

    We investigated the design and growth of compositionally-graded InGaN multiple quantum wells (MQW) based light-emitting diode (LED) without an electron-blocking layer (EBL). Numerical investigation showed uniform carrier distribution in the active region, and higher radiative recombination rate for the optimized graded-MQW design, i.e. In0→xGa1→(1-x)N / InxGa(1-x)N / Inx→0Ga(1-x)→1N, as compared to the conventional stepped-MQW-LED. The composition-grading schemes, such as linear, parabolic, and Fermi-function profiles were numerically investigated for comparison. The stepped- and graded-MQW-LED were then grown using plasma assisted molecular beam epitaxy (PAMBE) through surface-stoichiometry optimization based on reflection high-energy electron-diffraction (RHEED) in-situ observations. Stepped- and graded-MQW-LED showed efficiency roll over at 160 A/cm2 and 275 A/cm2, respectively. The extended threshold current density roll-over (droop) in graded-MQW-LED is due to the improvement in carrier uniformity and radiative recombination rate, consistent with the numerical simulation.

  3. Fouling distribution in forward osmosis membrane process.

    Science.gov (United States)

    Lee, Junseok; Kim, Bongchul; Hong, Seungkwan

    2014-06-01

    Fouling behavior along the length of membrane module was systematically investigated by performing simple modeling and lab-scale experiments of forward osmosis (FO) membrane process. The flux distribution model developed in this study showed a good agreement with experimental results, validating the robustness of the model. This model demonstrated, as expected, that the permeate flux decreased along the membrane channel due to decreasing osmotic pressure differential across the FO membrane. A series of fouling experiments were conducted under the draw and feed solutions at various recoveries simulated by the model. The simulated fouling experiments revealed that higher organic (alginate) fouling and thus more flux decline were observed at the last section of a membrane channel, as foulants in feed solution became more concentrated. Furthermore, the water flux in FO process declined more severely as the recovery increased due to more foulants transported to membrane surface with elevated solute concentrations at higher recovery, which created favorable solution environments for organic adsorption. The fouling reversibility also decreased at the last section of the membrane channel, suggesting that fouling distribution on FO membrane along the module should be carefully examined to improve overall cleaning efficiency. Lastly, it was found that such fouling distribution observed with co-current flow operation became less pronounced in counter-current flow operation of FO membrane process. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
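The declining-flux behavior that the model predicts can be reproduced with a simple segment-by-segment mass balance. The sketch below assumes co-current flow, flux proportional to the bulk osmotic pressure difference, and solute mass conservation on both sides; the function name and all parameter values are wholly illustrative, not the authors' model.

```python
import numpy as np

def fo_flux_profile(n_seg, pi_draw, pi_feed, A, q_draw, q_feed, area):
    """Water flux in each segment of a co-current FO membrane module.

    Flux J = A * (pi_draw - pi_feed); the permeate picked up in a segment
    dilutes the draw stream and concentrates the feed stream downstream.
    Returns an array of segment fluxes from inlet to outlet.
    """
    dA = area / n_seg
    flux = np.zeros(n_seg)
    for i in range(n_seg):
        J = A * (pi_draw - pi_feed)           # segment water flux
        flux[i] = J
        qp = J * dA                           # permeate crossing this segment
        pi_draw *= q_draw / (q_draw + qp)     # dilution of the draw side
        pi_feed *= q_feed / (q_feed - qp)     # concentration of the feed side
        q_draw += qp
        q_feed -= qp
    return flux
```

Running this toy model shows the monotone flux decline along the channel that the abstract describes, and explains why foulants accumulate preferentially toward the last section, where the feed is most concentrated.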

  4. Additive Construction with Mobile Emplacement (ACME) / Automated Construction of Expeditionary Structures (ACES) Materials Delivery System (MDS)

    Science.gov (United States)

    Mueller, R. P.; Townsend, I. I.; Tamasy, G. J.; Evers, C. J.; Sibille, L. J.; Edmunson, J. E.; Fiske, M. R.; Fikes, J. C.; Case, M.

    2018-01-01

    The purpose of the Automated Construction of Expeditionary Structures, Phase 3 (ACES 3) project is to incorporate the Liquid Goods Delivery System (LGDS) into the Dry Goods Delivery System (DGDS) structure to create an integrated and automated Materials Delivery System (MDS) for 3D printing structures with ordinary Portland cement (OPC) concrete. ACES 3 is a prototype for 3-D printing barracks for soldiers in forward bases, here on Earth. The LGDS supports ACES 3 by storing liquid materials, mixing recipe batches of liquid materials, and working with the Dry Goods Feed System (DGFS) previously developed for ACES 2, combining the materials that are eventually extruded out of the print nozzle. Automated Construction of Expeditionary Structures, Phase 3 (ACES 3) is a project led by the US Army Corps of Engineers (USACE) and supported by NASA. The equivalent 3D printing system for construction in space is designated Additive Construction with Mobile Emplacement (ACME) by NASA.

  5. Resource depletion promotes automatic processing: implications for distribution of practice.

    Science.gov (United States)

    Scheel, Matthew H

    2010-12-01

    Recent models of cognition include two processing systems: an automatic system that relies on associative learning, intuition, and heuristics, and a controlled system that relies on deliberate consideration. Automatic processing requires fewer resources and is more likely when resources are depleted. This study showed that prolonged practice on a resource-depleting mental arithmetic task promoted automatic processing on a subsequent problem-solving task, as evidenced by faster responding and more errors. Distribution of practice effects (0, 60, 120, or 180 sec. between problems) on rigidity also disappeared when groups had equal time on resource-depleting tasks. These results suggest that distribution of practice effects is reducible to resource availability. The discussion includes implications for interpreting discrepancies in the traditional distribution of practice effect.

  6. The brain as a distributed intelligent processing system: an EEG study.

    Science.gov (United States)

    da Rocha, Armando Freitas; Rocha, Fábio Theoto; Massad, Eduardo

    2011-03-15

    Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. The present results support these claims and the neural efficiency hypothesis.

  7. Proceedings from the Annual Army Environmental R&D Symposium (16th) Held 23-25 June 1992 at Fort Magruder Inn and Conference Center, Williamsburg, Virginia

    Science.gov (United States)

    1992-06-01

    photoactivity. The second prong was to evaluate the perovskite materials' ability to utilize the lattice oxygen for the oxidation step, and was found unsuccessful... [Illegible OCR fragment of a survey checklist: (24) damage to ACM, localized or distributed; (25) proximity (P) of ACM; friable]

  8. Mapping the Asthma Care Process: Implications for Research and Practice.

    Science.gov (United States)

    Dima, Alexandra Lelia; de Bruin, Marijn; Van Ganse, Eric

    2016-01-01

    Whether people with asthma gain and maintain control over their condition depends not only on the availability of effective drugs, but also on multiple patient and health care professional (HCP) behaviors. Research in asthma rarely considers how these behaviors interact with each other and drug effectiveness to determine health outcomes, which may limit real-life applicability of findings. The objective of this study was to develop a logic process model (Asthma Care Model; ACM) that explains how patient and HCP behaviors impact on the asthma care process. Within a European research project on asthma (ASTRO-LAB), we reviewed asthma care guidelines and empirical literature, and conducted qualitative interviews with patients and HCPs. Findings were discussed with the project team and respiratory care experts and integrated in a causal model. The model outlines a causal sequence of treatment events, from diagnosis and assessment to treatment prescription, drug exposure, and health outcomes. The relationships between these components are moderated by patient behaviors (medication adherence, symptom monitoring, managing triggers, and exacerbations) and HCP behaviors (medical care and self-management support). Modifiable and nonmodifiable behavioral determinants influence the behaviors of patients and HCPs. The model is dynamic as it includes feedback loops of behavioral and clinical outcomes, which influence future patient and HCP decision making. Key evidence for each relationship is summarized to derive research priorities and clinical recommendations. The ACM model is of interest to both researchers and practitioners, and intended as a first version (ACM-v1) of a common framework for generating and translating research evidence in asthma care. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Joint Probability Distributions for a Class of Non-Markovian Processes

    OpenAIRE

    Baule, A.; Friedrich, R.

    2004-01-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H.C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single time probability distributions to the case of N-time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fr...

  10. Wigner Ville Distribution in Signal Processing, using Scilab Environment

    Directory of Open Access Journals (Sweden)

    Petru Chioncel

    2011-01-01

    Full Text Available The Wigner-Ville distribution offers a visual display of quantitative information about the way a signal's energy is distributed in both time and frequency. Through that, this distribution embodies the fundamental concepts of Fourier and time-domain analysis. The energy of the signal is distributed so that specific frequencies are localized in time by the group delay time, and at specific instants in time the frequency is given by the instantaneous frequency. The net positive volume of the Wigner distribution is numerically equal to the signal's total energy. The paper shows the application of the Wigner-Ville distribution in the field of signal processing, using the Scilab environment.
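Although the paper works in Scilab, the distribution itself is straightforward to compute in any numerical environment. A minimal numpy sketch of the discrete (pseudo) Wigner-Ville distribution, with the lag window limited by the signal length available around each instant, might look like this; note that a pure tone at bin f0 appears at frequency index 2*f0 under this common discretization.

```python
import numpy as np

def wigner_ville(x):
    """Discrete (pseudo) Wigner-Ville distribution of an analytic signal x.

    Returns an (N, N) real array W: row n is the frequency content at
    time instant n, obtained as the FFT over the symmetric lag product
    x[n+m] * conj(x[n-m]).
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        L = min(n, N - 1 - n)                 # largest symmetric lag available
        kernel = np.zeros(N, dtype=complex)
        for m in range(-L, L + 1):
            kernel[m % N] = x[n + m] * np.conj(x[n - m])
        # kernel is conjugate-symmetric, so the FFT is real up to rounding
        W[n] = np.fft.fft(kernel).real
    return W
```

As a check of the energy property quoted above, the frequency marginal at instant n, (1/N) * W[n].sum(), equals the instantaneous power |x[n]|^2.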

  11. Distributed data processing for public health surveillance

    Directory of Open Access Journals (Sweden)

    Yih Katherine

    2006-09-01

    Full Text Available Abstract Background Many systems for routine public health surveillance rely on centralized collection of individual, identifiable personal health information (PHI) records. Although individual, identifiable patient records are essential for conditions for which there is mandated reporting, such as tuberculosis or sexually transmitted diseases, they are not routinely required for effective syndromic surveillance. Public concern about the routine collection of large quantities of PHI to support non-traditional public health functions may make alternative surveillance methods that do not rely on centralized identifiable PHI databases increasingly desirable. Methods The National Bioterrorism Syndromic Surveillance Demonstration Program (NDP) is an example of one alternative model. All PHI in this system is initially processed within the secured infrastructure of the health care provider that collects and holds the data, using uniform software distributed and supported by the NDP. Only highly aggregated count data are transferred to the datacenter for statistical processing and display. Results Detailed, patient-level information is readily available to the health care provider to elucidate signals observed in the aggregated data, or for ad hoc queries. We briefly describe the benefits and disadvantages associated with this distributed processing model for routine automated syndromic surveillance. Conclusion For well-defined surveillance requirements, the model can be successfully deployed with very low risk of inadvertent disclosure of PHI – a feature that may make participation in surveillance systems more feasible for organizations and more appealing to the individuals whose PHI they hold. It is also possible to design and implement distributed systems to support non-routine public health needs if required.

  12. Cross-coherent vector sensor processing for spatially distributed glider networks.

    Science.gov (United States)

    Nichols, Brendan; Sabra, Karim G

    2015-09-01

    Autonomous underwater gliders fitted with vector sensors can be used as a spatially distributed sensor array to passively locate underwater sources. However, to date, the positional accuracy required for robust array processing (especially coherent processing) is not achievable using dead-reckoning while the gliders remain submerged. To obtain such accuracy, the gliders can be temporarily surfaced to allow for global positioning system contact, but the acoustically active sea surface introduces locally additional sensor noise. This letter demonstrates that cross-coherent array processing, which inherently mitigates the effects of local noise, outperforms traditional incoherent processing source localization methods for this spatially distributed vector sensor network.

  13. The brain as a distributed intelligent processing system: an EEG study.

    Directory of Open Access Journals (Sweden)

    Armando Freitas da Rocha

    Full Text Available BACKGROUND: Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. METHODOLOGY AND PRINCIPAL FINDINGS: In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. CONCLUSION: The present results support these claims and the neural efficiency hypothesis.

  14. The Brain as a Distributed Intelligent Processing System: An EEG Study

    Science.gov (United States)

    da Rocha, Armando Freitas; Rocha, Fábio Theoto; Massad, Eduardo

    2011-01-01

    Background Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. Methodology and Principal Findings In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. Conclusion The present results support these claims and the neural efficiency hypothesis. PMID:21423657

  15. Multilingual Information Discovery and AccesS (MIDAS): A Joint ACM DL'99/ ACM SIGIR'99 Workshop.

    Science.gov (United States)

    Oard, Douglas; Peters, Carol; Ruiz, Miguel; Frederking, Robert; Klavans, Judith; Sheridan, Paraic

    1999-01-01

    Discusses a multidisciplinary workshop that addressed issues concerning internationally distributed information networks. Highlights include multilingual information access in media other than character-coded text; cross-language information retrieval and multilingual metadata; and evaluation of multilingual systems. (LRW)

  16. Compiling software for a hierarchical distributed processing system

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-12-31

    Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendants; and sending to the selected node only the compiled software to be executed by the selected node or the selected node's descendants.
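The selection rule in the final clauses, forwarding to a child only the artifacts that the child or its descendants will run, amounts to a tree walk. The sketch below illustrates that rule under invented assumptions: the hierarchy is encoded as a parent-to-children dict, and one compiled artifact exists per target node; none of the names come from the patent.

```python
def descendants(tree, node):
    """All nodes below `node` in a hierarchy given as {parent: [children, ...]}."""
    found = set()
    for child in tree.get(node, []):
        found.add(child)
        found |= descendants(tree, child)
    return found

def artifacts_to_send(tree, child, compiled):
    """Subset of compiled artifacts ({target node: artifact}) that a compiling
    node must forward to `child`: the child's own artifacts plus those of the
    child's descendants, and nothing else."""
    targets = {child} | descendants(tree, child)
    return {node: art for node, art in compiled.items() if node in targets}
```

Each tier repeats the same selection on its own subtree, so every node ends up holding exactly the binaries its subtree executes.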

  17. Determination of material distribution in heading process of small bimetallic bar

    Science.gov (United States)

    Presz, Wojciech; Cacko, Robert

    2018-05-01

    The electrical connectors mostly have silver contacts joined by riveting. In order to reduce costs, the core of the contact rivet can be replaced with a cheaper material, e.g. copper. There is a wide range of commercially available bimetallic (silver-copper) rivets on the market for the production of contacts. This creates new conditions in the riveting process, because the object being riveted is bimetallic. In the analyzed example it is a small object, which can be placed at the border of microforming. Based on FEM modeling of the loading process of bimetallic rivets with different material distributions, the desired distribution was chosen and the choice was justified. Possible material distributions were parameterized with two parameters referring to desirable distribution characteristics. The parameter Coefficient of Mutual Interactions of Plastic Deformations and the method of its determination have been proposed. The parameter is determined based on two-parameter stress-strain curves and is a function of these parameters and the range of equivalent strains occurring in the analyzed process. The proposed method was used for the upsetting process of the bimetallic head of an electrical contact. A nomogram was established to predict the distribution of materials in the head of the rivet and to support the appropriate selection of a pair of materials to achieve the desired distribution.

  18. Standard services for the capture, processing, and distribution of packetized telemetry data

    Science.gov (United States)

    Stallings, William H.

    1989-01-01

    Standard functional services for the capture, processing, and distribution of packetized data are discussed with particular reference to the future implementation of packet processing systems, such as those for the Space Station Freedom. The major functions are listed under the following major categories: input processing, packet processing, and output processing. A functional block diagram of a packet data processing facility is presented, showing the distribution of the various processing functions as well as the primary data flow through the facility.

  19. Enforcement of entailment constraints in distributed service-based business processes.

    Science.gov (United States)

    Hummer, Waldemar; Gaubatz, Patrick; Strembeck, Mark; Zdun, Uwe; Dustdar, Schahram

    2013-11-01

    A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s). We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes. Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications which enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five selected service-based processes from existing literature. Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to the order of several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and clean of security enforcement code. Our approach decouples the concerns of (non-technical) domain experts from technical details of entailment constraint enforcement.
The developed framework integrates seamlessly with WS-BPEL and the Web
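
A toy runtime check can make the two constraint kinds above concrete (a minimal Python sketch under simplifying assumptions, not the paper's DSL or its WS-BPEL transformation; all class and task names are hypothetical):

```python
class EntailmentEnforcer:
    """Toy enforcement of the two constraint kinds described above:
    mutually exclusive tasks must be performed by different subjects,
    bound tasks must be performed by the same subject."""

    def __init__(self, exclusive=(), bound=()):
        self.exclusive = [frozenset(pair) for pair in exclusive]
        self.bound = [frozenset(pair) for pair in bound]
        self.performed = {}  # task -> subject who performed it

    def allowed(self, subject, task):
        for pair in self.exclusive:
            if task in pair:
                (other,) = pair - {task}
                # mutual exclusion: same subject may not do both tasks
                if self.performed.get(other) == subject:
                    return False
        for pair in self.bound:
            if task in pair:
                (other,) = pair - {task}
                done_by = self.performed.get(other)
                # binding: once one task is done, the other is bound to that subject
                if done_by is not None and done_by != subject:
                    return False
        return True

    def perform(self, subject, task):
        if not self.allowed(subject, task):
            raise PermissionError(f"{subject} may not perform {task}")
        self.performed[task] = subject

enforcer = EntailmentEnforcer(exclusive=[("request", "approve")],
                              bound=[("sign", "countersign")])
enforcer.perform("alice", "request")
```

Here a subject who performed `request` is blocked from `approve` (mutual exclusion), while only the subject who performed `sign` may later perform `countersign` (binding).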

  20. Accelerated life testing design using geometric process for pareto distribution

    OpenAIRE

    Mustafa Kamal; Shazia Zarrin; Arif Ul Islam

    2013-01-01

    In this paper the geometric process is used for the analysis of accelerated life testing under constant stress for Pareto Distribution. Assuming that the lifetimes under increasing stress levels form a geometric process, estimates of the parameters are obtained by using the maximum likelihood method for complete data. In addition, asymptotic interval estimates of the parameters of the distribution using Fisher information matrix are also obtained. The statistical properties of the parameters ...
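
As a rough illustration of the geometric-process assumption (not the paper's estimator; the 0-indexed scaling X_k = Y_k / a**k and all function names are assumptions), lifetimes observed at higher stress levels can be rescaled back to the base distribution, after which the standard complete-data Pareto shape MLE with known scale applies:

```python
import math

def pool_lifetimes(lifetimes_by_level, a):
    """Undo the geometric scaling X_k = Y_k / a**k (k = 0, 1, ...) so that
    observations from all stress levels share the base distribution."""
    return [x * a**k
            for k, level in enumerate(lifetimes_by_level)
            for x in level]

def pareto_shape_mle(samples, theta):
    """Complete-data MLE of the Pareto shape with known scale theta:
    alpha_hat = n / sum(log(x_i / theta))."""
    return len(samples) / sum(math.log(x / theta) for x in samples)
```

With two stress levels and ratio a = 2, a level-1 observation is doubled before pooling; the pooled sample then feeds the ordinary MLE.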

  1. Transparent checkpointing and process migration in a distributed system

    OpenAIRE

    2004-01-01

    A distributed system for creating a checkpoint for a plurality of processes running on the distributed system. The distributed system includes a plurality of compute nodes with an operating system executing on each compute node. A checkpoint library resides at the user level on each of the compute nodes, and the checkpoint library is transparent to the operating system residing on the same compute node and to the other compute nodes. Each checkpoint library uses a windowed messaging logging p...

  2. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  3. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  4. A Java based environment to control and monitor distributed processing systems

    International Nuclear Information System (INIS)

    Legrand, I.C.

    1997-01-01

    Distributed processing systems are considered to meet the challenging requirements of triggering and data acquisition systems for future HEP experiments. The aim of this work is to present a software environment to control and monitor large scale parallel processing systems based on a distributed client-server approach developed in Java. One server task may control several processing nodes, switching elements or controllers for different sub-systems. Servers are designed as multi-thread applications for efficient communications with other objects. Servers communicate between themselves by using Remote Method Invocation (RMI) in a peer-to-peer mechanism. This distributed server layer has to provide a dynamic and transparent access from any client to all the resources in the system. The graphical user interface programs, which are platform independent, may be transferred to any client via the http protocol. In this scheme the control and monitor tasks are distributed among servers, and the network controls the flow of information among servers and clients, providing a flexible mechanism for monitoring and controlling large heterogeneous distributed systems. (author)

  5. Proceedings: Distributed digital systems, plant process computers, and networks

    International Nuclear Information System (INIS)

    1995-03-01

    These are the proceedings of a workshop on Distributed Digital Systems, Plant Process Computers, and Networks held in Charlotte, North Carolina on August 16-18, 1994. The purpose of the workshop was to provide a forum for technology transfer, technical information exchange, and education. The workshop was attended by more than 100 representatives of electric utilities, equipment manufacturers, engineering service organizations, and government agencies. The workshop consisted of three days of presentations, exhibitions, a panel discussion and attendee interactions. Original plant process computers at the nuclear power plants are becoming obsolete, making it increasingly difficult for them to support plant operations and maintenance effectively. Some utilities have already replaced their plant process computers by more powerful modern computers while many other utilities intend to replace their aging plant process computers in the future. Information on recent and planned implementations is presented. Choosing an appropriate communications and computing network architecture facilitates integrating new systems and provides functional modularity for both hardware and software. Control room improvements such as CRT-based distributed monitoring and control, as well as digital decision and diagnostic aids, can improve plant operations. Commercially available digital products connected to the plant communications system are now readily available to provide distributed processing where needed. Plant operations, maintenance activities, and engineering analyses can be supported in a cost-effective manner. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  6. Stationary distributions of stochastic processes described by a linear neutral delay differential equation

    International Nuclear Information System (INIS)

    Frank, T D

    2005-01-01

    Stationary distributions of processes are derived that involve a time delay and are defined by a linear stochastic neutral delay differential equation. The distributions are Gaussian distributions. The variances of the Gaussian distributions are either monotonically increasing or decreasing functions of the time delays. The variances become infinite when fixed points of corresponding deterministic processes become unstable. (letter to the editor)

  7. The role of silver in the processing and properties of Bi-2212

    International Nuclear Information System (INIS)

    Lang, T.; Heeb, B.; Buhl, D.

    1994-01-01

    The influence of the silver content and the oxygen partial pressure on the solidus temperature and the weight loss during melting of Bi2Sr2Ca1Cu2Ox has been examined by means of DTA and TGA. By decreasing the oxygen partial pressure the solidus is lowered (e.g. ΔT = 59 degrees C by decreasing pO2 from 1 atm to 0.001 atm) and the weight loss is increased. The addition of silver causes two effects: (a) the solidus is further decreased (e.g. 2 wt% Ag lowers the solidus temperature by up to 25 degrees C, depending on the oxygen partial pressure); (b) the weight loss during melting is reduced. Thick films (10-20 μm in thickness) with 0 and 5 wt% silver and bulk samples with 0 and 2.7 wt% silver were melt processed in flowing oxygen on a silver substrate in the DTA, allowing the observation of the melting process and good temperature control. The critical current densities are strongly dependent on the maximum processing temperature. The highest jc in thick films (8000 A/cm2 at 77 K, 0 T) was reached by melting 7 degrees C above the solidus temperature. The silver addition shows no significant effect on the processing parameters or the superconducting properties. The highest jc for bulk samples (1 mm in thickness) was obtained by partial melting at 900 degrees C or 880 degrees C, depending on the silver content of the powder (0 or 2.7 wt%). The jc of the samples is slightly enhanced from 1800 A/cm2 (at 77 K, 0 T) to 2000 A/cm2 by the silver addition. To be able to reach at least 80% of the maximum critical current density, the temperature has to be controlled within a window of 5 degrees C for thick films and 17 degrees C for bulk samples.

  8. Building enterprise systems with ODP an introduction to open distributed processing

    CERN Document Server

    Linington, Peter F; Tanaka, Akira; Vallecillo, Antonio

    2011-01-01

    The Reference Model of Open Distributed Processing (RM-ODP) is an international standard that provides a solid basis for describing and building widely distributed systems and applications in a systematic way. It stresses the need to build these systems with evolution in mind by identifying the concerns of major stakeholders and then expressing the design as a series of linked viewpoints. Although RM-ODP has been a standard for more than ten years, many practitioners are still unaware of it. Building Enterprise Systems with ODP: An Introduction to Open Distributed Processing offers a gentle pa

  9. Ruin Probabilities and Aggregrate Claims Distributions for Shot Noise Cox Processes

    DEFF Research Database (Denmark)

    Albrecher, H.; Asmussen, Søren

    We consider a risk process Rt where the claim arrival process is a superposition of a homogeneous Poisson process and a Cox process with a Poisson shot noise intensity process, capturing the effect of sudden increases of the claim intensity due to external events. The distribution of the aggregate claim size is investigated under these assumptions. For both light-tailed and heavy-tailed claim size distributions, asymptotic estimates for infinite-time and finite-time ruin probabilities are derived. Moreover, we discuss an extension of the model to an adaptive premium rule that is dynamically adjusted according to past claims experience.

  10. Cumulative distribution functions associated with bubble-nucleation processes in cavitation

    KAUST Repository

    Watanabe, Hiroshi

    2010-11-15

    Bubble-nucleation processes of a Lennard-Jones liquid are studied by molecular dynamics simulations. Waiting time, which is the lifetime of a superheated liquid, is determined for several system sizes, and the apparent finite-size effect of the nucleation rate is observed. From the cumulative distribution function of the nucleation events, the bubble-nucleation process is found to be not a simple Poisson process but a Poisson process with an additional relaxation time. The parameters of the exponential distribution associated with the process are determined by taking the relaxation time into account, and the apparent finite-size effect is removed. These results imply that the use of the arithmetic mean of the waiting time until a bubble grows to the critical size leads to an incorrect estimation of the nucleation rate. © 2010 The American Physical Society.
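
The correction the authors describe, i.e. that averaging raw waiting times misestimates the nucleation rate when a relaxation time is present, can be sketched with a shifted-exponential fit (an illustrative sketch, not the paper's analysis; all function names are assumptions):

```python
def shifted_exponential_fit(waiting_times):
    """MLE for a shifted (offset) exponential distribution:
    offset t0 = min(t), scale tau = mean(t) - min(t)."""
    t0 = min(waiting_times)
    tau = sum(waiting_times) / len(waiting_times) - t0
    return t0, tau

def naive_rate(waiting_times):
    """Reciprocal of the arithmetic-mean waiting time; this conflates the
    relaxation time with the exponential scale."""
    return len(waiting_times) / sum(waiting_times)

def corrected_rate(waiting_times):
    """Rate estimate after removing the relaxation-time offset."""
    _, tau = shifted_exponential_fit(waiting_times)
    return 1.0 / tau
```

For waiting times of 1, 2, and 3 time units the naive estimate is 1/2 while the offset-corrected estimate is 1, illustrating how ignoring the relaxation time biases the rate downward.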

  11. Syntactic processing is distributed across the language system.

    Science.gov (United States)

    Blank, Idan; Balewski, Zuzanna; Mahowald, Kyle; Fedorenko, Evelina

    2016-02-15

    Language comprehension recruits an extended set of regions in the human brain. Is syntactic processing localized to a particular region or regions within this system, or is it distributed across the entire ensemble of brain regions that support high-level linguistic processing? Evidence from aphasic patients is more consistent with the latter possibility: damage to many different language regions and to white-matter tracts connecting them has been shown to lead to similar syntactic comprehension deficits. However, brain imaging investigations of syntactic processing continue to focus on particular regions within the language system, often parts of Broca's area and regions in the posterior temporal cortex. We hypothesized that, whereas the entire language system is in fact sensitive to syntactic complexity, the effects in some regions may be difficult to detect because of the overall lower response to language stimuli. Using an individual-subjects approach to localizing the language system, shown in prior work to be more sensitive than traditional group analyses, we indeed find responses to syntactic complexity throughout this system, consistent with the findings from the neuropsychological patient literature. We speculate that such distributed nature of syntactic processing could perhaps imply that syntax is inseparable from other aspects of language comprehension (e.g., lexico-semantic processing), in line with current linguistic and psycholinguistic theories and evidence. Neuroimaging investigations of syntactic processing thus need to expand their scope to include the entire system of high-level language processing regions in order to fully understand how syntax is instantiated in the human brain. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Particularities of Verification Processes for Distributed Informatics Applications

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2013-01-01

    Full Text Available This paper presents distributed informatics applications and the characteristics of their development cycle. It defines the concept of verification and identifies the differences from software testing. Particularities of the software testing and software verification processes are described. The verification steps and necessary conditions are presented, and influence factors of verification quality are established. Software optimality verification is analyzed and some metrics are defined for the verification process.

  13. Temperature profiles from mechanical bathythermograph (MBT) casts from the USS ACME in the North Pacific Ocean in support of the Fleet Observations of Oceanographic Data (FLOOD) project from 1968-04-05 to 1968-04-25 (NODC Accession 6800642)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — MBT data were collected from the USS ACME in support of the Fleet Observations of Oceanographic Data (FLOOD) project. Data were collected by US Navy; Ships of...

  14. Distributed processing and analysis of ATLAS experimental data

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment has been taking data steadily since Autumn 2009, collecting close to 1 fb-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the first...

  15. Distributed processing and analysis of ATLAS experimental data

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment has been taking data steadily since Autumn 2009, and has collected so far over 5 fb-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the...

  16. Ionization processes in a transient hollow cathode discharge before electric breakdown: statistical distribution

    International Nuclear Information System (INIS)

    Zambra, M.; Favre, M.; Moreno, J.; Wyndham, E.; Chuaqui, H.; Choi, P.

    1998-01-01

    The charge formation processes in the hollow cathode region (HCR) of a transient hollow cathode discharge have been studied in their final phase. The statistical distributions that describe the different ionization processes have been represented by Gaussian distributions. Nevertheless, a better representation of these distributions was observed when the pressure is near its minimum value, just before breakdown.

  17. The Marginal Distributions of a Crossing Time and Renewal Numbers Related with Two Poisson Processes are as Ph-Distributions

    Directory of Open Access Journals (Sweden)

    Mir G. H. Talpur

    2006-01-01

    Full Text Available In this paper we consider how to find the marginal distributions of a crossing time and renewal numbers related with two Poisson processes by using probability arguments. The obtained results show that the one-dimensional marginal distributions are (N+1)-order PH-distributions.

  18. IPNS distributed-processing data-acquisition system

    International Nuclear Information System (INIS)

    Haumann, J.R.; Daly, R.T.; Worlton, T.G.; Crawford, R.K.

    1981-01-01

    The Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory is a major new user-oriented facility which has come on line for basic research in neutron scattering and neutron radiation damage. This paper describes the distributed-processing data-acquisition system which handles data collection and instrument control for the time-of-flight neutron-scattering instruments. The topics covered include the overall system configuration, each of the computer subsystems, communication protocols linking each computer subsystem, and an overview of the software which has been developed

  19. An Educational Tool for Interactive Parallel and Distributed Processing

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2011-01-01

    In this paper we try to describe how the Modular Interactive Tiles System (MITS) can be a valuable tool for introducing students to interactive parallel and distributed processing programming. This is done by providing an educational hands-on tool that allows a change of representation of the abstract problems related to designing interactive parallel and distributed systems. Indeed, MITS seems to bring a series of goals into the education, such as parallel programming, distributedness, communication protocols, master dependency, software behavioral models, adaptive interactivity, feedback, connectivity, topology, island modeling, and user and multiuser interaction, which can hardly be found in other tools. Finally, we introduce the system of modular interactive tiles as a tool for easy, fast, and flexible hands-on exploration of these issues, and through examples show how to implement interactive...

  20. Flexible distributed architecture for semiconductor process control and experimentation

    Science.gov (United States)

    Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.

    1997-01-01

    Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD interferometry based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that utilizes the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: specific implementation of any one task does not restrict the implementation of another. The low level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server manages connections from internet/intranet (web) based clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket-based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.
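
One common way such predefined TCP/IP socket messages are framed is a length prefix plus a serialized payload (a hypothetical sketch; the MIT system's actual message format is not specified in this record, and the field names below are assumptions):

```python
import json
import struct

def encode_message(msg: dict) -> bytes:
    """Frame a message as a 4-byte big-endian length prefix + JSON payload,
    so the receiver knows exactly how many bytes to read from the socket."""
    payload = json.dumps(msg, sort_keys=True).encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def decode_message(frame: bytes) -> dict:
    """Parse one framed message back into a dict."""
    (length,) = struct.unpack(">I", frame[:4])
    return json.loads(frame[4:4 + length].decode("utf-8"))

# Illustrative message a cell controller might exchange with an equipment
# controller; the fields are invented for this sketch.
msg = {"src": "cell_controller", "cmd": "start_etch", "recipe": 7}
frame = encode_message(msg)
```

Length-prefixed framing avoids the classic TCP pitfall that a single `recv` may return a partial or merged message.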

  1. Dedifferentiation, Proliferation, and Redifferentiation of Adult Mammalian Cardiomyocytes After Ischemic Injury.

    Science.gov (United States)

    Wang, Wei Eric; Li, Liangpeng; Xia, Xuewei; Fu, Wenbin; Liao, Qiao; Lan, Cong; Yang, Dezhong; Chen, Hongmei; Yue, Rongchuan; Zeng, Cindy; Zhou, Lin; Zhou, Bin; Duan, Dayue Darrel; Chen, Xiongwen; Houser, Steven R; Zeng, Chunyu

    2017-08-29

    Adult mammalian hearts have a limited ability to generate new cardiomyocytes. Proliferation of existing adult cardiomyocytes (ACMs) is a potential source of new cardiomyocytes. Understanding the fundamental biology of ACM proliferation could be of great clinical significance for treating myocardial infarction (MI). We aim to understand the process and regulation of ACM proliferation and its role in new cardiomyocyte formation in post-MI mouse hearts. β-Actin-green fluorescent protein transgenic mice and fate-mapping Myh6-MerCreMer-tdTomato/lacZ mice were used to trace the fate of ACMs. In a coculture system with neonatal rat ventricular myocytes, ACM proliferation was documented with clear evidence of cytokinesis observed with time-lapse imaging. Cardiomyocyte proliferation in the adult mouse post-MI heart was detected by cell cycle markers and 5-ethynyl-2-deoxyuridine incorporation analysis. Echocardiography was used to measure cardiac function, and histology was performed to determine infarction size. In vitro, mononucleated and bi/multinucleated ACMs were able to proliferate at a similar rate (7.0%) in the coculture. Dedifferentiation preceded ACM proliferation, which was followed by redifferentiation. Redifferentiation was essential to endow the daughter cells with cardiomyocyte contractile function. Intercellular propagation of Ca2+ from contracting neonatal rat ventricular myocytes into ACM daughter cells was required to activate the Ca2+-dependent calcineurin-nuclear factor of activated T-cell signaling pathway to induce ACM redifferentiation. The properties of neonatal rat ventricular myocyte Ca2+ transients influenced the rate of ACM redifferentiation. Hypoxia impaired the function of gap junctions by dephosphorylating its component protein connexin 43, the major mediator of intercellular Ca2+ propagation between cardiomyocytes, thereby impairing ACM redifferentiation. In vivo, ACM proliferation was found primarily in the MI border zone. An ischemia

  2. On process capability and system availability analysis of the inverse Rayleigh distribution

    Directory of Open Access Journals (Sweden)

    Sajid Ali

    2015-04-01

    Full Text Available In this article, process capability and system availability analysis is discussed for the inverse Rayleigh lifetime distribution. A Bayesian approach with a conjugate gamma distribution is adopted for the analysis. Different types of loss functions are considered to find Bayes estimates of the process capability and system availability. A simulation study is conducted for the comparison of different loss functions.
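
Writing the inverse Rayleigh density as f(x; λ) = (2λ/x³)exp(−λ/x²), a Gamma(a, b) prior on λ is conjugate, and the posterior mean is the Bayes estimate under squared-error loss (a minimal sketch of this standard update, assuming this parameterization; the article's priors and loss functions may differ):

```python
def gamma_posterior(a, b, data):
    """Conjugate update for the inverse Rayleigh parameter lam:
    likelihood is proportional to lam**n * exp(-lam * sum(1/x_i**2)),
    so Gamma(a, b) prior -> Gamma(a + n, b + sum(1/x_i**2)) posterior."""
    n = len(data)
    s = sum(1.0 / x**2 for x in data)
    return a + n, b + s

def bayes_estimate_squared_error(a, b, data):
    """Posterior mean = Bayes estimator of lam under squared-error loss."""
    a_post, b_post = gamma_posterior(a, b, data)
    return a_post / b_post
```

Other loss functions (e.g. LINEX or entropy losses, as the article mentions) lead to different functionals of the same gamma posterior.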

  3. Aspects Concerning the Optimization of Authentication Process for Distributed Applications

    Directory of Open Access Journals (Sweden)

    Ion Ivan

    2008-06-01

    Full Text Available The types of distributed applications will be presented. The quality characteristics for distributed applications will be analyzed. The ways to assign access rights will be established. The authentication categories will be analyzed. We will propose an algorithm for the optimization of the authentication process. For the application “Evaluation of TIC projects” the proposed algorithm will be applied.

  4. Radar data processing using a distributed computational system

    Science.gov (United States)

    Mota, Gilberto F.

    1992-06-01

    This research specifies and validates a new concurrent decomposition scheme, called Confined Space Search Decomposition (CSSD), to exploit parallelism of Radar Data Processing algorithms using a Distributed Computational System. To formalize the specification, we propose and apply an object-oriented methodology called Decomposition Cost Evaluation Model (DCEM). To reduce the penalties of load imbalance, we propose a distributed dynamic load balance heuristic called Object Reincarnation (OR). To validate the research, we first compare our decomposition with an identified alternative using the proposed DCEM model and then develop a theoretical prediction of selected parameters. We also develop a simulation to check the Object Reincarnation Concept.

  5. Distributed inter process communication framework of BES III DAQ online software

    International Nuclear Information System (INIS)

    Li Fei; Liu Yingjie; Ren Zhenyu; Wang Liang; Chinese Academy of Sciences, Beijing; Chen Mali; Zhu Kejun; Zhao Jingwei

    2006-01-01

    The DAQ (data acquisition) system is an important part of BES III, the large-scale high-energy physics detector at BEPC. The inter-process communication (IPC) of online software in distributed environments is pivotal for the design and implementation of the DAQ system. This article introduces a distributed inter-process communication framework, which is based on CORBA and used in the BES III DAQ online software. The article mainly presents the design and implementation of the IPC framework and applications based on IPC. (authors)

  6. Distribution of Selected Trace Elements in the Bayer Process

    Directory of Open Access Journals (Sweden)

    Johannes Vind

    2018-05-01

    The aim of this work was to achieve an understanding of the distribution of selected bauxite trace elements (gallium (Ga), vanadium (V), arsenic (As), chromium (Cr), rare earth elements (REEs), and scandium (Sc)) in the Bayer process. The assessment was designed as a case study in an alumina plant in operation to provide an overview of the trace elements' behaviour in an actual industrial setup. A combination of analytical techniques was used, mainly inductively coupled plasma mass spectrometry and optical emission spectroscopy as well as instrumental neutron activation analysis. It was found that Ga, V and As as well as, to a minor extent, Cr are principally accumulated in Bayer process liquors. In addition, Ga is also fractionated to alumina at the end of the Bayer processing cycle. The rest of these elements pass to bauxite residue. REEs and Sc have the tendency to remain practically unaffected in the solid phases of the Bayer process and, therefore, at least 98% of their mass is transferred to bauxite residue. The interest in such a study originates from the fact that many of these trace constituents of bauxite ore could potentially become valuable by-products of the Bayer process; therefore, the understanding of their behaviour needs to be expanded. In fact, Ga and V are already by-products of the Bayer process, but their distribution patterns have not been provided in the existing open literature.

  7. Log-normal distribution from a process that is not multiplicative but is additive.

    Science.gov (United States)

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
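
The claim that a sum of strongly skewed positive variables stays close to a log-normal can be checked with a quick simulation. The summand distribution (log-normal with a large log-variance) and the sample sizes below are illustrative choices, not the paper's:

```python
import math
import random

rng = random.Random(42)

def skewness(xs):
    # Standardized third central moment of a sample.
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

# Sum a modest number of iid positive, heavily skewed summands and look at
# the distribution of the sum versus the distribution of its logarithm.
n_summands, n_trials = 30, 4000
sums = [sum(math.exp(rng.gauss(0.0, 2.0)) for _ in range(n_summands))
        for _ in range(n_trials)]
log_sums = [math.log(s) for s in sums]
```

The sum remains strongly right-skewed, while its logarithm is far closer to symmetric, consistent with the sum being approximately log-normal well before Gaussian convergence sets in.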

  8. The influence of emotion on lexical processing: insights from RT distributional analysis.

    Science.gov (United States)

    Yap, Melvin J; Seow, Cui Shan

    2014-04-01

    In two lexical decision experiments, the present study was designed to examine emotional valence effects on visual lexical decision (standard and go/no-go) performance, using traditional analyses of means and distributional analyses of response times. Consistent with an earlier study by Kousta, Vinson, and Vigliocco (Cognition 112:473-481, 2009), we found that emotional words (both negative and positive) were responded to faster than neutral words. Finer-grained distributional analyses further revealed that the facilitation afforded by valence was reflected by a combination of distributional shifting and an increase in the slow tail of the distribution. This suggests that emotional valence effects in lexical decision are unlikely to be entirely mediated by early, preconscious processes, which are associated with pure distributional shifting. Instead, our results suggest a dissociation between early preconscious processes and a later, more task-specific effect that is driven by feedback from semantically rich representations.

  9. The Snackbot: Documenting the Design of a Robot for Long-term Human-Robot Interaction

    Science.gov (United States)

    2009-03-01

    distributed robots. Proceedings of the Computer Supported Cooperative Work Conference '02. NY: ACM Press. [18] Kanda, T., Takayuki, H., Eaton, D., and ... humanoid robots. Proceedings of HRI '06. New York, NY: ACM Press, 351-352. [23] Nabe, S., Kanda, T., Hiraki, K., Ishiguro, H., Kogure, K., and Hagita

  10. Distributed processing in receivers based on tensor for cooperative communications systems

    OpenAIRE

    Igor Flávio Simões de Sousa

    2014-01-01

    In this dissertation, we present a distributed data estimation and detection approach for the uplink of a network that uses CDMA at transmitters (users). The analyzed network can be represented by an undirected and connected graph, where the nodes use a distributed estimation algorithm based on consensus averaging to perform joint channel and symbol estimation using a receiver based on tensor signal processing. The centralized receiver, developed for a central base station, and the distribute...
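
The consensus-averaging step that underpins the distributed estimator can be sketched as follows. The graph, node values, and Metropolis weighting are illustrative assumptions; the dissertation's tensor-based receiver is not reproduced here:

```python
# Average consensus on an undirected graph: each node repeatedly averages
# with its neighbours using Metropolis weights, and all nodes converge to
# the global mean. Graph and values below are hypothetical.
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # 4-node ring
vals = {0: 4.0, 1: 8.0, 2: 0.0, 3: 2.0}                    # network mean = 3.5

def metropolis_step(vals, nbrs):
    # Metropolis weights w_ij = 1/(1 + max(deg_i, deg_j)) make the update
    # matrix doubly stochastic, so the network average is preserved.
    new = {}
    for i, xi in vals.items():
        di = len(nbrs[i])
        acc, used = 0.0, 0.0
        for j in nbrs[i]:
            w = 1.0 / (1.0 + max(di, len(nbrs[j])))
            acc += w * vals[j]
            used += w
        new[i] = acc + (1.0 - used) * xi  # self-weight absorbs the remainder
    return new

for _ in range(200):
    vals = metropolis_step(vals, neighbours)
```

After enough iterations every node holds the global average, which is the primitive such distributed joint channel/symbol estimators build on.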

  11. An Extended Genetic Algorithm for Distributed Integration of Fuzzy Process Planning and Scheduling

    Directory of Open Access Journals (Sweden)

    Shuai Zhang

    2016-01-01

    The distributed integration of process planning and scheduling (DIPPS) aims to simultaneously arrange the two most important manufacturing stages, process planning and scheduling, in a distributed manufacturing environment. Meanwhile, considering its advantage in representing actual situations, the triangular fuzzy number (TFN) is adopted in DIPPS to represent machine processing and transportation times. In order to solve this problem and obtain the optimal or near-optimal solution, an extended genetic algorithm (EGA) with an innovative three-class encoding method and improved crossover and mutation strategies is proposed. Furthermore, a local enhancement strategy featuring machine replacement and order exchange is also added to strengthen the local search capability on the basic process of the genetic algorithm. Through experimental verification, EGA achieves satisfactory results in a very short period of time and demonstrates its powerful performance in dealing with the distributed integration of fuzzy process planning and scheduling (DIFPPS).
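
A minimal sketch of the triangular-fuzzy-number representation for uncertain processing/transportation times. The component-wise addition and centroid defuzzification below are common TFN conventions, not necessarily the paper's exact encoding:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TFN:
    """Triangular fuzzy number (low, mode, high) for an uncertain duration."""
    low: float
    mode: float
    high: float

    def __add__(self, other):
        # Fuzzy addition is component-wise for triangular numbers, so
        # chaining operations along a route stays triangular.
        return TFN(self.low + other.low, self.mode + other.mode,
                   self.high + other.high)

    def centroid(self):
        # A common defuzzification used to rank candidate schedules.
        return (self.low + self.mode + self.high) / 3.0

machining = TFN(4.0, 5.0, 7.0)   # hypothetical fuzzy processing time
transport = TFN(1.0, 2.0, 2.5)   # hypothetical fuzzy transportation time
total = machining + transport    # TFN(5.0, 7.0, 9.5)
```

A genetic algorithm can then compare chromosomes by the defuzzified makespan while the uncertainty is carried through the schedule as TFNs.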

  12. Distributed Iterative Processing for Interference Channels with Receiver Cooperation

    DEFF Research Database (Denmark)

    Badiu, Mihai Alin; Manchón, Carles Navarro; Bota, Vasile

    2012-01-01

    We propose a method for the design and evaluation of distributed iterative algorithms for receiver cooperation in interference-limited wireless systems. Our approach views the processing within and collaboration between receivers as the solution to an inference problem in the probabilistic model...

  13. A role for distributed processing in advanced nuclear materials control and accountability systems

    International Nuclear Information System (INIS)

    Tisinger, R.M.; Whitty, W.J.; Ford, W.; Strittmatter, R.B.

    1986-01-01

    Networking and distributed processing hardware and software have the potential of greatly enhancing nuclear materials control and accountability (MC&A) systems, both from safeguards and process operations perspectives, while allowing timely integrated safeguards activities and enhanced computer security at reasonable cost. A hierarchical distributed system is proposed, consisting of groups of terminals and instruments in plant production and support areas connected to microprocessors that are connected to either larger microprocessors or minicomputers. The structuring and development of a limited distributed MC&A prototype system, including human engineering concepts, are described. Implications of integrated safeguards and computer security concepts for the distributed system design are discussed

  14. Learning from the History of Distributed Query Processing

    DEFF Research Database (Denmark)

    Betz, Heiko; Gropengießer, Francis; Hose, Katja

    2012-01-01

    The vision of the Semantic Web has triggered the development of various new applications and opened up new directions in research. Recently, much effort has been put into the development of techniques for query processing over Linked Data. Being based upon techniques originally developed...... for distributed and federated databases, some of them inherit the same or similar problems. Thus, the goal of this paper is to point out pitfalls that the previous generation of researchers has already encountered and to introduce the Linked Data as a Service as an idea that has the potential to solve the problem...... in some scenarios. Hence, this paper discusses nine theses about Linked Data processing and sketches a research agenda for future endeavors in the area of Linked Data processing....

  15. Source localization of intermittent rhythmic delta activity in a patient with acute confusional migraine: cross-spectral analysis using standardized low-resolution brain electromagnetic tomography (sLORETA).

    Science.gov (United States)

    Kim, Dae-Eun; Shin, Jung-Hyun; Kim, Young-Hoon; Eom, Tae-Hoon; Kim, Sung-Hun; Kim, Jung-Min

    2016-01-01

    Acute confusional migraine (ACM) shows typical electroencephalography (EEG) patterns of diffuse delta slowing and frontal intermittent rhythmic delta activity (FIRDA). The pathophysiology of ACM is still unclear but these patterns suggest neuronal dysfunction in specific brain areas. We performed source localization analysis of IRDA (in the frequency band of 1-3.5 Hz) to better understand the ACM mechanism. Typical IRDA EEG patterns were recorded in a patient with ACM during the acute stage. A second EEG was obtained after recovery from ACM. To identify source localization of IRDA, statistical non-parametric mapping using standardized low-resolution brain electromagnetic tomography was performed for the delta frequency band comparisons between ACM attack and non-attack periods. A difference in the current density maximum was found in the dorsal anterior cingulate cortex (ACC). The significant differences were widely distributed over the frontal, parietal, temporal and limbic lobes, paracentral lobule and insula and were predominant in the left hemisphere. Dorsal ACC dysfunction was demonstrated for the first time in a patient with ACM in this source localization analysis of IRDA. The ACC plays an important role in the frontal attentional control system and acute confusion. This dysfunction of the dorsal ACC might represent an important ACM pathophysiology.

  16. Bounds for the probability distribution function of the linear ACD process

    OpenAIRE

    Fernandes, Marcelo

    2003-01-01

    This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.

  17. A multicopper oxidase is essential for manganese oxidation and laccase-like activity in Pedomicrobium sp. ACM 3067.

    Science.gov (United States)

    Ridge, Justin P; Lin, Marianne; Larsen, Eloise I; Fegan, Mark; McEwan, Alastair G; Sly, Lindsay I

    2007-04-01

    Pedomicrobium sp. ACM 3067 is a budding-hyphal bacterium belonging to the alpha-Proteobacteria which is able to oxidize soluble Mn2+ to insoluble manganese oxide. A cosmid, from a whole-genome library, containing the putative genes responsible for manganese oxidation was identified and a primer-walking approach yielded 4350 bp of novel sequence. Analysis of this sequence showed the presence of a predicted three-gene operon, moxCBA. The moxA gene product showed homology to multicopper oxidases (MCOs) and contained the characteristic four copper-binding motifs (A, B, C and D) common to MCOs. An insertion mutation of moxA showed that this gene was essential for both manganese oxidation and laccase-like activity. The moxB gene product showed homology to a family of outer membrane proteins which are essential for Type I secretion in Gram-negative bacteria. moxBA has not been observed in other manganese-oxidizing bacteria but homologues were identified in the genomes of several bacteria including Sinorhizobium meliloti 1021 and Agrobacterium tumefaciens C58. These results suggest that moxBA and its homologues constitute a family of genes encoding an MCO and a predicted component of the Type I secretion system.

  18. Sulfidization of an aluminocobaltomolybdenum catalyst using the 35S radioisotope

    International Nuclear Information System (INIS)

    Isagulyants, G.V.; Greish, A.A.; Kogan, V.M.

    1987-01-01

    It has been established that in aluminocobaltomolybdenum catalyst sulfidized with elemental sulfur there are two types of sulfur, free and bound. The maximum amount of bound sulfur in ACM catalyst is 6.6 wt. %, which corresponds to practically complete sulfidation of the ACM catalyst. In the presence of hydrogen an equilibrium distribution of bound sulfur is achieved in a granule of ACM catalyst irrespective of the temperature of sulfidation. In a nitrogen atmosphere it is primarily the surface layers of the catalyst that are sulfidized

  19. Human factors in computing systems: focus on patient-centered health communication at the ACM SIGCHI conference.

    Science.gov (United States)

    Wilcox, Lauren; Patel, Rupa; Chen, Yunan; Shachak, Aviv

    2013-12-01

    Health Information Technologies, such as electronic health records (EHR) and secure messaging, have already transformed interactions among patients and clinicians. In addition, technologies supporting asynchronous communication outside of clinical encounters, such as email, SMS, and patient portals, are being increasingly used for follow-up, education, and data reporting. Meanwhile, patients are increasingly adopting personal tools to track various aspects of health status and therapeutic progress, wishing to review these data with clinicians during consultations. These issues have drawn increasing interest from the human-computer interaction (HCI) community, with special focus on critical challenges in patient-centered interactions and design opportunities that can address these challenges. We saw this community presenting and interacting at ACM SIGCHI 2013, the Conference on Human Factors in Computing Systems (also known as CHI), held April 27-May 2, 2013, at the Palais des Congrès de Paris in France. CHI 2013 featured many formal avenues to pursue patient-centered health communication: a well-attended workshop, tracks of original research, and a lively panel discussion. In this report, we highlight these events and the main themes we identified. We hope that it will help bring the health care communication and the HCI communities closer together. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  20. Distribution of chirality in the quantum walk: Markov process and entanglement

    International Nuclear Information System (INIS)

    Romanelli, Alejandro

    2010-01-01

    The asymptotic behavior of the quantum walk on the line is investigated, focusing on the probability distribution of chirality independently of position. It is shown analytically that this distribution has a longtime limit that is stationary and depends on the initial conditions. This result is unexpected in the context of the unitary evolution of the quantum walk as it is usually linked to a Markovian process. The asymptotic value of the entanglement between the coin and the position is determined by the chirality distribution. For given asymptotic values of both the entanglement and the chirality distribution, it is possible to find the corresponding initial conditions within a particular class of spatially extended Gaussian distributions.
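
The chirality probability the abstract refers to can be illustrated with a direct discrete-time Hadamard walk simulation. This is a minimal numerical sketch (initial coin state and step count are illustrative), not the paper's analytical treatment:

```python
import math

def hadamard_walk(steps, init=(1.0, 0.0)):
    # Amplitudes per site: state[x] = [amp_left, amp_right].
    s = 1.0 / math.sqrt(2.0)
    state = {0: [complex(init[0]), complex(init[1])]}
    for _ in range(steps):
        shifted = {}
        for x, (al, ar) in state.items():
            # Hadamard coin acts on the chirality degree of freedom ...
            nl, nr = s * (al + ar), s * (al - ar)
            # ... then the chirality-conditioned shift moves amplitude.
            shifted.setdefault(x - 1, [0j, 0j])[0] += nl
            shifted.setdefault(x + 1, [0j, 0j])[1] += nr
        state = shifted
    return state

state = hadamard_walk(100)
# Chirality distribution, summed over position:
p_left = sum(abs(al) ** 2 for al, _ in state.values())
p_right = sum(abs(ar) ** 2 for _, ar in state.values())
```

Running the walk for increasing step counts shows `p_left` settling toward a stationary value that depends on the initial coin state, which is the long-time chirality limit the paper derives analytically.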

  1. Distributed Processing of SETI Data

    Science.gov (United States)

    Korpela, Eric

    As you have read in prior chapters, researchers have been performing progressively more sensitive SETI searches since 1960. Each search has been limited by the technologies available at the time. As radio frequency technologies have become more efficient and computers have become faster, the searches have increased in capacity and become more sensitive. Often, the sensitivity of a search is limited by the hardware that performs the calculations required to process the telescope data and expose any embedded signals. Shortly before the start of the 21st century, projects began to appear that exploited the processing capabilities of computers connected to the Internet in order to solve problems that required a large amount of computing power. The SETI@home project, managed by me and a group of researchers at the Space Sciences Laboratory of the University of California, Berkeley, was the first attempt to use large-scale distributed computing to solve the problems of performing a sensitive search for narrow-band radio signals from extraterrestrial civilizations (Korpela et al., 2001). A follow-on project, Astropulse, searches for extraterrestrial signals with wider bandwidths and shorter time durations. Both projects are ongoing at the present time (mid-2010).

  2. Equivalence of functional limit theorems for stationary point processes and their Palm distributions

    NARCIS (Netherlands)

    Nieuwenhuis, G.

    1989-01-01

    Let P be the distribution of a stationary point process on the real line and let P0 be its Palm distribution. In this paper we consider two types of functional limit theorems, those in terms of the number of points of the point process in (0, t] and those in terms of the location of the nth point

  3. Just-in-time Data Distribution for Analytical Query Processing

    NARCIS (Netherlands)

    M.G. Ivanova (Milena); M.L. Kersten (Martin); F.E. Groffen (Fabian)

    2012-01-01

    Distributed processing commonly requires data spread across machines using a priori static or hash-based data allocation. In this paper, we explore an alternative approach that starts from a master node in control of the complete database, and a variable number of worker nodes

  4. Benchmarking Distributed Stream Processing Platforms for IoT Applications

    OpenAIRE

    Shukla, Anshu; Simmhan, Yogesh

    2016-01-01

    Internet of Things (IoT) is a technology paradigm where millions of sensors monitor, and help inform or manage, physical, environmental and human systems in real-time. The inherent closed-loop responsiveness and decision making of IoT applications makes them ideal candidates for using low latency and scalable stream processing platforms. Distributed Stream Processing Systems (DSPS) are becoming essential components of any IoT stack, but the efficacy and performance of contemporary DSP...

  5. Parallel Distributed Processing theory in the age of deep networks

    OpenAIRE

    Bowers, Jeffrey

    2017-01-01

    Parallel Distributed Processing (PDP) models in psychology are the precursors of deep networks used in computer science. However, only PDP models are associated with two core psychological claims, namely, that all knowledge is coded in a distributed format, and cognition is mediated by non-symbolic computations. These claims have long been debated within cognitive science, and recent work with deep networks speaks to this debate. Specifically, single-unit recordings show that deep networks le...

  6. Process parameters, orientation, and functional properties of melt-processed bulk Y-Ba-Cu-O superconductors

    International Nuclear Information System (INIS)

    Zakharchenko, I.V.; Terryll, K.M.; Rao, K.V.; Balachandran, U.

    1995-03-01

    This study compared the microstructure, texturing, and functional properties (critical currents) of YBa2Cu3O7−x-based bulk pellets that were prepared by the quench-melt-growth process (QMGP), melt-textured growth (MTG), and conventional solid-state reaction (SSR) approaches. Using two X-ray diffraction (XRD) methods, θ-2θ scans and rocking curves, the authors found that the individual grains of the two melt-processed pellets exhibited remarkable preferred orientational alignment (best rocking curve width = 3.2 degrees). However, the direction of the preferred orientation among the grains was random. Among the three types of bulk materials studied, the QMGP sample was found to have the best Jc values, ∼4,500 A/cm2 at 77 K in a field of 2 kG, as determined from SQUID magnetic data.

  7. Digi-Clima Grid: image processing and distributed computing for recovering historical climate data

    Directory of Open Access Journals (Sweden)

    Sergio Nesmachnow

    2015-12-01

    This article describes the Digi-Clima Grid project, whose main goals are to design and implement semi-automatic techniques for digitizing and recovering historical climate records by applying parallel computing techniques over distributed computing infrastructures. The specific tool developed for image processing is described, and the implementation over grid and cloud infrastructures is reported. An experimental analysis over institutional and volunteer-based grid/cloud distributed systems demonstrates that the proposed approach is an efficient tool for recovering historical climate data. The parallel implementations allow the processing load to be distributed, achieving accurate speedup values.

  8. A Process for Comparing Dynamics of Distributed Space Systems Simulations

    Science.gov (United States)

    Cures, Edwin Z.; Jackson, Albert A.; Morris, Jeffery C.

    2009-01-01

    The paper describes a process that was developed for comparing the primary orbital dynamics behavior between space systems distributed simulations. This process is used to characterize and understand the fundamental fidelities and compatibilities of the modeling of orbital dynamics between spacecraft simulations. This is required for high-latency distributed simulations such as NASA's Integrated Mission Simulation and must be understood when reporting results from simulation executions. This paper presents 10 principal comparison tests along with their rationale and examples of the results. The Integrated Mission Simulation (IMSim) (formerly known as the Distributed Space Exploration Simulation (DSES)) is a NASA research and development project focusing on the technologies and processes that are related to the collaborative simulation of complex space systems involved in the exploration of our solar system. Currently, the NASA centers that are actively participating in the IMSim project are the Ames Research Center, the Jet Propulsion Laboratory (JPL), the Johnson Space Center (JSC), the Kennedy Space Center, the Langley Research Center and the Marshall Space Flight Center. In concept, each center participating in IMSim has its own set of simulation models and environment(s). These simulation tools are used to build the various simulation products that are used for scientific investigation, engineering analysis, system design, training, planning, operations and more. Working individually, these production simulations provide important data to various NASA projects.

  9. Integration of distributed plant process computer systems to nuclear power generation facilities

    International Nuclear Information System (INIS)

    Bogard, T.; Finlay, K.

    1996-01-01

    Many operating nuclear power generation facilities are replacing their plant process computers. Such replacement projects are driven by equipment obsolescence issues and associated objectives to improve plant operability, increase plant information access, improve man-machine interface characteristics, and reduce operation and maintenance costs. This paper describes a few recently completed and on-going replacement projects with emphasis upon the application of integrated distributed plant process computer systems. By presenting a few recent projects, the variations of distributed systems design show how various configurations can address needs for flexibility, open architecture, and integration of technological advancements in instrumentation and control technology. Architectural considerations for optimal integration of the plant process computer with plant process instrumentation and control are evident from variations of design features

  10. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    functionality by statistically modelling the distribution of the synaptic vesicles in two groups of rats: a control group subjected to sham stress and a stressed group subjected to a single acute foot-shock (FS)-stress episode. We hypothesize that the synaptic vesicles have different spatial distributions...... in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions....

  11. On relation between distribution functions in hard and soft processes

    International Nuclear Information System (INIS)

    Kisselev, A.V.; Petrov, V.A.

    1992-10-01

    It is shown that in the particle-exchange model the hadron-hadron scattering amplitude admits parton-like representation with the distribution functions coinciding with those extracted from deep inelastic processes. (author). 13 refs

  12. The Analysis of process optimization during the loading distribution test for steam turbine

    International Nuclear Information System (INIS)

    Li Jiangwei; Cao Yuhua; Li Dawei

    2014-01-01

    The loading distribution test of a steam turbine must be completed six times in total: the first when the turbine cylinder is buckled, and the rest in order during installation of the GVP pipe. Completing the five loading distribution tests and the installation of the GVP pipe usually takes around 90 days for most nuclear plants, while Unit 1 of Fuqing Nuclear Power Station compressed this to about 45 days by optimizing the installation process. This article describes the successful experience of how Unit 1 of Fuqing Nuclear Power Station finished the five loading distribution tests and the installation of the GVP pipe in 45 days by optimizing the process. The advantages and disadvantages are analyzed by comparing the optimized process with the process provided by the suppliers, leading to some rationalization proposals for installation work on the follow-up units of our plant. (authors)

  13. Reaction Mechanism and Distribution Behavior of Arsenic in the Bottom Blown Copper Smelting Process

    Directory of Open Access Journals (Sweden)

    Qinmeng Wang

    2017-08-01

    The control of arsenic, a toxic and carcinogenic element, is an important issue for all copper smelters. In this work, the reaction mechanism and distribution behavior of arsenic in the bottom blown copper smelting process (SKS process) were investigated and compared to the flash smelting process. There are obvious differences in arsenic distribution between the SKS process and the flash process, resulting from differences in oxygen potentials, volatilization, smelting temperatures, reaction intensities, and mass transfer processes. Under stable production conditions, the distributions of arsenic among matte, slag, and gas phases are 6%, 12%, and 82%, respectively. Less arsenic reports to the gas phase in the flash process than in the SKS process. The main arsenic species in the gas phase are AsS(g), AsO(g), and As2(g). Arsenic exists in the slag predominantly as As2O3(l), and in matte as As(l). A high matte grade is harmful to the elimination of arsenic to gas. Changing the Fe/SiO2 ratio has only slight effects on the distribution of arsenic. In order to enhance the removal of arsenic from the SKS smelting system to the gas phase, a low oxygen concentration, a low oxygen/ore ratio, and a low matte grade should be chosen. In the SKS smelting process, no dust is recycled; almost all dust is collected and further treated to eliminate arsenic and recover valuable metals in other process streams.

  14. Importance of controlling the Tl-oxide partial pressure throughout the processing of TlBa2CaCu2O7 thin films

    International Nuclear Information System (INIS)

    Siegal, M.P.; Venturini, E.L.; Newcomer, P.P.; Overmyer, D.L.; Dominguez, F.; Dunn, R.

    1995-01-01

    TlBa2CaCu2O7 (Tl-1212) superconducting films 5000-6000 Å thick have been grown on LaAlO3 (100) substrates using oxide precursors in a closed two-zone thallination furnace. Tl-1212 films can be grown with transition temperatures ∼100 K, and critical current densities measured by magnetization of Jcm(5 K) > 10^7 A/cm2 and Jcm(77 K) > 10^5 A/cm2. Processing conditions, substrate temperatures, and Tl-oxide source temperatures are found which result in smooth, nearly phase-pure Tl-1212 films. Variations in the respective temperature ramps of the Tl-oxide zone and the substrate zone can greatly influence resulting film properties such as microstructure, morphology, superconducting transition temperature, and critical current density. copyright 1995 American Institute of Physics

  15. When the mean is not enough: Calculating fixation time distributions in birth-death processes.

    Science.gov (United States)

    Ashcroft, Peter; Traulsen, Arne; Galla, Tobias

    2015-10-01

    Studies of fixation dynamics in Markov processes predominantly focus on the mean time to absorption. This may be inadequate if the distribution is broad and skewed. We compute the distribution of fixation times in one-step birth-death processes with two absorbing states. These are expressed in terms of the spectrum of the process, and we provide different representations as forward-only processes in eigenspace. These allow efficient sampling of fixation time distributions. As an application we study evolutionary game dynamics, where invading mutants can reach fixation or go extinct. We also highlight the median fixation time as a possible analog of mixing times in systems with small mutation rates and no absorbing states, whereas the mean fixation time has no such interpretation.
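
The point that the mean can be a poor summary of a broad, skewed fixation-time distribution is easy to see by direct sampling of a one-step birth-death chain. The neutral Moran-type transition rule and population size below are illustrative assumptions; the paper's spectral representations are not reproduced:

```python
import random
import statistics

def absorption_time_samples(n_pop, start, n_runs, seed=0):
    # Neutral one-step birth-death chain on 0..n_pop with absorbing
    # boundaries: from state i, move up or down with equal probability
    # i*(n_pop - i)/n_pop**2, otherwise stay put.
    rng = random.Random(seed)
    times = []
    for _ in range(n_runs):
        i, t = start, 0
        while 0 < i < n_pop:
            t += 1
            p = i * (n_pop - i) / n_pop ** 2
            u = rng.random()
            if u < p:
                i += 1
            elif u < 2 * p:
                i -= 1
        times.append(t)
    return times

times = absorption_time_samples(n_pop=20, start=1, n_runs=2000)
mean_t, median_t = statistics.mean(times), statistics.median(times)
```

Starting from a single mutant, most runs go extinct quickly while the few that fix take far longer, so the distribution is strongly right-skewed and the median sits well below the mean, which is exactly the regime where the mean alone is inadequate.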

  16. Managing Distributed Innovation Processes in Virtual Organizations by Applying the Collaborative Network Relationship Analysis

    Science.gov (United States)

    Eschenbächer, Jens; Seifert, Marcus; Thoben, Klaus-Dieter

    Distributed innovation processes are considered a new option for handling both the complexity and the speed with which new products and services need to be prepared. Most research on innovation processes has focused on multinational companies from an intra-organisational perspective; the phenomenon of innovation processes in networks - with an inter-organisational perspective - has been almost neglected. Collaborative networks present a perfect playground for such distributed innovation processes, and the authors focus in particular on Virtual Organisations because of their dynamic behaviour. Research activities supporting distributed innovation processes in VOs are rather new, so little knowledge about the management of such processes is available. The presentation of the collaborative network relationship analysis addresses this gap. It is shown that qualitative planning of collaboration intensities can support real business cases by providing knowledge and planning data.

  17. A NEW INITIATIVE FOR TILING, STITCHING AND PROCESSING GEOSPATIAL BIG DATA IN DISTRIBUTED COMPUTING ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    A. Olasz

    2016-06-01

    Full Text Available Within recent years, several new approaches and solutions for Big Data processing have been developed. The Geospatial world is still facing the lack of well-established distributed processing solutions tailored to the amount and heterogeneity of geodata, especially when fast data processing is a must. The goal of such systems is to improve processing time by distributing data transparently across processing (and/or storage) nodes. These types of methodology are based on the concept of divide and conquer. Nevertheless, in the context of geospatial processing, most of the distributed computing frameworks have important limitations regarding both data distribution and data partitioning methods. Moreover, flexibility and expandability for handling various data types (often in binary formats) are also strongly required. This paper presents a concept for tiling, stitching and processing of big geospatial data. The system is based on the IQLib concept (https://github.com/posseidon/IQLib/) developed in the frame of the IQmulus EU FP7 research and development project (http://www.iqmulus.eu). The data distribution framework has no limitations on programming language environment and can execute scripts (and workflows) written in different development frameworks (e.g. Python, R or C#). It is capable of processing raster, vector and point cloud data. The above-mentioned prototype is presented through a case study dealing with country-wide processing of raster imagery. Further investigations on algorithmic and implementation details are in focus for the near future.

  18. a New Initiative for Tiling, Stitching and Processing Geospatial Big Data in Distributed Computing Environments

    Science.gov (United States)

    Olasz, A.; Nguyen Thai, B.; Kristóf, D.

    2016-06-01

    Within recent years, several new approaches and solutions for Big Data processing have been developed. The Geospatial world is still facing the lack of well-established distributed processing solutions tailored to the amount and heterogeneity of geodata, especially when fast data processing is a must. The goal of such systems is to improve processing time by distributing data transparently across processing (and/or storage) nodes. These types of methodology are based on the concept of divide and conquer. Nevertheless, in the context of geospatial processing, most of the distributed computing frameworks have important limitations regarding both data distribution and data partitioning methods. Moreover, flexibility and expandability for handling various data types (often in binary formats) are also strongly required. This paper presents a concept for tiling, stitching and processing of big geospatial data. The system is based on the IQLib concept (https://github.com/posseidon/IQLib/) developed in the frame of the IQmulus EU FP7 research and development project (http://www.iqmulus.eu). The data distribution framework has no limitations on programming language environment and can execute scripts (and workflows) written in different development frameworks (e.g. Python, R or C#). It is capable of processing raster, vector and point cloud data. The above-mentioned prototype is presented through a case study dealing with country-wide processing of raster imagery. Further investigations on algorithmic and implementation details are in focus for the near future.
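As an illustration of the tile/process/stitch pattern the paper describes (and not of the actual IQLib API), a raster can be partitioned, processed tile by tile in parallel, and reassembled. The doubling operation and the 4x4 tile grid below are arbitrary stand-ins for a real geospatial algorithm:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def tile(arr, n):
    """Split a 2-D raster into an n x n grid of tiles (the 'divide' step)."""
    return [np.hsplit(row, n) for row in np.vsplit(arr, n)]

def stitch(tiles):
    """Reassemble processed tiles into one raster (the 'stitch' step)."""
    return np.vstack([np.hstack(row) for row in tiles])

raster = np.arange(64, dtype=float).reshape(8, 8)
with ThreadPoolExecutor() as ex:
    # Each tile is processed independently; here the 'processing' just doubles values.
    processed = [list(ex.map(lambda t: t * 2.0, row)) for row in tile(raster, 4)]
result = stitch(processed)
```

Real tiling schemes must also handle halo/overlap regions for neighborhood operations, which this sketch omits.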

  19. Characterization of the inhomogeneous barrier distribution in a Pt/(100)β-Ga2O3 Schottky diode via its temperature-dependent electrical properties

    Science.gov (United States)

    Jian, Guangzhong; He, Qiming; Mu, Wenxiang; Fu, Bo; Dong, Hang; Qin, Yuan; Zhang, Ying; Xue, Huiwen; Long, Shibing; Jia, Zhitai; Lv, Hangbing; Liu, Qi; Tao, Xutang; Liu, Ming

    2018-01-01

    β-Ga2O3 is an ultra-wide bandgap semiconductor with applications in power electronic devices. Revealing the transport characteristics of β-Ga2O3 devices at various temperatures is important for improving device performance and reliability. In this study, we fabricated a Pt/β-Ga2O3 Schottky barrier diode with good performance characteristics, such as a low ON-resistance, high forward current, and a large rectification ratio. Its temperature-dependent current-voltage and capacitance-voltage characteristics were measured at various temperatures. The characteristic diode parameters were derived using thermionic emission theory. The ideality factor n was found to decrease from 2.57 to 1.16 while the zero-bias barrier height Φ_b0 increased from 0.47 V to 1.00 V when the temperature was increased from 125 K to 350 K. This was explained by the Gaussian distribution of barrier height inhomogeneity. The mean barrier height Φ̄_b0 = 1.27 V and zero-bias standard deviation σ_0 = 0.13 V were obtained. A modified Richardson plot gave a Richardson constant A* of 36.02 A cm^-2 K^-2, which is close to the theoretical value of 41.11 A cm^-2 K^-2. The differences between the barrier heights determined using the capacitance-voltage and current-voltage curves were also in line with the Gaussian distribution of barrier height inhomogeneity.

  20. A Formal Approach to Run-Time Evaluation of Real-Time Behaviour in Distributed Process Control Systems

    DEFF Research Database (Denmark)

    Kristensen, C.H.

    This thesis advocates a formal approach to run-time evaluation of real-time behaviour in distributed process control systems, motivated by a growing interest in applying the increasingly popular formal methods in the application area of distributed process control systems. We propose to evaluate...... because the real-time aspects of distributed process control systems are considered to be among the hardest and most interesting to handle.

  1. ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS

    Directory of Open Access Journals (Sweden)

    Dietrich Stoyan

    2011-05-01

    Full Text Available This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.
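The minus-sampling idea can be sketched for the nearest-neighbour distance distribution of a pattern in the unit square: a point contributes at distance r only if it lies at least r from the boundary, so its true nearest neighbour cannot be hidden outside the window. This is a generic border estimator for illustration, not the Hanisch/Horvitz-Thompson weighting the paper recommends:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def G_minus_sampling(points, r):
    """Minus-sampling (border) estimator of G(r) for points in the unit square."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.min(axis=1)                                   # nearest-neighbour distances
    border = np.minimum(points, 1 - points).min(axis=1)  # distance to window edge
    eligible = border >= r                               # minus-sampling reduction
    if not eligible.any():
        return math.nan
    return float(np.mean(nn[eligible] <= r))

pts = rng.uniform(size=(500, 2))   # a binomial (CSR) pattern in the unit square
g = G_minus_sampling(pts, 0.05)
# For CSR with intensity 500, the theoretical value is
# G(0.05) = 1 - exp(-500 * pi * 0.05**2) ≈ 0.98.
```

The price of the border correction is lost information near the window edge, which is why weighted estimators such as Hanisch's are generally preferred.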

  2. Exact run length distribution of the double sampling x-bar chart with estimated process parameters

    Directory of Open Access Journals (Sweden)

    Teoh, W. L.

    2016-05-01

    Full Text Available Since the run length distribution is generally highly skewed, a significant concern about focusing too much on the average run length (ARL) criterion is that we may miss some crucial information about a control chart’s performance. Thus it is important to investigate the entire run length distribution of a control chart for an in-depth understanding before implementing the chart in process monitoring. In this paper, the percentiles of the run length distribution for the double sampling (DS) X-bar chart with estimated process parameters are computed. Knowledge of the percentiles of the run length distribution provides a more comprehensive understanding of the expected behaviour of the run length. This additional information includes the early false alarm, the skewness of the run length distribution, and the median run length (MRL). A comparison of the run length distribution between the optimal ARL-based and MRL-based DS X-bar charts with estimated process parameters is presented in this paper. Examples of applications are given to aid practitioners in selecting the best design scheme of the DS X-bar chart with estimated process parameters, based on their specific purpose.
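To see why percentiles matter, it helps to look at the simplest case: a basic Shewhart chart with known parameters has a geometric run length, whose percentiles follow in closed form. (The DS X-bar chart with estimated parameters needs the heavier computation the paper describes; this sketch only illustrates the skewness argument.)

```python
import math

def run_length_percentiles(p, qs=(0.05, 0.5, 0.95)):
    """Percentiles of a geometric run-length distribution with per-sample
    signal probability p: the smallest n with 1 - (1-p)**n >= q."""
    return {q: math.ceil(math.log1p(-q) / math.log1p(-p)) for q in qs}

p = 0.0027                   # in-control signal probability for 3-sigma limits
pct = run_length_percentiles(p)
arl = 1 / p                  # in-control ARL ≈ 370
# The run length is right-skewed: the median (MRL = 257) sits well below the
# ARL, and 5% of in-control runs already give a false alarm by sample 19.
```

This is exactly the "early false alarm" information that the ARL alone conceals.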

  3. Modelling spatiotemporal distribution patterns of earthworms in order to indicate hydrological soil processes

    Science.gov (United States)

    Palm, Juliane; Klaus, Julian; van Schaik, Loes; Zehe, Erwin; Schröder, Boris

    2010-05-01

    Soils provide central ecosystem functions in recycling nutrients, detoxifying harmful chemicals as well as regulating microclimate and local hydrological processes. The internal regulation of these functions and therefore the development of healthy and fertile soils mainly depend on the functional diversity of plants and animals. Soil organisms drive essential processes such as litter decomposition, nutrient cycling, water dynamics, and soil structure formation. Disturbances by different soil management practices (e.g., soil tillage, fertilization, pesticide application) affect the distribution and abundance of soil organisms and hence influence regulating processes. The strong relationship between environmental conditions and soil organisms gives us the opportunity to link spatiotemporal distribution patterns of indicator species with the potential provision of essential soil processes on different scales. Earthworms are key organisms for soil function and affect, among other things, water dynamics and solute transport in soils. Through their burrowing activity, earthworms increase the number of macropores by building semi-permanent burrow systems. In the unsaturated zone, earthworm burrows act as preferential flow pathways and affect water infiltration, surface-, subsurface- and matrix flow as well as the transport of water and solutes into deeper soil layers. Thereby different ecological earthworm types have different importance. Deep burrowing anecic earthworm species (e.g., Lumbricus terrestris) affect the vertical flow and thus increase the risk of potential contamination of ground water with agrochemicals. In contrast, horizontal burrowing endogeic (e.g., Aporrectodea caliginosa) and epigeic species (e.g., Lumbricus rubellus) increase water conductivity and the diffuse distribution of water and solutes in the upper soil layers. The question which processes are more relevant is pivotal for soil management and risk assessment. Thus, finding relevant

  4. Distribution and interplay of geologic processes on Titan from Cassini radar data

    Science.gov (United States)

    Lopes, R.M.C.; Stofan, E.R.; Peckyno, R.; Radebaugh, J.; Mitchell, K.L.; Mitri, Giuseppe; Wood, C.A.; Kirk, R.L.; Wall, S.D.; Lunine, J.I.; Hayes, A.; Lorenz, R.; Farr, Tom; Wye, L.; Craig, J.; Ollerenshaw, R.J.; Janssen, M.; LeGall, A.; Paganelli, F.; West, R.; Stiles, B.; Callahan, P.; Anderson, Y.; Valora, P.; Soderblom, L.

    2010-01-01

    The Cassini Titan Radar Mapper is providing an unprecedented view of Titan's surface geology. Here we use Synthetic Aperture Radar (SAR) image swaths (Ta-T30) obtained from October 2004 to December 2007 to infer the geologic processes that have shaped Titan's surface. These SAR swaths cover about 20% of the surface, at a spatial resolution ranging from ???350 m to ???2 km. The SAR data are distributed over a wide latitudinal and longitudinal range, enabling some conclusions to be drawn about the global distribution of processes. They reveal a geologically complex surface that has been modified by all the major geologic processes seen on Earth - volcanism, tectonism, impact cratering, and erosion and deposition by fluvial and aeolian activity. In this paper, we map geomorphological units from SAR data and analyze their areal distribution and relative ages of modification in order to infer the geologic evolution of Titan's surface. We find that dunes and hummocky and mountainous terrains are more widespread than lakes, putative cryovolcanic features, mottled plains, and craters and crateriform structures that may be due to impact. Undifferentiated plains are the largest areal unit; their origin is uncertain. In terms of latitudinal distribution, dunes and hummocky and mountainous terrains are located mostly at low latitudes (less than 30??), with no dunes being present above 60??. Channels formed by fluvial activity are present at all latitudes, but lakes are at high latitudes only. Crateriform structures that may have been formed by impact appear to be uniformly distributed with latitude, but the well-preserved impact craters are all located at low latitudes, possibly indicating that more resurfacing has occurred at higher latitudes. Cryovolcanic features are not ubiquitous, and are mostly located between 30?? and 60?? north. 
We examine temporal relationships between units wherever possible, and conclude that aeolian and fluvial/pluvial/lacustrine processes are the

  5. Acemetacin cocrystals and salts: structure solution from powder X-ray data and form selection of the piperazine salt.

    Science.gov (United States)

    Sanphui, Palash; Bolla, Geetha; Nangia, Ashwini; Chernyshev, Vladimir

    2014-03-01

    Acemetacin (ACM) is a non-steroidal anti-inflammatory drug (NSAID), which causes reduced gastric damage compared with indomethacin. However, acemetacin has a tendency to form a less soluble hydrate in the aqueous medium. We noted difficulties in the preparation of cocrystals and salts of acemetacin by mechanochemical methods, because this drug tends to form a hydrate during any kind of solution-based processing. With the objective to discover a solid form of acemetacin that is stable in the aqueous medium, binary adducts were prepared by the melt method to avoid hydration. The coformers/salt formers reported are pyridine carboxamides [nicotinamide (NAM), isonicotinamide (INA), and picolinamide (PAM)], caprolactam (CPR), p-aminobenzoic acid (PABA), and piperazine (PPZ). The structures of an ACM-INA cocrystal and a binary adduct ACM-PABA were solved using single-crystal X-ray diffraction. Other ACM cocrystals, ACM-PAM and ACM-CPR, and the piperazine salt ACM-PPZ were solved from high-resolution powder X-ray diffraction data. The ACM-INA cocrystal is sustained by the acid⋯pyridine heterosynthon and N-H⋯O catemer hydrogen bonds involving the amide group. The acid⋯amide heterosynthon is present in the ACM-PAM cocrystal, while ACM-CPR contains carboxamide dimers of caprolactam along with acid-carbonyl (ACM) hydrogen bonds. The cocrystals ACM-INA, ACM-PAM and ACM-CPR are three-dimensional isostructural. The carboxyl⋯carboxyl synthon in ACM-PABA posed difficulty in assigning the position of the H atom, which may indicate proton disorder. In terms of stability, the salts were found to be relatively stable in pH 7 buffer medium over 24 h, but the cocrystals dissociated to give ACM hydrate during the same time period. The ACM-PPZ salt and ACM-nicotinamide cocrystal dissolve five times faster than the stable hydrate form, whereas the ACM-PABA adduct has 2.5 times faster dissolution rate. The pharmaceutically acceptable piperazine salt of acemetacin exhibits superior

  6. CMT scaling analysis and distortion evaluation in passive integral test facility

    International Nuclear Information System (INIS)

    Deng Chengcheng; Qin Benke; Wang Han; Chang Huajian

    2013-01-01

    Core makeup tank (CMT) is the crucial device of the AP1000 passive core cooling system, and reasonable scaling analysis of the CMT plays a key role in the design of passive integral test facilities. The H2TS method was used to perform scaling analysis for both the circulating mode and the draining mode of the CMT. The similarity criteria for the important CMT processes were then applied in the CMT scaling design of the ACME (advanced core-cooling mechanism experiment) facility now being built in China. Furthermore, the scaling distortion results of the characteristic CMT Π groups of ACME were calculated. Finally, the reasons for the scaling distortion were analyzed and a distortion evaluation was conducted for the ACME facility. The dominant processes of the CMT circulating mode can be adequately simulated in the ACME facility, but the steam condensation process during CMT draining is not well preserved because the excessive CMT mass leads to more energy being absorbed by cold metal. However, comprehensive analysis indicates that the ACME facility with its high-pressure simulation scheme is able to properly represent the CMT's important phenomena and processes of the prototype nuclear plant. (authors)

  7. Adaptive Dynamic Process Scheduling on Distributed Memory Parallel Computers

    Directory of Open Access Journals (Sweden)

    Wei Shu

    1994-01-01

    Full Text Available One of the challenges in programming distributed memory parallel machines is deciding how to allocate work to processors. This problem is particularly important for computations with unpredictable dynamic behaviors or irregular structures. We present a scheme for dynamic scheduling of medium-grained processes that is useful in this context. The adaptive contracting within neighborhood (ACWN is a dynamic, distributed, load-dependent, and scalable scheme. It deals with dynamic and unpredictable creation of processes and adapts to different systems. The scheme is described and contrasted with two other schemes that have been proposed in this context, namely the randomized allocation and the gradient model. The performance of the three schemes on an Intel iPSC/2 hypercube is presented and analyzed. The experimental results show that even though the ACWN algorithm incurs somewhat larger overhead than the randomized allocation, it achieves better performance in most cases due to its adaptiveness. Its feature of quickly spreading the work helps it outperform the gradient model in performance and scalability.
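The contracting-within-neighborhood idea can be caricatured in a few lines: each arriving task is handed to the least-loaded processor in the arrival site's neighborhood, instead of staying where it lands. The ring topology, neighborhood size, and workload below are invented for illustration; the real ACWN scheme is load-dependent and adaptive in ways this sketch ignores.

```python
import random

random.seed(7)

def simulate(n_procs=16, n_tasks=2000, policy="acwn"):
    """Toy load balancing: 'random' leaves each task where it lands;
    'acwn' contracts it to the least-loaded of the site and its ring
    neighbors. Returns the maximum per-processor load."""
    load = [0] * n_procs
    for _ in range(n_tasks):
        site = random.randrange(n_procs)
        if policy == "acwn":
            nbhd = [site, (site - 1) % n_procs, (site + 1) % n_procs]
            site = min(nbhd, key=lambda q: load[q])
        load[site] += 1
    return max(load)

worst_random = simulate(policy="random")
worst_acwn = simulate(policy="acwn")
```

Even this caricature shows the trade-off the abstract measures: the neighborhood query adds overhead per task, but typically flattens the load distribution compared with purely randomized allocation.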

  8. Bridging the gap between a stationary point process and its Palm distribution

    NARCIS (Netherlands)

    Nieuwenhuis, G.

    1994-01-01

    In the context of stationary point processes measurements are usually made from a time point chosen at random or from an occurrence chosen at random. That is, either the stationary distribution P or its Palm distribution P° is the ruling probability measure. In this paper an approach is presented to

  9. Analysing the Outbound logistics process enhancements in Nokia-Siemens Networks Global Distribution Center

    OpenAIRE

    Marjeta, Katri

    2011-01-01

    Marjeta, Katri. 2011. Analysing the outbound logistics process enhancements in Nokia-Siemens Networks Global Distribution Center. Master´s thesis. Kemi-Tornio University of Applied Sciences. Business and Culture. Pages 57. Due to confidentiality issues, this work has been modified from its original form. The aim of this Master Thesis work is to describe and analyze the outbound logistics process enhancement projects executed in Nokia-Siemens Networks Global Distribution Center after the N...

  10. A distributed process monitoring system for nuclear powered electrical generating facilities

    International Nuclear Information System (INIS)

    Sweney, A.D.

    1991-01-01

    Duke Power Company is one of the largest investor owned utilities in the United States, with a service area of 20,000 square miles extending across North and South Carolina. Oconee Nuclear Station, one of Duke Power's three nuclear generating facilities, is a three unit pressurized water reactor site and has, over the course of its 15-year operating lifetime, effectively run out of plant processing capability. From a severely overcrowded cable spread room to an aging, overtaxed Operator Aid Computer, the problems with trying to add additional process variables to the present centralized Operator Aid Computer are almost insurmountable. This paper reports that, for this reason and to realize the inherent benefits of a distributed process monitoring and control system, Oconee has embarked on a project to demonstrate the ability of a distributed system to perform in the nuclear power plant environment

  11. Method for distributed agent-based non-expert simulation of manufacturing process behavior

    Science.gov (United States)

    Ivezic, Nenad; Potok, Thomas E.

    2004-11-30

    A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and, programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
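The message-loop structure described in the claims can be sketched as follows. The agent fields and the effect of each event are hypothetical; only the three discrete event types are taken from the patent text above.

```python
from dataclasses import dataclass

@dataclass
class ProcessAgent:
    """One agent per manufacturing process, reacting to discrete events."""
    name: str
    inventory: int = 0
    produced: int = 0

    def handle(self, event):
        if event == "clock_tick":
            pass                         # time bookkeeping would go here
        elif event == "resources_received":
            self.inventory += 1          # hypothetical effect: stock one unit
        elif event == "request_output" and self.inventory > 0:
            self.inventory -= 1          # hypothetical effect: consume and produce
            self.produced += 1

agents = [ProcessAgent(f"step{i}") for i in range(3)]
# The message loop transmits each discrete event to every agent in turn.
for event in ["resources_received", "clock_tick", "request_output"]:
    for agent in agents:
        agent.handle(event)
```

Because every behavior is triggered by messages rather than shared state, the same agents could be distributed across processors with only the loop replaced by a transport.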

  12. Application of signal processing techniques for islanding detection of distributed generation in distribution network: A review

    International Nuclear Information System (INIS)

    Raza, Safdar; Mokhlis, Hazlie; Arof, Hamzah; Laghari, J.A.; Wang, Li

    2015-01-01

    Highlights: • Pros & cons of conventional islanding detection techniques (IDTs) are discussed. • The ability of signal processing techniques (SPTs) to detect islanding is discussed. • The ability of SPTs to improve the performance of passive techniques is discussed. • Fourier, s-transform, wavelet, HHT & tt-transform based IDTs are reviewed. • The application of intelligent classifiers (ANN, ANFIS, Fuzzy, SVM) in SPTs is discussed. - Abstract: High penetration of distributed generation resources (DGR) in the distribution network provides many benefits in terms of high power quality, efficiency, and low carbon emissions in the power system. However, efficient islanding detection and immediate disconnection of DGR is critical in order to avoid equipment damage, grid protection interference, and personnel safety hazards. Islanding detection techniques are mainly classified into remote, passive, active, and hybrid techniques. Of these, passive techniques are more advantageous due to lower power quality degradation, lower cost, and widespread usage by power utilities. However, the main limitations of these techniques are that they possess large non-detection zones and require threshold setting. Various signal processing techniques and intelligent classifiers have been used to overcome the limitations of passive islanding detection. Signal processing techniques, in particular, are adopted due to their versatility, stability, cost effectiveness, and ease of modification. This paper presents a comprehensive overview of signal processing techniques used to improve common passive islanding detection techniques. A performance comparison between signal-processing-based islanding detection techniques and existing techniques is also provided. Finally, this paper outlines the relative advantages and limitations of the signal processing techniques in order to provide basic guidelines for researchers and field engineers in determining the best method for their system

  13. Cumulative distribution functions associated with bubble-nucleation processes in cavitation

    KAUST Repository

    Watanabe, Hiroshi; Suzuki, Masaru; Ito, Nobuyasu

    2010-01-01

    of the exponential distribution associated with the process are determined by taking the relaxation time into account, and the apparent finite-size effect is removed. These results imply that the use of the arithmetic mean of the waiting time until a bubble grows

  14. The Distribution of the Interval between Events of a Cox Process with Shot Noise Intensity

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2008-01-01

    Full Text Available Applying piecewise deterministic Markov processes theory, the probability generating function of a Cox process, incorporating a shot noise process as the claim intensity, is obtained. We also derive the Laplace transform of the distribution of the shot noise process at claim jump times, using the stationarity assumption of the shot noise process. Based on this Laplace transform and on the probability generating function of a Cox process with shot noise intensity, we obtain the distribution of the interval between events of a Cox process with shot noise intensity for insurance claims, and its moments, that is, mean and variance.
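While the paper works with transforms analytically, a shot-noise Cox process is easy to simulate by thinning, which gives an empirical handle on the inter-event interval distribution. All parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def shot_noise_cox(T=100.0, rho=0.5, delta=1.0, mark_mean=2.0):
    """Simulate event times on [0, T] of a Cox process whose intensity is the
    shot noise lambda(t) = sum over shots s_i <= t of Y_i * exp(-delta*(t - s_i)).
    Shots arrive at Poisson rate rho with exponential marks Y_i."""
    shots = np.sort(rng.uniform(0, T, rng.poisson(rho * T)))
    marks = rng.exponential(mark_mean, len(shots))
    if len(shots) == 0:
        return np.array([])

    def lam(t):
        active = shots <= t
        return np.sum(marks[active] * np.exp(-delta * (t - shots[active])))

    lam_max = marks.sum()        # crude global bound: decay only lowers lambda
    events, t = [], 0.0
    while True:                  # Lewis-Shedler thinning
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            break
        if rng.uniform() < lam(t) / lam_max:
            events.append(t)
    return np.array(events)

events = shot_noise_cox()
intervals = np.diff(events)      # empirical inter-event intervals
```

The sample mean and variance of `intervals` can then be compared against the closed-form moments derived in the paper.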

  15. Recent changes in flood damage in the United States from observations and ACME model

    Science.gov (United States)

    Leng, G.; Leung, L. R.

    2017-12-01

    Despite efforts to mitigate flood hazards in flood-prone areas, survey- and report-based flood databases show that flood damage has increased and emerged as one of the most costly disasters in the United States since the 1990s. Understanding the mechanism driving the changes in flood damage is therefore critical for reducing flood risk. In this study, we first conduct a comprehensive analysis of the changing characteristics of flood damage at the local, state and country level. Results show a significant increasing trend in the number of flood hazards, causing economic losses of up to $7 billion per year. The ratio of flood events that caused tangible economic cost to the total flood events has exhibited a non-significant increasing trend before 2007 followed by a significant decrease, indicating a changing vulnerability to floods. Analysis also reveals distinct spatial and temporal patterns in the threshold intensity of flood hazards with tangible economic cost. To understand the mechanism behind the increasing flood damage, we develop a flood damage economic model coupled with the integrated hydrological modeling system of ACME that features a river routing model with an inundation parameterization and a water use and regulation model. The model is evaluated over the country against historical records. Several numerical experiments are then designed to explore the mechanisms behind the recent changes in flood damage from the perspective of flood hazard, exposure and vulnerability, which constitute flood damage. The role of human activities such as reservoir operations and water use in modifying regional floods is also explored using the new tool, with the goal of improving understanding and modeling of vulnerability to flood hazards.

  16. Problem of uniqueness in the renewal process generated by the uniform distribution

    Directory of Open Access Journals (Sweden)

    D. Ugrin-Šparac

    1992-01-01

    Full Text Available The renewal process generated by the uniform distribution, when interpreted as a transformation of the uniform distribution into a discrete distribution, gives rise to the question of uniqueness of the inverse image. The paper deals with a particular problem from the described domain, which arose in the construction of a complex stochastic test intended to evaluate pseudo-random number generators. The connection of the treated problem with the question of a unique integral representation of the Gamma function is also mentioned.
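A standard concrete instance of this transformation: counting U(0,1) inter-arrival times until the renewal sum exceeds 1 yields a discrete distribution with P(N > n) = 1/n!, and hence E[N] = e. The quick Monte Carlo check below illustrates this textbook fact, not the paper's uniqueness argument:

```python
import math
import random

random.seed(42)

def renewal_count():
    """Number of U(0,1) inter-arrival times until the renewal sum exceeds 1."""
    s, n = 0.0, 0
    while s <= 1.0:
        s += random.random()
        n += 1
    return n

samples = 100_000
mean_n = sum(renewal_count() for _ in range(samples)) / samples
# mean_n should approximate e = 2.71828...
```

With 100,000 replicates the standard error of the mean is about 0.003, so the estimate lands close to e.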

  17. Distributed Random Process for a Large-Scale Peer-to-Peer Lottery

    OpenAIRE

    Grumbach, Stéphane; Riemann, Robert

    2017-01-01

    Most online lotteries today fail to ensure the verifiability of the random process and rely on a trusted third party. This issue has received little attention since the emergence of distributed protocols like Bitcoin that demonstrated the potential of protocols with no trusted third party. We argue that the security requirements of online lotteries are similar to those of online voting, and propose a novel distributed online lottery protocol that applies techniques dev...

  18. Distributed real time data processing architecture for the TJ-II data acquisition system

    International Nuclear Information System (INIS)

    Ruiz, M.; Barrera, E.; Lopez, S.; Machon, D.; Vega, J.; Sanchez, E.

    2004-01-01

    This article describes the performance of a new model of architecture that has been developed for the TJ-II data acquisition system in order to increase its real time data processing capabilities. The current model consists of several PCI eXtensions for Instrumentation (PXI) standard chassis, each one with various digitizers. In this architecture, the data processing capability is restricted to the PXI controller's own performance. The controller must share its CPU resources between the data processing and the data acquisition tasks. In the new model, a distributed data processing architecture has been developed. The solution adds one or more processing cards to each PXI chassis. This way it is possible to plan how to distribute the data processing of all acquired signals among the processing cards and the available resources of the PXI controller. This model allows scalability of the system: processing cards can be added or removed based on the requirements of the system. The processing algorithms are implemented in LabVIEW (from National Instruments), providing efficient and time-saving application development compared with other solutions

  19. Distributed system for parallel data processing of ECT signals for electromagnetic flaw detection in materials

    International Nuclear Information System (INIS)

    Guliashki, Vassil; Marinova, Galia

    2002-01-01

    The paper proposes a distributed system for parallel data processing of ECT signals for flaw detection in materials. The measured data are stored in files on a host computer, where a JAVA server is located. The host computer is connected through the Internet to a set of client computers, distributed geographically. The data are distributed from the host computer by means of the JAVA server to the client computers according to their requests. The software necessary for the data processing is installed on each client computer in advance. Organizing the data processing across many computers working simultaneously in parallel greatly reduces processing time, especially in cases when a huge amount of data must be processed in a very short time. (Author)

  20. Kaplan-Meier estimators of distance distributions for spatial point processes

    NARCIS (Netherlands)

    Baddeley, A.J.; Gill, R.D.

    1997-01-01

    When a spatial point process is observed through a bounded window, edge effects hamper the estimation of characteristics such as the empty space function $F$, the nearest neighbour distance distribution $G$, and the reduced second order moment function $K$. Here we propose and study product-limit

  1. Design of distributed systems of hydrolithosphere processes management. Selection of optimal number of extracting wells

    Science.gov (United States)

    Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.

    2017-10-01

    The article considers the important issue of designing distributed systems for the management of hydrolithosphere processes. Control effects on the hydrolithosphere processes are implemented by a set of extracting wells. The article shows how to determine the optimal number of extracting wells that provide a distributed control impact on the management object.

  2. Comparison of Environment Impact between Conventional and Cold Chain Management System in Paprika Distribution Process

    Directory of Open Access Journals (Sweden)

    Eidelweijs A Putri

    2012-09-01

    Full Text Available Pasir Langu village in Cisarua, West Java, is the largest central production area of paprika in Indonesia. On average, for every 200 kilograms of paprika produced, 3 kilograms are rejected. This results in monetary losses for wholesalers and in waste. In one year, this loss can amount to approximately 11.7 million Indonesian rupiahs. Recently, paprika wholesalers in Pasir Langu village have been developing a cold chain management system to maintain the quality of paprika so that the number of rejections can be reduced. The objective of this study is to compare the environmental impacts of the conventional and cold chain management systems in the paprika distribution process using Life Cycle Assessment (LCA methodology and to propose a Photovoltaic (PV system for the paprika distribution process. The result implies that the cold chain system produces more CO2 emissions than the conventional system. However, with the proposed PV system, the emissions would be reduced. For future research, it is necessary to reduce CO2 emissions from the transportation process, since this process is the biggest contributor of CO2 emissions in the whole distribution process. Keywords: LCA, environmentally friendly distribution, paprika, cold chain, PV system

  3. The Cronus Distributed DBMS (Database Management System) Project

    Science.gov (United States)

    1989-10-01

    projects, e.g., HiPAC [Dayal 88] and POSTGRES [Stonebraker 86]. Although we expect to use these techniques, they have been developed for centralized ... Computing Systems, June 1989. (To appear). [Stonebraker 86] Stonebraker, M. and Rowe, L. A., "The Design of POSTGRES," Proceedings ACM SIGMOD Annual Conference.

  4. Temperature effect of irradiated target surface on distribution of nanoparticles formed by implantation

    CERN Document Server

    Stepanov, A L; Popok, V N

    2001-01-01

    The composite layers containing metal nanoparticles, synthesized through implantation of Ag+ ions with an energy of 60 keV and a dose of 3 x 10^16 ions/cm^2 into sodium-calcium silicate glass at an ion current of 3 muA/cm^2 and a substrate temperature of 35 deg C, are studied. The implantation results are analyzed in dependence on the temperature effects developing in glass samples of various thickness. Data on the silver distribution and on the formation and growth of the metal nanoparticles over depth are obtained from the optical reflection spectra. It is demonstrated that minor changes in the surface temperature of the irradiated glass substrate lead to noticeable differences in the regularities of nanoparticle formation in the sample volume.

  5. Power Distribution Analysis For Electrical Usage In Province Area Using Olap (Online Analytical Processing)

    Directory of Open Access Journals (Sweden)

    Samsinar Riza

    2018-01-01

    Full Text Available The distribution network is the part of the power grid closest to the customers of electric service providers such as PT PLN. The dispatching center of a power grid company is also the data center of the power grid, where a great amount of operating information is gathered. The valuable information contained in these data means a lot for power grid operating management. Data warehousing and online analytical processing (OLAP) techniques have been used to manage and analyze this great volume of data. The specific outputs of the online analytics information system resulting from data warehouse processing with OLAP are chart and query reporting. The chart reporting consists of the load distribution chart over time, the distribution chart by area, the substation region chart and the electric load usage chart. The results of the OLAP process show the development of the electric load distribution, provide analysis of the electric power consumption load, and become an alternative way of presenting information related to peak load.

  6. Power Distribution Analysis For Electrical Usage In Province Area Using Olap (Online Analytical Processing)

    Science.gov (United States)

    Samsinar, Riza; Suseno, Jatmiko Endro; Widodo, Catur Edi

    2018-02-01

    The distribution network is the part of the power grid closest to the customers of electric service providers such as PT PLN. The dispatching center of a power grid company is also the data center of the power grid, where a great amount of operating information is gathered. The valuable information contained in these data means a lot for power grid operating management. Data warehousing and online analytical processing (OLAP) techniques have been used to manage and analyze this great volume of data. The specific outputs of the online analytics information system resulting from data warehouse processing with OLAP are chart and query reporting. The chart reporting consists of the load distribution chart over time, the distribution chart by area, the substation region chart and the electric load usage chart. The results of the OLAP process show the development of the electric load distribution, provide analysis of the electric power consumption load, and become an alternative way of presenting information related to peak load.
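
    The chart and query reporting described in the two records above reduces to OLAP-style roll-ups over the operating records. A minimal sketch with hypothetical records and column meanings (area, substation, hour, load in MW):

```python
from collections import defaultdict

# Hypothetical operating records: (area, substation, hour, load_mw).
rows = [
    ("North", "N1", 18, 12.0),
    ("North", "N1", 19, 15.0),
    ("South", "S1", 18, 9.0),
    ("South", "S2", 19, 11.0),
    ("South", "S2", 20, 8.0),
]

def rollup(rows, key_index):
    """OLAP-style roll-up: total load grouped along one dimension."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key_index]] += row[3]
    return dict(totals)

per_area = rollup(rows, 0)                   # distribution chart by area
per_hour = rollup(rows, 2)                   # load distribution over time
peak_hour = max(per_hour, key=per_hour.get)  # peak-load query
```

    A data warehouse performs the same grouping and aggregation, but over pre-built OLAP cubes rather than raw rows.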

  7. Bayesian inference for multivariate point processes observed at sparsely distributed times

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl; Møller, Jesper; Aukema, B.H.

    We consider statistical and computational aspects of simulation-based Bayesian inference for a multivariate point process which is only observed at sparsely distributed times. For specificity we consider a particular data set which has earlier been analyzed by a discrete time model involving unknown...... normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared to discrete time processes in the setting of the present paper as well as other spatial-temporal situations. Keywords: Bark beetle, conditional intensity, forest entomology, Markov chain Monte Carlo...

  8. The constitutive distributed parameter model of multicomponent chemical processes in gas, fluid and solid phase

    International Nuclear Information System (INIS)

    Niemiec, W.

    1985-01-01

    The literature on distributed-parameter modelling of real processes does not consider the class of multicomponent chemical processes in the gas, fluid and solid phases. The aim of this paper is a constitutive distributed-parameter physicochemical model, constructed from the kinetics and a phenomenological analysis of multicomponent chemical processes in the gas, fluid and solid phases. The mass, energy and momentum aspects of these multicomponent chemical reactions and of the associated phenomena are utilized in balance operations, under the conditions of: constitutive invariance for continuous media with space and time memories; the reciprocity principle for isotropic and anisotropic nonhomogeneous media with space and time memories; and application of the definitions of the following derivative and of the equation of continuity. This leads to the construction of systems of partial differential constitutive state equations, in following-derivative form, for the gas, fluid and solid phases. Couched in this way, all physicochemical conditions of multicomponent chemical processes in the gas, fluid and solid phases form a new constitutive distributed-parameter model for automatic control, and its systems of equations are a new form of systems of partial differential constitutive state equations in the sense of phenomenological distributed-parameter control.

  9. Modeling nanoparticle uptake and intracellular distribution using stochastic process algebras

    Energy Technology Data Exchange (ETDEWEB)

    Dobay, M. P. D., E-mail: maria.pamela.david@physik.uni-muenchen.de; Alberola, A. Piera; Mendoza, E. R.; Raedler, J. O., E-mail: joachim.raedler@physik.uni-muenchen.de [Ludwig-Maximilians University, Faculty of Physics, Center for NanoScience (Germany)

    2012-03-15

    Computational modeling is increasingly important to help understand the interaction and movement of nanoparticles (NPs) within living cells, and to come to terms with the wealth of data that microscopy imaging yields. A quantitative description of the spatio-temporal distribution of NPs inside cells, however, is challenging due to the complexity of multiple compartments such as endosomes and nuclei, which themselves are dynamic and can undergo fusion and fission and exchange their content. Here, we show that stochastic pi calculus, a widely-used process algebra, is well suited for mapping surface and intracellular NP interactions and distributions. In stochastic pi calculus, each NP is represented as a process, which can adopt various states such as bound or aggregated, as well as be passed between processes representing location, as a function of predefined stochastic channels. We created a pi calculus model of gold NP uptake and intracellular movement and compared the evolution of surface-bound, cytosolic, endosomal, and nuclear NP densities with electron microscopy data. We demonstrate that the computational approach can be extended to include specific molecular binding and potential interaction with signaling cascades as characteristic for NP-cell interactions in a wide range of applications such as nanotoxicity, viral infection, and drug delivery.

  10. Modeling nanoparticle uptake and intracellular distribution using stochastic process algebras

    International Nuclear Information System (INIS)

    Dobay, M. P. D.; Alberola, A. Piera; Mendoza, E. R.; Rädler, J. O.

    2012-01-01

    Computational modeling is increasingly important to help understand the interaction and movement of nanoparticles (NPs) within living cells, and to come to terms with the wealth of data that microscopy imaging yields. A quantitative description of the spatio-temporal distribution of NPs inside cells, however, is challenging due to the complexity of multiple compartments such as endosomes and nuclei, which themselves are dynamic and can undergo fusion and fission and exchange their content. Here, we show that stochastic pi calculus, a widely-used process algebra, is well suited for mapping surface and intracellular NP interactions and distributions. In stochastic pi calculus, each NP is represented as a process, which can adopt various states such as bound or aggregated, as well as be passed between processes representing location, as a function of predefined stochastic channels. We created a pi calculus model of gold NP uptake and intracellular movement and compared the evolution of surface-bound, cytosolic, endosomal, and nuclear NP densities with electron microscopy data. We demonstrate that the computational approach can be extended to include specific molecular binding and potential interaction with signaling cascades as characteristic for NP-cell interactions in a wide range of applications such as nanotoxicity, viral infection, and drug delivery.

  11. Modeling nanoparticle uptake and intracellular distribution using stochastic process algebras

    Science.gov (United States)

    Dobay, M. P. D.; Alberola, A. Piera; Mendoza, E. R.; Rädler, J. O.

    2012-03-01

    Computational modeling is increasingly important to help understand the interaction and movement of nanoparticles (NPs) within living cells, and to come to terms with the wealth of data that microscopy imaging yields. A quantitative description of the spatio-temporal distribution of NPs inside cells, however, is challenging due to the complexity of multiple compartments such as endosomes and nuclei, which themselves are dynamic and can undergo fusion and fission and exchange their content. Here, we show that stochastic pi calculus, a widely-used process algebra, is well suited for mapping surface and intracellular NP interactions and distributions. In stochastic pi calculus, each NP is represented as a process, which can adopt various states such as bound or aggregated, as well as be passed between processes representing location, as a function of predefined stochastic channels. We created a pi calculus model of gold NP uptake and intracellular movement and compared the evolution of surface-bound, cytosolic, endosomal, and nuclear NP densities with electron microscopy data. We demonstrate that the computational approach can be extended to include specific molecular binding and potential interaction with signaling cascades as characteristic for NP-cell interactions in a wide range of applications such as nanotoxicity, viral infection, and drug delivery.
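
    The compartment dynamics described in the three records above can be imitated with an exact stochastic simulation (a Gillespie-type algorithm standing in for the stochastic pi calculus semantics). The states, rate constants and particle numbers below are hypothetical, chosen only to illustrate NPs being passed between location processes:

```python
import random

# Hypothetical first-order rates (1/min) for the uptake chain
# surface -> endosome -> cytosol -> nucleus; values are illustrative only.
RATES = {("surface", "endosome"): 0.10,
         ("endosome", "cytosol"): 0.05,
         ("cytosol", "nucleus"): 0.02}

def gillespie(counts, t_end, seed=1):
    """Exact stochastic simulation of the NP state populations."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        # Propensity of each transition = rate * population of source state.
        props = {tr: k * counts[tr[0]] for tr, k in RATES.items()}
        total = sum(props.values())
        if total == 0.0:
            return counts
        t += rng.expovariate(total)        # waiting time to next transition
        if t > t_end:
            return counts
        x = rng.uniform(0.0, total)        # pick a transition ~ propensity
        for tr, p in props.items():
            x -= p
            if x <= 0.0:
                counts[tr[0]] -= 1         # move one NP between locations
                counts[tr[1]] += 1
                break

final = gillespie({"surface": 500, "endosome": 0,
                   "cytosol": 0, "nucleus": 0}, 60.0)
```

    Comparing the simulated populations per compartment over time against microscopy counts is the kind of validation the records describe.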

  12. An Effective Framework for Distributed Geospatial Query Processing in Grids

    Directory of Open Access Journals (Sweden)

    CHEN, B.

    2010-08-01

    Full Text Available The emergence of the Internet has greatly revolutionized the way that geospatial information is collected, managed, processed and integrated. There are several important research issues to be addressed for distributed geospatial applications. First, the performance of geospatial applications needs to be considered in the Internet environment. In this regard, the Grid, as an effective distributed computing paradigm, is a good choice. The Grid uses a series of middleware to interconnect and merge various distributed resources into a super-computer with high-performance computation capability. Secondly, it is necessary to ensure the secure use of independent geospatial applications in the Internet environment. The Grid provides exactly this utility of secure access to distributed geospatial resources. Additionally, it makes good sense to overcome the heterogeneity between individual geospatial information systems on the Internet. The Open Geospatial Consortium (OGC) proposes a number of generalized geospatial standards, e.g. OGC Web Services (OWS), to achieve interoperable access to geospatial applications. The OWS solution is feasible and widely adopted by both the academic and industry communities. Therefore, we propose an integrated framework that incorporates OWS standards into Grids. Upon this framework, distributed geospatial queries can be performed in an interoperable, high-performance and secure Grid environment.

  13. An educational tool for interactive parallel and distributed processing

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2012-01-01

    In this article we try to describe how the modular interactive tiles system (MITS) can be a valuable tool for introducing students to interactive parallel and distributed processing programming. This is done by providing a hands-on educational tool that allows a change in the representation...... of abstract problems related to designing interactive parallel and distributed systems. Indeed, the MITS seems to bring a series of goals into education, such as parallel programming, distributedness, communication protocols, master dependency, software behavioral models, adaptive interactivity, feedback......, connectivity, topology, island modeling, and user and multi-user interaction, which can rarely be found in other tools. Finally, we introduce the system of modular interactive tiles as a tool for easy, fast, and flexible hands-on exploration of these issues, and through examples we show how to implement

  14. Bubble size distribution analysis and control in high frequency ultrasonic cleaning processes

    International Nuclear Information System (INIS)

    Hauptmann, M; Struyf, H; Mertens, P; Heyns, M; Gendt, S De; Brems, S; Glorieux, C

    2012-01-01

    In the semiconductor industry, the ongoing down-scaling of nanoelectronic elements has led to an increasing complexity of their fabrication. Hence, the individual fabrication processes become increasingly difficult to handle. To minimize cross-contamination, intermediate surface cleaning and preparation steps are inevitable parts of the semiconductor process chain. Here, one major challenge is the removal of residual nano-particulate contamination resulting from abrasive processes such as polishing and etching. In the past, physical cleaning techniques such as megasonic cleaning have been proposed as suitable solutions. However, the increasing fragility of the smallest structures constrains the forces of the involved physical removal mechanisms. In the case of 'megasonic' cleaning – cleaning with ultrasound in the MHz domain – the main cleaning action arises from strongly oscillating microbubbles which emerge from the periodically changing tensile strain in the cleaning liquid during sonication. These bubbles grow, oscillate and collapse due to a complex interplay of rectified diffusion, bubble coalescence, non-linear pulsation and the onset of shape instabilities. Hence, the resulting bubble size distribution does not remain static but changes continuously. Only those microbubbles in the distribution that show a high oscillatory response are responsible for the cleaning action. Therefore, the cleaning process efficiency can be improved by keeping the majority of bubbles around their resonance size. In this paper, we propose a method to control and characterize the bubble size distribution by means of 'pulsed' sonication and measurements of acoustic cavitation spectra, respectively. We show that the so-obtained bubble size distributions can be related to theoretical predictions of the oscillatory responses of and the onset of shape instabilities for the respective bubbles. We also propose a mechanism to explain the enhancement of both acoustic and cleaning

  15. A trial of distributed portable data acquisition and processing system implementation: the qdpb - data processing with branchpoints

    International Nuclear Information System (INIS)

    Gritsaj, K.I.; Isupov, A.Yu.

    2001-01-01

    A trial implementation of the distributed portable data acquisition and processing system qdpb is described. The experimental-setup-dependent data and hardware-dependent code are separated from the generic part of the qdpb system. The implementation of the generic part is described.

  16. Functional and ophthalmoscopic observations in human laser accident cases using scanning-laser ophthalmoscopy

    Science.gov (United States)

    Zwick, Harry; Lund, David J.; Gagliano, Donald A.; Stuck, Bruce E.

    1994-06-01

    A scanning laser ophthalmoscope (SLO) equipped with an acousto-optical modulator (ACM) was used to make focal acuity and contrast sensitivity measurements in individuals with macular damage. The depth of modulation achieved by the ACM was determined by imaging the SLO raster pattern onto a Pulnix TM 745 video camera and evaluating the intensity distribution with a Big Sky BVA10 beam view analyzer. Contrast levels remained approximately constant over the entire range of SLO input raster power settings. A Delta Technologies image processing system produced Landolt ring test stimuli at the center of the raster pattern. Contrast thresholds were determined at various retinal locations by having subjects fixate a specific location on a fixed grid imaged on the raster pattern. This procedure ensured that the test stimuli were always imaged in the center of the raster pattern, thereby avoiding peripheral variations in the raster pattern intensity distribution. Measurements of contrast sensitivity where focal test targets fell within the macular damage area demonstrated elevated contrast thresholds relative to retinal locations where focal test targets evaluated the border regions between normal and pathological retina.

  17. Distributed Sensing and Processing Adaptive Collaboration Environment (D-SPACE)

    Science.gov (United States)

    2014-07-01

    AFRL/RI, 525 Brooks Road, Rome NY 13441-4505; sponsor/monitor's report number AFRL-RI-RS-TR-2014-195. ... "cloud" technologies are not appropriate for situation understanding in areas of denial, where computation resources are limited, data not easily ... graph matching process. D-SPACE distributes graph exploitation among a network of autonomous computational resources, designs the collaboration policy ...

  18. Heat and work distributions for mixed Gauss–Cauchy process

    International Nuclear Information System (INIS)

    Kuśmierz, Łukasz; Gudowska-Nowak, Ewa; Rubi, J Miguel

    2014-01-01

    We analyze energetics of a non-Gaussian process described by a stochastic differential equation of the Langevin type. The process represents a paradigmatic model of a nonequilibrium system subject to thermal fluctuations and additional external noise, with both sources of perturbations considered as additive and statistically independent forcings. We define thermodynamic quantities for trajectories of the process and analyze contributions to mechanical work and heat. As a working example we consider a particle subjected to a drag force and two statistically independent Lévy white noises with stability indices α = 2 and α = 1. The fluctuations of dissipated energy (heat) and distribution of work performed by the force acting on the system are addressed by examining contributions of Cauchy fluctuations (α = 1) to either bath or external force acting on the system. (paper)
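
    The working example above, a particle under a drag force driven by independent Lévy white noises with α = 2 (Gaussian) and α = 1 (Cauchy), can be integrated with a simple Euler scheme. The parameters and step size below are illustrative assumptions; note that the α = 1 increment scales linearly with dt, while the Gaussian increment scales with sqrt(dt):

```python
import math
import random

def trajectory(n_steps, dt, gamma=1.0, sigma=1.0, scale=0.1, seed=7):
    """Euler scheme for dv = -gamma*v dt + sigma dW + scale dL, with dW a
    Gaussian (alpha = 2) and dL a Cauchy (alpha = 1) white-noise increment."""
    rng = random.Random(seed)
    v, vs = 0.0, [0.0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))                  # ~ dt**(1/2)
        dl = dt * math.tan(math.pi * (rng.random() - 0.5))  # ~ dt**(1/1)
        v += -gamma * v * dt + sigma * dw + scale * dl
        vs.append(v)
    return vs
```

    Work and heat functionals are then evaluated as sums along such trajectories; the heavy-tailed Cauchy term is what makes their distributions non-Gaussian.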

  19. Dust charging processes with a Cairns-Tsallis distribution function with negative ions

    Energy Technology Data Exchange (ETDEWEB)

    Abid, A. A., E-mail: abidaliabid1@hotmail.com [Applied Physics Department, Federal Urdu University of Arts, Science and Technology, Islamabad Campus, Islamabad 45320 (Pakistan); Khan, M. Z., E-mail: mzk-qau@yahoo.com [Applied Physics Department, Federal Urdu University of Arts, Science and Technology, Islamabad Campus, Islamabad 45320 (Pakistan); Plasma Technology Research Center, Department of Physics, Faculty of Science, University of Malaya, Kuala Lumpur 50603 (Malaysia); Yap, S. L. [Plasma Technology Research Center, Department of Physics, Faculty of Science, University of Malaya, Kuala Lumpur 50603 (Malaysia); Terças, H., E-mail: hugo.tercas@tecnico.ul.pt [Physics of Information Group, Instituto de Telecomunicações, Av. Rovisco Pais, Lisbon 1049-001 (Portugal); Mahmood, S. [Science Place, University of Saskatchewan, Saskatoon, Saskatchewan S7N5A2 (Canada)

    2016-01-15

    Dust grain charging processes are presented in a non-Maxwellian dusty plasma following the Cairns-Tsallis (q, α)-distribution, whose constituents are the electrons, as well as the positive/negative ions and negatively charged dust grains. For this purpose, we have solved the current balance equation for a negatively charged dust grain to achieve an equilibrium state value (viz., q_d = constant) in the presence of the Cairns-Tsallis (q, α)-distribution. In fact, the current balance equation becomes modified due to the Boltzmannian/streaming distributed negative ions. It is numerically found that the relevant plasma parameters, such as the spectral indexes q and α, the positive ion-to-electron temperature ratio, and the negative ion streaming speed (U_0) significantly affect the dust grain surface potential. It is also shown that in the limit q → 1 the Cairns-Tsallis distribution reduces to the Cairns distribution; for α = 0 the Cairns-Tsallis distribution reduces to the pure Tsallis distribution, and the latter reduces to the Maxwellian distribution for q → 1 and α = 0.

  20. Dust charging processes with a Cairns-Tsallis distribution function with negative ions

    International Nuclear Information System (INIS)

    Abid, A. A.; Khan, M. Z.; Yap, S. L.; Terças, H.; Mahmood, S.

    2016-01-01

    Dust grain charging processes are presented in a non-Maxwellian dusty plasma following the Cairns-Tsallis (q, α)-distribution, whose constituents are the electrons, as well as the positive/negative ions and negatively charged dust grains. For this purpose, we have solved the current balance equation for a negatively charged dust grain to achieve an equilibrium state value (viz., q_d = constant) in the presence of the Cairns-Tsallis (q, α)-distribution. In fact, the current balance equation becomes modified due to the Boltzmannian/streaming distributed negative ions. It is numerically found that the relevant plasma parameters, such as the spectral indexes q and α, the positive ion-to-electron temperature ratio, and the negative ion streaming speed (U_0) significantly affect the dust grain surface potential. It is also shown that in the limit q → 1 the Cairns-Tsallis distribution reduces to the Cairns distribution; for α = 0 the Cairns-Tsallis distribution reduces to the pure Tsallis distribution, and the latter reduces to the Maxwellian distribution for q → 1 and α = 0.

  1. Phase distribution measurements in narrow rectangular channels using image processing techniques

    International Nuclear Information System (INIS)

    Bentley, C.; Ruggles, A.

    1991-01-01

    Many high flux research reactor fuel assemblies are cooled by systems of parallel narrow rectangular channels. The HFIR is cooled by single phase forced convection under normal operating conditions. However, two-phase forced convection or two phase mixed convection can occur in the fueled region as a result of some hypothetical accidents. Such flow conditions would occur only at decay power levels. The system pressure would be around 0.15 MPa in such circumstances. Phase distribution of air-water flow in a narrow rectangular channel is examined using image processing techniques. Ink is added to the water and clear channel walls are used to allow high speed still photographs and video tape to be taken of the air-water flow field. Flow field images are digitized and stored in a Macintosh 2ci computer using a frame grabber board. Local grey levels are related to liquid thickness in the flow channel using a calibration fixture. Image processing shareware is used to calculate the spatially averaged liquid thickness from the image of the flow field. Time averaged spatial liquid distributions are calculated using image calculation algorithms. The spatially averaged liquid distribution is calculated from the time averaged spatial liquid distribution to formulate the combined temporally and spatially averaged fraction values. The temporally and spatially averaged liquid fractions measured using this technique compare well to those predicted from pressure gradient measurements at zero superficial liquid velocity
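
    A minimal sketch of the averaging pipeline described above, assuming a hypothetical monotone grey-level-to-thickness calibration curve in place of the calibration fixture:

```python
import numpy as np

def liquid_thickness(frames, grey_cal, thick_cal):
    """Spatially and temporally averaged liquid thickness from grey levels.
    frames: (time, height, width) array of grey levels; grey_cal/thick_cal:
    monotone calibration samples (hypothetical values in the demo below)."""
    flat = np.interp(frames.ravel(), grey_cal, thick_cal)
    thickness = flat.reshape(frames.shape)      # per-pixel liquid thickness
    spatial_mean = thickness.mean(axis=(1, 2))  # spatial average per frame
    return spatial_mean.mean()                  # time average of the above

# Two 2x2 demo frames: grey level 0 maps to 0 mm, 255 maps to 2 mm.
frames = np.array([[[0.0, 255.0], [255.0, 0.0]],
                   [[255.0, 255.0], [0.0, 0.0]]])
avg = liquid_thickness(frames, [0.0, 255.0], [0.0, 2.0])
```

    Averaging over space first and then over time, as here, reproduces the combined temporally and spatially averaged fraction values described in the record.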

  2. ARM Airborne Carbon Measurements (ARM-ACME) and ARM-ACME 2.5 Final Campaign Reports

    Energy Technology Data Exchange (ETDEWEB)

    Biraud, S. C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Torn, M. S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sweeney, C. [NOAA Earth Systems Research Lab., Boulder, CO (United States)

    2016-01-01

    We report on a 5-year multi-institution and multi-agency airborne study of atmospheric composition and carbon cycling at the Atmospheric Radiation Measurement (ARM) Climate Research Facility’s Southern Great Plains (SGP) site, with scientific objectives that are central to the carbon-cycle and radiative-forcing goals of the U.S. Global Change Research Program and the North American Carbon Program (NACP). The goal of these measurements is to improve understanding of 1) the carbon exchange of the Atmospheric Radiation Measurement (ARM) SGP region; 2) how CO2 and associated water and energy fluxes influence radiative-forcing, convective processes, and CO2 concentrations over the ARM SGP region, and 3) how greenhouse gases are transported on continental scales.

  3. A Process Model for Goal-Based Information Retrieval

    Directory of Open Access Journals (Sweden)

    Harvey Hyman

    2014-12-01

    Full Text Available In this paper we examine the domain of information search and propose a "goal-based" approach to study search strategy. We describe "goal-based information search" using a framework of Knowledge Discovery. We identify two Information Retrieval (IR) goals using the constructs of Knowledge Acquisition (KA) and Knowledge Explanation (KE). We classify these constructs into two specific information problems: an exploration-exploitation problem and an implicit-explicit problem. Our proposed framework is an extension of prior work in this domain, applying an IR Process Model originally developed for Legal-IR and adapted to Medical-IR. The approach in this paper is guided by the recent ACM-SIG Medical Information Retrieval (MedIR) Workshop definition: "methodologies and technologies that seek to improve access to medical information archives via a process of information retrieval."

  4. Nitrogen-doped porous carbon monoliths from polyacrylonitrile (PAN) and carbon nanotubes as electrodes for supercapacitors

    Science.gov (United States)

    Wang, Yanqing; Fugetsu, Bunshi; Wang, Zhipeng; Gong, Wei; Sakata, Ichiro; Morimoto, Shingo; Hashimoto, Yoshio; Endo, Morinobu; Dresselhaus, Mildred; Terrones, Mauricio

    2017-01-01

    Nitrogen-doped porous activated carbon monoliths (NDP-ACMs) have long been among the most desirable materials for supercapacitors. Unlike the conventional template-based Lewis acid/base activation methods, herein we report a simple yet practicable novel approach to the production of three-dimensional NDP-ACMs (3D-NDP-ACMs). Polyacrylonitrile (PAN) containing carbon nanotubes (CNTs), pre-dispersed to the level of individual tubes, was used as the starting material, and the 3D-NDP-ACMs were obtained via a template-free process. First, a continuous mesoporous PAN/CNT-based 3D monolith was established by using template-free temperature-induced phase separation (TTPS). Second, a nitrogen-doped 3D-ACM with a surface area of 613.8 m2/g and a pore volume of 0.366 cm3/g was obtained. A typical supercapacitor with our 3D-NDP-ACMs as the functioning electrodes gave a specific capacitance stabilized at 216 F/g even after 3000 cycles, demonstrating the advantageous performance of the PAN/CNT-based 3D-NDP-ACMs. PMID:28074847

  5. Nitrogen-doped porous carbon monoliths from polyacrylonitrile (PAN) and carbon nanotubes as electrodes for supercapacitors.

    Science.gov (United States)

    Wang, Yanqing; Fugetsu, Bunshi; Wang, Zhipeng; Gong, Wei; Sakata, Ichiro; Morimoto, Shingo; Hashimoto, Yoshio; Endo, Morinobu; Dresselhaus, Mildred; Terrones, Mauricio

    2017-01-11

    Nitrogen-doped porous activated carbon monoliths (NDP-ACMs) have long been among the most desirable materials for supercapacitors. Unlike the conventional template-based Lewis acid/base activation methods, herein we report a simple yet practicable novel approach to the production of three-dimensional NDP-ACMs (3D-NDP-ACMs). Polyacrylonitrile (PAN) containing carbon nanotubes (CNTs), pre-dispersed to the level of individual tubes, was used as the starting material, and the 3D-NDP-ACMs were obtained via a template-free process. First, a continuous mesoporous PAN/CNT-based 3D monolith was established by using template-free temperature-induced phase separation (TTPS). Second, a nitrogen-doped 3D-ACM with a surface area of 613.8 m2/g and a pore volume of 0.366 cm3/g was obtained. A typical supercapacitor with our 3D-NDP-ACMs as the functioning electrodes gave a specific capacitance stabilized at 216 F/g even after 3000 cycles, demonstrating the advantageous performance of the PAN/CNT-based 3D-NDP-ACMs.

  6. DISCO - A concept of a system for integrated data base management in distributed data processing systems

    International Nuclear Information System (INIS)

    Holler, E.

    1980-01-01

    The development in data processing technology favors the trend towards distributed data processing systems: the large-scale integration of semiconductor devices has led to very efficient (approx. 10⁶ operations per second) and relatively cheap low-end computers being offered today, which allow distributed data processing systems to be installed with a total capacity approaching that of large-scale data processing plants at a tolerable investment expenditure. The technologies of communication and of data banks have each, by themselves, reached a state of development justifying their routine application. This is made evident by the present efforts toward standardization in both areas. The integration of both technologies in the development of systems for integrated distributed data bank management, however, is new territory for engineering. (orig.) [de

  7. Fuel distribution process risk analysis in East Borneo

    Directory of Open Access Journals (Sweden)

    Laksmita Raizsa

    2018-01-01

    Fuel distribution is an important aspect of fulfilling customers' needs. It is risky because tardiness in distribution can cause fuel scarcity. Many risks occur in the distribution process. House of Risk is a method used for mitigating risk. The study identifies seven risk events and nine risk agents. An occurrence/severity matrix is used to eliminate risks of minor impact. House of Risk 1 is used to determine the Aggregate Risk Potential (ARP). A Pareto diagram is applied to prioritize, based on ARP, the risks that must be mitigated by preventive actions. It identifies four priority risks, namely A8 (car trouble), A4 (human error), A3 (erroneous deposit via bank and underpayment), and A6 (traffic accident), which should be mitigated. House of Risk 2 maps preventive actions against risk agents and yields the effectiveness-to-difficulty (ETD) ratio for each mitigating action. Conducting a routine safety talk once every three days, with an ETD of 2088, is the primary preventive action.
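The House of Risk arithmetic described above can be sketched in a few lines. This is a minimal illustration of the standard ARP and ETD formulas; the severities, occurrences, and correlation weights below are invented for demonstration and are not the study's East Borneo data.

```python
# Illustrative House of Risk calculation.  All numbers are made up for
# demonstration; they are NOT the East Borneo study's data.

def arp(occurrence, severities, correlations):
    """Aggregate Risk Potential of one risk agent:
    ARP_j = O_j * sum_i(S_i * R_ij)."""
    return occurrence * sum(s * r for s, r in zip(severities, correlations))

def etd(total_effectiveness, difficulty):
    """Effectiveness-to-Difficulty ratio of one preventive action."""
    return total_effectiveness / difficulty

severities = [9, 7, 5]                     # severity S_i of three risk events
occ = {"A8": 8, "A4": 7}                   # occurrence O_j of two risk agents
corr = {"A8": [9, 3, 1], "A4": [3, 9, 3]}  # correlation R_ij (0/1/3/9 scale)

arps = {agent: arp(occ[agent], severities, corr[agent]) for agent in occ}
# total effectiveness of one action against both agents (E_jk on a 0..9 scale)
te = sum(arps[a] * e for a, e in [("A8", 9), ("A4", 3)])
print(arps)                 # {'A8': 856, 'A4': 735}
print(round(etd(te, 4)))    # 2477
```

The ETD ranking then orders preventive actions: the higher the ratio, the more risk reduction per unit of implementation difficulty.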

  8. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    Science.gov (United States)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster-size-dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes either an exponential or a log-normal distribution, depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
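To make the setting concrete, here is a toy Monte Carlo of cluster growth driven by additive noise whose amplitude depends on the current size. The square-root scaling, drift, and parameter values are assumptions made for this illustration; they are not the letter's exact model.

```python
import math, random, statistics

# Toy Monte Carlo of cluster growth with size-dependent additive noise,
#   x_{t+1} = x_t + v + sqrt(x_t) * xi_t,   xi_t ~ N(0, sigma^2).
# The sqrt scaling, the drift v, and sigma are illustrative assumptions.

def grow(x0, v=1.0, sigma=0.5, steps=200, rng=random):
    x = x0
    for _ in range(steps):
        x = x + v + math.sqrt(max(x, 0.0)) * rng.gauss(0.0, sigma)
        x = max(x, 1e-9)            # cluster sizes remain positive
    return x

rng = random.Random(42)
sizes = [grow(1.0, rng=rng) for _ in range(2000)]
# with zero-mean noise the mean follows the drift: E[x] ~ x0 + v*steps = 201
print(round(statistics.mean(sizes)))
```

Histogramming `sizes` at intermediate step counts, for different initial sizes, is the natural way to watch the transient shape of the distribution evolve.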

  9. Distributed resource management across process boundaries

    KAUST Repository

    Suresh, Lalith

    2017-09-27

    Multi-tenant distributed systems composed of small services, such as Service-oriented Architectures (SOAs) and Micro-services, raise new challenges in attaining high performance and efficient resource utilization. In these systems, a request execution spans tens to thousands of processes, and the execution paths and resource demands on different services are generally not known when a request first enters the system. In this paper, we highlight the fundamental challenges of regulating load and scheduling in SOAs while meeting end-to-end performance objectives on metrics of concern to both tenants and operators. We design Wisp, a framework for building SOAs that transparently adapts rate limiters and request schedulers system-wide according to operator policies to satisfy end-to-end goals while responding to changing system conditions. In evaluations against production as well as synthetic workloads, Wisp successfully enforces a range of end-to-end performance objectives, such as reducing average latencies, meeting deadlines, providing fairness and isolation, and avoiding system overload.

  10. Distributed resource management across process boundaries

    KAUST Repository

    Suresh, Lalith; Bodik, Peter; Menache, Ishai; Canini, Marco; Ciucu, Florin

    2017-01-01

    Multi-tenant distributed systems composed of small services, such as Service-oriented Architectures (SOAs) and Micro-services, raise new challenges in attaining high performance and efficient resource utilization. In these systems, a request execution spans tens to thousands of processes, and the execution paths and resource demands on different services are generally not known when a request first enters the system. In this paper, we highlight the fundamental challenges of regulating load and scheduling in SOAs while meeting end-to-end performance objectives on metrics of concern to both tenants and operators. We design Wisp, a framework for building SOAs that transparently adapts rate limiters and request schedulers system-wide according to operator policies to satisfy end-to-end goals while responding to changing system conditions. In evaluations against production as well as synthetic workloads, Wisp successfully enforces a range of end-to-end performance objectives, such as reducing average latencies, meeting deadlines, providing fairness and isolation, and avoiding system overload.

  11. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    Science.gov (United States)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array includes two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and the self-organized map (SOM) neural network. The vision processor is fabricated using a 0.18 µm CMOS technology. The circuit area of the distributed memory is markedly reduced to 1/3 of that of conventional memory, so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design achieves correct functions.

  12. Reduced thermal budget processing of Y-Ba-Cu-O films by rapid isothermal processing assisted metalorganic chemical vapor deposition

    International Nuclear Information System (INIS)

    Singh, R.; Sinha, S.; Hsu, N.J.; Ng, J.T.C.; Chou, P.; Thakur, R.P.S.; Narayan, J.

    1991-01-01

    Metalorganic chemical vapor deposition (MOCVD) has the potential of emerging as a viable technique to fabricate ribbons, tapes, and coated wires, and to deposit films of high-temperature superconductors and related materials. As a reduced-thermal-budget processing technique, rapid isothermal processing (RIP) based on incoherent radiation as the source of energy can be usefully coupled to conventional MOCVD. In this paper we report on the deposition and characterization of high-quality superconducting thin films of Y-Ba-Cu-O (YBCO) on yttrium-stabilized zirconia substrates by RIP-assisted MOCVD. Using O₂ gas as the source of oxygen, YBCO films deposited initially at 600 °C for 1 min and at 745 °C for 25 min, followed by deposition at 780 °C for 45 s, are primarily c-axis oriented, and zero resistance is observed at 89-90 K. The zero-magnetic-field current densities at 53 and 77 K are 1.2×10⁶ and 3×10⁵ A/cm², respectively. By using a mixture of N₂O and O₂ as the oxygen source, the substrate temperature was further reduced in the deposition of YBCO films. The films deposited initially at 600 °C for 1 min and then at 720 °C for 30 min are c-axis oriented, with zero resistance being observed at 91 K. The zero-magnetic-field current densities at 53 and 77 K are 3.4×10⁶ and 1.2×10⁶ A/cm², respectively. To the best of our knowledge this is the highest value of critical current density, J_c, for films deposited by MOCVD at a substrate temperature as low as 720 °C. It is envisioned that high-energy photons from the incoherent light source and the use of a mixture of N₂O and O₂ as the oxygen source assist chemical reactions and lower the overall thermal budget for processing of these films

  13. Remote-Sensing Data Distribution and Processing in the Cloud at the ASF DAAC

    Science.gov (United States)

    Stoner, C.; Arko, S. A.; Nicoll, J. B.; Labelle-Hamer, A. L.

    2016-12-01

    The Alaska Satellite Facility (ASF) Distributed Active Archive Center (DAAC) has been tasked to archive and distribute data from both SENTINEL-1 satellites and from the NASA-ISRO Synthetic Aperture Radar (NISAR) satellite in a cost-effective manner. In order to best support processing and distribution of these large data sets for users, the ASF DAAC enhanced our data system in a number of ways that will be detailed in this presentation. The SENTINEL-1 mission comprises a constellation of two polar-orbiting satellites, operating day and night performing C-band Synthetic Aperture Radar (SAR) imaging, enabling them to acquire imagery regardless of the weather. SENTINEL-1A was launched by the European Space Agency (ESA) in April 2014. SENTINEL-1B is scheduled to launch in April 2016. The NISAR satellite is designed to observe and take measurements of some of the planet's most complex processes, including ecosystem disturbances, ice-sheet collapse, and natural hazards such as earthquakes, tsunamis, volcanoes and landslides. NISAR will employ radar imaging, polarimetry, and interferometry techniques using the SweepSAR technology for full-resolution wide-swath imaging. NISAR data files are large, making storage and processing a challenge for conventional store-and-download systems. To effectively process, store, and distribute petabytes of data in a high-performance computing environment, ASF took a long view with regard to technology choices and picked the path of most flexibility and software reuse. To that end, this software tools and services presentation will cover web object storage (WOS) and the ability to move seamlessly from local sunk-cost hardware to a public cloud such as Amazon Web Services (AWS). A prototype of the SENTINEL-1A system that is in AWS, as well as a local hardware solution, will be examined to explain the pros and cons of each. In preparation for NISAR files, which will be even larger than SENTINEL-1A, ASF has embarked on a number of cloud

  14. TCP (truncated compound Poisson) process for multiplicity distributions in high energy collisions

    International Nuclear Information System (INIS)

    Srivastave, P.P.

    1990-01-01

    On using the Poisson distribution truncated at zero for intermediate cluster decay in a compound Poisson process, the authors obtain the TCP distribution, which describes quite well the multiplicity distributions in high energy collisions. A detailed comparison is made between TCP and NB (negative binomial) for UA5 data. The reduced moments up to the fifth agree very well with the observed ones. The TCP curves are narrower than NB at the high-multiplicity tail, look narrower at very high energy, and develop shoulders and oscillations which become increasingly pronounced as the energy grows. At lower energies the distributions describe the data for fixed intervals of rapidity for UA5 data and the data (at low energy) for e⁺e⁻ annihilation and pion-proton, proton-proton and muon-proton scattering. A discussion of the compound Poisson distribution, expressions for the reduced moments, and Poisson transforms are also given. The TCP curves and curves of the reduced moments for different values of the parameters are also presented
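The TCP construction described above can be sketched as a two-stage sampler: a Poisson number of intermediate clusters, each of which decays into a zero-truncated Poisson number of particles. The parameter values below are illustrative choices, not values fitted to UA5 data.

```python
import math, random

# Sketch of a truncated-compound-Poisson (TCP) multiplicity sampler: the
# number of clusters is Poisson(lam) and each cluster decays into a
# zero-truncated Poisson(mu) number of particles.  lam and mu are
# illustrative values only.

def poisson(lam, rng):
    # Knuth's multiplication method; adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def zt_poisson(mu, rng):
    while True:                      # rejection: resample until k >= 1
        k = poisson(mu, rng)
        if k >= 1:
            return k

def tcp_sample(lam, mu, rng):
    return sum(zt_poisson(mu, rng) for _ in range(poisson(lam, rng)))

rng = random.Random(1)
samples = [tcp_sample(5.0, 2.0, rng) for _ in range(5000)]
mean = sum(samples) / len(samples)
print(round(mean, 1))   # E[N] = lam*mu/(1 - exp(-mu)) ~ 11.6 here
```

Histogramming `samples` gives the multiplicity distribution whose reduced moments can then be compared against a negative binomial fit.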

  15. Variation in recombination frequency and distribution across eukaryotes: patterns and processes

    Science.gov (United States)

    Feulner, Philine G. D.; Johnston, Susan E.; Santure, Anna W.; Smadja, Carole M.

    2017-01-01

    Recombination, the exchange of DNA between maternal and paternal chromosomes during meiosis, is an essential feature of sexual reproduction in nearly all multicellular organisms. While the role of recombination in the evolution of sex has received theoretical and empirical attention, less is known about how recombination rate itself evolves and what influence this has on evolutionary processes within sexually reproducing organisms. Here, we explore the patterns of, and processes governing recombination in eukaryotes. We summarize patterns of variation, integrating current knowledge with an analysis of linkage map data in 353 organisms. We then discuss proximate and ultimate processes governing recombination rate variation and consider how these influence evolutionary processes. Genome-wide recombination rates (cM/Mb) can vary more than tenfold across eukaryotes, and there is large variation in the distribution of recombination events across closely related taxa, populations and individuals. We discuss how variation in rate and distribution relates to genome architecture, genetic and epigenetic mechanisms, sex, environmental perturbations and variable selective pressures. There has been great progress in determining the molecular mechanisms governing recombination, and with the continued development of new modelling and empirical approaches, there is now also great opportunity to further our understanding of how and why recombination rate varies. This article is part of the themed issue ‘Evolutionary causes and consequences of recombination rate variation in sexual organisms’. PMID:29109219

  16. Progress in scale-up of second-generation HTS conductor

    International Nuclear Information System (INIS)

    Selvamanickam, V.; Chen, Y.; Xiong, X.; Xie, Y.; Zhang, X.; Qiao, Y.; Reeves, J.; Rar, A.; Schmidt, R.; Lenseth, K.

    2007-01-01

    Tremendous progress has recently been made in the achievement of high-performance, high-speed, long-length second-generation (2G) HTS conductors. Using ion beam assisted deposition (IBAD) MgO and metal organic chemical vapor deposition (MOCVD), SuperPower has scaled up tape lengths to 427 m with a minimum critical current of 191 A/cm, corresponding to a critical current × length performance of 81,550 A·m. Tape speeds up to 120 m/h have been reached with IBAD MgO, up to 80 m/h with buffer deposition, and up to 45 m/h with MOCVD, all in single-pass processing of 12 mm wide tape. A critical current of 227 A/cm has been achieved in a 203 m long tape produced in an all-high-speed fabrication process. Critical current values have been raised to 721 A/cm, 592 A/cm and 486 A/cm in short reel-to-reel processed tape, over a 1 m length, and over 11.1 m, respectively, using thicker MOCVD HTS films. Finally, over 10,000 m of copper-stabilized, 4 mm wide conductor has been produced and tested for delivery to the Albany Cable project. The average critical current of the 10,000 m lot was 81 A

  17. Progress in scale-up of second-generation HTS conductor

    Energy Technology Data Exchange (ETDEWEB)

    Selvamanickam, V. [SuperPower Inc., 450 Duane Avenue, Schenectady, NY 12304 (United States)], E-mail: vselva@igc.com; Chen, Y.; Xiong, X.; Xie, Y.; Zhang, X.; Qiao, Y.; Reeves, J.; Rar, A.; Schmidt, R.; Lenseth, K. [SuperPower Inc., 450 Duane Avenue, Schenectady, NY 12304 (United States)

    2007-10-01

    Tremendous progress has recently been made in the achievement of high-performance, high-speed, long-length second-generation (2G) HTS conductors. Using ion beam assisted deposition (IBAD) MgO and metal organic chemical vapor deposition (MOCVD), SuperPower has scaled up tape lengths to 427 m with a minimum critical current of 191 A/cm, corresponding to a critical current × length performance of 81,550 A·m. Tape speeds up to 120 m/h have been reached with IBAD MgO, up to 80 m/h with buffer deposition, and up to 45 m/h with MOCVD, all in single-pass processing of 12 mm wide tape. A critical current of 227 A/cm has been achieved in a 203 m long tape produced in an all-high-speed fabrication process. Critical current values have been raised to 721 A/cm, 592 A/cm and 486 A/cm in short reel-to-reel processed tape, over a 1 m length, and over 11.1 m, respectively, using thicker MOCVD HTS films. Finally, over 10,000 m of copper-stabilized, 4 mm wide conductor has been produced and tested for delivery to the Albany Cable project. The average critical current of the 10,000 m lot was 81 A.

  18. Gene tree rooting methods give distributions that mimic the coalescent process.

    Science.gov (United States)

    Tian, Yuan; Kubatko, Laura S

    2014-01-01

    Multi-locus phylogenetic inference is commonly carried out via models that incorporate the coalescent process to model the possibility that incomplete lineage sorting leads to incongruence between gene trees and the species tree. An interesting question that arises in this context is whether data "fit" the coalescent model. Previous work (Rosenfeld et al., 2012) has suggested that rooting of gene trees may account for variation in empirical data that has been previously attributed to the coalescent process. We examine this possibility using simulated data. We show that, in the case of four taxa, the distribution of gene trees observed from rooting estimated gene trees with either the molecular clock or with outgroup rooting can be closely matched by the distribution predicted by the coalescent model with specific choices of species tree branch lengths. We apply commonly-used coalescent-based methods of species tree inference to assess their performance in these situations. Copyright © 2013 Elsevier Inc. All rights reserved.
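The kind of prediction being matched can be illustrated with the textbook rooted-triplet case: for a species tree ((A,B),C) whose internal branch has length t in coalescent units, the multispecies coalescent fixes the gene-tree topology probabilities. The paper's four-taxon setting is richer, but rests on the same machinery.

```python
import math

# Textbook multispecies-coalescent probabilities for a rooted triplet:
# for species tree ((A,B),C) with internal branch length t (coalescent
# units), incomplete lineage sorting makes the two discordant gene-tree
# topologies equally likely.

def triplet_probs(t):
    discordant = math.exp(-t) / 3.0          # each of ((A,C),B), ((B,C),A)
    concordant = 1.0 - 2.0 * discordant      # ((A,B),C)
    return concordant, discordant, discordant

print(triplet_probs(0.0))   # all three topologies equally likely (1/3 each)
print(triplet_probs(2.0))   # the concordant topology dominates (~0.91)
```

Varying t shows why specific species-tree branch lengths can reproduce a wide range of observed gene-tree frequency distributions, which is exactly the degree of freedom the rooting comparison exploits.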

  19. Design and simulation of parallel and distributed architectures for images processing

    International Nuclear Information System (INIS)

    Pirson, Alain

    1990-01-01

    The exploitation of visual information requires special computers. The diversity of operations and the computing power involved bring about structures founded on the concepts of concurrency and distributed processing. This work identifies a vision computer with an association of dedicated intelligent entities exchanging messages according to the model of parallelism introduced by the language Occam. It puts forward an architecture of the 'enriched processor network' type. It consists of a classical multiprocessor structure where each node is provided with specific devices. These devices perform processing tasks as well as inter-node dialogues. Such an architecture benefits from the homogeneity of multiprocessor networks and the power of dedicated resources. Its implementation corresponds to that of a distributed structure, tasks being allocated to each computing element. This approach culminates in an original architecture called ATILA. This modular structure is based on a transputer network supplied with vision-dedicated co-processors and powerful communication devices. (author) [fr

  20. Rupture of Finger Extensor Tendons due to Bilateral Idiopathic Ulna Plus

    Directory of Open Access Journals (Sweden)

    Gustavo Alberto Breglia

    2012-11-01

    Background: Hyaline cartilage has only a very restricted capability for regeneration in the adult. The incidence of chondral lesions at the knee is high, especially those of Grade II/III (Outerbridge). Therapies combining cells and biological scaffolds are promising biological approaches for the treatment of cartilage defects. The aim of this study is to analyze the characteristics of in vitro culture of human chondrocytes on decellularized amniochorionic membrane (ACM). Methods: Between December 2010 and December 2011, 16 samples of cartilage from a living donor were processed, but only 7 of them were analyzed. Chondrocytes were grown and amplified on plastic and on ACM. The following analyses were carried out with those cells: interactions between cells and ACM; ACM capacity as a matrix for cells; and behavior of cells cultured on ACM. Results: In vitro, chondrocytes exhibited phenotypic changes in the presence of ACM. The cells were able to adhere to and remain on the spongy region of the membrane. Electron microscopy of cultured ACM showed cells, well-preserved organelles, endoplasmic reticulum and desmosome junctions. Conclusions: The feasibility of culturing chondrocytes on ACM was shown in this work. The cells were able to adhere, remain and differentiate on this membrane during the study period.

  1. The CANDU 9 distributed control system design process

    International Nuclear Information System (INIS)

    Harber, J.E.; Kattan, M.K.; Macbeth, M.J.

    1997-01-01

    Canadian-designed CANDU pressurized heavy water nuclear reactors have been world leaders in electrical power generation. The CANDU 9 project is AECL's next reactor design. Plant control for the CANDU 9 station design is performed by a distributed control system (DCS), as compared to the centralized control computers, analog control devices and relay logic used in previous CANDU designs. The selection of a DCS as the platform to perform the process control functions and most of the data acquisition of the plant is consistent with the evolutionary nature of the CANDU technology. The control strategies for the DCS control programs are based on previous CANDU designs but are implemented on a new hardware platform, taking advantage of advances in computer technology. This paper describes the design process for developing the CANDU 9 DCS. Various design activities, prototyping and analyses have been undertaken in order to ensure a safe, functional, and cost-effective design. (author)

  2. Parallel Distributed Processing Theory in the Age of Deep Networks.

    Science.gov (United States)

    Bowers, Jeffrey S

    2017-12-01

    Parallel distributed processing (PDP) models in psychology are the precursors of deep networks used in computer science. However, only PDP models are associated with two core psychological claims, namely that all knowledge is coded in a distributed format and cognition is mediated by non-symbolic computations. These claims have long been debated in cognitive science, and recent work with deep networks speaks to this debate. Specifically, single-unit recordings show that deep networks learn units that respond selectively to meaningful categories, and researchers are finding that deep networks need to be supplemented with symbolic systems to perform some tasks. Given the close links between PDP and deep networks, it is surprising that research with deep networks is challenging PDP theory. Copyright © 2017. Published by Elsevier Ltd.

  3. Unified theory for stochastic modelling of hydroclimatic processes: Preserving marginal distributions, correlation structures, and intermittency

    Science.gov (United States)

    Papalexiou, Simon Michael

    2018-05-01

    Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
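A minimal sketch of the parent-Gaussian idea: simulate a Gaussian AR(1) process, then push each value through the Gaussian CDF followed by the inverse CDF of the target marginal (an exponential here). The AR(1) coefficient and exponential rate are illustrative choices, and the paper's parametric correlation-transformation step (which corrects the parent correlation so the target correlation is matched exactly) is omitted.

```python
import math, random, statistics

# Sketch of simulating a non-Gaussian process by transforming a "parent"
# Gaussian AR(1): map each Gaussian value through the standard normal CDF
# and then the exponential quantile function.  rho and rate are assumed
# illustrative parameters.

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simulate(n, rho=0.7, rate=2.0, rng=random):
    x = rng.gauss(0.0, 1.0)          # stationary start for the parent
    out = []
    for _ in range(n):
        x = rho * x + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        u = min(std_normal_cdf(x), 1.0 - 1e-12)   # guard against log(0)
        out.append(-math.log(1.0 - u) / rate)     # exponential quantile
    return out

rng = random.Random(7)
y = simulate(20000, rng=rng)
print(round(statistics.mean(y), 2))   # target marginal mean is 1/rate = 0.5
```

The same recipe accommodates mixed-type marginals (e.g. intermittent rainfall) by using a quantile function with an atom at zero, which is where the framework's generality comes from.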

  4. Investigation of the fabrication process of hot-worked stainless-steel and Mo sheathed PbMo₆S₈ wires

    International Nuclear Information System (INIS)

    Yamasaki, H.; Kimura, Y.

    1988-01-01

    Stainless-steel and Mo sheathed PbMo₆S₈ wires have been fabricated by hot working from modified PbS, Mo, and MoS₂ mixed powders which were prepared by reacting Pb, Mo, and S at 530 °C. Critical current densities were investigated for different preparation conditions, and it is revealed that obtaining a continuous current path between PbMo₆S₈ grains is the most important factor in achieving high critical current density. J_c values of 2.8×10⁴ A/cm² (8 T), 7.8×10³ A/cm² (15 T), and 1.3×10³ A/cm² (23 T) were observed for the PbMo₆S₇.₀ wire heat treated at 700 °C.

  5. Distributed and cooperative task processing: Cournot oligopolies on a graph.

    Science.gov (United States)

    Pavlic, Theodore P; Passino, Kevin M

    2014-06-01

    This paper introduces a novel framework for the design of distributed agents that must complete externally generated tasks but also can volunteer to process tasks encountered by other agents. To reduce the computational and communication burden of coordination between agents to perfectly balance load around the network, the agents adjust their volunteering propensity asynchronously within a fictitious trading economy. This economy provides incentives for nontrivial levels of volunteering for remote tasks, and thus load is shared. Moreover, the combined effects of diminishing marginal returns and network topology lead to competitive equilibria that have task reallocations that are qualitatively similar to what is expected in a load-balancing system with explicit coordination between nodes. In the paper, topological and algorithmic conditions are given that ensure the existence and uniqueness of a competitive equilibrium. Additionally, a decentralized distributed gradient-ascent algorithm is given that is guaranteed to converge to this equilibrium while not causing any node to over-volunteer beyond its maximum task-processing rate. The framework is applied to an autonomous-air-vehicle example, and connections are drawn to classic studies of the evolution of cooperation in nature.
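The flavor of the decentralized dynamics can be sketched with a toy two-node market in which diminishing marginal returns are modelled as w_i·log(1+r_i) and nodes adjust their shares of the load until marginal returns equalize. This sketches the general mechanism only; the utilities, step size, and projection below are assumptions, not the paper's exact Cournot utilities or fictitious-trading economy.

```python
# Toy decentralized gradient ascent toward an equilibrium where marginal
# returns equalize across nodes.  Diminishing returns: u_i = w_i*log(1+r_i).
# Utilities, step size, and projection are illustrative assumptions.

def allocate(weights, total, eta=0.05, iters=4000):
    n = len(weights)
    r = [total / n] * n                                  # even split to start
    for _ in range(iters):
        g = [w / (1.0 + x) for w, x in zip(weights, r)]  # marginal returns
        avg = sum(g) / n
        # ascend along the sum-preserving direction, staying nonnegative
        r = [max(0.0, x + eta * (gi - avg)) for x, gi in zip(r, g)]
        s = sum(r)
        r = [x * total / s for x in r]                   # keep total load fixed
    return r

r = allocate([3.0, 1.0], 4.0)
print([round(x, 2) for x in r])   # [3.5, 0.5]: marginal returns 3/4.5 == 1/1.5
```

At the fixed point no node can improve its return by shifting load, which is the qualitative behavior the paper proves for its competitive equilibria.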

  6. Palantiri: a distributed real-time database system for process control

    International Nuclear Information System (INIS)

    Tummers, B.J.; Heubers, W.P.J.

    1992-01-01

    The medium-energy accelerator MEA, located in Amsterdam, is controlled by a heterogeneous computer network. A large real-time database contains the parameters involved in the control of the accelerator and the experiments. This database system was implemented about ten years ago and has since been extended several times. In response to increased needs the database system has been redesigned. The new database environment, as described in this paper, consists of two new concepts: (1) a Palantir, a per-machine process that stores the locally declared data and forwards all non-local requests for data access to the appropriate machine; it acts as a storage device for data and a looking glass upon the world. (2) Golems: working units that define the data within the Palantir and that have knowledge of the hardware they control. Applications access the data of a Golem by name (names resemble Unix path names). The Palantir that runs on the same machine as the application handles the distribution of access requests. This paper focuses on the Palantir concept as a distributed data storage and event-handling device for process control. (author)
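A minimal sketch of the Palantir access pattern: locally declared data lives under Unix-like path names, and any non-local read is forwarded to the Palantir of the owning machine. The class, path scheme, and two-node wiring below are hypothetical illustrations, not the MEA implementation.

```python
# Sketch of the Palantir idea: per-machine data store with path-named
# entries and transparent forwarding of non-local reads.  The class name,
# path layout (/<host>/<golem>/<parameter>), and wiring are assumptions.

class Palantir:
    def __init__(self, host):
        self.host = host
        self.local = {}      # path -> value, declared by local "golems"
        self.peers = {}      # host name -> remote Palantir

    def declare(self, path, value):
        self.local[path] = value

    def read(self, path):
        owner = path.split("/")[1]          # /<host>/<golem>/<parameter>
        if owner == self.host:
            return self.local[path]
        return self.peers[owner].read(path)  # forward to the remote Palantir

mea, console = Palantir("mea"), Palantir("console")
console.peers["mea"] = mea
mea.declare("/mea/magnet1/current", 12.5)
print(console.read("/mea/magnet1/current"))  # 12.5
```

The application never needs to know where a parameter lives; the local Palantir resolves ownership from the name, which is the "looking glass upon the world" role described above.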

  7. Methods, media and systems for managing a distributed application running in a plurality of digital processing devices

    Science.gov (United States)

    Laadan, Oren; Nieh, Jason; Phung, Dan

    2012-10-02

    Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.

  8. Distributed process control system for remote control and monitoring of the TFTR tritium systems

    International Nuclear Information System (INIS)

    Schobert, G.; Arnold, N.; Bashore, D.; Mika, R.; Oliaro, G.

    1989-01-01

    This paper reviews the progress made in the application of a commercially available distributed process control system to support the requirements established for the Tritium REmote Control And Monitoring System (TRECAMS) of the Tokamak Fusion Test Reactor (TFTR). The system that will be discussed was purchased from the Texas Instruments (TI) Automation Controls Division (previously marketed by Rexnord Automation). It consists of three fully redundant distributed process controllers interfaced to over 1800 analog and digital I/O points. The operator consoles located throughout the facility are supported by four Digital Equipment Corporation (DEC) PDP-11/73 computers. The PDP-11/73s and the three process controllers communicate over a fully redundant one-megabaud fiber optic network. All system functionality is based on a set of completely integrated databases loaded to the process controllers and the PDP-11/73s. (author). 2 refs.; 2 figs

  9. Design and simulation for real-time distributed processing systems

    International Nuclear Information System (INIS)

    Legrand, I.C.; Gellrich, A.; Gensah, U.; Leich, H.; Wegner, P.

    1996-01-01

    The aim of this work is to provide a proper framework for the simulation and optimization of the event building, the on-line third-level trigger, and the complete event-reconstruction processor farm for the future HERA-B experiment. A discrete-event, process-oriented simulation developed in concurrent μC++ is used for modelling the farm nodes, running with multi-tasking constraints, and different types of switching elements and digital signal processors interconnected for distributing the data through the system. An adequate graphic interface to the simulation, which allows features to be monitored on-line and trace files to be analyzed, provides a powerful development tool for evaluating and designing parallel processing architectures. Control software and data flow protocols for event building and dynamic processor allocation are presented for two architectural models. (author)

  10. Software/hardware distributed processing network supporting the Ada environment

    Science.gov (United States)

    Wood, Richard J.; Pryk, Zen

    1993-09-01

    A high-performance, fault-tolerant, distributed network has been developed, tested, and demonstrated. The network is based on the MIPS Computer Systems, Inc. R3000 RISC processor, VHSIC ASICs for high-speed, reliable inter-node communications, and compatible commercial memory and I/O boards. The network is an evolution of the Advanced Onboard Signal Processor (AOSP) architecture. It supports Ada application software with an Ada-implemented operating system. A six-node implementation (capable of expansion up to 256 nodes) of the RISC multiprocessor architecture provides 120 MIPS of scalar throughput, 96 Mbytes of RAM and 24 Mbytes of non-volatile memory. The network provides for all ground processing applications, has merit for a space-qualified RISC-based network, and interfaces to advanced Computer Aided Software Engineering (CASE) tools for application software development.

  11. Quantifying evenly distributed states in exclusion and nonexclusion processes

    Science.gov (United States)

    Binder, Benjamin J.; Landman, Kerry A.

    2011-04-01

    Spatial-point data sets, generated from a wide range of physical systems and mathematical models, can be analyzed by counting the number of objects in equally sized bins. We find that the bin counts are related to the Pólya distribution. New measures are developed which indicate whether or not a spatial data set, generated from an exclusion process, is at its most evenly distributed state, the complete spatial randomness (CSR) state. To this end, we define an index in terms of the variance between the bin counts. Limiting values of the index are determined when objects have access to the entire domain and when there are subregions of the domain that are inaccessible to objects. Using three case studies (Lagrangian fluid particles in chaotic laminar flows, cellular automata agents in discrete models, and biological cells within colonies), we calculate the indexes and verify that our theoretical CSR limit accurately predicts the state of the system. These measures should prove useful in many biological applications.
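    The variance-based evenness index described above can be sketched in a few lines. The paper defines its index through the Pólya distribution; the normalisation below (sample variance of bin counts over the mean count, near 1 for CSR and 0 for a perfectly even state) is a simplified stand-in for illustration only:

    ```python
    import random

    def bin_counts(points, domain=1.0, nbins=20):
        """Count 1-D points in equally sized bins."""
        counts = [0] * nbins
        for x in points:
            counts[min(int(x / domain * nbins), nbins - 1)] += 1
        return counts

    def evenness_index(counts):
        """Sample variance of the bin counts over the mean count.

        Near 1 for complete spatial randomness (near-Poisson counts),
        below 1 for states more even than CSR (exclusion), above 1 for
        clustering. Illustrative normalisation only -- the paper's index
        is defined via the Polya distribution.
        """
        mean = sum(counts) / len(counts)
        var = sum((c - mean) ** 2 for c in counts) / len(counts)
        return var / mean if mean else 0.0

    random.seed(1)
    csr = [random.random() for _ in range(1000)]   # independent uniform points
    even = [(k + 0.5 * random.random()) / 1000     # jittered grid: exclusion-like,
            for k in range(1000)]                  # more even than CSR
    print(evenness_index(bin_counts(csr)))   # near 1 (sampling noise)
    print(evenness_index(bin_counts(even)))  # near 0
    ```

    The jittered-grid case keeps exactly the same number of points per bin, so its index collapses toward zero, mirroring the paper's "most evenly distributed state".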

  12. Spatial Data Exploring by Satellite Image Distributed Processing

    Science.gov (United States)

    Mihon, V. D.; Colceriu, V.; Bektas, F.; Allenbach, K.; Gvilava, M.; Gorgan, D.

    2012-04-01

    Societal needs and environmental prediction encourage the development of applications oriented toward supervising and analyzing Earth-science-related phenomena. Satellite images can be explored to discover information concerning land cover, hydrology, air quality, and water and soil pollution. Spatial and environment-related data can be acquired by imagery classification, consisting of data mining throughout the multispectral bands. The process takes into account a large set of variables such as satellite image types (e.g. MODIS, Landsat), the particular geographic area, soil composition, vegetation cover, and generally the context (e.g. clouds, snow, and season). All these specific and variable conditions require flexible tools and applications to support an optimal search for appropriate solutions, as well as high-power computation resources. The research concerns experiments on flexible, visual descriptions of satellite image processing over distributed infrastructures (e.g. Grid, Cloud, and GPU clusters). This presentation highlights the Grid-based implementation of the GreenLand application. The GreenLand application development is based on simple but powerful notions of mathematical operators and workflows that are used in distributed and parallel executions over the Grid infrastructure. Currently it is used in three major case studies concerning the Istanbul geographical area, the Rioni River in Georgia, and the Black Sea catchment region. The GreenLand application offers a friendly user interface for viewing and editing workflows and operators. The description involves the basic operators provided by the GRASS [1] library as well as many other image-related operators supported by the ESIP platform [2]. The processing workflows are represented as directed graphs, giving the user a fast and easy way to describe complex parallel algorithms without having any prior knowledge of any programming language or application commands.

  13. On the joint distribution of excursion duration and amplitude of a narrow-band Gaussian process

    DEFF Research Database (Denmark)

    Ghane, Mahdi; Gao, Zhen; Blanke, Mogens

    2018-01-01

    The probability density of crest amplitude and of duration of exceeding a given level are used in many theoretical and practical problems in engineering. The joint density is essential for the design of constructions that are subjected to waves and wind. The presently available joint distributions of amplitude and period are limited to excursions through a mean level or to describing the asymptotic behavior of high-level excursions. This paper extends the knowledge by presenting a theoretical derivation of the probability of wave exceedance amplitude and duration for a narrow-band Gaussian process ... distribution, as expected, and that the marginal distribution of excursion duration works both for asymptotic and non-asymptotic cases. The suggested model is found to be a good replacement for the empirical distributions that are widely used. Results from simulations of narrow-band Gaussian processes, real ...
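    The quantities involved are easy to reproduce numerically. The sketch below simulates a narrow-band Gaussian process as a sum of cosines with random phases and frequencies clustered around a carrier, then collects the durations of excursions above a fixed level. The parameters (unit variance, carrier frequency 1, level u = 1) are illustrative choices, not the paper's:

    ```python
    import math, random

    random.seed(0)

    # Narrow-band Gaussian process via its spectral representation:
    # many cosines with random phases, frequencies clustered around f0.
    f0, bw, ncomp = 1.0, 0.05, 40
    freqs = [f0 + bw * (random.random() - 0.5) for _ in range(ncomp)]
    phases = [2 * math.pi * random.random() for _ in range(ncomp)]
    amp = math.sqrt(2.0 / ncomp)  # gives approximately unit variance

    def x(t):
        return amp * sum(math.cos(2 * math.pi * f * t + p)
                         for f, p in zip(freqs, phases))

    # Collect durations of excursions above the level u.
    dt, level, nsamp = 0.01, 1.0, 30000
    durations, start = [], None
    for k in range(nsamp):
        above = x(k * dt) > level
        if above and start is None:
            start = k
        elif not above and start is not None:
            durations.append((k - start) * dt)
            start = None

    mean_dur = sum(durations) / len(durations)
    print(len(durations), mean_dur)
    ```

    The resulting empirical histogram of `durations` is the kind of data the paper's theoretical excursion-duration distribution is meant to describe.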

  14. Using Java for distributed computing in the Gaia satellite data processing

    Science.gov (United States)

    O'Mullane, William; Luri, Xavier; Parsons, Paul; Lammers, Uwe; Hoar, John; Hernandez, Jose

    2011-10-01

    In recent years Java has matured to a stable easy-to-use language with the flexibility of an interpreter (for reflection etc.) but the performance and type checking of a compiled language. When we started using Java for astronomical applications around 1999 they were the first of their kind in astronomy. Now a great deal of astronomy software is written in Java as are many business applications. We discuss the current environment and trends concerning the language and present an actual example of scientific use of Java for high-performance distributed computing: ESA's mission Gaia. The Gaia scanning satellite will perform a galactic census of about 1,000 million objects in our galaxy. The Gaia community has chosen to write its processing software in Java. We explore the manifold reasons for choosing Java for this large science collaboration. Gaia processing is numerically complex but highly distributable, some parts being embarrassingly parallel. We describe the Gaia processing architecture and its realisation in Java. We delve into the astrometric solution which is the most advanced and most complex part of the processing. The Gaia simulator is also written in Java and is the most mature code in the system. This has been successfully running since about 2005 on the supercomputer "Marenostrum" in Barcelona. We relate experiences of using Java on a large shared machine. Finally we discuss Java, including some of its problems, for scientific computing.

  15. Facilitated transport of Cr(III) through activated composite membrane containing di-(2-ethylhexyl)phosphoric acid (DEHPA) as carrier agent

    Energy Technology Data Exchange (ETDEWEB)

    Arslan, Gulsin [Department of Chemistry, Selcuk University, 42031, Campus, Konya (Turkey); Tor, Ali, E-mail: ator@selcuk.edu.tr [Department of Environmental Engineering, Selcuk University, 42031 Campus, Konya (Turkey); Cengeloglu, Yunus; Ersoz, Mustafa [Department of Chemistry, Selcuk University, 42031, Campus, Konya (Turkey)

    2009-06-15

    The facilitated transport of chromium(III) through an activated composite membrane (ACM) containing di-(2-ethylhexyl) phosphoric acid (DEHPA) was investigated. DEHPA was immobilised by interfacial polymerisation on a polysulfone layer deposited on non-woven fabric using a spin coater. The ACM was then characterised using scanning electron microscopy (SEM), contact angle measurements and atomic force microscopy (AFM). Initially, batch experiments of liquid-liquid distribution of Cr(III) and the extractant (DEHPA) were carried out to determine the appropriate pH of the feed phase; the results showed that maximum extraction of Cr(III) was achieved at pH 4. It was also found that Cr(III) and DEHPA react in a 1:1 molar ratio. The effects of the Cr(III) (feed phase), HCl (stripping phase) and DEHPA (in the ACM) concentrations were investigated. The DEHPA concentration was varied from 0.1 to 1.0 M, and the transport of Cr(III) was found to increase with carrier concentration up to 0.8 M. It was also observed that the transport of Cr(III) through the ACM tended to increase with the Cr(III) and HCl concentrations. The stability of the ACM was confirmed with replicate experiments.

  16. Facilitated transport of Cr(III) through activated composite membrane containing di-(2-ethylhexyl)phosphoric acid (DEHPA) as carrier agent

    International Nuclear Information System (INIS)

    Arslan, Gulsin; Tor, Ali; Cengeloglu, Yunus; Ersoz, Mustafa

    2009-01-01

    The facilitated transport of chromium(III) through an activated composite membrane (ACM) containing di-(2-ethylhexyl) phosphoric acid (DEHPA) was investigated. DEHPA was immobilised by interfacial polymerisation on a polysulfone layer deposited on non-woven fabric using a spin coater. The ACM was then characterised using scanning electron microscopy (SEM), contact angle measurements and atomic force microscopy (AFM). Initially, batch experiments of liquid-liquid distribution of Cr(III) and the extractant (DEHPA) were carried out to determine the appropriate pH of the feed phase; the results showed that maximum extraction of Cr(III) was achieved at pH 4. It was also found that Cr(III) and DEHPA react in a 1:1 molar ratio. The effects of the Cr(III) (feed phase), HCl (stripping phase) and DEHPA (in the ACM) concentrations were investigated. The DEHPA concentration was varied from 0.1 to 1.0 M, and the transport of Cr(III) was found to increase with carrier concentration up to 0.8 M. It was also observed that the transport of Cr(III) through the ACM tended to increase with the Cr(III) and HCl concentrations. The stability of the ACM was confirmed with replicate experiments.

  17. Predicting cycle time distributions for integrated processing workstations : an aggregate modeling approach

    NARCIS (Netherlands)

    Veeger, C.P.L.; Etman, L.F.P.; Lefeber, A.A.J.; Adan, I.J.B.F.; Herk, van J.; Rooda, J.E.

    2011-01-01

    To predict cycle time distributions of integrated processing workstations, detailed simulation models are almost exclusively used; these models require considerable development and maintenance effort. As an alternative, we propose an aggregate model that is a lumped-parameter representation of the

  18. The complete information for phenomenal distributed parameter control of multicomponent chemical processes in gas, fluid and solid phase

    International Nuclear Information System (INIS)

    Niemiec, W.

    1985-01-01

    A constitutive mathematical model of the distributed parameters of multicomponent chemical processes in the gas, fluid and solid phases is utilized for the realization of phenomenal distributed-parameter control of these processes. Original systems of partial differential constitutive state equations, in the derivative forms /I/, /II/ and /III/, are solved in this paper from the point of view of information for phenomenal distributed-parameter control of the considered processes. Obtained in this way for multicomponent chemical processes in the gas, fluid and solid phases are: dynamical working space-time characteristics (analytical solutions in the working space-time of chemical reactors); dynamical phenomenal Green functions as working space-time transfer functions; statical working-space characteristics (analytical solutions in the working space of chemical reactors); and statical phenomenal Green functions as working-space transfer functions. These are applied as information for the realization of constitutive distributed-parameter control of the mass, energy and momentum aspects of the above processes. Two cases are considered: A/sup o/ with initial conditions, and B/sup o/ with initial and boundary conditions, for multicomponent chemical processes in the gas, fluid and solid phases.

  19. A mechanistic diagnosis of the simulation of soil CO2 efflux of the ACME Land Model

    Science.gov (United States)

    Liang, J.; Ricciuto, D. M.; Wang, G.; Gu, L.; Hanson, P. J.; Mayes, M. A.

    2017-12-01

    Accurate simulation of the CO2 efflux from soils (i.e., soil respiration) to the atmosphere is critical for projecting global biogeochemical cycles and the magnitude of climate change in Earth system models (ESMs). Currently, soil respiration simulated by ESMs still has a large uncertainty. In this study, a mechanistic diagnosis of soil respiration in the Accelerated Climate Model for Energy (ACME) Land Model (ALM) was conducted using long-term observations at the Missouri Ozark AmeriFlux (MOFLUX) forest site in the central U.S. The results showed that the ALM default run significantly underestimated annual soil respiration and gross primary production (GPP), while incorrectly estimating soil water potential. Improving the simulation of soil water potential with site-specific data significantly improved the modeled annual soil respiration, primarily because annual GPP was simultaneously improved. Therefore, simulations of soil water potential must be carefully calibrated in ESMs. Despite improved annual soil respiration, the ALM continued to underestimate soil respiration during peak growing seasons, and to overestimate it during non-peak growing seasons. Simulations with increased GPP during peak growing seasons increased soil respiration, while neither improved plant phenology nor increased temperature sensitivity affected the simulation of soil respiration during non-peak growing seasons. One potential reason for the overestimation during non-peak growing seasons may be that the current model structure is substrate-limited, while microbial dormancy under stress may cause the system to become decomposer-limited. Further studies with more microbial data are required to provide an adequate representation of soil respiration and to understand the underlying reasons for inaccurate model simulations.

  20. THE FEATURES OF LASER EMISSION ENERGY DISTRIBUTION AT MATHEMATIC MODELING OF WORKING PROCESS

    Directory of Open Access Journals (Sweden)

    A. M. Avsiyevich

    2013-01-01

    Full Text Available The spatial laser-emission energy distribution of different continuous-operation settings depends on many factors, first of all on the setting's design. For a more accurate description of the intensity distribution of multimode laser emission, an experimental-theoretical model is proposed, based on representing the experimentally measured laser-emission distribution, with a given accuracy rating, as a superposition of basis functions. This model yields an approximation error of only 2.2 %, compared with 24.6 % and 61 % for uniform and Gaussian approximations, respectively. Use of the proposed model takes the interaction between the laser emission and the working surface into account more accurately and increases the accuracy of temperature-field calculations in the mathematical modelling of laser treatment processes. A method for studying the experimental laser-emission energy distribution for a given source is shown, together with the mathematical apparatus for calculating the intensity parameters of the distribution as a function of radial distance in the surface heating zone.

  1. Radial transport processes as a precursor to particle deposition in drinking water distribution systems.

    Science.gov (United States)

    van Thienen, P; Vreeburg, J H G; Blokker, E J M

    2011-02-01

    Various particle transport mechanisms play a role in the build-up of discoloration potential in drinking water distribution networks. In order to enhance our understanding of and ability to predict this build-up, it is essential to recognize and understand their role. Gravitational settling with drag has primarily been considered in this context. However, since flow in water distribution pipes is nearly always in the turbulent regime, turbulent processes should be considered also. In addition to these, single particle effects and forces may affect radial particle transport. In this work, we present an application of a previously published turbulent particle deposition theory to conditions relevant for drinking water distribution systems. We predict quantitatively under which conditions turbophoresis, including the virtual mass effect, the Saffman lift force, and the Magnus force may contribute significantly to sediment transport in radial direction and compare these results to experimental observations. The contribution of turbophoresis is mostly limited to large particles (>50 μm) in transport mains, and not expected to play a major role in distribution mains. The Saffman lift force may enhance this process to some degree. The Magnus force is not expected to play any significant role in drinking water distribution systems. © 2010 Elsevier Ltd. All rights reserved.

  2. Poisson-process generalization for the trading waiting-time distribution in a double-auction mechanism

    Science.gov (United States)

    Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico

    2005-05-01

    In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model the empirical evidence. The empirical study is performed on the best-bid and best-ask data of 7 U.S. financial markets, for 30-stock time series. In particular, statistical properties of trading waiting times have been analyzed, and the quality of fits is evaluated by suitable statistical tests, i.e., comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered a generalization of a Poisson process. Moreover, the renewal process can approximate real data, and its implementation in the artificial stock market can reproduce the trading activity in a realistic way.
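    A mixture of exponentials as a generalization of the Poisson process can be sketched directly; the weights and rates below are illustrative choices, not the values fitted to the market data. A single-component mixture yields interarrival times with coefficient of variation near 1 (a Poisson process), while a two-component mixture is over-dispersed:

    ```python
    import random, math

    random.seed(42)

    def sample_mixture(weights, rates, n):
        """Waiting times from a mixture of exponentials: pick a
        component with probability w_i, then draw Exp(rate_i).
        A single component reduces to a Poisson process."""
        out = []
        for _ in range(n):
            r, acc = random.random(), 0.0
            for w, lam in zip(weights, rates):
                acc += w
                if r <= acc:
                    out.append(random.expovariate(lam))
                    break
        return out

    def cv(xs):
        """Coefficient of variation: std / mean (1 for exponential)."""
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / len(xs)
        return math.sqrt(v) / m

    n = 50000
    poisson = sample_mixture([1.0], [1.0], n)             # CV near 1
    mixture = sample_mixture([0.9, 0.1], [10.0, 0.2], n)  # CV well above 1
    print(cv(poisson), cv(mixture))
    ```

    The over-dispersion (CV > 1) of the mixture is the signature that distinguishes the empirical waiting-time distribution from a plain Poisson process.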

  3. Innovation as a distributed, collaborative process of knowledge generation: open, networked innovation

    NARCIS (Netherlands)

    Sloep, Peter

    2009-01-01

    Sloep, P. B. (2009). Innovation as a distributed, collaborative process of knowledge generation: open, networked innovation. In V. Hornung-Prähauser & M. Luckmann (Eds.), Kreativität und Innovationskompetenz im digitalen Netz - Creativity and Innovation Competencies in the Web, Sammlung von

  4. Electron pulsed beam induced processing of thin film surface by Nb3Ge deposited into a stainless steel tape

    International Nuclear Information System (INIS)

    Vavra, I.; Korenev, S.A.

    1988-01-01

    The surface of a superconductive thin film of Nb3Ge deposited onto a stainless steel tape was processed using the electron-beam technique. The electron beam used had the following parameters: beam current density from 400 to 1000 A/cm2; beam energy 100 keV; beam pulse length 300 ns. Theoretical analysis shows that the heating of the film surface is an adiabatic process. This corresponds to our experimental data and to micrographs showing surface remelting due to the electron-beam influence. After beam processing, the superconductive parameters of the film remain unchanged. Roentgenograms of the Nb3Ge film surface recrystallized under the electron beam have been analysed.

  5. Novel scaling of the multiplicity distributions in the sequential fragmentation process and in the percolation

    International Nuclear Information System (INIS)

    Botet, R.

    1996-01-01

    A novel scaling of the multiplicity distributions is found in the shattering phase of the sequential fragmentation process with inhibition. The same scaling law is shown to hold in the percolation process. (author)

  6. Distributed genetic process mining

    NARCIS (Netherlands)

    Bratosin, C.C.; Sidorova, N.; Aalst, van der W.M.P.

    2010-01-01

    Process mining aims at discovering process models from data logs in order to offer insight into the real use of information systems. Most of the existing process mining algorithms fail to discover complex constructs or have problems dealing with noise and infrequent behavior. The genetic process

  7. Do adaptive comanagement processes lead to adaptive comanagement outcomes? A multicase study of long-term outcomes associated with the national riparian service team's place-based riparian assistance

    Science.gov (United States)

    Jill A. Smedstad; Hannah. Gosnell

    2013-01-01

    Adaptive comanagement (ACM) is a novel approach to environmental governance that combines the dynamic learning features of adaptive management with the linking and network features of collaborative management. There is growing interest in the potential for ACM to resolve conflicts around natural resource management and contribute to greater social and ecological...

  8. Record critical current densities in IG processed bulk YBa{sub 2}Cu{sub 3}O{sub y} fabricated using ball-milled Y{sub 2}Ba{sub 1}Cu{sub 1}O{sub 5} phase

    Energy Technology Data Exchange (ETDEWEB)

    Muralidhar, Miryala; Kenta, Nakazato; Murakami, Masato [Department of Materials Science and Engineering, Superconducting Materials Laboratory, Shibaura Institute of Technology, Tokyo (Japan); Zeng, XianLin; Koblischka, Michael R. [Institute of Experimental Physics, Saarland University, Saarbruecken (Germany); Diko, Pavel [Institute of Experimental Physics, Material Physics Laboratory, Slovak Academy of Sciences, Kosice (Slovakia)

    2016-02-15

    The infiltration-growth (IG) technique enables the formation of uniform and controllable Y{sub 2}BaCuO{sub 5} (Y211) secondary-phase particles within the YBa{sub 2}Cu{sub 3}O{sub y} (Y123) matrix. Recent results clarified that the flux-pinning performance of the Y123 material was dramatically improved by optimizing the processing conditions during the IG process. In this paper, we adapted the IG technique and produced several samples with the addition of nanometer-sized Y211 secondary-phase particles, produced by a ball-milling technique. We found that the performance of the IG-processed Y123 material improved dramatically in the low-field region for a ball-milling time of 12 h as compared to samples without a ball-milling step. Magnetization measurements showed a sharp superconducting transition with an onset T{sub c} at around 92 K. The critical current density (J{sub c}) at 77 K and zero field was determined to be 224 022 A cm{sup -2}, which is higher than that of the sample without ball milling. Furthermore, microstructural observations exhibited a uniform microstructure with a homogeneous distribution of nanosized Y211 inclusions within the Y123 matrix. The improved performance of the Y123 material can be understood in terms of the fine distribution of the secondary phases. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  9. A Coordinated Initialization Process for the Distributed Space Exploration Simulation (DSES)

    Science.gov (United States)

    Phillips, Robert; Dexter, Dan; Hasan, David; Crues, Edwin Z.

    2007-01-01

    This document describes the federate initialization process that was developed at the NASA Johnson Space Center with the HIIA Transfer Vehicle Flight Controller Trainer (HTV FCT) simulations and refined in the Distributed Space Exploration Simulation (DSES). These simulations use the High Level Architecture (HLA) IEEE 1516 to provide the communication and coordination between the distributed parts of the simulation. The purpose of the paper is to describe a generic initialization sequence that can be used to create a federate that can:
    1. properly initialize all HLA objects, object instances, interactions, and time management;
    2. check for the presence of all federates;
    3. coordinate startup with other federates;
    4. robustly initialize and share initial object instance data with other federates.
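    The four-step sequence can be sketched generically, with thread barriers standing in for the HLA RTI synchronization-point services. The structure and names below are illustrative only and do not reproduce the IEEE 1516 API:

    ```python
    import threading

    EXPECTED_FEDERATES = 3
    presence = threading.Barrier(EXPECTED_FEDERATES)  # step 2: all present
    ready = threading.Barrier(EXPECTED_FEDERATES)     # step 3: coordinated start
    shared_initial_data = {}
    lock = threading.Lock()

    def federate(name, initial_value):
        # 1. initialize local objects, instances, and time management
        local_state = {"name": name, "value": initial_value}
        # 2. block until every expected federate has joined
        presence.wait()
        # 4. publish initial object-instance data for the others
        with lock:
            shared_initial_data[name] = local_state["value"]
        # 3. no federate proceeds until all initial data is shared
        ready.wait()
        # ... time-stepped execution would begin here ...

    threads = [threading.Thread(target=federate, args=(f"fed{i}", i * 10))
               for i in range(EXPECTED_FEDERATES)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(shared_initial_data)
    ```

    The two barriers capture the robustness property of the sequence: no federate can start stepping before every peer is present and has shared its initial data.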

  10. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which the application software is monitored by system software during the entire execution. The thesis includes definition ... and constraint evaluation is designed for the most interesting error types. These include: a) semantic errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design ... of error detection methods includes a high-level software specification; this has the purpose of illustrating that the design can be used in practice.
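    The monitoring approach can be sketched as a monitor that evaluates constraints on inter-task data (error type a) and on event timing (error type c). The structure below is an illustration of the idea under simple assumed constraints, not the thesis's actual design:

    ```python
    class Monitor:
        """Run-time error detection by constraint evaluation: the
        system-software monitor checks (a) semantic constraints on data
        passed between tasks and (c) timing constraints on emitted
        events. Illustrative structure only."""

        def __init__(self, value_range, max_interval_s):
            self.lo, self.hi = value_range
            self.max_interval = max_interval_s
            self.last_event = None
            self.errors = []

        def check_message(self, task, value):
            # (a) semantic constraint: value must lie in the agreed range
            if not (self.lo <= value <= self.hi):
                self.errors.append(f"semantic error in {task}: {value}")

        def check_event(self, task, timestamp):
            # (c) timing constraint: events must not be further apart
            # than the specified maximum interval
            if (self.last_event is not None
                    and timestamp - self.last_event > self.max_interval):
                self.errors.append(f"timing error in {task}")
            self.last_event = timestamp

    mon = Monitor(value_range=(0.0, 100.0), max_interval_s=0.5)
    mon.check_message("sensor_task", 42.0)    # within range: ok
    mon.check_message("sensor_task", 250.0)   # out of range: flagged
    mon.check_event("control_task", 0.0)
    mon.check_event("control_task", 1.0)      # gap > 0.5 s: flagged
    print(mon.errors)
    ```

    Error type b (errors in task execution) would require the monitor to observe task state transitions as well, which this sketch omits.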

  11. Proposed Expansion of Acme Landfill Operations.

    Science.gov (United States)

    1982-08-01

    hazardous waste ponds that use solar evaporation processes to dispose of liquid hazardous wastes have an indefinite life, the quantity of liquid that may ... determined at a later date. The use of solar evaporation ponds, for example, would preclude the use of spreading and compaction equipment used for ... pesticides in spray cans, residual chemical solvents in steel drums, herbicide residues on grass clippings, or organic wastes in disposable baby diapers, a

  12. Core power distribution measurement and data processing in Daya Bay Nuclear Power Station

    International Nuclear Information System (INIS)

    Zhang Hong

    1997-01-01

    For the first time in China, Daya Bay Nuclear Power Station applied the advanced technology of worldwide commercial pressurized reactors to the in-core detectors, the leading ex-core six-chamber instrumentation for precise axial power distribution, and the related data processing. Described in this article are the neutron flux measurement in Daya Bay Nuclear Power Station and the detailed data processing.

  13. Combining Different Tools for EEG Analysis to Study the Distributed Character of Language Processing

    Directory of Open Access Journals (Sweden)

    Armando Freitas da Rocha

    2015-01-01

    Full Text Available Recent studies on language processing indicate that language cognition is better understood if assumed to be supported by a distributed intelligent processing system enrolling neurons located all over the cortex, in contrast to reductionism, which proposes to localize cognitive functions in specific cortical structures. Here, brain activity was recorded using the electroencephalogram while volunteers listened to or read small texts and had to select pictures that translate the meaning of these texts. Several techniques for EEG analysis were used to show this distributed character of neuronal enrollment associated with the comprehension of oral and written descriptive texts. Low-Resolution Tomography identified the many different sets (si) of neurons activated in several distinct cortical areas by text understanding. Linear correlation was used to calculate the information H(ei) provided by each electrode of the 10/20 system about the identified si. Principal Component Analysis (PCA) of H(ei) was used to study the temporal and spatial activation of these sources si. This analysis evidenced 4 different patterns of H(ei) covariation that are generated by neurons located at different cortical locations. These results clearly show that the distributed character of language processing is evidenced by combining available EEG technologies.

  14. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    Science.gov (United States)

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  15. Microstructure and Mechanical Properties of Porous Mullite

    Science.gov (United States)

    Hsiung, Chwan-Hai Harold

    two doped intergranular glasses and their interfaces with mullite were quite similar. The reductions in strength and toughness were traced to differences in the ACM network structure and mass-distribution that are hypothesized to result from dopant-altered ACM nucleation and growth kinetics. X-ray computed tomography, a non-destructive 3-D imaging technique, played a key role in this work, enabling the measurement of needle diameters, quantification of the ACM structural network, and finite element analysis of ACM's mechanical response.

  16. The spatial distribution of microfabric around gravel grains: indicator of till formation processes

    Science.gov (United States)

    KalväNs, Andis; Saks, Tomas

    2010-05-01

    Till micromorphology studies in thin sections are an established tool in the field of glacial geology. Often the thin sections are inspected only visually with the help of a mineralogical microscope, which can lead to subjective interpretation of the observed structures. A more objective method used in till micromorphology is the measurement of apparent microfabric, usually seen as a preferred orientation of elongated sand grains. In these studies, only a small fraction of the elongated sand grains, often confined to a small area of the thin section, is usually measured. We present a method for automated measurement of almost all elongated sand grains across the full area of the thin section. Apparently elongated sand grains are measured using simple image-analysis tools; the data are processed in a way similar to regular till-fabric data and visualised as a grid of rose diagrams. The method allows statistical information to be drawn about the spatial variation of microfabric preferred orientation and fabric strength with a resolution as fine as 1 mm. Late Weichselian tills from several sites in western Latvia were studied, and large variations in fabric strength and spatial distribution were observed in macroscopically similar till units. The observed types of microfabric spatial distribution include a strong, monomodal and uniform distribution; a weak distribution that is highly variable over small distances; a consistently bimodal distribution; and a domain-like pattern of preferred sand-grain orientation. We suggest that the method can be readily used to identify the basic deformation and sedimentation processes active during the final stages of till formation. It is understood that the microfabric orientation will be significantly affected by nearby large particles. Till is a highly heterogeneous sediment, and the source of microfabric perturbations observed in a thin section might lie outside the section plane.
    Therefore we suggest that microfabric distribution around visible sources of perturbation - gravel grains cut

  17. In situ observation of electron-beam-induced dewetting of CdSe thin film embedded in SiO2

    DEFF Research Database (Denmark)

    Fabrim, Zacarias Eduardo; Kjelstrup-Hansen, Jakob; Fichtner, Paulo F. P.

    In this work we show the dewetting process of CdSe thin films induced by electron beam irradiation. A SiO2/CdSe/SiO2 multilayer heterostructure was made by a magnetron sputtering process. A plan-view (PV) sample was irradiated with 200 kV electrons in the TEM at two current densities, 0.33 A/cm2 and 1.0 A/cm2, and at 80 kV with 0.37 A/cm2. The dewetting of the CdSe film is inferred from a series of micrographs taken during the irradiation. The microstructural changes were analyzed under the assumption of being induced by ballistic collision effects in the absence of sample heating.

  18. Processing-Efficient Distributed Adaptive RLS Filtering for Computationally Constrained Platforms

    Directory of Open Access Journals (Sweden)

    Noor M. Khan

    2017-01-01

    Full Text Available In this paper, a novel processing-efficient architecture of a group of inexpensive and computationally limited small platforms is proposed for a parallel distributed adaptive signal processing (PDASP) operation. The proposed architecture runs computationally expensive procedures, such as the complex adaptive recursive least squares (RLS) algorithm, cooperatively. The proposed PDASP architecture operates properly even if perfect time alignment among the participating platforms is not available. An RLS algorithm with the application of MIMO channel estimation is deployed on the proposed architecture. The complexity and processing time of the PDASP scheme with the MIMO RLS algorithm are compared with those of the sequentially operated MIMO RLS algorithm and the linear Kalman filter. It is observed that the PDASP scheme exhibits much lower computational complexity than the sequential MIMO RLS algorithm as well as the Kalman filter. Moreover, for a low Doppler rate, the proposed architecture decreases the processing time by 95.83% and 82.29% compared to the sequentially operated Kalman filter and the MIMO RLS algorithm, respectively. Likewise, for a high Doppler rate, the proposed architecture decreases the processing time by 94.12% and 77.28% compared to the Kalman and RLS algorithms, respectively.
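
    The building block the abstract distributes, the recursive least squares update, can be sketched on a single node. This is a minimal scalar-channel sketch, not the paper's PDASP architecture or its MIMO formulation; the function name, channel taps, and noise level are illustrative assumptions.

    ```python
    import numpy as np

    def rls_identify(x, d, order=4, lam=0.99, delta=1e2):
        """Identify an FIR channel with recursive least squares.

        x: input samples, d: observed output, lam: forgetting factor,
        delta: initial scale of the inverse correlation matrix.
        Returns the estimated tap-weight vector.
        """
        w = np.zeros(order)            # tap weights
        P = delta * np.eye(order)      # inverse correlation matrix
        u = np.zeros(order)            # tapped delay line
        for n in range(len(x)):
            u = np.roll(u, 1)
            u[0] = x[n]
            k = P @ u / (lam + u @ P @ u)   # gain vector
            e = d[n] - w @ u                # a priori error
            w = w + k * e                   # weight update
            P = (P - np.outer(k, u @ P)) / lam
        return w

    # Recover a known 4-tap channel from noisy observations.
    rng = np.random.default_rng(0)
    h = np.array([0.8, -0.4, 0.2, 0.1])
    x = rng.standard_normal(2000)
    d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
    w = rls_identify(x, d)
    print(np.round(w, 2))
    ```

    In the PDASP setting, the expensive steps of this loop (the matrix-vector products updating `P`) would be the part farmed out across the cooperating platforms.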

  19. GLN standard as a facilitator of physical location identification within process of distribution

    Directory of Open Access Journals (Sweden)

    Davor Dujak

    2017-09-01

    Full Text Available Background: Distribution, from the business point of view, is a set of decisions and actions that provide the right products at the right time and place, in line with customer expectations. It is a process that generates significant cost but, when implemented effectively, significantly improves the perception of the company. The Institute of Logistics and Warehousing (ILiM), based on research results related to the optimization of distribution networks and on consulting projects for companies, points to the high importance of correctly describing physical locations within supply chains in order to make transport processes more effective. Individual companies work on their own geocoding of warehouse locations and of the locations of their business partners (suppliers, customers), but the lack of standardization in this area causes delays related to deliveries failing to reach the right destination. Furthermore, cooperating companies have no precise indication of the operating conditions of each location, e.g. the time windows of a plant, the logistic units accepted at a site, the supported modes of transport, etc. The lack of this information generates additional costs associated with repeated operations and the opportunity costs of goods not arriving on time. The solution to this problem appears to be the wide-scale implementation of the GS1 Global Location Number (GLN) standard, which, thanks to a broad base of information, will assist distribution processes. Material and methods: The results of a survey conducted among Polish companies in the second half of 2016 indicate an unsatisfactory execution of transport processes, resulting from incorrect or inaccurate descriptions of locations and, thus, a significant number of errors in deliveries. Accordingly, the authors studied the literature and examined case studies indicating the possibility of using the GLN standard to identify the physical location and to show the

  20. Comparison of the depth distribution processes for 137Cs and 210Pbex in cultivated soils

    International Nuclear Information System (INIS)

    Zhang Yunqi; Zhang Xinbao; Long Yi; He Xiubin; Yu Xingxiu

    2012-01-01

    This paper focuses on the different processes of 137 Cs and 210 Pb ex depth distribution in cultivated soils. In view of their different fallout deposition processes, and considering that radionuclides diffuse from the plough layer to the plough pan layer due to the concentration gradient between the two layers, the 137 Cs and 210 Pb ex depth distribution processes were theoretically derived. Additionally, the theoretical derivation was verified against the measured 137 Cs and 210 Pb ex values in a soil core collected from a wheat field in Fujianzhuang, Shanxi Province, China, and the variation of 137 Cs and 210 Pb ex concentrations with depth in the soils of the wheat field was explained rationally. The 137 Cs depth distribution in cultivated soils will vary continually with time due to continual 137 Cs decay and diffusion, as it is an artificial radionuclide without sustained fallout input since the 1960s. In contrast, the 210 Pb ex depth distribution in cultivated soils will reach a steady state because of the sustained deposition of naturally occurring 210 Pb ex fallout. It can be concluded that the differences between the theoretical and the measured values, especially for 210 Pb ex , might be associated with the history of plough depth variation or LUCC. (authors)
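
    The contrast the abstract draws — a decaying pulse for 137Cs versus a steady state for 210Pbex — can be caricatured with a toy two-box model: fallout enters the plough layer, a gradient-driven flux moves activity into the plough pan, and both boxes decay. This is an illustrative sketch only; the exchange rate, input magnitudes, and one-year Euler steps are arbitrary assumptions, not the paper's derivation. The half-lives (30.2 y for 137Cs, 22.3 y for 210Pb) are standard values.

    ```python
    import math

    def two_layer(inputs, half_life, k=0.02):
        """Toy two-box model of radionuclide depth distribution.

        inputs: annual fallout entering the plough layer (top box),
        k: gradient-driven exchange coefficient toward the plough pan.
        Both boxes decay radioactively each year.
        """
        decay = math.log(2) / half_life
        top = pan = 0.0
        for inp in inputs:
            flux = k * (top - pan)          # concentration-gradient transfer
            top += inp - flux - decay * top
            pan += flux - decay * pan
        return top, pan

    years = 60
    # 137Cs: a single fallout pulse (1960s), then decay and diffusion only.
    cs_top, cs_pan = two_layer([100.0] + [0.0] * (years - 1), 30.2)
    # 210Pbex: sustained annual fallout input.
    pb_top, pb_pan = two_layer([3.0] * years, 22.3)
    print(cs_top, cs_pan, pb_top, pb_pan)
    ```

    With sustained input the 210Pbex profile settles toward a fixed top/pan ratio, while the 137Cs profile keeps evolving as the pulse decays, which is the qualitative point of the paper.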

  1. Identification and verification of critical performance dimensions. Phase 1 of the systematic process redesign of drug distribution.

    Science.gov (United States)

    Colen, Hadewig B; Neef, Cees; Schuring, Roel W

    2003-06-01

    Worldwide patient safety has become a major social policy problem for healthcare organisations. As in other organisations, the patients in our hospital also suffer from an inadequate distribution process, as becomes clear from incident reports involving medication errors. Medisch Spectrum Twente is a top primary-care, clinical, teaching hospital. The hospital pharmacy takes care of 1070 internal beds and 1120 beds in an affiliated psychiatric hospital and nursing homes. At the beginning of 1999, our pharmacy group started a large interdisciplinary research project to develop a safe, effective and efficient drug distribution system by using systematic process redesign. The process redesign includes both organisational and technological components. This article describes the identification and verification of critical performance dimensions for the design of drug distribution processes in hospitals (phase 1 of the systematic process redesign of drug distribution). Based on reported errors and related causes, we suggested six generic performance domains. To assess the role of the performance dimensions, we used three approaches: flowcharts, interviews with stakeholders and review of the existing performance using time studies and medication error studies. We were able to set targets for costs, quality of information, responsiveness, employee satisfaction, and degree of innovation. We still have to establish which drug distribution system represents the best and most cost-effective way of preventing medication errors. We intend to develop an evaluation model, using the critical performance dimensions as a starting point. This model can be used as a simulation template to compare different drug distribution concepts in order to define the differences in quality and cost-effectiveness.

  2. The Particle Distribution in Liquid Metal with Ceramic Particles Mould Filling Process

    Science.gov (United States)

    Dong, Qi; Xing, Shu-ming

    2017-09-01

    Adding ceramic particles to the plate hammer is an effective method to increase the wear resistance of the hammer. A liquid-phase, flow-mixing liquid forging route is used to prepare ZTA ceramic particle reinforced high-chromium cast iron hammers. For this system, CFD simulation is used to analyse the particle distribution during the flow-mixing and mould-filling process. Taking a hammer with a 30% volume fraction of ZTA ceramic particles in high-chromium cast iron as an example, the particle distribution before solidification is controlled, and reasonably predicted, by adjusting the filling speed and the viscosity of the liquid metal.

  3. Gradient-based reliability maps for ACM-based segmentation of hippocampus.

    Science.gov (United States)

    Zarpalas, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-04-01

    Automatic segmentation of deep brain structures, such as the hippocampus (HC), in MR images has attracted considerable scientific attention due to the widespread use of MRI and to the principal role of some structures in various mental disorders. In the literature, there is a substantial amount of work relying on deformable models incorporating prior knowledge about structures' anatomy and shape information. However, shape priors capture global shape characteristics and thus fail to model boundaries of varying properties; HC boundaries present rich, poor, and missing gradient regions. On top of that, shape prior knowledge is blended with image information in the evolution process, through global weighting of the two terms, again neglecting the spatially varying boundary properties, causing segmentation faults. An innovative method is hereby presented that aims to achieve highly accurate HC segmentation in MR images, based on the modeling of boundary properties at each anatomical location and the inclusion of appropriate image information for each of those, within an active contour model framework. Hence, blending of image information and prior knowledge is based on a local weighting map, which mixes gradient information, regional and whole brain statistical information with a multi-atlas-based spatial distribution map of the structure's labels. Experimental results on three different datasets demonstrate the efficacy and accuracy of the proposed method.

  4. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Full Text Available Process capability indices are very important process quality assessment tools in automotive industries. The common process capability indices (PCIs) Cp, Cpk and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed under the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture the true capability in non-normal situations. In this paper, five methods are reviewed and a capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
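
    As a baseline for the comparison the abstract describes, the conventional normal-theory indices can be computed as below; these are exactly the formulas that mislead when the distribution is skewed. The specification limits and simulated data are hypothetical, not the paper's wafer-resistivity data.

    ```python
    import numpy as np

    def cp_cpk(samples, lsl, usl):
        """Classical (normal-theory) process capability indices.

        Cp  = (USL - LSL) / 6*sigma      -- potential capability
        Cpk = min(USL - mu, mu - LSL) / 3*sigma -- accounts for centering
        Valid only when the output is approximately normal.
        """
        mu = samples.mean()
        sigma = samples.std(ddof=1)
        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)
        return cp, cpk

    rng = np.random.default_rng(1)
    data = rng.normal(10.0, 0.5, 5000)        # in-control, normal process
    cp, cpk = cp_cpk(data, lsl=8.0, usl=12.0)
    print(round(cp, 2), round(cpk, 2))        # both near 1.33 for a centered process
    ```

    The surrogate methods the paper compares (Clements, Burr percentile, Box-Cox) replace mu ± 3*sigma with distribution-appropriate percentiles or transform the data to normality before applying these same formulas.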

  5. Secured Session-key Distribution using control Vector Encryption / Decryption Process

    International Nuclear Information System (INIS)

    Ismail Jabiullah, M.; Abdullah Al-Shamim; Khaleqdad Khan, ANM; Lutfar Rahman, M.

    2006-01-01

    Frequent key changes are very much desirable for secret communications and are thus in high demand. A session-key distribution technique has been designed and implemented in the C programming language, in which the session-key used to encrypt communication between the end-users is valid for the duration of a logical connection. Each session-key is obtained from the key distribution center (KDC) over the same networking facilities used for end-user communication. The control vector is cryptographically coupled with the session-key at the time of key generation in the KDC. For this, a hash function, the master key and the session-key are used to produce the encrypted session-key, which is then transferred. This process can be widely applied to all sorts of electronic transactions, online or offline, commercial and academic. (authors)
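
    The coupling the abstract describes — hashing the control vector and combining it with the master key before encrypting the session-key — can be sketched as follows. This is an illustrative stand-in, not the authors' C implementation: the XOR "cipher", the function names, and the control-vector string are all assumptions made for the sketch.

    ```python
    import hashlib
    import secrets

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def wrap_session_key(master_key, control_vector, session_key):
        """Couple a control vector to a session key (toy sketch).

        The hashed control vector is XORed with the master key to derive
        a key-encrypting key; 'encryption' here is a one-time XOR, used
        only to keep the sketch dependency-free.
        """
        kek = xor_bytes(master_key, hashlib.sha256(control_vector).digest())
        return xor_bytes(session_key, kek)

    def unwrap_session_key(master_key, control_vector, wrapped):
        kek = xor_bytes(master_key, hashlib.sha256(control_vector).digest())
        return xor_bytes(wrapped, kek)

    master = secrets.token_bytes(32)
    cv = b"usage=encrypt;export=no"          # hypothetical control vector
    sk = secrets.token_bytes(32)
    wrapped = wrap_session_key(master, cv, sk)

    # Recovery requires the exact control vector used at generation time:
    assert unwrap_session_key(master, cv, wrapped) == sk
    assert unwrap_session_key(master, b"usage=any", wrapped) != sk
    ```

    The point of the construction is the second assertion: a party that tampers with the control vector derives a different key-encrypting key and cannot recover the session-key.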

  6. Database Publication Practices

    DEFF Research Database (Denmark)

    Bernstein, P.A.; DeWitt, D.; Heuer, A.

    2005-01-01

    There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.

  7. Parallel Distributed Processing at 25: Further Explorations in the Microstructure of Cognition

    Science.gov (United States)

    Rogers, Timothy T.; McClelland, James L.

    2014-01-01

    This paper introduces a special issue of "Cognitive Science" initiated on the 25th anniversary of the publication of "Parallel Distributed Processing" (PDP), a two-volume work that introduced the use of neural network models as vehicles for understanding cognition. The collection surveys the core commitments of the PDP…

  8. Time Optimal Run-time Evaluation of Distributed Timing Constraints in Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.; Kristensen, C.H.

    1993-01-01

    This paper considers run-time evaluation of an important class of constraints: timing constraints. These appear extensively in process control systems. Timing constraints are considered in distributed systems, i.e. systems consisting of multiple autonomous nodes.

  9. Strange quark distribution and parton charge symmetry violation in a semi-inclusive process

    International Nuclear Information System (INIS)

    Kitagawa, Hisashi; Sakemi, Yasuhiro

    2000-01-01

    It is possible to observe a semi-inclusive reaction with tagged charged kaons using the RICH detector at DESY-HERA. Using the semi-inclusive process we study two kinds of parton properties in the nucleon. We study relations between cross sections and strange quark distributions, which are expected to be measured more precisely in such a process than in the process in which pions are tagged. We also investigate charge symmetry violation (CSV) in the nucleon, which appears in the region x ≤ 0.1. (author)

  10. Distributed Kernelized Locality-Sensitive Hashing for Faster Image Based Navigation

    Science.gov (United States)

    2015-03-26

    using MapReduce was described by Moise et al. in their paper, "Indexing and searching 100 million images with map-reduce" [9]. They modified the... D. Moise, D. Shestakov, G. Gudmundsson, and L. Amsaleg, "Indexing and Searching 100M Images with Map-Reduce," in Proceedings of the 3rd ACM con
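
    The record above concerns distributed kernelized locality-sensitive hashing for image search. A minimal, non-kernelized LSH sketch using random hyperplanes (a standard scheme for cosine similarity) illustrates the core idea that nearby vectors collide on most hash bits; the dimensions and toy vectors below are illustrative assumptions, unrelated to the image descriptors in the cited work.

    ```python
    import numpy as np

    def lsh_signature(vectors, planes):
        """Random-hyperplane LSH: each bit is the sign of the projection
        of a vector onto one shared random hyperplane normal."""
        return (vectors @ planes.T > 0).astype(np.uint8)

    rng = np.random.default_rng(42)
    dim, bits = 64, 16
    planes = rng.standard_normal((bits, dim))     # shared random hyperplanes

    base = rng.standard_normal(dim)
    near = base + 0.05 * rng.standard_normal(dim) # slightly perturbed copy
    far = rng.standard_normal(dim)                # unrelated vector

    sig = lsh_signature(np.stack([base, near, far]), planes)
    ham = lambda a, b: int(np.sum(a != b))
    print(ham(sig[0], sig[1]), ham(sig[0], sig[2]))
    ```

    In a distributed setting like the one described, signatures act as bucket keys, so candidate neighbors can be sharded and looked up across nodes without comparing full descriptors.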

  11. Active contour model Crisp: new technique for segmentation of the lungs in CT images

    International Nuclear Information System (INIS)

    Reboucas Filho, Pedro Pedrosa; Cortez, Paulo Cesar; Holanda, Marcelo Alcantara

    2011-01-01

    This paper proposes a new active contour model (ACM), called ACM Crisp, and evaluates the segmentation of lungs in computed tomography (CT) images. An ACM draws a curve around or within the object of interest. This curve changes its shape, when some energy acts on it and moves towards the edges of the object. This process is performed by successive iterations of minimization of a given energy, associated with the curve. The ACMs described in the literature have limitations when used for segmentations of CT lung images. The ACM Crisp model overcomes these limitations, since it proposes automatic initiation and new external energy based on rules and radiological pulmonary densities. The paper compares other ACMs with the proposed method, which is shown to be superior. In order to validate the algorithm a medical expert in the field of Pulmonology of the Walter Cantidio University Hospital from the Federal University of Ceara carried out a qualitative analysis. In these analyses 100 CT lung images were used. The segmentation efficiency was evaluated into 5 categories with the following results for the ACM Crisp: 73% excellent, without errors, 20% acceptable, with small errors, and 7% reasonable, with large errors, 0% poor, covering only a small part of the lung, and 0% very bad, making a totally incorrect segmentation. In conclusion the ACM Crisp is considered a useful algorithm to segment CT lung images, and with potential to integrate medical diagnosis systems. (author)

  12. Acquiring and processing verb argument structure: distributional learning in a miniature language.

    Science.gov (United States)

    Wonnacott, Elizabeth; Newport, Elissa L; Tanenhaus, Michael K

    2008-05-01

    Adult knowledge of a language involves correctly balancing lexically-based and more language-general patterns. For example, verb argument structures may sometimes readily generalize to new verbs, yet with particular verbs may resist generalization. From the perspective of acquisition, this creates significant learnability problems, with some researchers claiming a crucial role for verb semantics in the determination of when generalization may and may not occur. Similarly, there has been debate regarding how verb-specific and more generalized constraints interact in sentence processing and on the role of semantics in this process. The current work explores these issues using artificial language learning. In three experiments using languages without semantic cues to verb distribution, we demonstrate that learners can acquire both verb-specific and verb-general patterns, based on distributional information in the linguistic input regarding each of the verbs as well as across the language as a whole. As with natural languages, these factors are shown to affect production, judgments and real-time processing. We demonstrate that learners apply a rational procedure in determining their usage of these different input statistics and conclude by suggesting that a Bayesian perspective on statistical learning may be an appropriate framework for capturing our findings.

  13. Neighbor-dependent Ramachandran probability distributions of amino acids developed from a hierarchical Dirichlet process model.

    Directory of Open Access Journals (Sweden)

    Daniel Ting

    2010-04-01

    Full Text Available Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: (1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); (2) filtering of suspect conformations and outliers using B-factors or other features; (3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); (4) the method used for determining probability densities, ranging from simple histograms to modern nonparametric density estimation; and (5) whether they include nearest-neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp.
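
    The paper's hierarchical Dirichlet process machinery is beyond a snippet, but the underlying requirement — density estimation on angles that wrap at ±180° — can be illustrated with a much cruder method: a von Mises kernel density estimate. The function name, concentration values, and toy "phi" sample below are assumptions for illustration only.

    ```python
    import numpy as np

    def vonmises_kde(samples, grid, kappa=25.0):
        """Periodic KDE on angles (radians) using von Mises kernels.

        Unlike a Gaussian KDE, the density mass wraps correctly at +/- pi,
        which matters for Ramachandran (phi, psi) dihedral angles.
        """
        diffs = grid[:, None] - samples[None, :]
        dens = np.exp(kappa * np.cos(diffs)).sum(axis=1)
        return dens / (2 * np.pi * np.i0(kappa) * len(samples))

    rng = np.random.default_rng(0)
    # Toy phi sample clustered near -60 degrees (alpha-helical region).
    phi = rng.vonmises(np.deg2rad(-60), 8.0, size=2000)
    grid = np.linspace(-np.pi, np.pi, 360)
    dens = vonmises_kde(phi, grid)
    peak = np.rad2deg(grid[np.argmax(dens)])
    print(round(peak))
    ```

    The hierarchical Dirichlet process priors in the paper go further by sharing statistical strength across neighbor-residue contexts, something a per-context KDE cannot do with sparse data.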

  14. s-process studies in the light of new experimental cross sections: Distribution of neutron fluences and r-process residuals

    International Nuclear Information System (INIS)

    Kaeppeler, F.; Beer, H.; Wisshak, K.; Clayton, D.D.; Macklin, R.L.; Ward, R.A.

    1981-08-01

    A best set of neutron-capture cross sections has been evaluated for the most important s-process isotopes. With this data base, s-process studies have been carried out using the traditional model which assumes a steady neutron flux and an exponential distribution of neutron irradiations. The calculated sigmaN-curve is in excellent agreement with the empirical sigmaN-values of pure s-process nuclei. Simultaneously, good agreement is found between the difference of solar and s-process abundances and the abundances of pure r-process nuclei. We also discuss the abundance pattern of the iron group elements where our s-process results complement the abundances obtained from explosive nuclear burning. The results obtained from the traditional s-process model such as seed abundances, mean neutron irradiations, or neutron densities are compared to recent stellar model calculations which assume the He-burning shells of red giant stars as the site for the s-process. (orig.) [de

  15. Charged particle multiplicity distributions in e+e--annihilation processes in the LEP experiments

    International Nuclear Information System (INIS)

    Shlyapnikov, P.V.

    1992-01-01

    Results of studies of charged particle multiplicity distributions in the process of e + e - -annihilation into hadrons, obtained in experiments at the LEP accelerator at CERN, are reviewed. Universality in the energy dependence of the average charged particle multiplicity in e + e - and p ± p collisions, evidence for KNO scaling in e + e - data, structure in the multiplicity distribution and its relation to the jet structure of events, average particle multiplicities for quark and gluon jets, the 'clan' picture and other topics are discussed. 73 refs.; 20 figs.; 3 tabs

  16. Effect of process parameters on temperature distribution in twin-electrode TIG coupling arc

    Science.gov (United States)

    Zhang, Guangjun; Xiong, Jun; Gao, Hongming; Wu, Lin

    2012-10-01

    The twin-electrode TIG coupling arc is a new type of welding heat source, which is generated in a single welding torch that has two tungsten electrodes insulated from each other. This paper aims at determining the distribution of temperature for the coupling arc using the Fowler-Milne method under the assumption of local thermodynamic equilibrium. The influences of welding current, arc length, and distance between both electrode tips on temperature distribution of the coupling arc were analyzed. Based on the results, a better understanding of the twin-electrode TIG welding process was obtained.

  17. Development of laboratory and process sensors to monitor particle size distribution of industrial slurries

    Energy Technology Data Exchange (ETDEWEB)

    Pendse, H.P.

    1992-10-01

    In this paper we present a novel measurement technique for monitoring particle size distributions of industrial colloidal slurries based on ultrasonic spectroscopy and mathematical deconvolution. An on-line sensor prototype has been developed and tested extensively in laboratory and production settings using mineral pigment slurries. Evaluation to date shows that the sensor is capable of providing particle size distributions, without any assumptions regarding their functional form, over diameters ranging from 0.1 to 100 micrometers in slurries with particle concentrations of 10 to 50 volume percent. The newly developed on-line sensor allows one to obtain particle size distributions of commonly encountered inorganic pigment slurries under industrial processing conditions without dilution.

  18. The study of conjugation of anti-CD20 monoclonal antibody for labeling with metallic or lanthanides radionuclides

    International Nuclear Information System (INIS)

    Akanji, Akinkunmi Ganiyu

    2012-01-01

    Lymphomas are malignancies, or cancers, that start from the malign transformation of a lymphocyte in the lymphatic system. Generally, lymphomas start in the lymph nodes or in agglomerations of lymphatic tissue in organs like the stomach and intestines; in some cases they can involve the bone marrow and the blood, and they can also disseminate to other organs. Lymphomas are divided into two major categories: Hodgkin lymphoma and non-Hodgkin lymphoma (NHL). Patients with NHL are generally treated with radiotherapy alone or combined with immunotherapy using the monoclonal antibody rituximab (MabThera®). Currently, monoclonal antibodies (Acm) conjugated with bifunctional chelating agents and radiolabeled with metallic or lanthanide radionuclides are a treatment reality for patients with NHL through the principle of radioimmunotherapy (RIT). This study focused on the conditions of conjugation of the Acm rituximab (MabThera®) with the bifunctional chelating agents DOTA and DTPA. Various parameters were studied: the method of Acm purification, the conditions of Acm conjugation, the method for determining the number of chelating agents coupled to the Acm, the method for purification of the conjugated Acm, the conditions of labeling of the conjugated antibody with lutetium-177, the method of purification of the radiolabeled immunoconjugate, the method of radiochemical purity (RP) determination, specific in vitro binding to Raji cells (human Burkitt lymphoma) and biological distribution performed in normal Balb/c mice. The three methodologies employed in pre-purification of the Acm (dialysis, size-exclusion chromatography and diafiltration) were shown to be efficient; they provided sample recovery exceeding 90%. However, the diafiltration methodology presents minimal sample loss and gives final recovery of the sample in microliters, thereby facilitating sample use in subsequent experiments. The number of chelators attached to the Acm molecule was proportional to the molar ratio studied. When we evaluated the influence of different

  19. The study of conjugation of anti-CD20 monoclonal antibody for labeling with metallic or lanthanides radionuclides; Estudo de conjugacao do anticorpo anti-CD20 para marcacao com radionuclideos metalicos ou lantanideos

    Energy Technology Data Exchange (ETDEWEB)

    Akanji, Akinkunmi Ganiyu

    2012-07-01

    Lymphomas are malignancies, or cancers, that start from the malign transformation of a lymphocyte in the lymphatic system. Generally, lymphomas start in the lymph nodes or in agglomerations of lymphatic tissue in organs like the stomach and intestines; in some cases they can involve the bone marrow and the blood, and they can also disseminate to other organs. Lymphomas are divided into two major categories: Hodgkin lymphoma and non-Hodgkin lymphoma (NHL). Patients with NHL are generally treated with radiotherapy alone or combined with immunotherapy using the monoclonal antibody rituximab (MabThera®). Currently, monoclonal antibodies (Acm) conjugated with bifunctional chelating agents and radiolabeled with metallic or lanthanide radionuclides are a treatment reality for patients with NHL through the principle of radioimmunotherapy (RIT). This study focused on the conditions of conjugation of the Acm rituximab (MabThera®) with the bifunctional chelating agents DOTA and DTPA. Various parameters were studied: the method of Acm purification, the conditions of Acm conjugation, the method for determining the number of chelating agents coupled to the Acm, the method for purification of the conjugated Acm, the conditions of labeling of the conjugated antibody with lutetium-177, the method of purification of the radiolabeled immunoconjugate, the method of radiochemical purity (RP) determination, specific in vitro binding to Raji cells (human Burkitt lymphoma) and biological distribution performed in normal Balb/c mice. The three methodologies employed in pre-purification of the Acm (dialysis, size-exclusion chromatography and diafiltration) were shown to be efficient; they provided sample recovery exceeding 90%. However, the diafiltration methodology presents minimal sample loss and gives final recovery of the sample in microliters, thereby facilitating sample use in subsequent experiments. The number of chelators attached to the Acm molecule was proportional to the molar ratio studied. When we evaluated

  20. A convergent model for distributed processing of Big Sensor Data in urban engineering networks

    Science.gov (United States)

    Parygin, D. S.; Finogeev, A. G.; Kamaev, V. A.; Finogeev, A. A.; Gnedkova, E. P.; Tyukov, A. P.

    2017-01-01

    The problems of developing and researching a convergent model of grid, cloud, fog and mobile computing for analytical Big Sensor Data processing are reviewed. The model is meant to create monitoring systems for spatially distributed objects of urban engineering networks and processes. The proposed approach is a convergent model for organizing distributed data processing. The fog computing model is used for the processing and aggregation of sensor data at network nodes and/or industrial controllers; program agents are loaded to perform computing tasks for primary processing and data aggregation. The grid and cloud computing models are used for mining and accumulating integral indicators. The computing cluster has a three-tier architecture, which includes the main server at the first level, a cluster of SCADA system servers at the second level, and a set of GPU video cards supporting the Compute Unified Device Architecture at the third level. The mobile computing model is applied to visualize the results of the intellectual analysis with elements of augmented reality and geo-information technologies. The integrated indicators are transferred to the data center for accumulation in a multidimensional storage for the purposes of data mining and knowledge discovery.

  1. Participation of the arcRACME protein in self-activation of the arc operon located in the arginine catabolism mobile element in pandemic clone USA300.

    Science.gov (United States)

    Rozo, Zayda Lorena Corredor; Márquez-Ortiz, Ricaurte Alejandro; Castro, Betsy Esperanza; Gómez, Natasha Vanegas; Escobar-Pérez, Javier

    2017-07-01

    Staphylococcus aureus pandemic clone USA300 has, in addition to its constitutive arginine catabolism (arc) gene cluster, an arginine catabolism mobile element (ACME) carrying another such cluster, which gives this clone advantages in colonisation and infection. Gene arcR, which encodes an oxygen-sensitive transcriptional regulator, is inside ACME and downstream of the constitutive arc gene cluster, and this situation may have an impact on its activation. Different relative expression behaviours are proven here for arcRACME and the arcACME operon compared to the constitutive ones. We also show that the artificially expressed recombinant ArcRACME protein binds to the promoter region of the arcACME operon; this mechanism can be related to a positive feedback model, which may be responsible for increased anaerobic survival of the USA300 clone during infection-related processes.

  2. Standardization of a method to study the distribution of Americium in purex process

    International Nuclear Information System (INIS)

    Dapolikar, T.T.; Pant, D.K.; Kapur, H.N.; Kumar, Rajendra; Dubey, K.

    2017-01-01

    In the present work the distribution of americium in the PUREX process is investigated in various process streams. For this purpose a method has been standardized for the determination of Am in process samples. The method involves extraction of Am with associated actinides using 30% TRPO-NPH at 0.3 M HNO3, followed by selective stripping of Am from the organic phase into the aqueous phase at 6 M HNO3. The assay of the aqueous phase for Am content is carried out by alpha radiometry. The investigation has revealed that 100% of the Am follows the HLLW route. (author)

  3. Effect of current on the microstructure and performance of (Bi2Te3)0.2(Sb2Te3)0.8 thermoelectric material via field activated and pressure assisted sintering

    International Nuclear Information System (INIS)

    Chen Ruixue; Meng Qingsen; Fan Wenhao; Wang Zhong

    2011-01-01

    (Bi2Te3)0.2(Sb2Te3)0.8 thermoelectric material was sintered via a field activated and pressure assisted sintering (FAPAS) process. By applying different current densities (0, 60 and 320 A/cm2) during sintering, the effects of electric current on the microstructure and thermoelectric performance were investigated. It was demonstrated that the application of electric current during sintering could significantly improve the uniformity and density of the (Bi2Te3)0.2(Sb2Te3)0.8 samples. When the current density was raised to 320 A/cm2, a preferred orientation of the grains was observed. Moreover, positive effects of the applied electric current on the thermoelectric performance were also confirmed. Increases of 0.02 and 0.11 in the maximum figure of merit ZT could be achieved by applying currents of 60 and 320 A/cm2, respectively. (semiconductor materials)

  4. The process of developing distributed-efficacy and social practice in the context of ‘ending AIDS’

    Directory of Open Access Journals (Sweden)

    Christopher Burman

    2015-07-01

    Full Text Available Introduction: this article reflects on data that emanated from a programme evaluation and focuses on a concept we label 'distributed-efficacy'. We argue that the process of developing and sustaining 'distributed-efficacy' is complex and indeterminate, and thus difficult to manage or predict. We situate the discussion within the context of UNAIDS' recent strategy, Vision 95:95:95, to 'end AIDS' by 2030, which the South African National Department of Health is currently rolling out across the country. Method: a qualitative method was applied. It included a Value Network Analysis, the Most Significant Change technique and a thematic content analysis of factors associated with a 'competent community' model. During the analysis it was noticed that there were unexpected references to a shift in social relations. This prompted a re-analysis of the narrative findings using a second thematic content analysis that focused on factors associated with complexity science, the environmental sciences and shifts in social relations. Findings: the efficacy associated with new social practices relating to HIV risk-reduction was distributed amongst networks that included mother-son networks and participant-facilitator networks, and included a shift in social relations within these networks. Discussion: it is suggested that the emergence of new social practices requires the establishment of 'distributed-efficacy', which facilitates localised social sanctioning, sometimes including shifts in social relations; this process is a 'complex', dialectical interplay between 'agency' and 'structure'. Conclusion: the ambition of 'ending AIDS' by 2030 represents a compressed timeframe that will require the uptake of multiple new bio-social practices. This will involve many nonlinear, complex challenges, and the process of developing 'distributed-efficacy' could play a role in this process. Further research into the factors we

  5. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types, it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantic errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...

  6. Distributed control and monitoring of high-level trigger processes on the LHCb online farm

    CERN Document Server

    Vannerem, P; Jost, B; Neufeld, N

    2003-01-01

    The on-line data taking of the LHCb experiment at the future LHC collider will be controlled by a fully integrated and distributed Experiment Control System (ECS). The ECS will supervise both the detector operation (DCS) and the trigger and data acquisition (DAQ) activities of the experiment. These tasks require a large distributed information management system. The aim of this paper is to show how the control and monitoring of software processes such as trigger algorithms are integrated in the ECS of LHCb.

  7. Calculation of the spallation product distribution in the evaporation process

    International Nuclear Information System (INIS)

    Nishida, T.; Kanno, I.; Nakahara, Y.; Takada, H.

    1989-01-01

    Some investigations are performed for the calculational model of nuclear spallation reaction in the evaporation process. A new version of a spallation reaction simulation code NUCLEUS has been developed by incorporating the newly revised Uno & Yamada's mass formula and extending the counting region of produced nuclei. The differences between the new and original mass formulas are shown in the comparisons of mass excess values. The distributions of spallation products of a uranium target nucleus bombarded by energy (0.38 - 2.9 GeV) protons have been calculated with the new and original versions of NUCLEUS. In the fission component Uno & Yamada's mass formula reproduces the measured data obtained from thin foil experiments significantly better, especially in the neutron excess side, than the combination of the Cameron's mass formula and the mass table compiled by Wapstra, et al., in the original version of NUCLEUS. Discussions are also made on how the mass-yield distribution of products varies dependent on the level density parameter a characterizing the particle evaporation. 16 refs., 7 figs., 1 tab

  8. Calculation of the spallation product distribution in the evaporation process

    International Nuclear Information System (INIS)

    Nishida, T.; Kanno, I.; Nakahara, Y.; Takada, H.

    1989-01-01

    Some investigations are performed for the calculational model of nuclear spallation reaction in the evaporation process. A new version of a spallation reaction simulation code NUCLEUS has been developed by incorporating the newly revised Uno and Yamada's mass formula and extending the counting region of produced nuclei. The differences between the new and original mass formulas are shown in the comparisons of mass excess values. The distributions of spallation products of a uranium target nucleus bombarded by energy (0.38 - 2.9 GeV) protons have been calculated with the new and original versions of NUCLEUS. In the fission component Uno and Yamada's mass formula reproduces the measured data obtained from thin foil experiments significantly better, especially in the neutron excess side, than the combination of the Cameron's mass formula and the mass table compiled by Wapstra, et al., in the original version of NUCLEUS. Discussions are also made on how the mass-yield distribution of products varies dependent on the level density parameter α characterizing the particle evaporation. (author)

  9. Distributed Prognostic Health Management with Gaussian Process Regression

    Science.gov (United States)

    Saha, Sankalita; Saha, Bhaskar; Saxena, Abhinav; Goebel, Kai Frank

    2010-01-01

    Distributed prognostics architecture design is an enabling step for efficient implementation of health management systems. A major challenge encountered in such design is the formulation of optimal distributed prognostics algorithms. In this paper, we present a distributed Gaussian process regression (GPR)-based prognostics algorithm whose target platform is a wireless sensor network. In addition to the challenges encountered in a distributed implementation, a wireless network poses constraints on communication patterns, thereby making the problem more challenging. The prognostics application used to demonstrate our new algorithms is battery prognostics. In order to present trade-offs between different prognostic approaches, we present a comparison with a distributed implementation of particle-filter-based prognostics on the same battery data.
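As a minimal, self-contained illustration of the GPR building block referred to above (not the authors' distributed algorithm), the sketch below fits a zero-mean Gaussian process with an RBF kernel to a hypothetical battery capacity-fade trend and predicts an intermediate cycle; the kernel length scale and all data values are made-up assumptions:

```python
import math

def rbf(x1, x2, ls=50.0, var=1.0):
    """Squared-exponential (RBF) covariance between two inputs."""
    return var * math.exp(-0.5 * ((x1 - x2) / ls) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, xstar, noise=1e-6):
    """Posterior mean of a zero-mean GP at xstar, given training data (xs, ys)."""
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ys)                    # alpha = (K + noise*I)^-1 y
    return sum(rbf(x, xstar) * a for x, a in zip(xs, alpha))

# hypothetical capacity (Ah) measured every 40 charge/discharge cycles
cycles   = [0.0, 40.0, 80.0, 120.0, 160.0]
capacity = [2.00, 1.92, 1.83, 1.72, 1.60]

print(gp_predict(cycles, capacity, 100.0))  # interpolated capacity near cycle 100
```

In the paper's setting the interesting part is how the kernel computations are partitioned across sensor nodes; the per-node regression step is the same calculation shown here.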

  10. Estimating the transmission potential of supercritical processes based on the final size distribution of minor outbreaks.

    Science.gov (United States)

    Nishiura, Hiroshi; Yan, Ping; Sleeman, Candace K; Mode, Charles J

    2012-02-07

    Use of the final size distribution of minor outbreaks for the estimation of the reproduction numbers of supercritical epidemic processes has yet to be considered. We used a branching process model to derive the final size distribution of minor outbreaks, assuming a reproduction number above unity, and applied the method to final size data for pneumonic plague. Pneumonic plague is a rare disease with only one documented major epidemic in a spatially limited setting. Because the final size distribution of a minor outbreak needs to be normalized by the probability of extinction, we assume that the dispersion parameter (k) of the negative-binomial offspring distribution is known, and examine the sensitivity of the reproduction number to variation in dispersion. Assuming a geometric offspring distribution with k=1, the reproduction number was estimated at 1.16 (95% confidence interval: 0.97-1.38). When less dispersed with k=2, the maximum likelihood estimate of the reproduction number was 1.14. These estimates agreed with those published from transmission network analysis, indicating that the human-to-human transmission potential of the pneumonic plague is not very high. Given only minor outbreaks, transmission potential is not sufficiently assessed by directly counting the number of offspring. Since the absence of a major epidemic does not guarantee a subcritical process, the proposed method allows us to conservatively regard epidemic data from minor outbreaks as supercritical, and yield estimates of threshold values above unity. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
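The estimation idea above (condition the branching-process final-size distribution on extinction when R > 1, then maximize the likelihood) can be sketched as follows. The final-size probability uses the known closed form for a negative-binomial offspring distribution (Blumberg and Lloyd-Smith); the outbreak sizes below are illustrative, not the plague series from the paper:

```python
import math

def final_size_logpmf(j, R, k):
    """log P(final outbreak size = j) for a branching process with
    negative-binomial offspring (mean R, dispersion k)."""
    return (math.lgamma(k * j + j - 1) - math.lgamma(k * j) - math.lgamma(j + 1)
            + (j - 1) * math.log(R / k)
            - (k * j + j - 1) * math.log(1.0 + R / k))

def extinction_prob(R, k, iters=5000):
    """Smallest root of s = (1 + R(1-s)/k)^(-k), by fixed-point iteration."""
    s = 0.0
    for _ in range(iters):
        s = (1.0 + R * (1.0 - s) / k) ** (-k)
    return s

def loglik(R, k, sizes):
    """Log-likelihood of minor-outbreak sizes, conditioned on extinction."""
    q = extinction_prob(R, k)
    return sum(final_size_logpmf(j, R, k) - math.log(q) for j in sizes)

def mle_R(sizes, k=1.0):
    """Grid-search MLE of the reproduction number over the supercritical range."""
    grid = [1.01 + 0.01 * i for i in range(200)]   # R in [1.01, 3.00]
    return max(grid, key=lambda R: loglik(R, k, sizes))

# illustrative final sizes of minor outbreaks (NOT the data from the paper)
sizes = [1, 1, 2, 1, 3, 1, 5, 2, 1, 8]
print(mle_R(sizes, k=1.0))
```

For a geometric offspring distribution (k = 1) the extinction probability is simply 1/R, which gives a quick check on the fixed-point routine.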

  11. Spatial Patterns in Distribution of Kimberlites: Relationship to Tectonic Processes and Lithosphere Structure

    DEFF Research Database (Denmark)

    Chemia, Zurab; Artemieva, Irina; Thybo, Hans

    2014-01-01

    of kimberlite melts through the lithospheric mantle, which forms the major pipe. Stage 2 (second-order process) begins when the major pipe splits into daughter sub-pipes (tree-like pattern) at crustal depths. We apply cluster analysis to the spatial distribution of all known kimberlite fields with the goal...

  12. Testing Mediators of Reduced Drinking for Veterans in Alcohol Care Management.

    Science.gov (United States)

    Moskal, Dezarie; Maisto, Stephen A; Possemato, Kyle; Lynch, Kevin G; Oslin, David W

    2018-03-26

    Alcohol Care Management (ACM) is a manualized treatment provided by behavioral health providers working in a primary care team, aimed at increasing patients' treatment engagement and decreasing their alcohol use. Research has shown that ACM is effective in reducing alcohol consumption; however, the mechanisms of ACM are unknown. Therefore, the purpose of this study is to examine the mechanisms of change in ACM in the context of a randomized clinical trial evaluating the effectiveness of ACM. This study performed secondary analysis of data from a larger study involving a sample of U.S. veterans (N = 163) who met criteria for current alcohol dependence. Upon enrollment into the study, participants were randomized to receive either ACM or standard care. ACM was delivered in person or by telephone within the primary care clinic and focused on the use of oral naltrexone and manualized psychosocial support. Following theory, we hypothesized several ACM treatment components that would mediate alcohol consumption outcomes: engagement in addiction treatment, reduced craving, and increased readiness to change. Parallel mediation models were fitted with the PROCESS macro (Model 4) in SPSS to test the study hypotheses. The institutional review boards at each of the participating facilities approved all study procedures before data collection. As hypothesized, results showed that treatment engagement mediated the relation between treatment and both alcohol consumption outcomes: the percentage of alcohol-abstinent days and the percentage of heavy drinking days. Neither craving nor readiness to change mediated the treatment effect on either alcohol consumption outcome. Findings suggest that ACM may be effective in changing drinking patterns partially due to an increase in treatment engagement. Future research may benefit from evaluating the specific factors that underlie increased treatment engagement. The current study provides evidence that alcohol
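The mediation test used in studies like this one (indirect effect as the product of the a-path and b-path, with a bootstrap confidence interval, in the spirit of PROCESS Model 4) can be sketched in pure Python on synthetic data; the variable names, effect sizes and sample size below are illustrative assumptions, not the study's data:

```python
import random

def ols2(y, m, x):
    """Coefficients of y ~ m + x from centered normal equations (2x2 solve)."""
    n = len(y)
    my, mm, mx = sum(y) / n, sum(m) / n, sum(x) / n
    yc = [v - my for v in y]; mc = [v - mm for v in m]; xc = [v - mx for v in x]
    Smm = sum(v * v for v in mc); Sxx = sum(v * v for v in xc)
    Smx = sum(a * b for a, b in zip(mc, xc))
    Smy = sum(a * b for a, b in zip(mc, yc)); Sxy = sum(a * b for a, b in zip(xc, yc))
    det = Smm * Sxx - Smx * Smx
    b1 = (Smy * Sxx - Sxy * Smx) / det      # effect of mediator on outcome
    b2 = (Sxy * Smm - Smy * Smx) / det      # direct effect of treatment
    return b1, b2

def indirect_effect(y, m, x):
    """a*b estimate: a from m ~ x, b from y ~ m + x."""
    n = len(x)
    mx_, mm_ = sum(x) / n, sum(m) / n
    a = (sum((xi - mx_) * (mi - mm_) for xi, mi in zip(x, m))
         / sum((xi - mx_) ** 2 for xi in x))
    b, _ = ols2(y, m, x)
    return a * b

rng = random.Random(42)
n = 500
x = [rng.gauss(0, 1) for _ in range(n)]                    # treatment indicator proxy
m = [0.5 * xi + rng.gauss(0, 1) for xi in x]               # mediator (engagement)
y = [0.7 * mi + 0.2 * xi + rng.gauss(0, 1) for xi, mi in zip(x, m)]  # outcome

point = indirect_effect(y, m, x)                           # true indirect effect: 0.35

# percentile bootstrap CI for the indirect effect
boots = []
for _ in range(500):
    idx = [rng.randrange(n) for _ in range(n)]
    boots.append(indirect_effect([y[i] for i in idx],
                                 [m[i] for i in idx],
                                 [x[i] for i in idx]))
boots.sort()
print(point, (boots[12], boots[487]))   # estimate with an approximate 95% CI
```

A CI that excludes zero is the usual evidence for mediation, mirroring the engagement result reported in the abstract.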

  13. Frequency distributions from birth, death, and creation processes.

    Science.gov (United States)

    Bartley, David L; Ogden, Trevor; Song, Ruiguang

    2002-01-01

    The time-dependent frequency distribution of groups of individuals versus group size was investigated within a continuum approximation, assuming a simplified individual growth, death and creation model. The analogy of the system to a physical fluid exhibiting both convection and diffusion was exploited in obtaining various solutions to the distribution equation. A general solution was approximated through the application of a Green's function. More specific exact solutions were also found to be useful. The solutions were continually checked against the continuum approximation through extensive simulation of the discrete system. Over limited ranges of group size, the frequency distributions were shown to closely exhibit a power-law dependence on group size, as found in many realizations of this type of system, ranging from colonies of mutated bacteria to the distribution of surnames in a given population. As an example, the modeled distributions were successfully fit to the distribution of surnames in several countries by adjusting the parameters specifying growth, death and creation rates.
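A discrete simulation of the group-size process described above (per-capita growth and death plus creation of new single-member groups) is easy to sketch; the rates, run length and sampling scheme below are arbitrary choices, with death slightly exceeding birth so that a stationary frequency distribution exists:

```python
import random
from collections import Counter

def simulate(birth=0.9, death=1.0, creation=0.5,
             steps=200_000, burn_in=10_000, seed=1):
    """Histogram of group sizes under per-capita birth/death and group creation."""
    rng = random.Random(seed)
    groups, total = [1], 1            # current group sizes; running individual count
    hist = Counter()
    for step in range(steps):
        b_rate, d_rate = birth * total, death * total
        r = rng.uniform(0.0, creation + b_rate + d_rate)
        if r < creation:
            groups.append(1); total += 1       # a new one-member group appears
        else:
            # choose a group with probability proportional to its size
            t = rng.uniform(0.0, total)
            acc = 0.0
            for i, g in enumerate(groups):
                acc += g
                if acc >= t:
                    break
            if r < creation + b_rate:
                groups[i] += 1; total += 1     # an individual is born
            else:
                groups[i] -= 1; total -= 1     # an individual dies
                if groups[i] == 0:
                    groups.pop(i)              # the group goes extinct
        if not groups:
            groups, total = [1], 1             # re-seed an empty system
        if step >= burn_in and step % 10 == 0:
            hist.update(groups)                # sample the size distribution
    return hist

hist = simulate()
print(sorted(hist.items())[:6])   # frequency of groups against group size
```

Plotting the sampled frequencies on log-log axes shows the roughly power-law decay over limited size ranges that the abstract describes.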

  14. Coalescent Processes with Skewed Offspring Distributions and Nonequilibrium Demography.

    Science.gov (United States)

    Matuszewski, Sebastian; Hildebrandt, Marcel E; Achaz, Guillaume; Jensen, Jeffrey D

    2018-01-01

    Nonequilibrium demography impacts coalescent genealogies, leaving detectable, well-studied signatures of variation. However, similar genomic footprints are also expected under models of large reproductive skew, posing a serious problem when trying to make inference. Furthermore, current approaches consider only one of the two processes at a time, neglecting any genomic signal that could arise from their simultaneous effects and preventing the possibility of jointly inferring parameters relating to both the offspring distribution and population history. Here, we develop an extended Moran model with exponential population growth, and demonstrate that the underlying ancestral process converges to a time-inhomogeneous psi-coalescent. However, by applying a nonlinear change of time scale (analogous to the Kingman coalescent), we find that the ancestral process can be rescaled to its time-homogeneous analog, allowing the process to be simulated quickly and efficiently. Furthermore, we derive analytical expressions for the expected site-frequency spectrum under the time-inhomogeneous psi-coalescent, and develop an approximate-likelihood framework for the joint estimation of the coalescent and growth parameters. By means of extensive simulation, we demonstrate that both can be estimated accurately from whole-genome data. In addition, not accounting for demography can lead to serious biases in the inferred coalescent model, with broad implications for genomic studies ranging from ecology to conservation biology. Finally, we use our method to analyze sequence data from Japanese sardine populations, and find evidence of high variation in individual reproductive success, but few signs of a recent demographic expansion. Copyright © 2018 by the Genetics Society of America.

  15. Real World Awareness in Distributed Organizations: A View on Informal Processes

    Directory of Open Access Journals (Sweden)

    Eldar Sultanow

    2011-06-01

    Full Text Available Geographically distributed development must contend with the challenge of maintaining awareness to a far greater extent than locally concentrated development. Awareness denotes the state of being informed, combined with an understanding of the project-related activities, states and relationships of each individual employee within a given group as a whole. In complex offices, where social interaction is necessary to distribute and locate information and experts, awareness becomes a concurrent process, which heightens the need for easy ways for staff to access this information, deferred or decentralized, in a formalized and problem-oriented way. Although the subject of awareness has greatly increased in importance, there is extensive disagreement about how this transparency can be conceptually and technically implemented [1]. This paper introduces a model for visualizing and navigating this information in three tiers using semantic networks, GIS and Web3D.

  16. A Coordinated Initialization Process for the Distributed Space Exploration Simulation

    Science.gov (United States)

    Crues, Edwin Z.; Phillips, Robert G.; Dexter, Dan; Hasan, David

    2007-01-01

    A viewgraph presentation on the federate initialization process for the Distributed Space Exploration Simulation (DSES) is described. The topics include: 1) Background: DSES; 2) Simulation requirements; 3) Nine Step Initialization; 4) Step 1: Create the Federation; 5) Step 2: Publish and Subscribe; 6) Step 3: Create Object Instances; 7) Step 4: Confirm All Federates Have Joined; 8) Step 5: Achieve initialize Synchronization Point; 9) Step 6: Update Object Instances With Initial Data; 10) Step 7: Wait for Object Reflections; 11) Step 8: Set Up Time Management; 12) Step 9: Achieve startup Synchronization Point; and 13) Conclusions
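The synchronization-point pattern in steps 5 and 9 (no federate proceeds until every federate has reached the named point) can be sketched with a thread barrier standing in for the HLA time-management services; this is a schematic analogy with invented federate names, not the DSES or RTI API:

```python
import threading

FEDERATES = ["vehicle_sim", "environment_sim", "crew_sim"]   # hypothetical names
barrier = threading.Barrier(len(FEDERATES))   # plays the role of a sync point
shared_objects = {}
lock = threading.Lock()
log = []

def federate(name: str) -> None:
    # steps 2-3: publish/subscribe and create object instances (schematic)
    with lock:
        shared_objects[name] = {"state": "initial"}
    barrier.wait()          # step 5: achieve the "initialize" synchronization point
    # steps 6-7: update instances with initial data / wait for reflections
    with lock:
        shared_objects[name]["state"] = "initialized"
    barrier.wait()          # step 9: achieve the "startup" synchronization point
    with lock:
        log.append(name)    # time-managed execution would begin here

threads = [threading.Thread(target=federate, args=(n,)) for n in FEDERATES]
for t in threads: t.start()
for t in threads: t.join()

print(sorted(log))          # every federate passed both sync points
```

The barrier guarantees the same property the two synchronization points provide: no federate starts exchanging initial data, or advancing time, before all of its peers are ready.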

  17. Distribution flow: a general process in the top layer of water repellent soils

    NARCIS (Netherlands)

    Ritsema, C.J.; Dekker, L.W.

    1995-01-01

    Distribution flow is the process of water and solute flowing in a lateral direction over and through the very first millimetre or centimetre of the soil profile. A potassium bromide tracer was applied in two water-repellent sandy soils to follow the actual flow paths of water and solutes in the

  18. Scalable, Economical Fabrication Processes for Ultra-Compact Warm-White LEDs

    Energy Technology Data Exchange (ETDEWEB)

    Lowes, Ted [Cree, Inc., Durham, NC (United States)

    2016-01-31

    Conventional warm-white LED component fabrication consists of a large number of sequential steps which are required to incorporate electrical, mechanical, and optical functionality into the component. Each of these steps presents cost and yield challenges which multiply throughout the entire process. Although there has been significant progress in LED fabrication over the last decade, significant advances are needed to enable further reductions in cost per lumen while not sacrificing efficacy or color quality. Cree conducted a focused 18-month program to develop a new low-cost, high-efficiency light emitting diode (LED) architecture enabled by novel large-area parallel processing technologies, reduced number of fabrication steps, and minimized raw materials use. This new scheme is expected to enable ultra-compact LED components exhibiting simultaneously high efficacy and high color quality. By the end of the program, Cree fabricated warm-white LEDs with a room-temperature “instant on” efficacy of >135 lm/W at ~3500K and 90 CRI (when driven at the DOE baseline current density of 35 A/cm2). Cree modified the conventional LED fabrication process flow in a manner that is expected to translate into simultaneously high throughput and yield for ultra-compact packages. Building on its deep expertise in LED wafer fabrication, Cree developed these ultra-compact LEDs to have no compromises in color quality or efficacy compared to their conventional counterparts. Despite their very small size, the LEDs will also be robustly electrically integrated into luminaire systems with the same attach yield as conventional packages. The versatility of the prototype high-efficacy LED architecture will likely benefit solid-state lighting (SSL) luminaire platforms ranging from bulbs to troffers. We anticipate that the prototype LEDs will particularly benefit luminaires with large numbers of distributed compact packages, such as linear and area luminaires (e.g. troffers). The fraction of

  19. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes.

    Science.gov (United States)

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.
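A quick numerical check of the two-stage model referenced above: a Gillespie simulation of transcription, translation and first-order degradation should reproduce the analytical steady-state protein mean k_m k_p / (g_m g_p); the rate values below are arbitrary test numbers, not parameters from the paper:

```python
import math, random

def gillespie_two_stage(k_m=5.0, g_m=1.0, k_p=2.0, g_p=0.1,
                        t_end=2000.0, burn_in=200.0, seed=7):
    """Time-averaged protein copy number for the two-stage expression model."""
    rng = random.Random(seed)
    t, m, p, area = 0.0, 0, 0, 0.0
    while True:
        rates = (k_m, g_m * m, k_p * m, g_p * p)
        total = sum(rates)
        dt = -math.log(1.0 - rng.random()) / total   # exponential waiting time
        if t + dt >= t_end:
            if t_end > burn_in:
                area += p * (t_end - max(t, burn_in))
            break
        if t + dt > burn_in:
            area += p * (t + dt - max(t, burn_in))   # accumulate after burn-in
        t += dt
        r = rng.random() * total
        if r < rates[0]:
            m += 1                                   # transcription
        elif r < rates[0] + rates[1]:
            m -= 1                                   # mRNA degradation
        elif r < rates[0] + rates[1] + rates[2]:
            p += 1                                   # translation
        else:
            p -= 1                                   # protein degradation
    return area / (t_end - burn_in)

mean_p = gillespie_two_stage()
print(mean_p)    # analytical steady-state mean: (k_m/g_m) * (k_p/g_p) = 100
```

The exact distributions derived in the paper go well beyond the mean, but a simulation of this kind is the standard way to validate such analytical results.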

  1. Unitarity corrections in the pT distribution for the Drell-Yan process

    International Nuclear Information System (INIS)

    Betempts, M.A.; Gay Ducaty, M.B.; Machado, M.V.T.

    2001-01-01

    In this contribution we investigate the Drell-Yan transverse momentum distribution considering the color dipole approach, taking into account unitarity aspects in the dipole cross section. The process is analyzed at current energies in pp collisions (√s = 62 GeV) and at LHC energies (√s = 8.8 TeV). The unitarity corrections are implemented through the multiple scattering Glauber-Mueller approach. (author)
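For reference, the Glauber-Mueller unitarization mentioned above enters through the dipole cross section; schematically (with Q_s the saturation scale, and this generic eikonal form standing in for the authors' exact parametrization):

```latex
% eikonal (Glauber-Mueller) unitarization of the dipole cross section
\sigma_{q\bar{q}}(x, r) \;=\; 2 \int d^2 b \,
  \left\{ 1 - \exp\!\left[ -\tfrac{1}{4}\, r^{2}\, Q_s^{2}(x, b) \right] \right\}
```

At small dipole sizes r the exponential can be expanded and the single-scattering (colour-transparency) result is recovered; at large r the cross section saturates, which is what tames the small-p_T region of the Drell-Yan spectrum.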

  2. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    Science.gov (United States)

    Duarte Queirós, Sílvio M.

    2012-07-01

    We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-Normal distribution is yielded. Namely, the distribution enhances the tail for small (when q < 1) or large (when q > 1) values of the variable under analysis. The usual log-Normal distribution is retrieved when q = 1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution, as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance, are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
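The construction is a direct transcription of the q-product definition: iterate it over random multiplicative factors and the ordinary Kapteyn process (hence the log-Normal) is recovered in the q = 1 limit. The factor distribution and number of factors below are arbitrary illustrations:

```python
import math, random

def q_product(x, y, q):
    """Borges q-product: [x^(1-q) + y^(1-q) - 1]^(1/(1-q)); ordinary product at q=1.
    Defined as 0 when the bracket (or an argument) is non-positive (cutoff)."""
    if x <= 0.0 or y <= 0.0:
        return 0.0
    if abs(q - 1.0) < 1e-12:
        return x * y
    base = x ** (1.0 - q) + y ** (1.0 - q) - 1.0
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

def kapteyn_sample(q, n_factors=50, rng=None):
    """One realization of the multiplicative (Kapteyn) process under the q-product."""
    rng = rng or random.Random()
    value = 1.0
    for _ in range(n_factors):
        value = q_product(value, math.exp(rng.gauss(0.0, 0.1)), q)
    return value

rng = random.Random(0)
print(q_product(2.0, 3.0, 1.0))   # 6.0: the usual product
print(kapteyn_sample(0.9, rng=rng), kapteyn_sample(1.1, rng=rng))
```

Drawing many samples of `kapteyn_sample` for q above and below 1 and histogramming them shows the tail deformation of the generalized log-Normal relative to the q = 1 case.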

  3. Estimation of dislocations density and distribution of dislocations during ECAP-Conform process

    Science.gov (United States)

    Derakhshan, Jaber Fakhimi; Parsa, Mohammad Habibi; Ayati, Vahid; Jafarian, Hamidreza

    2018-01-01

    The dislocation density of a coarse-grained aluminum AA1100 alloy (grain size 140 µm) severely deformed by Equal Channel Angular Pressing-Conform (ECAP-Conform) is studied at various stages of the process by the electron backscatter diffraction (EBSD) method. The geometrically necessary dislocation (GND) and statistically stored dislocation (SSD) densities were estimated. The total dislocation densities were then calculated and the dislocation distributions are presented as contour maps. The estimated average dislocation density of about 2 × 10^12 m^-2 for the annealed state increases to 4 × 10^13 m^-2 at the middle of the groove (135° from the entrance), and reaches 6.4 × 10^13 m^-2 at the end of the groove just before the ECAP region. The calculated average dislocation density for a one-pass severely deformed Al sample reached 6.2 × 10^14 m^-2. At the micrometre scale the behavior of metals, especially their mechanical properties, depends largely on the dislocation density and dislocation distribution. Yield stresses at different conditions were therefore estimated from the calculated dislocation densities and compared with experimental results, and good agreement was found. Although the grain size of the material did not change appreciably, the yield stress showed a marked increase due to the development of a cell structure. The considerable increase in dislocation density in this process is a good justification for the formation of subgrains and cell structures during processing, which can explain the increase in yield stress.
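Yield-stress estimates from dislocation density are typically made with the Taylor hardening relation, σ_y = σ_0 + M α G b √ρ. The sketch below uses textbook-order constants for aluminium (friction stress, Taylor factor, α, shear modulus and Burgers vector are generic assumptions, not the paper's fitted values) together with the densities quoted in the abstract:

```python
import math

def taylor_yield_stress(rho, sigma0=20e6, M=3.06, alpha=0.3, G=26e9, b=0.286e-9):
    """Yield stress (Pa) from dislocation density rho (m^-2) via Taylor hardening:
    sigma_y = sigma0 + M * alpha * G * b * sqrt(rho)."""
    return sigma0 + M * alpha * G * b * math.sqrt(rho)

# dislocation densities reported in the abstract (m^-2)
for rho in (2e12, 4e13, 6.4e13, 6.2e14):
    sigma = taylor_yield_stress(rho)
    print(f"rho = {rho:.1e} m^-2  ->  sigma_y = {sigma / 1e6:.0f} MPa")
```

The square-root dependence is the key point: a roughly 300-fold rise in dislocation density translates into an order-of-magnitude, not 300-fold, rise in the estimated yield stress.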

  4. Quasi-stationary distributions for structured birth and death processes with mutations

    OpenAIRE

    Collet , Pierre; Martinez , Servet; Méléard , Sylvie; San Martin , Jaime

    2009-01-01

    39 pages; We study the probabilistic evolution of a birth and death continuous time measure-valued process with mutations and ecological interactions. The individuals are characterized by (phenotypic) traits that take values in a compact metric space. Each individual can die or generate a new individual. The birth and death rates may depend on the environment through the action of the whole population. The offspring can have the same trait or can mutate to a randomly distributed trait. We ass...

  5. Cumulative processes and quark distribution in nuclei

    International Nuclear Information System (INIS)

    Kondratyuk, L.; Shmatikov, M.

    1984-01-01

    Assuming the existence of multiquark (mainly 12q) bags in nuclei, the spectra of cumulative nucleons and mesons produced in high-energy particle-nucleus collisions are discussed. The exponential form of the quark momentum distribution in the 12q bag (agreeing well with the experimental data on lepton-nucleus interactions at large q^2) is shown to result in a quasi-exponential distribution of cumulative particles over the light-cone variable α_B. The dependence of f(α_B; p_⊥) (where p_⊥ is the transverse momentum of the bag) upon p_⊥ is considered. The yields of cumulative resonances, as well as effects related to the difference between the u- and d-quark distributions in N > Z nuclei, are discussed.

  6. Comparing performances of clements, box-cox, Johnson methods with weibull distributions for assessing process capability

    Directory of Open Access Journals (Sweden)

    Ozlem Senvar

    2016-08-01

    Full Text Available Purpose: This study examines Clements’ Approach (CA, Box-Cox transformation (BCT, and Johnson transformation (JT methods for process capability assessments through Weibull-distributed data with different parameters to figure out the effects of the tail behaviours on process capability and compares their estimation performances in terms of accuracy and precision. Design/methodology/approach: Usage of process performance index (PPI Ppu is handled for process capability analysis (PCA because the comparison issues are performed through generating Weibull data without subgroups. Box plots, descriptive statistics, the root-mean-square deviation (RMSD, which is used as a measure of error, and a radar chart are utilized all together for evaluating the performances of the methods. In addition, the bias of the estimated values is important as the efficiency measured by the mean square error. In this regard, Relative Bias (RB and the Relative Root Mean Square Error (RRMSE are also considered. Findings: The results reveal that the performance of a method is dependent on its capability to fit the tail behavior of the Weibull distribution and on targeted values of the PPIs. It is observed that the effect of tail behavior is more significant when the process is more capable. Research limitations/implications: Some other methods such as Weighted Variance method, which also give good results, were also conducted. However, we later realized that it would be confusing in terms of comparison issues between the methods for consistent interpretations. Practical implications: Weibull distribution covers a wide class of non-normal processes due to its capability to yield a variety of distinct curves based on its parameters. Weibull distributions are known to have significantly different tail behaviors, which greatly affects the process capability. 
In quality and reliability applications, they are widely used for the analyses of failure data in order to understand how
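The transformation route compared above can be sketched numerically. The snippet below (all numbers invented for illustration; a coarse grid search for the Box-Cox lambda stands in for whatever estimator the study used) transforms simulated Weibull data and computes the upper process performance index Ppu = (USL - mean) / (3 * sigma) on the transformed, near-normal scale:

```python
import numpy as np

def boxcox(x, lam):
    # Box-Cox power transformation for positive data
    return np.log(x) if lam == 0 else (np.power(x, lam) - 1.0) / lam

def fit_boxcox_lambda(x, grid=None):
    # Maximize the Box-Cox profile log-likelihood over a coarse grid
    if grid is None:
        grid = np.linspace(-2.0, 2.0, 401)
    n, log_sum = len(x), np.log(x).sum()
    def loglik(lam):
        return -0.5 * n * np.log(boxcox(x, lam).var()) + (lam - 1.0) * log_sum
    return max(grid, key=loglik)

rng = np.random.default_rng(42)
data = 10.0 * rng.weibull(2.0, size=5000)  # simulated Weibull process data
usl = 30.0                                 # hypothetical upper spec limit

lam = fit_boxcox_lambda(data)
t = boxcox(data, lam)
# Upper process performance index on the transformed scale:
# both the data and the specification limit use the same lambda
ppu = (boxcox(usl, lam) - t.mean()) / (3.0 * t.std(ddof=1))
```

Changing the Weibull shape parameter changes the tail weight, which is exactly the effect on Ppu estimates that the study quantifies.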

  7. Distribution of radioactivity in the Esk Estuary and its relationship to sedimentary processes

    International Nuclear Information System (INIS)

    Kelly, M.; Emptage, M.

    1992-01-01

    In the Esk Estuary, Cumbria, the distribution of sediment lithology and facies has been determined and related to radionuclide surface and sub-surface distribution. The total volume of sediment contaminated with artificial radionuclides is estimated at 1.2 Mm³ and the inventory of ¹³⁷Cs at 4.5 TBq. The fine grained sediments of the bank facies are the main reservoir for radionuclides, comprising 73% of the ¹³⁷Cs inventory. Time scales for the reworking of these sediments are estimated at tens to hundreds of years. Measurements of sediment and radionuclide deposition demonstrate that direct sediment deposition is the main method for radionuclide recruitment to the deposits but solution labelling can also occur. Bioturbation and other diagenetic processes modify the distribution of radionuclides in the deposits. Gamma dose rates in air can be related to the sediment grain size and sedimentation rate. (Author)

  8. On the degree distribution of horizontal visibility graphs associated with Markov processes and dynamical systems: diagrammatic and variational approaches

    International Nuclear Information System (INIS)

    Lacasa, Lucas

    2014-01-01

    Dynamical processes can be transformed into graphs through a family of mappings called visibility algorithms, enabling the possibility of (i) making empirical time series analysis and signal processing and (ii) characterizing classes of dynamical systems and stochastic processes using the tools of graph theory. Recent works show that the degree distribution of these graphs encapsulates much information on the signals' variability, and therefore constitutes a fundamental feature for statistical learning purposes. However, exact solutions for the degree distributions are only known in a few cases, such as for uncorrelated random processes. Here we analytically explore these distributions in a list of situations. We present a diagrammatic formalism which computes for all degrees their corresponding probability as a series expansion in a coupling constant which is the number of hidden variables. We offer a constructive solution for general Markovian stochastic processes and deterministic maps. As case tests we focus on Ornstein–Uhlenbeck processes, fully chaotic and quasiperiodic maps. Whereas only for certain degree probabilities can all diagrams be summed exactly, in the general case we show that the perturbation theory converges. In a second part, we make use of a variational technique to predict the complete degree distribution for special classes of Markovian dynamics with fast-decaying correlations. In every case we compare the theory with numerical experiments. (paper)
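As a concrete illustration of the mapping described above, the sketch below (a brute-force implementation of ours, not from the paper) builds the horizontal visibility graph of a series directly from the visibility criterion and returns the node degrees; for an uncorrelated random series the degree distribution is known to be P(k) = (1/3)(2/3)^(k-2):

```python
def horizontal_visibility_degrees(x):
    """Degrees of the horizontal visibility graph of series x.

    Points i < j are linked iff every intermediate value lies strictly
    below both endpoints: x[k] < min(x[i], x[j]) for all i < k < j.
    """
    n = len(x)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                deg[i] += 1
                deg[j] += 1
    return deg

# A strictly increasing series links only neighbours:
print(horizontal_visibility_degrees([1, 2, 3, 4, 5]))  # → [1, 2, 2, 2, 1]
```

This direct check is O(n³); linear-time stack-based constructions exist, but the definition above is the one the degree statistics are derived from.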

  9. Seeking inclusion in an exclusive process: discourses of medical school student selection.

    Science.gov (United States)

    Razack, Saleem; Hodges, Brian; Steinert, Yvonne; Maguire, Mary

    2015-01-01

    Calls to increase medical class representativeness to better reflect the diversity of society represent a growing international trend. There is an inherent tension between these calls and competitive student selection processes driven by academic achievement. How is this tension manifested? Our three-phase interdisciplinary research programme focused on the discourses of excellence, equity and diversity in the medical school selection process, as conveyed by key stakeholders: (i) institutions and regulatory bodies (the websites of 17 medical schools and 15 policy documents from national regulatory bodies); (ii) admissions committee members (ACMs) (according to semi-structured interviews [n = 9]), and (iii) successful applicants (according to semi-structured interviews [n = 14]). The work is theoretically situated within the works of Foucault, Bourdieu and Bakhtin. The conceptual framework is supplemented by critical hermeneutics and the performance theories of Goffman. Academic excellence discourses consistently predominate over discourses calling for greater representativeness in medical classes. Policy addressing demographic representativeness in medicine may unwittingly contribute to the reproduction of historical patterns of exclusion of under-represented groups. In ACM selection practices, another discursive tension is exposed as the inherent privilege in the process is marked, challenging the ideal of medicine as a meritocracy. Applicants' representations of self in the 'performance' of interviewing demonstrate implicit recognition of the power inherent in the act of selection and are manifested in the use of explicit strategies to 'fit in'. How can this critical discourse analysis inform improved inclusiveness in student selection? Policymakers addressing diversity and equity issues in medical school admissions should explicitly recognise the power dynamics at play between the profession and marginalised groups. For greater inclusion and to avoid one

  10. THE BERKELEY DATA ANALYSIS SYSTEM (BDAS): AN OPEN SOURCE PLATFORM FOR BIG DATA ANALYTICS

    Science.gov (United States)

    2017-09-01

    duplication and overlap, as well as variations in schemas, quality, and provenance. • Diversity of queries and users: The factors outlined above mean... Reliable Approximate Query Processing Systems. ACM SIGMOD, 2014. [30] Jiannan Wang, Sanjay Krishnan, Michael Franklin, Ken Goldberg, Tim Kraska, Tova... Goldberg. A Methodology for Learning, Analyzing, and Mitigating Social Influence Bias in Recommender Systems. ACM Conference on Recommender Systems

  11. A Photo Storm Report Mobile Application, Processing/Distribution System, and AWIPS-II Display Concept

    Science.gov (United States)

    Longmore, S. P.; Bikos, D.; Szoke, E.; Miller, S. D.; Brummer, R.; Lindsey, D. T.; Hillger, D.

    2014-12-01

    The increasing use of mobile phones equipped with digital cameras and the ability to post images and information to the Internet in real-time has significantly improved the ability to report events almost instantaneously. In the context of severe weather reports, a representative digital image conveys significantly more information than a simple text or phone relayed report to a weather forecaster issuing severe weather warnings. It also allows the forecaster to reasonably discern the validity and quality of a storm report. Posting geo-located, time stamped storm report photographs utilizing a mobile phone application to NWS social media weather forecast office pages has generated recent positive feedback from forecasters. Building upon this feedback, this discussion advances the concept, development, and implementation of a formalized Photo Storm Report (PSR) mobile application, processing and distribution system and Advanced Weather Interactive Processing System II (AWIPS-II) plug-in display software. The PSR system would be composed of three core components: i) a mobile phone application, ii) a processing and distribution software and hardware system, and iii) AWIPS-II data, exchange and visualization plug-in software. i) The mobile phone application would allow web-registered users to send geo-location, view direction, and time stamped PSRs along with severe weather type and comments to the processing and distribution servers. ii) The servers would receive PSRs, convert images and information to NWS network bandwidth manageable sizes in an AWIPS-II data format, distribute them on the NWS data communications network, and archive the original PSRs for possible future research datasets. iii) The AWIPS-II data and exchange plug-ins would archive PSRs, and the visualization plug-in would display PSR locations, times and directions by hour, similar to surface observations. 
Hovering on individual PSRs would reveal photo thumbnails and clicking on them would display the

  12. A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling.

    Science.gov (United States)

    Bouguila, Nizar; Ziou, Djemel

    2010-01-01

    In this paper, we propose a clustering algorithm based on both Dirichlet processes and generalized Dirichlet distribution which has been shown to be very flexible for proportional data modeling. Our approach can be viewed as an extension of the finite generalized Dirichlet mixture model to the infinite case. The extension is based on nonparametric Bayesian analysis. This clustering algorithm does not require the specification of the number of mixture components to be given in advance and estimates it in a principled manner. Our approach is Bayesian and relies on the estimation of the posterior distribution of clusterings using Gibbs sampler. Through some applications involving real-data classification and image databases categorization using visual words, we show that clustering via infinite mixture models offers a more powerful and robust performance than classic finite mixtures.
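The "infinite" prior behind such models is the Dirichlet process, which is easiest to picture through its stick-breaking construction. The sketch below shows only that generic construction with invented parameters; the paper's model additionally places generalized Dirichlet components and a Gibbs sampler on top of it, which are not reproduced here:

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking (GEM) weights of a Dirichlet process.

    Repeatedly break a unit stick: each Beta(1, alpha) draw claims a
    fraction of what remains, so the weights decay and sum to just
    under one.
    """
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * remaining

rng = np.random.default_rng(7)
weights = stick_breaking(alpha=2.0, n_atoms=100, rng=rng)
# A small alpha piles mass onto a few components, a large alpha spreads
# it out; this is how the mixture lets the data pick its cluster count.
```

Because only finitely many weights are ever appreciably large, the sampler effectively estimates the number of mixture components rather than requiring it in advance.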

  13. The distributed neural system for top-down letter processing: an fMRI study

    Science.gov (United States)

    Liu, Jiangang; Feng, Lu; Li, Ling; Tian, Jie

    2011-03-01

    This fMRI study used psychophysiological interaction (PPI) analysis to investigate top-down letter processing with an illusory letter detection task. After an initial training phase that became increasingly difficult, participants were instructed to detect a letter in pure noise images in which no letter was actually present. This experimental paradigm allowed the top-down components of letter processing to be isolated while minimizing the influence of bottom-up perceptual input. A distributed cortical network for top-down letter processing was identified by analyzing the functional connectivity patterns of the letter-preferential area (LA) within the left fusiform gyrus. This network extends from the visual cortex to high-level cognitive cortices, including the left middle frontal gyrus, left medial frontal gyrus, left superior parietal gyrus, bilateral precuneus, and left inferior occipital gyrus. These findings suggest that top-down letter processing engages not only regions for processing letter phonology and appearance, but also those involved in internal information generation and maintenance, and in attention and memory processing.

  14. Preparation and properties of high-Tc Bi-oxide superconductors

    International Nuclear Information System (INIS)

    Maeda, H.

    1989-01-01

    Bulk superconductors of the Pb-doped Bi-oxide system (BSCCO), dominated by the high-Tc phase, have a critical transition temperature Tc of 110 K and upper critical fields Hc2 of 140 T at 0 K and 60 T at 77 K. A highly dense and highly oriented microstructure is obtained by inserting an intermediate uniaxial pressing process. A critical current density Jc at 77 K in zero field, Jc(77 K, 0 T), of some 5000 A/cm² can be easily obtained by this process, and the field dependence of Jc is also improved. Flexible high-Tc BSCCO ribbons with a Jc(77 K, 0 T) of 1850 A/cm² have been successfully prepared by the combined process of doctor blade casting, cold rolling and sintering. Ag-sheathed multifilamentary wires with 1330 filaments and tapes with 30 filaments have also been successfully fabricated, and the 36-filament tape shows a Jc(77 K, 0 T) of 1050 A/cm². (Auth.). 7 refs.; 7 figs

  15. Process and installation for producing tomographic images of the distribution of a radiotracer

    International Nuclear Information System (INIS)

    Fonroget, Jacques; Brunol, Jean.

    1977-01-01

    The invention particularly concerns a process for obtaining tomographic images of an object formed by a radiotracer distributed spatially over three dimensions. This process, using a detection device with an appreciably plane detection surface and at least one collimation orifice provided in a partition between the detection surface and the object, enables tomographic sections to be obtained with an excellent three-dimensional resolution of the images achieved. It is employed to advantage in an installation that includes a detection device or gamma camera on an appreciably plane surface, a device having a series of collimation apertures which may be used in succession, these holes being appreciably distributed over a common plane parallel to the detection surface, and a holder for the object. This holder can be moved in appreciably parallel translation to the common plane. The aim of this invention is, inter alia, to meet two requirements: localization in space and obtaining good contrasts. This aim is achieved by the fact that at least one tomographic image is obtained from a series of intermediate images of the object [fr

  16. Field percolation and high current density in 80/20 DyBa2Cu3O7-x/Dy2BaCuO5 bulk magnetically textured composite ceramics

    International Nuclear Information System (INIS)

    Cloots, R.; Liege Univ.; Dang, A.; Vanderbemden, P.; Vanderschueren, A.; Vanderschueren, H.W.; Bougrine, H.; Liege Univ.; Rulmont, A.; Ausloos, M.

    1996-01-01

    We measured the AC susceptibility of a magnetically textured 123 (80%)/211 (20%) DyBaCuO composite in a special set-up in order to enhance the intergrain contribution. The synthesis process led to very clean weak links at grain boundaries. At the percolation threshold, bulk shielding paths were such that the intergrain critical current density Jc was above 10⁵ A/cm². The field dependence of Jc was understood through an analytical form indicating a distribution of currents similar to the law of clusters at fracture/percolation thresholds. (orig.)

  17. Hydraulic experimental investigation on spatial distribution and formation process of tsunami deposit on a slope

    Science.gov (United States)

    Harada, K.; Takahashi, T.; Yamamoto, A.; Sakuraba, M.; Nojima, K.

    2017-12-01

    An important aim of the study of tsunami deposits is to estimate the characteristics of past tsunamis from the deposits found locally. Based on the tsunami characteristics estimated from deposits, tsunami risk assessment in coastal areas can be examined. Tsunami deposits are thought to form through the dynamic interaction between the tsunami's hydraulic properties, sediment particle size, topography, etc. At present, however, tsunami characteristics cannot be adequately evaluated from tsunami deposits, in part because the formation process of the deposits is not sufficiently understood. In this study, we analyze the measurement results of a hydraulic experiment (Yamamoto et al., 2016) and focus on the formation process and distribution of tsunami deposits. The hydraulic experiment was conducted in a two-dimensional water channel with a slope, and the tsunami was input as a bore wave flow. A movable-bed section was installed as a seabed slope connecting to the shoreline, and several grain size distributions were tested. The water level was measured using ultrasonic displacement gauges, and the flow velocity was measured using propeller current meters and an electromagnetic current meter, at several points. The distribution of the tsunami deposit was measured from the shoreline to the run-up limit on the slope. Yamamoto et al. (2016) reported measurements of the distribution of the tsunami deposit as a function of wave height and sand grain size. In this study, therefore, the formation process of the tsunami deposit was examined hydraulically based on those measurement data. The time-series variation of hydraulic parameters such as the Froude number, Shields number, Rouse number, etc. was calculated to understand the formation process of the tsunami deposit. In the front part of the tsunami, the flow was strong from the shoreline to around the middle of the slope. 
From

  18. An investigation of processes controlling the evolution of the boundary layer aerosol size distribution properties at the Swedish background station Aspvreten

    Directory of Open Access Journals (Sweden)

    P. Tunved

    2004-01-01

    Full Text Available Aerosol size distributions have been measured at the Swedish background station Aspvreten (58.8° N, 17.4° E). Different states of the aerosol were determined using a novel application of cluster analysis. The analysis resulted in eight different clusters capturing different stages of the aerosol lifecycle. The atmospheric aerosol size distributions were interpreted as belonging to fresh, intermediate and aged types of size distribution. With aid of back trajectory analysis we present statistics concerning the relation of source area and different meteorological parameters using a non-Lagrangian approach. Source area is argued to be important although not sufficient to describe the observed aerosol properties. Especially processing by clouds and precipitation is shown to be crucial for the evolution of the aerosol size distribution. As much as 60% of the observed size distributions present features that are likely to be related to cloud processes or wet deposition. The lifetime properties of different sized aerosols are discussed by means of measured variability of the aerosol size distribution. Processing by clouds and precipitation is shown to be especially crucial in the size range 100 nm and larger. This indicates an approximate limit for activation in clouds to 100 nm in this type of environment. The aerosol lifecycle is discussed. Size distributions indicating signs of recent new particle formation (~30% of the observed size distributions) represent the first stage in the lifecycle. Aging of the aerosol size distribution may follow two branches: either growth by condensation and coagulation or processing by non-precipitating clouds. In both cases mass is accumulated. Wet removal is the main process capable of removing aerosol mass. Wet deposition is argued to be an important mechanism in reaching a state where nucleation may occur (i.e. sufficiently low aerosol surface area) in environments similar to the one studied.
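The abstract does not name the clustering algorithm, but the general idea of grouping measured size distributions into a handful of aerosol states can be sketched with a plain k-means on synthetic spectra. Everything below, from the two log-normal-shaped modes to the farthest-point initialization, is an invented illustration, not the study's method:

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain k-means on the rows of X, with farthest-point initialization."""
    centers = X[[0]]
    for _ in range(k - 1):  # greedily add the point farthest from all centers
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1).min(axis=1)
        centers = np.vstack([centers, X[np.argmax(d2)]])
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        new = np.vstack([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(1)
d = np.logspace(1, 3, 40)  # particle diameters, 10-1000 nm

def mode(center_nm, width=0.25):
    # Log-normal-shaped number size distribution mode (arbitrary units)
    return np.exp(-0.5 * ((np.log10(d) - np.log10(center_nm)) / width) ** 2)

# Synthetic "fresh" (~30 nm) and "aged" accumulation-mode (~150 nm) spectra
fresh = np.array([mode(30.0) + 0.05 * rng.normal(size=d.size) for _ in range(25)])
aged = np.array([mode(150.0) + 0.05 * rng.normal(size=d.size) for _ in range(25)])
X = np.vstack([fresh, aged])

labels, _ = kmeans(X, k=2)
```

In this toy setting the two aerosol states separate cleanly; real spectra would carry more modes, more noise, and more clusters, as the eight clusters found in the study suggest.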

  19. Distributed Real-Time Embedded Video Processing

    National Research Council Canada - National Science Library

    Lv, Tiehan

    2004-01-01

    .... A deployable multi-camera video system must perform distributed computation, including computation near the camera as well as remote computations, in order to meet performance and power requirements...

  20. Configuration and supervision of advanced distributed data acquisition and processing systems for long pulse experiments using JINI technology

    International Nuclear Information System (INIS)

    Gonzalez, Joaquin; Ruiz, Mariano; Barrera, Eduardo; Lopez, Juan Manuel; de Arcas, Guillermo; Vega, Jesus

    2009-01-01

    The development of tools for managing the capabilities and functionalities of distributed data acquisition systems is essential in long pulse fusion experiments. The intelligent test and measurement system (ITMS) developed by UPM and CIEMAT is a technology that permits implementation of a scalable data acquisition and processing system based on PXI or CompactPCI hardware. Several applications based on JINI technology have been developed to enable use of this platform for extensive implementation of distributed data acquisition and processing systems. JINI provides a framework for developing service-oriented, distributed applications. The applications are based on the paradigm of a JINI federation that supports mechanisms for publication, discovery, subscription, and linking to remote services. The model we implemented in the ITMS platform included services in the system CPU (SCPU) and peripheral CPUs (PCPUs). The resulting system demonstrated the following capabilities: (1) setup of the data acquisition and processing to apply to the signals, (2) information about the evolution of the data acquisition, (3) information about the applied data processing, and (4) detection and distribution of the events detected by the ITMS software applications. With this approach, software applications running on the ITMS platform can be understood, from the perspective of their implementation details, as a set of dynamic, accessible, and transparent services. The search for services is performed using the publication and subscription mechanisms of the JINI specification. The configuration and supervision applications were developed using remotely accessible (LAN or WAN) objects. 
The consequence of this approach is a hardware and software architecture that provides a transparent model of remote configuration and supervision, and thereby a means to simplify the implementation of a distributed data acquisition system with scalable and dynamic local processing capability developed in a

  1. Configuration and supervision of advanced distributed data acquisition and processing systems for long pulse experiments using JINI technology

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Joaquin; Ruiz, Mariano [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid (UPM), Ctra. Valencia Km-7, 28031, Madrid (Spain); Barrera, Eduardo [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid (UPM), Ctra. Valencia Km-7, 28031, Madrid (Spain)], E-mail: eduardo.barrera@upm.es; Lopez, Juan Manuel; de Arcas, Guillermo [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid (UPM), Ctra. Valencia Km-7, 28031, Madrid (Spain); Vega, Jesus [Asociacion EURATOM/CIEMAT para Fusion, Avda. Complutense 22, 28040, Madrid (Spain)

    2009-06-15

    The development of tools for managing the capabilities and functionalities of distributed data acquisition systems is essential in long pulse fusion experiments. The intelligent test and measurement system (ITMS) developed by UPM and CIEMAT is a technology that permits implementation of a scalable data acquisition and processing system based on PXI or CompactPCI hardware. Several applications based on JINI technology have been developed to enable use of this platform for extensive implementation of distributed data acquisition and processing systems. JINI provides a framework for developing service-oriented, distributed applications. The applications are based on the paradigm of a JINI federation that supports mechanisms for publication, discovery, subscription, and linking to remote services. The model we implemented in the ITMS platform included services in the system CPU (SCPU) and peripheral CPUs (PCPUs). The resulting system demonstrated the following capabilities: (1) setup of the data acquisition and processing to apply to the signals, (2) information about the evolution of the data acquisition, (3) information about the applied data processing, and (4) detection and distribution of the events detected by the ITMS software applications. With this approach, software applications running on the ITMS platform can be understood, from the perspective of their implementation details, as a set of dynamic, accessible, and transparent services. The search for services is performed using the publication and subscription mechanisms of the JINI specification. The configuration and supervision applications were developed using remotely accessible (LAN or WAN) objects. 
The consequence of this approach is a hardware and software architecture that provides a transparent model of remote configuration and supervision, and thereby a means to simplify the implementation of a distributed data acquisition system with scalable and dynamic local processing capability developed in a

  2. Research on distributed optical fiber sensing data processing method based on LabVIEW

    Science.gov (United States)

    Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing

    2018-01-01

    The pipeline leak detection and leak location problems have received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline. The data processing method for distributed optical fiber sensing based on LabVIEW is studied emphatically. The hardware system includes a laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card and computer, etc. The software system is developed using LabVIEW. It adopts a wavelet denoising method to process the temperature information, which improves the SNR. By extracting characteristic values from the fiber temperature information, the system can realize temperature measurement, leak location, and measurement signal storage and query. Compared with the traditional negative pressure wave or acoustic signal methods, the distributed optical fiber temperature measuring system can measure several temperatures in one measurement and locate the leak point accurately. It has broad application prospects.
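The wavelet-denoising step can be illustrated with a minimal single-level Haar transform (a generic Python sketch rather than LabVIEW; the threshold and signal are invented, and the actual system's wavelet family and decomposition depth are not given in the abstract):

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet denoise: soft-threshold the detail band.

    Measurement noise concentrates in the detail coefficients, so
    shrinking them suppresses noise while the smooth temperature trend
    survives in the approximation band.
    """
    s = np.asarray(signal, dtype=float)
    n = len(s) - len(s) % 2                      # drop a trailing odd sample
    a = (s[0:n:2] + s[1:n:2]) / np.sqrt(2.0)     # approximation coefficients
    d = (s[0:n:2] - s[1:n:2]) / np.sqrt(2.0)     # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    out = np.empty(n)                            # inverse Haar transform
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out
```

A practical system would use a multi-level transform with a noise-estimated threshold; this single level is just the smallest demonstration of the idea.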

  3. Process and device for determining the spatial distribution of a radioactive substance

    International Nuclear Information System (INIS)

    1977-01-01

    This invention describes a process for determining the spatial distribution of a radioactive substance, consisting of determining the positions and energy losses associated with the Compton-effect interactions and the photoelectric interactions that occur owing to the emission of gamma photons by the radioactive material, and of deducing information on the spatial distribution of the radioactive substance from the positions and energy losses associated with the Compton-effect interactions of these gamma photons and the positions and energy losses associated with the subsequent photoelectric interactions of these same photons. The invention also concerns a processing system for identifying, among the signals representing the positions and energy losses of the Compton-effect interactions and the photoelectric interactions of the gamma photons emitted by a radioactive source, those signals that correspond to gamma photons that have undergone an initial Compton-effect interaction and a second and final photoelectric interaction. It further concerns a system for determining, among the identified signals, the positions of the sources of several gamma photons. This Compton interaction detector can be used with a conventional Anger-type imaging system (gamma camera) for detecting photoelectric interactions [fr

  4. Single- versus dual-process models of lexical decision performance: insights from response time distributional analysis.

    Science.gov (United States)

    Yap, Melvin J; Balota, David A; Cortese, Michael J; Watson, Jason M

    2006-12-01

    This article evaluates 2 competing models that address the decision-making processes mediating word recognition and lexical decision performance: a hybrid 2-stage model of lexical decision performance and a random-walk model. In 2 experiments, nonword type and word frequency were manipulated across 2 contrasts (pseudohomophone-legal nonword and legal-illegal nonword). When nonwords became more wordlike (i.e., BRNTA vs. BRANT vs. BRANE), response latencies to nonwords were slowed and the word frequency effect increased. More important, distributional analyses revealed that the Nonword Type × Word Frequency interaction was modulated by different components of the response time distribution, depending on the specific nonword contrast. A single-process random-walk model was able to account for this particular set of findings more successfully than the hybrid 2-stage model. (c) 2006 APA, all rights reserved.
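The random-walk account can be made concrete with a toy evidence accumulator (an illustrative sketch, not the authors' fitted model; the drift, noise, and threshold values are invented). Lowering the drift rate, as for low-frequency words or more wordlike nonwords, both slows the mean response and stretches the slow tail of the RT distribution:

```python
import numpy as np

def random_walk_trial(drift, threshold, rng, max_steps=10_000):
    """One lexical-decision trial as a noisy random walk to a bound.

    Evidence drifts toward +threshold ('word') or -threshold ('nonword');
    the number of steps taken plays the role of the response time.
    """
    x = 0.0
    for t in range(1, max_steps + 1):
        x += drift + rng.normal(0.0, 1.0)
        if abs(x) >= threshold:
            return t, x > 0
    return max_steps, x > 0

rng = np.random.default_rng(0)
# High drift ~ easy, high-frequency word; low drift ~ hard, low-frequency
fast_rts = [random_walk_trial(0.6, 15.0, rng)[0] for _ in range(500)]
slow_rts = [random_walk_trial(0.15, 15.0, rng)[0] for _ in range(500)]
```

Comparing quantiles of `fast_rts` and `slow_rts` shows the interaction growing in the distribution's tail, the kind of distributional signature the article analyzes.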

  5. LytR, a phage-derived amidase, is most effective in induced lysis of Lactococcus lactis compared with other lactococcal amidases and glucosaminidases

    NARCIS (Netherlands)

    Steen, Anton; van Schalkwijk, Saskia; Buist, Girbe; Twigt, Marja; Szeliga, Monika; Meijer, Wilco; Kuipers, Oscar P.; Kok, Jan; Hugenholtz, Jeroen

    In the genome of Lactococcus lactis IL1403 five genes encoding peptidoglycan hydrolases are present: four glucosaminidases (acmA, acmB, acmC and acmD) and an endopeptidase (yjgB). Genes for six prophage lysins have also been identified. The genes acmB, acmC, acmD, yjgB and the lysin lytR of prophage

  6. DEVELOPMENT OF A HETEROGENIC DISTRIBUTED ENVIRONMENT FOR SPATIAL DATA PROCESSING USING CLOUD TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    A. S. Garov

    2016-06-01

    Full Text Available We are developing a unified distributed communication environment for processing of spatial data which integrates web-, desktop- and mobile platforms and combines volunteer computing model and public cloud possibilities. The main idea is to create a flexible working environment for research groups, which may be scaled according to required data volume and computing power, while keeping infrastructure costs at minimum. It is based upon the "single window" principle, which combines data access via geoportal functionality, processing possibilities and communication between researchers. Using an innovative software environment the recently developed planetary information system (http://cartsrv.mexlab.ru/geoportal) will be updated. The new system will provide spatial data processing, analysis and 3D-visualization and will be tested based on freely available Earth remote sensing data as well as Solar system planetary images from various missions. Based on this approach it will be possible to organize the research and representation of results on a new technology level, which provides more possibilities for immediate and direct reuse of research materials, including data, algorithms, methodology, and components. The new software environment is targeted at remote scientific teams, and will provide access to existing spatial distributed information for which we suggest implementation of a user interface as an advanced front-end, e.g., for virtual globe system.

  7. Development of a Heterogenic Distributed Environment for Spatial Data Processing Using Cloud Technologies

    Science.gov (United States)

    Garov, A. S.; Karachevtseva, I. P.; Matveev, E. V.; Zubarev, A. E.; Florinsky, I. V.

    2016-06-01

    We are developing a unified distributed communication environment for processing of spatial data which integrates web-, desktop- and mobile platforms and combines volunteer computing model and public cloud possibilities. The main idea is to create a flexible working environment for research groups, which may be scaled according to required data volume and computing power, while keeping infrastructure costs at minimum. It is based upon the "single window" principle, which combines data access via geoportal functionality, processing possibilities and communication between researchers. Using an innovative software environment the recently developed planetary information system (http://cartsrv.mexlab.ru/geoportal) will be updated. The new system will provide spatial data processing, analysis and 3D-visualization and will be tested based on freely available Earth remote sensing data as well as Solar system planetary images from various missions. Based on this approach it will be possible to organize the research and representation of results on a new technology level, which provides more possibilities for immediate and direct reuse of research materials, including data, algorithms, methodology, and components. The new software environment is targeted at remote scientific teams, and will provide access to existing spatial distributed information for which we suggest implementation of a user interface as an advanced front-end, e.g., for virtual globe system.

  8. General distributions in process algebra

    NARCIS (Netherlands)

    Katoen, Joost P.; d' Argenio, P.R.; Brinksma, Hendrik; Hermanns, H.; Katoen, Joost P.

    2001-01-01

    This paper is an informal tutorial on stochastic process algebras, i.e., process calculi where action occurrences may be subject to a delay that is governed by a (mostly continuous) random variable. Whereas most stochastic process algebras consider delays determined by negative exponential

  9. Orofacial clinical features in Arnold Chiari type I malformation: A case series.

    Science.gov (United States)

    de Arruda, José-Alcides; Figueiredo, Eugênia; Monteiro, João-Luiz; Barbosa, Livia-Mirelle; Rodrigues, Cleomar; Vasconcelos, Belmiro

    2018-04-01

    Arnold Chiari malformation (ACM) is characterized by an anatomical defect at the base of the skull in which the cerebellum and the spinal cord herniate through the foramen magnum into the cervical spinal canal. Among the subtypes of the condition, ACM type I (ACM-I) stands out because of the severity of its symptoms. This study aimed to analyze the orofacial clinical manifestations of patients with ACM-I and to discuss their demographic distribution and clinical features in light of the literature. A case series of patients with ACM-I treated between 2012 and 2015 is described. The sample consisted of patients referred by the Department of Neurosurgery to the Oral and Maxillofacial Surgery Service of Hospital da Restauração in Brazil for assessment of facial symptomatology. A questionnaire was applied to evaluate the presence of painful orofacial findings. Data are reported using descriptive statistical methods. Mean patient age was 39.3 years and the sample consisted mostly of male patients. A high prevalence of headache (50%) and of pain in the neck (66.7%) and masticatory muscles (50%) was found. Only one patient reported difficulty in performing mandibular movements and two reported jaw clicking sounds. Mean mouth opening was 40.83 mm. ACM-I patients may exhibit orofacial symptoms that mimic temporomandibular joint disorders. This study provides information that could help clinicians and oral and maxillofacial surgeons understand this uncommon condition and aid the diagnosis of patients with similar physical characteristics by referring them to a neurosurgeon. Key words: Arnold-Chiari malformation, facial pain, diagnosis, orofacial.

  10. Concentration distribution of trace elements: from normal distribution to Levy flights

    International Nuclear Information System (INIS)

    Kubala-Kukus, A.; Banas, D.; Braziewicz, J.; Majewska, U.; Pajek, M.

    2003-01-01

    The paper discusses a nature of concentration distributions of trace elements in biomedical samples, which were measured by using the X-ray fluorescence techniques (XRF, TXRF). Our earlier observation, that the lognormal distribution well describes the measured concentration distribution is explained here on a more general ground. Particularly, the role of random multiplicative process, which models the concentration distributions of trace elements in biomedical samples, is discussed in detail. It is demonstrated that the lognormal distribution, appearing when the multiplicative process is driven by normal distribution, can be generalized to the so-called log-stable distribution. Such distribution describes the random multiplicative process, which is driven, instead of normal distribution, by more general stable distribution, being known as the Levy flights. The presented ideas are exemplified by the results of the study of trace element concentration distributions in selected biomedical samples, obtained by using the conventional (XRF) and (TXRF) X-ray fluorescence methods. Particularly, the first observation of log-stable concentration distribution of trace elements is reported and discussed here in detail
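The multiplicative mechanism invoked above is easy to illustrate numerically: when a concentration is built up as a product of many independent positive random factors, its logarithm is a sum of random terms, so the concentration is approximately lognormal. The sketch below checks this with synthetic data; all parameter values are illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# A concentration built up as a product of many independent positive
# random factors; by the central limit theorem applied to log(c), the
# resulting distribution is approximately lognormal.
n_samples, n_steps = 20_000, 200
factors = np.exp(rng.normal(0.0, 0.05, size=(n_samples, n_steps)))
conc = factors.prod(axis=1)

log_c = np.log(conc)
skew = float(np.mean((log_c - log_c.mean()) ** 3) / log_c.std() ** 3)
print(f"skewness of log(concentration): {skew:.3f}")  # near 0 if lognormal
```

Replacing the normal driving noise with a heavy-tailed stable distribution would produce the log-stable case the authors report.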

  11. Development of Axial Continuous Metal Expeller for melt conditioning of alloys

    Science.gov (United States)

    Cassinath, Z.; Prasada Rao, A. K.

    2016-02-01

    ACME (Axial Continuous Metal Expeller) is a novel processing technology developed independently for conditioning liquid metal prior to solidification processing. The ACME process is based on an axial compressor and uses a rotor-stator mechanism to impose a high shear rate and a high intensity of turbulence on the liquid metal, so that the conditioned liquid metal has uniform temperature and uniform chemical composition as it is expelled. The microstructural refinement is achieved through the process of dendrite fragmentation while taking advantage of the thixotropic property of semisolid metal slurry so that it can be conveyed for further downstream operations. This paper introduces the concept and its advantages over current technologies.
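The shear imposed by a rotor-stator device of this kind can be gauged with the usual nominal shear-rate estimate, rotor tip speed divided by gap width. The speed and geometry below are hypothetical, chosen only to show the order of magnitude such devices reach, not the ACME design values:

```python
import math

# Nominal shear rate in a rotor-stator gap: gamma_dot = omega * r / gap.
# All numbers are illustrative assumptions about an ACME-like device.
rpm = 5000.0            # assumed rotor speed
rotor_radius_m = 0.03   # assumed rotor tip radius
gap_m = 0.001           # assumed rotor-stator clearance

omega = 2.0 * math.pi * rpm / 60.0           # angular speed, rad/s
shear_rate = omega * rotor_radius_m / gap_m  # nominal shear rate, 1/s
print(f"nominal shear rate ≈ {shear_rate:.0f} 1/s")
```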

  12. Combining Generalized Renewal Processes with Non-Extensive Entropy-Based q-Distributions for Reliability Applications

    Directory of Open Access Journals (Sweden)

    Isis Didier Lins

    2018-03-01

    Full Text Available The Generalized Renewal Process (GRP is a probabilistic model for repairable systems that can represent the usual states of a system after a repair: as new, as old, or in a condition between new and old. It is often coupled with the Weibull distribution, widely used in the reliability context. In this paper, we develop novel GRP models based on probability distributions that stem from the Tsallis’ non-extensive entropy, namely the q-Exponential and the q-Weibull distributions. The q-Exponential and Weibull distributions can model decreasing, constant or increasing failure intensity functions. However, the power law behavior of the q-Exponential probability density function for specific parameter values is an advantage over the Weibull distribution when adjusting data containing extreme values. The q-Weibull probability distribution, in turn, can also fit data with bathtub-shaped or unimodal failure intensities in addition to the behaviors already mentioned. Therefore, the q-Exponential-GRP is an alternative for the Weibull-GRP model and the q-Weibull-GRP generalizes both. The method of maximum likelihood is used for their parameters’ estimation by means of a particle swarm optimization algorithm, and Monte Carlo simulations are performed for the sake of validation. The proposed models and algorithms are applied to examples involving reliability-related data of complex systems and the obtained results suggest GRP plus q-distributions are promising techniques for the analyses of repairable systems.
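As a quick illustration of the q-Exponential distribution mentioned above, the sketch below implements its density and checks numerically that it integrates to one; for 1 < q < 2 the tail decays as a power law, which is the property the authors exploit for data with extreme values. The parameter values are illustrative and not taken from the paper:

```python
import numpy as np

def q_exponential_pdf(x, q, lam):
    """q-exponential density f(x) = (2-q)*lam*[1-(1-q)*lam*x]^(1/(1-q));
    for 1 < q < 2 it is supported on x >= 0 with a power-law tail."""
    base = 1.0 - (1.0 - q) * lam * x
    return (2.0 - q) * lam * np.maximum(base, 0.0) ** (1.0 / (1.0 - q))

q, lam = 1.2, 1.0                       # illustrative parameters only
x = np.linspace(0.0, 500.0, 200_001)
f = q_exponential_pdf(x, q, lam)
dx = x[1] - x[0]
total = float(np.sum((f[:-1] + f[1:]) * 0.5) * dx)  # trapezoid rule
print(f"pdf integrates to ~{total:.4f}")
```

At q → 1 the density reduces to the ordinary exponential, which is one way to see why the q-Exponential-GRP generalizes the exponential case.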

  13. Medication errors in residential aged care facilities: a distributed cognition analysis of the information exchange process.

    Science.gov (United States)

    Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna

    2013-05-01

    Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight about the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors namely: (1) design of medication charts which complicates order processing and record keeping (2) lack of coordination mechanisms between participants which results in misalignment of local practices (3) reliance on restricted communication bandwidth channels mainly telephone and fax which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange. Understanding

  14. Simulation of an adsorption solar cooling system

    International Nuclear Information System (INIS)

    Hassan, H.Z.; Mohamad, A.A.; Bennacer, R.

    2011-01-01

    A more realistic theoretical simulation model for a tubular solar adsorption refrigerating system using an activated carbon-methanol (AC/M) pair has been introduced. The mathematical model represents the heat and mass transfer inside the adsorption bed, the condenser, and the evaporator. The simulation technique takes into account the variations of ambient temperature and solar radiation along the day. Furthermore, the local pressure and local thermal conductivity variations in space and time inside the tubular reactor are investigated as well. A C++ computer program is written to solve the proposed numerical model using the finite difference method. The developed program covers the operation of all the system components along the cycle time. The performance of the tubular reactor, the condenser, and the evaporator has been discussed. A time allocation chart and switching operations for the solar refrigeration system processes are illustrated as well. The case studied has a 1 m² flat plate solar collector integrated with 20 stainless steel tubes containing the AC/M pair, each tube having a 5 cm outer diameter. In addition, the condenser pressure is set to 54.2 kPa. It has been found that the solar coefficient of performance and the specific cooling power of the system are 0.211 and 2.326, respectively. In addition, the pressure distribution inside the adsorption bed has been found to be nearly uniform, varying only with time. Furthermore, the AC/M thermal conductivity is shown to be constant in both space and time.
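The reported solar COP follows from the usual definition: useful cooling produced at the evaporator divided by the solar energy incident on the collector. The sketch below illustrates the arithmetic with assumed inputs; the daily insolation and cooling figures are hypothetical, and only the 1 m² collector area comes from the abstract:

```python
# Solar COP of an adsorption chiller over one daily cycle.
# Inputs below are illustrative assumptions, not the paper's data.
area_m2 = 1.0                # flat-plate collector area (as in the paper)
daily_insolation_MJ = 20.0   # assumed daily solar energy per m^2
q_evaporator_MJ = 4.2        # assumed useful cooling over the cycle

scop = q_evaporator_MJ / (area_m2 * daily_insolation_MJ)
print(f"solar COP = {scop:.3f}")  # same order as the reported 0.211
```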

  15. Spatial patterns in the distribution of kimberlites: relationship to tectonic processes and lithosphere structure

    DEFF Research Database (Denmark)

    Chemia, Zurab; Artemieva, Irina; Thybo, Hans

    2015-01-01

    of kimberlite melts through the lithospheric mantle, which forms the major pipe. Stage 2 (second-order process) begins when the major pipe splits into daughter sub-pipes (tree-like pattern) at crustal depths. We apply cluster analysis to the spatial distribution of all known kimberlite fields with the goal...

  16. Exposure characteristics of positive tone electron beam resist containing p-chloro-α-methylstyrene

    Science.gov (United States)

    Ochiai, Shunsuke; Takayama, Tomohiro; Kishimura, Yukiko; Asada, Hironori; Sonoda, Manae; Iwakuma, Minako; Hoshino, Ryoichi

    2017-07-01

    The positive tone resist consisting of methyl-α-chloroacrylate (ACM) and α-methylstyrene (MS) has higher sensitivity and higher dry etching resistance than poly(methyl methacrylate) (PMMA) due to the presence of a chlorine atom and a phenyl group. Copolymers consisting of ACM and p-chloro-α-methylstyrene (PCMS), in which an additional chlorine atom is introduced into the phenyl group compared with the ACM-MS resist, are synthesized and their exposure characteristics are investigated. The ACM-PCMS resist with an ACM:PCMS composition ratio of 49:51 exhibits high solubility in the amyl acetate developer. As the ACM composition ratio increases, the solubility of the ACM-PCMS resist is suppressed. In both ACM-PCMS and ACM-MS resists, the sensitivity decreases while the contrast increases with increasing ACM ratio. When the composition ratio of ACM:PCMS is 69:31, a 100/100 nm line-and-space pattern with a good shape is obtained at 120 μC/cm², which is comparable to the required exposure dose for the conventional ACM-MS resist with ACM:MS = 50:50. Dry etching resistance of ACM-PCMS resists in Ar gas is also presented.
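The sensitivity/contrast trade-off described above is commonly quantified from the resist's dose-response curve, with contrast γ = 1/log10(D100/D0). The doses below are hypothetical, purely to show the calculation; they are not measured values from this paper:

```python
import math

def resist_contrast(d0, d100):
    """Contrast of a positive-tone resist from its dose-response curve:
    gamma = 1 / log10(D100 / D0), where D0 is the largest dose leaving
    full thickness and D100 the smallest dose fully clearing the resist."""
    return 1.0 / math.log10(d100 / d0)

# Illustrative doses in uC/cm^2 (assumed, not from the paper).
d0, d100 = 60.0, 120.0
gamma = resist_contrast(d0, d100)
print(f"contrast gamma = {gamma:.2f}")
```

A steeper dose-response curve (D100 closer to D0) gives a higher γ, matching the qualitative trend the authors report as the ACM ratio increases.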

  17. Distributed dendritic processing facilitates object detection: a computational analysis on the visual system of the fly.

    Science.gov (United States)

    Hennig, Patrick; Möller, Ralf; Egelhaaf, Martin

    2008-08-28

    Detecting objects is an important task when moving through a natural environment. Flies, for example, may land on salient objects or may avoid collisions with them. The neuronal ensemble of Figure Detection cells (FD-cells) in the visual system of the fly is likely to be involved in controlling these behaviours, as these cells are more sensitive to objects than to extended background structures. Until now the computations in the presynaptic neuronal network of FD-cells and, in particular, the functional significance of the experimentally established distributed dendritic processing of excitatory and inhibitory inputs is not understood. We use model simulations to analyse the neuronal computations responsible for the preference of FD-cells for small objects. We employed a new modelling approach which allowed us to account for the spatial spread of electrical signals in the dendrites while avoiding detailed compartmental modelling. The models are based on available physiological and anatomical data. Three models were tested each implementing an inhibitory neural circuit, but differing by the spatial arrangement of the inhibitory interaction. Parameter optimisation with an evolutionary algorithm revealed that only distributed dendritic processing satisfies the constraints arising from electrophysiological experiments. In contrast to a direct dendro-dendritic inhibition of the FD-cell (Direct Distributed Inhibition model), an inhibition of its presynaptic retinotopic elements (Indirect Distributed Inhibition model) requires smaller changes in input resistance in the inhibited neurons during visual stimulation. Distributed dendritic inhibition of retinotopic elements as implemented in our Indirect Distributed Inhibition model is the most plausible wiring scheme for the neuronal circuit of FD-cells. This microcircuit is computationally similar to lateral inhibition between the retinotopic elements. Hence, distributed inhibition might be an alternative explanation of
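The computational similarity to lateral inhibition noted in the conclusion can be sketched with a toy rate model (not the authors' dendritic model): each retinotopic input is suppressed by inhibition pooled over the whole field, so an extended pattern inhibits itself more strongly than a small object. All gains and stimulus sizes below are illustrative assumptions:

```python
import numpy as np

def fd_response(stimulus, inhibition_gain=2.0):
    """Toy rate model of an FD-cell: each retinotopic input is reduced
    by spatially pooled inhibition before summation, so extended
    patterns suppress themselves more than small objects do."""
    pooled_inhibition = inhibition_gain * stimulus.mean()
    return float(np.maximum(stimulus - pooled_inhibition, 0.0).sum())

field = np.zeros(100)
small_object = field.copy(); small_object[48:53] = 1.0   # 5-unit object
wide_pattern = field.copy(); wide_pattern[20:80] = 1.0   # 60-unit pattern

r_small = fd_response(small_object)
r_wide = fd_response(wide_pattern)
print(f"small object: {r_small:.2f}, extended pattern: {r_wide:.2f}")
```

This reproduces only the size preference, not the dendritic signal spread that the paper's compartmental-style models capture.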

  18. Green Software Engineering Adaption In Requirement Elicitation Process

    Directory of Open Access Journals (Sweden)

    Umma Khatuna Jannat

    2015-08-01

    Full Text Available A recent line of technology research investigates environmental concerns in software, namely green software systems. It is now widely accepted that green software principles can fit every process of software development, including requirements elicitation. Software companies today use requirements elicitation techniques almost universally, because the process plays an increasingly important role in software development, and most elicitation processes are currently improved by means of dedicated techniques and tools. This research therefore proposes adapting green software engineering to existing elicitation techniques and recommends suitable actions for improvement. The research involved qualitative data: a keyword search of IEEE, ACM, Springer, Elsevier, Google Scholar, Scopus and Wiley for articles published between 2010 and 2016. The literature review identified 15 traditional requirements elicitation factors and 23 improvement techniques for conversion to green engineering. Finally, the paper includes a short review of the literature, a description of the grounded theory applied, and a discussion of issues identified around the need for requirements elicitation improvement techniques.

  19. Interevent Time Distribution of Renewal Point Process, Case Study: Extreme Rainfall in South Sulawesi

    Science.gov (United States)

    Sunusi, Nurtiti

    2018-03-01

    The study of the time distribution of occurrences of extreme rain phenomena plays a very important role in the analysis and weather forecasting of an area. The timing of extreme rainfall is difficult to predict because its occurrence is random. This paper aims to determine the interevent time distribution of extreme rain events and the minimum waiting time until the occurrence of the next extreme event through a point process approach. The phenomenon of extreme rain events over a given period of time follows a renewal process in which the time between events is a random variable τ. The distribution of the random variable τ is assumed to be Pareto, Log Normal, or Gamma. To estimate model parameters, a moment method is used. Consider R_t, the elapsed time since the last extreme rain event at a given location: if there are no extreme rain events up to t_0, there will be an opportunity for an extreme rainfall event in (t_0, t_0 + δt_0). Furthermore, from the three models reviewed, the minimum waiting time until the next extreme rainfall is determined. The result shows that the Log Normal model is better than the Pareto and Gamma models for predicting the next extreme rainfall in South Sulawesi, while the Pareto model cannot be used.
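The moment method used in the paper equates sample moments with the model's theoretical moments. For the Gamma case this gives closed-form estimates, sketched below on synthetic interevent times; the parameter values are illustrative and not fitted to the South Sulawesi data:

```python
import numpy as np

def gamma_moment_fit(samples):
    """Method-of-moments fit of a Gamma(shape k, scale theta) model to
    interevent times: k = mean^2 / var, theta = var / mean."""
    m, v = samples.mean(), samples.var()
    return m * m / v, v / m

rng = np.random.default_rng(42)
true_k, true_theta = 2.0, 3.0          # illustrative "true" parameters
waits = rng.gamma(true_k, true_theta, size=50_000)  # synthetic waits

k_hat, theta_hat = gamma_moment_fit(waits)
print(f"k ≈ {k_hat:.2f}, theta ≈ {theta_hat:.2f}")
```

Analogous moment matching yields the Pareto and Log Normal estimates compared in the paper.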

  20. New technology for fabrication of multifilament NbTi composite

    International Nuclear Information System (INIS)

    Yang, Y.K.; Ma, W.M.; Peng, W.N.

    1988-01-01

    An explosive bonding-rolling-drawing process has been developed to produce NbTi multifilamentary superconductors. Multifilamentary wires 0.5 mm in diameter and 2.5 km in length, with 199 filaments of 25 μm diameter, have been produced using this process. The critical current density (Jc) reaches 1.94×10⁵ A/cm² at 4.2 K and 5 T for short samples, and 4.9×10⁴ A/cm² at 4.2 K and 8.5 T for the magnet.

  1. A model for the distribution channels planning process

    NARCIS (Netherlands)

    Neves, M.F.; Zuurbier, P.; Campomar, M.C.

    2001-01-01

    Research of existing literature reveals some models (sequence of steps) for companies that want to plan distribution channels. None of these models uses strong contributions from transaction cost economics, bringing a possibility to elaborate on a "distribution channels planning model", with these

  2. Simultaneous measurement of current and temperature distributions in a proton exchange membrane fuel cell during cold start processes

    International Nuclear Information System (INIS)

    Jiao Kui; Alaefour, Ibrahim E.; Karimi, Gholamreza; Li Xianguo

    2011-01-01

    Cold start is critical to the commercialization of proton exchange membrane fuel cell (PEMFC) in automotive applications. Dynamic distributions of current and temperature in PEMFC during various cold start processes determine the cold start characteristics, and are required for the optimization of design and operational strategy. This study focuses on an investigation of the cold start characteristics of a PEMFC through the simultaneous measurements of current and temperature distributions. An analytical model for quick estimate of purging duration is also developed. During the failed cold start process, the highest current density is initially near the inlet region of the flow channels, then it moves downstream, reaching the outlet region eventually. Almost half of the cell current is produced in the inlet region before the cell current peaks, and the region around the middle of the cell has the best survivability. These two regions are therefore more important than other regions for successful cold start through design and operational strategy, such as reducing the ice formation and enhancing the heat generation in these two regions. The evolution of the overall current density distribution over time remains similar during the successful cold start process; the current density is the highest near the flow channel inlets and generally decreases along the flow direction. For both the failed and the successful cold start processes, the highest temperature is initially in the flow channel inlet region, and is then around the middle of the cell after the overall peak current density is reached. The ice melting and liquid formation during the successful cold start process have negligible influence on the general current and temperature distributions.

  3. A trade-off between local and distributed information processing associated with remote episodic versus semantic memory.

    Science.gov (United States)

    Heisz, Jennifer J; Vakorin, Vasily; Ross, Bernhard; Levine, Brian; McIntosh, Anthony R

    2014-01-01

    Episodic memory and semantic memory produce very different subjective experiences yet rely on overlapping networks of brain regions for processing. Traditional approaches for characterizing functional brain networks emphasize static states of function and thus are blind to the dynamic information processing within and across brain regions. This study used information theoretic measures of entropy to quantify changes in the complexity of the brain's response as measured by magnetoencephalography while participants listened to audio recordings describing past personal episodic and general semantic events. Personal episodic recordings evoked richer subjective mnemonic experiences and more complex brain responses than general semantic recordings. Critically, we observed a trade-off between the relative contribution of local versus distributed entropy, such that personal episodic recordings produced relatively more local entropy whereas general semantic recordings produced relatively more distributed entropy. Changes in the relative contributions of local and distributed entropy to the total complexity of the system provides a potential mechanism that allows the same network of brain regions to represent cognitive information as either specific episodes or more general semantic knowledge.
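A minimal sketch of the kind of complexity measure discussed above: histogram-based Shannon entropy of a response signal, with a noise-like response scoring higher than a flat one. This is a simplification for illustration only; the paper's analysis applies entropy measures to MEG source signals, not this exact estimator:

```python
import numpy as np

def shannon_entropy(signal, bins=32):
    """Histogram-based Shannon entropy (in bits) of a 1-D response; a
    crude stand-in for the complexity measures used on the MEG data."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(7)
flat_response = np.zeros(4096)          # minimal-complexity response
rich_response = rng.normal(size=4096)   # noise-like, high-complexity

print(f"flat: {shannon_entropy(flat_response):.2f} bits")
print(f"rich: {shannon_entropy(rich_response):.2f} bits")
```

The local-versus-distributed trade-off in the paper then comes from partitioning such entropy across single sources versus interactions between sources.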

  4. 40 CFR Appendix A to Subpart M of... - Interpretive Rule Governing Roof Removal Operations

    Science.gov (United States)

    2010-07-01

    ...-containing material (ACM) is material containing more than one percent asbestos as determined using the... NESHAP classifies ACM as either “friable” or “nonfriable”. Friable ACM is ACM that, when dry, can be crumbled, pulverized or reduced to powder by hand pressure. Nonfriable ACM is ACM that, when dry, cannot be...

  5. Novel derivatives of aclacinomycin A block cancer cell migration through inhibition of farnesyl transferase.

    Science.gov (United States)

    Magi, Shigeyuki; Shitara, Tetsuo; Takemoto, Yasushi; Sawada, Masato; Kitagawa, Mitsuhiro; Tashiro, Etsu; Takahashi, Yoshikazu; Imoto, Masaya

    2013-03-01

    In the course of screening for an inhibitor of farnesyl transferase (FTase), we identified two compounds, N-benzyl-aclacinomycin A (ACM) and N-allyl-ACM, which are new derivatives of ACM. N-benzyl-ACM and N-allyl-ACM inhibited FTase activity with IC50 values of 0.86 and 2.93 μM, respectively. Not only ACM but also C-10 epimers of each ACM derivative failed to inhibit FTase. The inhibition of FTase by N-benzyl-ACM and N-allyl-ACM seems to be specific, because these two compounds did not inhibit geranylgeranyltransferase or geranylgeranyl pyrophosphate (GGPP) synthase up to 100 μM. In cultured A431 cells, N-benzyl-ACM and N-allyl-ACM also blocked both the membrane localization of H-Ras and activation of the H-Ras-dependent PI3K/Akt pathway. In addition, they inhibited epidermal growth factor (EGF)-induced migration of A431 cells. Thus, N-benzyl-ACM and N-allyl-ACM inhibited EGF-induced migration of A431 cells by inhibiting the farnesylation of H-Ras and subsequent H-Ras-dependent activation of the PI3K/Akt pathway.

  6. High current density in bulk YBa2Cu3Ox superconductor

    International Nuclear Information System (INIS)

    Salama, K.; Selvamanickam, V.; Gao, L.; Sun, K.

    1989-01-01

    A liquid phase processing method for the fabrication of bulk YBa2Cu3Ox superconductors with large current carrying capacity has been developed. Slow cooling through the peritectic transformation (1030–980 °C) has been shown to control the microstructure of these superconductors. A cooling rate of 1 °C/h in this temperature range has yielded a microstructure with long plate-type, thick grains oriented over a wide area. Current density up to 18 500 A/cm² has been obtained by continuous direct current measurements, in excess of 62 000 A/cm² with pulse current of 10 ms duration, and 75 000 A/cm² using 1 ms pulses. The strong magnetic field dependence observed in sintered bulk 1-2-3 superconductors is also minimized to a large extent, with a current density in excess of 37 000 A/cm² obtained in a field of 6000 G

  7. 78 FR 15014 - Change in Bank Control Notices; Acquisitions of Shares of a Bank or Bank Holding Company

    Science.gov (United States)

    2013-03-08

    ... Meyer, Jr., individually and as trustee of the: ACM, Jr. 2010 3Y GRAT A, the ACM, Jr. 2010 3Y GRAT B, the ACM, Jr. 2010 3Y GRAT C, the ACM, Jr. 2013 2Y GRAT A, the ACM, Jr. 2013 2Y GRAT B, the ACM, Jr. 2013 2Y GRAT C, the ACM, Jr. 2013 2Y GRAT D, the Katharine Clara Kimmel Non- Exempt Trust C/U Elisabeth...

  8. Process-based distributed modeling approach for analysis of sediment dynamics in a river basin

    Directory of Open Access Journals (Sweden)

    M. A. Kabir

    2011-04-01

    Full Text Available Modeling of sediment dynamics for developing best management practices of reducing soil erosion and of sediment control has become essential for sustainable management of watersheds. Precise estimation of sediment dynamics is very important since soils are a major component of enormous environmental processes and sediment transport controls lake and river pollution extensively. Different hydrological processes govern sediment dynamics in a river basin, which are highly variable in spatial and temporal scales. This paper presents a process-based distributed modeling approach for analysis of sediment dynamics at river basin scale by integrating sediment processes (soil erosion, sediment transport and deposition) with an existing process-based distributed hydrological model. In this modeling approach, the watershed is divided into an array of homogeneous grids to capture the catchment spatial heterogeneity. Hillslope and river sediment dynamic processes have been modeled separately and linked to each other consistently. Water flow and sediment transport at different land grids and river nodes are modeled using one dimensional kinematic wave approximation of Saint-Venant equations. The mechanics of sediment dynamics are integrated into the model using representative physical equations after a comprehensive review. The model has been tested on river basins in two different hydro climatic areas, the Abukuma River Basin, Japan and Latrobe River Basin, Australia. Sediment transport and deposition are modeled using Govers transport capacity equation. All spatial datasets, such as, Digital Elevation Model (DEM), land use and soil classification data, etc., have been prepared using raster Geographic Information System (GIS) tools. The results of relevant statistical checks (Nash-Sutcliffe efficiency and R-squared value) indicate that the model simulates basin hydrology and its associated sediment dynamics reasonably well. This paper presents the
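The one-dimensional kinematic wave approximation mentioned above reduces the Saint-Venant equations to a single advection equation for discharge. A minimal explicit upwind sketch of that scheme follows; the channel geometry, wave celerity, and time step are illustrative assumptions, not the configuration used for the Abukuma or Latrobe basins:

```python
import numpy as np

def kinematic_wave_step(q, c, dx, dt, lateral=0.0):
    """One explicit upwind step of the kinematic-wave equation
    dQ/dt + c*dQ/dx = q_lat, a common 1-D simplification of the
    Saint-Venant equations for hillslope/channel routing."""
    q_new = q.copy()                       # upstream node held fixed
    q_new[1:] = q[1:] - c * dt / dx * (q[1:] - q[:-1]) + lateral * dt
    return q_new

# Illustrative setup: a discharge pulse advecting down a 100-node channel.
dx, dt, c = 100.0, 10.0, 2.0   # m, s, m/s; CFL number = c*dt/dx = 0.2
q = np.exp(-0.5 * ((np.arange(100) - 20) / 4.0) ** 2)  # initial pulse

for _ in range(200):           # 2000 s of routing
    q = kinematic_wave_step(q, c, dx, dt)

print(f"pulse peak now near node {int(q.argmax())}")
```

With celerity 2 m/s over 2000 s the pulse should travel about 40 cells downstream; the first-order upwind scheme also diffuses the pulse, which is the usual price of its stability.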

  9. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    Science.gov (United States)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we

  10. IOC-UNEP review meeting on oceanographic processes of transport and distribution of pollutants in the sea

    International Nuclear Information System (INIS)

    1991-01-01

    The IOC-UNEP Review Meeting on Oceanographic Processes of Transfer and Distribution of Pollutants in the Sea was opened at the Ruder Boskovic Institute, Zagreb, Yugoslavia on Monday, 15 May 1989. Papers presented at the meeting dealt with physical and geochemical processes in sea-water and sediment in transport mixing and dispersal of pollutants. The importance of mesoscale eddies and gyres in the open sea, wind-driven currents and upwelling events in the coastal zone, and thermohaline processes in semi-enclosed bays and estuaries was recognized. There is strong evidence that non-local forcing can drive circulation in the coastal area. Concentrations, horizontal and vertical distributions and transport of pollutants were investigated and presented for a number of coastal areas. Riverine and atmospheric inputs of different pollutants to the western Mediterranean were discussed. Reports on two on-going nationally/internationally co-ordinated projects (MEDMODEL, EROS 2000) were presented. Discussions during the meeting enabled an exchange of ideas between specialists in different disciplines to be made. It is expected that this will promote the future interdisciplinary approach in this field. The meeting recognized the importance of physical oceanographic studies in investigating the transfer and distribution of pollutants in the sea and in view of the importance of the interdisciplinary approach and bilateral and/or multilateral co-operation a number of recommendations were adopted

  11. A Cellular Automata Approach to Computer Vision and Image Processing.

    Science.gov (United States)

    1980-09-01

    the ACM, vol. 15, no. 9, pp. 827-837. [Duda and Hart] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley, New York, 1973...Center TR-738, 1979. [Farley] Arthur M. Farley and Andrzej Proskurowski, "Gossiping in Grid Graphs", University of Oregon Computer Science Department CS-TR

  12. Radon: Chemical and physical processes associated with its distribution

    International Nuclear Information System (INIS)

    Castleman, A.W. Jr.

    1992-01-01

    Assessing the mechanisms which govern the distribution, fate, and pathways of entry into biological systems, as well as the ultimate hazards associated with the radon progeny and their secondary reaction products, depends on knowledge of their chemistry. Our studies are directed toward developing fundamental information which will provide a basis for modeling studies that are requisite in obtaining a complete picture of growth, attachment to aerosols, and transport to the bioreceptor and ultimate incorporation within. Our program is divided into three major areas of research. These include measurement of the determination of their mobilities, study of the role of radon progeny ions in affecting reactions, including study of the influence of the degree of solvation (clustering), and examination of the important secondary reaction products, with particular attention to processes leading to chemical conversion of either the core ions or the ligands as a function of the degree of clustering

  13. Lifetime-Based Memory Management for Distributed Data Processing Systems

    DEFF Research Database (Denmark)

    Lu, Lu; Shi, Xuanhua; Zhou, Yongluan

    2016-01-01

    In-memory caching of intermediate data and eager combining of data in shuffle buffers have been shown to be very effective in minimizing the re-computation and I/O cost in distributed data processing systems like Spark and Flink. However, it has also been widely reported that these techniques would … create a large amount of long-living data objects in the heap, which may quickly saturate the garbage collector, especially when handling a large dataset, and hence would limit the scalability of the system. To eliminate this problem, we propose a lifetime-based memory management framework, which … 1) to reduce the garbage collection time by up to 99.9%, 2) to achieve up to 22.7x speedup in terms of execution time in cases without data spilling and 41.6x speedup in cases with data spilling, and 3) to consume up to 46.6% less memory.
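The core idea of lifetime-based memory management — intermediate objects produced by the same processing stage tend to die together, so they can be reclaimed as a group instead of being traced object-by-object by the garbage collector — can be illustrated with a small sketch. The `RegionAllocator` class and stage names below are hypothetical; the paper's actual framework operates inside the JVM heap, not at this level.

```python
# Illustrative sketch (not the paper's system): objects are tagged with the
# stage that produced them, and a finished stage's whole region is dropped in
# one step rather than having each object traced by a garbage collector.

class RegionAllocator:
    def __init__(self):
        self.regions = {}          # stage name -> list of owned objects

    def allocate(self, stage, obj):
        """Record obj as owned by the given stage's region."""
        self.regions.setdefault(stage, []).append(obj)
        return obj

    def release(self, stage):
        """Free every object of a finished stage at once; return the count."""
        return len(self.regions.pop(stage, []))

    def live_objects(self):
        return sum(len(objs) for objs in self.regions.values())

alloc = RegionAllocator()
# stage 1: shuffle buffers filled with short-lived intermediate records
for i in range(1000):
    alloc.allocate("shuffle", {"key": i % 10, "value": i})
# stage 2: aggregated results, with a different (longer) lifetime
totals = alloc.allocate("aggregate",
                        {k: sum(i for i in range(1000) if i % 10 == k)
                         for k in range(10)})
freed = alloc.release("shuffle")   # whole shuffle region released together
print(freed, alloc.live_objects())
```

The point of the sketch is the asymmetry: a thousand short-lived records disappear in one constant-time region release, while the single long-lived aggregate survives in its own region.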

  14. Hierarchical species distribution models

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.

    2016-01-01

    Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.
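The claim that count and presence-absence data can be conceptualized as arising from an underlying point process can be illustrated with a small simulation: an inhomogeneous Poisson process is generated by Lewis-Shedler thinning, and both data types are then read off the same realization. The intensity surface below is a hypothetical example.

```python
# Illustrative sketch: simulate an inhomogeneous Poisson point process on the
# unit square by thinning, then derive a plot count and a presence-absence
# datum from the same realization. The intensity function is hypothetical.

import math
import random

random.seed(42)

def intensity(x, y, lam_max=100.0):
    """Hypothetical intensity surface: more individuals toward (1, 1)."""
    return lam_max * math.exp(-((1 - x) ** 2 + (1 - y) ** 2))

def simulate(lam_max=100.0):
    """Lewis-Shedler thinning: generate a homogeneous process at rate
    lam_max, keep each point with probability intensity / lam_max."""
    # draw k ~ Poisson(lam_max * area) by CDF inversion (stdlib only)
    u, p, k = random.random(), math.exp(-lam_max), 0
    cum = p
    while u > cum:
        k += 1
        p *= lam_max / k
        cum += p
    points = []
    for _ in range(k):
        x, y = random.random(), random.random()
        if random.random() < intensity(x, y, lam_max) / lam_max:
            points.append((x, y))
    return points

pts = simulate()
# derived data types from the same point-process realization:
count_in_plot = sum(1 for x, y in pts if x > 0.5 and y > 0.5)   # count datum
presence = count_in_plot > 0                                     # presence-absence
print(len(pts), count_in_plot, presence)
```

This is the hierarchical view in miniature: the point pattern is the latent process, and the sampling design (a plot, a detection rule) determines which summary of it becomes the observed data.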

  15. High critical currents in Y-Ba-Cu-O superconductors

    International Nuclear Information System (INIS)

    Jin, S.; Tiefel, T.H.; Sherwood, R.C.; Davis, M.E.; van Dover, R.B.; Kammlott, G.W.; Fastnacht, R.A.; Keith, H.D.

    1988-01-01

    Melt-textured growth of polycrystalline YBa₂Cu₃O₇₋δ superconductor using directional solidification created an essentially 100% dense structure consisting of long, needle- or plate-shaped crystals preferentially aligned parallel to the a-b conduction plane. The new microstructure, which completely replaces the previous granular and random structure in the sintered precursor, exhibits dramatically improved transport Jc values at 77 K of ∼17 000 A/cm² in zero field and ∼4000 A/cm² at H = 1 T (as compared to ∼500 and ∼1 A/cm², respectively, for the as-sintered structure), with the severe field dependence of Jc ("weak-link" problem) no longer evident in the new melt-textured material. The improvement in Jc is attributed to the combined effects of densification, alignment of crystals, and formation of cleaner grain boundaries. Microstructure and distribution of various phases present in the melt-textured material are discussed in relation to the superconducting properties.

  16. Extending the LWS Data Environment: Distributed Data Processing and Analysis

    Science.gov (United States)

    Narock, Thomas

    2005-01-01

    The final stages of this work saw changes to the original framework, as well as the completion and integration of several data processing services. Initially, it was thought that a peer-to-peer architecture was necessary to make this work possible. The peer-to-peer architecture provided many benefits, including the dynamic discovery of new services that would be continually added. A prototype example was built and, while it showed promise, a major disadvantage was that it was not easily integrated into the existing data environment. While the peer-to-peer system worked well for finding and accessing distributed data processing services, its use was limited by the difficulty of calling it from existing tools and services. After collaborations with members of the data community, it was determined that our data processing system was of high value and that a new interface should be pursued in order for the community to take full advantage of it. As such, the framework was modified from a peer-to-peer architecture to a more traditional web service approach. Following this change, multiple data processing services were added, including coordinate transformations and subsetting of data. A collaboration with the Virtual Heliospheric Observatory (VHO) assisted with integrating the new architecture into the VHO. This allows anyone using the VHO to search for data, and then pass that data through our processing services prior to downloading it. As a second attempt at demonstrating the new system, a collaboration was established with the Collaborative Sun Earth Connector (CoSEC) group at Lockheed Martin. This group is working on a graphical user interface to the Virtual Observatories and data processing software. The intent is to provide a high-level, easy-to-use graphical interface that will allow access to the existing Virtual Observatories and data processing services from one convenient application. Working with the CoSEC group we provided access to our data

  17. Comparing performances of Clements, Box-Cox, and Johnson methods with Weibull distributions for assessing process capability

    Energy Technology Data Exchange (ETDEWEB)

    Senvar, O.; Sennaroglu, B.

    2016-07-01

    This study examines the Clements’ Approach (CA), Box-Cox transformation (BCT), and Johnson transformation (JT) methods for process capability assessment of Weibull-distributed data with different parameters, to figure out the effects of tail behaviour on process capability and to compare their estimation performances in terms of accuracy and precision. Design/methodology/approach: The process performance index (PPI) Ppu is used for process capability analysis (PCA) because the comparisons are performed on generated Weibull data without subgroups. Box plots, descriptive statistics, the root-mean-square deviation (RMSD), which is used as a measure of error, and a radar chart are used together to evaluate the performances of the methods. In addition, the bias of the estimated values is important, as is the efficiency measured by the mean square error. In this regard, the Relative Bias (RB) and the Relative Root Mean Square Error (RRMSE) are also considered. Findings: The results reveal that the performance of a method depends on its capability to fit the tail behavior of the Weibull distribution and on the targeted values of the PPIs. It is observed that the effect of tail behavior is more significant when the process is more capable. Research limitations/implications: Some other methods, such as the Weighted Variance method, which also gives good results, were also evaluated. However, we later realized that including them would be confusing for consistent interpretation of the comparisons between the methods... (Author)
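The percentile substitution at the heart of Clements-style capability estimation for skewed data can be sketched directly: the one-sided index is formed from the distribution's median and its 99.865th percentile instead of the normal-theory mean and 3-sigma distance. This is an illustrative sketch under hypothetical parameter values, not the study's full CA/BCT/JT comparison.

```python
# Sketch of the percentile idea behind Clements' approach for a one-sided
# process performance index with Weibull data: Ppu is taken as
# (USL - median) / (99.865th percentile - median), replacing the 3-sigma
# distance of the normal-theory index. All parameter values are hypothetical.

import math

def weibull_quantile(p, shape, scale):
    """Inverse CDF of the two-parameter Weibull distribution."""
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

def ppu_clements(usl, shape, scale):
    median = weibull_quantile(0.5, shape, scale)
    upper = weibull_quantile(0.99865, shape, scale)  # normal-equivalent +3-sigma point
    return (usl - median) / (upper - median)

# A heavier right tail (lower shape parameter) hurts the index for the same USL:
for shape in (1.0, 2.0, 3.5):
    print(shape, round(ppu_clements(usl=4.0, shape=shape, scale=1.0), 3))
```

The loop makes the paper's qualitative point visible: with the specification limit fixed, the estimated capability depends strongly on how far the Weibull tail stretches.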

  18. Further improvement in ABWR (part-4) open distributed plant process computer system

    International Nuclear Information System (INIS)

    Makino, Shigenori; Hatori, Yoshinori

    1999-01-01

    In the nuclear industry of Japan, the electric power companies have promoted plant process computer (PPC) technology for nuclear power plants (NPPs). When the PPC was first introduced to NPPs, large-scale customized computers were applied because of very tight requirements such as high reliability and high-speed processing. In the computer field, the large commercial market has driven remarkable progress in engineering workstation (EWS) and personal computer (PC) technology. Moreover, because data transmission technology has been progressing at the same time, worldwide computer networks have been established. Thanks to the progress of both technologies, distributed computer systems can now be built at a reasonable price, and Tokyo Electric Power Company (TEPCO) is therefore trying to apply them to the PPC of NPPs. (author)

  19. Distributed plot-making

    DEFF Research Database (Denmark)

    Jensen, Lotte Groth; Bossen, Claus

    2016-01-01

    different socio-technical systems (paper-based and electronic patient records). Drawing on the theory of distributed cognition and narrative theory, primarily inspired by the work done within health care by Cheryl Mattingly, we propose that the creation of overview may be conceptualised as ‘distributed plot-making’. Distributed cognition focuses on the role of artefacts, humans and their interaction in information processing, while narrative theory focuses on how humans create narratives through plot construction. Hence, the concept of distributed plot-making highlights the distribution of information processing…

  20. Fair fund distribution for a municipal incinerator using GIS-based fuzzy analytic hierarchy process.

    Science.gov (United States)

    Chang, Ni-Bin; Chang, Ying-Hsi; Chen, Ho-Wen

    2009-01-01

    Burning municipal solid waste (MSW) can generate energy and reduce the waste volume, which delivers benefits to society through resource conservation. But current practices are not sustainable, because the associated environmental impacts of waste incineration on urbanized regions have been a long-standing concern in local communities. Public reluctance to accept incinerators as typical utilities often results in an intensive debate concerning how much welfare is lost by residents living in the vicinity of those incinerators. As the measure of welfare change with respect to environmental quality constraints near these incinerators remains critical, new arguments about how to allocate a fair fund among affected communities have become a focal point in environmental management. Given that most county fair fund rules allow a great deal of flexibility for redistribution, little is known about what type of methodology may be a good fit to determine the distribution of such a fair fund under uncertainty. This paper demonstrates a system-based approach that supports fair fund distribution with respect to residents' possible claims for damages due to the installation of a new incinerator. A case study using an integrated geographic information system (GIS) and fuzzy analytic hierarchy process (FAHP) to find the most appropriate distribution strategy between two neighboring towns in Taipei County, Taiwan, demonstrates the application potential. Participants in determining the use of the fair fund also follow a highly democratic procedure, in which all stakeholders involved eventually express a high level of satisfaction with the results facilitating the final decision-making process. It ensures that plans for the distribution of such a fair fund were carefully thought out and justified with a multi-faceted nature that covers political, socio-economic, technical, environmental, public
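The weighting step inside an analytic hierarchy process, on which FAHP builds, can be sketched with the common geometric-mean approximation of the principal eigenvector. The criteria and pairwise judgments below are hypothetical illustrations, not values from the Taipei County case study, and the fuzzy extension (triangular fuzzy judgments and defuzzification) is omitted for brevity.

```python
# Illustrative sketch of the crisp AHP core underlying FAHP-based fund
# allocation: criterion weights derived from a pairwise comparison matrix by
# the geometric-mean method. Criteria and judgments are hypothetical.

import math

# hypothetical pairwise comparisons among three allocation criteria,
# e.g. proximity to the incinerator, affected population, environmental loss
criteria = ["proximity", "population", "env_quality"]
A = [
    [1.0,     3.0, 2.0],   # proximity judged 3x population, 2x env_quality
    [1 / 3.0, 1.0, 0.5],
    [0.5,     2.0, 1.0],
]

def ahp_weights(matrix):
    """Geometric-mean approximation of the principal eigenvector weights."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

w = ahp_weights(A)
for name, wi in zip(criteria, w):
    print(f"{name}: {wi:.3f}")
```

In the full FAHP, each entry of the matrix would be a fuzzy number capturing the stakeholders' judgment uncertainty; the crisp weights shown here are what the method reduces to after defuzzification.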

  1. MiR-320a as a Potential Novel Circulating Biomarker of Arrhythmogenic CardioMyopathy.

    Science.gov (United States)

    Sommariva, Elena; D'Alessandra, Yuri; Farina, Floriana Maria; Casella, Michela; Cattaneo, Fabio; Catto, Valentina; Chiesa, Mattia; Stadiotti, Ilaria; Brambilla, Silvia; Dello Russo, Antonio; Carbucicchio, Corrado; Vettor, Giulia; Riggio, Daniela; Sandri, Maria Teresa; Barbuti, Andrea; Vernillo, Gianluca; Muratori, Manuela; Dal Ferro, Matteo; Sinagra, Gianfranco; Moimas, Silvia; Giacca, Mauro; Colombo, Gualtiero Ivanoe; Pompilio, Giulio; Tondo, Claudio

    2017-07-06

    Diagnosis of Arrhythmogenic CardioMyopathy (ACM) is challenging and often late after disease onset. No circulating biomarkers are available to date. Given their involvement in several cardiovascular diseases, plasma microRNAs warranted investigation as potential non-invasive diagnostic tools in ACM. We sought to identify circulating microRNAs differentially expressed in ACM with respect to Healthy Controls (HC) and Idiopathic Ventricular Tachycardia patients (IVT), often in differential diagnosis. ACM and HC subjects were screened for plasmatic expression of 377 microRNAs and validation was performed in 36 ACM, 53 HC, 21 IVT. Variable importance in data partition was estimated through Random Forest analysis and accuracy by Receiver Operating Curves. Plasmatic miR-320a showed a 0.53 ± 0.04 fold expression difference in ACM vs. HC (p …), also in ACM (n = 13) and HC (n = 17) subjects with an athletic lifestyle, an ACM precipitating factor. Importantly, ACM patients' miR-320a showed a 0.78 ± 0.05 fold expression change vs. IVT (p = 0.03). When compared to non-invasive ACM diagnostic parameters, miR-320a ranked highly in discriminating ACM vs. IVT and increased their accuracy. Finally, miR-320a expression did not correlate with ACM severity. Our data suggest that miR-320a may be considered a novel potential biomarker of ACM, specifically useful in ACM vs. IVT differentiation.

  2. Development of Axial Continuous Metal Expeller for melt conditioning of alloys

    International Nuclear Information System (INIS)

    Cassinath, Z.; Prasada Rao, A.K.

    2016-01-01

    ACME (Axial Continuous Metal Expeller) is a novel processing technology developed independently for conditioning liquid metal prior to solidification processing. The ACME process is based on an axial compressor and uses a rotor-stator mechanism to impose a high shear rate and a high intensity of turbulence on the liquid metal, so that the conditioned liquid metal has uniform temperature and uniform chemical composition as it is expelled. The microstructural refinement is achieved through the process of dendrite fragmentation while taking advantage of the thixotropic property of semisolid metal slurry, so that it can be conveyed for further downstream operations. This paper introduces the concept and its advantages over current technologies. (paper)

  3. MODELING OF WATER DISTRIBUTION SYSTEM PARAMETERS AND THEIR PARTICULAR IMPORTANCE IN ENVIRONMENT ENGINEERING PROCESSES

    Directory of Open Access Journals (Sweden)

    Agnieszka Trębicka

    2016-05-01

    The object of this study is to present a mathematical model of a water-supply network and an analysis of the basic parameters of the water distribution system with a digital model. The reference area is Kleosin village, municipality of Juchnowiec Kościelny in Podlaskie Province, located at the border with Białystok. The study focused on the significance of every change related to the quality and quantity of water delivered to the WDS, through modeling the basic parameters of the water distribution system in different variants of operation, in order to specify new, more rational modes of exploitation (decrease in pressure value) and to define conditions for the development and modernization of the water-supply network, with special analysis of the network scheme to identify its most vulnerable points. The analyzed processes are based on reproducing and developing the existing state of the water distribution sub-system (the WDS) with the use of mathematical modeling that includes the newest accessible computer techniques.
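The hydraulic core of such water-distribution models is a per-pipe head-loss relation; one common choice is the Hazen-Williams formula, shown below in SI form. The pipe data are hypothetical, not values from the Kleosin network.

```python
# Sketch of the head-loss relation at the core of water-supply network models:
# the Hazen-Williams formula in SI units. Pipe data are hypothetical.

def hazen_williams_headloss(q, length, diameter, c):
    """Head loss (m) for flow q (m^3/s) in a pipe of given length (m),
    inner diameter (m) and Hazen-Williams roughness coefficient c."""
    return 10.67 * length * q ** 1.852 / (c ** 1.852 * diameter ** 4.8704)

# pressure drop along a 500 m main at two demand levels:
for q in (0.01, 0.02):   # m^3/s
    hf = hazen_williams_headloss(q, length=500.0, diameter=0.15, c=130.0)
    print(f"Q = {q} m^3/s -> head loss = {hf:.2f} m")
```

Because head loss scales as Q^1.852, doubling the demand multiplies the loss by about 3.6, which is why pressure-reduction scenarios of the kind studied here are so sensitive to assumed demands.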

  4. The effect of electrodeposition process parameters on the current density distribution in an electrochemical cell

    Directory of Open Access Journals (Sweden)

    R. M. STEVANOVIC

    2001-02-01

    Cell voltage – current density dependences for a model electrochemical cell of fixed geometry were calculated for different electrolyte conductivities, Tafel slopes and cathodic exchange current densities. The ratio between the current density at the part of the cathode nearest to the anode and that at the part furthest away was taken as a measure for the estimation of the current density distribution. The calculations reveal that increasing the conductivity of the electrolyte, as well as increasing the cathodic Tafel slope, should both improve the current density distribution. Also, the distribution should be better under total activation control or total diffusion control rather than at mixed activation-diffusion-Ohmic control of the deposition process. On the contrary, changes in the exchange current density should not affect it. These results, being in agreement with common knowledge about the influence of different parameters on the current distribution in an electrochemical cell, demonstrate that a quick estimation of the current distribution can be performed by a simple comparison of the current density at the point of the cathode closest to the anode with that at the furthest point.
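The qualitative conclusions above can be reproduced with a deliberately simplified lumped model (not the paper's boundary-value calculation): each point of the cathode is treated as a kinetic polarization resistance in series with an ohmic electrolyte path, so the near/far current ratio approaches 1 as the conductivity or the Tafel slope grows, while the exchange current density never enters the ratio. All numbers are hypothetical.

```python
# Hedged lumped illustration: per unit area, each cathode point sees a
# linearized kinetic resistance b / (ln(10) * j_avg) in series with an ohmic
# path L / kappa. Currents divide inversely to total resistance, so the
# near/far ratio moves toward 1 as kappa or the Tafel slope b increases.

import math

def current_ratio(kappa, tafel_b, j_avg, l_near, l_far):
    """Approximate j_near / j_far for two electrolyte path lengths (cm),
    conductivity kappa (S/cm), Tafel slope tafel_b (V/decade), and an
    average current density j_avg (A/cm^2)."""
    r_kin = tafel_b / (math.log(10.0) * j_avg)   # polarization resistance, ohm*cm^2
    r_near = r_kin + l_near / kappa
    r_far = r_kin + l_far / kappa
    return r_far / r_near      # currents divide inversely to resistance

base = current_ratio(kappa=0.05, tafel_b=0.12, j_avg=0.02, l_near=2.0, l_far=10.0)
better_conductivity = current_ratio(0.5, 0.12, 0.02, 2.0, 10.0)
bigger_tafel_slope = current_ratio(0.05, 0.24, 0.02, 2.0, 10.0)
print(round(base, 2), round(better_conductivity, 2), round(bigger_tafel_slope, 2))
```

Note that `r_kin` contains the Tafel slope and the operating current density but not the exchange current density, mirroring the paper's finding that the exchange current density should not affect the distribution.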

  5. Identification of key peptidoglycan hydrolases for morphogenesis, autolysis, and peptidoglycan composition of Lactobacillus plantarum WCFS1

    Directory of Open Access Journals (Sweden)

    Rolain Thomas

    2012-10-01

    Background Lactobacillus plantarum is commonly used in industrial fermentation processes. Selected strains are also marketed as probiotics for their health beneficial effects. Although the functional role of peptidoglycan-degrading enzymes is increasingly documented to be important for a range of bacterial processes and host-microbe interactions, little is known about their functional roles in lactobacilli. This knowledge holds important potential for developing more robust strains resistant to autolysis under stress conditions, as well as for peptidoglycan engineering towards a better understanding of the contribution of released muramyl-peptides as probiotic immunomodulators. Results Here, we explored the functional role of the predicted peptidoglycan hydrolase (PGH) complement encoded in the genome of L. plantarum by systematic gene deletion. Of twelve predicted PGH-encoding genes, nine could be individually inactivated, and their corresponding mutant strains were characterized regarding their cell morphology, growth, and autolysis under various conditions. From this analysis, we identified two PGHs, the predicted N-acetylglucosaminidase Acm2 and the NlpC/P60 D,L-endopeptidase LytA, as key determinants in the morphology of L. plantarum. Acm2 was demonstrated to be required for the ultimate step of cell separation of daughter cells, whereas LytA appeared to be required for cell shape maintenance and cell-wall integrity. We also showed by autolysis experiments that both PGHs are involved in the global autolytic process, with a dominant role for Acm2 in all tested conditions, identifying Acm2 as the major autolysin of L. plantarum WCFS1. In addition, Acm2 and the putative N-acetylmuramidase Lys2 were shown to play redundant roles in both cell separation and autolysis under stress conditions. Finally, the analysis of the peptidoglycan composition of Acm2- and LytA-deficient derivatives revealed their potential hydrolytic activities by the

  6. Distributed Processing of Sentinel-2 Products using the BIGEARTH Platform

    Science.gov (United States)

    Bacu, Victor; Stefanut, Teodor; Nandra, Constantin; Mihon, Danut; Gorgan, Dorian

    2017-04-01

    The constellation of observational satellites orbiting around Earth is constantly increasing, providing more data that need to be processed in order to extract meaningful information and knowledge. Sentinel-2 satellites, part of the Copernicus Earth Observation program, are intended for use in agriculture, forestry and many other land management applications. ESA's SNAP toolbox can be used to process data gathered by Sentinel-2 satellites but is limited to the resources provided by a stand-alone computer. In this paper we present a cloud-based software platform that makes use of this toolbox together with other remote sensing software applications to process Sentinel-2 products. The BIGEARTH software platform [1] offers an integrated solution for processing Earth Observation data coming from different sources (such as satellites or on-site sensors). The flow of processing is defined as a chain of tasks based on the WorDeL description language [2]. Each task could rely on a different software technology (such as GRASS GIS and ESA's SNAP) in order to process the input data. One important feature of the BIGEARTH platform comes from this possibility of interconnection and integration, throughout the same flow of processing, of the various well-known software technologies. All this integration is transparent from the user perspective. The proposed platform extends the SNAP capabilities by enabling specialists to easily scale the processing over distributed architectures, according to their specific needs and resources. The software platform [3] can be used in multiple configurations. In the basic one, the software platform runs as a standalone application inside a virtual machine. Obviously in this case the computational resources are limited, but it will give an overview of the functionalities of the software platform, and also the possibility to define the flow of processing and later on to execute it on a more complex infrastructure. The most complex and robust
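The chained-task model described above can be reduced to a few lines: a flow is an ordered list of named tasks, each potentially backed by a different tool, and a runner pipes each task's output into the next task's input. Everything below is a hypothetical stand-in; it is not the WorDeL language or the BIGEARTH API.

```python
# Minimal sketch of the task-chaining idea behind such a platform: a flow is a
# named sequence of tasks, and the runner pipes each output into the next.
# Task names and stand-in operations are hypothetical.

from typing import Callable, List, Tuple

Task = Tuple[str, Callable]

def run_flow(flow: List[Task], data):
    """Execute the tasks in order, logging each executed step's name."""
    log = []
    for name, func in flow:
        data = func(data)
        log.append(name)
    return data, log

# hypothetical stand-ins for steps like radiometric calibration, spatial
# subsetting and statistics on a Sentinel-2 scene (here: a reflectance grid)
flow = [
    ("calibrate", lambda grid: [[v * 0.0001 for v in row] for row in grid]),
    ("subset",    lambda grid: [row[:2] for row in grid[:2]]),
    ("mean",      lambda grid: sum(sum(row) for row in grid) / 4.0),
]

raw = [[1200, 1400, 1600], [1800, 2000, 2200], [2400, 2600, 2800]]
result, executed = run_flow(flow, raw)
print(executed, result)
```

In the real platform each task would dispatch to an external tool and the runner would schedule tasks across cloud nodes; the uniform input-to-output contract is what makes that substitution transparent to the user.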

  7. Exploring digenic inheritance in arrhythmogenic cardiomyopathy.

    Science.gov (United States)

    König, Eva; Volpato, Claudia Béu; Motta, Benedetta Maria; Blankenburg, Hagen; Picard, Anne; Pramstaller, Peter; Casella, Michela; Rauhe, Werner; Pompilio, Giulio; Meraviglia, Viviana; Domingues, Francisco S; Sommariva, Elena; Rossini, Alessandra

    2017-12-08

    Arrhythmogenic cardiomyopathy (ACM) is an inherited genetic disorder, characterized by the substitution of heart muscle with fibro-fatty tissue and severe ventricular arrhythmias, often leading to heart failure and sudden cardiac death. ACM is considered a monogenic disorder, but the low penetrance of mutations identified in patients suggests the involvement of additional genetic or environmental factors. We used whole exome sequencing to investigate digenic inheritance in two ACM families where previous diagnostic tests have revealed a PKP2 mutation in all affected and some healthy individuals. In family members with PKP2 mutations we determined all genes that harbor variants in affected but not in healthy carriers or vice versa. We computationally prioritized the most likely candidates, focusing on known ACM genes and genes related to PKP2 through protein interactions, functional relationships, or shared biological processes. We identified four candidate genes in family 1, namely DAG1, DAB2IP, CTBP2 and TCF25, and eleven candidate genes in family 2. The most promising gene in the second family is TTN, a gene previously associated with ACM, in which the affected individual harbors two rare deleterious-predicted missense variants, one of which is located in the protein's only serine kinase domain. In this study we report genes that might act as digenic players in ACM pathogenesis, on the basis of co-segregation with PKP2 mutations. Validation in larger cohorts is still required to prove the utility of this model.

  8. Towards a New Paradigm of Software Development: an Ambassador Driven Process in Distributed Software Companies

    Science.gov (United States)

    Kumlander, Deniss

    The globalization of companies' operations and the competition between software vendors demand improved quality of delivered software and decreased overall cost. At the same time, these forces introduce many problems into the software development process, as they produce distributed organizations that break the co-location rule of modern software development methodologies. Here we propose a reformulation of the ambassador position that increases its productivity in order to bridge the communication and workflow gap, by managing the entire communication process rather than concentrating purely on the communication result.

  9. Joint Labeling Of Multiple Regions of Interest (Rois) By Enhanced Auto Context Models.

    Science.gov (United States)

    Kim, Minjeong; Wu, Guorong; Guo, Yanrong; Shen, Dinggang

    2015-04-01

    Accurate segmentation of a set of regions of interest (ROIs) in brain images is a key step in many neuroscience studies. Due to the complexity of image patterns, many learning-based segmentation methods have been proposed, including the auto context model (ACM), which can capture high-level contextual information for guiding segmentation. However, since the current ACM can only handle one ROI at a time, neighboring ROIs have to be labeled separately with different ACMs that are trained independently, without communicating with each other. To address this, we enhance the current single-ROI learning ACM to a multi-ROI learning ACM for joint labeling of multiple neighboring ROIs (called eACM). First, we extend the current independently-trained single-ROI ACMs to a set of jointly-trained cross-ROI ACMs, by simultaneously training ACMs for all spatially-connected ROIs to let them share their respective intermediate outputs for coordinated labeling of each image point. The context features in each ACM can then capture cross-ROI dependence information from the outputs of the other ACMs designed for neighboring ROIs. Second, we upgrade the output labeling map of each ACM with a multi-scale representation, so that both local and global context information can be effectively used to increase robustness in characterizing the geometric relationship among neighboring ROIs. Third, we integrate the ACM into a multi-atlas segmentation paradigm to encompass high variation among subjects. Experiments on the LONI LPBA40 dataset show much better performance of our eACM, compared to the conventional ACM.
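The cross-ROI feedback loop can be illustrated with a 1-D toy example (not the eACM implementation): two per-ROI "classifiers" are iterated jointly, each mixing its own appearance score with context read from the other ROI's previous probability map, so a pixel that is ambiguous on appearance alone is resolved by its confident neighbor. All scores and weights below are hypothetical.

```python
# Toy 1-D illustration of cross-ROI auto-context: each map is refit each round
# from its own appearance plus context taken from the other ROI's previous
# probability map (ROI2 is expected immediately right of ROI1).

roi1_app = [0.1, 0.2, 0.8, 0.7, 0.5, 0.2, 0.1, 0.1, 0.05, 0.05]
roi2_app = [0.05, 0.05, 0.1, 0.2, 0.5, 0.7, 0.8, 0.2, 0.1, 0.1]
W_APP, W_CTX = 0.7, 0.3     # weights for appearance vs cross-ROI context

def iterate(p1, p2):
    """One auto-context round for both ROI probability maps."""
    n = len(p1)
    q1 = [W_APP * roi1_app[i] + W_CTX * (p2[i + 1] if i + 1 < n else 0.0)
          for i in range(n)]
    q2 = [W_APP * roi2_app[i] + W_CTX * (p1[i - 1] if i > 0 else 0.0)
          for i in range(n)]
    return q1, q2

p1, p2 = roi1_app[:], roi2_app[:]
for _ in range(10):              # a few rounds suffice to converge here
    p1, p2 = iterate(p1, p2)

# the ambiguous pixel 4 (appearance 0.5) is pulled toward ROI1 because a
# confident ROI2 response sits immediately to its right
print(round(roi1_app[4], 3), round(p1[4], 3))
```

The fixed point of the two coupled updates is the toy analogue of jointly trained cross-ROI ACMs exchanging intermediate label maps: neither map is finalized until both agree with the geometric relationship built into the context features.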

  10. Energy Consumption in the Process of Excavator-Automobile Complexes Distribution at Kuzbass Open Pit Mines

    Directory of Open Access Journals (Sweden)

    Panachev Ivan

    2017-01-01

    Every year, coal mining companies worldwide seek to keep renewing their fleets of mining machines, while implementing various activities to extend the service life of equipment already in operation. In this regard, the urgent issue is the efficient distribution of the available machines across different geological conditions. The problem of effectively distributing "excavator-automobile" complexes occurs when heavy dump trucks are used in mining, because excavation and transportation of blasted rock mass are the most labor-intensive and costly processes, considering the volume of transported overburden and coal, as well as the costs of diesel fuel, electricity, fuel and lubricants, consumables for repair works, downtime, etc. Currently, it is recommended to take the number of loading buckets in the range of 3 to 5, according to which the dump trucks are distributed to faces.

  11. Optimization of the Refractive-Index Distribution of Graded-Index Polymer Optical Fiber by the Diffusion-Assisted Fabrication Process

    Science.gov (United States)

    Mukawa, Yoshiki; Kondo, Atsushi; Koike, Yasuhiro

    2012-04-01

    Graded-index polymer optical fiber (GI-POF) is a promising high-speed communication medium for very-short-reach networks, such as home or office networks. The refractive-index distribution of GI-POF needs to be accurately controlled to maximize the bandwidth. We attempted to control the refractive-index distribution by developing a simulation for dopant diffusion. In the rod-in-tube method, GI-POF with an optimal refractive-index distribution was obtained by adjusting the diffusion temperature and the diffusion time, whereas in the coextrusion process, GI-POF with an optimal refractive-index distribution was fabricated by controlling the length of the diffusion tube and the rate of discharge of polymer.
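The kind of dopant-diffusion calculation such a simulator performs can be sketched with an explicit finite-difference scheme: dopant diffuses from the doped core rod into the undoped cladding tube, and the refractive-index profile follows the concentration. The 1-D slab geometry and all numbers below are hypothetical simplifications of the authors' rod-in-tube setting, not their simulator.

```python
# Sketch: explicit (FTCS) 1-D dopant-diffusion calculation for grading a
# refractive-index profile. Grid, coefficients and index values are
# hypothetical; longer diffusion times give smoother grading.

N = 51                   # radial grid points across the preform radius
DR = 1.0 / (N - 1)       # normalized radial step
D = 0.02                 # normalized diffusion coefficient
DT = 0.2 * DR * DR / D   # time step chosen below the explicit stability limit
N_BASE, DELTA_N = 1.492, 0.015   # cladding index and max index increase

# initial condition: dopant only inside the core rod (inner 40% of radius)
c = [1.0 if i * DR < 0.4 else 0.0 for i in range(N)]

def diffuse(c, steps):
    r = D * DT / DR**2           # = 0.2, safely below the 0.5 stability limit
    for _ in range(steps):
        new = c[:]
        for i in range(1, N - 1):
            new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
        new[0] = new[1]          # zero-flux condition at the fiber axis
        c = new                  # outer edge keeps its cladding value
    return c

c = diffuse(c, steps=400)
index = [N_BASE + DELTA_N * ci for ci in c]   # index proportional to dopant
print(round(index[0], 4), round(index[-1], 4))
```

Varying `steps` (diffusion time) or `D` (via temperature) reshapes the profile from a step toward a graded curve, which is the control knob the paper tunes in both the rod-in-tube and coextrusion processes.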

  12. Effect of processing conditions on residual stress distributions by bead-on-plate welding after surface machining

    International Nuclear Information System (INIS)

    Ihara, Ryohei; Mochizuki, Masahito

    2014-01-01

    Residual stress is an important factor for stress corrosion cracking (SCC), which has been observed near the welded zones in nuclear power plants. Surface residual stress is especially significant for SCC initiation. In the joining processes of pipes, butt welding is conducted after surface machining. Residual stress is generated by both processes, and the residual stress distribution due to surface machining is altered by the subsequent butt welding. In a previous paper, the authors reported that the residual stress distribution generated by bead-on-plate welding after surface machining has a local maximum residual stress near the weld metal. The local maximum residual stress reaches approximately 900 MPa, which exceeds the stress threshold for SCC initiation. Therefore, for the safety improvement of nuclear power plants, a study of the local maximum residual stress is important. In this study, the effect of surface machining and welding conditions on the residual stress distribution generated by welding after surface machining was investigated. Surface machining using a lathe and bead-on-plate welding with a tungsten inert gas (TIG) arc under various conditions were conducted on plate specimens made of SUS316L. Then, residual stress distributions were measured by the X-ray diffraction method (XRD). As a result, the residual stress distributions have a local maximum residual stress near the weld metal in all specimens. The values of the local maximum residual stresses are almost the same, while the location of the local maximum residual stress varies with the welding condition. It can be considered that the local maximum residual stress is generated by the same mechanism as welding residual stress in the surface-machined layer, which has high yield stress. (author)

  13. Studies on the effects of adriamycin-derivatives in combination with X-rays on MeWo- and Be11-cells

    International Nuclear Information System (INIS)

    Krueger, M.; Beuningen, D. van; Streffer, C.

    1990-01-01

    The survival rate of human melanoma cells after X-ray irradiation, treatment with adriamycin derivatives, and combined treatment with X-rays and adriamycin derivatives was measured by means of the colony formation test. After X-ray irradiation the melanoma cells showed a high resistance in terms of cell survival. In all tests the Be11 cells were more resistant than the MeWo cells. On combined exposure, especially with higher doses of adriamycin derivatives, both cell lines showed the interesting effect that with increasing concentration the survival rate decreased whereas the D0 increased. Aclacinomycin-A (ACM-A) and pirarubicin reduced recovery processes after X-ray irradiation. Accordingly, Be11 cells showed a four times higher DMF (dose modifying factor) after ACM-A treatment than MeWo cells. Low ACM-A concentrations combined with low X-ray doses showed supraadditive effects on both cell lines. The effect of pirarubicin was in most of the tests only additive. Compared with ACM-A, pirarubicin was less cytotoxic, showed a larger therapeutic range, caused a smaller D0 and Dq, and had a supraadditive effect in low concentrations on both cell lines. For clinical combined therapy with patients, ACM-A is probably better suited than pirarubicin. (orig.) [de]

  14. Effects of MgO impurities and micro-cracks on the critical current density of Ti-sheathed MgB2 wires

    International Nuclear Information System (INIS)

    Liang, G.; Alessandrini, M.; Yen, F.; Hanna, M.; Fang, H.; Hoyt, C.; Lv, B.; Zeng, J.; Salama, K.

    2007-01-01

    Ti-sheathed monocore MgB2 wires with improved magnetic critical current density (Jc) have been fabricated by the in situ powder-in-tube (PIT) method and characterized by magnetization, X-ray diffraction, scanning electron microscopy and electrical resistivity measurements. For the best wire, the magnetic Jc values at 5 K and fields of 2 T, 5 T, and 8 T are 4.1 x 10^5 A/cm^2, 7.8 x 10^4 A/cm^2, and 1.4 x 10^4 A/cm^2, respectively. At 20 K and fields of 0.5 T and 3 T, the Jc values are about 3.6 x 10^5 A/cm^2 and 3.1 x 10^4 A/cm^2, respectively, which are much higher than those of the Fe-sheathed monocore MgB2 wires fabricated with the same in situ PIT process and under the same fabricating conditions. It appears that the overall Jc for the average Ti-sheathed wires is comparable to that of the Fe-sheathed wires. Our X-ray diffraction and scanning electron microscopy analysis indicates that Jc in the Ti-sheathed MgB2 wires can be strongly suppressed by MgO impurities and micro-cracks

  15. Wear Process Analysis of the Polytetrafluoroethylene/Kevlar Twill Fabric Based on the Components’ Distribution Characteristics

    Directory of Open Access Journals (Sweden)

    Gu Dapeng

    2017-12-01

    Polytetrafluoroethylene (PTFE)/Kevlar fabric and fabric composites with excellent tribological properties have for years been considered important materials for bearings and bushings. The distribution of the components (PTFE, Kevlar, and the gap between PTFE and Kevlar) of the PTFE/Kevlar fabric is uneven due to the textile structure, which controls the wear process and behavior. The components' area ratio on the worn surface as a function of wear depth was analyzed not only by the wear experiment but also by theoretical calculations with our previous wear geometry model. The wear process and behavior of the PTFE/Kevlar twill fabric were investigated under dry sliding conditions against AISI 1045 steel using a ring-on-plate tribometer. The morphologies of the worn surface were observed by confocal laser scanning microscopy (CLSM). The wear process of the PTFE/Kevlar twill fabric was divided into five layers according to the distribution characteristics of Kevlar. The friction coefficients and wear rates changed with the wear depth; the order of the antiwear performance of the first three layers was Layer III > Layer II > Layer I due to the variation of the area ratio of PTFE and Kevlar with the wear depth.

  16. Distributed password cracking

    OpenAIRE

    Crumpacker, John R.

    2009-01-01

    Approved for public release, distribution unlimited. Password cracking requires significant processing power, which in today's world is found at workstations and homes in the form of desktop computers. The Berkeley Open Infrastructure for Network Computing (BOINC) is the conduit to this significant source of processing power, and John the Ripper is the key. BOINC is a distributed data processing system that incorporates client-server relationships to generically process data. The BOINC structu...
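
The work-unit splitting that a BOINC-style system performs can be illustrated with a small, self-contained sketch. This is not BOINC's or John the Ripper's actual code: the target hash, the lowercase alphabet, and the 4-character password length are assumptions for the example, and a thread pool stands in for the volunteer clients.

```python
import hashlib
import itertools
import string
from concurrent.futures import ThreadPoolExecutor

# Hypothetical target: the SHA-256 hash of an unknown 4-letter lowercase password.
TARGET = hashlib.sha256(b"acme").hexdigest()
ALPHABET = string.ascii_lowercase

def crack_chunk(first_letter):
    """One work unit: exhaust every 4-letter candidate starting with first_letter."""
    for tail in itertools.product(ALPHABET, repeat=3):
        candidate = first_letter + "".join(tail)
        if hashlib.sha256(candidate.encode()).hexdigest() == TARGET:
            return candidate
    return None

def crack(workers=8):
    # A BOINC-style server would hand each chunk to a volunteer client;
    # here the thread pool plays the role of the distributed clients.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for result in pool.map(crack_chunk, ALPHABET):
            if result is not None:
                return result
    return None
```

Partitioning by first letter gives 26 independent work units, which is exactly the property a distributed scheduler needs: no chunk depends on any other.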

  17. Effect of process parameters on temperature distribution in twin-electrode TIG coupling arc

    International Nuclear Information System (INIS)

    Zhang, Guangjun; Xiong, Jun; Gao, Hongming; Wu, Lin

    2012-01-01

    The twin-electrode TIG coupling arc is a new type of welding heat source, which is generated in a single welding torch that has two tungsten electrodes insulated from each other. This paper aims at determining the distribution of temperature for the coupling arc using the Fowler–Milne method under the assumption of local thermodynamic equilibrium. The influences of welding current, arc length, and distance between both electrode tips on temperature distribution of the coupling arc were analyzed. Based on the results, a better understanding of the twin-electrode TIG welding process was obtained. -- Highlights: ► Increasing arc current will increase the coupling arc temperature. ► Arc length seldom affects the peak temperature of the coupling arc. ► Increasing arc length will increase the extension of temperature near the anode. ► Increasing distance will decrease temperatures in the central part of the arc.

  18. Mammal Distribution in Nunavut: Inuit Harvest Data and COSEWIC's Species at Risk Assessment Process

    Directory of Open Access Journals (Sweden)

    Karen A. Kowalchuk

    2012-09-01

    The Committee on the Status of Endangered Wildlife in Canada (COSEWIC) assesses risk potential for a species by evaluating the best available information from all knowledge sources, including Aboriginal traditional knowledge (ATK). Effective application of ATK in this process has been challenging. Inuit knowledge (IK) of mammal distribution in Nunavut is reflected, in part, in the harvest spatial data from two comprehensive studies: the Use and Occupancy Mapping (UOM) Study conducted by the Nunavut Planning Commission (NPC) and the Nunavut Wildlife Harvest Study (WHS) conducted by the Nunavut Wildlife Management Board (NWMB). The geographic range values of extent of occurrence (EO) and area of occupancy (AO) were derived from the harvest data for a selected group of mammals and applied to Phase I of the COSEWIC assessment process. Values falling below thresholds can trigger a potential risk designation of either endangered (EN) or threatened (TH) for the species being assessed. The IK values and status designations were compared with available COSEWIC data. There was little congruency between the two sets of data. We conclude that there are major challenges within the risk assessment process, and specifically the calculation of AO, that contributed to the disparity in results. Nonetheless, this application illustrated that Inuit harvest data in Nunavut represent a unique and substantial source of ATK that should be used to enrich the knowledge base on arctic mammal distribution and enhance wildlife management and conservation planning.

  19. Evaluation of Future Internet Technologies for Processing and Distribution of Satellite Imagery

    Science.gov (United States)

    Becedas, J.; Perez, R.; Gonzalez, G.; Alvarez, J.; Garcia, F.; Maldonado, F.; Sucari, A.; Garcia, J.

    2015-04-01

    Satellite imagery data centres are designed to operate a defined number of satellites, so difficulties appear when new satellites have to be incorporated into the system. This occurs because traditional infrastructures are neither flexible nor scalable. With the appearance of Future Internet technologies, new solutions can be provided to manage large and variable amounts of data on demand. These technologies optimize resources and facilitate the appearance of new applications and services in the traditional Earth Observation (EO) market. The use of Future Internet technologies in the EO sector was validated with the GEO-Cloud experiment, part of the Fed4FIRE FP7 European project. This work presents the final results of the project, in which a constellation of satellites records the whole Earth surface on a daily basis. The satellite imagery is downloaded to a distributed network of ground stations and ingested into a cloud infrastructure, where the data are processed, stored, archived and distributed to the end users. The processing and transfer times inside the cloud, the workload of the processors, automatic cataloguing and accessibility through the Internet are evaluated to validate whether Future Internet technologies present advantages over traditional methods. The applicability of these technologies to providing high added-value services is also evaluated. Finally, the advantages of using federated testbeds to carry out large-scale, industry-driven experiments are analysed, evaluating the feasibility of an experiment developed on the European infrastructure Fed4FIRE and its migration to a commercial cloud: SoftLayer, an IBM Company.

  20. Vib--rotational energy distributions and relaxation processes in pulsed HF chemical lasers

    International Nuclear Information System (INIS)

    Ben-Shaul, A.; Kompa, K.L.; Schmailzl, U.

    1976-01-01

    The rate equations governing the temporal evolution of photon densities and level populations in pulsed F + H2 → HF + H chemical lasers are solved for different initial conditions. The rate equations are solved simultaneously for all relevant vibrational-rotational levels and vibrational-rotational P-branch transitions. Rotational equilibrium is not assumed. Approximate expressions for the detailed state-to-state rate constants corresponding to the various energy transfer processes (V-V, V-R,T, R-R,T) coupling the vib-rotational levels are formulated on the basis of experimental data, approximate theories, and qualitative considerations. The main findings are as follows. At low pressures, R-T transfer cannot compete with the stimulated emission, and the laser output largely reflects the nonequilibrium energy distribution in the pumping reaction. The various transitions reach threshold and decay almost independently, and simultaneous lasing on several lines takes place. When a buffer gas is added in excess to the reacting mixture, the enhanced rotational relaxation leads to nearly single-line operation and to the J shift in lasing. Laser efficiency is higher at high inert gas pressures owing to a better extraction of the internal energy from partially inverted populations. V-V exchange enhances lasing from upper vibrational levels but reduces the total pulse intensity. V-R,T processes reduce the efficiency but do not substantially modify the spectral output distribution. The photon yield ranges between 0.4 and 1.4 photons/HF molecule depending on the initial conditions. Comparison with experimental data, when available, is fair

  1. Characteristics of scandate-impregnated cathodes with sub-micron scandia-doped matrices

    International Nuclear Information System (INIS)

    Yuan Haiqing; Gu Xin; Pan Kexin; Wang Yiman; Liu Wei; Zhang Ke; Wang Jinshu; Zhou Meiling; Li Ji

    2005-01-01

    We describe in this paper scandate-impregnated cathodes with sub-micron scandia-doped tungsten matrices having an improved uniformity of the Sc distribution. The scandia-doped tungsten powders were made by both liquid-solid and liquid-liquid doping methods on the basis of previous research. By improving the pressing, sintering and impregnating procedures, we have obtained scandate-impregnated cathodes with a good uniformity of the Sc2O3 distribution. The porosity of the sub-micron-structure matrix and the content of impregnants inside the matrix are similar to those of conventional impregnated cathodes. Space-charge-limited current densities of more than 30 A/cm^2 at 850 deg. Cb have been obtained in a reproducible way. The current density continuously increased during the first 2000 h of a life test at 950 deg. Cb with a dc load of 2 A/cm^2 and was stable for at least 3000 h

  2. Analyses of moments in pseudorapidity intervals at √s = 546 GeV by means of two probability distributions in pure-birth process

    International Nuclear Information System (INIS)

    Biyajima, M.; Shirane, K.; Suzuki, N.

    1988-01-01

    Moments in pseudorapidity intervals at the CERN Sp̄pS collider (√s = 546 GeV) are analyzed by means of two probability distributions in the pure-birth stochastic process. Our results show that a probability distribution obtained from the Poisson distribution as an initial condition is more useful than that obtained from the Kronecker δ function. Analyses of moments by Koba-Nielsen-Olesen scaling functions derived from solutions of the pure-birth stochastic process are also made. Moreover, analyses of preliminary data at √s = 200 and 900 GeV are added
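
A linear pure-birth (Yule) process of the kind used above can be sketched with a minimal Gillespie-style simulation. This is an illustration only, not the authors' analysis: the birth rate, time horizon, and initial condition are assumed for the example, and the simulated mean is checked against the exact result E[n(t)] = n0·e^{λt}.

```python
import math
import random

def yule_population(n0, lam, t, rng):
    """Simulate a linear pure-birth (Yule) process by Gillespie's algorithm:
    with n individuals, the next birth occurs after an Exp(n*lam) waiting time."""
    n, clock = n0, 0.0
    while True:
        clock += rng.expovariate(n * lam)
        if clock > t:
            return n
        n += 1

def mean_population(n0, lam, t, runs=20000, seed=1):
    """Monte Carlo estimate of E[n(t)]; theory gives n0 * exp(lam * t)."""
    rng = random.Random(seed)
    return sum(yule_population(n0, lam, t, rng) for _ in range(runs)) / runs
```

For a single ancestor, n(t) is geometrically distributed, so `mean_population(1, 1.0, 1.0)` should land close to e ≈ 2.718; a Poisson-distributed initial population, as in the abstract, simply mixes independent copies of this process.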

  3. Authenticated join processing in outsourced databases

    KAUST Repository

    Yang, Yin

    2009-01-01

    Database outsourcing requires that a query server constructs a proof of result correctness, which can be verified by the client using the data owner's signature. Previous authentication techniques deal with range queries on a single relation using an authenticated data structure (ADS). On the other hand, authenticated join processing is inherently more complex than ranges since only the base relations (but not their combination) are signed by the owner. In this paper, we present three novel join algorithms depending on the ADS availability: (i) Authenticated Indexed Sort Merge Join (AISM), which utilizes a single ADS on the join attribute, (ii) Authenticated Index Merge Join (AIM) that requires an ADS (on the join attribute) for both relations, and (iii) Authenticated Sort Merge Join (ASM), which does not rely on any ADS. We experimentally demonstrate that the proposed methods outperform two benchmark algorithms, often by several orders of magnitude, on all performance metrics, and effectively shift the workload to the outsourcing service. Finally, we extend our techniques to complex queries that combine multi-way joins with selections and projections. ©2009 ACM.

  4. Authenticated join processing in outsourced databases

    KAUST Repository

    Yang, Yin; Papadias, Dimitris; Papadopoulos, Stavros; Kalnis, Panos

    2009-01-01

    Database outsourcing requires that a query server constructs a proof of result correctness, which can be verified by the client using the data owner's signature. Previous authentication techniques deal with range queries on a single relation using an authenticated data structure (ADS). On the other hand, authenticated join processing is inherently more complex than ranges since only the base relations (but not their combination) are signed by the owner. In this paper, we present three novel join algorithms depending on the ADS availability: (i) Authenticated Indexed Sort Merge Join (AISM), which utilizes a single ADS on the join attribute, (ii) Authenticated Index Merge Join (AIM) that requires an ADS (on the join attribute) for both relations, and (iii) Authenticated Sort Merge Join (ASM), which does not rely on any ADS. We experimentally demonstrate that the proposed methods outperform two benchmark algorithms, often by several orders of magnitude, on all performance metrics, and effectively shift the workload to the outsourcing service. Finally, we extend our techniques to complex queries that combine multi-way joins with selections and projections. ©2009 ACM.
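
The role of an authenticated data structure can be sketched with a toy Merkle hash tree over a relation sorted on the join attribute. This is a simplification, not the paper's AISM/AIM/ASM algorithms: the owner signs the root, the server returns each result record with its authentication path, and the client recomputes the root to verify.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(records):
    """Merkle tree over the records of a relation (assumed sorted on the join
    attribute). Returns all levels; levels[0] are leaf hashes, levels[-1] = [root]."""
    level = [h(rec) for rec in records]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Authentication path (sibling hash, is-left flag) for the leaf at `index`."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))
        index //= 2
    return path

def verify(record, path, root):
    """Client-side check: recompute the root from the record and its path."""
    node = h(record)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

In an outsourcing scenario the client holds only the signed root, so a single tampered or omitted record makes `verify` fail; the paper's contribution is extending this idea efficiently to joins, where the combination of relations is not itself signed.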

  5. EFFICIENT LIDAR POINT CLOUD DATA MANAGING AND PROCESSING IN A HADOOP-BASED DISTRIBUTED FRAMEWORK

    Directory of Open Access Journals (Sweden)

    C. Wang

    2017-10-01

    Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and others. However, it is challenging to efficiently store, query and analyze the high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, which takes advantage of Hadoop’s storage and computing ability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to conduct the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experiment results show that the proposed framework can efficiently manage and process big LiDAR data.

  6. Efficient LIDAR Point Cloud Data Managing and Processing in a Hadoop-Based Distributed Framework

    Science.gov (United States)

    Wang, C.; Hu, F.; Sha, D.; Han, X.

    2017-10-01

    Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and others. However, it is challenging to efficiently store, query and analyze the high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, which takes advantage of Hadoop's storage and computing ability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to conduct the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experiment results show that the proposed framework can efficiently manage and process big LiDAR data.
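
The tile-wise parallelism that a MapReduce framework exploits for point clouds can be sketched with a toy map/reduce pass in plain Python. This is illustrative only, not the proposed HDFS/PCL implementation; the tile size and the per-tile statistic are assumptions for the example.

```python
from collections import defaultdict

def map_phase(points, tile_size=10.0):
    """Map: emit (tile_key, point) pairs so that each spatial tile can be
    processed independently, as a MapReduce job would partition a LiDAR cloud."""
    for x, y, z in points:
        yield (int(x // tile_size), int(y // tile_size)), (x, y, z)

def reduce_phase(pairs):
    """Reduce: compute a per-tile summary (here: point count and mean elevation)."""
    tiles = defaultdict(list)
    for key, (x, y, z) in pairs:
        tiles[key].append(z)
    return {key: (len(zs), sum(zs) / len(zs)) for key, zs in tiles.items()}

points = [(1.0, 2.0, 5.0), (3.0, 4.0, 7.0), (15.0, 2.0, 9.0)]
summary = reduce_phase(map_phase(points))
```

Because tiles are independent, the reduce step can run on as many nodes as there are keys, which is the scalability property the framework above relies on.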

  7. Dislocation structure, aging processes and critical currents of vanadium-gallium alloys

    International Nuclear Information System (INIS)

    Pan, V.M.; Beletskii, Yu.I.; Flis, V.S.; Firstov, S.A.; Sarzhan, G.F.

    1976-04-01

    An electron microscopical investigation of the structural and phase changes in vanadium-gallium alloys was carried out in the range of concentrations corresponding to a supersaturated solid solution with a bcc lattice, during plastic deformation and annealing. The data obtained were compared with measurements of the electrophysical and superconducting properties. It was shown that deformed and aged vanadium-gallium samples at 4.2 K can carry a dissipation-free transport current with a density of 8 x 10^3 A/cm^2 in a transverse magnetic field of 60 kOe. Based on the experimental results on the dislocation structure and on the mechanism and kinetics of the precipitation process, conclusions were drawn as to the origin of the high critical current density in the presence of a magnetic field in these compounds. (orig.)

  8. The exponential age distribution and the Pareto firm size distribution

    OpenAIRE

    Coad, Alex

    2008-01-01

    Recent work drawing on data for large and small firms has shown a Pareto distribution of firm size. We mix a Gibrat-type growth process among incumbents with an exponential distribution of firm age to obtain the empirical Pareto distribution.
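
The mechanism in this abstract admits a short check: if firm age T is exponential with rate θ and incumbents grow Gibrat-style so that size S = s0·e^{μT}, then P(S > s) = P(T > ln(s/s0)/μ) = (s/s0)^(−θ/μ), a Pareto tail with exponent θ/μ. A small simulation (all parameter values assumed for illustration) confirms this:

```python
import math
import random

def simulate_sizes(theta=2.0, mu=1.0, s0=1.0, n=200000, seed=7):
    """Firm age ~ Exp(theta); exponential (Gibrat-style) growth gives
    size s0 * exp(mu * age), whose tail is Pareto with exponent theta/mu."""
    rng = random.Random(seed)
    return [s0 * math.exp(mu * rng.expovariate(theta)) for _ in range(n)]

def tail_exponent(sizes, s=2.0, s0=1.0):
    """Estimate the Pareto exponent from the empirical CCDF at threshold s:
    P(S > s) = (s/s0)^(-a)  =>  a = -ln(CCDF) / ln(s/s0)."""
    ccdf = sum(x > s for x in sizes) / len(sizes)
    return -math.log(ccdf) / math.log(s / s0)
```

With θ = 2 and μ = 1 the estimated exponent should sit near θ/μ = 2, which is the exponential-age explanation of the Pareto size distribution in one line of algebra.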

  9. New model of Brazilian electric sector: implications of sugarcane bagasse on the distributed generation process

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Celso E.L. de; Rabi, Jose A. [Universidade de Sao Paulo (GREEN/FZEA/USP), Pirassununga, SP (Brazil). Fac. de Zootecnia e Engenharia de Alimentos. Grupo de Pesquisa em Reciclagem, Eficiencia Energetica e Simulacao Numerica], Emails: celsooli@usp.br, jrabi@usp.br; Halmeman, Maria Cristina [Universidade Estadual Paulista (FCA/UNESP), Botucatu, SP (Brazil). Fac. de Ciencias Agronomicas

    2008-07-01

    Distributed generation has become an alternative given the lack of resources for large energy projects and recent events that have changed the geopolitical panorama. The latter have increased oil prices, so that unconventional sources have become more and more feasible, an issue widely discussed in Europe and in the USA. Brazil has followed this world trend by restructuring the electrical sector as well as the major related institutions, from generation to commercialization and sector regulation, while local legislation has enabled the growth of distributed generation. It regulates the role of the independent energy producer so as to allow direct business between the producer and a large consumer, which is an essential step in enlarging the energy market. Sugarcane bagasse has been used to produce both electric energy and steam, and this paper analyzes and discusses the major implications of a new model for the Brazilian electric sector based on the use of sugarcane bagasse as a means to increase distributed generation, with particular concern for the commercialization of excess energy. (author)

  10. Number size distribution of fine and ultrafine fume particles from various welding processes.

    Science.gov (United States)

    Brand, Peter; Lenz, Klaus; Reisgen, Uwe; Kraus, Thomas

    2013-04-01

    Studies in the field of environmental epidemiology indicate that for the adverse effects of inhaled particles not only particle mass but also particle size is crucial. Ultrafine particles with diameters below 100 nm are of special interest, since these particles have a high surface-area-to-mass ratio and properties that differ from those of larger particles. In this paper, particle size distributions of various welding and joining techniques were measured close to the welding process using a fast mobility particle sizer (FMPS). It turned out that welding processes with high mass emission rates (manual metal arc welding, metal active gas welding, metal inert gas welding, metal inert gas soldering, and laser welding) show mainly agglomerated particles with diameters above 100 nm and only few particles in the size range below 50 nm (10 to 15%). Welding processes with low mass emission rates (tungsten inert gas welding and resistance spot welding) emit predominantly ultrafine particles with diameters well below 100 nm. This finding can be explained by considerably faster agglomeration in processes with high mass emission rates. Although mass emission is low for tungsten inert gas welding and resistance spot welding, the low particle size of the fume means that these processes cannot be labeled as toxicologically irrelevant and should be investigated further.

  11. How Are Distributed Groups Affected by an Imposed Structuring of their Decision-Making Process?

    DEFF Research Database (Denmark)

    Lundell, Anders Lorentz; Hertzum, Morten

    2011-01-01

    Groups often suffer from ineffective communication and decision making. This experimental study compares distributed groups solving a preference task with support from either a communication system or a system providing both communication and a structuring of the decision-making process. Results show that groups using the latter system spend more time solving the task, spend more of their time on solution analysis, spend less of their time on disorganized activity, and arrive at task solutions with less extreme preferences. Thus, the type of system affects the decision-making process as well as its outcome. Notably, the task solutions arrived at by the groups using the system that imposes a structuring of the decision-making process show limited correlation with the task solutions suggested by the system on the basis of the groups' explicitly stated criteria. We find no differences in group...

  12. Alpha spectrometric characterization of process-related particle size distributions from active particle sampling at the Los Alamos National Laboratory uranium foundry

    Energy Technology Data Exchange (ETDEWEB)

    Plionis, Alexander A [Los Alamos National Laboratory; Peterson, Dominic S [Los Alamos National Laboratory; Tandon, Lav [Los Alamos National Laboratory; Lamont, Stephen P [Los Alamos National Laboratory

    2009-01-01

    Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid nondestructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine whether a process particle size distribution is lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.
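
One standard way to reduce cascade-impactor stage data of this kind to an activity median aerodynamic diameter (AMAD) is a lognormal fit on a probit scale. The sketch below is illustrative, not the laboratory's actual procedure: the stage cut-points and the underlying AMAD/GSD values are made up, and the synthetic cumulative activity fractions are generated from them.

```python
import math
from statistics import NormalDist

ND = NormalDist()

def fit_amad(cuts_um, cum_fraction_below):
    """Fit a lognormal activity-size distribution on a probit scale:
    z = (ln d - ln AMAD) / ln GSD, so regressing z on ln d gives
    AMAD = exp(-intercept/slope) and GSD = exp(1/slope)."""
    xs = [math.log(d) for d in cuts_um]
    zs = [ND.inv_cdf(f) for f in cum_fraction_below]
    n = len(xs)
    mx, mz = sum(xs) / n, sum(zs) / n
    slope = (sum((x - mx) * (z - mz) for x, z in zip(xs, zs))
             / sum((x - mx) ** 2 for x in xs))
    intercept = mz - slope * mx
    return math.exp(-intercept / slope), math.exp(1.0 / slope)

# Synthetic check: stage data generated from an assumed AMAD = 5 um, GSD = 2.
cuts = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
fractions = [ND.cdf((math.log(d) - math.log(5.0)) / math.log(2.0)) for d in cuts]
amad, gsd = fit_amad(cuts, fractions)
```

With noise-free synthetic data the fit recovers the assumed parameters exactly; with real stage activities, departures from the fitted line are what reveal the bimodal or trimodal shapes the abstract mentions.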

  13. Distribution ratios on Dowex 50W resins of metal leached in the caron nickel recovery process

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, B.A.; Metsa, J.C.; Mullins, M.E.

    1980-05-01

    Pressurized ion exchange on Dowex 50W-X8 and 50W-X12 resins was investigated using elution techniques to determine distribution ratios for copper, nickel, and cobalt complexes contained in ammonium carbonate solution, a mixture which approximates the waste liquor from the Caron nickel recovery process. Results were determined for different feed concentrations, as well as for different concentrations and pH values of the ammonium carbonate eluant. Distribution ratios were compared with those previously obtained from a continuous annular chromatographic system. Separation of copper and nickel was not conclusively observed at any of the conditions examined.

  14. Distribution ratios on Dowex 50W resins of metal leached in the caron nickel recovery process

    International Nuclear Information System (INIS)

    Reynolds, B.A.; Metsa, J.C.; Mullins, M.E.

    1980-05-01

    Pressurized ion exchange on Dowex 50W-X8 and 50W-X12 resins was investigated using elution techniques to determine distribution ratios for copper, nickel, and cobalt complexes contained in ammonium carbonate solution, a mixture which approximates the waste liquor from the Caron nickel recovery process. Results were determined for different feed concentrations, as well as for different concentrations and pH values of the ammonium carbonate eluant. Distribution ratios were compared with those previously obtained from a continuous annular chromatographic system. Separation of copper and nickel was not conclusively observed at any of the conditions examined

  15. Stronger Consistency and Semantics for Low-Latency Geo-Replicated Storage

    Science.gov (United States)

    2013-06-01

  16. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise.

    Science.gov (United States)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.

  17. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise

    Science.gov (United States)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
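
The diffusion entropy analysis described in the abstract can be sketched for the simplest case, ordinary Brownian motion, where the scaling exponent should come out near δ = 0.5. This is an illustrative stdlib-Python sketch with assumed parameters, not the authors' code.

```python
import math
import random
from collections import Counter

def walks(n_walkers, t_max, seed=3):
    """Generate Gaussian random walks (discrete Brownian motion)."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_walkers):
        x, path = 0.0, []
        for _ in range(t_max):
            x += rng.gauss(0.0, 1.0)
            path.append(x)
        paths.append(path)
    return paths

def shannon_entropy(values, bin_width=0.5):
    """Shannon entropy of the displacement PDF, estimated from a histogram."""
    counts = Counter(int(v // bin_width) for v in values)
    n = sum(counts.values())
    return -sum(c / n * math.log(c / n) for c in counts.values())

def dea_exponent(paths, times):
    """Diffusion entropy analysis: S(t) ~ A + delta*ln(t); fit delta by
    least squares of the entropy against ln(t) over the chosen scales."""
    xs = [math.log(t) for t in times]
    ys = [shannon_entropy([p[t - 1] for p in paths]) for t in times]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

For Lévy-stable noise the same procedure would yield δ = 1/μ instead of 0.5, which is exactly why the method distinguishes heavy-tailed from Gaussian diffusion without relying on the (possibly divergent) global width.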

  18. Online learning of a Dirichlet process mixture of Beta-Liouville distributions via variational inference.

    Science.gov (United States)

    Fan, Wentao; Bouguila, Nizar

    2013-11-01

    A large class of problems can be formulated in terms of the clustering process. Mixture models are an increasingly important tool in statistical pattern recognition and for analyzing and clustering complex data. Two challenging aspects that should be addressed when considering mixture models are how to choose between a set of plausible models and how to estimate the model's parameters. In this paper, we address both problems simultaneously within a unified online nonparametric Bayesian framework that we develop to learn a Dirichlet process mixture of Beta-Liouville distributions (i.e., an infinite Beta-Liouville mixture model). The proposed infinite model is used for the online modeling and clustering of proportional data, for which the Beta-Liouville mixture has been shown to be effective. We propose a principled approach for approximating the intractable model posterior distribution by a tractable one, which we develop, such that all the involved mixture parameters can be estimated simultaneously and effectively in closed form. This is done through variational inference, which enjoys important advantages such as the handling of unobserved attributes and the prevention of under- and overfitting; we explain this in detail. The effectiveness of the proposed work is evaluated on three challenging real applications, namely facial expression recognition, behavior modeling and recognition, and dynamic texture clustering.
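
The Dirichlet process underlying such an infinite mixture is commonly represented by the stick-breaking construction, π_i = v_i ∏_{j<i}(1 − v_j) with v_i ~ Beta(1, α). A minimal sketch of that construction (the truncation level and concentration α are assumed for illustration, and this is the generic DP prior, not the paper's Beta-Liouville variational scheme):

```python
import random

def stick_breaking_weights(alpha, k, rng):
    """Stick-breaking construction of Dirichlet process mixture weights:
    break off a Beta(1, alpha) fraction of the remaining stick at each step."""
    weights, remaining = [], 1.0
    for _ in range(k):
        v = rng.betavariate(1.0, alpha)
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights

rng = random.Random(42)
pi = stick_breaking_weights(alpha=2.0, k=200, rng=rng)
```

Because the leftover stick shrinks geometrically, a modest truncation level already captures essentially all of the mass, which is what makes truncated variational approximations of the infinite mixture tractable.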

  19. Efficient adsorption of Hg (II) ions in water by activated carbon modified with melamine

    Science.gov (United States)

    Qin, Hangdao; Meng, Jingling; Chen, Jing

    2018-04-01

    Removal of Hg(II) ions from industrial wastewater is important in water treatment, and adsorption is an efficient treatment process. Activated carbon (AC) was modified with melamine, which introduced nitrogen-containing functional groups onto the AC surface. The original AC and the melamine-modified activated carbon (ACM) were characterized by elemental analysis, N2 adsorption-desorption, determination of the pH of the point of zero charge (pHpzc) and X-ray photoelectron spectroscopy (XPS), and their performance in the adsorption of Hg(II) ions was investigated. The Langmuir model fitted the experimental equilibrium isotherm data well. ACM showed the higher Hg(II) adsorption capacity, an increase of more than 1.8 times compared with the original AC. Moreover, ACM showed a wider pH range for maximum adsorption than the parent AC.
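
A Langmuir fit of the kind mentioned in the abstract can be sketched via the standard linearization C/q = C/q_max + 1/(K·q_max). The code below is illustrative: the q_max and K values are made up for the synthetic data, not the paper's measured parameters.

```python
def langmuir_q(c, q_max, k):
    """Langmuir isotherm: q = q_max * K * C / (1 + K * C)."""
    return q_max * k * c / (1.0 + k * c)

def fit_langmuir(conc, uptake):
    """Fit q_max and K from the linearized form C/q = C/q_max + 1/(K*q_max)
    by ordinary least squares on the points (C, C/q)."""
    ys = [c / q for c, q in zip(conc, uptake)]
    n = len(conc)
    mx, my = sum(conc) / n, sum(ys) / n
    slope = (sum((c - mx) * (y - my) for c, y in zip(conc, ys))
             / sum((c - mx) ** 2 for c in conc))
    intercept = my - slope * mx
    return 1.0 / slope, slope / intercept   # q_max, K

# Synthetic isotherm from assumed q_max = 120 mg/g and K = 0.05 L/mg.
conc = [10.0, 25.0, 50.0, 100.0, 200.0, 400.0]
uptake = [langmuir_q(c, 120.0, 0.05) for c in conc]
q_max, k = fit_langmuir(conc, uptake)
```

On noise-free data the linearized fit recovers the parameters exactly; with real isotherm measurements, the quality of this straight-line fit is what supports the statement that the Langmuir model describes the data well.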

  20. Optimization of business processes of a distribution network operator. Evaluation and control; Optimierung der Geschaeftsprozesse von Verteilungsnetzbetreibern. Bewerten und steuern

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Philipp; Katzfey, Joerg [E-Bridge Consulting GmbH, Bonn (Germany)

    2012-09-10

    The assessment of business processes of a distribution network operator is often primarily cost-oriented. In order to optimize its own processes reasonably, a holistic process management has to be used that measures the costs incurred, the quality of implementation, and the quality of fulfillment of the planning.

  1. Containerless processing of YBa2Cu3O7-δ superconductors

    International Nuclear Information System (INIS)

    Olive, J.R.; Hofmeister, W.H.; Bayuzick, R.J.; Carro, G.; McHugh, J.P.; Hopkins, R.H.; Vlasse, M.

    1993-01-01

    Containerless processing of YBa2Cu3O7-δ was performed using drop tube and aero-acoustic levitation techniques. In drop tube experiments, two solidification microstructures developed which corresponded to the degree of melting. In aero-acoustic levitation experiments, three solidification microstructures developed. One microstructure was the result of incomplete homogenization of the melt. The second was due to slight undercooling into the Y2O3 + liquid region of the phase diagram upon which primary Y2O3 dendrites formed. The third was due to much deeper undercooling. In this case, the primary solidification structure consisted of dendrites of tetragonal 1:2:3 and some other interdendritic phase. Subsequent to solidification processing, these samples were annealed to single phase 1:2:3 with orthorhombic symmetry. SQUID magnetometer measurements indicated a sharp superconducting transition at approximately 85 K. Magnetic Jc values, calculated using the Bean critical state model, indicated that the deeply undercooled and annealed samples had critical current densities on the order of 10^4 A/cm^2

  2. Prediction of Proper Temperatures for the Hot Stamping Process Based on the Kinetics Models

    Science.gov (United States)

    Samadian, P.; Parsa, M. H.; Mirzadeh, H.

    2015-02-01

    Nowadays, the application of kinetics models for predicting the microstructures of steels subjected to thermo-mechanical treatments has increased, to minimize direct experimentation, which is costly and time-consuming. In the current work, the final microstructures of AISI 4140 steel sheets after the hot stamping process were predicted using the Kirkaldy and Li kinetics models combined with new thermodynamically based models in order to determine the appropriate process temperatures. In this way, the effect of deformation during hot stamping on the Ae3, Acm, and Ae1 temperatures was considered, and then the equilibrium volume fractions of phases at different temperatures were calculated. Moreover, the ferrite transformation rate equations of the Kirkaldy and Li models were modified by a term proposed by Åkerström to consider the influence of plastic deformation. Results showed that the modified Kirkaldy model is satisfactory for determining appropriate austenitization temperatures for the hot stamping of AISI 4140 steel sheets, as its microstructure predictions agree well with the experimental observations.

  3. Calibration process of highly parameterized semi-distributed hydrological model

    Science.gov (United States)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that has not been researched enough. Calibration is the procedure of determining the parameters of a model that are not known well enough. Input and output variables and the mathematical model expressions are known, while some parameters are unknown and are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that give the modeller no possibility to manage the process, and the results are often not the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST, a parameter-estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by an expert, in proportion to the expert's knowledge, affects the outcome of the inversion procedure and achieves better results than leaving the procedure to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena such as karstic, alluvial, and forest areas. This step requires the geological, meteorological, hydraulic, and hydrological knowledge of the modeller. The second step is to set initial parameter values to their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group
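    The coupling of a model executable with a parameter-estimation tool can be sketched in miniature. The toy below stands in for the HBV-light-CLI/PEST pair: a hypothetical two-parameter reservoir model is calibrated with SciPy's least_squares, where bounds play the role of expert-set parameter ranges and peak flows receive extra weight, as the abstract recommends for flood studies. None of the model or parameter values come from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 3.0, size=100)          # synthetic rainfall forcing

def toy_model(params, rain):
    """A two-parameter linear-reservoir stand-in for HBV (illustrative)."""
    k, s0 = params
    storage, flow = s0, []
    for r in rain:
        storage += r
        q = k * storage                        # outflow proportional to storage
        storage -= q
        flow.append(q)
    return np.asarray(flow)

observed = toy_model([0.3, 10.0], rain)        # "observations" from known truth

def residuals(params):
    res = toy_model(params, rain) - observed
    # Up-weight peak flows, mimicking an expert-defined observation group
    weights = np.where(observed > np.percentile(observed, 90), 3.0, 1.0)
    return weights * res

fit = least_squares(residuals, x0=[0.1, 5.0], bounds=([0.01, 0.0], [1.0, 50.0]))
```

In the real workflow, `toy_model` would be a call to the HBV-light-CLI executable and the weights and bounds would come from PEST control files.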

  4. Distributed collaborative processing in wireless sensor networks with application to target localization and beamforming

    OpenAIRE

    Béjar Haro, Benjamín

    2013-01-01

    Abstract The proliferation of wireless sensor networks and the variety of envisioned applications associated with them has motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of the researchers is that of target localization where the nodes of the network try to estimate the position of an unknown target that lies within its coverage area. Particularly challenging is the problem of es...

  5. Interfacial characteristics and dielectric properties of Ba0.65Sr0.35TiO3 thin films

    International Nuclear Information System (INIS)

    Quan Zuci; Zhang Baishun; Zhang Tianjin; Zhao Xingzhong; Pan Ruikun; Ma Zhijun; Jiang Juan

    2008-01-01

    Ba0.65Sr0.35TiO3 (BST) thin films were deposited on Pt/Ti/SiO2/Si substrates by the radio-frequency magnetron sputtering technique. X-ray photoelectron spectroscopy (XPS) depth profiling data show that each elemental component of the BST film possesses a uniform distribution from the outermost surface to the subsurface, but an obvious Ti-rich region is present at the BST/Pt interface because Ti4+ cations are partially reduced to form amorphous oxides such as TiOx. The current density is lower than 10^-7 A/cm^2 at 1.23 V and lower than 5.66 x 10^-6 A/cm^2 at 2.05 V, and the breakdown strength is above 3.01 x 10^5 V/cm

  6. Numerical simulation of the laser welding process for the prediction of temperature distribution on welded aluminium aircraft components

    Science.gov (United States)

    Tsirkas, S. A.

    2018-03-01

    The present investigation focuses on the modelling of the temperature field in aluminium aircraft components welded by a CO2 laser. A three-dimensional finite element model has been developed to simulate the laser welding process and predict the temperature distribution in T-joint laser-welded plates with fillet material. The simulation of the laser beam welding process was performed using a nonlinear heat transfer analysis, based on a keyhole formation model analysis. The model employs the element "birth and death" technique in order to simulate the weld fillet. Various phenomena associated with welding, such as temperature-dependent material properties and heat losses through convection and radiation, were accounted for in the model. The materials considered were 6056-T78 and 6013-T4 aluminium alloys, commonly used for aircraft components. The temperature distribution during the laser welding process has been calculated numerically and validated by experimental measurements at different locations on the welded structure. The numerical results are in good agreement with the experimental measurements.

  7. Superclusters and hadronic multiplicity distributions

    International Nuclear Information System (INIS)

    Shih, C.C.; Carruthers, P.

    1986-01-01

    The multiplicity distribution is expressed in terms of supercluster production in hadronic processes at high energy. This process creates unstable clusters at intermediate stages and hadrons in the final stage. It includes Poisson-transform distributions (with the partially coherent distribution as a special case) and is very flexible for phenomenological analyses. The associated Koba, Nielsen, and Olesen limit and the behavior of cumulant moments are analyzed in detail for finite and/or infinite cluster size and particle size per cluster. In general, a supercluster distribution does not need to be equivalent to a negative binomial distribution to fit experimental data well. Furthermore, the requirement of such equivalence leads to many solutions in which the average size of the cluster is not logarithmic: e.g., it may show a power behavior instead. Superclustering is defined as a two- or multi-stage process underlying observed global multiplicity distributions. At the first stage of the production process, individual clusters are produced according to a given statistical law. For example, the clustering distribution may be described by partially coherent (or even sub-Poissonian) distribution models. At the second stage, the clusters are considered as the sources of particle production. The corresponding distribution may then be as general as the clustering distribution just mentioned. 8 refs

  8. Distributed Parallel Architecture for "Big Data"

    Directory of Open Access Journals (Sweden)

    Catalin BOJA

    2012-01-01

    Full Text Available This paper is an extension of the "Distributed Parallel Architecture for Storing and Processing Large Datasets" paper presented at the WSEAS SEPADS'12 conference in Cambridge. In its original version, the paper went over the benefits of using a distributed parallel architecture to store and process large datasets. This paper analyzes the problem of storing, processing and retrieving meaningful insight from petabytes of data. It provides a survey of current distributed and parallel data processing technologies and, based on them, proposes an architecture that can be used to solve the analyzed problem. In this version, more emphasis is put on distributed file systems and the ETL processes involved in a distributed environment.

  9. Separating electroweak and strong interactions in Drell-Yan processes at LHC: leptons angular distributions and reference frames

    International Nuclear Information System (INIS)

    Richter-Was, E.; Was, Z.

    2016-01-01

    Among the physics goals of LHC experiments, precision tests of the Standard Model in the Strong and Electroweak sectors play an important role. Because of the nature of proton-proton processes, observables based on the measurement of the direction and energy of leptons provide the most precise signatures. In the present paper, we concentrate on the angular distribution of Drell-Yan process leptons in the lepton-pair rest frame. The vector nature of the intermediate state imposes that the distributions are, to good precision, described by spherical polynomials of at most second order. We show that with the proper choice of coordinate frames, only one coefficient in this polynomial decomposition remains sizable, even in the presence of one or two high-pT jets. The necessary stochastic choice of frames relies on probabilities independent of any coupling constants. This remains true when one or two partons accompany the lepton pairs. In this way, electroweak effects can be better separated from strong interaction ones, for the benefit of the interpretation of the measurements. Our study exploits properties of single-gluon-emission matrix elements which are clearly visible if a conveniently chosen form of their representation is used. We rely also on distributions obtained from matrix-element-based Monte Carlo generated samples of events with two leptons and up to two additional partons in test samples. The partons of the incoming colliding protons are distributed according to PDFs and are strictly collinear with the corresponding beams. (orig.)
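    The second-order polynomial description of the lepton angular distribution can be illustrated numerically. The sketch below samples cos(theta) from the leading-order shape p(c) proportional to 1 + A*c^2 with A = 1, then recovers the coefficient from the second moment; this one-coefficient form is a deliberate simplification of the full spherical-polynomial decomposition discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_costheta(n):
    """Rejection-sample cos(theta) from p(c) ~ 1 + c^2 on [-1, 1],
    the leading-order Drell-Yan lepton angular distribution."""
    out = []
    while len(out) < n:
        c = rng.uniform(-1.0, 1.0, size=n)
        u = rng.uniform(0.0, 2.0, size=n)     # envelope: max of 1 + c^2 is 2
        out.extend(c[u < 1.0 + c**2])
    return np.asarray(out[:n])

c = sample_costheta(200_000)
# Moment estimator: for p(c) ~ 1 + A*c^2, E[c^2] = (1/3 + A/5)/(1 + A/3)
m2 = np.mean(c**2)
A_hat = (m2 - 1.0 / 3.0) / (1.0 / 5.0 - m2 / 3.0)
```

With 200,000 events, the estimator returns A close to the generating value of 1, illustrating how an angular coefficient is extracted from a lepton sample.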

  10. Identification and verification of critical performance dimensions. Phase 1 of the systematic process redesign of drug distribution

    NARCIS (Netherlands)

    Colen, H.B.B.; Neef, C.; Schuring, R.W.

    2003-01-01

    Background: Worldwide, patient safety has become a major social policy problem for healthcare organisations. As in other organisations, the patients in our hospital also suffer from an inadequate distribution process, as becomes clear from incident reports involving medication errors. Medisch

  11. Calpionellid distribution and microfacies across the Jurassic/ Cretaceous boundary in western Cuba (Sierra de los Órganos)

    Science.gov (United States)

    López-Martínez, Rafael; Barragán, Ricardo; Reháková, Daniela; Cobiella-Reguera, Jorge Luis

    2013-06-01

    A detailed bed-by-bed sampled stratigraphic section of the Guasasa Formation in the Rancho San Vicente area of the "Sierra de los Órganos", western Cuba, provides well-supported evidence about facies and calpionellid distribution across the Jurassic/Cretaceous boundary. These new data allowed the definition of an updated and sound calpionellid biozonation scheme for the section. In this scheme, the drowning event of a carbonate platform displayed by the facies of the San Vicente Member, the lowermost unit of the section, is dated as Late Tithonian, Boneti Subzone. The Jurassic/Cretaceous boundary was recognized within the facies of the overlying El Americano Member on the basis of the acme of Calpionella alpina Lorenz. The boundary is placed nearly six meters above the contact between the San Vicente and the El Americano Members, in a facies linked to a sea-level drop. The recorded calpionellid bioevents should allow correlations of the Cuban biozonation scheme herein proposed, with other previously published schemes from distant areas of the Tethyan Domain.

  12. Seasonal and spatial evolution of trihalomethanes in a drinking water distribution system according to the treatment process.

    Science.gov (United States)

    Domínguez-Tello, A; Arias-Borrego, A; García-Barrera, Tamara; Gómez-Ariza, J L

    2015-11-01

    This paper comparatively shows the influence of four water treatment processes on the formation of trihalomethanes (THMs) in a water distribution system. The study was performed from February 2005 to January 2012 with analytical data from 600 samples taken at the Aljaraque water treatment plant (WTP) and 16 locations along the water distribution system (WDS) in the region of Andévalo and the coast of Huelva (southwest Spain), a region with significant seasonal and population changes. The comparison of results for the four different processes studied indicated a clear link between the treatment process and the formation of THMs along the WDS. The most effective treatment process is preozonation and activated carbon filtration (P3), which is also the most stable under summer temperatures. Experiments also show low levels of THMs with the conventional process of preoxidation with potassium permanganate (P4), delaying the chlorination to the end of the WTP; however, this simple and economical treatment process is less effective and less stable than P3. In this study, strong seasonal variations were observed (THM levels increased by a factor of 1.17 to 1.85 from winter to summer) as well as strong spatial variation (a factor of 1.1 to 1.7 from the WTP to the end points of the WDS), which largely depends on the treatment process applied. There was also a strong correlation between THM levels and water temperature, contact time and pH. On the other hand, it was found that THM formation is not proportional to the applied chlorine dose in the treatment process, but there is a direct relationship with the accumulated dose of chlorine. Finally, predictive models based on multiple linear regression are proposed for each treatment process.
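    A multiple-linear-regression model of the kind proposed in the abstract can be sketched with ordinary least squares. The predictors follow the abstract (temperature, contact time, pH), but the ranges, coefficients and noise level below are hypothetical, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical predictors: water temperature (deg C), contact time (h), pH
n = 200
X = np.column_stack([
    rng.uniform(10, 30, n),      # temperature
    rng.uniform(1, 48, n),       # contact time
    rng.uniform(6.5, 8.5, n),    # pH
])
true_beta = np.array([1.8, 0.6, 5.0])     # illustrative coefficients
thm = 10.0 + X @ true_beta + rng.normal(0.0, 2.0, n)   # synthetic THM levels

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, thm, rcond=None)
```

The fitted `coef` recovers the intercept and the three slopes from the noisy synthetic data; with real monitoring data, one such regression would be fitted per treatment process.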

  13. Assignment of probability distributions for parameters in the 1996 performance assessment for the Waste Isolation Pilot Plant. Part 1: description of process

    International Nuclear Information System (INIS)

    Rechard, Rob P.; Tierney, Martin S.

    2005-01-01

    A managed process was used to consistently and traceably develop probability distributions for parameters representing epistemic uncertainty in four preliminary and the final 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP). The key to the success of the process was the use of a three-member team consisting of a Parameter Task Leader, PA Analyst, and Subject Matter Expert. This team, in turn, relied upon a series of guidelines for selecting distribution types. The primary function of the guidelines was not to constrain the actual process of developing a parameter distribution but rather to establish a series of well-defined steps where recognized methods would be consistently applied to all parameters. An important guideline was to use a small set of distributions satisfying the maximum entropy formalism. Another important guideline was the consistent use of the log transform for parameters with large ranges (i.e. maximum/minimum > 10^3). A parameter development team assigned 67 probability density functions (PDFs) in the 1989 PA and 236 PDFs in the 1996 PA using these and other guidelines described
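    The log-transform guideline can be illustrated directly: when only a range is known, the maximum-entropy choice is a uniform density, applied in log space when the range spans more than three orders of magnitude. The parameter range below is hypothetical, not a WIPP value.

```python
import numpy as np

rng = np.random.default_rng(4)

def assign_range_pdf(lo, hi, size):
    """Maximum-entropy choice when only a range is known: uniform.
    Per the guideline, switch to a log-uniform density when the range
    spans more than three orders of magnitude (max/min > 10^3)."""
    if hi / lo > 1e3:
        return np.exp(rng.uniform(np.log(lo), np.log(hi), size))
    return rng.uniform(lo, hi, size)

# A permeability-like parameter spanning five decades (hypothetical range)
samples = assign_range_pdf(1e-14, 1e-9, 100_000)
decades = np.floor(np.log10(samples))
counts = np.unique(decades, return_counts=True)[1]
```

Under the log-uniform assignment each of the five decades receives roughly equal probability mass, which is the intended behaviour for such wide-ranging parameters.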

  14. A novel method for determination of particle size distribution in-process

    Science.gov (United States)

    Salaoru, Tiberiu A.; Li, Mingzhong; Wilkinson, Derek

    2009-07-01

    The pharmaceutical and fine chemicals industries are strongly concerned with the manufacture of high value-added speciality products, often in solid form. On-line measurement of solid particle size is vital for reliable control of product properties. The established techniques, such as laser diffraction or spectral extinction, require dilution of the process suspension when measuring from typical manufacturing streams because of their high concentration. Dilution to facilitate measurement can result in changes of both the size and form of particles, especially during production processes such as crystallisation. In spectral extinction, the degree of light scattering and absorption by a suspension is measured. However, for concentrated suspensions the interpretation of light extinction measurements is difficult because of multiple scattering and inter-particle interaction effects, and at higher concentrations extinction is essentially total, so the technique can no longer be applied. At the same time, scattering by a dispersion also causes a change of phase, which affects the real component of the suspension's effective refractive index, itself a function of particle size and of the particle and dispersant refractive indices. In this work, a novel prototype instrument has been developed to measure particle size distribution in concentrated suspensions in-process by measuring the suspension refractive index at incidence angles near the onset of total internal reflection. Using this technique, the light beam does not pass through the suspension being measured, so suspension turbidity does not impair the measurement.

  15. The probability distribution of maintenance cost of a system affected by the gamma process of degradation: Finite time solution

    International Nuclear Information System (INIS)

    Cheng, Tianjin; Pandey, Mahesh D.; Weide, J.A.M. van der

    2012-01-01

    The stochastic gamma process has been widely used to model uncertain degradation in engineering systems and structures. The optimization of the condition-based maintenance (CBM) policy is typically based on the minimization of the asymptotic cost rate. In the financial planning of a maintenance program, however, a more accurate prediction interval for the cost is needed for prudent decision making. The prediction interval cannot be estimated unless the probability distribution of cost is known. In this context, the asymptotic cost rate has a limited utility. This paper presents the derivation of the probability distribution of maintenance cost, when the system degradation is modelled as a stochastic gamma process. A renewal equation is formulated to derive the characteristic function, then the discrete Fourier transform of the characteristic function leads to the complete probability distribution of cost in a finite time setting. The proposed approach is useful for a precise estimation of prediction limits and optimization of the maintenance cost.
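    The paper derives the exact finite-time cost distribution via the characteristic function and a discrete Fourier transform; the same distribution can also be approximated by Monte Carlo as a cross-check. The sketch below simulates a gamma degradation process under a simple inspect/replace condition-based policy; all rates, thresholds and costs are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(5)

def cbm_cost(horizon=50.0, dt=1.0, shape_rate=0.5, scale=1.0,
             pm_level=6.0, fail_level=10.0, c_pm=1.0, c_cm=5.0):
    """One Monte Carlo draw of the finite-horizon maintenance cost for a
    gamma-process degradation model under a simple condition-based
    policy (an illustrative stand-in for the paper's exact DFT solution)."""
    t, x, cost = 0.0, 0.0, 0.0
    while t < horizon:
        # Stationary gamma process: independent gamma-distributed increments
        x += rng.gamma(shape_rate * dt, scale)
        t += dt
        if x >= fail_level:          # failure found: corrective replacement
            cost += c_cm
            x = 0.0
        elif x >= pm_level:          # defect found: preventive replacement
            cost += c_pm
            x = 0.0
    return cost

costs = np.array([cbm_cost() for _ in range(5000)])
# The empirical distribution yields prediction limits, e.g. a 90% upper bound
q90 = np.quantile(costs, 0.9)
```

Unlike the asymptotic cost rate, the empirical distribution of `costs` directly supports the prediction intervals the abstract argues are needed for financial planning.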

  16. Influence of miscibility phenomenon on crystalline polymorph transition in poly(vinylidene fluoride)/acrylic rubber/clay nanocomposite hybrid.

    Science.gov (United States)

    Abolhasani, Mohammad Mahdi; Naebe, Minoo; Jalali-Arani, Azam; Guo, Qipeng

    2014-01-01

    In this paper, the intercalation of nanoclay in the miscible polymer blend of poly(vinylidene fluoride) (PVDF) and acrylic rubber (ACM) was studied. X-ray diffraction was used to investigate the formation of the nanoscale polymer blend/clay hybrid. Infrared spectroscopy and X-ray analysis revealed the coexistence of β and γ crystalline forms in the PVDF/clay nanocomposite, while the α crystalline form was found to be dominant in PVDF/ACM/clay miscible hybrids. The Flory-Huggins interaction parameter (B) was used to further explain the observed miscibility phenomenon. The B parameter was determined by combining the melting point depression and the binary interaction model. The estimated B values for the ternary PVDF/ACM/clay hybrid and the PVDF/ACM pair were all negative, showing both proper intercalation of the polymer melt into the nanoclay galleries and the good miscibility of the PVDF and ACM blend. The B value for the PVDF/ACM blend was almost the same as that measured for the PVDF/ACM/clay hybrid, suggesting that PVDF chains in nanocomposite hybrids interact with ACM chains and that the nanoclay in hybrid systems is wrapped by ACM molecules.

  17. Distributed temperature and distributed acoustic sensing for remote and harsh environments

    Science.gov (United States)

    Mondanos, Michael; Parker, Tom; Milne, Craig H.; Yeo, Jackson; Coleman, Thomas; Farhadiroushan, Mahmoud

    2015-05-01

    Advances in opto-electronics and associated signal processing have enabled the development of Distributed Acoustic and Temperature Sensors. Unlike systems relying on discrete optical sensors a distributed system does not rely upon manufactured sensors but utilises passive custom optical fibre cables resistant to harsh environments, including high temperature applications (600°C). The principle of distributed sensing is well known from the distributed temperature sensor (DTS) which uses the interaction of the source light with thermal vibrations (Raman scattering) to determine the temperature at all points along the fibre. Distributed Acoustic Sensing (DAS) uses a novel digital optical detection technique to precisely capture the true full acoustic field (amplitude, frequency and phase) over a wide dynamic range at every point simultaneously. A number of signal processing techniques have been developed to process a large array of acoustic signals to quantify the coherent temporal and spatial characteristics of the acoustic waves. Predominantly these systems have been developed for the oil and gas industry to assist reservoir engineers in optimising the well lifetime. Nowadays these systems find a wide variety of applications as integrity monitoring tools in process vessels, storage tanks and piping systems offering the operator tools to schedule maintenance programs and maximize service life.
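    The Raman-based DTS principle mentioned above reduces to inverting the anti-Stokes/Stokes intensity ratio for temperature. The sketch below uses the standard two-point-calibration form of that relation; the effective phonon energy and the reference values are illustrative assumptions, not instrument constants.

```python
import numpy as np

K_B = 1.380649e-23            # Boltzmann constant, J/K

def dts_temperature(ratio, ratio_ref, T_ref, delta_E=3.3e-21):
    """Invert the Raman anti-Stokes/Stokes ratio for temperature using
    R(T)/R(T_ref) = exp(-delta_E/K_B * (1/T - 1/T_ref)).
    delta_E (the effective phonon energy) is an illustrative assumption."""
    inv_T = 1.0 / T_ref - (K_B / delta_E) * np.log(ratio / ratio_ref)
    return 1.0 / inv_T

# Round trip: synthesise the ratio at 350 K, then recover the temperature
T_true, T_ref, R_ref, delta_E = 350.0, 300.0, 0.8, 3.3e-21
R = R_ref * np.exp(-delta_E / K_B * (1.0 / T_true - 1.0 / T_ref))
T_est = dts_temperature(R, R_ref, T_ref)
```

In a real DTS interrogator this inversion is applied at every point along the fibre, with the reference ratio taken from a temperature-controlled calibration section.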

  18. Conditions for the existence of quasi-stationary distributions for birth–death processes with killing

    OpenAIRE

    van Doorn, Erik A.

    2012-01-01

    We consider birth-death processes on the nonnegative integers, where $\{1,2,\ldots\}$ is an irreducible class and $0$ an absorbing state, with the additional feature that a transition to state $0$ (killing) may occur from any state. Assuming that absorption at $0$ is certain, we are interested in additional conditions on the transition rates for the existence of a quasi-stationary distribution. Inspired by results of M. Kolb and D. Steinsaltz (Quasilimiting behaviour for one-dimensional diffusion...
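    For a finite truncation of such a process, a quasi-stationary distribution can be computed explicitly as the normalised left Perron eigenvector of the sub-generator restricted to the transient states. The rates below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Truncated birth-death process on {1, ..., N} with killing: from every
# state the chain may also jump to the absorbing state 0 at rate kappa.
N = 50
lam = np.full(N, 1.0)                 # birth rates, state i -> i+1
mu = 0.5 * np.arange(1, N + 1)        # death rates, state i -> i-1
kappa = np.full(N, 0.05)              # killing rates, state i -> 0

# Sub-generator restricted to {1, ..., N}; the death rate of state 1 and
# all killing rates leak to state 0, so they appear only on the diagonal.
Q = np.zeros((N, N))
for i in range(N):
    rate_out = mu[i] + kappa[i]
    if i < N - 1:
        Q[i, i + 1] = lam[i]
        rate_out += lam[i]
    if i > 0:
        Q[i, i - 1] = mu[i]
    Q[i, i] = -rate_out

# The quasi-stationary distribution is the normalised left eigenvector of
# Q for its dominant (real, negative) eigenvalue.
evals, evecs = np.linalg.eig(Q.T)
k = np.argmax(evals.real)
qsd = np.abs(evecs[:, k].real)
qsd /= qsd.sum()
decay = -evals[k].real                # rate of escape to absorption
```

Conditioned on survival, the law of the process converges to `qsd`, and the survival probability decays at rate `decay`, which is the finite-state analogue of the existence question studied in the paper.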

  19. Architecture of distributed picture archiving and communication systems for storing and processing high resolution medical images

    Science.gov (United States)

    Tokareva, Victoria

    2018-04-01

    New generation medicine demands a better quality of analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. It thus becomes urgent not only to develop advanced modern hardware, but also to implement special software infrastructure for using it in everyday clinical practice, the so-called Picture Archiving and Communication Systems (PACS). Developing distributed PACS is a challenging task in today's medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of distributed PACS as ergonomic and adapted to the needs of end users as possible.
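    The MapReduce decomposition proposed for the PACS server can be sketched without Hadoop: a map phase emits partial results per image tile and a reduce phase merges them. The tile format and the histogram task below are illustrative stand-ins for the DICOM image processing described in the paper.

```python
from collections import defaultdict
from itertools import chain

def map_tile(tile):
    """Map phase: emit (intensity-bin, count) pairs for one image tile.
    Tiles are plain lists of pixel values in this sketch; a real PACS
    deployment would read DICOM frames from HDFS instead."""
    counts = defaultdict(int)
    for pixel in tile:
        counts[pixel // 16] += 1          # 16-wide intensity bins
    return counts.items()

def reduce_bins(pairs):
    """Reduce phase: sum the partial counts per bin across all tiles."""
    histogram = defaultdict(int)
    for bin_id, count in pairs:
        histogram[bin_id] += count
    return dict(histogram)

tiles = [[0, 3, 17, 200], [5, 18, 250, 250]]
histogram = reduce_bins(chain.from_iterable(map_tile(t) for t in tiles))
```

Because each `map_tile` call touches only its own tile, the map phase parallelises across cluster nodes; Hadoop would handle the shuffle between the two phases.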

  20. Guest Editor's Introduction: Special section on dependable distributed systems

    Science.gov (United States)

    Fetzer, Christof

    1999-09-01

    silver bullet solution. Instead one has to apply a variety of engineering techniques [2]: fault-avoidance (minimize the occurrence of faults, e.g. by using a proper design process), fault-removal (remove faults before they occur, e.g. by testing), fault-evasion (predict faults by monitoring and reconfigure the system before failures occur), and fault-tolerance (mask and/or contain failures). Building a system from scratch is an expensive and time-consuming effort. To reduce the cost of building dependable distributed systems, one would choose to use commercial off-the-shelf (COTS) components whenever possible. The usage of COTS components has several potential advantages beyond minimizing costs. For example, through the widespread usage of a COTS component, design failures might be detected and fixed before the component is used in a dependable system. Custom-designed components have to mature without the widespread in-field testing of COTS components. COTS components have various potential disadvantages when used in dependable systems. For example, minimizing the time to market might lead to the release of components with inherent design faults (e.g. use of 'shortcuts' that only work most of the time). In addition, the components might be more complex than needed and, hence, potentially have more design faults than simpler components. However, given economic constraints and the ability to cope with some of the problems using fault-evasion and fault-tolerance, only for a small percentage of systems can one justify not using COTS components. Distributed systems built from current COTS components are asynchronous systems in the sense that there exists no a priori known bound on the transmission delay of messages or the execution time of processes. When designing a distributed algorithm, one would like to make sure (e.g. by testing or verification) that it is correct, i.e. satisfies its specification.
Many distributed algorithms make use of consensus (eventually all non

  1. A Scalable Infrastructure for Lidar Topography Data Distribution, Processing, and Discovery

    Science.gov (United States)

    Crosby, C. J.; Nandigam, V.; Krishnan, S.; Phan, M.; Cowart, C. A.; Arrowsmith, R.; Baru, C.

    2010-12-01

    High-resolution topography data acquired with lidar (light detection and ranging) technology have emerged as a fundamental tool in the Earth sciences, and are also being widely utilized for ecological, planning, engineering, and environmental applications. Collected from airborne, terrestrial, and space-based platforms, these data are revolutionary because they permit analysis of geologic and biologic processes at resolutions essential for their appropriate representation. Public domain lidar data collected by federal, state, and local agencies are a valuable resource to the scientific community; however, the data pose significant distribution challenges because of the volume and complexity of data that must be stored, managed, and processed. Lidar data acquisition may generate terabytes of data in the form of point clouds, digital elevation models (DEMs), and derivative products. This massive volume of data is often challenging to host for resource-limited agencies. Furthermore, these data can be technically challenging for users who lack appropriate software, computing resources, and expertise. The National Science Foundation-funded OpenTopography Facility (www.opentopography.org) has developed a cyberinfrastructure-based solution to enable online access to Earth science-oriented high-resolution lidar topography data, online processing tools, and derivative products. OpenTopography provides access to terabytes of point cloud data, standard DEMs, and Google Earth image data, all co-located with computational resources for on-demand data processing. The OpenTopography portal is built upon a cyberinfrastructure platform that utilizes a Services Oriented Architecture (SOA) to provide a modular system that is highly scalable and flexible enough to support the growing needs of the Earth science lidar community. 
OpenTopography strives to host and provide access to datasets as soon as they become available, and also to expose greater application level functionalities to

  2. Ketonization of Proline Residues in the Peptide Chains of Actinomycins by a 4-Oxoproline Synthase.

    Science.gov (United States)

    Semsary, Siamak; Crnovčić, Ivana; Driller, Ronja; Vater, Joachim; Loll, Bernhard; Keller, Ullrich

    2018-04-04

    X-type actinomycins (Acms) contain 4-hydroxyproline (Acm X0) or 4-oxoproline (Acm X2) in their β-pentapeptide lactone rings, whereas their α ring contains proline. We demonstrate that these Acms are formed through asymmetric condensation of Acm half molecules (Acm halves) containing proline with 4-hydroxyproline- or 4-oxoproline-containing Acm halves. In turn, we show, using an artificial Acm half analogue (PPL 1) with proline in its peptide chain, their conversion into the 4-hydroxyproline- and 4-oxoproline-containing Acm halves, PPL 0 and PPL 2, in mycelial suspensions of Streptomyces antibioticus. Two responsible genes of the Acm X biosynthetic gene cluster of S. antibioticus, saacmM and saacmN, encoding a cytochrome P450 monooxygenase (Cyp) and a ferredoxin, were identified. After coexpression in Escherichia coli, their gene products converted PPL 1 into PPL 0 and PPL 2 in vivo, as well as in situ in permeabilized cells of the transformed E. coli strain, in conjunction with the host-encoded ferredoxin reductase in an NADH (NADPH)-dependent manner. saAcmM has high sequence similarity to the Cyp107Z (Ema) family of Cyps, which can convert avermectin B1 into its keto derivative, 4''-oxoavermectin B1. Determination of the structure of saAcmM reveals high similarity to the Ema structure but with significant differences in the residues decorating their active sites, which defines saAcmM and its orthologues as a distinct new family of peptidylproline-ketonizing Cyps. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Preparation and properties of superconducting Bi-Sr-Ca-Cu-O materials by the alkoxide process

    International Nuclear Information System (INIS)

    Uchikawa, Fusaoki; Kobayashi, Toshio; Usami, Ryo; Yoshizaki, Kiyoshi

    1989-01-01

    Homogeneous starting solutions were synthesized using Bi, Sr, Ca and Cu alkoxides. Powders, thick films and gel fibers were prepared from the same solutions by controlling hydrolysis. The synthesized powder had a homogeneous particle size, and the fired powder showed good crystallization. The thick film coated on an MgO substrate using the synthesized sol solution had a smooth surface and a uniform distribution of the metal elements. The film showed c-axis orientation, a zero-resistance temperature of 90 K, and a critical current density of 180 A/cm² at 77 K. The fiber drawn from the viscous gel solution showed comparatively large shrinkage on heat treatment. The fired fiber was brittle and had low strength; its zero-resistance temperature was 70 K and its critical current density was 90 A/cm² at 77 K.

  4. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    Science.gov (United States)

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating between nodes in the network. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to small agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri nets. The agent platform can also be implemented in software, offering compatibility at the operational and code level and supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.
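
    The activity-transition-graph execution model described above can be sketched in a few lines of Python. This is a minimal illustration under assumptions of my own, not the paper's platform: the two activities ("sense" and "check"), the threshold, and the state fields are invented for the example.

```python
# Minimal sketch of an activity-transition-graph (ATG) agent.
# Activity names, state fields, and the threshold are illustrative only.

class ATGAgent:
    def __init__(self, threshold):
        # The agent's data state travels with the agent (plain attributes here).
        self.threshold = threshold
        self.readings = []
        self.alarms = 0
        self.activity = "sense"  # current node in the ATG (control state)
        # ATG: each activity processes a token and returns the next activity.
        self.atg = {"sense": self.sense, "check": self.check}

    def sense(self, value):
        self.readings.append(value)
        return "check"

    def check(self, value):
        if value > self.threshold:
            self.alarms += 1
        return "sense"

    def step(self, value):
        # One token-driven step: run the current activity, follow its transition.
        self.activity = self.atg[self.activity](value)

agent = ATGAgent(threshold=10)
for v in [3, 3, 12, 12, 5, 5]:
    agent.step(v)
print(agent.alarms)  # one reading (12) exceeded the threshold in "check"
```

    In the paper's setting this control/data state would live in the mobile program code itself rather than in a Python object, so the sketch only mirrors the execution model, not the code-morphing mechanism.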

  5. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    Directory of Open Access Journals (Sweden)

    Stefan Bosse

    2015-02-01

    Full Text Available Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating between nodes in the network. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to small agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri nets. The agent platform can also be implemented in software, offering compatibility at the operational and code level and supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.

  6. Distribution and avoidance of debris on epoxy resin during UV ns-laser scanning processes

    Science.gov (United States)

    Veltrup, Markus; Lukasczyk, Thomas; Ihde, Jörg; Mayer, Bernd

    2018-05-01

    In this paper the distribution of debris generated by a nanosecond UV laser (248 nm) on epoxy resin and the prevention of the corresponding re-deposition effects by parameter selection for a ns-laser scanning process were investigated. In order to understand the mechanisms behind the debris generation, in-situ particle measurements were performed during laser treatment. These measurements enabled the determination of the ablation threshold of the epoxy resin as well as the particle density and size distribution in relation to the applied laser parameters. The experiments showed that it is possible to reduce debris on the surface with an adapted selection of pulse overlap with respect to laser fluence. A theoretical model for the parameter selection was developed and tested. Based on this model, the correct choice of laser parameters with reduced laser fluence resulted in a surface without any re-deposited micro-particles.
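
    The parameter selection above hinges on the pulse overlap seen by the surface. A common definition relates overlap to scan speed, repetition rate and spot diameter; the sketch below uses that textbook relation, O = 1 − v/(f·d), with made-up numbers, not the parameters or the model of the cited study.

```python
# Hedged sketch: lateral pulse-to-pulse overlap for a scanned pulsed laser,
# using the common definition O = 1 - v / (f * d).
# The example numbers are illustrative, not from the study.

def pulse_overlap(scan_speed_mm_s, rep_rate_hz, spot_diameter_mm):
    """Fraction of the spot diameter shared by consecutive pulses (0..1)."""
    pitch = scan_speed_mm_s / rep_rate_hz  # distance between pulse centers
    return max(0.0, 1.0 - pitch / spot_diameter_mm)

# e.g. 100 mm/s scan speed, 5 kHz repetition rate, 50 micron spot:
print(pulse_overlap(100.0, 5000.0, 0.05))  # 0.6, i.e. 60 % overlap
```

    With a relation like this, pulse overlap can be traded against fluence when searching for the debris-free operating window the authors describe.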

  7. Influences on Distribution of Solute Atoms in Cu-8Fe Alloy Solidification Process Under Rotating Magnetic Field

    Science.gov (United States)

    Zou, Jin; Zhai, Qi-Jie; Liu, Fang-Yu; Liu, Ke-Ming; Lu, De-Ping

    2018-05-01

    A rotating magnetic field (RMF) was applied in the solidification process of Cu-8Fe alloy. Focus on the mechanism of RMF on the solid solution Fe(Cu) atoms in Cu-8Fe alloy, the influences of RMF on solidification structure, solute distribution, and material properties were discussed. Results show that the solidification behavior of Cu-Fe alloy have influenced through the change of temperature and solute fields in the presence of an applied RMF. The Fe dendrites were refined and transformed to rosettes or spherical grains under forced convection. The solute distribution in Cu-rich phase and Fe-rich phase were changed because of the variation of the supercooling degree and the solidification rate. Further, the variation in solute distribution was impacted the strengthening mechanism and conductive mechanism of the material.

  8. Study of cache performance in distributed environment for data processing

    International Nuclear Information System (INIS)

    Makatun, Dzmitry; Lauret, Jérôme; Šumbera, Michal

    2014-01-01

    Processing data in distributed environment has found its application in many fields of science (Nuclear and Particle Physics (NPP), astronomy, biology to name only those). Efficiently transferring data between sites is an essential part of such processing. The implementation of caching strategies in data transfer software and tools, such as the Reasoner for Intelligent File Transfer (RIFT) being developed in the STAR collaboration, can significantly decrease network load and waiting time by reusing the knowledge of data provenance as well as data placed in transfer cache to further expand on the availability of sources for files and data-sets. Though, a great variety of caching algorithms is known, a study is needed to evaluate which one can deliver the best performance in data access considering the realistic demand patterns. Records of access to the complete data-sets of NPP experiments were analyzed and used as input for computer simulations. Series of simulations were done in order to estimate the possible cache hits and cache hits per byte for known caching algorithms. The simulations were done for cache of different sizes within interval 0.001 – 90% of complete data-set and low-watermark within 0-90%. Records of data access were taken from several experiments and within different time intervals in order to validate the results. In this paper, we will discuss the different data caching strategies from canonical algorithms to hybrid cache strategies, present the results of our simulations for the diverse algorithms, debate and identify the choice for the best algorithm in the context of Physics Data analysis in NPP. While the results of those studies have been implemented in RIFT, they can also be used when setting up cache in any other computational work-flow (Cloud processing for example) or managing data storages with partial replicas of the entire data-set
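
    The trace-driven simulation described above can be sketched for one canonical policy, LRU. The file names, sizes and access trace below are invented for illustration; the real study replayed recorded access logs of NPP data-sets against several algorithms and cache sizes.

```python
# Sketch of a trace-driven cache simulation: replay an access trace against
# a size-bounded LRU cache and report the hit ratio. Trace and sizes are toy data.

from collections import OrderedDict

def simulate_lru(trace, sizes, capacity):
    cache, used, hits = OrderedDict(), 0, 0
    for name in trace:
        if name in cache:
            hits += 1
            cache.move_to_end(name)  # refresh LRU position
            continue
        size = sizes[name]
        while used + size > capacity and cache:  # evict least recently used
            _, evicted_size = cache.popitem(last=False)
            used -= evicted_size
        cache[name] = size
        used += size
    return hits / len(trace)

sizes = {"a": 4, "b": 4, "c": 4}
trace = ["a", "b", "a", "c", "a", "b"]
print(simulate_lru(trace, sizes, capacity=8))  # 2 hits out of 6 accesses
```

    The same loop, with the eviction rule swapped out and a low-watermark added, is enough to compare the hybrid strategies the paper evaluates on cache hits and cache hits per byte.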

  9. Distributed computing strategies for processing of FT-ICR MS imaging datasets for continuous mode data visualization

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Donald F.; Schulz, Carl; Konijnenburg, Marco; Kilic, Mehmet; Heeren, Ronald M.

    2015-03-01

    High-resolution Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry imaging enables the spatial mapping and identification of biomolecules from complex surfaces. The need for long time-domain transients, and thus large raw file sizes, results in a large amount of raw data (“big data”) that must be processed efficiently and rapidly. This can be compounded by large-area imaging and/or high spatial resolution imaging. For FT-ICR, data processing and data reduction must not compromise the high mass resolution afforded by the mass spectrometer. The continuous mode “Mosaic Datacube” approach allows high mass resolution visualization (0.001 Da) of mass spectrometry imaging data, but requires additional processing as compared to feature-based processing. We describe the use of distributed computing for processing of FT-ICR MS imaging datasets with generation of continuous mode Mosaic Datacubes for high mass resolution visualization. An eight-fold improvement in processing time is demonstrated using a Dutch nationally available cloud service.

  10. Distributed sensor networks

    CERN Document Server

    Rubin, Donald B; Carlin, John B; Iyengar, S Sitharama; Brooks, Richard R; University, Clemson

    2014-01-01

    An Overview, S.S. Iyengar, Ankit Tandon, and R.R. Brooks; Microsensor Applications, David Shepherd; A Taxonomy of Distributed Sensor Networks, Shivakumar Sastry and S.S. Iyengar; Contrast with Traditional Systems, R.R. Brooks; Digital Signal Processing Background, Yu Hen Hu; Image-Processing Background, Lynne Grewe and Ben Shahshahani; Object Detection and Classification, Akbar M. Sayeed; Parameter Estimation, David Friedlander; Target Tracking with Self-Organizing Distributed Sensors, R.R. Brooks, C. Griffin, D.S. Friedlander, and J.D. Koch; Collaborative Signal and Information Processing: An Information-Directed Approach, Feng Zhao, Jie Liu, Juan Liu, Leonidas Guibas, and James Reich; Environmental Effects, David C. Swanson; Detecting and Counteracting Atmospheric Effects, Lynne L. Grewe; Signal Processing and Propagation for Aeroacoustic Sensor Networks, Richard J. Kozick, Brian M. Sadler, and D. Keith Wilson; Distributed Multi-Target Detection in Sensor Networks, Xiaoling Wang, Hairong Qi, and Steve Beck; Foundations of Data Fusion f...

  11. Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP)

    CERN Document Server

    Calafiura, Paolo; The ATLAS collaboration; Seuster, Rolf; Tsulaia, Vakhtang; van Gemmeren, Peter

    2015-01-01

    AthenaMP is a multi-process version of the ATLAS reconstruction and data analysis framework Athena. By leveraging Linux fork and copy-on-write, it allows the sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service), which allows AthenaMP to run inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the...

  12. Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP)

    CERN Document Server

    Calafiura, Paolo; Seuster, Rolf; Tsulaia, Vakhtang; van Gemmeren, Peter

    2015-01-01

    AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service), which allows AthenaMP to run inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of Ath...

  13. A Logic for Reasoning About Time-Dependent Access Control Policies

    Science.gov (United States)

    2008-05-20

    ...its purpose, it is a rather coarse approximation of the behavior desired in general. It is likely that Alice wants the credential C to allow Bob... Jacob Strauss, Robert Morris, and M. Frans Kaashoek. Alpaca: Extensible authorization for distributed services. In Proceedings of the 14th ACM Conference

  14. MALDI imaging facilitates new topical drug development process by determining quantitative skin distribution profiles.

    Science.gov (United States)

    Bonnel, David; Legouffe, Raphaël; Eriksson, André H; Mortensen, Rasmus W; Pamelard, Fabien; Stauber, Jonathan; Nielsen, Kim T

    2018-04-01

    Generation of skin distribution profiles and reliable determination of drug molecule concentration in the target region are crucial during the development process of topical products for treatment of skin diseases like psoriasis and atopic dermatitis. Imaging techniques like mass spectrometric imaging (MSI) offer sufficient spatial resolution to generate meaningful distribution profiles of a drug molecule across a skin section. In this study, we use matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI-MSI) to generate quantitative skin distribution profiles based on tissue extinction coefficient (TEC) determinations of four different molecules in cross sections of human skin explants after topical administration. The four drug molecules, roflumilast, tofacitinib, ruxolitinib, and LEO 29102, have different physicochemical properties. In addition, tofacitinib was administered in two different formulations. The study reveals that with MALDI-MSI we were able to observe differences in penetration profiles for both the four drug molecules and the two formulations, thereby demonstrating its applicability as a screening tool when developing a topical drug product. Furthermore, the study reveals that the sensitivity of the MALDI-MSI technique appears to be inversely correlated to the drug molecules' ability to bind to the surrounding tissue, which can be estimated by their Log D values.

  15. Classification of bacterial contamination using image processing and distributed computing.

    Science.gov (United States)

    Ahmed, W M; Bayraktar, B; Bhunia, A; Hirleman, E D; Robinson, J P; Rajwa, B

    2013-01-01

    Disease outbreaks due to contaminated food are a major concern not only for the food-processing industry but also for the public at large. Techniques for automated detection and classification of microorganisms can be a great help in preventing outbreaks and maintaining the safety of the nation's food supply. Identification and classification of foodborne pathogens using colony scatter patterns is a promising new label-free technique that utilizes image-analysis and machine-learning tools. However, the feature-extraction tools employed for this approach are computationally complex, and choosing the right combination of scatter-related features requires extensive testing with different feature combinations. In the presented work we used computer clusters to speed up the feature-extraction process, which enables us to analyze the contribution of different scatter-based features to the overall classification accuracy. A set of 1000 scatter patterns representing ten different bacterial strains was used. Zernike and Chebyshev moments as well as Haralick texture features were computed from the available light-scatter patterns. The most promising features were first selected using Fisher's discriminant analysis, and subsequently a support-vector-machine (SVM) classifier with a linear kernel was used. With extensive testing we were able to identify a small subset of features that produced the desired results in terms of classification accuracy and execution speed. The use of distributed computing for scatter-pattern analysis, feature extraction, and selection provides a feasible mechanism for large-scale deployment of a light scatter-based approach to bacterial classification.
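
    The Fisher-discriminant feature selection step can be sketched in plain Python: score each feature by between-class separation over within-class spread, then keep the top-ranked ones. The two-class toy data below is invented; the study ranked Zernike, Chebyshev and Haralick features over ten strains.

```python
# Sketch of Fisher-score feature ranking for two classes.
# Feature 0 of the toy data separates the classes; feature 1 is noise.

def fisher_score(class_a, class_b):
    """Fisher score of one feature: (mean gap)^2 / (sum of within-class variances)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs, m):
        return sum((x - m) ** 2 for x in xs) / len(xs)
    ma, mb = mean(class_a), mean(class_b)
    return (ma - mb) ** 2 / (var(class_a, ma) + var(class_b, mb) + 1e-12)

a = [(1.0, 5.1), (1.2, 4.9), (0.9, 5.3)]   # class A samples (2 features each)
b = [(3.0, 5.2), (3.1, 4.8), (2.9, 5.0)]   # class B samples
scores = [fisher_score([s[i] for s in a], [s[i] for s in b]) for i in range(2)]
best = max(range(2), key=lambda i: scores[i])
print(best)  # feature 0 has the far higher score
```

    In the paper this ranking feeds a linear-kernel SVM; only the filtering step is shown here, since it is the part that parallelizes trivially across a cluster.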

  16. Architecture of distributed picture archiving and communication systems for storing and processing high resolution medical images

    Directory of Open Access Journals (Sweden)

    Tokareva Victoria

    2018-01-01

    Full Text Available New-generation medicine demands better analysis quality, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. Thus it becomes urgent not only to develop advanced modern hardware, but also to implement special software infrastructure for using it in everyday clinical practice, so-called Picture Archiving and Communication Systems (PACS). Developing distributed PACS is a challenging task for today's medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to the needs of end users as possible.
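
    The MapReduce idea proposed for server-side reconstruction can be sketched without Hadoop: map raw records to (tile, value) pairs, shuffle by tile, then reduce each tile independently. The 2x2 tiling, the toy records and the per-tile sum are placeholders of my own, not the paper's pipeline.

```python
# Sketch of MapReduce-style tiled image processing.
# map: emit (tile_key, value); shuffle: group by key; reduce: per-tile work.

from collections import defaultdict

def map_phase(records):
    for x, y, value in records:        # assign each pixel record to a 2x2 tile
        yield (x // 2, y // 2), value

def reduce_phase(grouped):
    # Placeholder per-tile reduction; a real PACS would reconstruct the tile.
    return {tile: sum(vals) for tile, vals in grouped.items()}

records = [(0, 0, 1), (0, 1, 2), (1, 0, 3), (2, 2, 4), (3, 3, 5)]
grouped = defaultdict(list)
for key, value in map_phase(records):  # the "shuffle" step
    grouped[key].append(value)
print(reduce_phase(grouped))
```

    Because tiles are independent, the reduce step parallelizes across cluster nodes, which is exactly the property Hadoop exploits.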

  17. Report on the "Secure Vehicular Communications: Results and Challenges Ahead" Workshop

    OpenAIRE

    Papadimitratos, Panos; Hubaux, Jean-Pierre

    2008-01-01

    © ACM, (2008). This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM SIGMOBILE Mobile Computing and Communications Review. http://doi.acm.org/10.1145/1394555.1394567

  18. Analysis the Transient Process of Wind Power Resources when there are Voltage Sags in Distribution Grid

    Science.gov (United States)

    Nhu Y, Do

    2018-03-01

    Vietnam has many advantages in wind power resources, and both the capacity and the number of wind power projects in Vietnam are steadily increasing. Corresponding to the increase of wind power fed into the national grid, it is necessary to research and analyze wind power connections in order to ensure their safety and reliability. In the national distribution grid, voltage sags occur regularly and can strongly influence the operation of wind power plants; the most serious consequence is disconnection. The paper presents an analysis of the transient process in the distribution grid during voltage sags. Based on the analysis, solutions are recommended to improve the reliability and effective operation of wind power resources.

  19. Performance Recognition for Sulphur Flotation Process Based on Froth Texture Unit Distribution

    Directory of Open Access Journals (Sweden)

    Mingfang He

    2013-01-01

    Full Text Available As an important indicator of flotation performance, froth texture is believed to be related to the operational condition in the sulphur flotation process. A novel fault detection method based on froth texture unit distribution (TUD) is proposed to recognize fault conditions of sulphur flotation in real time. The froth texture unit number is calculated based on the texture spectrum, and the probability density function (PDF) of the froth texture unit number is defined as the texture unit distribution, which can describe the actual textural feature more accurately than the grey-level dependence matrix approach. As the type of the froth TUD is unknown, a nonparametric kernel estimation method based on a fixed kernel basis is proposed, which overcomes the difficulty of comparing different TUDs under various conditions, a comparison that is impossible with the traditional varying kernel basis. By transforming the nonparametric description into dynamic kernel weight vectors, a principal component analysis (PCA) model is established to reduce the dimensionality of the vectors. A threshold criterion determined by the TQ statistic based on the PCA model is then proposed to realize the performance recognition. Industrial application results show that accurate performance recognition of froth flotation can be achieved using the proposed method.
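
    The texture unit number underlying the texture spectrum can be sketched directly: each interior pixel's eight neighbors are compared to the center (less/equal/greater mapped to 0/1/2) and combined as base-3 digits, giving a value in 0..6560. The 3x3 toy image and the clockwise neighbor ordering are illustrative choices, not necessarily those of the paper.

```python
# Sketch of the texture unit number for one pixel (He & Wang texture spectrum).
# Neighbor ordering is one common convention; others permute the digits.

def texture_unit_number(image, r, c):
    center = image[r][c]
    # Clockwise neighbors starting at top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    number = 0
    for i, (dr, dc) in enumerate(offsets):
        v = image[r + dr][c + dc]
        e = 0 if v < center else (1 if v == center else 2)  # ternary comparison
        number += e * 3 ** i                                # base-3 digit
    return number

image = [[5, 5, 5],
         [5, 5, 5],
         [5, 5, 5]]
print(texture_unit_number(image, 1, 1))  # all neighbors equal -> 3280
```

    The histogram of these numbers over all interior pixels is the texture spectrum; normalizing it gives the texture unit distribution whose PDF the paper estimates nonparametrically.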

  20. Influence of Miscibility Phenomenon on Crystalline Polymorph Transition in Poly(Vinylidene Fluoride)/Acrylic Rubber/Clay Nanocomposite Hybrid

    Science.gov (United States)

    Abolhasani, Mohammad Mahdi; Naebe, Minoo; Jalali-Arani, Azam; Guo, Qipeng

    2014-01-01

    In this paper, the intercalation of nanoclay in the miscible polymer blend of poly(vinylidene fluoride) (PVDF) and acrylic rubber (ACM) was studied. X-ray diffraction was used to investigate the formation of the nanoscale polymer blend/clay hybrid. Infrared spectroscopy and X-ray analysis revealed the coexistence of β and γ crystalline forms in the PVDF/Clay nanocomposite, while the α crystalline form was found to be dominant in the PVDF/ACM/Clay miscible hybrids. The Flory-Huggins interaction parameter (B) was used to further explain the observed miscibility. The B parameter was determined by combining the melting point depression and the binary interaction model. The estimated B values for the ternary PVDF/ACM/Clay and binary PVDF/ACM pairs were all negative, showing both proper intercalation of the polymer melt into the nanoclay galleries and the good miscibility of the PVDF/ACM blend. The B value for the PVDF/ACM blend was almost the same as that measured for the PVDF/ACM/Clay hybrid, suggesting that PVDF chains in the nanocomposite hybrids interact with ACM chains and that the nanoclay in the hybrid systems is wrapped by ACM molecules. PMID:24551141