WorldWideScience

Sample records for distributed processing acm

  1. Distribution of the ACME-arcA gene among meticillin-resistant Staphylococcus haemolyticus and identification of a novel ccr allotype in ACME-arcA-positive isolates.

    Science.gov (United States)

    Pi, Borui; Yu, Meihong; Chen, Yagang; Yu, Yunsong; Li, Lanjuan

    2009-06-01

The aim of this study was to investigate the prevalence and characteristics of ACME (arginine catabolic mobile element)-arcA-positive isolates among meticillin-resistant Staphylococcus haemolyticus (MRSH). ACME-arcA, native arcA and SCCmec elements were detected by PCR. Susceptibilities to 10 antimicrobial agents were compared between ACME-arcA-positive and -negative isolates by chi-square test. PFGE was used to investigate the clonal relatedness of ACME-arcA-positive isolates. The phylogenetic relationships of ACME-arcA and native arcA were analysed using the neighbour-joining method of MEGA software. A total of 42 (47.7%) of 88 isolates, distributed across 13 PFGE types, were positive for the ACME-arcA gene. There were no significant differences in antimicrobial susceptibility between ACME-arcA-positive and -negative isolates. A novel ccr allotype (ccrAB(SHP)) was identified in ACME-arcA-positive isolates. Among the 42 ACME-arcA-positive isolates, 8 harboured SCCmec V; 8 harboured the class C1 mec complex and ccrAB(SHP); 22 harbouring the class C1 mec complex and 4 harbouring the class C2 mec complex were negative for all known ccr allotypes. This is the first report of ACME-arcA-positive isolates in MRSH; their high prevalence and clonal diversity suggest mobility of ACME within MRSH. The results from this study reveal that MRSH is likely to be one of the potential reservoirs of ACME for Staphylococcus aureus.

  2. Improving simulated spatial distribution of productivity and biomass in Amazon forests using the ACME land model

    Science.gov (United States)

    Yang, X.; Thornton, P. E.; Ricciuto, D. M.; Shi, X.; Xu, M.; Hoffman, F. M.; Norby, R. J.

    2017-12-01

Tropical forests play a crucial role in the global carbon cycle, accounting for one third of global net primary productivity (NPP) and containing about 25% of global vegetation biomass and soil carbon. This is particularly true for tropical forests in the Amazon region, which comprises approximately 50% of the world's tropical forests. It is therefore important to understand and represent the processes that determine the fluxes and storage of carbon in these forests. In this study, we show that the implementation of a phosphorus (P) cycle and P limitation in the ACME Land Model (ALM) improves the simulated spatial pattern of NPP. The P-enabled ALM is able to capture the west-to-east gradient of productivity, consistent with field observations. We also show that, by improving the representation of mortality processes, ALM is able to reproduce the observed spatial pattern of above-ground biomass across the Amazon region.

  3. ACME-III and ACME-IV Final Campaign Reports

    Energy Technology Data Exchange (ETDEWEB)

    Biraud, S. C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-01-01

The goals of the Atmospheric Radiation Measurement (ARM) Climate Research Facility’s third and fourth Airborne Carbon Measurements (ACME) field campaigns, ACME-III and ACME-IV, are: 1) to measure and model the exchange of CO2, water vapor, and other greenhouse gases by the natural, agricultural, and industrial ecosystems of the Southern Great Plains (SGP) region; 2) to develop quantitative approaches to relate these local fluxes to the concentration of greenhouse gases measured at the Central Facility tower and in the atmospheric column above the ARM SGP Central Facility; 3) to develop and test bottom-up measurement and modeling approaches to estimate regional-scale carbon balances; and 4) to develop and test inverse modeling approaches to estimate regional-scale carbon balance and anthropogenic sources over continental regions. Regular soundings of the atmosphere from near the surface into the mid-troposphere are essential for this research.

  4. ACMS-Data

    Data.gov (United States)

Department of Homeland Security — Records of CBP training activities in the academies and in-service field training. This data is processed by the COTS application Acadis Readiness Suite and is...

  5. Process evaluation distributed system

    Science.gov (United States)

    Moffatt, Christopher L. (Inventor)

    2006-01-01

The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on the observation criteria information. The process evaluation module utilizes a personal digital assistant (PDA). The data display module, in communication with the database server, includes a website for viewing collected process data in a desired metrics form and also provides editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module minimizes the requirement for manual input of the collected process data.
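
    The claim language maps onto a simple set of interfaces. Below is a minimal, hypothetical Java sketch of the relationships described above; all type and method names (DatabaseServer, ProcessEvaluationModule, and so on) are illustrative, not taken from the patent.

```java
// Hypothetical sketch of the patent's module relationships; all names are illustrative.
import java.util.List;

record ObservationCriteria(String id, String description) {}
record ProcessDatum(String criteriaId, String value, long timestampMillis) {}

interface DatabaseServer {
    void storeCriteria(List<ObservationCriteria> criteria);  // from the administration module
    List<ObservationCriteria> fetchCriteria();               // read by the evaluation module (PDA)
    void storeProcessData(List<ProcessDatum> data);          // collected observations
    List<ProcessDatum> fetchProcessData();                   // read by the data display module
}

interface AdministrationModule {
    void defineCriteria(DatabaseServer db, List<ObservationCriteria> criteria);
}

interface ProcessEvaluationModule {                          // runs on the PDA
    List<ProcessDatum> collect(DatabaseServer db);           // fetch criteria, record observations
}

interface DataDisplayModule {                                // website: view, edit, modify
    void render(DatabaseServer db);
}
```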

  6. Asbestos-Containing Materials (ACM) and Demolition

    Science.gov (United States)

Specific federal regulations require the identification of asbestos-containing materials (ACM) in many of the residential buildings that are demolished or renovated by a municipality.

  7. Additive Construction with Mobile Emplacement (ACME)

    Science.gov (United States)

    Vickers, John

    2015-01-01

    The Additive Construction with Mobile Emplacement (ACME) project is developing technology to build structures on planetary surfaces using in-situ resources. The project focuses on the construction of both 2D (landing pads, roads, and structure foundations) and 3D (habitats, garages, radiation shelters, and other structures) infrastructure needs for planetary surface missions. The ACME project seeks to raise the Technology Readiness Level (TRL) of two components needed for planetary surface habitation and exploration: 3D additive construction (e.g., contour crafting), and excavation and handling technologies (to effectively and continuously produce in-situ feedstock). Additionally, the ACME project supports the research and development of new materials for planetary surface construction, with the goal of reducing the amount of material to be launched from Earth.

  8. Extremely distributed media processing

    Science.gov (United States)

    Butera, William; Bove, V. Michael, Jr.; McBride, James

    2001-12-01

The Object-Based Media Group at the MIT Media Laboratory is developing robust, self-organizing programming models for dense ensembles of ultra-miniaturized computing nodes which are deployed by the thousands in bulk fashion, e.g., embedded into building materials. While such systems potentially offer almost unlimited computation for multimedia purposes, the individual devices contain tiny amounts of memory, lack explicit addresses, have wireless communication ranges of only millimeters to centimeters, and are expected to fail at high rates. An unorthodox approach to handling multimedia data is required in order to achieve useful, reliable work in such an environment. We describe the hardware and software strategies, and demonstrate several examples showing the processing of images and sound in such a system.

  9. Safe Distribution of Declarative Processes

    DEFF Research Database (Denmark)

    Hildebrandt, Thomas; Mukkamala, Raghava Rao; Slaats, Tijs

    2011-01-01

We give a general technique for safe distribution of a declarative (global) process as a network of (local) synchronously communicating declarative processes. Both the global and local processes are given as Dynamic Condition Response (DCR) Graphs. DCR Graphs is a recently introduced declarative process model generalizing labelled prime event structures to a systems model able to finitely represent ω-regular languages. An operational semantics given as a transition semantics between markings of the graph allows DCR Graphs to be conveniently used as both specification and execution model. The technique for distribution is based on a new general notion of projection of DCR Graphs relative to a subset of labels and events identifying the set of external events that must be communicated from the other processes in the network in order for the distribution to be safe. We prove that for any vector ...

  10. The Distributed Processing Library (DPL)

    Science.gov (United States)

    Allan, D. J.

    The Distributed Processing Library (DPL) provides multiple processing services across heterogeneous collections of UNIX workstations for the ASTERIX data analysis package. The DPL programmer provides a worker task to perform the units of parallel computation, and writes the flow control logic in the client using DPL to manage queues of jobs on multiple workers. DPL conceals the interprocess communication from the client and worker processes allowing existing sequential algorithms to be adapted easily. The system has been tested on a mixture of machines running Solaris and OSF, and has shown that the library is useful for units of computation taking as little as 50 milliseconds.
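
    DPL itself is an ASTERIX-specific library, but the pattern it describes, a client managing a queue of jobs executed by a pool of workers, can be sketched generically. The following Java sketch uses a local thread pool as a stand-in for DPL's remote workers; it illustrates the pattern, not the DPL API.

```java
// Generic sketch of the DPL-style pattern: a client manages a queue of jobs
// executed by a pool of workers; not the actual DPL API.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class WorkerPoolClient {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4); // stand-in for remote workers
        List<Future<Double>> queue = new ArrayList<>();
        for (int job = 0; job < 100; job++) {
            final int unit = job;
            queue.add(workers.submit(() -> process(unit)));        // enqueue a unit of computation
        }
        double total = 0;
        for (Future<Double> f : queue) total += f.get();           // flow control: gather results
        workers.shutdown();
        System.out.println("total = " + total);
    }
    static double process(int unit) { return Math.sqrt(unit); }   // the "worker task"
}
```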

  11. Representation of deforestation impacts on climate, water, and nutrient cycles in the ACME earth system model

    Science.gov (United States)

    Cai, X.; Riley, W. J.; Zhu, Q.

    2017-12-01

Deforestation causes a series of changes to the climate, water, and nutrient cycles. Employing a state-of-the-art earth system model, ACME (Accelerated Climate Modeling for Energy), we comprehensively investigate the impacts of deforestation on these processes. We first assess the performance of the ACME Land Model (ALM) in simulating runoff, evapotranspiration, albedo, and plant productivity at 42 FLUXNET sites. The single column mode of ACME is then used to examine climate effects (temperature cooling/warming) and responses of runoff, evapotranspiration, and nutrient fluxes to deforestation. This approach separates local effects of deforestation from global circulation effects. To better understand the deforestation effects in a global context, we use the coupled (atmosphere, land, and slab ocean) mode of ACME to demonstrate the impacts of deforestation on global climate, water, and nutrient fluxes. Preliminary results showed that the land component of ACME has advantages in simulating these processes and that local deforestation has potentially large impacts on runoff and atmospheric processes.

  12. ACM CCS 2013-2015 Student Travel Support

    Science.gov (United States)

    2016-10-29

Under the ARO-funded effort titled “ACM CCS 2013-2015 Student Travel Support,” from 2013 to 2015, George Mason University awarded 10 students travel awards every year. These grants enabled the students to offset the cost to attend the ACM Conference on ...

  13. AcmD, a homolog of the major autolysin AcmA of Lactococcus lactis, binds to the cell wall and contributes to cell separation and autolysis

    NARCIS (Netherlands)

    Visweswaran, Ganesh Ram R; Steen, Anton; Leenhouts, Kees; Szeliga, Monika; Ruban, Beata; Hesseling-Meinders, Anne; Dijkstra, Bauke W; Kuipers, Oscar P; Kok, Jan; Buist, Girbe

    2013-01-01

    Lactococcus lactis expresses the homologous glucosaminidases AcmB, AcmC, AcmA and AcmD. The latter two have three C-terminal LysM repeats for peptidoglycan binding. AcmD has much shorter intervening sequences separating the LysM repeats and a lower iso-electric point (4.3) than AcmA (10.3). Under

  14. Process of random distributions : classification and prediction ...

    African Journals Online (AJOL)

Dirichlet random distribution. The parameter of this process can be the distribution of any usual process, such as the (multifractional) Brownian motion. We also extend the Kraft random distribution to the continuous-time case. We give an application in ...

  15. Palm distributions for log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge

    2017-01-01

    This paper establishes a remarkable result regarding Palm distributions for a log Gaussian Cox process: the reduced Palm distribution for a log Gaussian Cox process is itself a log Gaussian Cox process that only differs from the original log Gaussian Cox process in the intensity function. This new...

  16. Web-Based Distributed XML Query Processing

    NARCIS (Netherlands)

    Smiljanic, M.; Feng, L.; Jonker, Willem; Blanken, Henk; Grabs, T.; Schek, H-J.; Schenkel, R.; Weikum, G.

    2003-01-01

    Web-based distributed XML query processing has gained in importance in recent years due to the widespread popularity of XML on the Web. Unlike centralized and tightly coupled distributed systems, Web-based distributed database systems are highly unpredictable and uncontrollable, with a rather

  17. Pomegranate MR images analysis using ACM and FCM algorithms

    Science.gov (United States)

    Morad, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

Segmentation plays an important role in image processing applications. In this paper, segmentation of pomegranate magnetic resonance (MR) images is explored. Pomegranate has healthy nutritional and medicinal properties, and its maturity indices and the quality of its internal tissues play an important role in the sorting process; the required determination of these features cannot easily be achieved by a human operator. Seeds and soft tissues are the main internal components of pomegranate. For research purposes, such as non-destructive investigation, segmentation of the internal structures should be performed as exactly as possible in order to determine the ripening index and the percentage of seeds during the growth period. In this paper, we present an automatic algorithm to segment the internal structure of pomegranate. Since the intensity of the stem and calyx is close to that of the internal tissues, stem and calyx pixels are usually mislabelled as internal tissue by segmentation algorithms. To solve this problem, first, the fruit shape is extracted from its background using an active contour model (ACM). Then the stem and calyx are removed using morphological filters. Finally, the image is segmented by fuzzy c-means (FCM). The experimental results show an accuracy of 95.91% in the presence of stem and calyx, while segmentation accuracy increases to 97.53% when the stem and calyx are first removed by morphological filters.

  18. On Distributed Port-Hamiltonian Process Systems

    NARCIS (Netherlands)

    Lopezlena, Ricardo; Scherpen, Jacquelien M.A.

    2004-01-01

In this paper we use the term distributed Port-Hamiltonian Process Systems (DPHPS) to refer to the result of merging the theory of distributed Port-Hamiltonian systems (DPHS) with the theory of process systems (PS). Such a concept is useful for combining the systematic interconnection of PHS with the

  19. Palm distributions for log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus

    This paper reviews useful results related to Palm distributions of spatial point processes and provides a new result regarding the characterization of Palm distributions for the class of log Gaussian Cox processes. This result is used to study functional summary statistics for a log Gaussian Cox...

  20. Optimizing the Advanced Ceramic Material (ACM) for Diesel Particulate Filter Applications

    Energy Technology Data Exchange (ETDEWEB)

    Dillon, Heather E.; Stewart, Mark L.; Maupin, Gary D.; Gallant, Thomas R.; Li, Cheng; Mao, Frank H.; Pyzik, Aleksander J.; Ramanathan, Ravi

    2006-10-02

    This paper describes the application of pore-scale filtration simulations to the ‘Advanced Ceramic Material’ (ACM) developed by Dow Automotive for use in advanced diesel particulate filters. The application required the generation of a three dimensional substrate geometry to provide the boundary conditions for the flow model. An innovative stochastic modeling technique was applied matching chord length distribution and the porosity profile of the material. Additional experimental validation was provided by the single channel experimental apparatus. Results show that the stochastic reconstruction techniques provide flexibility and appropriate accuracy for the modeling efforts. Early optimization efforts imply that needle length may provide a mechanism for adjusting performance of the ACM for DPF applications. New techniques have been developed to visualize soot deposition in both traditional and new DPF substrate materials. Loading experiments have been conducted on a variety of single channel DPF substrates to develop a deeper understanding of soot penetration, soot deposition characteristics, and to confirm modeling results.

  1. ARM Airborne Carbon Measurements VI (ACME VI) Science Plan

    Energy Technology Data Exchange (ETDEWEB)

    Biraud, S [Lawrence Berkeley National Laboratory

    2015-12-01

From October 1, 2015 through September 30, 2016, the Atmospheric Radiation Measurement (ARM) Aerial Facility will deploy the Cessna 206 aircraft over the Southern Great Plains (SGP) site, collecting observations of trace-gas mixing ratios over the ARM SGP facility. The aircraft payload includes two Atmospheric Observing Systems, Inc., analyzers for continuous measurements of CO2 and a 12-flask sampler for analysis of carbon cycle gases (CO2, CO, CH4, N2O, 13CO2, 14CO2, carbonyl sulfide, and trace hydrocarbon species, including ethane). The aircraft payload also includes instrumentation for solar/infrared radiation measurements. This research is supported by the U.S. Department of Energy’s ARM Climate Research Facility and Terrestrial Ecosystem Science Program and builds upon previous ARM Airborne Carbon Measurements (ARM-ACME) missions. The goal of these measurements is to improve understanding of 1) the carbon exchange at the SGP site, 2) how CO2 and associated water and energy fluxes influence radiative forcing, convective processes and CO2 concentrations over the SGP site, and 3) how greenhouse gases are transported on continental scales.

  2. Formation of personality’s acme-qualities as a component of physical education specialists’ acmeological competence

    Directory of Open Access Journals (Sweden)

    T.Hr. Dereka

    2016-10-01

Purpose: to determine the characteristics of acme-quality formation in physical education specialists and the correlations between components. Material: students of the “Physical education” specialty (n=194) participated in the research. Special tests were used to assess personality qualities: organizational abilities, communicative abilities, creative potential, need for achievement, level of emotional awareness, control of emotions, etc. Results: we determined the components of the personality acme-competence component in physical education specialists. We found the density and orientation of correlations and the influence of acme-qualities on the personality component. Based on factorial analysis we grouped and classified the components under four factors and created a visual picture of them. The accumulated percentage of dispersion of the studied factors was determined. Conclusions: continuous professional training of physical education specialists on acme-principles results in the formation of personality acme-qualities. These facilitate the manifestation of personality activity in the process of professional formation and constant self-perfection.

  3. Apache Flink: Distributed Stream Data Processing

    CERN Document Server

    Jacobs, Kevin; CERN. Geneva. IT Department

    2016-01-01

The amount of data has grown significantly over the past few years, and with it the need for distributed data processing frameworks. Currently, there are two well-known data processing frameworks that offer both an API for data batches and an API for data streams: Apache Flink and Apache Spark. Both Apache Spark and Apache Flink improve upon the MapReduce implementation of the Apache Hadoop framework; MapReduce is the first programming model for distributed processing at large scale available in Apache Hadoop. This report compares the Stream API and the Batch API of both frameworks.
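
    For context, a minimal Flink streaming job in the Java API looks like the sketch below; exact package names and methods can vary between Flink versions.

```java
// Minimal Apache Flink DataStream job (Java API); details vary by Flink version.
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StreamJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements("flink", "spark", "hadoop")  // a bounded demo source
           .map(String::toUpperCase)                  // a simple transformation
           .print();                                  // sink: print to stdout
        env.execute("minimal stream job");            // submit the dataflow for execution
    }
}
```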

  4. Agents-based distributed processes control systems

    Directory of Open Access Journals (Sweden)

    Adrian Gligor

    2011-12-01

Large industrial distributed systems have seen remarkable development in recent years. Their structural and functional complexity has increased, as have the requirements placed on them. These are some of the reasons why considerable research, energy and resources are devoted to solving problems related to these types of systems. The paper addresses the issue of industrial distributed systems, with special attention given to distributed industrial process control systems. A solution for a distributed process control system based on mobile intelligent agents is presented. The main objective of the proposed system is to provide an optimal solution in terms of costs, maintenance, reliability and flexibility. The paper focuses on the requirements, architecture, functionality and advantages brought by the proposed solution.

  5. Distributed processing in integrated data preparation flow

    Science.gov (United States)

    Schulze, Steffen F.; Bailey, George E.

    2004-12-01

The era of week-long turnaround times (TAT) and half-terabyte databases is at hand, as seen in the initial 90 nm production nodes. A quadrupling of TAT and database volumes for subsequent nodes is considered a conservative estimate of the expected growth by most mask data preparation (MDP) groups, so how will fabs and mask manufacturers address this data explosion with minimal impact on cost? The solution is a multi-tiered approach of hardware and software. By shifting from costly Unix servers to cheaper Linux clusters, MDP departments can add hundreds to thousands of CPUs at a fraction of the cost. This hardware change will require a corresponding shift from multithreaded (MT) to distributed-processing tools, or even a heterogeneous configuration of both. Can the EDA market develop the distributed-processing tools to support the era of data explosion? This paper reviews the progression and performance (run time and scalability) of the distributed-processing MDP tools (DRC, OPC, fracture) along with the impact on hierarchy preservation. It considers the advantages of heterogeneous processing over homogeneous. In addition, it provides insight into potential non-scalable overhead components that could eventually exist in a distributed configuration. Lastly, it demonstrates the cost-of-ownership aspect of the Unix and Linux platforms with respect to targeting TAT.

  6. News from the Library: A one-stop-shop for computing literature: ACM Digital Library

    CERN Multimedia

    CERN Library

    2011-01-01

The Association for Computing Machinery, ACM, is the world’s largest educational and scientific computing society. Among other things, the ACM provides the computing field's premier Digital Library and serves its members and the computing profession with leading-edge publications, conferences, and career resources.   ACM Digital Library is available to the CERN community. The most popular journal here at CERN is Communications of the ACM. However, the collection offers access to a series of other valuable academic journals, such as the Journal of the ACM, and even the full text of a number of classic books. In addition, users have access to the ACM Guide to Computing Literature, the most comprehensive bibliographic database focusing on computing, integrated with ACM’s full-text articles and including features such as ACM Author Profile Pages, which provide bibliographic and bibliometric data for over 1,000,000 authors in the field. ACM Digital Library is an excellent com...

  7. Fouling distribution in forward osmosis membrane process.

    Science.gov (United States)

    Lee, Junseok; Kim, Bongchul; Hong, Seungkwan

    2014-06-01

Fouling behavior along the length of a membrane module was systematically investigated through simple modeling and lab-scale experiments of the forward osmosis (FO) membrane process. The flux distribution model developed in this study showed good agreement with experimental results, validating the robustness of the model. This model demonstrated, as expected, that the permeate flux decreased along the membrane channel due to the decreasing osmotic pressure differential across the FO membrane. A series of fouling experiments were conducted with draw and feed solutions at various recoveries simulated by the model. The simulated fouling experiments revealed that higher organic (alginate) fouling, and thus more flux decline, was observed at the last section of a membrane channel, as foulants in the feed solution became more concentrated. Furthermore, the water flux in the FO process declined more severely as the recovery increased, because more foulants were transported to the membrane surface at the elevated solute concentrations associated with higher recovery, creating favorable solution environments for organic adsorption. Fouling reversibility also decreased at the last section of the membrane channel, suggesting that the fouling distribution on an FO membrane along the module should be carefully examined to improve overall cleaning efficiency. Lastly, it was found that the fouling distribution observed in co-current flow operation became less pronounced in counter-current flow operation of the FO membrane process.
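
    The flux-distribution idea can be illustrated numerically: if local permeate flux is modeled as proportional to the osmotic pressure differential, water crossing the membrane dilutes the draw and concentrates the feed, so flux falls along the channel. The following toy Java sketch shows the effect; all parameter values are hypothetical and this is not the paper's actual model.

```java
// Toy illustration of flux decline along an FO channel: permeate flux is taken as
// Jw = A * (piDraw - piFeed); water crossing the membrane dilutes the draw side and
// concentrates the feed side. All parameter values are hypothetical.
public class FoFluxProfile {
    public static void main(String[] args) {
        double A = 0.2;                             // water permeability (arbitrary units)
        double piDraw = 30.0, piFeed = 2.0;         // inlet osmotic pressures (illustrative)
        double drawFlow = 100.0, feedFlow = 100.0;  // channel flows (arbitrary units)
        for (int section = 0; section < 10; section++) {
            double jw = A * (piDraw - piFeed);      // local permeate flux
            System.out.printf("section %d: Jw = %.2f%n", section, jw);
            double dV = jw;                         // permeate picked up in this section
            piDraw *= drawFlow / (drawFlow + dV);   // dilution of the draw solution
            piFeed *= feedFlow / (feedFlow - dV);   // concentration of the feed
            drawFlow += dV;
            feedFlow -= dV;
        }
    }
}
```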

  8. Distributed Aerodynamic Sensing and Processing Toolbox

    Science.gov (United States)

    Brenner, Martin; Jutte, Christine; Mangalam, Arun

    2011-01-01

    A Distributed Aerodynamic Sensing and Processing (DASP) toolbox was designed and fabricated for flight test applications with an Aerostructures Test Wing (ATW) mounted under the fuselage of an F-15B on the Flight Test Fixture (FTF). DASP monitors and processes the aerodynamics with the structural dynamics using nonintrusive, surface-mounted, hot-film sensing. This aerodynamic measurement tool benefits programs devoted to static/dynamic load alleviation, body freedom flutter suppression, buffet control, improvement of aerodynamic efficiency through cruise control, supersonic wave drag reduction through shock control, etc. This DASP toolbox measures local and global unsteady aerodynamic load distribution with distributed sensing. It determines correlation between aerodynamic observables (aero forces) and structural dynamics, and allows control authority increase through aeroelastic shaping and active flow control. It offers improvements in flutter suppression and, in particular, body freedom flutter suppression, as well as aerodynamic performance of wings for increased range/endurance of manned/ unmanned flight vehicles. Other improvements include inlet performance with closed-loop active flow control, and development and validation of advanced analytical and computational tools for unsteady aerodynamics.

  9. Highlights from ACM SIGSPATIAL GIS 2011: the 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems: (Chicago, Illinois - November 1 - 4, 2011)

    DEFF Research Database (Denmark)

    Jensen, Christian S.; Ofek, Eyal; Tanin, Egemen

    2012-01-01

    ACM SIGSPATIAL GIS 2011 was the 19th gathering of the premier event on spatial information and Geographic Information Systems (GIS). It is also the fourth year that the conference was held under the auspices of ACM's most recent special interest group, SIGSPATIAL. Since its start in 1993, the con...

  10. Distributed data processing for public health surveillance

    Directory of Open Access Journals (Sweden)

    Yih Katherine

    2006-09-01

Background: Many systems for routine public health surveillance rely on centralized collection of potentially identifiable personal health information (PHI) records. Although individual, identifiable patient records are essential for conditions for which there is mandated reporting, such as tuberculosis or sexually transmitted diseases, they are not routinely required for effective syndromic surveillance. Public concern about the routine collection of large quantities of PHI to support non-traditional public health functions may make alternative surveillance methods that do not rely on centralized identifiable PHI databases increasingly desirable. Methods: The National Bioterrorism Syndromic Surveillance Demonstration Program (NDP) is an example of one alternative model. All PHI in this system is initially processed within the secured infrastructure of the health care provider that collects and holds the data, using uniform software distributed and supported by the NDP. Only highly aggregated count data is transferred to the datacenter for statistical processing and display. Results: Detailed, patient-level information is readily available to the health care provider to elucidate signals observed in the aggregated data, or for ad hoc queries. We briefly describe the benefits and disadvantages associated with this distributed processing model for routine automated syndromic surveillance. Conclusion: For well-defined surveillance requirements, the model can be successfully deployed with very low risk of inadvertent disclosure of PHI – a feature that may make participation in surveillance systems more feasible for organizations and more appealing to the individuals whose PHI they hold. It is possible to design and implement distributed systems to support non-routine public health needs if required.
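
    The core of the model, reducing identifiable records to aggregate counts before anything leaves the provider's infrastructure, can be sketched in a few lines. The Java sketch below is illustrative only; the class and field names are hypothetical, not from the NDP software.

```java
// Sketch of the distributed-processing idea: PHI stays inside the provider's
// infrastructure; only aggregated counts per syndrome and day leave the site.
// All class and field names are illustrative.
import java.time.LocalDate;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

record Encounter(String patientId, LocalDate date, String syndrome) {} // PHI, never transmitted

public class LocalAggregator {
    /** Reduce identifiable records to counts keyed by (date, syndrome). */
    static Map<String, Integer> aggregate(List<Encounter> encounters) {
        Map<String, Integer> counts = new HashMap<>();
        for (Encounter e : encounters) {
            counts.merge(e.date() + "|" + e.syndrome(), 1, Integer::sum);
        }
        return counts; // only this map is sent to the datacenter
    }
}
```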

  11. A tutorial on Palm distributions for spatial point processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge

    2017-01-01

This tutorial provides an introduction to Palm distributions for spatial point processes. Initially, in the context of finite point processes, we give an explicit definition of Palm distributions in terms of their density functions. Then we review Palm distributions in the general case. Finally, we discuss some examples of Palm distributions for specific models and some applications.

  12. AVIRIS and TIMS data processing and distribution at the land processes distributed active archive center

    Science.gov (United States)

    Mah, G. R.; Myers, J.

    1993-01-01

The U.S. Government has initiated the Global Change Research Program, a systematic study of the Earth as a complete system. NASA's contribution to the Global Change Research Program is the Earth Observing System (EOS), a series of orbital sensor platforms and an associated data processing and distribution system. The EOS Data and Information System (EOSDIS) is the archiving, production, and distribution system for data collected by the EOS space segment and uses a multilayer architecture for processing, archiving, and distributing EOS data. The first layer consists of the spacecraft ground stations and processing facilities that receive the raw data from the orbiting platforms and then separate the data by individual sensors. The second layer consists of Distributed Active Archive Centers (DAACs) that process, distribute, and archive the sensor data. The third layer consists of a user science processing network. The EOSDIS is being developed in a phased implementation. The initial phase, Version 0, is a prototype of the operational system. Version 0 activities are based upon existing systems and are designed to provide an EOSDIS-like capability for information management and distribution. An important science support task is the creation of simulated data sets for EOS instruments from precursor aircraft or satellite data. The Land Processes DAAC, at the EROS Data Center (EDC), is responsible for archiving and processing EOS precursor data from airborne instruments such as the Thermal Infrared Multispectral Scanner (TIMS), the Thematic Mapper Simulator (TMS), and the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS). AVIRIS, TIMS, and TMS are flown by the NASA-Ames Research Center (ARC) on an ER-2. The ER-2 flies at 65,000 feet and can carry up to three sensors simultaneously. Most jointly collected data sets are somewhat boresighted and roughly registered. The instrument data are being used to construct data sets that simulate the spectral and spatial

  13. Distributed resource management across process boundaries

    KAUST Repository

    Suresh, Lalith

    2017-09-27

    Multi-tenant distributed systems composed of small services, such as Service-oriented Architectures (SOAs) and Micro-services, raise new challenges in attaining high performance and efficient resource utilization. In these systems, a request execution spans tens to thousands of processes, and the execution paths and resource demands on different services are generally not known when a request first enters the system. In this paper, we highlight the fundamental challenges of regulating load and scheduling in SOAs while meeting end-to-end performance objectives on metrics of concern to both tenants and operators. We design Wisp, a framework for building SOAs that transparently adapts rate limiters and request schedulers system-wide according to operator policies to satisfy end-to-end goals while responding to changing system conditions. In evaluations against production as well as synthetic workloads, Wisp successfully enforces a range of end-to-end performance objectives, such as reducing average latencies, meeting deadlines, providing fairness and isolation, and avoiding system overload.
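
    Wisp's adaptive rate limiters can be pictured as token buckets whose refill rate a system-wide controller retunes as conditions change. The Java sketch below illustrates that mechanism; it is an assumption-laden illustration, not Wisp's actual interface.

```java
// Minimal adaptive token-bucket rate limiter of the kind a framework like Wisp
// might adjust system-wide; this is an illustration, not Wisp's actual API.
public class AdaptiveRateLimiter {
    private volatile double ratePerSec;   // refill rate; a controller may retune this
    private double tokens;
    private long lastNanos = System.nanoTime();

    public AdaptiveRateLimiter(double ratePerSec) {
        this.ratePerSec = ratePerSec;
        this.tokens = ratePerSec;         // start with one second of burst capacity
    }

    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        tokens = Math.min(ratePerSec, tokens + (now - lastNanos) / 1e9 * ratePerSec);
        lastNanos = now;
        if (tokens >= 1.0) { tokens -= 1.0; return true; }
        return false;                     // shed load: caller should reject or queue
    }

    public void setRate(double newRate) { this.ratePerSec = newRate; } // policy update
}
```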

  14. Parallel and distributed processing: applications to power systems

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Felix; Murphy, Liam [California Univ., Berkeley, CA (United States). Dept. of Electrical Engineering and Computer Sciences

    1994-12-31

Applications of parallel and distributed processing to power systems problems are still in the early stages. Rapid progress in computing and communications promises a revolutionary increase in the capacity of distributed processing systems. In this paper, the state of the art in distributed processing technology and applications is reviewed and future trends are discussed. (author) 14 refs., 1 tab.

  15. Insights from Modeling the Integrated Climate, Biogeochemical Cycles, Human Activities and Their Interactions in the ACME Earth System Model

    Science.gov (United States)

    Leung, L. R.; Thornton, P. E.; Riley, W. J.; Calvin, K. V.

    2017-12-01

    Towards the goal of understanding the contributions from natural and managed systems to current and future greenhouse gas fluxes and carbon-climate and carbon-CO2 feedbacks, efforts have been underway to improve representations of the terrestrial, river, and human components of the ACME earth system model. Broadly, our efforts include implementation and comparison of approaches to represent the nutrient cycles and nutrient limitations on ecosystem production, extending the river transport model to represent sediment and riverine biogeochemistry, and coupling of human systems such as irrigation, reservoir operations, and energy and land use with the ACME land and river components. Numerical experiments have been designed to understand how terrestrial carbon, nitrogen, and phosphorus cycles regulate climate system feedbacks and the sensitivity of the feedbacks to different model treatments, examine key processes governing sediment and biogeochemistry in the rivers and their role in the carbon cycle, and exploring the impacts of human systems in perturbing the hydrological and carbon cycles and their interactions. This presentation will briefly introduce the ACME modeling approaches and discuss preliminary results and insights from numerical experiments that lay the foundation for improving understanding of the integrated climate-biogeochemistry-human system.

  16. Study on the percent of frequency of ACME-Arca in clinical isolates ...

    African Journals Online (AJOL)

ACME (arginine catabolic mobile element) is a mobile genetic element in Staphylococcus epidermidis that encodes specific virulence factors. The purpose of this study was to examine the specific features and prevalence of ACME-arcA in methicillin-resistant Staphylococcus epidermidis isolates obtained from clinical samples in Isfahan.

  17. Autolysis of Lactococcus lactis caused by induced overproduction of its major autolysin, AcmA

    NARCIS (Netherlands)

    Buist, Girbe; Karsens, H; Nauta, A; van Sinderen, D; Venema, G; Kok, J

The optical density of a culture of Lactococcus lactis MG1363 was reduced by more than 60% during prolonged stationary phase. Reduction in optical density (autolysis) was almost absent in a culture of an isogenic mutant containing a deletion in the major autolysin gene, acmA. An acmA mutant carrying

  18. Distributed architecture and distributed processing mode in urban sewage treatment

    Science.gov (United States)

    Zhou, Ruipeng; Yang, Yuanming

    2017-05-01

Decentralized rural sewage treatment facilities are spread over broad areas, which makes their operation and management difficult. In response to these challenges, and based on an analysis of the rural sewage treatment model, we describe the principle, structure, and function of a distributed remote monitoring system built around Internet-of-Things and network communication technologies, and use case analysis to explore the system's features in the daily operation and management of decentralized rural sewage treatment facilities. Practice shows that the remote monitoring system provides technical support for long-term operation and effective supervision of the facilities, and reduces operating, maintenance, and supervision costs.

  19. ARM Airborne Carbon Measurements VI (ARM-ACME VI) Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Biraud, Sebastien [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-05-01

    From October 1, 2015 through September 30, 2016, AAF deployed a Cessna 206 aircraft over the Southern Great Plains, collecting observations of trace gas mixing ratios over the ARM/SGP Central Facility. The aircraft payload included two Atmospheric Observing Systems (AOS Inc.) analyzers for continuous measurements of CO2, and a 12-flask sampler for analysis of carbon cycle gases (CO2, CO, CH4, N2O, 13CO2). The aircraft payload also includes solar/infrared radiation measurements. This research (supported by DOE ARM and TES programs) builds upon previous ARM-ACME missions. The goal of these measurements is to improve understanding of: (a) the carbon exchange of the ARM region; (b) how CO2 and associated water and energy fluxes influence radiative forcing, convective processes, and CO2 concentrations over the ARM region, and (c) how greenhouse gases are transported on continental scales.

  20. Multivariate semi-logistic distribution and processes | Umar | Journal ...

    African Journals Online (AJOL)

    Multivariate semi-logistic distribution is introduced and studied. Some characterizations properties of multivariate semi-logistic distribution are presented. First order autoregressive minification processes and its generalization to kth order autoregressive minification processes with multivariate semi-logistic distribution as ...

  1. A Novel Observation-Guided Approach for Evaluating Mesoscale Convective Systems Simulated by the DOE ACME Model

    Science.gov (United States)

    Feng, Z.; Ma, P. L.; Hardin, J. C.; Houze, R.

    2017-12-01

    Mesoscale convective systems (MCSs) are the largest type of convective storms that develop when convection aggregates and induces mesoscale circulation features. Over North America, MCSs contribute over 60% of the total warm-season precipitation and over half of the extreme daily precipitation in the central U.S. Our recent study (Feng et al. 2016) found that the observed increases in springtime total and extreme rainfall in this region are dominated by increased frequency and intensity of long-lived MCSs*. To date, global climate models typically do not run at a resolution high enough to explicitly simulate individual convective elements and may not have adequate process representations for MCSs, resulting in a large deficiency in projecting changes of the frequency of extreme precipitation events in future climate. In this study, we developed a novel observation-guided approach specifically designed to evaluate simulated MCSs in the Department of Energy's climate model, Accelerated Climate Modeling for Energy (ACME). The ACME model has advanced treatments for convection and subgrid variability and for this study is run at 25 km and 100 km grid spacings. We constructed a robust MCS database consisting of over 500 MCSs from 3 warm-season observations by applying a feature-tracking algorithm to 4-km resolution merged geostationary satellite and 3-D NEXRAD radar network data over the Continental US. This high-resolution MCS database is then down-sampled to the 25 and 100 km ACME grids to re-characterize key MCS properties. The feature-tracking algorithm is adapted with the adjusted characteristics to identify MCSs from ACME model simulations. We demonstrate that this new analysis framework is useful for evaluating ACME's warm-season precipitation statistics associated with MCSs, and provides insights into the model process representations related to extreme precipitation events for future improvement. *Feng, Z., L. R. Leung, S. Hagos, R. A. Houze, C. D. Burleyson

  2. A new aerosol collector for quasi on-line analysis of particulate organic matter: the Aerosol Collection Module (ACM) and first applications with a GC/MS-FID

    Directory of Open Access Journals (Sweden)

    T. Hohaus

    2010-10-01

    In many environments organic matter contributes significantly to the composition of atmospheric aerosol particles, influencing their properties. Detailed chemical characterization of ambient aerosols is critical in order to understand the formation process, composition, and properties of aerosols, and facilitates identification of sources and their relative contributions to ambient aerosols in the atmosphere. However, current analytical methods are far from full speciation of organic aerosols and often require sampling times of up to one week. Offline methods are also subject to artifacts during aerosol collection and storage.

    In the present work a new technique for quasi on-line compound-specific measurements of organic aerosol particles was developed. The Aerosol Collection Module (ACM) focuses particles into a beam which is directed to a cooled sampling surface. The sampling takes place in a high-vacuum environment where the gas phase from the sample volume is removed. After collection is completed, volatile and semi-volatile compounds are evaporated from the collection surface through heating and transferred to a detector.

    For laboratory characterization the ACM was interfaced with a Gas Chromatograph Mass Spectrometer/Flame Ionization Detector system (GC/MS-FID), abbreviated as ACM GC-MS. The particle collection efficiency, gas-phase transfer efficiency, and linearity of the ACM GC-MS were determined using laboratory-generated octadecane aerosols. The ACM GC-MS is linear over the investigated mass range of 10 to 100 ng, and a recovery rate of 100% was found for octadecane particles.

    The ACM GC-MS was applied to investigate secondary organic aerosol (SOA) formed from β-pinene oxidation. Nopinone, myrtanal, myrtenol, 1-hydroxynopinone, 3-oxonopinone, 3,7-dihydroxynopinone, and bicyclo[3,1,1]hept-3-ene-2-one were found as products in the SOA. The ACM GC-MS results are compared to quartz filter

  3. Signal processing for distributed readout using TESs

    International Nuclear Information System (INIS)

    Smith, Stephen J.; Whitford, Chris H.; Fraser, George W.

    2006-01-01

    We describe optimal filtering algorithms for determining energy and position resolution in position-sensitive Transition Edge Sensor (TES) Distributed Read-Out Imaging Devices (DROIDs). Improved algorithms, developed using a small-signal finite-element model, are based on least-squares minimisation of the total noise power in the correlated dual TES DROID. Through numerical simulations we show that significant improvements in energy and position resolution are theoretically possible over existing methods
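
    For context, the least-squares estimate the abstract refers to reduces, in the standard single-channel case, to the familiar frequency-domain optimal filter; the paper generalizes this to the correlated dual-TES geometry. A sketch of the standard form, not the paper's exact estimator:

```latex
% Standard single-channel optimal-filter amplitude estimate (for context; the
% paper generalizes this to the correlated dual-TES case). D(f): measured pulse
% spectrum, S(f): pulse template, N(f): noise power spectral density.
\hat{A} \;=\; \frac{\displaystyle\sum_f \frac{D(f)\,S^{*}(f)}{N(f)}}
                   {\displaystyle\sum_f \frac{\lvert S(f)\rvert^{2}}{N(f)}}
```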

  4. Characterization of a Novel Arginine Catabolic Mobile Element (ACME) and Staphylococcal Chromosomal Cassette mec Composite Island with Significant Homology to Staphylococcus epidermidis ACME type II in Methicillin-Resistant Staphylococcus aureus Genotype ST22-MRSA-IV.

    LENUS (Irish Health Repository)

    Shore, Anna C

    2011-02-22

The arginine catabolic mobile element (ACME) is prevalent among ST8-MRSA-IVa (USA300) isolates and evidence suggests that ACME enhances the ability of ST8-MRSA-IVa to grow and survive on its host. ACME has been identified in a small number of isolates belonging to other MRSA clones but is widespread among coagulase-negative staphylococci (CoNS). This study reports the first description of ACME in two distinct strains of the pandemic ST22-MRSA-IV clone. A total of 238 MRSA isolates recovered in Ireland between 1971 and 2008 were investigated for ACME using a DNA microarray. Twenty-three isolates (9.7%) were ACME-positive; all were either MRSA genotype ST8-MRSA-IVa (7/23, 30%) or ST22-MRSA-IV (16/23, 70%). Whole-genome sequencing and comprehensive molecular characterization revealed the presence of a novel 46-kb ACME and SCCmec composite island (ACME/SCCmec-CI) in ST22-MRSA-IVh isolates (n = 15). This ACME/SCCmec-CI consists of a 12-kb DNA region previously identified in ACME type II in S. epidermidis ATCC 12228, a truncated copy of the J1 region of SCCmec I and a complete SCCmec IVh element. The composite island has a novel genetic organization with ACME located within orfX and SCCmec located downstream of ACME. One pvl-positive ST22-MRSA-IVa isolate carried ACME located downstream of SCCmec IVa as previously described in ST8-MRSA-IVa. These results suggest that ACME has been acquired by ST22-MRSA-IV on two independent occasions. At least one of these instances may have involved horizontal transfer and recombination events between MRSA and CoNS. The presence of ACME may enhance dissemination of ST22-MRSA-IV, an already successful MRSA clone.

  5. Numerical simulation of distributed parameter processes

    CERN Document Server

    Colosi, Tiberiu; Unguresan, Mihaela-Ligia; Muresan, Vlad

    2013-01-01

    The present monograph defines, interprets and uses the matrix of partial derivatives of the state vector with applications for the study of some common categories of engineering. The book covers broad categories of processes that are formed by systems of partial derivative equations (PDEs), including systems of ordinary differential equations (ODEs). The work includes numerous applications specific to Systems Theory based on Mpdx, such as parallel, serial as well as feed-back connections for the processes defined by PDEs. For similar, more complex processes based on Mpdx with PDEs and ODEs as components, we have developed control schemes with PID effects for the propagation phenomena, in continuous media (spaces) or discontinuous ones (chemistry, power system, thermo-energetic) or in electro-mechanics (railway – traction) and so on. The monograph has a purely engineering focus and is intended for a target audience working in extremely diverse fields of application (propagation phenomena, diffusion, hydrodyn...

  6. Computer program for source distribution process in radiation facility

    International Nuclear Information System (INIS)

    Al-Kassiri, H.; Abdul Ghani, B.

    2007-08-01

Computer simulation of dose distribution using Visual Basic has been carried out according to the arrangement and activities of Co-60 sources. This program provides the dose distribution in treated products depending on the product density and desired dose. The program is useful for optimization of source distribution during the loading process. There is good agreement between data calculated by the program and experimental data. (Author)

  7. Towards distributed multiscale simulation of biological processes

    NARCIS (Netherlands)

    Bernsdorf, J.; Berti, G.; Chopard, B.; Hegewald, J.; Krafczyk, M.; Wang, D.; Lorenz, E.; Hoekstra, A.

    2011-01-01

The understanding of biological processes, e.g. related to cardio-vascular disease and treatment, can be significantly improved by numerical simulation. In this paper, we present an approach for a multiscale simulation environment, applied to the prediction of in-stent re-stenosis. Our focus is on

  8. Trading Freshness for Performance in Distributed Systems

    Science.gov (United States)

    2014-12-01


  9. Modelling aspects of distributed processing in telecommunication networks

    NARCIS (Netherlands)

    Tomasgard, A; Audestad, JA; Dye, S; Stougie, L; van der Vlerk, MH; Wallace, SW

    1998-01-01

The purpose of this paper is to formally describe new optimization models for telecommunication networks with distributed processing. Modern distributed networks put more focus on the processing of information and less on the actual transportation of data than we are traditionally used to in

  10. Towards a data processing plane: An automata-based distributed dynamic data processing model

    NARCIS (Netherlands)

    Cushing, R.; Belloum, A.; Bubak, M.; de Laat, C.

Data processing complexity, partitionability, locality and provenance play a crucial role in the effectiveness of distributed data processing. Dynamics in data processing necessitates effective modeling, which allows understanding of and reasoning about the fluidity of data processing. Through

  11. Control of flexible structures with distributed sensing and processing

    Science.gov (United States)

    Ghosh, Dave; Montgomery, Raymond C.

    1994-01-01

Technology is being developed to process signals from distributed sensors using distributed computations. These distributed sensors provide a new feedback capability for vibration control that has not been exploited. Additionally, the sensors proposed are of an optical and distributed nature and could be employed with known techniques of distributed optical computation (Fourier optics, etc.) to accomplish the control-system functions of filtering and regulation in a distributed computer. This paper extends traditional digital optimal estimation and control theory to include distributed sensing and processing for this application. The design model assumes a finite number of modes, which makes it amenable to empirical determination of the design model via familiar modal-test techniques. The sensors are assumed to be distributed, but a finite number of point actuators are used. The design process is illustrated by application to an Euler beam. A simulation of the beam is used to design an optimal vibration control system that uses a distributed deflection sensor and nine linear force actuators. Simulations are also used to study the influence of design and processing errors on the performance.
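
    The optimal estimation and control structure referenced here is, in standard form, a finite-mode state-space model with a Kalman-type estimator feeding a state-feedback regulator. A sketch of that standard form follows (generic notation, not the paper's exact formulation):

```latex
% Standard finite-mode design model: modal states x, point-actuator forces u,
% distributed deflection measurements y; estimator gain L, regulator gain K.
\dot{x} = A x + B u + w, \qquad y = C x + v,
\qquad u = -K\hat{x}, \qquad
\dot{\hat{x}} = A\hat{x} + B u + L\,(y - C\hat{x})
```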

  12. Processing and Distribution of STO2 Data

    Science.gov (United States)

    Goldsmith, Paul

We propose in this ADAP to reduce the data obtained in the December 2016 flight of the STO2 Antarctic balloon observatory. In just over 20 days of taking data, STO2 observed over 2.5 square degrees of the inner Milky Way in the 1900 GHz (158 μm) fine-structure line of ionized carbon ([CII]). This includes over 320,000 spectra with a velocity resolution of 0.16 km/s and an angular resolution of 1′. In common with the higher bands of the Herschel HIFI instrument, which also employed hot electron bolometer (HEB) mixers, there are significant baseline issues with the data that make reduction a significant challenge. Due to the year's postponement of the STO2 launch because of weather in the 2015/16 season, funds for data analysis were largely redirected to support the team who enabled the successful launch and flight. A supplementary focused effort is thus needed to make STO2 data readily usable by the astronomical community, which is what we propose here. This ADAP will be a two-year program, including the following steps: (1) Refine and optimize algorithms for excision of bad channels, correction for receiver gain changes, removal of variable bad baselines, final baseline adjustment, and verification of calibration. (2) Develop an integrated pipeline incorporating the optimized algorithms; process the entire STO2 data set using the pipeline, and make an initial release of the data (DR1) to the public. (3) Refine the data calibration, including ancillary data sets coincident with the STO2 fields, and make the data VO-compliant. (4) Write documentation for the pipeline and publish it in an appropriate journal; release the final second data release (DR2) to the public, and hand off the data to permanent repositories: the NASA/IPAC IRSA database, the Harvard University Dataverse, and Cyverse, led by the University of Arizona. Members of the STO2 data reduction team have extensive experience with HIFI data, and particularly with HEB fine-structure spectra. We are thus confident that we can build on this

  13. Transparent checkpointing and process migration in a distributed system

    OpenAIRE

    2004-01-01

    A distributed system for creating a checkpoint for a plurality of processes running on the distributed system. The distributed system includes a plurality of compute nodes with an operating system executing on each compute node. A checkpoint library resides at the user level on each of the compute nodes, and the checkpoint library is transparent to the operating system residing on the same compute node and to the other compute nodes. Each checkpoint library uses a windowed messaging logging p...

  14. Tempered stable distributions stochastic models for multiscale processes

    CERN Document Server

    Grabchak, Michael

    2015-01-01

This brief is concerned with tempered stable distributions and their associated Levy processes. It is a good text for researchers interested in learning about tempered stable distributions.  A tempered stable distribution is one which takes a stable distribution and modifies its tails to make them lighter. The motivation for this class comes from the fact that infinite variance stable distributions appear to provide a good fit to data in a variety of situations, but the extremely heavy tails of these models are not realistic for most real world applications. The idea of using distributions that modify the tails of stable models to make them lighter seems to have originated in the influential paper of Mantegna and Stanley (1994). Since then, these distributions have been extended and generalized in a variety of ways. They have been applied to a wide variety of areas including mathematical finance, biostatistics, computer science, and physics.
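
    As a concrete instance of the tail modification described above, the classical tempered stable Lévy measure multiplies the stable Lévy density by an exponential tempering factor. One common parameterization (a standard form, not necessarily the book's notation) is:

```latex
% Classical tempered stable Levy measure: the stable density |x|^{-1-\alpha}
% is multiplied by exponential tempering, making the tails lighter.
\nu(\mathrm{d}x) \;=\; \Big( c_{+}\, e^{-\lambda_{+} x}\, x^{-1-\alpha}\, \mathbf{1}_{\{x>0\}}
 \;+\; c_{-}\, e^{-\lambda_{-} \lvert x\rvert}\, \lvert x\rvert^{-1-\alpha}\, \mathbf{1}_{\{x<0\}} \Big)\,\mathrm{d}x
```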

  15. Characteristics of the Audit Processes for Distributed Informatics Systems

    Directory of Open Access Journals (Sweden)

    Marius POPA

    2009-01-01

    The paper addresses the following issues: the main characteristics of distributed informatics systems, with examples and the main categories of differences among them; concepts, principles, techniques and fields for auditing distributed informatics systems; and the concept of a standard, its classes and characteristics, with examples of standards, guidelines, procedures and controls for auditing distributed informatics systems. Distributed informatics systems are characterized by the following issues: development process, resources, implemented functionalities, architectures, system classes, and particularities. The audit framework has two sides: the audit process and the auditors. The audit process must be led in accordance with the standard specifications in the IT&C field. The auditors must meet the ethical principles and must have a high level of professional skills and competence in the IT&C field.

  16. Mistaking geography for biology: inferring processes from species distributions.

    Science.gov (United States)

    Warren, Dan L; Cardillo, Marcel; Rosauer, Dan F; Bolnick, Daniel I

    2014-10-01

    Over the past few decades, there has been a rapid proliferation of statistical methods that infer evolutionary and ecological processes from data on species distributions. These methods have led to considerable new insights, but they often fail to account for the effects of historical biogeography on present-day species distributions. Because the geography of speciation can lead to patterns of spatial and temporal autocorrelation in the distributions of species within a clade, this can result in misleading inferences about the importance of deterministic processes in generating spatial patterns of biodiversity. In this opinion article, we discuss ways in which patterns of species distributions driven by historical biogeography are often interpreted as evidence of particular evolutionary or ecological processes. We focus on three areas that are especially prone to such misinterpretations: community phylogenetics, environmental niche modelling, and analyses of beta diversity (compositional turnover of biodiversity). Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.

  17. Image exploitation and dissemination prototype of distributed image processing

    International Nuclear Information System (INIS)

    Batool, N.; Huqqani, A.A.; Mahmood, A.

    2003-05-01

    Image processing application requirements can be best met by using a distributed environment. This report presents a system that draws inferences by utilizing existing LAN resources in a distributed computing environment, using Java and web technology for extensive processing, making it truly system independent. Although the environment has been tested using image processing applications, its design and architecture are truly general and modular, so that it can be used for other applications as well that require distributed processing. Images originating from the server are fed to the workers along with the desired operations to be performed on them. The server distributes the task among the workers, who carry out the required operations and send back the results. This application has been implemented using the Remote Method Invocation (RMI) feature of Java. Java RMI allows an object running in one Java Virtual Machine (JVM) to invoke methods on another JVM, thus providing remote communication between programs written in the Java programming language. RMI can therefore be used to develop distributed applications [1]. We undertook this project to gain a better understanding of distributed systems concepts and their uses for resource-hungry jobs. The image processing application was developed under this environment.

  18. Wigner Ville Distribution in Signal Processing, using Scilab Environment

    Directory of Open Access Journals (Sweden)

    Petru Chioncel

    2011-01-01

    The Wigner-Ville distribution offers a visual display of quantitative information about the way a signal's energy is distributed in both time and frequency. Through that, this distribution embodies the fundamental concepts of Fourier and time-domain analysis. The energy of the signal is distributed so that specific frequencies are localized in time by the group delay time, and at specific instants in time the frequency is given by the instantaneous frequency. The net positive volume of the Wigner distribution is numerically equal to the signal's total energy. The paper shows the application of the Wigner-Ville distribution in the field of signal processing, using the Scilab environment.
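
    The distribution is straightforward to compute for a sampled analytic signal. The sketch below, in plain NumPy/SciPy rather than Scilab and purely illustrative, evaluates the discrete Wigner-Ville transform as an FFT of the lag product x[n+m]·x*[n-m]:

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x.
    Returns an (N, N) real array: rows are time samples, columns frequency bins."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau = min(n, N - 1 - n)              # largest symmetric lag at time n
        kernel = np.zeros(N, dtype=complex)
        for m in range(-tau, tau + 1):
            kernel[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(kernel).real       # real by Hermitian symmetry of kernel
    return W

# Usage: a linear chirp; the WVD concentrates energy along the
# instantaneous-frequency line (signal parameters are illustrative).
t = np.linspace(0.0, 1.0, 256, endpoint=False)
sig = np.cos(2 * np.pi * (20.0 * t + 40.0 * t ** 2))
W = wigner_ville(hilbert(sig))
```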

  19. Distributed learning process: principles of design and implementation

    Directory of Open Access Journals (Sweden)

    G. N. Boychenko

    2016-01-01

    At the present stage, broad usage of information and communication technologies (ICT) in educational practices is one of the leading trends in the development of the global education system. This trend has led to a transformation of instructional interaction models. Scientists have developed the theory of distributed cognition (Salomon, G., Hutchins, E.) and of distributed education and training (Fiore, S. M., Salas, E., Oblinger, D. G., Barone, C. A., Hawkins, B. L.). The educational process is based on two sub-processes, learning and teaching, separated in time and space, which are aimed at the organization of flexible interactions between learners, teachers and educational content located in different non-centralized places. The purpose of this design research is to find a solution to the problem of formalizing the design and realization of the distributed learning process, which is significant in instructional design. The solution to this problem should take into account the specifics of distributed interactions between team members, who become the collective subject of distributed cognition in the distributed learning process. This makes it necessary to design roles and functions of the individual team members performing distributed educational activities. Personal educational objectives should be determined by decomposition of team objectives into functional roles of its members, considering the personal and learning needs and interests of students. Theoretical and empirical methods used in the study: theoretical analysis of philosophical, psychological, and pedagogical literature on the issue; analysis of international standards in the e-learning domain; exploration of the practical usage of distributed learning in the academic and corporate sectors; generalization, abstraction, cognitive modelling, and ontology engineering methods. The result of the research is a methodology for the design and implementation of the distributed learning process based on the competency approach. Methodology proposed by...

  20. Parallel and distributed processing in power system simulation and control

    Energy Technology Data Exchange (ETDEWEB)

    Falcao, Djalma M. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia

    1994-12-31

    Recent advances in computer technology will certainly have a great impact in the methodologies used in power system expansion and operational planning as well as in real-time control. Parallel and distributed processing are among the new technologies that present great potential for application in these areas. Parallel computers use multiple functional or processing units to speed up computation while distributed processing computer systems are collection of computers joined together by high speed communication networks having many objectives and advantages. The paper presents some ideas for the use of parallel and distributed processing in power system simulation and control. It also comments on some of the current research work in these topics and presents a summary of the work presently being developed at COPPE. (author) 53 refs., 2 figs.

  1. Design and implementation of a distributed Complex Event Processing system

    Science.gov (United States)

    Li, Yan; Shang, Yanlei

    2017-01-01

    Making use of the massive streams of events from sources such as sensors and bank transactions and extracting valuable information from them is of significant importance. Complex Event Processing (CEP), a method of detecting complex events from streams of simple events, provides a way of processing data in real time, fast and efficiently. However, a single-node CEP system cannot satisfy the requirements of processing massive event streams from numerous event sources. Therefore, this article designs a distributed CEP system, which combines Siddhi, a CEP engine, and Storm, a distributed real-time computation architecture. This system can construct its topology automatically, based on the event streams and execution plans provided by users, and process the event streams in parallel. Compared with a single-node CEP system, the distributed system achieves better performance.
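
    As a toy illustration of the kind of pattern a CEP engine such as Siddhi evaluates, the sketch below matches an ordered sequence of event types within a time window (event names and the window are invented; a real engine expresses this declaratively, and Storm parallelizes it across nodes):

```python
def detect_sequence(events, pattern, window):
    """Toy CEP detector: yield (start, end) whenever the event types in
    `pattern` occur in order within `window` seconds.
    events: time-ordered iterable of (timestamp, event_type) tuples."""
    partial = []                              # (start_ts, next_index) states
    for ts, etype in events:
        nxt = []
        for start, idx in partial:
            if ts - start > window:
                continue                      # window expired, drop state
            if etype == pattern[idx]:
                if idx + 1 == len(pattern):
                    yield (start, ts)         # complete match
                else:
                    nxt.append((start, idx + 1))
            else:
                nxt.append((start, idx))      # keep waiting
        if etype == pattern[0]:               # any matching event may start a match
            if len(pattern) == 1:
                yield (ts, ts)
            else:
                nxt.append((ts, 1))
        partial = nxt

# Invented fraud-style example: two declines then a purchase within 5 s.
events = [(0.0, 'login'), (1.2, 'declined'), (1.9, 'declined'), (2.5, 'purchase')]
print(list(detect_sequence(events, ['declined', 'declined', 'purchase'], 5.0)))
# -> [(1.2, 2.5)]
```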

  2. Proceedings of the 2nd ACM SIGSPATIAL International Workshop on Indoor Spatial Awareness

    DEFF Research Database (Denmark)

    These proceedings contain the papers selected for presentation at the Second International Workshop on Indoor Spatial Awareness, hosted by ACM SIGSPATIAL and held in conjunction with the 18th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS...... regulated access restrictions; they are relatively uniform and lack global landmarks; they feature special indoor positioning systems; and their representations are often poorly integrated with those of outdoor spaces. New theories, data models, and systems are needed in order to provide integrated......, seamless services across all spaces. For this reason, research has begun to extend the scope of location-based services and GIS to indoor spaces, with the objective of supporting indoor orientation and navigation services, emergency management, indoor space management, 3D cadastre, etc. This workshop...

  3. Critical Assessment of Temperature Distribution in Submerged Arc Welding Process

    Directory of Open Access Journals (Sweden)

    Vineet Negi

    2013-01-01

    Temperature distribution during any welding process holds the key for understanding and predicting several important welding attributes like the heat affected zone, microstructure of the weld, residual stress, and distortion during welding. The accuracy of analytical approaches for modeling temperature distribution during welding has been constrained by oversimplified assumptions regarding boundary conditions and material properties. In this paper, an attempt has been made to model the temperature distribution during the submerged arc welding process using a finite element modeling technique implemented in ANSYS v12. In the present analysis, the heat source is assumed to be double-ellipsoidal with Gaussian volumetric heat generation. Furthermore, variation of material properties with temperature and both convective and radiant heat loss boundary conditions have been considered. The predicted temperature distribution is then validated against experimental results obtained by thermal imaging of the welded plate, and they are found to be in good agreement.
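
    The heat source named in the abstract is the standard Goldak double-ellipsoid; a minimal sketch of its volumetric heat-generation term, with illustrative parameter values rather than those used in the paper:

```python
import numpy as np

def goldak_q(x, y, z, Q=1500.0, a=0.01, b=0.01, cf=0.01, cr=0.02, ff=0.6, fr=1.4):
    """Goldak double-ellipsoidal volumetric heat source, in W/m^3.
    x: welding direction (x >= 0 ahead of the arc), y: width, z: depth;
    a, b: lateral and depth semi-axes; cf, cr: front/rear lengths;
    ff + fr = 2 splits the arc power Q between the two half-ellipsoids.
    All numeric values here are illustrative assumptions."""
    c, f = (cf, ff) if x >= 0 else (cr, fr)
    coeff = 6.0 * np.sqrt(3.0) * f * Q / (a * b * c * np.pi * np.sqrt(np.pi))
    return coeff * np.exp(-3.0 * x**2 / c**2 - 3.0 * y**2 / a**2 - 3.0 * z**2 / b**2)

print(goldak_q(0.0, 0.0, 0.0))   # peak heat generation under the arc centre
```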

  4. The process group approach to reliable distributed computing

    Science.gov (United States)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  5. Power Processing for Advanced Power Distribution and Control

    OpenAIRE

    TAKAHASHI, Ryo; AZUMA, Shun-ichi; HASEGAWA, Mikio; ANDO, Hiroyasu; HIKIHARA, Takashi

    2017-01-01

    A power packet dispatching system is proposed to realize the function of power on demand. This system distributes electrical power in quantized form, which is called power processing. The system has extensibility and flexibility. Here, we propose to use the power packet dispatching system as the next-generation power distribution system in self-contained, closed systems such as robots, cars, and aircraft. This paper introduces the concept and the research required to take the power pa...

  6. Listening to professional voices: draft 2 of the ACM code of ethics and professional conduct

    OpenAIRE

    Flick, Catherine; Brinkman, Bo; Gotterbarn, D. W.; Miller, Keith; Vazansky, Kate; Wolf, Marty J.

    2017-01-01

    For the first time since 1992, the ACM Code of Ethics and Professional Conduct (the Code) is being updated. The Code Update Task Force in conjunction with the Committee on Professional Ethics is seeking advice from ACM members on the update. We indicated many of the motivations for changing the Code when we shared Draft 1 of Code 2018 with the ...

  7. Characterization of a novel arginine catabolic mobile element (ACME) and staphylococcal chromosomal cassette mec composite island with significant homology to Staphylococcus epidermidis ACME type II in methicillin-resistant Staphylococcus aureus genotype ST22-MRSA-IV.

    LENUS (Irish Health Repository)

    Shore, Anna C

    2011-05-01

    The arginine catabolic mobile element (ACME) is prevalent among methicillin-resistant Staphylococcus aureus (MRSA) isolates of sequence type 8 (ST8) and staphylococcal chromosomal cassette mec (SCCmec) type IVa (USA300) (ST8-MRSA-IVa isolates), and evidence suggests that ACME enhances the ability of ST8-MRSA-IVa to grow and survive on its host. ACME has been identified in a small number of isolates belonging to other MRSA clones but is widespread among coagulase-negative staphylococci (CoNS). This study reports the first description of ACME in two distinct strains of the pandemic ST22-MRSA-IV clone. A total of 238 MRSA isolates recovered in Ireland between 1971 and 2008 were investigated for ACME using a DNA microarray. Twenty-three isolates (9.7%) were ACME positive, and all were either MRSA genotype ST8-MRSA-IVa (7/23, 30%) or MRSA genotype ST22-MRSA-IV (16/23, 70%). Whole-genome sequencing and comprehensive molecular characterization revealed the presence of a novel 46-kb ACME and staphylococcal chromosomal cassette mec (SCCmec) composite island (ACME/SCCmec-CI) in ST22-MRSA-IVh isolates (n=15). This ACME/SCCmec-CI consists of a 12-kb DNA region previously identified in ACME type II in S. epidermidis ATCC 12228, a truncated copy of the J1 region of SCCmec type I, and a complete SCCmec type IVh element. The composite island has a novel genetic organization, with ACME located within orfX and SCCmec located downstream of ACME. One PVL locus-positive ST22-MRSA-IVa isolate carried ACME located downstream of SCCmec type IVa, as previously described in ST8-MRSA-IVa. These results suggest that ACME has been acquired by ST22-MRSA-IV on two independent occasions. At least one of these instances may have involved horizontal transfer and recombination events between MRSA and CoNS. The presence of ACME may enhance dissemination of ST22-MRSA-IV, an already successful MRSA clone.

  8. Distributed Signal Processing for Wireless EEG Sensor Networks.

    Science.gov (United States)

    Bertrand, Alexander

    2015-11-01

    Inspired by ongoing evolutions in the field of wireless body area networks (WBANs), this tutorial paper presents a conceptual and exploratory study of wireless electroencephalography (EEG) sensor networks (WESNs), with an emphasis on distributed signal processing aspects. A WESN is conceived as a modular neuromonitoring platform for high-density EEG recordings, in which each node is equipped with an electrode array, a signal processing unit, and facilities for wireless communication. We first address the advantages of such a modular approach, and we explain how distributed signal processing algorithms make WESNs more power-efficient, in particular by avoiding data centralization. We provide an overview of distributed signal processing algorithms that are potentially applicable in WESNs, and for illustration purposes, we also provide a more detailed case study of a distributed eye blink artifact removal algorithm. Finally, we study the power efficiency of these distributed algorithms in comparison to their centralized counterparts in which all the raw sensor signals are centralized in a near-end or far-end fusion center.

  9. Flexible Execution of Distributed Business Processes based on Process Instance Migration

    Directory of Open Access Journals (Sweden)

    Sonja Zaplata

    2010-07-01

    Many advanced business applications, collaborations, and virtual organizations are based on distributed business process management. As competition, fluctuation and dynamism in such scenarios increase continuously, the distribution and execution of individual process instances should become as flexible as possible in order to allow for ad-hoc adaptation to changing conditions at runtime. However, most current approaches address process distribution by fragmenting processes already at design time. Such a static configuration can be assigned to different process engines near runtime, but can hardly be changed dynamically, because the distribution logic is weaved into the business process itself. A more dynamic segmentation of such distribution can be achieved by process runtime migration, even without modifying the business logic of the original process model. Therefore, this contribution presents a migration data meta-model for enhancing such existing processes with the ability for runtime migration. The approach permits the inclusion of intentions and privacy requirements of both process modelers and initiators, and supports execution strategies for sequential and parallel execution of processes. The contribution concludes by presenting a conceptual evaluation in which runtime migration has been applied to XPDL and WS-BPEL process instances and, based on these results, a qualitative comparison of migration and fragmentation.

  10. Particularities of Verification Processes for Distributed Informatics Applications

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2013-01-01

    This paper presents distributed informatics applications and the characteristics of their development cycle. It defines the concept of verification and identifies the differences from software testing. Particularities of the software testing and software verification processes are described. The verification steps and necessary conditions are presented, and the factors influencing verification quality are established. Software optimality verification is analyzed, and some metrics are defined for the verification process.

  11. How to Read Probability Distributions as Statements about Process

    Directory of Open Access Journals (Sweden)

    Steven A. Frank

    2014-11-01

    Probability distributions can be read as simple expressions of information. Each continuous probability distribution describes how information changes with magnitude. Once one learns to read a probability distribution as a measurement scale of information, opportunities arise to understand the processes that generate the commonly observed patterns. Probability expressions may be parsed into four components: the dissipation of all information, except the preservation of average values, taken over the measurement scale that relates changes in observed values to changes in information, and the transformation from the underlying scale on which information dissipates to alternative scales on which probability pattern may be expressed. Information invariances set the commonly observed measurement scales and the relations between them. In particular, a measurement scale for information is defined by its invariance to specific transformations of underlying values into measurable outputs. Essentially all common distributions can be understood within this simple framework of information invariance and measurement scale.

  12. Distributed processing and analysis of ATLAS experimental data

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment has been taking data steadily since Autumn 2009, and has collected so far over 5 fb-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the...

  13. Distributed processing and analysis of ATLAS experimental data

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment has been taking data steadily since Autumn 2009, collecting close to 1 fb-1 of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the first...

  14. Ordinary multiplication of distributions. Application to control of economic processes

    Science.gov (United States)

    Kim, A. V.; Kormyshev, V. M.; Serova, N. B.; Fitina, L. N.; Kozhakhmetov, A. B.

    2017-11-01

    There exist many physical and economic models which cannot be described in terms of usual functions. Such problems require application of the theory of distributions (generalized functions) (P. Antosik, J. Mikusinski, R. Sikorski, 1973; J. F. Colombeau, 1984; A.V. Kim, 2015, 1988; S.L. Sobolev, 1950; L. Schwartz, 1950-1951). One of the first and most important problems of distribution theory is the impossibility of defining a multiplication of distributions. The problem is so important that it is still the focus of researchers, because of its various applications to nonlinear singular models. In this paper, an ordinary multiplication of generalized functions (distributions) is proposed. The obtained results are applied to a problem of control of economic processes.

  15. Building Big Flares: Constraining Generating Processes of Solar Flare Distributions

    Science.gov (United States)

    Wyse Jackson, T.; Kashyap, V.; McKillop, S.

    2015-12-01

    We address mechanisms which seek to explain the observed solar flare distribution, dN/dE ~ E^-1.8. We have compiled a comprehensive database of solar flares and their characteristics from GOES, NOAA, XRT, and AIA data, covering the year 2013. These datasets allow us to probe how stored magnetic energy is released over the course of an active region's evolution. We fit power laws to flare distributions over various attribute groupings. For instance, we compare flares that occur before and after an active region reaches its maximum area, and show that the corresponding flare distributions are indistinguishable; thus, the processes that lead to magnetic reconnection are similar in both cases. A turnover in the distribution is not detectable at the energies accessible to our study, suggesting that a self-organized critical (SOC) process is a valid mechanism. However, we find changes in the distributions that suggest that the simple picture of an SOC, where flares draw energy from an inexhaustible reservoir of stored magnetic energy, is incomplete. Following the evolution of the flare distribution over the lifetimes of active regions, we find that the distribution flattens with time and for larger active regions, and that a single power-law model is insufficient. This implies that flares that occur later in the lifetime of an active region tend towards higher energies. We conclude that the SOC process must have an upper bound. Increasing the scope of the study to include data from other years and more instruments will increase the robustness of these results. This work was supported by the NSF-REU Solar Physics Program at SAO, grant number AGS 1263241, NASA contract NAS8-03060 to the Chandra X-ray Center, and NASA Hinode/XRT contract NNM07AB07C to SAO.

  16. First-Passage-Time Distribution for Variable-Diffusion Processes

    Science.gov (United States)

    Barney, Liberty; Gunaratne, Gemunu H.

    2017-05-01

    The first-passage-time distribution, which presents the likelihood of a stock reaching a pre-specified price at a given time, is useful in establishing the value of financial instruments and in designing trading strategies. The first-passage-time distribution for Wiener processes has a single peak, while that for stocks exhibits a notable second peak within a trading day. This feature has only been discussed sporadically, often dismissed as due to insufficient or incorrect data or circumvented by conversion to tick time, and to the best of our knowledge has not been explained in terms of the underlying stochastic process. It was shown previously that intra-day variations in the market can be modeled by a stochastic process containing two variable-diffusion processes (Hua et al., Physica A 419:221-233, 2015). We show here that the first-passage-time distribution of this two-stage variable-diffusion model does exhibit a behavior similar to the empirical observation. In addition, we find that an extended model incorporating overnight price fluctuations exhibits intra- and inter-day behavior similar to those of empirical first-passage-time distributions.
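
    The qualitative claim is easy to probe by simulation; the sketch below estimates first-passage times for a Wiener process and for an invented time-varying diffusivity (a stand-in for, not a reproduction of, the calibrated two-stage model of Hua et al.):

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_times(sigma, barrier=1.0, dt=1e-3, t_max=10.0, n_paths=20000):
    """Monte Carlo first-passage times of dX = sigma(t) dW through `barrier`.
    Returns an array of hitting times (np.nan where the barrier is never hit)."""
    x = np.zeros(n_paths)
    hit = np.full(n_paths, np.nan)
    alive = np.ones(n_paths, dtype=bool)
    for i in range(1, int(t_max / dt) + 1):
        t = i * dt
        x[alive] += sigma(t) * np.sqrt(dt) * rng.standard_normal(alive.sum())
        crossed = alive & (x >= barrier)
        hit[crossed] = t
        alive &= ~crossed
        if not alive.any():
            break
    return hit

# Constant-sigma Wiener process vs. a diffusivity that rises mid-session
# (purely an illustrative assumption); compare the two histograms of `hit`.
fpt_wiener = first_passage_times(lambda t: 0.5)
fpt_variable = first_passage_times(lambda t: 0.3 + 0.6 * np.exp(-(t - 5.0) ** 2))
```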

  17. Post-processing procedure for industrial quantum key distribution systems

    International Nuclear Information System (INIS)

    Kiktenko, Evgeny; Trushechkin, Anton; Fedorov, Aleksey; Kurochkin, Yury

    2016-01-01

    We present algorithmic solutions aimed at the post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
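
    Of the steps listed, privacy amplification is the most self-contained to sketch. The snippet below uses the standard 2-universal Toeplitz-hashing construction, a common textbook choice and not necessarily the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def toeplitz_privacy_amplification(key_bits, out_len, seed_bits):
    """Privacy amplification via 2-universal Toeplitz hashing: compress an
    n-bit error-corrected key to out_len final bits. seed_bits must contain
    n + out_len - 1 random bits shared over the authenticated channel."""
    key = np.asarray(key_bits)
    seed = np.asarray(seed_bits)
    n = len(key)
    assert len(seed) == n + out_len - 1
    i = np.arange(out_len)[:, None]
    j = np.arange(n)[None, :]
    T = seed[i - j + n - 1]        # Toeplitz matrix defined by the seed
    return (T @ key) % 2

raw_key = rng.integers(0, 2, 1024)            # key after error correction
seed = rng.integers(0, 2, 1024 + 256 - 1)     # shared public randomness
final_key = toeplitz_privacy_amplification(raw_key, 256, seed)
```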

  18. Masses of Negative Multinomial Distributions: Application to Polarimetric Image Processing

    Directory of Open Access Journals (Sweden)

    Philippe Bernardoff

    2013-01-01

    This paper derives new closed-form expressions for the masses of negative multinomial distributions. These masses can be maximized to determine the maximum likelihood estimators of the distribution's unknown parameters. An application to polarimetric image processing is investigated. We study the maximum likelihood estimators of the polarization degree of polarimetric images using different combinations of images.
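
    For orientation, the textbook closed form of the negative multinomial mass (the paper derives new, more general expressions) can be evaluated stably in log space:

```python
import numpy as np
from scipy.special import gammaln

def neg_multinomial_logpmf(x, x0, p):
    """Log-mass of the textbook negative multinomial distribution:
    P(x) = Gamma(x0 + sum(x)) / (Gamma(x0) * prod(x_i!)) * p0^x0 * prod(p_i^x_i),
    with p0 = 1 - sum(p) > 0 and shape x0 > 0."""
    x = np.asarray(x, dtype=float)
    p = np.asarray(p, dtype=float)
    p0 = 1.0 - p.sum()
    return (gammaln(x0 + x.sum()) - gammaln(x0) - gammaln(x + 1.0).sum()
            + x0 * np.log(p0) + (x * np.log(p)).sum())

print(np.exp(neg_multinomial_logpmf([2, 1], x0=3.0, p=[0.2, 0.1])))
```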

  19. Just-in-time Data Distribution for Analytical Query Processing

    NARCIS (Netherlands)

    M.G. Ivanova (Milena); M.L. Kersten (Martin); F.E. Groffen (Fabian)

    2012-01-01

    Distributed processing commonly requires data spread across machines using a priori static or hash-based data allocation. In this paper, we explore an alternative approach that starts from a master node in control of the complete database, and a variable number of worker nodes...

  20. Extra-Margins in ACM's Adjusted NMa ‘Mortgage-Rate-Calculation Method

    NARCIS (Netherlands)

    Dijkstra, M.; Schinkel, M.P.

    2013-01-01

    We analyse the development since 2004 of our concept of extra-margins on Dutch mortgages (Dijkstra & Schinkel, 2012), based on funding cost estimations in ACM (2013), which are an update of those in NMa (2011). Neither costs related to increased mortgage-specific risks, nor the inclusion of Basel

  1. Microstructure and chemical bonding of DLC films deposited on ACM rubber by PACVD

    NARCIS (Netherlands)

    Martinez-Martinez, D.; Schenkel, M.; Pei, Y.T.; Sánchez-López, J.C.; Hosson, J.Th.M. De

    2011-01-01

    The microstructure and chemical bonding of DLC films prepared by plasma assisted chemical vapor deposition on acrylic rubber (ACM) are studied in this paper. The temperature variation produced by the ion impingement during plasma cleaning and subsequent film deposition was used to modify the film

  2. Tribological performance of DLC films deposited on ACM rubber by PACVD

    NARCIS (Netherlands)

    Schenkel, M.; Martinez-Martinez, D.; Pei, Y.T.; Hosson, J.Th.M De

    2011-01-01

    In this paper the tribological and adhesive performance of DLC films deposited by plasma assisted chemical vapor deposition on acrylic rubber (ACM) is studied. The effect of applied load and sliding velocity on the coefficient of friction and wear rate has been investigated. Effects of the rubber substrate

  3. ACME: A scalable parallel system for extracting frequent patterns from a very long sequence

    KAUST Repository

    Sahli, Majed

    2014-10-02

    Modern applications, including bioinformatics, time series, and web log analysis, require the extraction of frequent patterns, called motifs, from one very long (i.e., several gigabytes) sequence. Existing approaches are either heuristics that are error-prone, or exact (also called combinatorial) methods that are extremely slow, therefore, applicable only to very small sequences (i.e., in the order of megabytes). This paper presents ACME, a combinatorial approach that scales to gigabyte-long sequences and is the first to support supermaximal motifs. ACME is a versatile parallel system that can be deployed on desktop multi-core systems, or on thousands of CPUs in the cloud. However, merely using more compute nodes does not guarantee efficiency, because of the related overheads. To this end, ACME introduces an automatic tuning mechanism that suggests the appropriate number of CPUs to utilize, in order to meet the user constraints in terms of run time, while minimizing the financial cost of cloud resources. Our experiments show that, compared to the state of the art, ACME supports three orders of magnitude longer sequences (e.g., DNA for the entire human genome); handles large alphabets (e.g., English alphabet for Wikipedia); scales out to 16,384 CPUs on a supercomputer; and supports elastic deployment in the cloud.

  4. The major autolysin Acm2 from Lactobacillus plantarum undergoes cytoplasmic O-glycosylation

    NARCIS (Netherlands)

    Fredriksen, L.; Mathiesen, G.; Moen, A.; Bron, P.A.; Kleerebezem, M.; Eijsink, V.G.H.; Egge-Jacobsen, W.

    2012-01-01

    The major autolysin Acm2 from the probiotic strain Lactobacillus plantarum WCFS1 contains high proportions of alanine, serine, and threonine in its N-terminal so-called AST domain. It has been suggested that this extracellular protein might be glycosylated, but this has not been experimentally

  5. ACME algorithms for contact in a multiphysics environment API version 2.2.

    Energy Technology Data Exchange (ETDEWEB)

    Heinstein, Martin Wilhelm; Glass, Micheal W.; Gullerud, Arne S.; Brown, Kevin H.; Voth, Thomas Eugene; Jones, Reese E.

    2004-07-01

    An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library.

  6. An Educational Tool for Interactive Parallel and Distributed Processing

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2011-01-01

    In this paper we try to describe how the Modular Interactive Tiles System (MITS) can be a valuable tool for introducing students to interactive parallel and distributed processing programming. This is done by providing an educational hands-on tool that allows a change of representation...... of the abstract problems related to designing interactive parallel and distributed systems. Indeed, MITS seems to bring a series of goals into the education, such as parallel programming, distributedness, communication protocols, master dependency, software behavioral models, adaptive interactivity, feedback......

  7. Radar data processing using a distributed computational system

    Science.gov (United States)

    Mota, Gilberto F.

    1992-06-01

    This research specifies and validates a new concurrent decomposition scheme, called Confined Space Search Decomposition (CSSD), to exploit parallelism of Radar Data Processing algorithms using a Distributed Computational System. To formalize the specification, we propose and apply an object-oriented methodology called Decomposition Cost Evaluation Model (DCEM). To reduce the penalties of load imbalance, we propose a distributed dynamic load balance heuristic called Object Reincarnation (OR). To validate the research, we first compare our decomposition with an identified alternative using the proposed DCEM model and then develop a theoretical prediction of selected parameters. We also develop a simulation to check the Object Reincarnation Concept.

  8. Beowulf Distributed Processing and the United States Geological Survey

    Science.gov (United States)

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing

  9. Visualization of Stress Distribution on Ultrasonic Vibration Aided Drilling Process

    Science.gov (United States)

    Isobe, Hiromi; Uehara, Yusuke; Okada, Manabu; Horiuchi, Tomio; Hara, Keisuke

    Ultrasonically assisted machining is suitable for achieving sub-millimeter drilling in difficult-to-cut materials such as ceramics, hardened steel, glass and heat-resistant steel. However, it is difficult to observe the high-frequency, micron-scale phenomena of ultrasonic cutting. In this report, a high-speed camera based on photoelastic analysis enabled the visualization of the stress distribution during the drilling process. For conventional drilling, the stress distribution diagram showed intense stress under the chisel, because the chisel edge of the drill produces large plastic deformation. On the other hand, ultrasonic drilling produced a spread-out stress distribution, with the stress boundary far away from the chisel. Furthermore, chipping and cracking of the inner wall of silica glass were influenced considerably by the cutting fluid.

  10. Compiling software for a hierarchical distributed processing system

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-12-31

    Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendants; sending to the selected node only the compiled software to be executed by the selected node or the selected node's descendants.

  11. Evaluation of negative ion distribution changes by image processing diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Ikeda, K., E-mail: ikeda.katsunori@lhd.nifs.ac.jp; Nakano, H.; Tsumori, K.; Kisaki, M.; Nagaoka, K.; Tokuzawa, T.; Osakabe, M.; Takeiri, Y.; Kaneko, O. [National Institute for Fusion Science, 322-6 Oroshi Toki Gifu, 509-5292 (Japan); Geng, S. [The Graduate University for Advanced Studies, Toki Gifu, 509-5292 (Japan)

    2015-04-08

    Distributions of hydrogen Balmer-α (Hα) intensity and its reduction behavior close to the plasma grid (PG) surface have been observed by a spectrally selective imaging system in an arc-discharge-type negative hydrogen ion source at the National Institute for Fusion Science. The Hα reduction indicates a reduction of negative hydrogen ions, because the mutual neutralization process between H+ and H− ions is the dominant excitation process for Hα emission in H−-rich conditions such as ionic plasma. We observed a significant change in the Hα reduction distribution due to a change in the bias voltage, which is used to suppress the electron influx. The small Hα reduction at higher bias is likely because the production of negative ions is suppressed by the potential difference between the plasma and the PG surface.

  12. An Effective Framework for Distributed Geospatial Query Processing in Grids

    Directory of Open Access Journals (Sweden)

    CHEN, B.

    2010-08-01

    The emergence of the Internet has greatly revolutionized the way that geospatial information is collected, managed, processed and integrated. There are several important research issues to be addressed for distributed geospatial applications. First, the performance of geospatial applications needs to be considered in the Internet environment. In this regard, the Grid, as an effective distributed computing paradigm, is a good choice. The Grid uses a series of middleware to interconnect and merge various distributed resources into a super-computer with high-performance computation capability. Secondly, it is necessary to ensure the secure use of independent geospatial applications in the Internet environment. The Grid provides just this utility of secure access to distributed geospatial resources. Additionally, it makes good sense to overcome the heterogeneity between individual geospatial information systems on the Internet. The Open Geospatial Consortium (OGC) proposes a number of generalized geospatial standards, e.g. OGC Web Services (OWS), to achieve interoperable access to geospatial applications. The OWS solution is feasible and widely adopted by both the academic and the industry communities. Therefore, we propose an integrated framework that incorporates OWS standards into Grids. Within this framework, distributed geospatial queries can be performed in an interoperable, high-performance and secure Grid environment.

  13. A Process for Comparing Dynamics of Distributed Space Systems Simulations

    Science.gov (United States)

    Cures, Edwin Z.; Jackson, Albert A.; Morris, Jeffery C.

    2009-01-01

    The paper describes a process that was developed for comparing the primary orbital dynamics behavior between distributed space systems simulations. This process is used to characterize and understand the fundamental fidelities and compatibilities of the modeling of orbital dynamics between spacecraft simulations. This is required for high-latency distributed simulations such as NASA's Integrated Mission Simulation, and must be understood when reporting results from simulation executions. This paper presents 10 principal comparison tests along with their rationale and examples of the results. The Integrated Mission Simulation (IMSim), formerly known as the Distributed Space Exploration Simulation (DSES), is a NASA research and development project focusing on the technologies and processes that are related to the collaborative simulation of complex space systems involved in the exploration of our solar system. Currently, the NASA centers that are actively participating in the IMSim project are the Ames Research Center, the Jet Propulsion Laboratory (JPL), the Johnson Space Center (JSC), the Kennedy Space Center, the Langley Research Center and the Marshall Space Flight Center. In concept, each center participating in IMSim has its own set of simulation models and environment(s). These simulation tools are used to build the various simulation products that are used for scientific investigation, engineering analysis, system design, training, planning, operations and more. Working individually, these production simulations provide important data to various NASA projects.

  14. Syntactic processing is distributed across the language system.

    Science.gov (United States)

    Blank, Idan; Balewski, Zuzanna; Mahowald, Kyle; Fedorenko, Evelina

    2016-02-15

    Language comprehension recruits an extended set of regions in the human brain. Is syntactic processing localized to a particular region or regions within this system, or is it distributed across the entire ensemble of brain regions that support high-level linguistic processing? Evidence from aphasic patients is more consistent with the latter possibility: damage to many different language regions and to white-matter tracts connecting them has been shown to lead to similar syntactic comprehension deficits. However, brain imaging investigations of syntactic processing continue to focus on particular regions within the language system, often parts of Broca's area and regions in the posterior temporal cortex. We hypothesized that, whereas the entire language system is in fact sensitive to syntactic complexity, the effects in some regions may be difficult to detect because of the overall lower response to language stimuli. Using an individual-subjects approach to localizing the language system, shown in prior work to be more sensitive than traditional group analyses, we indeed find responses to syntactic complexity throughout this system, consistent with the findings from the neuropsychological patient literature. We speculate that such distributed nature of syntactic processing could perhaps imply that syntax is inseparable from other aspects of language comprehension (e.g., lexico-semantic processing), in line with current linguistic and psycholinguistic theories and evidence. Neuroimaging investigations of syntactic processing thus need to expand their scope to include the entire system of high-level language processing regions in order to fully understand how syntax is instantiated in the human brain. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. ARM Airborne Carbon Measurements (ARM-ACME) and ARM-ACME 2.5 Final Campaign Reports

    Energy Technology Data Exchange (ETDEWEB)

    Biraud, S. C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tom, M. S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sweeney, C. [NOAA Earth Systems Research Lab., Boulder, CO (United States)

    2016-01-01

    We report on a 5-year multi-institution and multi-agency airborne study of atmospheric composition and carbon cycling at the Atmospheric Radiation Measurement (ARM) Climate Research Facility’s Southern Great Plains (SGP) site, with scientific objectives that are central to the carbon-cycle and radiative-forcing goals of the U.S. Global Change Research Program and the North American Carbon Program (NACP). The goal of these measurements is to improve understanding of 1) the carbon exchange of the Atmospheric Radiation Measurement (ARM) SGP region; 2) how CO2 and associated water and energy fluxes influence radiative-forcing, convective processes, and CO2 concentrations over the ARM SGP region, and 3) how greenhouse gases are transported on continental scales.

  16. Hadoop distributed batch processing for Gaia: a success story

    Science.gov (United States)

    Riello, Marco

    2015-12-01

    The DPAC Cambridge Data Processing Centre (DPCI) is responsible for the photometric calibration of the Gaia data, including the low-resolution spectra. The large data volume produced by Gaia (~26 billion transits/year), the complexity of its data stream and the self-calibrating approach pose unique challenges for the scalability, reliability and robustness of both the software pipelines and the operations infrastructure. DPCI was the first in DPAC to realise the potential of Hadoop and Map/Reduce and to adopt them as the core technologies for its infrastructure. This has proven a winning choice, giving DPCI unmatched processing throughput and reliability within DPAC, to the point that other DPCs have started following in our footsteps. In this talk we will present the software infrastructure developed to build the distributed and scalable batch data processing system that is currently used in production at DPCI, and the excellent performance results of the system.
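
    The Map/Reduce model reduces every pipeline stage to a map phase and a reduce phase; the toy single-process skeleton below shows the shape of such a computation (illustrative only; Hadoop executes the same two phases across a cluster with data-local scheduling):

```python
from collections import defaultdict
from itertools import chain

def map_reduce(records, mapper, reducer):
    """Single-process Map/Reduce skeleton: map each record to (key, value)
    pairs, group values by key, then reduce each group."""
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapper(r) for r in records):
        groups[key].append(value)
    return {k: reducer(k, vs) for k, vs in groups.items()}

# Invented example: mean flux per source over a handful of "transits".
transits = [("src1", 10.2), ("src2", 8.1), ("src1", 9.8)]
means = map_reduce(transits,
                   mapper=lambda r: [(r[0], r[1])],
                   reducer=lambda k, vs: sum(vs) / len(vs))
print(means)   # {'src1': 10.0, 'src2': 8.1}
```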

  17. Heat and work distributions for mixed Gauss–Cauchy process

    International Nuclear Information System (INIS)

    Kuśmierz, Łukasz; Gudowska-Nowak, Ewa; Rubi, J Miguel

    2014-01-01

    We analyze the energetics of a non-Gaussian process described by a stochastic differential equation of the Langevin type. The process represents a paradigmatic model of a nonequilibrium system subject to thermal fluctuations and additional external noise, with both sources of perturbations considered as additive and statistically independent forcings. We define thermodynamic quantities for trajectories of the process and analyze the contributions to mechanical work and heat. As a working example we consider a particle subjected to a drag force and two statistically independent Lévy white noises with stability indices α = 2 and α = 1. The fluctuations of dissipated energy (heat) and the distribution of work performed by the force acting on the system are addressed by examining the contributions of Cauchy fluctuations (α = 1) to either the bath or the external force acting on the system.
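
    The working example is straightforward to simulate. A minimal Euler-Maruyama sketch with both noise sources, using illustrative parameter values (heat and work functionals would then be evaluated along the sampled trajectory):

```python
import numpy as np

rng = np.random.default_rng(2)

def gauss_cauchy_path(gamma=1.0, sigma=0.5, c=0.05, dt=1e-3, n_steps=100000):
    """Euler-Maruyama path of dv = -gamma*v dt + sigma dW + c dL, with W
    Brownian motion (alpha = 2) and L a Cauchy Levy process (alpha = 1).
    An alpha-stable increment over dt scales as dt**(1/alpha), hence
    sqrt(dt) for the Gaussian part and dt for the Cauchy part."""
    v = np.zeros(n_steps + 1)
    dW = np.sqrt(dt) * rng.standard_normal(n_steps)   # alpha = 2 noise
    dL = dt * rng.standard_cauchy(n_steps)            # alpha = 1 noise
    for i in range(n_steps):
        v[i + 1] = v[i] - gamma * v[i] * dt + sigma * dW[i] + c * dL[i]
    return v

v = gauss_cauchy_path()   # occasional Cauchy jumps on a diffusive background
```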

  18. Land processes distributed active archive center product lifecycle plan

    Science.gov (United States)

    Daucsavage, John C.; Bennett, Stacie D.

    2014-01-01

    The U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center and the National Aeronautics and Space Administration (NASA) Earth Science Data System Program worked together to establish, develop, and operate the Land Processes (LP) Distributed Active Archive Center (DAAC) to provide stewardship for NASA’s land processes science data. These data are critical science assets that serve the land processes science community with potential value beyond any immediate research use, and therefore need to be accounted for and properly managed throughout their lifecycle. A fundamental LP DAAC objective is to enable permanent preservation of these data and information products. The LP DAAC accomplishes this by bridging data producers and permanent archival resources while providing intermediate archive services for data and information products.

  19. Digital Libraries / The Fourth ACM Conference on Digital Libraries, August 11-14, 1999, Berkeley, CA.

    OpenAIRE

    Rowe, Neil C.

    1999-01-01

    The Fourth ACM Conference on Digital Libraries, August 11-14, 1999, Berkeley, CA. New York, NY: Association for Computing Machinery, 1999, 274+12 pages, ISBN 1-58113-145-3. Digital libraries are the digital counterparts of traditional libraries of books and periodicals. They hold digital representations in minimally structured formats for all kinds of archival human-readable information ("documents"). Primarily they contain text, but now increasingly they include multimedia data lik...

  20. An Educational Tool for Interactive Parallel and Distributed Processing

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2011-01-01

    In this paper we try to describe how the Modular Interactive Tiles System (MITS) can be a valuable tool for introducing students to interactive parallel and distributed processing programming. This is done by providing an educational hands-on tool that allows a change of representation...... of the abstract problems related to designing interactive parallel and distributed systems. Indeed, MITS seems to bring a series of goals into the education, such as parallel programming, distributedness, communication protocols, master dependency, software behavioral models, adaptive interactivity, feedback......, connectivity, topology, island modeling, user and multiuser interaction, which can hardly be found in other tools. Finally, we introduce the system of modular interactive tiles as a tool for easy, fast, and flexible hands-on exploration of these issues, and through examples show how to implement interactive...

  1. An educational tool for interactive parallel and distributed processing

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2012-01-01

    In this article we try to describe how the modular interactive tiles system (MITS) can be a valuable tool for introducing students to interactive parallel and distributed processing programming. This is done by providing a handson educational tool that allows a change in the representation...... of abstract problems related to designing interactive parallel and distributed systems. Indeed, the MITS seems to bring a series of goals into education, such as parallel programming, distributedness, communication protocols, master dependency, software behavioral models, adaptive interactivity, feedback......, connectivity, topology, island modeling, and user and multi-user interaction which can rarely be found in other tools. Finally, we introduce the system of modular interactive tiles as a tool for easy, fast, and flexible hands-on exploration of these issues, and through examples we show how to implement...

  2. Parallel Distributed Processing Theory in the Age of Deep Networks.

    Science.gov (United States)

    Bowers, Jeffrey S

    2017-12-01

    Parallel distributed processing (PDP) models in psychology are the precursors of deep networks used in computer science. However, only PDP models are associated with two core psychological claims, namely that all knowledge is coded in a distributed format and cognition is mediated by non-symbolic computations. These claims have long been debated in cognitive science, and recent work with deep networks speaks to this debate. Specifically, single-unit recordings show that deep networks learn units that respond selectively to meaningful categories, and researchers are finding that deep networks need to be supplemented with symbolic systems to perform some tasks. Given the close links between PDP and deep networks, it is surprising that research with deep networks is challenging PDP theory. Copyright © 2017. Published by Elsevier Ltd.

  3. Melt-processed all-polymer distributed Bragg reflector laser.

    Science.gov (United States)

    Singer, Kenneth D; Kazmierczak, Tomasz; Lott, Joseph; Song, Hyunmin; Wu, Yeheng; Andrews, James; Baer, Eric; Hiltner, Anne; Weder, Christoph

    2008-07-07

    We have assembled and studied melt-processed all-polymer lasers comprising distributed Bragg reflectors that were fabricated in large sheets using a co-extrusion process and define the cavities for dye-doped compression-molded polymer gain core sheets. Distributed Bragg reflector (DBR) resonators consisting of 128 alternating poly(styrene) (PS) and poly(methyl methacrylate) (PMMA) layers were produced by multilayer co-extrusion. Gain media were fabricated by compression-molding thermoplastic host polymers doped with organic laser dyes. Both processing methods can be used in high-throughput roll-to-roll manufacturing. Optically pumped DBR lasers assembled from these components display single- and multimode lasing in the reflection band of the resonators, with a slope efficiency of nearly 19% and lasing thresholds as low as 90 μJ/cm². The lasing wavelength can be controlled via the layer thickness of the DBR resonator films and variation of the laser dye. Studies of threshold and efficiency are in agreement with models for end-pumped lasers.

  4. Modeling nanoparticle uptake and intracellular distribution using stochastic process algebras

    Energy Technology Data Exchange (ETDEWEB)

    Dobay, M. P. D., E-mail: maria.pamela.david@physik.uni-muenchen.de; Alberola, A. Piera; Mendoza, E. R.; Raedler, J. O., E-mail: joachim.raedler@physik.uni-muenchen.de [Ludwig-Maximilians University, Faculty of Physics, Center for NanoScience (Germany)

    2012-03-15

    Computational modeling is increasingly important to help understand the interaction and movement of nanoparticles (NPs) within living cells, and to come to terms with the wealth of data that microscopy imaging yields. A quantitative description of the spatio-temporal distribution of NPs inside cells, however, is challenging due to the complexity of multiple compartments such as endosomes and nuclei, which themselves are dynamic and can undergo fusion and fission and exchange their content. Here, we show that stochastic pi calculus, a widely-used process algebra, is well suited for mapping surface and intracellular NP interactions and distributions. In stochastic pi calculus, each NP is represented as a process, which can adopt various states such as bound or aggregated, as well as be passed between processes representing location, as a function of predefined stochastic channels. We created a pi calculus model of gold NP uptake and intracellular movement and compared the evolution of surface-bound, cytosolic, endosomal, and nuclear NP densities with electron microscopy data. We demonstrate that the computational approach can be extended to include specific molecular binding and potential interaction with signaling cascades as characteristic for NP-cell interactions in a wide range of applications such as nanotoxicity, viral infection, and drug delivery.
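
    A drastically simplified flavor of such a model: if each NP were an independent process hopping between compartments at fixed rates, trajectories could be sampled with a Gillespie-style algorithm. Compartments and rates below are invented for illustration; the pi-calculus model additionally captures fusion, fission, and content exchange:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical first-order transition rates between compartments (per hour).
RATES = {('surface', 'cytosol'): 0.8,
         ('cytosol', 'endosome'): 0.5,
         ('endosome', 'nucleus'): 0.1}

def simulate_np(t_end=10.0):
    """Sample the compartment of a single NP at time t_end (Gillespie)."""
    state, t = 'surface', 0.0
    while True:
        moves = [(dst, k) for (src, dst), k in RATES.items() if src == state]
        if not moves:
            return state                       # absorbing compartment
        total = sum(k for _, k in moves)
        t += rng.exponential(1.0 / total)      # waiting time to next event
        if t > t_end:
            return state
        r = rng.uniform(0.0, total)
        for dst, k in moves:                   # pick move proportional to rate
            r -= k
            if r <= 0.0:
                state = dst
                break

counts = {}
for _ in range(5000):
    s = simulate_np()
    counts[s] = counts.get(s, 0) + 1
print(counts)   # compartment occupancy at t_end, cf. the EM densities
```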

  5. Modeling nanoparticle uptake and intracellular distribution using stochastic process algebras

    International Nuclear Information System (INIS)

    Dobay, M. P. D.; Alberola, A. Piera; Mendoza, E. R.; Rädler, J. O.

    2012-01-01

Computational modeling is increasingly important to help understand the interaction and movement of nanoparticles (NPs) within living cells, and to come to terms with the wealth of data that microscopy imaging yields. A quantitative description of the spatio-temporal distribution of NPs inside cells, however, is challenging due to the complexity of multiple compartments such as endosomes and nuclei, which themselves are dynamic and can undergo fusion and fission and exchange their content. Here, we show that stochastic pi calculus, a widely-used process algebra, is well suited for mapping surface and intracellular NP interactions and distributions. In stochastic pi calculus, each NP is represented as a process, which can adopt various states such as bound or aggregated, as well as be passed between processes representing location, as a function of predefined stochastic channels. We created a pi calculus model of gold NP uptake and intracellular movement and compared the evolution of surface-bound, cytosolic, endosomal, and nuclear NP densities with electron microscopy data. We demonstrate that the computational approach can be extended to include specific molecular binding and potential interaction with signaling cascades as characteristic for NP-cell interactions in a wide range of applications such as nanotoxicity, viral infection, and drug delivery.

  6. Modeling nanoparticle uptake and intracellular distribution using stochastic process algebras

    Science.gov (United States)

    Dobay, M. P. D.; Alberola, A. Piera; Mendoza, E. R.; Rädler, J. O.

    2012-03-01

Computational modeling is increasingly important to help understand the interaction and movement of nanoparticles (NPs) within living cells, and to come to terms with the wealth of data that microscopy imaging yields. A quantitative description of the spatio-temporal distribution of NPs inside cells, however, is challenging due to the complexity of multiple compartments such as endosomes and nuclei, which themselves are dynamic and can undergo fusion and fission and exchange their content. Here, we show that stochastic pi calculus, a widely-used process algebra, is well suited for mapping surface and intracellular NP interactions and distributions. In stochastic pi calculus, each NP is represented as a process, which can adopt various states such as bound or aggregated, as well as be passed between processes representing location, as a function of predefined stochastic channels. We created a pi calculus model of gold NP uptake and intracellular movement and compared the evolution of surface-bound, cytosolic, endosomal, and nuclear NP densities with electron microscopy data. We demonstrate that the computational approach can be extended to include specific molecular binding and potential interaction with signaling cascades as characteristic for NP-cell interactions in a wide range of applications such as nanotoxicity, viral infection, and drug delivery.

  7. IPNS distributed-processing data-acquisition system

    International Nuclear Information System (INIS)

    Haumann, J.R.; Daly, R.T.; Worlton, T.G.; Crawford, R.K.

    1981-01-01

    The Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory is a major new user-oriented facility which has come on line for basic research in neutron scattering and neutron radiation damage. This paper describes the distributed-processing data-acquisition system which handles data collection and instrument control for the time-of-flight neutron-scattering instruments. The topics covered include the overall system configuration, each of the computer subsystems, communication protocols linking each computer subsystem, and an overview of the software which has been developed

  8. Digital image processing for diameter distribution evaluation of nuclear tracks

    International Nuclear Information System (INIS)

    Lira, J.; Camacho, S.; Balcazar-Garcia, M.; Peralta-Fabi, R.

    1984-01-01

Fast, reliable, and accurate evaluation of the diameter distribution of nuclear tracks etched in solid-state nuclear detectors is necessary in order to infer general information from the particular ions detected. To achieve this, it is primarily required to develop an on-line method, that is, a method fast enough that the reading and information-extraction processes become simultaneous. To accomplish this, adaptive matched filtering has been generalized to two dimensions; this lessens the noise content and the unwanted features present in the detector, and avoids significantly distorting the results. (author)
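    To illustrate two-dimensional matched filtering of this kind, the sketch below correlates a detector image with zero-mean ring templates over a set of candidate radii and reads the track diameter off the best-responding template. The template shape, width, and suggested thresholding are illustrative assumptions, not the authors' filter design:

```python
import numpy as np
from scipy.signal import fftconvolve

def ring_template(radius, width=1.5):
    """Zero-mean ring-shaped template approximating an etched-track rim."""
    size = int(2 * radius + 7)
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r = np.hypot(x, y)
    t = np.exp(-((r - radius) ** 2) / (2 * width ** 2))
    return t - t.mean()              # flat background then scores zero

def diameter_map(image, radii):
    """Matched-filter response over candidate radii; at each pixel return
    the best-scoring radius and the corresponding response."""
    radii = np.asarray(radii)
    responses = np.stack([fftconvolve(image, ring_template(r), mode="same")
                          for r in radii])
    best = radii[np.argmax(responses, axis=0)]
    return best, responses.max(axis=0)

# Usage idea: threshold the response map to locate track centres, then
# histogram 2 * best_radius at those centres for the diameter distribution.
```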

  9. Diversity and distribution of Listeria monocytogenes in meat processing plants.

    Science.gov (United States)

    Martín, Belén; Perich, Adriana; Gómez, Diego; Yangüela, Javier; Rodríguez, Alicia; Garriga, Margarita; Aymerich, Teresa

    2014-12-01

    Listeria monocytogenes is a major concern for the meat processing industry because many listeriosis outbreaks have been linked to meat product consumption. The aim of this study was to elucidate L. monocytogenes diversity and distribution across different Spanish meat processing plants. L. monocytogenes isolates (N = 106) collected from food contact surfaces of meat processing plants and meat products were serotyped and then characterised by multilocus sequence typing (MLST). The isolates were serotyped as 1/2a (36.8%), 1/2c (34%), 1/2b (17.9%) and 4b (11.3%). MLST identified ST9 as the most predominant allelic profile (33% of isolates) followed by ST121 (16%), both of which were detected from several processing plants and meat products sampled in different years, suggesting that those STs are highly adapted to the meat processing environment. Food contact surfaces during processing were established as an important source of L. monocytogenes in meat products because the same STs were obtained in isolates recovered from surfaces and products. L. monocytogenes was recovered after cleaning and disinfection procedures in two processing plants, highlighting the importance of thorough cleaning and disinfection procedures. Epidemic clone (EC) marker ECI was identified in 8.5%, ECIII was identified in 2.8%, and ECV was identified in 7.5% of the 106 isolates. Furthermore, a selection of presumably unrelated ST9 isolates was analysed by multi-virulence-locus sequence typing (MVLST). Most ST9 isolates had the same virulence type (VT11), confirming the clonal origin of ST9 isolates; however, one ST9 isolate was assigned to a new VT (VT95). Consequently, MLST is a reliable tool for identification of contamination routes and niches in processing plants, and MVLST clearly differentiates EC strains, which both contribute to the improvement of L. monocytogenes control programs in the meat industry. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Model Diagnostics for the Department of Energy's Accelerated Climate Modeling for Energy (ACME) Project

    Science.gov (United States)

    Smith, B.

    2015-12-01

In 2014, eight Department of Energy (DOE) national laboratories, four academic institutions, one company, and the National Center for Atmospheric Research combined forces in a project called Accelerated Climate Modeling for Energy (ACME) with the goal to speed Earth system model development for climate and energy. Over the planned 10-year span, the project will conduct simulations and modeling on DOE's most powerful high-performance computing systems at the Oak Ridge, Argonne, and Lawrence Berkeley Leadership Computing Facilities. A key component of the ACME project is the development of an interactive test bed for the advanced Earth system model. Its execution infrastructure will accelerate model development and testing cycles. The ACME Workflow Group is leading the efforts to automate labor-intensive tasks, provide intelligent support for complex tasks and reduce duplication of effort through collaboration support. As part of this new workflow environment, we have created a diagnostic, metric, and intercomparison Python framework, called UVCMetrics, to aid in the testing-to-production execution of the ACME model. The framework exploits similarities among different diagnostics to compactly support diagnosis of new models. It presently focuses on atmosphere and land but is designed to support ocean and sea ice model components as well. This framework is built on top of the existing open-source software framework known as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT). Because of its flexible framework design, scientists and modelers now can generate thousands of possible diagnostic outputs. These diagnostics can compare model runs, compare model vs. observation, or simply verify a model is physically realistic. Additional diagnostics are easily integrated into the framework, and our users have already added several. Diagnostics can be generated, viewed, and manipulated from the UV-CDAT graphical user interface, Python command-line scripts, and programs.

  11. Distributed Processing of Projections of Large Datasets: A Preliminary Study

    Science.gov (United States)

    Maddox, Brian G.

    2004-01-01

Modern information needs have resulted in very large amounts of data being used in geographic information systems. Problems arise, however, when trying to project these data with reasonable speed and accuracy. Current single-threaded methods can suffer from one of two problems: fast projection with poor accuracy, or accurate projection with long processing time. A possible solution may be to combine accurate interpolation methods and distributed processing algorithms to quickly and accurately convert digital geospatial data between coordinate systems. Modern technology has made it possible to construct systems, such as Beowulf clusters, at low cost and provide access to supercomputer-class capability. Combining these techniques may result in the ability to use large amounts of geographic data in time-critical situations.
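    The divide-and-distribute idea can be imitated on a single machine with Python's multiprocessing standing in for a Beowulf cluster: split the point set into chunks, project each chunk in a separate process, and reassemble the result. The Web Mercator formulas are standard; the worker count and synthetic data are illustrative:

```python
import numpy as np
from multiprocessing import Pool

R = 6378137.0  # WGS84 semi-major axis, metres

def to_web_mercator(chunk):
    """Project an (N, 2) array of lon/lat degrees to Web Mercator metres."""
    lon, lat = np.radians(chunk[:, 0]), np.radians(chunk[:, 1])
    return np.column_stack([R * lon,
                            R * np.log(np.tan(np.pi / 4 + lat / 2))])

def project_parallel(points, workers=8):
    """Split the points and project the chunks across worker processes."""
    chunks = np.array_split(points, workers)
    with Pool(workers) as pool:
        return np.vstack(pool.map(to_web_mercator, chunks))

if __name__ == "__main__":
    pts = np.random.uniform([-180, -85], [180, 85], size=(1_000_000, 2))
    print(project_parallel(pts).shape)
```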

  12. The CANDU 9 distributed control system design process

    International Nuclear Information System (INIS)

    Harber, J.E.; Kattan, M.K.; Macbeth, M.J.

    1997-01-01

Canadian-designed CANDU pressurized heavy-water nuclear reactors have been world leaders in electrical power generation. The CANDU 9 project is AECL's next reactor design. Plant control for the CANDU 9 station design is performed by a distributed control system (DCS), as compared to the centralized control computers, analog control devices, and relay logic used in previous CANDU designs. The selection of a DCS as the platform to perform the process-control functions and most of the plant's data acquisition is consistent with the evolutionary nature of the CANDU technology. The control strategies for the DCS control programs are based on previous CANDU designs but are implemented on a new hardware platform, taking advantage of advances in computer technology. This paper describes the design process for developing the CANDU 9 DCS. Various design activities, prototyping and analyses have been undertaken in order to ensure a safe, functional, and cost-effective design. (author)

  13. Distributed and cooperative task processing: Cournot oligopolies on a graph.

    Science.gov (United States)

    Pavlic, Theodore P; Passino, Kevin M

    2014-06-01

    This paper introduces a novel framework for the design of distributed agents that must complete externally generated tasks but also can volunteer to process tasks encountered by other agents. To reduce the computational and communication burden of coordination between agents to perfectly balance load around the network, the agents adjust their volunteering propensity asynchronously within a fictitious trading economy. This economy provides incentives for nontrivial levels of volunteering for remote tasks, and thus load is shared. Moreover, the combined effects of diminishing marginal returns and network topology lead to competitive equilibria that have task reallocations that are qualitatively similar to what is expected in a load-balancing system with explicit coordination between nodes. In the paper, topological and algorithmic conditions are given that ensure the existence and uniqueness of a competitive equilibrium. Additionally, a decentralized distributed gradient-ascent algorithm is given that is guaranteed to converge to this equilibrium while not causing any node to over-volunteer beyond its maximum task-processing rate. The framework is applied to an autonomous-air-vehicle example, and connections are drawn to classic studies of the evolution of cooperation in nature.
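    A toy version of these dynamics, assuming a linear inverse-demand Cournot payoff on a ring network rather than the paper's utilities, shows projected gradient ascent converging to an equilibrium with no node exceeding its maximum rate:

```python
import numpy as np

# Ring of 6 agents; each agent's "market" is itself plus its neighbours.
# Payoff parameters (a, b, c) and the topology are illustrative only.
n = 6
A = np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0) + np.eye(n)
a, b, c = 10.0, 1.0, 2.0    # inverse-demand intercept/slope, unit cost
v_max = 3.0                 # maximum task-processing (volunteering) rate

v = np.zeros(n)             # volunteering rates, the decision variables
eta = 0.05                  # ascent step size
for _ in range(500):
    local_supply = A @ v                      # supply seen in each market
    grad = a - b * local_supply - b * v - c   # d(payoff_i)/d(v_i)
    v = np.clip(v + eta * grad, 0.0, v_max)   # projected gradient ascent

print(np.round(v, 3))       # converges to the symmetric equilibrium (2.0)
```

    The clipping step plays the role of the paper's guarantee that no node over-volunteers beyond its maximum task-processing rate.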

  14. Molecular characteristics of clinical methicillin-resistant Staphylococcus pseudintermedius harboring arginine catabolic mobile element (ACME) from dogs and cats.

    Science.gov (United States)

    Yang, Ching; Wan, Min-Tao; Lauderdale, Tsai-Ling; Yeh, Kuang-Sheng; Chen, Charles; Hsiao, Yun-Hsia; Chou, Chin-Cheng

    2017-06-01

This study aimed to investigate the presence of the arginine catabolic mobile element (ACME) and its associated molecular characteristics in methicillin-resistant Staphylococcus pseudintermedius (MRSP). Among the 72 S. pseudintermedius isolates recovered from various infection sites of dogs and cats, 52 (72.2%) were MRSP. ACME-arcA was detected commonly (69.2%) in these MRSP isolates, and was more frequently detected in those from the skin than from other body sites (P=0.047). There was wide genetic diversity among the ACME-arcA-positive MRSP isolates, which comprised three SCCmec types (II-III, III and V) and 15 dru types with two predominant clusters (9a and 11a). Most MRSP isolates were multidrug-resistant. Since S. pseudintermedius could serve as a reservoir of ACME, further research on this putative virulence factor is recommended. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Application of signal processing techniques for islanding detection of distributed generation in distribution network: A review

    International Nuclear Information System (INIS)

    Raza, Safdar; Mokhlis, Hazlie; Arof, Hamzah; Laghari, J.A.; Wang, Li

    2015-01-01

Highlights: • Pros and cons of conventional islanding detection techniques (IDTs) are discussed. • The ability of signal processing techniques (SPTs) to detect islanding is discussed. • The ability of SPTs to improve the performance of passive techniques is discussed. • Fourier, S-transform, wavelet, HHT and TT-transform based IDTs are reviewed. • Applications of intelligent classifiers (ANN, ANFIS, fuzzy, SVM) in SPTs are discussed. - Abstract: High penetration of distributed generation resources (DGR) in the distribution network provides many benefits in terms of high power quality, efficiency, and low carbon emissions in the power system. However, efficient islanding detection and immediate disconnection of DGR are critical in order to avoid equipment damage, grid-protection interference, and personnel safety hazards. Islanding detection techniques are mainly classified into remote, passive, active, and hybrid techniques. Of these, passive techniques are more advantageous due to lower power-quality degradation, lower cost, and widespread usage by power utilities. However, the main limitations of these techniques are that they possess large non-detection zones and require threshold settings. Various signal processing techniques and intelligent classifiers have been used to overcome the limitations of passive islanding detection. Signal processing techniques, in particular, are adopted due to their versatility, stability, cost effectiveness, and ease of modification. This paper presents a comprehensive overview of signal processing techniques used to improve common passive islanding detection techniques. A performance comparison between signal processing based islanding detection techniques and existing techniques is also provided. Finally, this paper outlines the relative advantages and limitations of the signal processing techniques in order to provide basic guidelines for researchers and field engineers in determining the best method for their system.
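    As a small taste of the signal-processing side, the sketch below flags a hypothetical islanding transient from the energy of fine-scale wavelet detail coefficients. The db4 wavelet, window length, and threshold are arbitrary illustrative choices rather than recommendations from the review:

```python
import numpy as np
import pywt  # PyWavelets

fs = 3200                               # sampling rate, samples/s
t = np.arange(0, 0.5, 1 / fs)
v = np.sin(2 * np.pi * 50 * t)          # point-of-common-coupling voltage
# hypothetical switching transient at the islanding instant (t = 0.25 s)
v += (t > 0.25) * 0.5 * np.exp(-(t - 0.25) * 200) * np.sin(2 * np.pi * 1000 * t)

coeffs = pywt.wavedec(v, "db4", level=4)
d1 = coeffs[-1]                         # finest-scale detail coefficients
win = 64
energy = np.array([np.sum(d1[i:i + win] ** 2)
                   for i in range(0, len(d1) - win + 1, win)])
threshold = 5 * np.median(energy)       # ad hoc detection threshold
print("transient suspected in windows:", np.where(energy > threshold)[0])
```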

  16. Timely Activation of Budding Yeast APCCdh1 Involves Degradation of Its Inhibitor, Acm1, by an Unconventional Proteolytic Mechanism

    Science.gov (United States)

    Melesse, Michael; Choi, Eunyoung; Hall, Hana; Walsh, Michael J.; Geer, M. Ariel; Hall, Mark C.

    2014-01-01

    Regulated proteolysis mediated by the ubiquitin proteasome system is a fundamental and essential feature of the eukaryotic cell division cycle. Most proteins with cell cycle-regulated stability are targeted for degradation by one of two related ubiquitin ligases, the Skp1-cullin-F box protein (SCF) complex or the anaphase-promoting complex (APC). Here we describe an unconventional cell cycle-regulated proteolytic mechanism that acts on the Acm1 protein, an inhibitor of the APC activator Cdh1 in budding yeast. Although Acm1 can be recognized as a substrate by the Cdc20-activated APC (APCCdc20) in anaphase, APCCdc20 is neither necessary nor sufficient for complete Acm1 degradation at the end of mitosis. An APC-independent, but 26S proteasome-dependent, mechanism is sufficient for complete Acm1 clearance from late mitotic and G1 cells. Surprisingly, this mechanism appears distinct from the canonical ubiquitin targeting pathway, exhibiting several features of ubiquitin-independent proteasomal degradation. For example, Acm1 degradation in G1 requires neither lysine residues in Acm1 nor assembly of polyubiquitin chains. Acm1 was stabilized though by conditional inactivation of the ubiquitin activating enzyme Uba1, implying some requirement for the ubiquitin pathway, either direct or indirect. We identified an amino terminal predicted disordered region in Acm1 that contributes to its proteolysis in G1. Although ubiquitin-independent proteasome substrates have been described, Acm1 appears unique in that its sensitivity to this mechanism is strictly cell cycle-regulated via cyclin-dependent kinase (Cdk) phosphorylation. As a result, Acm1 expression is limited to the cell cycle window in which Cdk is active. We provide evidence that failure to eliminate Acm1 impairs activation of APCCdh1 at mitotic exit, justifying its strict regulation by cell cycle-dependent transcription and proteolytic mechanisms. Importantly, our results reveal that strict cell-cycle expression profiles

  17. Control of automatic processes: A parallel distributed-processing model of the stroop effect. Technical report

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J.D.; Dunbar, K.; McClelland, J.L.

    1988-06-16

A growing body of evidence suggests that traditional views of automaticity are in need of revision. For example, automaticity has often been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous, and furthermore are subject to attentional control. In this paper we present a model of attention which addresses these issues. Using a parallel distributed processing framework we propose that the attributes of automaticity depend upon the strength of a process and that strength increases with training. Using the Stroop effect as an example, we show how automatic processes are continuous and emerge gradually with practice. Specifically, we present a computational model of the Stroop task which simulates the time course of processing as well as the effects of learning.

  18. Distributed Processing of Sentinel-2 Products using the BIGEARTH Platform

    Science.gov (United States)

    Bacu, Victor; Stefanut, Teodor; Nandra, Constantin; Mihon, Danut; Gorgan, Dorian

    2017-04-01

The constellation of observational satellites orbiting around Earth is constantly increasing, providing more data that need to be processed in order to extract meaningful information and knowledge from it. Sentinel-2 satellites, part of the Copernicus Earth Observation program, aim to be used in agriculture, forestry and many other land management applications. ESA's SNAP toolbox can be used to process data gathered by Sentinel-2 satellites but is limited to the resources provided by a stand-alone computer. In this paper we present a cloud based software platform that makes use of this toolbox together with other remote sensing software applications to process Sentinel-2 products. The BIGEARTH software platform [1] offers an integrated solution for processing Earth Observation data coming from different sources (such as satellites or on-site sensors). The flow of processing is defined as a chain of tasks based on the WorDeL description language [2]. Each task could rely on a different software technology (such as Grass GIS and ESA's SNAP) in order to process the input data. One important feature of the BIGEARTH platform comes from this possibility of interconnecting and integrating, throughout the same processing flow, various well-known software technologies. All this integration is transparent from the user's perspective. The proposed platform extends the SNAP capabilities by enabling specialists to easily scale the processing over distributed architectures, according to their specific needs and resources. The software platform [3] can be used in multiple configurations. In the basic one, the software platform runs as a standalone application inside a virtual machine. Obviously, in this case the computational resources are limited, but it gives an overview of the functionalities of the software platform, as well as the possibility to define the processing flow and later execute it on a more complex infrastructure. The most complex and robust

19. Ion mixing in Ag-films on Si-substrates induced by a high-fluence ⁴⁰Ar⁺ beam with a flux of 0.2 μA/cm²

    CERN Document Server

    Masoud, N M; Becker, K H

    2002-01-01

Characteristics of ion mixing in thin Ag-films deposited onto Si-substrates were studied using the Rutherford backscattering (RBS) technique. The mixing was induced by a 400 keV ⁴⁰Ar⁺ beam with a flux of 0.2 μA/cm² and fluences of up to 4×10¹⁷ ions/cm². The concentration of Ag and Si atoms and their distributions in depth within the mixed region were determined. The RBS data indicate a clear broadening of the interfacial edges of the Ag and Si distributions caused by atomic intermixing of the interface for doses above 7×10¹⁶ ions/cm². The size of the intermixed region increases with increasing Ar fluence. Experimental findings also indicated that radiation-enhanced diffusion had not been totally eliminated. The mixing efficiency and diffusivity of Si and Ag were determined. Theoretical models were used to describe the mixing process. A comparison of our data with theory revealed that Ag diffuses in Si according to a local 'thermal spike' model. The above results...

  20. On the Control of Automatic Processes: A Parallel Distributed Processing Model of the Stroop Effect

    Science.gov (United States)

    1988-06-16

Dyer, F.N. (1973). The Stroop phenomenon and its use in the study of perceptual, cognitive, and response processes. Memory and Cognition, 1, 106-120. Gatti ... 189-207. Logan, G.D. (1980). Attention and automaticity in Stroop and priming tasks: Theory and data. Cognitive Psychology, 12, 523-553. Logan, G.D. ... On the Control of Automatic Processes: A Parallel Distributed Processing Model of the Stroop Effect. Technical Report AIP-40.

  1. Fuel distribution process risk analysis in East Borneo

    Directory of Open Access Journals (Sweden)

    Laksmita Raizsa

    2018-01-01

Fuel distribution is an important aspect of fulfilling customers' needs. It is risky because tardiness in distribution can cause fuel scarcity. Many risks occur in the distribution process, and House of Risk is a method used for mitigating them. It identifies seven risk events and nine risk agents. An occurrence and severity matrix is used to eliminate risks with minor impact. House of Risk 1 is used to determine the Aggregate Risk Potential (ARP). A Pareto diagram is applied to prioritize the risks that must be mitigated by preventive actions based on ARP. It identifies four priority risks, namely A8 (car trouble), A4 (human error), A3 (erroneous deposit via bank and underpayment), and A6 (traffic accident), which should be mitigated. House of Risk 2 maps preventive actions against risk agents and yields the Effectiveness-to-Difficulty (ETD) ratio for each mitigating action. Conducting a routine safety talk once every three days, with an ETD of 2088, is the primary preventive action.

  2. Calculation of the spallation product distribution in the evaporation process

    International Nuclear Information System (INIS)

    Nishida, T.; Kanno, I.; Nakahara, Y.; Takada, H.

    1989-01-01

Some investigations are performed for the calculational model of nuclear spallation reaction in the evaporation process. A new version of a spallation reaction simulation code NUCLEUS has been developed by incorporating the newly revised Uno and Yamada's mass formula and extending the counting region of produced nuclei. The differences between the new and original mass formulas are shown in the comparisons of mass excess values. The distributions of spallation products of a uranium target nucleus bombarded by energetic (0.38 - 2.9 GeV) protons have been calculated with the new and original versions of NUCLEUS. In the fission component Uno and Yamada's mass formula reproduces the measured data obtained from thin foil experiments significantly better, especially in the neutron excess side, than the combination of the Cameron's mass formula and the mass table compiled by Wapstra, et al., in the original version of NUCLEUS. Discussions are also made on how the mass-yield distribution of products varies dependent on the level density parameter α characterizing the particle evaporation. (author)

  3. Calculation of the spallation product distribution in the evaporation process

    International Nuclear Information System (INIS)

    Nishida, T.; Kanno, I.; Nakahara, Y.; Takada, H.

    1989-01-01

Some investigations are performed for the calculational model of nuclear spallation reaction in the evaporation process. A new version of a spallation reaction simulation code NUCLEUS has been developed by incorporating the newly revised Uno & Yamada's mass formula and extending the counting region of produced nuclei. The differences between the new and original mass formulas are shown in the comparisons of mass excess values. The distributions of spallation products of a uranium target nucleus bombarded by energetic (0.38 - 2.9 GeV) protons have been calculated with the new and original versions of NUCLEUS. In the fission component Uno & Yamada's mass formula reproduces the measured data obtained from thin foil experiments significantly better, especially in the neutron excess side, than the combination of the Cameron's mass formula and the mass table compiled by Wapstra, et al., in the original version of NUCLEUS. Discussions are also made on how the mass-yield distribution of products varies dependent on the level density parameter α characterizing the particle evaporation. 16 refs., 7 figs., 1 tab

  4. Influence of particle size distribution on nanopowder cold compaction processes

    Science.gov (United States)

    Boltachev, G.; Volkov, N.; Lukyashin, K.; Markov, V.; Chingina, E.

    2017-06-01

Nanopowder uniform and uniaxial cold compaction processes are simulated by a 2D granular dynamics method. The interaction of particles, in addition to the well-known contact laws, involves dispersive attraction forces and the possibility of interparticle solid-bridge formation, which are of great importance for nanopowders. Different model systems are investigated: monosized systems with particle diameters of 10, 20 and 30 nm; bidisperse systems with different contents of small (10 nm diameter) and large (30 nm) particles; and polydisperse systems corresponding to the log-normal size distribution law with different widths. A non-monotonic dependence of compact density on powder content is revealed in the bidisperse systems. The deviations of the compact density of the polydisperse systems from the density of the corresponding monosized system are found to be minor, less than 1 per cent.

  5. Radon: Chemical and physical processes associated with its distribution

    International Nuclear Information System (INIS)

    Castleman, A.W. Jr.

    1992-01-01

    Assessing the mechanisms which govern the distribution, fate, and pathways of entry into biological systems, as well as the ultimate hazards associated with the radon progeny and their secondary reaction products, depends on knowledge of their chemistry. Our studies are directed toward developing fundamental information which will provide a basis for modeling studies that are requisite in obtaining a complete picture of growth, attachment to aerosols, and transport to the bioreceptor and ultimate incorporation within. Our program is divided into three major areas of research. These include measurement of the determination of their mobilities, study of the role of radon progeny ions in affecting reactions, including study of the influence of the degree of solvation (clustering), and examination of the important secondary reaction products, with particular attention to processes leading to chemical conversion of either the core ions or the ligands as a function of the degree of clustering

  6. Proceedings of the 16th ACM SIGPLAN international conference on Functional programming

    DEFF Research Database (Denmark)

    Danvy, Olivier

    Welcome to the 16th ACM SIGPLAN International Conference on Functional Programming -- ICFP'11. The picture, on the front cover, is of Mount Fuji, seen from the 20th floor of the National Institute of Informatics (NII). It was taken by Sebastian Fischer in January 2011. In Japanese, the characters......, which took place on May 26-27, Avenue d'Italie, in Paris, courtesy of the PPS Laboratory (Preuves, Programmes et Systèmes) at the University of Paris 7. Notifications were sent within hours of the completion of the PC meeting, and updated reviews in the course of the following week....

  7. Study of cache performance in distributed environment for data processing

    International Nuclear Information System (INIS)

    Makatun, Dzmitry; Lauret, Jérôme; Šumbera, Michal

    2014-01-01

Processing data in distributed environments has found its application in many fields of science (Nuclear and Particle Physics (NPP), astronomy, and biology, to name only those). Efficiently transferring data between sites is an essential part of such processing. The implementation of caching strategies in data transfer software and tools, such as the Reasoner for Intelligent File Transfer (RIFT) being developed in the STAR collaboration, can significantly decrease network load and waiting time by reusing the knowledge of data provenance as well as data placed in the transfer cache to further expand the availability of sources for files and data-sets. Though a great variety of caching algorithms is known, a study is needed to evaluate which one can deliver the best performance in data access considering realistic demand patterns. Records of access to the complete data-sets of NPP experiments were analyzed and used as input for computer simulations. Series of simulations were done in order to estimate the possible cache hits and cache hits per byte for known caching algorithms. The simulations were done for caches of different sizes within the interval 0.001-90% of the complete data-set and a low-watermark within 0-90%. Records of data access were taken from several experiments and within different time intervals in order to validate the results. In this paper, we will discuss the different data caching strategies from canonical algorithms to hybrid cache strategies, present the results of our simulations for the diverse algorithms, debate and identify the choice for the best algorithm in the context of Physics Data analysis in NPP. While the results of those studies have been implemented in RIFT, they can also be used when setting up caches in any other computational work-flow (Cloud processing for example) or managing data storages with partial replicas of the entire data-set
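    The shape of such a simulation is easy to sketch: replay an access trace against a fixed-size cache and count hits. Below, LRU stands in for the algorithms compared in the study, and a Zipf-like synthetic trace stands in for the real NPP access records:

```python
from collections import OrderedDict
import random

def lru_hit_rate(trace, capacity):
    """Replay an access trace against an LRU cache of `capacity` items
    (files treated as equal-sized for simplicity); return the hit rate."""
    cache, hits = OrderedDict(), 0
    for item in trace:
        if item in cache:
            hits += 1
            cache.move_to_end(item)             # mark most recently used
        else:
            cache[item] = True
            if len(cache) > capacity:
                cache.popitem(last=False)       # evict least recently used
    return hits / len(trace)

random.seed(1)
catalog = range(10_000)
trace = random.choices(catalog, weights=[1 / (i + 1) for i in catalog],
                       k=100_000)               # Zipf-like popularity
for frac in (0.001, 0.01, 0.1):
    cap = max(1, int(frac * len(catalog)))
    print(f"cache = {frac:.1%} of data-set: hit rate "
          f"{lru_hit_rate(trace, cap):.2f}")
```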

8. An unexpected location of the arginine catabolic mobile element (ACME) in a USA300-related MRSA strain.

    Directory of Open Access Journals (Sweden)

    Mette Damkjær Bartels

In methicillin resistant Staphylococcus aureus (MRSA), the arginine catabolic mobile element (ACME) was initially described in USA300 (t008-ST8), where it is located downstream of the staphylococcal cassette chromosome mec (SCCmec). A common health-care associated MRSA in Copenhagen, Denmark (t024-ST8) is clonally related to USA300 and is frequently PCR positive for the ACME specific arcA-gene. This study is the first to describe an ACME element upstream of the SCCmec in MRSA. By traditional SCCmec typing schemes, the SCCmec of t024-ST8 strain M1 carries SCCmec IVa, but full sequencing of the cassette revealed that the entire J3 region had no homology to published SCCmec IVa. Within the J3 region of M1 was a 1705 bp sequence only similar to a sequence in S. haemolyticus strain JCSC1435, and 2941 bp with no homology found in GenBank. In addition to the usual direct repeats (DR) at each extremity of SCCmec, M1 had two new DR between the orfX gene and the J3 region of the SCCmec. The region between the orfX DR (DR1) and DR2 contained the ccrAB4 genes. An ACME II-like element was located between DR2 and DR3. The entire 26,468 bp sequence between DR1 and DR3 was highly similar to parts of the ACME composite island of S. epidermidis strain ATCC12228. Sequencing of an ACME negative t024-ST8 strain (M299) showed that DR1 and the sequence between DR1 and DR3 were missing. The finding of a mobile ACME II-like element inserted downstream of orfX and upstream of SCCmec indicates a novel recombination between staphylococcal species.

  9. Process and Data Management in a Reconfigurable Distributed Network.

    Science.gov (United States)

    1984-10-15

INTELLIGENT CONTROL IN A LOCAL DISTRIBUTED ENVIRONMENT. 8.1. Introduction. Both database (DB) and artificial intelligence (AI) systems must represent and...distributed job and to have a distributed Make program to distribute the load of computation to different nodes to achieve high parallelism and handle...discussed in Chapter 7. Finally, we are also interested in the application of both Artificial Intelligence and Database technologies to a dynamic network

  10. Stationary distributions of stochastic processes described by a linear neutral delay differential equation

    International Nuclear Information System (INIS)

    Frank, T D

    2005-01-01

    Stationary distributions of processes are derived that involve a time delay and are defined by a linear stochastic neutral delay differential equation. The distributions are Gaussian distributions. The variances of the Gaussian distributions are either monotonically increasing or decreasing functions of the time delays. The variances become infinite when fixed points of corresponding deterministic processes become unstable. (letter to the editor)
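    For concreteness, a generic linear neutral stochastic delay differential equation of the kind studied can be written as follows; the coefficients a, b, p and the noise amplitude √Q are placeholders rather than the letter's exact notation:

```latex
\frac{d}{dt}\bigl[X(t) - p\,X(t-\tau)\bigr]
  = a\,X(t) + b\,X(t-\tau) + \sqrt{Q}\,\Gamma(t),
\qquad
\langle \Gamma(t)\,\Gamma(t') \rangle = \delta(t-t') .
```

    When the deterministic part is stable, the stationary density is Gaussian with a variance σ²(τ) that varies monotonically with the delay τ and diverges as the fixed point of the corresponding deterministic process loses stability.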

  11. Calibration process of highly parameterized semi-distributed hydrological model

    Science.gov (United States)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that is not researched enough. Calibration is the procedure of determining those parameters of a model that are not known well enough: the input and output variables and the mathematical model expressions are known, while some parameters are unknown and are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that leave the modeller no possibility to manage the process, and the results are not the best. We developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST. PEST is a parameter-estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by an expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure had been left to the selected optimization algorithm alone. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial, and forest areas; this step requires the geological, meteorological, hydraulic, and hydrological knowledge of the modeller. The second step is to set the initial parameter values to their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group
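    The expert-driven loop can be imitated in miniature: a toy one-parameter runoff model stands in for HBV-light, and weighted least squares with an observation group that emphasizes peaks stands in for the PEST setup. The model, data, and weights below are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def linear_reservoir(k, rain):
    """Toy single-parameter runoff model standing in for HBV-light:
    storage drains at a fraction k per time step."""
    s, q = 0.0, []
    for r in rain:
        s += r
        out = k * s
        s -= out
        q.append(out)
    return np.array(q)

rng = np.random.default_rng(3)
rain = rng.exponential(2.0, size=200)
q_obs = linear_reservoir(0.3, rain) + rng.normal(0, 0.05, 200)

# Observation weighting: peak flows get extra weight, mimicking the
# expert's observation groups that emphasize flood events.
w = 1.0 + 4.0 * (q_obs > np.quantile(q_obs, 0.9))
residuals = lambda p: w * (linear_reservoir(p[0], rain) - q_obs)
fit = least_squares(residuals, x0=[0.1], bounds=([0.01], [0.99]))
print("calibrated k:", round(fit.x[0], 3))
```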

  12. Classification of bacterial contamination using image processing and distributed computing.

    Science.gov (United States)

    Ahmed, W M; Bayraktar, B; Bhunia, A; Hirleman, E D; Robinson, J P; Rajwa, B

    2013-01-01

Disease outbreaks due to contaminated food are a major concern not only for the food-processing industry but also for the public at large. Techniques for automated detection and classification of microorganisms can be a great help in preventing outbreaks and maintaining the safety of the nation's food supply. Identification and classification of foodborne pathogens using colony scatter patterns is a promising new label-free technique that utilizes image-analysis and machine-learning tools. However, the feature-extraction tools employed for this approach are computationally complex, and choosing the right combination of scatter-related features requires extensive testing with different feature combinations. In the presented work we used computer clusters to speed up the feature-extraction process, which enables us to analyze the contribution of different scatter-based features to the overall classification accuracy. A set of 1000 scatter patterns representing ten different bacterial strains was used. Zernike and Chebyshev moments as well as Haralick texture features were computed from the available light-scatter patterns. The most promising features were first selected using Fisher's discriminant analysis, and subsequently a support-vector-machine (SVM) classifier with a linear kernel was used. With extensive testing we were able to identify a small subset of features that produced the desired results in terms of classification accuracy and execution speed. The use of distributed computing for scatter-pattern analysis, feature extraction, and selection provides a feasible mechanism for large-scale deployment of a light scatter-based approach to bacterial classification.
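    A compact sketch of the pipeline: Fisher-score feature ranking followed by a linear SVM. Synthetic features stand in for the Zernike, Chebyshev, and Haralick features computed from the scatter patterns, and the selection details are illustrative rather than the authors' exact procedure (in practice the selection should be nested inside the cross-validation folds):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fisher_scores(X, y):
    """Per-feature Fisher discriminant score:
    between-class variance over within-class variance."""
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

# Stand-ins: 1000 patterns, 300 morphological/texture features, 10 strains.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))
y = rng.integers(0, 10, size=1000)

keep = np.argsort(fisher_scores(X, y))[-40:]   # top-scoring features
clf = SVC(kernel="linear")
print("CV accuracy:", cross_val_score(clf, X[:, keep], y, cv=5).mean())
```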

  13. Distributed cooperating processes in a mobile robot control system

    Science.gov (United States)

    Skillman, Thomas L., Jr.

    1988-01-01

    A mobile inspection robot has been proposed for the NASA Space Station. It will be a free flying autonomous vehicle that will leave a berthing unit to accomplish a variety of inspection tasks around the Space Station, and then return to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice communication to change its attitude, move at a constant velocity, and move to a predefined location along a self generated path. This mobile robot control system requires integration of traditional command and control techniques with a number of AI technologies. Speech recognition, natural language understanding, task and path planning, sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing to the AI technologies must be developed, and a distributed computing approach will be needed to meet the real time computing requirements. To study the integration of the elements of this project, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system operation and structure is discussed.

  14. Improving simulated long-term responses of vegetation to temperature and precipitation extremes using the ACME land model

    Science.gov (United States)

    Ricciuto, D. M.; Warren, J.; Guha, A.

    2017-12-01

    While carbon and energy fluxes in current Earth system models generally have reasonable instantaneous responses to extreme temperature and precipitation events, they often do not adequately represent the long-term impacts of these events. For example, simulated net primary productivity (NPP) may decrease during an extreme heat wave or drought, but may recover rapidly to pre-event levels following the conclusion of the extreme event. However, field measurements indicate that long-lasting damage to leaves and other plant components often occur, potentially affecting the carbon and energy balance for months after the extreme event. The duration and frequency of such extreme conditions is likely to shift in the future, and therefore it is critical for Earth system models to better represent these processes for more accurate predictions of future vegetation productivity and land-atmosphere feedbacks. Here we modify the structure of the Accelerated Climate Model for Energy (ACME) land surface model to represent long-term impacts and test the improved model against observations from experiments that applied extreme conditions in growth chambers. Additionally, we test the model against eddy covariance measurements that followed extreme conditions at selected locations in North America, and against satellite-measured vegetation indices following regional extreme events.

  15. On-resin conversion of Cys(Acm)-containing peptides to their corresponding Cys(Scm) congeners.

    Science.gov (United States)

    Mullen, Daniel G; Weigel, Benjamin; Barany, George; Distefano, Mark D

    2010-05-01

    The Acm protecting group for the thiol functionality of cysteine is removed under conditions (Hg(2+)) that are orthogonal to the acidic milieu used for global deprotection in Fmoc-based solid-phase peptide synthesis. This use of a toxic heavy metal for deprotection has limited the usefulness of Acm in peptide synthesis. The Acm group may be converted to the Scm derivative that can then be used as a reactive intermediate for unsymmetrical disulfide formation. It may also be removed by mild reductive conditions to generate unprotected cysteine. Conversion of Cys(Acm)-containing peptides to their corresponding Cys(Scm) derivatives in solution is often problematic because the sulfenyl chloride reagent used for this conversion may react with the sensitive amino acids tyrosine and tryptophan. In this protocol, we report a method for on-resin Acm to Scm conversion that allows the preparation of Cys(Scm)-containing peptides under conditions that do not modify other amino acids. (c) 2010 European Peptide Society and John Wiley & Sons, Ltd.

  16. An Unexpected Location of the Arginine Catabolic Mobile Element (ACME) in a USA300-Related MRSA Strain

    DEFF Research Database (Denmark)

    Damkjær Bartels, Mette; Hansen, Lars H.; Boye, Kit

    2011-01-01

    In methicillin resistant Staphylococcus aureus (MRSA), the arginine catabolic mobile element (ACME) was initially described in USA300 (t008-ST8) where it is located downstream of the staphylococcal cassette chromosome mec (SCCmec). A common health-care associated MRSA in Copenhagen, Denmark (t024...... of SCCmec, M1 had two new DR between the orfX gene and the J3 region of the SCCmec. The region between the orfX DR (DR1) and DR2 contained the ccrAB4 genes. An ACME II-like element was located between DR2 and DR3. The entire 26,468 bp sequence between DR1 and DR3 was highly similar to parts of the ACME...... composite island of S. epidermidis strain ATCC12228. Sequencing of an ACME negative t024-ST8 strain (M299) showed that DR1 and the sequence between DR1 and DR3 was missing. The finding of a mobile ACME II-like element inserted downstream of orfX and upstream of SCCmec indicates a novel recombination between...

17. ADAPTIVE CONSERVATION (ACM) MODEL IN INCREASING FAMILY SUPPORT AND TREATMENT COMPLIANCE IN PATIENTS WITH PULMONARY TUBERCULOSIS IN THE SURABAYA CITY REGION

    Directory of Open Access Journals (Sweden)

    Siti Nur Kholifah

    2017-04-01

Introduction: Tuberculosis (TB) in Indonesia is still a health problem, and the prevalence rate is high. Discontinued medication and lack of family support are among the causes. Numerous strategies to overcome this have seemingly not succeeded. The roles and responsibilities of family nursing are crucial to improving the participation and motivation of individuals, families, and communities in prevention, including of pulmonary tuberculosis. Unfortunately, models for pulmonary tuberculosis are currently unavailable. The combination of adaptation and conservation, complementarily improving family support and medication compliance, is introduced in this study. Method: This research intended to analyze the Adaptive Conservation Model (ACM) in extending family support and treatment compliance. The modeling steps included model analysis, expert validation, a field trial, implementation, and recommendation of the output model. The research subjects involved 15 families who implemented family Assistance and Supervision in Medication (ASM) and another 15 families with ACM. Result: The study revealed that ACM is better than ASM with respect to family support and medication compliance. It supports the role of the environment as an influential factor on individual health beliefs, values, and decision making. Therefore, it is advised to apply ACM in enhancing family support and the compliance of pulmonary TB patients. Discussion: Social and family support in the ACM group was obtained by developing interaction through communication. Family interaction is necessary to improve family support for pulmonary tuberculosis patients, and social support plays the role of motivator in maintaining medication compliance.

  18. Distributed Prognostic Health Management with Gaussian Process Regression

    Data.gov (United States)

    National Aeronautics and Space Administration — Distributed prognostics architecture design is an enabling step for efficient implementation of health management systems. A major challenge encountered in such...

  19. Control of automatic processes: A parallel distributed-processing account of the Stroop effect. Technical report

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J.D.; Dunbar, K.; McClelland, J.L.

    1989-11-22

A growing body of evidence suggests that traditional views of automaticity are in need of revision. For example, automaticity has often been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous, and furthermore are subject to attentional control. In this paper we present a model of attention which addresses these issues. Using a parallel distributed processing framework we propose that the attributes of automaticity depend upon the strength of a processing pathway and that strength increases with training. Using the Stroop effect as an example, we show how automatic processes are continuous and emerge gradually with practice. Specifically, we present a computational model of the Stroop task which simulates the time course of processing as well as the effects of learning. This was accomplished by combining the cascade mechanism described by McClelland (1979) with the back-propagation learning algorithm (Rumelhart, Hinton, & Williams, 1986). The model is able to simulate performance in the standard Stroop task, as well as aspects of performance in variants of this task which manipulate SOA, response set, and degree of practice. In the discussion we contrast our model with other models, and indicate how it relates to many of the central issues in the literature on attention, automaticity, and interference.
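    The core mechanism, two pathways of different strengths feeding shared response units with cascaded (gradual) activation and steps-to-threshold as a stand-in for reaction time, can be sketched as below. The weights, rates, and threshold are invented for illustration and are not the parameters fitted in the report:

```python
import numpy as np

# Word pathway is stronger (more practiced) than the colour pathway.
W_word  = np.array([[5.0, 0.0], [0.0, 5.0]])   # word RED/GREEN -> response
W_color = np.array([[3.0, 0.0], [0.0, 3.0]])   # ink red/green  -> response

def naming_time(color, word, attend_color=1.0, attend_word=0.3,
                rate=0.1, threshold=0.8, max_steps=500):
    """Cascade-style settling (cf. McClelland, 1979): activations approach
    their asymptote gradually; steps to threshold model reaction time."""
    net = attend_color * W_color @ color + attend_word * W_word @ word
    act = np.zeros(2)
    for step in range(1, max_steps + 1):
        act += rate * (net - act)              # gradual activation build-up
        p = np.exp(act) / np.exp(act).sum()    # response competition
        if p.max() > threshold:
            return step
    return max_steps

red, green = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print("congruent  :", naming_time(red, red))        # fastest
print("control    :", naming_time(red, np.zeros(2)))
print("incongruent:", naming_time(red, green))      # slowest
```

    With these placeholder numbers the congruent condition reaches threshold fastest and the incongruent condition slowest, reproducing the qualitative facilitation and interference pattern of the Stroop task.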

  20. A model for the distribution channels planning process

    NARCIS (Netherlands)

    Neves, M.F.; Zuurbier, P.; Campomar, M.C.

    2001-01-01

    Research of existing literature reveals some models (sequence of steps) for companies that want to plan distribution channels. None of these models uses strong contributions from transaction cost economics, bringing a possibility to elaborate on a "distribution channels planning model", with these

  1. Distributed Prognostic Health Management with Gaussian Process Regression

    Science.gov (United States)

    Saha, Sankalita; Saha, Bhaskar; Saxena, Abhinav; Goebel, Kai Frank

    2010-01-01

Distributed prognostics architecture design is an enabling step for efficient implementation of health management systems. A major challenge encountered in such design is the formulation of optimal distributed prognostics algorithms. In this paper, we present a distributed GPR-based prognostics algorithm whose target platform is a wireless sensor network. In addition to the challenges encountered in a distributed implementation, a wireless network poses constraints on communication patterns, thereby making the problem more challenging. The prognostics application that was used to demonstrate our new algorithms is battery prognostics. In order to present the trade-offs among different prognostic approaches, we present a comparison with a distributed implementation of particle-filter-based prognostics for the same battery data.
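    A minimal, non-distributed GPR prognostics sketch in the same spirit, using scikit-learn on a synthetic capacity-fade curve; the kernel, end-of-life threshold, and data are assumptions, and far extrapolation reverts to the GP prior mean, so the estimate is only indicative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic capacity-fade history standing in for real battery data.
rng = np.random.default_rng(7)
cycles = np.arange(0, 120, 4, dtype=float)
capacity = (1.0 - 0.002 * cycles - 0.03 * np.sin(cycles / 15)
            + rng.normal(0, 0.005, cycles.size))

kernel = RBF(length_scale=30.0) + WhiteKernel(noise_level=1e-4)
gpr = GaussianProcessRegressor(kernel=kernel).fit(cycles[:, None], capacity)

future = np.arange(0, 300, dtype=float)[:, None]
mean, std = gpr.predict(future, return_std=True)

def first_crossing(curve, eol=0.8):            # 80% of rated capacity
    idx = np.nonzero(curve < eol)[0]
    return future[idx[0], 0] if idx.size else None

print("EOL cycle (mean)       :", first_crossing(mean))
print("EOL cycle (mean - 2std):", first_crossing(mean - 2 * std))
```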

  2. Standard services for the capture, processing, and distribution of packetized telemetry data

    Science.gov (United States)

    Stallings, William H.

    1989-01-01

    Standard functional services for the capture, processing, and distribution of packetized data are discussed with particular reference to the future implementation of packet processing systems, such as those for the Space Station Freedom. The major functions are listed under the following major categories: input processing, packet processing, and output processing. A functional block diagram of a packet data processing facility is presented, showing the distribution of the various processing functions as well as the primary data flow through the facility.

  3. Proceedings of the 22nd ACM conference on Hypertext and hypermedia

    DEFF Research Database (Denmark)

    at the University of Southampton and former president of ACM, was one of the first computer scientists to study multimedia and hypermedia and has been at the forefront of the hypertext research community throughout her career. She is also one of the initiators of Web Science as a new research field. Prof. Dr....... Noshir Contractor investigates factors that lead to the formation, maintenance and dissolution of dynamically linked social and knowledge networks. The Program Committee and the reviewers worked hard to select an excellent program from 104 paper submissions. We accepted 31 full papers and 4 short papers....... Before, during and after the conference the hypertext community has been kept up to date on all the initiatives and activities thanks to our webmaster Natalia Stash. The practical local arrangements are a team effort of locals from Eindhoven, coordinated by Riet van Buul. The program with (free...

  4. Autonomic Cluster Management System (ACMS): A Demonstration of Autonomic Principles at Work

    Science.gov (United States)

    Baldassari, James D.; Kopec, Christopher L.; Leshay, Eric S.; Truszkowski, Walt; Finkel, David

    2005-01-01

Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of achieving significant computational capabilities for high-performance computing applications, while simultaneously affording the ability to increase that capability simply by adding more (inexpensive) processors. However, the task of manually managing and configuring a cluster quickly becomes impossible as the cluster grows in size. Autonomic computing is a relatively new approach to managing complex systems that can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management.

  5. Additive Construction with Mobile Emplacement (ACME) / Automated Construction of Expeditionary Structures (ACES) Materials Delivery System (MDS)

    Science.gov (United States)

    Mueller, R. P.; Townsend, I. I.; Tamasy, G. J.; Evers, C. J.; Sibille, L. J.; Edmunson, J. E.; Fiske, M. R.; Fikes, J. C.; Case, M.

    2018-01-01

    The purpose of the Automated Construction of Expeditionary Structures, Phase 3 (ACES 3) project is to incorporate the Liquid Goods Delivery System (LGDS) into the Dry Goods Delivery System (DGDS) structure to create an integrated and automated Materials Delivery System (MDS) for 3D printing structures with ordinary Portland cement (OPC) concrete. ACES 3 is a prototype for 3-D printing barracks for soldiers in forward bases, here on Earth. The LGDS supports ACES 3 by storing liquid materials, mixing recipe batches of liquid materials, and working with the Dry Goods Feed System (DGFS) previously developed for ACES 2, combining the materials that are eventually extruded out of the print nozzle. Automated Construction of Expeditionary Structures, Phase 3 (ACES 3) is a project led by the US Army Corps of Engineers (USACE) and supported by NASA. The equivalent 3D printing system for construction in space is designated Additive Construction with Mobile Emplacement (ACME) by NASA.

  6. Prediction of residence time distributions in food processing machinery

    DEFF Research Database (Denmark)

    Karlson, Torben; Friis, Alan; Szabo, Peter

    1996-01-01

    The velocity field in a co-rotating disc scraped surface heat exchanger (CDHE) is calculated using a finite element method. The residence time distribution for the CDHE is then obtained by tracing particles introduced in the inlet....

  7. Distributed collaborative team effectiveness: measurement and process improvement

    Science.gov (United States)

    Wheeler, R.; Hihn, J.; Wilkinson, B.

    2002-01-01

This paper describes a measurement methodology developed for assessing the readiness, and identifying opportunities for improving the effectiveness, of distributed collaborative design teams preparing to conduct a concurrent design session.

  8. Representation and processing of structures with binary sparse distributed codes

    OpenAIRE

    Rachkovskij, Dmitri A.

    1999-01-01

The schemes for compositional distributed representations include those allowing on-the-fly construction of fixed-dimensionality codevectors to encode structures of various complexity. Similarity of such codevectors takes into account both structural and semantic similarity of the represented structures. In this paper we provide a comparative description of the sparse binary distributed representation developed in the framework of the Associative-Projective Neural Network architecture and more well-known...

  9. Prediction of residence time distributions in food processing machinery

    DEFF Research Database (Denmark)

    Karlson, Torben; Friis, Alan; Szabo, Peter

    1996-01-01

The velocity field in a co-rotating disc scraped surface heat exchanger (CDHE) is calculated using a finite element method. The residence time distribution for the CDHE is then obtained by tracing particles introduced in the inlet.

  10. The Cognitive Processes Used in Team Collaboration During Asynchronous, Distributed Decision Making

    Science.gov (United States)

    2004-06-01


  11. Ionization processes in a transient hollow cathode discharge before electric breakdown: statistical distribution

    International Nuclear Information System (INIS)

    Zambra, M.; Favre, M.; Moreno, J.; Wyndham, E.; Chuaqui, H.; Choi, P.

    1998-01-01

    The charge formation processes in the hollow cathode region (HCR) of a transient hollow cathode discharge have been studied in the final phase. The statistical distributions that describe the different ionization processes have been represented by Gaussian distributions. Nevertheless, a better representation of these distributions was observed when the pressure is near a minimum value, just before breakdown.

  12. 40 CFR 761.80 - Manufacturing, processing and distribution in commerce exemptions.

    Science.gov (United States)

    2010-07-01

    ... distribution in commerce exemptions. 761.80 Section 761.80 Protection of Environment ENVIRONMENTAL PROTECTION..., PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Exemptions § 761.80 Manufacturing, processing and distribution in commerce exemptions. (a) The Administrator grants the following petitioner(s) an exemption for...

  13. Novel active contour model based on multi-variate local Gaussian distribution for local segmentation of MR brain images

    Science.gov (United States)

    Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong

    2017-12-01

    The active contour model (ACM) has been one of the most widely utilized methods in magnetic resonance (MR) brain image segmentation because of its ability to capture topology changes. However, most existing ACMs only consider single-slice information in MR brain image data, i.e., the information used in ACM-based segmentation methods is extracted from only one slice of the MR brain image. This cannot take full advantage of the information in adjacent slice images and cannot satisfy the requirements of local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve this problem; it is based on a multivariate local Gaussian distribution and combines information from adjacent slice images in the MR brain image data. The segmentation is finally achieved by maximizing the likelihood estimation. Experiments demonstrate the advantages of the proposed ACM over the single-slice ACM in local segmentation of MR brain image series.

  14. The redesign of a warranty distribution network with recovery processes

    NARCIS (Netherlands)

    Ashayeri, J.; Ma, N.; Sotirov, R.

    A warranty distribution network provides aftersales warranty services to customers and resembles a closed-loop supply chain network with specific challenges for reverse flows management like recovery, repair, and reflow of refurbished products. We present here a nonlinear and nonconvex mixed integer

  15. Decentralized Control of Scheduling in Distributed Processing Systems.

    Science.gov (United States)

    1983-06-20

    When should distributed file systems be embedded in the operating system, and when should a file server model be used? How should the operating system itself be...

  16. Joint Probability Distributions for a Class of Non-Markovian Processes

    OpenAIRE

    Baule, A.; Friedrich, R.

    2004-01-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H.C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single time probability distributions to the case of N-time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fr...

  17. Distributed automatic control of technological processes in conditions of weightlessness

    Science.gov (United States)

    Kukhtenko, A. I.; Merkulov, V. I.; Samoylenko, Y. I.; Ladikov-Royev, Y. P.

    1986-01-01

    Some problems associated with the automatic control of liquid metal and plasma systems under conditions of weightlessness are examined, with particular reference to the problem of stability of liquid equilibrium configurations. The theoretical fundamentals of automatic control of processes in electrically conducting continuous media are outlined, and means of using electromagnetic fields for simulating technological processes in a space environment are discussed.

  18. Processes determining the marine alkalinity and carbonate saturation distributions

    OpenAIRE

    B. R. Carter; J. R. Toggweiler; R. M. Key; J. L. Sarmiento

    2014-01-01

    We introduce a composite tracer, Alk*, that has a global distribution primarily determined by CaCO3 precipitation and dissolution. Alk* also highlights riverine alkalinity plumes that are due to dissolved calcium carbonate from land. We estimate the Arctic receives approximately twice the riverine alkalinity per unit area as the Atlantic, and 8 times that of the other oceans. Riverine inputs broadly elevate Alk* in the Arctic surface and particularly near ri...

  19. Parallel Hyperspectral Image Processing on Distributed Multi-Cluster Systems

    NARCIS (Netherlands)

    Liu, F.; Seinstra, F.J.; Plaza, A.J.

    2011-01-01

    Computationally efficient processing of hyperspectral image cubes can be greatly beneficial in many application domains, including environmental modeling, risk/hazard prevention and response, and defense/security. As individual cluster computers often cannot satisfy the computational demands of

  20. Integration of distributed computing into the drug discovery process.

    Science.gov (United States)

    von Korff, Modest; Rufener, Christian; Stritt, Manuel; Freyss, Joel; Bär, Roman; Sander, Thomas

    2011-02-01

    Grid computing offers an opportunity to gain massive computing power at low costs. We give a short introduction into the drug discovery process and exemplify the use of grid computing for image processing, docking and 3D pharmacophore descriptor calculations. The principle of a grid and its architecture are briefly explained. More emphasis is laid on the issues related to a company-wide grid installation and embedding the grid into the research process. The future of grid computing in drug discovery is discussed in the expert opinion section. Most needed, besides reliable algorithms to predict compound properties, is embedding the grid seamlessly into the discovery process. User friendly access to powerful algorithms without any restrictions, that is, by a limited number of licenses, has to be the goal of grid computing in drug discovery.

  1. Development of a tropical ecological forecasting strategy for ENSO based on the ACME modeling framework

    Science.gov (United States)

    Hoffman, F. M.; Xu, M.; Collier, N.; Xu, C.; Christoffersen, B. O.; Luo, Y.; Ricciuto, D. M.; Levine, P. A.; Randerson, J. T.

    2016-12-01

    The El Niño Southern Oscillation (ENSO) is an irregular periodic climate fluctuation, occurring every eight to 12 years, that is driven by variations in sea surface temperatures (SSTs) over the tropical eastern Pacific Ocean and extending westward across the equatorial Pacific. El Niño, the warming phase of ENSO, has strong effects on the global carbon cycle. Strong drying conditions in the Asia-Pacific region and western South America during El Niño lead to reduced ecosystem productivity and increased mortality and fire risk. The intensity of the 2015-2016 ENSO event rivaled or exceeded that of the 1997-1998 event, which was the strongest well-observed El Niño on record. We performed a set of simulations using the U.S. Department of Energy's Accelerated Climate Modeling for Energy (ACMEv0.3) model, forced with prescribed sea surface temperatures, to study the responses and feedbacks of drought effects on terrestrial ecosystems induced by both of these events. The ACME model was configured to run with active atmosphere and land models alongside the "data" ocean and thermodynamic sea ice models. The Community Atmosphere Model used the Spectral Element dynamical core (CAM-SE) operating on the ne30 ( 1°) grid, and the ACME Land Model (ALM) was equivalent to the Community Land Model with prognostic biogeochemistry (CLM4.5-BGC). Using Optimal Interpolation SSTs (OISSTv2) and predicted SST anomalies from NCEP's Climate Forecast System (CFSv2) as forcing, we conducted a transient simulation from 1995 to 2020, following a spin up simulation, and analyzed the ENSO impacts on tropical terrestrial ecosystems for the 5-year periods centered on these two strong ENSO events. During the transient simulation, we saved the resulting atmospheric forcing, which included prognostic biosphere-atmosphere interactions, every three hours for use in future offline simulation for model development and testing. We will present simulation results, focusing on hydroclimatic anomalies as

  2. Prescription-induced jump distributions in multiplicative Poisson processes

    Science.gov (United States)

    Suweis, Samir; Porporato, Amilcare; Rinaldo, Andrea; Maritan, Amos

    2011-06-01

    Generalized Langevin equations (GLE) with multiplicative white Poisson noise pose the usual prescription dilemma leading to different evolution equations (master equations) for the probability distribution. Contrary to the case of multiplicative Gaussian white noise, the Stratonovich prescription does not correspond to the well-known midpoint (or any other intermediate) prescription. By introducing an inertial term in the GLE, we show that the Itô and Stratonovich prescriptions naturally arise depending on two time scales, one induced by the inertial term and the other determined by the jump event. We also show that, when the multiplicative noise is linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We apply these results to a recently proposed stochastic model describing the dynamics of primary soil salinization, in which the salt mass balance within the soil root zone requires the analysis of different prescriptions arising from the resulting stochastic differential equation forced by multiplicative white Poisson noise, the features of which are tailored to the characters of the daily precipitation. A method is finally suggested to infer the most appropriate prescription from the data.
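
    To fix notation, the class of equations at issue can be sketched as follows (generic symbols, not taken from the paper): a GLE driven by multiplicative white Poisson noise reads

        \dot{x}(t) = f(x) + g(x)\,\xi(t), \qquad \xi(t) = \sum_i h_i\,\delta(t - t_i),

    where the t_i are Poisson arrival times with rate \lambda and the jump amplitudes h_i follow a prescribed distribution p(h). The prescription question is whether g(x) is evaluated at the state just before a jump or self-consistently across the finite jump; the two choices lead to different master equations for the probability density of x.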

  3. High Frequency Scattering Code in a Distributed Processing Environment

    Science.gov (United States)

    1991-06-01

    for output. The first option is simple, but involves post-processing the individual data files to form a composite output file. This would add an...for data structures that are conducive to consolidation. The use of a post-process for the actual output of results is viable, though use of the...

  4. Novel scaling of the multiplicity distributions in the sequential fragmentation process and in the percolation

    International Nuclear Information System (INIS)

    Botet, R.

    1996-01-01

    A novel scaling of the multiplicity distributions is found in the shattering phase of the sequential fragmentation process with inhibition. The same scaling law is shown to hold in the percolation process. (author)

  5. Learning from the History of Distributed Query Processing

    DEFF Research Database (Denmark)

    Betz, Heiko; Gropengießer, Francis; Hose, Katja

    2012-01-01

    The vision of the Semantic Web has triggered the development of various new applications and opened up new directions in research. Recently, much effort has been put into the development of techniques for query processing over Linked Data. Being based upon techniques originally developed for dist...

  6. The certification process of the LHCb distributed computing software

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    DIRAC contains around 200 thousand lines of Python code, and LHCbDIRAC around 120 thousand. The testing process for each release consists of a number of steps that include static code analysis, unit tests, integration tests, regression tests, and system tests. We dubbed the full p...

  7. Joint probability distributions for a class of non-Markovian processes.

    Science.gov (United States)

    Baule, A; Friedrich, R

    2005-02-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H. C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single-time probability distributions to the case of N -time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fractional time derivatives reflecting the non-Markovian character of the process.
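
    In schematic form (notation generic, not copied from the paper), the single-time version of this subordination result is

        f(x, t) = \int_0^\infty ds\, h(s, t)\, f_M(x, s),

    where f_M is the density of the underlying Markovian process in its internal time s, and the kernel h(s, t), the density of the random time change, obeys an evolution equation with a fractional time derivative. The N-time statement replaces both densities with joint densities and the kernel with a joint kernel.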

  8. A trial of distributed portable data acquisition and processing system implementation: the qdpb - data processing with branchpoints

    International Nuclear Information System (INIS)

    Gritsaj, K.I.; Isupov, A.Yu.

    2001-01-01

    A trial of the distributed portable data acquisition and processing system qdpb is presented. The experimental setup's data- and hardware-dependent code is separated from the generic part of the qdpb system. The implementation of the generic part is described

  9. Log-normal distribution from a process that is not multiplicative but is additive.

    Science.gov (United States)

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
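
    The claim lends itself to a quick numerical check. The following minimal sketch (illustrative parameters, not the paper's analysis) sums positive, strongly skewed random variables and compares log-normal and Gaussian fits to the resulting sums:

        # Sums of positive random variables tracking a log-normal shape long
        # before the Gaussian limit is reached; parameter choices are arbitrary.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_summands, n_samples = 100, 20000
        x = rng.lognormal(mean=0.0, sigma=1.5, size=(n_samples, n_summands))
        s = x.sum(axis=1)  # one sum per sample

        ks_ln = stats.kstest(s, "lognorm", args=stats.lognorm.fit(s, floc=0)).statistic
        ks_n = stats.kstest(s, "norm", args=stats.norm.fit(s)).statistic
        print(f"KS distance: log-normal {ks_ln:.3f} vs Gaussian {ks_n:.3f}")

    With summands this skewed, the log-normal KS distance typically comes out far below the Gaussian one even at 100 summands, which is the qualitative point of the abstract.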

  10. Process of Market Strategy Optimization Using Distributed Computing Systems

    Directory of Open Access Journals (Sweden)

    Nowicki Wojciech

    2015-12-01

    Full Text Available If market repeatability is assumed, it is possible, with some real probability, to deduce short-term market changes by making some calculations. An algorithm based on a logical and statistically reasonable scheme for making decisions about opening or closing a position on a market is called an automated strategy. Due to market volatility, all parameters change from time to time, so there is a need to optimize them constantly. This article describes the team organization process used when researching market strategies. Individual team members are merged into small groups according to their responsibilities. The team members perform data processing tasks through a cascade organization, providing solutions that speed up work related to the use of remote computing resources. They also work out how to store results in a suitable way, according to the type of task, and facilitate the publication of a large amount of results.

  11. The Distribution of the Interval between Events of a Cox Process with Shot Noise Intensity

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2008-01-01

    Full Text Available Applying piecewise deterministic Markov process theory, the probability generating function of a Cox process incorporating a shot noise process as the claim intensity is obtained. We also derive the Laplace transform of the distribution of the shot noise process at claim jump times, using the stationarity of the shot noise process. Based on this Laplace transform and on the probability generating function of a Cox process with shot noise intensity, we obtain the distribution of the interval of a Cox process with shot noise intensity for insurance claims and its moments, that is, mean and variance.
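
    The interval distribution itself is derived analytically in the paper; as a complementary illustration, a shot-noise Cox process is easy to simulate by thinning, and the empirical inter-arrival intervals can then be compared against any candidate distribution. A minimal sketch (all parameter names and values are assumptions):

        # Simulate a Cox process whose intensity is a decaying shot noise
        # process, via Ogata-style thinning with a global intensity bound.
        import numpy as np

        rng = np.random.default_rng(1)
        T, rho, delta, base = 500.0, 0.2, 1.0, 0.5  # horizon, shot rate, decay, base level
        shot_times = np.sort(rng.uniform(0, T, rng.poisson(rho * T)))
        shot_sizes = rng.exponential(1.0, size=shot_times.size)

        def intensity(t):
            active = shot_times <= t
            return base + np.sum(shot_sizes[active] * np.exp(-delta * (t - shot_times[active])))

        lam_max = base + shot_sizes.sum()  # conservative upper bound on the intensity
        events, t = [], 0.0
        while t < T:
            t += rng.exponential(1.0 / lam_max)
            if t < T and rng.uniform() < intensity(t) / lam_max:
                events.append(t)  # accepted claim arrival

        intervals = np.diff(events)
        print(f"{len(events)} events; mean interval {intervals.mean():.3f}")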

  12. On process capability and system availability analysis of the inverse Rayleigh distribution

    Directory of Open Access Journals (Sweden)

    Sajid Ali

    2015-04-01

    Full Text Available In this article, process capability and system availability analysis is discussed for the inverse Rayleigh lifetime distribution. A Bayesian approach with a conjugate gamma prior is adopted for the analysis. Different types of loss functions are considered to find Bayes estimates of the process capability and system availability. A simulation study is conducted for the comparison of the different loss functions.
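
    A minimal sketch of the conjugate update involved, assuming the common inverse Rayleigh parameterization f(x; t) = (2t/x^3) exp(-t/x^2) with a gamma prior on t (the article's exact parameterization and loss functions may differ):

        # Conjugacy: likelihood ~ t^n exp(-t * sum(1/x_i^2)), so a Gamma(a0, b0)
        # prior yields a Gamma(a0 + n, b0 + sum(1/x_i^2)) posterior.
        import numpy as np

        rng = np.random.default_rng(2)
        theta_true, n = 2.0, 50
        data = np.sqrt(theta_true / rng.exponential(1.0, n))  # X = sqrt(theta/E), E ~ Exp(1)

        a0, b0 = 1.0, 1.0                # gamma prior hyperparameters
        a_n = a0 + n                     # posterior shape
        b_n = b0 + np.sum(data ** -2.0)  # posterior rate
        print("Bayes estimate under squared-error loss:", a_n / b_n)

    Under squared-error loss the Bayes estimate is the posterior mean; other loss functions simply evaluate different functionals of the same gamma posterior.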

  13. Equivalence of functional limit theorems for stationary point processes and their Palm distributions

    NARCIS (Netherlands)

    Nieuwenhuis, G.

    1989-01-01

    Let P be the distribution of a stationary point process on the real line and let P0 be its Palm distribution. In this paper we consider two types of functional limit theorems, those in terms of the number of points of the point process in (0, t] and those in terms of the location of the nth point

  14. A reconfigurable strategy for distributed digital process control

    International Nuclear Information System (INIS)

    Garcia, H.E.; Ray, A.; Edwards, R.M.

    1990-01-01

    A reconfigurable control scheme is proposed which, unlike a preprogrammed one, uses stochastic automata to learn the current operating status of the environment (i.e., the plant, controller, and communication network) by dynamically monitoring the system performance and then switching to the appropriate controller on the basis of these observations. The potential applicability of this reconfigurable control scheme to electric power plants is being investigated. The plant under consideration is the Experimental Breeder Reactor (EBR-II) at the Argonne National Laboratory site in Idaho. The distributed control system is emulated on a ring network where the individual subsystems are hosted as follows: (1) the reconfigurable control modules are located in one of the network modules called Multifunction Controller; (2) the learning modules are resident in a VAX 11/785 mainframe computer; and (3) a detailed model of the plant under control is executed in the same mainframe. This configuration is a true representation of the network-based control system in the sense that it operates in real time and is capable of interacting with the actual plant

  15. Automation of the CFD Process on Distributed Computing Systems

    Science.gov (United States)

    Tejnil, Ed; Gee, Ken; Rizk, Yehia M.

    2000-01-01

    A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational
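
    The queueing idea is simple enough to sketch. The following is a minimal Python rendering of the first-in-first-out fallback described above (the original system used UNIX shell and Perl; the solver command here is a placeholder):

        # Minimal FIFO job queue for hosts that run no queueing software:
        # jobs run strictly in submission order, failures are logged and skipped.
        import subprocess
        from collections import deque

        jobs = deque([
            ["flow_solver", "case_aoa0.inp"],   # hypothetical solver invocations
            ["flow_solver", "case_aoa5.inp"],
            ["flow_solver", "case_aoa10.inp"],
        ])

        while jobs:
            cmd = jobs.popleft()  # first in, first out
            print("running:", " ".join(cmd))
            try:
                result = subprocess.run(cmd, capture_output=True, text=True)
            except FileNotFoundError:
                print("solver not found; skipping")
                continue
            if result.returncode != 0:
                print("job failed:", result.stderr.strip())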

  16. Marketing promotion in the consumer goods’ retail distribution process

    Directory of Open Access Journals (Sweden)

    S.Bălăşescu

    2013-06-01

    Full Text Available The fundamental characteristic of contemporary marketing is its total opening towards three major directions: consumer needs, organization needs and society's needs. The continuous expansion of marketing has been accompanied by a process of differentiation and specialization. Differentiation has led to the so-called "specific marketing". In this paper, we aim to explain that, in retail companies, the concept of sales marketing can be distinguished as an independent marketing specialization. The main objectives of this paper are the definition and delimitation of consumer goods' sales marketing in the retail business, and a sectoral approach to the marketing concept and its specific techniques for retail activities.

  17. Elucidating the Role of Transport Processes in Leaf Glucosinolate Distribution

    DEFF Research Database (Denmark)

    Madsen, Svend Roesen; Olsen, Carl Erik; Nour-Eldin, Hussam Hassan

    2014-01-01

    in Arabidopsis, also play key roles in glucosinolate allocation within a mature leaf by effectively importing apoplastically localized glucosinolates into appropriate cells. Detection of glucosinolates in root xylem sap unambiguously shows that this transport route is involved in root-to-shoot glucosinolate...... that the margin accumulation is established through transport, little is known about these transport processes. Here, we show through leaf apoplastic fluid analysis and glucosinolate feeding experiments that two glucosinolate transporters, GTR1 and GTR2, essential for long-distance transport of glucosinolates...... allocation. Detailed leaf dissections show that in the absence of GTR1 and GTR2 transport activity, glucosinolates accumulate predominantly in leaf margins and leaf tips. Furthermore, we show that glucosinolates accumulate in the leaf abaxial epidermis in a GTR-independent manner. Based on our results, we...

  18. Data processing and distribution in the PAMELA experiment

    International Nuclear Information System (INIS)

    Casolino, M.; Nagni, M.

    2007-01-01

    YODA is a semi-automated data handling and analysis system for the PAMELA space experiment. The core routines have been developed to process a stream of raw data downlinked from the Resurs DK1 satellite (housing PAMELA) to the ground station in Moscow. Raw data consist of scientific data and engineering information. Housekeeping information is analyzed within a short time of download (∼hours) in order to monitor the status of the experiment and for mission planning. A prototype for data visualization runs on an APACHE TOMCAT web application server, providing an off-line analysis tool usable from a browser, together with part of the code for system maintenance. A quicklook system with a GUI is used for operator monitoring and fast macrocommand issuing. On a longer timescale, scientific data are analyzed, calibrations performed and the database updated. The data storage core is composed of CERN's ROOT file structure with MySQL as a relational database. YODA++ is currently being used in the integration and testing of ground PAMELA data

  19. Convergence in distribution for filtering processes associated to Hidden Markov Models with densities

    OpenAIRE

    Kaijser, Thomas

    2013-01-01

    A Hidden Markov Model generates two basic stochastic processes: a Markov chain, which is hidden, and an observation sequence. The filtering process of a Hidden Markov Model is, roughly speaking, the sequence of conditional distributions of the hidden Markov chain that is obtained as new observations are received. It is well known that the filtering process itself is also a Markov chain. A classical, theoretical problem is to find conditions which imply that the distributions of the filter...

  20. A mechanistic diagnosis of the simulation of soil CO2 efflux of the ACME Land Model

    Science.gov (United States)

    Liang, J.; Ricciuto, D. M.; Wang, G.; Gu, L.; Hanson, P. J.; Mayes, M. A.

    2017-12-01

    Accurate simulation of the CO2 efflux from soils (i.e., soil respiration) to the atmosphere is critical for projecting global biogeochemical cycles and the magnitude of climate change in Earth system models (ESMs). Currently, soil respiration simulated by ESMs still has large uncertainty. In this study, a mechanistic diagnosis of soil respiration in the Accelerated Climate Model for Energy (ACME) Land Model (ALM) was conducted using long-term observations at the Missouri Ozark AmeriFlux (MOFLUX) forest site in the central U.S. The results showed that the ALM default run significantly underestimated annual soil respiration and gross primary production (GPP), while incorrectly estimating soil water potential. Improved simulations of soil water potential with site-specific data significantly improved the modeled annual soil respiration, primarily because annual GPP was simultaneously improved. Therefore, simulations of soil water potential must be carefully calibrated in ESMs. Despite improved annual soil respiration, the ALM continued to underestimate soil respiration during peak growing seasons, and to overestimate it during non-peak growing seasons. Simulations involving increased GPP during peak growing seasons increased soil respiration, while neither improved plant phenology nor increased temperature sensitivity affected the simulation of soil respiration during non-peak growing seasons. One potential reason for the overestimation of soil respiration during non-peak growing seasons may be that the current model structure is substrate-limited, while microbial dormancy under stress may cause the system to become decomposer-limited. Further studies with more microbial data are required to provide an adequate representation of soil respiration and to understand the underlying reasons for inaccurate model simulations.

  1. Climate driven crop planting date in the ACME Land Model (ALM): Impacts on productivity and yield

    Science.gov (United States)

    Drewniak, B.

    2017-12-01

    Climate is one of the key drivers of crop suitability and productivity in a region. The influence of climate and weather on the growing season determine the amount of time crops spend in each growth phase, which in turn impacts productivity and, more importantly, yields. Planting date can have a strong influence on yields with earlier planting generally resulting in higher yields, a sensitivity that is also present in some crop models. Furthermore, planting date is already changing and may continue, especially if longer growing seasons caused by future climate change drive early (or late) planting decisions. Crop models need an accurate method to predict plant date to allow these models to: 1) capture changes in crop management to adapt to climate change, 2) accurately model the timing of crop phenology, and 3) improve crop simulated influences on carbon, nutrient, energy, and water cycles. Previous studies have used climate as a predictor for planting date. Climate as a plant date predictor has more advantages than fixed plant dates. For example, crop expansion and other changes in land use (e.g., due to changing temperature conditions), can be accommodated without additional model inputs. As such, a new methodology to implement a predictive planting date based on climate inputs is added to the Accelerated Climate Model for Energy (ACME) Land Model (ALM). The model considers two main sources of climate data important for planting: precipitation and temperature. This method expands the current temperature threshold planting trigger and improves the estimated plant date in ALM. Furthermore, the precipitation metric for planting, which synchronizes the crop growing season with the wettest months, allows tropical crops to be introduced to the model. This presentation will demonstrate how the improved model enhances the ability of ALM to capture planting date compared with observations. More importantly, the impact of changing the planting date and introducing tropical
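
    A schematic rendering of the two triggers described above may make the idea concrete; the thresholds, window lengths, and month bookkeeping below are placeholders, not the ALM implementation:

        # Plant on the first day that is both warm enough (running-mean air
        # temperature above a threshold) and inside the climatologically
        # wettest months; all numbers are illustrative.
        import numpy as np

        def planting_day(tair_c, precip_mm, t_thresh=10.0, window=10):
            days = np.arange(tair_c.size)
            warm = np.convolve(tair_c, np.ones(window) / window, mode="same") > t_thresh
            monthly = np.add.reduceat(precip_mm, np.arange(0, 360, 30))
            wet_months = np.argsort(monthly)[-3:]  # three wettest months
            in_wet = np.isin(days // 30, wet_months)
            ok = np.flatnonzero(warm & in_wet)
            return int(ok[0]) if ok.size else None

        rng = np.random.default_rng(7)
        doy = np.arange(360)
        tair = 15 + 10 * np.sin(2 * np.pi * (doy - 80) / 360) + rng.normal(0, 2, 360)
        rain = rng.gamma(0.5, 4.0, 360) * (1 + np.sin(2 * np.pi * doy / 360))
        print("planting day-of-year:", planting_day(tair, rain))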

  2. Gamma processes and peaks-over-threshold distributions for time-dependent reliability

    International Nuclear Information System (INIS)

    Noortwijk, J.M. van; Weide, J.A.M. van der; Kallen, M.J.; Pandey, M.D.

    2007-01-01

    In the evaluation of structural reliability, a failure is defined as the event in which stress exceeds a resistance that is liable to deterioration. This paper presents a method to combine the two stochastic processes of deteriorating resistance and fluctuating load for computing the time-dependent reliability of a structural component. The deterioration process is modelled as a gamma process, which is a stochastic process with independent non-negative increments having a gamma distribution with identical scale parameter. The stochastic process of loads is generated by a Poisson process. The variability of the random loads is modelled by a peaks-over-threshold distribution (such as the generalised Pareto distribution). These stochastic processes of deterioration and load are combined to evaluate the time-dependent reliability
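
    A Monte Carlo sketch of the combined model is straightforward; every numerical value below is illustrative rather than taken from the paper:

        # Resistance degrades as a stationary gamma process; loads arrive as a
        # Poisson process with generalized Pareto peaks over a threshold u.
        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(3)
        T, dt, n_paths = 50, 1.0, 2000             # horizon (years), step, MC paths
        R0, shape_rate, scale = 100.0, 2.0, 1.0    # Gamma(shape_rate*dt, scale) increments
        lam, u, xi, sigma = 0.5, 60.0, 0.1, 10.0   # load rate, threshold, GPD shape/scale

        failed = 0
        for _ in range(n_paths):
            R = R0
            for _ in range(int(T / dt)):
                R -= rng.gamma(shape_rate * dt, scale)  # deterioration increment
                n_loads = rng.poisson(lam * dt)         # load events in this step
                if n_loads:
                    peaks = u + genpareto.rvs(xi, scale=sigma, size=n_loads, random_state=rng)
                    if peaks.max() > R:                 # stress exceeds resistance
                        failed += 1
                        break
        print("estimated probability of failure within T:", failed / n_paths)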

  3. Ruin Probabilities and Aggregate Claims Distributions for Shot Noise Cox Processes

    DEFF Research Database (Denmark)

    Albrecher, H.; Asmussen, Søren

    We consider a risk process Rt where the claim arrival process is a superposition of a homogeneous Poisson process and a Cox process with a Poisson shot noise intensity process, capturing the effect of sudden increases of the claim intensity due to external events. The distribution of the aggregate...... claim size is investigated under these assumptions. For both light-tailed and heavy-tailed claim size distributions, asymptotic estimates for infinite-time and finite-time ruin probabilities are derived. Moreover, we discuss an extension of the model to an adaptive premium rule that is dynamically...

  4. Studies on distributed sensing and processing for the control of large flexible spacecraft

    Science.gov (United States)

    Montgomery, Raymond C.; Ghosh, Dave

    1991-01-01

    Technology is being developed to process signals from distributed sensors using distributed computations. These distributed sensors provide a new feedback capability for vibration control that has not been exploited. Additionally, the sensors proposed are of an optical and distributed nature and could be employed with known techniques of distributed optical computation (Fourier optics, etc.) to accomplish the control system functions of filtering and regulation in a distributed computer. This paper reviews a procedure for the analytic design of control systems for this application. For illustration, the procedure is applied to the problem of suppressing the vibrations of a simply supported beam. A simulator has been developed to study the effects of sensor and processing errors. An extensive study of the effects of these errors on estimation and regulation performance is presented.

  5. Distributed processing in receivers based on tensor for cooperative communications systems

    OpenAIRE

    Igor FlÃvio SimÃes de Sousa

    2014-01-01

    In this dissertation, we present a distributed data estimation and detection approach for the uplink of a network that uses CDMA at transmitters (users). The analyzed network can be represented by an undirected and connected graph, where the nodes use a distributed estimation algorithm based on consensus averaging to perform joint channel and symbol estimation using a receiver based on tensor signal processing. The centralized receiver, developed for a central base station, and the distribute...

  6. A NEW INITIATIVE FOR TILING, STITCHING AND PROCESSING GEOSPATIAL BIG DATA IN DISTRIBUTED COMPUTING ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    A. Olasz

    2016-06-01

    Full Text Available Within recent years, several new approaches and solutions for Big Data processing have been developed. The geospatial world is still facing the lack of well-established distributed processing solutions tailored to the amount and heterogeneity of geodata, especially when fast data processing is a must. The goal of such systems is to improve processing time by distributing data transparently across processing (and/or storage) nodes. These types of methodology are based on the concept of divide and conquer. Nevertheless, in the context of geospatial processing, most of the distributed computing frameworks have important limitations regarding both data distribution and data partitioning methods. Moreover, flexibility and extensibility for handling various data types (often in binary formats) are also strongly required. This paper presents a concept for tiling, stitching and processing of big geospatial data. The system is based on the IQLib concept (https://github.com/posseidon/IQLib/) developed in the frame of the IQmulus EU FP7 research and development project (http://www.iqmulus.eu). The data distribution framework has no limitations on programming language environment and can execute scripts (and workflows) written in different development frameworks (e.g. Python, R or C#). It is capable of processing raster, vector and point cloud data. The above-mentioned prototype is presented through a case study dealing with country-wide processing of raster imagery. Further investigations on algorithmic and implementation details are in focus for the near future.

  7. a New Initiative for Tiling, Stitching and Processing Geospatial Big Data in Distributed Computing Environments

    Science.gov (United States)

    Olasz, A.; Nguyen Thai, B.; Kristóf, D.

    2016-06-01

    Within recent years, several new approaches and solutions for Big Data processing have been developed. The geospatial world is still facing the lack of well-established distributed processing solutions tailored to the amount and heterogeneity of geodata, especially when fast data processing is a must. The goal of such systems is to improve processing time by distributing data transparently across processing (and/or storage) nodes. These types of methodology are based on the concept of divide and conquer. Nevertheless, in the context of geospatial processing, most of the distributed computing frameworks have important limitations regarding both data distribution and data partitioning methods. Moreover, flexibility and extensibility for handling various data types (often in binary formats) are also strongly required. This paper presents a concept for tiling, stitching and processing of big geospatial data. The system is based on the IQLib concept (https://github.com/posseidon/IQLib/) developed in the frame of the IQmulus EU FP7 research and development project (http://www.iqmulus.eu). The data distribution framework has no limitations on programming language environment and can execute scripts (and workflows) written in different development frameworks (e.g. Python, R or C#). It is capable of processing raster, vector and point cloud data. The above-mentioned prototype is presented through a case study dealing with country-wide processing of raster imagery. Further investigations on algorithmic and implementation details are in focus for the near future.

  8. Bounds for the probability distribution function of the linear ACD process

    OpenAIRE

    Fernandes, Marcelo

    2003-01-01

    This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.
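
    For reference, the linear ACD(p, q) model in question is conventionally written as (a standard rendering, not copied from the paper)

        x_i = \psi_i \varepsilon_i, \qquad \psi_i = \omega + \sum_{j=1}^{p} \alpha_j x_{i-j} + \sum_{j=1}^{q} \beta_j \psi_{i-j},

    where x_i is the i-th duration, \psi_i its conditional expectation, and the \varepsilon_i are i.i.d. positive innovations with unit mean whose law is the "parent" distribution (exponential, Weibull, and so on) to which the bounds are specialized.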

  9. Bridging the gap between a stationary point process and its Palm distribution

    NARCIS (Netherlands)

    Nieuwenhuis, G.

    1994-01-01

    In the context of stationary point processes measurements are usually made from a time point chosen at random or from an occurrence chosen at random. That is, either the stationary distribution P or its Palm distribution P° is the ruling probability measure. In this paper an approach is presented to

  10. Time Optimal Run-time Evaluation of Distributed Timing Constraints in Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.; Kristensen, C.H.

    1993-01-01

    This paper considers run-time evaluation of an important class of constraints; Timing constraints. These appear extensively in process control systems. Timing constraints are considered in distributed systems, i.e. systems consisting of multiple autonomous nodes......

  11. Responses of Mixed-Phase Cloud Condensates and Cloud Radiative Effects to Ice Nucleating Particle Concentrations in NCAR CAM5 and DOE ACME Climate Models

    Science.gov (United States)

    Liu, X.; Shi, Y.; Wu, M.; Zhang, K.

    2017-12-01

    Mixed-phase clouds frequently observed in the Arctic and mid-latitude storm tracks have substantial impacts on the surface energy budget, precipitation and climate. In this study, we first implement two empirical parameterizations (Niemand et al. 2012 and DeMott et al. 2015) of heterogeneous ice nucleation for mixed-phase clouds in the NCAR Community Atmosphere Model Version 5 (CAM5) and the DOE Accelerated Climate Model for Energy Version 1 (ACME1). Model-simulated ice nucleating particle (INP) concentrations based on Niemand et al. and DeMott et al. are compared with those from the default ice nucleation parameterization based on classical nucleation theory (CNT) in CAM5 and ACME, and with in situ observations. Significantly higher INP concentrations (by up to a factor of 5) are simulated from Niemand et al. than from DeMott et al. and CNT, especially over the dust source regions, in both CAM5 and ACME. Interestingly, the ACME model simulates higher INP concentrations than CAM5, especially in the polar regions. This is also the case when we nudge the two models' winds and temperature towards the same reanalysis, indicating more efficient transport of aerosols (dust) to the polar regions in ACME. Next, we examine the responses of model-simulated cloud liquid and ice water contents to the different INP concentrations from the three ice nucleation parameterizations (Niemand et al., DeMott et al., and CNT) in CAM5 and ACME. Changes in liquid water path (LWP) reach as much as 20% in the Arctic regions in ACME between the three parameterizations, while the LWP changes are smaller and confined to the Northern Hemisphere mid-latitudes in CAM5. Finally, the impacts on cloud radiative forcing and dust indirect effects on mixed-phase clouds are quantified with the three ice nucleation parameterizations in CAM5 and ACME.

  12. Building enterprise systems with ODP an introduction to open distributed processing

    CERN Document Server

    Linington, Peter F; Tanaka, Akira; Vallecillo, Antonio

    2011-01-01

    The Reference Model of Open Distributed Processing (RM-ODP) is an international standard that provides a solid basis for describing and building widely distributed systems and applications in a systematic way. It stresses the need to build these systems with evolution in mind by identifying the concerns of major stakeholders and then expressing the design as a series of linked viewpoints. Although RM-ODP has been a standard for more than ten years, many practitioners are still unaware of it. Building Enterprise Systems with ODP: An Introduction to Open Distributed Processing offers a gentle pa

  13. An Extended Genetic Algorithm for Distributed Integration of Fuzzy Process Planning and Scheduling

    Directory of Open Access Journals (Sweden)

    Shuai Zhang

    2016-01-01

    Full Text Available The distributed integration of process planning and scheduling (DIPPS) aims to simultaneously arrange the two most important manufacturing stages, process planning and scheduling, in a distributed manufacturing environment. Meanwhile, to better reflect actual manufacturing conditions, the triangle fuzzy number (TFN) is adopted in DIPPS to represent machine processing and transportation times. In order to solve this problem and obtain the optimal or near-optimal solution, an extended genetic algorithm (EGA) with an innovative three-class encoding method and improved crossover and mutation strategies is proposed. Furthermore, a local enhancement strategy featuring machine replacement and order exchange is added to strengthen the local search capability of the basic genetic algorithm. Experiments show that EGA achieves satisfactory results in a very short time and demonstrates its powerful performance in dealing with the distributed integration of fuzzy process planning and scheduling (DIFPPS).
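
    A minimal sketch of the TFN bookkeeping such a scheduler needs (the ranking rule below, a graded mean, is one common choice; the paper's exact operators may differ):

        # Triangle fuzzy numbers for processing/transportation times:
        # componentwise addition and a graded-mean defuzzification for ranking.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class TFN:
            a: float  # optimistic time
            m: float  # most likely time
            b: float  # pessimistic time

            def __add__(self, other: "TFN") -> "TFN":
                return TFN(self.a + other.a, self.m + other.m, self.b + other.b)

            def rank(self) -> float:
                return (self.a + 4.0 * self.m + self.b) / 6.0

        machining = TFN(3.0, 4.0, 6.0)
        transport = TFN(1.0, 2.0, 2.5)
        total = machining + transport
        print(total, "rank:", total.rank())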

  14. The influence of emotion on lexical processing: insights from RT distributional analysis.

    Science.gov (United States)

    Yap, Melvin J; Seow, Cui Shan

    2014-04-01

    In two lexical decision experiments, the present study was designed to examine emotional valence effects on visual lexical decision (standard and go/no-go) performance, using traditional analyses of means and distributional analyses of response times. Consistent with an earlier study by Kousta, Vinson, and Vigliocco (Cognition 112:473-481, 2009), we found that emotional words (both negative and positive) were responded to faster than neutral words. Finer-grained distributional analyses further revealed that the facilitation afforded by valence was reflected by a combination of distributional shifting and an increase in the slow tail of the distribution. This suggests that emotional valence effects in lexical decision are unlikely to be entirely mediated by early, preconscious processes, which are associated with pure distributional shifting. Instead, our results suggest a dissociation between early preconscious processes and a later, more task-specific effect that is driven by feedback from semantically rich representations.
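
    One common way to separate the two signatures mentioned above (whole-distribution shifting versus growth of the slow tail) is to fit an ex-Gaussian to each condition's response times: the normal component's mu captures shifting, the exponential component's tau the slow tail. A minimal sketch of that style of analysis on synthetic data (the study's concrete procedure may differ):

        # Fit ex-Gaussians to two synthetic RT conditions; scipy's exponnorm
        # has exponential mean tau = K * sigma.
        import numpy as np
        from scipy.stats import exponnorm

        rng = np.random.default_rng(4)
        neutral = exponnorm.rvs(2.0, loc=0.55, scale=0.05, size=2000, random_state=rng)
        valenced = exponnorm.rvs(1.5, loc=0.52, scale=0.05, size=2000, random_state=rng)

        for name, rts in [("neutral", neutral), ("valenced", valenced)]:
            K, mu, sigma = exponnorm.fit(rts)
            print(f"{name}: mu={mu:.3f}s sigma={sigma:.3f}s tau={K * sigma:.3f}s")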

  15. Distribution and Molecular Characterization of Campylobacter Species at Different Processing Stages in Two Poultry Processing Plants.

    Science.gov (United States)

    Lee, Soo-Kyoung; Park, Hyun-Jung; Lee, Jin-Hee; Lim, Jong-Soo; Seo, Kun-Ho; Heo, Eun-Jeong; Kim, Young-Jo; Wee, Sung-Hwan; Moon, Jin-San

    2017-03-01

    The present study analyzed the prevalence and molecular characterization of Campylobacter at different processing steps in poultry slaughterhouses to determine where contamination mainly occurs. A total of 1,040 samples were collected at four different stages (preprocessing cloacal swabs, postevisceration, postwashing, and postchilling) in two processing plants. Campylobacter was detected in 5.8% (15 of 260) of the cloacal swabs and in 13.3% (104 of 780) of the processing samples. In both plants, the sampling points with the greatest contamination rates were after evisceration (20.5% and 15.4% for plants A and B, respectively), and contamination significantly decreased after chilling (p < 0.05). A gradual reduction in Campylobacter contamination was thus achieved through the sequential processing procedures in both plants. Campylobacter loads (>10^3 colony-forming units [CFU]/mL) also decreased from 41.7% at evisceration to 20.0% in final carcasses. The genetic relationships of isolates were analyzed by the automated repetitive sequence-based polymerase chain reaction (rep-PCR) system, and the rep-PCR banding pattern was found to be unrelated to processing plant, species, sampling point, or sampling day. As a gap in intervention efficacy remains between plants A and B despite several consistencies, a national program for monitoring critical processing stages in poultry processing plants is recommended for the successful exportation of Korean-processed white mini broiler meat.

  16. Communicational Architecture and Computational Processing Robustness on Distributed Controller System in Reconfigurable Brachiating Space Robot

    Science.gov (United States)

    Yamamoto, Hiroshi; Matunaga, Saburo

    The Reconfigurable Brachiating space Robot consists of three 6-DOF arms that support various kinds of extravehicular activities by changing its arm configuration. This kind of robot requires adaptation to topology changes in its communication system as well as in its mechanical composition. A distributed controller system is employed to realize these objectives, and this paper discusses the communication architecture we have designed for it. Moreover, a fault resilience method for the distributed system with several micro processing units is proposed. It targets high availability of the data processing function, using process takeover and software parallelism.

  17. DISCO - A concept of a system for integrated data base management in distributed data processing systems

    International Nuclear Information System (INIS)

    Holler, E.

    1980-01-01

    The development of data processing technology favors the trend towards distributed data processing systems: the large-scale integration of semiconductor devices has led to very efficient (approx. 10^6 operations per second) and relatively cheap low-end computers being offered today, which allow distributed data processing systems to be installed with a total capacity approaching that of large-scale data processing plants at a tolerable investment expenditure. The technologies of communication and data banks have each, by themselves, reached a state of development justifying their routine application. This is made evident by the present efforts for standardization in both areas. The integration of both technologies in the development of systems for integrated distributed data bank management, however, is new territory for engineering. (orig.) [de]

  18. Distributed system for parallel data processing of ECT signals for electromagnetic flaw detection in materials

    International Nuclear Information System (INIS)

    Guliashki, Vassil; Marinova, Galia

    2002-01-01

    The paper proposes a distributed system for parallel data processing of ECT signals for flaw detection in materials. The measured data are stored in files on a host computer, where a JAVA server is located. The host computer is connected through the Internet to a set of geographically distributed client computers. The data are distributed from the host computer by means of the JAVA server to the client computers according to their requests. The software necessary for the data processing is installed on each client computer in advance. Organizing the data processing on many computers working simultaneously in parallel greatly reduces processing time, especially in cases when a huge amount of data should be processed in a very short time. (Author)

  19. Characterization of the mass distribution of Slovak brown coal after size reduction processes

    Directory of Open Access Journals (Sweden)

    Turčániová Ľudmila

    2000-09-01

    Full Text Available The distribution of pulverised particles is in general affected by the fragmentation process, initial size distribution, energy input, number of fracturing events, etc., and has been studied for several decades. Empirical studies of crushing and grinding by the mineral processing industry provide a major source of information on these distributions. There are many statistical relations describing the distributions of particles: between the number of particles and their size, or the particle mass and size. The aim of this paper is to reveal the fractal relation in the mass distribution of coal samples from the Cígeľ locality after the size reduction processes of crushing and grinding. The data can be obtained from sieve analysis, where the particles are distributed into various fractions. The fractal distribution is characterized by the fractal dimension D, which can be determined from the gradient of the graph of ln M against ln r, where M denotes the cumulative mass of all particles with size less than r. It is useful to specify the range over which the fractal relation is a good fit to the experimental data. The range is bounded by upper and lower limits on the particle size. From the obtained values it can be concluded that the value of the fractal dimension for the ground sample is higher, due to a higher number of size-reduction events.

  20. On the joint distribution of excursion duration and amplitude of a narrow-band Gaussian process

    DEFF Research Database (Denmark)

    Ghane, Mahdi; Gao, Zhen; Blanke, Mogens

    2018-01-01

    The probability density of crest amplitude and of the duration of exceeding a given level are used in many theoretical and practical engineering problems. The joint density is essential for the design of structures subjected to waves and wind. The presently available joint distributions of amplitude and period are limited to excursions through a mean level or to describing the asymptotic behavior of high-level excursions. This paper extends this knowledge by presenting a theoretical derivation of the probability of wave exceedance amplitude and duration for a narrow-band Gaussian process...... distribution, as expected, and that the marginal distribution of excursion duration works both for asymptotic and non-asymptotic cases. The suggested model is found to be a good replacement for the empirical distributions that are widely used. Results from simulations of narrow-band Gaussian processes, real...
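
    The quantities involved are easy to extract empirically from a simulated narrow-band record, which is how such a derivation is typically checked. An illustrative sketch (spectrum and level are arbitrary choices, not the paper's):

        # Simulate a narrow-band Gaussian process by inverse FFT of complex
        # normal coefficients, then collect excursion durations and peak
        # amplitudes above a fixed level.
        import numpy as np

        rng = np.random.default_rng(6)
        fs, n, f0, bw, level = 100.0, 2**18, 1.0, 0.05, 1.0
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        spec = np.exp(-0.5 * ((freqs - f0) / bw) ** 2)  # narrow band around f0
        coeff = (rng.standard_normal(freqs.size)
                 + 1j * rng.standard_normal(freqs.size)) * np.sqrt(spec)
        x = np.fft.irfft(coeff, n=n)
        x /= x.std()  # unit-variance Gaussian record

        up = np.flatnonzero((x[:-1] < level) & (x[1:] >= level))
        down = np.flatnonzero((x[:-1] >= level) & (x[1:] < level))
        down = down[down > up[0]]  # pair each upcrossing with its downcrossing
        n_exc = min(up.size, down.size)
        durations = (down[:n_exc] - up[:n_exc]) / fs
        peaks = np.array([x[a:b + 1].max() for a, b in zip(up[:n_exc], down[:n_exc])])
        print(f"{n_exc} excursions; mean duration {durations.mean():.2f} s, "
              f"mean peak {peaks.mean():.2f}")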

  1. Distribution of chirality in the quantum walk: Markov process and entanglement

    International Nuclear Information System (INIS)

    Romanelli, Alejandro

    2010-01-01

    The asymptotic behavior of the quantum walk on the line is investigated, focusing on the probability distribution of chirality independently of position. It is shown analytically that this distribution has a long-time limit that is stationary and depends on the initial conditions. This result is unexpected in the context of the unitary evolution of the quantum walk, as such behavior is usually linked to a Markovian process. The asymptotic value of the entanglement between the coin and the position is determined by the chirality distribution. For given asymptotic values of both the entanglement and the chirality distribution, it is possible to find the corresponding initial conditions within a particular class of spatially extended Gaussian distributions.

  2. On the Control of Automatic Processes: A Parallel Distributed Processing Account of the Stroop Effect

    Science.gov (United States)

    1989-11-22

    Keywords: cognitive psychology; modelling; Stroop task.

  3. Problem of uniqueness in the renewal process generated by the uniform distribution

    Directory of Open Access Journals (Sweden)

    D. Ugrin-Šparac

    1992-01-01

    Full Text Available The renewal process generated by the uniform distribution, when interpreted as a transformation of the uniform distribution into a discrete distribution, gives rise to the question of the uniqueness of the inverse image. The paper deals with a particular problem from the described domain that arose in the construction of a complex stochastic test intended to evaluate pseudo-random number generators. The connection of the treated problem with the question of a unique integral representation of the Gamma function is also mentioned.

  4. Cumulative distribution functions associated with bubble-nucleation processes in cavitation

    KAUST Repository

    Watanabe, Hiroshi

    2010-11-15

    Bubble-nucleation processes of a Lennard-Jones liquid are studied by molecular dynamics simulations. Waiting time, which is the lifetime of a superheated liquid, is determined for several system sizes, and the apparent finite-size effect of the nucleation rate is observed. From the cumulative distribution function of the nucleation events, the bubble-nucleation process is found to be not a simple Poisson process but a Poisson process with an additional relaxation time. The parameters of the exponential distribution associated with the process are determined by taking the relaxation time into account, and the apparent finite-size effect is removed. These results imply that the use of the arithmetic mean of the waiting time until a bubble grows to the critical size leads to an incorrect estimation of the nucleation rate. © 2010 The American Physical Society.
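
    In equation form, the finding amounts to replacing the pure-Poisson waiting-time law with a delayed exponential (schematic; symbols generic rather than the paper's):

        F(t) = 1 - \exp\!\left(-\frac{t - t_0}{\tau}\right), \qquad t \ge t_0,

    where \tau is the mean nucleation time and t_0 the additional relaxation time. Fitting both parameters, instead of equating the arithmetic mean waiting time \langle t \rangle = t_0 + \tau with the inverse nucleation rate, removes the apparent finite-size effect described above.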

  5. Effect of almond processing on levels and distribution of aflatoxins in finished products and byproducts.

    Science.gov (United States)

    Zivoli, Rosanna; Gambacorta, Lucia; Perrone, Giancarlo; Solfrizzo, Michele

    2014-06-18

    The fate of aflatoxins during processing of contaminated almonds into nougat, pastries, and almond syrup was evaluated by testing the effect of each processing step (blanching, peeling, roasting, caramelization, cooking, and water infusion) on the distribution and levels of aflatoxins. Blanching and peeling did not reduce total aflatoxins that were distributed between peeled almonds (90-93%) and skins (7-10%). Roasting of peeled almonds reduced up to 50% of aflatoxins. Up to 70% reduction of aflatoxins was observed during preparation and cooking of almond nougat in caramelized sugar. Aflatoxins were substantially stable during preparation and cooking of almond pastries. The whole process of almond syrup preparation produced a marked increase of total aflatoxins (up to 270%) that were distributed between syrup (18-25%) and spent almonds (75-82%). The increase of total aflatoxins was probably due to the activation of almond enzymes during the infusion step that released free aflatoxins from masked aflatoxins.

  6. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    Science.gov (United States)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array comprises two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and the self-organized map (SOM) neural network. The vision processor is fabricated using a 0.18 µm CMOS technology. The circuit area of the distributed memory is reduced markedly to one-third that of conventional memory, so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design achieves correct functions.

  7. Unified theory for stochastic modelling of hydroclimatic processes: Preserving marginal distributions, correlation structures, and intermittency

    Science.gov (United States)

    Papalexiou, Simon Michael

    2018-05-01

    Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
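
    As a rough sketch of the parent-Gaussian idea described above (not the paper's specific parametric correlation transformation functions), the following Python snippet simulates an AR(1) Gaussian parent process and back-transforms it to an intermittent, gamma-distributed target marginal. The AR(1) structure, the gamma marginal, and the probability of zeros are illustrative assumptions.

        # Parent-Gaussian simulation sketch: AR(1) parent, mixed-type target marginal.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n, rho = 10_000, 0.8                      # length and lag-1 parent correlation
        z = np.empty(n)
        z[0] = rng.standard_normal()
        for t in range(1, n):                     # AR(1) parent Gaussian process
            z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

        u = stats.norm.cdf(z)                     # uniform scores of the parent
        p_dry = 0.6                               # intermittency: atom at zero
        x = np.where(u < p_dry, 0.0,              # mixed-type marginal back-transform
                     stats.gamma.ppf((u - p_dry) / (1 - p_dry), a=0.7, scale=2.0))
        print("fraction of zeros:", np.mean(x == 0),
              "lag-1 correlation:", np.corrcoef(x[:-1], x[1:])[0, 1])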

  8. The brain as a distributed intelligent processing system: an EEG study.

    Science.gov (United States)

    da Rocha, Armando Freitas; Rocha, Fábio Theoto; Massad, Eduardo

    2011-03-15

    Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. The present results support these claims and the neural efficiency hypothesis.

  9. The brain as a distributed intelligent processing system: an EEG study.

    Directory of Open Access Journals (Sweden)

    Armando Freitas da Rocha

    Full Text Available BACKGROUND: Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. METHODOLOGY AND PRINCIPAL FINDINGS: In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. CONCLUSION: The present results support these claims and the neural efficiency hypothesis.

  10. The Brain as a Distributed Intelligent Processing System: An EEG Study

    Science.gov (United States)

    da Rocha, Armando Freitas; Rocha, Fábio Theoto; Massad, Eduardo

    2011-01-01

    Background Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. Methodology and Principal Findings In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. Conclusion The present results support these claims and the neural efficiency hypothesis. PMID:21423657

  11. Simulation of business processes of processing and distribution of orders in transportation

    Directory of Open Access Journals (Sweden)

    Ольга Ігорівна Проніна

    2017-06-01

    Full Text Available Analyzing modern passenger transportation in Ukraine, we can conclude that as the urban population grows, the need to develop passenger traffic and to improve the quality of transport services grows as well. The paper examines the three existing models of private passenger transportation (taxi): a model with a dispatching service, a model without a dispatching service, and a mixed model. An algorithm for receiving an order, processing it, and executing it according to the given model is considered. Several arrangement schemes that characterize the operation of the system are also shown. The interaction between the client placing an order and the driver who receives and executes it is represented, with the server acting as the connecting link between customer and driver and regulating the system as a whole. The business process of private passenger transportation without a dispatching service was simulated. Based on the simulation results, it was proposed to supplement the private transportation model with an advice system and to improve the car selection algorithm. The advice system provides the optimal choice of car, taking many factors into account, and also makes it possible to use more efficiently the specific additional services provided by the drivers. Optimizing the order handling process makes it possible to increase driver throughput and thus driver profits. Passenger transportation without a dispatching service has some weak points, and these were identified. Application of the system will improve the transport structure under modern conditions and improve transportation on the basis of a modern operating system.

  12. Comparison of Environment Impact between Conventional and Cold Chain Management System in Paprika Distribution Process

    Directory of Open Access Journals (Sweden)

    Eidelweijs A Putri

    2012-09-01

    Full Text Available Pasir Langu village in Cisarua, West Java, is the largest central production area of paprika in Indonesia. On average, for every 200 kilograms of paprika produced, about 3 kilograms are rejected. This results in financial losses for wholesalers and in waste; over one year, the loss can reach approximately 11.7 million Indonesian rupiahs. Paprika wholesalers in Pasir Langu village are therefore developing a cold chain management system to maintain the quality of paprika so that the number of rejections can be reduced. The objective of this study is to compare the environmental impacts of the conventional and cold chain management systems in the paprika distribution process using Life Cycle Assessment (LCA) methodology, and to propose a photovoltaic (PV) system for the paprika distribution process. The results imply that the cold chain system produces more CO2 emissions than the conventional system; however, with the introduction of the PV system, these emissions would be reduced. Future research should focus on reducing CO2 emissions from the transportation process, since this process is the biggest contributor of CO2 emissions in the whole distribution process. Keywords: LCA, environmentally friendly distribution, paprika, cold chain, PV system

  13. Bayesian inference for multivariate point processes observed at sparsely distributed times

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl; Møller, Jesper; Aukema, B.H.

    We consider statistical and computational aspects of simulation-based Bayesian inference for a multivariate point process which is only observed at sparsely distributed times. For specificity we consider a particular data set which has earlier been analyzed by a discrete time model involving unknown...... normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared to discrete time processes in the setting of the present paper as well as other spatial-temporal situations. Keywords: Bark beetle, conditional intensity, forest entomology, Markov chain Monte Carlo......, missing data, prediction, spatial-temporal process....

  14. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    Science.gov (United States)

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multiple atlases, and mean-shift techniques to segment the liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image is segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result is obtained by fusing the segmentation results from all atlas spaces via a multiclassifier fusion technique. Specifically, in order to speed up segmentation, given a test image we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original, inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.
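
    The final fusion step can be illustrated with the simplest multiclassifier rule, a majority vote across atlas spaces. This is a generic Python sketch; the paper's actual fusion technique may differ.

        # Majority-vote fusion of per-atlas binary segmentations (generic sketch).
        import numpy as np

        def fuse_segmentations(masks):
            """masks: list of binary 3-D arrays, one segmentation per atlas space."""
            votes = np.sum(np.stack(masks, axis=0), axis=0)
            return (votes * 2 > len(masks)).astype(np.uint8)   # strict majority

        atlas_masks = [np.random.default_rng(i).integers(0, 2, size=(4, 4, 4))
                       for i in range(5)]                      # dummy segmentations
        fused = fuse_segmentations(atlas_masks)
        print(fused.shape, fused.dtype)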

  15. Integration of distributed plant process computer systems to nuclear power generation facilities

    International Nuclear Information System (INIS)

    Bogard, T.; Finlay, K.

    1996-01-01

    Many operating nuclear power generation facilities are replacing their plant process computers. Such replacement projects are driven by equipment obsolescence issues and by the associated objectives of improving plant operability, increasing access to plant information, improving man-machine interface characteristics, and reducing operation and maintenance costs. This paper describes a few recently completed and ongoing replacement projects, with emphasis upon the application of integrated distributed plant process computer systems. These recent projects show how various configurations of distributed systems design can address needs for flexibility, open architecture, and integration of technological advancements in instrumentation and control technology. Architectural considerations for optimal integration of the plant process computer with plant process instrumentation and control are evident from the variations in design features

  16. Distributed process control system for remote control and monitoring of the TFTR tritium systems

    International Nuclear Information System (INIS)

    Schobert, G.; Arnold, N.; Bashore, D.; Mika, R.; Oliaro, G.

    1989-01-01

    This paper reviews the progress made in the application of a commercially available distributed process control system to support the requirements established for the Tritium REmote Control And Monitoring System (TRECAMS) of the Tokamak Fusion Test Reactor (TFTR). The system that will be discussed was purchased from the Texas Instruments (TI) Automation Controls Division, previously marketed by Rexnord Automation. It consists of three fully redundant, distributed process controllers interfaced to over 1800 analog and digital I/O points. The operator consoles located throughout the facility are supported by four Digital Equipment Corporation (DEC) PDP-11/73 computers. The PDP-11/73s and the three process controllers communicate over a fully redundant one-megabaud fiber optic network. All system functionality is based on a set of completely integrated databases loaded to the process controllers and the PDP-11/73s. (author). 2 refs.; 2 figs

  17. Estimating the transmission potential of supercritical processes based on the final size distribution of minor outbreaks.

    Science.gov (United States)

    Nishiura, Hiroshi; Yan, Ping; Sleeman, Candace K; Mode, Charles J

    2012-02-07

    Use of the final size distribution of minor outbreaks for the estimation of the reproduction numbers of supercritical epidemic processes has yet to be considered. We used a branching process model to derive the final size distribution of minor outbreaks, assuming a reproduction number above unity, and applying the method to final size data for pneumonic plague. Pneumonic plague is a rare disease with only one documented major epidemic in a spatially limited setting. Because the final size distribution of a minor outbreak needs to be normalized by the probability of extinction, we assume that the dispersion parameter (k) of the negative-binomial offspring distribution is known, and examine the sensitivity of the reproduction number to variation in dispersion. Assuming a geometric offspring distribution with k=1, the reproduction number was estimated at 1.16 (95% confidence interval: 0.97-1.38). When less dispersed with k=2, the maximum likelihood estimate of the reproduction number was 1.14. These estimates agreed with those published from transmission network analysis, indicating that the human-to-human transmission potential of the pneumonic plague is not very high. Given only minor outbreaks, transmission potential is not sufficiently assessed by directly counting the number of offspring. Since the absence of a major epidemic does not guarantee a subcritical process, the proposed method allows us to conservatively regard epidemic data from minor outbreaks as supercritical, and yield estimates of threshold values above unity. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
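
    The branching-process calculation described above can be mimicked by direct simulation. In the hedged Python sketch below, offspring counts are negative-binomial with mean R and dispersion k, and chains that reach a size cap are treated as major outbreaks; the cap, seed, and sample size are arbitrary choices, not the paper's procedure.

        # Simulate final sizes of outbreaks under negative-binomial offspring.
        import numpy as np

        def final_size(R, k, rng, cap=10_000):
            infected, total = 1, 1
            while infected and total < cap:
                offspring = rng.negative_binomial(k, k / (k + R), size=infected).sum()
                infected, total = offspring, total + offspring
            return total if total < cap else None   # None marks a major outbreak

        rng = np.random.default_rng(2)
        sizes = [final_size(R=1.16, k=1.0, rng=rng) for _ in range(5000)]
        minor = [s for s in sizes if s is not None]
        print("fraction of minor outbreaks:", len(minor) / len(sizes),
              "mean minor-outbreak size:", np.mean(minor))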

  18. Distributed real time data processing architecture for the TJ-II data acquisition system

    International Nuclear Information System (INIS)

    Ruiz, M.; Barrera, E.; Lopez, S.; Machon, D.; Vega, J.; Sanchez, E.

    2004-01-01

    This article describes the performance of a new model of architecture that has been developed for the TJ-II data acquisition system in order to increase its real-time data processing capabilities. The current model consists of several PXI (PCI eXtensions for Instrumentation) standard chassis, each with various digitizers. In this architecture, the data processing capability is restricted to the PXI controller's own performance, and the controller must share its CPU resources between data processing and data acquisition tasks. In the new model, a distributed data processing architecture has been developed. The solution adds one or more processing cards to each PXI chassis, making it possible to plan how to distribute the processing of all acquired signals among the processing cards and the available resources of the PXI controller. This model allows scalability of the system: processing cards can be added or removed according to the requirements of the system. The processing algorithms are implemented in LabVIEW (from National Instruments), providing efficient and time-saving application development compared with other solutions

  19. Fabrication of 93.7 m long PLD-EuBCO + BaHfO3 coated conductors with 103 A/cm W at 77 K under 3 T

    Science.gov (United States)

    Yoshida, T.; Ibi, A.; Takahashi, T.; Yoshizumi, M.; Izumi, T.; Shiohara, Y.

    2015-11-01

    Introduction of artificial pinning centers such as BaHfO3 (BHO), BaZrO3 (BZO) and BaSnO3 (BSO) into REBa2Cu3O7-δ (REBCO) coated conductor (CC) layers could improve the in-field critical currents (Ic) over wide ranges of temperatures and magnetic fields. In particular, a combination of EuBCO + BHO has been found to be effective for attaining high in-field Ic performance by means of the IBAD/PLD process in short-length samples. In this work, we have successfully fabricated a 93.7 m long EuBCO + BHO CC with 103 A/cm W at 77 K under a magnetic field (B) of 3 T applied perpendicular to the CC (B//c). The 93.7 m long EuBCO + BHO CC had high uniformity of Ic values and n-values without any trend of fluctuations, independent of the external field up to 0.3 T. Ic-B-applied angle (θ) profiles of the 93.7 m long EuBCO + BHO CC sample showed high in-field Ic values in all directions of the applied magnetic field, especially B//c (at θ ∼ 180°, Ic = 157 A/cm W), at 77 K under 3 T. The profiles were about the same as those of a short-length sample.

  20. Mesocell study area snow distributions for the Cold Land Processes Experiment (CLPX)

    Science.gov (United States)

    Glen E. Liston; Christopher A. Hiemstra; Kelly Elder; Donald W. Cline

    2008-01-01

    The Cold Land Processes Experiment (CLPX) had a goal of describing snow-related features over a wide range of spatial and temporal scales. This required linking disparate snow tools and datasets into one coherent, integrated package. Simulating realistic high-resolution snow distributions and features requires a snow-evolution modeling system (SnowModel) that can...

  1. Innovation as a distributed, collaborative process of knowledge generation: open, networked innovation

    NARCIS (Netherlands)

    Sloep, Peter

    2009-01-01

    Sloep, P. B. (2009). Innovation as a distributed, collaborative process of knowledge generation: open, networked innovation. In V. Hornung-Prähauser & M. Luckmann (Eds.), Kreativität und Innovationskompetenz im digitalen Netz - Creativity and Innovation Competencies in the Web, Sammlung von

  2. Distribution and interplay of geologic processes on Titan from Cassini radar data

    Science.gov (United States)

    Lopes, R.M.C.; Stofan, E.R.; Peckyno, R.; Radebaugh, J.; Mitchell, K.L.; Mitri, Giuseppe; Wood, C.A.; Kirk, R.L.; Wall, S.D.; Lunine, J.I.; Hayes, A.; Lorenz, R.; Farr, Tom; Wye, L.; Craig, J.; Ollerenshaw, R.J.; Janssen, M.; LeGall, A.; Paganelli, F.; West, R.; Stiles, B.; Callahan, P.; Anderson, Y.; Valora, P.; Soderblom, L.

    2010-01-01

    The Cassini Titan Radar Mapper is providing an unprecedented view of Titan's surface geology. Here we use Synthetic Aperture Radar (SAR) image swaths (Ta-T30) obtained from October 2004 to December 2007 to infer the geologic processes that have shaped Titan's surface. These SAR swaths cover about 20% of the surface, at a spatial resolution ranging from ∼350 m to ∼2 km. The SAR data are distributed over a wide latitudinal and longitudinal range, enabling some conclusions to be drawn about the global distribution of processes. They reveal a geologically complex surface that has been modified by all the major geologic processes seen on Earth - volcanism, tectonism, impact cratering, and erosion and deposition by fluvial and aeolian activity. In this paper, we map geomorphological units from SAR data and analyze their areal distribution and relative ages of modification in order to infer the geologic evolution of Titan's surface. We find that dunes and hummocky and mountainous terrains are more widespread than lakes, putative cryovolcanic features, mottled plains, and craters and crateriform structures that may be due to impact. Undifferentiated plains are the largest areal unit; their origin is uncertain. In terms of latitudinal distribution, dunes and hummocky and mountainous terrains are located mostly at low latitudes (less than 30°), with no dunes being present above 60°. Channels formed by fluvial activity are present at all latitudes, but lakes are at high latitudes only. Crateriform structures that may have been formed by impact appear to be uniformly distributed with latitude, but the well-preserved impact craters are all located at low latitudes, possibly indicating that more resurfacing has occurred at higher latitudes. Cryovolcanic features are not ubiquitous, and are mostly located between 30° and 60° north. We examine temporal relationships between units wherever possible, and conclude that aeolian and fluvial/pluvial/lacustrine processes are the

  3. A distributed process monitoring system for nuclear powered electrical generating facilities

    International Nuclear Information System (INIS)

    Sweney, A.D.

    1991-01-01

    Duke Power Company is one of the largest investor-owned utilities in the United States, with a service area of 20,000 square miles extending across North and South Carolina. Oconee Nuclear Station, one of Duke Power's three nuclear generating facilities, is a three-unit pressurized water reactor site and has, over the course of its 15-year operating lifetime, effectively run out of plant processing capability. From a severely overcrowded cable spread room to an aging, overtaxed Operator Aid Computer, the obstacles to adding further process variables to the present centralized Operator Aid Computer are almost insurmountable. This paper reports that, for this reason and to realize the inherent benefits of a distributed process monitoring and control system, Oconee has embarked on a project to demonstrate the ability of a distributed system to perform in the nuclear power plant environment

  4. Method for distributed agent-based non-expert simulation of manufacturing process behavior

    Science.gov (United States)

    Ivezic, Nenad; Potok, Thomas E.

    2004-11-30

    A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
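
    The claimed scheme can be sketched as one agent per process object, each reacting to the three event types named above, dispatched in a message loop. Everything in the following Python sketch beyond those three event names is an invented illustration, not the patent's implementation.

        # One agent per process; a message loop broadcasts discrete events.
        from collections import deque

        class ProcessAgent:
            def __init__(self, name):
                self.name, self.stock = name, 0
            def handle(self, event, loop):
                if event == "clock_tick":
                    pass                               # e.g., advance internal timers
                elif event == "resources_received":
                    self.stock += 1                    # accept incoming material
                elif event == "request_output" and self.stock > 0:
                    self.stock -= 1                    # consume stock, produce output
                    loop.post("resources_received")    # feed downstream agents
                    print(f"{self.name}: produced one unit")

        class MessageLoop:
            def __init__(self, agents):
                self.agents, self.queue = agents, deque()
            def post(self, event):
                self.queue.append(event)
            def run(self):
                while self.queue:
                    event = self.queue.popleft()
                    for agent in self.agents:          # broadcast to every agent
                        agent.handle(event, self)

        loop = MessageLoop([ProcessAgent("milling"), ProcessAgent("assembly")])
        loop.post("resources_received")
        loop.post("request_output")
        loop.run()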

  5. Quasi-stationary distributions for birth-death processes with killing

    Directory of Open Access Journals (Sweden)

    Pauline Coolen-Schrijner

    2006-01-01

    Full Text Available The Karlin-McGregor representation for the transition probabilities of a birth-death process with an absorbing bottom state involves a sequence of orthogonal polynomials and the corresponding measure. This representation can be generalized to a setting in which a transition to the absorbing state (killing) is possible from any state rather than just one state. The purpose of this paper is to investigate to what extent properties of birth-death processes, in particular with regard to the existence of quasi-stationary distributions, remain valid in the generalized setting. It turns out that the elegant structure of the theory of quasi-stationarity for birth-death processes remains largely intact as long as killing is possible from only finitely many states. In particular, the existence of a quasi-stationary distribution is ensured in this case if absorption is certain and the state probabilities tend to zero exponentially fast.
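
    For a finite truncation of such a chain, the quasi-stationary distribution can be computed numerically as the normalized leading left eigenvector of the generator restricted to the transient states. The following Python example is a generic toy illustration (constant rates chosen arbitrarily, killing allowed from every state), not the paper's analysis.

        # Quasi-stationary distribution of a finite birth-death chain with killing.
        import numpy as np

        n = 20                                    # transient states 1..n; 0 absorbs
        birth = np.full(n, 1.0)
        death = np.full(n, 0.8)
        kill  = np.full(n, 0.05)                  # killing possible from every state

        Q = np.zeros((n, n))                      # generator on the transient states
        for i in range(n):
            if i < n - 1: Q[i, i + 1] = birth[i]
            if i > 0:     Q[i, i - 1] = death[i]  # death from the lowest state absorbs
            Q[i, i] = -((birth[i] if i < n - 1 else 0.0) + death[i] + kill[i])

        w, v = np.linalg.eig(Q.T)                 # left eigenvectors of Q
        lead = np.argmax(w.real)                  # largest (least negative) eigenvalue
        qsd = np.abs(v[:, lead].real)
        qsd /= qsd.sum()                          # normalize to a distribution
        print("decay rate:", -w[lead].real, "QSD head:", np.round(qsd[:5], 4))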

  6. Modelling spatiotemporal distribution patterns of earthworms in order to indicate hydrological soil processes

    Science.gov (United States)

    Palm, Juliane; Klaus, Julian; van Schaik, Loes; Zehe, Erwin; Schröder, Boris

    2010-05-01

    Soils provide central ecosystem functions in recycling nutrients, detoxifying harmful chemicals, and regulating microclimate and local hydrological processes. The internal regulation of these functions, and therefore the development of healthy and fertile soils, mainly depends on the functional diversity of plants and animals. Soil organisms drive essential processes such as litter decomposition, nutrient cycling, water dynamics, and soil structure formation. Disturbances by different soil management practices (e.g., soil tillage, fertilization, pesticide application) affect the distribution and abundance of soil organisms and hence influence regulating processes. The strong relationship between environmental conditions and soil organisms gives us the opportunity to link spatiotemporal distribution patterns of indicator species with the potential provision of essential soil processes on different scales. Earthworms are key organisms for soil function and affect, among other things, water dynamics and solute transport in soils. Through their burrowing activity, earthworms increase the number of macropores by building semi-permanent burrow systems. In the unsaturated zone, earthworm burrows act as preferential flow pathways and affect water infiltration, surface, subsurface and matrix flow, as well as the transport of water and solutes into deeper soil layers. Different ecological earthworm types differ in their importance for these processes. Deep-burrowing anecic earthworm species (e.g., Lumbricus terrestris) affect the vertical flow and thus increase the risk of potential contamination of groundwater with agrochemicals. In contrast, horizontally burrowing endogeic (e.g., Aporrectodea caliginosa) and epigeic species (e.g., Lumbricus rubellus) increase water conductivity and the diffuse distribution of water and solutes in the upper soil layers. The question of which processes are more relevant is pivotal for soil management and risk assessment. Thus, finding relevant

  7. Radial transport processes as a precursor to particle deposition in drinking water distribution systems.

    Science.gov (United States)

    van Thienen, P; Vreeburg, J H G; Blokker, E J M

    2011-02-01

    Various particle transport mechanisms play a role in the build-up of discoloration potential in drinking water distribution networks. In order to enhance our understanding of and ability to predict this build-up, it is essential to recognize and understand their role. Gravitational settling with drag has primarily been considered in this context. However, since flow in water distribution pipes is nearly always in the turbulent regime, turbulent processes should be considered also. In addition to these, single particle effects and forces may affect radial particle transport. In this work, we present an application of a previously published turbulent particle deposition theory to conditions relevant for drinking water distribution systems. We predict quantitatively under which conditions turbophoresis, including the virtual mass effect, the Saffman lift force, and the Magnus force may contribute significantly to sediment transport in radial direction and compare these results to experimental observations. The contribution of turbophoresis is mostly limited to large particles (>50 μm) in transport mains, and not expected to play a major role in distribution mains. The Saffman lift force may enhance this process to some degree. The Magnus force is not expected to play any significant role in drinking water distribution systems. © 2010 Elsevier Ltd. All rights reserved.

  8. Robustness of trait distribution metrics for community assembly studies under the uncertainties of assembly processes.

    Science.gov (United States)

    Aiba, Masahiro; Katabuchi, Masatoshi; Takafumi, Hino; Matsuzaki, Shin-Ichiro S; Sasaki, Takehiro; Hiura, Tsutom

    2013-12-01

    Numerous studies have revealed the existence of nonrandom trait distribution patterns as a sign of environmental filtering and/or biotic interactions in a community assembly process. A number of metrics with various algorithms have been used to detect these patterns without any clear guidelines. Although some studies have compared their statistical powers, the differences in performance among the metrics under the conditions close to actual studies are not clear. Therefore, the performances of five metrics of convergence and 16 metrics of divergence under alternative conditions were comparatively analyzed using a suite of simulated communities. We focused particularly on the robustness of the performances to conditions that are often uncertain and uncontrollable in actual studies; e.g., atypical trait distribution patterns stemming from the operation of multiple assembly mechanisms, a scaling of trait-function relationships, and a sufficiency of analyzed traits. Most tested metrics, for either convergence or divergence, had sufficient statistical power to distinguish nonrandom trait distribution patterns without uncertainty. However, the performances of the metrics were considerably influenced by both atypical trait distribution patterns and other uncertainties. Influences from these uncertainties varied among the metrics of different algorithms and their performances were often complementary. Therefore, under the uncertainties of an assembly process, the selection of appropriate metrics and the combined use of complementary metrics are critically important to reliably distinguish nonrandom patterns in a trait distribution. We provide a tentative list of recommended metrics for future studies.
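
    A generic example of the kind of metric being compared is a standardized effect size (SES) of community trait variance against a randomization null model: strongly negative SES suggests convergence (filtering), strongly positive SES suggests divergence. The Python sketch below uses a synthetic species pool and arbitrary sizes; it matches none of the 21 tested metrics exactly.

        # SES of community trait variance against a random-assembly null model.
        import numpy as np

        rng = np.random.default_rng(3)
        pool = rng.normal(0.0, 1.0, size=200)     # trait values of the species pool
        community = rng.choice(pool, size=20, replace=False)

        obs = np.var(community)
        null = np.array([np.var(rng.choice(pool, size=20, replace=False))
                         for _ in range(999)])    # null distribution of the metric
        ses = (obs - null.mean()) / null.std()
        print(f"SES = {ses:.2f}")                 # << 0: convergence; >> 0: divergence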

  9. Distributed dendritic processing facilitates object detection: a computational analysis on the visual system of the fly.

    Directory of Open Access Journals (Sweden)

    Patrick Hennig

    Full Text Available BACKGROUND: Detecting objects is an important task when moving through a natural environment. Flies, for example, may land on salient objects or may avoid collisions with them. The neuronal ensemble of Figure Detection cells (FD-cells) in the visual system of the fly is likely to be involved in controlling these behaviours, as these cells are more sensitive to objects than to extended background structures. Until now the computations in the presynaptic neuronal network of FD-cells and, in particular, the functional significance of the experimentally established distributed dendritic processing of excitatory and inhibitory inputs is not understood. METHODOLOGY/PRINCIPAL FINDINGS: We use model simulations to analyse the neuronal computations responsible for the preference of FD-cells for small objects. We employed a new modelling approach which allowed us to account for the spatial spread of electrical signals in the dendrites while avoiding detailed compartmental modelling. The models are based on available physiological and anatomical data. Three models were tested, each implementing an inhibitory neural circuit but differing by the spatial arrangement of the inhibitory interaction. Parameter optimisation with an evolutionary algorithm revealed that only distributed dendritic processing satisfies the constraints arising from electrophysiological experiments. In contrast to a direct dendro-dendritic inhibition of the FD-cell (Direct Distributed Inhibition model), an inhibition of its presynaptic retinotopic elements (Indirect Distributed Inhibition model) requires smaller changes in input resistance in the inhibited neurons during visual stimulation. CONCLUSIONS/SIGNIFICANCE: Distributed dendritic inhibition of retinotopic elements as implemented in our Indirect Distributed Inhibition model is the most plausible wiring scheme for the neuronal circuit of FD-cells. This microcircuit is computationally similar to lateral inhibition between the

  10. Distributed dendritic processing facilitates object detection: a computational analysis on the visual system of the fly.

    Science.gov (United States)

    Hennig, Patrick; Möller, Ralf; Egelhaaf, Martin

    2008-08-28

    Detecting objects is an important task when moving through a natural environment. Flies, for example, may land on salient objects or may avoid collisions with them. The neuronal ensemble of Figure Detection cells (FD-cells) in the visual system of the fly is likely to be involved in controlling these behaviours, as these cells are more sensitive to objects than to extended background structures. Until now the computations in the presynaptic neuronal network of FD-cells and, in particular, the functional significance of the experimentally established distributed dendritic processing of excitatory and inhibitory inputs is not understood. We use model simulations to analyse the neuronal computations responsible for the preference of FD-cells for small objects. We employed a new modelling approach which allowed us to account for the spatial spread of electrical signals in the dendrites while avoiding detailed compartmental modelling. The models are based on available physiological and anatomical data. Three models were tested each implementing an inhibitory neural circuit, but differing by the spatial arrangement of the inhibitory interaction. Parameter optimisation with an evolutionary algorithm revealed that only distributed dendritic processing satisfies the constraints arising from electrophysiological experiments. In contrast to a direct dendro-dendritic inhibition of the FD-cell (Direct Distributed Inhibition model), an inhibition of its presynaptic retinotopic elements (Indirect Distributed Inhibition model) requires smaller changes in input resistance in the inhibited neurons during visual stimulation. Distributed dendritic inhibition of retinotopic elements as implemented in our Indirect Distributed Inhibition model is the most plausible wiring scheme for the neuronal circuit of FD-cells. This microcircuit is computationally similar to lateral inhibition between the retinotopic elements. Hence, distributed inhibition might be an alternative explanation of

  11. High-precision work distributions for extreme nonequilibrium processes in large systems.

    Science.gov (United States)

    Hartmann, Alexander K

    2014-05-01

    The distributions of work for strongly nonequilibrium processes are studied using a very general form of a large-deviation approach, which allows one to study distributions down to extremely small probabilities of almost arbitrary quantities of interest for equilibrium, nonequilibrium stationary, and even nonstationary processes. The method is applied to a rapid variation of the external field over the wide range B = 3 ↔ 0 for a critical (T = 2.269) two-dimensional Ising system of size L × L = 128 × 128. To obtain free-energy differences from the work distributions, they must be studied in ranges where the probabilities are as small as 10^{-240}, which is not possible using direct simulation approaches. By comparison with the exact free energies, which are available for this model in the zero-field case, one sees that the present approach allows one to obtain the free energy with a very high relative precision of 10^{-4}. This works well also for a nonzero field, i.e., for a case where standard umbrella-sampling methods are not efficient for calculating free energies. Furthermore, for the present case it is verified that the resulting distributions of work for forward and backward processes fulfill the Crooks theorem with high precision. Finally, the free energy for the Ising magnet as a function of the field strength is obtained.
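
    The rare-event difficulty that motivates the large-deviation sampling is easy to demonstrate in a toy setting: for a Gaussian work distribution at beta = 1, Jarzynski's equality gives Delta F = -ln<exp(-W)> = mu - sigma^2/2 exactly, yet a direct sample average is dominated by rare low-work values. The Python sketch below uses arbitrary parameters and is not the paper's method.

        # Jarzynski estimate of Delta F from directly sampled Gaussian work values.
        import numpy as np

        rng = np.random.default_rng(4)
        mu, sigma = 4.0, 2.0
        W = rng.normal(mu, sigma, size=1_000_000)   # forward work samples, beta = 1
        dF_est = -np.log(np.mean(np.exp(-W)))
        print("estimate:", dF_est, "exact:", mu - sigma**2 / 2)
        # The estimator is noisy because it is dominated by rare W << mu; larger
        # sigma makes direct sampling hopeless, hence large-deviation approaches.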

  12. Power Distribution Analysis For Electrical Usage In Province Area Using Olap (Online Analytical Processing)

    Science.gov (United States)

    Samsinar, Riza; Suseno, Jatmiko Endro; Widodo, Catur Edi

    2018-02-01

    The distribution network is the part of the power grid closest to the customer for electric service providers such as PT. PLN. The dispatching center of a power grid company is also the data center of the power grid, where a great amount of operating information is gathered. The valuable information contained in these data means a lot for power grid operating management. The technique of data warehousing with online analytical processing (OLAP) has been used to manage and analyse this great volume of data. The specific methods for the online analytics information system resulting from data warehouse processing with OLAP are chart and query reporting. The chart reporting consists of the load distribution chart over time, the distribution chart by area, the substation region chart, and the electric load usage chart. The results of the OLAP process show the development of electric load distribution, provide analysis of electric power consumption loads, and become an alternative way of presenting information related to peak load.
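
    The kind of roll-up such an OLAP layer provides can be sketched with a pandas pivot table; this is an illustrative analogue with invented data, not the paper's toolchain.

        # OLAP-style roll-up of electrical load by area and month, with totals.
        import pandas as pd

        df = pd.DataFrame({
            "area":    ["A", "A", "B", "B", "B", "C"],
            "month":   ["Jan", "Feb", "Jan", "Feb", "Feb", "Jan"],
            "load_mw": [120.5, 130.2, 98.1, 101.7, 99.3, 75.0],
        })
        cube = df.pivot_table(values="load_mw", index="area", columns="month",
                              aggfunc="sum", margins=True)   # roll-up with totals
        print(cube)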

  13. Power Distribution Analysis For Electrical Usage In Province Area Using Olap (Online Analytical Processing

    Directory of Open Access Journals (Sweden)

    Samsinar Riza

    2018-01-01

    Full Text Available The distribution network is the part of the power grid closest to the customer for electric service providers such as PT. PLN. The dispatching center of a power grid company is also the data center of the power grid, where a great amount of operating information is gathered. The valuable information contained in these data means a lot for power grid operating management. The technique of data warehousing with online analytical processing (OLAP) has been used to manage and analyse this great volume of data. The specific methods for the online analytics information system resulting from data warehouse processing with OLAP are chart and query reporting. The chart reporting consists of the load distribution chart over time, the distribution chart by area, the substation region chart, and the electric load usage chart. The results of the OLAP process show the development of electric load distribution, provide analysis of electric power consumption loads, and become an alternative way of presenting information related to peak load.

  14. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Full Text Available Process capability indices are very important process quality assessment tools in automotive industries. The common process capability indices (PCIs) Cp, Cpk and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed under the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed as surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture true capability in non-normal situations. In this paper, five methods are reviewed and capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results reveal that the Burr-based percentile method is better than Clements' method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
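
    One of the reviewed ideas, the Box-Cox transformation approach, can be sketched as follows: transform the skewed data toward normality, map the specification limits with the same lambda, and compute Cpk conventionally. The Python example below uses synthetic skewed data and invented specification limits, not the silicon-wafer data of the paper.

        # Cpk after a Box-Cox transformation of skewed data (illustrative).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        x = rng.lognormal(mean=0.0, sigma=0.4, size=500)   # skewed "quality" data
        LSL, USL = 0.4, 3.0                                # invented spec limits

        xt, lam = stats.boxcox(x)                          # transformed data, lambda

        def bc(v, lam):                                    # same transform for limits
            return (v**lam - 1) / lam if lam != 0 else np.log(v)

        mu, s = xt.mean(), xt.std(ddof=1)
        cpk = min(bc(USL, lam) - mu, mu - bc(LSL, lam)) / (3 * s)
        print(f"lambda = {lam:.3f}, Cpk = {cpk:.3f}")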

  15. Process for forming integral edge seals in porous gas distribution plates utilizing a vibratory means

    Science.gov (United States)

    Feigenbaum, Haim (Inventor); Pudick, Sheldon (Inventor)

    1988-01-01

    A process for forming an integral edge seal in a gas distribution plate for use in a fuel cell. A seal layer is formed along an edge of a porous gas distribution plate by impregnating the pores in the layer with a material adapted to provide a seal which is operative dry or when wetted by an electrolyte of a fuel cell. Vibratory energy is supplied to the sealing material during the step of impregnating the pores to provide a more uniform seal throughout the cross section of the plate.

  16. Distribution ratios on Dowex 50W resins of metal leached in the caron nickel recovery process

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, B.A.; Metsa, J.C.; Mullins, M.E.

    1980-05-01

    Pressurized ion exchange on Dowex 50W-X8 and 50W-X12 resins was investigated using elution techniques to determine distribution ratios for copper, nickel, and cobalt complexes contained in ammonium carbonate solution, a mixture which approximates the waste liquor from the Caron nickel recovery process. Results were determined for different feed concentrations, as well as for different concentrations and pH values of the ammonium carbonate eluant. Distribution ratios were compared with those previously obtained from a continuous annular chromatographic system. Separation of copper and nickel was not conclusively observed at any of the conditions examined.

  17. The Particle Distribution in Liquid Metal with Ceramic Particles Mould Filling Process

    Science.gov (United States)

    Dong, Qi; Xing, Shu-ming

    2017-09-01

    Adding ceramic particles to the plate hammer is an effective method of increasing the wear resistance of the hammer. The liquid-phase method considered here is flow-mixing liquid forging for the preparation of a ZTA ceramic particle reinforced high-chromium cast iron hammer. For this preparation route, CFD simulation is used to analyse the particle distribution during the flow-mixing and mould-filling process. Taking a hammer with a 30% volume fraction of ZTA ceramic particles in high-chromium cast iron as an example, the particle distribution before solidification is controlled and reasonably predicted by varying the speed and viscosity of the liquid metal.

  18. Distributed control and monitoring of high-level trigger processes on the LHCb online farm

    CERN Document Server

    Vannerem, P; Jost, B; Neufeld, N

    2003-01-01

    The on-line data taking of the LHCb experiment at the future LHC collider will be controlled by a fully integrated and distributed Experiment Control System (ECS). The ECS will supervise both the detector operation (DCS) and the trigger and data acquisition (DAQ) activities of the experiment. These tasks require a large distributed information management system. The aim of this paper is to show how the control and monitoring of software processes such as trigger algorithms are integrated in the ECS of LHCb.

  19. Distribution ratios on Dowex 50W resins of metal leached in the caron nickel recovery process

    International Nuclear Information System (INIS)

    Reynolds, B.A.; Metsa, J.C.; Mullins, M.E.

    1980-05-01

    Pressurized ion exchange on Dowex 50W-X8 and 50W-X12 resins was investigated using elution techniques to determine distribution ratios for copper, nickel, and cobalt complexes contained in ammonium carbonate solution, a mixture which approximates the waste liquor from the Caron nickel recovery process. Results were determined for different feed concentrations, as well as for different concentrations and pH values of the ammonium carbonate eluant. Distribution ratios were compared with those previously obtained from a continuous annular chromatographic system. Separation of copper and nickel was not conclusively observed at any of the conditions examined

  20. Enabling Chemistry of Gases and Aerosols for Assessment of Short-Lived Climate Forcers: Improving Solar Radiation Modeling in the DOE-ACME and CESM models

    Energy Technology Data Exchange (ETDEWEB)

    Prather, Michael [University of California, Irvine

    2018-01-12

    This proposal seeks to maintain the DOE-ACME (offshoot of CESM) as one of the leading CCMs to evaluate near-term climate mitigation. It will implement, test, and optimize the new UCI photolysis codes within CESM CAM5 and new CAM versions in ACME. Fast-J is a high-order-accuracy (8 stream) code for calculating solar scattering and absorption in a single column atmosphere containing clouds, aerosols, and gases that was developed at UCI and implemented in CAM5 under the previous BER/SciDAC grant.

  1. Research on distributed optical fiber sensing data processing method based on LabVIEW

    Science.gov (United States)

    Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing

    2018-01-01

    The pipeline leak detection and leak location problem has received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline, and the data processing method for distributed optical fiber sensing based on LabVIEW is studied in detail. The hardware system includes the laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card, and computer. The software system, developed in LabVIEW, adopts a wavelet denoising method for the temperature information, which improves the SNR. By extracting characteristic values from the fiber temperature information, the system realizes temperature measurement, leak location, and storage and querying of measurement signals. Compared with the traditional negative pressure wave or acoustic signal methods, the distributed optical fiber temperature measuring system can measure several temperatures in one measurement and locate the leak point accurately. It has broad application prospects.
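
    A Python analogue of the wavelet-denoising step can clarify what the software does; the actual system is implemented in LabVIEW, and the wavelet, decomposition level, and universal threshold below are illustrative assumptions.

        # Soft-threshold wavelet denoising of a synthetic fiber temperature trace.
        import numpy as np
        import pywt

        rng = np.random.default_rng(6)
        t = np.linspace(0, 1, 2048)
        clean = 25 + 5 * np.exp(-((t - 0.6) ** 2) / 0.001)   # synthetic hot spot
        noisy = clean + rng.normal(0, 0.8, t.size)

        coeffs = pywt.wavedec(noisy, "db4", level=5)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise scale estimate
        thr = sigma * np.sqrt(2 * np.log(noisy.size))        # universal threshold
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        denoised = pywt.waverec(coeffs, "db4")[: noisy.size]
        print("residual std after denoising:", np.std(denoised - clean))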

  2. Development of laboratory and process sensors to monitor particle size distribution of industrial slurries

    Energy Technology Data Exchange (ETDEWEB)

    Pendse, H.P.

    1992-10-01

    In this paper we present a novel measurement technique for monitoring particle size distributions of industrial colloidal slurries based on ultrasonic spectroscopy and mathematical deconvolution. An on-line sensor prototype has been developed and tested extensively in laboratory and production settings using mineral pigment slurries. Evaluation to date shows that the sensor is capable of providing particle size distributions, without any assumptions regarding their functional form, over diameters ranging from 0.1 to 100 micrometers in slurries with particle concentrations of 10 to 50 volume percent. The newly developed on-line sensor allows one to obtain particle size distributions of commonly encountered inorganic pigment slurries under industrial processing conditions without dilution.

  3. Time-headway distribution for periodic totally asymmetric exclusion process with various updates

    Science.gov (United States)

    Hrabák, P.; Krbálek, M.

    2016-03-01

    The totally asymmetric exclusion process (TASEP) with periodic boundaries is considered as a traffic flow model. The large-L approximation of the stationary state is used for the derivation of the time-headway distribution (an important microscopic characteristic of traffic flow) for the model with generalized update (genTASEP) in both the forward- and backward-sequential representations. The commonly used updates (fully-parallel and regular forward- and backward-sequential) are analyzed as special cases of the genTASEP. It is shown that only for those cases is the time-headway distribution determined by the flow regardless of the density. A qualitative comparison of the results with traffic data demonstrates that the genTASEP with backward order and attractive interaction exhibits time-headway distribution properties similar to those of real traffic samples.
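
    A minimal simulation of the fully-parallel TASEP on a ring makes the measured quantity concrete: the time-headway is the interval between successive particles crossing a fixed bond. In the Python sketch below, the lattice size, density, and hopping probability are arbitrary choices.

        # Fully-parallel TASEP on a ring; collect time-headways at one bond.
        import numpy as np

        rng = np.random.default_rng(7)
        L, density, p, steps = 500, 0.3, 0.8, 20_000
        occ = np.zeros(L, dtype=bool)
        occ[rng.choice(L, int(density * L), replace=False)] = True

        headways, last_pass = [], None
        for t in range(steps):
            can_hop = occ & ~np.roll(occ, -1)        # particle with empty right site
            hops = can_hop & (rng.random(L) < p)     # simultaneous (parallel) update
            occ = (occ & ~hops) | np.roll(hops, 1)   # move hopping particles right
            if hops[0]:                              # a particle crossed bond 0 -> 1
                if last_pass is not None:
                    headways.append(t - last_pass)
                last_pass = t
        print("mean time-headway:", np.mean(headways))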

  4. The birth-death-mutation process: a new paradigm for fat tailed distributions.

    Science.gov (United States)

    Maruvka, Yosef E; Kessler, David A; Shnerb, Nadav M

    2011-01-01

    Fat-tailed statistics and power laws are ubiquitous in many complex systems. Usually the appearance of a few anomalously successful individuals (bio-species, investors, websites) is interpreted as reflecting some inherent "quality" (fitness, talent, giftedness), as in Darwin's theory of natural selection. Here we adopt the opposite, "neutral" outlook, suggesting that the main factor explaining success is merely luck. The statistics emerging from the neutral birth-death-mutation (BDM) process is shown to fit many empirical distributions remarkably well. While previous neutral theories have focused on the power-law tail, our theory economically and accurately explains the entire distribution. We thus suggest the BDM distribution as a standard neutral model: effects of fitness and selection are to be identified by substantial deviations from it.
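
    The neutral BDM process is straightforward to simulate. The sketch below tracks the family label of every individual and shows a heavy-tailed family-size distribution emerging; the event probabilities and run length are arbitrary assumptions, not the paper's fitted values.

        # Neutral birth-death-mutation simulation; family sizes develop a fat tail.
        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(8)
        birth, death, mut = 0.50, 0.45, 0.05     # per-event probabilities (sum to 1)
        individuals = [0]                        # family label of each individual
        next_fam = 1
        for _ in range(100_000):
            i = int(rng.integers(len(individuals)))
            r = rng.random()
            if r < birth:                        # birth: copy within the family
                individuals.append(individuals[i])
            elif r < birth + death:              # death: O(1) swap-and-pop removal
                individuals[i] = individuals[-1]
                individuals.pop()
                if not individuals:              # re-seed if the population dies out
                    individuals = [next_fam]; next_fam += 1
            else:                                # mutation founds a new family
                individuals.append(next_fam); next_fam += 1

        sizes = sorted(Counter(individuals).values(), reverse=True)
        print("number of families:", len(sizes), "largest:", sizes[:5])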

  5. Energy Consumption in the Process of Excavator-Automobile Complexes Distribution at Kuzbass Open Pit Mines

    Directory of Open Access Journals (Sweden)

    Panachev Ivan

    2017-01-01

    Full Text Available Every year, coal mining companies worldwide seek to keep renewing their fleets of mining machines, and various activities are implemented to extend the service life of equipment already in operation. In this regard, an urgent issue is the efficient distribution of the available machines across different geological conditions. The problem of effectively distributing "excavator-automobile" complexes arises when heavy dump trucks are used in mining. For this reason, excavation and transportation of the blasted rock mass are the most labor-intensive and costly processes, considering the volumes of transported overburden and coal as well as the costs of diesel fuel, electricity, fuels and lubricants, consumables for repair work, downtime, etc. Currently, it is recommended to take the number of loading buckets in the range of 3 to 5, according to which the dump trucks are distributed to faces.

  6. Methods, media and systems for managing a distributed application running in a plurality of digital processing devices

    Science.gov (United States)

    Laadan, Oren; Nieh, Jason; Phung, Dan

    2012-10-02

    Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.

  7. Phase distribution measurements in narrow rectangular channels using image processing techniques

    International Nuclear Information System (INIS)

    Bentley, C.; Ruggles, A.

    1991-01-01

    Many high flux research reactor fuel assemblies are cooled by systems of parallel narrow rectangular channels. The HFIR is cooled by single-phase forced convection under normal operating conditions. However, two-phase forced convection or two-phase mixed convection can occur in the fueled region as a result of some hypothetical accidents. Such flow conditions would occur only at decay power levels, with a system pressure of around 0.15 MPa. The phase distribution of air-water flow in a narrow rectangular channel is examined using image processing techniques. Ink is added to the water, and clear channel walls are used to allow high speed still photographs and video tape to be taken of the air-water flow field. Flow field images are digitized and stored in a Macintosh IIci computer using a frame grabber board. Local grey levels are related to liquid thickness in the flow channel using a calibration fixture. Image processing shareware is used to calculate the spatially averaged liquid thickness from the image of the flow field. Time-averaged spatial liquid distributions are calculated using image calculation algorithms, and the spatially averaged liquid distribution is calculated from the time-averaged spatial liquid distribution to give combined temporally and spatially averaged liquid fraction values. The temporally and spatially averaged liquid fractions measured using this technique compare well to those predicted from pressure gradient measurements at zero superficial liquid velocity
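
    The averaging chain described above (grey level to liquid thickness via calibration, then spatial and temporal means) can be sketched in a few lines. The linear calibration and the channel gap below are hypothetical stand-ins for the paper's calibration fixture.

        # Time-and-space averaged liquid fraction from a stack of digitized frames.
        import numpy as np

        rng = np.random.default_rng(10)
        frames = rng.integers(0, 256, size=(100, 64, 256)).astype(float)  # t, y, x
        gap = 2.0                                    # channel gap in mm (assumed)

        def grey_to_thickness(g, a=0.0078, b=0.0):   # hypothetical linear calibration
            return np.clip(a * g + b, 0.0, gap)      # mm of liquid along the gap

        thickness = grey_to_thickness(frames)
        spatial_mean = thickness.mean(axis=(1, 2))   # per-frame spatial average
        liquid_fraction = spatial_mean.mean() / gap  # combined temporal average
        print(f"averaged liquid fraction: {liquid_fraction:.3f}")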

  8. Design and simulation of parallel and distributed architectures for images processing

    International Nuclear Information System (INIS)

    Pirson, Alain

    1990-01-01

    The exploitation of visual information requires special computers. The diversity of operations and the computing power involved bring about structures founded on the concepts of concurrency and distributed processing. This work identifies a vision computer with an association of dedicated intelligent entities exchanging messages according to the model of parallelism introduced by the language Occam. It puts forward an architecture of the 'enriched processor network' type, consisting of a classical multiprocessor structure in which each node is provided with specific devices that perform processing tasks as well as inter-node dialogues. Such an architecture benefits from the homogeneity of multiprocessor networks and the power of dedicated resources. Its implementation corresponds to that of a distributed structure, with tasks allocated to each computing element. This approach culminates in an original architecture called ATILA, a modular structure based on a transputer network supplied with vision-dedicated co-processors and powerful communication devices. (author) [fr]

  9. Tomographic radiotracer studies of the spatial distribution of heterogeneous geochemical transport processes

    International Nuclear Information System (INIS)

    Gruendig, Marion; Richter, Michael; Seese, Anita; Sabri, Osama

    2007-01-01

    In the further development of geochemical transport models, accounting for the influence of the heterogeneous structures of the geological layers plays an important role. For the verification and parameter estimation of such models it is necessary to measure the heterogeneous transport and sorption processes inside the samples. Tomographic radiotracer methods (positron emission tomography, PET) enable nondestructive, spatially resolved observations of the transport processes in these layers. A special quantitative evaluation system for geoscientific PET studies was developed. Investigations of the water flow distribution in a drill core from a lignite mining dump and of the migration of Cu ions in a horizontal soil column illustrate the potential of this method. Spatial distribution functions of the flow velocity, the specific mass flow and the longitudinal dispersivity were determined on the basis of PET investigations

  10. Medication errors in residential aged care facilities: a distributed cognition analysis of the information exchange process.

    Science.gov (United States)

    Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna

    2013-05-01

    Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight about the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors, namely: (1) design of medication charts, which complicates order processing and record keeping; (2) lack of coordination mechanisms between participants, which results in misalignment of local practices; and (3) reliance on restricted communication bandwidth channels, mainly telephone and fax, which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange.

  11. TCP (truncated compound poisson) process for multiplicity distributions in high energy collisions

    International Nuclear Information System (INIS)

    Srivastava, P.P.

    1989-01-01

    On using the Poisson distribution truncated at zero for intermediate cluster decay in a compound Poisson process we obtain the TCP distribution, which describes quite well the multiplicity distributions in high energy collisions. A detailed comparison is made between TCP and the negative binomial (NB) distribution for UA5 data. The reduced moments up to the fifth agree very well with the observed ones. The TCP curves are narrower than NB at the high multiplicity tail, look narrower at very high energy and develop shoulders and oscillations which become increasingly pronounced as the energy grows. At lower energies the curves are very close to the NB ones. We also compare the parameterizations of the data by these two distributions for fixed rapidity intervals of the UA5 data and for low-energy e+e- annihilation and pion-proton data. A discussion of the compound Poisson distribution, expressions for the reduced moments, and Poisson transforms are also given. The TCP curves and curves of the reduced moments for different values of the parameters are also presented. (author)
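
    The construction is easy to simulate: draw a Poisson number of clusters, then let each cluster decay into a zero-truncated Poisson number of particles. The following sketch (parameter values invented) reproduces the TCP multiplicity law by Monte Carlo:

        import numpy as np

        rng = np.random.default_rng(0)

        def zero_truncated_poisson(mu, rng):
            # Rejection sampling: redraw until a nonzero value appears.
            while True:
                k = rng.poisson(mu)
                if k > 0:
                    return k

        def tcp_sample(lam, mu, rng):
            """One TCP multiplicity draw: Poisson(lam) clusters, each
            decaying into a zero-truncated Poisson(mu) number of particles."""
            n_clusters = rng.poisson(lam)
            return sum(zero_truncated_poisson(mu, rng) for _ in range(n_clusters))

        samples = [tcp_sample(lam=8.0, mu=2.5, rng=rng) for _ in range(10000)]
        print(np.mean(samples), np.var(samples))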

  12. State-Level Comparison of Processes and Timelines for Distributed Photovoltaic Interconnection in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Ardani, K. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Davidson, C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Nobler, E. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-01-01

    This report presents results from an analysis of distributed photovoltaic (PV) interconnection and deployment processes in the United States. Using data from more than 30,000 residential (up to 10 kilowatts) and small commercial (10-50 kilowatts) PV systems, installed from 2012 to 2014, we assess the range in project completion timelines nationally (across 87 utilities in 16 states) and in five states with active solar markets (Arizona, California, New Jersey, New York, and Colorado).

  13. How Are Distributed Groups Affected by an Imposed Structuring of their Decision-Making Process?

    DEFF Research Database (Denmark)

    Lundell, Anders Lorentz; Hertzum, Morten

    2011-01-01

    Groups often suffer from ineffective communication and decision making. This experimental study compares distributed groups solving a preference task with support from either a communication system or a system providing both communication and a structuring of the decision-making process. Results show that groups using the latter system spend more time solving the task, spend more of their time on solution analysis, spend less of their time on disorganized activity, and arrive at task solutions with less extreme preferences. Thus, the type of system affects the decision-making process as well...

  14. Characterization of the marginal distributions of Markov processes used in dynamic reliability

    Directory of Open Access Journals (Sweden)

    2006-01-01

    In dynamic reliability, the evolution of a system is described by a piecewise deterministic Markov process (I_t, X_t)_{t≥0} with state-space E × ℝ^d, where E is finite. The main result of the present paper is the characterization of the marginal distribution of the Markov process (I_t, X_t)_{t≥0} at time t, as the unique solution of a set of explicit integro-differential equations, which can be seen as a weak form of the Chapman-Kolmogorov equation. Uniqueness is the difficult part of the result.
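
    For intuition, such a process can be simulated directly: the discrete mode jumps at random times while the continuous component flows deterministically in between. The toy Python sketch below (two modes with constant switching rates and velocities, all invented for illustration) estimates the marginal law at a fixed time by Monte Carlo rather than by the paper's integro-differential equations:

        import numpy as np

        rng = np.random.default_rng(1)
        q = {0: 0.5, 1: 1.0}    # mode switching rates
        v = {0: +1.0, 1: -0.5}  # deterministic drift in each mode

        def sample_path(t_end, rng):
            t, i, x = 0.0, 0, 0.0
            while True:
                dt = rng.exponential(1.0 / q[i])   # time to next jump
                if t + dt >= t_end:
                    return i, x + v[i] * (t_end - t)
                x += v[i] * dt
                t += dt
                i = 1 - i                          # switch mode

        draws = [sample_path(5.0, rng) for _ in range(20000)]
        print("P(I_5 = 0) ~", np.mean([i == 0 for i, _ in draws]))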

  15. Distributed Processing with a Mainframe-Based Hospital Information System: A Generalized Solution

    Science.gov (United States)

    Kirby, J. David; Pickett, Michael P.; Boyarsky, M. William; Stead, William W.

    1987-01-01

    Over the last two years the Medical Center Information Systems Department at Duke University Medical Center has been developing a systematic approach to distributing the processing and data involved in computerized applications at DUMC. The resulting system has been named MAPS, the Micro-ADS Processing System. A key characteristic of MAPS is that it makes it easy to execute any existing mainframe ADS application with a request from a PC. This extends the functionality of the mainframe application set to the PC without compromising the maintainability of the PC or mainframe systems.

  16. Reconfigurable Optical Signal Processing Based on a Distributed Feedback Semiconductor Optical Amplifier.

    Science.gov (United States)

    Li, Ming; Deng, Ye; Tang, Jian; Sun, Shuqian; Yao, Jianping; Azaña, José; Zhu, Ninghua

    2016-01-27

    All-optical signal processing has been considered a solution to overcome the bandwidth and speed limitations imposed by conventional electronic-based systems. Over the last few years, an impressive range of all-optical signal processors have been proposed, but few of them come with reconfigurability, a feature highly needed for practical signal processing applications. Here we propose and experimentally demonstrate an analog optical signal processor based on a phase-shifted distributed feedback semiconductor optical amplifier (DFB-SOA) and an optical filter. The proposed analog optical signal processor can be reconfigured to perform signal processing functions including ordinary differential equation solving and temporal intensity differentiation. The reconfigurability is achieved by controlling the injection currents. Our demonstration provides a simple and effective solution for all-optical signal processing and computing.
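
    Numerically, the two functions this processor reconfigures between can be emulated in a few lines; in the sketch below (all constants invented for illustration) a first-order ordinary differential equation dy/dt + a*y = x(t) is solved by forward Euler, and the temporal intensity derivative is taken by finite differences:

        import numpy as np

        t = np.linspace(0.0, 1e-9, 2001)            # 1 ns window
        x = np.exp(-((t - 0.5e-9) / 0.1e-9) ** 2)   # input intensity envelope
        a, dt = 5e9, t[1] - t[0]

        # Mode 1: ODE solving, dy/dt + a*y = x(t), via forward Euler.
        y = np.zeros_like(x)
        for n in range(len(t) - 1):
            y[n + 1] = y[n] + dt * (x[n] - a * y[n])

        # Mode 2: temporal intensity differentiation.
        dx = np.gradient(x, dt)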

  18. Efficient LIDAR Point Cloud Data Managing and Processing in a Hadoop-Based Distributed Framework

    Science.gov (United States)

    Wang, C.; Hu, F.; Sha, D.; Han, X.

    2017-10-01

    Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and others. However, it is challenging to efficiently store, query and analyze the high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, which takes advantage of Hadoop's storage and computing ability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to conduct the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experiment results show that the proposed framework can efficiently manage and process big LiDAR data.
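
    The MapReduce pattern the framework relies on can be caricatured in plain Python: the map phase keys each point by a spatial tile, and the reduce phase aggregates a per-tile statistic. This sketches only the pattern, not the Hadoop/PCL integration itself; the tile size and mean-height statistic are arbitrary choices.

        from collections import defaultdict

        def map_point(point, tile=10.0):
            """Map phase: key an (x, y, z) point by its 2-D tile index."""
            x, y, z = point
            return (int(x // tile), int(y // tile)), z

        def reduce_tiles(points):
            """Reduce phase: aggregate a per-tile statistic (mean height)."""
            groups = defaultdict(list)
            for p in points:
                key, z = map_point(p)
                groups[key].append(z)
            return {key: sum(zs) / len(zs) for key, zs in groups.items()}

        print(reduce_tiles([(1.0, 2.0, 5.0), (3.0, 4.0, 7.0), (15.0, 2.0, 9.0)]))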

  19. A convergent model for distributed processing of Big Sensor Data in urban engineering networks

    Science.gov (United States)

    Parygin, D. S.; Finogeev, A. G.; Kamaev, V. A.; Finogeev, A. A.; Gnedkova, E. P.; Tyukov, A. P.

    2017-01-01

    The problems of development and research of a convergent model of grid, cloud, fog and mobile computing for analytical Big Sensor Data processing are reviewed. The model is meant to create monitoring systems for spatially distributed objects of urban engineering networks and processes. The proposed approach is a convergence model for organizing distributed data processing. The fog computing model is used for the processing and aggregation of sensor data at the network nodes and/or industrial controllers. Program agents are loaded to perform computing tasks for primary processing and data aggregation. The grid and cloud computing models are used for mining and accumulating integral indicators. The computing cluster has a three-tier architecture, which includes the main server at the first level, a cluster of SCADA system servers at the second level, and an array of GPU video cards supporting the Compute Unified Device Architecture at the third level. The mobile computing model is applied to visualize the results of intelligent analysis with elements of augmented reality and geo-information technologies. The integrated indicators are transferred to the data center for accumulation in a multidimensional storage for the purpose of data mining and knowledge gaining.

  1. Combining Different Tools for EEG Analysis to Study the Distributed Character of Language Processing.

    Science.gov (United States)

    Rocha, Armando Freitas da; Foz, Flávia Benevides; Pereira, Alfredo

    2015-01-01

    Recent studies on language processing indicate that language cognition is better understood if assumed to be supported by a distributed intelligent processing system enrolling neurons located all over the cortex, in contrast to reductionism that proposes to localize cognitive functions to specific cortical structures. Here, brain activity was recorded using electroencephalogram while volunteers were listening to or reading small texts and had to select pictures that translate the meaning of these texts. Several techniques for EEG analysis were used to show this distributed character of neuronal enrollment associated with the comprehension of oral and written descriptive texts. Low Resolution Tomography identified the many different sets (s_i) of neurons activated in several distinct cortical areas by text understanding. Linear correlation was used to calculate the information H(e_i) provided by each electrode of the 10/20 system about the identified s_i. H(e_i) Principal Component Analysis (PCA) was used to study the temporal and spatial activation of these sources s_i. This analysis evidenced 4 different patterns of H(e_i) covariation that are generated by neurons located at different cortical locations. These results show that the distributed character of language processing is clearly evidenced by combining available EEG technologies.
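
    The PCA step can be reproduced on any T x 19 matrix of per-electrode information values; the sketch below uses synthetic data in place of the study's H(e_i) series and plain numpy in place of a dedicated EEG package:

        import numpy as np

        rng = np.random.default_rng(2)
        H = rng.normal(size=(500, 19))      # stand-in for H(e_i) time series

        Hc = H - H.mean(axis=0)             # center each electrode series
        cov = np.cov(Hc, rowvar=False)      # 19 x 19 covariance of H(e_i)
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]   # sort components by variance
        explained = eigvals[order] / eigvals.sum()
        print("variance explained by first 4 components:", explained[:4].sum())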

  2. Remote-Sensing Data Distribution and Processing in the Cloud at the ASF DAAC

    Science.gov (United States)

    Stoner, C.; Arko, S. A.; Nicoll, J. B.; Labelle-Hamer, A. L.

    2016-12-01

    The Alaska Satellite Facility (ASF) Distributed Active Archive Center (DAAC) has been tasked to archive and distribute data from both SENTINEL-1 satellites and from the NASA-ISRO Synthetic Aperture Radar (NISAR) satellite in a cost effective manner. In order to best support processing and distribution of these large data sets for users, the ASF DAAC enhanced our data system in a number of ways that will be detailed in this presentation. The SENTINEL-1 mission comprises a constellation of two polar-orbiting satellites, operating day and night performing C-band Synthetic Aperture Radar (SAR) imaging, enabling them to acquire imagery regardless of the weather. SENTINEL-1A was launched by the European Space Agency (ESA) in April 2014. SENTINEL-1B is scheduled to launch in April 2016. The NISAR satellite is designed to observe and take measurements of some of the planet's most complex processes, including ecosystem disturbances, ice-sheet collapse, and natural hazards such as earthquakes, tsunamis, volcanoes and landslides. NISAR will employ radar imaging, polarimetry, and interferometry techniques using the SweepSAR technology employed for full-resolution wide-swath imaging. NISAR data files are large, making storage and processing a challenge for conventional store-and-download systems. To effectively process, store, and distribute petabytes of data in a high-performance computing environment, ASF took a long view with regard to technology choices and picked a path of most flexibility and software re-use. To that end, this software tools and services presentation will cover Web Object Storage (WOS) and the ability to seamlessly move from local sunk-cost hardware to a public cloud, such as Amazon Web Services (AWS). A prototype of the SENTINEL-1A system that is in AWS, as well as a local hardware solution, will be examined to explain the pros and cons of each. In preparation for NISAR files which will be even larger than SENTINEL-1A, ASF has embarked on a number of cloud

  3. Reduced lysis upon growth of Lactococcus lactis on galactose is a consequence of decreased binding of the autolysin AcmA

    NARCIS (Netherlands)

    Steen, Anton; Buist, Girbe; Kramer, Naomi E.; Jalving, Ruud; Benus, Germaine F. J. D.; Venema, Gerard; Kuipers, Oscar P.; Kok, Jan

    When Lactococcus lactis subsp. lactis IL1403 or L. lactis subsp. cremoris MG1363 is grown in a medium with galactose as the carbon source, the culture lyses to a lesser extent in stationary phase than when the bacteria are grown in a medium containing glucose. Expression of AcmA, the major autolysin

  4. Proceeding of the ACM/IEEE-CS Joint Conference on Digital Libraries (1st, Roanoke, Virginia, June 24-28, 2001).

    Science.gov (United States)

    Association for Computing Machinery, New York, NY.

    Papers in this Proceedings of the ACM/IEEE-CS Joint Conference on Digital Libraries (Roanoke, Virginia, June 24-28, 2001) discuss: automatic genre analysis; text categorization; automated name authority control; automatic event generation; linked active content; designing e-books for legal research; metadata harvesting; mapping the…

  5. Optimization of the Laser Hardening Process by Adapting the Intensity Distribution to Generate a Top-hat Temperature Distribution Using Freeform Optics

    Directory of Open Access Journals (Sweden)

    Fritz Klocke

    2017-06-01

    Laser hardening is a surface hardening process which enables high quality results due to the controllability of the energy input. The hardened area is determined by the heat distribution caused by the intensity profile of the laser beam. However, commonly used top-hat laser beams do not provide an ideal temperature profile. Therefore, in this paper the beam profile, and thus the temperature profile, is optimized using freeform optics. The intensity distribution is modified to generate a top-hat temperature profile on the surface. The results of laser hardening with the optimized distribution are then compared with results using a top-hat intensity distribution.

  6. Quantum man-in-the-middle attack on the calibration process of quantum key distribution.

    Science.gov (United States)

    Fei, Yang-Yang; Meng, Xiang-Dong; Gao, Ming; Wang, Hong; Ma, Zhi

    2018-03-09

    Quantum key distribution (QKD) protocols have been proved, in theory, to provide an unconditionally secure key between two remote legitimate users. Key distribution signals are transmitted in a quantum channel which is established by the calibration process to meet the requirements of high count rate and low error rate. All QKD security proofs implicitly assume that the quantum channel has been established securely. However, the eavesdropper may attack the calibration process to break the security assumption of QKD and create the precondition for successfully stealing information about the final key. In this paper, we reveal the security risk of the calibration process of a passive-basis-choice BB84 QKD system by launching a quantum man-in-the-middle attack which intercepts all calibration signals and resends faked ones. Large temporal bit-dependent or basis-dependent detector efficiency mismatch can be induced. Then we propose a basis-dependent detector efficiency mismatch (BEM) based faked states attack on a single photon BB84 QKD system to stress the threat of BEM. Moreover, the security of single photon QKD systems with BEM is studied in a simple and intuitive way. Two effective countermeasures are suggested to remove the general security risk of the calibration process.
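
    The leverage a basis-dependent efficiency mismatch gives an intercept-resend attacker can be seen in a toy Monte Carlo. The mismatch model below (click probability 0.9 when Bob's basis matches the one Eve used, 0.05 otherwise) is invented for illustration; without mismatch this attack would show the textbook 25% error rate, while with it the induced QBER drops to a few percent:

        import numpy as np

        rng = np.random.default_rng(3)
        eta = {True: 0.9, False: 0.05}   # basis-dependent detector efficiency
        errors = sifted = 0
        for _ in range(200000):
            alice_bit, alice_basis = rng.integers(2), rng.integers(2)
            # Eve measures in a random basis, resends, and times the pulse
            # so Bob's detector is efficient only when his basis matches
            # hers (the BEM loophole exploited by the faked states).
            eve_basis = rng.integers(2)
            eve_bit = alice_bit if eve_basis == alice_basis else rng.integers(2)
            bob_basis = rng.integers(2)
            if bob_basis != alice_basis:
                continue                               # removed by sifting
            if rng.random() > eta[bob_basis == eve_basis]:
                continue                               # no detector click
            bob_bit = eve_bit if bob_basis == eve_basis else rng.integers(2)
            sifted += 1
            errors += bob_bit != alice_bit
        print("QBER under BEM ~", errors / sifted)     # far below 25%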

  7. Using Java for distributed computing in the Gaia satellite data processing

    Science.gov (United States)

    O'Mullane, William; Luri, Xavier; Parsons, Paul; Lammers, Uwe; Hoar, John; Hernandez, Jose

    2011-10-01

    In recent years Java has matured to a stable, easy-to-use language with the flexibility of an interpreter (for reflection etc.) but the performance and type checking of a compiled language. When we started using Java for astronomical applications around 1999, they were the first of their kind in astronomy. Now a great deal of astronomy software is written in Java, as are many business applications. We discuss the current environment and trends concerning the language and present an actual example of scientific use of Java for high-performance distributed computing: ESA's mission Gaia. The Gaia scanning satellite will perform a galactic census of about 1,000 million objects in our galaxy. The Gaia community has chosen to write its processing software in Java. We explore the manifold reasons for choosing Java for this large science collaboration. Gaia processing is numerically complex but highly distributable, some parts being embarrassingly parallel. We describe the Gaia processing architecture and its realisation in Java. We delve into the astrometric solution, which is the most advanced and most complex part of the processing. The Gaia simulator is also written in Java and is the most mature code in the system. This has been successfully running since about 2005 on the supercomputer "Marenostrum" in Barcelona. We relate experiences of using Java on a large shared machine. Finally we discuss Java, including some of its problems, for scientific computing.

  8. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    Science.gov (United States)

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  9. Variation behavior of residual stress distribution by manufacturing processes in welded pipes of austenitic stainless steel

    International Nuclear Information System (INIS)

    Ihara, Ryohei; Hashimoto, Tadafumi; Mochizuki, Masahito

    2012-01-01

    Stress corrosion cracking (SCC) has been observed near the heat affected zone (HAZ) of primary loop recirculation pipes made of low-carbon austenitic stainless steel type 316L in nuclear power plants. For non-sensitized material, residual stress is the important factor in SCC, and it is generated by machining and welding. In actual plants, welding is conducted after machining in the manufacturing of welded pipes. It can therefore be expected that residual stress generated by machining is altered by welding as a subsequent process. This paper presents residual stress variation due to the manufacturing processes of pipes using the X-ray diffraction method. The residual stress distribution due to welding after machining had a local maximum stress in the HAZ. Moreover, this value was higher than the residual stress generated by welding or machining alone. Vickers hardness also had a local maximum in the HAZ. In order to clarify the hardness variation, crystal orientation analysis with the EBSD method was performed. Recovery and recrystallization occurred near the weld metal due to welding heat, leading to a hardness decrease. The local maximum region showed no microstructure evolution; in this region, the machined layer remained. Therefore, the local maximum hardness was generated in the machined layer. The local maximum stress was caused by the superposition of the residual stress distributions due to machining and welding. Moreover, these local maximum residual stress and hardness values exceed the critical values for SCC initiation. In order to clarify the effect of residual stress on SCC initiation, evaluation including manufacturing processes is important. (author)

  10. CASTOR: Widely Distributed Scalable Infospaces

    Science.gov (United States)

    2008-11-01

  11. The spatial distribution of microfabric around gravel grains: indicator of till formation processes

    Science.gov (United States)

    KalväNs, Andis; Saks, Tomas

    2010-05-01

    Till micromorphology study in thin sections is an established tool in the field of glacial geology. Often the thin sections are inspected only visually with the help of a mineralogical microscope. This can lead to subjective interpretation of observed structures. A more objective method used in till micromorphology is measurement of apparent microfabric, usually seen as the preferred orientation of elongated sand grains. In these studies only a small fraction of the elongated sand grains, often confined to a small area of the thin section, are usually measured. We present a method for automated measurement of almost all elongated sand grains across the full area of the thin section. Apparently elongated sand grains are measured using simple image analysis tools, the data are processed in a way similar to regular till fabric data and visualised as a grid of rose diagrams. The method allows statistical information to be drawn about the spatial variation of microfabric preferred orientation and fabric strength with a resolution as fine as 1 mm. Late Weichselian tills from several sites in Western Latvia were studied and large variations in fabric strength and spatial distribution were observed in macroscopically similar till units. The observed types of microfabric spatial distribution include a strong, monomodal and uniform distribution; a weak distribution that is highly variable over small distances; a consistently bimodal distribution; and a domain-like pattern of preferred sand grain orientation. We suggest that the method can be readily used to identify the basic deformation and sedimentation processes active during the final stages of till formation. It is understood that the microfabric orientation will be significantly affected by nearby large particles. The till is a highly heterogeneous sediment and the source of microfabric perturbations observed in a thin section might lie outside the section plane. Therefore we suggest that microfabric distribution around visible sources of perturbation - gravel grains cut
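
    The core image-analysis step, recovering the apparent long-axis orientation of each elongated grain and binning the angles for a rose diagram, can be sketched with second-order image moments; the function names and the 10-degree binning are illustrative choices, not the authors' code:

        import numpy as np

        def grain_orientation(mask):
            """Apparent long-axis orientation (degrees, 0-180) of a binary
            sand-grain mask, from its second central moments."""
            ys, xs = np.nonzero(mask)
            x, y = xs - xs.mean(), ys - ys.mean()
            mxx, myy, mxy = (x * x).mean(), (y * y).mean(), (x * y).mean()
            theta = 0.5 * np.arctan2(2.0 * mxy, mxx - myy)
            return np.degrees(theta) % 180.0

        def rose_counts(angles, n_bins=18):
            """Bin orientations into 10-degree classes for one rose diagram."""
            return np.histogram(angles, bins=n_bins, range=(0.0, 180.0))[0]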

  12. AcmA of Lactococcus lactis, a cell-binding major autolysin

    NARCIS (Netherlands)

    Buist, Girbe

    1997-01-01

    Considering the amount of daily consumed foods which are produced by means of fermentation, such as breads, wines, beers, cheeses, fermented vegetables/fruits and sausages, the economic importance of these biotechnological processes can hardly be overestimated. Lactic acid bacteria (LAB) play an

  13. USL NASA/RECON project presentations at the 1985 ACM Computer Science Conference: Abstracts and visuals

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Chum, Frank Y.; Gallagher, Suzy; Granier, Martin; Hall, Philip P.; Moreau, Dennis R.; Triantafyllopoulos, Spiros

    1985-01-01

    This Working Paper Series entry represents the abstracts and visuals associated with presentations delivered by six USL NASA/RECON research team members at the above named conference. The presentations highlight various aspects of NASA contract activities pursued by the participants as they relate to individual research projects. The titles of the six presentations are as follows: (1) The Specification and Design of a Distributed Workstation; (2) An Innovative, Multidisciplinary Educational Program in Interactive Information Storage and Retrieval; (3) Critical Comparative Analysis of the Major Commercial IS and R Systems; (4) Design Criteria for a PC-Based Common User Interface to Remote Information Systems; (5) The Design of an Object-Oriented Graphics Interface; and (6) Knowledge-Based Information Retrieval: Techniques and Applications.

  14. The "step feature" of suprathermal ion distributions: a discriminator between acceleration processes?

    Directory of Open Access Journals (Sweden)

    H. J. Fahr

    2012-09-01

    The discussion of exactly which process is causing the preferred build-up of v^-5 power-law tails of the velocity distribution of suprathermal particles in the solar wind is still ongoing. Criteria allowing one to discriminate between the various suggestions that have been made would be useful in order to clarify the physics behind these tails. With this study, we draw attention to the so-called "step feature" of the velocity distributions and offer a criterion that allows one to distinguish between those scenarios that employ velocity diffusion, i.e. second-order Fermi processes, which are prime candidates in the present debate. With an analytical approximation to the self-consistently obtained velocity diffusion coefficient, we solve the transport equation for suprathermal particles. The numerical simulation reveals that this form of the diffusion coefficient naturally leads to the step feature of the velocity distributions. This finding favours, at least in regions of the appearance of the step feature (i.e. for heliocentric distances up to about 11 AU and at lower energies), the standard velocity diffusion as a consequence of the particle's interactions with the plasma wave turbulence, as opposed to that caused by velocity fluctuation-induced compressions and rarefactions.

  15. Interevent Time Distribution of Renewal Point Process, Case Study: Extreme Rainfall in South Sulawesi

    Science.gov (United States)

    Sunusi, Nurtiti

    2018-03-01

    The study of the time distribution of occurrences of extreme rain phenomena plays a very important role in analysis and weather forecasting for an area. The timing of extreme rainfall is difficult to predict because its occurrence is random. This paper aims to determine the interevent time distribution of extreme rain events and the minimum waiting time until the occurrence of the next extreme event through a point process approach. The phenomenon of extreme rain events over a given period of time follows a renewal process in which the time between events is a random variable τ. The distribution of the random variable τ is assumed to be Pareto, Log Normal, or Gamma. To estimate the model parameters, a moment method is used. Consider R_t, the time elapsed since the last extreme rain event at one location; if there are no extreme rain events up to t_0, there will be an opportunity for extreme rainfall events in (t_0, t_0 + δt_0). Furthermore, from the three models reviewed, the minimum waiting time until the next extreme rainfall is determined. The results show that the Log Normal model is better than the Pareto and Gamma models for predicting the next extreme rainfall in South Sulawesi, while the Pareto model cannot be used.
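
    Method-of-moments estimates for the three candidate distributions follow directly from the sample mean m and variance v of the interevent times; the sketch below states the standard moment inversions (the Pareto fit is valid only when the implied shape exceeds 2, where its variance exists):

        import numpy as np

        def fit_lognormal(tau):
            m, v = np.mean(tau), np.var(tau)
            sigma2 = np.log(1.0 + v / m**2)
            return np.log(m) - 0.5 * sigma2, np.sqrt(sigma2)   # mu, sigma

        def fit_gamma(tau):
            m, v = np.mean(tau), np.var(tau)
            return m**2 / v, v / m                             # shape, scale

        def fit_pareto(tau):
            # E[T] = a*x_m/(a-1) and Var[T] = x_m^2*a/((a-1)^2*(a-2)),
            # so v/m^2 = 1/(a*(a-2)), which solves to the line below.
            m, v = np.mean(tau), np.var(tau)
            r = v / m**2
            a = 1.0 + np.sqrt(1.0 + 1.0 / r)
            return m * (a - 1.0) / a, a                        # x_m, alpha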

  16. Distributed cognition and process management enabling individualized translational research: The NIH Undiagnosed Diseases Program experience

    Directory of Open Access Journals (Sweden)

    Amanda E Links

    2016-10-01

    The National Institutes of Health Undiagnosed Diseases Program (NIH UDP) applies translational research systematically to diagnose patients with undiagnosed diseases. The challenge is to implement an information system enabling scalable translational research. The authors hypothesized that similarly complex problems are resolvable through process management and the distributed cognition of communities. The team therefore built the NIH UDP Integrated Collaboration System (UDPICS) to form virtual collaborative multidisciplinary research networks or communities. UDPICS supports these communities through integrated process management, ontology-based phenotyping, biospecimen management, cloud-based genomic analysis, and an electronic laboratory notebook. UDPICS provided a mechanism for efficient, transparent, and scalable translational research and thereby addressed many of the complex and diverse research and logistical problems of the NIH UDP. Full definition of the strengths and deficiencies of UDPICS will require formal qualitative and quantitative usability and process improvement measurement.

  17. Drop Distribution Determination in a Liquid-Liquid Dispersion by Image Processing

    Directory of Open Access Journals (Sweden)

    Luís M. R. Brás

    2009-01-01

    This paper presents the implementation of an algorithm for automatic identification of drops of different sizes in monochromatic digitized frames of a liquid-liquid chemical process. These image frames were obtained at our laboratory, using a nonintrusive process, with a digital video camera, a microscope, and an illumination setup, from a dispersion of toluene in water within a transparent mixing vessel. In this implementation, we propose a two-phase approach, using a Hough transform that automatically identifies drops in images of the chemical process. This work is a promising starting point toward automatic drop classification with good results. Our algorithm for the analysis and interpretation of digitized images will be used for the calculation of particle size and shape distributions for modelling liquid-liquid systems.
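
    For circular drops, the circle Hough transform available in OpenCV gives a direct route from a grey-scale frame to a size distribution; the file name, parameter values and pixel scale below are placeholders, not values from the paper:

        import cv2
        import numpy as np

        frame = cv2.imread("dispersion_frame.png", cv2.IMREAD_GRAYSCALE)
        frame = cv2.medianBlur(frame, 5)             # suppress sensor noise

        # Each detected circle is returned as (x, y, radius) in pixels.
        circles = cv2.HoughCircles(frame, cv2.HOUGH_GRADIENT, dp=1.2,
                                   minDist=10, param1=100, param2=30,
                                   minRadius=3, maxRadius=60)
        if circles is not None:
            radii = circles[0, :, 2]
            diam_um = 2.0 * radii * 1.5              # assumed 1.5 um per pixel
            hist, edges = np.histogram(diam_um, bins=20)
            print("drops:", len(radii), "mean diameter (um):", diam_um.mean())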

  18. INTELLIGENT MONITORING SYSTEM WITH HIGH TEMPERATURE DISTRIBUTED FIBEROPTIC SENSOR FOR POWER PLANT COMBUSTION PROCESSES

    Energy Technology Data Exchange (ETDEWEB)

    Kwang Y. Lee; Stuart S. Yin; Andre Boheman

    2003-12-26

    The objective of the proposed work is to develop an intelligent distributed fiber optical sensor system for real-time monitoring of high temperature in a boiler furnace in power plants. Of particular interest is the estimation of spatial and temporal distributions of high temperatures within a boiler furnace, which will be essential in assessing and controlling the mechanisms that form and remove pollutants at the source, such as NOx. The basic approach in developing the proposed sensor system is threefold: (1) development of a high temperature distributed fiber optical sensor capable of measuring temperatures greater than 2000 °C with spatial resolution of less than 1 cm; (2) development of distributed parameter system (DPS) models to map the three-dimensional (3D) temperature distribution for the furnace; and (3) development of an intelligent monitoring system for real-time monitoring of the 3D boiler temperature distribution. Under Task 1, the efforts focused on developing an innovative high temperature distributed fiber optic sensor by fabricating in-fiber gratings in single crystal sapphire fibers. So far, our major accomplishments include: successfully growing alumina cladding layers on single crystal sapphire fibers, successfully fabricating in-fiber gratings in single crystal sapphire fibers, and successfully developing a high temperature distributed fiber optic sensor. Under Task 2, the emphasis has been on putting into place a computational capability for simulation of combustors. A PC workstation was acquired with dual Xeon processors and sufficient memory to support 3-D calculations. An existing license for Fluent software was expanded to include two PC processes, where the existing license was for a Unix workstation. Under Task 3, intelligent state estimation theory is being developed which will map 1D measurement data (located judiciously within a 3D environment) into a 3D temperature profile. This theory presents a semigroup

  19. Hydraulic experimental investigation on spatial distribution and formation process of tsunami deposit on a slope

    Science.gov (United States)

    Harada, K.; Takahashi, T.; Yamamoto, A.; Sakuraba, M.; Nojima, K.

    2017-12-01

    An important aim of the study of tsunami deposits is to estimate the characteristics of past tsunamis from the deposits found locally. Based on the tsunami characteristics estimated from a deposit, it is possible to examine tsunami risk assessment in coastal areas. Tsunami deposits are considered to be formed through the dynamic interplay between the tsunami's hydraulic values, sediment particle size, topography, etc. However, it is currently not possible to adequately evaluate the characteristics of tsunamis from their deposits; one reason is that the formation process of tsunami deposits is not sufficiently understood. In this study, we analyze the measurement results of a hydraulic experiment (Yamamoto et al., 2016) and focus on the formation process and distribution of tsunami deposits. The hydraulic experiment was conducted in a two-dimensional water channel with a slope. The tsunami was input as a bore flow. A movable-bed section was installed as a seabed slope connecting to the shoreline, and several grain size distributions were tested. The water level was measured using ultrasonic displacement gauges, and the flow velocity was measured using propeller current meters and an electromagnetic current meter, at several points. The distribution of the tsunami deposit was measured from the shoreline to the run-up limit on the slope. Yamamoto et al. (2016) reported the measurement results on the distribution of tsunami deposits with wave height and sand grain size. Therefore, in this study, hydraulic analysis of the tsunami sediment formation process was examined based on the measurement data. Time series of hydraulic parameters such as the Froude number, Shields number, Rouse number etc. were calculated to understand the formation process of the tsunami deposit. In the front part of the tsunami, strong flow occurred from the shoreline to around the middle of the slope.
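
    The dimensionless numbers named above are algebraic functions of the measured depth-averaged velocity u, flow depth h, grain diameter d and settling velocity ws; the sketch below computes them under an assumed Manning friction closure (the roughness n = 0.025 is illustrative, not a value from the experiment):

        import numpy as np

        g, rho, rho_s, kappa = 9.81, 1000.0, 2650.0, 0.41

        def froude(u, h):
            return u / np.sqrt(g * h)

        def shields(u, h, d, n=0.025):
            # Bed shear stress from Manning friction, then Shields number.
            tau = rho * g * n**2 * u**2 / h**(1.0 / 3.0)
            return tau / ((rho_s - rho) * g * d)

        def rouse(ws, u, h, n=0.025):
            # Shear velocity from the same bed shear stress closure.
            u_star = np.sqrt(g * n**2 * u**2 / h**(1.0 / 3.0))
            return ws / (kappa * u_star)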

  20. Analysis the Transient Process of Wind Power Resources when there are Voltage Sags in Distribution Grid

    Science.gov (United States)

    Nhu Y, Do

    2018-03-01

    Vietnam has many advantages in wind power resources. Over time, both the installed capacity and the number of wind power projects in Vietnam have increased. Corresponding to the increase of wind power fed into the national grid, it is necessary to research and analyze the connection of wind power in order to ensure its safety and reliability. In the national distribution grid, voltage sags occur regularly and can strongly influence the operation of wind power; the most serious consequence is disconnection. The paper presents an analysis of the distribution grid's transient process when voltage sags occur. Based on the analysis, solutions are recommended to improve the reliability and effective operation of wind power resources.

  1. A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling.

    Science.gov (United States)

    Bouguila, Nizar; Ziou, Djemel

    2010-01-01

    In this paper, we propose a clustering algorithm based on both Dirichlet processes and the generalized Dirichlet distribution, which has been shown to be very flexible for proportional data modeling. Our approach can be viewed as an extension of the finite generalized Dirichlet mixture model to the infinite case. The extension is based on nonparametric Bayesian analysis. This clustering algorithm does not require the number of mixture components to be specified in advance and estimates it in a principled manner. Our approach is Bayesian and relies on the estimation of the posterior distribution of clusterings using a Gibbs sampler. Through applications involving real-data classification and image database categorization using visual words, we show that clustering via infinite mixture models offers a more powerful and robust performance than classic finite mixtures.
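
    The nonparametric prior that lets the number of components stay open-ended can be sampled directly via the Chinese restaurant process; the sketch below draws only the partition (in the full model each cluster would also carry generalized Dirichlet parameters, resampled by the Gibbs sampler):

        import numpy as np

        rng = np.random.default_rng(4)

        def crp_assignments(n, alpha, rng):
            """Cluster labels for n items under a Dirichlet process prior:
            the number of clusters is not fixed in advance."""
            labels, counts = [0], [1]
            for _ in range(1, n):
                probs = np.array(counts + [alpha], dtype=float)
                probs /= probs.sum()
                k = rng.choice(len(probs), p=probs)
                if k == len(counts):
                    counts.append(1)        # open a new cluster
                else:
                    counts[k] += 1
                labels.append(k)
            return labels, len(counts)

        labels, n_clusters = crp_assignments(500, alpha=2.0, rng=rng)
        print("clusters discovered:", n_clusters)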

  3. Grain size distribution and heat conductivity of copper processed by equal channel angular pressing

    International Nuclear Information System (INIS)

    Gendelman, O.V.; Shapiro, M.; Estrin, Y.; Hellmig, R.J.; Lekhtmakher, S.

    2006-01-01

    We report the results of measurements of the grain size distribution function and the thermal conductivity of ultrafine-grained copper produced by equal channel angular pressing (ECAP), with special attention to the evolution of these quantities with the number of pressing cycles. To explain the experimental findings, the equilibrium grain size distribution function (GSDF) evolving during ECAP has been calculated on the basis of a simplified theoretical model. The model involves a single unknown physical parameter, the most probable grain size. With this parameter fitted to the experimental data the calculated GSDF fairly closely reproduces the experimental data. A model for the thermal conductivity of ECAP-processed copper has been proposed, which relates thermal conductivity to the GSDF parameters and the coefficient of electron reflection at grain boundaries.

  4. Secured Session-key Distribution using control Vector Encryption / Decryption Process

    International Nuclear Information System (INIS)

    Ismail Jabiullah, M.; Abdullah Al-Shamim; Khaleqdad Khan, ANM; Lutfar Rahman, M.

    2006-01-01

    Frequent key changes are very much desirable for secret communications and are thus in high demand. A session-key distribution technique has been designed and implemented using the programming language C; the session key, with which the communication between the end-users is encrypted, is used for the duration of a logical connection. Each session key is obtained from the key distribution center (KDC) over the same networking facilities used for end-user communication. The control vector is cryptographically coupled with the session key at the time of key generation in the KDC. For this, the generated hash function, the master key and the session key are used to produce the encrypted session key, which has to be transferred. All the operations have been performed using the C programming language. This process can be widely applicable to all sorts of electronic transactions, online or offline, commercially and academically. (authors)
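
    The essence of control vector coupling is that the session key can be recovered only under the same master key and the same, unaltered control vector. Below is a minimal Python sketch of that coupling, using SHA-256 and XOR masking as stand-ins (a production scheme would bind the control vector into a block cipher key instead):

        import hashlib, os

        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        master_key = os.urandom(32)
        session_key = os.urandom(32)
        control_vector = b"usage=data-encryption;scope=session"

        # KDC side: couple the control vector to the key at generation time.
        mask = xor(master_key, hashlib.sha256(control_vector).digest())
        encrypted_key = xor(session_key, mask)

        # Receiver side: any change to the control vector changes the mask,
        # so the session key can no longer be recovered.
        recovered = xor(encrypted_key,
                        xor(master_key, hashlib.sha256(control_vector).digest()))
        assert recovered == session_key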

  5. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes the definition of generic run-time error types, the design of methods for observing application software behavior during execution, and the design of methods for evaluating run-time constraints. The definition of error types attempts to cover all relevant aspects of application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...
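
    Of the three error types, the timing errors of c) are the simplest to sketch: a monitor observes the distributed events emitted by the application and flags inter-event gaps that violate a declared constraint. The Python class below is an illustrative analogue of such a monitor, not the thesis design:

        import time

        class DeadlineMonitor:
            """Flags timing errors in a stream of distributed events."""
            def __init__(self, max_interval_s):
                self.max_interval_s = max_interval_s
                self.last = None

            def observe(self, event_time=None):
                t = event_time if event_time is not None else time.monotonic()
                if self.last is not None and t - self.last > self.max_interval_s:
                    print(f"timing error: {t - self.last:.3f} s between events "
                          f"exceeds {self.max_interval_s} s")
                self.last = t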

  7. Number size distribution of fine and ultrafine fume particles from various welding processes.

    Science.gov (United States)

    Brand, Peter; Lenz, Klaus; Reisgen, Uwe; Kraus, Thomas

    2013-04-01

    Studies in the field of environmental epidemiology indicate that for the adverse effect of inhaled particles not only particle mass but also particle size is crucial. Ultrafine particles with diameters below 100 nm are of special interest since these particles have a high surface area to mass ratio and have properties which differ from those of larger particles. In this paper, particle size distributions of various welding and joining techniques were measured close to the welding process using a fast mobility particle sizer (FMPS). It turned out that welding processes with high mass emission rates (manual metal arc welding, metal active gas welding, metal inert gas welding, metal inert gas soldering, and laser welding) show mainly agglomerated particles with diameters above 100 nm and only few particles in the size range below 50 nm (10 to 15%). Welding processes with low mass emission rates (tungsten inert gas welding and resistance spot welding) emit predominantly ultrafine particles with diameters well below 100 nm. This finding can be explained by considerably faster agglomeration processes in welding processes with high mass emission rates. Although mass emission is low for tungsten inert gas welding and resistance spot welding, due to the low particle size of the fume, these processes cannot be labeled as toxicologically irrelevant and should be further investigated.

  8. Process capability index Cpk for monitoring the thermal performance in the distribution of refrigerated products

    Directory of Open Access Journals (Sweden)

    Antonio Galvão Naclério Novaes

    2016-03-01

    The temperature of refrigerated products along the cold chain must be kept within pre-defined limits to ensure adequate safety levels and high product quality. Because temperature largely influences microbial activities, continuous monitoring of the time-temperature history over the distribution process usually allows for adequate control of product quality along both short- and medium-distance distribution routes. Time-Temperature Indicators (TTI) are composed of temperature measurements taken at various time intervals and are used to feed analytic models that monitor the impacts of temperature on product quality. Process Capability Indices (PCI), in turn, are calculated from TTI series to evaluate whether the thermal characteristics of the process are within the specified range. In this application, a refrigerated food delivery route is investigated using a simulated annealing algorithm that considers alternative delivery schemes. The objective of this investigation is to minimize the distance traveled while maintaining the vehicle temperature within the prescribed capability level.
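
    The index itself is a one-line statistic: with process mean mu, standard deviation sigma and specification limits LSL and USL, Cpk = min(USL - mu, mu - LSL) / (3*sigma). A minimal sketch over a TTI-style temperature series follows; the limits and readings are invented:

        import numpy as np

        def cpk(samples, lsl, usl):
            """Process capability index of a time-temperature series
            against the prescribed temperature limits."""
            mu, sigma = np.mean(samples), np.std(samples, ddof=1)
            return min(usl - mu, mu - lsl) / (3.0 * sigma)

        # e.g. a chilled route specified to stay between 0 and 7 degrees C
        temps = [3.1, 3.5, 4.0, 3.8, 4.4, 5.0, 4.1, 3.6]
        print("Cpk =", cpk(temps, lsl=0.0, usl=7.0))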

  9. Wear Process Analysis of the Polytetrafluoroethylene/Kevlar Twill Fabric Based on the Components’ Distribution Characteristics

    Directory of Open Access Journals (Sweden)

    Gu Dapeng

    2017-12-01

    Polytetrafluoroethylene (PTFE)/Kevlar fabric or fabric composites with excellent tribological properties have been considered important materials used in bearings and bushings for years. The distribution of the components (PTFE, Kevlar, and the gap between PTFE and Kevlar) of the PTFE/Kevlar fabric is uneven due to the textile structure, which controls the wear process and behavior. The components' area ratio on the worn surface varying with the wear depth was analyzed not only by the wear experiment, but also by theoretical calculations with our previous wear geometry model. The wear process and behavior of the PTFE/Kevlar twill fabric were investigated under dry sliding conditions against AISI 1045 steel by using a ring-on-plate tribometer. The morphologies of the worn surface were observed by confocal laser scanning microscopy (CLSM). The wear process of the PTFE/Kevlar twill fabric was divided into five layers according to the distribution characteristics of Kevlar. It showed that the friction coefficients and wear rates changed with the wear depth; the order of the antiwear performance of the first three layers was Layer III > Layer II > Layer I due to the variation of the area ratio of PTFE and Kevlar with the wear depth.

  10. Mammal Distribution in Nunavut: Inuit Harvest Data and COSEWIC's Species at Risk Assessment Process

    Directory of Open Access Journals (Sweden)

    Karen A. Kowalchuk

    2012-09-01

    The Committee on the Status of Endangered Wildlife in Canada (COSEWIC) assesses risk potential for a species by evaluating the best available information from all knowledge sources, including Aboriginal traditional knowledge (ATK). Effective application of ATK in this process has been challenging. Inuit knowledge (IK) of mammal distribution in Nunavut is reflected, in part, in the harvest spatial data from two comprehensive studies: the Use and Occupancy Mapping (UOM) Study conducted by the Nunavut Planning Commission (NPC) and the Nunavut Wildlife Harvest Study (WHS) conducted by the Nunavut Wildlife Management Board (NWMB). The geographic range values of extent of occurrence (EO) and area of occupancy (AO) were derived from the harvest data for a selected group of mammals and applied to Phase I of the COSEWIC assessment process. Values falling below threshold values can trigger a potential risk designation of either endangered (EN) or threatened (TH) for the species being assessed. The IK values and status designations were compared with available COSEWIC data. There was little congruency between the two sets of data. We conclude that there are major challenges within the risk assessment process, and specifically the calculation of AO, that contributed to the disparity in results. Nonetheless, this application illustrated that Inuit harvest data in Nunavut represent a unique and substantial source of ATK that should be used to enrich the knowledge base on arctic mammal distribution and enhance wildlife management and conservation planning.
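
    Given point locations of harvest records, EO is conventionally the area of the minimum convex polygon around the points, and AO the summed area of occupied cells on a 2 km x 2 km grid. A sketch of both calculations follows (coordinates assumed already projected to kilometres; the sample points are invented):

        import numpy as np
        from scipy.spatial import ConvexHull

        def eo_km2(points_km):
            """Extent of occurrence: minimum convex polygon area, km^2."""
            return ConvexHull(points_km).volume    # .volume is area in 2-D

        def ao_km2(points_km, cell=2.0):
            """Area of occupancy: occupied 2 km grid cells times cell area."""
            cells = {(int(x // cell), int(y // cell)) for x, y in points_km}
            return len(cells) * cell * cell

        pts = np.array([[0, 0], [10, 2], [4, 9], [7, 7], [1, 5]], dtype=float)
        print(eo_km2(pts), ao_km2(pts))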

  11. Estimation of dislocations density and distribution of dislocations during ECAP-Conform process

    Science.gov (United States)

    Derakhshan, Jaber Fakhimi; Parsa, Mohammad Habibi; Ayati, Vahid; Jafarian, Hamidreza

    2018-01-01

    The dislocation density of a coarse grain aluminum AA1100 alloy (140 µm) that was severely deformed by Equal Channel Angular Pressing-Conform (ECAP-Conform) is studied at various stages of the process by the electron backscattering diffraction (EBSD) method. The geometrically necessary dislocation (GND) and statistically stored dislocation (SSD) densities were estimated. Then the total dislocation densities were calculated and the dislocation distributions are presented as contour maps. The estimated average dislocation density of about 2×10^12 m^-2 for the annealed state increases to 4×10^13 m^-2 at the middle of the groove (135° from the entrance), and reaches 6.4×10^13 m^-2 at the end of the groove just before the ECAP region. The calculated average dislocation density for the one-pass severely deformed Al sample reached 6.2×10^14 m^-2. At the micrometer scale the behavior of metals, especially the mechanical properties, largely depends on the dislocation density and dislocation distribution. So, yield stresses at different conditions were estimated based on the calculated dislocation densities. The estimated yield stresses were then compared with experimental results and good agreement was found. Although the grain size of the material did not clearly change, the yield stress showed an intensive increase due to the development of the cell structure. The considerable increase in dislocation density in this process is a good justification for the formation of subgrains and cell structures during the process, which can be the reason for the increase in yield stress.
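
    A standard route from dislocation density to yield stress is Taylor-type hardening, sigma_y = sigma_0 + alpha*M*G*b*sqrt(rho). Whether the authors used exactly this form is not stated in the abstract; the sketch below applies it, with typical textbook constants for aluminum, to the densities quoted above:

        import numpy as np

        sigma_0 = 20e6          # friction stress, Pa (assumed)
        alpha, M = 0.3, 3.06    # Taylor constant and Taylor factor (assumed)
        G, b = 26e9, 2.86e-10   # shear modulus (Pa), Burgers vector (m)

        for rho in (2e12, 4e13, 6.4e13, 6.2e14):   # densities from the study
            sigma_y = sigma_0 + alpha * M * G * b * np.sqrt(rho)
            print(f"rho = {rho:.1e} 1/m^2  ->  sigma_y ~ {sigma_y/1e6:.0f} MPa")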

  13. Development of a Heterogenic Distributed Environment for Spatial Data Processing Using Cloud Technologies

    Science.gov (United States)

    Garov, A. S.; Karachevtseva, I. P.; Matveev, E. V.; Zubarev, A. E.; Florinsky, I. V.

    2016-06-01

    We are developing a unified distributed communication environment for processing of spatial data which integrates web-, desktop- and mobile platforms and combines volunteer computing model and public cloud possibilities. The main idea is to create a flexible working environment for research groups, which may be scaled according to required data volume and computing power, while keeping infrastructure costs at minimum. It is based upon the "single window" principle, which combines data access via geoportal functionality, processing possibilities and communication between researchers. Using an innovative software environment the recently developed planetary information system (http://cartsrv.mexlab.ru/geoportal) will be updated. The new system will provide spatial data processing, analysis and 3D-visualization and will be tested based on freely available Earth remote sensing data as well as Solar system planetary images from various missions. Based on this approach it will be possible to organize the research and representation of results on a new technology level, which provides more possibilities for immediate and direct reuse of research materials, including data, algorithms, methodology, and components. The new software environment is targeted at remote scientific teams, and will provide access to existing spatial distributed information for which we suggest implementation of a user interface as an advanced front-end, e.g., for virtual globe system.

  14. New method of analyzing wave processes in pulse generators based on lines with distributed parameters

    CERN Document Server

    Gordeev, V S

    2001-01-01

    A new method for the theoretical analysis of wave processes in high-current pulse generators is described, based on relations between integral quantities that reflect the regularities of energy transfer in ideal lines with distributed parameters. The method considerably simplifies the search for the impedance relations that maximize efficiency in pulse facilities based on stepped lines, including those with an arbitrary number of cascades. The high efficiency of the method is demonstrated by several examples.

  15. Distribution of lanthanides in the aquatic environment of a rare earths processing plant

    International Nuclear Information System (INIS)

    Radhakrishnan, Sujata; Paul, A.C.

    1997-01-01

    During the chemical processing of monazite to separate rare earths from thorium, liquid effluents are generated which, after treatment, are discharged to the Periyar river. The distribution of lanthanide elements such as La, Ce, Nd, Pr and Sm has been investigated in the Periyar river. The concentrations of these elements in the river at the industrial zone show enhancement by factors ranging from 4 to 8. The concentration factors in the river sediment are of the order of 10⁴. The mass median diameter works out to 10 μm in the sediment at the industrial zone. Water hyacinth concentrates the elements in its roots. 4 refs., 3 tabs., 1 fig

  16. A Photo Storm Report Mobile Application, Processing/Distribution System, and AWIPS-II Display Concept

    Science.gov (United States)

    Longmore, S. P.; Bikos, D.; Szoke, E.; Miller, S. D.; Brummer, R.; Lindsey, D. T.; Hillger, D.

    2014-12-01

    The increasing use of mobile phones equipped with digital cameras, and the ability to post images and information to the Internet in real time, has significantly improved the ability to report events almost instantaneously. In the context of severe weather reports, a representative digital image conveys significantly more information than a simple text or phone-relayed report to a weather forecaster issuing severe weather warnings. It also allows the forecaster to reasonably discern the validity and quality of a storm report. Posting geo-located, time-stamped storm report photographs to NWS weather forecast office social media pages using a mobile phone application has generated recent positive feedback from forecasters. Building upon this feedback, this discussion advances the concept, development, and implementation of a formalized Photo Storm Report (PSR) mobile application, a processing and distribution system, and Advanced Weather Interactive Processing System II (AWIPS-II) plug-in display software. The PSR system would be composed of three core components: i) a mobile phone application, ii) a processing and distribution software and hardware system, and iii) AWIPS-II data, exchange and visualization plug-in software. i) The mobile phone application would allow web-registered users to send geo-location, view direction, and time-stamped PSRs along with severe weather type and comments to the processing and distribution servers. ii) The servers would receive PSRs, convert images and information to NWS network-bandwidth-manageable sizes in an AWIPS-II data format, distribute them on the NWS data communications network, and archive the original PSRs for possible future research datasets. iii) The AWIPS-II data and exchange plug-ins would archive PSRs, and the visualization plug-in would display PSR locations, times and directions by hour, similar to surface observations. Hovering on individual PSRs would reveal photo thumbnails and clicking on them would display the

  17. Efficient bit sifting scheme of post-processing in quantum key distribution

    Science.gov (United States)

    Li, Qiong; Le, Dan; Wu, Xianyan; Niu, Xiamu; Guo, Hong

    2015-10-01

    Bit sifting is an important step in the post-processing of quantum key distribution (QKD). Its function is to sift out the undetected original keys. The communication traffic of bit sifting has essential impact on the net secure key rate of a practical QKD system. In this paper, an efficient bit sifting scheme is presented, of which the core is a lossless source coding algorithm. Both theoretical analysis and experimental results demonstrate that the performance of the scheme is approaching the Shannon limit. The proposed scheme can greatly decrease the communication traffic of the post-processing of a QKD system, which means the proposed scheme can decrease the secure key consumption for classical channel authentication and increase the net secure key rate of the QKD system, as demonstrated by analyzing the improvement on the net secure key rate. Meanwhile, some recommendations on the application of the proposed scheme to some representative practical QKD systems are also provided.
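
    The abstract does not reproduce the paper's lossless source-coding algorithm. As an illustration of why sifting information compresses to near the Shannon limit, the following sketch encodes the gaps between detected pulses with a Rice (power-of-two Golomb) code; this coder and the 1% detection probability are invented stand-ins, not the authors' scheme:

```python
import math
import random

def sift_gaps(detections):
    """Convert a 0/1 detection sequence into gaps between successive clicks."""
    gaps, run = [], 0
    for bit in detections:
        if bit:
            gaps.append(run)
            run = 0
        else:
            run += 1
    return gaps   # trailing undetected pulses are ignored for simplicity

def rice_encode(gaps, k):
    """Rice code: unary quotient, then a k-bit binary remainder."""
    out = []
    for n in gaps:
        q, r = n >> k, n & ((1 << k) - 1)
        out.append("1" * q + "0" + (format(r, f"0{k}b") if k else ""))
    return "".join(out)

random.seed(0)
p = 0.01   # toy per-pulse detection probability (invented value)
detections = [1 if random.random() < p else 0 for _ in range(100_000)]
gaps = sift_gaps(detections)
k = max(0, round(math.log2(math.log(2) / p)))   # near-optimal Rice parameter
coded = rice_encode(gaps, k)
entropy = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
print(f"coded rate:   {len(coded) / len(detections):.4f} bits/pulse")
print(f"Shannon H(p): {entropy:.4f} bits/pulse")
```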

  18. Impact of process parameters and design options on heat leaks of straight cryogenic distribution lines

    CERN Document Server

    Duda, Pawel; Chorowski, Maciej Pawel; Polinski, J

    2017-01-01

    The Future Circular Collider (FCC) accelerator will require a helium distribution system that will exceed the presently exploited transfer lines by almost one order of magnitude. The helium transfer line will contain five process pipes protected against heat leaks by a common thermal shield. The design pressure of the FCC process pipe with supercritical helium will be equal to 5.0 MPa, significantly exceeding the 2.0 MPa value in present, state-of-the-art transfer lines. The increase of the design pressure requires construction changes to be introduced to the support system, the vacuum barriers and the compensation bellows. This will influence heat flows to the helium. The paper analyses the impact of the increased design pressure on the heat flow. The paper also offers a discussion of the design modifications to the compensation system, including the replacement of stainless steel with Invar, aimed at mitigating the pressure increase.

  19. Impact of process parameters and design options on heat leaks of straight cryogenic distribution lines

    Directory of Open Access Journals (Sweden)

    P. Duda

    2017-03-01

    Full Text Available The Future Circular Collider (FCC) accelerator will require a helium distribution system that will exceed the presently exploited transfer lines by almost one order of magnitude. The helium transfer line will contain five process pipes protected against heat leaks by a common thermal shield. The design pressure of the FCC process pipe with supercritical helium will be equal to 5.0 MPa, significantly exceeding the 2.0 MPa value in present, state-of-the-art transfer lines. The increase of the design pressure requires construction changes to be introduced to the support system, the vacuum barriers and the compensation bellows. This will influence heat flows to the helium. The paper analyses the impact of the increased design pressure on the heat flow. The paper also offers a discussion of the design modifications to the compensation system, including the replacement of stainless steel with Invar®, aimed at mitigating the pressure increase.

  20. ATLAS computing system commissioning: real-time data processing and distribution tests

    CERN Document Server

    Nairz, A; Branco, M; Cameron, D; Salgado, P; Barberis, D; Bos, K; Poulard, G

    2008-01-01

    The ATLAS experiment is commissioning its computing system in preparation for LHC data. Part of this activity consists in testing the data flow from the online data acquisition to the offline processing system, and the distribution of raw and processed data to the external computing centres. A series of functional and rate tests was performed in 2006 and 2007, allowing the optimisation of the hardware and software components of this system; the last phase of commissioning, the so-called Final Dress Rehearsal, consisting of integration tests of all components, will take place later in 2007. This paper describes the performed tests, the problems that we encountered, and the solutions we found.

  1. GLN standard as a facilitator of physical location identification within process of distribution

    Directory of Open Access Journals (Sweden)

    Davor Dujak

    2017-09-01

    Full Text Available Background: Distribution, from the business point of view, is a set of decisions and actions that provide the right products at the right time and place, in line with customer expectations. It is a process that generates significant cost, but which, effectively implemented, significantly affects the positive perception of the company. The Institute of Logistics and Warehousing (IliM), based on research results related to the optimization of distribution networks and on consulting projects for companies, indicates the high importance of a correct description of physical locations within supply chains in order to make transport processes more effective. Individual companies work on their own geocoding of warehouse locations and the locations of their business partners (suppliers, customers), but the lack of standardization in this area causes delays related to delivery problems with reaching the right destination. Furthermore, the cooperating companies do not have a precise indication of the operating conditions of each location, e.g. the time windows of the plant, the logistic units accepted by the parties, supported transport, etc. The lack of this information generates additional costs associated with repeated operations and the costs of lost benefits when goods do not arrive on time. The solution to this problem appears to be a wide-scale implementation of the GS1 standard known as the Global Location Number (GLN), which, thanks to a broad base of information, will assist distribution processes. Material and methods: The results of a survey conducted among Polish companies in the second half of 2016 indicate an unsatisfactory execution of transport processes, resulting from incorrect or inaccurate descriptions of locations, and thus a significant number of errors in deliveries. Accordingly, the authors studied the literature and examined case studies indicating the possibility of using the GLN standard to identify the physical location and to show the
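
    For illustration of the GLN standard discussed above: a GLN is a 13-digit GS1 number whose last digit is a mod-10 check digit. A minimal validator follows; the example number is a commonly cited GS1 test value, not one taken from the article:

```python
def gln_check_digit(data12: str) -> int:
    """GS1 mod-10 check digit: weights alternate 1,3,1,3,... from the left."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(data12))
    return (10 - total % 10) % 10

def is_valid_gln(gln: str) -> bool:
    """A GLN is 13 digits whose last digit is the GS1 check digit."""
    return (len(gln) == 13 and gln.isdigit()
            and gln_check_digit(gln[:12]) == int(gln[12]))

print(is_valid_gln("5901234123457"))   # True for this standard GS1 test number
```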

  2. New model of Brazilian electric sector: implications of sugarcane bagasse on the distributed generation process

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Celso E.L. de; Rabi, Jose A. [Universidade de Sao Paulo (GREEN/FZEA/USP), Pirassununga, SP (Brazil). Fac. de Zootecnia e Engenharia de Alimentos. Grupo de Pesquisa em Reciclagem, Eficiencia Energetica e Simulacao Numerica], Emails: celsooli@usp.br, jrabi@usp.br; Halmeman, Maria Cristina [Universidade Estadual Paulista (FCA/UNESP), Botucatu, SP (Brazil). Fac. de Ciencias Agronomicas

    2008-07-01

    Distributed generation has become an alternative given the lack of resources for large energy projects and recent events that have changed the geopolitical panorama. The latter have increased oil prices, so that unconventional sources have become more and more feasible, an issue usually discussed in Europe and in the USA. Brazil has followed this world trend by restructuring the electrical sector as well as major related institutions, from generation to commercialization and sector regulation, while local legislation has enabled the increase of distributed generation. It regulates the role of the independent energy producer so as to allow direct business between the latter and large consumers, an essential step to enlarge the energy market. Sugarcane bagasse has been used to produce both electric energy and steam, and this paper analyzes and discusses the major implications of a new model for the Brazilian electric sector based on the use of sugarcane bagasse as a means to increase distributed generation, particularly as concerns the commercialization of excess energy. (author)

  3. A distributed computing system for magnetic resonance imaging: Java-based processing and binding of XML.

    Science.gov (United States)

    de Beer, R; Graveron-Demilly, D; Nastase, S; van Ormondt, D

    2004-03-01

    Recently we have developed a Java-based heterogeneous distributed computing system for the field of magnetic resonance imaging (MRI). It is a software system for embedding the various image reconstruction algorithms that we have created for handling MRI data sets with sparse sampling distributions. Since these data sets may result from multi-dimensional MRI measurements, our system has to control the storage and manipulation of large amounts of data. In this paper we describe how we have employed the extensible markup language (XML) to realize this data handling in a highly structured way. To that end we have used Java packages, recently released by Sun Microsystems, to process XML documents and to compile pieces of XML code into Java classes. We have implemented a flexible storage and manipulation approach for all kinds of data within the MRI system, such as data describing and containing multi-dimensional MRI measurements, data configuring image reconstruction methods and data representing and visualizing the various services of the system. We have found that the object-oriented approach, possible with the Java programming environment, combined with the XML technology is a convenient way of describing and handling various data streams in heterogeneous distributed computing systems.

  4. Analysis of dose distribution changes in radiation processing using a continuous variable F-test and p-value

    International Nuclear Information System (INIS)

    Lundahl, Brad

    2011-01-01

    A process monitoring practice is established from the evaluation of dose distribution within simulant or phantom materials. As a part of change control, an evaluation of potential changes to the dose distribution is conducted when change activities occur to the irradiator. The dose distribution evaluation is conducted either to verify the continued validity of an established process monitoring practice or to demonstrate that the monitoring practice is no longer valid. Historically, change control evaluation of a process monitoring practice has been based on a non-statistical evaluation of dose distribution data for potential change. A statistical method has been developed using a continuous variable F-test and p-value, which tests a null hypothesis of no change in dose distribution, and provides a means of either substantiating or refuting the continued validity of a process monitoring practice.
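
    The abstract does not give the exact F-test formulation, so the following is only a plausible minimal sketch: SciPy's one-way ANOVA F-test applied to invented before/after dose-mapping data, testing the null hypothesis of no change in dose distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Invented dose-mapping data (kGy) at the same monitoring positions,
# measured before and after a change to the irradiator.
before = rng.normal(25.0, 1.2, size=30)
after = rng.normal(25.1, 1.2, size=30)

# Null hypothesis: no change in dose distribution between the two runs.
f_stat, p_value = stats.f_oneway(before, after)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject H0: dose distribution changed; revisit the monitoring practice.")
else:
    print("No evidence of change: the monitoring practice remains valid.")
```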

  5. Process and device for determining the spatial distribution of a radioactive substance

    International Nuclear Information System (INIS)

    1977-01-01

    This invention describes a process for determining the spatial distribution of a radioactive substance, consisting in determining the positions and energy losses associated with the Compton-effect interactions and the photoelectric interactions that occur owing to the emission of gamma photons by the radioactive material, and in deducing information on the spatial distribution of the radioactive substance from the positions and energy losses associated with the Compton-effect interactions of these gamma photons and the positions and energy losses associated with the subsequent photoelectric interactions of the same photons. The invention also concerns a processing system for identifying, among the signals representing the positions and energy losses of the Compton-effect and photoelectric interactions of the gamma photons emitted by a radioactive source, those signals corresponding to gamma photons that have undergone an initial Compton-effect interaction and a second and final photoelectric interaction. It further concerns a system for determining, from the identified signals, the positions of the sources of several gamma photons. This Compton-interaction detector can be used with conventional Anger-type imaging systems (gamma cameras) for detecting photoelectric interactions [fr

  6. Online learning of a Dirichlet process mixture of Beta-Liouville distributions via variational inference.

    Science.gov (United States)

    Fan, Wentao; Bouguila, Nizar

    2013-11-01

    A large class of problems can be formulated in terms of the clustering process. Mixture models are an increasingly important tool in statistical pattern recognition and for analyzing and clustering complex data. Two challenging aspects that should be addressed when considering mixture models are how to choose between a set of plausible models and how to estimate the model's parameters. In this paper, we address both problems simultaneously within a unified online nonparametric Bayesian framework that we develop to learn a Dirichlet process mixture of Beta-Liouville distributions (i.e., an infinite Beta-Liouville mixture model). The proposed infinite model is used for the online modeling and clustering of proportional data, for which the Beta-Liouville mixture has been shown to be effective. We propose a principled approach for approximating the intractable model's posterior distribution by a tractable one, which we develop, such that all the involved mixture parameters can be estimated simultaneously and effectively in closed form. This is done through variational inference, which enjoys important advantages such as the handling of unobserved attributes and the prevention of under- or overfitting; we explain this in detail. The effectiveness of the proposed work is evaluated on three challenging real applications, namely facial expression recognition, behavior modeling and recognition, and dynamic texture clustering.
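
    Common libraries ship no Beta-Liouville mixture, so as a loose illustration of nonparametric Bayesian clustering with automatic model selection, the sketch below fits a Dirichlet-process Gaussian mixture with scikit-learn's variational implementation; the Gaussian components and the synthetic data are stand-ins, not the paper's model:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Three well-separated synthetic clusters in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.3, size=(200, 2))
               for loc in ([0, 0], [3, 3], [0, 4])])

# Truncated Dirichlet-process mixture fitted by variational inference;
# superfluous components are driven to near-zero weight automatically.
dpgmm = BayesianGaussianMixture(
    n_components=10,  # truncation level, deliberately larger than needed
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(X)

print("effective clusters:", int(np.sum(dpgmm.weights_ > 0.01)))
```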

  7. Effects of distribution function nonequilibrium tails on relaxation and transfer processes in rarefied gases

    International Nuclear Information System (INIS)

    Grigoryev, Yu.N.; Mikhalitsyn, A.N.; Yanenko, N.N.

    1984-01-01

    Quantitative characteristics of the nonmonotone relaxation process are studied in a gas of pseudo-Maxwell molecules. The basic results are obtained by direct numerical integration of the nonlinear Boltzmann equation. The evolution of initial distributions that are finite or have exponential tail asymptotics was investigated. In particular, initial data obtained by selective excitation (absorption) against the Maxwell background, encountered in laser physics problems, have been considered. It is shown that under conditions of a developed effect of nonmonotone relaxation, the overpopulation in the velocity range 4 ≤ υ ≤ 10 exceeds the equilibrium value by a factor of 2-3 on average. For the given particle energy the excitation is preserved during t = 5/6, and the total relaxation time of the overpopulation wave reaches t ≈ 20. The amplitudes and the relaxation time of overpopulation in the 'cupola' region of the distribution are substantially lower than in the case of a developed effect in the tail. The influence of the effect on the kinetics of threshold chemical reactions is studied. The results imply that in the process of nonmonotone relaxation the mean rates of binary threshold reactions can exceed the equilibrium values by more than a factor of two. This estimate is valid for all power-law intermolecular repulsive potentials, from the pseudo-Maxwell model up to rigid spheres. The time intervals over which the mean reaction rate considerably exceeds the equilibrium one range from 5 to 15 mean free path times, increasing as the 'rigidity' of the potential decreases. (author)

  8. Distributed processing and network of data acquisition and diagnostics control for Large Helical Device (LHD)

    International Nuclear Information System (INIS)

    Nakanishi, H.; Kojima, M.; Hidekuma, S.

    1997-11-01

    The LHD (Large Helical Device) data processing system has been designed to deal with the huge amount of diagnostics data, 600-900 MB per 10-second short-pulse experiment, in preparation for the first plasma experiment in March 1998. The recent increase in data volume made it necessary to adopt a fully distributed system structure which uses multiple data transfer paths in parallel and separates all of the computer functions into clients and servers. The fundamental element installed for every diagnostic device consists of two kinds of server computers: the data acquisition PC/Windows NT and the real-time diagnostics control VME/VxWorks. To cope with the diversity of both device control channels and diagnostics data, the object-oriented method is utilized throughout the development of this system. It not only reduces the development burden, but also widens the software portability and flexibility. 100 Mbps FDDI-based fast networks will re-integrate the distributed server computers so that they can behave as one virtual macro-machine for users. The network methods applied in the LHD data processing system are based entirely on TCP/IP internet technology, providing remote collaborators the same operational access as local participants. (author)

  9. Process and installation for producing tomographic images of the distribution of a radiotracer

    International Nuclear Information System (INIS)

    Fonroget, Jacques; Brunol, Jean.

    1977-01-01

    The invention particularly concerns a process for obtaining tomographic images of an object formed by a radiotracer distributed spatially over three dimensions. This process, using a detection device with an appreciably plane detection surface and at least one collimation orifice provided in a partition between the detection surface and the object, enables tomographic sections to be obtained with an excellent three-dimensional resolution of the images achieved. It is employed to advantage in an installation that includes a detection device or gamma camera with an appreciably plane surface, a device having a series of collimation apertures which may be used in succession, these holes being appreciably distributed over a common plane parallel to the detection surface, and a holder for the object. This holder can be moved in translation appreciably parallel to the common plane. The aim of this invention is, inter alia, to meet two requirements: localization in space and obtaining good contrasts. This aim is achieved by the fact that at least one tomographic image is obtained from a series of intermediate images of the object [fr

  10. Online Learning of Hierarchical Pitman-Yor Process Mixture of Generalized Dirichlet Distributions With Feature Selection.

    Science.gov (United States)

    Fan, Wentao; Sallay, Hassen; Bouguila, Nizar

    2017-09-01

    In this paper, a novel statistical generative model based on hierarchical Pitman-Yor process and generalized Dirichlet distributions (GDs) is presented. The proposed model allows us to perform joint clustering and feature selection thanks to the interesting properties of the GD distribution. We develop an online variational inference algorithm, formulated in terms of the minimization of a Kullback-Leibler divergence, of our resulting model that tackles the problem of learning from high-dimensional examples. This variational Bayes formulation allows simultaneously estimating the parameters, determining the model's complexity, and selecting the appropriate relevant features for the clustering structure. Moreover, the proposed online learning algorithm allows data instances to be processed in a sequential manner, which is critical for large-scale and real-time applications. Experiments conducted using challenging applications, namely, scene recognition and video segmentation, where our approach is viewed as an unsupervised technique for visual learning in high-dimensional spaces, showed that the proposed approach is suitable and promising.

  11. The political economy of misusing income distribution in the electoral process: A biased pluralism approach

    Directory of Open Access Journals (Sweden)

    Praščević Aleksandra

    2017-01-01

    Full Text Available This paper examines the consequences of the distribution of income on election results under conditions of biased pluralism and a small-scale electorate, when the results of the democratic electoral process are compromised by electoral corruption. The paper discusses the most important concepts of political economy and political theory connected with achieving electoral victory through the distribution and misuse of economic resources, focusing on identifying the conditions under which a democratic political system serves organized interest groups and not the majority, i.e., biased pluralism. Despite the formal equality of all voters, there are significant differences in their actual impact on election results. The democratic election process is put into question by the fact that the electorate is small and candidates have access to income that can be used to buy off 'privileged' voters through the discretionary allocation of funds and the economic results generated by such distribution. Faced with the corrupt practice of the incumbent, the opposition candidate is driven to a similar position to gain the support of 'privileged' voters and win the election for the opposition. The economic and political result is that the free vote and political competition are compromised, resulting in a political hybrid, the semi-authoritarian regime. This paper provides a mathematical optimization model in which a hierarchically based organization is used as an approximation of society. In the model, differences in the position of members of the organization are similar to differences between voters in the electorate. The particle swarm optimization (PSO) method is used to calculate the amount of electoral bribes. The paper also uses game theory to provide an example of voting for the person who will manage the organization. The formal game between the incumbent and the opposition candidate is presented with a discussion of the various results of the game.
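
    For readers unfamiliar with the optimization method named above, here is a minimal generic particle swarm optimizer; the quadratic toy objective stands in for the paper's bribe-allocation cost, which the abstract does not specify:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimizer for a generic objective f (minimized)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()      # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5                     # common coefficient choices
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy stand-in objective: minimize the squared norm of the allocation vector.
best, val = pso(lambda z: float(np.sum(z ** 2)), dim=3)
print(best, val)
```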

  12. EFFICIENT LIDAR POINT CLOUD DATA MANAGING AND PROCESSING IN A HADOOP-BASED DISTRIBUTED FRAMEWORK

    Directory of Open Access Journals (Sweden)

    C. Wang

    2017-10-01

    Full Text Available Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and others. However, it is challenging to efficiently store, query and analyze high-resolution 3D LiDAR data, due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, taking advantage of Hadoop's storage and computing ability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to run the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experimental results show that the proposed framework can efficiently manage and process big LiDAR data.
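
    As a toy illustration of the map/reduce pattern such a framework relies on, the sketch below bins a synthetic point cloud into grid tiles; plain Python multiprocessing stands in for Hadoop/HDFS, and simple grid counting stands in for the PCL algorithms:

```python
from collections import Counter
from multiprocessing import Pool
import random

def map_to_tile(point, tile=10.0):
    """Map phase: assign a 3-D point to a 2-D grid-cell key."""
    x, y, _z = point
    return (int(x // tile), int(y // tile))

def map_chunk(points):
    """Each worker emits local (tile, count) pairs for its chunk."""
    return Counter(map_to_tile(p) for p in points)

def reduce_counts(partials):
    """Reduce phase: merge the per-worker counters."""
    total = Counter()
    for c in partials:
        total.update(c)
    return total

if __name__ == "__main__":
    random.seed(1)
    cloud = [(random.uniform(0, 100), random.uniform(0, 100),
              random.uniform(0, 5)) for _ in range(1_000_000)]
    chunks = [cloud[i::4] for i in range(4)]   # split across 4 workers
    with Pool(4) as pool:
        partials = pool.map(map_chunk, chunks)
    tiles = reduce_counts(partials)
    print("densest tile:", tiles.most_common(1))
```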

  13. Analysis of flash flood processes dynamics in a Mediterranean catchment using a distributed hydrological model

    Science.gov (United States)

    Roux, H.; Maubourguet, M.-M.; Castaings, W.; Dartus, D.

    2009-09-01

    The present study aims at analyzing the hydrological processes involved in flash flood generation. It focuses on small catchments located in the Mediterranean region (Southern France) that are often affected by extreme events (Gaume et al., 2009; Ruin et al., 2008). The model used in this study is a spatially distributed rainfall-runoff model dedicated to extreme event simulation and developed on the basis of physical process representation. It is structured into three modules, which represent the soil component, the overland flow component and flow through the drainage network. Infiltration is described using the Green and Ampt model, and the soils are assumed vertically homogeneous. Lateral subsurface flow is based on Darcy's law for a confined aquifer. Surface runoff calculation is divided into two parts: overland flow and flow along the drainage network. Both are simulated using the 1D kinematic wave approximation of the Saint-Venant equations with the Manning friction law. In the drainage network, the friction difference between the main channel and the floodplain is taken into account. Determination of model parameters requires terrain measurement data, usually derived from DEMs, soil surveys and vegetation or land-use maps. Four parameters are calibrated for the entire catchment using discharge measurements. Model sensitivity to individual parameters is assessed using Monte-Carlo simulations; the model is then calibrated using these results to estimate the parameters with a data assimilation process called the adjoint state method (Bessière et al., 2008; Castaings et al., 2009). Flood events with different hydrometeorological characteristics are studied to compare the location of saturated areas, infiltration and runoff dynamics, as well as the importance of subsurface flow. A better understanding of these processes is indeed necessary, especially to improve model efficiency when the simulation parameters cannot be calibrated and must therefore be transposed from gauged
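
    The Green and Ampt infiltration model mentioned above has a simple closed form; a minimal sketch with illustrative loam-like parameter values (assumed, not those of the studied catchments) shows how infiltration capacity and infiltration-excess runoff are computed:

```python
# Green-Ampt infiltration capacity as a function of cumulative infiltration F:
#   f(t) = K * (1 + psi * d_theta / F(t))
K = 3.4e-6        # saturated hydraulic conductivity, m/s (assumed)
psi = 0.0889      # wetting-front suction head, m (assumed)
d_theta = 0.3     # moisture deficit: porosity minus initial water content

def green_ampt_rate(F):
    """Infiltration capacity (m/s) at cumulative infiltration F (m)."""
    return K * (1.0 + psi * d_theta / max(F, 1e-6))

dt, F, runoff = 60.0, 0.0, 0.0        # 1-minute time steps
rain = 40e-3 / 3600.0                 # 40 mm/h storm, in m/s (invented)
for _ in range(120):                  # simulate two hours
    f = min(green_ampt_rate(F), rain) # infiltration limited by rainfall supply
    F += f * dt
    runoff += (rain - f) * dt
print(f"infiltrated {F*1000:.1f} mm, infiltration-excess runoff {runoff*1000:.1f} mm")
```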

  14. Effects of soil surface roughness on interrill erosion processes and sediment particle size distribution

    Science.gov (United States)

    Ding, Wenfeng; Huang, Chihua

    2017-10-01

    Soil surface roughness significantly impacts runoff and erosion under rainfall. Few previous studies on runoff generation have focused on the effects of soil surface roughness on the sediment particle size distribution (PSD), which greatly affects interrill erosion and sedimentation processes. To address this issue, a rainfall-simulation experiment was conducted with treatments that included two different initial soil surface roughnesses and two rainfall intensities. Soil surface roughness was determined using a photogrammetric method. For each simulated event, runoff and sediment samples were collected at different experimental times. The effective (undispersed) PSD of each sediment sample and the ultimate (after dispersion) PSD were used to investigate the detachment and transport mechanisms involved in sediment movement. The results show that soil surface roughness significantly delayed runoff initiation but had no significant effect on the steady runoff rate. However, a significant difference in the soil loss rate was observed between the smooth and rough soil surfaces. Sediments from smooth soil surfaces were more depleted in clay-size particles but more enriched in sand-size particles than those from rough soil surfaces, suggesting that erosion was less selective on smooth than on rough soil surfaces. The ratio of different sizes of transported sediment to the soil matrix indicates that most of the clay was eroded in the form of aggregates, silt-size particles were transported mainly as primary particles, and sand-size particles were predominantly aggregates of finer particles. Soil surface roughness has a crucial effect on the sediment size distribution and erosion processes. Significant differences in the enrichment ratios for the effective PSD and the ultimate PSD were observed under the two soil surface roughness treatments. These findings demonstrate that we should consider each particle size separately rather than use only the total sediment discharge in

  15. Distributed Neural Processing Predictors of Multi-dimensional Properties of Affect

    Directory of Open Access Journals (Sweden)

    Keith A. Bush

    2017-09-01

    Full Text Available Recent evidence suggests that emotions have a distributed neural representation, which has significant implications for our understanding of the mechanisms underlying emotion regulation and dysregulation as well as the potential targets available for neuromodulation-based emotion therapeutics. This work adds to this evidence by testing the distribution of neural representations underlying the affective dimensions of valence and arousal, using representational models that vary in both the degree and the nature of their distribution. We used multi-voxel pattern classification (MVPC) to identify whole-brain patterns of functional magnetic resonance imaging (fMRI)-derived neural activations that reliably predicted dimensional properties of affect (valence and arousal) for visual stimuli viewed by a normative sample (n = 32) of demographically diverse, healthy adults. Inter-subject leave-one-out cross-validation showed that whole-brain MVPC significantly predicted (p < 0.001) binarized normative ratings of valence (positive vs. negative, 59% accuracy) and arousal (high vs. low, 56% accuracy). We also conducted group-level univariate general linear modeling (GLM) analyses to identify brain regions whose response significantly differed for the contrasts of positive versus negative valence or high versus low arousal. Multivoxel pattern classifiers using voxels drawn from all identified regions of interest (all-ROIs) exhibited mixed performance; arousal was predicted significantly better than chance but worse than by the whole-brain classifier, whereas valence was not predicted significantly better than chance. Multivoxel classifiers derived using individual ROIs generally performed no better than chance. Although performance of the all-ROI classifier improved with larger ROIs (generated by relaxing the clustering threshold), performance was still poorer than the whole-brain classifier. These findings support a highly distributed model of neural processing for the
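
    As an illustration of the inter-subject leave-one-out MVPC scheme described above, the following sketch runs leave-one-subject-out cross-validation of a linear classifier on synthetic data; the subject/trial/voxel counts echo the study, but the data and the weak injected signal are entirely invented:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Synthetic stand-in data: 32 "subjects" x 20 trials x 500 "voxels".
rng = np.random.default_rng(0)
n_sub, n_trial, n_vox = 32, 20, 500
X = rng.normal(size=(n_sub * n_trial, n_vox))
y = rng.integers(0, 2, size=n_sub * n_trial)   # positive vs. negative valence
X[y == 1, :25] += 0.15                         # weak distributed "signal"
groups = np.repeat(np.arange(n_sub), n_trial)  # one group per subject

# Each fold trains on 31 subjects and tests on the held-out subject.
scores = cross_val_score(LinearSVC(dual=False), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print(f"leave-one-subject-out accuracy: {scores.mean():.2f}")
```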

  16. Pareto Distribution of Firm Size and Knowledge Spillover Process as a Network

    OpenAIRE

    Tomohiko Konno

    2013-01-01

    The firm size distribution is considered to be a Pareto distribution. In the present paper, we show that the Pareto distribution of firm size results from the spillover network model introduced in Konno (2010).

  17. Schema architecture and their relationships to transaction processing in distributed database systems

    NARCIS (Netherlands)

    Apers, Peter M.G.; Scheuermann, P.

    1991-01-01

    We discuss the different types of schema architectures that could be supported by distributed database systems, making a clear distinction between logical, physical, and federated distribution. We elaborate on the additional mapping information required in architectures based on logical distribution.

  18. Watershed processes, fish habitat, and salmonid distribution in the Tonsina River (Copper River watershed), Alaska

    Science.gov (United States)

    Booth, D. B.; Ligon, F. K.; Sloat, M. R.; Amerson, B.; Ralph, S. C.

    2007-12-01

    The Copper River watershed is a critical resource for northeastern Pacific salmon, with annual escapements in the millions. The Tonsina River basin, a diverse 2100-km² tributary to the Copper River that supports important salmonid populations, offers an opportunity to integrate watershed-scale channel network data with field reconnaissance of physical processes and observed distribution of salmonid species. Our long-term goals are to characterize habitats critical to different salmonid life stages, describe the geologic context and current geologic processes that support those habitats in key channel reaches, and predict their watershed-wide distribution. The overarching motivation for these goals is resource conservation, particularly in the face of increased human activity and long-term climate change. Channel geomorphology within the Tonsina River basin reflects inherited glacial topography. Combinations of drainage areas, slopes, channel confinement, and sediment-delivery processes are unique to this environment, giving rise to channel "types" that are recognizable but that do not occur in the same positions in the channel network as in nonglaciated landscapes. We also recognize certain channel forms providing fish habitat without analog in a nonglacial landscape, notably relict floodplain potholes from once-stranded and long-melted ice blocks. Salmonid species dominated different channel types within the watershed network. Sockeye salmon juveniles were abundant in the low-gradient, turbid mainstem; Chinook juveniles were also captured in the lower mainstem, with abundant evidence of spawning farther downstream. Coho juveniles were abundant in upper, relatively large tributaries, even those channels with cobble-boulder substrates and minimal woody debris that provide habitats more commonly utilized by Chinook in low-latitude systems. More detailed field sampling also revealed that patterns of species composition and abundance appeared related to small

  19. Word and face processing engage overlapping distributed networks: Evidence from RSVP and EEG investigations.

    Science.gov (United States)

    Robinson, Amanda K; Plaut, David C; Behrmann, Marlene

    2017-07-01

    Words and faces have vastly different visual properties, but increasing evidence suggests that word and face processing engage overlapping distributed networks. For instance, fMRI studies have shown overlapping activity for face and word processing in the fusiform gyrus, despite well-characterized lateralization of these objects to the left and right hemispheres, respectively. To investigate whether face and word perception influences perception of the other stimulus class, and to elucidate the mechanisms underlying such interactions, we presented images using rapid serial visual presentation. Across 3 experiments, participants discriminated 2 face, word, and glasses targets (T1 and T2) embedded in a stream of images. As expected, T2 discrimination was impaired when it followed T1 by 200 to 300 ms relative to longer intertarget lags, the so-called attentional blink. Interestingly, T2 discrimination accuracy was significantly reduced at short intertarget lags when a face was followed by a word (face-word) compared with glasses-word and word-word combinations, indicating that face processing interfered with word perception. The reverse effect was not observed; that is, word-face performance was no different from the other object combinations. EEG results indicated that the left N170 to T1 was correlated with the word decrement for face-word trials, but not for other object combinations. Taken together, the results suggest face processing interferes with word processing, providing evidence for overlapping neural mechanisms for these 2 object types. Furthermore, asymmetrical face-word interference points to greater overlap of face and word representations in the left than in the right hemisphere. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. A novel method for determination of particle size distribution in-process

    Science.gov (United States)

    Salaoru, Tiberiu A.; Li, Mingzhong; Wilkinson, Derek

    2009-07-01

    The pharmaceutical and fine chemicals industries are strongly concerned with the manufacture of high value-added speciality products, often in solid form. On-line measurement of solid particle size is vital for reliable control of product properties. Established techniques, such as laser diffraction or spectral extinction, require dilution of the process suspension when measuring typical manufacturing streams because of their high concentration. Dilution to facilitate measurement can result in changes of both size and form of particles, especially during production processes such as crystallisation. In spectral extinction, the degree of light scattering and absorption by a suspension is measured. However, for concentrated suspensions the interpretation of light extinction measurements is difficult because of multiple scattering and inter-particle interaction effects, and at higher concentrations extinction is essentially total, so the technique can no longer be applied. At the same time, scattering by a dispersion also causes a change of phase, which affects the real component of the suspension's effective refractive index; this index is a function of particle size and of the particle and dispersant refractive indices. In this work, a novel prototype instrument has been developed to measure particle size distribution in concentrated suspensions in-process, by measuring the suspension refractive index at incidence angles near the onset of total internal reflection. Using this technique, the light beam does not pass through the suspension being measured, so suspension turbidity does not impair the measurement.

  1. Snow-borne nanosized particles: Abundance, distribution, composition, and significance in ice nucleation processes

    Science.gov (United States)

    Rangel-Alvarado, Rodrigo Benjamin; Nazarenko, Yevgen; Ariya, Parisa A.

    2015-11-01

    Physicochemical processes of nucleation constitute a major uncertainty in understanding aerosol-cloud interactions. To improve the knowledge of the ice nucleation process, we characterized physical, chemical, and biological properties of fresh snow using a suite of state-of-the-art techniques based on mass spectrometry, electron microscopy, chromatography, and optical particle sizing. Samples were collected at two North American Arctic sites, as part of international campaigns (2006 and 2009), and in the city of Montreal, Canada, over the last decade. Particle size distribution analyses, in the range of 3 nm to 10 µm, showed that nanosized particles are the most numerous (38-71%) in fresh snow, with a significant portion (11 to 19%) smaller than 100 nm. Particles with diameters less than 200 nm consistently exhibited relatively high ice-nucleating properties (on average ranging from -19.6 ± 2.4 to -8.1 ± 2.6 °C). Chemical analysis of the nanosized fraction suggests that these particles contain bioorganic materials, such as amino acids, as well as inorganic compounds with characteristics similar to mineral dust. The implications of nanoparticle ubiquity and abundance in diverse snow ecosystems are discussed in the context of their importance in understanding atmospheric nucleation processes.

  2. Data Processing, Distribution and Product Characteristics for the WISE Preliminary Data Release

    Science.gov (United States)

    Cutri, Roc M.; IPAC/WISE Science Data Center Team

    2011-01-01

    The Wide-Field Infrared Survey Explorer (WISE) began its digital imaging survey of the sky in the 3.4, 4.6, 12 and 22 micron bands in January 2010. WISE completed its first complete sky coverage in July and will continue to survey until its cryogens are exhausted later this year. A preliminary Image Atlas in the four WISE bands and a corresponding Source Catalog containing accurate positions and mid-infrared photometry, covering the first 55% of the sky surveyed, will be released in April 2011. The Infrared Processing and Analysis Center, California Institute of Technology, is the WISE Science Data Center (WSDC) and is responsible for processing, archiving and distribution of WISE science data products. We describe the processing system that converts the raw science and engineering data into calibrated Atlas Images and Source Catalog entries, validates and archives these products, and enables access to them by the community. We also describe the general properties of the Preliminary Release data products, including areal coverage, image and catalog formats and access modes.

  3. [Occurrence and distribution of volatile organic compounds in conventional and advanced drinking water treatment processes].

    Science.gov (United States)

    Chen, Xi-Chao; Luo, Qian; Chen, Hu; Wei, Zi; Wang, Zi-Jian; Xu, Ke-Wen

    2013-12-01

    A series of experiments was conducted to study the occurrence and distribution of volatile organic compounds (VOCs) in the conventional and advanced drinking water treatment processes of 3 water treatment plants in Lianyungang City. Results showed that 30 compounds of 3 classes were detected among the 67 kinds of VOCs in all the samples collected. The concentrations of carbonyl compounds, halogenated hydrocarbons and benzenes were in the ranges of 0.04-61.27, 0.02-35.61 and 0.07-2.33 µg·L⁻¹, respectively. Comparing the changes of different VOCs in the three drinking water treatment plants, the conventional chlorination process could effectively remove benzenes but meanwhile produced trihalomethanes (THMs). The additional advanced ozonation-biological activated carbon process could decrease the formation of THMs during pre-chlorination but produced new risky contaminants such as carbonyl compounds. The changes of VOCs in tap water were also investigated. It was found that carbonyl compounds produced by ozonation could be further transformed to THMs by residual chlorine. However, the health risks of all detected compounds in tap water were at a low level, except that the carcinogenic risk of crotonaldehyde (9.3×10⁻⁵-2.2×10⁻⁴) was slightly higher than the US EPA threshold (10⁻⁶-10⁻⁴).

  4. Mercury Distribution in the Processing of Jatiroto Gold Mine Wonogiri Central Java Indonesia

    Science.gov (United States)

    Fitri Yudiantoro, Dwi; Nurcholis, Muhammad; Sri Sayudi, Dewi; Abdurrachman, Mirzam; Paramita Haty, Intan; Pambudi, Wiryan; Subroborini, Arum

    2017-06-01

    The research area is one of the gold-producing areas of Wonogiri, with nearly 30 gold processing locations. The area has a steep morphology and is part of Mt. Mas. Gold processing is a part-time job for the local farming population. Gold-bearing rocks are obtained by manually digging holes around Mt. Mas, while the gold is processed in the workers' homes. As a result of these activities, the distribution of mercury in the surrounding settlements was investigated. The analytical methods used in this study are the measurement of mercury content in altered rocks and soil using an Hg meter, and XRF (X-Ray Fluorescence) for plant samples. The results show significant mercury contents in altered rocks, soil and plants. This proves that mercury has polluted the environment of the surrounding residents, both those living on the hill and those in the lower plain areas. The results of this study are expected to serve as a reference to help overcome the pollution of the area.

  5. Fiber‐optic distributed temperature sensing: A new tool for assessment and monitoring of hydrologic processes

    Science.gov (United States)

    Lane, John W.; Day-Lewis, Frederick D.; Johnson, Carole D.; Dawson, Cian B.; Nelms, David L.; Miller, Cheryl; Wheeler, Jerrod D.; Harvey, Charles F.; Karam, Hanan N.

    2008-01-01

    Fiber‐optic distributed temperature sensing (FO DTS) is an emerging technology for characterizing and monitoring a wide range of important earth processes. FO DTS utilizes laser light to measure temperature along the entire length of standard telecommunications optical fibers. The technology can measure temperature every meter over FO cables up to 30 kilometers (km) long. Commercially available systems can measure fiber temperature as often as 4 times per minute, with thermal precision ranging from 0.1 to 0.01 °C depending on measurement integration time. In 2006, the U.S. Geological Survey initiated a project to demonstrate and evaluate DTS as a technology to support hydrologic studies. This paper demonstrates the potential of the technology to assess and monitor hydrologic processes through case‐study examples of FO DTS monitoring of stream‐aquifer interaction on the Shenandoah River near Locke's Mill, Virginia, and on Fish Creek, near Jackson Hole, Wyoming, and estuary‐aquifer interaction on Waquoit Bay, Falmouth, Massachusetts. The ability to continuously observe temperature over large spatial scales with high spatial and temporal resolution provides a new opportunity to observe and monitor a wide range of hydrologic processes with application to other disciplines including hazards, climate‐change, and ecosystem monitoring.

  6. Nanoparticles dispersion in processing functionalised PP/TiO2 nanocomposites: distribution and properties

    Science.gov (United States)

    El-Dessouky, Hassan M.; Lawrence, Carl A.

    2011-03-01

    Future innovations in textiles and fibrous materials are likely to demand fibres with enhanced multifunctionality. The fibres can be functionalized by dispersing nanoadditives into the polymer during melt compounding/spinning. TiO2 nanoparticles have the potential to improve UV resistance and antistatic behaviour, as well as to impart self-cleaning by photocatalysis and thereby deodorizing and antimicrobial effects. In this study, a micro-lab twin-screw extruder was used to produce samples of polypropylene (PP) nanocomposite monofilaments, doped with a nano titanium oxide (TiO2)/manganese oxide (MnO) compound with sizes ranging from 60 to 200 nm. As a control sample, PP filaments without additives were also extruded. Three samples were produced containing different concentrations (wt%) of the TiO2 compound, i.e. 0.95, 1.24 and 1.79%. Nano metal-oxide distribution in the as-spun and drawn nanocomposite filaments was analysed. Although there are small clusters of the nanoparticles, the characterizing techniques showed good dispersion and distribution of the modified TiO2 along and across the processed filaments. From UV spectroscopy and TGA, a significant enhancement of polypropylene UV protection and thermal stability was observed: PP with the higher percentage of TiO2 absorbed at a UV wavelength of 387 nm and thermally decomposed at 320.16 °C, accompanied by 95% weight loss.

  7. Performance Recognition for Sulphur Flotation Process Based on Froth Texture Unit Distribution

    Directory of Open Access Journals (Sweden)

    Mingfang He

    2013-01-01

    Full Text Available As an important indicator of flotation performance, froth texture is believed to be related to operational conditions in the sulphur flotation process. A novel fault detection method based on froth texture unit distribution (TUD) is proposed to recognize the fault condition of sulphur flotation in real time. The froth texture unit number is calculated based on the texture spectrum, and the probability density function (PDF) of the froth texture unit number is defined as the texture unit distribution, which can describe the actual textural features more accurately than the grey level dependence matrix approach. As the type of the froth TUD is unknown, a nonparametric kernel estimation method based on a fixed kernel basis is proposed, which overcomes the difficulty that different TUDs under various conditions cannot be compared using the traditional varying kernel basis. By transforming the nonparametric description into dynamic kernel weight vectors, a principal component analysis (PCA) model is established to reduce the dimensionality of the vectors. A threshold criterion determined by the TQ statistic based on the PCA model is then proposed to realize the performance recognition. The industrial application results show that accurate performance recognition of froth flotation can be achieved by using the proposed method.
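
    As a sketch of PCA-based fault detection with a threshold criterion, the code below uses the common Q (squared prediction error) statistic with an empirical control limit; the kernel-weight TUD features of the paper are replaced by generic synthetic vectors, so this is only an illustration of the pattern, not the authors' exact statistic:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
normal = rng.normal(size=(500, 20))        # training data: normal operation
pca = PCA(n_components=5).fit(normal)

def q_statistic(X):
    """Squared norm of the PCA reconstruction residual, per sample."""
    recon = pca.inverse_transform(pca.transform(X))
    return np.sum((X - recon) ** 2, axis=1)

# Empirical control limit: 99th percentile of Q on normal data (a simplified
# stand-in for the usual chi-square approximation of the Q limit).
limit = np.percentile(q_statistic(normal), 99)

faulty = rng.normal(size=(100, 20)) + 1.5  # shifted data: "fault" condition
alarms = q_statistic(faulty) > limit
print(f"fault detection rate: {alarms.mean():.0%}")
```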

  8. Nanoparticles dispersion in processing functionalised PP/TiO2 nanocomposites: distribution and properties

    International Nuclear Information System (INIS)

    El-Dessouky, Hassan M.; Lawrence, Carl A.

    2011-01-01

    Future innovations in textiles and fibrous materials are likely to demand fibres with enhanced multifunctionality. The fibres can be functionalized by dispersing nanoadditives into the polymer during melt compounding/spinning. TiO2 nanoparticles have the potential to improve UV resistance and antistatic behaviour, as well as to impart self-cleaning by photocatalysis and thereby deodorizing and antimicrobial effects. In this study, a micro-lab twin-screw extruder was used to produce samples of polypropylene (PP) nanocomposite monofilaments, doped with a nano titanium oxide (TiO2)/manganese oxide (MnO) compound with sizes ranging from 60 to 200 nm. As a control sample, PP filaments without additives were also extruded. Three samples were produced containing different concentrations (wt%) of the TiO2 compound, i.e. 0.95, 1.24 and 1.79%. Nano metal-oxide distribution in the as-spun and drawn nanocomposite filaments was analysed. Although there are small clusters of the nanoparticles, the characterizing techniques showed good dispersion and distribution of the modified TiO2 along and across the processed filaments. From UV spectroscopy and TGA, a significant enhancement of polypropylene UV protection and thermal stability was observed: PP with the higher percentage of TiO2 absorbed at a UV wavelength of 387 nm and thermally decomposed at 320.16 °C, accompanied by 95% weight loss.

  9. Industrial Qualification Process for Optical Fibers Distributed Strain and Temperature Sensing in Nuclear Waste Repositories

    Directory of Open Access Journals (Sweden)

    S. Delepine-Lesoille

    2012-01-01

    Full Text Available Temperature and strain monitoring will be implemented in the envisioned French geological repository for high- and intermediate-level long-lived nuclear wastes. Raman and Brillouin scattering in optical fibers are efficient industrial methods to provide distributed temperature and strain measurements. Gamma radiation and hydrogen release from nuclear wastes can, however, affect the measurements. An industrial qualification process is proposed and successfully implemented. Induced measurement uncertainties and their physical origins are quantified, and the influence of the optical fiber composition is assessed. Based on radiation-hard fibers and carbon primary coatings, we showed that the proposed system can provide accurate temperature and strain measurements up to 0.5 MGy and 100% hydrogen concentration in the atmosphere, over a 200 m distance range. The selected system was successfully implemented in the Andra underground laboratory, in a one-to-one scale mockup of future cells, within concrete liners. We demonstrated the efficiency of simultaneous Raman and Brillouin scattering measurements to provide both strain and temperature distributed measurements, and showed that a 1.3 μm working wavelength is favorable for monitoring in such hazardous environments.

  10. Validation Studies of Temperature Distribution and Mould Filling Process for Composite Skeleton Castings

    Directory of Open Access Journals (Sweden)

    M. Cholewa

    2007-07-01

    Full Text Available In this work the authors show selected results of simulation and experimental studies on the temperature distribution during solidification of a composite skeleton casting and on the mould filling process (Figs. 4, 5, 6). The basic subject of the computer simulation was the analysis of the ability of the metal to fill the channels that create the skeleton shape, prepared in the form of a core. The analysis of filling for each consecutive level of the skeleton casting was conducted for the simulation results and the real casting. The skeleton casting was manufactured according to the proposed technology (Fig. 5). The number of fully filled nodes in the simulation was higher than that obtained in the experimental studies. It was observed in the experiment that the metal did not flow through the whole channel section during pouring, which suggests the possibility of reducing the channel section and points out the necessity of a local pressure increase.

  11. Contact pressure distribution during the polishing process of ceramic tiles: A laboratory investigation

    International Nuclear Information System (INIS)

    Sani, A S A; Hamedon, Z; Azhari, A; Sousa, F J P

    2016-01-01

    During the polishing process of porcelain tiles, the difference in scratching speed between the innermost and peripheral abrasives leads to pressure gradients linearly distributed along the radial direction of the abrasive tool. The aim of this paper is to investigate this pressure gradient at laboratory scale. For this purpose, polishing tests were performed on ceramic tiles according to industrial practice using a custom-made CNC tribometer. Gradual wear on both the abrasives and the machined surface of the floor tile was measured. The experimental results suggest that the pressure gradient tends to cause an inclination of the abraded surfaces, which becomes stable after a given polishing period. In addition to the wear depth of the machined surface, the highest gloss and finest surface finish were observed at the lowest point of the worn surface of the ceramic floor tile, corresponding to the point of highest pressure and lowest scratching speed. (paper)

  12. PC-based process distribution to solve iterative Monte Carlo simulations in physical dosimetry

    International Nuclear Information System (INIS)

    Leal, A.; Sanchez-Doblado, F.; Perucha, M.; Rincon, M.; Carrasco, E.; Bernal, C.

    2001-01-01

    A distribution model to simulate physical dosimetry measurements with Monte Carlo (MC) techniques has been developed. This approach is suited to simulations involving continuous changes of the measurement conditions (and hence of the input parameters), such as a TPR curve or the estimation of the resolution limit of an optimal densitometer in the case of small-field profiles. For comparison, a high-resolution scan for narrow beams with no iterative process is presented. The model has been installed on a network of PCs without any resident software. The only requirements for these PCs are a small, temporary Linux partition on the hard disk and a network connection to our server PC. (orig.)
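
    The task-farming pattern described here maps naturally onto a pool of networked machines. Below is a minimal, hypothetical Python sketch (the original work used bare Linux PCs and its own dispatch, not this API) that distributes one toy MC run per measurement point with `concurrent.futures`; `mc_dose` is a stand-in for a real transport code.

```python
import math
import random
from concurrent.futures import ProcessPoolExecutor

def mc_dose(depth_cm, n_histories=100_000):
    """Toy Monte Carlo dose estimate at one depth (a stand-in for a real
    EGS/MCNP-style run; only the task-farming pattern matters here)."""
    rng = random.Random(int(depth_cm * 1000))
    mu = 0.05  # toy attenuation coefficient, 1/cm
    hits = sum(1 for _ in range(n_histories)
               if rng.random() < math.exp(-mu * depth_cm))
    return depth_cm, hits / n_histories

if __name__ == "__main__":
    depths = [d * 0.5 for d in range(1, 41)]  # one task per measurement point
    with ProcessPoolExecutor() as pool:       # one process per available core/PC
        for depth, dose in pool.map(mc_dose, depths):
            print(f"depth {depth:4.1f} cm -> relative dose {dose:.4f}")
```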

  13. Distributed control and data processing system with a centralized database for a BWR power plant

    International Nuclear Information System (INIS)

    Fujii, K.; Neda, T.; Kawamura, A.; Monta, K.; Satoh, K.

    1980-01-01

    Recent digital techniques, building on advances in electronics and computer technology, have enabled very wide-scale application of computers to BWR power plant control and instrumentation. Computers of many kinds, from micro to mega, have been introduced separately, and to obtain better control and instrumentation system performance, a hierarchical computer complex system architecture has been developed. This paper addresses the hierarchical computer complex system architecture, which enables more efficient introduction of computer systems to a nuclear power plant. The distributed control and processing systems, which are the components of the hierarchical computer complex, are described in some detail, and the database for the hierarchical computer complex is also discussed. The hierarchical computer complex system has been developed and is now in the detailed design stage for actual power plant application. (auth)

  14. Distributed error and alarm processing in the CMS data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, G.; et al.

    2012-01-01

    The error and alarm system for the data acquisition of the Compact Muon Solenoid (CMS) at CERN was successfully used for the physics runs at the Large Hadron Collider (LHC) during the first three years of activity. Error and alarm processing entails the notification, collection, storing and visualization of all exceptional conditions occurring in the highly distributed CMS online system, using a uniform scheme. Alerts and reports are shown online by web application facilities that map them to graphical models of the system as defined by the user. A persistency service keeps a history of all exceptions that have occurred, allowing subsequent retrieval of user-defined time windows of events for later playback or analysis. This paper describes the architecture and the technologies used and deals with operational aspects during the first years of LHC operation. In particular we focus on performance, stability, and integration with the CMS sub-detectors.
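
    A minimal sketch of the collect/store/replay idea follows in Python; the class and record fields are illustrative inventions, not the actual CMS online services or their protocol.

```python
import time
from collections import deque

class AlarmCollector:
    """Minimal central store for exception records: uniform notification,
    history keeping, and time-window retrieval for later playback."""
    def __init__(self):
        self._history = deque()

    def notify(self, source, severity, message):
        # Uniform record scheme for every exceptional condition.
        self._history.append(
            {"t": time.time(), "source": source,
             "severity": severity, "message": message})

    def window(self, t_start, t_end):
        """Retrieve all exceptions in a user-defined time window."""
        return [e for e in self._history if t_start <= e["t"] <= t_end]

collector = AlarmCollector()
collector.notify("EventBuilder.RU12", "warn", "backpressure on input link")
collector.notify("HLT.Node0421", "error", "process exited unexpectedly")
print(collector.window(0, time.time()))
```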

  15. Contact pressure distribution during the polishing process of ceramic tiles: A laboratory investigation

    Science.gov (United States)

    Sani, A. S. A.; Sousa, F. J. P.; Hamedon, Z.; Azhari, A.

    2016-02-01

    During the polishing process of porcelain tiles, the difference in scratching speed between the innermost and peripheral abrasives leads to pressure gradients linearly distributed along the radial direction of the abrasive tool. The aim of this paper is to investigate this pressure gradient at laboratory scale. For this purpose, polishing tests were performed on ceramic tiles according to industrial practice using a custom-made CNC tribometer. Gradual wear on both the abrasives and the machined surface of the floor tile was measured. The experimental results suggest that the pressure gradient tends to cause an inclination of the abraded surfaces, which becomes stable after a given polishing period. In addition to the wear depth of the machined surface, the highest gloss and finest surface finish were observed at the lowest point of the worn surface of the ceramic floor tile, corresponding to the point of highest pressure and lowest scratching speed.

  16. Identification and verification of critical performance dimensions. Phase 1 of the systematic process redesign of drug distribution

    NARCIS (Netherlands)

    Colen, H.B.B.; Neef, C.; Schuring, R.W.

    2003-01-01

    Background: Worldwide, patient safety has become a major social policy problem for healthcare organisations. As in other organisations, the patients in our hospital also suffer from an inadequate distribution process, as becomes clear from incident reports involving medication errors. Medisch

  17. Vib-rotational energy distributions and relaxation processes in pulsed HF chemical lasers

    International Nuclear Information System (INIS)

    Ben-Shaul, A.; Kompa, K.L.; Schmailzl, U.

    1976-01-01

    The rate equations governing the temporal evolution of photon densities and level populations in pulsed F+H2→HF+H chemical lasers are solved for different initial conditions. The rate equations are solved simultaneously for all relevant vibrational-rotational levels and vibrational-rotational P-branch transitions. Rotational equilibrium is not assumed. Approximate expressions for the detailed state-to-state rate constants corresponding to the various energy transfer processes (V-V, V-R,T, R-R,T) coupling the vib-rotational levels are formulated on the basis of experimental data, approximate theories, and qualitative considerations. The main findings are as follows: at low pressures, R-T transfer cannot compete with the stimulated emission, and the laser output largely reflects the nonequilibrium energy distribution in the pumping reaction. The various transitions reach threshold and decay almost independently, and simultaneous lasing on several lines takes place. When a buffer gas is added in excess to the reacting mixture, the enhanced rotational relaxation leads to nearly single-line operation and to the J shift in lasing. Laser efficiency is higher at high inert gas pressures owing to a better extraction of the internal energy from partially inverted populations. V-V exchange enhances lasing from upper vibrational levels but reduces the total pulse intensity. V-R,T processes reduce the efficiency but do not substantially modify the spectral output distribution. The photon yield ranges between 0.4 and 1.4 photons/HF molecule depending on the initial conditions. Comparison with experimental data, when available, is fair

  18. Acme Landfill Expansion. Appendices.

    Science.gov (United States)

    1982-01-01


  19. Material processing of convection-driven flow field and temperature distribution under oblique gravity

    Science.gov (United States)

    Hung, R. J.

    1995-01-01

    A mathematical formulation is adopted to study vapor deposition from source materials driven by a heat transfer process under normal and oblique directions of gravitational acceleration in an extremely low pressure environment of 10^-2 mm Hg. A series of time animations of the initiation and development of flow and temperature profiles during the course of vapor deposition has been obtained through numerical computation. The computations show that vapor deposition is accomplished by the transfer of vapor through a fairly complicated recirculating flow pattern under normal-direction gravitational acceleration. Clearly, there is no way to produce homogeneous thin crystalline films with fine grains under such a complicated recirculating flow pattern with a non-uniform temperature distribution. For reverse normal-direction gravitational acceleration there is no vapor deposition, owing to a stably stratified medium without convection. Vapor deposition under oblique-direction gravitational acceleration introduces a reduced gravitational acceleration in the vertical direction, which is favorable for producing homogeneous thin crystalline films. However, an oblique-direction gravitational acceleration also induces an unfavorable gravitational acceleration along the horizontal direction, which initiates a complicated recirculating flow pattern. In other words, it is necessary to carry out vapor deposition under reduced gravity in future space shuttle experiments with an extremely low pressure environment to produce homogeneous crystalline films with fine grains. Fluid mechanics simulation can be used as a tool to suggest the most promising experimental setup to achieve the goal of processing the best nonlinear optical materials.

  20. Relationship between fiber degradation and residence time distribution in the processing of long fiber reinforced thermoplastics

    Directory of Open Access Journals (Sweden)

    2008-08-01

    Full Text Available Long fiber reinforced thermoplastics (LFT) were processed by in-line compounding equipment with a modified single-screw extruder. A pulse stimulus-response technique using PET spheres as the tracer was adopted to obtain the residence time distribution (RTD) of extrusion compounding. The RTD curves were fitted by a model based on the assumption that extrusion compounding is a combination of plug flow and mixed flow. The characteristic parameters of the RTD model, P, the fraction of plug flow reactor (PFR), and d, the fraction of dead volume of the continuous stirred tank reactor (CSTR), were associated with fiber degradation, as quantified by fiber length and dispersion. The effects of screw speed, mixing length and channel depth on the RTD curves and on the characteristic parameters of the RTD model, as well as their effects on fiber degradation, were investigated. The influence of shear force at different screw speeds and variable channel depth on fiber degradation was studied, and the main driver of fiber degradation was identified. An optimal process for balancing fiber length and dispersion is presented.
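
    For illustration, a common two-parameter RTD form for a plug-flow section in series with an ideal mixed tank can be fitted to tracer data as sketched below in Python (toy data, not the paper's measurements); the paper's model additionally carries a dead-volume fraction d, which would shrink the effective mixed time further.

```python
import numpy as np
from scipy.optimize import curve_fit

def rtd_pfr_cstr(t, tau_total, p_frac):
    """E(t) for a plug-flow section (fraction p_frac of tau_total) in series
    with an ideal mixed tank holding the remaining residence time."""
    tau_p = p_frac * tau_total          # pure delay from the plug-flow part
    tau_m = (1.0 - p_frac) * tau_total  # time constant of the mixed part
    return np.where(t >= tau_p, np.exp(-(t - tau_p) / tau_m) / tau_m, 0.0)

# Toy tracer response to a pulse of PET spheres
t = np.linspace(0, 600, 121)  # s
e_meas = rtd_pfr_cstr(t, 180.0, 0.4) + np.random.normal(0, 1e-4, t.size)

popt, _ = curve_fit(rtd_pfr_cstr, t, e_meas, p0=[150.0, 0.3],
                    bounds=([1.0, 0.01], [1000.0, 0.95]))
print(f"tau_total = {popt[0]:.1f} s, plug-flow fraction P = {popt[1]:.2f}")
```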

  1. Adaptive optimal control of highly dissipative nonlinear spatially distributed processes with neuro-dynamic programming.

    Science.gov (United States)

    Luo, Biao; Wu, Huai-Ning; Li, Han-Xiong

    2015-04-01

    Highly dissipative nonlinear partial differential equations (PDEs) are widely employed to describe the system dynamics of industrial spatially distributed processes (SDPs). In this paper, we consider the optimal control problem of general highly dissipative SDPs and propose an adaptive optimal control approach based on neuro-dynamic programming (NDP). Initially, Karhunen-Loève decomposition is employed to compute empirical eigenfunctions (EEFs) of the SDP based on the method of snapshots. These EEFs, together with a singular perturbation technique, are then used to obtain a finite-dimensional slow subsystem of ordinary differential equations that accurately describes the dominant dynamics of the PDE system. Subsequently, the optimal control problem is reformulated on the basis of the slow subsystem and converted to the solution of a Hamilton-Jacobi-Bellman (HJB) equation. The HJB equation is a nonlinear PDE that is in general impossible to solve analytically. Thus, an adaptive optimal control method is developed via NDP that solves the HJB equation online, using a neural network (NN) to approximate the value function; an online NN weight tuning law is proposed that does not require an initial stabilizing control policy. Moreover, by accounting for the NN estimation error, we prove that the original closed-loop PDE system with the adaptive optimal control policy is semiglobally uniformly ultimately bounded. Finally, the developed method is tested on a nonlinear diffusion-convection-reaction process and applied to a temperature cooling fin of a high-speed aerospace vehicle, and the results show its effectiveness.
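
    The snapshot-based Karhunen-Loève step can be sketched compactly. Below is a minimal Python/NumPy illustration of computing EEFs from snapshot data via an SVD (numerically equivalent to the method of snapshots), with toy data standing in for PDE solution snapshots.

```python
import numpy as np

def empirical_eigenfunctions(snapshots, n_modes):
    """Karhunen-Loeve (POD) empirical eigenfunctions via an SVD of the
    mean-subtracted snapshot matrix.

    snapshots : (n_space, n_time) array, one spatial profile per column
    Returns (n_space, n_modes) orthonormal EEFs and the modal energies.
    """
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    u, s, _ = np.linalg.svd(fluct, full_matrices=False)
    energy = s**2 / np.sum(s**2)        # captured variance per mode
    return u[:, :n_modes], energy[:n_modes]

# Toy snapshot set: one slow spatial mode plus noise on a 1-D domain
x = np.linspace(0, 1, 200)
t = np.linspace(0, 10, 80)
data = np.outer(np.sin(np.pi * x), np.cos(t)) + 0.01 * np.random.randn(200, 80)
eefs, energy = empirical_eigenfunctions(data, n_modes=3)
print("energy captured by first 3 modes:", energy.round(4))
```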

  2. Constructing regions of attainable sizes and achieving target size distribution in a batch cooling sonocrystallization process.

    Science.gov (United States)

    Bhoi, Stutee; Sarkar, Debasis

    2018-04-01

    The application of ultrasound to a crystallization process has several interesting benefits. The temperature of the crystallizer increases during ultrasonication and this makes it difficult for the temperature controller of the crystallizer to track a set temperature trajectory precisely. It is thus necessary to model this temperature rise and the temperature-trajectory tracking ability of the crystallizer controller to perform model-based dynamic optimization for a given cooling sonocrystallization set-up. In our previous study, we reported a mathematical model based on population balance framework for a batch cooling sonocrystallization of l-asparagine monohydrate (LAM). Here we extend the previous model by including energy balance equations and a Generic Model Control algorithm to simulate the temperature controller of the crystallizer that tracks a cooling profile during crystallization. The improved model yields very good closed-loop prediction and is conveniently used for studies related to particle engineering by optimization. First, the model is used to determine the regions of attainable particle sizes for LAM batch cooling sonocrystallization process by solving appropriate dynamic optimization problems. Then the model is used to determine optimal operating conditions for achieving a target crystal size distribution. The experimental evidence clearly demonstrates the efficiency of the particle engineering approach by optimization. Copyright © 2017 Elsevier B.V. All rights reserved.
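
    As a hedged illustration of the Generic Model Control idea (not the paper's fitted crystallizer model), the Python sketch below tracks a linear cooling profile by inverting a one-line energy balance that includes a constant ultrasound heat input; every parameter value here is invented for the example.

```python
import numpy as np

# Illustrative constants (not fitted values from the paper):
UA, M_CP = 50.0, 4000.0   # heat-transfer term (W/K), slurry thermal mass (J/K)
Q_US = 80.0               # ultrasound power dissipated in the slurry, W
K1, K2 = 0.05, 1e-4       # GMC tuning constants

def simulate(t_end=3600.0, dt=1.0):
    T, integral = 40.0, 0.0                  # initial slurry temperature, C
    for t in np.arange(0.0, t_end, dt):
        T_set = 40.0 - 10.0 * t / t_end      # linear cooling profile, 40 -> 30 C
        err = T_set - T
        integral += err * dt
        # GMC: demand dT/dt = K1*err + K2*integral, then invert the energy
        # balance M_CP*dT/dt = Q_US - UA*(T - T_jacket) for T_jacket.
        dTdt_demand = K1 * err + K2 * integral
        T_jacket = T - (Q_US - M_CP * dTdt_demand) / UA
        # plant step (Euler) with the computed jacket temperature
        T += dt * (Q_US - UA * (T - T_jacket)) / M_CP
    return T

print(f"final slurry temperature: {simulate():.2f} C (target 30.00 C)")
```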

  3. Distributed processing of a GPS receiver network for a regional ionosphere map

    Science.gov (United States)

    Choi, Kwang Ho; Hoo Lim, Joon; Yoo, Won Jae; Lee, Hyung Keun

    2018-01-01

    This paper proposes a distributed processing method applicable to GPS receivers in a network to generate a regional ionosphere map accurately and reliably. For accuracy, the proposed method is operated by multiple local Kalman filters and Kriging estimators. Each local Kalman filter is applied to a dual-frequency receiver to estimate the receiver’s differential code bias and vertical ionospheric delays (VIDs) at different ionospheric pierce points. The Kriging estimator selects and combines several VID estimates provided by the local Kalman filters to generate the VID estimate at each ionospheric grid point. For reliability, the proposed method uses receiver fault detectors and satellite fault detectors. Each receiver fault detector compares the VID estimates of the same local area provided by different local Kalman filters. Each satellite fault detector compares the VID estimate of each local area with that projected from the other local areas. Compared with the traditional centralized processing method, the proposed method is advantageous in that it considerably reduces the computational burden of each single Kalman filter and enables flexible fault detection, isolation, and reconfiguration capability. To evaluate the performance of the proposed method, several experiments with field collected measurements were performed.
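
    The combination step at each grid point can be illustrated with an ordinary-kriging solve. The Python sketch below uses an assumed exponential covariance (a real system would fit the variogram) and hypothetical pierce-point data.

```python
import numpy as np

def ordinary_kriging(coords, values, target, sill=1.0, rng_km=500.0):
    """Ordinary-kriging combination of vertical ionospheric delay (VID)
    estimates at pierce points into one grid-point estimate.

    coords : (n, 2) pierce-point positions (km); values : (n,) VIDs (m).
    The exponential covariance below is an assumption for illustration."""
    coords = np.asarray(coords, float)
    n = len(values)

    def cov(a, b):
        return sill * np.exp(-np.linalg.norm(a - b) / rng_km)

    # Kriging system: [[C, 1], [1^T, 0]] [w, mu]^T = [c0, 1]^T
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = cov(coords[i], coords[j])
    b = np.ones(n + 1)
    for i in range(n):
        b[i] = cov(coords[i], np.asarray(target, float))
    w = np.linalg.solve(A, b)[:n]       # weights sum to one (unbiasedness)
    return float(w @ np.asarray(values, float))

pierce_points = [(0, 0), (120, 40), (80, -60), (-90, 30)]
vids = [2.1, 2.4, 2.0, 2.3]             # meters of delay from local filters
print(ordinary_kriging(pierce_points, vids, target=(30, 10)))
```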

  4. CLASSIFICATION OF POWER QUALITY CONSIDERING VOLTAGE SAGS IN DISTRIBUTION SYSTEMS USING KDD PROCESS

    Directory of Open Access Journals (Sweden)

    Anderson Roges Teixeira Góes

    2015-08-01

    Full Text Available In this paper, we propose a methodology to classify Power Quality (PQ) in distribution systems based on voltage sags. The methodology uses the KDD process (Knowledge Discovery in Databases) in order to establish a quality level to be printed on labels. The methodology was applied to feeders of a substation located in Curitiba, Paraná, Brazil, considering attributes such as sag magnitude (remnant voltage), duration and frequency (number of occurrences in a given period of time). In the Data Mining stage (the main stage of the KDD process), three different techniques were used and compared for pattern recognition, in order to achieve the quality classification for the feeders: Artificial Neural Networks (ANN), Support Vector Machines (SVM) and Genetic Algorithms (GA). By printing a label with quality level information, utility companies (power concessionaires) can better organize mitigation procedures by establishing clear targets. Moreover, in the same way that customers already receive information regarding PQ based on interruptions, they will also be able to receive information based on voltage sags.
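
    As a simple illustration of the pattern-recognition stage, the sketch below trains one of the three techniques (an SVM, via scikit-learn) on the three sag attributes named above; the feeder records and quality labels are synthetic placeholders, not the Curitiba data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row describes one feeder-period by the paper's three attributes:
# remnant voltage (p.u.), sag duration (s), and number of occurrences.
X = np.array([[0.85, 0.10, 2], [0.60, 0.50, 9], [0.90, 0.05, 1],
              [0.45, 1.20, 14], [0.70, 0.30, 5], [0.55, 0.80, 11]])
y = np.array(["A", "C", "A", "D", "B", "C"])   # quality level for the label

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([[0.65, 0.40, 7]]))          # quality class for a new feeder
```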

  5. Distributed representation of social odors indicates parallel processing in the antennal lobe of ants.

    Science.gov (United States)

    Brandstaetter, Andreas Simon; Kleineidam, Christoph Johannes

    2011-11-01

    In colonies of eusocial Hymenoptera, cooperation is organized through social odors, and ants in particular rely on a sophisticated odor communication system. Neuronal information about odors is represented in spatial activity patterns in the primary olfactory neuropile of the insect brain, the antennal lobe (AL), which is analogous to the vertebrate olfactory bulb. The olfactory system is characterized by neuroanatomical compartmentalization, yet the functional significance of this organization is unclear. Using two-photon calcium imaging, we investigated the neuronal representation of multicomponent colony odors, which ants assess to discriminate friends (nestmates) from foes (non-nestmates). In the carpenter ant Camponotus floridanus, colony odors elicited spatial activity patterns distributed across different AL compartments. Activity patterns in response to nestmate and non-nestmate colony odors overlapped. This was expected, since both consist of the same components at differing ratios. Colony odors change over time, and the nervous system has to constantly adjust for this (template reformation). Measured activity patterns were variable, and variability was higher in response to repeated nestmate than to repeated non-nestmate colony odor stimulation. Variable activity patterns may indicate neuronal plasticity within the olfactory system, which is necessary for template reformation. Our results indicate that information about colony odors is processed in parallel in different neuroanatomical compartments, using the computational power of the whole AL network. Parallel processing might be advantageous, allowing reliable discrimination of highly complex social odors.

  6. Advanced optical sensing and processing technologies for the distributed control of large flexible spacecraft

    Science.gov (United States)

    Williams, G. M.; Fraser, J. C.

    1991-01-01

    The objective was to examine state-of-the-art optical sensing and processing technology applied to controlling the motion of flexible spacecraft. Proposed large flexible space systems, such as optical telescopes and antennas, will require control over vast surfaces. Most likely, distributed control will be necessary, involving many sensors to accurately measure the surface, and a similarly large number of actuators must act upon the system. The technical approach included reviewing proposed NASA missions to assess system needs and requirements. A candidate mission was chosen as a baseline study spacecraft for comparison of conventional and optical control components. The control system requirements of the baseline system were used to design both a control system containing current off-the-shelf components and a system utilizing electro-optical devices for sensing and processing. State-of-the-art surveys of conventional sensor, actuator, and processor technologies were performed. A technology development plan is presented that lays out a logical, effective way to develop and integrate the advancing technologies.

  7. A Framework for the Development of Scalable Heterogeneous Robot Teams with Dynamically Distributed Processing

    Science.gov (United States)

    Martin, Adrian

    As the applications of mobile robotics evolve, it has become increasingly less practical for researchers to design custom hardware and control systems for each problem. This research presents a new approach to control system design that looks beyond end-of-lifecycle performance and considers control system structure, flexibility, and extensibility. Toward these ends the Control ad libitum philosophy is proposed, stating that to make significant progress in the real-world application of mobile robot teams the control system must be structured such that teams can be formed in real-time from diverse components. The Control ad libitum philosophy was applied to the design of the HAA (Host, Avatar, Agent) architecture: a modular hierarchical framework built with provably correct distributed algorithms. A control system for exploration and mapping, search and deploy, and foraging was developed to evaluate the architecture in three sets of hardware-in-the-loop experiments. First, the basic functionality of the HAA architecture was studied, specifically the ability to: a) dynamically form the control system, b) dynamically form the robot team, c) dynamically form the processing network, and d) handle heterogeneous teams. Secondly, the real-time performance of the distributed algorithms was tested, and proved effective for the moderate-sized systems tested. Furthermore, the distributed Just-in-time Cooperative Simultaneous Localization and Mapping (JC-SLAM) algorithm demonstrated accuracy equal to or better than traditional approaches in resource-starved scenarios, while reducing exploration time significantly. The JC-SLAM strategies are also suitable for integration into many existing particle filter SLAM approaches, complementing their unique optimizations. Thirdly, the control system was subjected to concurrent software and hardware failures in a series of increasingly complex experiments. Even with unrealistically high rates of failure the control system was able to

  8. Sources and processes affecting the distribution of dissolved Nd isotopes and concentrations in the West Pacific

    Science.gov (United States)

    Behrens, Melanie K.; Pahnke, Katharina; Schnetger, Bernhard; Brumsack, Hans-Jürgen

    2018-02-01

    In the Atlantic, where deep circulation is vigorous, the dissolved neodymium (Nd) isotopic composition (expressed as ɛNd) is largely controlled by water mass mixing. In contrast, the factors influencing the ɛNd distribution in the Pacific, marked by sluggish circulation, are not yet clear. There are indications of regional overprints in the Pacific, given its bordering volcanic islands. Our study aims to clarify the impact and relative importance of different Nd sources (rivers, volcanic islands), vertical (bio)geochemical processes and lateral water mass transport in controlling the dissolved ɛNd and Nd concentration ([Nd]) distributions in the West Pacific between South Korea and Fiji. We find indications of unradiogenic continental input from South Korean and Chinese rivers to the East China Sea. In the tropical West Pacific, volcanic islands supply Nd to surface and subsurface waters and modify their ɛNd to radiogenic values of up to +0.7. These radiogenic signatures allow detailed tracing of currents flowing to the east and differentiation from westward currents carrying an open-ocean Pacific ɛNd composition in the complex tropical Pacific zonal current system. Modified radiogenic ɛNd of West Pacific intermediate to bottom waters upstream of or within our section also indicates non-conservative behavior of ɛNd due to boundary exchange at volcanic island margins, at submarine ridges, and with hydrothermal particles. Only subsurface to deep waters (3000 m) in the open Northwest Pacific show conservative behavior of ɛNd. In contrast, we find a striking correlation of extremely low (down to 2.77 pmol/kg Nd) and laterally constant [Nd] with the high-salinity North and South Pacific Tropical Water, indicating lateral transport of preformed [Nd] from the North and South Pacific subtropical gyres into the study area. This observation also explains the previously observed low subsurface [Nd] in the tropical West Pacific. Similarly, Western South Pacific Central Water, Antarctic

  9. Distribution and biophysical processes of beaded streams in Arctic permafrost landscapes

    Science.gov (United States)

    Arp, Christopher D.; Whitman, Matthew S.; Jones, Benjamin M.; Grosse, Guido; Gaglioti, Benjamin V.; Heim, Kurt C.

    2015-01-01

    Beaded streams are widespread in permafrost regions and are considered a common thermokarst landform. However, little is known about their distribution, how and under what conditions they form, and how their intriguing morphology translates to ecosystem functions and habitat. Here we report on a circum-Arctic survey of beaded streams and a watershed-scale analysis in northern Alaska using remote sensing and field studies. We mapped over 400 channel networks with beaded morphology throughout the continuous permafrost zone of northern Alaska, Canada, and Russia and found the highest abundance associated with medium- to high-ground-ice-content permafrost in moderately sloping terrain. In the Fish Creek watershed, beaded streams accounted for half of the drainage density, occurring primarily as low-order channels initiating from lakes and drained lake basins. Beaded streams predictably transition to alluvial channels with increasing drainage area and decreasing channel slope, although this transition is modified by local controls on water and sediment delivery. Comparison of one beaded channel using repeat photography between 1948 and 2013 indicates a relatively stable landform, and 14C dating of basal sediments suggests that channel formation may be as early as the Pleistocene-Holocene transition. Contemporary processes, such as deep snow accumulation in riparian zones, effectively insulate channel ice and allow for perennial liquid water below most beaded stream pools. Because of this, mean annual temperatures in pool beds are greater than 2°C, leading to the development of perennial thaw bulbs, or taliks, underlying these thermokarst features. In the summer, some pools thermally stratify, which reduces permafrost thaw and maintains coldwater habitats. Snowmelt-generated peak flows decrease rapidly by two or more orders of magnitude to summer low flows with slow reach-scale velocity distributions ranging from 0.1 to 0.01 m/s, yet channel runs still move water rapidly

  10. Distributed and cloud computing from parallel processing to the Internet of Things

    CERN Document Server

    Hwang, Kai; Fox, Geoffrey C

    2012-01-01

    Distributed and Cloud Computing, named a 2012 Outstanding Academic Title by the American Library Association's Choice publication, explains how to create high-performance, scalable, reliable systems, exposing the design principles, architecture, and innovative applications of parallel, distributed, and cloud computing systems. Starting with an overview of modern distributed models, the book provides comprehensive coverage of distributed and cloud computing, including: Facilitating management, debugging, migration, and disaster recovery through virtualization Clustered systems for resear

  11. The imperial conquest and reordering of the production, processing, distribution and consumption of food: a theoretical contribution

    NARCIS (Netherlands)

    Ploeg, van der J.D.

    2008-01-01

    The imperial conquest and reordering of the production, processing, distribution and consumption of food: a theoretical contribution - Empire is a new mode of ordering and governance. Food empires are monopolistic networks that control large, and expanding, parts of the production, processing,

  12. A Scalable Infrastructure for Lidar Topography Data Distribution, Processing, and Discovery

    Science.gov (United States)

    Crosby, C. J.; Nandigam, V.; Krishnan, S.; Phan, M.; Cowart, C. A.; Arrowsmith, R.; Baru, C.

    2010-12-01

    High-resolution topography data acquired with lidar (light detection and ranging) technology have emerged as a fundamental tool in the Earth sciences, and are also being widely utilized for ecological, planning, engineering, and environmental applications. Collected from airborne, terrestrial, and space-based platforms, these data are revolutionary because they permit analysis of geologic and biologic processes at resolutions essential for their appropriate representation. Public-domain lidar data collected by federal, state, and local agencies are a valuable resource to the scientific community; however, the data pose significant distribution challenges because of the volume and complexity of data that must be stored, managed, and processed. Lidar data acquisition may generate terabytes of data in the form of point clouds, digital elevation models (DEMs), and derivative products. This massive volume of data is often challenging to host for resource-limited agencies. Furthermore, these data can be technically challenging for users who lack appropriate software, computing resources, and expertise. The National Science Foundation-funded OpenTopography Facility (www.opentopography.org) has developed a cyberinfrastructure-based solution to enable online access to Earth science-oriented high-resolution lidar topography data, online processing tools, and derivative products. OpenTopography provides access to terabytes of point cloud data, standard DEMs, and Google Earth image data, all co-located with computational resources for on-demand data processing. The OpenTopography portal is built upon a cyberinfrastructure platform that utilizes a Services Oriented Architecture (SOA) to provide a modular system that is highly scalable and flexible enough to support the growing needs of the Earth science lidar community. OpenTopography strives to host and provide access to datasets as soon as they become available, and also to expose greater application level functionalities to

  13. Processes and their explanatory factors governing distribution of organic phosphorous pools in lake sediments.

    Science.gov (United States)

    Lü, Changwei; He, Jiang; Zuo, Le; Vogt, Rolf D; Zhu, Liang; Zhou, Bin; Mohr, Christian W; Guan, Rui; Wang, Weiying; Yan, Daohao

    2016-02-01

    The amount of organic phosphorus (OP) and its distribution among different pools in lake sediments depend on the biotic and abiotic processes driving OP fractionation. Key environmental factors governing the transformation processes between OP fractions in sediments were studied on the basis of the geochemical characteristics of OP pools in relation to environmental factors in the sediments. The results illustrate that the factors influencing the accumulation or depletion of different OP pools are intrinsically dependent on the composition of the deposited organic matter (OM). During mineralization of the OM, microorganisms excrete the enzyme alkaline phosphatase, accelerating OP hydrolysis and thereby setting the grounds for the bacterially mediated oxidation of OM. There are two main fates of the labile OP pool (LOP) and the moderately labile OP pool (MLOP): either the OP is transformed to a dissolved organic or inorganic P form and thereby released to the water column, or it is transformed to a non-labile OP pool and stored in the sediments. A comparative study showed that oxy-hydroxides of iron (Fe) and aluminum (Al) played an important role in influencing OP fractionation only in Lake Wuliangsuhai, while the complexation of OP with calcium ions and sorption to calcium minerals are key factors governing OP fractionation in the two alkaline lakes. It is worth noting that a significant correlation between the Fe-P pool and the LOP and MLOP pools indicates that the degradation of these rather labile OP pools is highly dependent on the iron redox reaction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. DOES EU-INTEGRATION CHANGE AGGLOMERATION PROCESS? THE IMPACT OF EU MEMBERSHIP PROCESS ON THE CITY-SIZE DISTRIBUTION OF TURKEY

    Directory of Open Access Journals (Sweden)

    Engin Sorhun

    2013-02-01

    Full Text Available This paper aims to reveal the eventual impacts of the European Union (EU) membership process and other conventional factors on the city-size distribution of a candidate country (Turkey). The main results are as follows: across different estimation methods, the direct effect of the EU reforms on agglomerating forces, rather than congesting forces, is revealed to be dominant for Turkey. However, the main impact of the EU membership process has a positive but modest coefficient, which indicates the country's weak willingness to implement EU reforms. Keywords: Economic integration, agglomeration, city-size distribution, EU, Turkey. JEL Classification: F15, F22, R12, R23

  15. Processing and Characterization of a Novel Distributed Strain Sensor Using Carbon Nanotube-Based Nonwoven Composites.

    Science.gov (United States)

    Dai, Hongbo; Thostenson, Erik T; Schumacher, Thomas

    2015-07-21

    This paper describes the development of an innovative carbon nanotube-based nonwoven composite sensor that can be tailored for strain sensing properties and potentially offers a reliable and cost-effective sensing option for structural health monitoring (SHM). This novel strain sensor is fabricated using a readily scalable process of coating carbon nanotubes (CNTs) onto a nonwoven carrier fabric to form an electrically isotropic conductive network. Epoxy is then infused into the CNT-modified fabric to form a free-standing nanocomposite strain sensor. By measuring the changes in the electrical properties of the sensing composite, the deformation can be measured in real time. The sensors are repeatable and linear up to 0.4% strain. The highest elastic strain gage factors of 1.9 and 4.0 have been achieved in the longitudinal and transverse directions, respectively. Although the longitudinal gage factor of the newly formed nanocomposite sensor is close to that of some metallic foil strain gages, the proposed sensing methodology offers spatial coverage, manufacturing customizability, and distributed sensing capability, as well as transverse sensitivity.
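
    Converting the measured electrical change to strain is a one-line application of the gage factor; a minimal sketch follows, using the reported longitudinal value (the resistance readings are made up for the example).

```python
def strain_from_resistance(r_now, r0, gage_factor=1.9):
    """Convert a measured resistance change of the sensing composite to
    strain via the elastic gage factor: GF = (dR/R0) / strain.
    1.9 is the reported longitudinal value; 4.0 applies transversely."""
    return (r_now - r0) / r0 / gage_factor

r0 = 1000.0                                  # unstrained resistance, ohms
print(strain_from_resistance(1007.6, r0))    # -> 0.004, i.e. 0.4% strain
```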

  16. Processing and Characterization of a Novel Distributed Strain Sensor Using Carbon Nanotube-Based Nonwoven Composites

    Directory of Open Access Journals (Sweden)

    Hongbo Dai

    2015-07-01

    Full Text Available This paper describes the development of an innovative carbon nanotube-based nonwoven composite sensor that can be tailored for strain sensing properties and potentially offers a reliable and cost-effective sensing option for structural health monitoring (SHM). This novel strain sensor is fabricated using a readily scalable process of coating carbon nanotubes (CNTs) onto a nonwoven carrier fabric to form an electrically isotropic conductive network. Epoxy is then infused into the CNT-modified fabric to form a free-standing nanocomposite strain sensor. By measuring the changes in the electrical properties of the sensing composite, the deformation can be measured in real time. The sensors are repeatable and linear up to 0.4% strain. The highest elastic strain gage factors of 1.9 and 4.0 have been achieved in the longitudinal and transverse directions, respectively. Although the longitudinal gage factor of the newly formed nanocomposite sensor is close to that of some metallic foil strain gages, the proposed sensing methodology offers spatial coverage, manufacturing customizability, and distributed sensing capability, as well as transverse sensitivity.

  17. Reexamination of fission fragment angular distributions and the fission process: Formalism

    International Nuclear Information System (INIS)

    Bond, P.D.

    1985-01-01

    The theory of fission fragment angular distributions is examined and the universally used expression is found to be valid only under restrictive assumptions. A more general angular distribution formula is derived and applied to recent data of high spin systems. At the same time it is shown that the strong anisotropies observed from such systems can be understood without changing the essential basis of standard fission theory. The effects of reaction mechanisms other than complete fusion on fission fragment angular distributions are discussed and possible angular distribution signatures of noncompound nucleus formation are mentioned

  18. Estimation of the processes controlling variability in phytoplankton pigment distributions on the southeastern U.S. continental shelf

    Science.gov (United States)

    Mcclain, Charles R.; Ishizaka, Joji; Hofmann, Eileen E.

    1990-01-01

    Five coastal-zone-color-scanner images from the southeastern U.S. continental shelf are combined with concurrent moored current meter measurements to assess the processes controlling the variability in chlorophyll concentration and distribution in this region. An equation governing the space and time distribution of a nonconservative quantity such as chlorophyll is used in the calculations. The terms of the equation, estimated from observations, show that advective, diffusive, and local processes contribute to the plankton distributions and vary with time and location. The results from this calculation are compared with similar results obtained using a numerical physical-biological model with circulation fields derived from an optimal interpolation of the current meter observations and it is concluded that the two approaches produce different estimates of the processes controlling phytoplankton variability.

  19. Fast analysis of molecular dynamics trajectories with graphics processing units-Radial distribution function histogramming

    International Nuclear Information System (INIS)

    Levine, Benjamin G.; Stone, John E.; Kohlmeyer, Axel

    2011-01-01

    The calculation of radial distribution functions (RDFs) from molecular dynamics trajectory data is a common and computationally expensive analysis task. The rate limiting step in the calculation of the RDF is building a histogram of the distance between atom pairs in each trajectory frame. Here we present an implementation of this histogramming scheme for multiple graphics processing units (GPUs). The algorithm features a tiling scheme to maximize the reuse of data at the fastest levels of the GPU's memory hierarchy and dynamic load balancing to allow high performance on heterogeneous configurations of GPUs. Several versions of the RDF algorithm are presented, utilizing the specific hardware features found on different generations of GPUs. We take advantage of larger shared memory and atomic memory operations available on state-of-the-art GPUs to accelerate the code significantly. The use of atomic memory operations allows the fast, limited-capacity on-chip memory to be used much more efficiently, resulting in a fivefold increase in performance compared to the version of the algorithm without atomic operations. The ultimate version of the algorithm running in parallel on four NVIDIA GeForce GTX 480 (Fermi) GPUs was found to be 92 times faster than a multithreaded implementation running on an Intel Xeon 5550 CPU. On this multi-GPU hardware, the RDF between two selections of 1,000,000 atoms each can be calculated in 26.9 s per frame. The multi-GPU RDF algorithms described here are implemented in VMD, a widely used and freely available software package for molecular dynamics visualization and analysis.
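
    The histogramming step itself is easy to state. The NumPy sketch below computes g(r) for one frame with the minimum-image convention in an orthorhombic box, as a plain-CPU stand-in for the GPU kernels described above.

```python
import numpy as np

def rdf(positions_a, positions_b, box, r_max, n_bins):
    """Histogram pair distances for one trajectory frame and normalize by
    the ideal-gas shell count to get g(r) (orthorhombic box)."""
    box = np.asarray(box, float)
    # all pair displacement vectors with the minimum-image convention
    d = positions_a[:, None, :] - positions_b[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d**2).sum(axis=-1)).ravel()
    counts, edges = np.histogram(r[r > 0], bins=n_bins, range=(0.0, r_max))
    # ideal-gas normalization: N_a * rho_b * shell volume
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    rho_b = len(positions_b) / box.prod()
    ideal = len(positions_a) * rho_b * shell_vol
    return 0.5 * (edges[1:] + edges[:-1]), counts / ideal

rng = np.random.default_rng(0)
a = rng.uniform(0, 20.0, (500, 3))
b = rng.uniform(0, 20.0, (500, 3))
r_mid, g = rdf(a, b, box=(20.0, 20.0, 20.0), r_max=10.0, n_bins=50)
print(g[:5])  # ~1.0 for ideal-gas-like random points
```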

  20. A community dataspace for distribution and processing of "long tail" high resolution topography data

    Science.gov (United States)

    Crosby, C. J.; Nandigam, V.; Arrowsmith, R.

    2016-12-01

    Topography is a fundamental observable for Earth and environmental science and engineering. High resolution topography (HRT) is revolutionary for Earth science. Cyberinfrastructure that enables users to discover, manage, share, and process these data increases the impact of investments in data collection and catalyzes scientific discovery. National Science Foundation-funded OpenTopography (OT, www.opentopography.org) employs cyberinfrastructure that includes large-scale data management, high-performance computing, and service-oriented architectures, providing researchers with efficient online access to large, HRT (mostly lidar) datasets, metadata, and processing tools. HRT data are collected from satellite, airborne, and terrestrial platforms at increasingly finer resolutions, greater accuracy, and shorter repeat times. There has been a steady increase in OT data holdings due to partnerships and collaborations with various organizations within the academic NSF domain and beyond. With the decreasing costs of HRT data collection, via methods such as Structure from Motion, the number of researchers collecting these data is increasing. Researchers collecting these "long-tail" topography data (of modest size but great value) face impediments, especially the costs associated with making the data widely discoverable, shared, annotated, cited, managed and archived. Also, because there are no existing central repositories or services to support storage and curation of these datasets, much of this data is isolated and difficult to locate and preserve. To overcome these barriers and provide efficient centralized access to these high-impact datasets, OT is developing a "Community DataSpace", a service built on a low-cost storage cloud (e.g. AWS S3), to make it easy for researchers to upload, curate, annotate and distribute their datasets. The system's ingestion workflow will extract metadata from the uploaded data; validate it; assign a digital object identifier (DOI); and create a searchable

  1. A Weibull distribution with power-law tails that describes the first passage time processes of foreign currency exchanges

    Science.gov (United States)

    Sazuka, Naoya; Inoue, Jun-Ichi

    2007-03-01

    A Weibull distribution with power-law tails is confirmed as a good candidate to describe the first passage time process of foreign currency exchange rates. The Lorentz curve and the corresponding Gini coefficient for a Weibull distribution are derived analytically. We show that the coefficient is in good agreement with the same quantity calculated from the empirical data. We also calculate the average waiting time, an important measure of how long customers must wait until the next price change after they log in to their computer systems. By assuming that the first passage time distribution changes its shape from the Weibull to the power law at some critical time, we evaluate the averaged waiting time by means of the renewal-reward theorem. We find that our correction of the tails of the distribution makes the averaged waiting time much closer to the value obtained from empirical data analysis. We also discuss the deviation from the estimated average waiting time by deriving the waiting time distribution directly. These results lead us to conclude that the first passage process of foreign currency exchange rates is well described by a Weibull distribution with power-law tails.
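
    For a Weibull distribution with shape m, the Lorentz-curve derivation yields the closed form G = 1 - 2^(-1/m). The sketch below checks this against the empirical Gini of simulated first-passage-time-like data; the shape and scale are illustrative values, not the paper's fit.

```python
import numpy as np

def gini_weibull(m):
    """Analytic Gini coefficient of a Weibull distribution with shape m
    (the closed form that follows from its Lorentz curve)."""
    return 1.0 - 2.0 ** (-1.0 / m)

def gini_empirical(samples):
    """Gini coefficient from data via the sorted-sample formula."""
    x = np.sort(np.asarray(samples, float))
    n = x.size
    return (2.0 * np.arange(1, n + 1) - n - 1.0) @ x / (n * x.sum())

rng = np.random.default_rng(1)
m, scale = 0.6, 50.0                      # illustrative parameters
fpt = scale * rng.weibull(m, size=200_000)
print(f"analytic Gini  : {gini_weibull(m):.4f}")
print(f"empirical Gini : {gini_empirical(fpt):.4f}")
```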

  2. Characterizing and reducing equifinality by constraining a distributed catchment model with regional signatures, local observations, and process understanding

    Directory of Open Access Journals (Sweden)

    C. Kelleher

    2017-07-01

    Full Text Available Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
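
    The hierarchical screening itself reduces to applying constraints in sequence and keeping the intersection. A minimal Python sketch with placeholder scores and thresholds (not the study's actual criteria) follows.

```python
import numpy as np

def behavioral_filter(param_sets, metrics, thresholds):
    """Hierarchical screening of Monte Carlo parameter sets: keep only sets
    that pass every constraint, applied in order (e.g. regional signatures,
    then hydrograph fit, then SWE fit, then groundwater pattern checks).

    metrics    : dict name -> (n_sets,) array of scores (higher = better)
    thresholds : ordered list of (name, minimum acceptable score)."""
    keep = np.ones(len(param_sets), dtype=bool)
    for name, min_score in thresholds:
        keep &= metrics[name] >= min_score
        print(f"after '{name}' constraint: {keep.sum()} behavioral sets")
    return param_sets[keep]

n = 10_000
rng = np.random.default_rng(2)
params = rng.uniform(0, 1, (n, 12))            # 12 model parameters (toy)
metrics = {"regional": rng.uniform(0, 1, n),   # placeholder scores
           "streamflow_nse": rng.uniform(-1, 1, n),
           "swe_nse": rng.uniform(-1, 1, n)}
survivors = behavioral_filter(
    params, metrics,
    [("regional", 0.5), ("streamflow_nse", 0.6), ("swe_nse", 0.6)])
print(f"{len(survivors)} sets remain for expert-knowledge screening")
```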

  3. Energy Conservation in Mobile Devices and Applications: A Case for Context Parsing, Processing and Distribution in Clouds

    Directory of Open Access Journals (Sweden)

    Saad Liaquat Kiani

    2013-01-01

    Full Text Available Context information consumed and produced by applications on mobile devices needs to be represented, disseminated, processed and consumed by numerous components in a context-aware system. Significant amounts of context consumption, production and processing take place on mobile devices, and there is limited or no support for collaborative modelling, persistence and processing between device-Cloud ecosystems. In this paper we propose an environment for context processing in a Cloud-based distributed infrastructure that offloads complex context processing from the applications on mobile devices. An experimental analysis of complexity-based context-processing categories has been carried out to establish the processing-load boundary. The results demonstrate that the proposed collaborative infrastructure provides significant performance and energy conservation benefits for mobile devices and applications.

  4. A new technique in the theory of angular distributions in atomic processes: the angular distribution of photoelectrons in single and double photoionization

    Energy Technology Data Exchange (ETDEWEB)

    Manakov, N.L.; Meremianin, A.V. [Voronezhskij Gosudarstvennyj Univ., Voronezh (Russian Federation); Marmo, S.I. [Voronezhskij Gosudarstvennyj Univ., Voronezh (Russian Federation)]|[Palermo Univ. (Italy)

    1996-07-14

    Special reduction formulae for bipolar harmonics with higher ranks of internal spherical functions are derived; these will be useful in problems involving multiple expansions in spherical functions. Together with irreducible tensor operator techniques, these results provide a new and effective approach that enables one to extract the geometrical and dynamical factors from the cross sections of atomic processes with polarized particles, with an accurate account of all the polarization effects. The angular distribution of polarized electrons and the circular dichroism in photoionization of polarized atoms with an arbitrary angular momentum J0 are presented in an invariant vector form. A specific circular dichroism, which is caused by the correlation of electron and atom orientations, is discussed. The angular distribution of escaping electrons in double photoionization of an unpolarized atom is presented in a simple form. A convenient parametrization is proposed for describing the dependence of the photoprocess cross sections on the polarization state of the photon beam. (Author).

  5. Evaluation of processing factors for selected organic contaminants during virgin olive oil production: Distribution of BTEXS during olives processing.

    Science.gov (United States)

    López-Blanco, Rafael; Gilbert-López, Bienvenida; Rojas-Jiménez, Rubén; Robles-Molina, José; Ramos-Martos, Natividad; García-Reyes, Juan F; Molina-Díaz, Antonio

    2016-05-15

    The presence of BTEXS (benzene, toluene, ethylbenzene, xylenes and styrene) in virgin olive oils can be attributed to environmental contamination, but also to biological processes during oil lipogenesis (styrene). In this work, the processing factor of BTEXS from olives to olive oil during its production was evaluated at lab scale with an Abencor system. Benzene showed the lowest processing factor (15%), whereas toluene and xylenes showed intermediate behavior (40-60% efficiency), and ethylbenzene and styrene were completely transferred (100%). In addition, an attempt to examine the contribution of potential sources to olive contamination with BTEXS was carried out for the first time. Two types of olive samples were classified according to their proximity to the contamination source (road). Although higher levels of BTEXS were found in samples close to roads, the concentrations were relatively low and do not constitute a major contribution to the BTEXS usually detected in olive oil. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Peach: a simple Perl-based system for distributed computation and its application to cryo-EM data processing.

    Science.gov (United States)

    Leong, Peter A; Heymann, J Bernard; Jensen, Grant J

    2005-04-01

    A simple distributed processing system named "Peach" was developed to meet the rising computational demands of modern structural biology (and other) laboratories without additional expense by using existing hardware resources more efficiently. A central server distributes jobs to idle workstations in such a way that each computer is used maximally, but without disturbing intermittent interactive users. As compared to other distributed systems, Peach is simple, easy to install, easy to administer, easy to use, scalable, and robust. While it was designed to queue and distribute large numbers of small tasks to participating computers, it can also be used to send single jobs automatically to the fastest currently available computer and/or survey the activity of an entire laboratory's computers. Tests of robustness and scalability are reported, as are three specific electron cryomicroscopy applications where Peach enabled projects that would not otherwise have been feasible without an expensive, dedicated cluster.
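
    The core dispatch idea, a central queue feeding whichever worker is idle so that faster machines naturally take more jobs, can be sketched in a few lines. The Python threads below stand in for Peach's Perl server and networked workstations; the job payload is a toy placeholder.

```python
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def worker(name, slowdown):
    """One 'workstation': repeatedly pull a job, process it, post a result.
    Slower machines simply complete fewer jobs; the queue balances the load."""
    while True:
        job = tasks.get()
        if job is None:          # sentinel: no more work
            break
        # stand-in for one processing job (e.g., one cryo-EM image)
        acc = sum(i * i for i in range(job * slowdown))
        results.put((name, job, acc % 97))
        tasks.task_done()

for i in range(1, 41):           # queue 40 small jobs
    tasks.put(i)
threads = [threading.Thread(target=worker, args=(f"ws{k}", 10_000 * (k + 1)))
           for k in range(4)]    # four workstations of differing speed
for t in threads:
    t.start()
for _ in threads:                # one shutdown sentinel per worker
    tasks.put(None)
for t in threads:
    t.join()
print(f"collected {results.qsize()} results from 4 workstations")
```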

  7. The probability distribution of maintenance cost of a system affected by the gamma process of degradation: Finite time solution

    International Nuclear Information System (INIS)

    Cheng, Tianjin; Pandey, Mahesh D.; Weide, J.A.M. van der

    2012-01-01

    The stochastic gamma process has been widely used to model uncertain degradation in engineering systems and structures. The optimization of the condition-based maintenance (CBM) policy is typically based on the minimization of the asymptotic cost rate. In the financial planning of a maintenance program, however, a more accurate prediction interval for the cost is needed for prudent decision making. The prediction interval cannot be estimated unless the probability distribution of cost is known. In this context, the asymptotic cost rate has a limited utility. This paper presents the derivation of the probability distribution of maintenance cost, when the system degradation is modelled as a stochastic gamma process. A renewal equation is formulated to derive the characteristic function, then the discrete Fourier transform of the characteristic function leads to the complete probability distribution of cost in a finite time setting. The proposed approach is useful for a precise estimation of prediction limits and optimization of the maintenance cost.
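
    As a numerical alternative to the characteristic-function/DFT solution, one can estimate the finite-time cost distribution by simulating gamma-process histories under an assumed condition-based policy; all thresholds, costs and rates below are illustrative inventions, not the paper's example.

```python
import numpy as np

rng = np.random.default_rng(3)

def maintenance_cost(t_horizon=100.0, dt=1.0, shape_rate=0.5, scale=1.0,
                     y_insp=8.0, y_fail=12.0, c_pm=1.0, c_cm=5.0,
                     insp_interval=10.0):
    """One Monte Carlo realization of total maintenance cost over a finite
    horizon, with stationary gamma-process degradation, periodic inspection
    with preventive renewal, and corrective renewal on failure."""
    x, cost, t = 0.0, 0.0, 0.0
    while t < t_horizon:
        x += rng.gamma(shape_rate * dt, scale)   # gamma increment over dt
        t += dt
        if x >= y_fail:                          # failure -> corrective renewal
            cost += c_cm
            x = 0.0
        elif t % insp_interval < dt and x >= y_insp:
            cost += c_pm                         # inspection -> preventive renewal
            x = 0.0
    return cost

costs = np.array([maintenance_cost() for _ in range(20_000)])
print(f"mean cost {costs.mean():.2f}, "
      f"95% prediction limit {np.quantile(costs, 0.95):.2f}")
```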

  8. The Influence of Volcanological and Sedimentological Processes on Diamond Grade Distribution: Examples From the Ekati Diamond Mine, NWT, Canada

    Science.gov (United States)

    Porritt, L. A.; Cas, R. A.; Ailleres, L.; Oshust, P.

    2009-05-01

    The study of the diamond distribution within two kimberlite pipes, Fox and Koala, from the Ekati Diamond Mine, NWT, Canada, in conjunction with detailed facies models, has shown several distinct relationships between deposit type and grade distribution. In both pipes the lithological facies represent grade units which can be distinguished from each other in terms of the relative size and abundance of diamonds. Positive correlation of olivine grain size and abundance with diamond grade is seen, indicating that density sorting of fragmental kimberlites occurs in both pyroclastic and resedimented deposits. Though surface geological processes do not control the diamond potential of the erupting magma, they can be responsible for concentrating diamonds into economically significant proportions. A good understanding of the eruption, transport and depositional processes responsible for the individual lithological units and the diamond distribution within them is important for successful resource estimation and may lead to the recognition of areas suitable for selective mining, making a marginal deposit economic.

  9. Intelligent Monitoring System With High Temperature Distributed Fiberoptic Sensor For Power Plant Combustion Processes

    Energy Technology Data Exchange (ETDEWEB)

    Kwang Y. Lee; Stuart S. Yin; Andre Boheman

    2005-12-26

    The objective of the proposed work is to develop an intelligent distributed fiber optical sensor system for real-time monitoring of high temperature in a boiler furnace in power plants. Of particular interest is the estimation of spatial and temporal distributions of high temperatures within a boiler furnace, which will be essential in assessing and controlling the mechanisms that form and remove pollutants at the source, such as NOx. The basic approach in developing the proposed sensor system is threefold: (1) development of a high temperature distributed fiber optical sensor capable of measuring temperatures greater than 2000 °C with spatial resolution of less than 1 cm; (2) development of distributed parameter system (DPS) models to map the three-dimensional (3D) temperature distribution for the furnace; and (3) development of an intelligent monitoring system for real-time monitoring of the 3D boiler temperature distribution. Under Task 1, we set up a dedicated high-power, ultrafast laser system for fabricating in-fiber gratings in harsh-environment optical fibers, successfully fabricated gratings in single crystal sapphire fibers with the high-power laser system, and developed highly sensitive long period gratings (LPG) by electric arc. Under Task 2, relevant mathematical modeling studies of NOx formation in practical combustors were completed. Studies show that in boiler systems with no swirl, the distributed temperature sensor may provide information sufficient to predict trends of NOx at the boiler exit. Under Task 3, we investigated a mathematical approach to extrapolation of the temperature distribution within a power plant boiler facility, using a combination of a modified neural network architecture and semigroup theory. The 3D temperature data is furnished by the Penn State Energy Institute using FLUENT. Given a set of empirical data with no analytic expression, we first develop an analytic description and then extend that model along a single axis. Extrapolation

  10. Stochastic processes in the social sciences: Markets, prices and wealth distributions

    Science.gov (United States)

    Romero, Natalia E.

The present work uses statistical mechanics tools to investigate the dynamics of markets, prices, trades and wealth distribution. We studied the evolution of market dynamics in different stages of historical development by analyzing commodity prices from two distinct periods: ancient Babylon, and medieval and early modern England. We find that the first-digit distributions of both Babylon and England commodity prices follow Benford's law, indicating that the data represent empirical observations typically arising from a free market. Further, we find that the normalized prices of both Babylon and England agricultural commodities are characterized by stretched exponential distributions, and exhibit persistent correlations of a power law type over long periods of up to several centuries, in contrast to contemporary markets. Our findings suggest that similar market interactions may underlie the dynamics of ancient agricultural commodity prices, and that these interactions may remain stable across centuries. To further investigate the dynamics of markets we present the analogy between transfers of money between individuals and the transfer of energy through particle collisions by means of the kinetic theory of gases. We introduce a theoretical framework for how the micro rules of trading lead to the emergence of income and wealth distributions. In particular, we study the effects of different types of distribution of savings/investments among individuals in a society and of different welfare/subsidies redistribution policies. Results show that when savings propensities are considered the models approach empirical distributions of wealth quite well; the effect of redistribution better captures specific features of the distributions which earlier models failed to capture; moreover, the models still preserve the exponential decay observed in empirical income distributions reported by tax data and surveys.
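The kinetic-theory analogy above can be made concrete with a few lines of simulation. The following is a minimal sketch of a generic kinetic wealth-exchange model with a uniform savings propensity, a standard formulation in this literature rather than the thesis' exact rules; all parameter values are illustrative.

```python
import numpy as np

# Kinetic wealth exchange: two random agents pool their non-saved wealth and
# split it randomly, like an energy-conserving particle collision.
rng = np.random.default_rng(0)
N, TRADES, LAM = 1000, 200_000, 0.5    # agents, pairwise trades, savings propensity
w = np.ones(N)                         # everyone starts with equal wealth

for _ in range(TRADES):
    i, j = rng.integers(0, N, size=2)
    if i == j:
        continue
    eps = rng.random()                 # random split of the exchanged amount
    pool = (1.0 - LAM) * (w[i] + w[j])
    w[i] = LAM * w[i] + eps * pool
    w[j] = LAM * w[j] + (1.0 - eps) * pool

# With LAM = 0 the stationary distribution is exponential (Gibbs-like); with
# LAM > 0 it becomes Gamma-like, closer to empirical income data.
print(f"mean wealth {w.mean():.2f}, std/mean {w.std() / w.mean():.2f}")
```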

  11. Intelligent Monitoring System with High Temperature Distributed Fiberoptic Sensor for Power Plant Combustion Processes

    Energy Technology Data Exchange (ETDEWEB)

    Kwang Y. Lee; Stuart S. Yin; Andre Boehman

    2006-09-26

The objective of the proposed work is to develop an intelligent distributed fiber optical sensor system for real-time monitoring of high temperature in a boiler furnace in power plants. Of particular interest is the estimation of spatial and temporal distributions of high temperatures within a boiler furnace, which will be essential in assessing and controlling the mechanisms that form and remove pollutants at the source, such as NOx. The basic approach in developing the proposed sensor system is threefold: (1) development of a high temperature distributed fiber optical sensor capable of measuring temperatures greater than 2000 °C with spatial resolution of less than 1 cm; (2) development of distributed parameter system (DPS) models to map the three-dimensional (3D) temperature distribution for the furnace; and (3) development of an intelligent monitoring system for real-time monitoring of the 3D boiler temperature distribution. Under Task 1, we have set up a dedicated high power, ultrafast laser system for fabricating in-fiber gratings in harsh environment optical fibers, successfully fabricated gratings in single crystal sapphire fibers by the high power laser system, and developed highly sensitive long period gratings (LPG) by electric arc. Under Task 2, relevant mathematical modeling studies of NOx formation in practical combustors have been completed. Studies show that in boiler systems with no swirl, the distributed temperature sensor may provide information sufficient to predict trends of NOx at the boiler exit. Under Task 3, we have investigated a mathematical approach to extrapolation of the temperature distribution within a power plant boiler facility, using a combination of a modified neural network architecture and semigroup theory. Given a set of empirical data with no analytic expression, we first developed an analytic description and then extended that model along a single axis.
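The extrapolation step in Task 3 can be illustrated without the authors' modified neural network and semigroup machinery, which are not reproduced here. The sketch below uses a plain polynomial least-squares fit on synthetic temperature samples as a stand-in for "develop an analytic description and then extend that model along a single axis".

```python
import numpy as np

# Fit an analytic description to temperatures sampled along one axis of the
# boiler, then evaluate it beyond the data. Data and degree are assumptions.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)                         # normalized axis position
T = 1800.0 + 250.0 * np.exp(-3.0 * x) + rng.normal(0, 5, x.size)  # synthetic [K]

coef = np.polyfit(x, T, deg=3)                        # analytic description
x_beyond = np.linspace(1.0, 1.3, 7)                   # region with no data
T_beyond = np.polyval(coef, x_beyond)                 # extension along the axis

print("extrapolated temperatures [K]:", np.round(T_beyond, 1))
# Plain polynomials degrade quickly far from the data; the semigroup-based
# extension in the report is aimed precisely at making this step well-behaved.
```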

  12. Combining Generalized Renewal Processes with Non-Extensive Entropy-Based q-Distributions for Reliability Applications

    Directory of Open Access Journals (Sweden)

    Isis Didier Lins

    2018-03-01

Full Text Available The Generalized Renewal Process (GRP) is a probabilistic model for repairable systems that can represent the usual states of a system after a repair: as new, as old, or in a condition between new and old. It is often coupled with the Weibull distribution, widely used in the reliability context. In this paper, we develop novel GRP models based on probability distributions that stem from Tsallis' non-extensive entropy, namely the q-Exponential and the q-Weibull distributions. The q-Exponential and Weibull distributions can model decreasing, constant or increasing failure intensity functions. However, the power law behavior of the q-Exponential probability density function for specific parameter values is an advantage over the Weibull distribution when fitting data containing extreme values. The q-Weibull probability distribution, in turn, can also fit data with bathtub-shaped or unimodal failure intensities in addition to the behaviors already mentioned. Therefore, the q-Exponential-GRP is an alternative to the Weibull-GRP model and the q-Weibull-GRP generalizes both. The method of maximum likelihood is used for their parameters' estimation by means of a particle swarm optimization algorithm, and Monte Carlo simulations are performed for the sake of validation. The proposed models and algorithms are applied to examples involving reliability-related data of complex systems, and the obtained results suggest that GRP plus q-distributions are promising techniques for the analysis of repairable systems.
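For concreteness, the q-Weibull density referred to above has the closed form f(t) = (2-q)(beta/eta)(t/eta)^(beta-1) [1-(1-q)(t/eta)^beta]^(1/(1-q)), which recovers the ordinary Weibull as q -> 1. The sketch below fits it by maximum likelihood on synthetic failure times; SciPy's differential evolution is used as a readily available global optimizer standing in for the paper's particle swarm algorithm.

```python
import numpy as np
from scipy.optimize import differential_evolution

def qweibull_pdf(t, q, beta, eta):
    """q-Weibull density; support is where the bracketed base is positive."""
    z = (t / eta) ** beta
    base = 1.0 - (1.0 - q) * z
    out = np.zeros_like(t)
    ok = base > 0
    out[ok] = (2.0 - q) * (beta / eta) * (t[ok] / eta) ** (beta - 1.0) \
              * base[ok] ** (1.0 / (1.0 - q))
    return out

def neg_loglik(params, data):
    p = qweibull_pdf(data, *params)
    return np.inf if np.any(p <= 0) else -np.sum(np.log(p))

rng = np.random.default_rng(2)
data = 100.0 * rng.weibull(1.5, size=500)      # placeholder failure times

res = differential_evolution(neg_loglik, args=(data,), seed=2,
                             bounds=[(0.5, 1.99), (0.1, 5.0), (1.0, 500.0)])
print("q, beta, eta =", np.round(res.x, 3))    # q close to 1 -> ordinary Weibull
```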

  13. Influence of core box vents distribution on flow dynamics of core shooting process based on experiment and numerical simulation

    Directory of Open Access Journals (Sweden)

    Chang-jiang Ni

    2016-01-01

Full Text Available The core shooting process plays a decisive role in the quality of sand cores, and the core box vents distribution is one of the most important factors determining the effectiveness of the core shooting process. In this paper, the influence of core box vents distribution on the flow dynamics of the core shooting process was investigated based on in situ experimental observations with a transparent core box, a high-speed camera and a pressure measuring system. Attention was focused on the variation of both the flow behavior of sand and the pressure curves due to different vents distributions. Taking both kinetic and frictional stress into account, a kinetic-frictional constitutive model was established to describe the internal momentum transfer in the solid phase. A two-fluid model (TFM) simulation was then performed and good agreement was achieved between the experimental and simulated results on both the flow behavior of sand and the pressure curves. It was found that the vents distribution has a direct effect on the pressure difference between different locations in the core box, which determines the buoyancy force exerted on the sand particles and significantly influences the filling process of core sand.

  14. Processing statistics: an examination of focused and distributed attention using event related potentials.

    Science.gov (United States)

    Baijal, Shruti; Nakatani, Chie; van Leeuwen, Cees; Srinivasan, Narayanan

    2013-06-07

    Human observers show remarkable efficiency in statistical estimation; they are able, for instance, to estimate the mean size of visual objects, even if their number exceeds the capacity limits of focused attention. This ability has been understood as the result of a distinct mode of attention, i.e. distributed attention. Compared to the focused attention mode, working memory representations under distributed attention are proposed to be more compressed, leading to reduced working memory loads. An alternate proposal is that distributed attention uses less structured, feature-level representations. These would fill up working memory (WM) more, even when target set size is low. Using event-related potentials, we compared WM loading in a typical distributed attention task (mean size estimation) to that in a corresponding focused attention task (object recognition), using a measure called contralateral delay activity (CDA). Participants performed both tasks on 2, 4, or 8 different-sized target disks. In the recognition task, CDA amplitude increased with set size; notably, however, in the mean estimation task the CDA amplitude was high regardless of set size. In particular for set-size 2, the amplitude was higher in the mean estimation task than in the recognition task. The result showed that the task involves full WM loading even with a low target set size. This suggests that in the distributed attention mode, representations are not compressed, but rather less structured than under focused attention conditions. Copyright © 2012 Elsevier Ltd. All rights reserved.
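As a rough illustration of the CDA measure used above: it is conventionally computed as the contralateral-minus-ipsilateral difference wave over posterior electrodes, averaged within a retention-interval window. Everything below (sampling rate, window, amplitudes) is an assumption for illustration, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 250                                   # sampling rate [Hz], assumed
t = np.arange(-0.2, 1.2, 1.0 / fs)         # epoch time axis [s]
n_trials = 120

# Synthetic single-trial EEG: contralateral sites carry a sustained negativity
# during the retention interval; ipsilateral sites do not.
contra = rng.normal(0.0, 2.0, (n_trials, t.size)) - 1.5 * ((t > 0.3) & (t < 0.9))
ipsi = rng.normal(0.0, 2.0, (n_trials, t.size))

cda_wave = (contra - ipsi).mean(axis=0)    # trial-averaged difference wave [uV]
window = (t >= 0.3) & (t <= 0.9)           # retention-interval window, assumed
print(f"mean CDA amplitude: {cda_wave[window].mean():.2f} uV")
```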

  15. ProcessFast, a Java Framework for Development of Concurrent and Distributed Applications

    OpenAIRE

    Esuli, Andrea; Fagni, Tiziano

    2015-01-01

Today, any application that requires processing information gathered from the Web will likely require a parallel processing approach to be able to scale. While writing such applications, the developer should be able to exploit several types of parallelism paradigms in a natural way. Most of the available development tools are focused on just one of these parallelism types, e.g. data parallelism, stream processing, etc. In this paper, we introduce ProcessFast, a Java framework for the deve...

  16. On constitutive modelling and information for phenomenal distributed parameter control of multicomponent chemical processes in fluid- and solidphase

    International Nuclear Information System (INIS)

    Niemiec, W.

    1985-01-01

The problem under consideration is to find the common physicochemical conditions of the kinetics and phenomena of multicomponent chemical processes in fluid and solid phase that decide the yield and quality of the final products of these processes. The paper is devoted to the construction of a fundamental distributed parameter constitutive theory for the physicochemical modelling of these chemical processes, treated as occurring in isotropic and anisotropic nonhomogeneous media with space and time memories. On the basis of the definition of derivative and the constitutive equations of continuity, an original system of partial differential constitutive state equations is deduced.

  17. The concentration distribution around a growing gas bubble in a bio tissue under the effect of suction process.

    Science.gov (United States)

    Mohammadein, S A

    2014-07-01

The concentration distribution around a growing nitrogen gas bubble in the blood and other bio tissues of divers who ascend to the surface too quickly was obtained by the Mohammadein and Mohamed model (2010) for variant and constant ambient pressure through the decompression process. In this paper, the growth of gas bubbles and the concentration distribution under the effect of the suction process are studied as a modification of the Mohammadein and Mohamed model (zero suction). The growth of a gas bubble is affected by the ascent rate, tissue diffusivity, initial concentration difference, surface tension and void fraction. The Mohammadein and Mohamed model (2010) is recovered as a special case of the present model. Results showed that the suction process activates the systemic blood circulation and delays the growth of gas bubbles in the bio tissues, helping to avoid the incidence of decompression sickness (DCS). Copyright © 2014 Elsevier Inc. All rights reserved.

  18. A data and information system for processing, archival, and distribution of data for global change research

    Science.gov (United States)

    Graves, Sara J.

    1994-01-01

Work on this project was focused on information management techniques for Marshall Space Flight Center's EOSDIS Version 0 Distributed Active Archive Center (DAAC). The centerpiece of this effort has been participation in EOSDIS catalog interoperability research, the result of which is a distributed Information Management System (IMS) allowing the user to query the inventories of all the DAACs from a single user interface. UAH has provided the MSFC DAAC database server for the distributed IMS, and has contributed to the definition and development of the browse image display capabilities in the system's user interface. Another important area of research has been the generation of value-based metadata through data mining. In addition, information management applications for local inventory and archive management, and for tracking data orders, were provided.

  19. Raw materials exploitation in Prehistory of Georgia: sourcing, processing and distribution

    Science.gov (United States)

    Tushabramishvili, Nikoloz; Oqrostsvaridze, Avthandil

    2016-04-01

The study of raw materials is of great importance for understanding the ecology, cognition, behavior, technology and culture of Paleolithic human populations. Unfortunately, the sourcing, processing and distribution of stone raw materials received little attention until recently. The reasons for this were incomplete knowledge among archaeologists who work on the later periods (Bronze Age-Medieval) and are somewhat removed from Paleolithic technology and typology, and the neglect of stone artifacts made on raw materials other than flint and obsidian. Studies on the origin of stone raw materials are now becoming increasingly important. An interesting picture has emerged at different sites and in different regions of Georgia. In the earlier stages of the Middle Paleolithic of the Djruchula Basin caves, the share of basalt, andesite, argillite and other raw materials is quite large. From about 130,000 years ago the percentage of flint raw material increases dramatically. Flint was almost the sole dominant raw material in Western Georgia for thousands of years. From approximately 50,000 years ago the first obsidians, brought from southern Georgia, appeared in Western Georgia. A similar situation was detected by us in Eastern Georgia during our excavations of the Ziari and Pkhoveli open-air sites. The early Lower Paleolithic layers are extremely rich in limestone artifacts, while flint raw materials dominate in the Middle Paleolithic layers. The study of these issues can be pursued across chronologies, the origins of the sources of raw materials, the sites and the regions. By merging archaeology with anthropology, geology and geography we are able to acquire outstanding insights about those populations. A new approach to Paleolithic stone materials and newly found Paleolithic quarries gave us the opportunity to achieve some results towards understanding the behavior of Paleolithic populations, geology and …

  20. Geo-processing and distributed data base; Geoprocessamento e banco de dados distribuido

    Energy Technology Data Exchange (ETDEWEB)

    Cunha, Vitor Paulo da; Suguimoto, Jorge Katsumi [Centro de Excelencia em Distribuicao de Energia Eletrica, Sao Paulo, SP (Brazil)

    1994-12-31

Information technology has developed greatly of late. However, in order to obtain the best possible results, which are interdependent, it is necessary to plan the integration. This work presents the requirements for the adoption of an integrated system of artificial intelligence, giving special emphasis to a distributed data base. 3 refs., 2 figs.

  1. Determining particle size distributions from video images by use of image processing

    NARCIS (Netherlands)

    De Graaff, J.; Slot, R.E.

    1993-01-01

Recently, a great deal of research has been devoted to cohesive sediment. It plays a major role in the shoaling of harbours and waterways, and in some serious environmental problems. To predict cohesive sediment transport, information is needed about the distributions of particle size and settling velocity. Many …
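A minimal version of such an image-processing chain: threshold the video frame, label connected particles, and convert pixel areas into equivalent diameters. The synthetic frame, threshold and pixel scale below are assumptions, not the authors' setup.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
frame = rng.random((256, 256)) < 0.02                 # stand-in for a binarized frame
frame = ndimage.binary_dilation(frame, iterations=2)  # grow specks into blobs

labels, n = ndimage.label(frame)                      # connected-component labelling
areas_px = np.bincount(labels.ravel())[1:]            # pixel area of each particle
UM_PER_PX = 3.2                                       # assumed pixel size [um]
diam_um = 2.0 * np.sqrt(areas_px / np.pi) * UM_PER_PX # equivalent circular diameter

counts, edges = np.histogram(diam_um, bins=10)        # the size distribution itself
print(f"{n} particles, median diameter {np.median(diam_um):.1f} um")
print("size histogram:", counts)
```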

  2. Bootstrap confidence intervals for the process capability index under half-logistic distribution

    OpenAIRE

    Wararit Panichkitkosolkul

    2012-01-01

This study concerns the construction of bootstrap confidence intervals for the process capability index in the case of the half-logistic distribution. The bootstrap confidence intervals applied consist of the standard bootstrap confidence interval, the percentile bootstrap confidence interval and the bias-corrected percentile bootstrap confidence interval. Using Monte Carlo simulations, the estimated coverage probabilities and average widths of the bootstrap confidence intervals are compared, with results showing ...
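The first two interval types are easy to sketch for a common capability index such as Cp = (USL - LSL) / (6s); the spec limits, sample size and resample count below are assumptions, and the bias-corrected percentile variant is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(5)
USL, LSL = 6.0, 0.0                        # assumed specification limits
x = np.abs(rng.logistic(0.0, 1.0, 50))     # half-logistic sample: |X|, X logistic

def cp(sample):
    return (USL - LSL) / (6.0 * sample.std(ddof=1))

B = 2000                                   # bootstrap resamples
boot = np.array([cp(rng.choice(x, x.size, replace=True)) for _ in range(B)])

theta, se = cp(x), boot.std(ddof=1)
lo, hi = theta - 1.96 * se, theta + 1.96 * se
print(f"standard bootstrap 95% CI:   ({lo:.3f}, {hi:.3f})")
print("percentile bootstrap 95% CI:", np.round(np.percentile(boot, [2.5, 97.5]), 3))
```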

  3. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

Stress can affect brain functionality in many ways. As synaptic vesicles have a major role in nervous signal transportation in synapses, their distribution in relation to the active zone is very important in studying neuronal responses. We study the effect of stress on brain functionality …

  4. s-process studies in the light of new experimental cross sections: Distribution of neutron fluences and r-process residuals

    International Nuclear Information System (INIS)

    Kaeppeler, F.; Beer, H.; Wisshak, K.; Clayton, D.D.; Macklin, R.L.; Ward, R.A.

    1981-08-01

A best set of neutron-capture cross sections has been evaluated for the most important s-process isotopes. With this data base, s-process studies have been carried out using the traditional model, which assumes a steady neutron flux and an exponential distribution of neutron irradiations. The calculated σN curve is in excellent agreement with the empirical σN values of pure s-process nuclei. Simultaneously, good agreement is found between the difference of solar and s-process abundances and the abundances of pure r-process nuclei. We also discuss the abundance pattern of the iron group elements, where our s-process results complement the abundances obtained from explosive nuclear burning. The results obtained from the traditional s-process model, such as seed abundances, mean neutron irradiations, or neutron densities, are compared to recent stellar model calculations which assume the He-burning shells of red giant stars as the site of the s-process. (orig.)
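For reference, the traditional model invoked here is the classical analysis in which an exponential distribution of neutron exposures τ (mean exposure τ₀), acting on an ⁵⁶Fe seed, yields a closed-form σN curve; in standard textbook notation (a sketch of the classical formulation, not quoted from this report):

```latex
\[
  \rho(\tau) \propto e^{-\tau/\tau_0},
  \qquad
  \sigma_A N_A = \frac{f\,N_{56}}{\tau_0}
  \prod_{i=56}^{A}\left(1 + \frac{1}{\sigma_i\,\tau_0}\right)^{-1},
\]
```

where the σᵢ are the Maxwellian-averaged capture cross sections along the s-path, N₅₆ is the iron seed abundance, and f the fraction of seed nuclei exposed; the slow variation of each factor with σᵢτ₀ is what produces the smooth empirical σN curve.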

  5. CubeSat Cloud: A framework for distributed storage, processing and communication of remote sensing data on cubesat clusters

    Science.gov (United States)

    Challa, Obulapathi Nayudu

CubeSat Cloud is a novel vision for a space-based remote sensing network that includes a collection of small satellites (including CubeSats), ground stations, and a server, where a CubeSat is a miniaturized satellite with the volume of a 10x10x10 cm cube and a weight of approximately 1 kg. The small form factor of CubeSats limits their processing and communication capabilities. Implemented and deployed CubeSats have demonstrated about 1 GHz processing speed and 9.6 kbps communication speed. A CubeSat in its current state can take hours to process a 100 MB image and more than a day to downlink the same, which prohibits remote sensing, considering the limitations in ground station access time for a CubeSat. This dissertation designs an architecture and supporting networking protocols to create CubeSat Cloud, a distributed processing, storage and communication framework that will enable faster execution of remote sensing missions on CubeSat clusters. The core components of CubeSat Cloud are the CubeSat Distributed File System, CubeSat MapReduce, and CubeSat Torrent. The CubeSat Distributed File System has been created for distributing large amounts of data among the satellites in the cluster. Once the data is distributed, CubeSat MapReduce has been created to process the data in parallel, thereby reducing the processing load for each CubeSat. Finally, CubeSat Torrent has been created to downlink the data at each CubeSat to a distributed set of ground stations, enabling faster asynchronous downloads. Ground stations send the downlinked data to the server to reconstruct the original image and store it for later retrieval. Analysis of the proposed CubeSat Cloud architecture was performed using a custom-designed simulator, called CubeNet, and an emulation test bed using Raspberry Pi devices. Results show that for cluster sizes ranging from 5 to 25 small satellites, download speeds 4 to 22 times faster can be achieved when using CubeSat Cloud, compared to a …
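The headline numbers have a simple back-of-the-envelope form: if chunks are spread evenly, each satellite downlinks only 1/N of the image. The sketch below reproduces that arithmetic from the abstract's 100 MB image and 9.6 kbps link; the function name is illustrative, not the dissertation's API, and the reported real gains (4 to 22 times) sit somewhat below this ideal because of coordination overheads.

```python
IMAGE_MB = 100.0                       # image size from the abstract
LINK_KBPS = 9.6                        # demonstrated CubeSat downlink rate
SEC_PER_MB = 8.0 * 1024.0 / LINK_KBPS  # ~853 s to downlink one megabyte

def downlink_hours(image_mb: float, n_sats: int) -> float:
    """Ideal parallel downlink time with chunks balanced across n_sats."""
    return (image_mb / n_sats) * SEC_PER_MB / 3600.0

single = downlink_hours(IMAGE_MB, 1)   # ~23.7 h: the "more than a day" downlink
for n in (1, 5, 10, 25):
    t = downlink_hours(IMAGE_MB, n)
    print(f"{n:2d} satellite(s): {t:5.1f} h  ({single / t:4.1f}x faster)")
```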

  6. Measuring the Dependence between Two Point Processes through Confidence Intervals for the Second Order Distribution.

    Science.gov (United States)

    1987-09-01

The methods are applied in neurophysiology (for further details see e.g. Bryant, Ruiz Marcos, and Segundo, 1973): the spike trains of two neurons, A and B, are modelled as point processes, with N_A a Poisson process and N_B an equilibrium renewal process on (−∞, ∞) (for a definition and a construction see pp. 517-519 of Karlin and Taylor, 1975). Related work on second-order intensities of a bivariate stationary point process appears in J. Roy. Statist. Soc. Ser. B 38, 60-66.

  7. An Early Warning and Monitoring System for Distributed Process Control Systems

    DEFF Research Database (Denmark)

    Bach, K.R.

This thesis describes an approach based on early warning and monitoring of system metrics and task deadlines in a distributed real-time system. The main reason for applying this is the fact that while a system used for critical purposes can be checked formally pre-runtime for validity, there are … overall (operating) system performance. We propose a scheme by which it is possible to monitor specific system metrics and task deadlines in order to detect abnormalities contained herein as early as possible, at which time various "recovery" actions may be applied to correct the situation.

  8. Processes determining the marine alkalinity and calcium carbonate saturation state distributions

    OpenAIRE

    Carter, B. R.; Toggweiler, J. R.; Key, R. M.; Sarmiento, J. L.

    2014-01-01

    We introduce a composite tracer for the marine system, Alk*, that has a global distribution primarily determined by CaCO3 precipitation and dissolution. Alk* is also affected by riverine alkalinity from dissolved terrestrial carbonate minerals. We estimate that the Arctic receives approximately twice the riverine alkalinity per unit area as the Atlantic, and 8 times that of the other oceans. Riverine inputs broadly elevate Alk* in the Arctic surface and particularly near riv...

  9. Picoseconds pulse generation and pulse width determination processes of a distributed feedback dye laser

    International Nuclear Information System (INIS)

    Abdul Ghani, B.; Hammadi, M.

    2004-08-01

A mathematical model has been developed to describe the dynamic emission of an Nd-glass laser and a distributed feedback dye laser (DFDL), and the periodic grating temperature. The suggested model allows the investigation of the time behavior of the Nd-glass laser and DFDL pulses. Moreover, it allows studying the effect of the Nd-glass laser input parameters on the spectral characteristics of the output DFDL pulses, such as pulse width, delay time, and time separation.

  10. Distribution Associated with Stochastic Processes of Gene Expression in a Single Eukaryotic Cell

    Directory of Open Access Journals (Sweden)

    Kuznetsov Vladimir A

    2001-01-01

Full Text Available The ability to simultaneously measure mRNA abundance for a large number of genes has revolutionized biological research by allowing statistical analysis of global gene-expression data. Large-scale gene-expression data sets have been analyzed in order to identify the probability distributions of gene expression levels (or transcript copy numbers) in eukaryotic cells. Determining such function(s) may provide a theoretical basis for accurately counting all expressed genes in a given cell and for understanding gene expression control. Using gene-expression libraries derived from yeast cells and from different human cell tissues, we found that all observed gene expression level data appear to follow a Pareto-like skewed frequency distribution. We produced a skewed probability function, called the Binomial Differential distribution, that accounts for the many rarely transcribed genes in a single cell. We also developed a novel method for estimating and removing major experimental errors and redundancies from Serial Analysis of Gene Expression (SAGE) data sets. We successfully applied this method to the yeast transcriptome. A "basal" random transcription mechanism for all protein-coding genes in every eukaryotic cell type is predicted.

  11. Effects of electric field and charge distribution on nanoelectronic processes involving conducting polymers

    International Nuclear Information System (INIS)

    Ramos, Marta M.D.; Correia, Helena M.G.

    2006-01-01

The injection of charge carriers into conducting polymer layers gives rise to local electric fields which should have serious implications for charge transport through the polymer layer. The charge distribution and the related electric field inside the ensemble of polymer molecules, with different molecular arrangements at the nanoscale, determine whether or not intra-molecular charge transport takes place and the preferential direction for charge hopping between neighbouring molecules. Consequently, these factors play a significant role in the competition between current flow, charge trapping and recombination in polymer-based electronic devices. By suitable Monte Carlo calculations, we simulated the continuous injection of electrons and holes into polymer layers with different microstructures and followed their transport through those polymer networks. Results of these simulations provide a detailed picture of the charge and electric field distribution in the polymer layer and allow us to assess the consequences for current transport and recombination efficiency, as well as the distribution of recombination events within the polymer film. In the steady state we found an accumulation of electrons and holes near the collecting electrodes, giving rise to an internal electric field which is greater than the externally applied field close to the electrodes and lower than it in the central region of the polymer layer. We also found that a strong variation of the electric field inside the polymer layer leads to an increase of recombination events in regions of the polymer layer where the values of the internal electric field are lower.

  12. Seasonal and spatial evolution of trihalomethanes in a drinking water distribution system according to the treatment process.

    Science.gov (United States)

    Domínguez-Tello, A; Arias-Borrego, A; García-Barrera, Tamara; Gómez-Ariza, J L

    2015-11-01

This paper comparatively shows the influence of four water treatment processes on the formation of trihalomethanes (THMs) in a water distribution system. The study was performed from February 2005 to January 2012 with analytical data from 600 samples taken at the Aljaraque water treatment plant (WTP) and at 16 locations along the water distribution system (WDS) in the region of Andévalo and the coast of Huelva (southwest Spain), a region with significant seasonal and population changes. The comparison of results for the four different processes studied indicated a clear link between the treatment process and the formation of THMs along the WDS. The most effective treatment process is preozonation and activated carbon filtration (P3), which is also the most stable under summer temperatures. Experiments also show low levels of THMs with the conventional process of preoxidation with potassium permanganate (P4), delaying the chlorination to the end of the WTP; however, this simple and economical treatment process is less effective and less stable than P3. In this study, strong seasonal variations were obtained (an increase in THMs from winter to summer of 1.17 to 1.85 times) as well as a strong spatial variation (1.1 to 1.7 times from the WTP to the end points of the WDS), which largely depends on the treatment process applied. There was also a strong correlation between THM levels and water temperature, contact time and pH. On the other hand, it was found that THM formation is not proportional to the applied chlorine dose in the treatment process, but there is a direct relationship with the accumulated dose of chlorine. Finally, predictive models based on multiple linear regressions are proposed for each treatment process.
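The final sentence suggests a familiar model form. The sketch below fits such a multiple linear regression with ordinary least squares on synthetic placeholder data; the predictors mirror the correlates reported above (temperature, contact time, pH, accumulated chlorine dose), but the coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
temp = rng.uniform(10.0, 30.0, n)          # water temperature [C]
contact = rng.uniform(5.0, 120.0, n)       # contact time [h]
pH = rng.uniform(6.5, 8.5, n)
cl_acc = rng.uniform(0.5, 4.0, n)          # accumulated chlorine dose [mg/L]
thm = (5.0 + 1.8 * temp + 0.3 * contact + 6.0 * (pH - 7.0) + 9.0 * cl_acc
       + rng.normal(0.0, 5.0, n))          # synthetic "observed" THMs [ug/L]

X = np.column_stack([np.ones(n), temp, contact, pH, cl_acc])
coef, *_ = np.linalg.lstsq(X, thm, rcond=None)   # ordinary least squares
resid = thm - X @ coef
r2 = 1.0 - resid.var() / thm.var()
print("coefficients:", np.round(coef, 2), f" R^2 = {r2:.3f}")
```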

  13. A framework for analysis of abortive colony size distributions using a model of branching processes in irradiated normal human fibroblasts.

    Science.gov (United States)

    Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Ouchi, Noriyuki B; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki

    2013-01-01

Clonogenicity gives important information about the cellular reproductive potential following ionizing irradiation, but an abortive colony that fails to continue to grow remains poorly characterized. It was recently reported that the fraction of abortive colonies increases with increasing dose. Thus, we set out to investigate the production kinetics of abortive colonies using a model of branching processes. We first plotted the experimentally determined colony size distribution of abortive colonies in irradiated normal human fibroblasts, and found a linear relationship on the log-linear or log-log plot. By applying the simple model of branching processes to the linear relationship, we found persistent reproductive cell death (RCD) over several generations following irradiation. To verify the estimated probability of RCD, the abortive colony size distribution (≤ 15 cells) and the surviving fraction were simulated by a Monte Carlo computational approach for colony expansion. Parameters estimated from the log-log fit demonstrated better performance in both simulations than those from the log-linear fit. Radiation-induced RCD, i.e. excess probability, lasted over 16 generations and mainly consisted of two components, in the early and late generations. The surviving fraction was sensitive to the excess probability over 5 generations, whereas the abortive colony size distribution was robust against it. These results suggest that, whereas short-term RCD is critical to the abortive colony size distribution, long-lasting RCD is important for the dose response of the surviving fraction. Our present model provides a single framework for understanding the behavior of primary cell colonies in culture following irradiation.
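A minimal branching-process reading of this setup can be simulated directly: in each generation every cell either divides or permanently stops dividing (RCD) with probability p_rcd, and colonies whose growth stalls at 15 cells or fewer are scored abortive. This is a generic Galton-Watson-style sketch with an invented, generation-constant p_rcd, not the authors' calibrated two-component model.

```python
import numpy as np

rng = np.random.default_rng(7)

def grow_colony(p_rcd: float, max_gen: int = 16) -> int:
    """Return final colony size; arrested cells stay visible in the colony."""
    proliferating, arrested = 1, 0
    for _ in range(max_gen):
        if proliferating == 0:
            break
        stalled = rng.binomial(proliferating, p_rcd)   # cells hit by RCD
        arrested += stalled
        proliferating = 2 * (proliferating - stalled)  # survivors divide in two
    return proliferating + arrested

sizes = np.array([grow_colony(p_rcd=0.25) for _ in range(20_000)])
print("surviving fraction (>15 cells):", np.mean(sizes > 15).round(3))
print("abortive sizes 1..15:", np.bincount(sizes[sizes <= 15], minlength=16)[1:])
```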

  14. A Formal Approach to Run-Time Evaluation of Real-Time Behaviour in Distributed Process Control Systems

    DEFF Research Database (Denmark)

    Kristensen, C.H.

This thesis advocates a formal approach to run-time evaluation of real-time behaviour in distributed process control systems, motivated by a growing interest in applying the increasingly popular formal methods in the application area of distributed process control systems. We propose to evaluate the various models underlying every formal method by declaring the design assumptions as a number of features or constraints, stated in the formal specification of system requirements, to be evaluated at run-time. It is assumed that if these constraints are fulfilled at run-time then it is fair to have … that the final system design is in accordance with any requirement from the requirement specification. When this is the case, the requirement is transformed by means of a class constraint extractor to a set of constraints. These are then to be evaluated at run-time. This thesis is devoted to temporal behaviour …

  15. Fuel cell plates with skewed process channels for uniform distribution of stack compression load

    Science.gov (United States)

    Granata, Jr., Samuel J.; Woodle, Boyd M.

    1989-01-01

    An electrochemical fuel cell includes an anode electrode, a cathode electrode, an electrolyte matrix sandwiched between electrodes, and a pair of plates above and below the electrodes. The plate above the electrodes has a lower surface with a first group of process gas flow channels formed thereon and the plate below the electrodes has an upper surface with a second group of process gas flow channels formed thereon. The channels of each group extend generally parallel to one another. The improvement comprises the process gas flow channels on the lower surface of the plate above the anode electrode and the process gas flow channels on the upper surface of the plate below the cathode electrode being skewed in opposite directions such that contact areas of the surfaces of the plates through the electrodes are formed in crisscross arrangements. Also, the plates have at least one groove in areas of the surfaces thereof where the channels are absent for holding process gas and increasing electrochemical activity of the fuel cell. The groove in each plate surface intersects with the process channels therein. Also, the opposite surfaces of a bipolar plate for a fuel cell contain first and second arrangements of process gas flow channels in the respective surfaces which are skewed the same amount in opposite directions relative to the longitudinal centerline of the plate.

  16. Proceedings of the Annual ACM Symposium (11th) on Principles of Distributed Computing Held in Vancouver, British Columbia, Canada on 10-12 Aug 1992

    Science.gov (United States)

    1992-08-10


  17. Processing, Cataloguing and Distribution of Uas Images in Near Real Time

    Science.gov (United States)

    Runkel, I.

    2013-08-01

Why are UAS such a hype? UAS make data capture flexible, fast and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture remains valid up to the end of the processing chain, all intermediate steps like data processing and data dissemination to the customer need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution. This is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device and hooked up to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The metadata is converted into an ISO-conformant format and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, respectively the images, as OGC-conformant services to the customer. Within seconds the UAV images are ready to use for GIS applications, image processing or direct interpretation via web applications - wherever you want. The whole processing chain is built in a generic manner. It can be adapted to a magnitude of applications. The UAV imagery can be processed and catalogued as single ortho images or as an image mosaic. Furthermore, image data from various cameras can be fused. By using WPS (web processing services), image enhancement and image analysis workflows like change detection layers can be calculated and provided to the image analysts. The processing of the WPS runs directly on the raster data management server. The image analyst has no data and no software on his local computer. This workflow is proven to be fast, stable and accurate. It is designed to support time-critical applications for security demands - the images …

  18. PROCESSING, CATALOGUING AND DISTRIBUTION OF UAS IMAGES IN NEAR REAL TIME

    Directory of Open Access Journals (Sweden)

    I. Runkel

    2013-08-01

Full Text Available Why are UAS such a hype? UAS make data capture flexible, fast and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture remains valid up to the end of the processing chain, all intermediate steps like data processing and data dissemination to the customer need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution. This is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device and hooked up to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The metadata is converted into an ISO-conformant format and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, respectively the images, as OGC-conformant services to the customer. Within seconds the UAV images are ready to use for GIS applications, image processing or direct interpretation via web applications – wherever you want. The whole processing chain is built in a generic manner. It can be adapted to a magnitude of applications. The UAV imagery can be processed and catalogued as single ortho images or as an image mosaic. Furthermore, image data from various cameras can be fused. By using WPS (web processing services), image enhancement and image analysis workflows like change detection layers can be calculated and provided to the image analysts. The processing of the WPS runs directly on the raster data management server. The image analyst has no data and no software on his local computer. This workflow is proven to be fast, stable and accurate. It is designed to support time-critical applications for security …

  19. A Study on the Processes of Distribution, Accumulation and Transfer of Copper (Cu in the Organisms of Fishes

    Directory of Open Access Journals (Sweden)

    Iliana G. Velcheva

    2009-07-01

Full Text Available By applying mathematical approaches we studied the distribution of copper in the organs and tissues of Alburnus alburnus and Perca fluviatilis from the “Topolnitsa” Dam Lake. Deposition of the metal in the kidneys and liver was recorded. We also found that the surveyed species are macroconcentrators of cadmium and zinc and that there is a process of biomagnification of heavy metals across the trophic levels.

  20. Amino Acid Profile, Group of Functional and Molecular Weight Distribution of Goat Skin Gelatin That Produced Through Acid Process

    OpenAIRE

    Muhammad Irfan Said; Suharjono Triatmojo; Yuny Erwanto; Achmad Fudholi

    2012-01-01

Gelatin is a product of the partial hydrolysis of collagen protein from animals. Gelatin is used in the food and non-food industries. Gelatin is currently produced largely from imported raw skins and bones of pigs and cows. Goat skins are a potential substitute raw material for sources whose halal status is in doubt. The production process determines the properties of the gelatin. The objectives of this research were to determine the amino acid profile, functional groups and molecular weight distribution o...

  1. Origin of the cusp in the transverse momentum distribution for the process of strong-field ionization

    Science.gov (United States)

    Ivanov, I. A.

    2015-12-01

    We study the origin of the cusp structure in the transverse or lateral electron momentum distribution (TEMD) for the process of tunneling ionization driven by a linearly polarized laser pulse. We show that the appearance of the cusp in the TEMD can be explained as follows. Projection on the set of the Coulomb scattering states leads to the appearance of elementary cusps which have a simple structure as functions of the lateral momentum. This structure is independent of the detailed dynamics of the ionization process and can be described analytically. These elementary cusps can be used to describe the cusp structure in TEMD.

  2. Transformation and distribution processes governing the fate and behaviour of nanomaterials in the environment: an overview

    DEFF Research Database (Denmark)

    Hansen, Steffen Foss; Hartmann, Nanna B.; Baun, Anders

    2015-01-01

The interplay between nanomaterial physico-chemical properties, surrounding media/matrix composition and the underlying processes determines particle behaviour. Here we identify and summarize key processes governing the fate and behaviour of nanomaterials in the environment. This is done through a critical review of the present state of knowledge. We describe the (photo)chemical, physical or biologically mediated transformation of manufactured nanomaterials due to degradation, aggregation, agglomeration, or through association with dissolved, colloidal or particulate matter present in the environment. Specific nanomaterials are used as case studies to illustrate these processes. Key environmental processes are identified and ranked, and key knowledge gaps are identified, feeding into the longer-term goal of improving the existing models for predicted environmental …

  3. Optimization of Casting Process Parameters for Homogeneous Aggregate Distribution in Self-Compacting Concrete: A Feasibility Study

    DEFF Research Database (Denmark)

    Spangenberg, Jon; Tutum, Cem Celal; Hattel, Jesper Henri

    2011-01-01

    of the filling etc., however since this work is the initial feasibility study in this field, only three process parameters are considered. Despite the reduction in the number of process parameters, the complexity involved in the considered casting process results in a non trivial optimal design set.......The use of self-compacting concrete (SCC) as a construction material has been getting more attention from the industry. Its application area varies from standard structural elements in bridges and skyscrapers to modern architecture having geometrical challenges. However, heterogeneities induced...... during the casting process may lead to variations of local mechanical properties and hence to a potential decrease in load carrying capacity of the structure. This paper presents a methodology for optimization of SCC casting aiming at having a homogeneous aggregate distribution; a beam has been used...

  4. Optimization of Casting Process Parameters for Homogeneous Aggregate Distribution in Self-Compacting Concrete: A Feasibility Study

    DEFF Research Database (Denmark)

    Spangenberg, Jon; Tutum, Cem Celal; Hattel, Jesper Henri

    2011-01-01

    The use of self-compacting concrete (SCC) as a construction material has been getting more attention from the industry. Its application area varies from standard structural elements in bridges and skyscrapers to modern architecture having geometrical challenges. However, heterogeneities induced...... during the casting process may lead to variations of local mechanical properties and hence to a potential decrease in load carrying capacity of the structure. This paper presents a methodology for optimization of SCC casting aiming at having a homogeneous aggregate distribution; a beam has been used...... of the filling etc., however since this work is the initial feasibility study in this field, only three process parameters are considered. Despite the reduction in the number of process parameters, the complexity involved in the considered casting process results in a non trivial optimal design set....

  5. IOC-UNEP review meeting on oceanographic processes of transport and distribution of pollutants in the sea

    International Nuclear Information System (INIS)

    1991-01-01

The IOC-UNEP Review Meeting on Oceanographic Processes of Transfer and Distribution of Pollutants in the Sea was opened at the Ruder Boskovic Institute, Zagreb, Yugoslavia on Monday, 15 May 1989. Papers presented at the meeting dealt with physical and geochemical processes in sea-water and sediment involved in the transport, mixing and dispersal of pollutants. The importance of mesoscale eddies and gyres in the open sea, wind-driven currents and upwelling events in the coastal zone, and thermohaline processes in semi-enclosed bays and estuaries was recognized. There is strong evidence that non-local forcing can drive circulation in the coastal area. Concentrations, horizontal and vertical distributions and transport of pollutants were investigated and presented for a number of coastal areas. Riverine and atmospheric inputs of different pollutants to the western Mediterranean were discussed. Reports on two ongoing nationally/internationally coordinated projects (MEDMODEL, EROS 2000) were presented. Discussions during the meeting enabled an exchange of ideas between specialists in different disciplines. It is expected that this will promote a future interdisciplinary approach in this field. The meeting recognized the importance of physical oceanographic studies in investigating the transfer and distribution of pollutants in the sea, and in view of the importance of the interdisciplinary approach and of bilateral and/or multilateral co-operation a number of recommendations were adopted.

  6. Size distribution of aerosol particles produced during mining and processing uranium ore.

    Science.gov (United States)

    Mala, Helena; Tomasek, Ladislav; Rulik, Petr; Beckova, Vera; Hulka, Jiri

    2016-06-01

    The aerosol particle size distributions of uranium and its daughter products were studied and determined in the area of the Rožná mine, which is the last active uranium mine in the Czech Republic. A total of 13 samples were collected using cascade impactors from three sites that had the highest expected levels of dust, namely, the forefield, the end of the ore chute and an area close to workers at the crushing plant. The characteristics of most size distributions were very similar; they were moderately bimodal, with a boundary approximately 0.5 μm between the modes. The activity median aerodynamic diameter (AMAD) and geometric standard deviation (GSD) were obtained from the distributions beyond 0.39 μm, whereas the sizes of particles below 0.39 μm were not differentiated. Most AMAD and GSD values in the samples ranged between 3.5 and 10.5 μm and between 2.8 and 5.0, respectively. The geometric means of the AMADs and GSDs from all of the underground sampling sites were 4.2 μm and 4.4, respectively, and the geometric means of the AMADs and GSDs for the crushing plant samplings were 9.8 μm and 3.3, respectively. The weighted arithmetic mean of the AMADs was 4.9 μm, with a standard error of 0.7 μm, according to the numbers of workers at the workplaces. The activity proportion of the radon progeny to (226)Ra in the aerosol was 0.61. Copyright © 2016 Elsevier Ltd. All rights reserved.
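The two summary statistics reported here are read off the cumulative activity curve: the AMAD is the diameter at 50% cumulative activity, and under the lognormal assumption the GSD is the ratio d84/d50. The sketch below interpolates on a log-diameter scale; the stage cut-offs and activities are illustrative, not the paper's data.

```python
import numpy as np

d50_um = np.array([0.39, 0.69, 1.3, 2.1, 4.2, 10.2])     # stage cut-offs [um]
activity = np.array([12.0, 8.0, 10.0, 15.0, 25.0, 30.0]) # activity per stage [Bq]

cum = np.cumsum(activity) / activity.sum()  # cumulative activity fraction
logd = np.log(d50_um)                       # interpolate on a log-diameter scale

amad = np.exp(np.interp(0.50, cum, logd))   # activity median aerodynamic diameter
d84 = np.exp(np.interp(0.84, cum, logd))
gsd = d84 / amad                            # geometric standard deviation (lognormal)
print(f"AMAD = {amad:.1f} um, GSD = {gsd:.1f}")
```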

  7. Physical processes controlling the distribution of relative humidity in the tropical tropopause layer over the Pacific

    Science.gov (United States)

    Jensen, E. J.; Ueyama, R.; Pfister, L.; Bui, T. V.; Pittman, J. V.; Thornberry, T. D.; Rollins, A. W.; Hintsa, E. J.; Diskin, G. S.; DiGangi, J. P.; Woods, S.; Lawson, P.; Rosenlof, K. H.

    2016-12-01

    The distribution of relative humidity with respect to ice (RHI) in the Boreal wintertime Tropical Tropopause Layer (about 14-19 km) over the Pacific is examined with the extensive dataset of measurements from the NASA Airborne Tropical TRopopause EXperiment (ATTREX). Multiple deployments of the Global Hawk during ATTREX provided hundreds of vertical profiles spanning the Pacific with accurate measurements of temperature, pressure, water vapor concentration, ozone concentration, and cloud properties. We also compare the measured RHI distributions with results from a transport and microphysical model driven by meteorological analysis fields. Notable features in the distribution of RHI versus temperature and longitude include (1) the common occurrence of RHI values near ice saturation over the western Pacific in the lower TTL (temperatures greater than 200 K) and in airmasses with low ozone concentrations indicating recent detrainment from deep convection; (2) low RHI values in the lower TTL over the eastern Pacific where deep convection is infrequent; (3) RHI values following a constant H2O mixing ratio in the upper TTL (temperatures below about 195 K), particularly for samples with ozone mixing ratios greater than about 50-100 ppbv indicating mixtures of tropospheric and stratospheric air, and (4) RHI values typically near ice saturation in the coldest airmasses sampled (temperatures less than about 190 K). We find that the typically saturated air in the lower TTL over the western Pacific is largely driven by the frequent occurrence of deep convection in this region. The nearly-constant water vapor mixing ratios in the upper TTL result from the combination of slow ascent (resulting in long residence times) and wave-driven temperature variability on a range of time scales (resulting in most air parcels having experienced low temperature and dehydration).

  8. A distributed water level network in ephemeral river reaches to identify hydrological processes within anthropogenic catchments

    Science.gov (United States)

    Sarrazin, B.; Braud, I.; Lagouy, M.; Bailly, J. S.; Puech, C.; Ayroles, H.

    2009-04-01

In order to study the impact of land use change on the water cycle, distributed hydrological models are increasingly used, because they have the ability to take into account land surface heterogeneity and its evolution under anthropogenic pressure. These models provide continuous distributed simulations of streamflow, runoff, soil moisture, etc., which, ideally, should be evaluated against continuous distributed measurements, taken at various scales and located in nested sub-catchments. Distributed networks of streamflow gauging stations are in general scarce and very expensive to maintain. Furthermore, they can hardly be installed in the upstream parts of catchments where river beds are not well defined. In this paper, we present an alternative to these standard streamflow gauging station networks, based on self-powered high-resolution water level sensors using a capacitive water height data logger. One of their advantages is that they can be installed even in ephemeral reaches, from channel head locations to high order streams. Furthermore, these innovative and easily adaptable low-cost sensors offer the possibility to develop, in the near future, a wireless network application. Such a network, including 15 sensors, has been set up on nested watersheds in small and intermittent streams of a 7 km² catchment, located in the mountainous "Mont du Lyonnais" area, close to the city of Lyon, France. The land use of this catchment is mostly pasture, crop and forest, but the catchment is significantly affected by human activities, through the existence of a dense network of roads and paths and urbanized areas. The equipment provides a water level survey during precipitation events in the hydrological network at a very accurate time step (2 min). Water levels can be related to runoff production and catchment response as a function of scale. This response will depend, amongst others, on variable soil water storage capacity, physiographic data and characteristics of …

  9. Calculation of distribution of temperature in three-dimensional solid changing its shape during the process

    Directory of Open Access Journals (Sweden)

    Bogusław Bożek

    2005-01-01

Full Text Available The present paper supplements and continues [Bożek B., Filipek R., Holly K., Mączka C.: Distribution of temperature in three-dimensional solids. Opuscula Mathematica 20 (2000), 27-40]. A Galerkin method for the Fourier–Kirchhoff equation is constructed for the case when the equation domain Ω(t) depends on the time t. For the special case Ω(t) ⊂ ℝ², a computer program implementing the above method is written. Binaries and sources of this program are available at http://wms.mat.agh.edu.pl/~bozek.

  10. Barrier distribution from 28Si+154Sm quasielastic scattering: Coupling effects in the fusion process

    Directory of Open Access Journals (Sweden)

    Kaur Gurpreet

    2016-01-01

Full Text Available The barrier distribution for the 28Si+154Sm system has been extracted from a large-angle quasielastic scattering measurement to investigate the role of various channel couplings in fusion dynamics. Coupled channel calculations, including the collective excitation of the target and projectile, are observed to reproduce the experimental barrier distribution rather well. It seems that the role of neutron transfer, relative to collective excitation, is in fact weak in the 28Si+154Sm system even though it has positive Q-values for neutron transfer channels.

  11. Distributed Leadership in Organizational Change Processes: A Qualitative Study in Public Hospital Units

    DEFF Research Database (Denmark)

    Kjeldsen, Anne Mette; Jonasson, Charlotte; Ovesen, Maria

    2015-01-01

    This paper proposes that the emergence and boundaries of distributed leadership (DL) are developed in a dynamic interplay with planned as well as emergent organizational change. The empirical findings are based on a qualitative, longitudinal case study with interviews conducted at two different...... hospital units in the context of a larger hospital merger within the Danish health care system. The paper adds to prior studies arguing that DL contributes positively to planned organizational change by instead providing examples of how ongoing changes in contextual conditions such as routine...

  12. Distribution and rate of microbial processes in ammonia-loaded air filter biofilm

    DEFF Research Database (Denmark)

    Juhler, Susanne; Nielsen, Lars Peter; Schramm, Andreas

    2009-01-01

    The in situ activity and distribution of heterotrophic and nitrifying bacteria and their potential interactions were investigated in a full-scale, two-section, trickling filter designed for biological degradation of volatile organics and NH3 in ventilation air from pig farms. The filter biofilm w...... with heterotrophic bacteria for O2 and inhibition by the protonated form of NO2-, HNO2. Product inhibition of AOB growth could explain why this type of filter tends to emit air with a rather constant NH3 concentration irrespective of variations in inlet concentration and airflow....

  13. The process of developing distributed-efficacy and social practice in the context of ‘ending AIDS’

    Directory of Open Access Journals (Sweden)

    Christopher Burman

    2015-07-01

Full Text Available Introduction: this article reflects on data that emanated from a programme evaluation and focuses on a concept we label ‘distributed-efficacy’. We argue that the process of developing and sustaining ‘distributed-efficacy’ is complex and indeterminate, and thus difficult to manage or predict. We situate the discussion within the context of UNAIDS’ recent strategy — Vision 95:95:95 — to ‘end AIDS’ by 2030, which the South African National Department of Health is currently rolling out across the country. Method: a qualitative method was applied. It included a Value Network Analysis, the Most Significant Change technique and a thematic content analysis of factors associated with a ‘competent community’ model. During the analysis it was noticed that there were unexpected references to a shift in social relations. This prompted a re-analysis of the narrative findings using a second thematic content analysis focused on factors associated with complexity science, the environmental sciences and shifts in social relations. Findings: the efficacy associated with new social practices relating to HIV risk-reduction was distributed amongst networks that included mother—son networks and participant—facilitator networks, and included a shift in social relations within these networks. Discussion: it is suggested that for new social practices to emerge requires the establishment of ‘distributed-efficacy’, which facilitates localised social sanctioning, sometimes including shifts in social relations, and this process is a ‘complex’, dialectical interplay between ‘agency’ and ‘structure’. Conclusion: the ambition of ‘ending AIDS’ by 2030 represents a compressed timeframe that will require the uptake of multiple new bio-social practices. This will involve many nonlinear, complex challenges, and the process of developing ‘distributed-efficacy’ could play a role in this process. Further research into the factors we …

  14. Distribution and stability of Aflatoxin M1 during processing and ripening of traditional white pickled cheese.

    Science.gov (United States)

    Oruc, H H; Cibik, R; Yilmaz, E; Kalkanli, O

    2006-02-01

    The distribution of aflatoxin M(1) (AFM(1)) between the curd, whey, cheese and pickle samples of Turkish white pickled cheese produced according to traditional techniques has been studied, together with its stability during the ripening period. Cheeses were produced in three cheese-making trials using raw milk that was artificially contaminated with AFM(1) at levels of 50, 250 and 750 ng/l and allowed to ripen for three months. AFM(1) determinations were carried out at intervals by LC with fluorescence detection after immunoaffinity column clean-up. During the syneresis of the cheese, a proportionately high concentration of AFM(1) remained in the curd; for each trial the level was 3.6, 3.8 and 4.0 times higher than the level in milk. At the end of ripening, the cheese/(whey + brine) distribution ratio of AFM(1) was 0.9, 1.0 and 1.3 for the first, second and third spiking levels respectively, indicating that nearly half of the AFM(1) remained in the cheese. It was found that only 2-4% of the initially spiked AFM(1) transferred into the brine solution. AFM(1) levels remained constant during the ripening period, suggesting that AFM(1) is quite stable during manufacturing and ripening.

  15. The effect of EIF dynamics on the cryopreservation process of a size distributed cell population.

    Science.gov (United States)

    Fadda, S; Briesen, H; Cincotti, A

    2011-06-01

    Typical mathematical modeling of cryopreservation of cell suspensions assumes a thermodynamic equilibrium between the ice and liquid water in the extracellular solution. This work investigates the validity of this assumption by introducing a population balance approach for dynamic extracellular ice formation (EIF) in the absence of any cryo-protectant agent (CPA). The population balance model reflects nucleation and diffusion-limited growth in the suspending solution whose driving forces are evaluated in the relevant phase diagram. This population balance description of the extracellular compartment has been coupled to a model recently proposed in the literature [Fadda et al., AIChE Journal, 56, 2173-2185, (2010)], which is capable of quantitatively describing and predicting internal ice formation (IIF) inside the cells. The cells are characterized by a size distribution (i.e. through another population balance), thus overcoming the classic view of a population of identically sized cells. From the comparison of the system behavior in terms of the dynamics of the cell size distribution it can be concluded that the assumption of a thermodynamic equilibrium in the extracellular compartment is not always justified. Depending on the cooling rate, the dynamics of EIF needs to be considered. Copyright © 2011 Elsevier Inc. All rights reserved.
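
    For orientation, the kind of balance such models solve can be written compactly; the following is a generic one-dimensional population balance with nucleation and growth, not the authors' exact formulation (their growth term is diffusion-limited and coupled to the phase diagram):

```latex
% Generic 1-D population balance for extracellular ice crystals
% (illustrative form only; the paper's exact model may differ).
\[
  \frac{\partial n(L,t)}{\partial t}
  + \frac{\partial}{\partial L}\bigl[G(L,t)\,n(L,t)\bigr]
  = B(t)\,\delta(L - L_0)
\]
% n(L,t): number density of ice crystals of size L at time t
% G(L,t): crystal growth rate;  B(t): nucleation rate at nucleus size L_0
```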

  16. Distributed Parallel Processing and Dynamic Load Balancing Techniques for Multidisciplinary High Speed Aircraft Design

    Science.gov (United States)

    Krasteva, Denitza T.

    1998-01-01

    Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.). This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
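
    As an illustration of the first of the two load-balancing schemes named above, the sketch below simulates random polling serially in Python; the queue sizes, chunking, and stealing policy are invented for the example and do not reproduce the paper's MPI implementation.

```python
import random

def random_polling(tasks, n_procs, chunk=4, seed=0):
    """Toy serial simulation of random-polling dynamic load balancing.

    Each processor owns a local task queue; an idle processor polls a
    randomly chosen peer and steals half of its remaining work. This
    illustrates the scheme's logic only, not the paper's MPI code.
    """
    rng = random.Random(seed)
    queues = [list(tasks[i::n_procs]) for i in range(n_procs)]
    done, steps = 0, 0
    while done < len(tasks):
        for p in range(n_procs):
            if queues[p]:                      # busy: process one chunk
                take = min(chunk, len(queues[p]))
                del queues[p][:take]
                done += take
            else:                              # idle: poll a random peer
                victim = rng.randrange(n_procs)
                if victim != p and len(queues[victim]) > 1:
                    half = len(queues[victim]) // 2
                    queues[p] = queues[victim][-half:]
                    del queues[victim][-half:]
        steps += 1
    return steps

print(random_polling(list(range(1000)), n_procs=8))   # rounds until all done
```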

  17. Concentration and distribution of elements in plants and soils near phosphate processing factories, Pocatello, Idaho

    International Nuclear Information System (INIS)

    Severson, R.C.; Gough, L.P.

    1976-01-01

    The processing of phosphatic shale near Pocatello, Idaho has a direct influence on the element content of local vegetation and soil. Samples of big sagebrush (Artemisia tridentata Nutt. subsp. tridentata) and cheatgrass (Bromus tectorum L.) show important negative relations between the concentration of certain elements (Cd, Cr, F, Ni, P, Se, U, V, and Zn) and distance from phosphate processing factories. Plant tissues within 3 km of the processing factories contain unusually high amounts of these elements except Ni and Se. Important negative relations with distance were also found for certain elements (Be, F, Fe, K, Li, Pb, Rb, Th, and Zn) in A-horizon soil. Amounts of seven elements (Be, F, Li, Pb, Rb, Th, and Zn) being contributed to the upper 5 cm of the soil by phosphate processing, as well as two additional elements (U and V) suspected as being contributed to soil, were estimated, with F showing the greatest increase (about 300 kg/ha) added to soils as far as 4 km downwind from the factories. The greatest number of important relations for both plants and soils was found downwind (northeast) of the processing factories

  18. Effect of processing conditions on residual stress distributions by bead-on-plate welding after surface machining

    International Nuclear Information System (INIS)

    Ihara, Ryohei; Mochizuki, Masahito

    2014-01-01

    Residual stress is an important factor in the stress corrosion cracking (SCC) that has been observed near the welded zone in nuclear power plants. Surface residual stress is especially significant for SCC initiation. In the joining of pipes, butt welding is conducted after surface machining. Residual stress is generated by both processes, and the residual stress distribution due to surface machining is altered by the subsequent butt welding. In a previous paper, the authors reported that the residual stress distribution generated by bead-on-plate welding after surface machining has a local maximum near the weld metal. The local maximum residual stress is approximately 900 MPa, which exceeds the stress threshold for SCC initiation. Therefore, for the safety improvement of nuclear power plants, a study of the local maximum residual stress is important. In this study, the effect of surface machining and welding conditions on the residual stress distribution generated by welding after surface machining was investigated. Surface machining with a lathe and bead-on-plate welding with a tungsten inert gas (TIG) arc were conducted under various conditions on plate specimens made of SUS316L. Residual stress distributions were then measured by the X-ray diffraction method (XRD). As a result, the residual stress distributions show a local maximum near the weld metal in all specimens, and the values of the local maxima are almost the same. The location of the local maximum varies with the welding condition. It can be considered that the local maximum residual stress is generated by the same mechanism as the welding residual stress, in a surface-machined layer that has a high yield stress. (author)

  19. Spatial patterns in the distribution of kimberlites: relationship to tectonic processes and lithosphere structure

    DEFF Research Database (Denmark)

    Chemia, Zurab; Artemieva, Irina; Thybo, Hans

    2015-01-01

    Since the discovery of diamonds in kimberlite-type rocks more than a century ago, a number of theories regarding the processes involved in kimberlite emplacement have been put forward to explain the unique properties of kimberlite magmatism. Geological data suggests that pre-existing lithosphere...... of establishing characteristic scales for the stage 1 and stage 2 processes. To reveal similarities in the kimberlite data we use a density-based clustering technique, density-based spatial clustering of applications with noise (DBSCAN), which is efficient for large data sets and requires one input...

  20. A cloud based infrastructure to support business process analytics on highly distributed environments

    OpenAIRE

    Vera Baquero, Alejandro

    2015-01-01

    International Mention in the doctoral degree. Business process improvement can drastically influence the profit of corporations and helps them remain viable in a slowdown economy such as the present one. For companies to remain viable in the face of intense global competition, they must be able to continuously improve their processes in order to respond rapidly to changing business environments. In this regard, the analysis of data plays an important role for improving and optimizing ...

  1. Distribution of tritium in a nuclear process heat plant with HTR

    International Nuclear Information System (INIS)

    Steinwarz, W.; Stoever, D.; Hecker, R.; Thiele, W.

    1984-01-01

    The application of HTR process heat in chemical processes involves low contamination of the product by tritium permeation through the heat exchanger walls. According to conservative assumptions for the tritium release rate, and based on experimental permeation data from the German R&D program, a tritium concentration in the PNP product gas of about 10 pCi/g was calculated. The domestic use of the product gas in unvented kitchen ranges, as the most important direct radiation exposure pathway, then leads to an effective equivalent radiation dose of only 20 μrem/a. (orig.)

  2. Decision Process to Identify Lessons for Transition to a Distributed (or Blended) Learning Instructional Format

    Science.gov (United States)

    2009-09-01

    [Abstract not indexed: the extracted text consists only of fragments of a course-event classification table (CATD Welcome, In-Processing, Book Issue, CIF Issue, Outprocessing Procedures, Command Photographs, CG's Philosophy Luncheon, IPCC Social, Administrative Processing, CATD Leadership Luncheon), with each event labeled administrative or social and synchronous, asynchronous, or N/A.]

  3. Distributed Processing of PIV images with a low power cluster supercomputer

    Science.gov (United States)

    Smith, Barton; Horne, Kyle; Hauser, Thomas

    2007-11-01

    Recent advances in digital photography and solid-state lasers make it possible to acquire images at up to 3000 frames per second. However, as the ability to acquire large samples very quickly has been realized, processing speed has not kept pace. A 2-D Particle Image Velocimetry (PIV) acquisition computer would require over five hours to process the data that can be acquired in one second with a Time-resolved Stereo PIV (TRSPIV) system. To decrease the computational time, parallel processing using a Beowulf cluster has been applied. At USU we have developed a low-power Beowulf cluster integrated with the data acquisition system of a TRSPIV system. This approach of integrating the PIV system and the Beowulf cluster eliminates the communication time, thus speeding up the process. In addition to improving the practicality of TRSPIV, this system will also be useful to researchers performing any PIV measurement where a large number of samples are required. Our presentation will describe the hardware and software implementation of our approach.
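
    A minimal sketch of the parallelization idea, assuming FFT-based cross-correlation of interrogation windows distributed over worker processes (here via Python's multiprocessing on one machine rather than a Beowulf cluster; the window size and test frames are illustrative):

```python
import numpy as np
from multiprocessing import Pool

def window_displacement(args):
    """Cross-correlate one interrogation-window pair via FFT and return
    the integer pixel displacement at the correlation peak."""
    a, b = args
    A = np.fft.rfft2(a - a.mean())
    B = np.fft.rfft2(b - b.mean())
    corr = np.fft.irfft2(np.conj(A) * B, s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak indices into signed displacements
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return dy, dx

def piv_field(frame1, frame2, win=32, workers=4):
    """Split a frame pair into interrogation windows and process them in
    parallel, mimicking how a cluster distributes PIV work across nodes."""
    pairs = []
    for i in range(0, frame1.shape[0] - win + 1, win):
        for j in range(0, frame1.shape[1] - win + 1, win):
            pairs.append((frame1[i:i+win, j:j+win],
                          frame2[i:i+win, j:j+win]))
    with Pool(workers) as pool:
        return pool.map(window_displacement, pairs)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    f1 = rng.random((128, 128))
    f2 = np.roll(f1, (3, 5), axis=(0, 1))     # known shift of (3, 5) pixels
    print(piv_field(f1, f2)[:4])               # should report ~(3, 5)
```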

  4. Spatial Patterns in Distribution of Kimberlites: Relationship to Tectonic Processes and Lithosphere Structure

    DEFF Research Database (Denmark)

    Chemia, Zurab; Artemieva, Irina; Thybo, Hans

    2014-01-01

    Since the discovery of diamonds in kimberlite-type rocks more than a century ago, a number of theories regarding the processes involved in kimberlite emplacement have been put forward to explain the unique properties of kimberlite magmatism. Geological data suggests that pre-existing lithosphere...

  5. Birth-death processes with killing : orthogonal polynomials and quasi-stationary distributions

    NARCIS (Netherlands)

    Coolen-Schrijner, Pauline; van Doorn, Erik A.

    2005-01-01

    The Karlin-McGregor representation for the transition probabilities of a birth-death process with an absorbing bottom state involves a sequence of orthogonal polynomials and the corresponding measure. This representation can be generalized to a setting in which a transition to the absorbing state
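
    For reference, the classical Karlin-McGregor representation that this and the following record generalize to processes with killing has the form below (standard notation):

```latex
\[
  p_{ij}(t) \;=\; \pi_j \int_0^{\infty} e^{-xt}\, Q_i(x)\, Q_j(x)\, \psi(dx)
\]
% p_{ij}(t): transition probability from state i to state j in time t
% Q_n: birth-death polynomials;  \psi: orthogonalizing measure
% \pi_j: potential coefficients of the process
```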

  6. Quasi-stationary distributions for birth-death processes with killing

    NARCIS (Netherlands)

    Coolen-Schrijner, Pauline; van Doorn, Erik A.

    2006-01-01

    The Karlin-McGregor representation for the transition probabilities of a birth-death process with an absorbing bottom state involves a sequence of orthogonal polynomials and the corresponding measure. This representation can be generalized to a setting in which a transition to the absorbing state

  7. Start Time and Duration Distribution Estimation in Semi-Structured Processes

    NARCIS (Netherlands)

    Wombacher, Andreas; Iacob, Maria Eugenia

    Semi-structured processes are business workflows, where the execution of the workflow is not completely controlled by a workflow engine, i.e., an implementation of a formal workflow model. Examples are workflows where actors potentially have interaction with customers reporting the result of the

  8. Conditions for the existence of quasi-stationary distributions for birth-death processes with killing

    NARCIS (Netherlands)

    van Doorn, Erik A.

    We consider birth-death processes on the nonnegative integers, where $\\{1,2,...\\}$ is an irreducible class and $0$ an absorbing state, with the additional feature that a transition to state $0$ (killing) may occur from any state. Assuming that absorption at $0$ is certain we are interested in

  9. Systematic Modelling and Crystal Size Distribution Control for Batch Crystallization Processes

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli; Singh, Ravendra; Sin, Gürkan

    Crystallization processes form an important class of separation methods that are frequently used in the chemical, the pharmaceutical and the food industry. The specifications of the crystal product are usually given in terms of crystal size, shape and purity. In order to predict the desired cryst...

  10. Analysis of Production and Distribution Logistics Processes in Verana, s.r.o.

    OpenAIRE

    Musial, Hubert

    2010-01-01

    The objective of this thesis is to analyze the logistics processes of the company Verana, s.r.o. from both the cost and the feasibility perspective. Using this analysis, the thesis investigates the possibility of outsourcing logistics to a third party, that is, a logistics company. Both advantages and disadvantages are addressed and compared to the current state of the company's logistics system.

  11. Knowledge processes, distribution and alignment: Spatio-materialities and transformations in MRI praxis

    DEFF Research Database (Denmark)

    Yoshinaka, Yutaka

    2002-01-01

    may have their origins (the particular sociotechnical contingencies as well as time of their initial materialization) far removed from the very scanning session in question, they enter into its particular make-up and enactment in and through their relationality in the collective of practice at hand......, in light of the distributed work and effects that characterize their materialization and articulation, both within—and subsequent to—the immediate confines of such production and use. The paper is based on an ethnographic study of a relatively routinized MRI practice at a neuroradiology department...... (PACS). Paradoxically, in spite of MRI being a digital modality, the very sociomaterial contingencies of IT and electronic media come to have significant bearing on the spaces in which MRI’s can be articulated in practice through ‘filmless’ radiology....

  12. Depth distribution and abundance of a coral-associated reef fish: roles of recruitment and post-recruitment processes

    Science.gov (United States)

    Smallhorn-West, Patrick F.; Bridge, Tom C. L.; Munday, Philip L.; Jones, Geoffrey P.

    2017-03-01

    The abundance of many reef fish species varies with depth, but the demographic processes influencing this pattern remain unclear. Furthermore, while the distribution of highly specialized reef fish often closely matches that of their habitat, it is unclear whether changes in distribution patterns over depth are the result of changes in habitat availability or independent depth-related changes in population parameters such as recruitment and mortality. Here, we show that depth-related patterns in the distribution of the coral-associated goby, Paragobiodon xanthosoma, are strongly related to changes in recruitment and performance (growth and survival). Depth-stratified surveys showed that while the coral host, Seriatopora hystrix, extended into deeper water (>20 m), habitat use by P. xanthosoma declined with depth and both adult and juvenile P. xanthosoma were absent below 20 m. Standardization of S. hystrix abundance at three depths (5, 15 and 30 m) demonstrated that recruitment of P. xanthosoma was not determined by the availability of its habitat. Reciprocal transplantation of P. xanthosoma to S. hystrix colonies among three depths (5, 15 and 30 m) then established that individual performance (survival and growth) was lowest in deeper water; mortality was three times higher and growth greatly reduced in individuals transplanted to 30 m. Individuals collected from 15 m also exhibited growth rates 50% lower than fish from shallow depths. These results indicate that the depth distribution of this species is limited not by the availability of its coral habitat, but by demographic costs associated with living in deeper water.

  13. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise.

    Science.gov (United States)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
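
    A toy numerical sketch in the same spirit, assuming scipy's alpha-stable sampler for the heavy-tailed noise and a plain (non-fractional) relaxation term; it only illustrates how a growth exponent beta is extracted from the global width, not the paper's fractional-diffusion solver:

```python
import numpy as np
from scipy.stats import levy_stable

def global_width_series(mu=1.5, nt=500, nx=256, dt=0.01, nu=1.0, seed=0):
    """Toy interface driven by heavy-tailed (alpha-stable) noise.
    A plain diffusive relaxation term is used purely for illustration;
    the paper's fractional-diffusion operator is NOT implemented here.
    """
    rng = np.random.default_rng(seed)
    h = np.zeros(nx)
    widths = np.empty(nt)
    for t in range(nt):
        lap = np.roll(h, 1) - 2.0 * h + np.roll(h, -1)   # discrete Laplacian
        eta = levy_stable.rvs(mu, 0.0, size=nx, random_state=rng)
        eta = np.clip(eta, -50, 50)        # tame the rarest huge jumps
        h += dt * (nu * lap + eta)
        widths[t] = h.std()                # global width W(t)
    return widths

W = global_width_series()
t = np.arange(1, 200)
beta = np.polyfit(np.log(t), np.log(W[1:200]), 1)[0]   # W(t) ~ t^beta
print(f"estimated growth exponent beta ~ {beta:.2f}")
```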

  14. Controlling the length scale and distribution of the ductile phase in metallic glass composites through friction stir processing.

    Science.gov (United States)

    Arora, Harpreet Singh; Mridha, Sanghita; Grewal, Harpreet Singh; Singh, Harpreet; Hofmann, Douglas C; Mukherjee, Sundeep

    2014-06-01

    We demonstrate the refinement and uniform distribution of the crystalline dendritic phase by friction stir processing (FSP) of titanium-based in situ ductile-phase reinforced metallic glass composite. The average size of the dendrites was reduced by almost a factor of five (from 24 μm to 5 μm) for the highest tool rotational speed of 900 rpm. The large inter-connected dendrites become more fragmented with increased circularity after processing. The changes in thermal characteristics were measured by differential scanning calorimetry. The reduction in crystallization enthalpy after processing suggests partial devitrification due to the high strain plastic deformation. FSP resulted in increased hardness and modulus for both the amorphous matrix and the crystalline phase. This is explained by interaction of shear bands in the amorphous matrix with the strain-hardened dendritic phase. Our approach offers a new strategy for microstructural design in metallic glass composites.

  15. 137Cs distribution in soil as a function of erosion and other processes

    International Nuclear Information System (INIS)

    Villar, H.P.

    1981-08-01

    Nuclear weapons tests have deposited considerable amounts of the fission product 137Cs upon the Saskatoon area. The average concentration for the area was found to be 67.3 nCi/m2, and its distribution over the area as a whole is quite uniform. This figure is high enough to allow the evaluation of deviations from it caused by erosion and deposition processes; 137Cs losses as high as 34% were observed on knolls of cultivated fields, whereas an increase of 95% over the regional average was found in depressions. Such results favour the use of fallout 137Cs as a tracer in the study of physical processes in the soil of the Saskatoon region. (Author)

  16. The study of sub-surface damage distributions during grinding process on different abrasion materials

    Science.gov (United States)

    Kuo, Ching-Hsiang; Huang, Chien-Yao; Yu, Zong-Ru; Shu, Shyu-Cheng; Chang, Keng-Shou; Hsu, Wei-Yao

    2017-10-01

    The grinding process is the primary technology for curvature generation (CG) on glass optics. A higher material removal rate (MRR) leads to deeper sub-surface damage (SSD) on the lens surface. The SSD must be removed by subsequent lapping and polishing processes to ensure lens quality. However, removing the SSD directly from the ground surface is neither easy nor efficient for aspheric surfaces with tens or hundreds of microns of departure from the best-fit sphere (BFS). An efficient fabrication procedure for glass materials with large aspheric departure must therefore be considered. We propose a 3-step fabrication procedure for aspheric surfaces with large departure. The 1st step is to generate a specific aspheric surface with a residual SSD depth of less than 10 μm. The 2nd step is to remove the SSD while keeping the aspheric form, using a Zeeko polisher with a higher-MRR pad. The final step is to figure and finish the aspheric surface using a QED MRF machine. In this study, we focus on the 1st step and investigate the residual depth of SSD after grinding on different abrasion materials. The tested materials are fused silica, S-NPH2, and S-PHM52. Cross grinding is configured, and the depth of SSD and the surface roughness are evaluated. The characteristics of the SSD were observed with a confocal microscope after etching. The experimental results show a depth of SSD below 31.1 μm with a #400 grinding wheel, and a depth of SSD near 10 μm is achieved with a #1,000 grinding wheel. This means that aspherization polishing of large parts with large departure from the best-fit sphere can be replaced. The fabrication of large aspheric parts would thus become more efficient.

  17. Processing and Characterization of a Novel Distributed Strain Sensor Using Carbon Nanotube-Based Nonwoven Composites

    OpenAIRE

    Dai, Hongbo; Thostenson, Erik T.; Schumacher, Thomas

    2015-01-01

    This paper describes the development of an innovative carbon nanotube-based nonwoven composite sensor that can be tailored for strain sensing properties and potentially offers a reliable and cost-effective sensing option for structural health monitoring (SHM). This novel strain sensor is fabricated using a readily scalable process of coating carbon nanotubes (CNTs) onto a nonwoven carrier fabric to form an electrically isotropic conductive network. Epoxy is then infused into the CNT-modified ...

  18. A Widely-Accessible Distributed MEMS Processing Environment. The MEMS Exchange Program

    Science.gov (United States)

    2012-10-29

    [Abstract not indexed: the extracted text consists only of disconnected report fragments (a note that program revenue, after rising steadily for several years, began to drop in 2009, attributed to the weak economy) and partial book references (the MEMS Materials and Processing Handbook, R. Ghodssi and P. Lin, eds., Springer, New York, 2011; BioMEMS and Biomaterials for Medical Applications, in Biomaterials Science: An Integrated Clinical and Engineering Approach, Y. Rosen and N. Elman, eds., CRC Press, Boca Raton).]

  19. Distributed Nd-YAG laser welding and process control in inert glove boxes

    International Nuclear Information System (INIS)

    Milewski, J.O.; Lewis, G.K.; Barbe, M.R.; Cremers, D.A.

    1993-01-01

    We have fabricated and assembled a fiber-optic-delivered Nd-YAG laser welding workstation that consists of three glove boxes served by a single 1 kW laser. Processing considerations related to the welding of special nuclear materials, toxic materials and complex part geometries are addressed within each work cell. We are proceeding with a development effort to integrate the equipment capabilities with remote sensing, process monitoring and control systems. These systems will provide real-time data acquisition during welding, monitoring and verification of weld parameters, and CAD/CAM to CNC generated positioning paths. Computerized information storage, retrieval and network methods are used for weld process documentation and data analysis. A virtual control panel is being configured to integrate the monitoring and control of individual subsystems, such as the laser and motion control, into a single graphical interface. Development work on sensors to monitor laser beam characteristics and weld depth in real time, with potential for adaptive control, is in progress. System capabilities and results of these development efforts are presented.

  20. Distribution of electrical energy consumption for the efficient degradation control of THMs mixture in sonophotolytic process.

    Science.gov (United States)

    Park, Beomguk; Cho, Eunju; Son, Younggyu; Khim, Jeehyeong

    2014-11-01

    Sonophotolytic degradation of a THMs mixture was carried out with different electrical energy ratios for efficient process design. The total electrical energy consumption was fixed at around 50 W, and five different energy conditions were applied. The maximum degradation rate was observed under the conditions US:UV=1:3 and US:UV=0:4, because photolytic degradation of the brominated compounds is the dominant degradation mechanism for THMs removal. However, the fastest degradation of total organic carbon was observed in the US:UV=1:3 condition. Because the hydrogen peroxide generated by sonication was effectively dissociated into hydroxyl radicals by the ultraviolet light, the hydroxyl radical concentration was kept high, and this mechanism provided additional degradation of organics. This result was supported by a comparison of the hydrogen peroxide concentrations in the sole and combined processes. Consequently, the optimal energy ratio for degradation of THMs in the sonophotolytic process was US:UV=1:3. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. A CFD model for analysis of performance, water and thermal distribution, and mechanical related failure in PEM fuel cells

    Directory of Open Access Journals (Sweden)

    Maher A.R. Sadiq Al-Baghdadi

    2016-07-01

    Full Text Available This paper presents a comprehensive three-dimensional, multi-phase, non-isothermal model of a Proton Exchange Membrane (PEM) fuel cell that incorporates the significant physical processes and key parameters affecting fuel cell performance. The model construction involves derivation of the equations, setting of boundary conditions, and a solution-algorithm flow chart. Equations in the gas flow channels, gas diffusion layers (GDLs), catalyst layers (CLs), and membrane, as well as equations governing cell potential and hygro-thermal stresses, are described. The algorithm flow chart starts from input of the desired cell current density, initialization, and iteration of the equation solution, and finalizes by calculating the cell potential. In order to analyze performance, water and thermal distribution, and mechanically related failure in the cell, the equations are solved using a computational fluid dynamics (CFD) code. The performance analysis includes a performance curve, which plots the cell potential (V) against nominal current density (A/cm2), as well as the losses. Velocity vectors of gas and liquid water, liquid water saturation, and water content profiles are calculated. The thermal distribution is then calculated together with the hygro-thermal stresses and deformation. The CFD model was executed under boundary conditions of 20°C room temperature, 35% relative humidity, and 1 MPa pressure on the lower surface. Parameter values of the membrane electrode assembly (MEA) and other base conditions are selected. A cell with dimensions of 1 mm x 1 mm x 50 mm is used as the object of analysis. The nominal current density of 1.4 A/cm2 is given as the input of the CFD calculation. The results show that the model represents well the performance curve obtained through experiment. Moreover, it can be concluded that the model can help in understanding the complex processes in the cell that are hard to study experimentally, and also provides a computer-aided tool for design and optimization of PEM
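
    The final step of the algorithm (computing the cell potential for a given nominal current density) can be caricatured with a zero-dimensional polarization model; all constants below are generic textbook placeholders rather than the paper's fitted MEA properties, and the 3-D CFD iteration is omitted:

```python
import numpy as np

def cell_potential(j, T=353.0, j0=1e-4, R_ohm=0.15, j_lim=2.0):
    """Zero-dimensional sketch: subtract activation, ohmic and
    concentration losses from the open-circuit potential for a given
    current density j (A/cm2). Parameter values are illustrative only."""
    F, Rgas = 96485.0, 8.314
    e_ocv = 1.229 - 8.5e-4 * (T - 298.15)                   # open-circuit potential
    eta_act = (Rgas * T / (2 * 0.5 * F)) * np.log(j / j0)   # Tafel kinetics
    eta_ohm = j * R_ohm                                     # ohmic loss
    eta_conc = -(Rgas * T / (2 * F)) * np.log(1 - j / j_lim)  # mass transport
    return e_ocv - eta_act - eta_ohm - eta_conc

print(f"V at 1.4 A/cm2 ~ {cell_potential(1.4):.3f} V")      # roughly 0.66 V here
```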

  2. A quasi-online distributed data processing on WAN: the ATLAS muon calibration system

    CERN Document Server

    De Salvo, A; The ATLAS collaboration

    2013-01-01

    In the ATLAS experiment, the calibration of the precision tracking chambers of the muon detector is very demanding, since the rate of muon tracks required to get a complete calibration in homogeneous conditions and to feed prompt reconstruction with fresh constants is very high (several hundred Hz for 8-10 hour runs). The calculation of calibration constants is highly CPU-consuming. In order to fulfill the requirement of completing the cycle and having the final constants available within 24 hours, distributed resources at Tier-2 centers have been allocated. The best place to get muon tracks suitable for detector calibration is the second-level trigger, where the pre-selection of data in a limited region by the first-level trigger via the Region-of-Interest mechanism allows selecting all the hits from a single track in a limited region of the detector. Online data extraction allows calibration data collection without performing special runs. Small event pseudo-fragments (about 0.5 kB) built at the m...

  3. Physical Processes Affecting the Distribution of Didymosphenia Geminata Biomass Bloom in Rapid Creek, South Dakota

    Science.gov (United States)

    Abessa, M. B.; Sundareshwar, P. V.; Updhayay, S.

    2010-12-01

    Didymosphenia geminata is a freshwater diatom that has invaded and colonized many of the world's oligotrophic streams and rivers, including Rapid Creek in western South Dakota, a perennial oligotrophic stream that emerges from the Black Hills and is fed by cold water released from the Pactola Reservoir. Since 2002, D. geminata blooms have been observed in certain stretches of Rapid Creek. These massive blooms are localized to segments of the creek where the flow is mainly slow, stable and shallow, dominated by boulder-type bed material and submerged large woody debris. Water chemistry data from this creek showed that the variability of major nutrients such as phosphate, nitrate/nitrite and ammonium is insignificant across our study sites, while the nature of the stream flow is quite irregular. We measured flow rates, depth, temperature, stream bed characteristics, water chemistry, and D. geminata biomass in regions with and without blooms. The presentation will discuss how changes in physical parameters along the various reaches of the creek impact the biomass distribution of this invasive alga.

  4. AIDA – Seismic data acquisition, processing, storage and distribution at the National Earthquake Center, INGV

    Directory of Open Access Journals (Sweden)

    Salvatore Mazza

    2012-10-01

    Full Text Available On May 4, 2012, a new system, known as AIDA (Advanced Information and Data Acquisition system for seismology), became operational as the primary tool to monitor, analyze, store and distribute seismograms from the Italian National Seismic Network. Only 16 days later, on May 20, 2012, northern Italy was struck by a Ml 5.9 earthquake that caused seven casualties. This was followed by numerous small to moderate earthquakes, with some over Ml 5. Then, on May 29, 2012, a Ml 5.8 earthquake resulted in 17 more victims and left about 14,000 people homeless. This sequence produced more than 2,100 events over 40 days, and it was still active at the end of June 2012, with minor earthquakes occurring at a rate of about 20 events per day. The new AIDA data management system was designed and implemented, among other things, to exploit the recent huge upgrade of the Italian Seismic Network (in terms of the number and quality of stations) and to overcome the limitations of the previous system.

  5. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
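
    For contrast with the t-f variant, a minimal conventional ML DOA grid search for a single narrowband source on a uniform linear array looks as follows (for one source this reduces to a beamformer scan); the t-f ML method would build the covariance from selected time-frequency points instead of all snapshots. All parameters here are illustrative:

```python
import numpy as np

def ml_doa(X, n_sensors, d=0.5, grid=np.linspace(-90, 90, 361)):
    """Single-source ML DOA estimate on a uniform linear array with
    sensor spacing d (in wavelengths): maximize a(theta)^H R a(theta)."""
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    best, theta_hat = -np.inf, None
    for theta in grid:
        phase = -2j * np.pi * d * np.arange(n_sensors) * np.sin(np.radians(theta))
        a = np.exp(phase)                     # steering vector
        val = np.real(a.conj() @ R @ a) / n_sensors
        if val > best:
            best, theta_hat = val, theta
    return theta_hat

# synthetic test: one source at 20 degrees, 8 sensors, half-wavelength spacing
rng = np.random.default_rng(0)
m, n, theta0 = 8, 200, 20.0
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(np.radians(theta0)))
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)
X = np.outer(a0, s) + 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
print(ml_doa(X, m))                           # should be close to 20.0
```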

  6. A framework for analysis of abortive colony size distributions using a model of branching processes in irradiated normal human fibroblasts.

    Directory of Open Access Journals (Sweden)

    Tetsuya Sakashita

    Full Text Available Clonogenicity gives important information about the cellular reproductive potential following ionizing irradiation, but an abortive colony that fails to continue to grow remains poorly characterized. It was recently reported that the fraction of abortive colonies increases with increasing dose. Thus, we set out to investigate the production kinetics of abortive colonies using a model of branching processes. We first plotted the experimentally determined colony size distribution of abortive colonies in irradiated normal human fibroblasts, and found a linear relationship on the log-linear or log-log plot. By applying the simple branching-process model to this linear relationship, we found persistent reproductive cell death (RCD) over several generations following irradiation. To verify the estimated probability of RCD, the abortive colony size distribution (≤ 15 cells) and the surviving fraction were simulated by a Monte Carlo computational approach for colony expansion. Parameters estimated from the log-log fit performed better in both simulations than those from the log-linear fit. Radiation-induced RCD, i.e. excess probability, lasted over 16 generations and mainly consisted of two components, in the early (<3 generations) and late phases. Intriguingly, the survival curve was sensitive to the excess probability over 5 generations, whereas the abortive colony size distribution was robust against it. These results suggest that, whereas short-term RCD is critical to the abortive colony size distribution, long-lasting RCD is important for the dose response of the surviving fraction. Our present model provides a single framework for understanding the behavior of primary cell colonies in culture following irradiation.
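
    A minimal Monte Carlo sketch of the colony-expansion idea, assuming a constant per-generation death probability in place of the paper's generation-dependent RCD probability; colony size is proxied by the largest cell count reached before extinction:

```python
import random
from collections import Counter

def abortive_colony_sizes(p_death, n_colonies=10000, max_cells=15, seed=1):
    """Galton-Watson-type sketch: each cell dies with probability p_death
    or divides into two. Colonies that never exceed max_cells are counted
    as abortive, binned by the largest cell count they reached."""
    rng = random.Random(seed)
    sizes = Counter()
    for _ in range(n_colonies):
        alive, largest = 1, 1
        while alive and largest <= max_cells:
            next_gen = sum(2 for _ in range(alive) if rng.random() >= p_death)
            alive = next_gen
            largest = max(largest, alive)
        if largest <= max_cells:              # abortive colony
            sizes[largest] += 1
    return sizes

for size, count in sorted(abortive_colony_sizes(0.3).items()):
    print(size, count)                        # size distribution of abortive colonies
```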

  7. Numerical simulation of the laser welding process for the prediction of temperature distribution on welded aluminium aircraft components

    Science.gov (United States)

    Tsirkas, S. A.

    2018-03-01

    The present investigation is focused on modelling the temperature field in aluminium aircraft components welded by a CO2 laser. A three-dimensional finite element model has been developed to simulate the laser welding process and predict the temperature distribution in T-joint laser-welded plates with fillet material. The simulation of the laser beam welding process was performed using a nonlinear heat transfer analysis, based on a keyhole formation model. The model employs the technique of element "birth and death" in order to simulate the weld fillet. Various phenomena associated with welding, such as temperature-dependent material properties and heat losses through convection and radiation, were accounted for in the model. The materials considered were 6056-T78 and 6013-T4 aluminium alloys, commonly used for aircraft components. The temperature distribution during the laser welding process has been calculated numerically and validated by experimental measurements at different locations on the welded structure. The numerical results are in good agreement with the experimental measurements.
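
    To make the thermal-field idea concrete, here is a one-dimensional explicit finite-difference toy with a moving Gaussian heat source; the material constants and source parameters are invented placeholders, and the element birth-and-death mechanics of the 3-D FE model are not represented:

```python
import numpy as np

def weld_temperature(nx=200, nt=400, dx=1e-3, dt=1e-3,
                     alpha=3e-5, q=8e4, v=0.2, t0=300.0):
    """Explicit 1-D heat equation with a moving Gaussian source,
    sketching the transient thermal field of a traveling weld pass.
    alpha: thermal diffusivity (m2/s); q: source strength (K/s);
    v: travel speed (m/s). All values are illustrative only."""
    T = np.full(nx, t0)
    x = np.arange(nx) * dx
    r = alpha * dt / dx**2                 # explicit scheme stable for r <= 0.5
    assert r <= 0.5
    for step in range(nt):
        xs = v * step * dt                 # current source position
        source = q * np.exp(-((x - xs) / (5 * dx)) ** 2)
        T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2]) + dt * source[1:-1]
        T[0] = T[-1] = t0                  # fixed-temperature boundaries
    return x, T

x, T = weld_temperature()
print(f"peak temperature ~ {T.max():.0f} K at x = {x[T.argmax()]*1e3:.1f} mm")
```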

  8. Rapid Measurements of Aerosol Size Distribution and Hygroscopic Growth via Image Processing with a Fast Integrated Mobility Spectrometer (FIMS)

    Science.gov (United States)

    Wang, Y.; Pinterich, T.; Spielman, S. R.; Hering, S. V.; Wang, J.

    2017-12-01

    Aerosol size distribution and hygroscopicity are among the key parameters determining the impact of atmospheric aerosols on global radiation and climate change. In situ submicron aerosol size distribution measurements commonly involve a scanning mobility particle sizer (SMPS). The SMPS scanning time is on the scale of minutes, which is often too slow to capture the variation of the aerosol size distribution, for example for aerosols formed via nucleation processes or during measurements onboard research aircraft. To solve this problem, a Fast Integrated Mobility Spectrometer (FIMS) based on image processing was developed for rapid measurements of aerosol size distributions from 10 to 500 nm. The FIMS consists of a parallel-plate classifier, a condenser, and a CCD detector array. Inside the classifier an electric field separates charged aerosols based on their electrical mobilities. Upon exiting the classifier, the aerosols pass through a three-stage growth channel (Pinterich et al. 2017; Spielman et al. 2017), where aerosols as small as 7 nm are enlarged to above 1 μm through water or heptanol condensation. Finally, the grown aerosols are illuminated by a laser sheet and imaged onto a CCD array. The images provide both aerosol concentration and position, which directly relate to the aerosol size distribution. By this simultaneous measurement of aerosols of different sizes, the FIMS provides aerosol size spectra nearly 100 times faster than the SMPS. Recent deployment onboard research aircraft demonstrated that the FIMS is capable of measuring aerosol size distributions in 1 s, thereby offering a great advantage in applications requiring high time resolution (Wang et al. 2016). In addition, coupling the FIMS with other conventional aerosol instruments provides orders of magnitude more rapid characterization of aerosol optical and microphysical properties. For example, the combination of a differential mobility analyzer, a relative humidity control unit, and a FIMS was

  9. A Life Cycle Assessment on a Fuel Production Through Distributed Biomass Gasification Process

    Science.gov (United States)

    Dowaki, Kiyoshi; Eguchi, Tsutomu; Ohkubo, Rui; Genchi, Yutaka

    In this paper, we estimated life cycle inventories (energy intensities and CO2 emissions) for biomass gasification CGS, Bio-H2, Bio-MeOH (methanol) and Bio-DME (di-methyl ether) using a bottom-up methodology. CO2 emissions and energy intensities for material chipping, transportation and dryer operation were estimated. Also, the uncertainties in the moisture content of the biomass materials and in the transportation distance to the plant were treated by Monte Carlo simulation. The energy conversion system was built around gasification through the BLUE Tower process, followed by either a CGS, a PSA (pressure swing adsorption) system or a liquefaction process. In our estimation, the biomass materials were waste products from Japanese cedar. The uncertainties in moisture content and transportation distance were assumed to be 20 to 50 wt.% and 5 to 50 km, respectively. The capacity of the biomass gasification plant was 10 t-dry/d, that is, an annual throughput of 3,000 t-dry/yr. The production energy in each case was used as the functional unit. Finally, energy intensities of 1.12 to 3.09 MJ/MJ and CO2 emissions of 4.79 to 88.0 g-CO2/MJ were obtained. The CGS case contributes to environmental mitigation, and the Bio-H2 and Bio-DME cases have the potential to reduce CO2 emissions compared to the conventional ones.
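
    The uncertainty treatment can be sketched as below, sampling the stated moisture-content and distance ranges and propagating them through a toy linear inventory model; every coefficient in the model is invented for illustration and is not the paper's inventory data:

```python
import numpy as np

def mc_co2_intensity(n=10000, seed=0):
    """Monte Carlo sketch: sample moisture content (20-50 wt.%) and
    transport distance (5-50 km) uniformly and propagate through a
    hypothetical linear inventory. Coefficients are placeholders."""
    rng = np.random.default_rng(seed)
    moisture = rng.uniform(0.20, 0.50, n)       # wt. fraction
    distance = rng.uniform(5.0, 50.0, n)        # km
    drying = 40.0 * moisture                    # g-CO2/MJ, hypothetical
    transport = 0.15 * distance                 # g-CO2/MJ, hypothetical
    base = 5.0                                  # g-CO2/MJ, hypothetical
    total = base + drying + transport
    return total.mean(), np.percentile(total, [5, 95])

mean, (p5, p95) = mc_co2_intensity()
print(f"mean {mean:.1f} g-CO2/MJ, 90% interval [{p5:.1f}, {p95:.1f}]")
```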

  10. Cloudwave: distributed processing of "big data" from electrophysiological recordings for epilepsy clinical research using Hadoop.

    Science.gov (United States)

    Jayapandian, Catherine P; Chen, Chien-Hung; Bozorgi, Alireza; Lhatoo, Samden D; Zhang, Guo-Qiang; Sahoo, Satya S

    2013-01-01

    Epilepsy is the most common serious neurological disorder, affecting 50-60 million persons worldwide. Multi-modal electrophysiological data, such as electroencephalography (EEG) and electrocardiography (EKG), are central to effective patient care and clinical research in epilepsy. Electrophysiological data is an example of clinical "big data", consisting of more than 100 multi-channel signals, with the recordings from each patient generating 5-10 GB of data. Current approaches to storing and analyzing signal data using standalone tools, such as the Nihon Kohden neurology software, are inadequate to meet the growing volume of data and the need to support multi-center collaborative studies with real-time, interactive access. In this paper we introduce the Cloudwave platform, which features a Web-based intuitive signal analysis interface integrated with a Hadoop-based data processing module implemented on clinical data stored in a "private cloud". Cloudwave has been developed as part of the National Institute of Neurological Disorders and Stroke (NINDS)-funded multi-center Prevention and Risk Identification of SUDEP Mortality (PRISM) project. The Cloudwave visualization interface provides real-time rendering of multi-modal signals with "montages" for EEG feature characterization over 2 TB of patient data generated at the Case University Hospital Epilepsy Monitoring Unit. Results from a performance evaluation of the Cloudwave Hadoop data processing module demonstrate one order of magnitude improvement in performance over 77 GB of patient data. (Cloudwave project: http://prism.case.edu/prism/index.php/Cloudwave)
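
    In the Hadoop Streaming style that such a processing module can use, a mapper/reducer pair reducing per-channel statistics might look like this; the channel<TAB>amplitude input format is a hypothetical flattening of signal data, not Cloudwave's actual storage layout:

```python
#!/usr/bin/env python
# Hadoop Streaming style mapper/reducer pair: Hadoop sorts mapper output
# by key, so the reducer sees each channel's values grouped together.
import sys

def mapper():
    for line in sys.stdin:
        channel, value = line.rstrip("\n").split("\t")
        print(f"{channel}\t{value}")          # identity map keyed by channel

def reducer():
    current, total, count = None, 0.0, 0
    for line in sys.stdin:
        channel, value = line.rstrip("\n").split("\t")
        if channel != current:
            if current is not None:
                print(f"{current}\t{total / count:.4f}")   # mean amplitude
            current, total, count = channel, 0.0, 0
        total += float(value)
        count += 1
    if current is not None:
        print(f"{current}\t{total / count:.4f}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```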

  11. Feasibility of automated dropsize distributions from holographic data using digital image processing techniques. [particle diameter measurement technique

    Science.gov (United States)

    Feinstein, S. P.; Girard, M. A.

    1979-01-01

    An automated technique for measuring particle diameters and their spatial coordinates from holographic reconstructions is being developed. Preliminary tests on actual cold-flow holograms of impinging jets indicate that a suitable discriminant algorithm consists of a Fourier-Gaussian noise filter and a contour thresholding technique. This process identifies circular as well as noncircular objects. The desired objects (in this case, circular or possibly ellipsoidal) are then selected automatically from the above set and stored with their parametric representations. From this data, dropsize distributions as a function of spatial coordinates can be generated and combustion effects due to hardware and/or physical variables studied.
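
    A sketch of such a discriminant using standard Python imaging tools, with a spatial Gaussian filter standing in for the Fourier-Gaussian filter and all thresholds illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

def droplet_diameters(img, sigma=2.0, thresh=0.5, min_circ=0.8):
    """Smooth, threshold, then keep only near-circular regions and
    report their equivalent diameters. Parameter values are illustrative."""
    smooth = gaussian_filter(img.astype(float), sigma)
    binary = smooth > thresh * smooth.max()
    labels = measure.label(binary)
    diameters = []
    for region in measure.regionprops(labels):
        if region.perimeter == 0:
            continue
        circularity = 4 * np.pi * region.area / region.perimeter ** 2
        if circularity >= min_circ:           # keep circular droplets only
            diameters.append(region.equivalent_diameter)
    return diameters

# synthetic frame with two disks plus background noise
yy, xx = np.mgrid[:128, :128]
img = ((xx - 40) ** 2 + (yy - 40) ** 2 < 10 ** 2).astype(float)
img += ((xx - 90) ** 2 + (yy - 80) ** 2 < 6 ** 2)
img += 0.2 * np.random.default_rng(0).random((128, 128))
print(droplet_diameters(img))                 # roughly [20, 12]
```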

  12. Effects of heat stress on dynamic absorption process, tissue distribution and utilization efficiency of vitamin C in broilers

    International Nuclear Information System (INIS)

    Liu Guohua; Chen Guosheng; Cai Huiyi

    1998-01-01

    The experiment was conducted to determine the effects of heat stress on the ascorbic acid nutritional physiology of broilers using radioisotope technology. 3H-Vc was fed to broilers, and then the blood, liver, kidney, breast muscle and excreta were sampled to determine the dynamic absorption process, the tissue distribution and the utilization efficiency of vitamin C. The results indicated that the absorption, metabolism and mobilization of supplemented vitamin C were faster in broilers under heat stress than in broilers without heat stress. However, the utilization efficiency of supplemented vitamin C in broilers under heat stress was not higher than that of broilers without heat stress.

  13. Applicability of Agent-Based Technology for Acquisition, Monitoring and Process Control Systems at Real Time for Distributed Architectures

    International Nuclear Information System (INIS)

    Dorao, Carlos; Fontanini, H; Fernandez, R

    2000-01-01

    Modern industrial plants are characterized by their large size and the high complexity of the processes involved in their operation. The real-time monitoring systems of these plants must use a distributed architecture. Under the pressure of competitive markets, plants must also adapt efficiently to change: modifications due to changes in lay-out or the introduction of newer supervision, control and monitoring technologies must not affect the integrity of the systems. The aim of this work is to give an introduction to agent-based technology and to analyze its advantages for the development of modern monitoring systems.

  14. Supplementary Material for: High-Order Composite Likelihood Inference for Max-Stable Distributions and Processes

    KAUST Repository

    Castruccio, Stefano

    2016-01-01

    In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of points is a very challenging problem and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.

  15. Development of an intelligent CAI system for a distributed processing environment

    International Nuclear Information System (INIS)

    Fujii, M.; Sasaki, K.; Ohi, T.; Itoh, T.

    1993-01-01

    In order to operate a nuclear power plant optimally in both normal and abnormal situations, the operators are trained using an operator training simulator in addition to classroom instruction. Individual instruction using a CAI (Computer-Assisted Instruction) system has become popular as a method of learning plant information, such as plant dynamics, operational procedures, plant systems, plant facilities, etc. This paper outlines a proposed network-based intelligent CAI system (ICAI) incorporating multimedia PWR plant dynamics simulation, teaching aids and educational record management, using the following environment: existing standard workstations and graphic workstations with a live video processing function, the TCP/IP protocol of Unix over Ethernet, and the X window system. (Z.S.) 3 figs., 2 refs

  16. High-order Composite Likelihood Inference for Max-Stable Distributions and Processes

    KAUST Repository

    Castruccio, Stefano

    2015-09-29

    In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of locations is a very challenging problem in computational statistics, and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely-used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.
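
    The order-2 (pairwise) member of the composite-likelihood family discussed here has the familiar form below; the article's higher-order versions replace pairs with larger subsets of observations:

```latex
\[
  \ell_{\mathrm{CL}}(\theta)
  \;=\; \sum_{i<j} w_{ij}\, \log f\!\left(y_i, y_j;\, \theta\right)
\]
% f: bivariate max-stable density of the pair (y_i, y_j)
% w_{ij}: optional weights, e.g. truncating to nearby pairs of locations
```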

  17. Search for the algorithm of genes distribution during the process of microbial evolution

    Science.gov (United States)

    Pikuta, Elena V.

    2015-09-01

    Previous two- and three-dimensional graph analyses of eco-physiological data on Archaea demonstrated a specific geometry for the distribution of the major prokaryotic groups, following a hyperboloid function. The function of a two-sheet hyperboloid covered all known biological groups and could therefore be applied to the entire evolution of life on Earth. The vector of evolution pointed from hyperthermal temperature, extreme acidity and low salinity toward low temperature and increased alkalinity and salinity. According to this vector, the following groups were chosen for the gene screening analysis. In the vector "High-Temperature → Low-Temperature" within extremely acidic pH (0-3), these are: 1) the hyperthermophilic Crenarchaeota - order Sulfolobales, 2) the moderately thermophilic Euryarchaeota - class Thermoplasmata, and 3) mesophilic acidophiles - the genus Thiobacillus and others. In the vector "Low pH → High pH" the following groups were selected in three temperature ranges: a) hyperthermophilic Archaea and Eubacteria, b) moderate thermophiles - representatives of the genera Anaerobacter and Anoxybacillus, and c) mesophilic haloalkaliphiles (Eubacteria and Archaea). The genes associated with acidophily (H+ pump), chemolitho-autotrophy (proteins of biochemical cycles), polymerases, and histones were proposed for screening along the first vector; for the second vector, the genes associated with halo-alkaliphily (Na+ pumps), enzymes of organotrophic metabolism (saccharolytic and proteolytic), and others were indicated. Here, an introduction to the phylogenetic constant (ρη) is presented and discussed. This universal characteristic is calculated for two principally different life forms, Prokaryotes and Eukaryotes; the existence of the second type of life form is impossible without the first. The number of chromosomes in prokaryotic organisms is limited to one (with very rare exceptions, two), while in eukaryotic organisms this number is larger. Currently

  18. Distributive On-line Processing, Visualization and Analysis System for Gridded Remote Sensing Data

    Science.gov (United States)

    Leptoukh, G.; Berrick, S.; Liu, Z.; Pham, L.; Rui, H.; Shen, S.; Teng, W.; Zhu, T.

    2004-01-01

    , development and processing tasks that are redundantly incurred by an archive's user community. The current implementation utilizes the GrADS-DODS Server (GDS), a stable, secure data server that provides subsetting and analysis services across the Internet for any GrADS-readable dataset. The subsetting capability allows users to retrieve a specified temporal and/or spatial subdomain from a large dataset, eliminating the need to download everything simply to access a small relevant portion of a dataset. The analysis capability allows users to retrieve the results of an operation applied to one or more datasets on the server. In our case, we use this approach to read pre-processed binary files and/or to read and extract the needed parts from HDF or HDF-EOS files. These subsets then serve as inputs to GrADS processing and analysis scripts. The system can be used in a wide variety of Earth science applications: the study and monitoring of climate and weather events, and modeling. It can be easily configured for new applications.
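
    A subset request of the kind a GDS answers can be issued with any OPeNDAP-aware client; below is a hypothetical example with the netCDF4 Python bindings, where the URL and variable name are placeholders and the library must be built with OPeNDAP support:

```python
# Hypothetical OPeNDAP subset request: only the sliced box is transferred,
# illustrating the "retrieve only the relevant subdomain" service described
# above, without downloading the full archive.
from netCDF4 import Dataset

ds = Dataset("http://example.gov/dods/some_gridded_product")  # remote open
sst = ds.variables["sst"]                 # a (time, lat, lon) grid
box = sst[0, 100:120, 200:240]            # server-side subset request
print(box.shape)                          # (20, 40)
```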

  19. DREM: Infinite etch selectivity and optimized scallop size distribution with conventional photoresists in an adapted multiplexed Bosch DRIE process

    DEFF Research Database (Denmark)

    Chang, Bingdong; Leussink, Pele; Jensen, Flemming

    2018-01-01

    The quest to sculpt materials as small and deep as possible is an ongoing topic in micro- and nanofabrication. For this, the Bosch process has been widely used to achieve anisotropic silicon microstructures with high aspect ratio. Reactive ion etching (RIE) lag is a phenomenon in which the etch rate...... depends on the opening areas of patterns, the aspect ratio of the trenches and other geometrical factors. The lag not only gives a non-uniform distribution of scallop size, but also sets a limit on the maximum achievable aspect ratio, the latter because the mask suffers persistent erosion. While...... different kinds of hard masks have been suggested to allow a longer total etch time, here we report a correctly tuned 3-step Bosch process - called DREM (Deposit, Remove, Etch, Multistep) - without mask erosion. The erosion-free feature is independent of the type of mask. For example, an aspect ratio...

  20. Autoregressive processes with exponentially decaying probability distribution functions: applications to daily variations of a stock market index.

    Science.gov (United States)

    Porto, Markus; Roman, H Eduardo

    2002-04-01

    We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance sigma^2(y) depends linearly on the absolute value of the random variable y, as sigma^2(y) = a + b|y|. While for the standard model, where sigma^2(y) = a + b y^2, the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| --> infinity, in the linear case it decays exponentially as P(y) ~ exp(-alpha|y|), with alpha = 2/b. We extend these results to the more general case sigma^2(y) = a + b|y|^q, with 0 < q < 2. When the memory of the process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretching exponent beta = 2/3, in much better agreement with the empirical data.
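
    The exponential tail is easy to check numerically; a short simulation of the linear-variance process, with parameters chosen only for illustration:

```python
import numpy as np

def linear_arch(a=0.1, b=0.5, n=200000, seed=0):
    """Simulate y_t = sigma(y_{t-1}) * eps_t with sigma^2(y) = a + b|y|,
    the linear-variance ARCH process discussed above."""
    rng = np.random.default_rng(seed)
    y = np.empty(n)
    y[0] = 0.0
    eps = rng.standard_normal(n)
    for t in range(1, n):
        y[t] = np.sqrt(a + b * abs(y[t - 1])) * eps[t]
    return y

y = linear_arch()
hist, edges = np.histogram(np.abs(y), bins=60, range=(0, 6), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
# log P(|y|) should fall off linearly in |y| with slope -alpha = -2/b
slope = np.polyfit(centers[mask][10:], np.log(hist[mask][10:]), 1)[0]
print(f"tail slope ~ {slope:.2f} (theory: -{2/0.5:.1f})")
```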

  1. Mixture regression models for the gap time distributions and illness-death processes.

    Science.gov (United States)

    Huang, Chia-Hui

    2018-01-27

    The aim of this study is to provide an analysis of gap event times under the illness-death model, where some subjects experience "illness" before "death" and others experience only "death." Which event is more likely to occur first and how the duration of the "illness" influences the "death" event are of interest. Because the occurrence of the second event is subject to dependent censoring, it can lead to bias in the estimation of model parameters. In this work, we generalize the semiparametric mixture models for competing risks data to accommodate the subsequent event and use a copula function to model the dependent structure between the successive events. Under the proposed method, the survival function of the censoring time does not need to be estimated when developing the inference procedure. We incorporate the cause-specific hazard functions with the counting process approach and derive a consistent estimation using the nonparametric maximum likelihood method. Simulations are conducted to demonstrate the performance of the proposed analysis, and its application in a clinical study on chronic myeloid leukemia is reported to illustrate its utility.
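
    The copula idea can be sketched by sampling dependent successive gap times; the following uses a Clayton copula by conditional inversion with exponential margins, where the copula family, its parameter, and the marginal rates are illustrative choices, not estimates from the leukemia study:

```python
import numpy as np

def clayton_gap_times(theta=2.0, n=5, rate1=1.0, rate2=0.5, seed=0):
    """Draw (U1, U2) from a Clayton copula by conditional inversion and
    map to exponential margins, giving a dependent pair of successive
    gap times (time to 'illness', then 'illness' to 'death')."""
    rng = np.random.default_rng(seed)
    u1 = rng.random(n)
    w = rng.random(n)
    # conditional inverse of the Clayton copula given U1 = u1
    u2 = (u1 ** (-theta) * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    t1 = -np.log(1 - u1) / rate1          # first gap time
    t2 = -np.log(1 - u2) / rate2          # dependent second gap time
    return np.column_stack([t1, t2])

print(clayton_gap_times())
```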

  2. The study on the effect of pattern density distribution on the STI CMP process

    Science.gov (United States)

    Sub, Yoon Myung; Hian, Bernard Yap Tzen; Fong, Lee It; Anak, Philip Menit; Minhar, Ariffin Bin; Wui, Tan Kim; Kim, Melvin Phua Twang; Jin, Looi Hui; Min, Foo Thai

    2017-08-01

    The effects of pattern density on CMP characteristics were investigated using a specially designed wafer for the characterization of pattern dependencies in STI CMP [1]. The purpose of this study is to investigate the planarization behavior of a direct STI CMP process using a cerium oxide (CeO2)-based slurry system in terms of pattern density variation. The minimum design rule (DR) of the 180 nm technology node was adopted for the mask layout. The mask was successfully applied to the evaluation of a cerium oxide (CeO2) abrasive based direct STI CMP process. In this study, we describe the planarization behavior and loading effects of pattern density variation, characterized with layout pattern density and pitch variations using the masks mentioned above. Furthermore, the pattern-dependent variations of feature dimensions and spacing, in terms of the thickness remaining after CMP, were analyzed and evaluated. The goal was to establish the concept of a library method which will be used to generate design rules that reduce the probability of CMP-related failures. The characterization was measured in various layouts spanning different pattern density ranges, and the effects of pattern density on STI CMP are discussed in this paper.

  3. Sensitivity of microorganisms distributed through Japanese tea manufacturing process by radiation

    International Nuclear Information System (INIS)

    Ogura, M.; Kimura, S.; Sugimoto, Y.; Jo, N.; Nosaka, K.; Iwasaki, I.; Nishimoto, S.

    2002-01-01

    The number of bacteria adhering to Japanese tea is 10^7-10^8 cfu/g in freshly picked tea leaves (mostly radiation-sensitive organisms), decreasing with each heat treatment in the manufacturing process to 10^3-10^4 cfu/g in tea on the market (only radiation-resistant organisms remaining). It decreases by a further order of magnitude after half a year, owing to the antibacterial components contained in Japanese tea. The number of fungi adhering to most samples is below 50 cfu/g, but reaches 10^2 cfu/g on some powdered tea. Heat treatment (80 °C, 15 min) decreases the number of bacteria very little; by contrast, electron-beam treatment (2 kGy) reduces it to below 10^2 cfu/g (D10 value: 1.4-3.8 kGy). The dose needed to decrease the count to 10^2 cfu/g is 0.9-2.5 kGy.

  4. Distributive Online Processing, Visualization and Analysis System for Gridded Remote Sensing Data

    Science.gov (United States)

    Leptoukh, G.; Liu, Z.; Pham, L.; Rui, H.; Shen, S.; Teng, W.; Zhu, T.

    2004-12-01

    The ability to use data stored in the current Earth Observing System (EOS) archives for studying regional or global phenomena is highly dependent on having a detailed understanding of the data's internal structure and physical implementation. Gaining this understanding and applying it to data reduction is a time-consuming task that must be undertaken before the core investigation can begin. This is an especially difficult challenge when science objectives require users to deal with large multi-sensor data sets that are usually of different formats, structures, and resolutions. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has taken a major step towards meeting this challenge by developing an infrastructure with a Web interface that allows users to perform interactive analysis online without downloading any data: the GES-DISC Interactive Online Visualization and Analysis Infrastructure, or "Giovanni." Giovanni provides interactive, online analysis tools that facilitate users' research. Several instances of this interface have been created to serve TRMM users, aerosol scientists, and ocean color and agriculture applications users. The first generation of these tools supports gridded data only. The user selects geophysical parameters, an area of interest, and a time period, and the system generates output on screen in a matter of seconds. The currently available output options are: area plots, averaged or accumulated over any available data period for any rectangular area; time plots, i.e., time series averaged over any rectangular area; Hovmoller plots, i.e., image views of longitude-time or latitude-time cross sections; ASCII output for all plot types; and image animation for area plots. Correlation plots, GIS-compatible outputs, etc., are planned for the future. This allows users to focus on data content (i.e., science parameters) and eliminates the need for the expensive learning, development and processing tasks that are redundantly incurred by an archive's users.
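
    To make the "time plot" operation concrete, here is a minimal sketch of the underlying computation (illustrative only, on a synthetic stand-in field; Giovanni's server-side implementation is not shown in this record): an area-averaged time series over a user-selected rectangular box, with grid cells weighted by the cosine of latitude.

    ```python
    # Giovanni-style "time plot" logic: latitude-weighted mean of a gridded
    # field over a lat/lon box, one value per time step.
    import numpy as np

    ntime, nlat, nlon = 12, 180, 360
    lat = np.linspace(-89.5, 89.5, nlat)
    lon = np.linspace(-179.5, 179.5, nlon)
    data = np.random.default_rng(1).random((ntime, nlat, nlon))  # stand-in field

    def area_time_series(data, lat, lon, lat_box, lon_box):
        """Cos(latitude)-weighted mean over a rectangular box for each time step."""
        ilat = (lat >= lat_box[0]) & (lat <= lat_box[1])
        ilon = (lon >= lon_box[0]) & (lon <= lon_box[1])
        sub = data[:, ilat][:, :, ilon]
        w = np.cos(np.deg2rad(lat[ilat]))[:, None] * np.ones(ilon.sum())
        return (sub * w).sum(axis=(1, 2)) / w.sum()

    print(area_time_series(data, lat, lon, (-10, 10), (30, 60)))
    ```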

  5. Cellular distribution and function of ion channels involved in transport processes in rat tracheal epithelium.

    Science.gov (United States)

    Hahn, Anne; Faulhaber, Johannes; Srisawang, Lalita; Stortz, Andreas; Salomon, Johanna J; Mall, Marcus A; Frings, Stephan; Möhrlen, Frank

    2017-06-01

    Transport of water and electrolytes in airway epithelia involves chloride-selective ion channels, which are controlled either by cytosolic Ca2+ or by cAMP. The contributions of the two pathways to chloride transport differ among vertebrate species. Because rats are becoming more important as an animal model for cystic fibrosis, we have examined how Ca2+-dependent and cAMP-dependent Cl- secretion is organized in the rat tracheal epithelium. We examined the expression of the Ca2+-gated Cl- channel anoctamin 1 (ANO1), the cystic fibrosis transmembrane conductance regulator (CFTR) Cl- channel, the epithelial Na+ channel ENaC, and the water channel aquaporin 5 (AQP5) in rat tracheal epithelium. The contribution of ANO1 channels to nucleotide-stimulated Cl- secretion was determined using the channel blocker Ani9 in short-circuit current recordings obtained from primary cultures of rat tracheal epithelial cells in Ussing chambers. We found that ANO1, CFTR and AQP5 proteins were expressed in nonciliated cells of the tracheal epithelium, whereas ENaC was expressed in ciliated cells. Among nonciliated cells, ANO1 occurred together with CFTR and Muc5b and, in addition, in a different cell type without CFTR and Muc5b. Bioelectrical studies with the ANO1 blocker Ani9 indicated that ANO1 mediated the secretory response to the nucleotide uridine-5'-triphosphate. Our data demonstrate that, in rat tracheal epithelium, Cl- secretion and Na+ absorption are routed through different cell types, and that ANO1 channels form the molecular basis of Ca2+-dependent Cl- secretion in this tissue. These characteristic features of Cl- secretion reveal similarities and distinct differences to secretory processes in human airways. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.

  6. HPLC determination of strychnine and brucine in rat tissues and the distribution study of processed semen strychni.

    Science.gov (United States)

    Chen, Jun; Hou, Ting; Fang, Yun; Chen, Zhi-peng; Liu, Xiao; Cai, Hao; Lu, Tu-lin; Yan, Guo-jun; Cai, Bao-chang

    2011-01-01

    A simple and low-cost HPLC method with UV absorbance detection was developed and validated to simultaneously determine strychnine and brucine, the most abundant alkaloids in processed Semen Strychni, in rat tissues (kidney, liver, spleen, lung, heart, stomach, small intestine, brain and plasma). The tissue samples were treated with a simple liquid-liquid extraction prior to HPLC. The LOQs were in the range of 0.039-0.050 µg/ml for the different tissue and plasma samples. The extraction recoveries varied from 71.63 to 98.79%. The linear range was 0.05-2 µg/ml, with correlation coefficients over 0.991. The intra- and inter-day precision was less than 15%. The method was then used to measure the tissue distribution of strychnine and brucine after intravenous administration of 1 mg/kg crude alkaloid fraction (CAF) extracted from processed Semen Strychni. The results revealed that strychnine and brucine showed similar tissue distribution characteristics. The highest level was observed in kidney, while the lowest level was found in brain. This indicates that the kidney might be the primary excretion organ for unmetabolized strychnine and brucine, and that strychnine and brucine have difficulty crossing the blood-brain barrier. Furthermore, no long-term accumulation of strychnine and brucine was found in rat tissues.

  7. Isothiocyanate metabolism, distribution, and interconversion in mice following consumption of thermally processed broccoli sprouts or purified sulforaphane.

    Science.gov (United States)

    Bricker, Gregory V; Riedl, Kenneth M; Ralston, Robin A; Tober, Kathleen L; Oberyszyn, Tatiana M; Schwartz, Steven J

    2014-10-01

    Broccoli sprouts are a rich source of glucosinolates, a group of phytochemicals that, when hydrolyzed, are associated with cancer prevention. Our objectives were to investigate the metabolism, distribution, and interconversion of isothiocyanates (ITCs) in mice fed thermally processed broccoli sprout powders (BSPs) or the purified ITC sulforaphane. For 1 wk, mice were fed a control diet (n = 20) or one of four treatment diets (n = 10 each) containing nonheated BSP, 60°C mildly heated BSP, 5-min steamed BSP, or 3 mmol purified sulforaphane. Sulforaphane and erucin metabolite concentrations in skin, liver, kidney, bladder, lung, and plasma were quantified using HPLC-MS/MS. The thermal intensity of BSP processing had disparate effects on ITC metabolite concentrations upon consumption. Mild heating generally resulted in the greatest ITC metabolite concentrations in vivo, followed by the nonheated and steamed BSP diets. We observed interconversion between sulforaphane and erucin species or metabolites, and report that erucin is the favored form in liver, kidney, and bladder, even when only sulforaphane is consumed. ITC metabolites were distributed to all tissues analyzed, suggesting the potential for systemic benefits. We report for the first time a tissue-dependent ratio of sulforaphane and erucin, though further investigation is warranted to assess the biological activity of the individual forms. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Process control and dosimetry applied to establish a relation between reference dose measurements and actual dose distribution

    International Nuclear Information System (INIS)

    Ehlerman, D.A.E.

    2001-01-01

    The availability of the first commercial dose level indicator prompted attempts to verify the radiation absorbed dose to items under quarantine control (e.g., for insect disinfestation) by means of an indicator attached to these items. Samples of the new commercial dose level indicators were tested for their metrological properties using gamma and electron irradiation. The devices are suitable for the intended purpose, and a reliable subjective judgement of whether the threshold dose was surpassed is possible. The subjective judgements are fully backed by the instrumental results. Consequently, a prototype reader was developed; first tests were successful. The value of dose level indicators, and the implications of their use for food or quarantine inspection, depend on a link between the dose measured (indicated) at the position of such an indicator and the characteristic parameters of the frequency distribution of dose throughout the product load, i.e., a box, a container, or a whole batch of multiple units. Therefore, studies into the variability and statistical properties of dose distributions obtained under a range of commercial situations were undertaken. Gamma processing at a commercial multipurpose contract irradiator, as well as electron processing and bremsstrahlung applications at a large-scale research facility, were included; the products were apples, potatoes, wheat, maize and pistachio. The studies revealed that still more detailed information on irradiation geometries is needed in order to render the information from dose level indicators meaningful. (author)

  9. Occurrence and distribution study of residues from pesticides applied under controlled conditions in the field during rice processing.

    Science.gov (United States)

    Pareja, Lucía; Colazzo, Marcos; Pérez-Parada, Andrés; Besil, Natalia; Heinzen, Horacio; Böcking, Bernardo; Cesio, Verónica; Fernández-Alba, Amadeo R

    2012-05-09

    The results of an experiment to study the occurrence and distribution of pesticide residues during rice cropping and processing are reported. Four herbicides, nine fungicides, and two insecticides (azoxystrobin, bispyribac-sodium, carbendazim, clomazone, difenoconazole, epoxiconazole, isoprothiolane, kresoxim-methyl, propanil, quinclorac, tebuconazole, thiamethoxam, tricyclazole, trifloxystrobin, λ-cyhalothrin) were applied to an isolated rice-crop plot under controlled conditions during the 2009-2010 cropping season in Uruguay. Paddy rice was harvested and industrially processed to brown rice, white rice, and rice bran, which were analyzed for pesticide residues using the original QuEChERS methodology and its citrate variation by LC-MS/MS and GC-MS. The distribution of pesticide residues was uneven among the different matrices. Ten different pesticide residues were found in paddy rice, seven in brown rice, and eight in rice bran. The highest concentrations were detected in paddy rice. These results provide information regarding the fate of pesticides in the rice food chain and its safety for consumers.

  10. Field signatures of non-Fickian transport processes: transit time distributions, spatial correlations, reversibility and hydrogeophysical imaging

    Science.gov (United States)

    Le Borgne, T.; Kang, P. K.; Guihéneuf, N.; Shakas, A.; Bour, O.; Linde, N.; Dentz, M.

    2015-12-01

    Non-Fickian transport phenomena are observed in a wide range of scales across hydrological systems. They are generally manifested by a broad range of transit time distributions, as measured for instance in tracer breakthrough curves. However, similar transit time distributions may be caused by different origins, including broad velocity distributions, flow channeling or diffusive mass transfer [1,2]. The identification of these processes is critical for defining relevant transport models. How can we distinguish the different origins of non-Fickian transport in the field? In this presentation, we will review recent experimental developments for deciphering the different causes of anomalous transport, based on tracer tests performed at different scales under cross-borehole and push-pull conditions, and on time-lapse hydrogeophysical imaging of tracer motion [3,4]. References: [1] de Anna, P., T. Le Borgne, M. Dentz, A. M. Tartakovsky, D. Bolster, P. Davy (2013) Flow Intermittency, Dispersion and Correlated Continuous Time Random Walks in Porous Media, Phys. Rev. Lett., 110, 184502. [2] Le Borgne, T., M. Dentz, and J. Carrera (2008) Lagrangian Statistical Model for Transport in Highly Heterogeneous Velocity Fields, Phys. Rev. Lett., 101, 090601. [3] Kang, P. K., T. Le Borgne, M. Dentz, O. Bour, and R. Juanes (2015) Impact of velocity correlation and distribution on transport in fractured media: Field evidence and theoretical model, Water Resour. Res., 51, 940-959. [4] Dorn, C., N. Linde, T. Le Borgne, O. Bour and L. Baron (2011) Single-hole GPR reflection imaging of solute transport in a granitic aquifer, Geophys. Res. Lett., 38, L08401.

  11. Effects of Wegener-Bergeron-Findeisen Process on Global Black Carbon Distribution

    Science.gov (United States)

    Qi, L.

    2016-12-01

    In mixed-phase clouds, the Wegener-Bergeron-Findeisen (WBF) process (ice crystals may grow while water drops evaporate, thereby releasing black carbon (BC) particles into the interstitial air) slows down wet scavenging of BC. Riming (snowflakes fall and collect cloud water drops, and the BC in them, along their pathways), in contrast, results in more efficient wet scavenging. We systematically investigate the effects of WBF on BC scavenging efficiency, surface BCair, deposition flux, concentration in snow, and washout ratio using a global 3D chemical transport model. We differentiate riming- vs WBF-dominated in-cloud scavenging based on liquid water content and temperature. Specifically, we relate WBF to either temperature or ice mass fraction in mixed-phase clouds. We find that at Jungfraujoch, Switzerland and Abisko, Sweden, where WBF dominates, the discrepancies of simulated BC scavenging efficiency and washout ratio are significantly reduced (from a factor of 3 to 10% and from a factor of 4-5 to a factor of two). However, at Zeppelin, Norway, where riming dominates, simulations of BC scavenging efficiency, BCair, and washout ratio become worse (relative to observations) when WBF is included. There is thus an urgent need for extensive observations to distinguish and characterize riming- versus WBF-dominated aerosol scavenging in mixed-phase clouds and the associated BC scavenging efficiency. We find that the reduction in global BC scavenging efficiency resulting from WBF varies substantially, from 8% in the tropics to 76% in the Arctic. The resulting annual mean BCair increases by up to 156% at high altitudes and at northern high latitudes. Overall, WBF halves the model-observation discrepancy (from -65% to -30%) of BCair across North America, Europe, China and the Arctic. Globally WBF increases BC burden from 0.22 to 0.29-0.35 mg m-2 yr-1, which partially explains the gap between observed and previous model-simulated BC burdens over land (Bond et al., 2013).

  12. An overview of current applications, challenges, and future trends in distributed process-based models in hydrology

    Science.gov (United States)

    Fatichi, Simone; Vivoni, Enrique R.; Ogden, Fred L.; Ivanov, Valeriy Y.; Mirus, Benjamin B.; Gochis, David; Downer, Charles W.; Camporese, Matteo; Davison, Jason H.; Ebel, Brian A.; Jones, Norm; Kim, Jongho; Mascaro, Giuseppe; Niswonger, Richard G.; Restrepo, Pedro; Rigon, Riccardo; Shen, Chaopeng; Sulis, Mauro; Tarboton, David

    2016-01-01

    Process-based hydrological models have a long history dating back to the 1960s. Criticized by some as over-parameterized, overly complex, and difficult to use, a more nuanced view is that these tools are necessary in many situations and, in a certain class of problems, they are the most appropriate type of hydrological model. This is especially the case in situations where knowledge of flow paths or distributed state variables and/or preservation of physical constraints is important. Examples of this include: spatiotemporal variability of soil moisture, groundwater flow and runoff generation, sediment and contaminant transport, or when feedbacks among various Earth’s system processes or understanding the impacts of climate non-stationarity are of primary concern. These are situations where process-based models excel and other models are unverifiable. This article presents this pragmatic view in the context of existing literature to justify the approach where applicable and necessary. We review how improvements in data availability, computational resources and algorithms have made detailed hydrological simulations a reality. Avenues for the future of process-based hydrological models are presented suggesting their use as virtual laboratories, for design purposes, and with a powerful treatment of uncertainty.

  13. Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP)

    CERN Document Server

    Calafiura, Paolo; Seuster, Rolf; Tsulaia, Vakhtang; van Gemmeren, Peter

    2015-01-01

    AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted at optimizing the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service), which makes it possible to run AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example, Shared Event Queue and Shared Distributor of Event Tokens) and the usage of Ath...
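
    The Shared Event Queue strategy mentioned above can be illustrated with a toy Python analogue (this is not Athena/AthenaMP code; the "reconstruction" step is a placeholder): worker processes pull event identifiers from one shared queue until they see a sentinel value.

    ```python
    # Toy analogue of a shared event queue: workers (forked or spawned,
    # depending on platform) drain one queue of event numbers.
    import multiprocessing as mp

    def worker(events, results):
        while True:
            event = events.get()
            if event is None:                    # sentinel: no more work
                break
            results.put((event, event ** 2))     # stand-in for reconstruction

    if __name__ == "__main__":
        events, results = mp.Queue(), mp.Queue()
        n_workers, n_events = 4, 100
        procs = [mp.Process(target=worker, args=(events, results))
                 for _ in range(n_workers)]
        for p in procs:
            p.start()
        for e in range(n_events):
            events.put(e)
        for _ in procs:                          # one sentinel per worker
            events.put(None)
        done = [results.get() for _ in range(n_events)]
        for p in procs:
            p.join()
        print(f"processed {len(done)} events with {n_workers} workers")
    ```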

  15. ARM-ACME V: ARM Airborne Carbon Measurements V on the North Slope of Alaska Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Biraud, Sebastien C [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    Atmospheric temperatures are warming faster in the Arctic than predicted by climate models. The impact of this warming on permafrost degradation is not well understood, but it is projected to increase carbon decomposition and greenhouse gas production (CO2 and/or CH4) by arctic ecosystems. Airborne observations of atmospheric trace gases, aerosols and cloud properties over the North Slope of Alaska (NSA) are improving our understanding of global climate, with the goal of reducing the uncertainty in global and regional climate simulations and projections. From June 1 through September 15, 2015, the AAF deployed the G1 research aircraft and flew over the North Slope of Alaska (38 flights, 140 science flight hours), with occasional vertical profiling over Prudhoe Bay, Oliktok Point, Barrow, Atqasuk, Ivotuk, and Toolik Lake. The aircraft payload included Picarro and Los Gatos Research (LGR) analyzers for continuous measurements of CO2, CH4, H2O, and CO and N2O mixing ratios, and a 12-flask sampler for analysis of carbon cycle gases (CO2, CO, CH4, N2O, 13CO2, and trace hydrocarbon species). The payload also included measurements of aerosol properties (number size distribution, total number concentration, absorption, and scattering), cloud properties (droplet and ice size information), atmospheric thermodynamic state, and solar/infrared radiation.

  16. Effects of the Wegener-Bergeron-Findeisen process on global black carbon distribution

    Science.gov (United States)

    Qi, Ling; Li, Qinbin; He, Cenlin; Wang, Xin; Huang, Jianping

    2017-06-01

    We systematically investigate the effects of the Wegener-Bergeron-Findeisen process (hereafter WBF) on black carbon (BC) scavenging efficiency, surface BCair, deposition flux, concentration in snow (BCsnow, ng g-1), and washout ratio using a global 3-D chemical transport model (GEOS-Chem). We differentiate riming- versus WBF-dominated in-cloud scavenging based on liquid water content (LWC) and temperature. Specifically, we implement an implied WBF parameterization using either temperature or ice mass fraction (IMF) in mixed-phase clouds based on field measurements. We find that at Jungfraujoch, Switzerland, and Abisko, Sweden, where WBF dominates in-cloud scavenging, including the WBF effect strongly reduces the discrepancies of simulated BC scavenging efficiency and washout ratio against observations (from a factor of 3 to 10 % and from a factor of 4-5 to a factor of 2). However, at Zeppelin, Norway, where riming dominates, simulations of BC scavenging efficiency, BCair, and washout ratio become worse (relative to observations) when WBF is included. There is thus an urgent need for extensive observations to distinguish and characterize riming- versus WBF-dominated aerosol scavenging in mixed-phase clouds and the associated BC scavenging efficiency. Our model results show that including the WBF effect lowers global BC scavenging efficiency, with a higher reduction at higher latitudes (8 % in the tropics and up to 76 % in the Arctic). The resulting annual mean BCair increases by up to 156 % at high altitudes and at northern high latitudes because of lower temperature and higher IMF. Overall, WBF halves the model-observation discrepancy (from -65 to -30 %) of BCair across North America, Europe, China and the Arctic. Globally WBF increases BC burden from 0.22 to 0.29-0.35 mg m-2 yr-1, which partially explains the gap between observed and previous model-simulated BC burdens over land. In addition, WBF significantly increases BC lifetime from 5.7 to ~8 days. Additionally …
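
    The riming-versus-WBF switch described above can be sketched as a simple function (purely illustrative: the temperature window, ice-mass-fraction threshold, and scaling are placeholder assumptions, not the GEOS-Chem parameterization):

    ```python
    # Illustrative riming/WBF switch for BC in-cloud scavenging efficiency.
    def bc_scavenging_efficiency(base_eff, temp_k, ice_mass_fraction,
                                 wbf_threshold=0.5):
        """Return scavenging efficiency; all numbers are hypothetical.

        In mixed-phase clouds (235 K < T < 273 K), a high ice mass fraction
        is taken as a proxy for WBF-dominated conditions: droplets evaporate,
        BC is released to the interstitial air, and scavenging is suppressed.
        """
        if not (235.0 < temp_k < 273.15):
            return base_eff                     # warm or fully glaciated cloud
        if ice_mass_fraction > wbf_threshold:   # WBF-dominated regime
            return base_eff * (1.0 - ice_mass_fraction)
        return base_eff                         # riming-dominated regime

    print(bc_scavenging_efficiency(0.8, 255.0, 0.9))  # strongly suppressed
    print(bc_scavenging_efficiency(0.8, 255.0, 0.2))  # riming-dominated
    ```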

  17. An advanced process-based distributed model for the investigation of rainfall-induced landslides: The effect of process representation and boundary conditions

    Science.gov (United States)

    Anagnostopoulos, Grigorios G.; Fatichi, Simone; Burlando, Paolo

    2015-09-01

    Extreme rainfall events are the major driver of shallow landslide occurrences in mountainous and steep terrain regions around the world. Subsurface hydrology has a dominant role in the initiation of rainfall-induced shallow landslides, since changes in the soil water content affect the soil shear strength significantly. Rainfall infiltration produces an increase of soil water potential, which is followed by a rapid drop in apparent cohesion. Especially on steep slopes with shallow soils, this loss of shear strength can lead to failure even in unsaturated conditions, before positive water pressures develop. We present HYDROlisthisis, a process-based model, fully distributed in space and with fine time resolution, to investigate the interactions between surface and subsurface hydrology and shallow landslide initiation. Fundamental elements of the approach are the dependence of shear strength on the three-dimensional (3-D) field of soil water potential, as well as the temporal evolution of soil water potential during the wetting and drying phases. Specifically, 3-D variably saturated flow conditions, including soil hydraulic hysteresis and preferential flow phenomena, are simulated for the subsurface flow, coupled with a surface runoff routine based on the kinematic wave approximation. The geotechnical component of the model is based on a multidimensional limit equilibrium analysis, which takes into account the basic principles of unsaturated soil mechanics. A series of numerical simulations was carried out with various boundary conditions and using different hydrological and geotechnical components. Boundary conditions in terms of distributed soil depth were generated using both empirical and process-based models. The effect of including preferential flow and soil hydraulic hysteresis was tested together with the replacement of the infinite slope assumption by the multidimensional limit equilibrium analysis. The results show that boundary conditions play …

  18. Spatial Distribution Analysis of Soil Properties in Varzaneh Region of Isfahan Using Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    F. Mahmoodi

    2016-02-01

    annual evaporation rate is 3265 mm. In this study, image processing techniques including band combinations, Principal Component Analysis (PC1, PC2 and PC3), and classification were applied to a TM image to map different soil properties. In order to prepare the satellite image, geometric correction was performed. A 1:25,000 map (UTM zone 39) was used as a base to georegister the Landsat image. 40 Ground Control Points (GCPs) were selected throughout the map and image. Road intersections and other man-made features were appropriate targets for this purpose. The raw image was transformed to the georectified image using a first-order polynomial and then resampled using the nearest-neighbour method to preserve radiometry. The final Root Mean Square (RMS) error for the selected points was 0.3 pixels. To establish relationships between image and field data, stratified random sampling techniques were used to collect 53 soil samples at the GPS (Global Positioning System) points. Continuous maps of soil properties were derived using simple and multiple linear regression models, averaging the 9 image pixels around each sampling site. Different image spectral indices were used as independent variables, and the dependent variables were field-based data. Results and Discussion: The results of the multiple regression analysis showed that the strongest relationship was between sandy soil and TM bands 1, 2, 3, 4, and 5, explaining up to 83% of the variation in this component. The weakest relationship was found between CaCO3 and TM bands 3, 5, and 7. In some cases, multiple regression was not an appropriate predictive model of soil properties; therefore, the TM and PC bands that had the strongest relationship with field data (confidence level 99%, based on simple regression) were classified by the maximum likelihood algorithm. According to the error matrix, the overall accuracy of the classified maps was between 85 and 93% for the chlorine (Cl) and silt components, respectively. Conclusions: The results indicated that …
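
    The band-to-property regression step described above can be sketched as follows (synthetic data: the band weights and noise are invented for illustration and do not come from the study):

    ```python
    # Fit a soil property (here sand content) against TM band values at the
    # sampling sites by ordinary least squares, then report R^2.
    import numpy as np

    rng = np.random.default_rng(2)
    n_sites = 53                                 # number of soil samples in the study
    bands = rng.random((n_sites, 5))             # stand-in for TM bands 1-5 (9-pixel means)
    sand = 20 + bands @ np.array([30, 15, -10, 8, 5]) + rng.normal(0, 2, n_sites)

    X = np.column_stack([np.ones(n_sites), bands])       # add an intercept column
    coef, *_ = np.linalg.lstsq(X, sand, rcond=None)
    pred = X @ coef
    r2 = 1 - np.sum((sand - pred) ** 2) / np.sum((sand - sand.mean()) ** 2)
    print(f"R^2 = {r2:.2f}")                     # the study reports up to 0.83 for sand
    ```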

  19. Using the Analytic Network Process (ANP) to assess the distribution of pharmaceuticals in hospitals – a comparative case study of a Danish and American hospital

    DEFF Research Database (Denmark)

    Feibert, Diana Cordes; Sørup, Christian Michel; Jacobsen, Peter

    2016-01-01

    Pharmaceuticals are a vital part of patient treatment, and the timely delivery of pharmaceuticals to patients is therefore important. Hospitals are complex systems that provide a challenging environment for decision making. Implementing process changes and technologies to improve the pharmaceutical distribution process can therefore be a complex and challenging undertaking. A comparative case study was conducted benchmarking the pharmaceutical distribution process at a Danish and a US hospital to identify best practices. Using the ANP method, taking tangible and intangible aspects into consideration, the most suitable solution for pharmaceutical distribution reflecting management preferences was identified.
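
    The computational core of the ANP method, raising the weighted supermatrix to successive powers until the column priorities stabilize, can be sketched as below (the 3x3 matrix is a made-up example, not the study's hospital model):

    ```python
    # ANP limit-supermatrix step: iterate W^k until convergence; the limiting
    # columns give the stable priorities of the network elements.
    import numpy as np

    W = np.array([[0.0, 0.6, 0.3],   # hypothetical weighted supermatrix,
                  [0.5, 0.0, 0.7],   # column-stochastic (columns sum to 1)
                  [0.5, 0.4, 0.0]])

    limit = W.copy()
    for _ in range(200):
        nxt = limit @ W
        if np.allclose(nxt, limit, atol=1e-12):
            break
        limit = nxt

    print(limit[:, 0])   # limiting priorities of the three elements
    ```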

  20. Study of Electric Explosion of Flat Micron-Thick Foils at Current Densities of (5-50)×10^8 A/cm^2

    Science.gov (United States)

    Shelkovenko, T. A.; Pikuz, S. A.; Tilikin, I. N.; Mingaleev, A. R.; Atoyan, L.; Hammer, D. A.

    2018-02-01

    Electric explosions of flat Al, Ti, Ni, Cu, and Ta foils with thicknesses of 1-16 μm, widths of 1-8 mm, and lengths of 5-11 mm were studied experimentally on the BIN, XP, and COBRA high-current generators at currents of 40-1000 kA and current densities of (5-50) × 10^8 A/cm^2. The images of the exploded foils were taken at different angles to the foil surface using point projection radiography with an X-pinch hot spot as the radiation source, the spatial resolution and exposure time being 3 μm and 50 ps, respectively, as well as by laser probing with a spatial resolution of 20 μm and an exposure time of 180 ps. In the course of foil explosion, rapidly expanding objects resembling the core and corona of an exploded wire were observed. It is shown that the core of the exploded foil has a complicated time-varying structure.

  1. Modeling the effect of urban infrastructure on hydrologic processes within i-Tree Hydro, a statistically and spatially distributed model

    Science.gov (United States)

    Taggart, T. P.; Endreny, T. A.; Nowak, D.

    2014-12-01

    Gray and green infrastructure in urban environments alters many natural hydrologic processes, creating an urban water balance unique to the developed environment. A common way to assess the consequences of impervious cover and gray infrastructure is by measuring runoff hydrographs. This focus on the watershed outlet masks the spatial variation of hydrologic process alterations across the urban environment in response to localized landscape characteristics. We attempt to represent this spatial variation in the urban environment using the statistically and spatially distributed i-Tree Hydro model, a scoping-level urban forest effects water balance model. i-Tree Hydro has undergone expansion and modification to include the effects of green infrastructure processes, road network attributes, and urban pipe system leakages. These additions to the model are intended to increase understanding of the altered urban hydrologic cycle by examining the effects of the location of these structures on the water balance, specifically on the spatially varying properties of interception, soil moisture and runoff generation. Differences in predicted properties and optimized parameter sets between the two models are examined and related to the recent landscape modifications. The datasets used in this study consist of watersheds and sewersheds within the Syracuse, NY metropolitan area, an urban area that has integrated green and gray infrastructure practices to alleviate stormwater problems.

  2. Distribution and Characteristics of Boulder Halos at High Latitudes on Mars: Ground Ice and Surface Processes Drive Surface Reworking

    Science.gov (United States)

    Levy, J. S.; Fassett, C. I.; Rader, L. X.; King, I. R.; Chaffey, P. M.; Wagoner, C. M.; Hanlon, A. E.; Watters, J. L.; Kreslavsky, M. A.; Holt, J. W.; Russell, A. T.; Dyar, M. D.

    2018-02-01

    Boulder halos are circular arrangements of clasts present at Martian middle to high latitudes. Boulder halos are thought to result from impacts into a boulder-poor surficial unit that is rich in ground ice and/or sediments and that is underlain by a competent substrate. In this model, boulders are excavated by impacts and remain at the surface as the crater degrades. To determine the distribution of boulder halos and to evaluate mechanisms for their formation, we searched for boulder halos over 4,188 High Resolution Imaging Science Experiment images located between 50-80° north and 50-80° south latitude. We evaluate geological and climatological parameters at halo sites. Boulder halos are about three times more common in the northern hemisphere than in the southern hemisphere (19% versus 6% of images) and have size-frequency distributions suggesting recent Amazonian formation (tens to hundreds of millions of years). In the north, boulder halo sites are characterized by abundant shallow subsurface ice and high thermal inertia. Spatial patterns of halo distribution indicate that excavation of boulders from beneath nonboulder-bearing substrates is necessary for the formation of boulder halos, but that alone is not sufficient. Rather, surface processes either promote boulder halo preservation in the north or destroy boulder halos in the south. Notably, boulder halos predate the most recent period of near-surface ice emplacement on Mars and persist at the surface atop mobile regolith. The lifetime of observed boulders at the Martian surface is greater than the lifetime of the craters that excavated them. Finally, larger minimum boulder halo sizes in the north indicate thicker icy soil layers on average throughout climate variations driven by spin/orbit changes during the last tens to hundreds of millions of years.

  3. Crevasse splay processes and deposits in an ancient distributive fluvial system: The lower Beaufort Group, South Africa

    Science.gov (United States)

    Gulliford, Alice R.; Flint, Stephen S.; Hodgson, David M.

    2017-08-01

    Up to 12% of the mud-prone, ephemeral distributive fluvial system stratigraphy in the Permo-Triassic lower Beaufort Group, South Africa, comprises tabular fine-grained sandstone to coarse-grained siltstone bodies, which are interpreted as proximal to distal crevasse splay deposits. Crevasse splay sandstones predominantly exhibit ripple to climbing-ripple cross-lamination, with some structureless and planar-laminated beds. A hierarchical architectural scheme is adopted, in which 1 m thick crevasse splay elements extend for tens to several hundreds of meters laterally, and stack with other splay elements to form crevasse splay sets up to 4 m thick and several kilometers in width and length. Paleosols and nodular horizons developed during periods, or in areas, of reduced overbank flooding are used to subdivide the stratigraphy, separating crevasse splay sets. Deposits from crevasse splays differ from frontal splays in that their proximal deposits are much thinner and narrower, with paleocurrents oblique to the main paleochannel. In order for crevasse splay sets to develop, the parent channel belt and the location where crevasse splays form must stay relatively fixed during a period of multiple flood events. Beaufort Group splays have similar geometries to those of contemporary perennial rivers but exhibit more lateral variability in facies, which is interpreted to be the result of more extreme fluctuations in discharge regime. Sharp-based crevasse splay packages are associated with channel avulsion, but most are characterized by a gradual coarsening upward, interpreted to represent progradation. The dominance of progradational splays beneath channel belt deposits may be more characteristic of progradational stratigraphy in a distributive fluvial system than of a trunk river system dominated by avulsion processes. This stratigraphic motif may therefore be an additional criterion for the recognition of distributive fluvial systems in the ancient record.

  4. Angular distributions in the lepton pair production. [Review, Drell-Yan process, quantum chromodynamics, Compton diagrams, 200 GeV/c

    Energy Technology Data Exchange (ETDEWEB)

    Stroynowski, R.

    1979-10-01

    The angular distributions of high-mass lepton pairs are reviewed. It is argued that detailed study of the polar distributions provides evidence for substantial contributions to the Drell-Yan process from higher-order QCD effects. It is also pointed out that the first-order QCD Compton diagrams predict a nontrivial azimuthal dependence which could be measured experimentally. 13 references.
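
    For context, the polar and azimuthal dependence discussed in such analyses is conventionally parameterized in a dilepton rest frame (e.g., the Collins-Soper frame) as follows; this standard form is given for orientation and is not quoted from the report itself:

    ```latex
    % theta and phi are the lepton polar and azimuthal angles in the pair rest
    % frame; lambda, mu and nu absorb the higher-order QCD corrections.
    \frac{d\sigma}{d\Omega} \propto 1 + \lambda\cos^{2}\theta
        + \mu\,\sin 2\theta\,\cos\phi
        + \frac{\nu}{2}\,\sin^{2}\theta\,\cos 2\phi
    ```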

  5. Gypsum and organic matter distribution in a mixed construction and demolition waste sorting process and their possible removal from outputs.

    Science.gov (United States)

    Montero, A; Tojo, Y; Matsuo, T; Matsuto, T; Yamada, M; Asakura, H; Ono, Y

    2010-03-15

    With insufficient source separation, construction and demolition (C&D) waste becomes a mixed material that is difficult to recycle. Treatment of mixed C&D waste generates residue that contains gypsum and organic matter and poses a risk of H2S formation in landfills. Therefore, removing gypsum and organic matter from the residue is vital. This study investigated the distribution of gypsum and organic matter in a sorting process. Heavy liquid separation was used to determine the density ranges in which gypsum and organic matter were most concentrated. The fine residue that was separated before shredding accounted for 27.9% of the waste mass and contained the greatest quantity of gypsum; therefore, most of the gypsum (52.4%) was distributed in this fraction. When this fine fraction was subjected to heavy liquid separation, 93% of the gypsum was concentrated in the density range of 1.59-2.28, which contained 24% of the total waste mass. Therefore, removing this density range after segregating fine particles should reduce the amount of gypsum sent to landfills. Organic matter tends to float as density increases; nevertheless, separation at a density of 1.0 could be more efficient. © 2009 Elsevier B.V. All rights reserved.

  6. Electric field distribution and the charge collection process in not-ideally compensated coaxial Ge(Li) detectors

    International Nuclear Information System (INIS)

    Szymczyk, W.M.; Moszynski, M.

    1978-01-01

    The not-ideally compensated space charge of donors and acceptors in lithium-drifted coaxial Ge(Li) detectors can modify the electric field distribution in the detector's depleted volume and thereby influence the charge collection process. Observations of the capacity, the charge collection time (transit time), and the relative efficiency characteristics versus detector bias voltage showed that in conventional p-i-n coaxial structures an undercompensation near the inner p-type core is typical. It was found that such undercompensation has negligible consequences from the charge collection point of view. However, one case was observed in which a modification near the outer electrode was present. In that case, charge pulses with remarkably increased rise times were observed, as compared to the predictions based on the assumption of the classical electric field distribution E proportional to 1/r. The pulses expected from not-ideally compensated detectors were calculated using the Variable Velocity Approximation, and much better agreement with the observed pulses was obtained. In the absence of the outer-electrode modification, the calculated and observed dependencies of the charge transit times on the reciprocal of the detector bias voltage exhibited linear parts. Measurement of their slopes makes it possible to determine the depletion layer width experimentally, provided the charge carrier mobility is known, or vice versa. (Auth.)

  7. Migration, speciation and distribution of heavy metals in an oil-polluted soil affected by crude oil extraction processes.

    Science.gov (United States)

    Fu, Xiaowen; Cui, Zhaojie; Zang, Guolong

    2014-07-01

    Heavy metals are among the major pollutants in the worldwide soil environment. In oilfields, the crude oil extraction process results in the simultaneous contamination of the soil with petroleum and heavy metals. In this work, we investigated the influence of oil extraction on the migration, speciation, and temporal distribution of heavy metals (Cu, Zn, Pb, Cd, Cr, Mn, Ni, and V) in soils of an oil region of Shengli Oilfield, China. The results showed that oil-polluted soils were contaminated with Cu, Zn, Cd and Ni, with mean concentrations of 27.63, 67.12, 0.185 and 33.80 mg/kg, respectively (greater than the background values of local surface soils). Compared with the control profile, the vertical distributions of Cu, Zn, Pb, Cd, Ni, and V were affected in oil-polluted soils, particularly those of Cd and Ni. The concentrations of Zn, Cd, Ni, V, and Mn in oil-polluted soils increased with the duration of oil well development, which indicated that the levels of these metals in the oil field were enhanced by human activities. Fractionation analysis revealed that the mobility potential of heavy metals in oil-polluted soil decreased in the sequence Cd > Mn > Zn > Ni > Pb > Cu > Cr > V. The most important proportion of Cd is ion-exchangeable and acid-soluble, which indicates that Cd is the most labile, available, and harmful of the heavy metals that damage the soil environment in oil-polluted soil.

  8. PECULIARITIES OF PROCESSES OF CARBIDE FORMATION AND DISTRIBUTION OF Cr, Mn AND Ni IN WHITE CAST IRONS

    Directory of Open Access Journals (Sweden)

    V. V. Netrebko

    2015-01-01

    During the crystallization of castings from white cast iron, the carbides Me3C, Me7C3 and Me23C6 are formed, depending on the chromium and carbon content. Impeded chromium diffusion causes the formation of thermodynamically unstable and non-uniform phases (carbides). During heat treatment, stable equilibrium phases are formed as a result of rearrangement of the carbides' crystal lattice, with iron, manganese, nickel and silicon atoms replaced by chromium atoms. The displaced atoms concentrate, forming inclusions of austenite inside the carbides. Holding for 9 hours at 720 °C followed by annealing decreased the non-uniformity of chromium distribution in the metallic base of cast iron containing 11.5 % Cr, and increased it in cast iron containing 21.5 % Cr. Holding for 4.5 hours at 1050 °C followed by normalization decreased the non-uniformity of chromium distribution in the metallic base of cast iron containing 21.5 % Cr, and increased it in cast iron containing 11.5 % Cr.

  9. Mesoscale Raised Rim Depressions (MRRDs) on Earth: A Review of the Characteristics, Processes, and Spatial Distributions of Analogs for Mars

    Science.gov (United States)

    Burr, Devon M.; Bruno, Barbara C.; Lanagan, Peter D.; Glaze, Lori; Jaeger, Windy L.; Soare, Richard J.; Tseung, Jean-Michel Wan Bun; Skinner, James A. Jr.; Baloga, Stephen M.

    2008-01-01

    Fields of mesoscale raised rim depressions (MRRDs) of various origins are found on Earth and Mars. Examples include rootless cones, mud volcanoes, collapsed pingos, rimmed kettle holes, and basaltic ring structures. Correct identification of MRRDs on Mars is valuable because different MRRD types have different geologic and/or climatic implications and are often associated with volcanism and/or water, which may provide locales for biotic or prebiotic activity. In order to facilitate correct identification of fields of MRRDs on Mars and their implications, this work provides a review of common terrestrial MRRD types that occur in fields. In this review, MRRDs are categorized by formation mechanism, including hydrovolcanic (phreatomagmatic cones, basaltic ring structures), sedimentological (mud volcanoes), and ice-related (pingos, volatile ice-block forms) mechanisms. For each broad mechanism, we present a comparative synopsis of (i) morphology and observations, (ii) physical formation processes, and (iii) published hypothesized locations on Mars. Because the morphology of MRRDs may be ambiguous, an additional tool is provided for distinguishing fields of MRRDs by origin on Mars, namely, spatial distribution analyses for MRRDs within fields on Earth. We find that MRRDs have both distinguishing and similar characteristics, an observation that applies both to their mesoscale morphology and to their spatial distribution statistics. Thus, this review provides tools for distinguishing between various MRRDs, while highlighting the utility of the multiple working hypotheses approach.

  10. Size distribution of agglomerates of milk powder in wet granulation process in a vibro-fluidized bed

    Directory of Open Access Journals (Sweden)

    M. Banjac

    2009-09-01

    Results of experiments on the influence of technological parameters (intensity of vibration, granulation liquid feed, temperature of the fluidization agent) on the change of size distribution, as well as the mass mean diameter, of milk powder particles subjected to a wet granulation (agglomeration) process in a vibro-fluidized bed granulator are shown in this paper. Using water as the granulation liquid and air as the fluidization agent, it was found that the mass mean diameter increases with increasing water feed and intensity of vibration, and with decreasing air temperature. Increasing the intensity of vibration and decreasing the air temperature primarily induce an increase in the dimensions of the initial nuclei. This can be explained on the basis of the different influences that these changes (velocity of particle motion, intensity of particle collision, drying rate) have on the coalescence of particles of smaller and/or bigger dimensions.
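
    Assuming that "mass mean diameter" denotes the mass-weighted mean of the particle size classes (a common convention; the abstract does not define it), it can be computed as in this small sketch with invented sieve data:

    ```python
    # Mass mean diameter d_mm = sum(m_i * d_i) / sum(m_i); data are illustrative.
    import numpy as np

    diameters = np.array([50., 100., 200., 400., 800.])   # sieve classes, micrometres
    mass_frac = np.array([0.10, 0.25, 0.35, 0.20, 0.10])  # mass fraction per class

    d_mm = np.sum(mass_frac * diameters) / np.sum(mass_frac)
    print(f"mass mean diameter = {d_mm:.0f} um")
    ```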

  11. The impact of randomness on the distribution of wealth: Some economic aspects of the Wright-Fisher diffusion process

    Science.gov (United States)

    Bouleau, Nicolas; Chorro, Christophe

    2017-08-01

    In this paper we consider some elementary and fair zero-sum games of chance in order to study the impact of random effects on the wealth distribution of N interacting players. Even if an exhaustive analytical study of such games between many players may be tricky, numerical experiments highlight interesting asymptotic properties. In particular, we emphasize that randomness plays a key role in concentrating wealth in the extreme, in the hands of a single player. From a mathematical perspective, we adopt diffusion limits for small, high-frequency transactions that are otherwise extensively used in population genetics. Finally, the impact of small tax rates on the preceding dynamics is discussed for several regulation mechanisms. We show that taxation of income is not sufficient to overcome this extreme concentration process, in contrast to the uniform taxation of capital, which stabilizes the economy and prevents agents from being ruined.
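
    A toy version of such a fair zero-sum game (the stake rule and the flat capital tax below are simplified stand-ins, not the paper's exact model) reproduces the qualitative concentration effect:

    ```python
    # Random pairs flip a fair coin for a fixed fraction of the poorer player's
    # wealth; an optional flat capital tax is redistributed equally.
    import numpy as np

    rng = np.random.default_rng(3)
    N, steps, stake, capital_tax = 200, 200_000, 0.1, 0.0
    wealth = np.ones(N)

    for _ in range(steps):
        i, j = rng.choice(N, size=2, replace=False)
        amount = stake * min(wealth[i], wealth[j])   # fair, zero-sum stake
        if rng.random() < 0.5:
            wealth[i] += amount; wealth[j] -= amount
        else:
            wealth[i] -= amount; wealth[j] += amount
        if capital_tax > 0:                          # uniform tax on capital,
            taxed = wealth * (1 - capital_tax)       # redistributed equally
            wealth = taxed + (wealth.sum() - taxed.sum()) / N

    print(f"richest player holds {wealth.max() / wealth.sum():.1%} of total wealth")
    ```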

  12. IMAGING OF FLUOROPHORES IN CHROMATOGRAPHIC BEADS, RECONSTRUCTION OF RADIAL DENSITY DISTRIBUTIONS AND CHARACTERISATION OF PROTEIN UPTAKING PROCESSES

    Directory of Open Access Journals (Sweden)

    Bernd Stanislawski

    2010-11-01

    A new adjustment calculus is presented to determine the true intraparticle distribution of bound protein within chromatographic beads from series of confocal fluorescence slices. The calculus does not require knowledge of the optical properties of different chromatographic materials, such as refractive index and turbidity, but it depends on a parameter which can be adjusted interactively. The algorithm is of complexity O(n), where n is the number of pixels. From the reconstructed data we compute the parameters of the protein uptake process using a model-based approach. It is demonstrated that the protein uptake rates of the beads depend strongly on the conditions of the fluid phase influencing the strength of the protein-surface interaction.
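
    An O(n) single pass over the pixels, of the kind the complexity claim above refers to, can be illustrated by radial binning of a slice image (illustrative only: the actual adjustment calculus is not reproduced here, and the bead centre is assumed known):

    ```python
    # Bin every pixel of a confocal slice by its integer distance from the bead
    # centre in one pass, giving a mean-intensity-vs-radius profile in O(n).
    import numpy as np

    img = np.random.default_rng(4).random((256, 256))       # stand-in slice
    cy, cx = 128, 128                                       # assumed bead centre
    y, x = np.indices(img.shape)
    r = np.hypot(y - cy, x - cx).astype(int)                # radius per pixel

    profile = np.bincount(r.ravel(), weights=img.ravel())   # intensity sum per radius
    counts = np.bincount(r.ravel())                         # pixels per radius
    radial_density = profile / np.maximum(counts, 1)        # mean intensity vs radius
    print(radial_density[:10])
    ```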

  13. Integration of the geo-processing systems with network management of the power distribution system; Integracao de sistemas de geoprocessamento com sistemas de gerencia de redes de distribuicao

    Energy Technology Data Exchange (ETDEWEB)

    Correa, Geraldo Cezar; Marques, Ary Luiz [Companhia Paranaense de Energia (COPEL), Curitiba, PR (Brazil)

    1994-12-31

    This work presents the difficulties foreseen, at the level of application programs, in integrating geo-processing systems with the distribution network management systems currently in operation at the COPEL electric company. 6 figs.

  14. The distribution of environmental contaminants and pharmaceuticals among skim milk, milk fat, curd, whey, and milk protein fractions through milk processing

    Science.gov (United States)

    Twenty-seven environmental contaminants and pharmaceuticals encompassing a wide range of physicochemical properties were utilized to determine the effects of milk processing on xenobiotic distribution among milk fractions. Target compounds included radiolabeled antibiotics [ciprofloxacin (CIPR), cl...

  15. Categorization of Survey Text Utilizing Natural Language Processing and Demographic Filtering

    Science.gov (United States)

    2017-09-01

    [The abstract for this record did not survive extraction; the remaining fragments are keyword-to-category mappings from the thesis (e.g., "spouse", "mother", "dad" mapped to family; "job", "employment" mapped to career), part of a reference to a practical introduction to information retrieval and text mining (New York, NY: ACM Books), and the header of the thesis's initial distribution list.]

  16. Glyphosate distribution in loess soils as a result of dynamic sediment transport processes during a simulated rainstorm

    Science.gov (United States)

    Commelin, Meindert; Martins Bento, Celia; Baartman, Jantiene; Geissen, Violette

    2016-04-01

    Glyphosate is one of the most widely used herbicides in the world. The wide and extensive use of glyphosate makes it important to be certain about the safety of glyphosate for off-target environments and organisms. This research aims to create more detailed insight into the distribution processes of glyphosate, and the effect that dynamic sediment transport processes have on this distribution, during water erosion in agricultural fields. Glyphosate distribution characteristics are investigated for two different soil surfaces: a smooth surface, and a surface with seeding lines on the contour. The capacity of different sediment groups to transport glyphosate was investigated; these groups were water-eroded sediment and sedimentation areas found on the plot surface. The contribution of particle-bonded and dissolved transport to total overland transportation of glyphosate was analysed with a mass balance study. The experiment was conducted in the Wageningen UR rainfall simulator. Plots of 0.5 m2 were used, with a 5% slope, and a total of six experimental simulations were done. A rainfall event with an intensity of 30 mm/h was simulated, applied in four showers of 15 minutes each with 30 minutes pause in between. Glyphosate (16 mg/kg) was applied on the top 20 cm of each plot, and soil samples were taken in the downstream part. Glyphosate analysis was done using HPLC-MS/MS (High Performance Liquid Chromatography tandem Mass Spectrometry). Besides that, photo analysis with eCognition was used to derive the soil surface area per sediment group. The results show that particle-bonded transport of glyphosate contributes significantly (at least 25%) to glyphosate transport during a rainstorm event. Particle size and organic matter have a large influence on the mobility of glyphosate and on the quantity transported to off-target areas. Moreover, seeding lines on the soil surface decreased total overland transport, both of sediment and of glyphosate. Taking this into account, plots …

  17. Analyses of moments in pseudorapidity intervals at √s = 546 GeV by means of two probability distributions in pure-birth process

    International Nuclear Information System (INIS)

    Biyajima, M.; Shirane, K.; Suzuki, N.

    1988-01-01

    Moments in pseudorapidity intervals at the CERN Sp̄pS collider (√s = 546 GeV) are analyzed by means of two probability distributions in the pure-birth stochastic process. Our results show that a probability distribution obtained from the Poisson distribution as an initial condition is more useful than that obtained from the Kronecker δ function. Analyses of moments by Koba-Nielsen-Olesen scaling functions derived from solutions of the pure-birth stochastic process are also made. Moreover, analyses of preliminary data at √s = 200 and 900 GeV are added
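
    A small Monte Carlo sketch of a pure-birth process with a Poisson initial condition (illustrative, not the authors' analysis; the rate, horizon and initial mean are arbitrary), from which the normalized moments C_q = <n^q>/<n>^q of the multiplicity distribution can be estimated:

    ```python
    # Gillespie simulation of a pure-birth (Yule) process: each of n particles
    # gives birth at rate `birth_rate`, starting from a Poisson(lam0) count.
    import numpy as np

    rng = np.random.default_rng(5)
    lam0, birth_rate, t, trials = 2.0, 1.0, 1.0, 50_000

    samples = np.empty(trials)
    for k in range(trials):
        n = rng.poisson(lam0)            # Poisson initial condition
        time = 0.0
        while n > 0:
            time += rng.exponential(1.0 / (birth_rate * n))  # next birth time
            if time > t:
                break
            n += 1
        samples[k] = n

    mean = samples.mean()
    for q in (2, 3, 4):
        print(f"C_{q} = {np.mean(samples ** q) / mean ** q:.3f}")
    ```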

  18. Temperature dependency of the Ga/In distribution in Cu(In,Ga)Se2 absorbers in high temperature processes

    Science.gov (United States)

    Mueller, B. J.; Demes, T.; Lill, P. C.; Haug, V.; Hergert, F.; Zweigart, S.; Herr, U.

    2016-05-01

    The current article reports on the influence of temperature and glass substrate on Ga/In interdiffusion and chalcopyrite phase formation in the stacked elemental layer process. According to the Shockley-Queisser limit, the optimum band gap for single-junction devices is near 1.4 eV, which for Cu(In,Ga)Se2 thin film solar cells is strongly coupled to the Ga/(Ga+In) ratio. To increase the Ga content in the active region of the Cu(In,Ga)Se2, a 70:30 CuGa alloy target is used. An increase of the selenization temperature leads to a more homogeneous Ga/In distribution and a less pronounced Ga agglomeration at the back contact. The Ga/In interdiffusion rates for different selenization temperatures and substrates were estimated with a two-layer diffusion model. At the highest selenization temperature used, an absorber band gap of 1.12 eV was realized, which is similar to typical values of absorbers produced by the co-evaporation process. The Na diffusion into the Cu(In,Ga)Se2 is weakly temperature dependent but strongly influenced by the choice of the glass substrate composition.

  19. Distribution of stable traps for thermoluminescent processes in the phosphor SrAl2O4: Eu2+, Dy3+

    International Nuclear Information System (INIS)

    Pedroza M, M.; Castaneda, B.; Arellano T, O.; Melendrez, R.; Barboza F, M.

    2007-01-01

    Full text: The persistent luminescence (PLUM) phosphor SrAl2O4:Eu2+, Dy3+ exhibits a thermoluminescence (TL) curve after exposure to UV radiation. The curve consists of a broad band with a maximum around 455 K. Using the experimental deconvolution method proposed by McKeever, the number of peaks in the TL curve was resolved and the position of each TL peak was analyzed as a function of the cut temperature (Tstop). In this analysis, five TL peak maxima were observed in the Tstop vs Tmax diagram, around 319, 425, 457, 488 and 515 K. Two regions corresponding to an overlap of stable traps were also found, the first between 380 and 415 K and the second between 430 and 455 K. The existence of a distribution of stable traps can be evaluated from the Tstop vs Tmax curve, where such a distribution appears as a monotonic linear increase with temperature, whereas independent TL processes appear as horizontal lines at the specific temperatures (319, 425, 457, 488 and 515 K) at which most of the trapped charges are released. Using the preheating and initial-rise methods for the peak at 455 K, the trap depths were determined, yielding activation energies of 0.28, 0.67, 1, 1.5 and 1.62 eV. An arrangement of stable traps plays a decisive role in the persistent luminescence emission. Likewise, it was determined that all the thermoluminescent processes are characterized by retrapping of the charge, which is why these processes follow second-order kinetics. The low-temperature TL peak at 319 K is related to the electronic traps responsible for the PLUM in SrAl2O4:Eu2+ and to the same recombination centers. The PLUM and TL emissions are centered around 510 nm and are attributed to the 4f6 5d1 → 4f7 electronic transition of the Eu2+ ion. In this work, it is explained the participation or contribution of the
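
    The initial-rise method used above exploits the fact that on the low-temperature tail of a glow peak the TL intensity grows as I(T) ∝ exp(-E/kT), so the slope of ln I versus 1/T yields -E/k. A minimal Python sketch of that fit on synthetic data follows; the peak parameters are illustrative, not the measured ones.

    # Initial-rise estimate of a trap's activation energy E from the
    # low-temperature tail of a TL glow peak, where I(T) ~ exp(-E/kT):
    # the slope of ln(I) versus 1/T equals -E/k.
    # The synthetic tail below is illustrative, not the measured data.
    import numpy as np

    K_B = 8.617e-5  # Boltzmann constant in eV/K

    E_true = 1.0                             # trap depth used to generate the data (eV)
    T = np.linspace(380.0, 410.0, 30)        # tail region, well below the 455 K maximum
    I = 1e12 * np.exp(-E_true / (K_B * T))   # ideal initial-rise intensity (arb. units)

    slope, _ = np.polyfit(1.0 / T, np.log(I), 1)  # ln I = const - (E/k) * (1/T)
    print(f"fitted activation energy: {-slope * K_B:.2f} eV")  # recovers ~1.00 eV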

  20. Distribution of aliphatic amines in CO, CV, and CK carbonaceous chondrites and relation to mineralogy and processing history

    Science.gov (United States)

    Aponte, José C.; Abreu, Neyda M.; Glavin, Daniel P.; Dworkin, Jason P.; Elsila, Jamie E.

    2017-12-01

    The analysis of water-soluble organic compounds in meteorites provides valuable insights into the prebiotic synthesis of organic matter and the processes that occurred during the formation of the solar system. We investigated the concentrations of aliphatic monoamines present in hot acid water extracts of the unaltered Antarctic carbonaceous chondrites Dominion Range (DOM) 08006 (CO3) and Miller Range (MIL) 05013 (CO3), and of the thermally altered meteorites Allende (CV3), LAP 02206 (CV3), GRA 06101 (CV3), Allan Hills (ALH) 85002 (CK4), and EET 92002 (CK5). We also reviewed and assessed the petrologic characteristics of the meteorites studied here to evaluate the effects of asteroidal processing on the abundances and molecular distributions of monoamines. The CO3, CV3, CK4, and CK5 meteorites studied here contain total amine concentrations ranging from 1.2 to 4.0 nmol g-1 of meteorite; these amounts are 1-3 orders of magnitude below those observed in carbonaceous chondrites from the CI, CM, and CR groups. The low amine abundances of the CV and CK chondrites may be related to their extensive degree of thermal metamorphism and/or to their low original amine content. Although the CO3 meteorites DOM 08006 and MIL 05013 do not show signs of thermal or aqueous alteration, their monoamine contents are comparable to those observed in the moderately to extensively thermally altered CV3, CK4, and CK5 carbonaceous chondrites. The low content of monoamines in pristine CO carbonaceous chondrites suggests that the initial amounts, and not asteroidal processes, play the dominant role in determining the monoamine content of carbonaceous chondrites. The primary monoamines methylamine, ethylamine, and n-propylamine constitute the most abundant amines in the CO3, CV3, CK4, and CK5 meteorites studied here. In contrast to the predominance of n-ω-amino acid isomers in CO3 and thermally altered meteorites, there appears to be no preference for the larger n-amines.